content_type (stringclasses, 8 values) | main_lang (stringclasses, 7 values) | message (stringlengths, 1-50) | sha (stringlengths, 40) | patch (stringlengths, 52-962k) | file_count (int64, 1-300)
---|---|---|---|---|---
Text | Text | revison the some columns to make table clearer | bf15f675b9042db661de720900f367eff19737c0 | <ide><path>docs/sources/reference/run.md
<ide> This can be overridden using a third `:rwm` set of options to each `--device` fl
<ide>
<ide> In addition to `--privileged`, the operator can have fine grain control over the
<ide> capabilities using `--cap-add` and `--cap-drop`. By default, Docker has a default
<del>list of capabilities that are kept. Here is a table to list the reference information on capabilities.
<add>list of capabilities that are kept. The following table lists the Linux capability options which can be added or dropped.
<ide>
<ide> | Capability Key | Capability Description |
<ide> | :----------------- | :---------------| :-------------------- |
<ide> list of capabilities that are kept. Here is a table to list the reference inform
<ide> | SYS_PACCT | Use acct(2), switch process accounting on or off. |
<ide> | SYS_ADMIN | Perform a range of system administration operations. |
<ide> | SYS_NICE | Raise process nice value (nice(2), setpriority(2)) and change the nice value for arbitrary processes. |
<del>| SYS_RESOURCE | Override Resource Limits. |
<add>| SYS_RESOURCE | Override resource Limits. |
<ide> | SYS_TIME | Set system clock (settimeofday(2), stime(2), adjtimex(2)); set real-time (hardware) clock. |
<ide> | SYS_TTY_CONFIG | Use vhangup(2); employ various privileged ioctl(2) operations on virtual terminals. |
<ide> | MKNOD | Create special files using mknod(2). |
<ide> list of capabilities that are kept. Here is a table to list the reference inform
<ide> | SETGID | Make arbitrary manipulations of process GIDs and supplementary GID list. |
<ide> | SETUID | Make arbitrary manipulations of process UIDs. |
<ide> | LINUX_IMMUTABLE | Set the FS_APPEND_FL and FS_IMMUTABLE_FL i-node flags. |
<del>| NET_BIND_SERVICE | Bind a socket to Internet domain privileged ports (port numbers less than 1024). |
<add>| NET_BIND_SERVICE | Bind a socket to internet domain privileged ports (port numbers less than 1024). |
<ide> | NET_BROADCAST | Make socket broadcasts, and listen to multicasts. |
<ide> | IPC_LOCK | Lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2)). |
<ide> | IPC_OWNER | Bypass permission checks for operations on System V IPC objects. |
<ide> list of capabilities that are kept. Here is a table to list the reference inform
<ide> | WAKE_ALARM | Trigger something that will wake up the system. |
<ide> | BLOCK_SUSPEND | Employ features that can block system suspend. |
<ide>
<del>For futher understanding, please check [capabilities(7) - Linux man page](http://linux.die.net/man/7/capabilities)
<add>Further reference information is available on the [capabilities(7) - Linux man page](http://linux.die.net/man/7/capabilities)
<ide>
<ide> Both flags support the value `all`, so if the
<ide> operator wants to have all capabilities but `MKNOD` they could use: | 1 |
Text | Text | add release notes for 1.0.5 and 1.1.3 | 776dfc678b0e726fa9cfbe9fe1c2760b48002296 | <ide><path>CHANGELOG.md
<add><a name="1.1.3"></a>
<add># 1.1.3 radioactive-gargle (2013-02-20)
<add>
<add>_Note: 1.1.x releases are [considered unstable](http://blog.angularjs.org/2012/07/angularjs-10-12-roadmap.html).
<add>They pass all tests but we reserve the right to change new features/apis in between minor releases. Check them
<add>out and please give us feedback._
<add>
<add>_Note: This release also contains all bug fixes available in [1.0.5](#1.0.5)._
<add>
<add>
<add>## Bug Fixes
<add>
<add>- **$compile:**
<add> - initialize interpolated attributes before directive linking
<add> ([bb8448c0](https://github.com/angular/angular.js/commit/bb8448c011127306df08c7479b66e5afe7a0fa94))
<add> - interpolate @ locals before the link function runs
<add> ([2ed53087](https://github.com/angular/angular.js/commit/2ed53087d7dd06d728e333a449265f7685275548))
<add>- **$http:**
<add> - do not encode special characters `@$:,` in params
<add> ([288b69a3](https://github.com/angular/angular.js/commit/288b69a314e9bd14458b6647532eb62aad5c5cdf))
<add>- **$resource:**
<add> - params should expand array values properly
<add> ([2a212344](https://github.com/angular/angular.js/commit/2a2123441c2b749b8f316a24c3ca3f77a9132a01))
<add>
<add>
<add>
<add>## Features
<add>
<add>- **$http:** allow overriding the XSRF header and cookie name
<add> ([8155c3a2](https://github.com/angular/angular.js/commit/8155c3a29ea0eb14806913b8ac08ba7727e1969c))
<add>- **$parse:** added `constant` and `literal` properties
<add> ([1ed63858](https://github.com/angular/angular.js/commit/1ed638582d2f2c7f89384d9712f4cfac52cc5b70))
<add>- **$resource:** expose promise based api via $then and $resolved
<add> ([dba6bc73](https://github.com/angular/angular.js/commit/dba6bc73e802fdae685a9f351d3e23c7efa8568a))
<add>- **$routeProvider:** add support to catch-all parameters in routes
<add> ([7eafbb98](https://github.com/angular/angular.js/commit/7eafbb98c64c0dc079d7d3ec589f1270b7f6fea5))
<add>- **Scope:**
<add> - expose transcluded and isolate scope info for batarang
<add> ([649b8922](https://github.com/angular/angular.js/commit/649b892205615a144dafff9984c0e6ab10ed341d))
<add> - only evaluate constant $watch expressions once
<add> ([1d7a95df](https://github.com/angular/angular.js/commit/1d7a95df565192fc02a18b0b297b39dd615eaeb5))
<add>- **angular.noConflict:** added api to restore previous angular namespace reference
<add> ([12ba6cec](https://github.com/angular/angular.js/commit/12ba6cec4fb79521101744e02a7e09f9fbb591c4))
<add>- **Directives:**
<add> - **ngSwitch:** support multiple matches on ngSwitchWhen and ngSwitchDefault
<add> ([0af17204](https://github.com/angular/angular.js/commit/0af172040e03811c59d01682968241e3df226774),
<add> [#1074](https://github.com/angular/angular.js/issues/1074))
<add>- **Filters:**
<add> - **date:** add `[.,]sss` formatter for milliseconds
<add> ([df744f3a](https://github.com/angular/angular.js/commit/df744f3af46fc227a934f16cb63c7a6038e7133b))
<add> - **filter:** add comparison function to filter
<add> ([ace54ff0](https://github.com/angular/angular.js/commit/ace54ff08c4593195b49eadb04d258e6409d969e))
<add>
<add>
<add>## Breaking Changes
<add>
<add>- **$http:** due to [288b69a3](https://github.com/angular/angular.js/commit/288b69a314e9bd14458b6647532eb62aad5c5cdf),
<add> $http now follows RFC3986 and does not encode special characters like `$@,:` in params.
<add> If your application needs to encode these characters, encode them manually, before sending the request.
<add>- **$resource:** due to [2a212344](https://github.com/angular/angular.js/commit/2a2123441c2b749b8f316a24c3ca3f77a9132a01),
<add> if the server relied on the buggy behavior of serializing arrays as http query arguments then
<add> either the backend should be fixed or a simple serialization of the array should be done
<add> on the client before calling the resource service.
<add>
<add>
<add>
<add>
<add><a name="1.0.5"></a>
<add># 1.0.5 flatulent-propulsion (2013-02-20)
<add>
<add>
<add>## Bug Fixes
<add>
<add>- **$compile:**
<add> - sanitize values bound to a[href]
<add> ([9532234b](https://github.com/angular/angular.js/commit/9532234bf1c408af9a6fd2c4743fdb585b920531))
<add> - rename $compileNote to compileNode
<add> ([92ca7efa](https://github.com/angular/angular.js/commit/92ca7efaa4bc4f37da3008b234e19343a1fa4207),
<add> [#1941](https://github.com/angular/angular.js/issues/1941))
<add> - should not leak memory when there are top level empty text nodes
<add> ([791804bd](https://github.com/angular/angular.js/commit/791804bdbfa6da7a39283623bd05628a01cd8720))
<add> - allow startingTag method to handle text / comment nodes
<add> ([755beb2b](https://github.com/angular/angular.js/commit/755beb2b66ce9f9f9a218f2355bbaf96d94fbc15))
<add>- **$cookies:** set cookies on Safari&IE when base[href] is undefined
<add> ([70909245](https://github.com/angular/angular.js/commit/7090924515214752b919b0c5630b3ea5e7c77223),
<add> [#1190](https://github.com/angular/angular.js/issues/1190))
<add>- **$http:**
<add> - patch for Firefox bug w/ CORS and response headers
<add> ([e19b04c9](https://github.com/angular/angular.js/commit/e19b04c9ec985821edf1269c628cfa261f81d631),
<add> [#1468](https://github.com/angular/angular.js/issues/1468))
<add>- **$resource:**
<add> - update RegExp to allow urlParams with out leading slash
<add> ([b7e1fb05](https://github.com/angular/angular.js/commit/b7e1fb0515798e1b4f3f2426f6b050951bee2617))
<add>- **Directives:**
<add> - **a:** workaround IE bug affecting mailto urls
<add> ([37e8b122](https://github.com/angular/angular.js/commit/37e8b12265291918396bfee65d444a8f63697b73),
<add> [#1949](https://github.com/angular/angular.js/issues/1949))
<add> - **ngClass:** keep track of old ngClass value manually
<add> ([5f5d4fea](https://github.com/angular/angular.js/commit/5f5d4feadbfa9d8ecc8150041dfd2bca2b2e9fea),
<add> [#1637](https://github.com/angular/angular.js/issues/1637))
<add> - **ngSwitch:** make ngSwitch compatible with controller backwards-compatiblity module
<add> ([9b7c1d0f](https://github.com/angular/angular.js/commit/9b7c1d0f7ce442d4ad2ec587e66d2d335e64fa4e))
<add>- **Filters:**
<add> - **date:** invert timezone sign and always display sign
<add> ([b001c8ec](https://github.com/angular/angular.js/commit/b001c8ece5472626bf49cf82753e8ac1aafd2513),
<add> [#1261](https://github.com/angular/angular.js/issues/1261))
<add> - **number:** fix formatting when "0" passed as fractionSize
<add> ([f5835963](https://github.com/angular/angular.js/commit/f5835963d5982003a713dd354eefd376ed39ac02))
<add>- **scenario runner:** include error messages in XML output
<add> ([d46fe3c2](https://github.com/angular/angular.js/commit/d46fe3c23fa269dcc10249148f2af14f3db6b066))
<add>- **Misc:**
<add> - don't use instanceof to detect arrays
<add> ([3c2aee01](https://github.com/angular/angular.js/commit/3c2aee01b0b299995eb92f4255159585b0f53c10),
<add> [#1966](https://github.com/angular/angular.js/issues/1966))
<add> - angular.forEach should correctly iterate over objects with length prop
<add> ([ec54712f](https://github.com/angular/angular.js/commit/ec54712ff3dab1ade44f94fa82d67edeffa79a1d),
<add> [#1840](https://github.com/angular/angular.js/issues/1840))
<add>
<add>
<add>
<ide> <a name="1.1.2"></a>
<ide> # 1.1.2 tofu-animation (2013-01-22)
<ide> | 1 |
Javascript | Javascript | remove unused import | 7691bfa119f4e1b1ee96b8be71f26a2999c3bfeb | <ide><path>lib/dependencies/HarmonyImportSideEffectDependency.js
<ide>
<ide> const HarmonyImportDependency = require("./HarmonyImportDependency");
<ide>
<del>/** @typedef {import("webpack-sources").ReplaceSource} ReplaceSource */
<ide> /** @typedef {import("../Dependency")} Dependency */
<ide> /** @typedef {import("../DependencyTemplates")} DependencyTemplates */
<ide> /** @typedef {import("../InitFragment")} InitFragment */ | 1 |
Javascript | Javascript | respect morph targets in outlinepass | 39bff4a955304d23f6de79edabd712acbac9e6da | <ide><path>examples/js/postprocessing/OutlinePass.js
<ide> #include <project_vertex>
<ide>
<ide> vPosition = mvPosition;
<del> vec4 worldPosition = modelMatrix * vec4( position, 1.0 );
<add> vec4 worldPosition = modelMatrix * vec4( transformed, 1.0 );
<ide> projTexCoord = textureMatrix * worldPosition;
<ide>
<ide> }`,
<ide><path>examples/jsm/postprocessing/OutlinePass.js
<ide> class OutlinePass extends Pass {
<ide> #include <project_vertex>
<ide>
<ide> vPosition = mvPosition;
<del> vec4 worldPosition = modelMatrix * vec4( position, 1.0 );
<add> vec4 worldPosition = modelMatrix * vec4( transformed, 1.0 );
<ide> projTexCoord = textureMatrix * worldPosition;
<ide>
<ide> }`, | 2 |
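In the OutlinePass patch above, `worldPosition` in the prepare-mask vertex shader is now built from `transformed` (the output of the standard vertex chunk chain) instead of the raw `position` attribute, so it carries the same morph-target displacement as `mvPosition`; before the change the two could disagree for deformed vertices. A sketch of the case this affects, an outlined mesh with an active morph target, assuming a typical EffectComposer setup (import paths and the need for a `morphTargets` material flag vary across three.js releases; none of this scene code comes from the commit):

```js
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { OutlinePass } from 'three/examples/jsm/postprocessing/OutlinePass.js';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

// A box with one morph target that pushes every vertex outward to twice its size.
const geometry = new THREE.BoxGeometry(1, 1, 1);
const base = geometry.attributes.position;
const target = base.clone();
for (let i = 0; i < target.count; i++) {
  target.setXYZ(i, base.getX(i) * 2, base.getY(i) * 2, base.getZ(i) * 2);
}
geometry.morphAttributes.position = [target];

// Older three.js releases may also need `morphTargets: true` on the material.
const mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
mesh.morphTargetInfluences[0] = 1; // fully morphed
scene.add(mesh);

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

const outlinePass = new OutlinePass(
  new THREE.Vector2(window.innerWidth, window.innerHeight), scene, camera);
outlinePass.selectedObjects = [mesh]; // with the fix, the outline follows the morphed shape
composer.addPass(outlinePass);

renderer.setAnimationLoop(() => composer.render());
```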
Javascript | Javascript | fix regexp nits | 76340e3f1007998c7cb9d69fa1a42d42663ca6c2 | <ide><path>test/common/index.js
<ide> if (exports.isWindows) {
<ide> }
<ide>
<ide> const ifaces = os.networkInterfaces();
<add>const re = /lo/;
<ide> exports.hasIPv6 = Object.keys(ifaces).some(function(name) {
<del> return /lo/.test(name) && ifaces[name].some(function(info) {
<add> return re.test(name) && ifaces[name].some(function(info) {
<ide> return info.family === 'IPv6';
<ide> });
<ide> });
<ide> function leakedGlobals() {
<ide> leaked.push(val);
<ide>
<ide> if (global.__coverage__) {
<del> return leaked.filter((varname) => !/^(cov_|__cov)/.test(varname));
<add> return leaked.filter((varname) => !/^(?:cov_|__cov)/.test(varname));
<ide> } else {
<ide> return leaked;
<ide> }
<ide><path>test/debugger/helper-debugger-repl.js
<ide> function startDebugger(scriptToDebug) {
<ide> child.stderr.pipe(process.stderr);
<ide>
<ide> child.on('line', function(line) {
<del> line = line.replace(/^(debug> *)+/, '');
<add> line = line.replace(/^(?:debug> *)+/, '');
<ide> console.log(line);
<ide> assert.ok(expected.length > 0, `Got unexpected line: ${line}`);
<ide>
<ide> const expectedLine = expected[0].lines.shift();
<del> assert.ok(line.match(expectedLine) !== null, `${line} != ${expectedLine}`);
<add> assert.ok(expectedLine.test(line), `${line} != ${expectedLine}`);
<ide>
<ide> if (expected[0].lines.length === 0) {
<ide> const callback = expected[0].callback;
<ide><path>test/doctool/test-doctool-html.js
<ide> const testData = [
<ide> },
<ide> ];
<ide>
<add>const spaces = /\s/g;
<add>
<ide> testData.forEach((item) => {
<ide> // Normalize expected data by stripping whitespace
<del> const expected = item.html.replace(/\s/g, '');
<add> const expected = item.html.replace(spaces, '');
<ide> const includeAnalytics = typeof item.analyticsId !== 'undefined';
<ide>
<ide> fs.readFile(item.file, 'utf8', common.mustCall((err, input) => {
<ide> testData.forEach((item) => {
<ide> common.mustCall((err, output) => {
<ide> assert.ifError(err);
<ide>
<del> const actual = output.replace(/\s/g, '');
<add> const actual = output.replace(spaces, '');
<ide> // Assert that the input stripped of all whitespace contains the
<ide> // expected list
<ide> assert.notStrictEqual(actual.indexOf(expected), -1);
<ide><path>test/inspector/inspector-helper.js
<ide> exports.startNodeForInspectorTest = function(callback,
<ide> clearTimeout(timeoutId);
<ide> console.log('[err]', text);
<ide> if (found) return;
<del> const match = text.match(/Debugger listening on ws:\/\/(.+):(\d+)\/(.+)/);
<add> const match = text.match(/Debugger listening on ws:\/\/.+:(\d+)\/.+/);
<ide> found = true;
<ide> child.stderr.removeListener('data', dataCallback);
<ide> assert.ok(match, text);
<del> callback(new Harness(match[2], child));
<add> callback(new Harness(match[1], child));
<ide> });
<ide>
<ide> child.stderr.on('data', dataCallback);
<ide><path>test/inspector/test-inspector.js
<ide> function checkListResponse(err, response) {
<ide> assert.strictEqual(1, response.length);
<ide> assert.ok(response[0]['devtoolsFrontendUrl']);
<ide> assert.ok(
<del> response[0]['webSocketDebuggerUrl']
<del> .match(/ws:\/\/127\.0\.0\.1:\d+\/[0-9A-Fa-f]{8}-/));
<add> /ws:\/\/127\.0\.0\.1:\d+\/[0-9A-Fa-f]{8}-/
<add> .test(response[0]['webSocketDebuggerUrl']));
<ide> }
<ide>
<ide> function checkVersion(err, response) {
<ide><path>test/parallel/test-assert-checktag.js
<ide> const util = require('util');
<ide> // for assert.throws()
<ide> function re(literals, ...values) {
<ide> let result = literals[0];
<add> const escapeRE = /[\\^$.*+?()[\]{}|=!<>:-]/g;
<ide> for (const [i, value] of values.entries()) {
<ide> const str = util.inspect(value);
<ide> // Need to escape special characters.
<del> result += str.replace(/[\\^$.*+?()[\]{}|=!<>:-]/g, '\\$&');
<add> result += str.replace(escapeRE, '\\$&');
<ide> result += literals[i + 1];
<ide> }
<ide> return common.expectsError({
<ide><path>test/parallel/test-assert-deep.js
<ide> const util = require('util');
<ide> // for assert.throws()
<ide> function re(literals, ...values) {
<ide> let result = literals[0];
<add> const escapeRE = /[\\^$.*+?()[\]{}|=!<>:-]/g;
<ide> for (const [i, value] of values.entries()) {
<ide> const str = util.inspect(value);
<ide> // Need to escape special characters.
<del> result += str.replace(/[\\^$.*+?()[\]{}|=!<>:-]/g, '\\$&');
<add> result += str.replace(escapeRE, '\\$&');
<ide> result += literals[i + 1];
<ide> }
<ide> return common.expectsError({
<ide><path>test/parallel/test-assert.js
<ide> assert.throws(() => {
<ide> {
<ide> // bad args to AssertionError constructor should throw TypeError
<ide> const args = [1, true, false, '', null, Infinity, Symbol('test'), undefined];
<add> const re = /^The "options" argument must be of type object$/;
<ide> args.forEach((input) => {
<ide> assert.throws(
<ide> () => new assert.AssertionError(input),
<ide> common.expectsError({
<ide> code: 'ERR_INVALID_ARG_TYPE',
<ide> type: TypeError,
<del> message: /^The "options" argument must be of type object$/
<add> message: re
<ide> }));
<ide> });
<ide> }
<ide><path>test/parallel/test-buffer-bytelength.js
<ide> const SlowBuffer = require('buffer').SlowBuffer;
<ide> const vm = require('vm');
<ide>
<ide> // coerce values to string
<del>assert.throws(() => { Buffer.byteLength(32, 'latin1'); },
<del> /"string" must be a string, Buffer, or ArrayBuffer/);
<del>assert.throws(() => { Buffer.byteLength(NaN, 'utf8'); },
<del> /"string" must be a string, Buffer, or ArrayBuffer/);
<del>assert.throws(() => { Buffer.byteLength({}, 'latin1'); },
<del> /"string" must be a string, Buffer, or ArrayBuffer/);
<del>assert.throws(() => { Buffer.byteLength(); },
<del> /"string" must be a string, Buffer, or ArrayBuffer/);
<add>const re = /"string" must be a string, Buffer, or ArrayBuffer/;
<add>assert.throws(() => { Buffer.byteLength(32, 'latin1'); }, re);
<add>assert.throws(() => { Buffer.byteLength(NaN, 'utf8'); }, re);
<add>assert.throws(() => { Buffer.byteLength({}, 'latin1'); }, re);
<add>assert.throws(() => { Buffer.byteLength(); }, re);
<ide>
<ide> assert.strictEqual(Buffer.byteLength('', undefined, true), -1);
<ide>
<ide><path>test/parallel/test-buffer-prototype-inspect.js
<ide> const util = require('util');
<ide>
<ide> {
<ide> const buf = Buffer.from('x'.repeat(51));
<del> assert.ok(/^<Buffer (78 ){50}\.\.\. >$/.test(util.inspect(buf)));
<add> assert.ok(/^<Buffer (?:78 ){50}\.\.\. >$/.test(util.inspect(buf)));
<ide> }
<ide><path>test/parallel/test-child-process-constructor.js
<ide> assert.strictEqual(typeof ChildProcess, 'function');
<ide> {
<ide> // Verify that invalid options to spawn() throw.
<ide> const child = new ChildProcess();
<add> const re = /^TypeError: "options" must be an object$/;
<ide>
<ide> [undefined, null, 'foo', 0, 1, NaN, true, false].forEach((options) => {
<ide> assert.throws(() => {
<ide> child.spawn(options);
<del> }, /^TypeError: "options" must be an object$/);
<add> }, re);
<ide> });
<ide> }
<ide>
<ide> {
<ide> // Verify that spawn throws if file is not a string.
<ide> const child = new ChildProcess();
<add> const re = /^TypeError: "file" must be a string$/;
<ide>
<ide> [undefined, null, 0, 1, NaN, true, false, {}].forEach((file) => {
<ide> assert.throws(() => {
<ide> child.spawn({ file });
<del> }, /^TypeError: "file" must be a string$/);
<add> }, re);
<ide> });
<ide> }
<ide>
<ide> {
<ide> // Verify that spawn throws if envPairs is not an array or undefined.
<ide> const child = new ChildProcess();
<add> const re = /^TypeError: "envPairs" must be an array$/;
<ide>
<ide> [null, 0, 1, NaN, true, false, {}, 'foo'].forEach((envPairs) => {
<ide> assert.throws(() => {
<ide> child.spawn({ envPairs, stdio: ['ignore', 'ignore', 'ignore', 'ipc'] });
<del> }, /^TypeError: "envPairs" must be an array$/);
<add> }, re);
<ide> });
<ide> }
<ide>
<ide> {
<ide> // Verify that spawn throws if args is not an array or undefined.
<ide> const child = new ChildProcess();
<add> const re = /^TypeError: "args" must be an array$/;
<ide>
<ide> [null, 0, 1, NaN, true, false, {}, 'foo'].forEach((args) => {
<ide> assert.throws(() => {
<ide> child.spawn({ file: 'foo', args });
<del> }, /^TypeError: "args" must be an array$/);
<add> }, re);
<ide> });
<ide> }
<ide>
<ide><path>test/parallel/test-cli-syntax.js
<ide> const syntaxArgs = [
<ide> ['--check']
<ide> ];
<ide>
<add>const syntaxErrorRE = /^SyntaxError: Unexpected identifier$/m;
<add>const notFoundRE = /^Error: Cannot find module/m;
<add>
<ide> // test good syntax with and without shebang
<ide> [
<ide> 'syntax/good_syntax.js',
<ide> const syntaxArgs = [
<ide> assert(c.stderr.startsWith(file), "stderr doesn't start with the filename");
<ide>
<ide> // stderr should have a syntax error message
<del> const match = c.stderr.match(/^SyntaxError: Unexpected identifier$/m);
<del> assert(match, 'stderr incorrect');
<add> assert(syntaxErrorRE.test(c.stderr), 'stderr incorrect');
<ide>
<ide> assert.strictEqual(c.status, 1, `code === ${c.status}`);
<ide> });
<ide> const syntaxArgs = [
<ide> assert.strictEqual(c.stdout, '', 'stdout produced');
<ide>
<ide> // stderr should have a module not found error message
<del> const match = c.stderr.match(/^Error: Cannot find module/m);
<del> assert(match, 'stderr incorrect');
<add> assert(notFoundRE.test(c.stderr), 'stderr incorrect');
<ide>
<ide> assert.strictEqual(c.status, 1, `code === ${c.status}`);
<ide> });
<ide> syntaxArgs.forEach(function(args) {
<ide> assert.strictEqual(c.stdout, '', 'stdout produced');
<ide>
<ide> // stderr should have a syntax error message
<del> const match = c.stderr.match(/^SyntaxError: Unexpected identifier$/m);
<del> assert(match, 'stderr incorrect');
<add> assert(syntaxErrorRE.test(c.stderr), 'stderr incorrect');
<ide>
<ide> assert.strictEqual(c.status, 1, `code === ${c.status}`);
<ide> });
<ide><path>test/parallel/test-crypto-authenticated.js
<ide> const TEST_CASES = [
<ide> tag: 'a44a8266ee1c8eb0c8b5d4cf5ae9f19a', tampered: false },
<ide> ];
<ide>
<add>const errMessages = {
<add> auth: / auth/,
<add> state: / state/,
<add> FIPS: /not supported in FIPS mode/,
<add> length: /Invalid IV length/,
<add>};
<add>
<ide> const ciphers = crypto.getCiphers();
<ide>
<ide> for (const i in TEST_CASES) {
<ide> for (const i in TEST_CASES) {
<ide> assert.strictEqual(msg, test.plain);
<ide> } else {
<ide> // assert that final throws if input data could not be verified!
<del> assert.throws(function() { decrypt.final('ascii'); }, / auth/);
<add> assert.throws(function() { decrypt.final('ascii'); }, errMessages.auth);
<ide> }
<ide> }
<ide>
<ide> if (test.password) {
<ide> if (common.hasFipsCrypto) {
<ide> assert.throws(() => { crypto.createCipher(test.algo, test.password); },
<del> /not supported in FIPS mode/);
<add> errMessages.FIPS);
<ide> } else {
<ide> const encrypt = crypto.createCipher(test.algo, test.password);
<ide> if (test.aad)
<ide> for (const i in TEST_CASES) {
<ide> if (test.password) {
<ide> if (common.hasFipsCrypto) {
<ide> assert.throws(() => { crypto.createDecipher(test.algo, test.password); },
<del> /not supported in FIPS mode/);
<add> errMessages.FIPS);
<ide> } else {
<ide> const decrypt = crypto.createDecipher(test.algo, test.password);
<ide> decrypt.setAuthTag(Buffer.from(test.tag, 'hex'));
<ide> for (const i in TEST_CASES) {
<ide> assert.strictEqual(msg, test.plain);
<ide> } else {
<ide> // assert that final throws if input data could not be verified!
<del> assert.throws(function() { decrypt.final('ascii'); }, / auth/);
<add> assert.throws(function() { decrypt.final('ascii'); }, errMessages.auth);
<ide> }
<ide> }
<ide> }
<ide> for (const i in TEST_CASES) {
<ide> Buffer.from(test.key, 'hex'),
<ide> Buffer.from(test.iv, 'hex'));
<ide> encrypt.update('blah', 'ascii');
<del> assert.throws(function() { encrypt.getAuthTag(); }, / state/);
<add> assert.throws(function() { encrypt.getAuthTag(); }, errMessages.state);
<ide> }
<ide>
<ide> {
<ide> for (const i in TEST_CASES) {
<ide> Buffer.from(test.key, 'hex'),
<ide> Buffer.from(test.iv, 'hex'));
<ide> assert.throws(() => { encrypt.setAuthTag(Buffer.from(test.tag, 'hex')); },
<del> / state/);
<add> errMessages.state);
<ide> }
<ide>
<ide> {
<ide> // trying to read tag from decryption object:
<ide> const decrypt = crypto.createDecipheriv(test.algo,
<ide> Buffer.from(test.key, 'hex'),
<ide> Buffer.from(test.iv, 'hex'));
<del> assert.throws(function() { decrypt.getAuthTag(); }, / state/);
<add> assert.throws(function() { decrypt.getAuthTag(); }, errMessages.state);
<ide> }
<ide>
<ide> {
<ide> for (const i in TEST_CASES) {
<ide> Buffer.from(test.key, 'hex'),
<ide> Buffer.alloc(0)
<ide> );
<del> }, /Invalid IV length/);
<add> }, errMessages.length);
<ide> }
<ide> }
<ide>
<ide> for (const i in TEST_CASES) {
<ide> '6fKjEjR3Vl30EUYC');
<ide> encrypt.update('blah', 'ascii');
<ide> encrypt.final();
<del> assert.throws(() => encrypt.getAuthTag(), / state/);
<del> assert.throws(() => encrypt.setAAD(Buffer.from('123', 'ascii')), / state/);
<add> assert.throws(() => encrypt.getAuthTag(), errMessages.state);
<add> assert.throws(() => encrypt.setAAD(Buffer.from('123', 'ascii')),
<add> errMessages.state);
<ide> }
<ide><path>test/parallel/test-crypto-cipheriv-decipheriv.js
<ide> testCipher2(Buffer.from('0123456789abcd0123456789'), Buffer.from('12345678'));
<ide> // Zero-sized IV should be accepted in ECB mode.
<ide> crypto.createCipheriv('aes-128-ecb', Buffer.alloc(16), Buffer.alloc(0));
<ide>
<add>const errMessage = /Invalid IV length/;
<add>
<ide> // But non-empty IVs should be rejected.
<ide> for (let n = 1; n < 256; n += 1) {
<ide> assert.throws(
<ide> () => crypto.createCipheriv('aes-128-ecb', Buffer.alloc(16),
<ide> Buffer.alloc(n)),
<del> /Invalid IV length/);
<add> errMessage);
<ide> }
<ide>
<ide> // Correctly sized IV should be accepted in CBC mode.
<ide> for (let n = 0; n < 256; n += 1) {
<ide> assert.throws(
<ide> () => crypto.createCipheriv('aes-128-cbc', Buffer.alloc(16),
<ide> Buffer.alloc(n)),
<del> /Invalid IV length/);
<add> errMessage);
<ide> }
<ide>
<ide> // Zero-sized IV should be rejected in GCM mode.
<ide> assert.throws(
<ide> () => crypto.createCipheriv('aes-128-gcm', Buffer.alloc(16),
<ide> Buffer.alloc(0)),
<del> /Invalid IV length/);
<add> errMessage);
<ide>
<ide> // But all other IV lengths should be accepted.
<ide> for (let n = 1; n < 256; n += 1) {
<ide><path>test/parallel/test-crypto-dh.js
<ide> if (availableCurves.has('prime256v1') && availableCurves.has('secp256k1')) {
<ide> // rejected.
<ide> ecdh5.setPrivateKey(cafebabeKey, 'hex');
<ide>
<del> [ // Some invalid private keys for the secp256k1 curve.
<del> '0000000000000000000000000000000000000000000000000000000000000000',
<del> 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141',
<del> 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
<add> // Some invalid private keys for the secp256k1 curve.
<add> const errMessage = /^Error: Private key is not valid for specified curve\.$/;
<add> ['0000000000000000000000000000000000000000000000000000000000000000',
<add> 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141',
<add> 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
<ide> ].forEach((element) => {
<ide> assert.throws(() => {
<ide> ecdh5.setPrivateKey(element, 'hex');
<del> }, /^Error: Private key is not valid for specified curve\.$/);
<add> }, errMessage);
<ide> // Verify object state did not change.
<ide> assert.strictEqual(ecdh5.getPrivateKey('hex'), cafebabeKey);
<ide> });
<ide><path>test/parallel/test-crypto-random.js
<ide> const expectedErrorRegexp = /^TypeError: size must be a number >= 0$/;
<ide> Buffer.alloc(10),
<ide> new Uint8Array(new Array(10).fill(0))
<ide> ];
<add> const errMessages = {
<add> offsetNotNumber: /offset must be a number/,
<add> offsetOutOfRange: /offset out of range/,
<add> offsetNotUInt32: /offset must be a uint32/,
<add> sizeNotNumber: /size must be a number/,
<add> sizeNotUInt32: /size must be a uint32/,
<add> bufferTooSmall: /buffer too small/,
<add> };
<ide>
<ide> for (const buf of bufs) {
<ide> const len = Buffer.byteLength(buf);
<ide> assert.strictEqual(len, 10, `Expected byteLength of 10, got ${len}`);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 'test');
<del> }, /offset must be a number/);
<add> }, errMessages.offsetNotNumber);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, NaN);
<del> }, /offset must be a number/);
<add> }, errMessages.offsetNotNumber);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 'test', common.mustNotCall());
<del> }, /offset must be a number/);
<add> }, errMessages.offsetNotNumber);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, NaN, common.mustNotCall());
<del> }, /offset must be a number/);
<add> }, errMessages.offsetNotNumber);
<ide>
<ide> const max = require('buffer').kMaxLength + 1;
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 11);
<del> }, /offset out of range/);
<add> }, errMessages.offsetOutOfRange);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, max);
<del> }, /offset out of range/);
<add> }, errMessages.offsetOutOfRange);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 11, common.mustNotCall());
<del> }, /offset out of range/);
<add> }, errMessages.offsetOutOfRange);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, max, common.mustNotCall());
<del> }, /offset out of range/);
<add> }, errMessages.offsetOutOfRange);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 0, 'test');
<del> }, /size must be a number/);
<add> }, errMessages.sizeNotNumber);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 0, NaN);
<del> }, /size must be a number/);
<add> }, errMessages.sizeNotNumber);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 0, 'test', common.mustNotCall());
<del> }, /size must be a number/);
<add> }, errMessages.sizeNotNumber);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 0, NaN, common.mustNotCall());
<del> }, /size must be a number/);
<add> }, errMessages.sizeNotNumber);
<ide>
<ide> {
<ide> const size = (-1 >>> 0) + 1;
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 0, -10);
<del> }, /size must be a uint32/);
<add> }, errMessages.sizeNotUInt32);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 0, size);
<del> }, /size must be a uint32/);
<add> }, errMessages.sizeNotUInt32);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 0, -10, common.mustNotCall());
<del> }, /size must be a uint32/);
<add> }, errMessages.sizeNotUInt32);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 0, size, common.mustNotCall());
<del> }, /size must be a uint32/);
<add> }, errMessages.sizeNotUInt32);
<ide> }
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, -10);
<del> }, /offset must be a uint32/);
<add> }, errMessages.offsetNotUInt32);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, -10, common.mustNotCall());
<del> }, /offset must be a uint32/);
<add> }, errMessages.offsetNotUInt32);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 1, 10);
<del> }, /buffer too small/);
<add> }, errMessages.bufferTooSmall);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 1, 10, common.mustNotCall());
<del> }, /buffer too small/);
<add> }, errMessages.bufferTooSmall);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, 0, 12);
<del> }, /buffer too small/);
<add> }, errMessages.bufferTooSmall);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, 0, 12, common.mustNotCall());
<del> }, /buffer too small/);
<add> }, errMessages.bufferTooSmall);
<ide>
<ide> {
<ide> // Offset is too big
<ide> const offset = (-1 >>> 0) + 1;
<ide> assert.throws(() => {
<ide> crypto.randomFillSync(buf, offset, 10);
<del> }, /offset must be a uint32/);
<add> }, errMessages.offsetNotUInt32);
<ide>
<ide> assert.throws(() => {
<ide> crypto.randomFill(buf, offset, 10, common.mustNotCall());
<del> }, /offset must be a uint32/);
<add> }, errMessages.offsetNotUInt32);
<ide> }
<ide> }
<ide> }
<ide><path>test/parallel/test-crypto-sign-verify.js
<ide> const modSize = 1024;
<ide> getEffectiveSaltLength(crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN),
<ide> 0, 16, 32, 64, 128
<ide> ];
<add> const errMessage = /^Error:.*data too large for key size$/;
<ide>
<ide> signSaltLengths.forEach((signSaltLength) => {
<ide> if (signSaltLength > max) {
<ide> const modSize = 1024;
<ide> padding: crypto.constants.RSA_PKCS1_PSS_PADDING,
<ide> saltLength: signSaltLength
<ide> });
<del> }, /^Error:.*data too large for key size$/);
<add> }, errMessage);
<ide> } else {
<ide> // Otherwise, a valid signature should be generated
<ide> const s4 = crypto.createSign(algo)
<ide> const modSize = 1024;
<ide>
<ide> // Test exceptions for invalid `padding` and `saltLength` values
<ide> {
<add> const paddingNotInteger = /^TypeError: padding must be an integer$/;
<add> const saltLengthNotInteger = /^TypeError: saltLength must be an integer$/;
<add>
<ide> [null, undefined, NaN, 'boom', {}, [], true, false]
<ide> .forEach((invalidValue) => {
<ide> assert.throws(() => {
<ide> const modSize = 1024;
<ide> key: keyPem,
<ide> padding: invalidValue
<ide> });
<del> }, /^TypeError: padding must be an integer$/);
<add> }, paddingNotInteger);
<ide>
<ide> assert.throws(() => {
<ide> crypto.createSign('RSA-SHA256')
<ide> const modSize = 1024;
<ide> padding: crypto.constants.RSA_PKCS1_PSS_PADDING,
<ide> saltLength: invalidValue
<ide> });
<del> }, /^TypeError: saltLength must be an integer$/);
<add> }, saltLengthNotInteger);
<ide> });
<ide>
<ide> assert.throws(() => {
<ide><path>test/parallel/test-crypto.js
<ide> validateList(cryptoCiphers);
<ide> const tlsCiphers = tls.getCiphers();
<ide> assert(tls.getCiphers().includes('aes256-sha'));
<ide> // There should be no capital letters in any element.
<del>assert(tlsCiphers.every((value) => /^[^A-Z]+$/.test(value)));
<add>const noCapitals = /^[^A-Z]+$/;
<add>assert(tlsCiphers.every((value) => noCapitals.test(value)));
<ide> validateList(tlsCiphers);
<ide>
<ide> // Assert that we have sha and sha1 but not SHA and SHA1.
<ide><path>test/parallel/test-dgram-createSocket-type.js
<ide> const validTypes = [
<ide> { type: 'udp4' },
<ide> { type: 'udp6' }
<ide> ];
<add>const errMessage = /^Bad socket type specified\. Valid types are: udp4, udp6$/;
<ide>
<ide> // Error must be thrown with invalid types
<ide> invalidTypes.forEach((invalidType) => {
<ide> invalidTypes.forEach((invalidType) => {
<ide> }, common.expectsError({
<ide> code: 'ERR_SOCKET_BAD_TYPE',
<ide> type: Error,
<del> message: /^Bad socket type specified\. Valid types are: udp4, udp6$/
<add> message: errMessage
<ide> }));
<ide> });
<ide>
<ide><path>test/parallel/test-error-reporting.js
<ide> function errExec(script, callback) {
<ide> });
<ide> }
<ide>
<add>const syntaxErrorMessage = /SyntaxError/;
<add>
<ide>
<ide> // Simple throw error
<ide> errExec('throws_error.js', common.mustCall(function(err, stdout, stderr) {
<ide> errExec('throws_error.js', common.mustCall(function(err, stdout, stderr) {
<ide>
<ide> // Trying to JSON.parse(undefined)
<ide> errExec('throws_error2.js', common.mustCall(function(err, stdout, stderr) {
<del> assert.ok(/SyntaxError/.test(stderr));
<add> assert.ok(syntaxErrorMessage.test(stderr));
<ide> }));
<ide>
<ide>
<ide> // Trying to JSON.parse(undefined) in nextTick
<ide> errExec('throws_error3.js', common.mustCall(function(err, stdout, stderr) {
<del> assert.ok(/SyntaxError/.test(stderr));
<add> assert.ok(syntaxErrorMessage.test(stderr));
<ide> }));
<ide>
<ide>
<ide> // throw ILLEGAL error
<ide> errExec('throws_error4.js', common.mustCall(function(err, stdout, stderr) {
<ide> assert.ok(/\/\*\*/.test(stderr));
<del> assert.ok(/SyntaxError/.test(stderr));
<add> assert.ok(syntaxErrorMessage.test(stderr));
<ide> }));
<ide>
<ide> // Specific long exception line doesn't result in stack overflow
<ide> errExec('throws_error5.js', common.mustCall(function(err, stdout, stderr) {
<del> assert.ok(/SyntaxError/.test(stderr));
<add> assert.ok(syntaxErrorMessage.test(stderr));
<ide> }));
<ide>
<ide> // Long exception line with length > errorBuffer doesn't result in assertion
<ide> errExec('throws_error6.js', common.mustCall(function(err, stdout, stderr) {
<del> assert.ok(/SyntaxError/.test(stderr));
<add> assert.ok(syntaxErrorMessage.test(stderr));
<ide> }));
<ide>
<ide> // Object that throws in toString() doesn't print garbage
<ide><path>test/parallel/test-event-emitter-max-listeners.js
<ide> e.on('maxListeners', common.mustCall());
<ide> e.setMaxListeners(42);
<ide>
<ide> const throwsObjs = [NaN, -1, 'and even this'];
<add>const maxError = /^TypeError: "n" argument must be a positive number$/;
<add>const defError = /^TypeError: "defaultMaxListeners" must be a positive number$/;
<ide>
<ide> for (const obj of throwsObjs) {
<del> assert.throws(() => e.setMaxListeners(obj),
<del> /^TypeError: "n" argument must be a positive number$/);
<del> assert.throws(() => events.defaultMaxListeners = obj,
<del> /^TypeError: "defaultMaxListeners" must be a positive number$/);
<add> assert.throws(() => e.setMaxListeners(obj), maxError);
<add> assert.throws(() => events.defaultMaxListeners = obj, defError);
<ide> }
<ide>
<ide> e.emit('maxListeners');
<ide><path>test/parallel/test-fs-null-bytes.js
<ide> function check(async, sync) {
<ide> const expected = /Path must be a string without null bytes/;
<ide> const argsSync = Array.prototype.slice.call(arguments, 2);
<ide> const argsAsync = argsSync.concat((er) => {
<del> assert(er && er.message.match(expected));
<add> assert(er && expected.test(er.message));
<ide> assert.strictEqual(er.code, 'ENOENT');
<ide> });
<ide>
<ide><path>test/parallel/test-fs-read-stream-throw-type-error.js
<ide> assert.doesNotThrow(function() {
<ide> fs.createReadStream(example, {encoding: 'utf8'});
<ide> });
<ide>
<add>const errMessage = /"options" must be a string or an object/;
<ide> assert.throws(function() {
<ide> fs.createReadStream(example, 123);
<del>}, /"options" must be a string or an object/);
<add>}, errMessage);
<ide> assert.throws(function() {
<ide> fs.createReadStream(example, 0);
<del>}, /"options" must be a string or an object/);
<add>}, errMessage);
<ide> assert.throws(function() {
<ide> fs.createReadStream(example, true);
<del>}, /"options" must be a string or an object/);
<add>}, errMessage);
<ide> assert.throws(function() {
<ide> fs.createReadStream(example, false);
<del>}, /"options" must be a string or an object/);
<add>}, errMessage);
<ide><path>test/parallel/test-global-console-exists.js
<ide> const leakWarning = /EventEmitter memory leak detected\. 2 hello listeners/;
<ide>
<ide> common.hijackStderr(common.mustCall(function(data) {
<ide> if (process.stderr.writeTimes === 0) {
<del> assert.ok(data.match(leakWarning));
<add> assert.ok(leakWarning.test(data));
<ide> } else {
<ide> assert.fail('stderr.write should be called only once');
<ide> }
<ide><path>test/parallel/test-http-client-unescaped-path.js
<ide> const common = require('../common');
<ide> const assert = require('assert');
<ide> const http = require('http');
<ide>
<add>const errMessage = /contains unescaped characters/;
<ide> for (let i = 0; i <= 32; i += 1) {
<ide> const path = `bad${String.fromCharCode(i)}path`;
<del> assert.throws(() => http.get({ path }, common.mustNotCall()),
<del> /contains unescaped characters/);
<add> assert.throws(() => http.get({ path }, common.mustNotCall()), errMessage);
<ide> }
<ide><path>test/parallel/test-http-hostname-typechecking.js
<ide> const http = require('http');
<ide> // when passed as the value of either options.hostname or options.host
<ide> const vals = [{}, [], NaN, Infinity, -Infinity, true, false, 1, 0, new Date()];
<ide>
<del>function errCheck(name) {
<del> return new RegExp(`^TypeError: "options\\.${name}" must either be a ` +
<del> 'string, undefined or null$');
<del>}
<add>const errHostname =
<add> /^TypeError: "options\.hostname" must either be a string, undefined or null$/;
<add>const errHost =
<add> /^TypeError: "options\.host" must either be a string, undefined or null$/;
<ide>
<ide> vals.forEach((v) => {
<del> assert.throws(() => http.request({hostname: v}), errCheck('hostname'));
<del> assert.throws(() => http.request({host: v}), errCheck('host'));
<add> assert.throws(() => http.request({hostname: v}), errHostname);
<add> assert.throws(() => http.request({host: v}), errHost);
<ide> });
<ide>
<ide> // These values are OK and should not throw synchronously
<ide><path>test/parallel/test-http-server.js
<ide> process.on('exit', function() {
<ide> assert.strictEqual(4, requests_sent);
<ide>
<ide> const hello = new RegExp('/hello');
<del> assert.notStrictEqual(null, hello.exec(server_response));
<add> assert.ok(hello.test(server_response));
<ide>
<ide> const quit = new RegExp('/quit');
<del> assert.notStrictEqual(null, quit.exec(server_response));
<add> assert.ok(quit.test(server_response));
<ide>
<ide> assert.strictEqual(true, client_got_eof);
<ide> });
<ide><path>test/parallel/test-icu-punycode.js
<ide> const wptToASCIITests = require('../fixtures/url-toascii.js');
<ide> }
<ide>
<ide> {
<add> const errMessage = /^Error: Cannot convert name to ASCII$/;
<add>
<ide> for (const [i, test] of wptToASCIITests.entries()) {
<ide> if (typeof test === 'string')
<ide> continue; // skip comments
<ide> const wptToASCIITests = require('../fixtures/url-toascii.js');
<ide> caseComment += ` (${comment})`;
<ide> if (output === null) {
<ide> assert.throws(() => icu.toASCII(input),
<del> /^Error: Cannot convert name to ASCII$/,
<del> `ToASCII ${caseComment}`);
<add> errMessage, `ToASCII ${caseComment}`);
<ide> assert.doesNotThrow(() => icu.toASCII(input, true),
<ide> `ToASCII ${caseComment} in lenient mode`);
<ide> } else {
<ide><path>test/parallel/test-internal-errors.js
<ide> const common = require('../common');
<ide> const errors = require('internal/errors');
<ide> const assert = require('assert');
<ide>
<add>const errMessages = {
<add> objectString: /^'object' === 'string'$/,
<add> booleanString: /^'boolean' === 'string'$/,
<add> numberString: /^'number' === 'string'$/,
<add> invalidKey: /^An invalid error message key was used: TEST_FOO_KEY\.$/,
<add>};
<add>
<ide> errors.E('TEST_ERROR_1', 'Error for testing purposes: %s');
<ide> errors.E('TEST_ERROR_2', (a, b) => `${a} ${b}`);
<ide>
<ide> assert.throws(
<ide> () => new errors.Error('TEST_FOO_KEY'),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^An invalid error message key was used: TEST_FOO_KEY\.$/
<add> message: errMessages.invalidKey
<ide> }));
<ide> // Calling it twice yields same result (using the key does not create it)
<ide> assert.throws(
<ide> () => new errors.Error('TEST_FOO_KEY'),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^An invalid error message key was used: TEST_FOO_KEY\.$/
<add> message: errMessages.invalidKey
<ide> }));
<ide> assert.throws(
<ide> () => new errors.Error(1),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'number' === 'string'$/
<add> message: errMessages.numberString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.Error({}),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'object' === 'string'$/
<add> message: errMessages.objectString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.Error([]),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'object' === 'string'$/
<add> message: errMessages.objectString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.Error(true),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'boolean' === 'string'$/
<add> message: errMessages.booleanString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.TypeError(1),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'number' === 'string'$/
<add> message: errMessages.numberString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.TypeError({}),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'object' === 'string'$/
<add> message: errMessages.objectString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.TypeError([]),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'object' === 'string'$/
<add> message: errMessages.objectString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.TypeError(true),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'boolean' === 'string'$/
<add> message: errMessages.booleanString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.RangeError(1),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'number' === 'string'$/
<add> message: errMessages.numberString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.RangeError({}),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'object' === 'string'$/
<add> message: errMessages.objectString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.RangeError([]),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'object' === 'string'$/
<add> message: errMessages.objectString
<ide> }));
<ide> assert.throws(
<ide> () => new errors.RangeError(true),
<ide> common.expectsError({
<ide> code: 'ERR_ASSERTION',
<del> message: /^'boolean' === 'string'$/
<add> message: errMessages.booleanString
<ide> }));
<ide>
<ide>
<ide><path>test/parallel/test-net-connect-options-port.js
<ide> function canConnect(port) {
<ide>
<ide> function asyncFailToConnect(port) {
<ide> const onError = () => common.mustCall(function(err) {
<del> const regexp = /^Error: connect (E\w+)(.+)$/;
<add> const regexp = /^Error: connect E\w+.+$/;
<ide> assert(regexp.test(String(err)), String(err));
<ide> });
<ide>
<ide><path>test/parallel/test-path.js
<ide> const path = require('path');
<ide> const f = __filename;
<ide> const failures = [];
<ide>
<add>const slashRE = /\//g;
<add>const backslashRE = /\\/g;
<add>
<ide> // path.basename tests
<ide> assert.strictEqual(path.basename(f), 'test-path.js');
<ide> assert.strictEqual(path.basename(f, '.js'), 'test-path');
<ide> assert.strictEqual(path.win32.dirname('foo'), '.');
<ide> let input = test[0];
<ide> let os;
<ide> if (extname === path.win32.extname) {
<del> input = input.replace(/\//g, '\\');
<add> input = input.replace(slashRE, '\\');
<ide> os = 'win32';
<ide> } else {
<ide> os = 'posix';
<ide> joinTests.forEach((test) => {
<ide> let actualAlt;
<ide> let os;
<ide> if (join === path.win32.join) {
<del> actualAlt = actual.replace(/\\/g, '/');
<add> actualAlt = actual.replace(backslashRE, '/');
<ide> os = 'win32';
<ide> } else {
<ide> os = 'posix';
<ide> resolveTests.forEach((test) => {
<ide> let actualAlt;
<ide> const os = resolve === path.win32.resolve ? 'win32' : 'posix';
<ide> if (resolve === path.win32.resolve && !common.isWindows)
<del> actualAlt = actual.replace(/\\/g, '/');
<add> actualAlt = actual.replace(backslashRE, '/');
<ide> else if (resolve !== path.win32.resolve && common.isWindows)
<del> actualAlt = actual.replace(/\//g, '\\');
<add> actualAlt = actual.replace(slashRE, '\\');
<ide>
<ide> const expected = test[1];
<ide> const message =
<ide><path>test/parallel/test-process-chdir.js
<ide> process.chdir('..');
<ide> assert.strictEqual(process.cwd().normalize(),
<ide> path.resolve(common.tmpDir).normalize());
<ide>
<add>const errMessage = /^TypeError: Bad argument\.$/;
<ide> assert.throws(function() { process.chdir({}); },
<del> /^TypeError: Bad argument\.$/, 'Bad argument.');
<add> errMessage, 'Bad argument.');
<ide> assert.throws(function() { process.chdir(); },
<del> /^TypeError: Bad argument\.$/, 'Bad argument.');
<add> errMessage, 'Bad argument.');
<ide> assert.throws(function() { process.chdir('x', 'y'); },
<del> /^TypeError: Bad argument\.$/, 'Bad argument.');
<add> errMessage, 'Bad argument.');
<ide><path>test/parallel/test-process-emitwarning.js
<ide> const testType = 'CustomWarning';
<ide>
<ide> process.on('warning', common.mustCall((warning) => {
<ide> assert(warning);
<del> assert(/^(Warning|CustomWarning)/.test(warning.name));
<add> assert(/^(?:Warning|CustomWarning)/.test(warning.name));
<ide> assert.strictEqual(warning.message, testMsg);
<ide> if (warning.code) assert.strictEqual(warning.code, testCode);
<ide> if (warning.detail) assert.strictEqual(warning.detail, testDetail);
<ide><path>test/parallel/test-process-setuid-setgid.js
<ide> if (process.getuid() !== 0) {
<ide>
<ide> assert.throws(
<ide> () => { process.setgid('nobody'); },
<del> /^Error: (EPERM, .+|setgid group id does not exist)$/
<add> /^Error: (?:EPERM, .+|setgid group id does not exist)$/
<ide> );
<ide>
<ide> assert.throws(
<ide> () => { process.setuid('nobody'); },
<del> /^Error: (EPERM, .+|setuid user id does not exist)$/
<add> /^Error: (?:EPERM, .+|setuid user id does not exist)$/
<ide> );
<ide> return;
<ide> }
<ide><path>test/parallel/test-process-versions.js
<ide> const actual_keys = Object.keys(process.versions).sort();
<ide>
<ide> assert.deepStrictEqual(actual_keys, expected_keys);
<ide>
<del>assert(/^\d+\.\d+\.\d+(-.*)?$/.test(process.versions.ares));
<del>assert(/^\d+\.\d+\.\d+(-.*)?$/.test(process.versions.http_parser));
<del>assert(/^\d+\.\d+\.\d+(-.*)?$/.test(process.versions.node));
<del>assert(/^\d+\.\d+\.\d+(-.*)?$/.test(process.versions.uv));
<del>assert(/^\d+\.\d+\.\d+(-.*)?$/.test(process.versions.zlib));
<del>assert(/^\d+\.\d+\.\d+(\.\d+)?( \(candidate\))?$/.test(process.versions.v8));
<add>const commonTemplate = /^\d+\.\d+\.\d+(?:-.*)?$/;
<add>
<add>assert(commonTemplate.test(process.versions.ares));
<add>assert(commonTemplate.test(process.versions.http_parser));
<add>assert(commonTemplate.test(process.versions.node));
<add>assert(commonTemplate.test(process.versions.uv));
<add>assert(commonTemplate.test(process.versions.zlib));
<add>
<add>assert(/^\d+\.\d+\.\d+(?:\.\d+)?(?: \(candidate\))?$/.test(process.versions.v8));
<ide> assert(/^\d+$/.test(process.versions.modules));
<ide><path>test/parallel/test-repl.js
<ide> function error_test() {
<ide> let expect = client_unix.expect;
<ide> if (expect === prompt_multiline)
<ide> expect = /[.]{3} /;
<del> assert.ok(read_buffer.match(expect));
<add> assert.ok(RegExp(expect).test(read_buffer));
<ide> console.error('match');
<ide> }
<ide> read_buffer = '';
<ide> function error_test() {
<ide> expect: /^(?!repl)/ },
<ide> // Avoid emitting stack trace
<ide> { client: client_unix, send: 'a = 3.5e',
<del> expect: /^(?!\s+at\s)/gm },
<add> expect: /^(?!\s+at\s)/m },
<ide>
<ide> // https://github.com/nodejs/node/issues/9850
<ide> { client: client_unix, send: 'function* foo() {}; foo().next();',
<ide><path>test/parallel/test-require-json.js
<ide> const assert = require('assert');
<ide> try {
<ide> require(path.join(common.fixturesDir, 'invalid.json'));
<ide> } catch (err) {
<del> const re = /test[/\\]fixtures[/\\]invalid\.json: Unexpected string/;
<del> const i = err.message.match(re);
<del> assert.notStrictEqual(null, i, 'require() json error should include path');
<add> assert.ok(
<add> /test[/\\]fixtures[/\\]invalid\.json: Unexpected string/.test(err.message),
<add> 'require() json error should include path');
<ide> }
<ide><path>test/parallel/test-socket-address.js
<ide> server.listen(0, common.mustCall(function() {
<ide> return -1;
<ide> };
<ide> assert.throws(() => this.address(),
<del> /^Error: address ([\w|\s-\d])+$/);
<add> /^Error: address [\w|\s-\d]+$/);
<ide> server.close();
<ide> }));
<ide><path>test/parallel/test-stream-readable-invalid-chunk.js
<ide> const readable = new stream.Readable({
<ide> read: common.noop
<ide> });
<ide>
<del>assert.throws(() => readable.push([]), /Invalid non-string\/buffer chunk/);
<del>assert.throws(() => readable.push({}), /Invalid non-string\/buffer chunk/);
<del>assert.throws(() => readable.push(0), /Invalid non-string\/buffer chunk/);
<add>const errMessage = /Invalid non-string\/buffer chunk/;
<add>assert.throws(() => readable.push([]), errMessage);
<add>assert.throws(() => readable.push({}), errMessage);
<add>assert.throws(() => readable.push(0), errMessage);
<ide><path>test/parallel/test-string-decoder.js
<ide> function test(encoding, input, expected, singleSequence) {
<ide> } else {
<ide> sequences = [singleSequence];
<ide> }
<add> const hexNumberRE = /.{2}/g;
<ide> sequences.forEach((sequence) => {
<ide> const decoder = new StringDecoder(encoding);
<ide> let output = '';
<ide> function test(encoding, input, expected, singleSequence) {
<ide> const message =
<ide> 'Expected "' + unicodeEscape(expected) + '", ' +
<ide> 'but got "' + unicodeEscape(output) + '"\n' +
<del> 'input: ' + input.toString('hex').match(/.{2}/g) + '\n' +
<add> 'input: ' + input.toString('hex').match(hexNumberRE) + '\n' +
<ide> 'Write sequence: ' + JSON.stringify(sequence) + '\n' +
<ide> 'Full Decoder State: ' + inspect(decoder);
<ide> assert.fail(output, expected, message);
<ide><path>test/parallel/test-timers-throw-when-cb-not-function.js
<ide> function doSetTimeout(callback, after) {
<ide> };
<ide> }
<ide>
<del>assert.throws(doSetTimeout('foo'),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetTimeout({foo: 'bar'}),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetTimeout(),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetTimeout(undefined, 0),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetTimeout(null, 0),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetTimeout(false, 0),
<del> /"callback" argument must be a function/);
<add>const errMessage = /"callback" argument must be a function/;
<add>
<add>assert.throws(doSetTimeout('foo'), errMessage);
<add>assert.throws(doSetTimeout({foo: 'bar'}), errMessage);
<add>assert.throws(doSetTimeout(), errMessage);
<add>assert.throws(doSetTimeout(undefined, 0), errMessage);
<add>assert.throws(doSetTimeout(null, 0), errMessage);
<add>assert.throws(doSetTimeout(false, 0), errMessage);
<ide>
<ide>
<ide> function doSetInterval(callback, after) {
<ide> function doSetInterval(callback, after) {
<ide> };
<ide> }
<ide>
<del>assert.throws(doSetInterval('foo'),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetInterval({foo: 'bar'}),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetInterval(),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetInterval(undefined, 0),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetInterval(null, 0),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetInterval(false, 0),
<del> /"callback" argument must be a function/);
<add>assert.throws(doSetInterval('foo'), errMessage);
<add>assert.throws(doSetInterval({foo: 'bar'}), errMessage);
<add>assert.throws(doSetInterval(), errMessage);
<add>assert.throws(doSetInterval(undefined, 0), errMessage);
<add>assert.throws(doSetInterval(null, 0), errMessage);
<add>assert.throws(doSetInterval(false, 0), errMessage);
<ide>
<ide>
<ide> function doSetImmediate(callback, after) {
<ide> function doSetImmediate(callback, after) {
<ide> };
<ide> }
<ide>
<del>assert.throws(doSetImmediate('foo'),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetImmediate({foo: 'bar'}),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetImmediate(),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetImmediate(undefined, 0),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetImmediate(null, 0),
<del> /"callback" argument must be a function/);
<del>assert.throws(doSetImmediate(false, 0),
<del> /"callback" argument must be a function/);
<add>assert.throws(doSetImmediate('foo'), errMessage);
<add>assert.throws(doSetImmediate({foo: 'bar'}), errMessage);
<add>assert.throws(doSetImmediate(), errMessage);
<add>assert.throws(doSetImmediate(undefined, 0), errMessage);
<add>assert.throws(doSetImmediate(null, 0), errMessage);
<add>assert.throws(doSetImmediate(false, 0), errMessage);
<ide><path>test/parallel/test-tls-client-mindhsize.js
<ide> testDHE1024();
<ide> assert.throws(() => test(512, true, common.mustNotCall()),
<ide> /DH parameter is less than 1024 bits/);
<ide>
<add>let errMessage = /minDHSize is not a positive number/;
<ide> [0, -1, -Infinity, NaN].forEach((minDHSize) => {
<ide> assert.throws(() => tls.connect({ minDHSize }),
<del> /minDHSize is not a positive number/);
<add> errMessage);
<ide> });
<ide>
<add>errMessage = /minDHSize is not a number/;
<ide> [true, false, null, undefined, {}, [], '', '1'].forEach((minDHSize) => {
<del> assert.throws(() => tls.connect({ minDHSize }), /minDHSize is not a number/);
<add> assert.throws(() => tls.connect({ minDHSize }), errMessage);
<ide> });
<ide>
<ide> process.on('exit', function() {
<ide><path>test/parallel/test-tls-env-bad-extra-ca.js
<ide> fork(__filename, opts)
<ide> assert.strictEqual(status, 0, 'client did not succeed in connecting');
<ide> }))
<ide> .on('close', common.mustCall(function() {
<del> assert(stderr.match(
<del> /Warning: Ignoring extra certs from.*no-such-file-exists.* load failed:.*No such file or directory/
<del> ), stderr);
<add> const re = /Warning: Ignoring extra certs from.*no-such-file-exists.* load failed:.*No such file or directory/;
<add> assert(re.test(stderr), stderr);
<ide> }))
<ide> .stderr.setEncoding('utf8').on('data', function(str) {
<ide> stderr += str;
<ide><path>test/parallel/test-tls-no-sslv23.js
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'blargh' });
<ide> }, /Unknown method/);
<ide>
<add>const errMessageSSLv2 = /SSLv2 methods disabled/;
<add>
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'SSLv2_method' });
<del>}, /SSLv2 methods disabled/);
<add>}, errMessageSSLv2);
<ide>
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'SSLv2_client_method' });
<del>}, /SSLv2 methods disabled/);
<add>}, errMessageSSLv2);
<ide>
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'SSLv2_server_method' });
<del>}, /SSLv2 methods disabled/);
<add>}, errMessageSSLv2);
<add>
<add>const errMessageSSLv3 = /SSLv3 methods disabled/;
<ide>
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'SSLv3_method' });
<del>}, /SSLv3 methods disabled/);
<add>}, errMessageSSLv3);
<ide>
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'SSLv3_client_method' });
<del>}, /SSLv3 methods disabled/);
<add>}, errMessageSSLv3);
<ide>
<ide> assert.throws(function() {
<ide> tls.createSecureContext({ secureProtocol: 'SSLv3_server_method' });
<del>}, /SSLv3 methods disabled/);
<add>}, errMessageSSLv3);
<ide>
<ide> // Note that SSLv2 and SSLv3 are disallowed but SSLv2_method and friends are
<ide> // still accepted. They are OpenSSL's way of saying that all known protocols
<ide><path>test/parallel/test-tls-passphrase.js
<ide> server.listen(0, common.mustCall(function() {
<ide> }, common.mustCall());
<ide> })).unref();
<ide>
<add>const errMessagePassword = /bad password read/;
<add>
<ide> // Missing passphrase
<ide> assert.throws(function() {
<ide> tls.connect({
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad password read/);
<add>}, errMessagePassword);
<ide>
<ide> assert.throws(function() {
<ide> tls.connect({
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad password read/);
<add>}, errMessagePassword);
<ide>
<ide> assert.throws(function() {
<ide> tls.connect({
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad password read/);
<add>}, errMessagePassword);
<add>
<add>const errMessageDecrypt = /bad decrypt/;
<ide>
<ide> // Invalid passphrase
<ide> assert.throws(function() {
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad decrypt/);
<add>}, errMessageDecrypt);
<ide>
<ide> assert.throws(function() {
<ide> tls.connect({
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad decrypt/);
<add>}, errMessageDecrypt);
<ide>
<ide> assert.throws(function() {
<ide> tls.connect({
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad decrypt/);
<add>}, errMessageDecrypt);
<ide>
<ide> assert.throws(function() {
<ide> tls.connect({
<ide> assert.throws(function() {
<ide> cert: cert,
<ide> rejectUnauthorized: false
<ide> });
<del>}, /bad decrypt/);
<add>}, errMessageDecrypt);
<ide><path>test/parallel/test-tls-server-failed-handshake-emits-clienterror.js
<ide> const server = tls.createServer({})
<ide> }).on('tlsClientError', common.mustCall(function(e) {
<ide> assert.ok(e instanceof Error,
<ide> 'Instance of Error should be passed to error handler');
<del> assert.ok(e.message.match(
<del> /SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol/),
<del> 'Expecting SSL unknown protocol');
<add> assert.ok(
<add> /SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol/.test(e.message),
<add> 'Expecting SSL unknown protocol');
<ide>
<ide> server.close();
<ide> }));
<ide><path>test/parallel/test-tls-socket-failed-handshake-emits-error.js
<ide> const server = net.createServer(function(c) {
<ide> s.on('error', common.mustCall(function(e) {
<ide> assert.ok(e instanceof Error,
<ide> 'Instance of Error should be passed to error handler');
<del> assert.ok(e.message.match(
<del> /SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol/),
<del> 'Expecting SSL unknown protocol');
<add> assert.ok(
<add> /SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol/.test(e.message),
<add> 'Expecting SSL unknown protocol');
<ide> }));
<ide>
<ide> s.on('close', function() {
<ide><path>test/parallel/test-url-parse-invalid-input.js
<ide> const assert = require('assert');
<ide> const url = require('url');
<ide>
<ide> // https://github.com/joyent/node/issues/568
<add>const errMessage = /^TypeError: Parameter "url" must be a string, not (?:undefined|boolean|number|object|function|symbol)$/;
<ide> [
<ide> undefined,
<ide> null,
<ide> const url = require('url');
<ide> () => {},
<ide> Symbol('foo')
<ide> ].forEach((val) => {
<del> assert.throws(() => { url.parse(val); },
<del> /^TypeError: Parameter "url" must be a string, not (undefined|boolean|number|object|function|symbol)$/);
<add> assert.throws(() => { url.parse(val); }, errMessage);
<ide> });
<ide>
<ide> assert.throws(() => { url.parse('http://%E0%A4%A@fail'); },
<ide><path>test/parallel/test-util-inspect.js
<ide> if (typeof Symbol !== 'undefined') {
<ide> {
<ide> function checkAlignment(container) {
<ide> const lines = util.inspect(container).split('\n');
<add> const numRE = /\d/;
<ide> let pos;
<ide> lines.forEach((line) => {
<del> const npos = line.search(/\d/);
<add> const npos = line.search(numRE);
<ide> if (npos !== -1) {
<ide> if (pos !== undefined) {
<ide> assert.strictEqual(pos, npos, 'container items not aligned');
<ide><path>test/parallel/test-util-internal.js
<ide> function setHiddenValue(obj, index, val) {
<ide> };
<ide> }
<ide>
<del>assert.throws(getHiddenValue(), /obj must be an object/);
<del>assert.throws(getHiddenValue(null, 'foo'), /obj must be an object/);
<del>assert.throws(getHiddenValue(undefined, 'foo'), /obj must be an object/);
<del>assert.throws(getHiddenValue('bar', 'foo'), /obj must be an object/);
<del>assert.throws(getHiddenValue(85, 'foo'), /obj must be an object/);
<del>assert.throws(getHiddenValue({}), /index must be an uint32/);
<del>assert.throws(getHiddenValue({}, null), /index must be an uint32/);
<del>assert.throws(getHiddenValue({}, []), /index must be an uint32/);
<add>const errMessageObj = /obj must be an object/;
<add>const errMessageIndex = /index must be an uint32/;
<add>
<add>assert.throws(getHiddenValue(), errMessageObj);
<add>assert.throws(getHiddenValue(null, 'foo'), errMessageObj);
<add>assert.throws(getHiddenValue(undefined, 'foo'), errMessageObj);
<add>assert.throws(getHiddenValue('bar', 'foo'), errMessageObj);
<add>assert.throws(getHiddenValue(85, 'foo'), errMessageObj);
<add>assert.throws(getHiddenValue({}), errMessageIndex);
<add>assert.throws(getHiddenValue({}, null), errMessageIndex);
<add>assert.throws(getHiddenValue({}, []), errMessageIndex);
<ide> assert.deepStrictEqual(
<ide> binding.getHiddenValue({}, kArrowMessagePrivateSymbolIndex),
<ide> undefined);
<ide>
<del>assert.throws(setHiddenValue(), /obj must be an object/);
<del>assert.throws(setHiddenValue(null, 'foo'), /obj must be an object/);
<del>assert.throws(setHiddenValue(undefined, 'foo'), /obj must be an object/);
<del>assert.throws(setHiddenValue('bar', 'foo'), /obj must be an object/);
<del>assert.throws(setHiddenValue(85, 'foo'), /obj must be an object/);
<del>assert.throws(setHiddenValue({}), /index must be an uint32/);
<del>assert.throws(setHiddenValue({}, null), /index must be an uint32/);
<del>assert.throws(setHiddenValue({}, []), /index must be an uint32/);
<add>assert.throws(setHiddenValue(), errMessageObj);
<add>assert.throws(setHiddenValue(null, 'foo'), errMessageObj);
<add>assert.throws(setHiddenValue(undefined, 'foo'), errMessageObj);
<add>assert.throws(setHiddenValue('bar', 'foo'), errMessageObj);
<add>assert.throws(setHiddenValue(85, 'foo'), errMessageObj);
<add>assert.throws(setHiddenValue({}), errMessageIndex);
<add>assert.throws(setHiddenValue({}, null), errMessageIndex);
<add>assert.throws(setHiddenValue({}, []), errMessageIndex);
<ide> const obj = {};
<ide> assert.strictEqual(
<ide> binding.setHiddenValue(obj, kArrowMessagePrivateSymbolIndex, 'bar'),
<ide><path>test/parallel/test-util-log.js
<ide> const tests = [
<ide> ];
<ide>
<ide> // test util.log()
<add>const re = /[0-9]{1,2} [A-Z][a-z]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} - (.+)$/;
<ide> tests.forEach(function(test) {
<ide> util.log(test.input);
<ide> const result = strings.shift().trim();
<del> const re = (/[0-9]{1,2} [A-Z][a-z]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} - (.+)$/);
<ide> const match = re.exec(result);
<ide> assert.ok(match);
<ide> assert.strictEqual(match[1], test.output);
<ide><path>test/parallel/test-zlib-truncated.js
<ide> const inputString = 'ΩΩLorem ipsum dolor sit amet, consectetur adipiscing eli'
<ide> 'm arcu mi, sodales non suscipit id, ultrices ut massa. S' +
<ide> 'ed ac sem sit amet arcu malesuada fermentum. Nunc sed. ';
<ide>
<add>const errMessage = /unexpected end of file/;
<add>
<ide> [
<ide> { comp: 'gzip', decomp: 'gunzip', decompSync: 'gunzipSync' },
<ide> { comp: 'gzip', decomp: 'unzip', decompSync: 'unzipSync' },
<ide> const inputString = 'ΩΩLorem ipsum dolor sit amet, consectetur adipiscing eli'
<ide> // sync truncated input test
<ide> assert.throws(function() {
<ide> zlib[methods.decompSync](truncated);
<del> }, /unexpected end of file/);
<add> }, errMessage);
<ide>
<ide> // async truncated input test
<ide> zlib[methods.decomp](truncated, function(err, result) {
<del> assert(/unexpected end of file/.test(err.message));
<add> assert(errMessage.test(err.message));
<ide> });
<ide>
<ide> const syncFlushOpt = { finishFlush: zlib.constants.Z_SYNC_FLUSH };
<ide><path>test/pummel/test-net-pingpong.js
<ide> function pingPongTest(port, host, on_complete) {
<ide> console.log(`server got: ${JSON.stringify(data)}`);
<ide> assert.strictEqual('open', socket.readyState);
<ide> assert.strictEqual(true, count <= N);
<del> if (/PING/.exec(data)) {
<add> if (/PING/.test(data)) {
<ide> socket.write('PONG');
<ide> }
<ide> });
<ide><path>test/sequential/test-module-loading.js
<ide> const assert = require('assert');
<ide> const path = require('path');
<ide> const fs = require('fs');
<ide>
<add>const backslash = /\\/g;
<add>
<ide> console.error('load test-module-loading.js');
<ide>
<ide> // assert that this is the main module.
<ide> try {
<ide> require(`${loadOrder}file3`);
<ide> } catch (e) {
<ide> // Not a real .node module, but we know we require'd the right thing.
<del> assert.ok(e.message.replace(/\\/g, '/').match(/file3\.node/));
<add> assert.ok(/file3\.node/.test(e.message.replace(backslash, '/')));
<ide> }
<ide> assert.strictEqual(require(`${loadOrder}file4`).file4, 'file4.reg', msg);
<ide> assert.strictEqual(require(`${loadOrder}file5`).file5, 'file5.reg2', msg);
<ide> assert.strictEqual(require(`${loadOrder}file6`).file6, 'file6/index.js', msg);
<ide> try {
<ide> require(`${loadOrder}file7`);
<ide> } catch (e) {
<del> assert.ok(e.message.replace(/\\/g, '/').match(/file7\/index\.node/));
<add> assert.ok(/file7\/index\.node/.test(e.message.replace(backslash, '/')));
<ide> }
<ide> assert.strictEqual(require(`${loadOrder}file8`).file8, 'file8/index.reg',
<ide> msg);
<ide> try {
<ide>
<ide> const children = module.children.reduce(function red(set, child) {
<ide> let id = path.relative(path.dirname(__dirname), child.id);
<del> id = id.replace(/\\/g, '/');
<add> id = id.replace(backslash, '/');
<ide> set[id] = child.children.reduce(red, {});
<ide> return set;
<ide> }, {});
<ide><path>test/sequential/test-process-warnings.js
<ide> const normal = [warnmod];
<ide> const noWarn = ['--no-warnings', warnmod];
<ide> const traceWarn = ['--trace-warnings', warnmod];
<ide>
<add>const warningMessage = /^\(.+\)\sWarning: a bad practice warning/;
<add>
<ide> execFile(node, normal, function(er, stdout, stderr) {
<ide> // Show Process Warnings
<ide> assert.strictEqual(er, null);
<ide> assert.strictEqual(stdout, '');
<del> assert(/^\(.+\)\sWarning: a bad practice warning/.test(stderr));
<add> assert(warningMessage.test(stderr));
<ide> });
<ide>
<ide> execFile(node, noWarn, function(er, stdout, stderr) {
<ide> // Hide Process Warnings
<ide> assert.strictEqual(er, null);
<ide> assert.strictEqual(stdout, '');
<del> assert(!/^\(.+\)\sWarning: a bad practice warning/.test(stderr));
<add> assert(!warningMessage.test(stderr));
<ide> });
<ide>
<ide> execFile(node, traceWarn, function(er, stdout, stderr) {
<ide> // Show Warning Trace
<ide> assert.strictEqual(er, null);
<ide> assert.strictEqual(stdout, '');
<del> assert(/^\(.+\)\sWarning: a bad practice warning/.test(stderr));
<add> assert(warningMessage.test(stderr));
<ide> assert(/at Object\.<anonymous>\s\(.+warnings\.js:3:9\)/.test(stderr));
<ide> });
<ide><path>test/sequential/test-regress-GH-784.js
<ide> const responses = [];
<ide> function afterPing(result) {
<ide> responses.push(result);
<ide> console.error(`afterPing. responses.length = ${responses.length}`);
<add> const ECONNREFUSED_RE = /ECONNREFUSED/;
<add> const successRE = /success/;
<ide> switch (responses.length) {
<ide> case 2:
<del> assert.ok(/ECONNREFUSED/.test(responses[0]));
<del> assert.ok(/ECONNREFUSED/.test(responses[1]));
<add> assert.ok(ECONNREFUSED_RE.test(responses[0]));
<add> assert.ok(ECONNREFUSED_RE.test(responses[1]));
<ide> serverOn();
<ide> break;
<ide>
<ide> case 4:
<del> assert.ok(/success/.test(responses[2]));
<del> assert.ok(/success/.test(responses[3]));
<add> assert.ok(successRE.test(responses[2]));
<add> assert.ok(successRE.test(responses[3]));
<ide> serverOff();
<ide> break;
<ide>
<ide> case 6:
<del> assert.ok(/ECONNREFUSED/.test(responses[4]));
<del> assert.ok(/ECONNREFUSED/.test(responses[5]));
<add> assert.ok(ECONNREFUSED_RE.test(responses[4]));
<add> assert.ok(ECONNREFUSED_RE.test(responses[5]));
<ide> serverOn();
<ide> break;
<ide>
<ide> case 8:
<del> assert.ok(/success/.test(responses[6]));
<del> assert.ok(/success/.test(responses[7]));
<add> assert.ok(successRE.test(responses[6]));
<add> assert.ok(successRE.test(responses[7]));
<ide> server.close();
<ide> // we should go to process.on('exit') from here.
<ide> break; | 56 |
Python | Python | add tests for safe_join | 06a170ea9b73ffe4f2e64453c70ed6b44619ecc8 | <ide><path>tests/test_helpers.py
<ide> import datetime
<ide> import flask
<ide> from logging import StreamHandler
<del>from werkzeug.exceptions import BadRequest
<add>from werkzeug.exceptions import BadRequest, NotFound
<ide> from werkzeug.http import parse_cache_control_header, parse_options_header
<ide> from werkzeug.http import http_date
<ide> from flask._compat import StringIO, text_type
<ide> def generate():
<ide> rv = c.get('/?name=World')
<ide> assert rv.data == b'Hello World!'
<ide> assert called == [42]
<add>
<add>
<add>class TestSafeJoin(object):
<add>
<add> def test_safe_join(self):
<add> # Valid combinations of *args and expected joined paths.
<add> passing = (
<add> (('a/b/c', ), 'a/b/c'),
<add> (('/', 'a/', 'b/', 'c/', ), '/a/b/c'),
<add> (('a', 'b', 'c', ), 'a/b/c'),
<add> (('/a', 'b/c', ), '/a/b/c'),
<add> (('a/b', 'X/../c'), 'a/b/c', ),
<add> (('/a/b', 'c/X/..'), '/a/b/c', ),
<add> # If last path is '' add a slash
<add> (('/a/b/c', '', ), '/a/b/c/', ),
<add> # Preserve dot slash
<add> (('/a/b/c', './', ), '/a/b/c/.', ),
<add> (('a/b/c', 'X/..'), 'a/b/c/.', ),
<add> # Base directory is always considered safe
<add> (('../', 'a/b/c'), '../a/b/c'),
<add> (('/..', ), '/..'),
<add> )
<add>
<add> for args, expected in passing:
<add> assert flask.safe_join(*args) == expected
<add>
<add> def test_safe_join_exceptions(self):
<add> # Should raise werkzeug.exceptions.NotFound on unsafe joins.
<add> failing = (
<add> # path.isabs and ``..'' checks
<add> ('/a', 'b', '/c'),
<add> ('/a', '../b/c', ),
<add> ('/a', '..', 'b/c'),
<add> # Boundaries violations after path normalization
<add> ('/a', 'b/../b/../../c', ),
<add> ('/a', 'b', 'c/../..'),
<add> ('/a', 'b/../../c', ),
<add> )
<add>
<add> for args in failing:
<add> with pytest.raises(NotFound):
<add> print(flask.safe_join(*args)) | 1 |
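A minimal usage sketch of the contract the cases above pin down, assuming a Flask release that still exposes `flask.safe_join` (newer releases re-export it from `werkzeug.utils`); the directory and file names are made up:

```python
# Illustrative only: mirrors the passing/failing cases exercised by the tests above.
import flask
from werkzeug.exceptions import NotFound

# Safe: the joined path stays inside the trusted base directory.
print(flask.safe_join('/var/www/uploads', 'reports/2016.csv'))
# -> /var/www/uploads/reports/2016.csv

# Unsafe: '..' escapes the base directory, so safe_join raises NotFound (404).
try:
    flask.safe_join('/var/www/uploads', '../../etc/passwd')
except NotFound:
    print('rejected unsafe path')
```

Note that only the parts joined onto the base are validated; the base directory itself is taken on trust, which is why the `('../', 'a/b/c')` case in the passing table above is accepted.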
Python | Python | fix pickling error with asyncresult | fd89eb8d0ac1a6a4085edc9d7f2274daf607ba92 | <ide><path>celery/result.py
<ide> def __reduce__(self):
<ide> return self.__class__, self.__reduce_args__()
<ide>
<ide> def __reduce_args__(self):
<del> return self.id, self.backend, self.task_name, self.parent
<add> return self.id, self.backend, self.task_name, self.app, self.parent
<ide>
<ide> @cached_property
<ide> def graph(self): | 1 |
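Carrying `self.app` in `__reduce_args__` is what lets a pickled `AsyncResult` come back bound to the same app, since pickle rebuilds the object by calling the class with exactly that tuple. A small sketch of the round trip, with a made-up task id and an in-memory broker/backend configuration:

```python
# Sketch: exercises the __reduce__/__reduce_args__ pair directly (pickle does the
# same thing under the hood). The app settings and task id are illustrative.
from celery import Celery
from celery.result import AsyncResult

app = Celery('proj', broker='memory://', backend='cache+memory://')
result = AsyncResult('8e697ea4-4e08-4b4c-9e22-d52ef8286be5', app=app)

cls, args = result.__reduce__()   # args == (id, backend, task_name, app, parent)
restored = cls(*args)             # rebuilt with the same app binding
assert restored.id == result.id
```

The tuple order matters because it has to line up with the constructor's `(id, backend, task_name, app, parent)` parameter order, which is why `app` is inserted before `parent` rather than appended at the end.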
Javascript | Javascript | add tests for normalmodule | eb2d6d6d5fd1eca6d803adf7740ff7e73f908ce0 | <ide><path>lib/NormalModule.js
<ide> function asString(buf) {
<ide> return buf;
<ide> }
<ide>
<del>function contextify(options, request) {
<add>function contextify(context, request) {
<ide> return request.split("!").map(function(r) {
<del> let rp = path.relative(options.context, r);
<add> let rp = path.relative(context, r);
<ide> if(path.sep === "\\")
<ide> rp = rp.replace(/\\/g, "/");
<ide> if(rp.indexOf("../") !== 0)
<ide> class NormalModule extends Module {
<ide> }
<ide>
<ide> libIdent(options) {
<del> return contextify(options, this.userRequest);
<add> return contextify(options.context, this.userRequest);
<ide> }
<ide>
<ide> nameForCondition() {
<ide><path>test/NormalModule.test.js
<add>/* globals describe, it, beforeEach, afterEach */
<add>"use strict";
<add>require("should");
<add>const sinon = require("sinon");
<add>const NormalModule = require("../lib/NormalModule");
<add>const path = require("path");
<add>const SourceMapSource = require("webpack-sources").SourceMapSource;
<add>const OriginalSource = require("webpack-sources").OriginalSource;
<add>const RawSource = require("webpack-sources").RawSource;
<add>
<add>describe("NormalModule", function() {
<add> let normalModule;
<add> let request;
<add> let userRequest;
<add> let rawRequest;
<add> let loaders;
<add> let resource;
<add> let parser;
<add> beforeEach(function() {
<add> request = "some/request";
<add> userRequest = "some/userRequest";
<add> rawRequest = "some/rawRequest";
<add> loaders = [];
<add> resource = "some/resource";
<add> parser = {
<add> parser() {}
<add> };
<add> normalModule = new NormalModule(
<add> request,
<add> userRequest,
<add> rawRequest,
<add> loaders,
<add> resource,
<add> parser
<add> );
<add> });
<add> describe("#identifier", function() {
<add> it("returns an identifier for this module", function() {
<add> normalModule.identifier().should.eql(request);
<add> });
<add> });
<add>
<add> describe("#readableIdentifier", function() {
<add> it("calls the given requestShortener with the user request", function() {
<add> const spy = sinon.spy();
<add> normalModule.readableIdentifier({
<add> shorten: spy
<add> });
<add> spy.callCount.should.eql(1);
<add> spy.args[0][0].should.eql(userRequest);
<add> });
<add> });
<add>
<add> describe("#libIdent", function() {
<add> it("contextifies the userRequest of the module", function() {
<add> normalModule.libIdent({
<add> context: "some/context"
<add> }).should.eql("../userRequest");
<add> });
<add> describe("given a userRequest containing loaders", function() {
<add> beforeEach(function() {
<add> userRequest = "some/userRequest!some/other/userRequest!some/thing/is/off/here";
<add> normalModule = new NormalModule(
<add> request,
<add> userRequest,
<add> rawRequest,
<add> loaders,
<add> resource,
<add> parser
<add> );
<add> });
<add> it("contextifies every path in the userRequest", function() {
<add> normalModule.libIdent({
<add> context: "some/context"
<add> }).should.eql("../userRequest!../other/userRequest!../thing/is/off/here");
<add> });
<add> });
<add>
<add> describe("when running on a windows machine", function() {
<add> let sep;
<add> beforeEach(function() {
<add> userRequest = "some\\userRequest!some\\other\\userRequest!some\\thing\\is\\off\\here";
<add> sep = path.sep;
<add> path.sep = "\\";
<add> normalModule = new NormalModule(
<add> request,
<add> userRequest,
<add> rawRequest,
<add> loaders,
<add> resource,
<add> parser
<add> );
<add> });
<add> afterEach(function() {
<add> path.sep = sep;
<add> });
<add> it("contextifies every path in the userRequest", function() {
<add> normalModule.libIdent({
<add> context: "some/context"
<add> }).should.eql("../../some/userRequest!../../some/other/userRequest!../../some/thing/is/off/here");
<add> });
<add> });
<add> });
<add>
<add> describe("#nameForCondition", function() {
<add> it("return the resource", function() {
<add> normalModule.nameForCondition().should.eql(resource);
<add> });
<add> describe("given a resource containing a ?-sign", function() {
<add> const baseResource = "some/resource";
<add> beforeEach(function() {
<add> resource = baseResource + "?some=query";
<add> normalModule = new NormalModule(
<add> request,
<add> userRequest,
<add> rawRequest,
<add> loaders,
<add> resource,
<add> parser
<add> );
<add> });
<add> it("return only the part before the ?-sign", function() {
<add> normalModule.nameForCondition().should.eql(baseResource);
<add> });
<add> });
<add> });
<add>
<add> describe("#createSourceForAsset", function() {
<add> let name;
<add> let content;
<add> let sourceMap;
<add> beforeEach(function() {
<add> name = "some name";
<add> content = "some content";
<add> sourceMap = "some sourcemap";
<add> });
<add> describe("given no sourcemap", function() {
<add> it("returns a RawSource", function() {
<add> normalModule.createSourceForAsset(name, content).should.be.instanceOf(RawSource);
<add> });
<add> });
<add> describe("given a string as the sourcemap", function() {
<add> it("returns a OriginalSource", function() {
<add> normalModule.createSourceForAsset(name, content, sourceMap).should.be.instanceOf(OriginalSource);
<add> });
<add> });
<add> describe("given a some other kind of sourcemap", function() {
<add> beforeEach(function() {
<add> sourceMap = () => {};
<add> });
<add> it("returns a SourceMapSource", function() {
<add> normalModule.createSourceForAsset(name, content, sourceMap).should.be.instanceOf(SourceMapSource);
<add> });
<add> });
<add> });
<add>
<add> describe("#source", function() {
<add> describe("without the module having any source", function() {
<add> beforeEach(function() {
<add> normalModule._source = null;
<add> });
<add> it("returns a Source containing an Error", function() {
<add> normalModule.source().should.be.instanceOf(RawSource);
<add> normalModule.source().source().should.eql("throw new Error('No source available');");
<add> });
<add> });
<add> });
<add>
<add> describe("#updateHashWithSource", function() {
<add> let hashSpy;
<add> let hash;
<add> beforeEach(function() {
<add> hashSpy = sinon.spy();
<add> hash = {
<add> update: hashSpy
<add> };
<add> });
<add> describe("without the module having any source", function() {
<add> beforeEach(function() {
<add> normalModule._source = null;
<add> });
<add> it("calls hash function with \"null\"", function() {
<add> normalModule.updateHashWithSource(hash);
<add> hashSpy.callCount.should.eql(1);
<add> hashSpy.args[0][0].should.eql("null");
<add> });
<add> });
<add>		describe("with the module having source", function() {
<add> let expectedSource = "wurst suppe";
<add> beforeEach(function() {
<add> normalModule._source = new RawSource(expectedSource);
<add> });
<add> it("calls hash function with \"source\" and then the actual source of the module", function() {
<add> normalModule.updateHashWithSource(hash);
<add> hashSpy.callCount.should.eql(2);
<add> hashSpy.args[0][0].should.eql("source");
<add> hashSpy.args[1][0].should.eql(expectedSource);
<add> });
<add> });
<add> });
<add> describe("#needRebuild", function() {
<add> let fileTimestamps;
<add> let contextTimestamps;
<add> let fileDependencies;
<add> let contextDependencies;
<add> let fileA;
<add> let fileB;
<add> function setDeps(
<add> fileDependencies,
<add> contextDependencies) {
<add> normalModule.fileDependencies = fileDependencies;
<add> normalModule.contextDependencies = contextDependencies;
<add> }
<add> beforeEach(function() {
<add> fileA = "fileA";
<add> fileB = "fileB";
<add> fileDependencies = [ fileA, fileB ];
<add> contextDependencies = [ fileA, fileB ];
<add> fileTimestamps = {
<add> [fileA]: 1,
<add> [fileB]: 1,
<add> };
<add> contextTimestamps = {
<add> [fileA]: 1,
<add> [fileB]: 1,
<add> };
<add> normalModule.buildTimestamp = 2;
<add> setDeps(fileDependencies, contextDependencies);
<add> });
<add> describe("given all timestamps are older than the buildTimestamp", function() {
<add> it("returns false", function() {
<add> normalModule.needRebuild(fileTimestamps, contextTimestamps).should.eql(false);
<add> });
<add> });
<add> describe("given a file timestamp is newer than the buildTimestamp", function() {
<add> beforeEach(function() {
<add> fileTimestamps[fileA] = 3;
<add> });
<add> it("returns true", function() {
<add> normalModule.needRebuild(fileTimestamps, contextTimestamps).should.eql(true);
<add> });
<add> });
<add>		describe("given no file timestamp exists", function() {
<add> beforeEach(function() {
<add> fileTimestamps = {};
<add> });
<add> it("returns true", function() {
<add> normalModule.needRebuild(fileTimestamps, contextTimestamps).should.eql(true);
<add> });
<add> });
<add> describe("given a context timestamp is newer than the buildTimestamp", function() {
<add> beforeEach(function() {
<add> contextTimestamps[fileA] = 3;
<add> });
<add> it("returns true", function() {
<add> normalModule.needRebuild(fileTimestamps, contextTimestamps).should.eql(true);
<add> });
<add> });
<add>		describe("given no context timestamp exists", function() {
<add> beforeEach(function() {
<add> contextTimestamps = {};
<add> });
<add> it("returns true", function() {
<add> normalModule.needRebuild(fileTimestamps, contextTimestamps).should.eql(true);
<add> });
<add> });
<add> });
<add> describe("#splitVariablesInUniqueNamedChunks", function() {
<add> let variables;
<add> beforeEach(function() {
<add> variables = [
<add> { name: "foo" },
<add> { name: "bar" },
<add> { name: "baz" },
<add> { name: "wurst" },
<add> { name: "suppe" }
<add> ];
<add> });
<add> describe("given an empty array of vars", function() {
<add> it("returns an empty array", function() {
<add> normalModule.splitVariablesInUniqueNamedChunks([]).should.eql([]);
<add> });
<add> });
<add>		describe("given an array of distinct variables", function() {
<add> it("returns an array containing an array containing the variables", function() {
<add> normalModule.splitVariablesInUniqueNamedChunks(variables).should.eql([variables]);
<add> });
<add> });
<add> describe("given an array with duplicate variables", function() {
<add> it("returns several arrays each containing only distinct variable names", function() {
<add> normalModule.splitVariablesInUniqueNamedChunks(variables.concat(variables)).should.eql([variables, variables]);
<add> });
<add> describe("and a duplicate as the last variable", function() {
<add> it("returns correctly split distinct arrays", function() {
<add> normalModule.splitVariablesInUniqueNamedChunks(variables.concat(variables).concat(variables[0])).should.eql([variables, variables, [variables[0]]]);
<add> });
<add> });
<add> });
<add> });
<add>}); | 2 |
PHP | PHP | add fix and test for camelback input names | fa8bdfd0ddaba47f9e787dc58dc72f9fe930365d | <ide><path>lib/Cake/Test/Case/View/HelperTest.php
<ide> public function schema($field = false) {
<ide> 'author_id' => array('type' => 'integer', 'null' => false, 'default' => '', 'length' => '8'),
<ide> 'title' => array('type' => 'string', 'null' => false, 'default' => '', 'length' => '255'),
<ide> 'body' => array('type' => 'string', 'null' => true, 'default' => '', 'length' => ''),
<add> 'BigField' => array('type' => 'string', 'null' => true, 'default' => '', 'length' => ''),
<ide> 'created' => array('type' => 'date', 'null' => true, 'default' => '', 'length' => ''),
<ide> 'modified' => array('type' => 'datetime', 'null' => true, 'default' => '', 'length' => null)
<ide> );
<ide> public static function entityProvider() {
<ide> array(
<ide> 'HelperTest.1.Comment.body',
<ide> array('HelperTest', '1', 'Comment', 'body')
<add> ),
<add> array(
<add> 'HelperTestComment.BigField',
<add> array('HelperTestComment', 'BigField')
<ide> )
<ide> );
<ide> }
<ide> public function testSetEntityAssociated() {
<ide> $this->assertEquals($expected, $this->Helper->entity());
<ide>
<ide> $this->assertEquals('HelperTestComment', $this->Helper->model());
<add>
<add> }
<add>
<add>/**
<add> * Test that setEntity doesn't make CamelCase fields that are not associations an
<add> * associated model.
<add> *
<add> * @return void
<add> */
<add> public function testSetEntityAssociatedCamelCaseField() {
<add> $this->Helper->fieldset = array('HelperTestComment' => array('fields' => array('BigField' => 'something')));
<add> $this->Helper->setEntity('HelperTestComment', true);
<add> $this->Helper->setEntity('HelperTestComment.BigField');
<add>
<add> $this->assertEquals('HelperTestComment', $this->Helper->model());
<add> $this->assertEquals('BigField', $this->Helper->field());
<ide> }
<ide>
<ide> /**
<ide><path>lib/Cake/View/Helper.php
<ide> class Helper extends Object {
<ide> */
<ide> public $plugin = null;
<ide>
<add>/**
<add> * Fields this helper is using.
<add> *
<add> * @var array
<add> */
<add> public $fieldset = array();
<add>
<ide> /**
<ide> * Holds tag templates.
<ide> *
<ide> public function setEntity($entity, $setScope = false) {
<ide> // check for associated model.
<ide> $reversed = array_reverse($parts);
<ide> foreach ($reversed as $part) {
<del> if (preg_match('/^[A-Z]/', $part)) {
<add> if (empty($this->fieldset[$this->_modelScope]['fields'][$part]) && preg_match('/^[A-Z]/', $part)) {
<ide> $this->_association = $part;
<ide> break;
<ide> } | 2 |
Python | Python | fix handling of right edge of final bin | 3991939341a25000c16171647e4547eaa6d86055 | <ide><path>numpy/lib/function_base.py
<ide> def histogram(a, bins=10, range=None, normed=False, weights=None,
<ide> # Compute the bin indices, and for values that lie exactly on mx we
<ide> # need to subtract one
<ide> indices = tmp_a.astype(np.intp)
<del> equals_endpoint = (indices == bins)
<del> indices[equals_endpoint] -= 1
<add> indices[indices == bins] -= 1
<ide>
<ide> # The index computation is not guaranteed to give exactly
<ide> # consistent results within ~1 ULP of the bin edges.
<ide> decrement = tmp_a_data < bin_edges[indices]
<ide> indices[decrement] -= 1
<del> increment = (tmp_a_data >= bin_edges[indices + 1]) & ~equals_endpoint
<add> # The last bin includes the right edge. The other bins do not.
<add> increment = (tmp_a_data >= bin_edges[indices + 1]) & (indices != bins - 1)
<ide> indices[increment] += 1
<ide>
<ide> # We now compute the histogram using bincount
<ide><path>numpy/lib/tests/test_function_base.py
<ide> def test_bin_edge_cases(self):
<ide> self.assertGreaterEqual(x, left)
<ide> self.assertLess(x, right)
<ide>
<add> def test_last_bin_inclusive_range(self):
<add> arr = np.array([0., 0., 0., 1., 2., 3., 3., 4., 5.])
<add> hist, edges = np.histogram(arr, bins=30, range=(-0.5, 5))
<add> self.assertEqual(hist[-1], 1)
<add>
<ide>
<ide> class TestHistogramOptimBinNums(TestCase):
<ide> """ | 2 |
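The new test pins down the edge case; running the same call standalone makes the intent concrete (the sample array is the one used in the test above):

```python
# With the fix, a sample equal to the right edge of the range lands in the final bin
# instead of being pushed past it and silently dropped. Interior edges stay half-open.
import numpy as np

arr = np.array([0., 0., 0., 1., 2., 3., 3., 4., 5.])
hist, edges = np.histogram(arr, bins=30, range=(-0.5, 5))

print(edges[-1])   # 5.0  (right edge of the last bin)
print(hist[-1])    # 1    (the single 5.0 sample is counted)
print(hist.sum())  # 9    (every sample falls in exactly one bin)
```

Without the `indices != bins - 1` guard above, floating-point rounding can place the 5.0 sample at index `bins - 1` and then increment it out of the last bin, losing it from the histogram.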
Java | Java | improve #tostring for annotationattributes | 4036ffc4d4f62626208dccc63ffdefc112d4ed1c | <ide><path>spring-core/src/main/java/org/springframework/core/annotation/AnnotationAttributes.java
<ide>
<ide> import static java.lang.String.format;
<ide>
<add>import java.util.Iterator;
<ide> import java.util.LinkedHashMap;
<ide> import java.util.Map;
<ide>
<ide> import org.springframework.util.Assert;
<add>import org.springframework.util.StringUtils;
<ide>
<ide> /**
<ide> * {@link LinkedHashMap} subclass representing annotation attribute key/value pairs
<ide> private <T> T doGet(String attributeName, Class<T> expectedType) {
<ide> attributeName, value.getClass().getSimpleName(), expectedType.getSimpleName()));
<ide> return (T) value;
<ide> }
<del>}
<ide>\ No newline at end of file
<add>
<add> public String toString() {
<add> Iterator<Map.Entry<String, Object>> entries = entrySet().iterator();
<add> StringBuilder sb = new StringBuilder("{");
<add> while (entries.hasNext()) {
<add> Map.Entry<String, Object> entry = entries.next();
<add> sb.append(entry.getKey());
<add> sb.append('=');
<add> sb.append(valueToString(entry.getValue()));
<add> sb.append(entries.hasNext() ? ", " : "");
<add> }
<add> sb.append("}");
<add> return sb.toString();
<add> }
<add>
<add> private String valueToString(Object value) {
<add> if (value == this) {
<add> return "(this Map)";
<add> }
<add> if (value instanceof Object[]) {
<add> return "[" + StringUtils.arrayToCommaDelimitedString((Object[]) value) + "]";
<add> }
<add> return String.valueOf(value);
<add> }
<add>} | 1 |
Python | Python | fix tf start docstrings | cf450b776f1205c9938b978ed1e6913277eeb930 | <ide><path>src/transformers/models/albert/modeling_tf_albert.py
<ide> class TFAlbertForPreTrainingOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
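Since essentially the same wording is repeated for every TF model file below, a short sketch of the three call formats the revised docstring describes may help; the checkpoint name and token ids are only illustrative, and TensorFlow plus downloadable model weights are assumed to be available:

```python
# Hedged illustration of the input formats described in the docstring above.
import tensorflow as tf
from transformers import TFAlbertModel

model = TFAlbertModel.from_pretrained("albert-base-v2")  # any public TF checkpoint works

input_ids = tf.constant([[2, 10975, 15, 51, 1952, 25, 10901, 3]])  # arbitrary valid ids
attention_mask = tf.ones_like(input_ids)

out_kw = model(input_ids=input_ids, attention_mask=attention_mask)           # keyword args
out_seq = model([input_ids, attention_mask])                                 # list, docstring order
out_map = model({"input_ids": input_ids, "attention_mask": attention_mask})  # dict keyed by input names

print(out_kw.last_hidden_state.shape)  # (1, 8, 768) for albert-base-v2
```

The keyword form mirrors PyTorch usage, while the list and dict forms are what Keras itself passes between layers, which is why `fit()` and `predict()` accept either without extra glue.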
<ide><path>src/transformers/models/bart/modeling_tf_bart.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/bert/modeling_tf_bert.py
<ide> class TFBertForPreTrainingOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/blenderbot/modeling_tf_blenderbot.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/blenderbot_small/modeling_tf_blenderbot_small.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/camembert/modeling_tf_camembert.py
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/clip/modeling_tf_clip.py
<ide> class TFCLIPPreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<del> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<add> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<del> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<add> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
<ide><path>src/transformers/models/convbert/modeling_tf_convbert.py
<ide> class TFConvBertPreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/convnext/modeling_tf_convnext.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<del>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<add>
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<add>
<add> - a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
<add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<add> `model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
<add> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<add> `model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
<ide><path>src/transformers/models/ctrl/modeling_tf_ctrl.py
<ide> class TFCTRLPreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/data2vec/modeling_tf_data2vec_vision.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<del>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<add>
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<add>
<add> - a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
<add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<add> `model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
<add> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<add> `model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
<ide><path>src/transformers/models/deberta/modeling_tf_deberta.py
<ide> class TFDebertaPreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py
<ide> class TFDebertaV2PreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/distilbert/modeling_tf_distilbert.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide>      `model([input_ids, attention_mask])`
<ide>    - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<del>      `model({"input_ids": input_ids})`
<add>      `model({"input_ids": input_ids, "attention_mask": attention_mask})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
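To make the three formats concrete for a text model such as the DistilBERT classes above, here is a minimal sketch. The checkpoint and the example sentence are illustrative assumptions; the three call styles themselves are exactly the ones listed in the docstring.

```python
from transformers import AutoTokenizer, TFAutoModel

# Illustrative checkpoint; any TF text model follows the same calling conventions.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModel.from_pretrained("distilbert-base-uncased")

enc = tokenizer("Hello, Keras input formats!", return_tensors="tf")

# 1) Keyword arguments, like PyTorch models.
out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])

# 2) A list/tuple in the first positional argument, in the order given in the docstring.
out_list = model([enc["input_ids"], enc["attention_mask"]])

# 3) A dict keyed by input names - the format Keras itself uses when calling the model inside fit().
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})

# All three calls return the same hidden states.
print(out_kwargs.last_hidden_state.shape, out_list.last_hidden_state.shape, out_dict.last_hidden_state.shape)
```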
<ide><path>src/transformers/models/dpr/modeling_tf_dpr.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/electra/modeling_tf_electra.py
<ide> class TFElectraForPreTrainingOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/flaubert/modeling_tf_flaubert.py
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/funnel/modeling_tf_funnel.py
<ide> class TFFunnelForPreTrainingOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/gpt2/modeling_tf_gpt2.py
<ide> class TFGPT2DoubleHeadsModelOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/gptj/modeling_tf_gptj.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/hubert/modeling_tf_hubert.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_values` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_values` only and nothing else: `model(input_values)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_values, attention_mask])` or `model([input_values, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_values": input_values, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/layoutlm/modeling_tf_layoutlm.py
<ide> class TFLayoutLMPreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/layoutlmv3/modeling_tf_layoutlmv3.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<del>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<add>
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<add>
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<add> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<add> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<add> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
<ide><path>src/transformers/models/led/modeling_tf_led.py
<ide> class TFLEDSeq2SeqLMOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/longformer/modeling_tf_longformer.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/lxmert/modeling_tf_lxmert.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/marian/modeling_tf_marian.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/mbart/modeling_tf_mbart.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/mobilebert/modeling_tf_mobilebert.py
<ide> class TFMobileBertForPreTrainingOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/mobilevit/modeling_tf_mobilevit.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<del>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<add>
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<add>
<add> - a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
<add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<add> `model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
<add> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<add> `model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
<ide><path>src/transformers/models/mpnet/modeling_tf_mpnet.py
<ide> def call(
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensor in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide>      `model([input_ids, attention_mask])`
<ide>    - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide>      `model({"input_ids": input_ids, "attention_mask": attention_mask})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
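Since the tip above stresses that `model.fit()` should "just work" when inputs and labels are passed in any format Keras supports, here is one possible end-to-end sketch using the dictionary format. The checkpoint, toy data, optimizer and loss settings are all illustrative assumptions rather than anything prescribed by this patch.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Illustrative checkpoint; the classification head is freshly initialized.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

texts = ["A delightful read.", "Not my cup of tea."]
labels = tf.constant([1, 0])

# The tokenizer output is dict-like; passed as `x`, Keras hands it to the model as the
# first positional argument - i.e. the dictionary format described in the tip above.
features = dict(tokenizer(texts, padding=True, return_tensors="tf"))

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(features, labels, epochs=1, batch_size=2)
```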
<ide><path>src/transformers/models/openai/modeling_tf_openai.py
<ide> class TFOpenAIGPTDoubleHeadsModelOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/opt/modeling_tf_opt.py
<ide> def call(
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/pegasus/modeling_tf_pegasus.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/rembert/modeling_tf_rembert.py
<ide> def dummy_inputs(self):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/roberta/modeling_tf_roberta.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/roformer/modeling_tf_roformer.py
<ide> class TFRoFormerPreTrainedModel(TFPreTrainedModel):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/t5/modeling_tf_t5.py
<ide> def _shift_right(self, input_ids):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/tapas/modeling_tf_tapas.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/transfo_xl/modeling_tf_transfo_xl.py
<ide> class TFTransfoXLSequenceClassifierOutputWithPast(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/vit/modeling_tf_vit.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<del>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<add>
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<add>
<add> - a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
<add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<add> `model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
<add> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<add> `model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
<ide><path>src/transformers/models/vit_mae/modeling_tf_vit_mae.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<del>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<add>
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<add>
<add> - a single Tensor with `pixel_values` only and nothing else: `model(pixel_values)`
<add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<add> `model([pixel_values, attention_mask])` or `model([pixel_values, attention_mask, token_type_ids])`
<add> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<add> `model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})`
<add>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<ide>
<ide> </Tip>
<ide>
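One editorial note on the two vision models above (ViT and ViT-MAE): their primary input is `pixel_values` rather than `input_ids`, but the same keyword/list/dict conventions apply. A hedged sketch, assuming the `google/vit-base-patch16-224-in21k` checkpoint and a random tensor standing in for preprocessed images:

```python
import tensorflow as tf
from transformers import TFViTModel

model = TFViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

# Dummy batch in place of real, preprocessed images: (batch, channels, height, width).
pixel_values = tf.random.uniform((1, 3, 224, 224))

outputs = model(pixel_values=pixel_values)       # keyword form
outputs = model({"pixel_values": pixel_values})  # dict form, as in the docstring
print(outputs.last_hidden_state.shape)
```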
<ide><path>src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_values` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_values` only and nothing else: `model(input_values)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_values, attention_mask])` or `model([input_values, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_values": input_values, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/xglm/modeling_tf_xglm.py
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide><path>src/transformers/models/xlm/modeling_tf_xlm.py
<ide> class TFXLMWithLMHeadModelOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/xlm_roberta/modeling_tf_xlm_roberta.py
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>src/transformers/models/xlnet/modeling_tf_xlnet.py
<ide> class TFXLNetForQuestionAnsweringSimpleOutput(ModelOutput):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all the
<del> tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second
<add> format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with
<add> the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first
<add> positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the
<del> first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with
<add> [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry
<add> about any of this, as you can just pass inputs like you would to any other Python function!
<add>
<ide> </Tip>
<ide>
<ide> Parameters:
<ide><path>templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/modeling_tf_{{cookiecutter.lowercase_modelname}}.py
<ide> def dummy_inputs(self):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having
<del> all the tensors in the first argument of the model call function: `model(inputs)`.
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating
<add> your own layers or models with the Keras `Functional` API, there are three possibilities you
<add> can use to gather all the input Tensors in the first positional argument:
<ide>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors
<del> in the first positional argument :
<del>
<del> - a single Tensor with `input_ids` only and nothing else: `model(inputs_ids)`
<add> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/)
<add> then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python
<add> function!
<add>
<ide> </Tip>
<ide>
<ide> Args:
<ide> def serving(self, inputs):
<ide>
<ide> <Tip>
<ide>
<del> TF 2.0 models accepts two formats as inputs:
<add> TensorFlow models and layers in `transformers` accept two formats as input:
<ide>
<ide> - having all inputs as keyword arguments (like PyTorch models), or
<del> - having all inputs as a list, tuple or dict in the first positional arguments.
<add> - having all inputs as a list, tuple or dict in the first positional argument.
<ide>
<del> This second option is useful when using [`tf.keras.Model.fit`] method which currently requires having all
<del> the tensors in the first argument of the model call function: `model(inputs)`.
<del>
<del> If you choose this second option, there are three possibilities you can use to gather all the input Tensors in
<del> the first positional argument :
<add> The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
<add> and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just
<add> pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second format outside of Keras methods like `fit()` and `predict()`, such as when creating
<add> your own layers or models with the Keras `Functional` API, there are three possibilities you
<add> can use to gather all the input Tensors in the first positional argument:
<ide>
<ide> - a single Tensor with `input_ids` only and nothing else: `model(input_ids)`
<ide> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
<ide> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`
<ide> - a dictionary with one or several input Tensors associated to the input names given in the docstring:
<ide> `model({"input_ids": input_ids, "token_type_ids": token_type_ids})`
<ide>
<add> Note that when creating models and layers with [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/)
<add> then you don't need to worry about any of this, as you can just pass inputs like you would to any other Python
<add> function!
<add>
<ide> </Tip>
<ide>
<ide> Args: | 49 |
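Before moving on to the next commit, one grounded illustration of the `model.fit()` claim that recurs throughout this patch: Keras accepts a dict of tensors as `x`, which is exactly the second input format the new docstrings describe. This is a minimal sketch with an invented two-example dataset, not code from the patch:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["a good movie", "a bad movie"]
labels = [1, 0]

# The tokenizer output is a dict of tensors; fit() consumes it directly.
features = dict(tokenizer(texts, padding=True, return_tensors="tf"))
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
```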
Ruby | Ruby | fix tab tests | c904c71792233c5564a5b73814bfbb8d81389b50 | <ide><path>Library/Homebrew/test/test_tab.rb
<ide> def test_universal?
<ide> end
<ide>
<ide> def test_options
<del> assert_equal (@used + @unused).to_a, @tab.options.to_a
<add> assert_equal (@used + @unused).sort, @tab.options.sort
<ide> end
<ide>
<ide> def test_cxxstdlib
<ide> def test_from_file
<ide> path = Pathname.new(TEST_DIRECTORY).join("fixtures", "receipt.json")
<ide> tab = Tab.from_file(path)
<ide>
<del> assert_equal @used.to_a, tab.used_options.to_a
<del> assert_equal @unused.to_a, tab.unused_options.to_a
<del> assert_equal (@used + @unused).to_a, tab.options.to_a
<add> assert_equal @used.sort, tab.used_options.sort
<add> assert_equal @unused.sort, tab.unused_options.sort
<add> assert_equal (@used + @unused).sort, tab.options.sort
<ide> refute_predicate tab, :built_as_bottle
<ide> assert_predicate tab, :poured_from_bottle
<ide> assert_equal "Homebrew/homebrew", tab.tapped_from
<ide> def test_from_file
<ide> end
<ide>
<ide> def test_to_json
<del> assert_equal @tab, Tab.new(Utils::JSON.load(@tab.to_json))
<add> tab = Tab.new(Utils::JSON.load(@tab.to_json))
<add> assert_equal @tab.used_options.sort, tab.used_options.sort
<add> assert_equal @tab.unused_options.sort, tab.unused_options.sort
<add> assert_equal @tab.built_as_bottle, tab.built_as_bottle
<add> assert_equal @tab.poured_from_bottle, tab.poured_from_bottle
<add> assert_equal @tab.tapped_from, tab.tapped_from
<add> assert_equal @tab.time, tab.time
<add> assert_equal @tab.HEAD, tab.HEAD
<add> assert_equal @tab.compiler, tab.compiler
<add> assert_equal @tab.stdlib, tab.stdlib
<ide> end
<ide> end
<ide> | 1 |
PHP | PHP | add additional test for inputs in labels | c22621cc562cf9bea5f48031164d652e4441c923 | <ide><path>src/View/Helper/FormHelper.php
<ide> public function error($field, $text = null, array $options = []) {
<ide> * <label for="post-publish">Publish <input type="text" name="published"></label>
<ide> * }}}
<ide> *
<add> * If you want to nest inputs in the labels, you will need to modify the default templates.
<add> *
<ide> * @param string $fieldName This should be "Modelname.fieldname"
<ide> * @param string $text Text that will appear in the label field. If
<ide> * $text is left undefined the text will be inflected from the
<ide><path>tests/TestCase/View/Helper/FormHelperTest.php
<ide> public function testLabel() {
<ide> $this->assertTags($result, array('label' => array('for' => 'person-2-name'), '/label'));
<ide> }
<ide>
<add>/**
<add> * Test that label() can accept an input with the correct template vars.
<add> *
<add> * @return void
<add> */
<add> public function testLabelContainInput() {
<add> $this->Form->templates([
<add> 'label' => '<label{{attrs}}>{{input}}{{text}}</label>',
<add> ]);
<add> $result = $this->Form->label('Person.accept_terms', 'Accept', [
<add> 'input' => '<input type="checkbox" name="accept_tos" >'
<add> ]);
<add> $expected = [
<add> 'label' => ['for' => 'person-accept-terms'],
<add> 'input' => ['type' => 'checkbox', 'name' => 'accept_tos'],
<add> 'Accept',
<add> '/label',
<add> ];
<add> $this->assertTags($result, $expected);
<add> }
<add>
<ide> /**
<ide> * testTextbox method
<ide> * | 2 |
Text | Text | revise collaborator description in governance.md | 181052d7c255900a91a3c7c21d6406898ff5cf09 | <ide><path>GOVERNANCE.md
<ide>
<ide> ## Collaborators
<ide>
<del>The [nodejs/node][] GitHub repository is maintained by Node.js Core
<del>Collaborators. Upon becoming Collaborators, they:
<del>
<del>* Become members of the @nodejs/collaborators team
<del>* Gain individual membership of the Node.js foundation
<del>
<del>Their privileges include but are not limited to:
<add>Node.js Core Collaborators maintain the [nodejs/node][] GitHub repository.
<add>The GitHub team for Node.js Core Collaborators is @nodejs/collaborators. Their
<add>privileges include but are not limited to:
<ide>
<ide> * Commit access to the [nodejs/node][] repository
<ide> * Access to the Node.js continuous integration (CI) jobs | 1 |
Text | Text | add missing zlib link to stream api docs | 2a46e57d139a93caaff4dc63000e61b9a27669bb | <ide><path>doc/api/stream.md
<ide> readable buffer so there is nothing for a user to consume.
<ide> [fs read streams]: fs.html#fs_class_fs_readstream
<ide> [fs write streams]: fs.html#fs_class_fs_writestream
<ide> [http-incoming-message]: http.html#http_class_http_incomingmessage
<add>[zlib]: zlib.html
<ide> [stream-_flush]: #stream_transform_flush_callback
<ide> [stream-_read]: #stream_readable_read_size_1
<ide> [stream-_transform]: #stream_transform_transform_chunk_encoding_callback | 1 |
Ruby | Ruby | fix typo in deprecation message | 11f5434a8c0226801e14f7b1dc7caca82427e28d | <ide><path>activemodel/lib/active_model/errors.rb
<ide> def add_on_blank(attributes, options = {})
<ide>
<ide> To achieve the same use:
<ide>
<del> errors.add(attribute, :empty, options) if value.blank?
<add> errors.add(attribute, :blank, options) if value.blank?
<ide> MESSAGE
<ide>
<ide> Array(attributes).each do |attribute| | 1 |
Javascript | Javascript | run javascript only when dom is ready | 2f49df82422c4e8920ae265f9ab78aaea29dae10 | <ide><path>rest_framework/static/rest_framework/js/default.js
<ide> function getCookie(c_name)
<ide> return c_value;
<ide> }
<ide>
<del>// JSON highlighting.
<del>prettyPrint();
<add>$(document).ready(function () {
<add> // JSON highlighting.
<add> prettyPrint();
<ide>
<del>// Bootstrap tooltips.
<del>$('.js-tooltip').tooltip({
<del> delay: 1000,
<del> container: 'body'
<del>});
<add> // Bootstrap tooltips.
<add> $('.js-tooltip').tooltip({
<add> delay: 1000,
<add> container: 'body'
<add> });
<ide>
<del>// Deal with rounded tab styling after tab clicks.
<del>$('a[data-toggle="tab"]:first').on('shown', function (e) {
<del> $(e.target).parents('.tabbable').addClass('first-tab-active');
<del>});
<del>$('a[data-toggle="tab"]:not(:first)').on('shown', function (e) {
<del> $(e.target).parents('.tabbable').removeClass('first-tab-active');
<del>});
<add> // Deal with rounded tab styling after tab clicks.
<add> $('a[data-toggle="tab"]:first').on('shown', function (e) {
<add> $(e.target).parents('.tabbable').addClass('first-tab-active');
<add> });
<add> $('a[data-toggle="tab"]:not(:first)').on('shown', function (e) {
<add> $(e.target).parents('.tabbable').removeClass('first-tab-active');
<add> });
<ide>
<del>$('a[data-toggle="tab"]').click(function(){
<del> document.cookie="tabstyle=" + this.name + "; path=/";
<del>});
<add> $('a[data-toggle="tab"]').click(function(){
<add> document.cookie="tabstyle=" + this.name + "; path=/";
<add> });
<ide>
<del>// Store tab preference in cookies & display appropriate tab on load.
<del>var selectedTab = null;
<del>var selectedTabName = getCookie('tabstyle');
<add> // Store tab preference in cookies & display appropriate tab on load.
<add> var selectedTab = null;
<add> var selectedTabName = getCookie('tabstyle');
<ide>
<del>if (selectedTabName) {
<del> selectedTabName = selectedTabName.replace(/[^a-z-]/g, '');
<del>}
<add> if (selectedTabName) {
<add> selectedTabName = selectedTabName.replace(/[^a-z-]/g, '');
<add> }
<ide>
<del>if (selectedTabName) {
<del> selectedTab = $('.form-switcher a[name=' + selectedTabName + ']');
<del>}
<add> if (selectedTabName) {
<add> selectedTab = $('.form-switcher a[name=' + selectedTabName + ']');
<add> }
<ide>
<del>if (selectedTab && selectedTab.length > 0) {
<del> // Display whichever tab is selected.
<del> selectedTab.tab('show');
<del>} else {
<del> // If no tab selected, display rightmost tab.
<del> $('.form-switcher a:first').tab('show');
<del>}
<add> if (selectedTab && selectedTab.length > 0) {
<add> // Display whichever tab is selected.
<add> selectedTab.tab('show');
<add> } else {
<add> // If no tab selected, display rightmost tab.
<add> $('.form-switcher a:first').tab('show');
<add> }
<ide>
<del>$(window).load(function(){
<del> $('#errorModal').modal('show');
<add> $(window).load(function(){
<add> $('#errorModal').modal('show');
<add> });
<ide> }); | 1 |
Python | Python | fix double logging with some task logging handler | 933fefca27a5cd514c9083040344a866c7f517db | <ide><path>airflow/utils/log/file_task_handler.py
<ide> from airflow.exceptions import RemovedInAirflow3Warning
<ide> from airflow.utils.context import Context
<ide> from airflow.utils.helpers import parse_template_string, render_template_to_string
<del>from airflow.utils.log.logging_mixin import DISABLE_PROPOGATE
<ide> from airflow.utils.log.non_caching_file_handler import NonCachingFileHandler
<ide> from airflow.utils.session import create_session
<ide> from airflow.utils.state import State
<ide>
<ide> if TYPE_CHECKING:
<ide> from airflow.models import TaskInstance
<add> from airflow.utils.log.logging_mixin import SetContextPropagate
<ide>
<ide>
<ide> class FileTaskHandler(logging.Handler):
<ide> def __init__(self, base_log_folder: str, filename_template: str | None = None):
<ide> stacklevel=(2 if type(self) == FileTaskHandler else 3),
<ide> )
<ide>
<del> def set_context(self, ti: TaskInstance):
<add> def set_context(self, ti: TaskInstance) -> None | SetContextPropagate:
<ide> """
<ide> Provide task_instance context to airflow task handler.
<ide>
<ide> def set_context(self, ti: TaskInstance):
<ide> if self.formatter:
<ide> self.handler.setFormatter(self.formatter)
<ide> self.handler.setLevel(self.level)
<del>
<del> return DISABLE_PROPOGATE
<add> return None
<ide>
<ide> def emit(self, record):
<ide> if self.handler:
<ide><path>airflow/utils/log/logging_mixin.py
<ide> from __future__ import annotations
<ide>
<ide> import abc
<add>import enum
<ide> import logging
<ide> import re
<ide> import sys
<ide> from io import IOBase
<ide> from logging import Handler, Logger, StreamHandler
<del>from typing import IO
<add>from typing import IO, cast
<ide>
<ide> # 7-bit C1 ANSI escape sequences
<ide> ANSI_ESCAPE = re.compile(r"\x1B[@-_][0-?]*[ -/]*[@-~]")
<ide>
<del># Private: A sentinel object
<del>DISABLE_PROPOGATE = object()
<add>
<add># Private: A sentinel objects
<add>class SetContextPropagate(enum.Enum):
<add> """:meta private:"""
<add>
<add> # If a `set_context` function wants to _keep_ propagation set on it's logger it needs to return this
<add> # special value.
<add> MAINTAIN_PROPAGATE = object()
<add> # Don't use this one anymore!
<add> DISABLE_PROPAGATE = object()
<add>
<add>
<add>def __getattr__(name):
<add> if name in ("DISABLE_PROPOGATE", "DISABLE_PROPAGATE"):
<add> # Compat for spelling on off chance someone is using this directly
<add> # And old object that isn't needed anymore
<add> return SetContextPropagate.DISABLE_PROPAGATE
<add> raise AttributeError(f"module {__name__} has no attribute {name}")
<ide>
<ide>
<ide> def remove_escape_codes(text: str) -> str:
<ide> def set_context(logger, value):
<ide> :param value: value to set
<ide> """
<ide> while logger:
<add> orig_propagate = logger.propagate
<ide> for handler in logger.handlers:
<ide> # Not all handlers need to have context passed in so we ignore
<ide> # the error when handlers do not have set_context defined.
<del> set_context = getattr(handler, "set_context", None)
<del> if set_context and set_context(value) is DISABLE_PROPOGATE:
<del> logger.propagate = False
<del> if logger.propagate is True:
<add>
<add> # Don't use getatrr so we have type checking. And we don't care if handler is actually a
<add> # FileTaskHandler, it just needs to have a set_context function!
<add> if hasattr(handler, "set_context"):
<add> from airflow.utils.log.file_task_handler import FileTaskHandler
<add>
<add> flag = cast(FileTaskHandler, handler).set_context(value)
<add> # By default we disable propagate once we have configured the logger, unless that handler
<add> # explicitly asks us to keep it on.
<add> if flag is not SetContextPropagate.MAINTAIN_PROPAGATE:
<add> logger.propagate = False
<add> if orig_propagate is True:
<add> # If we were set to propagate before we turned if off, then keep passing set_context up
<ide> logger = logger.parent
<ide> else:
<ide> break
<ide><path>tests/utils/test_logging_mixin.py
<ide> # under the License.
<ide> from __future__ import annotations
<ide>
<add>import logging
<add>import sys
<ide> import warnings
<ide> from unittest import mock
<ide>
<del>from airflow.utils.log.logging_mixin import StreamLogWriter, set_context
<add>import pytest
<add>
<add>from airflow.utils.log.logging_mixin import SetContextPropagate, StreamLogWriter, set_context
<add>
<add>
<add>@pytest.fixture
<add>def logger():
<add> parent = logging.getLogger(__name__)
<add> parent.propagate = False
<add> yield parent
<add>
<add> parent.propagate = True
<add>
<add>
<add>@pytest.fixture
<add>def child_logger(logger):
<add> yield logger.getChild("child")
<add>
<add>
<add>@pytest.fixture
<add>def parent_child_handlers(child_logger):
<add> parent_handler = logging.NullHandler()
<add> parent_handler.handle = mock.MagicMock(name="parent_handler.handle")
<add>
<add> child_handler = logging.NullHandler()
<add> child_handler.handle = mock.MagicMock(name="handler.handle")
<add>
<add> logger = child_logger.parent
<add> logger.addHandler(parent_handler)
<add>
<add> child_logger.addHandler(child_handler),
<add> child_logger.propagate = True
<add>
<add> yield parent_handler, child_handler
<add>
<add> logger.removeHandler(parent_handler)
<add> child_logger.removeHandler(child_handler)
<ide>
<ide>
<ide> class TestLoggingMixin:
<ide> def setup_method(self):
<ide> warnings.filterwarnings(action="always")
<ide>
<del> def test_set_context(self):
<del> handler1 = mock.MagicMock()
<del> handler2 = mock.MagicMock()
<del> parent = mock.MagicMock()
<add> def test_set_context(self, child_logger, parent_child_handlers):
<add> handler1, handler2 = parent_child_handlers
<add> handler1.set_context = mock.MagicMock()
<add> handler2.set_context = mock.MagicMock()
<add>
<add> parent = logging.getLogger(__name__)
<ide> parent.propagate = False
<del> parent.handlers = [
<del> handler1,
<del> ]
<del> log = mock.MagicMock()
<del> log.handlers = [
<del> handler2,
<del> ]
<del> log.parent = parent
<add> parent.addHandler(handler1)
<add> log = parent.getChild("child")
<add> log.addHandler(handler2),
<ide> log.propagate = True
<ide>
<ide> value = "test"
<ide> def test_iobase_compatibility(self):
<ide> assert not log.closed
<ide> # has no specific effect
<ide> log.close()
<add>
<add>
<add>@pytest.mark.parametrize(["maintain_propagate"], [[SetContextPropagate.MAINTAIN_PROPAGATE], [None]])
<add>def test_set_context_propagation(parent_child_handlers, child_logger, maintain_propagate):
<add> # Test the behaviour of set_context and logger propagation and the MAINTAIN_PROPAGATE return
<add>
<add> parent_handler, handler = parent_child_handlers
<add> handler.set_context = mock.MagicMock(return_value=maintain_propagate)
<add>
<add> # Before settting_context, ensure logs make it to the parent
<add> line = sys._getframe().f_lineno + 1
<add> record = child_logger.makeRecord(
<add> child_logger.name, logging.INFO, __file__, line, "test message", [], None
<add> )
<add> child_logger.handle(record)
<add>
<add> handler.handle.assert_called_once_with(record)
<add> # Should call the parent handler too in the default/unconfigured case
<add> parent_handler.handle.assert_called_once_with(record)
<add>
<add> parent_handler.handle.reset_mock()
<add> handler.handle.reset_mock()
<add>
<add> # Ensure that once we've called set_context on the handler we disable propagation to parent loggers by
<add> # default!
<add> set_context(child_logger, {})
<add>
<add> child_logger.handle(record)
<add>
<add> handler.handle.assert_called_once_with(record)
<add> if maintain_propagate is SetContextPropagate.MAINTAIN_PROPAGATE:
<add> parent_handler.handle.assert_called_once_with(record)
<add> else:
<add> parent_handler.handle.assert_not_called() | 3 |
Go | Go | handle ip route showing mask-less ip addresses | 0ca133dd7681bb3af1d1de18a5ea6ed42142a11e | <ide><path>network.go
<ide> func checkRouteOverlaps(dockerNetwork *net.IPNet) error {
<ide> continue
<ide> }
<ide> if _, network, err := net.ParseCIDR(strings.Split(line, " ")[0]); err != nil {
<del> return fmt.Errorf("Unexpected ip route output: %s (%s)", err, line)
<add> // is this a mask-less IP address?
<add> if ip := net.ParseIP(strings.Split(line, " ")[0]); ip == nil {
<add> // fail only if it's neither a network nor a mask-less IP address
<add> return fmt.Errorf("Unexpected ip route output: %s (%s)", err, line)
<add> }
<ide> } else if networkOverlaps(dockerNetwork, network) {
<ide> return fmt.Errorf("Network %s is already routed: '%s'", dockerNetwork.String(), line)
<ide> } | 1 |
PHP | PHP | apply suggestions from code review | b1acb9b16440e87d7cc8b2556402e1d082957d84 | <ide><path>src/Http/Uri.php
<ide> *
<ide> * @copyright Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> * @link https://cakephp.org CakePHP(tm) Project
<del> * @since 3.3.0
<add> * @since 4.4.0
<ide> * @license https://opensource.org/licenses/mit-license.php MIT License
<ide> */
<ide> namespace Cake\Http; | 1 |
Python | Python | fix ticket #599 | 1f0d060bdf6a5cd18d6f301c22cff0f0d482eed4 | <ide><path>numpy/core/fromnumeric.py
<ide> def _wrapit(obj, method, *args, **kwds):
<ide> except AttributeError:
<ide> wrap = None
<ide> result = getattr(asarray(obj),method)(*args, **kwds)
<del> if wrap and isinstance(result, mu.ndarray):
<add> if wrap:
<ide> if not isinstance(result, mu.ndarray):
<ide> result = asarray(result)
<ide> result = wrap(result) | 1 |
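The point of the `_wrapit` change above is that the wrapping step now also runs when the underlying method returns something other than an ndarray (for example a scalar), by converting the result first. Below is an illustrative re-implementation of that pattern, not the actual NumPy internals; `np.ndarray`/`np.asarray` stand in for the module-internal `mu`/`asarray` names:

```python
import numpy as np

def _wrapit_sketch(obj, method, *args, **kwds):
    try:
        wrap = obj.__array_wrap__        # subclass hook, e.g. np.matrix
    except AttributeError:
        wrap = None
    result = getattr(np.asarray(obj), method)(*args, **kwds)
    if wrap:
        # Post-fix behaviour: convert non-ndarray results before wrapping,
        # so subclasses get their own type back consistently.
        if not isinstance(result, np.ndarray):
            result = np.asarray(result)
        result = wrap(result)
    return result

m = np.matrix([[1, 2], [3, 4]])
print(type(_wrapit_sketch(m, "ravel")))  # wrapped back into the matrix subclass
```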
Ruby | Ruby | add failing test for | a7648c213d17f1a58a022fb220efe6310e3f56c3 | <ide><path>railties/test/application/assets_test.rb
<ide> class ::PostsController < ActionController::Base ; end
<ide> assert_equal 1, output.scan("enhancement").size
<ide> end
<ide>
<add> test "digested assets are not mistakenly removed" do
<add> app_file "public/assets/application.js", "alert();"
<add> add_to_config "config.assets.compile = true"
<add> add_to_config "config.assets.digest = true"
<add>
<add> quietly do
<add> Dir.chdir(app_path){ `bundle exec rake assets:clean assets:precompile` }
<add> end
<add>
<add> files = Dir["#{app_path}/public/assets/application-*.js"]
<add> assert_equal 1, files.length, "Expected digested application.js asset to be generated, but none found"
<add> end
<add>
<ide> private
<ide>
<ide> def app_with_assets_in_view | 1 |
Go | Go | publish installed v2 plugins to manager | 2a97ea9a6e03443d4d10fd2f440feb779ab8699e | <ide><path>daemon/cluster/executor/container/executor.go
<ide> import (
<ide> "github.com/docker/docker/api/types/network"
<ide> executorpkg "github.com/docker/docker/daemon/cluster/executor"
<ide> clustertypes "github.com/docker/docker/daemon/cluster/provider"
<add> "github.com/docker/docker/plugin"
<ide> networktypes "github.com/docker/libnetwork/types"
<ide> "github.com/docker/swarmkit/agent/exec"
<ide> "github.com/docker/swarmkit/agent/secrets"
<ide> func (e *executor) Describe(ctx context.Context) (*api.NodeDescription, error) {
<ide> }
<ide> }
<ide>
<add> // add v1 plugins
<ide> addPlugins("Volume", info.Plugins.Volume)
<ide> // Add builtin driver "overlay" (the only builtin multi-host driver) to
<ide> // the plugin list by default.
<ide> addPlugins("Network", append([]string{"overlay"}, info.Plugins.Network...))
<ide> addPlugins("Authorization", info.Plugins.Authorization)
<ide>
<add> // add v2 plugins
<add> v2Plugins, err := plugin.GetManager().List()
<add> if err == nil {
<add> for _, plgn := range v2Plugins {
<add> for _, typ := range plgn.Config.Interface.Types {
<add> if typ.Prefix != "docker" || !plgn.Enabled {
<add> continue
<add> }
<add> plgnTyp := typ.Capability
<add> if typ.Capability == "volumedriver" {
<add> plgnTyp = "Volume"
<add> } else if typ.Capability == "networkdriver" {
<add> plgnTyp = "Network"
<add> }
<add> plgnName := plgn.Name
<add> if plgn.Tag != "" {
<add> plgnName += ":" + plgn.Tag
<add> }
<add> plugins[api.PluginDescription{
<add> Type: plgnTyp,
<add> Name: plgnName,
<add> }] = struct{}{}
<add> }
<add> }
<add> }
<add>
<ide> pluginFields := make([]api.PluginDescription, 0, len(plugins))
<ide> for k := range plugins {
<ide> pluginFields = append(pluginFields, k) | 1 |
Python | Python | fix cuda compatibility for evaluation | 1d53f9cb7244e242c3eee858948607a35ed5d3cc
<ide> def main():
<ide> input_ids = input_ids.to(device)
<ide> input_mask = input_mask.float().to(device)
<ide> segment_ids = segment_ids.to(device)
<add> label_ids = label_ids.to(device)
<ide>
<ide> tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids)
<ide> | 1 |
Go | Go | remove dependencies on registry packages | dbb4b03bfc82eadefaf68c1a81d215949980550e | <ide><path>registry/v2/errors_test.go
<ide> import (
<ide> "encoding/json"
<ide> "reflect"
<ide> "testing"
<del>
<del> "github.com/docker/docker-registry/digest"
<ide> )
<ide>
<ide> // TestErrorCodes ensures that error code format, mappings and
<ide> func TestErrorsManagement(t *testing.T) {
<ide>
<ide> errs.Push(ErrorCodeDigestInvalid)
<ide> errs.Push(ErrorCodeBlobUnknown,
<del> map[string]digest.Digest{"digest": "sometestblobsumdoesntmatter"})
<add> map[string]string{"digest": "sometestblobsumdoesntmatter"})
<ide>
<ide> p, err := json.Marshal(errs)
<ide>
<ide><path>registry/v2/regexp.go
<add>package v2
<add>
<add>import "regexp"
<add>
<add>// This file defines regular expressions for use in route definition. These
<add>// are also defined in the registry code base. Until they are in a common,
<add>// shared location, and exported, they must be repeated here.
<add>
<add>// RepositoryNameComponentRegexp restricts registtry path components names to
<add>// start with at least two letters or numbers, with following parts able to
<add>// separated by one period, dash or underscore.
<add>var RepositoryNameComponentRegexp = regexp.MustCompile(`[a-z0-9]+(?:[._-][a-z0-9]+)*`)
<add>
<add>// RepositoryNameRegexp builds on RepositoryNameComponentRegexp to allow 2 to
<add>// 5 path components, separated by a forward slash.
<add>var RepositoryNameRegexp = regexp.MustCompile(`(?:` + RepositoryNameComponentRegexp.String() + `/){1,4}` + RepositoryNameComponentRegexp.String())
<add>
<add>// TagNameRegexp matches valid tag names. From docker/docker:graph/tags.go.
<add>var TagNameRegexp = regexp.MustCompile(`[\w][\w.-]{0,127}`)
<ide><path>registry/v2/routes.go
<ide> package v2
<ide>
<del>import (
<del> "github.com/docker/docker-registry/common"
<del> "github.com/gorilla/mux"
<del>)
<add>import "github.com/gorilla/mux"
<ide>
<ide> // The following are definitions of the name under which all V2 routes are
<ide> // registered. These symbols can be used to look up a route based on the name.
<ide> func Router() *mux.Router {
<ide> // PUT /v2/<name>/manifest/<tag> Image Manifest Upload the image manifest identified by name and tag.
<ide> // DELETE /v2/<name>/manifest/<tag> Image Manifest Delete the image identified by name and tag.
<ide> router.
<del> Path("/v2/{name:" + common.RepositoryNameRegexp.String() + "}/manifests/{tag:" + common.TagNameRegexp.String() + "}").
<add> Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/manifests/{tag:" + TagNameRegexp.String() + "}").
<ide> Name(RouteNameManifest)
<ide>
<ide> // GET /v2/<name>/tags/list Tags Fetch the tags under the repository identified by name.
<ide> router.
<del> Path("/v2/{name:" + common.RepositoryNameRegexp.String() + "}/tags/list").
<add> Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/tags/list").
<ide> Name(RouteNameTags)
<ide>
<ide> // GET /v2/<name>/blob/<digest> Layer Fetch the blob identified by digest.
<ide> router.
<del> Path("/v2/{name:" + common.RepositoryNameRegexp.String() + "}/blobs/{digest:[a-zA-Z0-9-_+.]+:[a-zA-Z0-9-_+.=]+}").
<add> Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/blobs/{digest:[a-zA-Z0-9-_+.]+:[a-zA-Z0-9-_+.=]+}").
<ide> Name(RouteNameBlob)
<ide>
<ide> // POST /v2/<name>/blob/upload/ Layer Upload Initiate an upload of the layer identified by tarsum.
<ide> router.
<del> Path("/v2/{name:" + common.RepositoryNameRegexp.String() + "}/blobs/uploads/").
<add> Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/blobs/uploads/").
<ide> Name(RouteNameBlobUpload)
<ide>
<ide> // GET /v2/<name>/blob/upload/<uuid> Layer Upload Get the status of the upload identified by tarsum and uuid.
<ide> // PUT /v2/<name>/blob/upload/<uuid> Layer Upload Upload all or a chunk of the upload identified by tarsum and uuid.
<ide> // DELETE /v2/<name>/blob/upload/<uuid> Layer Upload Cancel the upload identified by layer and uuid
<ide> router.
<del> Path("/v2/{name:" + common.RepositoryNameRegexp.String() + "}/blobs/uploads/{uuid}").
<add> Path("/v2/{name:" + RepositoryNameRegexp.String() + "}/blobs/uploads/{uuid}").
<ide> Name(RouteNameBlobUploadChunk)
<ide>
<ide> return router
<ide><path>registry/v2/urls.go
<ide> import (
<ide> "net/http"
<ide> "net/url"
<ide>
<del> "github.com/docker/docker-registry/digest"
<ide> "github.com/gorilla/mux"
<ide> )
<ide>
<ide> func (ub *URLBuilder) BuildManifestURL(name, tag string) (string, error) {
<ide> }
<ide>
<ide> // BuildBlobURL constructs the url for the blob identified by name and dgst.
<del>func (ub *URLBuilder) BuildBlobURL(name string, dgst digest.Digest) (string, error) {
<add>func (ub *URLBuilder) BuildBlobURL(name string, dgst string) (string, error) {
<ide> route := ub.cloneRoute(RouteNameBlob)
<ide>
<del> layerURL, err := route.URL("name", name, "digest", dgst.String())
<add> layerURL, err := route.URL("name", name, "digest", dgst)
<ide> if err != nil {
<ide> return "", err
<ide> } | 4 |
Javascript | Javascript | add rn_fb bundles for react-is | c09596cc6021e1f9f8a88179add93f80fc07823b | <ide><path>scripts/rollup/bundles.js
<ide> const bundles = [
<ide> FB_WWW_PROD,
<ide> UMD_DEV,
<ide> UMD_PROD,
<add> RN_FB_DEV,
<add> RN_FB_PROD,
<add> RN_FB_PROFILING,
<ide> ],
<ide> moduleType: ISOMORPHIC,
<ide> entry: 'react-is',
<ide> global: 'ReactIs',
<ide> minifyWithProdErrorCodes: true,
<ide> wrapWithModuleBoundaries: false,
<del> externals: [],
<add> externals: ['ReactNativeInternalFeatureFlags'],
<ide> },
<ide>
<ide> /******* React Debug Tools *******/
<ide><path>scripts/rollup/packaging.js
<ide> function getBundleOutputPath(bundleType, filename, packageName) {
<ide> switch (packageName) {
<ide> case 'scheduler':
<ide> case 'react':
<add> case 'react-is':
<ide> case 'react-test-renderer':
<ide> return `build/facebook-react-native/${packageName}/cjs/${filename}`;
<ide> case 'react-native-renderer': | 2 |
Javascript | Javascript | fix half way club gitter link | a028059aafe4a7536a8b0298885261494b47b531 | <ide><path>server/boot/challenge.js
<ide> module.exports = function(app) {
<ide> req.flash('info', {
<ide> msg: dedent`
<ide> Once you have completed all of our challenges, you should
<del> join our <a href=\"//gitter.im/freecodecamp/HalfWayClub\"
<del> target=\"_blank\">Half Way Club</a> and start getting
<add> join our <a href="https://gitter.im/freecodecamp/HalfWayClub"
<add> target="_blank">Half Way Club</a> and start getting
<ide> ready for our nonprofit projects.
<ide> `.split('\n').join(' ')
<ide> }); | 1 |
PHP | PHP | fix doc block | e9cf10be548d6e233416a634e79478859e23168b | <ide><path>src/Illuminate/Contracts/Filesystem/Factory.php
<ide> interface Factory {
<ide>
<ide> /**
<del> * Get an OAuth provider implementation.
<add> * Get a filesystem implementation.
<ide> *
<ide> * @param string $name
<ide> * @return \Illuminate\Contracts\Filesystem\Filesystem | 1 |
Ruby | Ruby | move the order by to the rownumber method | 680bac202da2e9c717d4e47c9402f291a8971fad | <ide><path>lib/arel/visitors/mssql.rb
<ide> def visit_Arel_Nodes_Top o
<ide> end
<ide>
<ide> def visit_Arel_Visitors_MSSQL_RowNumber o
<del> "ROW_NUMBER() OVER (#{o.expr}) as _row_num"
<add> "ROW_NUMBER() OVER (ORDER BY #{o.expr}) as _row_num"
<ide> end
<ide>
<ide> def visit_Arel_Nodes_SelectStatement o
<ide> def get_offset_limit_clause o
<ide>
<ide> def determine_order_by orders, x
<ide> if orders.any?
<del> "ORDER BY #{orders.map { |x| visit x }.join(', ')}"
<add> "#{orders.map { |x| visit x }.join(', ')}"
<ide> else
<ide> if x.groups.any?
<del> "ORDER BY #{x.groups.map { |g| visit g }.join ', ' }"
<add> "#{x.groups.map { |g| visit g }.join ', ' }"
<ide> else
<del> "ORDER BY #{find_left_table_pk(x.froms)}"
<add> "#{find_left_table_pk(x.froms)}"
<ide> end
<ide> end
<ide> end | 1 |
Ruby | Ruby | keep object#fork private | f223c795e8f43d096811e715aecea022113d7fa7 | <ide><path>activesupport/lib/active_support/fork_tracker.rb
<ide> def fork(*)
<ide> end
<ide> end
<ide>
<add> module CoreExtPrivate
<add> include CoreExt
<add> private :fork
<add> end
<add>
<ide> @pid = Process.pid
<ide> @callbacks = []
<ide>
<ide> def check!
<ide> end
<ide>
<ide> def hook!
<del> ::Object.prepend(CoreExt)
<add> ::Object.prepend(CoreExtPrivate)
<ide> ::Kernel.singleton_class.prepend(CoreExt)
<ide> ::Process.singleton_class.prepend(CoreExt)
<ide> end
<ide><path>activesupport/test/fork_tracker_test.rb
<ide> def test_object_fork
<ide> write.write "forked"
<ide> end
<ide>
<add> assert_not respond_to?(:fork)
<ide> pid = fork do
<ide> read.close
<ide> write.close | 2 |
Text | Text | fix typo in step-010.md | 5aaf8677d4c85ecafd41785ead6a718b4a533b5c | <ide><path>curriculum/challenges/english/14-responsive-web-design-22/learn-html-forms-by-building-a-registration-form/step-010.md
<ide> dashedName: step-10
<ide>
<ide> # --description--
<ide>
<del>As suggested by the title, you are creating a form. So, after the `p` element, insert a `form` with an `action` attribute targetting `https://fcc-registration-form.com`.
<add>As suggested by the title, you are creating a form. So, after the `p` element, insert a `form` with an `action` attribute targeting `https://fcc-registration-form.com`.
<ide>
<ide> # --hints--
<ide> | 1 |
Python | Python | replace str with compat.unicode_ | 82f5f1f98fe572910b5c5c1762ed73ac8ba677e6 | <ide><path>spacy/cli/info.py
<ide> import platform
<ide> from pathlib import Path
<ide>
<add>from ..compat import unicode_
<ide> from .. import about
<ide> from .. import util
<ide>
<ide> def info(model=None, markdown=False):
<ide> data = util.parse_package_meta(util.get_data_path(), model, require=True)
<ide> model_path = Path(__file__).parent / util.get_data_path() / model
<ide> if model_path.resolve() != model_path:
<del> data['link'] = str(model_path)
<del> data['source'] = str(model_path.resolve())
<add> data['link'] = unicode_(model_path)
<add> data['source'] = unicode_(model_path.resolve())
<ide> else:
<del> data['source'] = str(model_path)
<add> data['source'] = unicode_(model_path)
<ide> print_info(data, "model " + model, markdown)
<ide> else:
<ide> data = get_spacy_data()
<ide> def print_info(data, title, markdown):
<ide> def get_spacy_data():
<ide> return {
<ide> 'spaCy version': about.__version__,
<del> 'Location': str(Path(__file__).parent.parent),
<add> 'Location': unicode_(Path(__file__).parent.parent),
<ide> 'Platform': platform.platform(),
<ide> 'Python version': platform.python_version(),
<ide> 'Installed models': ', '.join(list_models()) | 1 |
Java | Java | fix timezone specific failing test | 50f20162934c1d3282615dce6696e3b56141f7ac | <ide><path>spring-webmvc/src/test/java/org/springframework/web/servlet/config/MvcNamespaceTests.java
<ide> public void testDefaultConfig() throws Exception {
<ide>
<ide> adapter.handle(request, response, handlerMethod);
<ide> assertThat(handler.recordedValidationError).isTrue();
<del> assertThat(handler.date).isInSameDayAs("2009-10-31");
<add> assertThat(handler.date).isInSameDayAs("2009-10-31T00:00:00+00:00");
<ide> assertThat(handler.percent).isEqualTo(Double.valueOf(0.9999));
<ide>
<ide> CompositeUriComponentsContributor uriComponentsContributor = this.appContext.getBean( | 1 |
Python | Python | update affected deployment tests | e74c131d8ddc6748a4036b699420eb7d1e164153 | <ide><path>libcloud/test/compute/test_deployment.py
<ide> from libcloud.compute.drivers.rackspace import RackspaceFirstGenNodeDriver as Rackspace
<ide>
<ide> from libcloud.test import MockHttp, XML_HEADERS
<del>from libcloud.test.file_fixtures import ComputeFileFixtures, OpenStackFixtures
<add>from libcloud.test.file_fixtures import ComputeFileFixtures
<ide> from mock import Mock, patch
<ide>
<ide> from libcloud.test.secrets import RACKSPACE_PARAMS
<ide> def test_wait_until_running_timeout(self):
<ide>
<ide> try:
<ide> self.driver.wait_until_running(nodes=[self.node], wait_period=0.5,
<del> timeout=1)
<add> timeout=1)
<ide> except LibcloudError:
<ide> e = sys.exc_info()[1]
<ide> self.assertTrue(e.value.find('Timed out') != -1)
<ide> def test_exception_is_thrown_is_paramiko_is_not_available(self,
<ide>
<ide>
<ide> class RackspaceMockHttp(MockHttp):
<del>
<ide> fixtures = ComputeFileFixtures('openstack')
<del> auth_fixtures = OpenStackFixtures()
<del>
<del> def _v1_1_auth(self, method, url, body, headers):
<del> body = self.auth_fixtures.load('_v1_1__auth.json')
<del> return (httplib.OK, body, {'content-type': 'application/json; charset=UTF-8'}, httplib.responses[httplib.OK])
<del>
<del> # fake auth token response
<del> def _v1_0(self, method, url, body, headers):
<del> headers = {'x-server-management-url': 'https://servers.api.rackspacecloud.com/v1.0/slug',
<del> 'x-auth-token': 'FE011C19-CF86-4F87-BE5D-9229145D7A06',
<del> 'x-cdn-management-url': 'https://cdn.clouddrive.com/v1/MossoCloudFS_FE011C19-CF86-4F87-BE5D-9229145D7A06',
<del> 'x-storage-token': 'FE011C19-CF86-4F87-BE5D-9229145D7A06',
<del> 'x-storage-url': 'https://storage4.clouddrive.com/v1/MossoCloudFS_FE011C19-CF86-4F87-BE5D-9229145D7A06'}
<del> return (httplib.NO_CONTENT, "", headers, httplib.responses[httplib.NO_CONTENT])
<add>
<add> def _v2_0_tokens(self, method, url, body, headers):
<add> body = self.fixtures.load('_v2_0__auth_deployment.json')
<add> headers = {
<add> 'content-type': 'application/json'
<add> }
<add> return (httplib.OK, body, headers,
<add> httplib.responses[httplib.OK])
<ide>
<ide> def _v1_0_slug_servers_detail(self, method, url, body, headers):
<ide> body = self.fixtures.load('v1_slug_servers_detail_deployment_success.xml') | 1 |
PHP | PHP | return response to routemiddleware too | d07f5ce82a361b457329b54e0eb6b688a226c376 | <ide><path>src/Illuminate/Routing/Router.php
<ide> protected function runRouteWithinStack(Route $route, Request $request)
<ide> ->through($middleware)
<ide> ->then(function($request) use ($route)
<ide> {
<del> return $route->run($request);
<add> return $this->prepareResponse(
<add> $request,
<add> $route->run($request)
<add> );
<ide> });
<ide> }
<ide> | 1 |
Python | Python | fix race condition when using dynamic dags | b9eb51a0fb32cd660a5459d73d7323865b34dd99 | <ide><path>airflow/jobs/scheduler_job.py
<ide>
<ide> from airflow import models, settings
<ide> from airflow.configuration import conf
<del>from airflow.exceptions import AirflowException, TaskNotFound
<add>from airflow.exceptions import AirflowException, SerializedDagNotFound, TaskNotFound
<ide> from airflow.executors.executor_loader import UNPICKLEABLE_EXECUTORS
<ide> from airflow.jobs.base_job import BaseJob
<ide> from airflow.models import DAG, DagModel, SlaMiss, errors
<ide> def _do_scheduling(self, session) -> int:
<ide> active_runs_by_dag_id[dag_id].add(execution_date)
<ide>
<ide> for dag_run in dag_runs:
<del> self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
<add> # Use try_except to not stop the Scheduler when a Serialized DAG is not found
<add> # This takes care of Dynamic DAGs especially
<add> # SerializedDagNotFound should not happen here in the same loop because the DagRun would
<add> # not be created in self._create_dag_runs if Serialized DAG does not exist
<add> # But this would take care of the scenario when the Scheduler is restarted after DagRun is
<add> # created and the DAG is deleted / renamed
<add> try:
<add> self._schedule_dag_run(dag_run, active_runs_by_dag_id.get(dag_run.dag_id, set()), session)
<add> except SerializedDagNotFound:
<add> self.log.exception("DAG '%s' not found in serialized_dag table", dag_run.dag_id)
<add> continue
<ide>
<ide> guard.commit()
<ide>
<ide> def _create_dag_runs(self, dag_models: Iterable[DagModel], session: Session) ->
<ide> if/when the next DAGRun should be created
<ide> """
<ide> for dag_model in dag_models:
<del> dag = self.dagbag.get_dag(dag_model.dag_id, session=session)
<add> try:
<add> dag = self.dagbag.get_dag(dag_model.dag_id, session=session)
<add> except SerializedDagNotFound:
<add> self.log.exception("DAG '%s' not found in serialized_dag table", dag_model.dag_id)
<add> continue
<add>
<ide> dag_hash = self.dagbag.dags_hash.get(dag.dag_id)
<ide> dag.create_dagrun(
<ide> run_type=DagRunType.SCHEDULED,
<ide> def _update_dag_next_dagruns(self, dag_models: Iterable[DagModel], session: Sess
<ide> )
<ide>
<ide> for dag_model in dag_models:
<del> dag = self.dagbag.get_dag(dag_model.dag_id, session=session)
<add> # Get the DAG in a try_except to not stop the Scheduler when a Serialized DAG is not found
<add> # This takes care of Dynamic DAGs especially
<add> try:
<add> dag = self.dagbag.get_dag(dag_model.dag_id, session=session)
<add> except SerializedDagNotFound:
<add> self.log.exception("DAG '%s' not found in serialized_dag table", dag_model.dag_id)
<add> continue
<ide> active_runs_of_dag = active_runs_of_dags.get(dag.dag_id, 0)
<ide> if dag.max_active_runs and active_runs_of_dag >= dag.max_active_runs:
<ide> self.log.info(
<ide><path>airflow/models/dagbag.py
<ide> def _serialze_dag_capturing_errors(dag, session):
<ide> )
<ide> self.log.debug("Calling the DAG.bulk_sync_to_db method")
<ide> try:
<del> DAG.bulk_write_to_db(self.dags.values(), session=session)
<del>
<ide> # Write Serialized DAGs to DB, capturing errors
<ide> for dag in self.dags.values():
<ide> serialize_errors.extend(_serialze_dag_capturing_errors(dag, session))
<add>
<add> DAG.bulk_write_to_db(self.dags.values(), session=session)
<ide> except OperationalError:
<ide> session.rollback()
<ide> raise
<ide><path>tests/jobs/test_scheduler_job.py
<ide> def test_scheduler_sets_job_id_on_dag_run(self):
<ide>
<ide> assert dag.get_last_dagrun().creating_job_id == scheduler.id
<ide>
<add> def test_scheduler_create_dag_runs_does_not_raise_error(self):
<add> """
<add> Test that scheduler._create_dag_runs does not raise an error when the DAG does not exist
<add> in serialized_dag table
<add> """
<add> dag = DAG(dag_id='test_scheduler_create_dag_runs_does_not_raise_error', start_date=DEFAULT_DATE)
<add>
<add> DummyOperator(
<add> task_id='dummy',
<add> dag=dag,
<add> )
<add>
<add> dagbag = DagBag(
<add> dag_folder=os.devnull,
<add> include_examples=False,
<add> read_dags_from_db=False,
<add> )
<add> dagbag.bag_dag(dag=dag, root_dag=dag)
<add> # Only write to dag table and not serialized_dag table
<add> DAG.bulk_write_to_db(dagbag.dags.values())
<add> dag_model = DagModel.get_dagmodel(dag.dag_id)
<add>
<add> scheduler = SchedulerJob(subdir=os.devnull, executor=self.null_exec)
<add> scheduler.processor_agent = mock.MagicMock()
<add>
<add> with create_session() as session, self.assertLogs(
<add> 'airflow.jobs.scheduler_job', level="ERROR"
<add> ) as log_output:
<add> scheduler._create_dag_runs([dag_model], session)
<add>
<add> assert (
<add> "airflow.exceptions.SerializedDagNotFound: DAG "
<add> "'test_scheduler_create_dag_runs_does_not_raise_error' not found in serialized_dag table"
<add> ) in log_output.output[0]
<add>
<ide> def test_do_schedule_max_active_runs_upstream_failed(self):
<ide> """
<ide> Test that tasks in upstream failed don't count as actively running. | 3 |
Go | Go | remove dup tests | 0a3abe33f0bd9b7dfc36021b6f10d0857a432d40 | <ide><path>integration/server_test.go
<ide> package docker
<ide>
<ide> import (
<ide> "bytes"
<del> "strings"
<ide> "testing"
<ide> "time"
<ide>
<ide> func TestCreateNumberHostname(t *testing.T) {
<ide> createTestContainer(eng, config, t)
<ide> }
<ide>
<del>func TestCreateNumberUsername(t *testing.T) {
<del> eng := NewTestEngine(t)
<del> defer mkDaemonFromEngine(eng, t).Nuke()
<del>
<del> config, _, _, err := runconfig.Parse([]string{"-u", "1002", unitTestImageID, "echo test"}, nil)
<del> if err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> createTestContainer(eng, config, t)
<del>}
<del>
<ide> func TestCommit(t *testing.T) {
<ide> eng := NewTestEngine(t)
<ide> defer mkDaemonFromEngine(eng, t).Nuke()
<ide> func TestRunWithTooLowMemoryLimit(t *testing.T) {
<ide> }
<ide> }
<ide>
<del>func TestRmi(t *testing.T) {
<del> eng := NewTestEngine(t)
<del> srv := mkServerFromEngine(eng, t)
<del> defer mkDaemonFromEngine(eng, t).Nuke()
<del>
<del> initialImages := getAllImages(eng, t)
<del>
<del> config, hostConfig, _, err := runconfig.Parse([]string{unitTestImageID, "echo", "test"}, nil)
<del> if err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> containerID := createTestContainer(eng, config, t)
<del>
<del> //To remove
<del> job := eng.Job("start", containerID)
<del> if err := job.ImportEnv(hostConfig); err != nil {
<del> t.Fatal(err)
<del> }
<del> if err := job.Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> if err := eng.Job("wait", containerID).Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> job = eng.Job("commit", containerID)
<del> job.Setenv("repo", "test")
<del> var outputBuffer = bytes.NewBuffer(nil)
<del> job.Stdout.Add(outputBuffer)
<del> if err := job.Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> if err := eng.Job("tag", engine.Tail(outputBuffer, 1), "test", "0.1").Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> containerID = createTestContainer(eng, config, t)
<del>
<del> //To remove
<del> job = eng.Job("start", containerID)
<del> if err := job.ImportEnv(hostConfig); err != nil {
<del> t.Fatal(err)
<del> }
<del> if err := job.Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> if err := eng.Job("wait", containerID).Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> job = eng.Job("commit", containerID)
<del> job.Setenv("repo", "test")
<del> if err := job.Run(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> images := getAllImages(eng, t)
<del>
<del> if images.Len()-initialImages.Len() != 2 {
<del> t.Fatalf("Expected 2 new images, found %d.", images.Len()-initialImages.Len())
<del> }
<del>
<del> if err = srv.DeleteImage(engine.Tail(outputBuffer, 1), engine.NewTable("", 0), true, false, false); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> images = getAllImages(eng, t)
<del>
<del> if images.Len()-initialImages.Len() != 1 {
<del> t.Fatalf("Expected 1 new image, found %d.", images.Len()-initialImages.Len())
<del> }
<del>
<del> for _, image := range images.Data {
<del> if strings.Contains(unitTestImageID, image.Get("Id")) {
<del> continue
<del> }
<del> if image.GetList("RepoTags")[0] == "<none>:<none>" {
<del> t.Fatalf("Expected tagged image, got untagged one.")
<del> }
<del> }
<del>}
<del>
<ide> func TestImagesFilter(t *testing.T) {
<ide> eng := NewTestEngine(t)
<ide> defer nuke(mkDaemonFromEngine(eng, t)) | 1 |
Python | Python | add documentation to dag test function | 9644b05d683abaa45f9b0f8b3c1b05a2263b8523 | <ide><path>airflow/models/dag.py
<ide> def test(
<ide> variable_file_path: str | None = None,
<ide> session: Session = NEW_SESSION,
<ide> ) -> None:
<del> """Execute one single DagRun for a given DAG and execution date."""
<add> """
<add> Execute one single DagRun for a given DAG and execution date.
<add>
<add> :param execution_date: execution date for the DAG run
<add> :param run_conf: configuration to pass to newly created dagrun
<add> :param conn_file_path: file path to a connection file in either yaml or json
<add> :param variable_file_path: file path to a variable file in either yaml or json
<add> :param session: database connection (optional)
<add> """
<ide>
<ide> def add_logger_if_needed(ti: TaskInstance):
<ide> """ | 1 |
PHP | PHP | remove todo item | e5b516a8726d3ee3937e2a5523ba76f678bfa751 | <ide><path>src/Controller/Component.php
<ide> public function __construct(ComponentRegistry $registry, $config = []) {
<ide>
<ide> $this->config($config);
<ide>
<del> $this->_set($this->config()); //@TODO get rid of public properties and remove this
<add> $this->_set($this->config());
<ide>
<ide> if (!empty($this->components)) {
<ide> $this->_componentMap = $registry->normalizeArray($this->components); | 1 |
Javascript | Javascript | remove trailing whitespace from text editor docs | 2c2d9597a7445a1505ffd0560179aadad4e9d390 | <ide><path>src/text-editor.js
<ide> class TextEditor {
<ide> // coordinates. Useful with {Config::get}.
<ide> //
<ide> // For example, if called with a position inside the parameter list of an
<del> // anonymous CoffeeScript function, this method returns a {ScopeDescriptor} with
<add> // anonymous CoffeeScript function, this method returns a {ScopeDescriptor} with
<ide> // the following scopes array:
<ide> // `["source.coffee", "meta.function.inline.coffee", "meta.parameters.coffee", "variable.parameter.function.coffee"]`
<ide> // | 1 |
Javascript | Javascript | fix util.inspect() line width calculation | 1f5570471896b6723b723342d55ad50013ce3b82 | <ide><path>lib/util.js
<ide> function reduceToSingleString(output, base, braces) {
<ide> var length = output.reduce(function(prev, cur) {
<ide> numLinesEst++;
<ide> if (cur.indexOf('\n') >= 0) numLinesEst++;
<del> return prev + cur.length + 1;
<add> return prev + cur.replace(/\u001b\[\d\d?m/g, '').length + 1;
<ide> }, 0);
<ide>
<ide> if (length > 60) {
<ide><path>test/simple/test-util-inspect.js
<ide> assert(util.inspect(subject, { customInspect: false }).indexOf('inspect') !== -1
<ide> subject.inspect = function() { return { foo: 'bar' }; };
<ide>
<ide> assert.equal(util.inspect(subject), '{ foo: \'bar\' }');
<add>
<add>// util.inspect with "colors" option should produce as many lines as without it
<add>function test_lines(input) {
<add> var count_lines = function(str) {
<add> return (str.match(/\n/g) || []).length;
<add> }
<add>
<add> var without_color = util.inspect(input);
<add> var with_color = util.inspect(input, {colors: true});
<add> assert.equal(count_lines(without_color), count_lines(with_color));
<add>}
<add>
<add>test_lines([1, 2, 3, 4, 5, 6, 7]);
<add>test_lines(function() {
<add> var big_array = [];
<add> for (var i = 0; i < 100; i++) {
<add> big_array.push(i);
<add> }
<add> return big_array;
<add>}());
<add>test_lines({foo: 'bar', baz: 35, b: {a: 35}});
<add>test_lines({
<add> foo: 'bar',
<add> baz: 35,
<add> b: {a: 35},
<add> very_long_key: 'very_long_value',
<add> even_longer_key: ['with even longer value in array']
<add>}); | 2 |
Python | Python | add pytorch native amp support in trainer | 0034a1d248e1053dad743dc02c994bbe37a743af | <ide><path>src/transformers/trainer.py
<ide> from tqdm.auto import tqdm, trange
<ide>
<ide> from .data.data_collator import DataCollator, default_data_collator
<del>from .file_utils import is_apex_available, is_torch_tpu_available
<add>from .file_utils import is_torch_tpu_available
<ide> from .modeling_utils import PreTrainedModel
<ide> from .optimization import AdamW, get_linear_schedule_with_warmup
<ide> from .trainer_utils import (
<ide> from .training_args import TrainingArguments
<ide>
<ide>
<del>if is_apex_available():
<del> from apex import amp
<add>_use_native_amp = False
<add>_use_apex = False
<add>
<add># Check if Pytorch version >= 1.6 to switch between Native AMP and Apex
<add>if version.parse(torch.__version__) < version.parse("1.6"):
<add> from transformers.file_utils import is_apex_available
<add>
<add> if is_apex_available():
<add> from apex import amp
<add> _use_apex = True
<add>else:
<add> _use_native_amp = True
<add> from torch.cuda.amp import autocast
<ide>
<ide>
<ide> if is_torch_tpu_available():
<ide> def __init__(
<ide> ),
<ide> FutureWarning,
<ide> )
<add> if self.args.fp16 and _use_native_amp:
<add> self.scaler = torch.cuda.amp.GradScaler()
<ide>
<ide> def _get_train_sampler(self) -> Optional[torch.utils.data.sampler.Sampler]:
<ide> if isinstance(self.train_dataset, torch.utils.data.IterableDataset):
<ide> def train(self, model_path: Optional[str] = None):
<ide> scheduler.load_state_dict(torch.load(os.path.join(model_path, "scheduler.pt")))
<ide>
<ide> model = self.model
<del> if self.args.fp16:
<add> if self.args.fp16 and _use_apex:
<ide> if not is_apex_available():
<ide> raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.")
<ide> model, optimizer = amp.initialize(model, optimizer, opt_level=self.args.fp16_opt_level)
<ide> def train(self, model_path: Optional[str] = None):
<ide> len(epoch_iterator) <= self.args.gradient_accumulation_steps
<ide> and (step + 1) == len(epoch_iterator)
<ide> ):
<del> if self.args.fp16:
<add> if self.args.fp16 and _use_native_amp:
<add> self.scaler.unscale_(optimizer)
<add> torch.nn.utils.clip_grad_norm_(model.parameters(), self.args.max_grad_norm)
<add> elif self.args.fp16 and _use_apex:
<ide> torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), self.args.max_grad_norm)
<ide> else:
<ide> torch.nn.utils.clip_grad_norm_(model.parameters(), self.args.max_grad_norm)
<ide>
<ide> if is_torch_tpu_available():
<ide> xm.optimizer_step(optimizer)
<add>
<add> if self.args.fp16 and _use_native_amp:
<add> self.scaler.step(optimizer)
<add> self.scaler.update()
<ide> else:
<ide> optimizer.step()
<ide>
<ide> def training_step(
<ide> model.train()
<ide> inputs = self._prepare_inputs(inputs, model)
<ide>
<del> outputs = model(**inputs)
<del> # We don't use .loss here since the model may return tuples instead of ModelOutput.
<del> loss = outputs[0]
<add> if self.args.fp16 and _use_native_amp:
<add> with autocast():
<add> outputs = model(**inputs)
<add> loss = outputs[0]
<add> else:
<add> outputs = model(**inputs)
<add> # We don't use .loss here since the model may return tuples instead of ModelOutput.
<add> loss = outputs[0]
<ide>
<ide> if self.args.past_index >= 0:
<ide> self._past = outputs[self.args.past_index]
<ide>
<ide> if self.args.n_gpu > 1:
<ide> loss = loss.mean() # mean() to average on multi-gpu parallel training
<add>
<ide> if self.args.gradient_accumulation_steps > 1:
<ide> loss = loss / self.args.gradient_accumulation_steps
<ide>
<del> if self.args.fp16:
<add> if self.args.fp16 and _use_native_amp:
<add> self.scaler.scale(loss).backward()
<add> elif self.args.fp16 and _use_apex:
<ide> with amp.scale_loss(loss, optimizer) as scaled_loss:
<ide> scaled_loss.backward()
<ide> else: | 1 |
Go | Go | fix version checks to work properly | 4bf7a84c969b9309b0534a61af55b8bb824acc0a | <ide><path>contrib/apparmor/main.go
<ide> import (
<ide> )
<ide>
<ide> type profileData struct {
<del> MajorVersion int
<del> MinorVersion int
<add> Version int
<ide> }
<ide>
<ide> func main() {
<ide> func main() {
<ide> // parse the arg
<ide> apparmorProfilePath := os.Args[1]
<ide>
<del> majorVersion, minorVersion, err := aaparser.GetVersion()
<add> version, err := aaparser.GetVersion()
<ide> if err != nil {
<ide> log.Fatal(err)
<ide> }
<ide> data := profileData{
<del> MajorVersion: majorVersion,
<del> MinorVersion: minorVersion,
<add> Version: version,
<ide> }
<ide> fmt.Printf("apparmor_parser is of version %+v\n", data)
<ide>
<ide><path>contrib/apparmor/template.go
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide>
<ide> umount,
<ide> pivot_root,
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> signal (receive) peer=@{profile_name},
<ide> signal (receive) peer=unconfined,
<ide> signal (send),
<del>{{end}}{{end}}
<add>{{end}}
<ide> network,
<ide> capability,
<ide> owner /** rw,
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide> /etc/ld.so.cache r,
<ide> /etc/passwd r,
<ide>
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> ptrace peer=@{profile_name},
<ide> ptrace (read) peer=docker-default,
<ide> deny ptrace (trace) peer=docker-default,
<ide> deny ptrace peer=/usr/bin/docker///bin/ps,
<del>{{end}}{{end}}
<add>{{end}}
<ide>
<ide> /usr/lib/** rm,
<ide> /lib/** rm,
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide> /sbin/zfs rCx,
<ide> /sbin/apparmor_parser rCx,
<ide>
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> # Transitions
<ide> change_profile -> docker-*,
<ide> change_profile -> unconfined,
<del>{{end}}{{end}}
<add>{{end}}
<ide>
<ide> profile /bin/cat (complain) {
<ide> /etc/ld.so.cache r,
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide> /dev/null rw,
<ide> /bin/ps mr,
<ide>
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> # We don't need ptrace so we'll deny and ignore the error.
<ide> deny ptrace (read, trace),
<del>{{end}}{{end}}
<add>{{end}}
<ide>
<ide> # Quiet dac_override denials
<ide> deny capability dac_override,
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide> /proc/tty/drivers r,
<ide> }
<ide> profile /sbin/iptables (complain) {
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> signal (receive) peer=/usr/bin/docker,
<del>{{end}}{{end}}
<add>{{end}}
<ide> capability net_admin,
<ide> }
<ide> profile /sbin/auplink flags=(attach_disconnected, complain) {
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> signal (receive) peer=/usr/bin/docker,
<del>{{end}}{{end}}
<add>{{end}}
<ide> capability sys_admin,
<ide> capability dac_override,
<ide>
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide> /proc/[0-9]*/mounts rw,
<ide> }
<ide> profile /sbin/modprobe /bin/kmod (complain) {
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> signal (receive) peer=/usr/bin/docker,
<del>{{end}}{{end}}
<add>{{end}}
<ide> capability sys_module,
<ide> /etc/ld.so.cache r,
<ide> /lib/** rm,
<ide> profile /usr/bin/docker (attach_disconnected, complain) {
<ide> }
<ide> # xz works via pipes, so we do not need access to the filesystem.
<ide> profile /usr/bin/xz (complain) {
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{if ge .Version 209000}}
<ide> signal (receive) peer=/usr/bin/docker,
<del>{{end}}{{end}}
<add>{{end}}
<ide> /etc/ld.so.cache r,
<ide> /lib/** rm,
<ide> /usr/bin/xz rm,
<ide><path>pkg/aaparser/aaparser.go
<ide> const (
<ide> )
<ide>
<ide> // GetVersion returns the major and minor version of apparmor_parser.
<del>func GetVersion() (int, int, error) {
<add>func GetVersion() (int, error) {
<ide> output, err := cmd("", "--version")
<ide> if err != nil {
<del> return -1, -1, err
<add> return -1, err
<ide> }
<ide>
<del> return parseVersion(string(output))
<add> return parseVersion(output)
<ide> }
<ide>
<ide> // LoadProfile runs `apparmor_parser -r -W` on a specified apparmor profile to
<ide> func cmd(dir string, arg ...string) (string, error) {
<ide> }
<ide>
<ide> // parseVersion takes the output from `apparmor_parser --version` and returns
<del>// the major and minor version for `apparor_parser`.
<del>func parseVersion(output string) (int, int, error) {
<add>// a representation of the {major, minor, patch} version as a single number of
<add>// the form MMmmPPP {major, minor, patch}.
<add>func parseVersion(output string) (int, error) {
<ide> // output is in the form of the following:
<ide> // AppArmor parser version 2.9.1
<ide> // Copyright (C) 1999-2008 Novell Inc.
<ide> // Copyright 2009-2012 Canonical Ltd.
<add>
<ide> lines := strings.SplitN(output, "\n", 2)
<ide> words := strings.Split(lines[0], " ")
<ide> version := words[len(words)-1]
<ide>
<ide> // split by major minor version
<ide> v := strings.Split(version, ".")
<del> if len(v) < 2 {
<del> return -1, -1, fmt.Errorf("parsing major minor version failed for output: `%s`", output)
<add> if len(v) == 0 || len(v) > 3 {
<add> return -1, fmt.Errorf("parsing version failed for output: `%s`", output)
<ide> }
<ide>
<add> // Default the versions to 0.
<add> var majorVersion, minorVersion, patchLevel int
<add>
<ide> majorVersion, err := strconv.Atoi(v[0])
<ide> if err != nil {
<del> return -1, -1, err
<add> return -1, err
<ide> }
<del> minorVersion, err := strconv.Atoi(v[1])
<del> if err != nil {
<del> return -1, -1, err
<add>
<add> if len(v) > 1 {
<add> minorVersion, err = strconv.Atoi(v[1])
<add> if err != nil {
<add> return -1, err
<add> }
<add> }
<add> if len(v) > 2 {
<add> patchLevel, err = strconv.Atoi(v[2])
<add> if err != nil {
<add> return -1, err
<add> }
<ide> }
<ide>
<del> return majorVersion, minorVersion, nil
<add> // major*10^5 + minor*10^3 + patch*10^0
<add> numericVersion := majorVersion*1e5 + minorVersion*1e3 + patchLevel
<add> return numericVersion, nil
<ide> }
<ide><path>pkg/aaparser/aaparser_test.go
<ide> import (
<ide> )
<ide>
<ide> type versionExpected struct {
<del> output string
<del> major int
<del> minor int
<add> output string
<add> version int
<ide> }
<ide>
<ide> func TestParseVersion(t *testing.T) {
<ide> Copyright (C) 1999-2008 Novell Inc.
<ide> Copyright 2009-2012 Canonical Ltd.
<ide>
<ide> `,
<del> major: 2,
<del> minor: 10,
<add> version: 210000,
<ide> },
<ide> {
<ide> output: `AppArmor parser version 2.8
<ide> Copyright (C) 1999-2008 Novell Inc.
<ide> Copyright 2009-2012 Canonical Ltd.
<ide>
<ide> `,
<del> major: 2,
<del> minor: 8,
<add> version: 208000,
<ide> },
<ide> {
<ide> output: `AppArmor parser version 2.20
<ide> Copyright (C) 1999-2008 Novell Inc.
<ide> Copyright 2009-2012 Canonical Ltd.
<ide>
<ide> `,
<del> major: 2,
<del> minor: 20,
<add> version: 220000,
<ide> },
<ide> {
<ide> output: `AppArmor parser version 2.05
<ide> Copyright (C) 1999-2008 Novell Inc.
<ide> Copyright 2009-2012 Canonical Ltd.
<ide>
<ide> `,
<del> major: 2,
<del> minor: 5,
<add> version: 205000,
<add> },
<add> {
<add> output: `AppArmor parser version 2.9.95
<add>Copyright (C) 1999-2008 Novell Inc.
<add>Copyright 2009-2012 Canonical Ltd.
<add>
<add>`,
<add> version: 209095,
<add> },
<add> {
<add> output: `AppArmor parser version 3.14.159
<add>Copyright (C) 1999-2008 Novell Inc.
<add>Copyright 2009-2012 Canonical Ltd.
<add>
<add>`,
<add> version: 314159,
<ide> },
<ide> }
<ide>
<ide> for _, v := range versions {
<del> major, minor, err := parseVersion(v.output)
<add> version, err := parseVersion(v.output)
<ide> if err != nil {
<ide> t.Fatalf("expected error to be nil for %#v, got: %v", v, err)
<ide> }
<del> if major != v.major {
<del> t.Fatalf("expected major version to be %d, was %d, for: %#v\n", v.major, major, v)
<del> }
<del> if minor != v.minor {
<del> t.Fatalf("expected minor version to be %d, was %d, for: %#v\n", v.minor, minor, v)
<add> if version != v.version {
<add> t.Fatalf("expected version to be %d, was %d, for: %#v\n", v.version, version, v)
<ide> }
<ide> }
<ide> }
<ide><path>profiles/apparmor/apparmor.go
<ide> type profileData struct {
<ide> Imports []string
<ide> // InnerImports defines the apparmor functions to import in the profile.
<ide> InnerImports []string
<del> // MajorVersion is the apparmor_parser major version.
<del> MajorVersion int
<del> // MinorVersion is the apparmor_parser minor version.
<del> MinorVersion int
<add> // Version is the {major, minor, patch} version of apparmor_parser as a single number.
<add> Version int
<ide> }
<ide>
<ide> // generateDefault creates an apparmor profile from ProfileData.
<ide><path>profiles/apparmor/template.go
<ide> profile {{.Name}} flags=(attach_disconnected,mediate_deleted) {
<ide> deny /sys/firmware/efi/efivars/** rwklx,
<ide> deny /sys/kernel/security/** rwklx,
<ide>
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 8}}
<add>{{if ge .Version 208000}}
<ide> # suppress ptrace denials when using 'docker ps' or using 'ps' inside a container
<ide> ptrace (trace,read) peer=docker-default,
<del>{{end}}{{end}}
<del>{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
<add>{{end}}
<add>{{if ge .Version 209000}}
<ide> # docker daemon confinement requires explict allow rule for signal
<ide> signal (receive) set=(kill,term) peer={{.ExecPath}},
<del>{{end}}{{end}}
<add>{{end}}
<ide> }
<ide> ` | 6 |
Text | Text | remove oxford comma | b3603b5de8033e926f71cedf2f7f3fd95ec58a2d | <ide><path>docs/Common-Issues.md
<ide> brew upgrade
<ide>
<ide> ### Other local issues
<ide>
<del>If your Homebrew installation gets messed up (and fixing the issues found by `brew doctor` doesn't solve the problem), reinstalling Homebrew may help to reset to a normal state. To easily reinstall Homebrew, use [Homebrew Bundle](https://github.com/Homebrew/homebrew-bundle) to automatically restore your installed formulae and casks. To do so, run `brew bundle dump`, [uninstall](https://docs.brew.sh/FAQ#how-do-i-uninstall-homebrew), [reinstall](https://docs.brew.sh/Installation), and run `brew bundle install`.
<add>If your Homebrew installation gets messed up (and fixing the issues found by `brew doctor` doesn't solve the problem), reinstalling Homebrew may help to reset to a normal state. To easily reinstall Homebrew, use [Homebrew Bundle](https://github.com/Homebrew/homebrew-bundle) to automatically restore your installed formulae and casks. To do so, run `brew bundle dump`, [uninstall](https://docs.brew.sh/FAQ#how-do-i-uninstall-homebrew), [reinstall](https://docs.brew.sh/Installation) and run `brew bundle install`. | 1 |
PHP | PHP | remove unneeded method | bf8e0faecb13385cbb9fa0445d1ef3516a8a8345 | <ide><path>src/Illuminate/Session/Store.php
<ide> public function get($name, $default = null)
<ide> return parent::get($name) ?: value($default);
<ide> }
<ide>
<del> /**
<del> * Determine if the session has a flash item.
<del> *
<del> * @param string $name
<del> * @return bool
<del> */
<del> public function hasFlash($name)
<del> {
<del> return $this->has($name);
<del> }
<del>
<ide> /**
<ide> * Determine if the session contains old input.
<ide> * | 1 |
Ruby | Ruby | fix memcachestore local cache duplication | 5d1e8884bd626d1949af79cdb2f2c9ab54ab2028 | <ide><path>activesupport/lib/active_support/cache/mem_cache_store.rb
<ide> raise e
<ide> end
<ide>
<add>require "delegate"
<ide> require "active_support/core_ext/enumerable"
<ide> require "active_support/core_ext/array/extract_options"
<ide>
<ide> def self.supports_cache_versioning?
<ide> prepend Strategy::LocalCache
<ide>
<ide> module DupLocalCache
<del> class LocalStore < Strategy::LocalCache::LocalStore
<add> class DupLocalStore < DelegateClass(Strategy::LocalCache::LocalStore)
<ide> def write_entry(_key, entry)
<ide> if entry.is_a?(Entry)
<ide> entry.dup_value!
<ide> def write_entry(_key, entry)
<ide> end
<ide>
<ide> def fetch_entry(key)
<del> entry = @data.fetch(key) do
<add> entry = super do
<ide> new_entry = yield
<ide> if entry.is_a?(Entry)
<ide> new_entry.dup_value!
<ide> end
<del> @data[key] = new_entry
<add> new_entry
<ide> end
<ide> entry = entry.dup
<ide>
<ide> def fetch_entry(key)
<ide> end
<ide> end
<ide>
<del> def with_local_cache
<del> if ActiveSupport::Cache.format_version == 6.1
<del> use_temporary_local_cache(LocalStore.new) { yield }
<del> else
<del> super
<add> private
<add> def local_cache
<add> if ActiveSupport::Cache.format_version == 6.1
<add> if local_cache = super
<add> DupLocalStore.new(local_cache)
<add> end
<add> else
<add> super
<add> end
<ide> end
<del> end
<ide> end
<ide> prepend DupLocalCache
<ide> | 1 |
Python | Python | replace references to `typeDict` with `sctypeDict` | a4260ab10fff84710c3ae4a67271f03f823ac75c
<ide> 'int64', 'float64', 'complex64',
<ide> 'longfloat', 'complex128',
<ide> ]
<del>if 'complex256' in numpy.typeDict:
<add>if 'complex256' in numpy.sctypeDict:
<ide> TYPES1.append('complex256')
<ide>
<ide>
<ide><path>numpy/core/numerictypes.py
<ide> Exported symbols include:
<ide>
<ide> Dictionary with all registered number types (including aliases):
<del> typeDict
<add> sctypeDict
<ide>
<ide> Type objects (not all will be available, depends on platform):
<ide> see variable sctypes for which ones you have
<ide><path>numpy/core/records.py
<ide> # of the letter code '(2,3)f4' and ' ( 2 , 3 ) f4 '
<ide> # are equally allowed
<ide>
<del>numfmt = nt.typeDict
<add>numfmt = nt.sctypeDict
<ide>
<ide> # taken from OrderedDict recipes in the Python documentation
<ide> # https://docs.python.org/3.3/library/collections.html#ordereddict-examples-and-recipes
<ide><path>numpy/core/tests/test_multiarray.py
<ide> def test_warnonwrite(self):
<ide> a[2] = 10
<ide> # only warn once
<ide> assert_(len(w) == 1)
<del>
<add>
<ide> @pytest.mark.parametrize(["flag", "flag_value", "writeable"],
<ide> [("writeable", True, True),
<ide> # Delete _warn_on_write after deprecation and simplify
<ide> def testassign(arr, v):
<ide> a = np.array([(1,2)], dtype=[('a', 'i4'), ('b', 'i4')])
<ide> a[['a', 'b']] = a[['b', 'a']]
<ide> assert_equal(a[0].item(), (2,1))
<del>
<add>
<ide> def test_scalar_assignment(self):
<ide> with assert_raises(ValueError):
<del> arr = np.arange(25).reshape(5, 5)
<del> arr.itemset(3)
<add> arr = np.arange(25).reshape(5, 5)
<add> arr.itemset(3)
<ide>
<ide> def test_structuredscalar_indexing(self):
<ide> # test gh-7262
<ide> def test_roundtrip_half(self):
<ide> self._check_roundtrip(x)
<ide>
<ide> def test_roundtrip_single_types(self):
<del> for typ in np.typeDict.values():
<add> for typ in np.sctypeDict.values():
<ide> dtype = np.dtype(typ)
<ide>
<ide> if dtype.char in 'Mm':
<ide><path>numpy/core/tests/test_regression.py
<ide> def test_fromstring_crash(self):
<ide> np.fromstring(b'aa, aa, 1.0', sep=',')
<ide>
<ide> def test_ticket_1539(self):
<del> dtypes = [x for x in np.typeDict.values()
<add> dtypes = [x for x in np.sctypeDict.values()
<ide> if (issubclass(x, np.number)
<ide> and not issubclass(x, np.timedelta64))]
<ide> a = np.array([], np.bool_) # not x[0] because it is unordered
<ide> def test_invalid_structured_dtypes(self):
<ide>
<ide> def test_correct_hash_dict(self):
<ide> # gh-8887 - __hash__ would be None despite tp_hash being set
<del> all_types = set(np.typeDict.values()) - {np.void}
<add> all_types = set(np.sctypeDict.values()) - {np.void}
<ide> for t in all_types:
<ide> val = t()
<ide>
<ide><path>numpy/core/tests/test_scalarmath.py
<ide> def test_iinfo_long_values(self):
<ide> assert_(res == tgt)
<ide>
<ide> for code in np.typecodes['AllInteger']:
<del> res = np.typeDict[code](np.iinfo(code).max)
<add> res = np.sctypeDict[code](np.iinfo(code).max)
<ide> tgt = np.iinfo(code).max
<ide> assert_(res == tgt)
<ide>
<ide> def test_int_raise_behaviour(self):
<ide> def overflow_error_func(dtype):
<del> np.typeDict[dtype](np.iinfo(dtype).max + 1)
<add> np.sctypeDict[dtype](np.iinfo(dtype).max + 1)
<ide>
<ide> for code in 'lLqQ':
<ide> assert_raises(OverflowError, overflow_error_func, code) | 6 |
Ruby | Ruby | add ability to check for rosetta | d9135c5a57f38c578cb7c9c7ba8eaab63b6d7651 | <ide><path>Library/Homebrew/extend/os/mac/hardware/cpu.rb
<ide> def universal_archs
<ide> [arch_64_bit, arch_32_bit].extend ArchitectureListExtension
<ide> end
<ide>
<add> # True when running under an Intel-based shell via Rosetta on an
<add> # Apple Silicon Mac. This can be detected via seeing if there's a
<add> # conflict between what `uname` report and the underlying `sysctl` flags,
<add> # since the `sysctl` flags don't change behaviour under Rosetta.
<add> def running_under_rosetta?
<add> intel? && physical_cpu_arm64?
<add> end
<add>
<ide> def features
<ide> @features ||= sysctl_n(
<ide> "machdep.cpu.features",
<ide> def sse4_2?
<ide>
<ide> private
<ide>
<add> # Note: this is more reliable than checking uname.
<add> # `sysctl` returns the right answer even when running in Rosetta.
<add> def physical_cpu_arm64?
<add> sysctl_bool("hw.optional.arm64")
<add> end
<add>
<ide> def sysctl_bool(key)
<ide> sysctl_int(key) == 1
<ide> end
<ide><path>Library/Homebrew/hardware.rb
<ide> def arch_flag(arch)
<ide>
<ide> "-march=#{arch}"
<ide> end
<add>
<add> def running_under_rosetta?
<add> false
<add> end
<ide> end
<ide> end
<ide> | 2 |
Ruby | Ruby | separate two groups of retryable db exceptions | da52b0d954a356421ccc064df3b1285f2ba10eb5 | <ide><path>activerecord/lib/active_record/connection_adapters/abstract_adapter.rb
<ide> def with_raw_connection(allow_retry: false, uses_transaction: true)
<ide> result = yield @raw_connection
<ide> @verified = true
<ide> result
<del> rescue => ex
<del> if retries_available > 0 && retryable_error?(ex) && reconnect_can_restore_state?
<add> rescue => original_exception
<add> translated_exception = translate_exception_class(original_exception, nil, nil)
<add>
<add> if retries_available > 0
<ide> retries_available -= 1
<del> reconnect!(restore_transactions: true)
<del> retry
<add>
<add> if retryable_query_error?(translated_exception)
<add> backoff(connection_retries - retries_available)
<add> retry
<add> elsif retryable_connection_error?(translated_exception)
<add> reconnect!(restore_transactions: true)
<add> retry
<add> end
<ide> end
<ide>
<del> raise
<add> raise translated_exception
<ide> ensure
<ide> dirty_current_transaction if uses_transaction
<ide> end
<ide> end
<ide> end
<ide>
<del> def retryable_error?(exception)
<del> false
<add> def retryable_connection_error?(exception)
<add> exception.is_a?(ConnectionNotEstablished)
<add> end
<add>
<add> def retryable_query_error?(exception)
<add> exception.is_a?(Deadlocked) ||
<add> exception.is_a?(LockWaitTimeout)
<add> end
<add>
<add> def backoff(counter)
<add> sleep 0.1 * counter
<ide> end
<ide>
<ide> # Returns a raw connection for internal use with methods that are known
<ide> def translate_exception_class(e, sql, binds)
<ide> exception
<ide> end
<ide>
<del> def log(sql, name = "SQL", binds = [], type_casted_binds = [], statement_name = nil, async: false) # :doc:
<add> def log(sql, name = "SQL", binds = [], type_casted_binds = [], statement_name = nil, async: false, &block) # :doc:
<ide> @instrumenter.instrument(
<ide> "sql.active_record",
<ide> sql: sql,
<ide> def log(sql, name = "SQL", binds = [], type_casted_binds = [], statement_name =
<ide> type_casted_binds: type_casted_binds,
<ide> statement_name: statement_name,
<ide> async: async,
<del> connection: self) do
<del> yield
<del> rescue => e
<del> raise translate_exception_class(e, sql, binds)
<del> end
<add> connection: self,
<add> &block
<add> )
<add> rescue ActiveRecord::StatementInvalid => ex
<add> raise ex.set_query(sql, binds)
<ide> end
<ide>
<ide> def transform_query(sql)
<ide><path>activerecord/lib/active_record/connection_adapters/abstract_mysql_adapter.rb
<ide> def translate_exception(exception, message:, sql:, binds:)
<ide> end
<ide> end
<ide>
<del> def retryable_error?(exception)
<del> error_number(exception).nil? &&
<del> exception.message.match?(/MySQL client is not connected/i)
<del> end
<del>
<ide> def change_column_for_alter(table_name, column_name, type, **options)
<ide> column = column_for(table_name, column_name)
<ide> type ||= column.sql_type
<ide> def build_statement_pool
<ide> StatementPool.new(self.class.type_cast_config_to_integer(@config[:statement_limit]))
<ide> end
<ide>
<del> def mismatched_foreign_key(message, sql:, binds:)
<add> def mismatched_foreign_key_details(message:, sql:)
<ide> foreign_key_pat =
<ide> /Referencing column '(\w+)' and referenced/i =~ message ? $1 : '\w+'
<ide>
<ide> def mismatched_foreign_key(message, sql:, binds:)
<ide> REFERENCES\s*(`?(?<target_table>\w+)`?)\s*\(`?(?<primary_key>\w+)`?\)
<ide> /xmi.match(sql)
<ide>
<del> options = {
<del> message: message,
<del> sql: sql,
<del> binds: binds,
<del> }
<add> options = {}
<ide>
<ide> if match
<ide> options[:table] = match[:table]
<ide> def mismatched_foreign_key(message, sql:, binds:)
<ide> options[:primary_key_column] = column_for(match[:target_table], match[:primary_key])
<ide> end
<ide>
<add> options
<add> end
<add>
<add> def mismatched_foreign_key(message, sql:, binds:)
<add> options = {
<add> message: message,
<add> sql: sql,
<add> binds: binds,
<add> }
<add>
<add> if sql
<add> options.update mismatched_foreign_key_details(message: message, sql: sql)
<add> else
<add> options[:query_parser] = ->(sql) { mismatched_foreign_key_details(message: message, sql: sql) }
<add> end
<add>
<ide> MismatchedForeignKey.new(**options)
<ide> end
<ide>
<ide><path>activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
<ide> def translate_exception(exception, message:, sql:, binds:)
<ide> when nil
<ide> if exception.message.match?(/connection is closed/i)
<ide> ConnectionNotEstablished.new(exception)
<add> elsif exception.is_a?(PG::ConnectionBad) && !exception.message.end_with?("\n")
<add> ConnectionNotEstablished.new(exception)
<ide> else
<ide> super
<ide> end
<ide> def translate_exception(exception, message:, sql:, binds:)
<ide> end
<ide> end
<ide>
<del> def retryable_error?(exception)
<add> def retryable_query_error?(exception)
<add> end
<add>
<add> def retryable_connection_error?(exception)
<ide> case exception
<ide> when PG::ConnectionBad; !exception.message.end_with?("\n")
<ide> end
<ide><path>activerecord/lib/active_record/connection_adapters/sqlite3_adapter.rb
<ide> def translate_exception(exception, message:, sql:, binds:)
<ide> end
<ide> end
<ide>
<del> def retryable_error?(exception)
<del> exception.message.match?(/called on a closed database/i)
<del> end
<del>
<ide> COLLATE_REGEX = /.*"(\w+)".*collate\s+"(\w+)".*/i.freeze
<ide>
<ide> def table_structure_with_collation(table_name, basic_structure)
<ide><path>activerecord/lib/active_record/errors.rb
<ide> def initialize(message = nil, sql: nil, binds: nil)
<ide> end
<ide>
<ide> attr_reader :sql, :binds
<add>
<add> def set_query(sql, binds)
<add> unless @sql
<add> @sql = sql
<add> @binds = binds
<add> end
<add>
<add> self
<add> end
<ide> end
<ide>
<ide> # Defunct wrapper class kept for compatibility.
<ide> def initialize(
<ide> foreign_key: nil,
<ide> target_table: nil,
<ide> primary_key: nil,
<del> primary_key_column: nil
<add> primary_key_column: nil,
<add> query_parser: nil
<ide> )
<add> @original_message = message
<add> @query_parser = query_parser
<add>
<ide> if table
<ide> type = primary_key_column.bigint? ? :bigint : primary_key_column.type
<ide> msg = <<~EOM.squish
<ide> def initialize(
<ide> if message
<ide> msg << "\nOriginal message: #{message}"
<ide> end
<add>
<ide> super(msg, sql: sql, binds: binds)
<ide> end
<add>
<add> def set_query(sql, binds)
<add> if @query_parser && !@sql
<add> self.class.new(
<add> message: @original_message,
<add> sql: sql,
<add> binds: binds,
<add> **@query_parser.call(sql)
<add> ).tap do |exception|
<add> exception.set_backtrace backtrace
<add> end
<add> else
<add> super
<add> end
<add> end
<ide> end
<ide>
<ide> # Raised when a record cannot be inserted or updated because it would violate a not null constraint.
<ide><path>activerecord/test/cases/statement_invalid_test.rb
<ide> def error_number
<ide> sql = Book.where(author_id: 96, cover: "hard").to_sql
<ide> error = assert_raises(ActiveRecord::StatementInvalid) do
<ide> Book.connection.send(:log, sql, Book.name) do
<del> raise MockDatabaseError
<add> Book.connection.send(:with_raw_connection) do
<add> raise MockDatabaseError
<add> end
<ide> end
<ide> end
<ide> assert_not error.message.include?("SELECT")
<ide> def error_number
<ide> binds = [Minitest::Mock.new, Minitest::Mock.new]
<ide> error = assert_raises(ActiveRecord::StatementInvalid) do
<ide> Book.connection.send(:log, sql, Book.name, binds) do
<del> raise MockDatabaseError
<add> Book.connection.send(:with_raw_connection) do
<add> raise MockDatabaseError
<add> end
<ide> end
<ide> end
<ide> assert_equal error.sql, sql | 6 |
Javascript | Javascript | handle video challenges | 1ec6cf1efd92fc5c8588466986dbe2dbbd64f930 | <ide><path>curriculum/getChallenges.js
<ide> async function buildCurriculum(file, curriculum) {
<ide> async function parseTranslation(engPath, transPath, dict) {
<ide> const engChal = await parseMarkdown(engPath);
<ide> const translatedChal = await parseMarkdown(transPath);
<del> const codeLang = engChal.files[0] ? engChal.files[0].ext : null;
<del>
<del> const engWithTranslatedComments = translateCommentsInChallenge(
<del> engChal,
<del> getChallengeLang(transPath),
<del> dict,
<del> codeLang
<del> );
<add> const codeLang =
<add> engChal.files && engChal.files[0] ? engChal.files[0].ext : null;
<add>
<add> const engWithTranslatedComments = codeLang
<add> ? translateCommentsInChallenge(
<add> engChal,
<add> getChallengeLang(transPath),
<add> dict,
<add> codeLang
<add> )
<add> : engChal;
<ide> return mergeChallenges(engWithTranslatedComments, translatedChal);
<ide> }
<ide>
<ide><path>tools/challenge-md-parser/translation-parser/__fixtures__/challenge-objects.js
<ide> const ENGLISH_CHALLENGE_NO_FILES = {
<ide> files: []
<ide> };
<ide>
<add>const ENGLISH_VIDEO_CHALLENGE = {
<add> id: 'id',
<add> title: 'Title',
<add> challengeType: 0,
<add> videoId: 'abc123',
<add> forumTopicId: 12345,
<add> question: 'english question',
<add> description: 'description html string',
<add> instructions: 'instructions html string'
<add>};
<add>
<ide> const TRANSLATED_CERTIFICATE = {
<ide> id: '561add10cb82ac38a17513bc',
<ide> title: 'Responsive Web Design Certificate',
<ide> const TRANSLATED_CHALLENGE = {
<ide> ]
<ide> };
<ide>
<add>const TRANSLATED_VIDEO_CHALLENGE = {
<add> id: 'id',
<add> title: 'Title',
<add> challengeType: 0,
<add> videoId: 'abc123',
<add> forumTopicId: 12345,
<add> question: 'translated question',
<add> description: 'translated description html string',
<add> instructions: 'translated instructions html string'
<add>};
<add>
<ide> const WRONG_NUM_TESTS_CHALLENGE = {
<ide> id: 'id',
<ide> title: 'Title',
<ide> exports.ENGLISH_CERTIFICATE = ENGLISH_CERTIFICATE;
<ide> exports.ENGLISH_CHALLENGE = ENGLISH_CHALLENGE;
<ide> exports.ENGLISH_CHALLENGE_TWO_SOLUTIONS = ENGLISH_CHALLENGE_TWO_SOLUTIONS;
<ide> exports.ENGLISH_CHALLENGE_NO_FILES = ENGLISH_CHALLENGE_NO_FILES;
<add>exports.ENGLISH_VIDEO_CHALLENGE = ENGLISH_VIDEO_CHALLENGE;
<ide> exports.TRANSLATED_CERTIFICATE = TRANSLATED_CERTIFICATE;
<ide> exports.TRANSLATED_CHALLENGE = TRANSLATED_CHALLENGE;
<add>exports.TRANSLATED_VIDEO_CHALLENGE = TRANSLATED_VIDEO_CHALLENGE;
<ide> exports.WRONG_NUM_TESTS_CHALLENGE = WRONG_NUM_TESTS_CHALLENGE;
<ide><path>tools/challenge-md-parser/translation-parser/translation-parser.js
<ide> exports.translateCommentsInChallenge = (challenge, lang, dict, codeLang) => {
<ide> };
<ide>
<ide> exports.mergeChallenges = (engChal, transChal) => {
<del> if (!transChal.tests || transChal.tests.length !== engChal.tests.length)
<del> throw Error(
<del> `Challenges in both languages must have the same number of tests.
<del> title: ${engChal.title}
<del> localeTitle: ${transChal.localeTitle}`
<del> );
<del>
<del> const translatedTests =
<del> engChal.challengeType === 7
<del> ? transChal.tests.map(({ title }, i) => ({
<del> title,
<del> id: engChal.tests[i].id
<del> }))
<del> : transChal.tests.map(({ text }, i) => ({
<del> text,
<del> testString: engChal.tests[i].testString
<del> }));
<add> const hasTests =
<add> (engChal.tests && transChal.tests) ||
<add> (engChal.question && transChal.question);
<ide> const challenge = {
<ide> ...engChal,
<ide> description: transChal.description,
<ide> instructions: transChal.instructions,
<ide> localeTitle: transChal.localeTitle,
<del> forumTopicId: transChal.forumTopicId,
<del> tests: translatedTests
<add> forumTopicId: transChal.forumTopicId
<ide> };
<add> if (!hasTests)
<add> throw Error(
<add> `Both challenges must have tests or questions.
<add> title: ${engChal.title}
<add> localeTitle: ${transChal.localeTitle}`
<add> );
<add> // TODO: this should break the build when we go to production, but
<add> // not for testing.
<add> if (transChal.tests && transChal.tests.length !== engChal.tests.length) {
<add> console.error(
<add> `Challenges in both languages must have the same number of tests.
<add> title: ${engChal.title}
<add> localeTitle: ${transChal.localeTitle}`
<add> );
<add> return challenge;
<add> }
<add>
<add> // throw Error(
<add> // `Challenges in both languages must have the same number of tests.
<add> // title: ${engChal.title}
<add> // localeTitle: ${transChal.localeTitle}`
<add> // );
<add>
<add> if (transChal.tests) {
<add> const translatedTests =
<add> engChal.challengeType === 7
<add> ? transChal.tests.map(({ title }, i) => ({
<add> title,
<add> id: engChal.tests[i].id
<add> }))
<add> : transChal.tests.map(({ text }, i) => ({
<add> text,
<add> testString: engChal.tests[i].testString
<add> }));
<add> challenge.tests = translatedTests;
<add> } else {
<add> challenge.question = transChal.question;
<add> }
<add>
<ide> // certificates do not have forumTopicIds
<ide> if (challenge.challengeType === 7) delete challenge.forumTopicId;
<ide> return challenge;
<ide><path>tools/challenge-md-parser/translation-parser/translation-parser.test.js
<ide> const {
<ide> ENGLISH_CHALLENGE,
<ide> ENGLISH_CHALLENGE_NO_FILES,
<ide> ENGLISH_CHALLENGE_TWO_SOLUTIONS,
<add> ENGLISH_VIDEO_CHALLENGE,
<ide> TRANSLATED_CERTIFICATE,
<ide> TRANSLATED_CHALLENGE,
<del> WRONG_NUM_TESTS_CHALLENGE
<add> TRANSLATED_VIDEO_CHALLENGE
<add> // WRONG_NUM_TESTS_CHALLENGE
<ide> } = require('./__fixtures__/challenge-objects');
<ide> const { SIMPLE_TRANSLATION } = require('./__mocks__/mock-comments');
<ide>
<ide> const COMBINED_CERTIFICATE = mergeChallenges(
<ide> TRANSLATED_CERTIFICATE
<ide> );
<ide>
<add>const COMBINED_VIDEO_CHALLENGE = mergeChallenges(
<add> ENGLISH_VIDEO_CHALLENGE,
<add> TRANSLATED_VIDEO_CHALLENGE
<add>);
<add>
<ide> let logSpy;
<ide>
<ide> describe('translation parser', () => {
<ide> describe('translation parser', () => {
<ide> TRANSLATED_CHALLENGE.localeTitle
<ide> );
<ide> });
<del> it('throws an error if the numbers of tests do not match', () => {
<del> expect(() =>
<del> mergeChallenges(ENGLISH_CHALLENGE, WRONG_NUM_TESTS_CHALLENGE)
<del> ).toThrow();
<del> });
<add> // TODO: reinstate this after alpha testing.
<add> // it('throws an error if the numbers of tests do not match', () => {
<add> // expect(() =>
<add> // mergeChallenges(ENGLISH_CHALLENGE, WRONG_NUM_TESTS_CHALLENGE)
<add> // ).toThrow();
<add> // });
<ide> it('takes the forum id from the second challenge', () => {
<ide> expect(COMBINED_CHALLENGE.forumTopicId).toBe(
<ide> TRANSLATED_CHALLENGE.forumTopicId
<ide> describe('translation parser', () => {
<ide> false
<ide> );
<ide> });
<add> it('takes the question from the second challenge', () => {
<add> expect(COMBINED_VIDEO_CHALLENGE.question).toBe(
<add> TRANSLATED_VIDEO_CHALLENGE.question
<add> );
<add> });
<ide> });
<ide>
<ide> describe('translateCommentsInChallenge', () => { | 4 |
PHP | PHP | fix incorrect paths in missingelementexception | 7177ecaa0fa131b53bf8ef6fa2630ddfd50c97c7 | <ide><path>src/View/Exception/MissingTemplateException.php
<ide> */
<ide> class MissingTemplateException extends CakeException
<ide> {
<add> /**
<add> * @var string
<add> */
<add> protected $templateName;
<add>
<ide> /**
<ide> * @var string
<ide> */
<ide> class MissingTemplateException extends CakeException
<ide> */
<ide> public function __construct($file, array $paths = [], ?int $code = null, ?Throwable $previous = null)
<ide> {
<del> $this->file = is_array($file) ? array_pop($file) : $file;
<add> if (is_array($file)) {
<add> $this->file = array_pop($file);
<add> $this->templateName = array_pop($file);
<add> } else {
<add> $this->file = $file;
<add> }
<ide> $this->paths = $paths;
<ide>
<ide> parent::__construct($this->formatMessage(), $code, $previous);
<ide> public function __construct($file, array $paths = [], ?int $code = null, ?Throwa
<ide> */
<ide> public function formatMessage(): string
<ide> {
<del> $message = "{$this->type} file `{$this->file}` could not be found.";
<add> $name = $this->templateName ?? $this->file;
<add> $message = "{$this->type} file `{$name}` could not be found.";
<ide> if ($this->paths) {
<ide> $message .= "\n\nThe following paths were searched:\n\n";
<ide> foreach ($this->paths as $path) {
<ide><path>src/View/View.php
<ide> public function element(string $name, array $data = [], array $options = []): st
<ide> }
<ide>
<ide> if (empty($options['ignoreMissing'])) {
<del> [$plugin] = $this->pluginSplit($name, $pluginCheck);
<add> [$plugin, $elementName] = $this->pluginSplit($name, $pluginCheck);
<ide> $paths = iterator_to_array($this->getElementPaths($plugin));
<del> throw new MissingElementException($name . $this->_ext, $paths);
<add> throw new MissingElementException([$name . $this->_ext, $elementName . $this->_ext], $paths);
<ide> }
<ide>
<ide> return '';
<ide><path>tests/TestCase/View/ViewTest.php
<ide> public function testElementMissingPluginElement()
<ide> {
<ide> $this->expectException(\Cake\View\Exception\MissingElementException::class);
<ide> $this->expectExceptionMessage('Element file `TestPlugin.nope.php` could not be found');
<add> $this->expectExceptionMessage('test_app/templates/plugin/TestPlugin/element/nope.php');
<add> $this->expectExceptionMessage('test_app/Plugin/TestPlugin/templates/element/nope.php');
<ide>
<ide> $this->View->element('TestPlugin.nope');
<ide> } | 3 |
PHP | PHP | remove redundant test that was failing | f43aa6fe8717ce94402cb9f0c08ce03e206303a8 | <ide><path>lib/Cake/Test/TestCase/Cache/Engine/ApcEngineTest.php
<ide> *
<ide> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<ide> * @link http://book.cakephp.org/2.0/en/development/testing.html CakePHP(tm) Tests
<del> * @package Cake.Test.Case.Cache.Engine
<ide> * @since CakePHP(tm) v 1.2.0.5434
<ide> * @license http://www.opensource.org/licenses/mit-license.php MIT License
<ide> */
<ide> /**
<ide> * ApcEngineTest class
<ide> *
<del> * @package Cake.Test.Case.Cache.Engine
<ide> */
<ide> class ApcEngineTest extends TestCase {
<ide>
<ide> public function testExpiry() {
<ide> sleep(2);
<ide> $result = Cache::read('other_test', 'apc');
<ide> $this->assertFalse($result);
<del>
<del> $this->_configCache();
<del>
<del> $data = 'this is a test of the emergency broadcasting system';
<del> $result = Cache::write('other_test', $data, 'apc');
<del> $this->assertTrue($result);
<del>
<del> sleep(2);
<del> $result = Cache::read('other_test', 'apc');
<del> $this->assertFalse($result);
<del>
<del> sleep(2);
<del> $result = Cache::read('other_test', 'apc');
<del> $this->assertFalse($result);
<ide> }
<ide>
<ide> /** | 1 |
Text | Text | unify quotes in an assert.md code example | 211813c99c97e2c48c67323a7ee5cb2ed7f03d57 | <ide><path>doc/api/assert.md
<ide> assert.notStrictEqual(a, b);
<ide> assert(!Object.is(a, b));
<ide> // but Object.is() does!
<ide>
<del>const str1 = "foo";
<del>const str2 = "foo";
<add>const str1 = 'foo';
<add>const str2 = 'foo';
<ide> assert.strictEqual(str1 / 1, str2 / 1);
<ide> // AssertionError: NaN === NaN
<ide> // Strict Equality Comparison can't be used to check NaN... | 1 |
Go | Go | add non-experimental daemon as a test requirement | c7076d26709f3fa277bd11e1dffdc8fc7833d38e | <ide><path>integration-cli/docker_cli_daemon_not_experimental_test.go
<del>// +build daemon,!windows,!experimental
<del>
<del>package main
<del>
<del>import (
<del> "io/ioutil"
<del> "os"
<del> "strings"
<del>
<del> "github.com/go-check/check"
<del>)
<del>
<del>// os.Kill should kill daemon ungracefully, leaving behind container mounts.
<del>// A subsequent daemon restart shoud clean up said mounts.
<del>func (s *DockerDaemonSuite) TestCleanupMountsAfterDaemonKill(c *check.C) {
<del> c.Assert(s.d.StartWithBusybox(), check.IsNil)
<del>
<del> out, err := s.d.Cmd("run", "-d", "busybox", "top")
<del> c.Assert(err, check.IsNil, check.Commentf("Output: %s", out))
<del> id := strings.TrimSpace(out)
<del> c.Assert(s.d.cmd.Process.Signal(os.Kill), check.IsNil)
<del> mountOut, err := ioutil.ReadFile("/proc/self/mountinfo")
<del> c.Assert(err, check.IsNil, check.Commentf("Output: %s", mountOut))
<del>
<del> // container mounts should exist even after daemon has crashed.
<del> comment := check.Commentf("%s should stay mounted from older daemon start:\nDaemon root repository %s\n%s", id, s.d.folder, mountOut)
<del> c.Assert(strings.Contains(string(mountOut), id), check.Equals, true, comment)
<del>
<del> // restart daemon.
<del> if err := s.d.Restart(); err != nil {
<del> c.Fatal(err)
<del> }
<del>
<del> // Now, container mounts should be gone.
<del> mountOut, err = ioutil.ReadFile("/proc/self/mountinfo")
<del> c.Assert(err, check.IsNil, check.Commentf("Output: %s", mountOut))
<del> comment = check.Commentf("%s is still mounted from older daemon start:\nDaemon root repository %s\n%s", id, s.d.folder, mountOut)
<del> c.Assert(strings.Contains(string(mountOut), id), check.Equals, false, comment)
<del>}
<ide><path>integration-cli/docker_cli_daemon_test.go
<ide> func (s *DockerDaemonSuite) TestRunContainerWithBridgeNone(c *check.C) {
<ide> check.Commentf("The network interfaces in container should be the same with host when --net=host when bridge network is disabled: %s", out))
<ide> }
<ide>
<add>// os.Kill should kill daemon ungracefully, leaving behind container mounts.
<add>// A subsequent daemon restart shoud clean up said mounts.
<add>func (s *DockerDaemonSuite) TestCleanupMountsAfterDaemonKill(c *check.C) {
<add> testRequires(c, NotExperimentalDaemon)
<add> c.Assert(s.d.StartWithBusybox(), check.IsNil)
<add>
<add> out, err := s.d.Cmd("run", "-d", "busybox", "top")
<add> c.Assert(err, check.IsNil, check.Commentf("Output: %s", out))
<add> id := strings.TrimSpace(out)
<add> c.Assert(s.d.cmd.Process.Signal(os.Kill), check.IsNil)
<add> mountOut, err := ioutil.ReadFile("/proc/self/mountinfo")
<add> c.Assert(err, check.IsNil, check.Commentf("Output: %s", mountOut))
<add>
<add> // container mounts should exist even after daemon has crashed.
<add> comment := check.Commentf("%s should stay mounted from older daemon start:\nDaemon root repository %s\n%s", id, s.d.folder, mountOut)
<add> c.Assert(strings.Contains(string(mountOut), id), check.Equals, true, comment)
<add>
<add> // restart daemon.
<add> if err := s.d.Restart(); err != nil {
<add> c.Fatal(err)
<add> }
<add>
<add> // Now, container mounts should be gone.
<add> mountOut, err = ioutil.ReadFile("/proc/self/mountinfo")
<add> c.Assert(err, check.IsNil, check.Commentf("Output: %s", mountOut))
<add> comment = check.Commentf("%s is still mounted from older daemon start:\nDaemon root repository %s\n%s", id, s.d.folder, mountOut)
<add> c.Assert(strings.Contains(string(mountOut), id), check.Equals, false, comment)
<add>}
<add>
<ide> func (s *DockerDaemonSuite) TestDaemonRestartWithContainerRunning(t *check.C) {
<ide> if err := s.d.StartWithBusybox(); err != nil {
<ide> t.Fatal(err)
<ide><path>integration-cli/requirements.go
<ide> import (
<ide> "strings"
<ide> "time"
<ide>
<add> "github.com/docker/docker/utils"
<ide> "github.com/go-check/check"
<ide> )
<ide>
<ide> var (
<ide> func() bool { return daemonPlatform == "linux" },
<ide> "Test requires a Linux daemon",
<ide> }
<add> NotExperimentalDaemon = testRequirement{
<add> func() bool { return !utils.ExperimentalBuild() },
<add> "Test requires a non experimental daemon",
<add> }
<ide> NotArm = testRequirement{
<ide> func() bool { return os.Getenv("DOCKER_ENGINE_GOARCH") != "arm" },
<ide> "Test requires a daemon not running on ARM", | 3 |
Mixed | Python | add example code of binary/text data conversion | 85836119d8c628f92b9bbc3b2a3857b76231da5e | <ide><path>textsum/README.md
<ide> for example vocabulary format. In <b>How To Run</b> below, users can use toy
<ide> data and vocab provided in the data/ directory to run the training by replacing
<ide> the data directory flag.
<ide>
<add>data_convert_example.py contains example of convert between binary and text.
<add>
<ide>
<ide> <b>Experiment Result</b>
<ide>
<ide><path>textsum/data_convert_example.py
<add>"""Example of Converting TextSum model data.
<add>Usage:
<add>python data_convert_example.py --command binary_to_text --in_file data/data --out_file data/text_data
<add>python data_convert_example.py --command text_to_binary --in_file data/text_data --out_file data/binary_data
<add>python data_convert_example.py --command binary_to_text --in_file data/binary_data --out_file data/text_data2
<add>diff data/text_data2 data/text_data
<add>"""
<add>
<add>import struct
<add>import sys
<add>
<add>import tensorflow as tf
<add>from tensorflow.core.example import example_pb2
<add>
<add>FLAGS = tf.app.flags.FLAGS
<add>tf.app.flags.DEFINE_string('command', 'binary_to_text',
<add> 'Either binary_to_text or text_to_binary.'
<add> 'Specify FLAGS.in_file accordingly.')
<add>tf.app.flags.DEFINE_string('in_file', '', 'path to file')
<add>tf.app.flags.DEFINE_string('out_file', '', 'path to file')
<add>
<add>def _binary_to_text():
<add> reader = open(FLAGS.in_file, 'rb')
<add> writer = open(FLAGS.out_file, 'w')
<add> while True:
<add> len_bytes = reader.read(8)
<add> if not len_bytes:
<add> sys.stderr.write('Done reading\n')
<add> return
<add> str_len = struct.unpack('q', len_bytes)[0]
<add> tf_example_str = struct.unpack('%ds' % str_len, reader.read(str_len))[0]
<add> tf_example = example_pb2.Example.FromString(tf_example_str)
<add> examples = []
<add> for key in tf_example.features.feature:
<add> examples.append('%s=%s' % (key, tf_example.features.feature[key].bytes_list.value[0]))
<add> writer.write('%s\n' % '\t'.join(examples))
<add> reader.close()
<add> writer.close()
<add>
<add>
<add>def _text_to_binary():
<add> inputs = open(FLAGS.in_file, 'r').readlines()
<add> writer = open(FLAGS.out_file, 'wb')
<add> for inp in inputs:
<add> tf_example = example_pb2.Example()
<add> for feature in inp.strip().split('\t'):
<add> (k, v) = feature.split('=')
<add> tf_example.features.feature[k].bytes_list.value.extend([v])
<add> tf_example_str = tf_example.SerializeToString()
<add> str_len = len(tf_example_str)
<add> writer.write(struct.pack('q', str_len))
<add> writer.write(struct.pack('%ds' % str_len, tf_example_str))
<add> writer.close()
<add>
<add>
<add>def main(unused_argv):
<add> assert FLAGS.command and FLAGS.in_file and FLAGS.out_file
<add> if FLAGS.command == 'binary_to_text':
<add> _binary_to_text()
<add> elif FLAGS.command == 'text_to_binary':
<add> _text_to_binary()
<add>
<add>
<add>if __name__ == '__main__':
<add> tf.app.run() | 2 |
Python | Python | remove docstrings from test_ functions | 420f6099085f6742a4b49e4130559dabfdf6276a | <ide><path>numpy/random/tests/test_regression.py
<ide> class TestRegression(TestCase):
<ide>
<ide> def test_VonMises_range(self):
<del> """Make sure generated random variables are in [-pi, pi].
<del>
<del> Regression test for ticket #986.
<del> """
<add> # Make sure generated random variables are in [-pi, pi].
<add> # Regression test for ticket #986.
<ide> for mu in np.linspace(-7., 7., 5):
<ide> r = random.mtrand.vonmises(mu, 1, 50)
<ide> assert_(np.all(r > -np.pi) and np.all(r <= np.pi))
<ide>
<ide> def test_hypergeometric_range(self):
<del> """Test for ticket #921"""
<add> # Test for ticket #921
<ide> assert_(np.all(np.random.hypergeometric(3, 18, 11, size=10) < 4))
<ide> assert_(np.all(np.random.hypergeometric(18, 3, 11, size=10) > 0))
<ide>
<ide> def test_logseries_convergence(self):
<del> """Test for ticket #923"""
<add> # Test for ticket #923
<ide> N = 1000
<ide> np.random.seed(0)
<ide> rvsn = np.random.logseries(0.8, size=N)
<ide> def test_randint_range(self) :
<ide> raise AssertionError
<ide>
<ide> def test_shuffle_mixed_dimension(self):
<del> """Test for trac ticket #2074"""
<add> # Test for trac ticket #2074
<ide> for t in [[1, 2, 3, None],
<ide> [(1, 1), (2, 2), (3, 3), None],
<ide> [1, (2, 2), (3, 3), None],
<ide> def test_call_within_randomstate(self):
<ide> assert_array_equal(m.choice(10, size=10, p=np.ones(10.)/10), res)
<ide>
<ide> def test_multivariate_normal_size_types(self):
<del> """Test for multivariate_normal issue with 'size' argument.
<del>
<del> Check that the multivariate_normal size argument can be a
<del> numpy integer.
<del>
<del> """
<add> # Test for multivariate_normal issue with 'size' argument.
<add> # Check that the multivariate_normal size argument can be a
<add> # numpy integer.
<ide> np.random.multivariate_normal([0], [[0]], size=1)
<ide> np.random.multivariate_normal([0], [[0]], size=np.int_(1))
<ide> np.random.multivariate_normal([0], [[0]], size=np.int64(1)) | 1 |
Text | Text | fix a broken link in contributing.md | ac293fc38bcdedc2b9e1e29e8b9a5d421e716bf4 | <ide><path>CONTRIBUTING.md
<ide> Small pull requests are much easier to review and more likely to get merged. Mak
<ide> 1. Fork [the repository](https://github.com/facebook/react-native) and create your branch from `master`.
<ide> 2. Add the copyright notice to the top of any new files you've added.
<ide> 3. Describe your [**test plan**](https://facebook.github.io/react-native/docs/contributing.html#test-plan) in your commit.
<del>4. Ensure [**tests pass**](https://facebook.github.io/react-native/docs/contributing.html#contrinuous-integration-tests) on both Travis and Circle CI.
<add>4. Ensure [**tests pass**](https://facebook.github.io/react-native/docs/contributing.html#continuous-integration-tests) on both Travis and Circle CI.
<ide> 5. Make sure your code lints (`npm run lint`).
<ide> 6. If you haven't already, [sign the CLA](https://code.facebook.com/cla).
<ide> | 1 |
PHP | PHP | adjust jsonresource $wrap docblock | 724cd761e662971d6d36799934b3ac7c57cefb7d | <ide><path>src/Illuminate/Http/Resources/Json/JsonResource.php
<ide> class JsonResource implements ArrayAccess, JsonSerializable, Responsable, UrlRou
<ide> /**
<ide> * The "data" wrapper that should be applied.
<ide> *
<del> * @var string
<add> * @var string|null
<ide> */
<ide> public static $wrap = 'data';
<ide> | 1 |
PHP | PHP | prevent insecure characters in locale | c248521f502c74c6cea7b0d221639d4aa752d5db | <ide><path>src/Illuminate/Translation/Translator.php
<ide> use Illuminate\Support\NamespacedItemResolver;
<ide> use Illuminate\Support\Str;
<ide> use Illuminate\Support\Traits\Macroable;
<add>use InvalidArgumentException;
<ide>
<ide> class Translator extends NamespacedItemResolver implements TranslatorContract
<ide> {
<ide> public function getLocale()
<ide> */
<ide> public function setLocale($locale)
<ide> {
<add> if (Str::contains($locale, ['.', '/', '\\'])) {
<add> throw new InvalidArgumentException('Invalid characters present in locale.');
<add> }
<add>
<ide> $this->locale = $locale;
<ide> }
<ide> | 1 |
PHP | PHP | remove cake autoloader | b3c82ea47f903d84e10f81c81952ffbdf7964e18 | <ide><path>lib/Cake/bootstrap.php
<ide>
<ide> require CAKE . 'basics.php';
<ide>
<del>if (!class_exists('Cake\Core\App')) {
<del> require CAKE . 'Core/ClassLoader.php';
<del> (new \Cake\Core\ClassLoader('Cake', CORE_PATH))->register();
<del>}
<del>
<ide> use Cake\Core\App;
<ide> use Cake\Core\Configure;
<ide> | 1 |
Javascript | Javascript | improve code style | 760da5943d6d31caa739772f15704b95f8e3a2ae | <ide><path>examples/js/loaders/SVGLoader.js
<ide> THREE.SVGLoader.prototype = {
<ide>
<ide> transformStack.pop();
<ide>
<del> if ( transformStack.length > 0 ) currentTransform.copy( transformStack[ transformStack.length - 1 ] );
<del> else currentTransform.identity();
<add> if ( transformStack.length > 0 ) {
<add>
<add> currentTransform.copy( transformStack[ transformStack.length - 1 ] );
<add>
<add> }
<add> else {
<add>
<add> currentTransform.identity();
<add>
<add> }
<ide>
<ide> }
<ide> | 1 |
PHP | PHP | remove content form 204 test | 661bc89cf4f1e404c4b3aef468265ee15864e4b0 | <ide><path>tests/TestCase/Http/Client/ResponseTest.php
<ide> public static function isSuccessProvider()
<ide> new Response([
<ide> 'HTTP/1.1 204 No Content',
<ide> 'Content-Type: text/html'
<del> ], 'ok')
<add> ], '')
<ide> ],
<ide> [
<ide> false, | 1 |
Text | Text | add missing comma | e0af2122d241c3562b8302b9b1a0ed433c0a0c32 | <ide><path>docs/getting-started.md
<ide> in preferences.
<ide>
<ide> ## Configuration
<ide>
<del>Press `cmd-,` to open the Settings view. This is the place to change settings
<add>Press `cmd-,` to open the Settings view. This is the place to change settings,
<ide> install packages, and change the theme.
<ide>
<ide> For more advanced configuration see the [customization guide][customization]. | 1 |
Ruby | Ruby | remove unused method | 3aa75f5e18fa2cde5cf4bfe5974fb700edc63114 | <ide><path>Library/Homebrew/metafiles.rb
<ide> class Metafiles
<ide> news notes notice readme todo
<ide> ]
<ide>
<del> def + other
<del> @metafiles + other
<del> end
<del>
<ide> def should_copy? file
<ide> include? file
<ide> end | 1 |
Ruby | Ruby | remove old method before redefining it | 5ced275ac1fc8d52654521bf61742cb7f2f0d796 | <ide><path>actionpack/lib/action_dispatch/routing/route_set.rb
<ide> def define_hash_access(route, name, kind, options)
<ide>
<ide> # We use module_eval to avoid leaks
<ide> @module.module_eval <<-END_EVAL, __FILE__, __LINE__ + 1
<add> remove_method :#{selector} if method_defined?(:#{selector})
<ide> def #{selector}(*args)
<ide> options = args.extract_options!
<ide>
<ide> def define_url_helper(route, name, kind, options)
<ide> hash_access_method = hash_access_name(name, kind)
<ide>
<ide> @module.module_eval <<-END_EVAL, __FILE__, __LINE__ + 1
<add> remove_method :#{selector} if method_defined?(:#{selector})
<ide> def #{selector}(*args)
<ide> url_for(#{hash_access_method}(*args))
<ide> end | 1 |
PHP | PHP | fix loading of controller with nested prefix | 515aba9043291442a0944849f4a25f4cfff73df8 | <ide><path>src/Routing/Filter/ControllerFactoryFilter.php
<ide> protected function _getController($request, $response)
<ide> $controller = $request->params['controller'];
<ide> }
<ide> if (!empty($request->params['prefix'])) {
<del> if (strpos('/', $request->params['prefix']) === false) {
<add> if (strpos($request->params['prefix'], '/') === false) {
<ide> $namespace .= '/' . Inflector::camelize($request->params['prefix']);
<ide> } else {
<ide> $prefixes = array_map(
<ide><path>tests/TestCase/Routing/Filter/ControllerFactoryFilterTest.php
<ide> public function testBeforeDispatch()
<ide> 'TestApp\Controller\Admin\PostsController',
<ide> get_class($event->data['controller'])
<ide> );
<add>
<add> $request->addParams(['prefix' => 'admin/sub', 'controller' => 'Posts', 'action' => 'index']);
<add> $event = new Event(__CLASS__, $this, compact('request', 'response'));
<add> $filter->beforeDispatch($event);
<add>
<add> $this->assertEquals(
<add> 'TestApp\Controller\Admin\Sub\PostsController',
<add> get_class($event->data['controller'])
<add> );
<ide> }
<ide> }
<ide><path>tests/test_app/TestApp/Controller/Admin/Sub/PostsController.php
<add><?php
<add>/**
<add> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org)
<add> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<add> *
<add> * Licensed under The MIT License
<add> * Redistributions of files must retain the above copyright notice.
<add> *
<add> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<add> * @link http://cakephp.org CakePHP(tm) Project
<add> * @since 3.2.1
<add> * @license http://www.opensource.org/licenses/mit-license.php MIT License
<add> */
<add>namespace TestApp\Controller\Admin\Sub;
<add>
<add>use Cake\Controller\Controller;
<add>
<add>/**
<add> * Posts Controller class.
<add> *
<add> * For testing nested prefix routing / controller loading.
<add> */
<add>class PostsController extends Controller
<add>{
<add>
<add> /**
<add> * index action
<add> *
<add> * @return void
<add> */
<add> public function index()
<add> {
<add> }
<add>
<add> /**
<add> * index action
<add> *
<add> * @return void
<add> */
<add> public function add()
<add> {
<add> }
<add>} | 3 |
PHP | PHP | fix pivot bug | 1d8c79de4832114175150735792213532986f9a6 | <ide><path>src/Illuminate/Database/Eloquent/Relations/BelongsToMany.php
<ide> public function newExistingPivot(array $attributes = array())
<ide> */
<ide> public function withPivot($columns)
<ide> {
<del> $this->pivotColumns = is_array($columns) ? $columns : func_get_args();
<add> $columns = is_array($columns) ? $columns : func_get_args();
<add>
<add> $this->pivotColumns = array_merge($this->pivotColumns, $columns);
<ide>
<ide> return $this;
<ide> } | 1 |
Text | Text | update napi_async_init documentation | 3662b0c2c75b670599c363be8322064a3359e432 | <ide><path>doc/api/n-api.md
<ide> napi_status napi_async_init(napi_env env,
<ide> ```
<ide>
<ide> * `[in] env`: The environment that the API is invoked under.
<del>* `[in] async_resource`: An optional object associated with the async work
<add>* `[in] async_resource`: Object associated with the async work
<ide> that will be passed to possible `async_hooks` [`init` hooks][].
<add> In order to retain ABI compatibility with previous versions,
<add> passing `NULL` for `async_resource` will not result in an error, however,
<add> this will result incorrect operation of async hooks for the
<add> napi_async_context created. Potential issues include
<add> loss of async context when using the AsyncLocalStorage API.
<ide> * `[in] async_resource_name`: Identifier for the kind of resource
<ide> that is being provided for diagnostic information exposed by the
<ide> `async_hooks` API. | 1 |
Python | Python | fix tfsegformerforsemanticsegmentation doctest | 51227e26ab8fe6d1a19804da697786649f9340e3 | <ide><path>src/transformers/models/segformer/modeling_tf_segformer.py
<ide> def call(
<ide> >>> outputs = model(**inputs, training=False)
<ide> >>> # logits are of shape (batch_size, num_labels, height, width)
<ide> >>> logits = outputs.logits
<del> >>> logits.shape
<del> (1, 150, 128, 128)
<add> >>> list(logits.shape)
<add> [1, 150, 128, 128]
<ide> ```"""
<ide> return_dict = return_dict if return_dict is not None else self.config.use_return_dict
<ide> output_hidden_states = ( | 1 |
Ruby | Ruby | add homebrew/tex to search params | ec7b513a538a779e4ae585dfc9656865d0fb1e57 | <ide><path>Library/Homebrew/cmd/search.rb
<ide> def search
<ide> elsif ARGV.include? '--debian'
<ide> exec_browser "https://packages.debian.org/search?keywords=#{ARGV.next}&searchon=names&suite=all§ion=all"
<ide> elsif ARGV.include? '--opensuse'
<del> exec_browser "http://software.opensuse.org/search?q=#{ARGV.next}"
<add> exec_browser "https://software.opensuse.org/search?q=#{ARGV.next}"
<ide> elsif ARGV.include? '--fedora'
<ide> exec_browser "https://admin.fedoraproject.org/pkgdb/packages/%2A#{ARGV.next}%2A/"
<ide> elsif ARGV.include? '--ubuntu'
<ide> def search
<ide> %w{Homebrew binary},
<ide> %w{Homebrew python},
<ide> %w{Homebrew php},
<add> %w{Homebrew tex},
<ide> %w{Homebrew x11},
<ide> %w{Caskroom cask},
<ide> ] | 1 |
Go | Go | remove pointers from the sysinfo struct | c2bc637a0306657a86f3739bd5bdcd5db8d22539 | <ide><path>pkg/sysinfo/sysinfo.go
<ide> type SysInfo struct {
<ide> // Whether the kernel supports AppArmor or not
<ide> AppArmor bool
<ide>
<del> *cgroupMemInfo
<del> *cgroupCPUInfo
<del> *cgroupBlkioInfo
<del> *cgroupCpusetInfo
<add> cgroupMemInfo
<add> cgroupCPUInfo
<add> cgroupBlkioInfo
<add> cgroupCpusetInfo
<ide>
<ide> // Whether IPv4 forwarding is supported or not, if this was disabled, networking will not work
<ide> IPv4ForwardingDisabled bool
<ide><path>pkg/sysinfo/sysinfo_linux.go
<ide> func New(quiet bool) *SysInfo {
<ide> return sysInfo
<ide> }
<ide>
<del>func checkCgroupMem(quiet bool) *cgroupMemInfo {
<del> info := &cgroupMemInfo{}
<add>// checkCgroupMem reads the memory information from the memory cgroup mount point.
<add>func checkCgroupMem(quiet bool) cgroupMemInfo {
<ide> mountPoint, err := cgroups.FindCgroupMountpoint("memory")
<ide> if err != nil {
<ide> if !quiet {
<ide> logrus.Warnf("Your kernel does not support cgroup memory limit: %v", err)
<ide> }
<del> return info
<add> return cgroupMemInfo{}
<ide> }
<del> info.MemoryLimit = true
<ide>
<del> info.SwapLimit = cgroupEnabled(mountPoint, "memory.memsw.limit_in_bytes")
<del> if !quiet && !info.SwapLimit {
<add> swapLimit := cgroupEnabled(mountPoint, "memory.memsw.limit_in_bytes")
<add> if !quiet && !swapLimit {
<ide> logrus.Warn("Your kernel does not support swap memory limit.")
<ide> }
<del> info.OomKillDisable = cgroupEnabled(mountPoint, "memory.oom_control")
<del> if !quiet && !info.OomKillDisable {
<add> oomKillDisable := cgroupEnabled(mountPoint, "memory.oom_control")
<add> if !quiet && !oomKillDisable {
<ide> logrus.Warnf("Your kernel does not support oom control.")
<ide> }
<del> info.MemorySwappiness = cgroupEnabled(mountPoint, "memory.swappiness")
<del> if !quiet && !info.MemorySwappiness {
<add> memorySwappiness := cgroupEnabled(mountPoint, "memory.swappiness")
<add> if !quiet && !memorySwappiness {
<ide> logrus.Warnf("Your kernel does not support memory swappiness.")
<ide> }
<ide>
<del> return info
<add> return cgroupMemInfo{
<add> MemoryLimit: true,
<add> SwapLimit: swapLimit,
<add> OomKillDisable: oomKillDisable,
<add> MemorySwappiness: memorySwappiness,
<add> }
<ide> }
<ide>
<del>func checkCgroupCPU(quiet bool) *cgroupCPUInfo {
<del> info := &cgroupCPUInfo{}
<add>// checkCgroupCPU reads the cpu information from the cpu cgroup mount point.
<add>func checkCgroupCPU(quiet bool) cgroupCPUInfo {
<ide> mountPoint, err := cgroups.FindCgroupMountpoint("cpu")
<ide> if err != nil {
<ide> if !quiet {
<ide> logrus.Warn(err)
<ide> }
<del> return info
<add> return cgroupCPUInfo{}
<ide> }
<ide>
<del> info.CPUShares = cgroupEnabled(mountPoint, "cpu.shares")
<del> if !quiet && !info.CPUShares {
<add> cpuShares := cgroupEnabled(mountPoint, "cpu.shares")
<add> if !quiet && !cpuShares {
<ide> logrus.Warn("Your kernel does not support cgroup cpu shares")
<ide> }
<ide>
<del> info.CPUCfsPeriod = cgroupEnabled(mountPoint, "cpu.cfs_period_us")
<del> if !quiet && !info.CPUCfsPeriod {
<add> cpuCfsPeriod := cgroupEnabled(mountPoint, "cpu.cfs_period_us")
<add> if !quiet && !cpuCfsPeriod {
<ide> logrus.Warn("Your kernel does not support cgroup cfs period")
<ide> }
<ide>
<del> info.CPUCfsQuota = cgroupEnabled(mountPoint, "cpu.cfs_quota_us")
<del> if !quiet && !info.CPUCfsQuota {
<add> cpuCfsQuota := cgroupEnabled(mountPoint, "cpu.cfs_quota_us")
<add> if !quiet && !cpuCfsQuota {
<ide> logrus.Warn("Your kernel does not support cgroup cfs quotas")
<ide> }
<del> return info
<add> return cgroupCPUInfo{
<add> CPUShares: cpuShares,
<add> CPUCfsPeriod: cpuCfsPeriod,
<add> CPUCfsQuota: cpuCfsQuota,
<add> }
<ide> }
<ide>
<del>func checkCgroupBlkioInfo(quiet bool) *cgroupBlkioInfo {
<del> info := &cgroupBlkioInfo{}
<add>// checkCgroupBlkioInfo reads the blkio information from the blkio cgroup mount point.
<add>func checkCgroupBlkioInfo(quiet bool) cgroupBlkioInfo {
<ide> mountPoint, err := cgroups.FindCgroupMountpoint("blkio")
<ide> if err != nil {
<ide> if !quiet {
<ide> logrus.Warn(err)
<ide> }
<del> return info
<add> return cgroupBlkioInfo{}
<ide> }
<ide>
<del> info.BlkioWeight = cgroupEnabled(mountPoint, "blkio.weight")
<del> if !quiet && !info.BlkioWeight {
<add> w := cgroupEnabled(mountPoint, "blkio.weight")
<add> if !quiet && !w {
<ide> logrus.Warn("Your kernel does not support cgroup blkio weight")
<ide> }
<del> return info
<add> return cgroupBlkioInfo{BlkioWeight: w}
<ide> }
<ide>
<del>func checkCgroupCpusetInfo(quiet bool) *cgroupCpusetInfo {
<del> info := &cgroupCpusetInfo{}
<add>// checkCgroupCpusetInfo reads the cpuset information from the cpuset cgroup mount point.
<add>func checkCgroupCpusetInfo(quiet bool) cgroupCpusetInfo {
<ide> _, err := cgroups.FindCgroupMountpoint("cpuset")
<ide> if err != nil {
<ide> if !quiet {
<ide> logrus.Warn(err)
<ide> }
<del> return info
<add> return cgroupCpusetInfo{}
<ide> }
<ide>
<del> info.Cpuset = true
<del> return info
<add> return cgroupCpusetInfo{Cpuset: true}
<ide> }
<ide>
<ide> func cgroupEnabled(mountPoint, name string) bool { | 2 |
Python | Python | show timestamps in drift warning | 77587a784e54ce98041f0b34ca5216c942899631 | <ide><path>celery/events/state.py
<ide>
<ide> import threading
<ide>
<add>from datetime import datetime
<ide> from heapq import heappush, heappop
<ide> from itertools import islice
<ide> from time import time
<ide>
<ide> DRIFT_WARNING = """\
<ide> Substantial drift from %s may mean clocks are out of sync. Current drift is
<del>%s seconds (including message overhead).\
<add>%s seconds. [orig: %s recv: %s]
<ide> """
<ide>
<ide> logger = get_logger(__name__)
<ide> def update_heartbeat(self, received, timestamp):
<ide> return
<ide> drift = abs(int(received) - int(timestamp))
<ide> if drift > HEARTBEAT_DRIFT_MAX:
<del> warn(DRIFT_WARNING, self.hostname, drift)
<add> warn(DRIFT_WARNING, self.hostname, drift,
<add> datetime.fromtimestamp(received),
<add> datetime.fromtimestamp(timestamp))
<ide> heartbeats, hbmax = self.heartbeats, self.heartbeat_max
<ide> if not heartbeats or (received and received > heartbeats[-1]):
<ide> heappush(heartbeats, received) | 1 |
Python | Python | redirect standard fds to /dev/null | 0ebbc5ff9eeb45008f524ca83623b20e3ded77a0 | <ide><path>celery/platforms.py
<ide> def _create_pidlock(pidfile):
<ide> return pidlock
<ide>
<ide>
<add>def fileno(f):
<add> try:
<add> return f.fileno()
<add> except AttributeError:
<add> pass
<add>
<add>
<ide> class DaemonContext(object):
<ide> _is_open = False
<ide> workdir = DAEMON_WORKDIR
<ide> def __init__(self, pidfile=None, workdir=None, umask=None,
<ide> self.workdir = workdir or self.workdir
<ide> self.umask = self.umask if umask is None else umask
<ide> self.fake = fake
<add> self.stdfds = (sys.stdin, sys.stdout, sys.stderr)
<add>
<add> def redirect_to_null(self, fd):
<add> if fd:
<add> dest = os.open(os.devnull, os.O_RDWR)
<add> os.dup2(dest, fd)
<ide>
<ide> def open(self):
<ide> if not self._is_open:
<ide> def open(self):
<ide> os.chdir(self.workdir)
<ide> os.umask(self.umask)
<ide>
<add> preserve = [fileno(f) for f in self.stdfds if fileno(f)]
<ide> for fd in reversed(range(get_fdmax(default=2048))):
<del> with ignore_EBADF():
<del> os.close(fd)
<add> if fd not in preserve:
<add> with ignore_EBADF():
<add> os.close(fd)
<add>
<add> for fd in self.stdfds:
<add> self.redirect_to_null(fileno(fd))
<ide>
<ide> os.open(DAEMON_REDIRECT_TO, os.O_RDWR)
<ide> os.dup2(0, 1)
<ide><path>celery/tests/app/test_log.py
<ide> def test_logging_proxy(self):
<ide> p.flush()
<ide> p.close()
<ide> self.assertFalse(p.isatty())
<del> self.assertIsNone(p.fileno())
<ide>
<ide> def test_logging_proxy_recurse_protection(self):
<ide> logger = self.setup_logger(loglevel=logging.ERROR, logfile=None, | 2 |
Javascript | Javascript | fix typos in comments | 836c659d8f0b7683b7c1269b6d5ce567d7fa3a90 | <ide><path>lib/_http_server.js
<ide> function connectionListener(socket) {
<ide> }
<ide>
<ide> // When we're finished writing the response, check if this is the last
<del> // respose, if so destroy the socket.
<add> // response, if so destroy the socket.
<ide> res.on('finish', resOnFinish);
<ide> function resOnFinish() {
<ide> // Usually the first incoming element should be our request. it may
<ide><path>lib/util.js
<ide> exports.debuglog = function(set) {
<ide>
<ide>
<ide> /**
<del> * Echos the value of a value. Trys to print the value out
<add> * Echos the value of a value. Tries to print the value out
<ide> * in the best way possible given the different types.
<ide> *
<ide> * @param {Object} obj The object to print out.
<ide> function formatValue(ctx, value, recurseTimes) {
<ide>
<ide> if (typeof raw === 'string') {
<ide> // for boxed Strings, we have to remove the 0-n indexed entries,
<del> // since they just noisey up the output and are redundant
<add> // since they just noisy up the output and are redundant
<ide> keys = keys.filter(function(key) {
<ide> return !(key >= 0 && key < raw.length);
<ide> }); | 2 |
PHP | PHP | send charset=utf-8 if content-type is json | 9b479958f613e2ce1d290f26d246e76313e821f2 | <ide><path>lib/Cake/Network/CakeResponse.php
<ide> protected function _setContentType() {
<ide> if (in_array($this->_status, array(304, 204))) {
<ide> return;
<ide> }
<del> if (strpos($this->_contentType, 'text/') === 0 || $this->_contentType === 'application/json') {
<add> if (strpos($this->_contentType, 'text/') === 0) {
<ide> $this->header('Content-Type', "{$this->_contentType}; charset={$this->_charset}");
<add> } else if ($this->_contentType === 'application/json') {
<add> $this->header('Content-Type', "{$this->_contentType}; charset=UTF-8");
<ide> } else {
<ide> $this->header('Content-Type', "{$this->_contentType}");
<ide> } | 1 |
Python | Python | fix auto-linking in download command | ac4b88cce9f4090124be94bcea2d1c5d8fb2d81a | <ide><path>spacy/download.py
<ide> import plac
<ide> import requests
<ide> from os import path
<del>from .link import link
<add>from .link import link, link_package
<ide> from . import about
<ide> from . import util
<ide>
<ide> def download(model=None, direct=False):
<ide> compatibility = get_compatibility()
<ide> version = get_version(model_name, compatibility)
<ide> download_model('{m}-{v}/{m}-{v}.tar.gz'.format(m=model_name, v=version))
<del> link(model_name, model, force=True)
<add> link_package(model_name, model, force=True)
<ide>
<ide>
<ide> def get_compatibility():
<ide><path>spacy/link.py
<ide> def link(origin, link_name, force=False):
<ide> """Create a symlink for models within the spacy/data directory. Accepts
<ide> either the name of a pip package, or the local path to the model data
<ide> directory. Linking models allows loading them via spacy.load(link_name)."""
<del>
<ide> if is_package(origin):
<del> package_path = site.getsitepackages()[0]
<del> meta = get_meta(package_path, origin)
<del> data_dir = origin + '-' + meta['version']
<del> model_path = os.path.join(package_path, origin, data_dir)
<del> symlink(model_path, link_name, force)
<add> link_package(origin, link_name)
<ide> else:
<ide> symlink(origin, link_name, force)
<ide>
<ide>
<add>def link_package(origin, link_name, force=False):
<add> package_path = site.getsitepackages()[0]
<add> meta = get_meta(package_path, origin)
<add> data_dir = origin + '-' + meta['version']
<add> model_path = os.path.join(package_path, origin, data_dir)
<add> symlink(model_path, link_name, force)
<add>
<add>
<ide> def symlink(model_path, link_name, force):
<ide> if not os.path.isdir(model_path):
<ide> util.sys_exit( | 2 |
Javascript | Javascript | add coverage for util.inspect() | 0594577e69b11dccfa1d45453a712450a0cefbe0 | <ide><path>test/parallel/test-util-inspect.js
<ide> assert.strictEqual(
<ide> util.inspect(123456789.12345678, { numericSeparator: true }),
<ide> '123_456_789.123_456_78'
<ide> );
<add>
<add> assert.strictEqual(
<add> util.inspect(-123456789.12345678, { numericSeparator: true }),
<add> '-123_456_789.123_456_78'
<add> );
<ide> } | 1 |
Javascript | Javascript | simplify multi element directive check | b837fc3116e697aaf18977867a5defd9541f7f8c | <ide><path>src/ng/compile.js
<ide> function $CompileProvider($provide, $$sanitizeUriProvider) {
<ide> return template.replace(/\{\{/g, startSymbol).replace(/}}/g, endSymbol);
<ide> },
<ide> NG_ATTR_BINDING = /^ngAttr[A-Z]/;
<add> var MULTI_ELEMENT_DIR_RE = /^(.+)Start$/;
<ide>
<ide> compile.$$addBindingInfo = debugInfoEnabled ? function $$addBindingInfo($element, binding) {
<ide> var bindings = $element.data('$binding') || [];
<ide> function $CompileProvider($provide, $$sanitizeUriProvider) {
<ide> });
<ide> }
<ide>
<del> var directiveNName = ngAttrName.replace(/(Start|End)$/, '');
<del> if (directiveIsMultiElement(directiveNName)) {
<del> if (ngAttrName === directiveNName + 'Start') {
<del> attrStartName = name;
<del> attrEndName = name.substr(0, name.length - 5) + 'end';
<del> name = name.substr(0, name.length - 6);
<del> }
<add> var multiElementMatch = ngAttrName.match(MULTI_ELEMENT_DIR_RE);
<add> if (multiElementMatch && directiveIsMultiElement(multiElementMatch[1])) {
<add> attrStartName = name;
<add> attrEndName = name.substr(0, name.length - 5) + 'end';
<add> name = name.substr(0, name.length - 6);
<ide> }
<ide>
<ide> nName = directiveNormalize(name.toLowerCase()); | 1 |
Go | Go | launch docker fail with space named drive | acea488eb6ed40e6e5894e1b259ad861c9a98042 | <ide><path>pkg/mount/mountinfo_linux.go
<ide> func parseInfoFile(r io.Reader) ([]*MountInfo, error) {
<ide> // Safe as mountinfo encodes mountpoints with spaces as \040.
<ide> index := strings.Index(text, " - ")
<ide> postSeparatorFields := strings.Fields(text[index+3:])
<del> if len(postSeparatorFields) != 3 {
<del> return nil, fmt.Errorf("Error did not find 3 fields post '-' in '%s'", text)
<add> if len(postSeparatorFields) < 3 {
<add> return nil, fmt.Errorf("Error found less than 3 fields post '-' in %q", text)
<ide> }
<add>
<ide> p.Fstype = postSeparatorFields[0]
<ide> p.Source = postSeparatorFields[1]
<del> p.VfsOpts = postSeparatorFields[2]
<add> p.VfsOpts = strings.Join(postSeparatorFields[2:], " ")
<ide> out = append(out, p)
<ide> }
<ide> return out, nil
<ide><path>pkg/mount/mountinfo_linux_test.go
<ide> const (
<ide> 235 35 253:32 / /var/lib/docker/devicemapper/mnt/1a28059f29eda821578b1bb27a60cc71f76f846a551abefabce6efd0146dce9f rw,relatime shared:217 - ext4 /dev/mapper/docker-253:2-425882-1a28059f29eda821578b1bb27a60cc71f76f846a551abefabce6efd0146dce9f rw,seclabel,discard,stripe=16,data=ordered
<ide> 239 35 253:33 / /var/lib/docker/devicemapper/mnt/e9aa60c60128cad1 rw,relatime shared:221 - ext4 /dev/mapper/docker-253:2-425882-e9aa60c60128cad1 rw,seclabel,discard,stripe=16,data=ordered
<ide> 243 35 253:34 / /var/lib/docker/devicemapper/mnt/5fec11304b6f4713fea7b6ccdcc1adc0a1966187f590fe25a8227428a8df275d-init rw,relatime shared:225 - ext4 /dev/mapper/docker-253:2-425882-5fec11304b6f4713fea7b6ccdcc1adc0a1966187f590fe25a8227428a8df275d-init rw,seclabel,discard,stripe=16,data=ordered
<del> 247 35 253:35 / /var/lib/docker/devicemapper/mnt/5fec11304b6f4713fea7b6ccdcc1adc0a1966187f590fe25a8227428a8df275d rw,relatime shared:229 - ext4 /dev/mapper/docker-253:2-425882-5fec11304b6f4713fea7b6ccdcc1adc0a1966187f590fe25a8227428a8df275d rw,seclabel,discard,stripe=16,data=ordered`
<add> 247 35 253:35 / /var/lib/docker/devicemapper/mnt/5fec11304b6f4713fea7b6ccdcc1adc0a1966187f590fe25a8227428a8df275d rw,relatime shared:229 - ext4 /dev/mapper/docker-253:2-425882-5fec11304b6f4713fea7b6ccdcc1adc0a1966187f590fe25a8227428a8df275d rw,seclabel,discard,stripe=16,data=ordered
<add> 31 21 0:23 / /DATA/foo_bla_bla rw,relatime - cifs //foo/BLA\040BLA\040BLA/ rw,sec=ntlm,cache=loose,unc=\\foo\BLA BLA BLA,username=my_login,domain=mydomain.com,uid=12345678,forceuid,gid=12345678,forcegid,addr=10.1.30.10,file_mode=0755,dir_mode=0755,nounix,rsize=61440,wsize=65536,actimeo=1`
<ide>
<ide> ubuntuMountInfo = `15 20 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
<ide> 16 20 0:3 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw | 2 |
Python | Python | fix code examples of detr and yolos | bf0addc56e82b51199bda577b3c2faec15117fed | <ide><path>src/transformers/models/detr/modeling_detr.py
<ide> def forward(
<ide>
<ide> >>> # convert outputs (bounding boxes and class logits) to COCO API
<ide> >>> target_sizes = torch.tensor([image.size[::-1]])
<del> >>> results = feature_extractor.post_process_object_detection(outputs, target_sizes=target_sizes)[0]
<add> >>> results = feature_extractor.post_process_object_detection(
<add> ... outputs, threshold=0.9, target_sizes=target_sizes
<add> ... )[0]
<ide>
<ide> >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
<ide> ... box = [round(i, 2) for i in box.tolist()]
<del> ... # let's only keep detections with score > 0.9
<del> ... if score > 0.9:
<del> ... print(
<del> ... f"Detected {model.config.id2label[label.item()]} with confidence "
<del> ... f"{round(score.item(), 3)} at location {box}"
<del> ... )
<add> ... print(
<add> ... f"Detected {model.config.id2label[label.item()]} with confidence "
<add> ... f"{round(score.item(), 3)} at location {box}"
<add> ... )
<ide> Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
<ide> Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
<ide> Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
<ide><path>src/transformers/models/yolos/feature_extraction_yolos.py
<ide> import pathlib
<ide> import warnings
<ide> from collections import defaultdict
<del>from typing import Dict, List, Optional, Union
<add>from typing import Dict, List, Optional, Tuple, Union
<ide>
<ide> import numpy as np
<ide> from PIL import Image
<ide> def to_tuple(tup):
<ide> preds.append(predictions)
<ide> return preds
<ide>
<add> # Copied from transformers.models.detr.feature_extraction_detr.DetrFeatureExtractor.post_process_object_detection
<add> def post_process_object_detection(
<add> self, outputs, threshold: float = 0.5, target_sizes: Union[TensorType, List[Tuple]] = None
<add> ):
<add> """
<add> Converts the output of [`DetrForObjectDetection`] into the format expected by the COCO api. Only supports
<add> PyTorch.
<add>
<add> Args:
<add> outputs ([`DetrObjectDetectionOutput`]):
<add> Raw outputs of the model.
<add> threshold (`float`, *optional*):
<add> Score threshold to keep object detection predictions.
<add> target_sizes (`torch.Tensor` or `List[Tuple[int, int]]`, *optional*, defaults to `None`):
<add> Tensor of shape `(batch_size, 2)` or list of tuples (`Tuple[int, int]`) containing the target size
<add> (height, width) of each image in the batch. If left to None, predictions will not be resized.
<add>
<add> Returns:
<add> `List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
<add> in the batch as predicted by the model.
<add> """
<add> out_logits, out_bbox = outputs.logits, outputs.pred_boxes
<add>
<add> if target_sizes is not None:
<add> if len(out_logits) != len(target_sizes):
<add> raise ValueError(
<add> "Make sure that you pass in as many target sizes as the batch dimension of the logits"
<add> )
<add>
<add> prob = nn.functional.softmax(out_logits, -1)
<add> scores, labels = prob[..., :-1].max(-1)
<add>
<add> # Convert to [x0, y0, x1, y1] format
<add> boxes = center_to_corners_format(out_bbox)
<add>
<add> # Convert from relative [0, 1] to absolute [0, height] coordinates
<add> if target_sizes is not None:
<add> if isinstance(target_sizes, List):
<add> img_h = torch.Tensor([i[0] for i in target_sizes])
<add> img_w = torch.Tensor([i[1] for i in target_sizes])
<add> else:
<add> img_h, img_w = target_sizes.unbind(1)
<add>
<add> scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1)
<add> boxes = boxes * scale_fct[:, None, :]
<add>
<add> results = []
<add> for s, l, b in zip(scores, labels, boxes):
<add> score = s[s > threshold]
<add> label = l[s > threshold]
<add> box = b[s > threshold]
<add> results.append({"scores": score, "labels": label, "boxes": box})
<add>
<add> return results
<add>
<ide> # Copied from transformers.models.detr.feature_extraction_detr.DetrFeatureExtractor.post_process_instance
<ide> def post_process_instance(self, results, outputs, orig_target_sizes, max_target_sizes, threshold=0.5):
<ide> """
<ide><path>src/transformers/models/yolos/modeling_yolos.py
<ide> def forward(
<ide> Returns:
<ide>
<ide> Examples:
<add>
<ide> ```python
<del> >>> from transformers import YolosFeatureExtractor, YolosForObjectDetection
<add> >>> from transformers import AutoFeatureExtractor, AutoModelForObjectDetection
<add> >>> import torch
<ide> >>> from PIL import Image
<ide> >>> import requests
<ide>
<ide> >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
<ide> >>> image = Image.open(requests.get(url, stream=True).raw)
<ide>
<del> >>> feature_extractor = YolosFeatureExtractor.from_pretrained("hustvl/yolos-small")
<del> >>> model = YolosForObjectDetection.from_pretrained("hustvl/yolos-small")
<add> >>> feature_extractor = AutoFeatureExtractor.from_pretrained("hustvl/yolos-tiny")
<add> >>> model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")
<ide>
<ide> >>> inputs = feature_extractor(images=image, return_tensors="pt")
<del>
<ide> >>> outputs = model(**inputs)
<ide>
<del> >>> # model predicts bounding boxes and corresponding COCO classes
<del> >>> logits = outputs.logits
<del> >>> bboxes = outputs.pred_boxes
<add> >>> # convert outputs (bounding boxes and class logits) to COCO API
<add> >>> target_sizes = torch.tensor([image.size[::-1]])
<add> >>> results = feature_extractor.post_process_object_detection(
<add> ... outputs, threshold=0.9, target_sizes=target_sizes
<add> ... )[0]
<add>
<add> >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
<add> ... box = [round(i, 2) for i in box.tolist()]
<add> ... print(
<add> ... f"Detected {model.config.id2label[label.item()]} with confidence "
<add> ... f"{round(score.item(), 3)} at location {box}"
<add> ... )
<add> Detected remote with confidence 0.994 at location [46.96, 72.61, 181.02, 119.73]
<add> Detected remote with confidence 0.975 at location [340.66, 79.19, 372.59, 192.65]
<add> Detected cat with confidence 0.984 at location [12.27, 54.25, 319.42, 470.99]
<add> Detected remote with confidence 0.922 at location [41.66, 71.96, 178.7, 120.33]
<add> Detected cat with confidence 0.914 at location [342.34, 21.48, 638.64, 372.46]
<ide> ```"""
<ide> return_dict = return_dict if return_dict is not None else self.config.use_return_dict
<ide> | 3 |
Python | Python | fix typo in doc | 4a89bb6ced975a2cf339725af721268cecd94c22 | <ide><path>keras/optimizers/optimizer_experimental/adamw.py
<ide> class AdamW(optimizer.Optimizer):
<ide>
<ide> AdamW optimization is a stochastic gradient descent method that is based on
<ide> adaptive estimation of first-order and second-order moments with an added
<del> method to decay weights per the techniques discussed in the paeper,
<add> method to decay weights per the techniques discussed in the paper,
<ide> 'Decoupled Weight Decay Regularization' by
<ide> [Loshchilov, Hutter et al., 2019](https://arxiv.org/abs/1711.05101).
<ide> | 1 |
Python | Python | add user_dict entries and small refactor | 150a39ccca2426fcd10638c8515d7ec98cb79d8f | <ide><path>spacy/lang/ja/__init__.py
<ide>
<ide>
<ide> # Hold the attributes we need with convenient names
<del>DetailedToken = namedtuple("DetailedToken", ["surface", "pos", "lemma"])
<del>
<del># Handling for multiple spaces in a row is somewhat awkward, this simplifies
<del># the flow by creating a dummy with the same interface.
<del>DummyNode = namedtuple("DummyNode", ["surface", "pos", "lemma"])
<del>DummySpace = DummyNode(" ", " ", " ")
<add>DetailedToken = namedtuple("DetailedToken", ["surface", "tag", "inf", "lemma", "reading", "sub_tokens"])
<ide>
<ide>
<ide> def try_sudachi_import(split_mode="A"):
<ide> def try_sudachi_import(split_mode="A"):
<ide> )
<ide>
<ide>
<del>def resolve_pos(orth, pos, next_pos):
<add>def resolve_pos(orth, tag, next_tag):
<ide> """If necessary, add a field to the POS tag for UD mapping.
<ide> Under Universal Dependencies, sometimes the same Unidic POS tag can
<ide> be mapped differently depending on the literal token or its context
<ide> def resolve_pos(orth, pos, next_pos):
<ide> # Some tokens have their UD tag decided based on the POS of the following
<ide> # token.
<ide>
<del> # orth based rules
<del> if pos[0] in TAG_ORTH_MAP:
<del> orth_map = TAG_ORTH_MAP[pos[0]]
<add> # apply orth based mapping
<add> if tag in TAG_ORTH_MAP:
<add> orth_map = TAG_ORTH_MAP[tag]
<ide> if orth in orth_map:
<del> return orth_map[orth], None
<add> return orth_map[orth], None # current_pos, next_pos
<ide>
<del> # tag bi-gram mapping
<del> if next_pos:
<del> tag_bigram = pos[0], next_pos[0]
<add> # apply tag bi-gram mapping
<add> if next_tag:
<add> tag_bigram = tag, next_tag
<ide> if tag_bigram in TAG_BIGRAM_MAP:
<del> bipos = TAG_BIGRAM_MAP[tag_bigram]
<del> if bipos[0] is None:
<del> return TAG_MAP[pos[0]][POS], bipos[1]
<add> current_pos, next_pos = TAG_BIGRAM_MAP[tag_bigram]
<add> if current_pos is None: # apply tag uni-gram mapping for current_pos
<add> return TAG_MAP[tag][POS], next_pos # only next_pos is identified by tag bi-gram mapping
<ide> else:
<del> return bipos
<del>
<del> return TAG_MAP[pos[0]][POS], None
<del>
<del>
<del># Use a mapping of paired punctuation to avoid splitting quoted sentences.
<del>pairpunct = {'「':'」', '『': '』', '【': '】'}
<del>
<add> return current_pos, next_pos
<ide>
<del>def separate_sentences(doc):
<del> """Given a doc, mark tokens that start sentences based on Unidic tags.
<del> """
<del>
<del> stack = [] # save paired punctuation
<del>
<del> for i, token in enumerate(doc[:-2]):
<del> # Set all tokens after the first to false by default. This is necessary
<del> # for the doc code to be aware we've done sentencization, see
<del> # `is_sentenced`.
<del> token.sent_start = (i == 0)
<del> if token.tag_:
<del> if token.tag_ == "補助記号-括弧開":
<del> ts = str(token)
<del> if ts in pairpunct:
<del> stack.append(pairpunct[ts])
<del> elif stack and ts == stack[-1]:
<del> stack.pop()
<del>
<del> if token.tag_ == "補助記号-句点":
<del> next_token = doc[i+1]
<del> if next_token.tag_ != token.tag_ and not stack:
<del> next_token.sent_start = True
<del>
<del>
<del>def get_dtokens(tokenizer, text):
<del> tokens = tokenizer.tokenize(text)
<del> words = []
<del> for ti, token in enumerate(tokens):
<del> tag = '-'.join([xx for xx in token.part_of_speech()[:4] if xx != '*'])
<del> inf = '-'.join([xx for xx in token.part_of_speech()[4:] if xx != '*'])
<del> dtoken = DetailedToken(
<del> token.surface(),
<del> (tag, inf),
<del> token.dictionary_form())
<del> if ti > 0 and words[-1].pos[0] == '空白' and tag == '空白':
<del> # don't add multiple space tokens in a row
<del> continue
<del> words.append(dtoken)
<add> # apply tag uni-gram mapping
<add> return TAG_MAP[tag][POS], None
<ide>
<del> # remove empty tokens. These can be produced with characters like … that
<del> # Sudachi normalizes internally.
<del> words = [ww for ww in words if len(ww.surface) > 0]
<del> return words
<ide>
<del>
<del>def get_words_lemmas_tags_spaces(dtokens, text, gap_tag=("空白", "")):
<add>def get_dtokens_and_spaces(dtokens, text, gap_tag="空白"):
<add>    # First, compare the content of the tokens and the text
<ide> words = [x.surface for x in dtokens]
<ide> if "".join("".join(words).split()) != "".join(text.split()):
<ide> raise ValueError(Errors.E194.format(text=text, words=words))
<del> text_words = []
<del> text_lemmas = []
<del> text_tags = []
<add>
<add> text_dtokens = []
<ide> text_spaces = []
<ide> text_pos = 0
<ide> # handle empty and whitespace-only texts
<ide> if len(words) == 0:
<del> return text_words, text_lemmas, text_tags, text_spaces
<add> return text_dtokens, text_spaces
<ide> elif len([word for word in words if not word.isspace()]) == 0:
<ide> assert text.isspace()
<del> text_words = [text]
<del> text_lemmas = [text]
<del> text_tags = [gap_tag]
<add> text_dtokens = [DetailedToken(text, gap_tag, '', text, None, None)]
<ide> text_spaces = [False]
<del> return text_words, text_lemmas, text_tags, text_spaces
<del> # normalize words to remove all whitespace tokens
<del> norm_words, norm_dtokens = zip(*[(word, dtokens) for word, dtokens in zip(words, dtokens) if not word.isspace()])
<del> # align words with text
<del> for word, dtoken in zip(norm_words, norm_dtokens):
<add> return text_dtokens, text_spaces
<add>
<add> # align words and dtokens against the text, inserting gap tokens for the space char spans
<add> for word, dtoken in zip(words, dtokens):
<add> # skip all space tokens
<add> if word.isspace():
<add> continue
<ide> try:
<ide> word_start = text[text_pos:].index(word)
<ide> except ValueError:
<ide> raise ValueError(Errors.E194.format(text=text, words=words))
<add>
<add> # space token
<ide> if word_start > 0:
<ide> w = text[text_pos:text_pos + word_start]
<del> text_words.append(w)
<del> text_lemmas.append(w)
<del> text_tags.append(gap_tag)
<add> text_dtokens.append(DetailedToken(w, gap_tag, '', w, None, None))
<ide> text_spaces.append(False)
<ide> text_pos += word_start
<del> text_words.append(word)
<del> text_lemmas.append(dtoken.lemma)
<del> text_tags.append(dtoken.pos)
<add>
<add> # content word
<add> text_dtokens.append(dtoken)
<ide> text_spaces.append(False)
<ide> text_pos += len(word)
<add> # consume a single space char right after the word, recording it as trailing whitespace
<ide> if text_pos < len(text) and text[text_pos] == " ":
<ide> text_spaces[-1] = True
<ide> text_pos += 1
<add>
<add> # trailing space token
<ide> if text_pos < len(text):
<ide> w = text[text_pos:]
<del> text_words.append(w)
<del> text_lemmas.append(w)
<del> text_tags.append(gap_tag)
<add> text_dtokens.append(DetailedToken(w, gap_tag, '', w, None, None))
<ide> text_spaces.append(False)
<del> return text_words, text_lemmas, text_tags, text_spaces
<add>
<add> return text_dtokens, text_spaces
<ide>
<ide>
<ide> class JapaneseTokenizer(DummyTokenizer):
<ide> def __init__(self, cls, nlp=None, config={}):
<ide> self.tokenizer = try_sudachi_import(self.split_mode)
<ide>
<ide> def __call__(self, text):
<del> dtokens = get_dtokens(self.tokenizer, text)
<del>
<del> words, lemmas, unidic_tags, spaces = get_words_lemmas_tags_spaces(dtokens, text)
<add> # convert sudachipy.morpheme.Morpheme to DetailedToken and merge continuous spaces
<add> sudachipy_tokens = self.tokenizer.tokenize(text)
<add> dtokens = self._get_dtokens(sudachipy_tokens)
<add> dtokens, spaces = get_dtokens_and_spaces(dtokens, text)
<add>
<add> # create Doc with tag bi-gram based part-of-speech identification rules
<add> words, tags, inflections, lemmas, readings, sub_tokens_list = zip(*dtokens) if dtokens else [[]] * 6
<add> sub_tokens_list = list(sub_tokens_list)
<ide> doc = Doc(self.vocab, words=words, spaces=spaces)
<del> next_pos = None
<del> for idx, (token, lemma, unidic_tag) in enumerate(zip(doc, lemmas, unidic_tags)):
<del> token.tag_ = unidic_tag[0]
<del> if next_pos:
<add> next_pos = None # for bi-gram rules
<add> for idx, (token, dtoken) in enumerate(zip(doc, dtokens)):
<add> token.tag_ = dtoken.tag
<add> if next_pos: # already identified in previous iteration
<ide> token.pos = next_pos
<ide> next_pos = None
<ide> else:
<ide> token.pos, next_pos = resolve_pos(
<ide> token.orth_,
<del> unidic_tag,
<del> unidic_tags[idx + 1] if idx + 1 < len(unidic_tags) else None
<add> dtoken.tag,
<add> tags[idx + 1] if idx + 1 < len(tags) else None
<ide> )
<del>
<ide> # if there's no lemma info (it's an unk) just use the surface
<del> token.lemma_ = lemma
<del> doc.user_data["unidic_tags"] = unidic_tags
<add> token.lemma_ = dtoken.lemma if dtoken.lemma else dtoken.surface
<add>
<add> doc.user_data["inflections"] = inflections
<add> doc.user_data["reading_forms"] = readings
<add> doc.user_data["sub_tokens"] = sub_tokens_list
<ide>
<ide> return doc
<ide>
<add> def _get_dtokens(self, sudachipy_tokens, need_sub_tokens=True):
<add> sub_tokens_list = self._get_sub_tokens(sudachipy_tokens) if need_sub_tokens else None
<add> dtokens = [
<add> DetailedToken(
<add> token.surface(), # orth
<add> '-'.join([xx for xx in token.part_of_speech()[:4] if xx != '*']), # tag
<add> ','.join([xx for xx in token.part_of_speech()[4:] if xx != '*']), # inf
<add> token.dictionary_form(), # lemma
<add> token.reading_form(), # user_data['reading_forms']
<add> sub_tokens_list[idx] if sub_tokens_list else None, # user_data['sub_tokens']
<add> ) for idx, token in enumerate(sudachipy_tokens) if len(token.surface()) > 0
<add> # skip empty tokens, which can be produced with characters like … that Sudachi normalizes internally
<add> ]
<add> # Sudachi outputs each space char as its own token, so keep only the first token of each run of
<add> # spaces here; this prepares get_dtokens_and_spaces() to merge the continuous space tokens
<add> return [
<add> t for idx, t in enumerate(dtokens) if
<add> idx == 0 or
<add> not t.surface.isspace() or t.tag != '空白' or
<add> not dtokens[idx - 1].surface.isspace() or dtokens[idx - 1].tag != '空白'
<add> ]
<add>
<add> def _get_sub_tokens(self, sudachipy_tokens):
<add> if self.split_mode is None or self.split_mode == "A": # do nothing for default split mode
<add> return None
<add>
<add> sub_tokens_list = [] # list of (list of list of DetailedToken | None)
<add> for token in sudachipy_tokens:
<add> sub_a = token.split(self.tokenizer.SplitMode.A)
<add> if len(sub_a) == 1: # no sub tokens
<add> sub_tokens_list.append(None)
<add> elif self.split_mode == "B":
<add> sub_tokens_list.append([self._get_dtokens(sub_a, False)])
<add> else: # "C"
<add> sub_b = token.split(self.tokenizer.SplitMode.B)
<add> if len(sub_a) == len(sub_b):
<add> dtokens = self._get_dtokens(sub_a, False)
<add> sub_tokens_list.append([dtokens, dtokens])
<add> else:
<add> sub_tokens_list.append([self._get_dtokens(sub_a, False), self._get_dtokens(sub_b, False)])
<add> return sub_tokens_list
<add>
<ide> def _get_config(self):
<ide> config = OrderedDict(
<ide> (
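The tokenizer changes above route the Unidic tag through resolve_pos() and attach the extra Sudachi fields (inflections, reading forms, sub-token splits) to the Doc's user_data. A minimal usage sketch of what this exposes follows; it is not part of the patch and assumes the patched spaCy plus SudachiPy with its dictionary are installed, with the sample sentence and printed fields chosen purely for illustration.

```python
# Sketch only: assumes this patch is applied and SudachiPy (with a dictionary) is installed.
from spacy.lang.ja import Japanese

nlp = Japanese(meta={"tokenizer": {"config": {"split_mode": "C"}}})
doc = nlp("選挙管理委員会")

for token, reading, inflection in zip(
    doc, doc.user_data["reading_forms"], doc.user_data["inflections"]
):
    # token.tag_ carries the Unidic tag; token.pos is resolved through the
    # TAG_ORTH_MAP / TAG_BIGRAM_MAP / TAG_MAP rules in resolve_pos() above
    print(token.orth_, token.tag_, token.pos_, token.lemma_, reading, inflection)

# per-token sub-token splits, None wherever the split mode yields no finer split
print(doc.user_data["sub_tokens"])
```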
<ide><path>spacy/lang/ja/bunsetu.py
<del># coding: utf8
<del>from __future__ import unicode_literals
<del>
<del>from .stop_words import STOP_WORDS
<del>
<del>
<del>POS_PHRASE_MAP = {
<del> "NOUN": "NP",
<del> "NUM": "NP",
<del> "PRON": "NP",
<del> "PROPN": "NP",
<del>
<del> "VERB": "VP",
<del>
<del> "ADJ": "ADJP",
<del>
<del> "ADV": "ADVP",
<del>
<del> "CCONJ": "CCONJP",
<del>}
<del>
<del>
<del># return value: [(bunsetu_tokens, phrase_type={'NP', 'VP', 'ADJP', 'ADVP'}, phrase_tokens)]
<del>def yield_bunsetu(doc, debug=False):
<del> bunsetu = []
<del> bunsetu_may_end = False
<del> phrase_type = None
<del> phrase = None
<del> prev = None
<del> prev_tag = None
<del> prev_dep = None
<del> prev_head = None
<del> for t in doc:
<del> pos = t.pos_
<del> pos_type = POS_PHRASE_MAP.get(pos, None)
<del> tag = t.tag_
<del> dep = t.dep_
<del> head = t.head.i
<del> if debug:
<del> print(t.i, t.orth_, pos, pos_type, dep, head, bunsetu_may_end, phrase_type, phrase, bunsetu)
<del>
<del> # DET is always an individual bunsetu
<del> if pos == "DET":
<del> if bunsetu:
<del> yield bunsetu, phrase_type, phrase
<del> yield [t], None, None
<del> bunsetu = []
<del> bunsetu_may_end = False
<del> phrase_type = None
<del> phrase = None
<del>
<del> # PRON or Open PUNCT always splits bunsetu
<del> elif tag == "補助記号-括弧開":
<del> if bunsetu:
<del> yield bunsetu, phrase_type, phrase
<del> bunsetu = [t]
<del> bunsetu_may_end = True
<del> phrase_type = None
<del> phrase = None
<del>
<del> # bunsetu head not appeared
<del> elif phrase_type is None:
<del> if bunsetu and prev_tag == "補助記号-読点":
<del> yield bunsetu, phrase_type, phrase
<del> bunsetu = []
<del> bunsetu_may_end = False
<del> phrase_type = None
<del> phrase = None
<del> bunsetu.append(t)
<del> if pos_type: # begin phrase
<del> phrase = [t]
<del> phrase_type = pos_type
<del> if pos_type in {"ADVP", "CCONJP"}:
<del> bunsetu_may_end = True
<del>
<del> # entering new bunsetu
<del> elif pos_type and (
<del> pos_type != phrase_type or # different phrase type arises
<del> bunsetu_may_end # same phrase type but bunsetu already ended
<del> ):
<del> # exceptional case: NOUN to VERB
<del> if phrase_type == "NP" and pos_type == "VP" and prev_dep == 'compound' and prev_head == t.i:
<del> bunsetu.append(t)
<del> phrase_type = "VP"
<del> phrase.append(t)
<del> # exceptional case: VERB to NOUN
<del> elif phrase_type == "VP" and pos_type == "NP" and (
<del> prev_dep == 'compound' and prev_head == t.i or
<del> dep == 'compound' and prev == head or
<del> prev_dep == 'nmod' and prev_head == t.i
<del> ):
<del> bunsetu.append(t)
<del> phrase_type = "NP"
<del> phrase.append(t)
<del> else:
<del> yield bunsetu, phrase_type, phrase
<del> bunsetu = [t]
<del> bunsetu_may_end = False
<del> phrase_type = pos_type
<del> phrase = [t]
<del>
<del> # NOUN bunsetu
<del> elif phrase_type == "NP":
<del> bunsetu.append(t)
<del> if not bunsetu_may_end and ((
<del> (pos_type == "NP" or pos == "SYM") and (prev_head == t.i or prev_head == head) and prev_dep in {'compound', 'nummod'}
<del> ) or (
<del> pos == "PART" and (prev == head or prev_head == head) and dep == 'mark'
<del> )):
<del> phrase.append(t)
<del> else:
<del> bunsetu_may_end = True
<del>
<del> # VERB bunsetu
<del> elif phrase_type == "VP":
<del> bunsetu.append(t)
<del> if not bunsetu_may_end and pos == "VERB" and prev_head == t.i and prev_dep == 'compound':
<del> phrase.append(t)
<del> else:
<del> bunsetu_may_end = True
<del>
<del> # ADJ bunsetu
<del> elif phrase_type == "ADJP" and tag != '連体詞':
<del> bunsetu.append(t)
<del> if not bunsetu_may_end and ((
<del> pos == "NOUN" and (prev_head == t.i or prev_head == head) and prev_dep in {'amod', 'compound'}
<del> ) or (
<del> pos == "PART" and (prev == head or prev_head == head) and dep == 'mark'
<del> )):
<del> phrase.append(t)
<del> else:
<del> bunsetu_may_end = True
<del>
<del> # other bunsetu
<del> else:
<del> bunsetu.append(t)
<del>
<del> prev = t.i
<del> prev_tag = t.tag_
<del> prev_dep = t.dep_
<del> prev_head = head
<del>
<del> if bunsetu:
<del> yield bunsetu, phrase_type, phrase
<ide><path>spacy/tests/lang/ja/test_tokenizer.py
<ide> import pytest
<ide>
<ide> from ...tokenizer.test_naughty_strings import NAUGHTY_STRINGS
<del>from spacy.lang.ja import Japanese
<add>from spacy.lang.ja import Japanese, DetailedToken
<ide>
<ide> # fmt: off
<ide> TOKENIZER_TESTS = [
<ide> def test_ja_tokenizer_split_modes(ja_tokenizer, text, len_a, len_b, len_c):
<ide> assert len(nlp_c(text)) == len_c
<ide>
<ide>
<add>@pytest.mark.parametrize("text,sub_tokens_list_a,sub_tokens_list_b,sub_tokens_list_c",
<add> [
<add> (
<add> "選挙管理委員会",
<add> [None, None, None, None],
<add> [None, None, [
<add> [
<add> DetailedToken(surface='委員', tag='名詞-普通名詞-一般', inf='', lemma='委員', reading='イイン', sub_tokens=None),
<add> DetailedToken(surface='会', tag='名詞-普通名詞-一般', inf='', lemma='会', reading='カイ', sub_tokens=None),
<add> ]
<add> ]],
<add> [[
<add> [
<add> DetailedToken(surface='選挙', tag='名詞-普通名詞-サ変可能', inf='', lemma='選挙', reading='センキョ', sub_tokens=None),
<add> DetailedToken(surface='管理', tag='名詞-普通名詞-サ変可能', inf='', lemma='管理', reading='カンリ', sub_tokens=None),
<add> DetailedToken(surface='委員', tag='名詞-普通名詞-一般', inf='', lemma='委員', reading='イイン', sub_tokens=None),
<add> DetailedToken(surface='会', tag='名詞-普通名詞-一般', inf='', lemma='会', reading='カイ', sub_tokens=None),
<add> ], [
<add> DetailedToken(surface='選挙', tag='名詞-普通名詞-サ変可能', inf='', lemma='選挙', reading='センキョ', sub_tokens=None),
<add> DetailedToken(surface='管理', tag='名詞-普通名詞-サ変可能', inf='', lemma='管理', reading='カンリ', sub_tokens=None),
<add> DetailedToken(surface='委員会', tag='名詞-普通名詞-一般', inf='', lemma='委員会', reading='イインカイ', sub_tokens=None),
<add> ]
<add> ]]
<add> ),
<add> ]
<add>)
<add>def test_ja_tokenizer_sub_tokens(ja_tokenizer, text, sub_tokens_list_a, sub_tokens_list_b, sub_tokens_list_c):
<add> nlp_a = Japanese(meta={"tokenizer": {"config": {"split_mode": "A"}}})
<add> nlp_b = Japanese(meta={"tokenizer": {"config": {"split_mode": "B"}}})
<add> nlp_c = Japanese(meta={"tokenizer": {"config": {"split_mode": "C"}}})
<add>
<add> assert ja_tokenizer(text).user_data["sub_tokens"] == sub_tokens_list_a
<add> assert nlp_a(text).user_data["sub_tokens"] == sub_tokens_list_a
<add> assert nlp_b(text).user_data["sub_tokens"] == sub_tokens_list_b
<add> assert nlp_c(text).user_data["sub_tokens"] == sub_tokens_list_c
<add>
<add>
<add>@pytest.mark.parametrize("text,inflections,reading_forms",
<add> [
<add> (
<add> "取ってつけた",
<add> ("五段-ラ行,連用形-促音便", "", "下一段-カ行,連用形-一般", "助動詞-タ,終止形-一般"),
<add> ("トッ", "テ", "ツケ", "タ"),
<add> ),
<add> ]
<add>)
<add>def test_ja_tokenizer_inflections_reading_forms(ja_tokenizer, text, inflections, reading_forms):
<add> assert ja_tokenizer(text).user_data["inflections"] == inflections
<add> assert ja_tokenizer(text).user_data["reading_forms"] == reading_forms
<add>
<add>
<ide> def test_ja_tokenizer_emptyish_texts(ja_tokenizer):
<ide> doc = ja_tokenizer("")
<ide> assert len(doc) == 0 | 3 |
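The parametrized test above fixes the nested shape of doc.user_data["sub_tokens"], which is easy to misread: one entry per token, None when there is no finer split, otherwise a list holding one alternative split for mode B and two (the A-mode and B-mode splits) for mode C. A small sketch of walking that structure, under the same assumptions as the earlier example (patched spaCy plus SudachiPy, sentence taken from the test):

```python
# Sketch, not part of the patch: walk the sub_tokens structure asserted in the test above.
from spacy.lang.ja import Japanese

nlp_c = Japanese(meta={"tokenizer": {"config": {"split_mode": "C"}}})
doc = nlp_c("選挙管理委員会")

for token, subs in zip(doc, doc.user_data["sub_tokens"]):
    if subs is None:
        print(token.orth_, "-> no finer split")
    else:
        # in mode C each entry lists two alternative splits: the A-mode and B-mode sub-tokens
        for split in subs:
            print(token.orth_, "->", [sub.surface for sub in split])
```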
PHP | PHP | add more coverage | 4d467642568540a068bbce7c44377240d1be5e5a | <ide><path>tests/TestCase/Error/Middleware/ErrorHandlerMiddlewareTest.php
<ide> use Cake\Log\Log;
<ide> use Cake\TestSuite\TestCase;
<ide> use Error;
<add>use InvalidArgumentException;
<ide> use LogicException;
<ide> use TestApp\Http\TestRequestHandler;
<ide>
<ide> public function tearDown(): void
<ide> Log::drop('error_test');
<ide> }
<ide>
<add> /**
<add> * Test constructor error
<add> *
<add> * @return void
<add> */
<add> public function testConstructorInvalid()
<add> {
<add> $this->expectException(InvalidArgumentException::class);
<add> $this->expectExceptionMessage('$errorHandler argument must be a config array or ErrorHandler');
<add> new ErrorHandlerMiddleware('nope');
<add> }
<add>
<ide> /**
<ide> * Test returning a response works ok.
<ide> * | 1 |
PHP | PHP | use generated fqcn | 6575d9aea46c191fc37ad82561ccfc8ad006e5c7 | <ide><path>src/Http/ServerRequest.php
<ide> protected function _setConfig(array $config): void
<ide>
<ide> if (isset($config['uri'])) {
<ide> if (!$config['uri'] instanceof UriInterface) {
<del> throw new Exception('The `uri` key must be an instanceof Psr\Http\Message\UriInterface.');
<add> throw new Exception('The `uri` key must be an instance of ' . UriInterface::class);
<ide> }
<ide> $uri = $config['uri'];
<ide> } else { | 1 |
Mixed | Python | update stop_words.py in portuguese (a,o,e) | 7a0222f260ad5b12b4978cf553d68ee26d916849 | <ide><path>.github/contributors/cristianasp.md
<add># spaCy contributor agreement
<add>
<add>This spaCy Contributor Agreement (**"SCA"**) is based on the
<add>[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
<add>The SCA applies to any contribution that you make to any product or project
<add>managed by us (the **"project"**), and sets out the intellectual property rights
<add>you grant to us in the contributed materials. The term **"us"** shall mean
<add>[ExplosionAI GmbH](https://explosion.ai/legal). The term
<add>**"you"** shall mean the person or entity identified below.
<add>
<add>If you agree to be bound by these terms, fill in the information requested
<add>below and include the filled-in version with your first pull request, under the
<add>folder [`.github/contributors/`](/.github/contributors/). The name of the file
<add>should be your GitHub username, with the extension `.md`. For example, the user
<add>example_user would create the file `.github/contributors/example_user.md`.
<add>
<add>Read this agreement carefully before signing. These terms and conditions
<add>constitute a binding legal agreement.
<add>
<add>## Contributor Agreement
<add>
<add>1. The term "contribution" or "contributed materials" means any source code,
<add>object code, patch, tool, sample, graphic, specification, manual,
<add>documentation, or any other material posted or submitted by you to the project.
<add>
<add>2. With respect to any worldwide copyrights, or copyright applications and
<add>registrations, in your contribution:
<add>
<add> * you hereby assign to us joint ownership, and to the extent that such
<add> assignment is or becomes invalid, ineffective or unenforceable, you hereby
<add> grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
<add> royalty-free, unrestricted license to exercise all rights under those
<add> copyrights. This includes, at our option, the right to sublicense these same
<add> rights to third parties through multiple levels of sublicensees or other
<add> licensing arrangements;
<add>
<add> * you agree that each of us can do all things in relation to your
<add> contribution as if each of us were the sole owners, and if one of us makes
<add> a derivative work of your contribution, the one who makes the derivative
<add> work (or has it made) will be the sole owner of that derivative work;
<add>
<add> * you agree that you will not assert any moral rights in your contribution
<add> against us, our licensees or transferees;
<add>
<add> * you agree that we may register a copyright in your contribution and
<add> exercise all ownership rights associated with it; and
<add>
<add> * you agree that neither of us has any duty to consult with, obtain the
<add> consent of, pay or render an accounting to the other for any use or
<add> distribution of your contribution.
<add>
<add>3. With respect to any patents you own, or that you can license without payment
<add>to any third party, you hereby grant to us a perpetual, irrevocable,
<add>non-exclusive, worldwide, no-charge, royalty-free license to:
<add>
<add> * make, have made, use, sell, offer to sell, import, and otherwise transfer
<add> your contribution in whole or in part, alone or in combination with or
<add> included in any product, work or materials arising out of the project to
<add> which your contribution was submitted, and
<add>
<add> * at our option, to sublicense these same rights to third parties through
<add> multiple levels of sublicensees or other licensing arrangements.
<add>
<add>4. Except as set out above, you keep all right, title, and interest in your
<add>contribution. The rights that you grant to us under these terms are effective
<add>on the date you first submitted a contribution to us, even if your submission
<add>took place before the date you sign these terms.
<add>
<add>5. You covenant, represent, warrant and agree that:
<add>
<add> * Each contribution that you submit is and shall be an original work of
<add> authorship and you can legally grant the rights set out in this SCA;
<add>
<add> * to the best of your knowledge, each contribution will not violate any
<add> third party's copyrights, trademarks, patents, or other intellectual
<add> property rights; and
<add>
<add> * each contribution shall be in compliance with U.S. export control laws and
<add> other applicable export and import laws. You agree to notify us if you
<add> become aware of any circumstance which would make any of the foregoing
<add> representations inaccurate in any respect. We may publicly disclose your
<add> participation in the project, including the fact that you have signed the SCA.
<add>
<add>6. This SCA is governed by the laws of the State of California and applicable
<add>U.S. Federal law. Any choice of law rules will not apply.
<add>
<add>7. Please place an “x” next to one of the applicable statements below. Please do NOT
<add>mark both statements:
<add>
<add> * [X] I am signing on behalf of myself as an individual and no other person
<add> or entity, including my employer, has or will have rights with respect to my
<add> contributions.
<add>
<add> * [ ] I am signing on behalf of my employer or a legal entity and I have the
<add> actual authority to contractually bind that entity.
<add>
<add>## Contributor Details
<add>
<add>| Field | Entry |
<add>|------------------------------- | -------------------- |
<add>| Name | Cristiana S Parada |
<add>| Company name (if applicable) | |
<add>| Title or role (if applicable) | |
<add>| Date | 2020-11-04 |
<add>| GitHub username | cristianasp |
<add>| Website (optional) | |
<add>
<ide><path>spacy/lang/pt/stop_words.py
<ide>
<ide> STOP_WORDS = set(
<ide> """
<del>à às área acerca ademais adeus agora ainda algo algumas alguns ali além ambas ambos antes
<add>a à às área acerca ademais adeus agora ainda algo algumas alguns ali além ambas ambos antes
<ide> ao aos apenas apoia apoio apontar após aquela aquelas aquele aqueles aqui aquilo
<ide> as assim através atrás até aí
<ide>
<ide> desta deste deve devem deverá dez dezanove dezasseis dezassete dezoito diante
<ide> direita disso diz dizem dizer do dois dos doze duas dá dão
<ide>
<del>é és ela elas ele eles em embora enquanto entre então era essa essas esse esses esta
<add>e é és ela elas ele eles em embora enquanto entre então era essa essas esse esses esta
<ide> estado estar estará estas estava este estes esteve estive estivemos estiveram
<ide> estiveste estivestes estou está estás estão eu eventual exemplo
<ide>
<ide> nossas nosso nossos nova novas nove novo novos num numa nunca nuns não nível nós
<ide> número números
<ide>
<del>obrigada obrigado oitava oitavo oito onde ontem onze ora os ou outra outras outros
<add>o obrigada obrigado oitava oitavo oito onde ontem onze ora os ou outra outras outros
<ide>
<ide> para parece parte partir pegar pela pelas pelo pelos perto pode podem poder poderá
<ide> podia pois ponto pontos por porquanto porque porquê portanto porém posição
<ide> um uma umas uns usa usar último
<ide>
<ide> vai vais valor veja vem vens ver vez vezes vinda vindo vinte você vocês vos vossa
<del>vossas vosso vossos vários vão vêm vós
<add>vossas vosso vossos vários vão vêm vós
<ide>
<ide> zero
<ide> """.split()
<del>)
<add>) | 2 |
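The diff above only adds the single-letter function words a, o and e to the Portuguese list. A quick sanity check, sketched here under the assumption that the patched spaCy source tree is importable, would be:

```python
# Sketch: assumes the patched spaCy tree is on the import path.
from spacy.lang.pt.stop_words import STOP_WORDS

# the three words added in this diff
assert {"a", "o", "e"} <= STOP_WORDS
print(len(STOP_WORDS), "Portuguese stop words loaded")
```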
PHP | PHP | fix tests on inserting with negative index and cs | 9c7623b9cd843109da3f806ed60d54c754a4eccd | <ide><path>tests/TestCase/Http/MiddlewareQueueTest.php
<ide> public function testInsertAtNegative()
<ide> $queue->add($one)->insertAt(-1, $two)->insertAt(-1, $three);
<ide>
<ide> $this->assertCount(3, $queue);
<del> $this->assertSame($three, $queue->get(0));
<del> $this->assertSame($two, $queue->get(1));
<add> $this->assertSame($two, $queue->get(0));
<add> $this->assertSame($three, $queue->get(1));
<ide> $this->assertSame($one, $queue->get(2));
<ide> }
<ide>
<ide> public function testInsertAfter()
<ide> };
<ide> $three = function () {
<ide> };
<add> $four = new DumbMiddleware();
<ide> $queue = new MiddlewareQueue();
<del> $queue->add($one)->add($two)->insertAfter(SampleMiddleware::class, $three);
<add> $queue
<add> ->add($one)
<add> ->add($two)
<add> ->insertAfter(SampleMiddleware::class, $three)
<add> ->insertAfter(SampleMiddleware::class, $four);
<ide>
<del> $this->assertCount(3, $queue);
<add> $this->assertCount(4, $queue);
<ide> $this->assertSame($one, $queue->get(0));
<del> $this->assertSame($three, $queue->get(1));
<del> $this->assertSame($two, $queue->get(2));
<add> $this->assertSame($four, $queue->get(1));
<add> $this->assertSame($three, $queue->get(2));
<add> $this->assertSame($two, $queue->get(3));
<ide>
<ide> $one = 'Sample';
<ide> $queue = new MiddlewareQueue();
<ide><path>tests/test_app/TestApp/Middleware/DumbMiddleware.php
<ide> *
<ide> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<ide> * @link http://cakephp.org CakePHP(tm) Project
<del> * @since 3.3.0
<add> * @since 3.3.1
<ide> * @license http://www.opensource.org/licenses/mit-license.php MIT License
<ide> */
<ide> namespace TestApp\Middleware;
<ide>
<ide> /**
<ide> * Testing stub for middleware tests.
<ide> */
<del>class DumbMiddleWare {
<del> function __invoke($request, $response, $next)
<add>class DumbMiddleware
<add>{
<add> public function __invoke($request, $response, $next)
<ide> {
<ide> return $next($request, $response);
<ide> } | 2 |
Text | Text | remove smartos from official binaries | ceca86740e9590e2500e2a9bc818ee7743849ac5 | <ide><path>BUILDING.md
<ide> Binaries at <https://nodejs.org/download/release/> are produced on:
<ide> | linux-ppc64le | CentOS 7 with devtoolset-6 / GCC 6 <sup>[7](#fn7)</sup> |
<ide> | linux-s390x | RHEL 7 with devtoolset-6 / GCC 6 <sup>[7](#fn7)</sup> |
<ide> | linux-x64 | CentOS 7 with devtoolset-6 / GCC 6 <sup>[7](#fn7)</sup> |
<del>| sunos-x64 | SmartOS 18 with GCC 7 |
<ide> | win-x64 and win-x86 | Windows 2012 R2 (x64) with Visual Studio 2019 |
<ide>
<ide> <em id="fn7">7</em>: The Enterprise Linux devtoolset-6 allows us to compile | 1 |
Javascript | Javascript | fix native event batching in concurrent mode | 89acfa639bcecebdb18276ba2c0f4d5beee2de19 | <ide><path>packages/react-debug-tools/src/__tests__/ReactDevToolsHooksIntegration-test.js
<ide> describe('React hooks DevTools integration', () => {
<ide> if (__DEV__) {
<ide> // First render was locked
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<ide>
<ide> // Release the lock
<ide> setSuspenseHandler(() => false);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide>
<ide> // Lock again
<ide> setSuspenseHandler(() => true);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<ide>
<ide> // Release the lock again
<ide> setSuspenseHandler(() => false);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide>
<ide> // Ensure it checks specific fibers.
<ide> setSuspenseHandler(f => f === fiber || f === fiber.alternate);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<ide> setSuspenseHandler(f => f !== fiber && f !== fiber.alternate);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide> } else {
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide> describe('React hooks DevTools integration', () => {
<ide> if (__DEV__) {
<ide> // First render was locked
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<ide>
<ide> // Release the lock
<ide> setSuspenseHandler(() => false);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> Scheduler.unstable_flushAll();
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide>
<ide> // Lock again
<ide> setSuspenseHandler(() => true);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<ide>
<ide> // Release the lock again
<ide> setSuspenseHandler(() => false);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide>
<ide> // Ensure it checks specific fibers.
<ide> setSuspenseHandler(f => f === fiber || f === fiber.alternate);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Loading']);
<ide> setSuspenseHandler(f => f !== fiber && f !== fiber.alternate);
<del> scheduleUpdate(fiber); // Re-render
<add> act(() => scheduleUpdate(fiber)); // Re-render
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide> } else {
<ide> expect(renderer.toJSON().children).toEqual(['Done']);
<ide><path>packages/react-dom/src/__tests__/ReactDOMNativeEventHeuristic-test.js
<ide> describe('ReactDOMNativeEventHeuristic-test', () => {
<ide> expect(container.textContent).toEqual('hovered');
<ide> });
<ide> });
<add>
<add> // @gate experimental
<add> it('should batch inside native events', async () => {
<add> const root = ReactDOM.unstable_createRoot(container);
<add>
<add> const target = React.createRef(null);
<add> function Foo() {
<add> const [count, setCount] = React.useState(0);
<add> const countRef = React.useRef(-1);
<add>
<add> React.useLayoutEffect(() => {
<add> countRef.current = count;
<add> target.current.onclick = () => {
<add> setCount(countRef.current + 1);
<add> // Now update again. If these updates are batched, then this should be
<add> // a no-op, because we didn't re-render yet and `countRef` hasn't
<add> // been mutated.
<add> setCount(countRef.current + 1);
<add> };
<add> });
<add> return <div ref={target}>Count: {count}</div>;
<add> }
<add>
<add> await act(async () => {
<add> root.render(<Foo />);
<add> });
<add> expect(container.textContent).toEqual('Count: 0');
<add>
<add> // Ignore act warning. We can't use act because it forces batched updates.
<add> spyOnDev(console, 'error');
<add>
<add> const pressEvent = document.createEvent('Event');
<add> pressEvent.initEvent('click', true, true);
<add> dispatchAndSetCurrentEvent(target.current, pressEvent);
<add> // If this is 2, that means the `setCount` calls were not batched.
<add> expect(container.textContent).toEqual('Count: 1');
<add>
<add> // Assert that the `act` warnings were the only ones that fired.
<add> if (__DEV__) {
<add> expect(console.error).toHaveBeenCalledTimes(2);
<add> expect(console.error.calls.argsFor(0)[0]).toContain(
<add> 'was not wrapped in act',
<add> );
<add> expect(console.error.calls.argsFor(1)[0]).toContain(
<add> 'was not wrapped in act',
<add> );
<add> }
<add> });
<ide> });
<ide><path>packages/react-reconciler/src/ReactFiberWorkLoop.new.js
<ide> export function scheduleUpdateOnFiber(
<ide> } else {
<ide> ensureRootIsScheduled(root, eventTime);
<ide> schedulePendingInteractions(root, lane);
<del> if (executionContext === NoContext) {
<add> if (
<add> executionContext === NoContext &&
<add> (fiber.mode & ConcurrentMode) === NoMode
<add> ) {
<ide> // Flush the synchronous work now, unless we're already working or inside
<ide> // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
<ide> // scheduleCallbackForFiber to preserve the ability to schedule a callback
<ide><path>packages/react-reconciler/src/ReactFiberWorkLoop.old.js
<ide> export function scheduleUpdateOnFiber(
<ide> } else {
<ide> ensureRootIsScheduled(root, eventTime);
<ide> schedulePendingInteractions(root, lane);
<del> if (executionContext === NoContext) {
<add> if (
<add> executionContext === NoContext &&
<add> (fiber.mode & ConcurrentMode) === NoMode
<add> ) {
<ide> // Flush the synchronous work now, unless we're already working or inside
<ide> // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
<ide> // scheduleCallbackForFiber to preserve the ability to schedule a callback | 4 |
PHP | PHP | fix code standards warnings | 3851ad08a678bfdb19a3c7f7ce16607cf52e37c4 | <ide><path>lib/Cake/Test/Case/I18n/L10nTest.php
<ide> class L10nTest extends CakeTestCase {
<ide> * @return void
<ide> */
<ide> public function testGet() {
<del> $l10n = new L10n();
<add> $localize = new L10n();
<ide>
<ide> // Catalog Entry
<del> $l10n->get('en');
<add> $localize->get('en');
<ide>
<del> $this->assertEquals($l10n->language, 'English');
<del> $this->assertEquals($l10n->languagePath, array('eng', 'eng'));
<del> $this->assertEquals($l10n->locale, 'eng');
<add> $this->assertEquals($localize->language, 'English');
<add> $this->assertEquals($localize->languagePath, array('eng', 'eng'));
<add> $this->assertEquals($localize->locale, 'eng');
<ide>
<ide> // Map Entry
<del> $l10n->get('eng');
<add> $localize->get('eng');
<ide>
<del> $this->assertEquals($l10n->language, 'English');
<del> $this->assertEquals($l10n->languagePath, array('eng', 'eng'));
<del> $this->assertEquals($l10n->locale, 'eng');
<add> $this->assertEquals($localize->language, 'English');
<add> $this->assertEquals($localize->languagePath, array('eng', 'eng'));
<add> $this->assertEquals($localize->locale, 'eng');
<ide>
<ide> // Catalog Entry
<del> $l10n->get('en-ca');
<add> $localize->get('en-ca');
<ide>
<del> $this->assertEquals($l10n->language, 'English (Canadian)');
<del> $this->assertEquals($l10n->languagePath, array('en_ca', 'eng'));
<del> $this->assertEquals($l10n->locale, 'en_ca');
<add> $this->assertEquals($localize->language, 'English (Canadian)');
<add> $this->assertEquals($localize->languagePath, array('en_ca', 'eng'));
<add> $this->assertEquals($localize->locale, 'en_ca');
<ide>
<ide> // Default Entry
<ide> define('DEFAULT_LANGUAGE', 'en-us');
<ide>
<del> $l10n->get('use_default');
<add> $localize->get('use_default');
<ide>
<del> $this->assertEquals($l10n->language, 'English (United States)');
<del> $this->assertEquals($l10n->languagePath, array('en_us', 'eng'));
<del> $this->assertEquals($l10n->locale, 'en_us');
<add> $this->assertEquals($localize->language, 'English (United States)');
<add> $this->assertEquals($localize->languagePath, array('en_us', 'eng'));
<add> $this->assertEquals($localize->locale, 'en_us');
<ide>
<del> $l10n->get('es');
<del> $l10n->get('');
<del> $this->assertEquals($l10n->lang, 'en-us');
<add> $localize->get('es');
<add> $localize->get('');
<add> $this->assertEquals($localize->lang, 'en-us');
<ide>
<ide> // Using $this->default
<del> $l10n = new L10n();
<add> $localize = new L10n();
<ide>
<del> $l10n->get('use_default');
<del> $this->assertEquals($l10n->language, 'English (United States)');
<del> $this->assertEquals($l10n->languagePath, array('en_us', 'eng', 'eng'));
<del> $this->assertEquals($l10n->locale, 'en_us');
<add> $localize->get('use_default');
<add> $this->assertEquals($localize->language, 'English (United States)');
<add> $this->assertEquals($localize->languagePath, array('en_us', 'eng', 'eng'));
<add> $this->assertEquals($localize->locale, 'en_us');
<ide> }
<ide>
<ide> /**
<ide> public function testGetAutoLanguage() {
<ide> $serverBackup = $_SERVER;
<ide> $_SERVER['HTTP_ACCEPT_LANGUAGE'] = 'inexistent,en-ca';
<ide>
<del> $l10n = new L10n();
<del> $l10n->get();
<add> $localize = new L10n();
<add> $localize->get();
<ide>
<del> $this->assertEquals($l10n->language, 'English (Canadian)');
<del> $this->assertEquals($l10n->languagePath, array('en_ca', 'eng', 'eng'));
<del> $this->assertEquals($l10n->locale, 'en_ca');
<add> $this->assertEquals($localize->language, 'English (Canadian)');
<add> $this->assertEquals($localize->languagePath, array('en_ca', 'eng', 'eng'));
<add> $this->assertEquals($localize->locale, 'en_ca');
<ide>
<ide> $_SERVER['HTTP_ACCEPT_LANGUAGE'] = 'es_mx';
<del> $l10n->get();
<add> $localize->get();
<ide>
<del> $this->assertEquals($l10n->language, 'Spanish (Mexican)');
<del> $this->assertEquals($l10n->languagePath, array('es_mx', 'spa', 'eng'));
<del> $this->assertEquals($l10n->locale, 'es_mx');
<add> $this->assertEquals($localize->language, 'Spanish (Mexican)');
<add> $this->assertEquals($localize->languagePath, array('es_mx', 'spa', 'eng'));
<add> $this->assertEquals($localize->locale, 'es_mx');
<ide>
<ide> $_SERVER['HTTP_ACCEPT_LANGUAGE'] = 'en_xy,en_ca';
<del> $l10n->get();
<add> $localize->get();
<ide>
<del> $this->assertEquals($l10n->language, 'English');
<del> $this->assertEquals($l10n->languagePath, array('eng', 'eng', 'eng'));
<del> $this->assertEquals($l10n->locale, 'eng');
<add> $this->assertEquals($localize->language, 'English');
<add> $this->assertEquals($localize->languagePath, array('eng', 'eng', 'eng'));
<add> $this->assertEquals($localize->locale, 'eng');
<ide>
<ide> $_SERVER = $serverBackup;
<ide> }
<ide> public function testGetAutoLanguage() {
<ide> * @return void
<ide> */
<ide> public function testMap() {
<del> $l10n = new L10n();
<add> $localize = new L10n();
<ide>
<del> $result = $l10n->map(array('afr', 'af'));
<add> $result = $localize->map(array('afr', 'af'));
<ide> $expected = array('afr' => 'af', 'af' => 'afr');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('alb', 'sq'));
<add> $result = $localize->map(array('alb', 'sq'));
<ide> $expected = array('alb' => 'sq', 'sq' => 'alb');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ara', 'ar'));
<add> $result = $localize->map(array('ara', 'ar'));
<ide> $expected = array('ara' => 'ar', 'ar' => 'ara');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('hye', 'hy'));
<add> $result = $localize->map(array('hye', 'hy'));
<ide> $expected = array('hye' => 'hy', 'hy' => 'hye');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('baq', 'eu'));
<add> $result = $localize->map(array('baq', 'eu'));
<ide> $expected = array('baq' => 'eu', 'eu' => 'baq');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('baq', 'eu'));
<add> $result = $localize->map(array('baq', 'eu'));
<ide> $expected = array('baq' => 'eu', 'eu' => 'baq');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('bos', 'bs'));
<add> $result = $localize->map(array('bos', 'bs'));
<ide> $expected = array('bos' => 'bs', 'bs' => 'bos');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('bul', 'bg'));
<add> $result = $localize->map(array('bul', 'bg'));
<ide> $expected = array('bul' => 'bg', 'bg' => 'bul');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('bel', 'be'));
<add> $result = $localize->map(array('bel', 'be'));
<ide> $expected = array('bel' => 'be', 'be' => 'bel');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('cat', 'ca'));
<add> $result = $localize->map(array('cat', 'ca'));
<ide> $expected = array('cat' => 'ca', 'ca' => 'cat');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('chi', 'zh'));
<add> $result = $localize->map(array('chi', 'zh'));
<ide> $expected = array('chi' => 'zh', 'zh' => 'chi');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('zho', 'zh'));
<add> $result = $localize->map(array('zho', 'zh'));
<ide> $expected = array('zho' => 'zh', 'zh' => 'chi');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('hrv', 'hr'));
<add> $result = $localize->map(array('hrv', 'hr'));
<ide> $expected = array('hrv' => 'hr', 'hr' => 'hrv');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ces', 'cs'));
<add> $result = $localize->map(array('ces', 'cs'));
<ide> $expected = array('ces' => 'cs', 'cs' => 'cze');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('cze', 'cs'));
<add> $result = $localize->map(array('cze', 'cs'));
<ide> $expected = array('cze' => 'cs', 'cs' => 'cze');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('dan', 'da'));
<add> $result = $localize->map(array('dan', 'da'));
<ide> $expected = array('dan' => 'da', 'da' => 'dan');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('dut', 'nl'));
<add> $result = $localize->map(array('dut', 'nl'));
<ide> $expected = array('dut' => 'nl', 'nl' => 'dut');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('nld', 'nl'));
<add> $result = $localize->map(array('nld', 'nl'));
<ide> $expected = array('nld' => 'nl', 'nl' => 'dut');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('nld'));
<add> $result = $localize->map(array('nld'));
<ide> $expected = array('nld' => 'nl');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('eng', 'en'));
<add> $result = $localize->map(array('eng', 'en'));
<ide> $expected = array('eng' => 'en', 'en' => 'eng');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('est', 'et'));
<add> $result = $localize->map(array('est', 'et'));
<ide> $expected = array('est' => 'et', 'et' => 'est');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('fao', 'fo'));
<add> $result = $localize->map(array('fao', 'fo'));
<ide> $expected = array('fao' => 'fo', 'fo' => 'fao');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('fas', 'fa'));
<add> $result = $localize->map(array('fas', 'fa'));
<ide> $expected = array('fas' => 'fa', 'fa' => 'fas');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('per', 'fa'));
<add> $result = $localize->map(array('per', 'fa'));
<ide> $expected = array('per' => 'fa', 'fa' => 'fas');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('fin', 'fi'));
<add> $result = $localize->map(array('fin', 'fi'));
<ide> $expected = array('fin' => 'fi', 'fi' => 'fin');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('fra', 'fr'));
<add> $result = $localize->map(array('fra', 'fr'));
<ide> $expected = array('fra' => 'fr', 'fr' => 'fre');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('fre', 'fr'));
<add> $result = $localize->map(array('fre', 'fr'));
<ide> $expected = array('fre' => 'fr', 'fr' => 'fre');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('gla', 'gd'));
<add> $result = $localize->map(array('gla', 'gd'));
<ide> $expected = array('gla' => 'gd', 'gd' => 'gla');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('glg', 'gl'));
<add> $result = $localize->map(array('glg', 'gl'));
<ide> $expected = array('glg' => 'gl', 'gl' => 'glg');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('deu', 'de'));
<add> $result = $localize->map(array('deu', 'de'));
<ide> $expected = array('deu' => 'de', 'de' => 'deu');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ger', 'de'));
<add> $result = $localize->map(array('ger', 'de'));
<ide> $expected = array('ger' => 'de', 'de' => 'deu');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ell', 'el'));
<add> $result = $localize->map(array('ell', 'el'));
<ide> $expected = array('ell' => 'el', 'el' => 'gre');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('gre', 'el'));
<add> $result = $localize->map(array('gre', 'el'));
<ide> $expected = array('gre' => 'el', 'el' => 'gre');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('heb', 'he'));
<add> $result = $localize->map(array('heb', 'he'));
<ide> $expected = array('heb' => 'he', 'he' => 'heb');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('hin', 'hi'));
<add> $result = $localize->map(array('hin', 'hi'));
<ide> $expected = array('hin' => 'hi', 'hi' => 'hin');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('hun', 'hu'));
<add> $result = $localize->map(array('hun', 'hu'));
<ide> $expected = array('hun' => 'hu', 'hu' => 'hun');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ice', 'is'));
<add> $result = $localize->map(array('ice', 'is'));
<ide> $expected = array('ice' => 'is', 'is' => 'ice');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('isl', 'is'));
<add> $result = $localize->map(array('isl', 'is'));
<ide> $expected = array('isl' => 'is', 'is' => 'ice');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ind', 'id'));
<add> $result = $localize->map(array('ind', 'id'));
<ide> $expected = array('ind' => 'id', 'id' => 'ind');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('gle', 'ga'));
<add> $result = $localize->map(array('gle', 'ga'));
<ide> $expected = array('gle' => 'ga', 'ga' => 'gle');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ita', 'it'));
<add> $result = $localize->map(array('ita', 'it'));
<ide> $expected = array('ita' => 'it', 'it' => 'ita');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('jpn', 'ja'));
<add> $result = $localize->map(array('jpn', 'ja'));
<ide> $expected = array('jpn' => 'ja', 'ja' => 'jpn');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('kor', 'ko'));
<add> $result = $localize->map(array('kor', 'ko'));
<ide> $expected = array('kor' => 'ko', 'ko' => 'kor');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('lav', 'lv'));
<add> $result = $localize->map(array('lav', 'lv'));
<ide> $expected = array('lav' => 'lv', 'lv' => 'lav');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('lit', 'lt'));
<add> $result = $localize->map(array('lit', 'lt'));
<ide> $expected = array('lit' => 'lt', 'lt' => 'lit');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('mac', 'mk'));
<add> $result = $localize->map(array('mac', 'mk'));
<ide> $expected = array('mac' => 'mk', 'mk' => 'mac');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('mkd', 'mk'));
<add> $result = $localize->map(array('mkd', 'mk'));
<ide> $expected = array('mkd' => 'mk', 'mk' => 'mac');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('may', 'ms'));
<add> $result = $localize->map(array('may', 'ms'));
<ide> $expected = array('may' => 'ms', 'ms' => 'may');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('msa', 'ms'));
<add> $result = $localize->map(array('msa', 'ms'));
<ide> $expected = array('msa' => 'ms', 'ms' => 'may');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('mlt', 'mt'));
<add> $result = $localize->map(array('mlt', 'mt'));
<ide> $expected = array('mlt' => 'mt', 'mt' => 'mlt');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('nor', 'no'));
<add> $result = $localize->map(array('nor', 'no'));
<ide> $expected = array('nor' => 'no', 'no' => 'nor');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('nob', 'nb'));
<add> $result = $localize->map(array('nob', 'nb'));
<ide> $expected = array('nob' => 'nb', 'nb' => 'nob');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('nno', 'nn'));
<add> $result = $localize->map(array('nno', 'nn'));
<ide> $expected = array('nno' => 'nn', 'nn' => 'nno');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('pol', 'pl'));
<add> $result = $localize->map(array('pol', 'pl'));
<ide> $expected = array('pol' => 'pl', 'pl' => 'pol');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('por', 'pt'));
<add> $result = $localize->map(array('por', 'pt'));
<ide> $expected = array('por' => 'pt', 'pt' => 'por');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('roh', 'rm'));
<add> $result = $localize->map(array('roh', 'rm'));
<ide> $expected = array('roh' => 'rm', 'rm' => 'roh');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ron', 'ro'));
<add> $result = $localize->map(array('ron', 'ro'));
<ide> $expected = array('ron' => 'ro', 'ro' => 'rum');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('rum', 'ro'));
<add> $result = $localize->map(array('rum', 'ro'));
<ide> $expected = array('rum' => 'ro', 'ro' => 'rum');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('rus', 'ru'));
<add> $result = $localize->map(array('rus', 'ru'));
<ide> $expected = array('rus' => 'ru', 'ru' => 'rus');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('smi', 'sz'));
<add> $result = $localize->map(array('smi', 'sz'));
<ide> $expected = array('smi' => 'sz', 'sz' => 'smi');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('scc', 'sr'));
<add> $result = $localize->map(array('scc', 'sr'));
<ide> $expected = array('scc' => 'sr', 'sr' => 'scc');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('srp', 'sr'));
<add> $result = $localize->map(array('srp', 'sr'));
<ide> $expected = array('srp' => 'sr', 'sr' => 'scc');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('slk', 'sk'));
<add> $result = $localize->map(array('slk', 'sk'));
<ide> $expected = array('slk' => 'sk', 'sk' => 'slo');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('slo', 'sk'));
<add> $result = $localize->map(array('slo', 'sk'));
<ide> $expected = array('slo' => 'sk', 'sk' => 'slo');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('slv', 'sl'));
<add> $result = $localize->map(array('slv', 'sl'));
<ide> $expected = array('slv' => 'sl', 'sl' => 'slv');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('wen', 'sb'));
<add> $result = $localize->map(array('wen', 'sb'));
<ide> $expected = array('wen' => 'sb', 'sb' => 'wen');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('spa', 'es'));
<add> $result = $localize->map(array('spa', 'es'));
<ide> $expected = array('spa' => 'es', 'es' => 'spa');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('swe', 'sv'));
<add> $result = $localize->map(array('swe', 'sv'));
<ide> $expected = array('swe' => 'sv', 'sv' => 'swe');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('tha', 'th'));
<add> $result = $localize->map(array('tha', 'th'));
<ide> $expected = array('tha' => 'th', 'th' => 'tha');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('tso', 'ts'));
<add> $result = $localize->map(array('tso', 'ts'));
<ide> $expected = array('tso' => 'ts', 'ts' => 'tso');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('tsn', 'tn'));
<add> $result = $localize->map(array('tsn', 'tn'));
<ide> $expected = array('tsn' => 'tn', 'tn' => 'tsn');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('tur', 'tr'));
<add> $result = $localize->map(array('tur', 'tr'));
<ide> $expected = array('tur' => 'tr', 'tr' => 'tur');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ukr', 'uk'));
<add> $result = $localize->map(array('ukr', 'uk'));
<ide> $expected = array('ukr' => 'uk', 'uk' => 'ukr');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('urd', 'ur'));
<add> $result = $localize->map(array('urd', 'ur'));
<ide> $expected = array('urd' => 'ur', 'ur' => 'urd');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('ven', 've'));
<add> $result = $localize->map(array('ven', 've'));
<ide> $expected = array('ven' => 've', 've' => 'ven');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('vie', 'vi'));
<add> $result = $localize->map(array('vie', 'vi'));
<ide> $expected = array('vie' => 'vi', 'vi' => 'vie');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('xho', 'xh'));
<add> $result = $localize->map(array('xho', 'xh'));
<ide> $expected = array('xho' => 'xh', 'xh' => 'xho');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('cy', 'cym'));
<add> $result = $localize->map(array('cy', 'cym'));
<ide> $expected = array('cym' => 'cy', 'cy' => 'cym');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('yid', 'yi'));
<add> $result = $localize->map(array('yid', 'yi'));
<ide> $expected = array('yid' => 'yi', 'yi' => 'yid');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->map(array('zul', 'zu'));
<add> $result = $localize->map(array('zul', 'zu'));
<ide> $expected = array('zul' => 'zu', 'zu' => 'zul');
<ide> $this->assertEquals($expected, $result);
<ide> }
<ide> public function testMap() {
<ide> * @return void
<ide> */
<ide> public function testCatalog() {
<del> $l10n = new L10n();
<add> $localize = new L10n();
<ide>
<del> $result = $l10n->catalog(array('af'));
<add> $result = $localize->catalog(array('af'));
<ide> $expected = array(
<ide> 'af' => array('language' => 'Afrikaans', 'locale' => 'afr', 'localeFallback' => 'afr', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ar', 'ar-ae', 'ar-bh', 'ar-dz', 'ar-eg', 'ar-iq', 'ar-jo', 'ar-kw', 'ar-lb', 'ar-ly', 'ar-ma',
<add> $result = $localize->catalog(array('ar', 'ar-ae', 'ar-bh', 'ar-dz', 'ar-eg', 'ar-iq', 'ar-jo', 'ar-kw', 'ar-lb', 'ar-ly', 'ar-ma',
<ide> 'ar-om', 'ar-qa', 'ar-sa', 'ar-sy', 'ar-tn', 'ar-ye'));
<ide> $expected = array(
<ide> 'ar' => array('language' => 'Arabic', 'locale' => 'ara', 'localeFallback' => 'ara', 'charset' => 'utf-8', 'direction' => 'rtl'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('be'));
<add> $result = $localize->catalog(array('be'));
<ide> $expected = array(
<ide> 'be' => array('language' => 'Byelorussian', 'locale' => 'bel', 'localeFallback' => 'bel', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('bg'));
<add> $result = $localize->catalog(array('bg'));
<ide> $expected = array(
<ide> 'bg' => array('language' => 'Bulgarian', 'locale' => 'bul', 'localeFallback' => 'bul', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('bs'));
<add> $result = $localize->catalog(array('bs'));
<ide> $expected = array(
<ide> 'bs' => array('language' => 'Bosnian', 'locale' => 'bos', 'localeFallback' => 'bos', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ca'));
<add> $result = $localize->catalog(array('ca'));
<ide> $expected = array(
<ide> 'ca' => array('language' => 'Catalan', 'locale' => 'cat', 'localeFallback' => 'cat', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('cs'));
<add> $result = $localize->catalog(array('cs'));
<ide> $expected = array(
<ide> 'cs' => array('language' => 'Czech', 'locale' => 'cze', 'localeFallback' => 'cze', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('da'));
<add> $result = $localize->catalog(array('da'));
<ide> $expected = array(
<ide> 'da' => array('language' => 'Danish', 'locale' => 'dan', 'localeFallback' => 'dan', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('de', 'de-at', 'de-ch', 'de-de', 'de-li', 'de-lu'));
<add> $result = $localize->catalog(array('de', 'de-at', 'de-ch', 'de-de', 'de-li', 'de-lu'));
<ide> $expected = array(
<ide> 'de' => array('language' => 'German (Standard)', 'locale' => 'deu', 'localeFallback' => 'deu', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'de-at' => array('language' => 'German (Austria)', 'locale' => 'de_at', 'localeFallback' => 'deu', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('e', 'el'));
<add> $result = $localize->catalog(array('e', 'el'));
<ide> $expected = array(
<ide> 'e' => array('language' => 'Greek', 'locale' => 'gre', 'localeFallback' => 'gre', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'el' => array('language' => 'Greek', 'locale' => 'gre', 'localeFallback' => 'gre', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('en', 'en-au', 'en-bz', 'en-ca', 'en-gb', 'en-ie', 'en-jm', 'en-nz', 'en-tt', 'en-us', 'en-za'));
<add> $result = $localize->catalog(array('en', 'en-au', 'en-bz', 'en-ca', 'en-gb', 'en-ie', 'en-jm', 'en-nz', 'en-tt', 'en-us', 'en-za'));
<ide> $expected = array(
<ide> 'en' => array('language' => 'English', 'locale' => 'eng', 'localeFallback' => 'eng', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'en-au' => array('language' => 'English (Australian)', 'locale' => 'en_au', 'localeFallback' => 'eng', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('es', 'es-ar', 'es-bo', 'es-cl', 'es-co', 'es-cr', 'es-do', 'es-ec', 'es-es', 'es-gt', 'es-hn',
<add> $result = $localize->catalog(array('es', 'es-ar', 'es-bo', 'es-cl', 'es-co', 'es-cr', 'es-do', 'es-ec', 'es-es', 'es-gt', 'es-hn',
<ide> 'es-mx', 'es-ni', 'es-pa', 'es-pe', 'es-pr', 'es-py', 'es-sv', 'es-uy', 'es-ve'));
<ide> $expected = array(
<ide> 'es' => array('language' => 'Spanish (Spain - Traditional)', 'locale' => 'spa', 'localeFallback' => 'spa', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('et'));
<add> $result = $localize->catalog(array('et'));
<ide> $expected = array(
<ide> 'et' => array('language' => 'Estonian', 'locale' => 'est', 'localeFallback' => 'est', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('eu'));
<add> $result = $localize->catalog(array('eu'));
<ide> $expected = array(
<ide> 'eu' => array('language' => 'Basque', 'locale' => 'baq', 'localeFallback' => 'baq', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('fa'));
<add> $result = $localize->catalog(array('fa'));
<ide> $expected = array(
<ide> 'fa' => array('language' => 'Farsi', 'locale' => 'per', 'localeFallback' => 'per', 'charset' => 'utf-8', 'direction' => 'rtl')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('fi'));
<add> $result = $localize->catalog(array('fi'));
<ide> $expected = array(
<ide> 'fi' => array('language' => 'Finnish', 'locale' => 'fin', 'localeFallback' => 'fin', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('fo'));
<add> $result = $localize->catalog(array('fo'));
<ide> $expected = array(
<ide> 'fo' => array('language' => 'Faeroese', 'locale' => 'fao', 'localeFallback' => 'fao', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('fr', 'fr-be', 'fr-ca', 'fr-ch', 'fr-fr', 'fr-lu'));
<add> $result = $localize->catalog(array('fr', 'fr-be', 'fr-ca', 'fr-ch', 'fr-fr', 'fr-lu'));
<ide> $expected = array(
<ide> 'fr' => array('language' => 'French (Standard)', 'locale' => 'fre', 'localeFallback' => 'fre', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'fr-be' => array('language' => 'French (Belgium)', 'locale' => 'fr_be', 'localeFallback' => 'fre', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ga'));
<add> $result = $localize->catalog(array('ga'));
<ide> $expected = array(
<ide> 'ga' => array('language' => 'Irish', 'locale' => 'gle', 'localeFallback' => 'gle', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('gd', 'gd-ie'));
<add> $result = $localize->catalog(array('gd', 'gd-ie'));
<ide> $expected = array(
<ide> 'gd' => array('language' => 'Gaelic (Scots)', 'locale' => 'gla', 'localeFallback' => 'gla', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'gd-ie' => array('language' => 'Gaelic (Irish)', 'locale' => 'gd_ie', 'localeFallback' => 'gla', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('gl'));
<add> $result = $localize->catalog(array('gl'));
<ide> $expected = array(
<ide> 'gl' => array('language' => 'Galician', 'locale' => 'glg', 'localeFallback' => 'glg', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('he'));
<add> $result = $localize->catalog(array('he'));
<ide> $expected = array(
<ide> 'he' => array('language' => 'Hebrew', 'locale' => 'heb', 'localeFallback' => 'heb', 'charset' => 'utf-8', 'direction' => 'rtl')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('hi'));
<add> $result = $localize->catalog(array('hi'));
<ide> $expected = array(
<ide> 'hi' => array('language' => 'Hindi', 'locale' => 'hin', 'localeFallback' => 'hin', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('hr'));
<add> $result = $localize->catalog(array('hr'));
<ide> $expected = array(
<ide> 'hr' => array('language' => 'Croatian', 'locale' => 'hrv', 'localeFallback' => 'hrv', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('hu'));
<add> $result = $localize->catalog(array('hu'));
<ide> $expected = array(
<ide> 'hu' => array('language' => 'Hungarian', 'locale' => 'hun', 'localeFallback' => 'hun', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('hy'));
<add> $result = $localize->catalog(array('hy'));
<ide> $expected = array(
<ide> 'hy' => array('language' => 'Armenian - Armenia', 'locale' => 'hye', 'localeFallback' => 'hye', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('id', 'in'));
<add> $result = $localize->catalog(array('id', 'in'));
<ide> $expected = array(
<ide> 'id' => array('language' => 'Indonesian', 'locale' => 'ind', 'localeFallback' => 'ind', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'in' => array('language' => 'Indonesian', 'locale' => 'ind', 'localeFallback' => 'ind', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('is'));
<add> $result = $localize->catalog(array('is'));
<ide> $expected = array(
<ide> 'is' => array('language' => 'Icelandic', 'locale' => 'ice', 'localeFallback' => 'ice', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('it', 'it-ch'));
<add> $result = $localize->catalog(array('it', 'it-ch'));
<ide> $expected = array(
<ide> 'it' => array('language' => 'Italian', 'locale' => 'ita', 'localeFallback' => 'ita', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'it-ch' => array('language' => 'Italian (Swiss) ', 'locale' => 'it_ch', 'localeFallback' => 'ita', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ja'));
<add> $result = $localize->catalog(array('ja'));
<ide> $expected = array(
<ide> 'ja' => array('language' => 'Japanese', 'locale' => 'jpn', 'localeFallback' => 'jpn', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ko', 'ko-kp', 'ko-kr'));
<add> $result = $localize->catalog(array('ko', 'ko-kp', 'ko-kr'));
<ide> $expected = array(
<ide> 'ko' => array('language' => 'Korean', 'locale' => 'kor', 'localeFallback' => 'kor', 'charset' => 'kr', 'direction' => 'ltr'),
<ide> 'ko-kp' => array('language' => 'Korea (North)', 'locale' => 'ko_kp', 'localeFallback' => 'kor', 'charset' => 'kr', 'direction' => 'ltr'),
<ide> 'ko-kr' => array('language' => 'Korea (South)', 'locale' => 'ko_kr', 'localeFallback' => 'kor', 'charset' => 'kr', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('koi8-r', 'ru', 'ru-mo'));
<add> $result = $localize->catalog(array('koi8-r', 'ru', 'ru-mo'));
<ide> $expected = array(
<ide> 'koi8-r' => array('language' => 'Russian', 'locale' => 'koi8_r', 'localeFallback' => 'rus', 'charset' => 'koi8-r', 'direction' => 'ltr'),
<ide> 'ru' => array('language' => 'Russian', 'locale' => 'rus', 'localeFallback' => 'rus', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'ru-mo' => array('language' => 'Russian (Moldavia)', 'locale' => 'ru_mo', 'localeFallback' => 'rus', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('lt'));
<add> $result = $localize->catalog(array('lt'));
<ide> $expected = array(
<ide> 'lt' => array('language' => 'Lithuanian', 'locale' => 'lit', 'localeFallback' => 'lit', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('lv'));
<add> $result = $localize->catalog(array('lv'));
<ide> $expected = array(
<ide> 'lv' => array('language' => 'Latvian', 'locale' => 'lav', 'localeFallback' => 'lav', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('mk', 'mk-mk'));
<add> $result = $localize->catalog(array('mk', 'mk-mk'));
<ide> $expected = array(
<ide> 'mk' => array('language' => 'FYRO Macedonian', 'locale' => 'mk', 'localeFallback' => 'mac', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'mk-mk' => array('language' => 'Macedonian', 'locale' => 'mk_mk', 'localeFallback' => 'mac', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ms'));
<add> $result = $localize->catalog(array('ms'));
<ide> $expected = array(
<ide> 'ms' => array('language' => 'Malaysian', 'locale' => 'may', 'localeFallback' => 'may', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('mt'));
<add> $result = $localize->catalog(array('mt'));
<ide> $expected = array(
<ide> 'mt' => array('language' => 'Maltese', 'locale' => 'mlt', 'localeFallback' => 'mlt', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('n', 'nl', 'nl-be'));
<add> $result = $localize->catalog(array('n', 'nl', 'nl-be'));
<ide> $expected = array(
<ide> 'n' => array('language' => 'Dutch (Standard)', 'locale' => 'dut', 'localeFallback' => 'dut', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'nl' => array('language' => 'Dutch (Standard)', 'locale' => 'dut', 'localeFallback' => 'dut', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'nl-be' => array('language' => 'Dutch (Belgium)', 'locale' => 'nl_be', 'localeFallback' => 'dut', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog('nl');
<add> $result = $localize->catalog('nl');
<ide> $expected = array('language' => 'Dutch (Standard)', 'locale' => 'dut', 'localeFallback' => 'dut', 'charset' => 'utf-8', 'direction' => 'ltr');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog('nld');
<add> $result = $localize->catalog('nld');
<ide> $expected = array('language' => 'Dutch (Standard)', 'locale' => 'dut', 'localeFallback' => 'dut', 'charset' => 'utf-8', 'direction' => 'ltr');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog('dut');
<add> $result = $localize->catalog('dut');
<ide> $expected = array('language' => 'Dutch (Standard)', 'locale' => 'dut', 'localeFallback' => 'dut', 'charset' => 'utf-8', 'direction' => 'ltr');
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('nb'));
<add> $result = $localize->catalog(array('nb'));
<ide> $expected = array(
<ide> 'nb' => array('language' => 'Norwegian Bokmal', 'locale' => 'nob', 'localeFallback' => 'nor', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('nn', 'no'));
<add> $result = $localize->catalog(array('nn', 'no'));
<ide> $expected = array(
<ide> 'nn' => array('language' => 'Norwegian Nynorsk', 'locale' => 'nno', 'localeFallback' => 'nor', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'no' => array('language' => 'Norwegian', 'locale' => 'nor', 'localeFallback' => 'nor', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('p', 'pl'));
<add> $result = $localize->catalog(array('p', 'pl'));
<ide> $expected = array(
<ide> 'p' => array('language' => 'Polish', 'locale' => 'pol', 'localeFallback' => 'pol', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'pl' => array('language' => 'Polish', 'locale' => 'pol', 'localeFallback' => 'pol', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('pt', 'pt-br'));
<add> $result = $localize->catalog(array('pt', 'pt-br'));
<ide> $expected = array(
<ide> 'pt' => array('language' => 'Portuguese (Portugal)', 'locale' => 'por', 'localeFallback' => 'por', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'pt-br' => array('language' => 'Portuguese (Brazil)', 'locale' => 'pt_br', 'localeFallback' => 'por', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('rm'));
<add> $result = $localize->catalog(array('rm'));
<ide> $expected = array(
<ide> 'rm' => array('language' => 'Rhaeto-Romanic', 'locale' => 'roh', 'localeFallback' => 'roh', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ro', 'ro-mo'));
<add> $result = $localize->catalog(array('ro', 'ro-mo'));
<ide> $expected = array(
<ide> 'ro' => array('language' => 'Romanian', 'locale' => 'rum', 'localeFallback' => 'rum', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'ro-mo' => array('language' => 'Romanian (Moldavia)', 'locale' => 'ro_mo', 'localeFallback' => 'rum', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sb'));
<add> $result = $localize->catalog(array('sb'));
<ide> $expected = array(
<ide> 'sb' => array('language' => 'Sorbian', 'locale' => 'wen', 'localeFallback' => 'wen', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sk'));
<add> $result = $localize->catalog(array('sk'));
<ide> $expected = array(
<ide> 'sk' => array('language' => 'Slovak', 'locale' => 'slo', 'localeFallback' => 'slo', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sl'));
<add> $result = $localize->catalog(array('sl'));
<ide> $expected = array(
<ide> 'sl' => array('language' => 'Slovenian', 'locale' => 'slv', 'localeFallback' => 'slv', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sq'));
<add> $result = $localize->catalog(array('sq'));
<ide> $expected = array(
<ide> 'sq' => array('language' => 'Albanian', 'locale' => 'alb', 'localeFallback' => 'alb', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sr'));
<add> $result = $localize->catalog(array('sr'));
<ide> $expected = array(
<ide> 'sr' => array('language' => 'Serbian', 'locale' => 'scc', 'localeFallback' => 'scc', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sv', 'sv-fi'));
<add> $result = $localize->catalog(array('sv', 'sv-fi'));
<ide> $expected = array(
<ide> 'sv' => array('language' => 'Swedish', 'locale' => 'swe', 'localeFallback' => 'swe', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'sv-fi' => array('language' => 'Swedish (Finland)', 'locale' => 'sv_fi', 'localeFallback' => 'swe', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sx'));
<add> $result = $localize->catalog(array('sx'));
<ide> $expected = array(
<ide> 'sx' => array('language' => 'Sutu', 'locale' => 'sx', 'localeFallback' => 'sx', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('sz'));
<add> $result = $localize->catalog(array('sz'));
<ide> $expected = array(
<ide> 'sz' => array('language' => 'Sami (Lappish)', 'locale' => 'smi', 'localeFallback' => 'smi', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('th'));
<add> $result = $localize->catalog(array('th'));
<ide> $expected = array(
<ide> 'th' => array('language' => 'Thai', 'locale' => 'tha', 'localeFallback' => 'tha', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('tn'));
<add> $result = $localize->catalog(array('tn'));
<ide> $expected = array(
<ide> 'tn' => array('language' => 'Tswana', 'locale' => 'tsn', 'localeFallback' => 'tsn', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('tr'));
<add> $result = $localize->catalog(array('tr'));
<ide> $expected = array(
<ide> 'tr' => array('language' => 'Turkish', 'locale' => 'tur', 'localeFallback' => 'tur', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ts'));
<add> $result = $localize->catalog(array('ts'));
<ide> $expected = array(
<ide> 'ts' => array('language' => 'Tsonga', 'locale' => 'tso', 'localeFallback' => 'tso', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('uk'));
<add> $result = $localize->catalog(array('uk'));
<ide> $expected = array(
<ide> 'uk' => array('language' => 'Ukrainian', 'locale' => 'ukr', 'localeFallback' => 'ukr', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ur'));
<add> $result = $localize->catalog(array('ur'));
<ide> $expected = array(
<ide> 'ur' => array('language' => 'Urdu', 'locale' => 'urd', 'localeFallback' => 'urd', 'charset' => 'utf-8', 'direction' => 'rtl')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('ve'));
<add> $result = $localize->catalog(array('ve'));
<ide> $expected = array(
<ide> 've' => array('language' => 'Venda', 'locale' => 'ven', 'localeFallback' => 'ven', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('vi'));
<add> $result = $localize->catalog(array('vi'));
<ide> $expected = array(
<ide> 'vi' => array('language' => 'Vietnamese', 'locale' => 'vie', 'localeFallback' => 'vie', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('cy'));
<add> $result = $localize->catalog(array('cy'));
<ide> $expected = array(
<ide> 'cy' => array('language' => 'Welsh', 'locale' => 'cym', 'localeFallback' => 'cym', 'charset' => 'utf-8',
<ide> 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('xh'));
<add> $result = $localize->catalog(array('xh'));
<ide> $expected = array(
<ide> 'xh' => array('language' => 'Xhosa', 'locale' => 'xho', 'localeFallback' => 'xho', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('yi'));
<add> $result = $localize->catalog(array('yi'));
<ide> $expected = array(
<ide> 'yi' => array('language' => 'Yiddish', 'locale' => 'yid', 'localeFallback' => 'yid', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('zh', 'zh-cn', 'zh-hk', 'zh-sg', 'zh-tw'));
<add> $result = $localize->catalog(array('zh', 'zh-cn', 'zh-hk', 'zh-sg', 'zh-tw'));
<ide> $expected = array(
<ide> 'zh' => array('language' => 'Chinese', 'locale' => 'chi', 'localeFallback' => 'chi', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'zh-cn' => array('language' => 'Chinese (PRC)', 'locale' => 'zh_cn', 'localeFallback' => 'chi', 'charset' => 'GB2312', 'direction' => 'ltr'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('zu'));
<add> $result = $localize->catalog(array('zu'));
<ide> $expected = array(
<ide> 'zu' => array('language' => 'Zulu', 'locale' => 'zul', 'localeFallback' => 'zul', 'charset' => 'utf-8', 'direction' => 'ltr')
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('en-nz', 'es-do', 'sz', 'ar-lb', 'zh-hk', 'pt-br'));
<add> $result = $localize->catalog(array('en-nz', 'es-do', 'sz', 'ar-lb', 'zh-hk', 'pt-br'));
<ide> $expected = array(
<ide> 'en-nz' => array('language' => 'English (New Zealand)', 'locale' => 'en_nz', 'localeFallback' => 'eng', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'es-do' => array('language' => 'Spanish (Dominican Republic)', 'locale' => 'es_do', 'localeFallback' => 'spa', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> public function testCatalog() {
<ide> );
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = $l10n->catalog(array('eng', 'deu', 'zho', 'rum', 'zul', 'yid'));
<add> $result = $localize->catalog(array('eng', 'deu', 'zho', 'rum', 'zul', 'yid'));
<ide> $expected = array(
<ide> 'eng' => array('language' => 'English', 'locale' => 'eng', 'localeFallback' => 'eng', 'charset' => 'utf-8', 'direction' => 'ltr'),
<ide> 'deu' => array('language' => 'German (Standard)', 'locale' => 'deu', 'localeFallback' => 'deu', 'charset' => 'utf-8', 'direction' => 'ltr'), | 1 |
Ruby | Ruby | add `to_bottle_hash` method | 8b0f7e7ada254c8b2a046776174b6ff1b80a5499 | <ide><path>Library/Homebrew/formula.rb
<ide> def to_hash
<ide> hsh
<ide> end
<ide>
<add> # @api private
<add> # Generate a hash to be used to install a formula from a JSON file
<add> def to_bottle_hash(top_level: true)
<add> bottle = bottle_hash
<add>
<add> bottles = bottle["files"].map do |tag, file|
<add> info = {
<add> "url" => file["url"],
<add> "sha256" => file["sha256"],
<add> }
<add> [tag.to_s, info]
<add> end.to_h
<add>
<add> return bottles unless top_level
<add>
<add> {
<add> "bottles" => bottles,
<add> "dependencies" => deps.map { |dep| dep.to_formula.to_bottle_hash(top_level: false) },
<add> }
<add> end
<add>
<ide> # Returns the bottle information for a formula
<ide> def bottle_hash
<ide> bottle_spec = stable.bottle_specification
<ide><path>Library/Homebrew/test/formula_spec.rb
<ide> expect(h["versions"]["bottle"]).to be_truthy
<ide> end
<ide>
<add> specify "#to_bottle_hash" do
<add> f1 = formula "foo" do
<add> url "foo-1.0"
<add>
<add> bottle do
<add> sha256 cellar: :any, Utils::Bottles.tag.to_sym => TEST_SHA256
<add> sha256 cellar: :any, foo: TEST_SHA256
<add> end
<add> end
<add>
<add> h = f1.to_bottle_hash
<add>
<add> expect(h).to be_a(Hash)
<add> expect(h["bottles"].keys).to eq [Utils::Bottles.tag.to_s, "x86_64_foo"]
<add> expect(h["bottles"][Utils::Bottles.tag.to_s].keys).to eq ["url", "sha256"]
<add> expect(h["bottles"][Utils::Bottles.tag.to_s]["sha256"]).to eq TEST_SHA256
<add> expect(h["dependencies"]).to eq []
<add> end
<add>
<ide> describe "#eligible_kegs_for_cleanup" do
<ide> it "returns Kegs eligible for cleanup" do
<ide> f1 = Class.new(Testball) do | 2 |
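The patch above gives `Formula#to_bottle_hash` a nested layout: a top-level `"bottles"` map keyed by platform tag, plus a `"dependencies"` array holding each dependency's own tag-to-bottle map (the `top_level: false` branch). The Ruby sketch below only illustrates that shape with invented values — the formula name, tag, URL and checksum are placeholders, not real Homebrew output.

```ruby
require "json"

# Hypothetical shape for a formula "foo" with one bottle and no dependencies.
bottle_hash = {
  "bottles" => {
    "big_sur" => {
      "url"    => "https://example.com/bottles/foo-1.0.big_sur.bottle.tar.gz",
      "sha256" => "a" * 64, # placeholder checksum
    },
  },
  # Each dependency entry would be that formula's own tag => url/sha256 map.
  "dependencies" => [],
}

puts JSON.pretty_generate(bottle_hash)
```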
Javascript | Javascript | move `mergevertices` to `buffergeometryutils` | 7f521d49556b9fc37b335a3f2875bd17d55348bd | <ide><path>examples/js/utils/BufferGeometryUtils.js
<ide> THREE.BufferGeometryUtils = {
<ide> mem += indices ? indices.count * indices.itemSize * indices.array.BYTES_PER_ELEMENT : 0;
<ide> return mem;
<ide>
<add> },
<add>
<add> /**
<add> * @param {THREE.BufferGeometry} geometry
<add> * @param {number} tolerance
<add> * @return {THREE.BufferGeometry>}
<add> */
<add> mergeVertices: function ( geometry, tolerance = 1e-4 ) {
<add>
<add> tolerance = Math.max( tolerance, Number.EPSILON );
<add>
<add> // Generate an index buffer if the geometry doesn't have one, or optimize it
<add> // if it's already available.
<add> var hashToIndex = {};
<add> var indices = geometry.getIndex();
<add> var positions = geometry.getAttribute( 'position' );
<add> var vertexCount = indices ? indices.count : positions.count;
<add>
<add> // next value for triangle indices
<add> var nextIndex = 0;
<add>
<add> // attributes and new attribute arrays
<add> var attributeNames = Object.keys( geometry.attributes );
<add> var attrArrays = {};
<add> var morphAttrsArrays = {};
<add> var newIndices = [];
<add> var getters = [ 'getX', 'getY', 'getZ', 'getW' ];
<add>
<add> // initialize the arrays
<add> for ( var name of attributeNames ) {
<add>
<add> attrArrays[ name ] = [];
<add>
<add> var morphAttr = geometry.morphAttributes[ name ];
<add> if ( morphAttr ) {
<add>
<add> morphAttrsArrays[ name ] = new Array( morphAttr.length ).fill().map( () => [] );
<add>
<add> }
<add>
<add> }
<add>
<add> // convert the error tolerance to an amount of decimal places to truncate to
<add> var decimalShift = Math.log10( 1 / tolerance );
<add> var shiftMultiplier = Math.pow( 10, decimalShift );
<add> for ( var i = 0; i < vertexCount; i ++ ) {
<add>
<add> var index = indices ? indices.getX( i ) : i;
<add>
<add> // Generate a hash for the vertex attributes at the current index 'i'
<add> var hash = '';
<add> for ( var j = 0, l = attributeNames.length; j < l; j ++ ) {
<add>
<add> var name = attributeNames[ j ];
<add> var attribute = geometry.getAttribute( name );
<add> var itemSize = attribute.itemSize;
<add>
<add> for ( var k = 0; k < itemSize; k ++ ) {
<add>
<add> // double tilde truncates the decimal value
<add> hash += `${ ~ ~ ( attribute[ getters[ k ] ]( index ) * shiftMultiplier ) },`;
<add>
<add> }
<add>
<add> }
<add>
<add> // Add another reference to the vertex if it's already
<add> // used by another index
<add> if ( hash in hashToIndex ) {
<add>
<add> newIndices.push( hashToIndex[ hash ] );
<add>
<add> } else {
<add>
<add> // copy data to the new index in the attribute arrays
<add> for ( var j = 0, l = attributeNames.length; j < l; j ++ ) {
<add>
<add> var name = attributeNames[ j ];
<add> var attribute = geometry.getAttribute( name );
<add> var morphAttr = geometry.morphAttributes[ name ];
<add> var itemSize = attribute.itemSize;
<add> var newarray = attrArrays[ name ];
<add> var newMorphArrays = morphAttrsArrays[ name ];
<add>
<add> for ( var k = 0; k < itemSize; k ++ ) {
<add>
<add> var getterFunc = getters[ k ];
<add> newarray.push( attribute[ getterFunc ]( index ) );
<add>
<add> if ( morphAttr ) {
<add>
<add> for ( var m = 0, ml = morphAttr.length; m < ml; m ++ ) {
<add>
<add> newMorphArrays[ m ].push( morphAttr[ m ][ getterFunc ]( index ) );
<add>
<add> }
<add>
<add> }
<add>
<add> }
<add>
<add> }
<add>
<add> hashToIndex[ hash ] = nextIndex;
<add> newIndices.push( nextIndex );
<add> nextIndex ++;
<add>
<add> }
<add>
<add> }
<add>
<add> // Generate typed arrays from new attribute arrays and update
<add> // the attributeBuffers
<add> const result = geometry.clone();
<add> for ( var i = 0, l = attributeNames.length; i < l; i ++ ) {
<add>
<add> var name = attributeNames[ i ];
<add> var oldAttribute = geometry.getAttribute( name );
<add> var attribute;
<add>
<add> var buffer = new oldAttribute.array.constructor( attrArrays[ name ] );
<add> if ( oldAttribute.isInterleavedBufferAttribute ) {
<add>
<add> attribute = new THREE.BufferAttribute( buffer, oldAttribute.itemSize, oldAttribute.itemSize );
<add>
<add> } else {
<add>
<add> attribute = geometry.getAttribute( name ).clone();
<add> attribute.setArray( buffer );
<add>
<add> }
<add>
<add> result.addAttribute( name, attribute );
<add>
<add> // Update the attribute arrays
<add> if ( name in morphAttrsArrays ) {
<add>
<add> for ( var j = 0; j < morphAttrsArrays[ name ].length; j ++ ) {
<add>
<add> var morphAttribute = geometry.morphAttributes[ name ][ j ].clone();
<add> morphAttribute.setArray( new morphAttribute.array.constructor( morphAttrsArrays[ name ][ j ] ) );
<add> result.morphAttributes[ name ][ j ] = morphAttribute;
<add>
<add> }
<add>
<add> }
<add>
<add> }
<add>
<add> // Generate an index buffer typed array
<add> var cons = Uint8Array;
<add> if ( newIndices.length >= Math.pow( 2, 8 ) ) cons = Uint16Array;
<add> if ( newIndices.length >= Math.pow( 2, 16 ) ) cons = Uint32Array;
<add>
<add> var newIndexBuffer = new cons( newIndices );
<add> var newIndices = null;
<add> if ( indices === null ) {
<add>
<add> newIndices = new THREE.BufferAttribute( newIndexBuffer, 1 );
<add>
<add> } else {
<add>
<add> newIndices = geometry.getIndex().clone();
<add> newIndices.setArray( newIndexBuffer );
<add>
<add> }
<add>
<add> result.setIndex( newIndices );
<add>
<add> return result;
<add>
<ide> }
<ide>
<ide> };
<ide><path>src/core/BufferGeometry.js
<ide> BufferGeometry.prototype = Object.assign( Object.create( EventDispatcher.prototy
<ide>
<ide> },
<ide>
<del> mergeVertices: function ( tolerance = 1e-4 ) {
<del>
<del> tolerance = Math.max( tolerance, Number.EPSILON );
<del>
<del> // Generate an index buffer if the geometry doesn't have one, or optimize it
<del> // if it's already available.
<del> var hashToIndex = {};
<del> var indices = this.getIndex();
<del> var positions = this.getAttribute( 'position' );
<del> var vertexCount = indices ? indices.count : positions.count;
<del>
<del> // next value for triangle indices
<del> var nextIndex = 0;
<del>
<del> // attributes and new attribute arrays
<del> var attributeNames = Object.keys( this.attributes );
<del> var attrArrays = {};
<del> var morphAttrsArrays = {};
<del> var newIndices = [];
<del> var getters = [ 'getX', 'getY', 'getZ', 'getW' ];
<del>
<del> // initialize the arrays
<del> for ( var name of attributeNames ) {
<del>
<del> attrArrays[ name ] = [];
<del>
<del> var morphAttr = this.morphAttributes[ name ];
<del> if ( morphAttr ) {
<del>
<del> morphAttrsArrays[ name ] = new Array( morphAttr.length ).fill().map( () => [] );
<del>
<del> }
<del>
<del> }
<del>
<del> // convert the error tolerance to an amount of decimal places to truncate to
<del> var decimalShift = Math.log10( 1 / tolerance );
<del> var shiftMultiplier = Math.pow( 10, decimalShift );
<del> for ( var i = 0; i < vertexCount; i ++ ) {
<del>
<del> var index = indices ? indices.getX( i ) : i;
<del>
<del> // Generate a hash for the vertex attributes at the current index 'i'
<del> var hash = '';
<del> for ( var j = 0, l = attributeNames.length; j < l; j ++ ) {
<del>
<del> var name = attributeNames[ j ];
<del> var attribute = this.getAttribute( name );
<del> var itemSize = attribute.itemSize;
<del>
<del> for ( var k = 0; k < itemSize; k ++ ) {
<del>
<del> // double tilde truncates the decimal value
<del> hash += `${ ~ ~ ( attribute[ getters[ k ] ]( index ) * shiftMultiplier ) },`;
<del>
<del> }
<del>
<del> }
<del>
<del> // Add another reference to the vertex if it's already
<del> // used by another index
<del> if ( hash in hashToIndex ) {
<del>
<del> newIndices.push( hashToIndex[ hash ] );
<del>
<del> } else {
<del>
<del> // copy data to the new index in the attribute arrays
<del> for ( var j = 0, l = attributeNames.length; j < l; j ++ ) {
<del>
<del> var name = attributeNames[ j ];
<del> var attribute = this.getAttribute( name );
<del> var morphAttr = this.morphAttributes[ name ];
<del> var itemSize = attribute.itemSize;
<del> var newarray = attrArrays[ name ];
<del> var newMorphArrays = morphAttrsArrays[ name ];
<del>
<del> for ( var k = 0; k < itemSize; k ++ ) {
<del>
<del> var getterFunc = getters[ k ];
<del> newarray.push( attribute[ getterFunc ]( index ) );
<del>
<del> if ( morphAttr ) {
<del>
<del> for ( var m = 0, ml = morphAttr.length; m < ml; m ++ ) {
<del>
<del> newMorphArrays[ m ].push( morphAttr[ m ][ getterFunc ]( index ) );
<del>
<del> }
<del>
<del> }
<del>
<del> }
<del>
<del> }
<del>
<del> hashToIndex[ hash ] = nextIndex;
<del> newIndices.push( nextIndex );
<del> nextIndex ++;
<del>
<del> }
<del>
<del> }
<del>
<del> // Generate typed arrays from new attribute arrays and update
<del> // the attributeBuffers
<del> for ( var i = 0, l = attributeNames.length; i < l; i ++ ) {
<del>
<del> var name = attributeNames[ i ];
<del> var oldAttribute = this.getAttribute( name );
<del> var attribute;
<del>
<del> var buffer = new oldAttribute.array.constructor( attrArrays[ name ] );
<del> if ( oldAttribute.isInterleavedBufferAttribute ) {
<del>
<del> attribute = new THREE.BufferAttribute( buffer, oldAttribute.itemSize, oldAttribute.itemSize );
<del>
<del> } else {
<del>
<del> attribute = this.getAttribute( name ).clone();
<del> attribute.setArray( buffer );
<del>
<del> }
<del>
<del> this.addAttribute( name, attribute );
<del>
<del> // Update the attribute arrays
<del> if ( name in morphAttrsArrays ) {
<del>
<del> for ( var j = 0; j < morphAttrsArrays[ name ].length; j ++ ) {
<del>
<del> var morphAttribute = this.morphAttributes[ name ][ j ].clone();
<del> morphAttribute.setArray( new morphAttribute.array.constructor( morphAttrsArrays[ name ][ j ] ) );
<del> this.morphAttributes[ name ][ j ] = morphAttribute;
<del>
<del> }
<del>
<del> }
<del>
<del> }
<del>
<del> // Generate an index buffer typed array
<del> var cons = Uint8Array;
<del> if ( newIndices.length >= Math.pow( 2, 8 ) ) cons = Uint16Array;
<del> if ( newIndices.length >= Math.pow( 2, 16 ) ) cons = Uint32Array;
<del>
<del> var newIndexBuffer = new cons( newIndices );
<del> var newIndices = null;
<del> if ( indices === null ) {
<del>
<del> newIndices = new THREE.BufferAttribute( newIndexBuffer, 1 );
<del>
<del> } else {
<del>
<del> newIndices = this.getIndex().clone();
<del> newIndices.setArray( newIndexBuffer );
<del>
<del> }
<del>
<del> this.setIndex( newIndices );
<del>
<del> return this;
<del>
<del> },
<del>
<ide> toJSON: function () {
<ide>
<ide> var data = { | 2 |
Ruby | Ruby | install adoptopenjdk for linux | 0ac5cbbda9ddca02da5e0196d29e9ae2a2d6ae98 | <ide><path>Library/Homebrew/extend/os/linux/dependency_collector.rb
<ide> class DependencyCollector
<ide> def java_dep_if_needed(tags)
<ide> req = JavaRequirement.new(tags)
<ide> begin
<del> dep = Dependency.new("openjdk", tags)
<add> dep = Dependency.new("adoptopenjdk", tags)
<ide> return dep if dep.installed?
<ide> return req if req.satisfied?
<ide> | 1 |
Ruby | Ruby | add collectionproxy#last documentation | b0f55c68887930ada279302cceaf10e2ca67de52 | <ide><path>activerecord/lib/active_record/associations/collection_proxy.rb
<ide> class CollectionProxy < Relation
<ide> # another_person_without.pets.first # => nil
<ide> # another_person_without.pets.first(3) # => []
<ide>
<add> ##
<add> # :method: last
<add> # Returns the last record, or the last +n+ records, from the collection.
<add> # If the collection is empty, the first form returns nil, and the second
<add> # form returns an empty array.
<add> #
<add> # class Person < ActiveRecord::Base
<add> # has_many :pets
<add> # end
<add> #
<add> # person.pets
<add> # # => [
<add> # # #<Pet id: 1, name: "Fancy-Fancy", person_id: 1>,
<add> # # #<Pet id: 2, name: "Spook", person_id: 1>,
<add> # # #<Pet id: 3, name: "Choo-Choo", person_id: 1>
<add> #
<add> # person.pets.last # => #<Pet id: 3, name: "Choo-Choo", person_id: 1>
<add> # person.pets.last(2)
<add> # # => [
<add> # # #<Pet id: 2, name: "Spook", person_id: 1>,
<add> # # #<Pet id: 3, name: "Choo-Choo", person_id: 1>
<add> # # ]
<add> #
<add> # another_person_without.pets # => []
<add> # another_person_without.pets.last # => nil
<add> # another_person_without.pets.last(3) # => []
<add>
<ide> ##
<ide> # :method: concat
<ide> # Add one or more records to the collection by setting their foreign keys | 1 |
Text | Text | fix some broken links in guides | e17b5fd572618fd9ac9257a05103e0b5fad714ab | <ide><path>guides/source/2_2_release_notes.md
<ide> There are two big additions to talk about here: transactional migrations and poo
<ide>
<ide> Historically, multiple-step Rails migrations have been a source of trouble. If something went wrong during a migration, everything before the error changed the database and everything after the error wasn't applied. Also, the migration version was stored as having been executed, which means that it couldn't be simply rerun by `rake db:migrate:redo` after you fix the problem. Transactional migrations change this by wrapping migration steps in a DDL transaction, so that if any of them fail, the entire migration is undone. In Rails 2.2, transactional migrations are supported on PostgreSQL out of the box. The code is extensible to other database types in the future - and IBM has already extended it to support the DB2 adapter.
<ide>
<del>* Lead Contributor: [Adam Wiggins](http://adam.heroku.com/)
<add>* Lead Contributor: [Adam Wiggins](http://about.adamwiggins.com/)
<ide> * More information:
<ide> * [DDL Transactions](http://adam.heroku.com/past/2008/9/3/ddl_transactions/)
<ide> * [A major milestone for DB2 on Rails](http://db2onrails.com/2008/11/08/a-major-milestone-for-db2-on-rails/)
<ide> You can unpack or install a single gem by specifying `GEM=_gem_name_` on the com
<ide> * Lead Contributor: [Matt Jones](https://github.com/al2o3cr)
<ide> * More information:
<ide> * [What's New in Edge Rails: Gem Dependencies](http://archives.ryandaigle.com/articles/2008/4/1/what-s-new-in-edge-rails-gem-dependencies)
<del> * [Rails 2.1.2 and 2.2RC1: Update Your RubyGems](http://afreshcup.com/2008/10/25/rails-212-and-22rc1-update-your-rubygems/)
<add> * [Rails 2.1.2 and 2.2RC1: Update Your RubyGems](https://afreshcup.com/home/2008/10/25/rails-212-and-22rc1-update-your-rubygems)
<ide> * [Detailed discussion on Lighthouse](http://rails.lighthouseapp.com/projects/8994-ruby-on-rails/tickets/1128)
<ide>
<ide> ### Other Railties Changes
<ide><path>guides/source/2_3_release_notes.md
<ide> Rails chooses between file, template, and action depending on whether there is a
<ide> If you're one of the people who has always been bothered by the special-case naming of `application.rb`, rejoice! It's been reworked to be `application_controller.rb` in Rails 2.3. In addition, there's a new rake task, `rake rails:update:application_controller` to do this automatically for you - and it will be run as part of the normal `rake rails:update` process.
<ide>
<ide> * More Information:
<del> * [The Death of Application.rb](http://afreshcup.com/2008/11/17/rails-2x-the-death-of-applicationrb/)
<add> * [The Death of Application.rb](https://afreshcup.com/home/2008/11/17/rails-2x-the-death-of-applicationrb)
<ide> * [What's New in Edge Rails: Application.rb Duality is no More](http://archives.ryandaigle.com/articles/2008/11/19/what-s-new-in-edge-rails-application-rb-duality-is-no-more)
<ide>
<ide> ### HTTP Digest Authentication Support
<ide> options_from_collection_for_select(@product.sizes, :name, :id, :disabled => lamb
<ide> ```
<ide>
<ide> * Lead Contributor: [Tekin Suleyman](http://tekin.co.uk/)
<del>* More Information: [New in rails 2.3 - disabled option tags and lambdas for selecting and disabling options from collections](http://tekin.co.uk/2009/03/new-in-rails-23-disabled-option-tags-and-lambdas-for-selecting-and-disabling-options-from-collections/)
<add>* More Information: [New in rails 2.3 - disabled option tags and lambdas for selecting and disabling options from collections](https://tekin.co.uk/2009/03/new-in-rails-23-disabled-option-tags-and-lambdas-for-selecting-and-disabling-options-from-collections)
<ide>
<ide> ### A Note About Template Loading
<ide>
<ide> If you look up the spec on the "json.org" site, you'll discover that all keys in
<ide> ### Other Active Support Changes
<ide>
<ide> * You can use `Enumerable#none?` to check that none of the elements match the supplied block.
<del>* If you're using Active Support [delegates](http://afreshcup.com/2008/10/19/coming-in-rails-22-delegate-prefixes/) the new `:allow_nil` option lets you return `nil` instead of raising an exception when the target object is nil.
<add>* If you're using Active Support [delegates](https://afreshcup.com/home/2008/10/19/coming-in-rails-22-delegate-prefixes) the new `:allow_nil` option lets you return `nil` instead of raising an exception when the target object is nil.
<ide> * `ActiveSupport::OrderedHash`: now implements `each_key` and `each_value`.
<ide> * `ActiveSupport::MessageEncryptor` provides a simple way to encrypt information for storage in an untrusted location (like cookies).
<ide> * Active Support's `from_xml` no longer depends on XmlSimple. Instead, Rails now includes its own XmlMini implementation, with just the functionality that it requires. This lets Rails dispense with the bundled copy of XmlSimple that it's been carting around.
<ide> The internals of the various <code>rake gem</code> tasks have been substantially
<ide> * Internal Rails testing has been switched from `Test::Unit::TestCase` to `ActiveSupport::TestCase`, and the Rails core requires Mocha to test.
<ide> * The default `environment.rb` file has been decluttered.
<ide> * The dbconsole script now lets you use an all-numeric password without crashing.
<del>* `Rails.root` now returns a `Pathname` object, which means you can use it directly with the `join` method to [clean up existing code](http://afreshcup.com/2008/12/05/a-little-rails_root-tidiness/) that uses `File.join`.
<add>* `Rails.root` now returns a `Pathname` object, which means you can use it directly with the `join` method to [clean up existing code](https://afreshcup.wordpress.com/2008/12/05/a-little-rails_root-tidiness/) that uses `File.join`.
<ide> * Various files in /public that deal with CGI and FCGI dispatching are no longer generated in every Rails application by default (you can still get them if you need them by adding `--with-dispatchers` when you run the `rails` command, or add them later with `rake rails:update:generate_dispatchers`).
<ide> * Rails Guides have been converted from AsciiDoc to Textile markup.
<ide> * Scaffolded views and controllers have been cleaned up a bit.
<ide> Deprecated
<ide>
<ide> A few pieces of older code are deprecated in this release:
<ide>
<del>* If you're one of the (fairly rare) Rails developers who deploys in a fashion that depends on the inspector, reaper, and spawner scripts, you'll need to know that those scripts are no longer included in core Rails. If you need them, you'll be able to pick up copies via the [irs_process_scripts](https://github.com/rails/irs_process_scripts/tree) plugin.
<add>* If you're one of the (fairly rare) Rails developers who deploys in a fashion that depends on the inspector, reaper, and spawner scripts, you'll need to know that those scripts are no longer included in core Rails. If you need them, you'll be able to pick up copies via the [irs_process_scripts](https://github.com/rails/irs_process_scripts) plugin.
<ide> * `render_component` goes from "deprecated" to "nonexistent" in Rails 2.3. If you still need it, you can install the [render_component plugin](https://github.com/rails/render_component/tree/master).
<ide> * Support for Rails components has been removed.
<ide> * If you were one of the people who got used to running `script/performance/request` to look at performance based on integration tests, you need to learn a new trick: that script has been removed from core Rails now. There's a new request_profiler plugin that you can install to get the exact same functionality back.
<ide><path>guides/source/3_0_release_notes.md
<ide> Railties now deprecates:
<ide> More information:
<ide>
<ide> * [Discovering Rails 3 generators](http://blog.plataformatec.com.br/2010/01/discovering-rails-3-generators)
<del>* [The Rails Module (in Rails 3)](http://litanyagainstfear.com/blog/2010/02/03/the-rails-module/)
<add>* [The Rails Module (in Rails 3)](http://quaran.to/blog/2010/02/03/the-rails-module/)
<ide>
<ide> Action Pack
<ide> -----------
<ide><path>guides/source/upgrading_ruby_on_rails.md
<ide> gem 'rails-deprecated_sanitizer'
<ide>
<ide> ### Rails DOM Testing
<ide>
<del>The [`TagAssertions` module](http://api.rubyonrails.org/classes/ActionDispatch/Assertions/TagAssertions.html) (containing methods such as `assert_tag`), [has been deprecated](https://github.com/rails/rails/blob/6061472b8c310158a2a2e8e9a6b81a1aef6b60fe/actionpack/lib/action_dispatch/testing/assertions/dom.rb) in favor of the `assert_select` methods from the `SelectorAssertions` module, which has been extracted into the [rails-dom-testing gem](https://github.com/rails/rails-dom-testing).
<add>The [`TagAssertions` module](http://api.rubyonrails.org/v4.1/classes/ActionDispatch/Assertions/TagAssertions.html) (containing methods such as `assert_tag`), [has been deprecated](https://github.com/rails/rails/blob/6061472b8c310158a2a2e8e9a6b81a1aef6b60fe/actionpack/lib/action_dispatch/testing/assertions/dom.rb) in favor of the `assert_select` methods from the `SelectorAssertions` module, which has been extracted into the [rails-dom-testing gem](https://github.com/rails/rails-dom-testing).
<ide>
<ide>
<ide> ### Masked Authenticity Tokens | 4 |
Javascript | Javascript | trackballcamera first version, rotation only | 902e31621eaa0e3567942eec57be79529a70850a | <ide><path>src/extras/cameras/TrackballCamera.js
<add>/**
<add> * @author Eberhard Gräther / http://egraether.com/
<add>
<add> * parameters = {
<add> * fov: <float>,
<add> * aspect: <float>,
<add> * near: <float>,
<add> * far: <float>,
<add> * target: <THREE.Object3D>,
<add>
<add> * radius: <float>,
<add>
<add> * zoomSpeed: <float>,
<add> * panSpeed: <float>,
<add>
<add> * noZoom: <bool>,
<add> * noPan: <bool>,
<add>
<add> * domElement: <HTMLElement>,
<add> * }
<add> */
<add>
<add>THREE.TrackballCamera = function ( parameters ) {
<add>
<add> THREE.Camera.call( this, parameters.fov, parameters.aspect, parameters.near, parameters.far, parameters.target );
<add>
<add> this.radius = ( window.innerWidth + window.innerHeight ) / 4;
<add>
<add> this.zoomSpeed = 1.0;
<add> this.panSpeed = 1.0;
<add>
<add> this.noZoom = false;
<add> this.noPan = false;
<add>
<add> this.domElement = document;
<add>
<add> if ( parameters ) {
<add>
<add> if ( parameters.radius !== undefined ) this.radius = parameters.radius;
<add>
<add> if ( parameters.zoomSpeed !== undefined ) this.zoomSpeed = parameters.zoomSpeed;
<add> if ( parameters.panSpeed !== undefined ) this.panSpeed = parameters.panSpeed;
<add>
<add> if ( parameters.noZoom !== undefined ) this.noZoom = parameters.noZoom;
<add> if ( parameters.noPan !== undefined ) this.noPan = parameters.noPan;
<add>
<add> if ( parameters.domElement !== undefined ) this.domElement = parameters.domElement;
<add>
<add> }
<add>
<add> this.useTarget = true;
<add>
<add> this.mouseDragOn = false;
<add>
<add> this.screen = this.getScreenDimensions();
<add>
<add> this.start = new THREE.Vector3();
<add> this.end = new THREE.Vector3();
<add>
<add> function bind( scope, fn ) {
<add>
<add> return function () {
<add>
<add> fn.apply( scope, arguments );
<add>
<add> };
<add>
<add> };
<add>
<add> this.domElement.addEventListener( 'mousemove', bind( this, this.mousemove ), false );
<add> this.domElement.addEventListener( 'mousedown', bind( this, this.mousedown ), false );
<add> this.domElement.addEventListener( 'mouseup', bind( this, this.mouseup ), false );
<add>
<add> window.addEventListener( 'keydown', bind( this, this.keydown ), false );
<add> window.addEventListener( 'keyup', bind( this, this.keyup ), false );
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype = new THREE.Camera();
<add>THREE.TrackballCamera.prototype.constructor = THREE.TrackballCamera;
<add>THREE.TrackballCamera.prototype.supr = THREE.Camera.prototype;
<add>
<add>THREE.TrackballCamera.prototype.handleEvent = function ( event ) {
<add>
<add> if ( typeof this[ event.type ] == 'function' ) {
<add>
<add> this[ event.type ]( event );
<add>
<add> }
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.keydown = function( event ) {
<add>
<add>
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.keyup = function( event ) {
<add>
<add>
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.mousedown = function(event) {
<add>
<add> event.preventDefault();
<add> event.stopPropagation();
<add>
<add> this.mouseDragOn = true;
<add>
<add> this.start = this.getMouseProjectionOnBall( event.clientX, event.clientY );
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.mousemove = function( event ) {
<add>
<add> if ( this.mouseDragOn ) {
<add>
<add> this.end = this.getMouseProjectionOnBall( event.clientX, event.clientY );
<add>
<add> var angle = Math.acos( this.start.dot( this.end ) / this.start.length() / this.end.length() );
<add>
<add> if ( angle ) {
<add>
<add> var axis = (new THREE.Vector3()).cross( this.end, this.start ).normalize(),
<add> quaternion = new THREE.Quaternion();
<add>
<add> quaternion.setFromAxisAngle( axis, angle );
<add>
<add> quaternion.multiplyVector3( this.position );
<add> quaternion.multiplyVector3( this.up );
<add>
<add> }
<add>
<add> }
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.mouseup = function( event ) {
<add>
<add> event.preventDefault();
<add> event.stopPropagation();
<add>
<add> this.mouseDragOn = false;
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.getScreenDimensions = function() {
<add>
<add> if ( this.domElement != document ) {
<add>
<add> return {
<add> width : this.domElement.offsetWidth,
<add> height : this.domElement.offsetHeight,
<add> offsetLeft : this.domElement.offsetLeft,
<add> offsetTop : this.domElement.offsetTop
<add> };
<add>
<add> } else {
<add>
<add> return {
<add> width : window.innerWidth,
<add> height : window.innerHeight,
<add> offsetLeft : 0,
<add> offsetTop : 0
<add> };
<add>
<add> }
<add>
<add>};
<add>
<add>THREE.TrackballCamera.prototype.getMouseProjectionOnBall = function( clientX, clientY ) {
<add>
<add> var mouse = new THREE.Vector3(
<add> ( clientX - this.screen.width * 0.5 - this.screen.offsetLeft ) / this.radius,
<add> ( this.screen.height * 0.5 + this.screen.offsetTop - clientY ) / this.radius,
<add> 0.0
<add> );
<add>
<add> var length = mouse.length();
<add>
<add> if ( length > 1.0 ) {
<add>
<add> mouse.divideScalar( length );
<add>
<add> } else {
<add>
<add> mouse.z = Math.sqrt( 1.0 - length * length );
<add>
<add> }
<add>
<add> var projection = this.up.clone().setLength( mouse.y );
<add> projection.addSelf( this.up.clone().crossSelf( this.position ).setLength( mouse.x ) );
<add> projection.addSelf( this.position.clone().setLength( mouse.z ) );
<add>
<add> return projection;
<add>
<add>}; | 1 |
Go | Go | use int64 instead of int | 62bfef59f7ae6f9128bfc3e7ef2e6ed5e4441d2e | <ide><path>pkg/beam/beam.go
<ide> type ReceiveSender interface {
<ide> }
<ide>
<ide> const (
<del> R int = 1 << (32 - 1 - iota)
<add> R = iota
<ide> W
<ide> )
<ide> | 1 |
Ruby | Ruby | include vi in list of binaries already in os x | 94bafe05f45ef72e07d9ad9fae16e20a92884a2c | <ide><path>Library/Homebrew/blacklist.rb
<ide> def blacklisted? name
<ide> case name.downcase
<del> when 'vim', 'screen', /^rubygems?$/ then <<-EOS.undent
<add> when /^vim?$/, 'screen', /^rubygems?$/ then <<-EOS.undent
<ide> Apple distributes #{name} with OS X, you can find it in /usr/bin.
<ide> EOS
<ide> when 'libarchive', 'libpcap' then <<-EOS.undent | 1 |
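The pattern above replaces the literal `'vim'` with `/^vim?$/` so that both of Apple's bundled editors, `vi` and `vim`, trigger the message while longer names fall through. A standalone check of how that regexp behaves under `case`/`when` matching (illustrative only, not Homebrew code):

```ruby
pattern = /^vim?$/

# case/when dispatches via Regexp#===, so this mirrors the blacklist lookup.
%w[vi vim view neovim].each do |name|
  verdict = pattern === name.downcase ? "blacklisted" : "allowed"
  puts "#{name.ljust(7)} #{verdict}"
end
```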
PHP | PHP | fix a bug with json responses | d8da912905fa4da22d88e27a6e278a9dae06f31d | <ide><path>src/Illuminate/Http/JsonResponse.php
<ide> <?php namespace Illuminate\Http;
<ide>
<add>use Illuminate\Support\Contracts\JsonableInterface;
<add>
<ide> class JsonResponse extends \Symfony\Component\HttpFoundation\JsonResponse {
<ide>
<ide> /**
<ide> * {@inheritdoc}
<ide> */
<ide> public function setData($data = array())
<ide> {
<del> $this->data = json_encode($data);
<add> $this->data = $data instanceof JsonableInterface ? $data->toJson() : json_encode($data);
<ide>
<ide> return $this->update();
<ide> }
<ide><path>src/Illuminate/Support/Facades/Response.php
<ide> public static function view($view, $data = array(), $status = 200, array $header
<ide> */
<ide> public static function json($data = array(), $status = 200, array $headers = array())
<ide> {
<del> if ($data instanceof JsonableInterface)
<del> {
<del> $data = $data->toJson();
<del> }
<del> elseif ($data instanceof ArrayableInterface)
<add> if ($data instanceof ArrayableInterface)
<ide> {
<ide> $data = $data->toArray();
<ide> } | 2 |
Ruby | Ruby | pass the build object into the tab | 393e10849be14528ee1726e5659562a698686c92 | <ide><path>Library/Homebrew/build.rb
<ide> def install
<ide> # link against it.
<ide> stdlibs = keg.detect_cxx_stdlibs :skip_executables => true
<ide>
<del> Tab.create(f, ENV.compiler, stdlibs.first,
<del> Options.coerce(ARGV.options_only)).write
<add> Tab.create(f, ENV.compiler, stdlibs.first, f.build).write
<ide> rescue Exception => e
<ide> if ARGV.debug?
<ide> debrew e, f
<ide><path>Library/Homebrew/tab.rb
<ide> class Tab < OpenStruct
<ide> FILENAME = 'INSTALL_RECEIPT.json'
<ide>
<del> def self.create f, compiler, stdlib, args
<del> build = f.build.dup
<del> build.args = args
<del>
<add> def self.create(formula, compiler, stdlib, build)
<ide> Tab.new :used_options => build.used_options,
<ide> :unused_options => build.unused_options,
<del> :tabfile => f.prefix.join(FILENAME),
<add> :tabfile => formula.prefix.join(FILENAME),
<ide> :built_as_bottle => !!ARGV.build_bottle?,
<ide> :poured_from_bottle => false,
<del> :tapped_from => f.tap,
<add> :tapped_from => formula.tap,
<ide> :time => Time.now.to_i,
<ide> :HEAD => Homebrew.git_head,
<ide> :compiler => compiler, | 2 |
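The refactor above hands `Tab.create` the formula's own build object rather than re-coercing `ARGV` options at the call site, so used and unused options come from a single source. A minimal stand-in for the data the new signature consumes — `BuildStub` and its fields are invented for illustration and are not Homebrew's real classes:

```ruby
# Anything exposing used_options/unused_options (as f.build does) would work here.
BuildStub = Struct.new(:used_options, :unused_options)

def tab_fields(build)
  {
    "used_options"   => build.used_options,
    "unused_options" => build.unused_options,
    "time"           => Time.now.to_i,
  }
end

p tab_fields(BuildStub.new(["--with-x11"], ["--without-docs"]))
```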