content_type
stringclasses 8
values | main_lang
stringclasses 7
values | message
stringlengths 1
50
| sha
stringlengths 40
40
| patch
stringlengths 52
962k
| file_count
int64 1
300
|
---|---|---|---|---|---|
Text | Text | add body text and link | 5f494670ebe7c00cc7ce28899d487383bf82b5b6 | <ide><path>guide/english/tools/calculators/401k-calculator/index.md
<ide> title: 401k Calculator
<ide> ---
<ide> ## 401k Calculator
<ide>
<del>This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/tools/calculators/401k-calculator/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<del>
<del><a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<del>
<del><!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
<add>A 401k calculator is used to calculate the 401k balance at retirement given a number of variables, including income, life expectancy, and expected inflation rate. Since many of these variables are uncertain, the end calculation will only be an estimate.
<ide>
<ide> #### More Information:
<del><!-- Please add any articles you think might be helpful to read before writing the article -->
<add>
<add>* [An online 401k calculator](https://www.calculator.net/401k-calculator.html)
<ide>
<ide> | 1 |
Javascript | Javascript | remove more process.assert | fbb74784b51701278f7bfdf1038922a02fc18632 | <ide><path>test/simple/test-module-loading.js
<ide> common.debug('load modules by absolute id, then change require.paths, ' +
<ide> 'and load another module with the same absolute id.');
<ide> // this will throw if it fails.
<ide> var foo = require('../fixtures/require-path/p1/foo');
<del>process.assert(foo.bar.expect === foo.bar.actual);
<add>assert.ok(foo.bar.expect === foo.bar.actual);
<ide>
<ide> assert.equal(require('../fixtures/foo').foo, 'ok',
<ide> 'require module with no extension');
<ide><path>test/simple/test-net-binary.js
<ide> for (var i = 255; i >= 0; i--) {
<ide> JSON.stringify(String.fromCharCode(i)) +
<ide> ' ' +
<ide> S.charCodeAt(0));
<del> process.assert(S.charCodeAt(0) == i);
<del> process.assert(S == String.fromCharCode(i));
<add> assert.ok(S.charCodeAt(0) == i);
<add> assert.ok(S == String.fromCharCode(i));
<ide> binaryString += S;
<ide> }
<ide> | 2 |
Python | Python | correct mape loss | 8bd8ae1daa432bce9881214c4d326ac8a38e2046 | <ide><path>keras/objectives.py
<ide> def mean_absolute_error(y_true, y_pred):
<ide> return T.abs_(y_pred - y_true).mean(axis=-1)
<ide>
<ide> def mean_absolute_percentage_error(y_true, y_pred):
<del> return T.abs_((y_true - y_pred) / y_true).mean(axis=-1) * 100
<add> return T.abs_((y_true - y_pred) / T.clip(T.abs_(y_true), epsilon, np.inf)).mean(axis=-1) * 100.
<ide>
<ide> def mean_squared_logarithmic_error(y_true, y_pred):
<ide> return T.sqr(T.log(T.clip(y_pred, epsilon, np.inf) + 1.) - T.log(T.clip(y_true, epsilon, np.inf) + 1.)).mean(axis=-1) | 1 |
Javascript | Javascript | add missing scene parameter in renderobject call | c9988a535b65b37830e6b34da6abb4214f11706a | <ide><path>examples/js/renderers/CSS2DRenderer.js
<ide> THREE.CSS2DRenderer = function () {
<ide>
<ide> for ( var i = 0, l = object.children.length; i < l; i ++ ) {
<ide>
<del> renderObject( object.children[ i ], camera );
<add> renderObject( object.children[ i ], scene, camera );
<ide>
<ide> }
<ide> | 1 |
Text | Text | remove redundant class in bootstrap description | ce9bd6ddbd29a17303a7e0671910333b564ec4d9 | <ide><path>curriculum/challenges/english/03-front-end-development-libraries/bootstrap/call-out-optional-actions-with-btn-info.md
<ide> dashedName: call-out-optional-actions-with-btn-info
<ide>
<ide> Bootstrap comes with several pre-defined colors for buttons. The `btn-info` class is used to call attention to optional actions that the user can take.
<ide>
<del>Create a new block-level Bootstrap button below your `Like` button with the text `Info`, and add Bootstrap's `btn-info` and `btn-block` classes to it.
<add>Create a new block-level Bootstrap button below your `Like` button with the text `Info`, and add Bootstrap's `btn-info` class to it.
<ide>
<ide> Note that these buttons still need the `btn` and `btn-block` classes.
<ide> | 1 |
Java | Java | improve requestpartmethodargumentresolver javadoc | e04ca3d6714e6ff7a81cfc2994f3588c8188dd95 | <ide><path>spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/RequestPartMethodArgumentResolver.java
<ide> * it is derived from the name of the method argument.
<ide> *
<ide> * <p>Automatic validation may be applied if the argument is annotated with
<del> * {@code @javax.validation.Valid}. In case of validation failure, a {@link MethodArgumentNotValidException}
<del> * is raised and a 400 response status code returned if
<add> * {@code @javax.validation.Valid}, Spring's {@link org.springframework.validation.annotation.Validated}
<add> * or custom annotations whose name starts with "Valid". In case of validation failure,
<add> * a {@link MethodArgumentNotValidException} is raised and a 400 response status code returned if
<ide> * {@link org.springframework.web.servlet.mvc.support.DefaultHandlerExceptionResolver} is configured.
<ide> *
<ide> * @author Rossen Stoyanchev | 1 |
Ruby | Ruby | handle new options | d7ee54129310de2b8b4989e6a47607fd1fb7db6c | <ide><path>Library/Homebrew/cmd/reinstall.rb
<ide> def reinstall
<ide> end
<ide>
<ide> def reinstall_formula(f)
<del> options = f.build.used_options
<add> options = BuildOptions.new(Options.create(ARGV.flags_only), f.options).used_options
<add> options |= f.build.used_options
<ide>
<ide> notice = "Reinstalling #{f.full_name}"
<ide> notice += " with #{options * ", "}" unless options.empty? | 1 |
Python | Python | add message when cli train script throws exception | a27c77ce62193fdd777353bbf93b20dc9eda142e | <ide><path>spacy/cli/train.py
<ide> def train(
<ide> "score = {}".format(best_score, current_score)
<ide> )
<ide> break
<add> except Exception as e:
<add> msg.warn("Aborting and saving the final best model. Encountered exception: {}".format(e))
<ide> finally:
<ide> best_pipes = nlp.pipe_names
<ide> if disabled_pipes: | 1 |
PHP | PHP | use config to resolve database value | 1a43d7dbfc98f45841f817b0b6f45f2781275817 | <ide><path>src/Illuminate/Foundation/Testing/RefreshDatabase.php
<ide> public function refreshDatabase()
<ide> */
<ide> protected function usingInMemoryDatabase()
<ide> {
<del> return config('database.connections')[
<del> config('database.default')
<del> ]['database'] === ':memory:';
<add> $default = config('database.default');
<add>
<add> return config("database.connections.$default.database") === ':memory:';
<ide> }
<ide>
<ide> /** | 1 |
Ruby | Ruby | ask the exception for the formula name | 5e7edbbd3c7ee5fa29ee9907a67e6a79998a90a5 | <ide><path>Library/Homebrew/extend/ENV/shared.rb
<ide> def gcc_version_formula(name)
<ide> def warn_about_non_apple_gcc(name)
<ide> begin
<ide> gcc_formula = gcc_version_formula(name)
<del> rescue FormulaUnavailableError
<add> rescue FormulaUnavailableError => e
<ide> raise <<-EOS.undent
<del> Homebrew GCC requested, but formula #{name.delete(".-")} not found!
<add> Homebrew GCC requested, but formula #{e.name} not found!
<ide> You may need to: brew tap homebrew/versions
<ide> EOS
<ide> end | 1 |
PHP | PHP | allow $m->to, etc. to take an array of e-mails | b0e513c71ed123c604052884dc97eb60aa5aefc8 | <ide><path>src/Illuminate/Mail/Message.php
<ide> public function returnPath($address)
<ide> /**
<ide> * Add a recipient to the message.
<ide> *
<del> * @param string $address
<add> * @param string|array $address
<ide> * @param string $name
<ide> * @return \Illuminate\Mail\Message
<ide> */
<ide> public function to($address, $name = null)
<ide> {
<del> $this->swift->addTo($address, $name);
<del>
<del> return $this;
<add> return $this->addAddresses($address, $name, 'To');
<ide> }
<ide>
<ide> /**
<ide> public function to($address, $name = null)
<ide> */
<ide> public function cc($address, $name = null)
<ide> {
<del> $this->swift->addCc($address, $name);
<del>
<del> return $this;
<add> return $this->addAddresses($address, $name, 'Cc');
<ide> }
<ide>
<ide> /**
<ide> public function cc($address, $name = null)
<ide> */
<ide> public function bcc($address, $name = null)
<ide> {
<del> $this->swift->addBcc($address, $name);
<del>
<del> return $this;
<add> return $this->addAddresses($address, $name, 'Bcc');
<ide> }
<ide>
<ide> /**
<ide> public function bcc($address, $name = null)
<ide> */
<ide> public function replyTo($address, $name = null)
<ide> {
<del> $this->swift->addReplyTo($address, $name);
<add> return $this->addAddresses($address, $name, 'ReplyTo');
<add> }
<add>
<add> /**
<add> * Add a recipient to the message.
<add> *
<add> * @param string|array $address
<add> * @param string $name
<add> * @return \Illuminate\Mail\Message
<add> */
<add> protected function addAddresses($address, $name, $type)
<add> {
<add> if (is_array($address))
<add> {
<add> $this->swift->{"set{$type}"}($address, $name);
<add> }
<add> else
<add> {
<add> $this->swift->{"add{$type}"}($address, $name);
<add> }
<ide>
<ide> return $this;
<ide> } | 1 |
Python | Python | add test file | 1e849d6e34638716a985c6a011bbea109ce57171 | <ide><path>keras/saving/pickle_utils_test.py
<add># Copyright 2021 The TensorFlow Authors. All Rights Reserved.
<add>#
<add># Licensed under the Apache License, Version 2.0 (the "License");
<add># you may not use this file except in compliance with the License.
<add># You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing, software
<add># distributed under the License is distributed on an "AS IS" BASIS,
<add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add># See the License for the specific language governing permissions and
<add># limitations under the License.
<add># ==============================================================================
<add>"""Tests for pickling / deepcopying of Keras Models."""
<add>
<add>import tensorflow.compat.v2 as tf
<add>
<add>import copy
<add>import pickle
<add>
<add>import numpy as np
<add>
<add>from keras import keras_parameterized
<add>from keras import layers as layers_module
<add>from keras.engine import input_layer
<add>from keras.engine import training as training_module
<add>
<add>
<add>class TestPickleProtocol(keras_parameterized.TestCase):
<add> """Tests pickle protoocol support.
<add> """
<add>
<add> @keras_parameterized.run_all_keras_modes
<add> def test_pickle_model(self):
<add> """Test copy.copy, copy.deepcopy and pickle on Functional Model."""
<add>
<add> def roundtrip(model):
<add> model = copy.copy(model)
<add> model = copy.deepcopy(model)
<add> for protocol in range(5): # support up to protocol version 5
<add> model = pickle.loads(pickle.dumps(model, protocol=protocol))
<add> return model
<add>
<add> # create model
<add> x = input_layer.Input((3,))
<add> y = layers_module.Dense(2)(x)
<add> original_model = training_module.Model(x, y)
<add> model = original_model
<add> original_weights = model.get_weights()
<add> # roundtrip without compiling
<add> model = roundtrip(model)
<add> # compile
<add> model.compile(optimizer='sgd', loss='mse')
<add> # roundtrip compiled but not trained
<add> model = roundtrip(model)
<add> # train
<add> x = np.random.random((1000, 3))
<add> y = np.random.random((1000, 2))
<add> model.fit(x, y)
<add> y1 = model.predict(x)
<add> # roundtrip with training
<add> model = roundtrip(model)
<add> y2 = model.predict(x)
<add> self.assertAllClose(y1, y2)
<add> # check that the original model has not been changed
<add> final_weights = original_model.get_weights()
<add> self.assertAllClose(original_weights, final_weights)
<add> self.assertNotAllClose(original_weights, model.get_weights())
<add>
<add>if __name__ == '__main__':
<add> tf.test.main() | 1 |
Text | Text | correct the spelling of omitting in dgram.md | b9a772116b457bee59612bcaf489d95c61397a6c | <ide><path>doc/api/dgram.md
<ide> On IPv4, if `multicastInterface` is a valid address but does not match any
<ide> interface, or if the address does not match the family then
<ide> a [`System Error`][] such as `EADDRNOTAVAIL` or `EPROTONOSUP` is thrown.
<ide>
<del>On IPv6, most errors with specifying or omiting scope will result in the socket
<add>On IPv6, most errors with specifying or omitting scope will result in the socket
<ide> continuing to use (or returning to) the system's default interface selection.
<ide>
<ide> A socket's address family's ANY address (IPv4 `'0.0.0.0'` or IPv6 `'::'`) can be | 1 |
Python | Python | change subprocess arguments from python>=3.7 | 69a39461de4da2bfc0e6aeeff02c73c0972a8614 | <ide><path>benchmarks/asv_pip_nopep517.py
<ide> # pip ignores '--global-option' when pep517 is enabled therefore we disable it.
<ide> cmd = [sys.executable, '-mpip', 'wheel', '--no-use-pep517']
<ide> try:
<del> output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, universal_newlines=True)
<add> output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, text=True)
<ide> except Exception as e:
<ide> output = str(e.output)
<ide> if "no such option" in output:
<ide><path>numpy/distutils/ccompiler_opt.py
<ide> def _dist_test_spawn_paths(self, cmd, display=None):
<ide> def _dist_test_spawn(cmd, display=None):
<ide> try:
<ide> o = subprocess.check_output(cmd, stderr=subprocess.STDOUT,
<del> universal_newlines=True)
<add> text=True)
<ide> if o and re.match(_Distutils._dist_warn_regex, o):
<ide> _Distutils.dist_error(
<ide> "Flags in command", cmd ,"aren't supported by the compiler"
<ide><path>numpy/distutils/exec_command.py
<ide> def _exec_command(command, use_shell=None, use_tee = None, **env):
<ide> # Inherit environment by default
<ide> env = env or None
<ide> try:
<del> # universal_newlines is set to False so that communicate()
<add> # text is set to False so that communicate()
<ide> # will return bytes. We need to decode the output ourselves
<ide> # so that Python will not raise a UnicodeDecodeError when
<ide> # it encounters an invalid character; rather, we simply replace it
<del> proc = subprocess.Popen(command, shell=use_shell, env=env,
<add> proc = subprocess.Popen(command, shell=use_shell, env=env, text=False,
<ide> stdout=subprocess.PIPE,
<del> stderr=subprocess.STDOUT,
<del> universal_newlines=False)
<add> stderr=subprocess.STDOUT)
<ide> except OSError:
<ide> # Return 127, as os.spawn*() and /bin/sh do
<ide> return 127, ''
<ide><path>numpy/distutils/lib2def.py
<ide> def getnm(nm_cmd=['nm', '-Cs', 'python%s.lib' % py_ver], shell=True):
<ide>
<ide> nm_output = getnm(nm_cmd = 'nm -Cs py_lib')"""
<ide> p = subprocess.Popen(nm_cmd, shell=shell, stdout=subprocess.PIPE,
<del> stderr=subprocess.PIPE, universal_newlines=True)
<add> stderr=subprocess.PIPE, text=True)
<ide> nm_output, nm_err = p.communicate()
<ide> if p.returncode != 0:
<ide> raise RuntimeError('failed to run "%s": "%s"' % (
<ide><path>numpy/distutils/misc_util.py
<ide> def cyg2win32(path: str) -> str:
<ide> if sys.platform != "cygwin":
<ide> return path
<ide> return subprocess.check_output(
<del> ["/usr/bin/cygpath", "--windows", path], universal_newlines=True
<add> ["/usr/bin/cygpath", "--windows", path], text=True
<ide> )
<ide>
<ide>
<ide><path>numpy/f2py/__init__.py
<ide> def compile(source,
<ide> '-c',
<ide> 'import numpy.f2py as f2py2e;f2py2e.main()'] + args
<ide> try:
<del> cp = subprocess.run(c, stdout=subprocess.PIPE,
<del> stderr=subprocess.PIPE)
<add> cp = subprocess.run(c, capture_output=True)
<ide> except OSError:
<ide> # preserve historic status code used by exec_command()
<ide> cp = subprocess.CompletedProcess(c, 127, stdout=b'', stderr=b'')
<ide><path>setup.py
<ide> def _needs_gcc_c99_flag(obj):
<ide> return False
<ide>
<ide> # will print something like '4.2.1\n'
<del> out = subprocess.run([cc, '-dumpversion'], stdout=subprocess.PIPE,
<del> stderr=subprocess.PIPE, universal_newlines=True)
<add> out = subprocess.run([cc, '-dumpversion'],
<add> capture_output=True, text=True)
<ide> # -std=c99 is default from this version on
<ide> if _pep440.parse(out.stdout) >= _pep440.Version('5.0'):
<ide> return False | 7 |
Javascript | Javascript | remove unreachable code | 99268b1e996d13a0aeda7aa112796484fe4e4238 | <ide><path>lib/internal/assert/assertion_error.js
<ide> class AssertionError extends Error {
<ide> } else {
<ide> let res = inspectValue(actual);
<ide> let other = inspectValue(expected);
<del> const knownOperators = kReadableOperator[operator];
<add> const knownOperator = kReadableOperator[operator];
<ide> if (operator === 'notDeepEqual' && res === other) {
<del> res = `${knownOperators}\n\n${res}`;
<add> res = `${knownOperator}\n\n${res}`;
<ide> if (res.length > 1024) {
<ide> res = `${res.slice(0, 1021)}...`;
<ide> }
<ide> class AssertionError extends Error {
<ide> other = `${other.slice(0, 509)}...`;
<ide> }
<ide> if (operator === 'deepEqual') {
<del> const eq = operator === 'deepEqual' ? 'deep-equal' : 'equal';
<del> res = `${knownOperators}\n\n${res}\n\nshould loosely ${eq}\n\n`;
<add> res = `${knownOperator}\n\n${res}\n\nshould loosely deep-equal\n\n`;
<ide> } else {
<ide> const newOperator = kReadableOperator[`${operator}Unequal`];
<ide> if (newOperator) { | 1 |
Javascript | Javascript | improve assert messages | 55bb9c41284d49bf4f1ebbb674d4ba9cad7508d5 | <ide><path>test/sequential/test-child-process-execsync.js
<ide> try {
<ide> assert.strictEqual(e.errno, 'ETIMEDOUT');
<ide> err = e;
<ide> } finally {
<del> assert.strictEqual(ret, undefined, 'we should not have a return value');
<add> assert.strictEqual(ret, undefined,
<add> `should not have a return value, received ${ret}`);
<ide> assert.strictEqual(caught, true, 'execSync should throw');
<ide> const end = Date.now() - start;
<ide> assert(end < SLEEP);
<ide> cmd = `"${process.execPath}" -e "console.log('${msg}');"`;
<ide> ret = execSync(cmd);
<ide>
<ide> assert.strictEqual(ret.length, msgBuf.length);
<del>assert.deepStrictEqual(ret, msgBuf, 'execSync result buffer should match');
<add>assert.deepStrictEqual(ret, msgBuf);
<ide>
<ide> ret = execSync(cmd, { encoding: 'utf8' });
<ide>
<del>assert.strictEqual(ret, `${msg}\n`, 'execSync encoding result should match');
<add>assert.strictEqual(ret, `${msg}\n`);
<ide>
<ide> const args = [
<ide> '-e',
<ide> assert.deepStrictEqual(ret, msgBuf);
<ide>
<ide> ret = execFileSync(process.execPath, args, { encoding: 'utf8' });
<ide>
<del>assert.strictEqual(ret, `${msg}\n`,
<del> 'execFileSync encoding result should match');
<add>assert.strictEqual(ret, `${msg}\n`);
<ide>
<ide> // Verify that the cwd option works - GH #7824
<ide> { | 1 |
PHP | PHP | use injection here. blah, can't decide | 45f0b4f9d9f7919ebba52fb511d48c931df7755c | <ide><path>app/Providers/ErrorServiceProvider.php
<ide> <?php namespace App\Providers;
<ide>
<del>use App, Log, Exception;
<add>use Exception;
<add>use Illuminate\Contracts\Logging\Log;
<ide> use Illuminate\Support\ServiceProvider;
<add>use Illuminate\Contracts\Exception\Handler;
<ide>
<ide> class ErrorServiceProvider extends ServiceProvider {
<ide>
<ide> /**
<ide> * Register any error handlers.
<ide> *
<add> * @param Handler $handler
<add> * @param Log $log
<ide> * @return void
<ide> */
<del> public function boot()
<add> public function boot(Handler $handler, Log $log)
<ide> {
<ide> // Here you may handle any errors that occur in your application, including
<ide> // logging them or displaying custom views for specific errors. You may
<ide> // even register several error handlers to handle different types of
<ide> // exceptions. If nothing is returned, the default error view is
<ide> // shown, which includes a detailed stack trace during debug.
<ide>
<del> App::error(function(Exception $e)
<add> $handler->error(function(Exception $e) use ($log)
<ide> {
<del> Log::error($e);
<add> $log->error($e);
<ide> });
<ide> }
<ide>
<ide><path>app/Providers/LogServiceProvider.php
<ide> <?php namespace App\Providers;
<ide>
<del>use Log;
<add>use Illuminate\Contracts\Logging\Log;
<ide> use Illuminate\Support\ServiceProvider;
<ide>
<ide> class LogServiceProvider extends ServiceProvider {
<ide>
<ide> /**
<ide> * Configure the application's logging facilities.
<ide> *
<add> * @param Log $log
<ide> * @return void
<ide> */
<del> public function boot()
<add> public function boot(Log $log)
<ide> {
<del> Log::useFiles(storage_path().'/logs/laravel.log');
<add> $log->useFiles(storage_path().'/logs/laravel.log');
<ide> }
<ide>
<ide> /** | 2 |
Ruby | Ruby | implement os-agnostic methods | 0ce7a74c585eb01df60f1e9d353825d90bd3d969 | <ide><path>Library/Homebrew/extend/os/linux/hardware/cpu.rb
<ide> module Hardware
<ide> class CPU
<ide> class << self
<del> def universal_archs
<del> [].extend ArchitectureListExtension
<del> end
<del>
<ide> def cpuinfo
<ide> @cpuinfo ||= File.read("/proc/cpuinfo")
<ide> end
<ide>
<del> def type
<del> @type ||= if cpuinfo =~ /Intel|AMD/
<del> :intel
<del> elsif cpuinfo =~ /ARM|Marvell/
<del> :arm
<del> else
<del> :dunno
<del> end
<del> end
<del>
<ide> def family
<ide> return :arm if arm?
<add> return :ppc if ppc?
<ide> return :dunno unless intel?
<ide> # See https://software.intel.com/en-us/articles/intel-architecture-and-processor-identification-with-cpuid-model-and-family-numbers
<ide> cpu_family = cpuinfo[/^cpu family\s*: ([0-9]+)/, 1].to_i
<ide> def family
<ide> end
<ide> end
<ide>
<del> def cores
<del> cpuinfo.scan(/^processor/).size
<del> end
<del>
<ide> def flags
<del> @flags ||= cpuinfo[/^(flags|Features).*/, 0].split
<add> @flags ||= cpuinfo[/^(flags|Features).*/, 0]&.split
<add> @flags ||= []
<ide> end
<ide>
<ide> # Compatibility with Mac method, which returns lowercase symbols
<ide> def sse3?
<ide> def sse4?
<ide> flags.include? "sse4_1"
<ide> end
<del>
<del> alias is_64_bit? lm?
<del>
<del> def bits
<del> is_64_bit? ? 64 : 32
<del> end
<ide> end
<ide> end
<ide> end
<ide><path>Library/Homebrew/extend/os/mac/hardware/cpu.rb
<ide> def extmodel
<ide> sysctl_int("machdep.cpu.extmodel")
<ide> end
<ide>
<del> def cores
<del> sysctl_int("hw.ncpu")
<del> end
<del>
<del> def bits
<del> sysctl_bool("hw.cpu64bit_capable") ? 64 : 32
<del> end
<del>
<del> def arch_32_bit
<del> intel? ? :i386 : :ppc
<del> end
<del>
<del> def arch_64_bit
<del> intel? ? :x86_64 : :ppc64
<del> end
<del>
<ide> # Returns an array that's been extended with ArchitectureListExtension,
<ide> # which provides helpers like #as_arch_flags and #as_cmake_arch_flags.
<ide> def universal_archs
<ide><path>Library/Homebrew/hardware.rb
<ide> def optimization_flags
<ide> end
<ide>
<ide> def arch_32_bit
<del> :i386
<add> if arm?
<add> :arm
<add> elsif intel?
<add> :i386
<add> elsif ppc?
<add> :ppc32
<add> else
<add> :dunno
<add> end
<ide> end
<ide>
<ide> def arch_64_bit
<del> :x86_64
<add> if arm?
<add> :arm64
<add> elsif intel?
<add> :x86_64
<add> elsif ppc?
<add> :ppc64
<add> else
<add> :dunno
<add> end
<add> end
<add>
<add> def arch
<add> case bits
<add> when 32
<add> arch_32_bit
<add> when 64
<add> arch_64_bit
<add> else
<add> :dunno
<add> end
<add> end
<add>
<add> def universal_archs
<add> [arch].extend ArchitectureListExtension
<ide> end
<ide>
<ide> def type
<ide> case RUBY_PLATFORM
<ide> when /x86_64/, /i\d86/ then :intel
<add> when /arm/ then :arm
<ide> when /ppc\d+/ then :ppc
<ide> else :dunno
<ide> end
<ide> def family
<ide> end
<ide>
<ide> def cores
<del> 1
<add> return @cores if @cores
<add> @cores = Utils.popen_read("getconf", "_NPROCESSORS_ONLN").chomp.to_i
<add> @cores = 1 unless $CHILD_STATUS.success?
<add> @cores
<ide> end
<ide>
<ide> def bits
<del> case RUBY_PLATFORM
<del> when /x86_64/, /ppc64/ then 64
<del> when /i\d86/, /ppc/ then 32
<add> @bits ||= case RUBY_PLATFORM
<add> when /x86_64/, /ppc64/, /aarch64|arm64/ then 64
<add> when /i\d86/, /ppc/, /arm/ then 32
<ide> end
<ide> end
<ide> | 3 |
Python | Python | remove processor_factory from dag processing | 2fea4cdceaa12b3ac13f24eeb383af624aacb2e7 | <ide><path>airflow/dag_processing/manager.py
<ide> from datetime import datetime, timedelta
<ide> from importlib import import_module
<ide> from multiprocessing.connection import Connection as MultiprocessingConnection
<del>from typing import TYPE_CHECKING, Any, Callable, Dict, List, NamedTuple, Optional, Union, cast
<add>from typing import TYPE_CHECKING, Any, Dict, List, NamedTuple, Optional, Union, cast
<ide>
<ide> from setproctitle import setproctitle
<ide> from sqlalchemy import or_
<ide> class DagFileProcessorAgent(LoggingMixin, MultiprocessingStartMethodMixin):
<ide> :param max_runs: The number of times to parse and schedule each file. -1
<ide> for unlimited.
<ide> :type max_runs: int
<del> :param processor_factory: function that creates processors for DAG
<del> definition files. Arguments are (dag_definition_path, log_file_path)
<del> :type processor_factory: ([str, List[CallbackRequest], Optional[List[str]], bool]) -> (
<del> DagFileProcessorProcess
<del> )
<ide> :param processor_timeout: How long to wait before timing out a DAG file processor
<ide> :type processor_timeout: timedelta
<ide> :param dag_ids: if specified, only schedule tasks with these DAG IDs
<ide> def __init__(
<ide> self,
<ide> dag_directory: str,
<ide> max_runs: int,
<del> processor_factory: Callable[
<del> [str, List[CallbackRequest], Optional[List[str]], bool], DagFileProcessorProcess
<del> ],
<ide> processor_timeout: timedelta,
<ide> dag_ids: Optional[List[str]],
<ide> pickle_dags: bool,
<ide> def __init__(
<ide> self._file_path_queue: List[str] = []
<ide> self._dag_directory: str = dag_directory
<ide> self._max_runs = max_runs
<del> self._processor_factory = processor_factory
<ide> self._processor_timeout = processor_timeout
<ide> self._dag_ids = dag_ids
<ide> self._pickle_dags = pickle_dags
<ide> def start(self) -> None:
<ide> args=(
<ide> self._dag_directory,
<ide> self._max_runs,
<del> # getattr prevents error while pickling an instance method.
<del> getattr(self, "_processor_factory"),
<ide> self._processor_timeout,
<ide> child_signal_conn,
<ide> self._dag_ids,
<ide> def wait_until_finished(self) -> None:
<ide> def _run_processor_manager(
<ide> dag_directory: str,
<ide> max_runs: int,
<del> processor_factory: Callable[[str, List[CallbackRequest]], DagFileProcessorProcess],
<ide> processor_timeout: timedelta,
<ide> signal_conn: MultiprocessingConnection,
<ide> dag_ids: Optional[List[str]],
<ide> def _run_processor_manager(
<ide> processor_manager = DagFileProcessorManager(
<ide> dag_directory,
<ide> max_runs,
<del> processor_factory,
<ide> processor_timeout,
<ide> signal_conn,
<ide> dag_ids,
<ide> class DagFileProcessorManager(LoggingMixin):
<ide> :param max_runs: The number of times to parse and schedule each file. -1
<ide> for unlimited.
<ide> :type max_runs: int
<del> :param processor_factory: function that creates processors for DAG
<del> definition files. Arguments are (dag_definition_path)
<del> :type processor_factory: (unicode, unicode, list) -> (DagFileProcessorProcess)
<ide> :param processor_timeout: How long to wait before timing out a DAG file processor
<ide> :type processor_timeout: timedelta
<ide> :param signal_conn: connection to communicate signal with processor agent.
<ide> def __init__(
<ide> self,
<ide> dag_directory: Union[str, "pathlib.Path"],
<ide> max_runs: int,
<del> processor_factory: Callable[[str, List[CallbackRequest]], DagFileProcessorProcess],
<ide> processor_timeout: timedelta,
<ide> signal_conn: MultiprocessingConnection,
<ide> dag_ids: Optional[List[str]],
<ide> def __init__(
<ide> self._file_path_queue: List[str] = []
<ide> self._dag_directory = dag_directory
<ide> self._max_runs = max_runs
<del> self._processor_factory = processor_factory
<ide> self._signal_conn = signal_conn
<ide> self._pickle_dags = pickle_dags
<ide> self._dag_ids = dag_ids
<ide> def _run_parsing_loop(self):
<ide> # If we're in async mode, we can start up straight away. If we're
<ide> # in sync mode we need to be told to start a "loop"
<ide> self.start_new_processes()
<del>
<ide> while True:
<ide> loop_start_time = time.monotonic()
<ide>
<ide> def collect_results(self) -> None:
<ide>
<ide> self.log.debug("%s file paths queued for processing", len(self._file_path_queue))
<ide>
<add> @staticmethod
<add> def _create_process(file_path, pickle_dags, dag_ids, callback_requests):
<add> """Creates DagFileProcessorProcess instance."""
<add> return DagFileProcessorProcess(
<add> file_path=file_path, pickle_dags=pickle_dags, dag_ids=dag_ids, callback_requests=callback_requests
<add> )
<add>
<ide> def start_new_processes(self):
<ide> """Start more processors if we have enough slots and files to process"""
<ide> while self._parallelism - len(self._processors) > 0 and self._file_path_queue:
<ide> def start_new_processes(self):
<ide> continue
<ide>
<ide> callback_to_execute_for_file = self._callback_to_execute[file_path]
<del> processor = self._processor_factory(
<del> file_path, callback_to_execute_for_file, self._dag_ids, self._pickle_dags
<add> processor = self._create_process(
<add> file_path, self._pickle_dags, self._dag_ids, callback_to_execute_for_file
<ide> )
<ide>
<ide> del self._callback_to_execute[file_path]
<ide><path>airflow/jobs/scheduler_job.py
<ide> from airflow import models, settings
<ide> from airflow.configuration import conf
<ide> from airflow.dag_processing.manager import DagFileProcessorAgent
<del>from airflow.dag_processing.processor import DagFileProcessorProcess
<ide> from airflow.exceptions import SerializedDagNotFound
<ide> from airflow.executors.executor_loader import UNPICKLEABLE_EXECUTORS
<ide> from airflow.jobs.base_job import BaseJob
<ide> from airflow.stats import Stats
<ide> from airflow.ti_deps.dependencies_states import EXECUTION_STATES
<ide> from airflow.utils import timezone
<del>from airflow.utils.callback_requests import CallbackRequest, DagCallbackRequest, TaskCallbackRequest
<add>from airflow.utils.callback_requests import DagCallbackRequest, TaskCallbackRequest
<ide> from airflow.utils.event_scheduler import EventScheduler
<ide> from airflow.utils.retries import MAX_DB_RETRIES, retry_db_transaction, run_with_db_retries
<ide> from airflow.utils.session import create_session, provide_session
<ide> def _execute(self) -> None:
<ide> self.processor_agent = DagFileProcessorAgent(
<ide> dag_directory=self.subdir,
<ide> max_runs=self.num_times_parse_dags,
<del> processor_factory=type(self)._create_dag_file_processor,
<ide> processor_timeout=processor_timeout,
<ide> dag_ids=[],
<ide> pickle_dags=pickle_dags,
<ide> def _execute(self) -> None:
<ide> self.log.exception("Exception when executing DagFileProcessorAgent.end")
<ide> self.log.info("Exited execute loop")
<ide>
<del> @staticmethod
<del> def _create_dag_file_processor(
<del> file_path: str,
<del> callback_requests: List[CallbackRequest],
<del> dag_ids: Optional[List[str]],
<del> pickle_dags: bool,
<del> ) -> DagFileProcessorProcess:
<del> """Creates DagFileProcessorProcess instance."""
<del> return DagFileProcessorProcess(
<del> file_path=file_path, pickle_dags=pickle_dags, dag_ids=dag_ids, callback_requests=callback_requests
<del> )
<del>
<ide> def _run_scheduler_loop(self) -> None:
<ide> """
<ide> The actual scheduler loop. The main steps in the loop are:
<ide><path>tests/dag_processing/test_manager.py
<ide> )
<ide> from airflow.dag_processing.processor import DagFileProcessorProcess
<ide> from airflow.jobs.local_task_job import LocalTaskJob as LJ
<del>from airflow.jobs.scheduler_job import SchedulerJob
<ide> from airflow.models import DagBag, DagModel, TaskInstance as TI
<ide> from airflow.models.serialized_dag import SerializedDagModel
<ide> from airflow.models.taskinstance import SimpleTaskInstance
<ide> def result(self):
<ide> return self._result
<ide>
<ide> @staticmethod
<del> def _fake_dag_processor_factory(file_path, callbacks, dag_ids, pickle_dags):
<add> def _create_process(file_path, callback_requests, dag_ids, pickle_dags):
<ide> return FakeDagFileProcessorRunner(
<ide> file_path,
<ide> pickle_dags,
<ide> dag_ids,
<del> callbacks,
<add> callback_requests,
<ide> )
<ide>
<ide> @property
<ide> def test_max_runs_when_no_files(self):
<ide> manager = DagFileProcessorManager(
<ide> dag_directory=dags_folder,
<ide> max_runs=1,
<del> processor_factory=FakeDagFileProcessorRunner._fake_dag_processor_factory,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=child_pipe,
<ide> dag_ids=[],
<ide> def test_start_new_processes_with_same_filepath(self):
<ide> Test that when a processor already exist with a filepath, a new processor won't be created
<ide> with that filepath. The filepath will just be removed from the list.
<ide> """
<del> processor_factory_mock = MagicMock()
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=processor_factory_mock,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_start_new_processes_with_same_filepath(self):
<ide> # and since a processor with 'file_1' already exists,
<ide> # even though it is first in '_file_path_queue'
<ide> # a new processor is created with 'file_2' and not 'file_1'.
<del> processor_factory_mock.assert_called_once_with('file_2.py', [], [], False)
<ide>
<ide> assert file_1 in manager._processors.keys()
<ide> assert file_2 in manager._processors.keys()
<ide> def test_set_file_paths_when_processor_file_path_not_in_new_file_paths(self):
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_set_file_paths_when_processor_file_path_is_in_new_file_paths(self):
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_file_paths_in_queue_sorted_alphabetically(
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_file_paths_in_queue_sorted_random_seeded_by_host(
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_file_paths_in_queue_sorted_by_modified_time(
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_recently_modified_file_is_parsed_with_mtime_mode(
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=3,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_find_zombies(self):
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_find_zombies(self):
<ide> session.query(TI).delete()
<ide> session.query(LJ).delete()
<ide>
<del> def test_handle_failure_callback_with_zombies_are_correctly_passed_to_dag_file_processor(self):
<add> @mock.patch('airflow.dag_processing.manager.DagFileProcessorProcess')
<add> def test_handle_failure_callback_with_zombies_are_correctly_passed_to_dag_file_processor(
<add> self, mock_processor
<add> ):
<ide> """
<ide> Check that the same set of failure callback with zombies are passed to the dag
<ide> file processors until the next zombie detection logic is invoked.
<ide> def test_handle_failure_callback_with_zombies_are_correctly_passed_to_dag_file_p
<ide>
<ide> fake_processors = []
<ide>
<del> def fake_processor_factory(*args, **kwargs):
<add> def fake_processor_(*args, **kwargs):
<ide> nonlocal fake_processors
<del> processor = FakeDagFileProcessorRunner._fake_dag_processor_factory(*args, **kwargs)
<add> processor = FakeDagFileProcessorRunner._create_process(*args, **kwargs)
<ide> fake_processors.append(processor)
<ide> return processor
<ide>
<add> mock_processor.side_effect = fake_processor_
<add>
<ide> manager = DagFileProcessorManager(
<ide> dag_directory=test_dag_path,
<ide> max_runs=1,
<del> processor_factory=fake_processor_factory,
<ide> processor_timeout=timedelta.max,
<ide> signal_conn=child_pipe,
<ide> dag_ids=[],
<ide> def test_kill_timed_out_processors_kill(self, mock_kill, mock_pid):
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta(seconds=5),
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_kill_timed_out_processors_no_kill(self, mock_dag_file_processor, mock_p
<ide> manager = DagFileProcessorManager(
<ide> dag_directory='directory',
<ide> max_runs=1,
<del> processor_factory=MagicMock().return_value,
<ide> processor_timeout=timedelta(seconds=5),
<ide> signal_conn=MagicMock(),
<ide> dag_ids=[],
<ide> def test_dag_with_system_exit(self):
<ide> Test to check that a DAG with a system.exit() doesn't break the scheduler.
<ide> """
<ide>
<del> # We need to _actually_ parse the files here to test the behaviour.
<del> # Right now the parsing code lives in SchedulerJob, even though it's
<del> # called via utils.dag_processing.
<del>
<ide> dag_id = 'exit_test_dag'
<ide> dag_directory = TEST_DAG_FOLDER.parent / 'dags_with_system_exit'
<ide>
<ide> def test_dag_with_system_exit(self):
<ide> dag_directory=dag_directory,
<ide> dag_ids=[],
<ide> max_runs=1,
<del> processor_factory=SchedulerJob._create_dag_file_processor,
<ide> processor_timeout=timedelta(seconds=5),
<ide> signal_conn=child_pipe,
<ide> pickle_dags=False,
<ide> def test_dag_with_system_exit(self):
<ide> @conf_vars({('core', 'load_examples'): 'False'})
<ide> @pytest.mark.backend("mysql", "postgres")
<ide> @pytest.mark.execution_timeout(30)
<del> def test_pipe_full_deadlock(self):
<add> @mock.patch('airflow.dag_processing.manager.DagFileProcessorProcess')
<add> def test_pipe_full_deadlock(self, mock_processor):
<ide> dag_filepath = TEST_DAG_FOLDER / "test_scheduler_dags.py"
<ide>
<ide> child_pipe, parent_pipe = multiprocessing.Pipe()
<ide> def keep_pipe_full(pipe, exit_event):
<ide>
<ide> fake_processors = []
<ide>
<del> def fake_processor_factory(*args, **kwargs):
<add> def fake_processor_(*args, **kwargs):
<ide> nonlocal fake_processors
<del> processor = FakeDagFileProcessorRunner._fake_dag_processor_factory(*args, **kwargs)
<add> processor = FakeDagFileProcessorRunner._create_process(*args, **kwargs)
<ide> fake_processors.append(processor)
<ide> return processor
<ide>
<add> mock_processor.side_effect = fake_processor_
<add>
<ide> manager = DagFileProcessorManager(
<ide> dag_directory=dag_filepath,
<ide> dag_ids=[],
<ide> # A reasonable large number to ensure that we trigger the deadlock
<ide> max_runs=100,
<del> processor_factory=fake_processor_factory,
<ide> processor_timeout=timedelta(seconds=5),
<ide> signal_conn=child_pipe,
<ide> pickle_dags=False,
<ide> class path, thus when reloading logging module the airflow.processor_manager
<ide> pass
<ide>
<ide> # Starting dag processing with 0 max_runs to avoid redundant operations.
<del> processor_agent = DagFileProcessorAgent(
<del> test_dag_path, 0, type(self)._processor_factory, timedelta.max, [], False, async_mode
<del> )
<add> processor_agent = DagFileProcessorAgent(test_dag_path, 0, timedelta.max, [], False, async_mode)
<ide> processor_agent.start()
<ide> if not async_mode:
<ide> processor_agent.run_single_parsing_loop()
<ide> def test_parse_once(self):
<ide>
<ide> test_dag_path = TEST_DAG_FOLDER / 'test_scheduler_dags.py'
<ide> async_mode = 'sqlite' not in conf.get('core', 'sql_alchemy_conn')
<del> processor_agent = DagFileProcessorAgent(
<del> test_dag_path, 1, type(self)._processor_factory, timedelta.max, [], False, async_mode
<del> )
<add> processor_agent = DagFileProcessorAgent(test_dag_path, 1, timedelta.max, [], False, async_mode)
<ide> processor_agent.start()
<ide> if not async_mode:
<ide> processor_agent.run_single_parsing_loop()
<ide> def test_launch_process(self):
<ide> pass
<ide>
<ide> # Starting dag processing with 0 max_runs to avoid redundant operations.
<del> processor_agent = DagFileProcessorAgent(
<del> test_dag_path, 0, type(self)._processor_factory, timedelta.max, [], False, async_mode
<del> )
<add> processor_agent = DagFileProcessorAgent(test_dag_path, 0, timedelta.max, [], False, async_mode)
<ide> processor_agent.start()
<ide> if not async_mode:
<ide> processor_agent.run_single_parsing_loop() | 3 |
Text | Text | fix binary links for 0.8.7 post | 226d37bb6883bcf5d409f82a9d6e910d46f175a9 | <ide><path>doc/blog/release/v0.8.7.md
<ide> Windows x64 Installer: http://nodejs.org/dist/v0.8.7/x64/node-v0.8.7-x64.msi
<ide>
<ide> Windows x64 Files: http://nodejs.org/dist/v0.8.7/x64/
<ide>
<del>Linux 32-bit Binary Package: http://nodejs.org/dist/v0.8.7/node-v0.8.7-linux-i686.tar.gz
<add>Linux 32-bit Binary: http://nodejs.org/dist/v0.8.7/node-v0.8.7-linux-x86.tar.gz
<ide>
<del>Linux 64-bit Binary Package: http://nodejs.org/dist/v0.8.7/node-v0.8.7-linux-x86_64.tar.gz
<add>Linux 64-bit Binary: http://nodejs.org/dist/v0.8.7/node-v0.8.7-linux-x64.tar.gz
<ide>
<del>Solaris 32-bit Binary Package: http://nodejs.org/dist/v0.8.7/node-v0.8.7-sunos-i386.tar.gz
<add>Solaris 32-bit Binary: http://nodejs.org/dist/v0.8.7/node-v0.8.7-sunos-x86.tar.gz
<ide>
<del>Solaris 64-bit Binary Package: http://nodejs.org/dist/v0.8.7/node-v0.8.7-sunos-x86_64.tar.gz
<add>Solaris 64-bit Binary: http://nodejs.org/dist/v0.8.7/node-v0.8.7-sunos-x64.tar.gz
<ide>
<ide> Other release files: http://nodejs.org/dist/v0.8.7/
<ide> | 1 |
Javascript | Javascript | use key="foo" for all components | 4b81de93d3733ecc5d4dcaab7efad2a5eef1937d | <ide><path>src/core/__tests__/ReactIdentity-test.js
<add>/**
<add> * @jsx React.DOM
<add> * @emails react-core
<add> */
<add>
<add>"use strict";
<add>
<add>var React;
<add>var ReactTestUtils;
<add>var reactComponentExpect;
<add>
<add>describe('ReactIdentity', function() {
<add>
<add> beforeEach(function() {
<add> require('mock-modules').autoMockOff().dumpCache();
<add> React = require('React');
<add> ReactTestUtils = require('ReactTestUtils');
<add> reactComponentExpect = require('reactComponentExpect');
<add> });
<add>
<add> it('should allow keyed objects to express identity', function() {
<add> var instance =
<add> <div>
<add> {{
<add> first: <div />,
<add> second: <div />
<add> }}
<add> </div>;
<add>
<add> React.renderComponent(instance, document.createElement('div'));
<add> var node = instance.getDOMNode();
<add> reactComponentExpect(instance).toBeDOMComponentWithChildCount(2);
<add> expect(node.childNodes[0].id).toEqual('.reactRoot[0].:0:first');
<add> expect(node.childNodes[1].id).toEqual('.reactRoot[0].:0:second');
<add> });
<add>
<add> it('should allow key property to express identity', function() {
<add> var instance =
<add> <div>
<add> <div key="apple" />
<add> <div key="banana" />
<add> </div>;
<add>
<add> React.renderComponent(instance, document.createElement('div'));
<add> var node = instance.getDOMNode();
<add> reactComponentExpect(instance).toBeDOMComponentWithChildCount(2);
<add> expect(node.childNodes[0].id).toEqual('.reactRoot[0].:apple');
<add> expect(node.childNodes[1].id).toEqual('.reactRoot[0].:banana');
<add> });
<add>
<add> it('should use instance identity', function() {
<add>
<add> var Wrapper = React.createClass({
<add> render: function() {
<add> return <a key="i_get_overwritten">{this.props.children}</a>;
<add> }
<add> });
<add>
<add> var instance =
<add> <div>
<add> <Wrapper key="wrap1"><span key="squirrel" /></Wrapper>
<add> <Wrapper key="wrap2"><span key="bunny" /></Wrapper>
<add> <Wrapper><span key="chipmunk" /></Wrapper>
<add> </div>;
<add>
<add> React.renderComponent(instance, document.createElement('div'));
<add> var node = instance.getDOMNode();
<add> reactComponentExpect(instance).toBeDOMComponentWithChildCount(3);
<add> expect(node.childNodes[0].id)
<add> .toEqual('.reactRoot[0].:wrap1');
<add> expect(node.childNodes[0].firstChild.id)
<add> .toEqual('.reactRoot[0].:wrap1.:squirrel');
<add> expect(node.childNodes[1].id)
<add> .toEqual('.reactRoot[0].:wrap2');
<add> expect(node.childNodes[1].firstChild.id)
<add> .toEqual('.reactRoot[0].:wrap2.:bunny');
<add> expect(node.childNodes[2].id)
<add> .toEqual('.reactRoot[0].:2');
<add> expect(node.childNodes[2].firstChild.id)
<add> .toEqual('.reactRoot[0].:2.:chipmunk');
<add> });
<add>
<add>});
<ide><path>src/utils/flattenChildren.js
<ide> var flattenChildrenImpl = function(res, children, nameSoFar) {
<ide> if (Array.isArray(children)) {
<ide> for (var i = 0; i < children.length; i++) {
<ide> var child = children[i];
<del> key = child && child.mountInContainerNode &&
<del> (child._key || child.props.key);
<add> key = child && (child._key || (child.props && child.props.key));
<ide> escapedKey = key ? escapeTextForBrowser(key) : ('' + i);
<ide> flattenChildrenImpl(
<ide> res, | 2 |
Python | Python | add support for commonjs | 7b4d0af5fd8cfa6ea613f20e843e81032fac2d48 | <ide><path>utils/build/build.py
<ide> def main(argv=None):
<ide> sources = []
<ide>
<ide> if args.amd:
<del> tmp.write('( function ( root, factory ) {\n\n\tif ( typeof define === \'function\' && define.amd ) {\n\n\t\tdefine( factory );\n\n\t} else {\n\n\t\troot.THREE = factory();\n\n\t}\n\n}( this, function () {\n\n')
<add> tmp.write('( function ( root, factory ) {\n\n\tif ( typeof define === \'function\' && define.amd ) {\n\n\t\tdefine( [\'exports\'], factory );\n\n\t} else if (typeof exports === \'object\') {\n\n\t\tfactory(exports);\n\n\t} else {\n\n\t\tfactory(root);\n\n\t}\n\n}( this, function (exports) {\n\n')
<ide>
<ide> for include in args.include:
<ide> with open('includes/' + include + '.json','r') as f:
<ide> def main(argv=None):
<ide> tmp.write('\n')
<ide>
<ide> if args.amd:
<del> tmp.write('return THREE;\n\n} ) );')
<add> tmp.write('exports.THREE = THREE;\n\n} ) );')
<ide>
<ide> tmp.close()
<ide> | 1 |
Ruby | Ruby | remove default tag value from bottle_filename | a3dad588a855238b3576b59e90fd062b19cf55e0 | <ide><path>Library/Homebrew/bottles.rb
<ide> require 'bottle_version'
<ide>
<ide> def bottle_filename options={}
<del> options = { :tag => bottle_tag }.merge(options)
<ide> suffix = ".#{options[:tag]}#{bottle_suffix(options[:revision])}"
<ide> "#{options[:name]}-#{options[:version]}#{suffix}"
<ide> end | 1 |
Python | Python | update no_trainer script for summarization | c28d04e9e252a1a099944e325685f14d242ecdcd | <ide><path>examples/pytorch/summarization/run_summarization_no_trainer.py
<ide> def postprocess_text(preds, labels):
<ide> "max_length": args.val_max_target_length if args is not None else config.max_length,
<ide> "num_beams": args.num_beams,
<ide> }
<del> samples_seen = 0
<ide> for step, batch in enumerate(eval_dataloader):
<ide> with torch.no_grad():
<ide> generated_tokens = accelerator.unwrap_model(model).generate(
<ide> def postprocess_text(preds, labels):
<ide> # If we did not pad to max length, we need to pad the labels too
<ide> labels = accelerator.pad_across_processes(batch["labels"], dim=1, pad_index=tokenizer.pad_token_id)
<ide>
<del> generated_tokens, labels = accelerator.gather((generated_tokens, labels))
<add> generated_tokens, labels = accelerator.gather_for_metrics(generated_tokens, labels)
<ide> generated_tokens = generated_tokens.cpu().numpy()
<ide> labels = labels.cpu().numpy()
<ide>
<ide> def postprocess_text(preds, labels):
<ide> decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
<ide>
<ide> decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
<del> # If we are in a multiprocess environment, the last batch has duplicates
<del> if accelerator.num_processes > 1:
<del> if step == len(eval_dataloader) - 1:
<del> decoded_preds = decoded_preds[: len(eval_dataloader.dataset) - samples_seen]
<del> decoded_labels = decoded_labels[: len(eval_dataloader.dataset) - samples_seen]
<del> else:
<del> samples_seen += len(decoded_labels)
<ide>
<add> decoded_preds, decoded_labels = accelerator.gather_for_metrics(decoded_preds, decoded_labels)
<ide> metric.add_batch(
<ide> predictions=decoded_preds,
<ide> references=decoded_labels, | 1 |
Python | Python | revert some import * fixes in f2py | 83d5f9a331543fa0748708972a5e6b7c5dcbcb03 | <ide><path>numpy/f2py/capi_maps.py
<ide> import re
<ide> import os
<ide> import sys
<del>from .auxfuncs import (
<del> debugcapi, dictappend, errmess, gentitle, getcallprotoargument,
<del> getcallstatement, getfortranname, getpymethoddef, getrestdoc,
<del> getusercode, getusercode1, hasinitvalue, hasnote, hasresultnote,
<del> isarray, iscomplex, iscomplexarray, iscomplexfunction, isexternal,
<del> isfunction, isintent_aux, isintent_callback, isintent_dict,
<del> isintent_hide, isintent_in, isintent_inout, isintent_out, ismodule,
<del> isoptional, isrequired, isscalar, isstring, isstringarray,
<del> isstringfunction, issubroutine, l_and, l_not, l_or, outmess
<del>)
<del>
<ide> from .crackfortran import markoutercomma
<ide> from . import cb_rules
<ide>
<add># The eviroment provided by auxfuncs.py is needed for some calls to eval.
<add># As the needed functions cannot be determined by static inspection of the
<add># code, it is safest to use import * pending a major refactoring of f2py.
<add>from .auxfuncs import *
<add>
<ide> __all__ = [
<ide> 'getctype', 'getstrlength', 'getarrdims', 'getpydocsign',
<ide> 'getarrdocsign', 'getinit', 'sign2map', 'routsign2map', 'modsign2map',
<ide><path>numpy/f2py/crackfortran.py
<ide> import platform
<ide>
<ide> from . import __version__
<del>from .auxfuncs import (
<del> errmess, hascommon, isdouble, iscomplex, isexternal, isinteger,
<del> isintent_aux, isintent_c, isintent_callback, isintent_in,
<del> isintent_inout, isintent_inplace, islogical, isoptional, isscalar,
<del> isstring, isstringarray, l_or, show
<del>)
<add>
<add># The eviroment provided by auxfuncs.py is needed for some calls to eval.
<add># As the needed functions cannot be determined by static inspection of the
<add># code, it is safest to use import * pending a major refactoring of f2py.
<add>from .auxfuncs import *
<ide>
<ide>
<ide> f2py_version = __version__.version
<ide><path>numpy/f2py/f90mod_rules.py
<ide>
<ide> import numpy as np
<ide>
<del>from .auxfuncs import (
<del> applyrules, dictappend, hasbody, hasnote, isallocatable, isfunction,
<del> isintent_hide, ismodule, isprivate, isroutine, isstringarray, l_or,
<del> outmess
<del>)
<ide> from . import capi_maps
<ide> from . import func2subr
<ide> from .crackfortran import undo_rmbadname, undo_rmbadname1
<ide>
<add># The eviroment provided by auxfuncs.py is needed for some calls to eval.
<add># As the needed functions cannot be determined by static inspection of the
<add># code, it is safest to use import * pending a major refactoring of f2py.
<add>from .auxfuncs import *
<add>
<ide> options = {}
<ide>
<ide> | 3 |
Text | Text | add v4.4.4 to changelog | 6a17b8417e7e00e15b2b6f0441703f3c74f3fd0c | <ide><path>CHANGELOG.md
<ide> # Ember Changelog
<ide>
<add>### v4.4.4 (November 2, 2022)
<add>
<add>- [CVE pending](https://emberjs.com/blog/ember-4-8-1-released) Fix a prototype pollution vulnerability in `set` and `setProperties
<add>
<ide> ## v3.28.10 (November 2, 2022)
<ide>
<ide> - [CVE pending](https://emberjs.com/blog/ember-4-8-1-released) Fix a prototype pollution vulnerability in `set` and `setProperties` | 1 |
Javascript | Javascript | add performance.now() and fix view stuff | 50de50ea046009dc10c1c3e574581db77add3d1a | <ide><path>packages/ember-metal/lib/instrumentation.js
<ide> var populateListeners = function(name) {
<ide> return listeners;
<ide> };
<ide>
<add>var time = (function() {
<add> var perf = window.performance || {};
<add> var fn = perf.now || perf.mozNow || perf.webkitNow || perf.msNow || perf.oNow;
<add> // fn.bind will be available in all the browsers that support the advanced window.performance... ;-)
<add> return fn ? fn.bind(perf) : function() { return +new Date(); };
<add>})();
<add>
<add>
<ide> Ember.Instrumentation.instrument = function(name, payload, callback, binding) {
<ide> var listeners = cache[name];
<ide>
<ide> Ember.Instrumentation.instrument = function(name, payload, callback, binding) {
<ide> try {
<ide> for (i=0, l=listeners.length; i<l; i++) {
<ide> listener = listeners[i];
<del> beforeValues[i] = listener.before(name, new Date(), payload);
<add> beforeValues[i] = listener.before(name, time(), payload);
<ide> }
<ide>
<ide> ret = callback.call(binding);
<ide> Ember.Instrumentation.instrument = function(name, payload, callback, binding) {
<ide> } finally {
<ide> for (i=0, l=listeners.length; i<l; i++) {
<ide> listener = listeners[i];
<del> listener.after(name, new Date(), payload, beforeValues[i]);
<add> listener.after(name, time(), payload, beforeValues[i]);
<ide> }
<ide> }
<ide>
<ide><path>packages/ember-views/lib/views/view.js
<ide> Ember.View = Ember.Object.extend(Ember.Evented,
<ide> */
<ide> renderToBuffer: function(parentBuffer, bufferOperation) {
<ide> var name = get(this, 'instrumentName'),
<del> details = this.instrumentDetails({});
<add> details = {};
<add>
<add> this.instrumentDetails(details);
<ide>
<ide> return Ember.instrument(name, details, function() {
<ide> return this._renderToBuffer(parentBuffer, bufferOperation); | 2 |
Javascript | Javascript | use object destructuring | a69a29da102488a636c5862e5f805658ba32c90d | <ide><path>lib/net.js
<ide> const kLastWriteQueueSize = Symbol('lastWriteQueueSize');
<ide> let cluster;
<ide> let dns;
<ide>
<del>const errnoException = errors.errnoException;
<del>const exceptionWithHostPort = errors.exceptionWithHostPort;
<add>const { errnoException, exceptionWithHostPort } = errors;
<ide>
<ide> const {
<ide> kTimeout,
<ide> function Socket(options) {
<ide>
<ide> options.readable = options.readable || false;
<ide> options.writable = options.writable || false;
<del> const allowHalfOpen = options.allowHalfOpen;
<add> const { allowHalfOpen } = options;
<ide>
<ide> // Prevent the "no-half-open enforcer" from being inherited from `Duplex`.
<ide> options.allowHalfOpen = true;
<ide> function Socket(options) {
<ide> this._handle = options.handle; // private
<ide> this[async_id_symbol] = getNewAsyncId(this._handle);
<ide> } else if (options.fd !== undefined) {
<del> const fd = options.fd;
<add> const { fd } = options;
<ide> this._handle = createHandle(fd, false);
<ide> this._handle.open(fd);
<ide> this[async_id_symbol] = this._handle.getAsyncId();
<ide> Socket.prototype._onTimeout = function() {
<ide> if (lastWriteQueueSize > 0 && handle) {
<ide> // `lastWriteQueueSize !== writeQueueSize` means there is
<ide> // an active write in progress, so we suppress the timeout.
<del> const writeQueueSize = handle.writeQueueSize;
<add> const { writeQueueSize } = handle;
<ide> if (lastWriteQueueSize !== writeQueueSize) {
<ide> this[kLastWriteQueueSize] = writeQueueSize;
<ide> this._unrefTimer();
<ide> Socket.prototype.connect = function(...args) {
<ide> this._sockname = null;
<ide> }
<ide>
<del> const path = options.path;
<add> const { path } = options;
<ide> var pipe = !!path;
<ide> debug('pipe', pipe, path);
<ide>
<ide> Socket.prototype.connect = function(...args) {
<ide>
<ide>
<ide> function lookupAndConnect(self, options) {
<add> var { port, localAddress, localPort } = options;
<ide> var host = options.host || 'localhost';
<del> var port = options.port;
<del> var localAddress = options.localAddress;
<del> var localPort = options.localPort;
<ide>
<ide> if (localAddress && !isIP(localAddress)) {
<ide> throw new ERR_INVALID_IP_ADDRESS(localAddress); | 1 |
Javascript | Javascript | make uimanager prepackable | be32cbef007dada23bf830b3b12df8beefbbdac6 | <ide><path>Libraries/ReactNative/UIManager.js
<ide> if (Platform.OS === 'ios') {
<ide> }
<ide> });
<ide> } else if (UIManager.ViewManagerNames) {
<del> UIManager.ViewManagerNames.forEach(viewManagerName => {
<del> defineLazyObjectProperty(UIManager, viewManagerName, {
<del> get: () => UIManager.getConstantsForViewManager(viewManagerName),
<add> // We want to add all the view managers to the UIManager.
<add> // However, the way things are set up, the list of view managers is not known at compile time.
<add> // As Prepack runs at compile it, it cannot process this loop.
<add> // So we wrap it in a special __residual call, which basically tells Prepack to ignore it.
<add> let residual = global.__residual ? global.__residual : (_, f, ...args) => f.apply(undefined, args);
<add> residual("void", (UIManager, defineLazyObjectProperty) => {
<add> UIManager.ViewManagerNames.forEach(viewManagerName => {
<add> defineLazyObjectProperty(UIManager, viewManagerName, {
<add> get: () => UIManager.getConstantsForViewManager(viewManagerName),
<add> });
<ide> });
<del> });
<add> }, UIManager, defineLazyObjectProperty);
<add>
<add> // As Prepack now no longer knows which properties exactly the UIManager has,
<add> // we also tell Prepack that it has only partial knowledge of the UIManager,
<add> // so that any accesses to unknown properties along the global code will fail
<add> // when Prepack encounters them.
<add> if (global.__makePartial) global.__makePartial(UIManager);
<ide> }
<ide>
<ide> module.exports = UIManager; | 1 |
Go | Go | add cgo or osusergo buildtag constraints for unix" | 80e338a18db0acce00653a176d82a567eafb0c79 | <ide><path>pkg/homedir/homedir_unix.go
<del>// +build !windows,cgo !windows,osusergo
<add>// +build !windows
<ide>
<ide> package homedir // import "github.com/docker/docker/pkg/homedir"
<ide> | 1 |
Ruby | Ruby | ignore case when checking path for config scripts | b20c35cf0e16be41b59c92666c7a529f09238f1f | <ide><path>Library/Homebrew/cmd/doctor.rb
<ide> def check_for_config_scripts
<ide>
<ide> config_scripts = []
<ide>
<add> whitelist = %W[/usr/bin /usr/sbin /usr/X11/bin /usr/X11R6/bin /opt/X11/bin #{HOMEBREW_PREFIX}/bin #{HOMEBREW_PREFIX}/sbin]
<add> whitelist.map! { |d| d.downcase }
<add>
<ide> path_folders.each do |p|
<del> next if ['/usr/bin', '/usr/sbin', '/usr/X11/bin', '/usr/X11R6/bin', "#{HOMEBREW_PREFIX}/bin", "#{HOMEBREW_PREFIX}/sbin", "/opt/X11/bin"].include? p
<add> next if whitelist.include? p.downcase
<ide> next if p =~ %r[^(#{real_cellar.to_s}|#{HOMEBREW_CELLAR.to_s})] if real_cellar
<ide>
<ide> configs = Dir["#{p}/*-config"] | 1 |
Javascript | Javascript | fix color space conversion for cubetexturenode | 45f1286ef9c9d5cb670eecaae85c15daaf031da6 | <ide><path>examples/jsm/nodes/inputs/CubeTextureNode.js
<ide> CubeTextureNode.prototype.generate = function ( builder, output ) {
<ide> builder.addContext( context );
<ide>
<ide> this.colorSpace = this.colorSpace || new ColorSpaceNode( new ExpressionNode( '', outputType ) );
<del> this.colorSpace.fromEncoding( builder.getTextureEncodingFromMap( this.value ) );
<add> this.colorSpace.fromDecoding( builder.getTextureEncodingFromMap( this.value ) );
<ide> this.colorSpace.input.parse( code );
<ide>
<ide> code = this.colorSpace.build( builder, outputType ); | 1 |
Python | Python | add the absl flag | a3c53fc0988d61c6cb9f6663fd6a326e884dca13 | <ide><path>keras/engine/training_arrays_v1.py
<ide> from keras.engine import training_utils_v1
<ide> from keras.utils.generic_utils import make_batches
<ide> from keras.utils.generic_utils import slice_arrays
<add>from keras.utils import io_utils
<ide> from keras.utils.mode_keys import ModeKeys
<ide> from tensorflow.python.platform import tf_logging as logging
<ide>
<ide> def _print_train_info(num_samples_or_steps, val_samples_or_steps, is_dataset):
<ide> if val_samples_or_steps:
<ide> msg += ', validate on {0} {increment}'.format(
<ide> val_samples_or_steps, increment=increment)
<del> print(msg)
<add> io_utils.print_msg(msg)
<ide>
<ide>
<ide> def _get_num_samples_or_steps(ins, batch_size, steps_per_epoch):
<ide><path>keras/utils/io_utils.py
<ide> """Utilities related to disk I/O."""
<ide>
<ide> import os
<add>import sys
<add>import threading
<add>
<add>import absl
<add>
<add>ABSL_LOGGING = threading.local()
<add>ABSL_LOGGING.enable = False
<add>
<add>
<add>def print_msg(message):
<add> if ABSL_LOGGING.enable:
<add> absl.logging.info(message)
<add> else:
<add> sys.stdout.write(message)
<add> sys.stdout.flush()
<ide>
<ide>
<ide> def path_to_string(path):
<ide><path>keras/utils/io_utils_test.py
<ide> # ==============================================================================
<ide> """Tests for io_utils."""
<ide>
<add>import sys
<add>
<ide> import tensorflow.compat.v2 as tf
<ide>
<ide> import builtins
<ide> def __fspath__(self):
<ide> self.assertEqual(io_utils.path_to_string('path'), 'path')
<ide> self.assertIs(io_utils.path_to_string(dummy), dummy)
<ide>
<add> def test_print_msg(self):
<add> enabled = io_utils.ABSL_LOGGING.enable
<add>
<add> io_utils.ABSL_LOGGING.enable = True
<add> with self.assertLogs(level='INFO') as logged:
<add> io_utils.print_msg('Testing Message')
<add> self.assertIn('Testing Message', logged.output[0])
<add>
<add> io_utils.ABSL_LOGGING.enable = False
<add> with self.captureWritesToStream(sys.stdout) as printed:
<add> io_utils.print_msg('Testing Message')
<add> self.assertIn('Testing Message', printed.contents())
<add>
<add> io_utils.ABSL_LOGGING.enable = enabled
<ide>
<ide> if __name__ == '__main__':
<ide> tf.test.main() | 3 |
Javascript | Javascript | add todomvc features | 83b8ad7a312b425f75ba2cc2a5773d6dc2a053f8 | <ide><path>examples/todomvc/js/app.js
<ide> var TodoItem = React.createClass({
<ide> this.props.onEdit();
<ide> this.refs.editField.getDOMNode().focus();
<ide> }),
<add> handleKey: React.autoBind(function(event) {
<add> if (event.nativeEvent.keyCode === 27) {
<add> this.handleSubmit();
<add> }
<add> }),
<ide> render: function() {
<ide> return (
<ide> <li class={cx({completed: this.props.todo.completed, editing: this.props.editing})}>
<ide> var TodoItem = React.createClass({
<ide> <button class="destroy" onClick={this.props.onDestroy} />
<ide> </div>
<ide> <form onSubmit={this.handleSubmit}>
<del> <input ref="editField" class="edit" value={this.props.todo.title} />
<add> <input
<add> ref="editField"
<add> class="edit"
<add> value={this.props.todo.title}
<add> onBlur={this.handleSubmit}
<add> onKeyDown={this.handleKey}
<add> />
<ide> <input type="submit" class="submitButton" />
<ide> </form>
<ide> </li> | 1 |
Python | Python | fix openstack tests | 16fccdfca7e9e99a85f9edcf3bbba7438f433fe3 | <ide><path>libcloud/test/compute/test_openstack.py
<ide> from libcloud.compute.base import Node, NodeImage, NodeSize
<ide> from libcloud.pricing import set_pricing, clear_pricing_data
<ide>
<del>from libcloud.test import MockResponse, MockHttpTestCase, XML_HEADERS
<add>from libcloud.common.base import Response as MockResponse
<add>from libcloud.test import MockHttpTestCase, XML_HEADERS
<ide> from libcloud.test.file_fixtures import ComputeFileFixtures, OpenStackFixtures
<ide> from libcloud.test.compute import TestCaseMixin
<ide> | 1 |
Mixed | Javascript | set trackunmanagedfds to true by default | 7603c7e50c3ae453db9702916f740618029020ba | <ide><path>doc/api/worker_threads.md
<ide> if (isMainThread) {
<ide> <!-- YAML
<ide> added: v10.5.0
<ide> changes:
<add> - version: REPLACEME
<add> pr-url: https://github.com/nodejs/node/pull/34394
<add> description: The `trackUnmanagedFds` option was set to `true` by default.
<ide> - version:
<ide> - v14.6.0
<ide> pr-url: https://github.com/nodejs/node/pull/34303
<ide> changes:
<ide> [`fs.close()`][], and close them when the Worker exits, similar to other
<ide> resources like network sockets or file descriptors managed through
<ide> the [`FileHandle`][] API. This option is automatically inherited by all
<del> nested `Worker`s. **Default**: `false`.
<add> nested `Worker`s. **Default**: `true`.
<ide> * `transferList` {Object[]} If one or more `MessagePort`-like objects
<ide> are passed in `workerData`, a `transferList` is required for those
<ide> items or [`ERR_MISSING_MESSAGE_PORT_IN_TRANSFER_LIST`][] will be thrown.
<ide><path>lib/internal/worker.js
<ide> class Worker extends EventEmitter {
<ide> env === process.env ? null : env,
<ide> options.execArgv,
<ide> parseResourceLimits(options.resourceLimits),
<del> !!options.trackUnmanagedFds);
<add> !!(options.trackUnmanagedFds ?? true));
<ide> if (this[kHandle].invalidExecArgv) {
<ide> throw new ERR_WORKER_INVALID_EXEC_ARGV(this[kHandle].invalidExecArgv);
<ide> }
<ide><path>test/parallel/test-worker-track-unmanaged-fds.js
<ide> 'use strict';
<ide> const common = require('../common');
<ide> const assert = require('assert');
<del>const { Worker } = require('worker_threads');
<add>const { Worker, isMainThread } = require('worker_threads');
<ide> const { once } = require('events');
<ide> const fs = require('fs');
<ide>
<add>if (!isMainThread)
<add> common.skip('test needs to be able to freely set `trackUnmanagedFds`');
<add>
<ide> // All the tests here are run sequentially, to avoid accidentally opening an fd
<ide> // which another part of the test expects to be closed.
<ide>
<ide> process.on('warning', (warning) => parentPort.postMessage({ warning }));
<ide> assert.throws(() => fs.fstatSync(fd), { code: 'EBADF' });
<ide> }
<ide>
<add> // The same, but trackUnmanagedFds is used only as the implied default.
<add> {
<add> const w = new Worker(`${preamble}
<add> parentPort.postMessage(fs.openSync(__filename));
<add> `, { eval: true });
<add> const [ [ fd ] ] = await Promise.all([once(w, 'message'), once(w, 'exit')]);
<add> assert(fd > 2);
<add> assert.throws(() => fs.fstatSync(fd), { code: 'EBADF' });
<add> }
<add>
<ide> // There is a warning when an fd is unexpectedly opened twice.
<ide> {
<ide> const w = new Worker(`${preamble} | 3 |
Javascript | Javascript | remove redundant callback check | e34d41e42e5163cfb933b6c01dd31c182b60476b | <ide><path>lib/fs.js
<ide> realpathSync.native = (path, options) => {
<ide>
<ide>
<ide> function realpath(p, options, callback) {
<del> callback = maybeCallback(typeof options === 'function' ? options : callback);
<add> callback = typeof options === 'function' ? options : maybeCallback(callback);
<ide> if (!options)
<ide> options = emptyObj;
<ide> else | 1 |
Ruby | Ruby | use unversioned sdk path on big sur | 3f0ed01a2aeceb9fbe8ff410a170d7f30b7936e4 | <ide><path>Library/Homebrew/os/mac/sdk.rb
<ide> def sdk_paths
<ide> paths[OS::Mac::Version.new(version)] = sdk_path if version.present?
<ide> end
<ide>
<add> # Use unversioned SDK path on Big Sur to avoid issues such as:
<add> # https://github.com/Homebrew/homebrew-core/issues/67075
<add> if OS::Mac.version >= :big_sur
<add> sdk_path = File.join(sdk_prefix, "MacOSX.sdk")
<add> version = OS::Mac.full_version
<add> paths[version] = sdk_path if File.directory?(sdk_path)
<add> end
<add>
<ide> paths
<ide> else
<ide> {} | 1 |
Text | Text | move changelog entry of 47018a82 up [ci skip] | 7867603fbbf5dc3327be3c3ff7c245614e21c04a | <ide><path>activesupport/CHANGELOG.md
<add>* Support not to cache `nil` for `ActiveSupport::Cache#fetch`.
<add>
<add> cache.fetch('bar', skip_nil: true) { nil }
<add> cache.exist?('bar') # => false
<add>
<add> *Martin Hong*
<add>
<ide> * Add "event object" support to the notification system.
<ide> Before this change, end users were forced to create hand made artisanal
<ide> event objects on their own, like this:
<ide>
<ide> ActiveSupport::Notifications.subscribe('wait') do |*args|
<ide> @event = ActiveSupport::Notifications::Event.new(*args)
<ide> end
<del>
<add>
<ide> ActiveSupport::Notifications.instrument('wait') do
<ide> sleep 1
<ide> end
<del>
<add>
<ide> @event.duration # => 1000.138
<ide>
<ide> After this change, if the block passed to `subscribe` only takes one
<ide> ActiveSupport::Notifications.subscribe('wait') do |event|
<ide> @event = event
<ide> end
<del>
<add>
<ide> ActiveSupport::Notifications.instrument('wait') do
<ide> sleep 1
<ide> end
<del>
<add>
<ide> p @event.allocations # => 7
<ide> p @event.cpu_time # => 0.256
<ide> p @event.idle_time # => 1003.2399
<ide>
<ide> *Eileen M. Uchitelle*, *Aaron Patterson*
<ide>
<del>* Support not to cache `nil` for `ActiveSupport::Cache#fetch`
<del>
<del> cache.fetch('bar', skip_nil: true) { nil }
<del> cache.exist?('bar') # => false
<del>
<del> *Martin Hong*
<del>
<ide>
<ide> Please check [5-2-stable](https://github.com/rails/rails/blob/5-2-stable/activesupport/CHANGELOG.md) for previous changes. | 1 |
Javascript | Javascript | pass the container and namespace into initializers | da2b2a0f2aa7586254985827555bf7722114be54 | <ide><path>packages/ember-application/lib/system/application.js
<ide> var Application = Ember.Application = Ember.Namespace.extend(
<ide> runInitializers: function() {
<ide> var router = this.container.lookup('router:main'),
<ide> initializers = get(this.constructor, 'initializers'),
<add> container = this.container,
<ide> graph = new Ember.DAG(),
<ide> namespace = this,
<ide> properties, i, initializer;
<ide> var Application = Ember.Application = Ember.Namespace.extend(
<ide>
<ide> graph.topsort(function (vertex) {
<ide> var initializer = vertex.value;
<del> initializer(this.container);
<add> initializer(container, namespace);
<ide> });
<ide> },
<ide> | 1 |
PHP | PHP | fix doc errors in component/acl | 07b43403fb7d89b62b3916ca602602f83b345356 | <ide><path>lib/Cake/Controller/Component/Acl/PhpAcl.php
<ide> class PhpAro {
<ide> /**
<ide> * Constructor
<ide> *
<del> * @param array $aro
<del> * @param array $map
<del> * @param array $aliases
<add> * @param array $aro The aro data
<add> * @param array $map The identifier mappings
<add> * @param array $aliases The aliases to map.
<ide> */
<ide> public function __construct(array $aro = array(), array $map = array(), array $aliases = array()) {
<ide> if (!empty($map)) { | 1 |
Javascript | Javascript | edit _storeheader to check for trailer header | 80c9ef0b6be57d42632526818a3b0c3f20f225a1 | <ide><path>lib/_http_outgoing.js
<ide> const checkInvalidHeaderChar = common._checkInvalidHeaderChar;
<ide> const outHeadersKey = require('internal/http').outHeadersKey;
<ide> const async_id_symbol = process.binding('async_wrap').async_id_symbol;
<ide> const nextTick = require('internal/process/next_tick').nextTick;
<add>const errors = require('internal/errors');
<ide>
<ide> const CRLF = common.CRLF;
<ide> const debug = common.debug;
<ide> function _storeHeader(firstLine, headers) {
<ide> }
<ide> }
<ide>
<add> // Test non-chunked message does not have trailer header set,
<add> // message will be terminated by the first empty line after the
<add> // header fields, regardless of the header fields present in the
<add> // message, and thus cannot contain a message body or 'trailers'.
<add> if (this.chunkedEncoding !== true && state.trailer) {
<add> throw new errors.Error('ERR_HTTP_TRAILER_INVALID');
<add> }
<add>
<ide> this._header = state.header + CRLF;
<ide> this._headerSent = false;
<ide>
<ide><path>lib/internal/errors.js
<ide> E('ERR_PARSE_HISTORY_DATA',
<ide> (oldHistoryPath) => `Could not parse history data in ${oldHistoryPath}`);
<ide> E('ERR_STDERR_CLOSE', 'process.stderr cannot be closed');
<ide> E('ERR_STDOUT_CLOSE', 'process.stdout cannot be closed');
<add>E('ERR_HTTP_TRAILER_INVALID',
<add> 'Trailers are invalid with this transfer encoding');
<ide> E('ERR_UNKNOWN_BUILTIN_MODULE', (id) => `No such built-in module: ${id}`);
<ide> E('ERR_UNKNOWN_SIGNAL', (signal) => `Unknown signal: ${signal}`);
<ide> E('ERR_UNKNOWN_STDIN_TYPE', 'Unknown stdin file type');
<ide><path>test/parallel/test-http-server-de-chunked-trailer.js
<add>'use strict';
<add>const common = require('../common');
<add>
<add>// This test ensures that a Trailer header is set only when a chunked transfer
<add>// encoding is used.
<add>
<add>const assert = require('assert');
<add>const http = require('http');
<add>
<add>const server = http.createServer(common.mustCall(function(req, res) {
<add> res.setHeader('Trailer', 'baz');
<add> const trailerInvalidErr = {
<add> code: 'ERR_HTTP_TRAILER_INVALID',
<add> message: 'Trailers are invalid with this transfer encoding',
<add> type: Error
<add> };
<add> assert.throws(() => res.writeHead(200, {'Content-Length': '2'}),
<add> common.expectsError(trailerInvalidErr));
<add> res.removeHeader('Trailer');
<add> res.end('ok');
<add>}));
<add>server.listen(0, common.mustCall(() => {
<add> http.get({ port: server.address().port }, common.mustCall((res) => {
<add> assert.strictEqual(res.statusCode, 200);
<add> let buf = '';
<add> res.on('data', (chunk) => {
<add> buf += chunk;
<add> }).on('end', common.mustCall(() => {
<add> assert.strictEqual(buf, 'ok');
<add> }));
<add> server.close();
<add> }));
<add>})); | 3 |
Mixed | Python | attentionocr partial tf2 migration | 50fa3eb914f5ba8c4cda106544f126e40f3306be | <ide><path>research/attention_ocr/README.md
<del>## Attention-based Extraction of Structured Information from Street View Imagery
<add># Attention-based Extraction of Structured Information from Street View Imagery
<ide>
<ide> [](https://paperswithcode.com/sota/optical-character-recognition-on-fsns-test?p=attention-based-extraction-of-structured)
<ide> [](https://arxiv.org/abs/1704.03549)
<ide> *A TensorFlow model for real-world image text extraction problems.*
<ide>
<ide> This folder contains the code needed to train a new Attention OCR model on the
<del>[FSNS dataset][FSNS] dataset to transcribe street names in France. You can
<del>also use it to train it on your own data.
<add>[FSNS dataset][FSNS] to transcribe street names in France. You can also train the code on your own data.
<ide>
<ide> More details can be found in our paper:
<ide>
<ide> ["Attention-based Extraction of Structured Information from Street View
<ide> Imagery"](https://arxiv.org/abs/1704.03549)
<ide>
<add>## Description
<add>
<add>* Paper presents a model based on ConvNets, RNN's and a novel attention mechanism.
<add>Achieves **84.2%** on FSNS beating the previous benchmark (**72.46%**). Also studies
<add>the speed/accuracy tradeoff that results from using CNN feature extractors of
<add>different depths.
<add>
<ide> ## Contacts
<ide>
<ide> Authors
<ide>
<ide> * Zbigniew Wojna ([email protected])
<ide> * Alexander Gorban ([email protected])
<ide>
<del>Maintainer: Xavier Gibert [@xavigibert](https://github.com/xavigibert)
<add>Maintainer
<add>
<add>* Xavier Gibert ([@xavigibert](https://github.com/xavigibert))
<add>
<add>## Table of Contents
<add>
<add>* [Requirements](https://github.com/tensorflow/models/blob/master/research/attention_ocr/README.md#requirements)
<add>* [Dataset](https://github.com/tensorflow/models/blob/master/research/attention_ocr/README.md#dataset)
<add>* [How to use this code](https://github.com/tensorflow/models/blob/master/research/attention_ocr/README.md#how-to-use-this-code)
<add>* [Using your own image data](https://github.com/tensorflow/models/blob/master/research/attention_ocr/README.md#using-your-own-image-data)
<add>* [How to use a pre-trained model](https://github.com/tensorflow/models/blob/master/research/attention_ocr/README.md#how-to-use-a-pre-trained-model)
<add>* [Disclaimer](https://github.com/tensorflow/models/blob/master/research/attention_ocr/README.md#disclaimer)
<ide>
<ide> ## Requirements
<ide>
<ide> cd ..
<ide> [TF]: https://www.tensorflow.org/install/
<ide> [FSNS]: https://github.com/tensorflow/models/tree/master/research/street
<ide>
<add>## Dataset
<add>
<add>The French Street Name Signs (FSNS) dataset is split into subsets,
<add>each of which is composed of multiple files. Note that these datasets
<add>are very large. The approximate sizes are:
<add>
<add>* Train: 512 files of 300MB each.
<add>* Validation: 64 files of 40MB each.
<add>* Test: 64 files of 50MB each.
<add>* The datasets download includes a directory `testdata` that contains
<add>some small datasets that are big enough to test that models can
<add>actually learn something.
<add>* Total: around 158GB
<add>
<add>The download paths are in the following list:
<add>
<add>```
<add>https://download.tensorflow.org/data/fsns-20160927/charset_size=134.txt
<add>https://download.tensorflow.org/data/fsns-20160927/test/test-00000-of-00064
<add>...
<add>https://download.tensorflow.org/data/fsns-20160927/test/test-00063-of-00064
<add>https://download.tensorflow.org/data/fsns-20160927/testdata/arial-32-00000-of-00001
<add>https://download.tensorflow.org/data/fsns-20160927/testdata/fsns-00000-of-00001
<add>https://download.tensorflow.org/data/fsns-20160927/testdata/mnist-sample-00000-of-00001
<add>https://download.tensorflow.org/data/fsns-20160927/testdata/numbers-16-00000-of-00001
<add>https://download.tensorflow.org/data/fsns-20160927/train/train-00000-of-00512
<add>...
<add>https://download.tensorflow.org/data/fsns-20160927/train/train-00511-of-00512
<add>https://download.tensorflow.org/data/fsns-20160927/validation/validation-00000-of-00064
<add>...
<add>https://download.tensorflow.org/data/fsns-20160927/validation/validation-00063-of-00064
<add>```
<add>
<add>All URLs are stored in the [research/street](https://github.com/tensorflow/models/tree/master/research/street)
<add>repository in the text file `python/fsns_urls.txt`.
<add>
<ide> ## How to use this code
<ide>
<ide> To run all unit tests:
<ide> tar xf attention_ocr_2017_08_09.tar.gz
<ide> python train.py --checkpoint=model.ckpt-399731
<ide> ```
<ide>
<del>## How to use your own image data to train the model
<add>## Using your own image data
<ide>
<ide> You need to define a new dataset. There are two options:
<ide>
<ide><path>research/attention_ocr/python/data_provider.py
<ide> def augment_image(image):
<ide> Returns:
<ide> Distorted Tensor image of the same shape.
<ide> """
<del> with tf.variable_scope('AugmentImage'):
<add> with tf.compat.v1.variable_scope('AugmentImage'):
<ide> height = image.get_shape().dims[0].value
<ide> width = image.get_shape().dims[1].value
<ide>
<ide> # Random crop cut from the street sign image, resized to the same size.
<ide> # Assures that the crop is covers at least 0.8 area of the input image.
<ide> bbox_begin, bbox_size, _ = tf.image.sample_distorted_bounding_box(
<del> tf.shape(image),
<add> image_size=tf.shape(input=image),
<ide> bounding_boxes=tf.zeros([0, 0, 4]),
<ide> min_object_covered=0.8,
<ide> aspect_ratio_range=[0.8, 1.2],
<ide> def augment_image(image):
<ide> # Randomly chooses one of the 4 interpolation methods
<ide> distorted_image = inception_preprocessing.apply_with_random_selector(
<ide> distorted_image,
<del> lambda x, method: tf.image.resize_images(x, [height, width], method),
<add> lambda x, method: tf.image.resize(x, [height, width], method),
<ide> num_cases=4)
<ide> distorted_image.set_shape([height, width, 3])
<ide>
<ide> def central_crop(image, crop_size):
<ide> Returns:
<ide> A tensor of shape [crop_height, crop_width, channels].
<ide> """
<del> with tf.variable_scope('CentralCrop'):
<add> with tf.compat.v1.variable_scope('CentralCrop'):
<ide> target_width, target_height = crop_size
<del> image_height, image_width = tf.shape(image)[0], tf.shape(image)[1]
<add> image_height, image_width = tf.shape(
<add> input=image)[0], tf.shape(input=image)[1]
<ide> assert_op1 = tf.Assert(
<ide> tf.greater_equal(image_height, target_height),
<ide> ['image_height < target_height', image_height, target_height])
<ide> def preprocess_image(image, augment=False, central_crop_size=None,
<ide> A float32 tensor of shape [H x W x 3] with RGB values in the required
<ide> range.
<ide> """
<del> with tf.variable_scope('PreprocessImage'):
<add> with tf.compat.v1.variable_scope('PreprocessImage'):
<ide> image = tf.image.convert_image_dtype(image, dtype=tf.float32)
<ide> if augment or central_crop_size:
<ide> if num_towers == 1:
<ide> def get_data(dataset,
<ide> image_orig, augment, central_crop_size, num_towers=dataset.num_of_views)
<ide> label_one_hot = slim.one_hot_encoding(label, dataset.num_char_classes)
<ide>
<del> images, images_orig, labels, labels_one_hot = (tf.train.shuffle_batch(
<add> images, images_orig, labels, labels_one_hot = (tf.compat.v1.train.shuffle_batch(
<ide> [image, image_orig, label, label_one_hot],
<ide> batch_size=batch_size,
<ide> num_threads=shuffle_config.num_batching_threads,
<ide><path>research/attention_ocr/python/datasets/fsns.py
<ide> def read_charset(filename, null_character=u'\u2591'):
<ide> """
<ide> pattern = re.compile(r'(\d+)\t(.+)')
<ide> charset = {}
<del> with tf.gfile.GFile(filename) as f:
<add> with tf.io.gfile.GFile(filename) as f:
<ide> for i, line in enumerate(f):
<ide> m = pattern.match(line)
<ide> if m is None:
<ide> def __init__(self, width_key, original_width_key, num_of_views):
<ide> self._num_of_views = num_of_views
<ide>
<ide> def tensors_to_item(self, keys_to_tensors):
<del> return tf.to_int64(
<add> return tf.cast(
<ide> self._num_of_views * keys_to_tensors[self._original_width_key] /
<del> keys_to_tensors[self._width_key])
<add> keys_to_tensors[self._width_key], dtype=tf.int64)
<ide>
<ide>
<ide> def get_split(split_name, dataset_dir=None, config=None):
<ide> def get_split(split_name, dataset_dir=None, config=None):
<ide> zero = tf.zeros([1], dtype=tf.int64)
<ide> keys_to_features = {
<ide> 'image/encoded':
<del> tf.FixedLenFeature((), tf.string, default_value=''),
<add> tf.io.FixedLenFeature((), tf.string, default_value=''),
<ide> 'image/format':
<del> tf.FixedLenFeature((), tf.string, default_value='png'),
<add> tf.io.FixedLenFeature((), tf.string, default_value='png'),
<ide> 'image/width':
<del> tf.FixedLenFeature([1], tf.int64, default_value=zero),
<add> tf.io.FixedLenFeature([1], tf.int64, default_value=zero),
<ide> 'image/orig_width':
<del> tf.FixedLenFeature([1], tf.int64, default_value=zero),
<add> tf.io.FixedLenFeature([1], tf.int64, default_value=zero),
<ide> 'image/class':
<del> tf.FixedLenFeature([config['max_sequence_length']], tf.int64),
<add> tf.io.FixedLenFeature([config['max_sequence_length']], tf.int64),
<ide> 'image/unpadded_class':
<del> tf.VarLenFeature(tf.int64),
<add> tf.io.VarLenFeature(tf.int64),
<ide> 'image/text':
<del> tf.FixedLenFeature([1], tf.string, default_value=''),
<add> tf.io.FixedLenFeature([1], tf.string, default_value=''),
<ide> }
<ide> items_to_handlers = {
<ide> 'image':
<ide> def get_split(split_name, dataset_dir=None, config=None):
<ide> config['splits'][split_name]['pattern'])
<ide> return slim.dataset.Dataset(
<ide> data_sources=file_pattern,
<del> reader=tf.TFRecordReader,
<add> reader=tf.compat.v1.TFRecordReader,
<ide> decoder=decoder,
<ide> num_samples=config['splits'][split_name]['size'],
<ide> items_to_descriptions=config['items_to_descriptions'],
<ide><path>research/attention_ocr/python/datasets/fsns_test.py
<ide> def test_can_use_the_test_data(self):
<ide> image_tf, label_tf = provider.get(['image', 'label'])
<ide>
<ide> with self.test_session() as sess:
<del> sess.run(tf.global_variables_initializer())
<add> sess.run(tf.compat.v1.global_variables_initializer())
<ide> with slim.queues.QueueRunners(sess):
<ide> image_np, label_np = sess.run([image_tf, label_tf])
<ide>
<ide><path>research/attention_ocr/python/datasets/testdata/fsns/download_data.py
<ide> print('Downloading %s ...' % URL)
<ide> urllib.request.urlretrieve(URL, DST_ORIG)
<ide>
<del>print('Writing %d records from %s to %s ...' % (KEEP_NUM_RECORDS, DST_ORIG, DST))
<add>print('Writing %d records from %s to %s ...' %
<add> (KEEP_NUM_RECORDS, DST_ORIG, DST))
<ide> with tf.io.TFRecordWriter(DST) as writer:
<del> for raw_record in itertools.islice(tf.python_io.tf_record_iterator(DST_ORIG), KEEP_NUM_RECORDS):
<add> for raw_record in itertools.islice(tf.compat.v1.python_io.tf_record_iterator(DST_ORIG), KEEP_NUM_RECORDS):
<ide> writer.write(raw_record)
<ide><path>research/attention_ocr/python/demo_inference.py
<ide> def load_images(file_pattern, batch_size, dataset_name):
<ide> for i in range(batch_size):
<ide> path = file_pattern % i
<ide> print("Reading %s" % path)
<del> pil_image = PIL.Image.open(tf.gfile.GFile(path, 'rb'))
<add> pil_image = PIL.Image.open(tf.io.gfile.GFile(path, 'rb'))
<ide> images_actual_data[i, ...] = np.asarray(pil_image)
<ide> return images_actual_data
<ide>
<ide> def create_model(batch_size, dataset_name):
<ide> width, height = get_dataset_image_size(dataset_name)
<ide> dataset = common_flags.create_dataset(split_name=FLAGS.split_name)
<ide> model = common_flags.create_model(
<del> num_char_classes=dataset.num_char_classes,
<del> seq_length=dataset.max_sequence_length,
<del> num_views=dataset.num_of_views,
<del> null_code=dataset.null_code,
<del> charset=dataset.charset)
<del> raw_images = tf.placeholder(tf.uint8, shape=[batch_size, height, width, 3])
<add> num_char_classes=dataset.num_char_classes,
<add> seq_length=dataset.max_sequence_length,
<add> num_views=dataset.num_of_views,
<add> null_code=dataset.null_code,
<add> charset=dataset.charset)
<add> raw_images = tf.compat.v1.placeholder(
<add> tf.uint8, shape=[batch_size, height, width, 3])
<ide> images = tf.map_fn(data_provider.preprocess_image, raw_images,
<ide> dtype=tf.float32)
<ide> endpoints = model.create_base(images, labels_one_hot=None)
<ide> def run(checkpoint, batch_size, dataset_name, image_path_pattern):
<ide> images_data = load_images(image_path_pattern, batch_size,
<ide> dataset_name)
<ide> session_creator = monitored_session.ChiefSessionCreator(
<del> checkpoint_filename_with_path=checkpoint)
<add> checkpoint_filename_with_path=checkpoint)
<ide> with monitored_session.MonitoredSession(
<del> session_creator=session_creator) as sess:
<add> session_creator=session_creator) as sess:
<ide> predictions = sess.run(endpoints.predicted_text,
<ide> feed_dict={images_placeholder: images_data})
<ide> return [pr_bytes.decode('utf-8') for pr_bytes in predictions.tolist()]
<ide> def run(checkpoint, batch_size, dataset_name, image_path_pattern):
<ide> def main(_):
<ide> print("Predicted strings:")
<ide> predictions = run(FLAGS.checkpoint, FLAGS.batch_size, FLAGS.dataset_name,
<del> FLAGS.image_path_pattern)
<add> FLAGS.image_path_pattern)
<ide> for line in predictions:
<ide> print(line)
<ide>
<ide>
<ide> if __name__ == '__main__':
<del> tf.app.run()
<add> tf.compat.v1.app.run()
<ide><path>research/attention_ocr/python/demo_inference_test.py
<ide> def setUp(self):
<ide> super(DemoInferenceTest, self).setUp()
<ide> for suffix in ['.meta', '.index', '.data-00000-of-00001']:
<ide> filename = _CHECKPOINT + suffix
<del> self.assertTrue(tf.gfile.Exists(filename),
<add> self.assertTrue(tf.io.gfile.exists(filename),
<ide> msg='Missing checkpoint file %s. '
<ide> 'Please download and extract it from %s' %
<ide> (filename, _CHECKPOINT_URL))
<ide> self._batch_size = 32
<del> tf.flags.FLAGS.dataset_dir = os.path.join(os.path.dirname(__file__), 'datasets/testdata/fsns')
<add> tf.flags.FLAGS.dataset_dir = os.path.join(
<add> os.path.dirname(__file__), 'datasets/testdata/fsns')
<ide>
<ide> def test_moving_variables_properly_loaded_from_a_checkpoint(self):
<ide> batch_size = 32
<ide> def test_moving_variables_properly_loaded_from_a_checkpoint(self):
<ide> images_data = demo_inference.load_images(image_path_pattern, batch_size,
<ide> dataset_name)
<ide> tensor_name = 'AttentionOcr_v1/conv_tower_fn/INCE/InceptionV3/Conv2d_2a_3x3/BatchNorm/moving_mean'
<del> moving_mean_tf = tf.get_default_graph().get_tensor_by_name(
<del> tensor_name + ':0')
<del> reader = tf.train.NewCheckpointReader(_CHECKPOINT)
<add> moving_mean_tf = tf.compat.v1.get_default_graph().get_tensor_by_name(
<add> tensor_name + ':0')
<add> reader = tf.compat.v1.train.NewCheckpointReader(_CHECKPOINT)
<ide> moving_mean_expected = reader.get_tensor(tensor_name)
<ide>
<ide> session_creator = monitored_session.ChiefSessionCreator(
<del> checkpoint_filename_with_path=_CHECKPOINT)
<add> checkpoint_filename_with_path=_CHECKPOINT)
<ide> with monitored_session.MonitoredSession(
<del> session_creator=session_creator) as sess:
<add> session_creator=session_creator) as sess:
<ide> moving_mean_np = sess.run(moving_mean_tf,
<ide> feed_dict={images_placeholder: images_data})
<ide>
<ide> def test_correct_results_on_test_data(self):
<ide> 'fsns',
<ide> image_path_pattern)
<ide> self.assertEqual([
<del> u'Boulevard de Lunel░░░░░░░░░░░░░░░░░░░',
<del> 'Rue de Provence░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue de Port Maria░░░░░░░░░░░░░░░░░░░░',
<del> 'Avenue Charles Gounod░░░░░░░░░░░░░░░░',
<del> 'Rue de l‘Aurore░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue de Beuzeville░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue d‘Orbey░░░░░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue Victor Schoulcher░░░░░░░░░░░░░░░░',
<del> 'Rue de la Gare░░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue des Tulipes░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue André Maginot░░░░░░░░░░░░░░░░░░░░',
<del> 'Route de Pringy░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue des Landelles░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue des Ilettes░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Avenue de Maurin░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue Théresa░░░░░░░░░░░░░░░░░░░░░░░░░░', # GT='Rue Thérésa'
<del> 'Route de la Balme░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue Hélène Roederer░░░░░░░░░░░░░░░░░░',
<del> 'Rue Emile Bernard░░░░░░░░░░░░░░░░░░░░',
<del> 'Place de la Mairie░░░░░░░░░░░░░░░░░░░',
<del> 'Rue des Perrots░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue de la Libération░░░░░░░░░░░░░░░░░',
<del> 'Impasse du Capcir░░░░░░░░░░░░░░░░░░░░',
<del> 'Avenue de la Grand Mare░░░░░░░░░░░░░░',
<del> 'Rue Pierre Brossolette░░░░░░░░░░░░░░░',
<del> 'Rue de Provence░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue du Docteur Mourre░░░░░░░░░░░░░░░░',
<del> 'Rue d‘Ortheuil░░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue des Sarments░░░░░░░░░░░░░░░░░░░░░',
<del> 'Rue du Centre░░░░░░░░░░░░░░░░░░░░░░░░',
<del> 'Impasse Pierre Mourgues░░░░░░░░░░░░░░',
<del> 'Rue Marcel Dassault░░░░░░░░░░░░░░░░░░'
<add> u'Boulevard de Lunel░░░░░░░░░░░░░░░░░░░',
<add> 'Rue de Provence░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue de Port Maria░░░░░░░░░░░░░░░░░░░░',
<add> 'Avenue Charles Gounod░░░░░░░░░░░░░░░░',
<add> 'Rue de l‘Aurore░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue de Beuzeville░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue d‘Orbey░░░░░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue Victor Schoulcher░░░░░░░░░░░░░░░░',
<add> 'Rue de la Gare░░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue des Tulipes░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue André Maginot░░░░░░░░░░░░░░░░░░░░',
<add> 'Route de Pringy░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue des Landelles░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue des Ilettes░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Avenue de Maurin░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue Théresa░░░░░░░░░░░░░░░░░░░░░░░░░░', # GT='Rue Thérésa'
<add> 'Route de la Balme░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue Hélène Roederer░░░░░░░░░░░░░░░░░░',
<add> 'Rue Emile Bernard░░░░░░░░░░░░░░░░░░░░',
<add> 'Place de la Mairie░░░░░░░░░░░░░░░░░░░',
<add> 'Rue des Perrots░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue de la Libération░░░░░░░░░░░░░░░░░',
<add> 'Impasse du Capcir░░░░░░░░░░░░░░░░░░░░',
<add> 'Avenue de la Grand Mare░░░░░░░░░░░░░░',
<add> 'Rue Pierre Brossolette░░░░░░░░░░░░░░░',
<add> 'Rue de Provence░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue du Docteur Mourre░░░░░░░░░░░░░░░░',
<add> 'Rue d‘Ortheuil░░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue des Sarments░░░░░░░░░░░░░░░░░░░░░',
<add> 'Rue du Centre░░░░░░░░░░░░░░░░░░░░░░░░',
<add> 'Impasse Pierre Mourgues░░░░░░░░░░░░░░',
<add> 'Rue Marcel Dassault░░░░░░░░░░░░░░░░░░'
<ide> ], predictions)
<ide>
<ide>
<ide><path>research/attention_ocr/python/eval.py
<ide>
<ide>
<ide> def main(_):
<del> if not tf.gfile.Exists(FLAGS.eval_log_dir):
<del> tf.gfile.MakeDirs(FLAGS.eval_log_dir)
<add> if not tf.io.gfile.exists(FLAGS.eval_log_dir):
<add> tf.io.gfile.makedirs(FLAGS.eval_log_dir)
<ide>
<ide> dataset = common_flags.create_dataset(split_name=FLAGS.split_name)
<ide> model = common_flags.create_model(dataset.num_char_classes,
<ide> def main(_):
<ide> eval_ops = model.create_summaries(
<ide> data, endpoints, dataset.charset, is_training=False)
<ide> slim.get_or_create_global_step()
<del> session_config = tf.ConfigProto(device_count={"GPU": 0})
<add> session_config = tf.compat.v1.ConfigProto(device_count={"GPU": 0})
<ide> slim.evaluation.evaluation_loop(
<ide> master=FLAGS.master,
<ide> checkpoint_dir=FLAGS.train_log_dir,
<ide><path>research/attention_ocr/python/inception_preprocessing.py
<ide> def apply_with_random_selector(x, func, num_cases):
<ide> The result of func(x, sel), where func receives the value of the
<ide> selector as a python integer, but sel is sampled dynamically.
<ide> """
<del> sel = tf.random_uniform([], maxval=num_cases, dtype=tf.int32)
<add> sel = tf.random.uniform([], maxval=num_cases, dtype=tf.int32)
<ide> # Pass the real x only to one of the func calls.
<ide> return control_flow_ops.merge([
<ide> func(control_flow_ops.switch(x, tf.equal(sel, case))[1], case)
<ide> def distort_color(image, color_ordering=0, fast_mode=True, scope=None):
<ide> Raises:
<ide> ValueError: if color_ordering not in [0, 3]
<ide> """
<del> with tf.name_scope(scope, 'distort_color', [image]):
<add> with tf.compat.v1.name_scope(scope, 'distort_color', [image]):
<ide> if fast_mode:
<ide> if color_ordering == 0:
<ide> image = tf.image.random_brightness(image, max_delta=32. / 255.)
<ide> def distorted_bounding_box_crop(image,
<ide> Returns:
<ide> A tuple, a 3-D Tensor cropped_image and the distorted bbox
<ide> """
<del> with tf.name_scope(scope, 'distorted_bounding_box_crop', [image, bbox]):
<add> with tf.compat.v1.name_scope(scope, 'distorted_bounding_box_crop', [image, bbox]):
<ide> # Each bounding box has shape [1, num_boxes, box coords] and
<ide> # the coordinates are ordered [ymin, xmin, ymax, xmax].
<ide>
<ide> def distorted_bounding_box_crop(image,
<ide> # bounding box. If no box is supplied, then we assume the bounding box is
<ide> # the entire image.
<ide> sample_distorted_bounding_box = tf.image.sample_distorted_bounding_box(
<del> tf.shape(image),
<add> image_size=tf.shape(input=image),
<ide> bounding_boxes=bbox,
<ide> min_object_covered=min_object_covered,
<ide> aspect_ratio_range=aspect_ratio_range,
<ide> def preprocess_for_train(image,
<ide> Returns:
<ide> 3-D float Tensor of distorted image used for training with range [-1, 1].
<ide> """
<del> with tf.name_scope(scope, 'distort_image', [image, height, width, bbox]):
<add> with tf.compat.v1.name_scope(scope, 'distort_image', [image, height, width, bbox]):
<ide> if bbox is None:
<ide> bbox = tf.constant(
<ide> [0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])
<ide> def preprocess_for_train(image,
<ide> # the coordinates are ordered [ymin, xmin, ymax, xmax].
<ide> image_with_box = tf.image.draw_bounding_boxes(
<ide> tf.expand_dims(image, 0), bbox)
<del> tf.summary.image('image_with_bounding_boxes', image_with_box)
<add> tf.compat.v1.summary.image('image_with_bounding_boxes', image_with_box)
<ide>
<ide> distorted_image, distorted_bbox = distorted_bounding_box_crop(image, bbox)
<ide> # Restore the shape since the dynamic slice based upon the bbox_size loses
<ide> # the third dimension.
<ide> distorted_image.set_shape([None, None, 3])
<ide> image_with_distorted_box = tf.image.draw_bounding_boxes(
<ide> tf.expand_dims(image, 0), distorted_bbox)
<del> tf.summary.image('images_with_distorted_bounding_box',
<del> image_with_distorted_box)
<add> tf.compat.v1.summary.image('images_with_distorted_bounding_box',
<add> image_with_distorted_box)
<ide>
<ide> # This resizing operation may distort the images because the aspect
<ide> # ratio is not respected. We select a resize method in a round robin
<ide> def preprocess_for_train(image,
<ide> num_resize_cases = 1 if fast_mode else 4
<ide> distorted_image = apply_with_random_selector(
<ide> distorted_image,
<del> lambda x, method: tf.image.resize_images(x, [height, width], method=method),
<add> lambda x, method: tf.image.resize(x, [height, width], method=method),
<ide> num_cases=num_resize_cases)
<ide>
<del> tf.summary.image('cropped_resized_image',
<del> tf.expand_dims(distorted_image, 0))
<add> tf.compat.v1.summary.image('cropped_resized_image',
<add> tf.expand_dims(distorted_image, 0))
<ide>
<ide> # Randomly flip the image horizontally.
<ide> distorted_image = tf.image.random_flip_left_right(distorted_image)
<ide> def preprocess_for_train(image,
<ide> lambda x, ordering: distort_color(x, ordering, fast_mode),
<ide> num_cases=4)
<ide>
<del> tf.summary.image('final_distorted_image',
<del> tf.expand_dims(distorted_image, 0))
<add> tf.compat.v1.summary.image('final_distorted_image',
<add> tf.expand_dims(distorted_image, 0))
<ide> distorted_image = tf.subtract(distorted_image, 0.5)
<ide> distorted_image = tf.multiply(distorted_image, 2.0)
<ide> return distorted_image
<ide> def preprocess_for_eval(image,
<ide> Returns:
<ide> 3-D float Tensor of prepared image.
<ide> """
<del> with tf.name_scope(scope, 'eval_image', [image, height, width]):
<add> with tf.compat.v1.name_scope(scope, 'eval_image', [image, height, width]):
<ide> if image.dtype != tf.float32:
<ide> image = tf.image.convert_image_dtype(image, dtype=tf.float32)
<ide> # Crop the central region of the image with an area containing 87.5% of
<ide> def preprocess_for_eval(image,
<ide> if height and width:
<ide> # Resize the image to the specified height and width.
<ide> image = tf.expand_dims(image, 0)
<del> image = tf.image.resize_bilinear(
<del> image, [height, width], align_corners=False)
<add> image = tf.image.resize(
<add> image, [height, width], method=tf.image.ResizeMethod.BILINEAR)
<ide> image = tf.squeeze(image, [0])
<ide> image = tf.subtract(image, 0.5)
<ide> image = tf.multiply(image, 2.0)
<ide><path>research/attention_ocr/python/metrics.py
<ide> def char_accuracy(predictions, targets, rej_char, streaming=False):
<ide> a update_ops for execution and value tensor whose value on evaluation
<ide> returns the total character accuracy.
<ide> """
<del> with tf.variable_scope('CharAccuracy'):
<add> with tf.compat.v1.variable_scope('CharAccuracy'):
<ide> predictions.get_shape().assert_is_compatible_with(targets.get_shape())
<ide>
<del> targets = tf.to_int32(targets)
<add> targets = tf.cast(targets, dtype=tf.int32)
<ide> const_rej_char = tf.constant(rej_char, shape=targets.get_shape())
<del> weights = tf.to_float(tf.not_equal(targets, const_rej_char))
<del> correct_chars = tf.to_float(tf.equal(predictions, targets))
<del> accuracy_per_example = tf.div(
<del> tf.reduce_sum(tf.multiply(correct_chars, weights), 1),
<del> tf.reduce_sum(weights, 1))
<add> weights = tf.cast(tf.not_equal(targets, const_rej_char), dtype=tf.float32)
<add> correct_chars = tf.cast(tf.equal(predictions, targets), dtype=tf.float32)
<add> accuracy_per_example = tf.compat.v1.div(
<add> tf.reduce_sum(input_tensor=tf.multiply(
<add> correct_chars, weights), axis=1),
<add> tf.reduce_sum(input_tensor=weights, axis=1))
<ide> if streaming:
<ide> return tf.contrib.metrics.streaming_mean(accuracy_per_example)
<ide> else:
<del> return tf.reduce_mean(accuracy_per_example)
<add> return tf.reduce_mean(input_tensor=accuracy_per_example)
<ide>
<ide>
<ide> def sequence_accuracy(predictions, targets, rej_char, streaming=False):
<ide> def sequence_accuracy(predictions, targets, rej_char, streaming=False):
<ide> returns the total sequence accuracy.
<ide> """
<ide>
<del> with tf.variable_scope('SequenceAccuracy'):
<add> with tf.compat.v1.variable_scope('SequenceAccuracy'):
<ide> predictions.get_shape().assert_is_compatible_with(targets.get_shape())
<ide>
<del> targets = tf.to_int32(targets)
<add> targets = tf.cast(targets, dtype=tf.int32)
<ide> const_rej_char = tf.constant(
<ide> rej_char, shape=targets.get_shape(), dtype=tf.int32)
<ide> include_mask = tf.not_equal(targets, const_rej_char)
<del> include_predictions = tf.to_int32(
<del> tf.where(include_mask, predictions,
<del> tf.zeros_like(predictions) + rej_char))
<del> correct_chars = tf.to_float(tf.equal(include_predictions, targets))
<add> include_predictions = tf.cast(
<add> tf.compat.v1.where(include_mask, predictions,
<add> tf.zeros_like(predictions) + rej_char), dtype=tf.int32)
<add> correct_chars = tf.cast(
<add> tf.equal(include_predictions, targets), dtype=tf.float32)
<ide> correct_chars_counts = tf.cast(
<del> tf.reduce_sum(correct_chars, reduction_indices=[1]), dtype=tf.int32)
<add> tf.reduce_sum(input_tensor=correct_chars, axis=[1]), dtype=tf.int32)
<ide> target_length = targets.get_shape().dims[1].value
<ide> target_chars_counts = tf.constant(
<ide> target_length, shape=correct_chars_counts.get_shape())
<del> accuracy_per_example = tf.to_float(
<del> tf.equal(correct_chars_counts, target_chars_counts))
<add> accuracy_per_example = tf.cast(
<add> tf.equal(correct_chars_counts, target_chars_counts), dtype=tf.float32)
<ide> if streaming:
<ide> return tf.contrib.metrics.streaming_mean(accuracy_per_example)
<ide> else:
<del> return tf.reduce_mean(accuracy_per_example)
<add> return tf.reduce_mean(input_tensor=accuracy_per_example)
<ide><path>research/attention_ocr/python/metrics_test.py
<ide> def initialized_session(self):
<ide> A session object that should be used as a context manager.
<ide> """
<ide> with self.cached_session() as sess:
<del> sess.run(tf.global_variables_initializer())
<del> sess.run(tf.local_variables_initializer())
<add> sess.run(tf.compat.v1.global_variables_initializer())
<add> sess.run(tf.compat.v1.local_variables_initializer())
<ide> yield sess
<ide>
<ide> def _fake_labels(self):
<ide> def _incorrect_copy(self, values, bad_indexes):
<ide> return incorrect
<ide>
<ide> def test_sequence_accuracy_identical_samples(self):
<del> labels_tf = tf.convert_to_tensor(self._fake_labels())
<add> labels_tf = tf.convert_to_tensor(value=self._fake_labels())
<ide>
<ide> accuracy_tf = metrics.sequence_accuracy(labels_tf, labels_tf,
<ide> self.rej_char)
<ide> def test_sequence_accuracy_identical_samples(self):
<ide>
<ide> def test_sequence_accuracy_one_char_difference(self):
<ide> ground_truth_np = self._fake_labels()
<del> ground_truth_tf = tf.convert_to_tensor(ground_truth_np)
<add> ground_truth_tf = tf.convert_to_tensor(value=ground_truth_np)
<ide> prediction_tf = tf.convert_to_tensor(
<del> self._incorrect_copy(ground_truth_np, bad_indexes=((0, 0))))
<add> value=self._incorrect_copy(ground_truth_np, bad_indexes=((0, 0))))
<ide>
<ide> accuracy_tf = metrics.sequence_accuracy(prediction_tf, ground_truth_tf,
<ide> self.rej_char)
<ide> def test_sequence_accuracy_one_char_difference(self):
<ide>
<ide> def test_char_accuracy_one_char_difference_with_padding(self):
<ide> ground_truth_np = self._fake_labels()
<del> ground_truth_tf = tf.convert_to_tensor(ground_truth_np)
<add> ground_truth_tf = tf.convert_to_tensor(value=ground_truth_np)
<ide> prediction_tf = tf.convert_to_tensor(
<del> self._incorrect_copy(ground_truth_np, bad_indexes=((0, 0))))
<add> value=self._incorrect_copy(ground_truth_np, bad_indexes=((0, 0))))
<ide>
<ide> accuracy_tf = metrics.char_accuracy(prediction_tf, ground_truth_tf,
<ide> self.rej_char)
<ide><path>research/attention_ocr/python/model.py
<ide> def get_text(self, ids):
<ide> Args:
<ide> ids: a tensor with shape [batch_size, max_sequence_length]
<ide> """
<del> return tf.reduce_join(
<del> self.table.lookup(tf.to_int64(ids)), reduction_indices=1)
<add> return tf.strings.reduce_join(
<add> inputs=self.table.lookup(tf.cast(ids, dtype=tf.int64)), axis=1)
<ide>
<ide>
<ide> def get_softmax_loss_fn(label_smoothing):
<ide> def get_softmax_loss_fn(label_smoothing):
<ide>
<ide> def loss_fn(labels, logits):
<ide> return (tf.nn.softmax_cross_entropy_with_logits(
<del> logits=logits, labels=labels))
<add> logits=logits, labels=tf.stop_gradient(labels)))
<ide> else:
<ide>
<ide> def loss_fn(labels, logits):
<ide> def get_tensor_dimensions(tensor):
<ide> raise ValueError(
<ide> 'Incompatible shape: len(tensor.get_shape().dims) != 4 (%d != 4)' %
<ide> len(tensor.get_shape().dims))
<del> batch_size = tf.shape(tensor)[0]
<add> batch_size = tf.shape(input=tensor)[0]
<ide> height = tensor.get_shape().dims[1].value
<ide> width = tensor.get_shape().dims[2].value
<ide> num_features = tensor.get_shape().dims[3].value
<ide> def lookup_indexed_value(indices, row_vecs):
<ide> A tensor of shape (batch, ) formed by row_vecs[i, indices[i]].
<ide> """
<ide> gather_indices = tf.stack((tf.range(
<del> tf.shape(row_vecs)[0], dtype=tf.int32), tf.cast(indices, tf.int32)),
<del> axis=1)
<add> tf.shape(input=row_vecs)[0], dtype=tf.int32), tf.cast(indices, tf.int32)),
<add> axis=1)
<ide> return tf.gather_nd(row_vecs, gather_indices)
<ide>
<ide>
<ide> def max_char_logprob_cumsum(char_log_prob):
<ide> so the same function can be used regardless whether use_length_predictions
<ide> is true or false.
<ide> """
<del> max_char_log_prob = tf.reduce_max(char_log_prob, reduction_indices=2)
<add> max_char_log_prob = tf.reduce_max(input_tensor=char_log_prob, axis=2)
<ide> # For an input array [a, b, c]) tf.cumsum returns [a, a + b, a + b + c] if
<ide> # exclusive set to False (default).
<ide> return tf.cumsum(max_char_log_prob, axis=1, exclusive=False)
<ide> def find_length_by_null(predicted_chars, null_code):
<ide> A [batch, ] tensor which stores the sequence length for each sample.
<ide> """
<ide> return tf.reduce_sum(
<del> tf.cast(tf.not_equal(null_code, predicted_chars), tf.int32), axis=1)
<add> input_tensor=tf.cast(tf.not_equal(null_code, predicted_chars), tf.int32), axis=1)
<ide>
<ide>
<ide> def axis_pad(tensor, axis, before=0, after=0, constant_values=0.0):
<ide> def null_based_length_prediction(chars_log_prob, null_code):
<ide> element #seq_length - is the probability of length=seq_length.
<ide> predicted_length is a tensor with shape [batch].
<ide> """
<del> predicted_chars = tf.to_int32(tf.argmax(chars_log_prob, axis=2))
<add> predicted_chars = tf.cast(
<add> tf.argmax(input=chars_log_prob, axis=2), dtype=tf.int32)
<ide> # We do right pad to support sequences with seq_length elements.
<ide> text_log_prob = max_char_logprob_cumsum(
<ide> axis_pad(chars_log_prob, axis=1, after=1))
<ide> def conv_tower_fn(self, images, is_training=True, reuse=None):
<ide> """
<ide> mparams = self._mparams['conv_tower_fn']
<ide> logging.debug('Using final_endpoint=%s', mparams.final_endpoint)
<del> with tf.variable_scope('conv_tower_fn/INCE'):
<add> with tf.compat.v1.variable_scope('conv_tower_fn/INCE'):
<ide> if reuse:
<del> tf.get_variable_scope().reuse_variables()
<add> tf.compat.v1.get_variable_scope().reuse_variables()
<ide> with slim.arg_scope(inception.inception_v3_arg_scope()):
<ide> with slim.arg_scope([slim.batch_norm, slim.dropout],
<ide> is_training=is_training):
<ide> def _create_lstm_inputs(self, net):
<ide> def sequence_logit_fn(self, net, labels_one_hot):
<ide> mparams = self._mparams['sequence_logit_fn']
<ide> # TODO(gorban): remove /alias suffixes from the scopes.
<del> with tf.variable_scope('sequence_logit_fn/SQLR'):
<add> with tf.compat.v1.variable_scope('sequence_logit_fn/SQLR'):
<ide> layer_class = sequence_layers.get_layer_class(mparams.use_attention,
<ide> mparams.use_autoregression)
<ide> layer = layer_class(net, labels_one_hot, self._params, mparams)
<ide> def max_pool_views(self, nets_list):
<ide> ]
<ide> xy_flat_shape = (batch_size, 1, height * width, num_features)
<ide> nets_for_merge = []
<del> with tf.variable_scope('max_pool_views', values=nets_list):
<add> with tf.compat.v1.variable_scope('max_pool_views', values=nets_list):
<ide> for net in nets_list:
<ide> nets_for_merge.append(tf.reshape(net, xy_flat_shape))
<ide> merged_net = tf.concat(nets_for_merge, 1)
<ide> def pool_views_fn(self, nets):
<ide> Returns:
<ide> A tensor of shape [batch_size, seq_length, features_size].
<ide> """
<del> with tf.variable_scope('pool_views_fn/STCK'):
<add> with tf.compat.v1.variable_scope('pool_views_fn/STCK'):
<ide> net = tf.concat(nets, 1)
<del> batch_size = tf.shape(net)[0]
<del> image_size = net.get_shape().dims[1].value * net.get_shape().dims[2].value
<add> batch_size = tf.shape(input=net)[0]
<add> image_size = net.get_shape().dims[1].value * \
<add> net.get_shape().dims[2].value
<ide> feature_size = net.get_shape().dims[3].value
<ide> return tf.reshape(net, tf.stack([batch_size, image_size, feature_size]))
<ide>
<ide> def char_predictions(self, chars_logit):
<ide> with shape [batch_size x seq_length].
<ide> """
<ide> log_prob = utils.logits_to_log_prob(chars_logit)
<del> ids = tf.to_int32(tf.argmax(log_prob, axis=2), name='predicted_chars')
<add> ids = tf.cast(tf.argmax(input=log_prob, axis=2),
<add> name='predicted_chars', dtype=tf.int32)
<ide> mask = tf.cast(
<ide> slim.one_hot_encoding(ids, self._params.num_char_classes), tf.bool)
<ide> all_scores = tf.nn.softmax(chars_logit)
<del> selected_scores = tf.boolean_mask(all_scores, mask, name='char_scores')
<add> selected_scores = tf.boolean_mask(
<add> tensor=all_scores, mask=mask, name='char_scores')
<ide> scores = tf.reshape(
<ide> selected_scores,
<ide> shape=(-1, self._params.seq_length),
<ide> def create_base(self,
<ide> images = tf.subtract(images, 0.5)
<ide> images = tf.multiply(images, 2.5)
<ide>
<del> with tf.variable_scope(scope, reuse=reuse):
<add> with tf.compat.v1.variable_scope(scope, reuse=reuse):
<ide> views = tf.split(
<ide> value=images, num_or_size_splits=self._params.num_views, axis=2)
<ide> logging.debug('Views=%d single view: %s', len(views), views[0])
<ide> def create_loss(self, data, endpoints):
<ide> # multiple losses including regularization losses.
<ide> self.sequence_loss_fn(endpoints.chars_logit, data.labels)
<ide> total_loss = slim.losses.get_total_loss()
<del> tf.summary.scalar('TotalLoss', total_loss)
<add> tf.compat.v1.summary.scalar('TotalLoss', total_loss)
<ide> return total_loss
<ide>
<ide> def label_smoothing_regularization(self, chars_labels, weight=0.1):
<ide> def sequence_loss_fn(self, chars_logits, chars_labels):
<ide> A Tensor with shape [batch_size] - the log-perplexity for each sequence.
<ide> """
<ide> mparams = self._mparams['sequence_loss_fn']
<del> with tf.variable_scope('sequence_loss_fn/SLF'):
<add> with tf.compat.v1.variable_scope('sequence_loss_fn/SLF'):
<ide> if mparams.label_smoothing > 0:
<ide> smoothed_one_hot_labels = self.label_smoothing_regularization(
<ide> chars_labels, mparams.label_smoothing)
<ide> def sequence_loss_fn(self, chars_logits, chars_labels):
<ide> shape=(batch_size, seq_length),
<ide> dtype=tf.int64)
<ide> known_char = tf.not_equal(chars_labels, reject_char)
<del> weights = tf.to_float(known_char)
<add> weights = tf.cast(known_char, dtype=tf.float32)
<ide>
<ide> logits_list = tf.unstack(chars_logits, axis=1)
<ide> weights_list = tf.unstack(weights, axis=1)
<ide> def sequence_loss_fn(self, chars_logits, chars_labels):
<ide> weights_list,
<ide> softmax_loss_function=get_softmax_loss_fn(mparams.label_smoothing),
<ide> average_across_timesteps=mparams.average_across_timesteps)
<del> tf.losses.add_loss(loss)
<add> tf.compat.v1.losses.add_loss(loss)
<ide> return loss
<ide>
<ide> def create_summaries(self, data, endpoints, charset, is_training):
<ide> def sname(label):
<ide> # tf.summary.text(sname('text/pr'), pr_text)
<ide> # gt_text = charset_mapper.get_text(data.labels[:max_outputs,:])
<ide> # tf.summary.text(sname('text/gt'), gt_text)
<del> tf.summary.image(sname('image'), data.images, max_outputs=max_outputs)
<add> tf.compat.v1.summary.image(
<add> sname('image'), data.images, max_outputs=max_outputs)
<ide>
<ide> if is_training:
<del> tf.summary.image(
<add> tf.compat.v1.summary.image(
<ide> sname('image/orig'), data.images_orig, max_outputs=max_outputs)
<del> for var in tf.trainable_variables():
<del> tf.summary.histogram(var.op.name, var)
<add> for var in tf.compat.v1.trainable_variables():
<add> tf.compat.v1.summary.histogram(var.op.name, var)
<ide> return None
<ide>
<ide> else:
<ide> def use_metric(name, value_update_tuple):
<ide>
<ide> for name, value in names_to_values.items():
<ide> summary_name = 'eval/' + name
<del> tf.summary.scalar(summary_name, tf.Print(value, [value], summary_name))
<add> tf.compat.v1.summary.scalar(
<add> summary_name, tf.compat.v1.Print(value, [value], summary_name))
<ide> return list(names_to_updates.values())
<ide>
<ide> def create_init_fn_to_restore(self,
<ide> def assign_from_checkpoint(variables, checkpoint):
<ide> logging.info('variables_to_restore:\n%s',
<ide> utils.variables_to_restore().keys())
<ide> logging.info('moving_average_variables:\n%s',
<del> [v.op.name for v in tf.moving_average_variables()])
<add> [v.op.name for v in tf.compat.v1.moving_average_variables()])
<ide> logging.info('trainable_variables:\n%s',
<del> [v.op.name for v in tf.trainable_variables()])
<add> [v.op.name for v in tf.compat.v1.trainable_variables()])
<ide> if master_checkpoint:
<ide> assign_from_checkpoint(utils.variables_to_restore(), master_checkpoint)
<ide>
<ide><path>research/attention_ocr/python/model_export.py
<ide> 'image_height', None,
<ide> 'Image height used during training(or crop height if used)'
<ide> ' If not set, the dataset default is used instead.')
<del>flags.DEFINE_string('work_dir', '/tmp', 'A directory to store temporary files.')
<add>flags.DEFINE_string('work_dir', '/tmp',
<add> 'A directory to store temporary files.')
<ide> flags.DEFINE_integer('version_number', 1, 'Version number of the model')
<ide> flags.DEFINE_bool(
<ide> 'export_for_serving', True,
<ide> def export_model(export_dir,
<ide> image_height = crop_image_height or dataset_image_height
<ide>
<ide> if export_for_serving:
<del> images_orig = tf.placeholder(
<add> images_orig = tf.compat.v1.placeholder(
<ide> tf.string, shape=[batch_size], name='tf_example')
<ide> images_orig_float = model_export_lib.generate_tfexample_image(
<ide> images_orig,
<ide> def export_model(export_dir,
<ide> name='float_images')
<ide> else:
<ide> images_shape = (batch_size, image_height, image_width, image_depth)
<del> images_orig = tf.placeholder(
<add> images_orig = tf.compat.v1.placeholder(
<ide> tf.uint8, shape=images_shape, name='original_image')
<ide> images_orig_float = tf.image.convert_image_dtype(
<ide> images_orig, dtype=tf.float32, name='float_images')
<ide>
<ide> endpoints = model.create_base(images_orig_float, labels_one_hot=None)
<ide>
<del> sess = tf.Session()
<del> saver = tf.train.Saver(slim.get_variables_to_restore(), sharded=True)
<add> sess = tf.compat.v1.Session()
<add> saver = tf.compat.v1.train.Saver(
<add> slim.get_variables_to_restore(), sharded=True)
<ide> saver.restore(sess, get_checkpoint_path())
<del> tf.logging.info('Model restored successfully.')
<add> tf.compat.v1.logging.info('Model restored successfully.')
<ide>
<ide> # Create model signature.
<ide> if export_for_serving:
<ide> input_tensors = {
<del> tf.saved_model.signature_constants.CLASSIFY_INPUTS: images_orig
<add> tf.saved_model.CLASSIFY_INPUTS: images_orig
<ide> }
<ide> else:
<ide> input_tensors = {'images': images_orig}
<ide> def export_model(export_dir,
<ide> dataset.max_sequence_length)):
<ide> output_tensors['attention_mask_%d' % i] = t
<ide> signature_outputs = model_export_lib.build_tensor_info(output_tensors)
<del> signature_def = tf.saved_model.signature_def_utils.build_signature_def(
<add> signature_def = tf.compat.v1.saved_model.signature_def_utils.build_signature_def(
<ide> signature_inputs, signature_outputs,
<del> tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME)
<add> tf.saved_model.CLASSIFY_METHOD_NAME)
<ide> # Save model.
<del> builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
<add> builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
<ide> builder.add_meta_graph_and_variables(
<del> sess, [tf.saved_model.tag_constants.SERVING],
<add> sess, [tf.saved_model.SERVING],
<ide> signature_def_map={
<del> tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
<add> tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
<ide> signature_def
<ide> },
<del> main_op=tf.tables_initializer(),
<add> main_op=tf.compat.v1.tables_initializer(),
<ide> strip_default_attrs=True)
<ide> builder.save()
<del> tf.logging.info('Model has been exported to %s' % export_dir)
<add> tf.compat.v1.logging.info('Model has been exported to %s' % export_dir)
<ide>
<ide> return signature_def
<ide>
<ide><path>research/attention_ocr/python/model_export_lib.py
<ide> def normalize_image(image, original_minval, original_maxval, target_minval,
<ide> Returns:
<ide> image: image which is the same shape as input image.
<ide> """
<del> with tf.name_scope('NormalizeImage', values=[image]):
<add> with tf.compat.v1.name_scope('NormalizeImage', values=[image]):
<ide> original_minval = float(original_minval)
<ide> original_maxval = float(original_maxval)
<ide> target_minval = float(target_minval)
<ide> def generate_tfexample_image(input_example_strings,
<ide> A tensor with shape [batch_size, height, width, channels] of type float32
<ide> with values in the range [0..1]
<ide> """
<del> batch_size = tf.shape(input_example_strings)[0]
<add> batch_size = tf.shape(input=input_example_strings)[0]
<ide> images_shape = tf.stack(
<ide> [batch_size, image_height, image_width, image_channels])
<ide> tf_example_image_key = 'image/encoded'
<ide> feature_configs = {
<ide> tf_example_image_key:
<del> tf.FixedLenFeature(
<add> tf.io.FixedLenFeature(
<ide> image_height * image_width * image_channels, dtype=tf.float32)
<ide> }
<del> feature_tensors = tf.parse_example(input_example_strings, feature_configs)
<add> feature_tensors = tf.io.parse_example(
<add> serialized=input_example_strings, features=feature_configs)
<ide> float_images = tf.reshape(
<ide> normalize_image(
<ide> feature_tensors[tf_example_image_key],
<ide> def attention_ocr_attention_masks(num_characters):
<ide> names = ['%s/Softmax:0' % (prefix)]
<ide> for i in range(1, num_characters):
<ide> names += ['%s_%d/Softmax:0' % (prefix, i)]
<del> return [tf.get_default_graph().get_tensor_by_name(n) for n in names]
<add> return [tf.compat.v1.get_default_graph().get_tensor_by_name(n) for n in names]
<ide>
<ide>
<ide> def build_tensor_info(tensor_dict):
<ide> return {
<del> k: tf.saved_model.utils.build_tensor_info(t)
<add> k: tf.compat.v1.saved_model.utils.build_tensor_info(t)
<ide> for k, t in tensor_dict.items()
<ide> }
<ide><path>research/attention_ocr/python/model_export_test.py
<ide>
<ide>
<ide> def _clean_up():
<del> tf.gfile.DeleteRecursively(tf.test.get_temp_dir())
<add> tf.io.gfile.rmtree(tf.compat.v1.test.get_temp_dir())
<ide>
<ide>
<ide> def _create_tf_example_string(image):
<ide> def setUp(self):
<ide> for suffix in ['.meta', '.index', '.data-00000-of-00001']:
<ide> filename = _CHECKPOINT + suffix
<ide> self.assertTrue(
<del> tf.gfile.Exists(filename),
<add> tf.io.gfile.exists(filename),
<ide> msg='Missing checkpoint file %s. '
<ide> 'Please download and extract it from %s' %
<ide> (filename, _CHECKPOINT_URL))
<ide> def setUp(self):
<ide> os.path.dirname(__file__), 'datasets/testdata/fsns')
<ide> tf.test.TestCase.setUp(self)
<ide> _clean_up()
<del> self.export_dir = os.path.join(tf.test.get_temp_dir(), 'exported_model')
<add> self.export_dir = os.path.join(
<add> tf.compat.v1.test.get_temp_dir(), 'exported_model')
<ide> self.minimal_output_signature = {
<ide> 'predictions': 'AttentionOcr_v1/predicted_chars:0',
<ide> 'scores': 'AttentionOcr_v1/predicted_scores:0',
<ide> def create_input_feed(self, graph_def, serving):
<ide> size=self.dataset.image_shape).astype('uint8'),
<ide> }
<ide> signature_def = graph_def.signature_def[
<del> tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
<add> tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
<ide> if serving:
<ide> input_name = signature_def.inputs[
<del> tf.saved_model.signature_constants.CLASSIFY_INPUTS].name
<add> tf.saved_model.CLASSIFY_INPUTS].name
<ide> # Model for serving takes input: inputs['inputs'] = 'tf_example:0'
<ide> feed_dict = {
<ide> input_name: [
<ide> def verify_export_load_and_inference(self, export_for_serving=False):
<ide> export_for_serving: True if the model was exported for Serving. This
<ide> affects how input is fed into the model.
<ide> """
<del> tf.reset_default_graph()
<del> sess = tf.Session()
<del> graph_def = tf.saved_model.loader.load(
<add> tf.compat.v1.reset_default_graph()
<add> sess = tf.compat.v1.Session()
<add> graph_def = tf.compat.v1.saved_model.loader.load(
<ide> sess=sess,
<del> tags=[tf.saved_model.tag_constants.SERVING],
<add> tags=[tf.saved_model.SERVING],
<ide> export_dir=self.export_dir)
<ide> feed_dict = self.create_input_feed(graph_def, export_for_serving)
<ide> results = sess.run(self.minimal_output_signature, feed_dict=feed_dict)
<ide><path>research/attention_ocr/python/model_test.py
<ide> def setUp(self):
<ide> self.num_char_classes)
<ide> self.length_logit_shape = (self.batch_size, self.seq_length + 1)
<ide> # Placeholder knows image dimensions, but not batch size.
<del> self.input_images = tf.placeholder(
<add> self.input_images = tf.compat.v1.placeholder(
<ide> tf.float32,
<ide> shape=(None, self.image_height, self.image_width, 3),
<ide> name='input_node')
<ide> def test_char_related_shapes(self):
<ide> with self.test_session() as sess:
<ide> endpoints_tf = ocr_model.create_base(
<ide> images=self.input_images, labels_one_hot=None)
<del> sess.run(tf.global_variables_initializer())
<del> tf.tables_initializer().run()
<add> sess.run(tf.compat.v1.global_variables_initializer())
<add> tf.compat.v1.tables_initializer().run()
<ide> endpoints = sess.run(
<ide> endpoints_tf, feed_dict={self.input_images: self.fake_images})
<ide>
<ide> def test_conv_tower_shape(self):
<ide> ocr_model = self.create_model()
<ide> conv_tower = ocr_model.conv_tower_fn(self.input_images)
<ide>
<del> sess.run(tf.global_variables_initializer())
<add> sess.run(tf.compat.v1.global_variables_initializer())
<ide> conv_tower_np = sess.run(
<ide> conv_tower, feed_dict={self.input_images: self.fake_images})
<ide>
<ide> def test_model_size_less_then1_gb(self):
<ide> ocr_model = self.create_model()
<ide> ocr_model.create_base(images=self.input_images, labels_one_hot=None)
<ide> with self.test_session() as sess:
<del> tfprof_root = tf.profiler.profile(
<add> tfprof_root = tf.compat.v1.profiler.profile(
<ide> sess.graph,
<del> options=tf.profiler.ProfileOptionBuilder
<add> options=tf.compat.v1.profiler.ProfileOptionBuilder
<ide> .trainable_variables_parameter())
<ide>
<ide> model_size_bytes = 4 * tfprof_root.total_parameters
<ide> def test_create_summaries_is_runnable(self):
<ide> summaries = ocr_model.create_summaries(
<ide> data, endpoints, charset, is_training=False)
<ide> with self.test_session() as sess:
<del> sess.run(tf.global_variables_initializer())
<del> sess.run(tf.local_variables_initializer())
<del> tf.tables_initializer().run()
<add> sess.run(tf.compat.v1.global_variables_initializer())
<add> sess.run(tf.compat.v1.local_variables_initializer())
<add> tf.compat.v1.tables_initializer().run()
<ide> sess.run(summaries) # just check it is runnable
<ide>
<ide> def test_sequence_loss_function_without_label_smoothing(self):
<ide> def encode_coordinates_alt(self, net):
<ide> Returns:
<ide> a list of tensors with encoded image coordinates in them.
<ide> """
<del> batch_size = tf.shape(net)[0]
<add> batch_size = tf.shape(input=net)[0]
<ide> _, h, w, _ = net.shape.as_list()
<ide> h_loc = [
<ide> tf.tile(
<ide> def encode_coordinates_alt(self, net):
<ide> h_loc = tf.concat([tf.expand_dims(t, 2) for t in h_loc], 2)
<ide> w_loc = [
<ide> tf.tile(
<del> tf.contrib.layers.one_hot_encoding(tf.constant([i]), num_classes=w),
<add> tf.contrib.layers.one_hot_encoding(
<add> tf.constant([i]), num_classes=w),
<ide> [h, 1]) for i in range(w)
<ide> ]
<ide> w_loc = tf.concat([tf.expand_dims(t, 2) for t in w_loc], 2)
<ide> def test_predicted_text_has_correct_shape_w_charset(self):
<ide> endpoints_tf = ocr_model.create_base(
<ide> images=self.fake_images, labels_one_hot=None)
<ide>
<del> sess.run(tf.global_variables_initializer())
<del> tf.tables_initializer().run()
<add> sess.run(tf.compat.v1.global_variables_initializer())
<add> tf.compat.v1.tables_initializer().run()
<ide> endpoints = sess.run(endpoints_tf)
<ide>
<ide> self.assertEqual(endpoints.predicted_text.shape, (self.batch_size,))
<ide> def test_text_corresponds_to_ids(self):
<ide> charset_mapper = model.CharsetMapper(charset)
<ide>
<ide> with self.test_session() as sess:
<del> tf.tables_initializer().run()
<add> tf.compat.v1.tables_initializer().run()
<ide> text = sess.run(charset_mapper.get_text(ids))
<ide>
<ide> self.assertAllEqual(text, [b'hello', b'world'])
<ide><path>research/attention_ocr/python/sequence_layers.py
<ide> def __init__(self, net, labels_one_hot, model_params, method_params):
<ide> self._mparams = method_params
<ide> self._net = net
<ide> self._labels_one_hot = labels_one_hot
<del> self._batch_size = tf.shape(net)[0]
<add> self._batch_size = tf.shape(input=net)[0]
<ide>
<ide> # Initialize parameters for char logits which will be computed on the fly
<ide> # inside an LSTM decoder.
<ide> self._char_logits = {}
<del> regularizer = slim.l2_regularizer(self._mparams.weight_decay)
<add> regularizer = tf.keras.regularizers.l2(0.5 * (self._mparams.weight_decay))
<ide> self._softmax_w = slim.model_variable(
<ide> 'softmax_w',
<ide> [self._mparams.num_lstm_units, self._params.num_char_classes],
<ide> initializer=orthogonal_initializer,
<ide> regularizer=regularizer)
<ide> self._softmax_b = slim.model_variable(
<ide> 'softmax_b', [self._params.num_char_classes],
<del> initializer=tf.zeros_initializer(),
<add> initializer=tf.compat.v1.zeros_initializer(),
<ide> regularizer=regularizer)
<ide>
<ide> @abc.abstractmethod
<ide> def char_logit(self, inputs, char_index):
<ide> A tensor with shape [batch_size, num_char_classes]
<ide> """
<ide> if char_index not in self._char_logits:
<del> self._char_logits[char_index] = tf.nn.xw_plus_b(inputs, self._softmax_w,
<del> self._softmax_b)
<add> self._char_logits[char_index] = tf.compat.v1.nn.xw_plus_b(inputs, self._softmax_w,
<add> self._softmax_b)
<ide> return self._char_logits[char_index]
<ide>
<ide> def char_one_hot(self, logit):
<ide> def char_one_hot(self, logit):
<ide> Returns:
<ide> A tensor with shape [batch_size, num_char_classes]
<ide> """
<del> prediction = tf.argmax(logit, axis=1)
<add> prediction = tf.argmax(input=logit, axis=1)
<ide> return slim.one_hot_encoding(prediction, self._params.num_char_classes)
<ide>
<ide> def get_input(self, prev, i):
<ide> def create_logits(self):
<ide> Returns:
<ide> A tensor with shape [batch_size, seq_length, num_char_classes].
<ide> """
<del> with tf.variable_scope('LSTM'):
<add> with tf.compat.v1.variable_scope('LSTM'):
<ide> first_label = self.get_input(prev=None, i=0)
<ide> decoder_inputs = [first_label] + [None] * (self._params.seq_length - 1)
<del> lstm_cell = tf.contrib.rnn.LSTMCell(
<add> lstm_cell = tf.compat.v1.nn.rnn_cell.LSTMCell(
<ide> self._mparams.num_lstm_units,
<ide> use_peepholes=False,
<ide> cell_clip=self._mparams.lstm_state_clip_value,
<ide> def create_logits(self):
<ide> loop_function=self.get_input,
<ide> cell=lstm_cell)
<ide>
<del> with tf.variable_scope('logits'):
<add> with tf.compat.v1.variable_scope('logits'):
<ide> logits_list = [
<del> tf.expand_dims(self.char_logit(logit, i), dim=1)
<add> tf.expand_dims(self.char_logit(logit, i), axis=1)
<ide> for i, logit in enumerate(lstm_outputs)
<ide> ]
<ide>
<ide><path>research/attention_ocr/python/sequence_layers_test.py
<ide>
<ide> def fake_net(batch_size, num_features, feature_size):
<ide> return tf.convert_to_tensor(
<del> np.random.uniform(size=(batch_size, num_features, feature_size)),
<add> value=np.random.uniform(size=(batch_size, num_features, feature_size)),
<ide> dtype=tf.float32)
<ide>
<ide>
<ide> def fake_labels(batch_size, seq_length, num_char_classes):
<ide> labels_np = tf.convert_to_tensor(
<del> np.random.randint(
<add> value=np.random.randint(
<ide> low=0, high=num_char_classes, size=(batch_size, seq_length)))
<ide> return slim.one_hot_encoding(labels_np, num_classes=num_char_classes)
<ide>
<ide><path>research/attention_ocr/python/train.py
<ide> def get_training_hparams():
<ide> def create_optimizer(hparams):
<ide> """Creates optimized based on the specified flags."""
<ide> if hparams.optimizer == 'momentum':
<del> optimizer = tf.train.MomentumOptimizer(
<add> optimizer = tf.compat.v1.train.MomentumOptimizer(
<ide> hparams.learning_rate, momentum=hparams.momentum)
<ide> elif hparams.optimizer == 'adam':
<del> optimizer = tf.train.AdamOptimizer(hparams.learning_rate)
<add> optimizer = tf.compat.v1.train.AdamOptimizer(hparams.learning_rate)
<ide> elif hparams.optimizer == 'adadelta':
<del> optimizer = tf.train.AdadeltaOptimizer(hparams.learning_rate)
<add> optimizer = tf.compat.v1.train.AdadeltaOptimizer(hparams.learning_rate)
<ide> elif hparams.optimizer == 'adagrad':
<del> optimizer = tf.train.AdagradOptimizer(hparams.learning_rate)
<add> optimizer = tf.compat.v1.train.AdagradOptimizer(hparams.learning_rate)
<ide> elif hparams.optimizer == 'rmsprop':
<del> optimizer = tf.train.RMSPropOptimizer(
<add> optimizer = tf.compat.v1.train.RMSPropOptimizer(
<ide> hparams.learning_rate, momentum=hparams.momentum)
<ide> return optimizer
<ide>
<ide> def train(loss, init_fn, hparams):
<ide>
<ide>
<ide> def prepare_training_dir():
<del> if not tf.gfile.Exists(FLAGS.train_log_dir):
<add> if not tf.io.gfile.exists(FLAGS.train_log_dir):
<ide> logging.info('Create a new training directory %s', FLAGS.train_log_dir)
<del> tf.gfile.MakeDirs(FLAGS.train_log_dir)
<add> tf.io.gfile.makedirs(FLAGS.train_log_dir)
<ide> else:
<ide> if FLAGS.reset_train_dir:
<ide> logging.info('Reset the training directory %s', FLAGS.train_log_dir)
<del> tf.gfile.DeleteRecursively(FLAGS.train_log_dir)
<del> tf.gfile.MakeDirs(FLAGS.train_log_dir)
<add> tf.io.gfile.rmtree(FLAGS.train_log_dir)
<add> tf.io.gfile.makedirs(FLAGS.train_log_dir)
<ide> else:
<ide> logging.info('Use already existing training directory %s',
<ide> FLAGS.train_log_dir)
<ide>
<ide>
<ide> def calculate_graph_metrics():
<ide> param_stats = model_analyzer.print_model_analysis(
<del> tf.get_default_graph(),
<add> tf.compat.v1.get_default_graph(),
<ide> tfprof_options=model_analyzer.TRAINABLE_VARS_PARAMS_STAT_OPTIONS)
<ide> return param_stats.total_parameters
<ide>
<ide> def main(_):
<ide> # If ps_tasks is zero, the local device is used. When using multiple
<ide> # (non-local) replicas, the ReplicaDeviceSetter distributes the variables
<ide> # across the different devices.
<del> device_setter = tf.train.replica_device_setter(
<add> device_setter = tf.compat.v1.train.replica_device_setter(
<ide> FLAGS.ps_tasks, merge_devices=True)
<ide> with tf.device(device_setter):
<ide> data = data_provider.get_data(
<ide><path>research/attention_ocr/python/utils.py
<ide> def logits_to_log_prob(logits):
<ide> probabilities.
<ide> """
<ide>
<del> with tf.variable_scope('log_probabilities'):
<add> with tf.compat.v1.variable_scope('log_probabilities'):
<ide> reduction_indices = len(logits.shape.as_list()) - 1
<ide> max_logits = tf.reduce_max(
<del> logits, reduction_indices=reduction_indices, keep_dims=True)
<add> input_tensor=logits, axis=reduction_indices, keepdims=True)
<ide> safe_logits = tf.subtract(logits, max_logits)
<ide> sum_exp = tf.reduce_sum(
<del> tf.exp(safe_logits),
<del> reduction_indices=reduction_indices,
<del> keep_dims=True)
<del> log_probs = tf.subtract(safe_logits, tf.log(sum_exp))
<add> input_tensor=tf.exp(safe_logits),
<add> axis=reduction_indices,
<add> keepdims=True)
<add> log_probs = tf.subtract(safe_logits, tf.math.log(sum_exp))
<ide> return log_probs
<ide>
<ide>
<ide> def ConvertAllInputsToTensors(func):
<ide> """
<ide>
<ide> def FuncWrapper(*args):
<del> tensors = [tf.convert_to_tensor(a) for a in args]
<add> tensors = [tf.convert_to_tensor(value=a) for a in args]
<ide> return func(*tensors)
<ide>
<ide> return FuncWrapper | 20 |
Ruby | Ruby | make new mime types first class [dhh] | 5e998d1ea07f227f436c64b2ee748ec18d6f5456 | <ide><path>actionpack/lib/action_controller/mime_responds.rb
<ide> module InstanceMethods
<ide> # and accept Rails' defaults, life will be much easier.
<ide> def respond_to(*types, &block)
<ide> raise ArgumentError, "respond_to takes either types or a block, never bot" unless types.any? ^ block
<del> block ||= lambda { |responder| types.each { |type| responder.send(type) } }
<add> block ||= lambda { |responder| types.each { |type| responder.known(type) } }
<ide> responder = Responder.new(block.binding)
<ide> block.call(responder)
<ide> responder.respond
<ide> def custom(mime_type, &block)
<ide> end
<ide> end
<ide>
<del> for mime_type in %w( all html js xml rss atom yaml )
<del> eval <<-EOT
<del> def #{mime_type}(&block)
<del> custom(Mime::#{mime_type.upcase}, &block)
<del> end
<del> EOT
<add> def known(mime_type_extension, &block)
<add> custom(Mime.const_get(mime_type_extension.to_s.upcase), &block)
<ide> end
<ide>
<ide> def any(*args, &block)
<ide><path>actionpack/lib/action_controller/mime_type.rb
<ide> def lookup(string)
<ide>
<ide> def register(string, symbol, synonyms = [])
<ide> Mime.send :const_set, symbol.to_s.upcase, Type.new(string, symbol, synonyms)
<del> LOOKUP[string] = Mime.send :const_get, symbol.to_s.upcase
<add> SET << Mime.send(:const_get, symbol.to_s.upcase)
<add> LOOKUP[string] = EXTENSION_LOOKUP[symbol.to_s] = SET.last
<ide> end
<ide>
<ide> def parse(accept_header)
<ide> def ==(mime_type)
<ide> ATOM = Type.new "application/atom+xml", :atom
<ide> YAML = Type.new "application/x-yaml", :yaml, %w( text/yaml )
<ide>
<add> SET = [ ALL, TEXT, HTML, JS, ICS, XML, RSS, ATOM, YAML ]
<ide>
<ide> LOOKUP = Hash.new { |h, k| h[k] = Type.new(k) unless k == "" }
<ide>
<ide><path>actionpack/test/controller/mime_type_test.rb
<ide> def test_parse_with_q
<ide>
<ide> def test_custom_type
<ide> Mime::Type.register("image/gif", :gif)
<del> assert_nothing_raised { Mime::GIF }
<add> assert_nothing_raised do
<add> Mime::GIF
<add> assert_equal Mime::GIF, Mime::SET.last
<add> end
<ide> Mime.send :remove_const, :GIF
<ide> end
<ide> end
<ide>\ No newline at end of file | 3 |
Python | Python | add unicode declaration on new regression test | 1dca7eeb036423d1d5889e5ec084f9f91f90eb74 | <ide><path>spacy/tests/regression/test_issue957.py
<add>from __future__ import unicode_literals
<add>
<ide> import pytest
<ide> from ... import load as load_spacy
<ide>
<ide>
<del>def test_issue913(en_tokenizer):
<add>def test_issue957(en_tokenizer):
<ide> '''Test that spaCy doesn't hang on many periods.'''
<ide> string = '0'
<ide> for i in range(1, 100): | 1 |
Mixed | Go | fix typos and formatting in docs. add godoc badge | 8fe2d88db1838ad49f8d5149d02bad4bb65d7688 | <ide><path>libnetwork/README.md
<ide> # libnetwork - networking for containers
<ide>
<del>[](https://circleci.com/gh/docker/libnetwork/tree/master) [](https://coveralls.io/r/docker/libnetwork)
<add>[](https://circleci.com/gh/docker/libnetwork/tree/master) [](https://coveralls.io/r/docker/libnetwork) [](https://godoc.org/github.com/docker/libnetwork)
<ide>
<ide> Libnetwork provides a native Go implementation for connecting containers
<ide>
<ide><path>libnetwork/network.go
<ide> /*
<del>Package libnetwork provides basic fonctionalities and extension points to
<add>Package libnetwork provides the basic functionality and extension points to
<ide> create network namespaces and allocate interfaces for containers to use.
<ide>
<del>// Create a new controller instance
<del>controller := libnetwork.New()
<add> // Create a new controller instance
<add> controller := libnetwork.New()
<ide>
<del>// This option is only needed for in-tree drivers. Plugins(in future) will get
<del>// their options through plugin infrastructure.
<del>option := options.Generic{}
<del>driver, err := controller.NewNetworkDriver("simplebridge", option)
<del>if err != nil {
<add> // This option is only needed for in-tree drivers. Plugins(in future) will get
<add> // their options through plugin infrastructure.
<add> option := options.Generic{}
<add> driver, err := controller.NewNetworkDriver("simplebridge", option)
<add> if err != nil {
<ide> return
<del>}
<del>
<del>netOptions := options.Generic{}
<del>// Create a network for containers to join.
<del>network, err := controller.NewNetwork(driver, "network1", netOptions)
<del>if err != nil {
<del> return
<del>}
<del>
<del>// For a new container: create a sandbox instance (providing a unique key).
<del>// For linux it is a filesystem path
<del>networkPath := "/var/lib/docker/.../4d23e"
<del>networkNamespace, err := sandbox.NewSandbox(networkPath)
<del>if err != nil {
<del> return
<del>}
<del>
<del>// For each new container: allocate IP and interfaces. The returned network
<del>// settings will be used for container infos (inspect and such), as well as
<del>// iptables rules for port publishing.
<del>_, sinfo, err := network.CreateEndpoint("Endpoint1", networkNamespace.Key(), "")
<del>if err != nil {
<del> return
<del>}
<del>
<del>// Add interfaces to the namespace.
<del>for _, iface := range sinfo.Interfaces {
<del> if err := networkNamespace.AddInterface(iface); err != nil {
<del> return
<del> }
<del>}
<del>
<del>// Set the gateway IP
<del>if err := networkNamespace.SetGateway(sinfo.Gateway); err != nil {
<del> return
<del>}
<add> }
<add>
<add> netOptions := options.Generic{}
<add> // Create a network for containers to join.
<add> network, err := controller.NewNetwork(driver, "network1", netOptions)
<add> if err != nil {
<add> return
<add> }
<add>
<add> // For a new container: create a sandbox instance (providing a unique key).
<add> // For linux it is a filesystem path
<add> networkPath := "/var/lib/docker/.../4d23e"
<add> networkNamespace, err := sandbox.NewSandbox(networkPath)
<add> if err != nil {
<add> return
<add> }
<add>
<add> // For each new container: allocate IP and interfaces. The returned network
<add> // settings will be used for container infos (inspect and such), as well as
<add> // iptables rules for port publishing.
<add> _, sinfo, err := network.CreateEndpoint("Endpoint1", networkNamespace.Key(), "")
<add> if err != nil {
<add> return
<add> }
<ide> */
<ide> package libnetwork
<ide>
<ide> type NetworkController interface {
<ide> }
<ide>
<ide> // A Network represents a logical connectivity zone that containers may
<del>// ulteriorly join using the CreateEndpoint method. A Network is managed by a specific
<del>// driver.
<add>// join using the Link method. A Network is managed by a specific driver.
<ide> type Network interface {
<ide> // A user chosen name for this network.
<ide> Name() string | 2 |
Text | Text | add information about pattern matching | 57762df6f7b1e1e25fb7e89d45b63f0f46ac852a | <ide><path>guide/english/csharp/switch-case/index.md
<ide> switch (dog)
<ide> ```
<ide> As you see in the above example after `when` keyword you should specify logical condition (an instruction that returns bool value).
<ide>
<add>## Pattern matching using switch case
<add>
<add>We can use switch to not only match certain values, but also match certain data type. This is called pattern matching
<add>
<add>## Example
<add>
<add>Lets say that we have some different types of shapes:
<add>```csharp
<add> public class Square
<add> {
<add> public double Side { get; }
<add>
<add> public Square(double side)
<add> {
<add> Side = side;
<add> }
<add> }
<add>
<add> public class Circle
<add> {
<add> public double Radius { get; }
<add>
<add> public Circle(double radius)
<add> {
<add> Radius = radius;
<add> }
<add> }
<add>
<add> public struct Rectangle
<add> {
<add> public double Length { get; }
<add> public double Height { get; }
<add>
<add> public Rectangle(double length, double height)
<add> {
<add> Length = length;
<add> Height = height;
<add> }
<add> }
<add>```
<add>
<add>And now we want to create a method which calculates area for any shape we pass to it. We can do that using pattern matching in switch statement like so:
<add>```csharp
<add> public static double ComputeAreaModernSwitch(object shape)
<add> {
<add> switch (shape)
<add> {
<add> case Square s:
<add> return s.Side * s.Side;
<add> case Circle c:
<add> return c.Radius * c.Radius * Math.PI;
<add> case Rectangle r:
<add> return r.Height * r.Length;
<add> default:
<add> throw new ArgumentException(
<add> message: "shape is not a recognized shape",
<add> paramName: nameof(shape));
<add> }
<add> }
<add>```
<add>
<ide> ### Sources:
<ide> - 1 https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/switch
<add>- https://docs.microsoft.com/en-us/dotnet/csharp/pattern-matching | 1 |
Go | Go | use container.lock in public todisk method | eae5cf1e20e8d93cc13ea8e1db3cd787250fa76d | <ide><path>daemon/container.go
<ide> func (container *Container) FromDisk() error {
<ide> return container.readHostConfig()
<ide> }
<ide>
<del>func (container *Container) ToDisk() error {
<add>func (container *Container) toDisk() error {
<ide> data, err := json.Marshal(container)
<ide> if err != nil {
<ide> return err
<ide> func (container *Container) ToDisk() error {
<ide> return container.WriteHostConfig()
<ide> }
<ide>
<add>func (container *Container) ToDisk() error {
<add> container.Lock()
<add> err := container.toDisk()
<add> container.Unlock()
<add> return err
<add>}
<add>
<ide> func (container *Container) readHostConfig() error {
<ide> container.hostConfig = &runconfig.HostConfig{}
<ide> // If the hostconfig file does not exist, do not read it.
<ide> func (container *Container) monitor(callback execdriver.StartCallback) error {
<ide> // FIXME: here is race condition between two RUN instructions in Dockerfile
<ide> // because they share same runconfig and change image. Must be fixed
<ide> // in server/buildfile.go
<del> if err := container.ToDisk(); err != nil {
<add> if err := container.toDisk(); err != nil {
<ide> utils.Errorf("Error dumping container %s state to disk: %s\n", container.ID, err)
<ide> }
<ide> }
<ide> func (container *Container) waitForStart() error {
<ide> c.Close()
<ide> }
<ide> }
<del> if err := container.ToDisk(); err != nil {
<add> if err := container.toDisk(); err != nil {
<ide> utils.Debugf("%s", err)
<ide> }
<ide> container.State.SetRunning(command.Pid()) | 1 |
Javascript | Javascript | fix tests for git version of jquery | eac3ae1fa2016ade07243afe5971cfdf48577900 | <ide><path>packages/ember-handlebars/tests/views/collection_view_test.js
<ide> test("a block passed to a collection helper defaults to the content property of
<ide> view.appendTo('#qunit-fixture');
<ide> });
<ide>
<del> equal(view.$('li:has(label:contains("foo")) + li:has(label:contains("bar")) + li:has(label:contains("baz"))').length, 1, 'one label element is created for each content item');
<add> equal(view.$('li:nth-child(1) label').length, 1);
<add> equal(view.$('li:nth-child(1) label').text(), 'foo');
<add> equal(view.$('li:nth-child(2) label').length, 1);
<add> equal(view.$('li:nth-child(2) label').text(), 'bar');
<add> equal(view.$('li:nth-child(3) label').length, 1);
<add> equal(view.$('li:nth-child(3) label').text(), 'baz');
<ide> });
<ide>
<ide> test("a block passed to a collection helper defaults to the view", function() {
<ide> test("a block passed to a collection helper defaults to the view", function() {
<ide> Ember.run(function() {
<ide> view.appendTo('#qunit-fixture');
<ide> });
<del> equal(view.$('li:has(label:contains("foo")) + li:has(label:contains("bar")) + li:has(label:contains("baz"))').length, 1, 'precond - one aside element is created for each content item');
<add>
<add> // Preconds
<add> equal(view.$('li:nth-child(1) label').length, 1);
<add> equal(view.$('li:nth-child(1) label').text(), 'foo');
<add> equal(view.$('li:nth-child(2) label').length, 1);
<add> equal(view.$('li:nth-child(2) label').text(), 'bar');
<add> equal(view.$('li:nth-child(3) label').length, 1);
<add> equal(view.$('li:nth-child(3) label').text(), 'baz');
<ide>
<ide> Ember.run(function() {
<ide> set(firstChild(view), 'content', Ember.A());
<ide><path>packages/ember-views/tests/views/view/view_lifecycle_test.js
<ide> test("rerender should work inside a template", function() {
<ide> Ember.TESTING_DEPRECATION = false;
<ide> }
<ide>
<del> ok(view.$('div:contains(2), div:contains(Inside child2').length === 2,
<del> "Rerendering a view causes it to rerender");
<add> equal(view.$('div:nth-child(1)').length, 1);
<add> equal(view.$('div:nth-child(1)').text(), '2');
<add> equal(view.$('div:nth-child(2)').length, 1);
<add> equal(view.$('div:nth-child(2)').text(), 'Inside child2');
<ide> });
<ide>
<ide> module("views/view/view_lifecycle_test - in DOM", { | 2 |
Javascript | Javascript | remove unused code from autoescapestr | ed084a035c1f657284f3eee7f7a583a42e8b35f1 | <ide><path>lib/url.js
<ide> Url.prototype.parse = function parse(url, parseQueryString, slashesDenoteHost) {
<ide> // First, make 100% sure that any "autoEscape" chars get
<ide> // escaped, even if encodeURIComponent doesn't think they
<ide> // need to be.
<del> const result = autoEscapeStr(rest);
<del> if (result !== undefined)
<del> rest = result;
<add> rest = autoEscapeStr(rest);
<ide> }
<ide>
<ide> var questionIdx = -1;
<ide> function validateHostname(self, rest, hostname) {
<ide>
<ide> // Automatically escape all delimiters and unwise characters from RFC 2396.
<ide> // Also escape single quotes in case of an XSS attack.
<del>// Return undefined if the string doesn't need escaping,
<del>// otherwise return the escaped string.
<add>// Return the escaped string.
<ide> function autoEscapeStr(rest) {
<ide> var escaped = '';
<ide> var lastEscapedPos = 0;
<ide> function autoEscapeStr(rest) {
<ide> }
<ide> }
<ide> if (lastEscapedPos === 0) // Nothing has been escaped.
<del> return;
<add> return rest;
<add>
<ide> // There are ordinary characters at the end.
<ide> if (lastEscapedPos < rest.length)
<del> return escaped + rest.slice(lastEscapedPos);
<del> else // The last character is escaped.
<del> return escaped;
<add> escaped += rest.slice(lastEscapedPos);
<add>
<add> return escaped;
<ide> }
<ide>
<ide> // format a parsed object into a url string | 1 |
Python | Python | add support for .zip to init_model | 7ee880a0ade4c690afe74eaf09e40818ccc2a470 | <ide><path>spacy/cli/init_model.py
<ide> from preshed.counter import PreshCounter
<ide> import tarfile
<ide> import gzip
<add>import zipfile
<ide>
<del>from ._messages import Messages
<add>from ..compat import fix_text
<ide> from ..vectors import Vectors
<del>from ..errors import Warnings, user_warning
<ide> from ..util import prints, ensure_path, get_lang_class
<ide>
<del>try:
<del> import ftfy
<del>except ImportError:
<del> ftfy = None
<del>
<ide>
<ide> @plac.annotations(
<ide> lang=("model language", "positional", None, str),
<ide> def init_model(lang, output_dir, freqs_loc=None, clusters_loc=None, vectors_loc=
<ide> and word vectors.
<ide> """
<ide> if freqs_loc is not None and not freqs_loc.exists():
<del> prints(freqs_loc, title=Messages.M037, exits=1)
<add> prints(freqs_loc, title="Can't find words frequencies file", exits=1)
<ide> clusters_loc = ensure_path(clusters_loc)
<ide> vectors_loc = ensure_path(vectors_loc)
<add>
<ide> probs, oov_prob = read_freqs(freqs_loc) if freqs_loc is not None else ({}, -20)
<ide> vectors_data, vector_keys = read_vectors(vectors_loc) if vectors_loc else (None, None)
<ide> clusters = read_clusters(clusters_loc) if clusters_loc else {}
<add>
<ide> nlp = create_model(lang, probs, oov_prob, clusters, vectors_data, vector_keys, prune_vectors)
<add>
<ide> if not output_dir.exists():
<ide> output_dir.mkdir()
<ide> nlp.to_disk(output_dir)
<ide> def init_model(lang, output_dir, freqs_loc=None, clusters_loc=None, vectors_loc=
<ide> def open_file(loc):
<ide> '''Handle .gz, .tar.gz or unzipped files'''
<ide> loc = ensure_path(loc)
<add> print("Open loc")
<ide> if tarfile.is_tarfile(str(loc)):
<ide> return tarfile.open(str(loc), 'r:gz')
<ide> elif loc.parts[-1].endswith('gz'):
<ide> return (line.decode('utf8') for line in gzip.open(str(loc), 'r'))
<add> elif loc.parts[-1].endswith('zip'):
<add> zip_file = zipfile.ZipFile(str(loc))
<add> names = zip_file.namelist()
<add> file_ = zip_file.open(names[0])
<add> return (line.decode('utf8') for line in file_)
<ide> else:
<ide> return loc.open('r', encoding='utf8')
<ide>
<del>
<ide> def create_model(lang, probs, oov_prob, clusters, vectors_data, vector_keys, prune_vectors):
<ide> print("Creating model...")
<ide> lang_class = get_lang_class(lang)
<ide> nlp = lang_class()
<ide> for lexeme in nlp.vocab:
<ide> lexeme.rank = 0
<add>
<ide> lex_added = 0
<ide> for i, (word, prob) in enumerate(tqdm(sorted(probs.items(), key=lambda item: item[1], reverse=True))):
<ide> lexeme = nlp.vocab[word]
<ide> def create_model(lang, probs, oov_prob, clusters, vectors_data, vector_keys, pru
<ide> lexeme = nlp.vocab[word]
<ide> lexeme.is_oov = False
<ide> lex_added += 1
<add>
<ide> if len(vectors_data):
<ide> nlp.vocab.vectors = Vectors(data=vectors_data, keys=vector_keys)
<ide> if prune_vectors >= 1:
<ide> nlp.vocab.prune_vectors(prune_vectors)
<ide> vec_added = len(nlp.vocab.vectors)
<del> prints(Messages.M039.format(entries=lex_added, vectors=vec_added),
<del> title=Messages.M038)
<add>
<add> prints("{} entries, {} vectors".format(lex_added, vec_added),
<add> title="Sucessfully compiled vocab")
<ide> return nlp
<ide>
<ide>
<ide> def read_vectors(vectors_loc):
<ide> vectors_data = numpy.zeros(shape=shape, dtype='f')
<ide> vectors_keys = []
<ide> for i, line in enumerate(tqdm(f)):
<del> pieces = line.split()
<add> line = line.rstrip()
<add> pieces = line.rsplit(' ', vectors_data.shape[1]+1)
<ide> word = pieces.pop(0)
<add> if len(pieces) != vectors_data.shape[1]:
<add> print(word, repr(line))
<add> raise ValueError("Bad line in file")
<ide> vectors_data[i] = numpy.asarray(pieces, dtype='f')
<ide> vectors_keys.append(word)
<ide> return vectors_data, vectors_keys
<ide> def read_freqs(freqs_loc, max_length=100, min_doc_freq=5, min_freq=50):
<ide> def read_clusters(clusters_loc):
<ide> print("Reading clusters...")
<ide> clusters = {}
<del> if ftfy is None:
<del> user_warning(Warnings.W004)
<ide> with clusters_loc.open() as f:
<ide> for line in tqdm(f):
<ide> try:
<ide> cluster, word, freq = line.split()
<del> if ftfy is not None:
<del> word = ftfy.fix_text(word)
<add> word = fix_text(word)
<ide> except ValueError:
<ide> continue
<ide> # If the clusterer has only seen the word a few times, its | 1 |
PHP | PHP | remove invalid options from attachto | aa61eb02891519aac59459699edae673eeb9b056 | <ide><path>src/ORM/Association.php
<ide> protected function _options(array $options): void
<ide> * - conditions: array with a list of conditions to filter the join with, this
<ide> * will be merged with any conditions originally configured for this association
<ide> * - fields: a list of fields in the target table to include in the result
<del> * - type: The type of join to be used (e.g. INNER)
<del> * the records found on this association
<ide> * - aliasPath: A dot separated string representing the path of association names
<ide> * followed from the passed query main table to this association.
<ide> * - propertyPath: A dot separated string representing the path of association
<ide> protected function _options(array $options): void
<ide> * @param \Cake\ORM\Query $query the query to be altered to include the target table data
<ide> * @param array $options Any extra options or overrides to be taken in account
<ide> * @return void
<del> * @throws \RuntimeException if the query builder passed does not return a query
<del> * object
<add> * @throws \RuntimeException Unable to build the query or associations.
<ide> */
<ide> public function attachTo(Query $query, array $options = []): void
<ide> {
<ide> $target = $this->getTarget();
<del> $joinType = empty($options['joinType']) ? $this->getJoinType() : $options['joinType'];
<ide> $table = $target->getTable();
<ide>
<ide> $options += [
<ide> 'includeFields' => true,
<ide> 'foreignKey' => $this->getForeignKey(),
<ide> 'conditions' => [],
<add> 'joinType' => $this->getJoinType(),
<ide> 'fields' => [],
<del> 'type' => $joinType,
<ide> 'table' => $table,
<ide> 'finder' => $this->getFinder(),
<ide> ];
<ide> public function attachTo(Query $query, array $options = []): void
<ide> $dummy->where($options['conditions']);
<ide> $this->_dispatchBeforeFind($dummy);
<ide>
<del> $joinOptions = ['table' => 1, 'conditions' => 1, 'type' => 1];
<del> $options['conditions'] = $dummy->clause('where');
<del> $query->join([$this->_name => array_intersect_key($options, $joinOptions)]);
<add> $query->join([$this->_name => [
<add> 'table' => $options['table'],
<add> 'conditions' => $dummy->clause('where'),
<add> 'type' => $options['joinType'],
<add> ]]);
<ide>
<ide> $this->_appendFields($query, $dummy, $options);
<ide> $this->_formatAssociationResults($query, $dummy, $options); | 1 |
PHP | PHP | add dd() to collection | f5fafad80dbb08353824483f5b849031693cc477 | <ide><path>src/Illuminate/Support/Collection.php
<ide> public function crossJoin(...$lists)
<ide> ));
<ide> }
<ide>
<add> /**
<add> * Dump the collection and end the script.
<add> *
<add> * @return void
<add> */
<add> public function dd()
<add> {
<add> dd($this->all());
<add> }
<add>
<ide> /**
<ide> * Get the items in the collection that are not present in the given items.
<ide> * | 1 |
Javascript | Javascript | add missing requires and statements | a9fbc0e3ed613109f99ad7a1d2d9a98031ec6141 | <ide><path>script/lib/create-windows-installer.js
<ide> 'use strict'
<ide>
<del>const downloadCertificate = require('./download-github-raw-file')
<add>const downloadGithubRawFile = require('./download-github-raw-file')
<ide> const electronInstaller = require('electron-winstaller')
<ide> const fs = require('fs-extra')
<ide> const os = require('os')
<ide> module.exports = function (packagedAppPath, codeSign) {
<ide> setupIcon: path.join(CONFIG.repositoryRootPath, 'resources', 'app-icons', CONFIG.channel, 'atom.ico')
<ide> }
<ide>
<del> if (codeSign && WIN_P12KEY_URL) {
<add> if (codeSign && process.env.WIN_P12KEY_URL) {
<ide> const certPath = path.join(os.tmpdir(), 'win.p12')
<del> downloadGithubRawFile(WIN_P12KEY_URL, certPath)
<add> downloadGithubRawFile(process.env.WIN_P12KEY_URL, certPath)
<ide> const deleteCertificate = function () {
<ide> console.log(`Deleting certificate at ${certPath}`)
<ide> fs.removeSync(certPath)
<ide> }
<ide> options.certificateFile = certPath
<del> options.certificatePassword = WIN_P12KEY_PASSWORD
<add> options.certificatePassword = process.env.WIN_P12KEY_PASSWORD
<ide> return electronInstaller.createWindowsInstaller(options).then(deleteCertificate, deleteCertificate)
<ide> } else {
<ide> console.log('Skipping code-signing. Specify the --code-sign option and provide a WIN_P12KEY_URL environment variable to perform code-signing'.gray) | 1 |
Javascript | Javascript | beautify webpack configs | 32ebbfd7c7fddab4e2a329aa5f1b8ef7f13b32fe | <ide><path>examples/coffee-script/webpack.config.js
<ide> module.exports = {
<ide> // mode: "development || "production",
<ide> module: {
<del> rules: [
<del> { test: /\.coffee$/, loader: "coffee-loader" }
<del> ]
<add> rules: [{
<add> test: /\.coffee$/,
<add> loader: "coffee-loader"
<add> }]
<ide> },
<ide> resolve: {
<ide> extensions: [".web.coffee", ".web.js", ".coffee", ".js"]
<ide><path>examples/explicit-vendor-chunk/webpack.config.js
<ide> var path = require("path");
<ide> var webpack = require("../../");
<ide> module.exports = [
<add>
<ide> {
<ide> name: "vendor",
<ide> // mode: "development || "production",
<ide> module.exports = [
<ide> })
<ide> ]
<ide> },
<add>
<ide> {
<ide> name: "app",
<ide> // mode: "development || "production",
<ide> module.exports = [
<ide> })
<ide> ]
<ide> }
<add>
<ide> ];
<ide><path>examples/loader/webpack.config.js
<ide> module.exports = {
<ide> // mode: "development || "production",
<ide> module: {
<del> rules: [
<del> { test: /\.css$/, loader: "css-loader" }
<del> ]
<add> rules: [{
<add> test: /\.css$/,
<add> loader: "css-loader"
<add> }]
<ide> }
<ide> };
<ide><path>examples/multi-compiler/webpack.config.js
<ide> var path = require("path");
<ide> var webpack = require("../../");
<ide> module.exports = [
<add>
<ide> {
<ide> name: "mobile",
<ide> // mode: "development || "production",
<ide> module.exports = [
<ide> })
<ide> ]
<ide> },
<add>
<ide> {
<ide> name: "desktop",
<ide> // mode: "development || "production",
<ide> module.exports = [
<ide> })
<ide> ]
<ide> }
<add>
<ide> ];
<ide><path>examples/wasm-simple/webpack.config.js
<ide> module.exports = {
<ide> publicPath: "js/"
<ide> },
<ide> module: {
<del> rules: [
<del> {
<del> test: /\.wasm$/,
<del> type: "webassembly/experimental"
<del> }
<del> ]
<add> rules: [{
<add> test: /\.wasm$/,
<add> type: "webassembly/experimental"
<add> }]
<ide> },
<ide> optimization: {
<ide> occurrenceOrder: true // To keep filename consistent between different modes (for example building only)
<ide><path>test/configCases/compiletime/warn-not-found/webpack.config.js
<add>
<ide><path>test/configCases/context-replacement/d/webpack.config.js
<ide> var webpack = require("../../../../");
<ide>
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> {
<del> test: /a\.js$/,
<del> use: [
<del> "./queryloader?lions=roar"
<del> ]
<del> }
<del> ]
<add> rules: [{
<add> test: /a\.js$/,
<add> use: [
<add> "./queryloader?lions=roar"
<add> ]
<add> }]
<ide> },
<ide> plugins: [
<ide> new webpack.ContextReplacementPlugin(/context-replacement.d$/, path.resolve(__dirname, "modules?cats=meow"), {
<ide><path>test/configCases/delegated-hash/simple/webpack.config.js
<ide> module.exports = {
<ide> type: "require",
<ide> context: __dirname,
<ide> content: {
<del> "./a.js": { id: 0 },
<del> "./loader.js!./b.js": { id: 1 },
<del> "./dir/c.js": { id: 2 }
<add> "./a.js": {
<add> id: 0
<add> },
<add> "./loader.js!./b.js": {
<add> id: 1
<add> },
<add> "./dir/c.js": {
<add> id: 2
<add> }
<ide> }
<ide> }),
<ide> new DelegatedPlugin({
<ide> source: "./bundle2",
<ide> type: "object",
<ide> context: __dirname,
<ide> content: {
<del> "./d.js": { id: 3 },
<del> "./e.js": { id: 4 }
<add> "./d.js": {
<add> id: 3
<add> },
<add> "./e.js": {
<add> id: 4
<add> }
<ide> }
<ide> })
<ide> ]
<ide><path>test/configCases/delegated/simple/webpack.config.js
<ide> module.exports = {
<ide> type: "require",
<ide> context: __dirname,
<ide> content: {
<del> "./a.js": { id: 0 },
<del> "./loader.js!./b.js": { id: 1 },
<del> "./dir/c.js": { id: 2 }
<add> "./a.js": {
<add> id: 0
<add> },
<add> "./loader.js!./b.js": {
<add> id: 1
<add> },
<add> "./dir/c.js": {
<add> id: 2
<add> }
<ide> }
<ide> })
<ide> ]
<ide><path>test/configCases/dll-plugin/0-create-dll/webpack.config.js
<ide> module.exports = {
<ide> libraryTarget: "commonjs2"
<ide> },
<ide> module: {
<del> rules: [
<del> {
<del> test: /\.abc\.js$/,
<del> loader: "./g-loader.js",
<del> options: {
<del> test: 1
<del> }
<add> rules: [{
<add> test: /\.abc\.js$/,
<add> loader: "./g-loader.js",
<add> options: {
<add> test: 1
<ide> }
<del> ]
<add> }]
<ide> },
<ide> plugins: [
<ide> new webpack.DllPlugin({
<ide><path>test/configCases/dll-plugin/2-use-dll-without-scope/webpack.config.js
<ide> var webpack = require("../../../../");
<ide>
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> { oneOf: [
<del> {
<del> test: /\.abc\.js$/,
<del> loader: "../0-create-dll/g-loader.js",
<del> options: {
<del> test: 1
<del> }
<add> rules: [{
<add> oneOf: [{
<add> test: /\.abc\.js$/,
<add> loader: "../0-create-dll/g-loader.js",
<add> options: {
<add> test: 1
<ide> }
<del> ] }
<del> ]
<add> }]
<add> }]
<ide> },
<ide> resolve: {
<ide> extensions: [".js", ".jsx"]
<ide><path>test/configCases/dll-plugin/3-use-dll-with-hashid/webpack.config.js
<ide> var webpack = require("../../../../");
<ide>
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> { oneOf: [
<del> {
<del> test: /\.abc\.js$/,
<del> loader: "../0-create-dll/g-loader.js",
<del> options: {
<del> test: 1
<del> }
<add> rules: [{
<add> oneOf: [{
<add> test: /\.abc\.js$/,
<add> loader: "../0-create-dll/g-loader.js",
<add> options: {
<add> test: 1
<ide> }
<del> ] }
<del> ]
<add> }]
<add> }]
<ide> },
<ide> plugins: [
<ide> new webpack.DllReferencePlugin({
<ide><path>test/configCases/errors/multi-entry-missing-module/webpack.config.js
<ide> const IgnorePlugin = require("../../../../lib/IgnorePlugin");
<ide> module.exports = {
<del> entry: {
<del> b: ["./intentionally-missing-module.js"],
<del> bundle0: ["./index"]
<del> },
<del> output: {
<del> filename: "[name].js"
<del> },
<del> plugins: [
<del> new IgnorePlugin(new RegExp(/intentionally-missing-module/))
<del> ],
<del> node: {
<del> __dirname: false
<del> }
<add> entry: {
<add> b: ["./intentionally-missing-module.js"],
<add> bundle0: ["./index"]
<add> },
<add> output: {
<add> filename: "[name].js"
<add> },
<add> plugins: [
<add> new IgnorePlugin(new RegExp(/intentionally-missing-module/))
<add> ],
<add> node: {
<add> __dirname: false
<add> }
<ide> };
<ide><path>test/configCases/hash-length/output-filename/webpack.config.js
<ide> var webpack = require("../../../../");
<del>module.exports = [{
<del> name: "hash with length in publicPath",
<del> output: {
<del> publicPath: "/[hash:6]/",
<del> filename: "bundle0.[hash:6].js",
<del> chunkFilename: "[id].bundle0.[hash:6].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 17,
<del> expectedChunkFilenameLength: 19
<del> }
<del>}, {
<del> name: "hash in publicPath",
<del> output: {
<del> publicPath: "/[hash]/",
<del> filename: "bundle1.[hash].js",
<del> chunkFilename: "[id].bundle1.[hash].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 31,
<del> expectedChunkFilenameLength: 33
<del> }
<del>}, {
<del> name: "chunkhash with length",
<del> output: {
<del> filename: "bundle2.[chunkhash:8].js",
<del> chunkFilename: "[id].bundle2.[chunkhash:8].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 19,
<del> expectedChunkFilenameLength: 21
<del> }
<del>}, {
<del> name: "chunkhash",
<del> output: {
<del> filename: "bundle3.[chunkhash].js",
<del> chunkFilename: "[id].bundle3.[chunkhash].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 31,
<del> expectedChunkFilenameLength: 33
<del> }
<del>}, {
<del> name: "hash with and without length",
<del> output: {
<del> filename: "bundle4.[hash].js",
<del> chunkFilename: "[id].bundle4.[hash:8].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 31,
<del> expectedChunkFilenameLength: 21
<del> }
<del>}, {
<del> name: "hash with length",
<del> output: {
<del> filename: "bundle5.[hash:6].js",
<del> chunkFilename: "[id].bundle5.[hash:8].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 17,
<del> expectedChunkFilenameLength: 21
<del> }
<del>}, {
<del> name: "chunkhash in chunkFilename ",
<del> output: {
<del> filename: "bundle6.[hash].js",
<del> chunkFilename: "[id].bundle6.[chunkhash:7].js"
<del> },
<del> amd: {
<del> expectedFilenameLength: 31,
<del> expectedChunkFilenameLength: 20
<del> },
<del> plugins: [
<del> new webpack.HotModuleReplacementPlugin()
<del> ]
<del>}, {
<del> name: "hash with length and chunkhash with length",
<del> output: {
<del> filename: "bundle7.[hash:7].js",
<del> chunkFilename: "[id].bundle7.[chunkhash:7].js"
<del> },
<del> target: "node",
<del> amd: {
<del> expectedFilenameLength: 18,
<del> expectedChunkFilenameLength: 20
<del> }
<del>}, {
<del> name: "hash with length in chunkFilename",
<del> output: {
<del> filename: "bundle8.[hash].js",
<del> chunkFilename: "[id].bundle8.[hash:7].js"
<add>module.exports = [
<ide>
<del> },
<del> target: "node",
<del> amd: {
<del> expectedFilenameLength: 31,
<del> expectedChunkFilenameLength: 20
<del> }
<del>},
<del>{
<del> name: "chunkhash with length in chunkFilename",
<del> output: {
<del> filename: "bundle9.[hash].js",
<del> chunkFilename: "[id].bundle9.[chunkhash:7].js"
<add> {
<add> name: "hash with length in publicPath",
<add> output: {
<add> publicPath: "/[hash:6]/",
<add> filename: "bundle0.[hash:6].js",
<add> chunkFilename: "[id].bundle0.[hash:6].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 17,
<add> expectedChunkFilenameLength: 19
<add> }
<add> }, {
<add> name: "hash in publicPath",
<add> output: {
<add> publicPath: "/[hash]/",
<add> filename: "bundle1.[hash].js",
<add> chunkFilename: "[id].bundle1.[hash].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 31,
<add> expectedChunkFilenameLength: 33
<add> }
<add> }, {
<add> name: "chunkhash with length",
<add> output: {
<add> filename: "bundle2.[chunkhash:8].js",
<add> chunkFilename: "[id].bundle2.[chunkhash:8].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 19,
<add> expectedChunkFilenameLength: 21
<add> }
<add> }, {
<add> name: "chunkhash",
<add> output: {
<add> filename: "bundle3.[chunkhash].js",
<add> chunkFilename: "[id].bundle3.[chunkhash].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 31,
<add> expectedChunkFilenameLength: 33
<add> }
<add> }, {
<add> name: "hash with and without length",
<add> output: {
<add> filename: "bundle4.[hash].js",
<add> chunkFilename: "[id].bundle4.[hash:8].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 31,
<add> expectedChunkFilenameLength: 21
<add> }
<add> }, {
<add> name: "hash with length",
<add> output: {
<add> filename: "bundle5.[hash:6].js",
<add> chunkFilename: "[id].bundle5.[hash:8].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 17,
<add> expectedChunkFilenameLength: 21
<add> }
<add> }, {
<add> name: "chunkhash in chunkFilename ",
<add> output: {
<add> filename: "bundle6.[hash].js",
<add> chunkFilename: "[id].bundle6.[chunkhash:7].js"
<add> },
<add> amd: {
<add> expectedFilenameLength: 31,
<add> expectedChunkFilenameLength: 20
<add> },
<add> plugins: [
<add> new webpack.HotModuleReplacementPlugin()
<add> ]
<add> }, {
<add> name: "hash with length and chunkhash with length",
<add> output: {
<add> filename: "bundle7.[hash:7].js",
<add> chunkFilename: "[id].bundle7.[chunkhash:7].js"
<add> },
<add> target: "node",
<add> amd: {
<add> expectedFilenameLength: 18,
<add> expectedChunkFilenameLength: 20
<add> }
<add> }, {
<add> name: "hash with length in chunkFilename",
<add> output: {
<add> filename: "bundle8.[hash].js",
<add> chunkFilename: "[id].bundle8.[hash:7].js"
<ide>
<add> },
<add> target: "node",
<add> amd: {
<add> expectedFilenameLength: 31,
<add> expectedChunkFilenameLength: 20
<add> }
<ide> },
<del> target: "node",
<del> amd: {
<del> expectedFilenameLength: 31,
<del> expectedChunkFilenameLength: 20
<add> {
<add> name: "chunkhash with length in chunkFilename",
<add> output: {
<add> filename: "bundle9.[hash].js",
<add> chunkFilename: "[id].bundle9.[chunkhash:7].js"
<add>
<add> },
<add> target: "node",
<add> amd: {
<add> expectedFilenameLength: 31,
<add> expectedChunkFilenameLength: 20
<add> }
<ide> }
<del>}
<ide> ];
<ide>
<ide> module.exports.forEach(function(options) {
<ide><path>test/configCases/library/0-create-library/webpack.config.js
<ide> module.exports = [
<add>
<ide> {
<ide> output: {
<ide> filename: "commonjs.js",
<ide><path>test/configCases/library/1-use-library/webpack.config.js
<ide> var webpack = require("../../../../");
<ide> var path = require("path");
<ide> module.exports = [
<add>
<ide> {
<ide> resolve: {
<ide> alias: {
<ide><path>test/configCases/loaders/generate-ident/webpack.config.js
<ide> module.exports = {
<ide> module: {
<ide> rules: [
<add>
<ide> {
<ide> test: /a\.js$/,
<ide> use: [
<ide><path>test/configCases/loaders/issue-3320/webpack.config.js
<ide> module.exports = {
<ide> },
<ide> module: {
<ide> rules: [
<add>
<ide> {
<ide> test: /a\.js$/,
<del> use: [
<del> {
<del> loader: "some-loader"
<del> }
<del> ]
<add> use: [{
<add> loader: "some-loader"
<add> }]
<ide> },
<ide> {
<ide> test: /b\.js$/,
<del> use: [
<del> {
<del> loader: "some-loader",
<del> options: {
<del> foo: "someOtherMessage"
<del> }
<add> use: [{
<add> loader: "some-loader",
<add> options: {
<add> foo: "someOtherMessage"
<ide> }
<del> ]
<add> }]
<ide> },
<ide> {
<ide> test: /b2\.js$/,
<ide><path>test/configCases/loaders/pre-post-loader/webpack.config.js
<ide> module.exports = {
<ide> module: {
<ide> rules: [
<add>
<ide> {
<ide> test: /a\.js$/,
<ide> use: "./loader1",
<ide><path>test/configCases/loaders/remaining-request/webpack.config.js
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> {
<del> test: /a\.js$/,
<del> use: [
<del> "./loader1",
<del> {
<del> loader: "./loader2",
<del> ident: "loader2",
<del> options: {
<del> f: function() {
<del> return "ok";
<del> }
<add> rules: [{
<add> test: /a\.js$/,
<add> use: [
<add> "./loader1",
<add> {
<add> loader: "./loader2",
<add> ident: "loader2",
<add> options: {
<add> f: function() {
<add> return "ok";
<ide> }
<ide> }
<del> ]
<del> }
<del> ]
<add> }
<add> ]
<add> }]
<ide> }
<ide> };
<ide><path>test/configCases/parsing/require.main/webpack.config.js
<del>module.exports = {
<del>};
<add>module.exports = {};
<ide><path>test/configCases/parsing/system.import/webpack.config.js
<ide> function createConfig(system) {
<ide> return {
<ide> name: `system_${systemString}`,
<ide> module: {
<del> rules: [
<del> {
<del> test: /\.js$/,
<del> parser: {
<del> system
<del> }
<add> rules: [{
<add> test: /\.js$/,
<add> parser: {
<add> system
<ide> }
<del> ]
<add> }]
<ide> },
<ide> plugins: [
<ide> new webpack.DefinePlugin({
<ide><path>test/configCases/plugins/environment-plugin/webpack.config.js
<ide> process.env.III = "";
<ide>
<ide> module.exports = [{
<ide> name: "aaa",
<del> module: { unknownContextRegExp: /$^/, unknownContextCritical: false },
<add> module: {
<add> unknownContextRegExp: /$^/,
<add> unknownContextCritical: false
<add> },
<ide> plugins: [
<ide> new EnvironmentPlugin("AAA")
<ide> ]
<ide> }, {
<ide> name: "bbbccc",
<del> module: { unknownContextRegExp: /$^/, unknownContextCritical: false },
<add> module: {
<add> unknownContextRegExp: /$^/,
<add> unknownContextCritical: false
<add> },
<ide> plugins: [
<ide> new EnvironmentPlugin("BBB", "CCC")
<ide> ]
<ide> }, {
<ide> name: "ddd",
<del> module: { unknownContextRegExp: /$^/, unknownContextCritical: false },
<add> module: {
<add> unknownContextRegExp: /$^/,
<add> unknownContextCritical: false
<add> },
<ide> plugins: [
<ide> new EnvironmentPlugin("DDD")
<ide> ]
<ide> }, {
<ide> name: "eeefff",
<del> module: { unknownContextRegExp: /$^/, unknownContextCritical: false },
<add> module: {
<add> unknownContextRegExp: /$^/,
<add> unknownContextCritical: false
<add> },
<ide> plugins: [
<ide> new EnvironmentPlugin(["EEE", "FFF"])
<ide> ]
<ide> }, {
<ide> name: "ggghhh",
<del> module: { unknownContextRegExp: /$^/, unknownContextCritical: false },
<add> module: {
<add> unknownContextRegExp: /$^/,
<add> unknownContextCritical: false
<add> },
<ide> plugins: [
<ide> new EnvironmentPlugin({
<ide> GGG: "ggg-default",
<ide> module.exports = [{
<ide> ]
<ide> }, {
<ide> name: "iii",
<del> module: { unknownContextRegExp: /$^/, unknownContextCritical: false },
<add> module: {
<add> unknownContextRegExp: /$^/,
<add> unknownContextCritical: false
<add> },
<ide> plugins: [
<ide> new EnvironmentPlugin("III")
<ide> ]
<ide><path>test/configCases/plugins/profiling-plugin/webpack.config.js
<ide> var path = require("path");
<ide> var os = require("os");
<ide>
<ide> module.exports = {
<del> plugins: [
<del> new webpack.debug.ProfilingPlugin({
<del> outPath: path.join(os.tmpdir(), "events.json")
<del> })
<del> ],
<del> node: {
<del> __dirname: false,
<del> __filename: false
<del> }
<add> plugins: [
<add> new webpack.debug.ProfilingPlugin({
<add> outPath: path.join(os.tmpdir(), "events.json")
<add> })
<add> ],
<add> node: {
<add> __dirname: false,
<add> __filename: false
<add> }
<ide> };
<ide><path>test/configCases/records/issue-2991/webpack.config.js
<ide> module.exports = {
<ide> __dirname: false
<ide> },
<ide> resolve: {
<del> aliasFields: [ "browser" ],
<add> aliasFields: ["browser"],
<ide> alias: {
<ide> pkgs: path.resolve(__dirname, "pkgs")
<ide> }
<ide><path>test/configCases/rule-set/chaining/webpack.config.js
<ide> module.exports = {
<ide> module: {
<ide> rules: [
<add>
<ide> {
<ide> resource: /abc\.js$/,
<ide> loader: "./loader?a!./loader?b"
<ide><path>test/configCases/rule-set/compiler/webpack.config.js
<ide> module.exports = {
<ide> name: "compiler-name",
<ide> module: {
<ide> rules: [
<add>
<ide> {
<ide> test: /a\.js$/,
<ide> compiler: "compiler",
<ide><path>test/configCases/rule-set/custom/webpack.config.js
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> {
<del> test: /[ab]\.js$/,
<del> use: function(data) {
<del> return {
<del> loader: "./loader",
<del> options: {
<del> resource: data.resource.replace(/^.*[\\/]/g, ""),
<del> resourceQuery: data.resourceQuery,
<del> issuer: data.issuer.replace(/^.*[\\/]/g, ""),
<del> }
<del> };
<del> }
<add> rules: [{
<add> test: /[ab]\.js$/,
<add> use: function(data) {
<add> return {
<add> loader: "./loader",
<add> options: {
<add> resource: data.resource.replace(/^.*[\\/]/g, ""),
<add> resourceQuery: data.resourceQuery,
<add> issuer: data.issuer.replace(/^.*[\\/]/g, ""),
<add> }
<add> };
<ide> }
<del> ]
<add> }]
<ide> }
<ide> };
<ide><path>test/configCases/rule-set/query/webpack.config.js
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> {
<del> resourceQuery: /^\?loader/,
<del> use: "./loader?query"
<del> }
<del> ]
<add> rules: [{
<add> resourceQuery: /^\?loader/,
<add> use: "./loader?query"
<add> }]
<ide> }
<ide> };
<ide><path>test/configCases/rule-set/resolve-options/webpack.config.js
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> {
<del> test: require.resolve("./a"),
<del> resolve: {
<del> alias: {
<del> "./wrong": "./ok"
<del> }
<add> rules: [{
<add> test: require.resolve("./a"),
<add> resolve: {
<add> alias: {
<add> "./wrong": "./ok"
<ide> }
<ide> }
<del> ]
<add> }]
<ide> }
<ide> };
<ide><path>test/configCases/rule-set/simple-use-array-fn/webpack.config.js
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> { oneOf: [
<del> {
<add> rules: [{
<add> oneOf: [{
<ide> test: {
<ide> and: [
<ide> /a.\.js$/,
<ide> module.exports = {
<ide> loader: "./loader",
<ide> options: "third"
<ide> }
<del> ] }
<del> ]
<add> ]
<add> }]
<ide> }
<ide> };
<del>
<ide><path>test/configCases/rule-set/simple-use-fn-array/webpack.config.js
<del>
<ide> function createFunctionArrayFromUseArray(useArray) {
<ide> return useArray.map(function(useItem) {
<ide> return function(data) {
<ide> var useArray = createFunctionArrayFromUseArray([
<ide>
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> { oneOf: [
<del> {
<add> rules: [{
<add> oneOf: [{
<ide> test: {
<ide> and: [
<ide> /a.\.js$/,
<ide> module.exports = {
<ide> loader: "./loader",
<ide> options: "third"
<ide> }
<del> ] }
<del> ]
<add> ]
<add> }]
<ide> }
<ide> };
<del>
<ide><path>test/configCases/rule-set/simple/webpack.config.js
<ide> module.exports = {
<ide> module: {
<del> rules: [
<del> { oneOf: [
<del> {
<add> rules: [{
<add> oneOf: [{
<ide> test: {
<ide> and: [
<ide> /a.\.js$/,
<ide> module.exports = {
<ide> loader: "./loader",
<ide> options: "third"
<ide> }
<del> ] }
<del> ]
<add> ]
<add> }]
<ide> }
<ide> };
<ide><path>test/configCases/side-effects/side-effects-override/webpack.config.js
<ide> module.exports = {
<ide> mode: "production",
<ide> module: {
<ide> rules: [
<add>
<ide> {
<ide> test: path.resolve(__dirname, "node_modules/pmodule"),
<ide> sideEffects: true
<ide><path>test/configCases/simple/multi-compiler/webpack.config.js
<del>module.exports = [
<del> {
<add>module.exports = [{
<ide>
<del> }
<del>];
<add>}];
<ide><path>test/statsCases/async-commons-chunk-auto/webpack.config.js
<ide> const stats = {
<ide> modules: false
<ide> };
<ide> module.exports = [
<add>
<ide> {
<ide> name: "disabled",
<ide> mode: "production",
<ide> module.exports = [
<ide> },
<ide> stats
<ide> },
<add>
<ide> {
<ide> name: "vendors",
<ide> mode: "production",
<ide> module.exports = [
<ide> },
<ide> stats
<ide> },
<add>
<ide> {
<ide> name: "multiple-vendors",
<ide> mode: "production",
<ide> module.exports = [
<ide> },
<ide> stats
<ide> }
<add>
<ide> ];
<ide><path>test/statsCases/commons-plugin-issue-4980/webpack.config.js
<ide> module.exports = [{
<ide> },
<ide> namedModules: true
<ide> }
<del>},{
<add>}, {
<ide> mode: "production",
<ide> output: {
<ide> chunkFilename: "[name].[chunkhash].js"
<ide><path>test/statsCases/define-plugin/webpack.config.js
<ide> var webpack = require("../../../");
<ide> module.exports = [
<add>
<ide> {
<ide> mode: "production",
<ide> entry: "./index",
<ide> module.exports = [
<ide> })
<ide> ]
<ide> },
<add>
<ide> {
<ide> mode: "production",
<ide> entry: "./index",
<ide> module.exports = [
<ide> })
<ide> ]
<ide> }
<add>
<ide> ];
<ide><path>test/statsCases/filter-warnings/webpack.config.js
<ide> module.exports = [
<ide> undefined,
<ide> "UglifyJs",
<ide> /UglifyJs/,
<del> warnings => true,
<del> ["UglifyJs"],
<add> warnings => true, ["UglifyJs"],
<ide> [/UglifyJs/],
<ide> [
<ide> warnings => true
<ide> ],
<ide> "should not filter",
<ide> /should not filter/,
<del> warnings => false,
<del> ["should not filter"],
<add> warnings => false, ["should not filter"],
<ide> [/should not filter/],
<ide> [
<ide> warnings => false
<ide> ]
<ide> ].map(filter => Object.assign({}, baseConfig, {
<del> stats: Object.assign({}, baseConfig.stats, { warningsFilter: filter })
<add> stats: Object.assign({}, baseConfig.stats, {
<add> warningsFilter: filter
<add> })
<ide> }));
<ide><path>test/statsCases/preset-mixed-array/webpack.config.js
<ide> module.exports = [
<add>
<ide> {
<ide> name: "minimal",
<ide> mode: "production",
<ide> entry: "./index",
<ide> stats: "minimal"
<ide> },
<add>
<ide> {
<ide> name: "none",
<ide> mode: "production",
<ide> entry: "./index",
<ide> stats: false
<ide> },
<add>
<ide> {
<ide> name: "verbose",
<ide> mode: "production",
<ide> module.exports = [
<ide> assets: false
<ide> }
<ide> }
<add>
<ide> ];
<ide><path>test/statsCases/preset-none-array/webpack.config.js
<ide> module.exports = [
<add>
<ide> {
<ide> mode: "production",
<ide> entry: "./index",
<ide> stats: "none"
<ide> },
<add>
<ide> {
<ide> mode: "production",
<ide> entry: "./index",
<ide> stats: "none"
<ide> }
<add>
<ide> ];
<ide><path>test/statsCases/scope-hoisting-multi/webpack.config.js
<ide> module.exports = [
<add>
<ide> {
<ide> mode: "production",
<ide> entry: {
<ide> module.exports = [
<ide> assets: false
<ide> }
<ide> },
<add>
<ide> {
<ide> mode: "production",
<ide> entry: {
<ide> module.exports = [
<ide> optimizationBailout: true
<ide> }
<ide> }
<add>
<ide> ];
<ide><path>test/statsCases/split-chunks/webpack.config.js
<ide> const stats = {
<ide> modules: false
<ide> };
<ide> module.exports = [
<add>
<ide> {
<ide> name: "default",
<ide> mode: "production",
<ide> module.exports = [
<ide> },
<ide> stats
<ide> },
<add>
<ide> {
<ide> name: "all-chunks",
<ide> mode: "production",
<ide> module.exports = [
<ide> },
<ide> stats
<ide> },
<add>
<ide> {
<ide> name: "manual",
<ide> mode: "production",
<ide> module.exports = [
<ide> },
<ide> stats
<ide> }
<add>
<ide> ]; | 43 |
PHP | PHP | update return values | 15f414604c8992c5c8eddfd5d4a2115bd37f3ec9 | <ide><path>src/Illuminate/Cache/ApcStore.php
<ide> public function get($key)
<ide> * @param string $key
<ide> * @param mixed $value
<ide> * @param int $minutes
<del> * @return array|bool
<add> * @return void
<ide> */
<ide> public function put($key, $value, $minutes)
<ide> {
<ide> public function put($key, $value, $minutes)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return array|bool
<add> * @return int|bool
<ide> */
<ide> public function increment($key, $value = 1)
<ide> {
<ide> public function increment($key, $value = 1)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return array|bool
<add> * @return int|bool
<ide> */
<ide> public function decrement($key, $value = 1)
<ide> {
<ide> public function forever($key, $value)
<ide> * Remove an item from the cache.
<ide> *
<ide> * @param string $key
<del> * @return array|bool
<add> * @return void
<ide> */
<ide> public function forget($key)
<ide> {
<ide><path>src/Illuminate/Cache/ApcWrapper.php
<ide> public function put($key, $value, $seconds)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return array|bool
<add> * @return int|bool
<ide> */
<ide> public function increment($key, $value)
<ide> {
<ide> public function increment($key, $value)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return array|bool
<add> * @return int|bool
<ide> */
<ide> public function decrement($key, $value)
<ide> {
<ide><path>src/Illuminate/Cache/ArrayStore.php
<ide> public function put($key, $value, $minutes)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int
<ide> */
<ide> public function increment($key, $value = 1)
<ide> {
<ide> public function increment($key, $value = 1)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int
<ide> */
<ide> public function decrement($key, $value = 1)
<ide> {
<ide><path>src/Illuminate/Cache/MemcachedStore.php
<ide> public function put($key, $value, $minutes)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int|bool
<ide> */
<ide> public function increment($key, $value = 1)
<ide> {
<ide> public function increment($key, $value = 1)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int|bool
<ide> */
<ide> public function decrement($key, $value = 1)
<ide> {
<ide><path>src/Illuminate/Cache/RedisStore.php
<ide> public function put($key, $value, $minutes)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int
<ide> */
<ide> public function increment($key, $value = 1)
<ide> {
<ide> public function increment($key, $value = 1)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int
<ide> */
<ide> public function decrement($key, $value = 1)
<ide> {
<ide><path>src/Illuminate/Cache/WinCacheStore.php
<ide> public function put($key, $value, $minutes)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int|bool
<ide> */
<ide> public function increment($key, $value = 1)
<ide> {
<ide> public function increment($key, $value = 1)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int|bool
<ide> */
<ide> public function decrement($key, $value = 1)
<ide> {
<ide><path>src/Illuminate/Cache/XCacheStore.php
<ide> public function put($key, $value, $minutes)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int
<ide> */
<ide> public function increment($key, $value = 1)
<ide> {
<ide> public function increment($key, $value = 1)
<ide> *
<ide> * @param string $key
<ide> * @param mixed $value
<del> * @return void
<add> * @return int
<ide> */
<ide> public function decrement($key, $value = 1)
<ide> { | 7 |
Javascript | Javascript | remove disableyielding feature flag | e91dd70ba28e7a20da3e5de8d787f3cde56341bd | <ide><path>packages/react-dom/src/__tests__/ReactDOMFiberAsync-test.internal.js
<ide> 'use strict';
<ide>
<ide> const React = require('react');
<del>const Fragment = React.Fragment;
<ide> let ReactFeatureFlags = require('shared/ReactFeatureFlags');
<ide>
<ide> let ReactDOM;
<ide> describe('ReactDOMFiberAsync', () => {
<ide> expect(root.createBatch).toBe(undefined);
<ide> });
<ide> });
<del>
<del> describe('Disable yielding', () => {
<del> beforeEach(() => {
<del> jest.resetModules();
<del> ReactFeatureFlags = require('shared/ReactFeatureFlags');
<del> ReactFeatureFlags.disableYielding = true;
<del> ReactFeatureFlags.debugRenderPhaseSideEffectsForStrictMode = false;
<del> ReactDOM = require('react-dom');
<del> Scheduler = require('scheduler');
<del> });
<del>
<del> it('wont yield during a render if yielding is disabled', () => {
<del> class A extends React.Component {
<del> render() {
<del> Scheduler.yieldValue('A');
<del> return <div>{this.props.children}</div>;
<del> }
<del> }
<del>
<del> class B extends React.Component {
<del> render() {
<del> Scheduler.yieldValue('B');
<del> return <div>{this.props.children}</div>;
<del> }
<del> }
<del>
<del> class C extends React.Component {
<del> render() {
<del> Scheduler.yieldValue('C');
<del> return <div>{this.props.children}</div>;
<del> }
<del> }
<del>
<del> let root = ReactDOM.unstable_createRoot(container);
<del>
<del> root.render(
<del> <Fragment>
<del> <A />
<del> <B />
<del> <C />
<del> </Fragment>,
<del> );
<del>
<del> expect(Scheduler).toHaveYielded([]);
<del>
<del> Scheduler.unstable_flushNumberOfYields(2);
<del> // Even though we just flushed two yields, we should have rendered
<del> // everything without yielding when the flag is on.
<del> expect(Scheduler).toHaveYielded(['A', 'B', 'C']);
<del> });
<del>
<del> it('wont suspend during a render if yielding is disabled', () => {
<del> let p = new Promise(resolve => {});
<del>
<del> function Suspend() {
<del> throw p;
<del> }
<del>
<del> let root = ReactDOM.unstable_createRoot(container);
<del> root.render(
<del> <React.Suspense fallback={'Loading'}>Initial</React.Suspense>,
<del> );
<del>
<del> Scheduler.flushAll();
<del> expect(container.textContent).toBe('Initial');
<del>
<del> root.render(
<del> <React.Suspense fallback={'Loading'}>
<del> <Suspend />
<del> </React.Suspense>,
<del> );
<del>
<del> expect(Scheduler).toHaveYielded([]);
<del>
<del> Scheduler.flushAll();
<del>
<del> // This should have flushed to the DOM even though we haven't ran the timers.
<del> expect(container.textContent).toBe('Loading');
<del> });
<del> });
<ide> });
<ide><path>packages/react-reconciler/src/ReactFiberWorkLoop.js
<ide> import {
<ide> enableSuspenseServerRenderer,
<ide> replayFailedUnitOfWorkWithInvokeGuardedCallback,
<ide> enableProfilerTimer,
<del> disableYielding,
<ide> enableSchedulerTracing,
<ide> revertPassiveEffectsChange,
<ide> } from 'shared/ReactFeatureFlags';
<ide> function renderRoot(
<ide> // possible.
<ide> const hasNotProcessedNewUpdates =
<ide> workInProgressRootLatestProcessedExpirationTime === Sync;
<del> if (hasNotProcessedNewUpdates && !disableYielding && !isSync) {
<add> if (hasNotProcessedNewUpdates && !isSync) {
<ide> // If we have not processed any new updates during this pass, then this is
<ide> // either a retry of an existing fallback state or a hidden tree.
<ide> // Hidden trees shouldn't be batched with other work and after that's
<ide> function renderRoot(
<ide> return commitRoot.bind(null, root);
<ide> }
<ide> case RootSuspendedWithDelay: {
<del> if (!disableYielding && !isSync) {
<add> if (!isSync) {
<ide> // We're suspended in a state that should be avoided. We'll try to avoid committing
<ide> // it for as long as the timeouts let us.
<ide> if (workInProgressRootHasPendingPing) {
<ide> function computeMsUntilSuspenseLoadingDelay(
<ide> committedExpirationTime: ExpirationTime,
<ide> suspenseConfig: SuspenseConfig,
<ide> ) {
<del> if (disableYielding) {
<del> // Timeout immediately when yielding is disabled.
<del> return 0;
<del> }
<del>
<ide> const busyMinDurationMs = (suspenseConfig.busyMinDurationMs: any) | 0;
<ide> if (busyMinDurationMs <= 0) {
<ide> return 0;
<ide><path>packages/react-reconciler/src/SchedulerWithReactIntegration.js
<ide> // CommonJS interop named imports.
<ide> import * as Scheduler from 'scheduler';
<ide> import {__interactionsRef} from 'scheduler/tracing';
<del>import {
<del> disableYielding,
<del> enableSchedulerTracing,
<del>} from 'shared/ReactFeatureFlags';
<add>import {enableSchedulerTracing} from 'shared/ReactFeatureFlags';
<ide> import invariant from 'shared/invariant';
<ide>
<ide> const {
<ide> export const IdlePriority: ReactPriorityLevel = 95;
<ide> // NoPriority is the absence of priority. Also React-only.
<ide> export const NoPriority: ReactPriorityLevel = 90;
<ide>
<del>export const shouldYield = disableYielding
<del> ? () => false // Never yield when `disableYielding` is on
<del> : Scheduler_shouldYield;
<add>export const shouldYield = Scheduler_shouldYield;
<ide>
<ide> let syncQueue: Array<SchedulerCallback> | null = null;
<ide> let immediateQueueCallbackNode: mixed | null = null;
<ide><path>packages/shared/ReactFeatureFlags.js
<ide> export function addUserTimingListener() {
<ide> // Disable javascript: URL strings in href for XSS protection.
<ide> export const disableJavaScriptURLs = false;
<ide>
<del>// Disables yielding during render in Concurrent Mode. Used for debugging only.
<del>export const disableYielding = false;
<del>
<ide> // React Fire: prevent the value and checked attributes from syncing
<ide> // with their related DOM properties
<ide> export const disableInputAttributeSyncing = false;
<ide><path>packages/shared/forks/ReactFeatureFlags.native-fb.js
<ide> export const warnAboutShorthandPropertyCollision = false;
<ide> export const enableSchedulerDebugging = false;
<ide> export const debugRenderPhaseSideEffectsForStrictMode = true;
<ide> export const disableJavaScriptURLs = false;
<del>export const disableYielding = false;
<ide> export const disableInputAttributeSyncing = false;
<ide> export const replayFailedUnitOfWorkWithInvokeGuardedCallback = __DEV__;
<ide> export const warnAboutDeprecatedLifecycles = true;
<ide><path>packages/shared/forks/ReactFeatureFlags.native-oss.js
<ide> export const enableProfilerTimer = __PROFILE__;
<ide> export const enableSchedulerTracing = __PROFILE__;
<ide> export const enableSuspenseServerRenderer = false;
<ide> export const disableJavaScriptURLs = false;
<del>export const disableYielding = false;
<ide> export const disableInputAttributeSyncing = false;
<ide> export const enableStableConcurrentModeAPIs = false;
<ide> export const warnAboutShorthandPropertyCollision = false;
<ide><path>packages/shared/forks/ReactFeatureFlags.persistent.js
<ide> export const enableProfilerTimer = __PROFILE__;
<ide> export const enableSchedulerTracing = __PROFILE__;
<ide> export const enableSuspenseServerRenderer = false;
<ide> export const disableJavaScriptURLs = false;
<del>export const disableYielding = false;
<ide> export const disableInputAttributeSyncing = false;
<ide> export const enableStableConcurrentModeAPIs = false;
<ide> export const warnAboutShorthandPropertyCollision = false;
<ide><path>packages/shared/forks/ReactFeatureFlags.test-renderer.js
<ide> export const enableProfilerTimer = false;
<ide> export const enableSchedulerTracing = false;
<ide> export const enableSuspenseServerRenderer = false;
<ide> export const disableJavaScriptURLs = false;
<del>export const disableYielding = false;
<ide> export const disableInputAttributeSyncing = false;
<ide> export const enableStableConcurrentModeAPIs = false;
<ide> export const warnAboutShorthandPropertyCollision = false;
<ide><path>packages/shared/forks/ReactFeatureFlags.test-renderer.www.js
<ide> export const enableStableConcurrentModeAPIs = false;
<ide> export const enableSchedulerDebugging = false;
<ide> export const warnAboutDeprecatedSetNativeProps = false;
<ide> export const disableJavaScriptURLs = false;
<del>export const disableYielding = false;
<ide> export const enableEventAPI = true;
<ide> export const enableJSXTransformAPI = true;
<ide> export const warnAboutMissingMockScheduler = true;
<ide><path>packages/shared/forks/ReactFeatureFlags.www.js
<ide> export const {
<ide> debugRenderPhaseSideEffectsForStrictMode,
<ide> replayFailedUnitOfWorkWithInvokeGuardedCallback,
<ide> warnAboutDeprecatedLifecycles,
<del> disableYielding,
<ide> disableInputAttributeSyncing,
<ide> warnAboutShorthandPropertyCollision,
<ide> warnAboutDeprecatedSetNativeProps, | 10 |
Go | Go | restore stack deploy integration test with dab | bd8de8d8be9e367af9e045d0fdead4462fed490c | <ide><path>cli/command/stack/deploy.go
<ide> func newDeployCommand(dockerCli *command.DockerCli) *cobra.Command {
<ide> }
<ide>
<ide> func runDeploy(dockerCli *command.DockerCli, opts deployOptions) error {
<del> if opts.bundlefile == "" && opts.composefile == "" {
<add> switch {
<add> case opts.bundlefile == "" && opts.composefile == "":
<ide> return fmt.Errorf("Please specify either a bundle file (with --bundle-file) or a Compose file (with --compose-file).")
<del> }
<del>
<del> if opts.bundlefile != "" && opts.composefile != "" {
<add> case opts.bundlefile != "" && opts.composefile != "":
<ide> return fmt.Errorf("You cannot specify both a bundle file and a Compose file.")
<del> }
<del>
<del> info, err := dockerCli.Client().Info(context.Background())
<del> if err != nil {
<del> return err
<del> }
<del> if !info.Swarm.ControlAvailable {
<del> return fmt.Errorf("This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again.")
<del> }
<del>
<del> if opts.bundlefile != "" {
<add> case opts.bundlefile != "":
<ide> return deployBundle(dockerCli, opts)
<del> } else {
<add> default:
<ide> return deployCompose(dockerCli, opts)
<ide> }
<ide> }
<ide>
<del>func deployBundle(dockerCli *command.DockerCli, opts deployOptions) error {
<del> bundle, err := loadBundlefile(dockerCli.Err(), opts.namespace, opts.bundlefile)
<del> if err != nil {
<del> return err
<del> }
<del>
<del> namespace := namespace{name: opts.namespace}
<del>
<del> networks := make(map[string]types.NetworkCreate)
<del> for _, service := range bundle.Services {
<del> for _, networkName := range service.Networks {
<del> networks[networkName] = types.NetworkCreate{
<del> Labels: getStackLabels(namespace.name, nil),
<del> }
<del> }
<del> }
<del>
<del> services := make(map[string]swarm.ServiceSpec)
<del> for internalName, service := range bundle.Services {
<del> name := namespace.scope(internalName)
<del>
<del> var ports []swarm.PortConfig
<del> for _, portSpec := range service.Ports {
<del> ports = append(ports, swarm.PortConfig{
<del> Protocol: swarm.PortConfigProtocol(portSpec.Protocol),
<del> TargetPort: portSpec.Port,
<del> })
<del> }
<del>
<del> nets := []swarm.NetworkAttachmentConfig{}
<del> for _, networkName := range service.Networks {
<del> nets = append(nets, swarm.NetworkAttachmentConfig{
<del> Target: namespace.scope(networkName),
<del> Aliases: []string{networkName},
<del> })
<del> }
<del>
<del> serviceSpec := swarm.ServiceSpec{
<del> Annotations: swarm.Annotations{
<del> Name: name,
<del> Labels: getStackLabels(namespace.name, service.Labels),
<del> },
<del> TaskTemplate: swarm.TaskSpec{
<del> ContainerSpec: swarm.ContainerSpec{
<del> Image: service.Image,
<del> Command: service.Command,
<del> Args: service.Args,
<del> Env: service.Env,
<del> // Service Labels will not be copied to Containers
<del> // automatically during the deployment so we apply
<del> // it here.
<del> Labels: getStackLabels(namespace.name, nil),
<del> },
<del> },
<del> EndpointSpec: &swarm.EndpointSpec{
<del> Ports: ports,
<del> },
<del> Networks: nets,
<del> }
<del>
<del> services[internalName] = serviceSpec
<del> }
<del>
<del> ctx := context.Background()
<del>
<del> if err := createNetworks(ctx, dockerCli, namespace, networks); err != nil {
<del> return err
<del> }
<del> return deployServices(ctx, dockerCli, services, namespace, opts.sendRegistryAuth)
<del>}
<del>
<ide> func deployCompose(dockerCli *command.DockerCli, opts deployOptions) error {
<ide> configDetails, err := getConfigDetails(opts)
<ide> if err != nil {
<ide><path>cli/command/stack/deploy_bundlefile.go
<add>package stack
<add>
<add>import (
<add> "golang.org/x/net/context"
<add>
<add> "github.com/docker/docker/api/types"
<add> "github.com/docker/docker/api/types/swarm"
<add> "github.com/docker/docker/cli/command"
<add>)
<add>
<add>func deployBundle(dockerCli *command.DockerCli, opts deployOptions) error {
<add> bundle, err := loadBundlefile(dockerCli.Err(), opts.namespace, opts.bundlefile)
<add> if err != nil {
<add> return err
<add> }
<add>
<add> namespace := namespace{name: opts.namespace}
<add>
<add> networks := make(map[string]types.NetworkCreate)
<add> for _, service := range bundle.Services {
<add> for _, networkName := range service.Networks {
<add> networks[networkName] = types.NetworkCreate{
<add> Labels: getStackLabels(namespace.name, nil),
<add> }
<add> }
<add> }
<add>
<add> services := make(map[string]swarm.ServiceSpec)
<add> for internalName, service := range bundle.Services {
<add> name := namespace.scope(internalName)
<add>
<add> var ports []swarm.PortConfig
<add> for _, portSpec := range service.Ports {
<add> ports = append(ports, swarm.PortConfig{
<add> Protocol: swarm.PortConfigProtocol(portSpec.Protocol),
<add> TargetPort: portSpec.Port,
<add> })
<add> }
<add>
<add> nets := []swarm.NetworkAttachmentConfig{}
<add> for _, networkName := range service.Networks {
<add> nets = append(nets, swarm.NetworkAttachmentConfig{
<add> Target: namespace.scope(networkName),
<add> Aliases: []string{networkName},
<add> })
<add> }
<add>
<add> serviceSpec := swarm.ServiceSpec{
<add> Annotations: swarm.Annotations{
<add> Name: name,
<add> Labels: getStackLabels(namespace.name, service.Labels),
<add> },
<add> TaskTemplate: swarm.TaskSpec{
<add> ContainerSpec: swarm.ContainerSpec{
<add> Image: service.Image,
<add> Command: service.Command,
<add> Args: service.Args,
<add> Env: service.Env,
<add> // Service Labels will not be copied to Containers
<add> // automatically during the deployment so we apply
<add> // it here.
<add> Labels: getStackLabels(namespace.name, nil),
<add> },
<add> },
<add> EndpointSpec: &swarm.EndpointSpec{
<add> Ports: ports,
<add> },
<add> Networks: nets,
<add> }
<add>
<add> services[internalName] = serviceSpec
<add> }
<add>
<add> ctx := context.Background()
<add>
<add> if err := createNetworks(ctx, dockerCli, namespace, networks); err != nil {
<add> return err
<add> }
<add> return deployServices(ctx, dockerCli, services, namespace, opts.sendRegistryAuth)
<add>}
<ide><path>integration-cli/docker_cli_stack_test.go
<ide> package main
<ide>
<ide> import (
<add> "io/ioutil"
<add> "os"
<add>
<ide> "github.com/docker/docker/pkg/integration/checker"
<ide> "github.com/go-check/check"
<ide> )
<ide> func (s *DockerSwarmSuite) TestStackDeployComposeFile(c *check.C) {
<ide> c.Assert(err, checker.IsNil)
<ide> c.Assert(out, check.Equals, "NAME SERVICES\n")
<ide> }
<add>
<add>// testDAB is the DAB JSON used for testing.
<add>// TODO: Use template/text and substitute "Image" with the result of
<add>// `docker inspect --format '{{index .RepoDigests 0}}' busybox:latest`
<add>const testDAB = `{
<add> "Version": "0.1",
<add> "Services": {
<add> "srv1": {
<add> "Image": "busybox@sha256:e4f93f6ed15a0cdd342f5aae387886fba0ab98af0a102da6276eaf24d6e6ade0",
<add> "Command": ["top"]
<add> },
<add> "srv2": {
<add> "Image": "busybox@sha256:e4f93f6ed15a0cdd342f5aae387886fba0ab98af0a102da6276eaf24d6e6ade0",
<add> "Command": ["tail"],
<add> "Args": ["-f", "/dev/null"]
<add> }
<add> }
<add>}`
<add>
<add>func (s *DockerSwarmSuite) TestStackDeployWithDAB(c *check.C) {
<add> testRequires(c, ExperimentalDaemon)
<add> // setup
<add> testStackName := "test"
<add> testDABFileName := testStackName + ".dab"
<add> defer os.RemoveAll(testDABFileName)
<add> err := ioutil.WriteFile(testDABFileName, []byte(testDAB), 0444)
<add> c.Assert(err, checker.IsNil)
<add> d := s.AddDaemon(c, true, true)
<add> // deploy
<add> stackArgs := []string{
<add> "stack", "deploy",
<add> "--bundle-file", testDABFileName,
<add> testStackName,
<add> }
<add> out, err := d.Cmd(stackArgs...)
<add> c.Assert(err, checker.IsNil)
<add> c.Assert(out, checker.Contains, "Loading bundle from test.dab\n")
<add> c.Assert(out, checker.Contains, "Creating service test_srv1\n")
<add> c.Assert(out, checker.Contains, "Creating service test_srv2\n")
<add> // ls
<add> stackArgs = []string{"stack", "ls"}
<add> out, err = d.Cmd(stackArgs...)
<add> c.Assert(err, checker.IsNil)
<add> c.Assert(out, check.Equals, "NAME SERVICES\n"+"test 2\n")
<add> // rm
<add> stackArgs = []string{"stack", "rm", testStackName}
<add> out, err = d.Cmd(stackArgs...)
<add> c.Assert(err, checker.IsNil)
<add> c.Assert(out, checker.Contains, "Removing service test_srv1\n")
<add> c.Assert(out, checker.Contains, "Removing service test_srv2\n")
<add> // ls (empty)
<add> stackArgs = []string{"stack", "ls"}
<add> out, err = d.Cmd(stackArgs...)
<add> c.Assert(err, checker.IsNil)
<add> c.Assert(out, check.Equals, "NAME SERVICES\n")
<add>} | 3 |
Python | Python | add unittest directory discovery | cb2a07e16dc5a01e9bd72d30f244d553df527d9e | <ide><path>celery/tests/__init__.py
<add>from djangox.test.depth import alltests
<add>
<add>def suite():
<add> return alltests(__file__, __name__)
<ide><path>setup.py
<ide> scripts=["celery/bin/celeryd"],
<ide> zip_safe=False,
<ide> install_requires=[
<add> 'django-unittest-depth',
<ide> 'simplejson',
<ide> 'carrot',
<ide> 'django', | 2 |
Javascript | Javascript | add more tests for suppresshydrationwarning | 5f7f5280836a44d4b81c1cb13f51e7482470cf1b | <ide><path>packages/react-dom/src/__tests__/ReactDOMFizzServer-test.js
<ide> describe('ReactDOMFizzServer', () => {
<ide> </ul>,
<ide> );
<ide> });
<del>
<del> // @gate experimental
<del> it('suppresses and fixes text mismatches with suppressHydrationWarning', async () => {
<del> function App({isClient}) {
<del> return (
<del> <div>
<del> <span
<del> suppressHydrationWarning={true}
<del> data-attr={isClient ? 'client-attr' : 'server-attr'}>
<del> {isClient ? 'Client Text' : 'Server Text'}
<del> </span>
<del> <span suppressHydrationWarning={true}>{isClient ? 2 : 1}</span>
<del> <span suppressHydrationWarning={true}>
<del> hello,{isClient ? 'client' : 'server'}
<del> </span>
<del> </div>
<del> );
<del> }
<del> await act(async () => {
<del> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<del> <App isClient={false} />,
<del> );
<del> pipe(writable);
<del> });
<del> expect(getVisibleChildren(container)).toEqual(
<del> <div>
<del> <span data-attr="server-attr">Server Text</span>
<del> <span>1</span>
<del> <span>
<del> {'hello,'}
<del> {'server'}
<del> </span>
<del> </div>,
<del> );
<del> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<del> onRecoverableError(error) {
<del> // Don't miss a hydration error. There should be none.
<del> Scheduler.unstable_yieldValue(error.message);
<del> },
<del> });
<del> expect(Scheduler).toFlushAndYield([]);
<del> // The text mismatch should be *silently* fixed. Even in production.
<del> // The attribute mismatch should be ignored and not fixed.
<del> expect(getVisibleChildren(container)).toEqual(
<del> <div>
<del> <span data-attr="server-attr">Client Text</span>
<del> <span>2</span>
<del> <span>
<del> {'hello,'}
<del> {'client'}
<del> </span>
<del> </div>,
<del> );
<del> });
<del>
<del> // @gate experimental
<del> it('suppresses and does not fix html mismatches with suppressHydrationWarning', async () => {
<del> function App({isClient}) {
<del> return (
<del> <div>
<del> <p
<del> suppressHydrationWarning={true}
<del> dangerouslySetInnerHTML={{
<del> __html: isClient ? 'Client HTML' : 'Server HTML',
<del> }}
<del> />
<del> </div>
<del> );
<del> }
<del> await act(async () => {
<del> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<del> <App isClient={false} />,
<del> );
<del> pipe(writable);
<del> });
<del> expect(getVisibleChildren(container)).toEqual(
<del> <div>
<del> <p>Server HTML</p>
<del> </div>,
<del> );
<del> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<del> onRecoverableError(error) {
<del> Scheduler.unstable_yieldValue(error.message);
<del> },
<del> });
<del> expect(Scheduler).toFlushAndYield([]);
<del> expect(getVisibleChildren(container)).toEqual(
<del> <div>
<del> <p>Server HTML</p>
<del> </div>,
<del> );
<del> });
<ide> });
<ide><path>packages/react-dom/src/__tests__/ReactDOMFizzSuppressHydrationWarning-test.js
<add>/**
<add> * Copyright (c) Facebook, Inc. and its affiliates.
<add> *
<add> * This source code is licensed under the MIT license found in the
<add> * LICENSE file in the root directory of this source tree.
<add> *
<add> * @emails react-core
<add> */
<add>
<add>'use strict';
<add>
<add>let JSDOM;
<add>let Stream;
<add>let Scheduler;
<add>let React;
<add>let ReactDOMClient;
<add>let ReactDOMFizzServer;
<add>let document;
<add>let writable;
<add>let container;
<add>let buffer = '';
<add>let hasErrored = false;
<add>let fatalError = undefined;
<add>
<add>describe('ReactDOMFizzServerHydrationWarning', () => {
<add> beforeEach(() => {
<add> jest.resetModules();
<add> JSDOM = require('jsdom').JSDOM;
<add> Scheduler = require('scheduler');
<add> React = require('react');
<add> ReactDOMClient = require('react-dom/client');
<add> if (__EXPERIMENTAL__) {
<add> ReactDOMFizzServer = require('react-dom/server');
<add> }
<add> Stream = require('stream');
<add>
<add> // Test Environment
<add> const jsdom = new JSDOM(
<add> '<!DOCTYPE html><html><head></head><body><div id="container">',
<add> {
<add> runScripts: 'dangerously',
<add> },
<add> );
<add> document = jsdom.window.document;
<add> container = document.getElementById('container');
<add>
<add> buffer = '';
<add> hasErrored = false;
<add>
<add> writable = new Stream.PassThrough();
<add> writable.setEncoding('utf8');
<add> writable.on('data', chunk => {
<add> buffer += chunk;
<add> });
<add> writable.on('error', error => {
<add> hasErrored = true;
<add> fatalError = error;
<add> });
<add> });
<add>
<add> async function act(callback) {
<add> await callback();
<add> // Await one turn around the event loop.
<add> // This assumes that we'll flush everything we have so far.
<add> await new Promise(resolve => {
<add> setImmediate(resolve);
<add> });
<add> if (hasErrored) {
<add> throw fatalError;
<add> }
<add> // JSDOM doesn't support stream HTML parser so we need to give it a proper fragment.
<add> // We also want to execute any scripts that are embedded.
<add> // We assume that we have now received a proper fragment of HTML.
<add> const bufferedContent = buffer;
<add> buffer = '';
<add> const fakeBody = document.createElement('body');
<add> fakeBody.innerHTML = bufferedContent;
<add> while (fakeBody.firstChild) {
<add> const node = fakeBody.firstChild;
<add> if (node.nodeName === 'SCRIPT') {
<add> const script = document.createElement('script');
<add> script.textContent = node.textContent;
<add> fakeBody.removeChild(node);
<add> container.appendChild(script);
<add> } else {
<add> container.appendChild(node);
<add> }
<add> }
<add> }
<add>
<add> function getVisibleChildren(element) {
<add> const children = [];
<add> let node = element.firstChild;
<add> while (node) {
<add> if (node.nodeType === 1) {
<add> if (
<add> node.tagName !== 'SCRIPT' &&
<add> node.tagName !== 'TEMPLATE' &&
<add> node.tagName !== 'template' &&
<add> !node.hasAttribute('hidden') &&
<add> !node.hasAttribute('aria-hidden')
<add> ) {
<add> const props = {};
<add> const attributes = node.attributes;
<add> for (let i = 0; i < attributes.length; i++) {
<add> if (
<add> attributes[i].name === 'id' &&
<add> attributes[i].value.includes(':')
<add> ) {
<add> // We assume this is a React added ID that's a non-visual implementation detail.
<add> continue;
<add> }
<add> props[attributes[i].name] = attributes[i].value;
<add> }
<add> props.children = getVisibleChildren(node);
<add> children.push(React.createElement(node.tagName.toLowerCase(), props));
<add> }
<add> } else if (node.nodeType === 3) {
<add> children.push(node.data);
<add> }
<add> node = node.nextSibling;
<add> }
<add> return children.length === 0
<add> ? undefined
<add> : children.length === 1
<add> ? children[0]
<add> : children;
<add> }
<add>
<add> // @gate experimental
<add> it('suppresses and fixes text mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> {isClient ? 'Client Text' : 'Server Text'}
<add> </span>
<add> <span suppressHydrationWarning={true}>{isClient ? 2 : 1}</span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>Server Text</span>
<add> <span>1</span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> // Don't miss a hydration error. There should be none.
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> expect(Scheduler).toFlushAndYield([]);
<add> // The text mismatch should be *silently* fixed. Even in production.
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>Client Text</span>
<add> <span>2</span>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('suppresses and fixes multiple text node mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> {isClient ? 'Client1' : 'Server1'}
<add> {isClient ? 'Client2' : 'Server2'}
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> {'Server1'}
<add> {'Server2'}
<add> </span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> expect(Scheduler).toFlushAndYield([]);
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> {'Client1'}
<add> {'Client2'}
<add> </span>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('errors on text-to-element mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> Hello, {isClient ? <span>Client</span> : 'Server'}!
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> {'Hello, '}
<add> {'Server'}
<add> {'!'}
<add> </span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> Hello, <span>Client</span>!
<add> </span>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('suppresses and fixes client-only single text node mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> {isClient ? 'Client' : null}
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span />
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> expect(Scheduler).toFlushAndYield([]);
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>{'Client'}</span>
<add> </div>,
<add> );
<add> });
<add>
<add> // TODO: This behavior is not consistent with client-only single text node.
<add> // @gate experimental
<add> it('errors on server-only single text node mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> {isClient ? null : 'Server'}
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>Server</span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span />
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('errors on client-only extra text node mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> <span>Shared</span>
<add> {isClient ? 'Client' : null}
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> <span>Shared</span>
<add> </span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> <span>Shared</span>
<add> {'Client'}
<add> </span>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('errors on server-only extra text node mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> <span>Shared</span>
<add> {isClient ? null : 'Server'}
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> <span>Shared</span>Server
<add> </span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> <span>Shared</span>
<add> </span>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('errors on element-to-text mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span suppressHydrationWarning={true}>
<add> Hello, {isClient ? 'Client' : <span>Server</span>}!
<add> </span>
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> Hello, <span>Server</span>!
<add> </span>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span>
<add> {'Hello, '}
<add> {'Client'}
<add> {'!'}
<add> </span>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('suppresses and does not fix attribute mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <span
<add> suppressHydrationWarning={true}
<add> className={isClient ? 'client' : 'server'}
<add> style={{opacity: isClient ? 1 : 0}}
<add> data-serveronly={isClient ? null : 'server-only'}
<add> data-clientonly={isClient ? 'client-only' : null}
<add> />
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span class="server" style="opacity:0" data-serveronly="server-only" />
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> expect(Scheduler).toFlushAndYield([]);
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <span class="server" style="opacity:0" data-serveronly="server-only" />
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('suppresses and does not fix html mismatches with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div>
<add> <p
<add> suppressHydrationWarning={true}
<add> dangerouslySetInnerHTML={{
<add> __html: isClient ? 'Client HTML' : 'Server HTML',
<add> }}
<add> />
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <p>Server HTML</p>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> expect(Scheduler).toFlushAndYield([]);
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <p>Server HTML</p>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('errors on insertions with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div suppressHydrationWarning={true}>
<add> <p>Client and server</p>
<add> {isClient && <p>Client only</p>}
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <p>Client and server</p>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <p>Client and server</p>
<add> <p>Client only</p>
<add> </div>,
<add> );
<add> });
<add>
<add> // @gate experimental
<add> it('errors on deletions with suppressHydrationWarning', async () => {
<add> function App({isClient}) {
<add> return (
<add> <div suppressHydrationWarning={true}>
<add> <p>Client and server</p>
<add> {!isClient && <p>Server only</p>}
<add> </div>
<add> );
<add> }
<add> await act(async () => {
<add> const {pipe} = ReactDOMFizzServer.renderToPipeableStream(
<add> <App isClient={false} />,
<add> );
<add> pipe(writable);
<add> });
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <p>Client and server</p>
<add> <p>Server only</p>
<add> </div>,
<add> );
<add> ReactDOMClient.hydrateRoot(container, <App isClient={true} />, {
<add> onRecoverableError(error) {
<add> Scheduler.unstable_yieldValue(error.message);
<add> },
<add> });
<add> if (gate(flags => flags.enableClientRenderFallbackOnHydrationMismatch)) {
<add> expect(() => {
<add> expect(Scheduler).toFlushAndYield([
<add> 'Hydration failed because the initial UI does not match what was rendered on the server.',
<add> 'There was an error while hydrating. Because the error happened outside of a Suspense boundary, the entire root will switch to client rendering.',
<add> ]);
<add> }).toErrorDev(
<add> [
<add> 'An error occurred during hydration. The server HTML was replaced with client content in <div>.',
<add> ],
<add> {withoutStack: true},
<add> );
<add> } else {
<add> // This used to not warn.
<add> expect(Scheduler).toFlushAndYield([]);
<add> }
<add> expect(getVisibleChildren(container)).toEqual(
<add> <div>
<add> <p>Client and server</p>
<add> </div>,
<add> );
<add> });
<add>}); | 2 |
Java | Java | fix wrong modal size in fullscreen | 8ba06cbfb1c60980437e38c358a971ff105af245 | <ide><path>ReactAndroid/src/main/java/com/facebook/react/views/modal/ModalHostHelper.java
<ide>
<ide> import android.annotation.TargetApi;
<ide> import android.content.Context;
<add>import android.content.res.Resources;
<add>import android.content.res.TypedArray;
<ide> import android.graphics.Point;
<ide> import android.view.Display;
<ide> import android.view.WindowManager;
<ide> public static Point getModalHostSize(Context context) {
<ide> // getSize will return the dimensions of the screen in its current orientation
<ide> display.getSize(SIZE_POINT);
<ide>
<add> int[] attrs = {android.R.attr.windowFullscreen};
<add> Resources.Theme theme = context.getTheme();
<add> TypedArray ta = theme.obtainStyledAttributes(attrs);
<add> boolean windowFullscreen = ta.getBoolean(0, false);
<add>
<add> // We need to add the status bar height to the height if we have a fullscreen window,
<add> // because Display.getCurrentSizeRange doesn't include it.
<add> Resources resources = context.getResources();
<add> int statusBarId = resources.getIdentifier("status_bar_height", "dimen", "android");
<add> int statusBarHeight = 0;
<add> if (windowFullscreen && statusBarId > 0) {
<add> statusBarHeight = (int) resources.getDimension(statusBarId);
<add> }
<add>
<ide> if (SIZE_POINT.x < SIZE_POINT.y) {
<ide> // If we are vertical the width value comes from min width and height comes from max height
<del> return new Point(MIN_POINT.x, MAX_POINT.y);
<add> return new Point(MIN_POINT.x, MAX_POINT.y + statusBarHeight);
<ide> } else {
<ide> // If we are horizontal the width value comes from max width and height comes from min height
<del> return new Point(MAX_POINT.x, MIN_POINT.y);
<add> return new Point(MAX_POINT.x, MIN_POINT.y + statusBarHeight);
<ide> }
<ide> }
<ide> } | 1 |
Ruby | Ruby | use array.wrap to quiet 1.8.8 deprecation | 2538ef0d09954340701354e8fcf57e76c4210416 | <ide><path>activerecord/lib/active_record/fixtures.rb
<ide> require 'csv'
<ide> require 'zlib'
<ide> require 'active_support/dependencies'
<del>require 'active_support/core_ext/logger'
<add>require 'active_support/core_ext/array/wrap'
<ide> require 'active_support/core_ext/object/blank'
<add>require 'active_support/core_ext/logger'
<ide>
<ide> if RUBY_VERSION < '1.9'
<ide> module YAML #:nodoc:
<ide> def require_fixture_classes(table_names = nil)
<ide> end
<ide>
<ide> def setup_fixture_accessors(table_names = nil)
<del> table_names = [table_names] if table_names && !table_names.respond_to?(:each)
<del> (table_names || fixture_table_names).each do |table_name|
<add> table_names = Array.wrap(table_names || fixture_table_names)
<add> table_names.each do |table_name|
<ide> table_name = table_name.to_s.tr('.', '_')
<ide>
<ide> define_method(table_name) do |*fixtures| | 1 |
Javascript | Javascript | remove unneeded asynchrony from routing tests | b23292fff540ca74967139c81530b6f18fc7030b | <ide><path>packages/ember/tests/routing/basic_test.js
<ide> test("The Specials Page defaults to looking models up via `find`", function() {
<ide> });
<ide>
<ide> test("The Special Page returning a promise puts the app into a loading state until the promise is resolved", function() {
<del> stop();
<del>
<ide> Router.map(function() {
<ide> this.route("home", { path: "/" });
<ide> this.resource("special", { path: "/specials/:menu_item_id" });
<ide> test("The Special Page returning a promise puts the app into a loading state unt
<ide> menuItem.resolve(menuItem);
<ide> });
<ide>
<del> setTimeout(function() {
<del> equal(Ember.$('p', '#qunit-fixture').text(), "1", "The app is now in the specials state");
<del> start();
<del> }, 100);
<add> equal(Ember.$('p', '#qunit-fixture').text(), "1", "The app is now in the specials state");
<ide> });
<ide>
<ide> test("The Special page returning an error puts the app into the failure state", function() {
<del> stop();
<del>
<ide> Router.map(function() {
<ide> this.route("home", { path: "/" });
<ide> this.resource("special", { path: "/specials/:menu_item_id" });
<ide> test("The Special page returning an error puts the app into the failure state",
<ide> menuItem.resolve(menuItem);
<ide> });
<ide>
<del> setTimeout(function() {
<del> equal(Ember.$('p', '#qunit-fixture').text(), "FAILURE!", "The app is now in the failure state");
<del> start();
<del> }, 100);
<add> equal(Ember.$('p', '#qunit-fixture').text(), "FAILURE!", "The app is now in the failure state");
<ide> });
<ide>
<ide> test("The Special page returning an error puts the app into a default failure state if none provided", function() {
<del> stop();
<del>
<ide> Router.map(function() {
<ide> this.route("home", { path: "/" });
<ide> this.resource("special", { path: "/specials/:menu_item_id" });
<ide> test("The Special page returning an error puts the app into a default failure st
<ide> menuItem.resolve(menuItem);
<ide> });
<ide>
<del> setTimeout(function() {
<del> equal(lastFailure, 'Setup error');
<del> start();
<del> }, 100);
<add> equal(lastFailure, 'Setup error');
<ide> });
<ide>
<ide> test("Moving from one page to another triggers the correct callbacks", function() { | 1 |
Javascript | Javascript | remove test-path-parse-6229.js from known issues | f9da55cca2eda56ad2e35f654272c257d378d907 | <ide><path>test/known_issues/test-path-parse-6229.js
<del>'use strict';
<del>// Refs: https://github.com/nodejs/node/issues/6229
<del>
<del>require('../common');
<del>const assert = require('assert');
<del>const path = require('path');
<del>
<del>{
<del> // The path `/foo/bar` is not the same path as `/foo/bar/`
<del> const parsed1 = path.posix.parse('/foo/bar');
<del> const parsed2 = path.posix.parse('/foo/bar/');
<del>
<del> assert.strictEqual(parsed1.root, '/');
<del> assert.strictEqual(parsed1.dir, '/foo');
<del> assert.strictEqual(parsed1.base, 'bar');
<del>
<del> assert.strictEqual(parsed2.root, '/');
<del> assert.strictEqual(parsed2.dir, '/foo/bar');
<del> assert.strictEqual(parsed2.base, '');
<del>}
<del>
<del>{
<del> // The path `\\foo\\bar` is not the same path as `\\foo\\bar\\`
<del> const parsed1 = path.win32.parse('\\foo\\bar');
<del> const parsed2 = path.win32.parse('\\foo\\bar\\');
<del>
<del> assert.strictEqual(parsed1.root, '\\');
<del> assert.strictEqual(parsed1.dir, '\\foo');
<del> assert.strictEqual(parsed1.base, 'bar');
<del>
<del> assert.strictEqual(parsed2.root, '\\');
<del> assert.strictEqual(parsed2.dir, '\\foo\\bar');
<del> assert.strictEqual(parsed2.base, '');
<del>} | 1 |
Go | Go | fix error msg | 2ca6896aeef24ee6d86072d4e9e72b819f718d0a | <ide><path>integration/container/rename_test.go
<ide> func TestRenameAnonymousContainer(t *testing.T) {
<ide>
<ide> inspect, err := client.ContainerInspect(ctx, cID)
<ide> assert.NilError(t, err)
<del> assert.Check(t, is.Equal(0, inspect.State.ExitCode), "container %s exited with the wrong exitcode: %+v", cID, inspect)
<add> assert.Check(t, is.Equal(0, inspect.State.ExitCode), "container %s exited with the wrong exitcode: %s", cID, inspect.State.Error)
<ide> }
<ide>
<ide> // TODO: should be a unit test | 1 |
Go | Go | fix the typo | c33cdf9ee3ece0358f828c7ac8f6367c3414e67a | <ide><path>daemon/graphdriver/btrfs/btrfs.go
<ide> func (d *Driver) Create(id, parent, mountLabel string) error {
<ide> return err
<ide> }
<ide> if !st.IsDir() {
<del> return fmt.Errorf("%s: not a direcotory", parentDir)
<add> return fmt.Errorf("%s: not a directory", parentDir)
<ide> }
<ide> if err := subvolSnapshot(parentDir, subvolumes, id); err != nil {
<ide> return err | 1 |
Javascript | Javascript | remove addmandatoryattributes. close gh-1037 | bb570fc37341520c27a76190e0c11271596890ec | <ide><path>src/manipulation.js
<ide> var nodeNames = "abbr|article|aside|audio|bdi|canvas|data|datalist|details|figca
<ide> wrapMap = {
<ide> option: [ 1, "<select multiple='multiple'>", "</select>" ],
<ide> legend: [ 1, "<fieldset>", "</fieldset>" ],
<add> area: [ 1, "<map>", "</map>" ],
<add> param: [ 1, "<object>", "</object>" ],
<ide> thead: [ 1, "<table>", "</table>" ],
<ide> tr: [ 2, "<table><tbody>", "</tbody></table>" ],
<del> td: [ 3, "<table><tbody><tr>", "</tr></tbody></table>" ],
<ide> col: [ 2, "<table><tbody></tbody><colgroup>", "</colgroup></table>" ],
<del> area: [ 1, "<map>", "</map>" ],
<del> _default: [ 0, "", "" ]
<add> td: [ 3, "<table><tbody><tr>", "</tr></tbody></table>" ],
<add>
<add> // IE6-8 can't serialize link, script, style, or any html5 (NoScope) tags,
<add> // unless wrapped in a div with non-breaking characters in front of it.
<add> _default: jQuery.support.htmlSerialize ? [ 0, "", "" ] : [ 1, "X<div>", "" ]
<ide> },
<ide> safeFragment = createSafeFragment( document ),
<del> fragmentDiv = safeFragment.appendChild( document.createElement("div") ),
<del> addMandatoryAttributes = function( elem ) { return elem; };
<add> fragmentDiv = safeFragment.appendChild( document.createElement("div") );
<ide>
<ide> wrapMap.optgroup = wrapMap.option;
<ide> wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead;
<ide> wrapMap.th = wrapMap.td;
<ide>
<del>// IE6-8 can't serialize link, script, style, or any html5 (NoScope) tags,
<del>// unless wrapped in a div with non-breaking characters in front of it.
<del>if ( !jQuery.support.htmlSerialize ) {
<del> wrapMap._default = [ 1, "X<div>", "" ];
<del> // Fixes #11280
<del> wrapMap.param = [ 1, "X<object>", "" ];
<del> // Fixes #11280. HTMLParam name attribute added to avoid IE6-8 parsing issue.
<del> addMandatoryAttributes = function( elem ) {
<del> // If it's a param
<del> return elem.replace(/<param([^>]*)>/gi, function( m, s1, offset ) {
<del> var name = s1.match( /name=["']([^"']*)["']/i );
<del> return name ?
<del> ( name[1].length ?
<del> // It has a name attr with a value
<del> "<param" + s1 + ">" :
<del> // It has name attr without a value
<del> "<param" + s1.replace( name[0], "name='_" + offset + "'" ) + ">" ) :
<del> // No name attr
<del> "<param name='_" + offset + "' " + s1 + ">";
<del> });
<del> };
<del>}
<del>
<ide> jQuery.fn.extend({
<ide> text: function( value ) {
<ide> return jQuery.access( this, function( value ) {
<ide> jQuery.extend({
<ide> // Deserialize a standard representation
<ide> tag = ( rtagName.exec( elem ) || ["", ""] )[1].toLowerCase();
<ide> wrap = wrapMap[ tag ] || wrapMap._default;
<del> tmp.innerHTML = wrap[1] + addMandatoryAttributes( elem.replace( rxhtmlTag, "<$1></$2>" ) ) + wrap[2];
<add> tmp.innerHTML = wrap[1] + elem.replace( rxhtmlTag, "<$1></$2>" ) + wrap[2];
<ide>
<ide> // Descend through wrappers to the right content
<ide> j = wrap[0];
<ide><path>test/unit/manipulation.js
<ide> test("append(Function)", function() {
<ide> });
<ide>
<ide> test("append(param) to object, see #11280", function() {
<del> expect(11);
<del>
<del> var objectElement = document.createElement("object"),
<del> $objectElement = jQuery( objectElement ),
<del> paramElement = jQuery("<param type='wmode' value='transparent'/>"),
<del> paramElement2 = jQuery("<param name='' type='wmode2' value='transparent2' />"),
<del> paramElement3 = jQuery("<param type='wmode' name='foo' >"),
<del> newObject = jQuery("<object><param type='foo' ><param name='' value='foo2'/><param type='baz' name='bar'></object>");
<del>
<del> equal( objectElement.childNodes.length, 0, "object did not have childNodes previously" );
<del>
<del> document.body.appendChild( objectElement );
<add> expect( 5 );
<ide>
<del> $objectElement.append( paramElement );
<del> equal( $objectElement.children().length, 1, "param single insertion ok" );
<del> equal( jQuery(objectElement.childNodes[0]).attr("type"), "wmode", "param.eq(0) has type=wmode" );
<add> var object = jQuery( document.createElement("object") ).appendTo( document.body );
<ide>
<del> $objectElement.html( paramElement2 );
<del> equal( $objectElement.children().length, 1, "param single insertion ok" );
<del> equal( jQuery(objectElement.childNodes[0]).attr("type"), "wmode2", "param.eq(0) has type=wmode2" );
<add> equal( object.children().length, 0, "object does not start with children" );
<ide>
<del> $objectElement.html( paramElement3 );
<del> equal( $objectElement.children().length, 1, "param single insertion ok" );
<del> equal( jQuery(objectElement.childNodes[0]).attr("name"), "foo", "param.eq(0) has name=foo" );
<add> object.append( jQuery("<param type='wmode' name='foo'>") );
<add> equal( object.children().length, 1, "appended param" );
<add> equal( object.children().eq(0).attr("name"), "foo", "param has name=foo" );
<ide>
<del> equal( newObject.children().length, 3, "param wrapper multiple insertion ok" );
<del> equal( newObject.children().eq(0).attr("type"), "foo", "param.eq(0) has type=foo" );
<del> equal( newObject.children().eq(1).attr("value"), "foo2", "param.eq(1) has value=foo2" );
<del> equal( newObject.children().eq(2).attr("name"), "bar", "param.eq(2) has name=bar" );
<add> object = jQuery("<object><param type='baz' name='bar'></object>");
<add> equal( object.children().length, 1, "object created with child param" );
<add> equal( object.children().eq(0).attr("name"), "bar", "param has name=bar" );
<ide> });
<ide>
<ide> test("append(Function) with incoming value", function() { | 2 |
PHP | PHP | apply fixes from styleci | 0e16770a54e63541dcd9686f4f7fa23edc1b4d95 | <ide><path>tests/Database/DatabaseEloquentBuilderTest.php
<ide> public function testWithCountAndGlobalScope()
<ide> $builder = $model->select('id')->withCount(['foo']);
<ide>
<ide> // Remove the global scope so it doesn't interfere with any other tests
<del> EloquentBuilderTestModelCloseRelatedStub::addGlobalScope('withCount', function ($query) {});
<add> EloquentBuilderTestModelCloseRelatedStub::addGlobalScope('withCount', function ($query) {
<add> });
<ide>
<ide> $this->assertEquals('select "id", (select count(*) from "eloquent_builder_test_model_close_related_stubs" where "eloquent_builder_test_model_parent_stubs"."foo_id" = "eloquent_builder_test_model_close_related_stubs"."id") as "foo_count" from "eloquent_builder_test_model_parent_stubs"', $builder->toSql());
<ide> } | 1 |
Go | Go | use tls for tests if needed | 0bdba0e91a072ee2cdecb4e632c13f187eb88e9c | <ide><path>integration-cli/docker_api_containers_test.go
<ide> func (s *DockerSuite) TestContainerAPICreateNoHostConfig118(c *check.C) {
<ide> Image: "busybox",
<ide> }
<ide>
<del> var httpClient *http.Client
<del> cli, err := client.NewClient(daemonHost(), "v1.18", httpClient, map[string]string{})
<add> cli, err := NewEnvClientWithVersion("v1.18")
<ide>
<ide> _, err = cli.ContainerCreate(context.Background(), &config, &containertypes.HostConfig{}, &networktypes.NetworkingConfig{}, "")
<ide> c.Assert(err, checker.IsNil)
<ide><path>integration-cli/docker_api_images_test.go
<ide> func (s *DockerSuite) TestAPIImagesSizeCompatibility(c *check.C) {
<ide> Labels map[string]string
<ide> }
<ide>
<del> var httpClient *http.Client
<del> cli, err = client.NewClient(daemonHost(), "v1.24", httpClient, nil)
<add> cli, err = NewEnvClientWithVersion("v1.24")
<ide> c.Assert(err, checker.IsNil)
<ide> defer cli.Close()
<ide>
<ide><path>integration-cli/docker_api_inspect_unix_test.go
<ide> package main
<ide>
<ide> import (
<ide> "encoding/json"
<del> "net/http"
<ide>
<del> "github.com/docker/docker/client"
<ide> "github.com/docker/docker/integration-cli/checker"
<ide> "github.com/go-check/check"
<ide> "golang.org/x/net/context"
<ide> func (s *DockerSuite) TestInspectAPICpusetInConfigPre120(c *check.C) {
<ide>
<ide> name := "cpusetinconfig-pre120"
<ide> dockerCmd(c, "run", "--name", name, "--cpuset-cpus", "0", "busybox", "true")
<del> var httpClient *http.Client
<del> cli, err := client.NewClient(daemonHost(), "v1.19", httpClient, nil)
<add> cli, err := NewEnvClientWithVersion("v1.19")
<ide> c.Assert(err, checker.IsNil)
<ide> defer cli.Close()
<ide> _, body, err := cli.ContainerInspectWithRaw(context.Background(), name, false)
<ide><path>integration-cli/docker_cli_kill_test.go
<ide> package main
<ide>
<ide> import (
<del> "net/http"
<ide> "strings"
<ide> "time"
<ide>
<del> "github.com/docker/docker/client"
<ide> "github.com/docker/docker/integration-cli/checker"
<ide> "github.com/docker/docker/integration-cli/cli"
<ide> "github.com/go-check/check"
<ide> func (s *DockerSuite) TestKillStoppedContainerAPIPre120(c *check.C) {
<ide> testRequires(c, DaemonIsLinux) // Windows only supports 1.25 or later
<ide> runSleepingContainer(c, "--name", "docker-kill-test-api", "-d")
<ide> dockerCmd(c, "stop", "docker-kill-test-api")
<del> var httpClient *http.Client
<del> cli, err := client.NewClient(daemonHost(), "v1.19", httpClient, nil)
<add> cli, err := NewEnvClientWithVersion("v1.19")
<ide> c.Assert(err, check.IsNil)
<ide> defer cli.Close()
<ide> err = cli.ContainerKill(context.Background(), "docker-kill-test-api", "SIGKILL")
<ide><path>integration-cli/docker_cli_run_test.go
<ide> func (s *DockerSuite) TestRunRm(c *check.C) {
<ide> // Test that auto-remove is performed by the client on API versions that do not support daemon-side api-remove (API < 1.25)
<ide> func (s *DockerSuite) TestRunRmPre125Api(c *check.C) {
<ide> name := "miss-me-when-im-gone"
<del> envs := appendBaseEnv(false, "DOCKER_API_VERSION=1.24")
<add> envs := appendBaseEnv(os.Getenv("DOCKER_TLS_VERIFY") != "", "DOCKER_API_VERSION=1.24")
<ide> cli.Docker(cli.Args("run", "--name="+name, "--rm", "busybox"), cli.WithEnvironmentVariables(envs...)).Assert(c, icmd.Success)
<ide>
<ide> cli.Docker(cli.Inspect(name), cli.Format(".name")).Assert(c, icmd.Expected{
<ide><path>integration-cli/docker_utils_test.go
<ide> import (
<ide> "fmt"
<ide> "io"
<ide> "io/ioutil"
<del> "net/http"
<ide> "os"
<ide> "path"
<ide> "path/filepath"
<ide> func waitInspectWithArgs(name, expr, expected string, timeout time.Duration, arg
<ide> }
<ide>
<ide> func getInspectBody(c *check.C, version, id string) []byte {
<del> var httpClient *http.Client
<del> cli, err := client.NewClient(daemonHost(), version, httpClient, nil)
<add> cli, err := NewEnvClientWithVersion(version)
<ide> c.Assert(err, check.IsNil)
<ide> defer cli.Close()
<ide> _, body, err := cli.ContainerInspectWithRaw(context.Background(), id, false)
<ide><path>integration-cli/request/request.go
<ide> func New(host, endpoint string, modifiers ...func(*http.Request) error) (*http.R
<ide> return nil, fmt.Errorf("could not create new request: %v", err)
<ide> }
<ide>
<del> req.URL.Scheme = "http"
<add> if os.Getenv("DOCKER_TLS_VERIFY") != "" {
<add> req.URL.Scheme = "https"
<add> } else {
<add> req.URL.Scheme = "http"
<add> }
<ide> req.URL.Host = addr
<ide>
<ide> for _, config := range modifiers {
<ide><path>integration-cli/utils_test.go
<ide> func RemoveOutputForExistingElements(output string, existing []string) string {
<ide> res := RemoveLinesForExistingElements(strings.Split(output, "\n"), existing)
<ide> return strings.Join(res, "\n")
<ide> }
<add>
<add>// NewEnvClientWithVersion returns a docker client with a specified version.
<add>// See: github.com/docker/docker/client `NewEnvClient()`
<add>func NewEnvClientWithVersion(version string) (*client.Client, error) {
<add> cli, err := client.NewEnvClient()
<add> if err != nil {
<add> return nil, err
<add> }
<add> cli.NegotiateAPIVersionPing(types.Ping{APIVersion: version})
<add> return cli, nil
<add>} | 8 |
Javascript | Javascript | move escape codes into internal/readline | 4c070d489718b196d7950998ccfb54bcc50d9711 | <ide><path>lib/internal/readline.js
<ide> const ansi =
<ide> /[\u001b\u009b][[()#;?]*(?:[0-9]{1,4}(?:;[0-9]{0,4})*)?[0-9A-ORZcf-nqry=><]/g;
<ide>
<add>const kEscape = '\x1b';
<add>
<ide> var getStringWidth;
<ide> var isFullWidthCodePoint;
<ide>
<add>function CSI(strings, ...args) {
<add> let ret = `${kEscape}[`;
<add> for (var n = 0; n < strings.length; n++) {
<add> ret += strings[n];
<add> if (n < args.length)
<add> ret += args[n];
<add> }
<add> return ret;
<add>}
<add>
<add>CSI.kEscape = kEscape;
<add>CSI.kClearToBeginning = CSI`1K`;
<add>CSI.kClearToEnd = CSI`0K`;
<add>CSI.kClearLine = CSI`2K`;
<add>CSI.kClearScreenDown = CSI`0J`;
<add>
<ide> if (process.binding('config').hasIntl) {
<ide> const icu = process.binding('icu');
<ide> getStringWidth = function getStringWidth(str, options) {
<ide> function* emitKeys(stream) {
<ide> shift: false
<ide> };
<ide>
<del> if (ch === '\x1b') {
<add> if (ch === kEscape) {
<ide> escaped = true;
<ide> s += (ch = yield);
<ide>
<del> if (ch === '\x1b') {
<add> if (ch === kEscape) {
<ide> s += (ch = yield);
<ide> }
<ide> }
<ide> function* emitKeys(stream) {
<ide> // backspace or ctrl+h
<ide> key.name = 'backspace';
<ide> key.meta = escaped;
<del> } else if (ch === '\x1b') {
<add> } else if (ch === kEscape) {
<ide> // escape key
<ide> key.name = 'escape';
<ide> key.meta = escaped;
<ide> module.exports = {
<ide> emitKeys,
<ide> getStringWidth,
<ide> isFullWidthCodePoint,
<del> stripVTControlCharacters
<add> stripVTControlCharacters,
<add> CSI
<ide> };
<ide><path>lib/readline.js
<ide> const { debug, inherits } = require('util');
<ide> const Buffer = require('buffer').Buffer;
<ide> const EventEmitter = require('events');
<ide> const {
<add> CSI,
<ide> emitKeys,
<ide> getStringWidth,
<ide> isFullWidthCodePoint,
<ide> stripVTControlCharacters
<ide> } = require('internal/readline');
<ide>
<add>const {
<add> kEscape,
<add> kClearToBeginning,
<add> kClearToEnd,
<add> kClearLine,
<add> kClearScreenDown
<add>} = CSI;
<add>
<ide> const kHistorySize = 30;
<ide> const kMincrlfDelay = 100;
<ide> const kMaxcrlfDelay = 2000;
<ide> function emitKeypressEvents(stream, iface) {
<ide> try {
<ide> stream[ESCAPE_DECODER].next(r[i]);
<ide> // Escape letter at the tail position
<del> if (r[i] === '\x1b' && i + 1 === r.length) {
<add> if (r[i] === kEscape && i + 1 === r.length) {
<ide> timeoutId = setTimeout(escapeCodeTimeout, ESCAPE_CODE_TIMEOUT);
<ide> }
<ide> } catch (err) {
<ide> function cursorTo(stream, x, y) {
<ide> throw new Error('Can\'t set cursor row without also setting it\'s column');
<ide>
<ide> if (typeof y !== 'number') {
<del> stream.write('\x1b[' + (x + 1) + 'G');
<add> stream.write(CSI`${x + 1}G`);
<ide> } else {
<del> stream.write('\x1b[' + (y + 1) + ';' + (x + 1) + 'H');
<add> stream.write(CSI`${y + 1};${x + 1}H`);
<ide> }
<ide> }
<ide>
<ide> function moveCursor(stream, dx, dy) {
<ide> return;
<ide>
<ide> if (dx < 0) {
<del> stream.write('\x1b[' + (-dx) + 'D');
<add> stream.write(CSI`${-dx}D`);
<ide> } else if (dx > 0) {
<del> stream.write('\x1b[' + dx + 'C');
<add> stream.write(CSI`${dx}C`);
<ide> }
<ide>
<ide> if (dy < 0) {
<del> stream.write('\x1b[' + (-dy) + 'A');
<add> stream.write(CSI`${-dy}A`);
<ide> } else if (dy > 0) {
<del> stream.write('\x1b[' + dy + 'B');
<add> stream.write(CSI`${dy}B`);
<ide> }
<ide> }
<ide>
<ide> function clearLine(stream, dir) {
<ide>
<ide> if (dir < 0) {
<ide> // to the beginning
<del> stream.write('\x1b[1K');
<add> stream.write(kClearToBeginning);
<ide> } else if (dir > 0) {
<ide> // to the end
<del> stream.write('\x1b[0K');
<add> stream.write(kClearToEnd);
<ide> } else {
<ide> // entire line
<del> stream.write('\x1b[2K');
<add> stream.write(kClearLine);
<ide> }
<ide> }
<ide>
<ide> function clearScreenDown(stream) {
<ide> if (stream === null || stream === undefined)
<ide> return;
<ide>
<del> stream.write('\x1b[0J');
<add> stream.write(kClearScreenDown);
<ide> }
<ide>
<ide> module.exports = { | 2 |
Javascript | Javascript | move empty callbacks to prototype | 667bd407af6b7b1142137f8098b9e99eb8e35964 | <ide><path>src/core/InterleavedBuffer.js
<ide> function InterleavedBuffer( array, stride ) {
<ide> this.dynamic = false;
<ide> this.updateRange = { offset: 0, count: - 1 };
<ide>
<del> this.onUploadCallback = function () {};
<del>
<ide> this.version = 0;
<ide>
<ide> }
<ide> Object.assign( InterleavedBuffer.prototype, {
<ide>
<ide> isInterleavedBuffer: true,
<ide>
<add> onUploadCallback: function () {},
<add>
<ide> setArray: function ( array ) {
<ide>
<ide> if ( Array.isArray( array ) ) {
<ide><path>src/loaders/Loader.js
<ide> import { Color } from '../math/Color.js';
<ide> * @author alteredq / http://alteredqualia.com/
<ide> */
<ide>
<del>function Loader() {
<del>
<del> this.onLoadStart = function () {};
<del> this.onLoadProgress = function () {};
<del> this.onLoadComplete = function () {};
<del>
<del>}
<add>function Loader() {}
<ide>
<ide> Loader.Handlers = {
<ide>
<ide> Object.assign( Loader.prototype, {
<ide>
<ide> crossOrigin: undefined,
<ide>
<add> onLoadStart: function () {},
<add>
<add> onLoadProgress: function () {},
<add>
<add> onLoadComplete: function () {},
<add>
<ide> initMaterials: function ( materials, texturePath, crossOrigin ) {
<ide>
<ide> var array = []; | 2 |
Ruby | Ruby | add failing test that triggers the stack overflow | 1080351437dc43c3ecaa0d494f5ca215f03b1883 | <ide><path>activerecord/test/cases/autosave_association_test.rb
<ide> require 'models/company'
<ide> require 'models/customer'
<ide> require 'models/developer'
<add>require 'models/invoice'
<add>require 'models/line_item'
<ide> require 'models/order'
<ide> require 'models/parrot'
<ide> require 'models/person'
<ide> def setup
<ide> assert [email protected]_to?(:validate_associated_records_for_non_validated_parrots)
<ide> end
<ide> end
<add>
<add>class TestAutosaveAssociationWithTouch < ActiveRecord::TestCase
<add> def test_autosave_with_touch_should_not_raise_system_stack_error
<add> invoice = Invoice.create
<add> assert_nothing_raised { invoice.line_items.create(:amount => 10) }
<add> end
<add>end
<ide><path>activerecord/test/models/invoice.rb
<add>class Invoice < ActiveRecord::Base
<add> has_many :line_items, :autosave => true
<add> before_save {|record| record.balance = record.line_items.map(&:amount).sum }
<add>end
<ide><path>activerecord/test/models/line_item.rb
<add>class LineItem < ActiveRecord::Base
<add> belongs_to :invoice, :touch => true
<add>end
<ide><path>activerecord/test/schema/schema.rb
<ide> def create_table(*args, &block)
<ide> t.string :info
<ide> end
<ide>
<add> create_table :invoices, :force => true do |t|
<add> t.integer :balance
<add> t.datetime :updated_at
<add> end
<add>
<ide> create_table :items, :force => true do |t|
<ide> t.column :name, :integer
<ide> end
<ide> def create_table(*args, &block)
<ide> t.integer :version, :null => false, :default => 0
<ide> end
<ide>
<add> create_table :line_items, :force => true do |t|
<add> t.integer :invoice_id
<add> t.integer :amount
<add> end
<add>
<ide> create_table :lock_without_defaults, :force => true do |t|
<ide> t.column :lock_version, :integer
<ide> end | 4 |
Text | Text | update image to match current icon | c651cdc356913e53b3b6c875ab0a496b6ddda8ee | <ide><path>README.md
<ide> # Atom — Futuristic Text Editing
<ide>
<del>
<add>
<ide>
<ide> Check out our [documentation on the docs tab](https://github.com/github/atom/docs).
<ide> | 1 |
Python | Python | fix a bug in open_latin1 | b022b9c5d2e49d1d2ab07be0fe4549342ab88c1e | <ide><path>numpy/compat/py3k.py
<ide> def asstr(s):
<ide> def isfileobj(f):
<ide> return isinstance(f, io.FileIO)
<ide> def open_latin1(filename, mode='r'):
<del> return open(f, mode=mode, encoding='iso-8859-1')
<add> return open(filename, mode=mode, encoding='iso-8859-1')
<ide> strchar = 'U'
<ide> else:
<ide> bytes = str | 1 |
Ruby | Ruby | remove unnecessary namespaces in `explain_test.rb` | 1f6da4fda6d7c59ca6b27fa8930f806628a2f548 | <ide><path>activerecord/test/cases/adapters/mysql2/explain_test.rb
<ide> require 'models/developer'
<ide> require 'models/computer'
<ide>
<del>module ActiveRecord
<del> module ConnectionAdapters
<del> class Mysql2Adapter
<del> class ExplainTest < ActiveRecord::Mysql2TestCase
<del> fixtures :developers
<add>class Mysql2ExplainTest < ActiveRecord::Mysql2TestCase
<add> fixtures :developers
<ide>
<del> def test_explain_for_one_query
<del> explain = Developer.where(:id => 1).explain
<del> assert_match %(EXPLAIN for: SELECT `developers`.* FROM `developers` WHERE `developers`.`id` = 1), explain
<del> assert_match %r(developers |.* const), explain
<del> end
<add> def test_explain_for_one_query
<add> explain = Developer.where(id: 1).explain
<add> assert_match %(EXPLAIN for: SELECT `developers`.* FROM `developers` WHERE `developers`.`id` = 1), explain
<add> assert_match %r(developers |.* const), explain
<add> end
<ide>
<del> def test_explain_with_eager_loading
<del> explain = Developer.where(:id => 1).includes(:audit_logs).explain
<del> assert_match %(EXPLAIN for: SELECT `developers`.* FROM `developers` WHERE `developers`.`id` = 1), explain
<del> assert_match %r(developers |.* const), explain
<del> assert_match %(EXPLAIN for: SELECT `audit_logs`.* FROM `audit_logs` WHERE `audit_logs`.`developer_id` = 1), explain
<del> assert_match %r(audit_logs |.* ALL), explain
<del> end
<del> end
<del> end
<add> def test_explain_with_eager_loading
<add> explain = Developer.where(id: 1).includes(:audit_logs).explain
<add> assert_match %(EXPLAIN for: SELECT `developers`.* FROM `developers` WHERE `developers`.`id` = 1), explain
<add> assert_match %r(developers |.* const), explain
<add> assert_match %(EXPLAIN for: SELECT `audit_logs`.* FROM `audit_logs` WHERE `audit_logs`.`developer_id` = 1), explain
<add> assert_match %r(audit_logs |.* ALL), explain
<ide> end
<ide> end
<ide><path>activerecord/test/cases/adapters/postgresql/explain_test.rb
<ide> class PostgreSQLExplainTest < ActiveRecord::PostgreSQLTestCase
<ide> fixtures :developers
<ide>
<ide> def test_explain_for_one_query
<del> explain = Developer.where(:id => 1).explain
<add> explain = Developer.where(id: 1).explain
<ide> assert_match %r(EXPLAIN for: SELECT "developers".* FROM "developers" WHERE "developers"."id" = \$?1), explain
<ide> assert_match %(QUERY PLAN), explain
<ide> end
<ide>
<ide> def test_explain_with_eager_loading
<del> explain = Developer.where(:id => 1).includes(:audit_logs).explain
<add> explain = Developer.where(id: 1).includes(:audit_logs).explain
<ide> assert_match %(QUERY PLAN), explain
<ide> assert_match %r(EXPLAIN for: SELECT "developers".* FROM "developers" WHERE "developers"."id" = \$?1), explain
<ide> assert_match %(EXPLAIN for: SELECT "audit_logs".* FROM "audit_logs" WHERE "audit_logs"."developer_id" = 1), explain
<ide><path>activerecord/test/cases/adapters/sqlite3/explain_test.rb
<ide> require 'models/developer'
<ide> require 'models/computer'
<ide>
<del>module ActiveRecord
<del> module ConnectionAdapters
<del> class SQLite3Adapter
<del> class ExplainTest < ActiveRecord::SQLite3TestCase
<del> fixtures :developers
<add>class SQLite3ExplainTest < ActiveRecord::SQLite3TestCase
<add> fixtures :developers
<ide>
<del> def test_explain_for_one_query
<del> explain = Developer.where(:id => 1).explain
<del> assert_match %r(EXPLAIN for: SELECT "developers".* FROM "developers" WHERE "developers"."id" = (?:\?|1)), explain
<del> assert_match(/(SEARCH )?TABLE developers USING (INTEGER )?PRIMARY KEY/, explain)
<del> end
<add> def test_explain_for_one_query
<add> explain = Developer.where(id: 1).explain
<add> assert_match %r(EXPLAIN for: SELECT "developers".* FROM "developers" WHERE "developers"."id" = (?:\?|1)), explain
<add> assert_match(/(SEARCH )?TABLE developers USING (INTEGER )?PRIMARY KEY/, explain)
<add> end
<ide>
<del> def test_explain_with_eager_loading
<del> explain = Developer.where(:id => 1).includes(:audit_logs).explain
<del> assert_match %r(EXPLAIN for: SELECT "developers".* FROM "developers" WHERE "developers"."id" = (?:\?|1)), explain
<del> assert_match(/(SEARCH )?TABLE developers USING (INTEGER )?PRIMARY KEY/, explain)
<del> assert_match %(EXPLAIN for: SELECT "audit_logs".* FROM "audit_logs" WHERE "audit_logs"."developer_id" = 1), explain
<del> assert_match(/(SCAN )?TABLE audit_logs/, explain)
<del> end
<del> end
<del> end
<add> def test_explain_with_eager_loading
<add> explain = Developer.where(id: 1).includes(:audit_logs).explain
<add> assert_match %r(EXPLAIN for: SELECT "developers".* FROM "developers" WHERE "developers"."id" = (?:\?|1)), explain
<add> assert_match(/(SEARCH )?TABLE developers USING (INTEGER )?PRIMARY KEY/, explain)
<add> assert_match %(EXPLAIN for: SELECT "audit_logs".* FROM "audit_logs" WHERE "audit_logs"."developer_id" = 1), explain
<add> assert_match(/(SCAN )?TABLE audit_logs/, explain)
<ide> end
<ide> end | 3 |
Ruby | Ruby | return nil if not found | f471c3e99fef4bb59c0bbafbfc496eca2850caa2 | <ide><path>Library/Homebrew/global.rb
<ide> module Homebrew extend self
<ide> # Xcode-only installs place tools in non-standard locations, and we also want
<ide> # to ensure the dev tools are in the PATH in build.rb
<ide> unless ORIGINAL_PATHS.include? MacOS.dev_tools_path
<del> ENV['PATH'] = ENV['PATH'].to_s + ':' + MacOS.dev_tools_path
<add> ENV['PATH'] = ENV['PATH'].to_s + ':' + MacOS.dev_tools_path.to_s
<ide> end
<ide><path>Library/Homebrew/macos.rb
<ide> def dev_tools_path
<ide> else
<ide> # Since we are pretty unrelenting in finding Xcode no matter where
<ide> # it hides, we can now throw in the towel.
<del> opoo "You really should consult the `brew doctor`!"
<del> ""
<add> opoo "Could not locate developer tools. Consult `brew doctor`."
<ide> end
<ide> end
<ide> | 2 |
Python | Python | fix race conditions in task callback invocations | f1d4f54b3479cd7549ce79efadd25cc6859dd420 | <ide><path>airflow/cli/cli_parser.py
<ide> def positive_int(value):
<ide> ("--ship-dag",), help="Pickles (serializes) the DAG and ships it to the worker", action="store_true"
<ide> )
<ide> ARG_PICKLE = Arg(("-p", "--pickle"), help="Serialized pickle object of the entire dag (used internally)")
<add>ARG_ERROR_FILE = Arg(("--error-file",), help="File to store task failure error")
<ide> ARG_JOB_ID = Arg(("-j", "--job-id"), help=argparse.SUPPRESS)
<ide> ARG_CFG_PATH = Arg(("--cfg-path",), help="Path to config file to use instead of airflow.cfg")
<ide> ARG_MIGRATION_TIMEOUT = Arg(
<ide> class GroupCommand(NamedTuple):
<ide> ARG_PICKLE,
<ide> ARG_JOB_ID,
<ide> ARG_INTERACTIVE,
<add> ARG_ERROR_FILE,
<ide> ARG_SHUT_DOWN_LOGGING,
<ide> ),
<ide> ),
<ide><path>airflow/cli/commands/task_command.py
<ide> from airflow.utils.session import create_session
<ide>
<ide>
<del>def _run_task_by_selected_method(args, dag, ti):
<add>def _run_task_by_selected_method(args, dag: DAG, ti: TaskInstance) -> None:
<ide> """
<ide> Runs the task in one of 3 modes
<ide>
<ide> def _run_task_by_local_task_job(args, ti):
<ide> ]
<ide>
<ide>
<del>def _run_raw_task(args, ti):
<add>def _run_raw_task(args, ti: TaskInstance) -> None:
<ide> """Runs the main task handling code"""
<ide> unsupported_options = [o for o in RAW_TASK_UNSUPPORTED_OPTION if getattr(args, o)]
<ide>
<ide> def _run_raw_task(args, ti):
<ide> mark_success=args.mark_success,
<ide> job_id=args.job_id,
<ide> pool=args.pool,
<add> error_file=args.error_file,
<ide> )
<ide>
<ide>
<ide><path>airflow/executors/debug_executor.py
<ide> def sync(self) -> None:
<ide> self.log.info("Executor is terminated! Stopping %s to %s", ti.key, State.FAILED)
<ide> ti.set_state(State.FAILED)
<ide> self.change_state(ti.key, State.FAILED)
<add> ti._run_finished_callback() # pylint: disable=protected-access
<ide> continue
<ide>
<ide> task_succeeded = self._run_task(ti)
<ide> def _run_task(self, ti: TaskInstance) -> bool:
<ide> params = self.tasks_params.pop(ti.key, {})
<ide> ti._run_raw_task(job_id=ti.job_id, **params) # pylint: disable=protected-access
<ide> self.change_state(key, State.SUCCESS)
<add> ti._run_finished_callback() # pylint: disable=protected-access
<ide> return True
<ide> except Exception as e: # pylint: disable=broad-except
<add> ti.set_state(State.FAILED)
<ide> self.change_state(key, State.FAILED)
<add> ti._run_finished_callback() # pylint: disable=protected-access
<ide> self.log.exception("Failed to execute task: %s.", str(e))
<ide> return False
<ide>
<ide><path>airflow/jobs/backfill_job.py
<ide> def _manage_executor_state(self, running):
<ide> "killed externally? Info: {}".format(ti, state, ti.state, info)
<ide> )
<ide> self.log.error(msg)
<del> ti.handle_failure(msg)
<add> ti.handle_failure_with_callback(error=msg)
<ide>
<ide> @provide_session
<ide> def _get_dag_run(self, run_date: datetime, dag: DAG, session: Session = None):
<ide><path>airflow/jobs/local_task_job.py
<ide> def signal_handler(signum, frame):
<ide>
<ide> heartbeat_time_limit = conf.getint('scheduler', 'scheduler_zombie_task_threshold')
<ide>
<del> while True:
<add> # task callback invocation happens either here or in
<add> # self.heartbeat() instead of taskinstance._run_raw_task to
<add> # avoid race conditions
<add> #
<add> # When self.terminating is set to True by heartbeat_callback, this
<add> # loop should not be restarted. Otherwise self.handle_task_exit
<add> # will be invoked and we will end up with duplicated callbacks
<add> while not self.terminating:
<ide> # Monitor the task to see if it's done. Wait in a syscall
<ide> # (`os.wait`) for as long as possible so we notice the
<ide> # subprocess finishing as quick as we can
<ide> def signal_handler(signum, frame):
<ide>
<ide> return_code = self.task_runner.return_code(timeout=max_wait_time)
<ide> if return_code is not None:
<del> self.log.info("Task exited with return code %s", return_code)
<add> self.handle_task_exit(return_code)
<ide> return
<ide>
<ide> self.heartbeat()
<ide> def signal_handler(signum, frame):
<ide> finally:
<ide> self.on_kill()
<ide>
<add> def handle_task_exit(self, return_code: int) -> None:
<add> """Handle case where self.task_runner exits by itself"""
<add> self.log.info("Task exited with return code %s", return_code)
<add> self.task_instance.refresh_from_db()
<add> # task exited by itself, so we need to check for error file
<add> # incase it failed due to runtime exception/error
<add> error = None
<add> if self.task_instance.state != State.SUCCESS:
<add> error = self.task_runner.deserialize_run_error()
<add> self.task_instance._run_finished_callback(error=error) # pylint: disable=protected-access
<add>
<ide> def on_kill(self):
<ide> self.task_runner.terminate()
<ide> self.task_runner.on_finish()
<ide> def heartbeat_callback(self, session=None):
<ide> self.log.warning(
<ide> "State of this instance has been externally set to %s. " "Terminating instance.", ti.state
<ide> )
<del> if ti.state == State.FAILED and ti.task.on_failure_callback:
<del> context = ti.get_template_context()
<del> ti.task.on_failure_callback(context)
<del> if ti.state == State.SUCCESS and ti.task.on_success_callback:
<del> context = ti.get_template_context()
<del> ti.task.on_success_callback(context)
<ide> self.task_runner.terminate()
<add> if ti.state == State.SUCCESS:
<add> error = None
<add> else:
<add> # if ti.state is not set by taskinstance.handle_failure, then
<add> # error file will not be populated and it must be updated by
<add> # external source suck as web UI
<add> error = self.task_runner.deserialize_run_error() or "task marked as failed externally"
<add> ti._run_finished_callback(error=error) # pylint: disable=protected-access
<ide> self.terminating = True
<ide><path>airflow/jobs/scheduler_job.py
<ide> def _execute_task_callbacks(self, dagbag: DagBag, request: TaskCallbackRequest):
<ide> ti.state = simple_ti.state
<ide> ti.test_mode = self.UNIT_TEST_MODE
<ide> if request.is_failure_callback:
<del> ti.handle_failure(request.msg, ti.test_mode, ti.get_template_context())
<add> ti.handle_failure_with_callback(error=request.msg, test_mode=ti.test_mode)
<ide> self.log.info('Executed failure callback for %s in state %s', ti, ti.state)
<ide>
<ide> @provide_session
<ide> def _emit_pool_metrics(self, session: Session = None) -> None:
<ide> pools = models.Pool.slots_stats(session=session)
<ide> for pool_name, slot_stats in pools.items():
<ide> Stats.gauge(f'pool.open_slots.{pool_name}', slot_stats["open"])
<del> Stats.gauge(f'pool.queued_slots.{pool_name}', slot_stats[State.QUEUED])
<del> Stats.gauge(f'pool.running_slots.{pool_name}', slot_stats[State.RUNNING])
<add> Stats.gauge(f'pool.queued_slots.{pool_name}', slot_stats[State.QUEUED]) # type: ignore
<add> Stats.gauge(f'pool.running_slots.{pool_name}', slot_stats[State.RUNNING]) # type: ignore
<ide>
<ide> @provide_session
<ide> def heartbeat_callback(self, session: Session = None) -> None:
<ide><path>airflow/models/dag.py
<ide> def next_dagrun_after_date(self, date_last_automated_dagrun: Optional[pendulum.D
<ide> next_run_date = None
<ide> if not date_last_automated_dagrun:
<ide> # First run
<del> task_start_dates = [t.start_date for t in self.tasks]
<add> task_start_dates = [t.start_date for t in self.tasks if t.start_date]
<ide> if task_start_dates:
<ide> next_run_date = self.normalize_schedule(min(task_start_dates))
<ide> self.log.debug("Next run date based on tasks %s", next_run_date)
<ide><path>airflow/models/taskinstance.py
<ide> import logging
<ide> import math
<ide> import os
<add>import pickle
<ide> import signal
<ide> import warnings
<ide> from datetime import datetime, timedelta
<del>from typing import Any, Dict, Iterable, List, NamedTuple, Optional, Tuple, Union
<add>from tempfile import NamedTemporaryFile
<add>from typing import IO, Any, Dict, Iterable, List, NamedTuple, Optional, Tuple, Union
<ide> from urllib.parse import quote
<ide>
<ide> import dill
<ide> def set_current_context(context: Context):
<ide> )
<ide>
<ide>
<add>def load_error_file(fd: IO[bytes]) -> Optional[Union[str, Exception]]:
<add> """Load and return error from error file"""
<add> fd.seek(0, os.SEEK_SET)
<add> data = fd.read()
<add> if not data:
<add> return None
<add> try:
<add> return pickle.loads(data)
<add> except Exception: # pylint: disable=broad-except
<add> return "Failed to load task run error"
<add>
<add>
<add>def set_error_file(error_file: str, error: Union[str, Exception]) -> None:
<add> """Write error into error file by path"""
<add> with open(error_file, "wb") as fd:
<add> try:
<add> pickle.dump(error, fd)
<add> except Exception: # pylint: disable=broad-except
<add> # local class objects cannot be pickled, so we fallback
<add> # to store the string representation instead
<add> pickle.dump(str(error), fd)
<add>
<add>
<ide> def clear_task_instances(
<ide> tis,
<ide> session,
<ide> def _run_raw_task(
<ide> test_mode: bool = False,
<ide> job_id: Optional[str] = None,
<ide> pool: Optional[str] = None,
<add> error_file: Optional[str] = None,
<ide> session=None,
<ide> ) -> None:
<ide> """
<ide> def _run_raw_task(
<ide> return
<ide> except AirflowFailException as e:
<ide> self.refresh_from_db()
<del> self.handle_failure(e, test_mode, context, force_fail=True)
<add> self.handle_failure(e, test_mode, force_fail=True, error_file=error_file)
<ide> raise
<ide> except AirflowException as e:
<ide> self.refresh_from_db()
<ide> def _run_raw_task(
<ide> if self.state in {State.SUCCESS, State.FAILED}:
<ide> return
<ide> else:
<del> self.handle_failure(e, test_mode, context)
<add> self.handle_failure(e, test_mode, error_file=error_file)
<ide> raise
<ide> except (Exception, KeyboardInterrupt) as e:
<del> self.handle_failure(e, test_mode, context)
<add> self.handle_failure(e, test_mode, error_file=error_file)
<ide> raise
<ide> finally:
<ide> Stats.incr(f'ti.finish.{task.dag_id}.{task.task_id}.{self.state}')
<ide>
<del> self._run_success_callback(context, task)
<del>
<ide> # Recording SUCCESS
<ide> self.end_date = timezone.utcnow()
<ide> self.log.info(
<ide> def _update_ti_state_for_sensing(self, session=None):
<ide> # Raise exception for sensing state
<ide> raise AirflowSmartSensorException("Task successfully registered in smart sensor.")
<ide>
<del> def _run_success_callback(self, context, task):
<del> """Functions that need to be run if Task is successful"""
<del> # Success callback
<del> try:
<del> if task.on_success_callback:
<del> task.on_success_callback(context)
<del> except Exception as exc: # pylint: disable=broad-except
<del> self.log.error("Failed when executing success callback")
<del> self.log.exception(exc)
<del>
<ide> def _execute_task(self, context, task_copy):
<ide> """Executes Task (optionally with a Timeout) and pushes Xcom results"""
<ide> # If a timeout is specified for the task, make it fail
<ide> def _execute_task(self, context, task_copy):
<ide> self.xcom_push(key=XCOM_RETURN_KEY, value=result)
<ide> return result
<ide>
<del> def _run_execute_callback(self, context, task):
<add> def _run_execute_callback(self, context: Context, task):
<ide> """Functions that need to be run before a Task is executed"""
<ide> try:
<ide> if task.on_execute_callback:
<ide> def _run_execute_callback(self, context, task):
<ide> self.log.error("Failed when executing execute callback")
<ide> self.log.exception(exc)
<ide>
<add> def _run_finished_callback(self, error: Optional[Union[str, Exception]] = None) -> None:
<add> """
<add> Call callback defined for finished state change.
<add>
<add> NOTE: Only invoke this function from caller of self._run_raw_task or
<add> self.run
<add> """
<add> if self.state == State.FAILED:
<add> task = self.task
<add> if task.on_failure_callback is not None:
<add> context = self.get_template_context()
<add> context["exception"] = error
<add> task.on_failure_callback(context)
<add> elif self.state == State.SUCCESS:
<add> task = self.task
<add> if task.on_success_callback is not None:
<add> context = self.get_template_context()
<add> task.on_success_callback(context)
<add> elif self.state == State.UP_FOR_RETRY:
<add> task = self.task
<add> if task.on_retry_callback is not None:
<add> context = self.get_template_context()
<add> context["exception"] = error
<add> task.on_retry_callback(context)
<add>
<ide> @provide_session
<ide> def run( # pylint: disable=too-many-arguments
<ide> self,
<ide> def run( # pylint: disable=too-many-arguments
<ide> pool=pool,
<ide> session=session,
<ide> )
<del> if res:
<add> if not res:
<add> return
<add>
<add> try:
<add> error_fd = NamedTemporaryFile(delete=True)
<ide> self._run_raw_task(
<del> mark_success=mark_success, test_mode=test_mode, job_id=job_id, pool=pool, session=session
<add> mark_success=mark_success,
<add> test_mode=test_mode,
<add> job_id=job_id,
<add> pool=pool,
<add> error_file=error_fd.name,
<add> session=session,
<ide> )
<add> finally:
<add> error = None if self.state == State.SUCCESS else load_error_file(error_fd)
<add> error_fd.close()
<add> self._run_finished_callback(error=error)
<ide>
<ide> def dry_run(self):
<ide> """Only Renders Templates for the TI"""
<ide> def _handle_reschedule(self, actual_start_date, reschedule_exception, test_mode=
<ide> self.log.info('Rescheduling task, marking task as UP_FOR_RESCHEDULE')
<ide>
<ide> @provide_session
<del> def handle_failure(self, error, test_mode=None, context=None, force_fail=False, session=None):
<add> def handle_failure(
<add> self,
<add> error: Union[str, Exception],
<add> test_mode: Optional[bool] = None,
<add> force_fail: bool = False,
<add> error_file: Optional[str] = None,
<add> session=None,
<add> ) -> None:
<ide> """Handle Failure for the TaskInstance"""
<ide> if test_mode is None:
<ide> test_mode = self.test_mode
<del> if context is None:
<del> context = self.get_template_context()
<ide>
<del> self.log.exception(error)
<add> if error:
<add> self.log.exception(error)
<add> # external monitoring process provides pickle file so _run_raw_task
<add> # can send its runtime errors for access by failure callback
<add> if error_file:
<add> set_error_file(error_file, error)
<add>
<ide> task = self.task
<ide> self.end_date = timezone.utcnow()
<ide> self.set_duration()
<ide> def handle_failure(self, error, test_mode=None, context=None, force_fail=False,
<ide> # Log failure duration
<ide> session.add(TaskFail(task, self.execution_date, self.start_date, self.end_date))
<ide>
<del> if context is not None:
<del> context['exception'] = error
<add> # Set state correctly and figure out how to log it and decide whether
<add> # to email
<ide>
<del> # Set state correctly and figure out how to log it,
<del> # what callback to call if any, and how to decide whether to email
<add> # Note, callback invocation needs to be handled by caller of
<add> # _run_raw_task to avoid race conditions which could lead to duplicate
<add> # invocations or miss invocation.
<ide>
<ide> # Since this function is called only when the TaskInstance state is running,
<ide> # try_number contains the current try_number (not the next). We
<ide> def handle_failure(self, error, test_mode=None, context=None, force_fail=False,
<ide> else:
<ide> log_message = "Marking task as FAILED."
<ide> email_for_state = task.email_on_failure
<del> callback = task.on_failure_callback
<ide> else:
<ide> self.state = State.UP_FOR_RETRY
<ide> log_message = "Marking task as UP_FOR_RETRY."
<ide> email_for_state = task.email_on_retry
<del> callback = task.on_retry_callback
<ide>
<ide> self.log.info(
<ide> '%s dag_id=%s, task_id=%s, execution_date=%s, start_date=%s, end_date=%s',
<ide> def handle_failure(self, error, test_mode=None, context=None, force_fail=False,
<ide> self.log.error('Failed to send email to: %s', task.email)
<ide> self.log.exception(exec2)
<ide>
<del> # Handling callbacks pessimistically
<del> if callback:
<del> try:
<del> callback(context)
<del> except Exception as exec3: # pylint: disable=broad-except
<del> self.log.error("Failed at executing callback")
<del> self.log.exception(exec3)
<del>
<ide> if not test_mode:
<ide> session.merge(self)
<ide> session.commit()
<ide>
<add> @provide_session
<add> def handle_failure_with_callback(
<add> self,
<add> error: Union[str, Exception],
<add> test_mode: Optional[bool] = None,
<add> force_fail: bool = False,
<add> session=None,
<add> ) -> None:
<add> self.handle_failure(error=error, test_mode=test_mode, force_fail=force_fail, session=session)
<add> self._run_finished_callback(error=error)
<add>
<ide> def is_eligible_to_retry(self):
<ide> """Is task instance is eligible for retry"""
<ide> return self.task.retries and self.try_number <= self.max_tries
<ide><path>airflow/task/task_runner/base_task_runner.py
<ide> import os
<ide> import subprocess
<ide> import threading
<add>from tempfile import NamedTemporaryFile
<add>from typing import Optional, Union
<ide>
<ide> from airflow.configuration import conf
<ide> from airflow.exceptions import AirflowConfigException
<add>from airflow.models.taskinstance import load_error_file
<ide> from airflow.utils.configuration import tmp_configuration_copy
<ide> from airflow.utils.log.logging_mixin import LoggingMixin
<ide> from airflow.utils.net import get_hostname
<ide> def __init__(self, local_task_job):
<ide> # - the runner can read/execute those values as it needs
<ide> cfg_path = tmp_configuration_copy(chmod=0o600)
<ide>
<add> self._error_file = NamedTemporaryFile(delete=True)
<ide> self._cfg_path = cfg_path
<del> self._command = popen_prepend + self._task_instance.command_as_list(
<del> raw=True,
<del> pickle_id=local_task_job.pickle_id,
<del> mark_success=local_task_job.mark_success,
<del> job_id=local_task_job.id,
<del> pool=local_task_job.pool,
<del> cfg_path=cfg_path,
<add> self._command = (
<add> popen_prepend
<add> + self._task_instance.command_as_list(
<add> raw=True,
<add> pickle_id=local_task_job.pickle_id,
<add> mark_success=local_task_job.mark_success,
<add> job_id=local_task_job.id,
<add> pool=local_task_job.pool,
<add> cfg_path=cfg_path,
<add> )
<add> + ["--error-file", self._error_file.name]
<ide> )
<ide> self.process = None
<ide>
<add> def deserialize_run_error(self) -> Optional[Union[str, Exception]]:
<add> """Return task runtime error if its written to provided error file."""
<add> return load_error_file(self._error_file)
<add>
<ide> def _read_task_logs(self, stream):
<ide> while True:
<ide> line = stream.readline()
<ide> def start(self):
<ide> """Start running the task instance in a subprocess."""
<ide> raise NotImplementedError()
<ide>
<del> def return_code(self):
<add> def return_code(self) -> Optional[int]:
<ide> """
<ide> :return: The return code associated with running the task instance or
<ide> None if the task is not yet done.
<ide> :rtype: int
<ide> """
<ide> raise NotImplementedError()
<ide>
<del> def terminate(self):
<del> """Kill the running task instance."""
<add> def terminate(self) -> None:
<add> """Force kill the running task instance."""
<ide> raise NotImplementedError()
<ide>
<del> def on_finish(self):
<add> def on_finish(self) -> None:
<ide> """A callback that should be called when this is done running."""
<ide> if self._cfg_path and os.path.isfile(self._cfg_path):
<ide> if self.run_as_user:
<ide> subprocess.call(['sudo', 'rm', self._cfg_path], close_fds=True)
<ide> else:
<ide> os.remove(self._cfg_path)
<add> self._error_file.close()
<ide><path>airflow/task/task_runner/standard_task_runner.py
<ide> """Standard task runner"""
<ide> import logging
<ide> import os
<add>from typing import Optional
<ide>
<ide> import psutil
<ide> from setproctitle import setproctitle # pylint: disable=no-name-in-module
<ide> def _start_by_fork(self): # pylint: disable=inconsistent-return-statements
<ide> logging.shutdown()
<ide> os._exit(return_code) # pylint: disable=protected-access
<ide>
<del> def return_code(self, timeout=0):
<add> def return_code(self, timeout: int = 0) -> Optional[int]:
<ide> # We call this multiple times, but we can only wait on the process once
<ide> if self._rc is not None or not self.process:
<ide> return self._rc
<ide><path>airflow/utils/process_utils.py
<ide> DEFAULT_TIME_TO_WAIT_AFTER_SIGTERM = conf.getint('core', 'KILLED_TASK_CLEANUP_TIME')
<ide>
<ide>
<del>def reap_process_group(pgid, logger, sig=signal.SIGTERM, timeout=DEFAULT_TIME_TO_WAIT_AFTER_SIGTERM):
<add>def reap_process_group(
<add> pgid: int,
<add> logger,
<add> sig: 'signal.Signals' = signal.SIGTERM,
<add> timeout: int = DEFAULT_TIME_TO_WAIT_AFTER_SIGTERM,
<add>) -> Dict[int, int]:
<ide> """
<ide> Tries really hard to terminate all processes in the group (including grandchildren). Will send
<ide> sig (SIGTERM) to the process group of pid. If any process is alive after timeout
<ide><path>tests/core/test_core.py
<ide> def test_on_failure_callback(self):
<ide> # Annoying workaround for nonlocal not existing in python 2
<ide> data = {'called': False}
<ide>
<del> def check_failure(context, test_case=self):
<add> def check_failure(context, test_case=self): # pylint: disable=unused-argument
<ide> data['called'] = True
<del> error = context.get('exception')
<add> error = context.get("exception")
<ide> test_case.assertIsInstance(error, AirflowException)
<ide>
<ide> op = BashOperator(
<ide><path>tests/jobs/test_local_task_job.py
<ide> import time
<ide> import unittest
<ide> import uuid
<add>from multiprocessing import Lock, Value
<ide> from unittest import mock
<ide> from unittest.mock import patch
<ide>
<ide> import pytest
<ide>
<ide> from airflow import settings
<del>from airflow.exceptions import AirflowException
<add>from airflow.exceptions import AirflowException, AirflowFailException
<ide> from airflow.executors.sequential_executor import SequentialExecutor
<ide> from airflow.jobs.local_task_job import LocalTaskJob
<ide> from airflow.models.dag import DAG
<ide> from airflow.models.dagbag import DagBag
<ide> from airflow.models.taskinstance import TaskInstance
<del>from airflow.operators.dummy import DummyOperator
<add>from airflow.operators.dummy_operator import DummyOperator
<ide> from airflow.operators.python import PythonOperator
<add>from airflow.task.task_runner.standard_task_runner import StandardTaskRunner
<ide> from airflow.utils import timezone
<ide> from airflow.utils.net import get_hostname
<ide> from airflow.utils.session import create_session
<ide> def test_localtaskjob_double_trigger(self):
<ide> ti_run = TaskInstance(task=task, execution_date=DEFAULT_DATE)
<ide> ti_run.refresh_from_db()
<ide> job1 = LocalTaskJob(task_instance=ti_run, executor=SequentialExecutor())
<del> from airflow.task.task_runner.standard_task_runner import StandardTaskRunner
<del>
<ide> with patch.object(StandardTaskRunner, 'start', return_value=None) as mock_method:
<ide> job1.run()
<ide> mock_method.assert_not_called()
<ide> def multi_return_code():
<ide> return return_codes.pop(0)
<ide>
<ide> time_start = time.time()
<del> from airflow.task.task_runner.standard_task_runner import StandardTaskRunner
<del>
<ide> with patch.object(StandardTaskRunner, 'start', return_value=None) as mock_start:
<ide> with patch.object(StandardTaskRunner, 'return_code') as mock_ret_code:
<ide> mock_ret_code.side_effect = multi_return_code
<ide> def test_mark_failure_on_failure_callback(self):
<ide> Test that ensures that mark_failure in the UI fails
<ide> the task, and executes on_failure_callback
<ide> """
<del> data = {'called': False}
<add> # use shared memory value so we can properly track value change even if
<add> # it's been updated across processes.
<add> failure_callback_called = Value('i', 0)
<add> task_terminated_externally = Value('i', 1)
<ide>
<ide> def check_failure(context):
<add> with failure_callback_called.get_lock():
<add> failure_callback_called.value += 1
<ide> assert context['dag_run'].dag_id == 'test_mark_failure'
<del> data['called'] = True
<add> assert context['exception'] == "task marked as failed externally"
<ide>
<ide> def task_function(ti):
<del> print("python_callable run in pid %s", os.getpid())
<ide> with create_session() as session:
<ide> assert State.RUNNING == ti.state
<ide> ti.log.info("Marking TI as failed 'externally'")
<ide> ti.state = State.FAILED
<ide> session.merge(ti)
<ide> session.commit()
<ide>
<del> time.sleep(60)
<add> time.sleep(10)
<ide> # This should not happen -- the state change should be noticed and the task should get killed
<del> data['reached_end_of_sleep'] = True
<add> with task_terminated_externally.get_lock():
<add> task_terminated_externally.value = 0
<ide>
<ide> with DAG(dag_id='test_mark_failure', start_date=DEFAULT_DATE) as dag:
<ide> task = PythonOperator(
<ide> def task_function(ti):
<ide> on_failure_callback=check_failure,
<ide> )
<ide>
<del> session = settings.Session()
<del>
<ide> dag.clear()
<del> dag.create_dagrun(
<del> run_id="test",
<del> state=State.RUNNING,
<del> execution_date=DEFAULT_DATE,
<del> start_date=DEFAULT_DATE,
<del> session=session,
<del> )
<add> with create_session() as session:
<add> dag.create_dagrun(
<add> run_id="test",
<add> state=State.RUNNING,
<add> execution_date=DEFAULT_DATE,
<add> start_date=DEFAULT_DATE,
<add> session=session,
<add> )
<ide> ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
<ide> ti.refresh_from_db()
<ide>
<ide> def task_function(ti):
<ide>
<ide> ti.refresh_from_db()
<ide> assert ti.state == State.FAILED
<del> assert data['called']
<del> assert 'reached_end_of_sleep' not in data, 'Task should not have been allowed to run to completion'
<add> assert failure_callback_called.value == 1
<add> assert task_terminated_externally.value == 1
<add>
<add> @patch('airflow.utils.process_utils.subprocess.check_call')
<add> @patch.object(StandardTaskRunner, 'return_code')
<add> def test_failure_callback_only_called_once(self, mock_return_code, _check_call):
<add> """
<add> Test that ensures that when a task exits with failure by itself,
<add> failure callback is only called once
<add> """
<add> # use shared memory value so we can properly track value change even if
<add> # it's been updated across processes.
<add> failure_callback_called = Value('i', 0)
<add> callback_count_lock = Lock()
<add>
<add> def failure_callback(context):
<add> with callback_count_lock:
<add> failure_callback_called.value += 1
<add> assert context['dag_run'].dag_id == 'test_failure_callback_race'
<add> assert isinstance(context['exception'], AirflowFailException)
<add>
<add> def task_function(ti):
<add> raise AirflowFailException()
<add>
<add> dag = DAG(dag_id='test_failure_callback_race', start_date=DEFAULT_DATE)
<add> task = PythonOperator(
<add> task_id='test_exit_on_failure',
<add> python_callable=task_function,
<add> on_failure_callback=failure_callback,
<add> dag=dag,
<add> )
<add>
<add> dag.clear()
<add> with create_session() as session:
<add> dag.create_dagrun(
<add> run_id="test",
<add> state=State.RUNNING,
<add> execution_date=DEFAULT_DATE,
<add> start_date=DEFAULT_DATE,
<add> session=session,
<add> )
<add> ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
<add> ti.refresh_from_db()
<add>
<add> job1 = LocalTaskJob(task_instance=ti, ignore_ti_state=True, executor=SequentialExecutor())
<add>
<add> # Simulate race condition where job1 heartbeat ran right after task
<add> # state got set to failed by ti.handle_failure but before task process
<add> # fully exits. See _execute loop in airflow/jobs/local_task_job.py.
<add> # In this case, we have:
<add> # * task_runner.return_code() is None
<add> # * ti.state == State.Failed
<add> #
<add> # We also need to set return_code to a valid int after job1.terminating
<add> # is set to True so _execute loop won't loop forever.
<add> def dummy_return_code(*args, **kwargs):
<add> return None if not job1.terminating else -9
<add>
<add> mock_return_code.side_effect = dummy_return_code
<add>
<add> with timeout(10):
<add> # This should be _much_ shorter to run.
<add> # If you change this limit, make the timeout in the callbable above bigger
<add> job1.run()
<add>
<add> ti.refresh_from_db()
<add> assert ti.state == State.FAILED # task exits with failure state
<add> assert failure_callback_called.value == 1
<ide>
<del> @pytest.mark.quarantined
<ide> def test_mark_success_on_success_callback(self):
<ide> """
<ide> Test that ensures that where a task is marked suceess in the UI
<ide> on_success_callback gets executed
<ide> """
<del> data = {'called': False}
<add> # use shared memory value so we can properly track value change even if
<add> # it's been updated across processes.
<add> success_callback_called = Value('i', 0)
<add> task_terminated_externally = Value('i', 1)
<add> shared_mem_lock = Lock()
<ide>
<ide> def success_callback(context):
<add> with shared_mem_lock:
<add> success_callback_called.value += 1
<ide> assert context['dag_run'].dag_id == 'test_mark_success'
<del> data['called'] = True
<ide>
<ide> dag = DAG(dag_id='test_mark_success', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
<ide>
<del> task = DummyOperator(task_id='test_state_succeeded1', dag=dag, on_success_callback=success_callback)
<add> def task_function(ti):
<add> # pylint: disable=unused-argument
<add> time.sleep(60)
<add> # This should not happen -- the state change should be noticed and the task should get killed
<add> with shared_mem_lock:
<add> task_terminated_externally.value = 0
<add>
<add> task = PythonOperator(
<add> task_id='test_state_succeeded1',
<add> python_callable=task_function,
<add> on_success_callback=success_callback,
<add> dag=dag,
<add> )
<ide>
<ide> session = settings.Session()
<ide>
<ide> def success_callback(context):
<ide> ti = TaskInstance(task=task, execution_date=DEFAULT_DATE)
<ide> ti.refresh_from_db()
<ide> job1 = LocalTaskJob(task_instance=ti, ignore_ti_state=True, executor=SequentialExecutor())
<del> from airflow.task.task_runner.standard_task_runner import StandardTaskRunner
<del>
<ide> job1.task_runner = StandardTaskRunner(job1)
<add>
<add> settings.engine.dispose()
<ide> process = multiprocessing.Process(target=job1.run)
<ide> process.start()
<del> ti.refresh_from_db()
<del> for _ in range(0, 50):
<add>
<add> for _ in range(0, 25):
<add> ti.refresh_from_db()
<ide> if ti.state == State.RUNNING:
<ide> break
<del> time.sleep(0.1)
<del> ti.refresh_from_db()
<del> assert State.RUNNING == ti.state
<add> time.sleep(0.2)
<add> assert ti.state == State.RUNNING
<ide> ti.state = State.SUCCESS
<ide> session.merge(ti)
<ide> session.commit()
<ide>
<del> job1.heartbeat_callback(session=None)
<del> assert data['called']
<ide> process.join(timeout=10)
<add> assert success_callback_called.value == 1
<add> assert task_terminated_externally.value == 1
<ide> assert not process.is_alive()
<ide>
<ide>
<ide> def test_number_of_queries_single_loop(self, mock_get_task_runner, return_codes)
<ide> mock_get_task_runner.return_value.return_code.side_effects = return_codes
<ide>
<ide> job = LocalTaskJob(task_instance=ti, executor=MockExecutor())
<del> with assert_queries_count(12):
<add> with assert_queries_count(13):
<ide> job.run()
<ide><path>tests/jobs/test_scheduler_job.py
<ide> def test_runs_respected_after_clear(self):
<ide> num_scheduled = scheduler._schedule_dag_run(dr3, {dr1.execution_date}, session)
<ide> assert num_scheduled == 0
<ide>
<del> @patch.object(TaskInstance, 'handle_failure')
<add> @patch.object(TaskInstance, 'handle_failure_with_callback')
<ide> def test_execute_on_failure_callbacks(self, mock_ti_handle_failure):
<ide> dagbag = DagBag(dag_folder="/dev/null", include_examples=True, read_dags_from_db=False)
<ide> dag_file_processor = DagFileProcessor(dag_ids=[], log=mock.MagicMock())
<ide> def test_execute_on_failure_callbacks(self, mock_ti_handle_failure):
<ide> ]
<ide> dag_file_processor.execute_callbacks(dagbag, requests)
<ide> mock_ti_handle_failure.assert_called_once_with(
<del> "Message", conf.getboolean('core', 'unit_test_mode'), mock.ANY
<add> error="Message",
<add> test_mode=conf.getboolean('core', 'unit_test_mode'),
<ide> )
<ide>
<ide> def test_process_file_should_failure_callback(self):
<ide><path>tests/models/test_taskinstance.py
<ide> def test_success_callback_no_race_condition(self):
<ide>
<ide> callback_wrapper.wrap_task_instance(ti)
<ide> ti._run_raw_task()
<add> ti._run_finished_callback()
<ide> assert callback_wrapper.callback_ran
<del> assert callback_wrapper.task_state_in_callback == State.RUNNING
<add> assert callback_wrapper.task_state_in_callback == State.SUCCESS
<ide> ti.refresh_from_db()
<ide> assert ti.state == State.SUCCESS
<ide>
<ide> def test_handle_failure(self):
<ide> ti1 = TI(task=task1, execution_date=start_date)
<ide> ti1.state = State.FAILED
<ide> ti1.handle_failure("test failure handling")
<add> ti1._run_finished_callback()
<ide>
<ide> context_arg_1 = mock_on_failure_1.call_args[0][0]
<ide> assert context_arg_1 and "task_instance" in context_arg_1
<ide> def test_handle_failure(self):
<ide> ti2 = TI(task=task2, execution_date=start_date)
<ide> ti2.state = State.FAILED
<ide> ti2.handle_failure("test retry handling")
<add> ti2._run_finished_callback()
<ide>
<ide> mock_on_failure_2.assert_not_called()
<ide>
<ide> def test_handle_failure(self):
<ide> ti3 = TI(task=task3, execution_date=start_date)
<ide> ti3.state = State.FAILED
<ide> ti3.handle_failure("test force_fail handling", force_fail=True)
<add> ti3._run_finished_callback()
<ide>
<ide> context_arg_3 = mock_on_failure_3.call_args[0][0]
<ide> assert context_arg_3 and "task_instance" in context_arg_3
<ide><path>tests/providers/apache/hive/transfers/test_mysql_to_hive.py
<ide>
<ide> import unittest
<ide> from collections import OrderedDict
<add>from os import path
<ide> from unittest import mock
<ide>
<ide> import pytest
<ide> TEST_DAG_ID = 'unit_test_dag'
<ide>
<ide>
<add>class HiveopTempFile:
<add> """
<add> Make sure temp file path is in the format of "/tmp/airflow_hiveop_t_78lpye/tmpour2_kig",
<add> """
<add>
<add> def __eq__(self, other: str) -> bool:
<add> (head, tail) = path.split(other)
<add> (head, tail) = path.split(head)
<add> return tail.startswith("airflow_hiveop_")
<add>
<add>
<add>class HiveopTempDir:
<add> """
<add> Make sure temp dir path is in the format of "/tmp/airflow_hiveop_t_78lpye",
<add> """
<add>
<add> def __eq__(self, other: str) -> bool:
<add> (_, tail) = path.split(other)
<add> return tail.startswith("airflow_hiveop_")
<add>
<add>
<ide> @pytest.mark.backend("mysql")
<ide> class TestTransfer(unittest.TestCase):
<ide> def setUp(self):
<ide> def tearDown(self):
<ide> with MySqlHook().get_conn() as cur:
<ide> cur.execute("DROP TABLE IF EXISTS baby_names CASCADE;")
<ide>
<del> @mock.patch('tempfile.tempdir', '/tmp/')
<del> @mock.patch('tempfile._RandomNameSequence.__next__')
<ide> @mock.patch('subprocess.Popen')
<del> def test_mysql_to_hive(self, mock_popen, mock_temp_dir):
<add> def test_mysql_to_hive(self, mock_popen):
<ide> mock_subprocess = MockSubProcess()
<ide> mock_popen.return_value = mock_subprocess
<del> mock_temp_dir.return_value = "test_mysql_to_hive"
<ide>
<ide> with mock.patch.dict('os.environ', self.env_vars):
<ide> sql = "SELECT * FROM baby_names LIMIT 1000;"
<ide> def test_mysql_to_hive(self, mock_popen, mock_temp_dir):
<ide> '-hiveconf',
<ide> 'tez.queue.name=airflow',
<ide> '-f',
<del> '/tmp/airflow_hiveop_test_mysql_to_hive/tmptest_mysql_to_hive',
<add> HiveopTempFile(),
<ide> ]
<ide>
<ide> mock_popen.assert_called_with(
<ide> hive_cmd,
<ide> stdout=mock_subprocess.PIPE,
<ide> stderr=mock_subprocess.STDOUT,
<del> cwd="/tmp/airflow_hiveop_test_mysql_to_hive",
<add> cwd=HiveopTempDir(),
<ide> close_fds=True,
<ide> )
<ide>
<del> @mock.patch('tempfile.tempdir', '/tmp/')
<del> @mock.patch('tempfile._RandomNameSequence.__next__')
<ide> @mock.patch('subprocess.Popen')
<del> def test_mysql_to_hive_partition(self, mock_popen, mock_temp_dir):
<add> def test_mysql_to_hive_partition(self, mock_popen):
<ide> mock_subprocess = MockSubProcess()
<ide> mock_popen.return_value = mock_subprocess
<del> mock_temp_dir.return_value = "test_mysql_to_hive_partition"
<ide>
<ide> with mock.patch.dict('os.environ', self.env_vars):
<ide> sql = "SELECT * FROM baby_names LIMIT 1000;"
<ide> def test_mysql_to_hive_partition(self, mock_popen, mock_temp_dir):
<ide> '-hiveconf',
<ide> 'tez.queue.name=airflow',
<ide> '-f',
<del> '/tmp/airflow_hiveop_test_mysql_to_hive_partition/tmptest_mysql_to_hive_partition',
<add> HiveopTempFile(),
<ide> ]
<ide>
<ide> mock_popen.assert_called_with(
<ide> hive_cmd,
<ide> stdout=mock_subprocess.PIPE,
<ide> stderr=mock_subprocess.STDOUT,
<del> cwd="/tmp/airflow_hiveop_test_mysql_to_hive_partition",
<add> cwd=HiveopTempDir(),
<ide> close_fds=True,
<ide> )
<ide>
<del> @mock.patch('tempfile.tempdir', '/tmp/')
<del> @mock.patch('tempfile._RandomNameSequence.__next__')
<ide> @mock.patch('subprocess.Popen')
<del> def test_mysql_to_hive_tblproperties(self, mock_popen, mock_temp_dir):
<add> def test_mysql_to_hive_tblproperties(self, mock_popen):
<ide> mock_subprocess = MockSubProcess()
<ide> mock_popen.return_value = mock_subprocess
<del> mock_temp_dir.return_value = "test_mysql_to_hive"
<ide>
<ide> with mock.patch.dict('os.environ', self.env_vars):
<ide> sql = "SELECT * FROM baby_names LIMIT 1000;"
<ide> def test_mysql_to_hive_tblproperties(self, mock_popen, mock_temp_dir):
<ide> '-hiveconf',
<ide> 'tez.queue.name=airflow',
<ide> '-f',
<del> '/tmp/airflow_hiveop_test_mysql_to_hive/tmptest_mysql_to_hive',
<add> HiveopTempFile(),
<ide> ]
<ide>
<ide> mock_popen.assert_called_with(
<ide> hive_cmd,
<ide> stdout=mock_subprocess.PIPE,
<ide> stderr=mock_subprocess.STDOUT,
<del> cwd="/tmp/airflow_hiveop_test_mysql_to_hive",
<add> cwd=HiveopTempDir(),
<ide> close_fds=True,
<ide> )
<ide>
<ide> def test_mysql_to_hive_type_conversion(self, mock_load_file):
<ide> with hook.get_conn() as conn:
<ide> conn.execute(f"DROP TABLE IF EXISTS {mysql_table}")
<ide>
<del> @mock.patch('tempfile.tempdir', '/tmp/')
<del> @mock.patch('tempfile._RandomNameSequence.__next__')
<ide> @mock.patch('subprocess.Popen')
<del> def test_mysql_to_hive_verify_csv_special_char(self, mock_popen, mock_temp_dir):
<add> def test_mysql_to_hive_verify_csv_special_char(self, mock_popen):
<ide> mock_subprocess = MockSubProcess()
<ide> mock_popen.return_value = mock_subprocess
<del> mock_temp_dir.return_value = "test_mysql_to_hive"
<ide>
<ide> mysql_table = 'test_mysql_to_hive'
<ide> hive_table = 'test_mysql_to_hive'
<ide> def test_mysql_to_hive_verify_csv_special_char(self, mock_popen, mock_temp_dir):
<ide> '-hiveconf',
<ide> 'tez.queue.name=airflow',
<ide> '-f',
<del> '/tmp/airflow_hiveop_test_mysql_to_hive/tmptest_mysql_to_hive',
<add> HiveopTempFile(),
<ide> ]
<ide>
<ide> mock_popen.assert_called_with(
<ide> hive_cmd,
<ide> stdout=mock_subprocess.PIPE,
<ide> stderr=mock_subprocess.STDOUT,
<del> cwd="/tmp/airflow_hiveop_test_mysql_to_hive",
<add> cwd=HiveopTempDir(),
<ide> close_fds=True,
<ide> )
<ide> finally:
<ide> with hook.get_conn() as conn:
<ide> conn.execute(f"DROP TABLE IF EXISTS {mysql_table}")
<ide>
<del> @mock.patch('tempfile.tempdir', '/tmp/')
<del> @mock.patch('tempfile._RandomNameSequence.__next__')
<ide> @mock.patch('subprocess.Popen')
<del> def test_mysql_to_hive_verify_loaded_values(self, mock_popen, mock_temp_dir):
<add> def test_mysql_to_hive_verify_loaded_values(self, mock_popen):
<ide> mock_subprocess = MockSubProcess()
<ide> mock_popen.return_value = mock_subprocess
<del> mock_temp_dir.return_value = "test_mysql_to_hive"
<ide>
<ide> mysql_table = 'test_mysql_to_hive'
<ide> hive_table = 'test_mysql_to_hive'
<ide> def test_mysql_to_hive_verify_loaded_values(self, mock_popen, mock_temp_dir):
<ide> '-hiveconf',
<ide> 'tez.queue.name=airflow',
<ide> '-f',
<del> '/tmp/airflow_hiveop_test_mysql_to_hive/tmptest_mysql_to_hive',
<add> HiveopTempFile(),
<ide> ]
<ide>
<ide> mock_popen.assert_called_with(
<ide> hive_cmd,
<ide> stdout=mock_subprocess.PIPE,
<ide> stderr=mock_subprocess.STDOUT,
<del> cwd="/tmp/airflow_hiveop_test_mysql_to_hive",
<add> cwd=HiveopTempDir(),
<ide> close_fds=True,
<ide> )
<ide> | 16 |
PHP | PHP | fix loading of old interface | 1d0f6c536d7eb243a40ab9663a2d8d101ce9b8d8 | <ide><path>src/Datasource/Paging/PaginatorInterface.php
<ide> public function paginate(object $object, array $params = [], array $settings = [
<ide> */
<ide> public function getPagingParams(): array;
<ide> }
<add>
<add>// phpcs:disable
<add>// The old interface Cake\Datasource\PaginatorInterface will not get loaded during
<add>// instanceof / type checks so ensure it's loaded here.
<add>class_exists('Cake\Datasource\PaginatorInterface');
<add>// phpcs:enable | 1 |
Go | Go | ci only unpause on hyper-v containers | 64615c9aa8e21bf9cd39e6ad4496c9e9c1bce55f | <ide><path>integration-cli/environment/clean.go
<ide> type logT interface {
<ide> // and removing everything else. It's meant to run after any tests so that they don't
<ide> // depend on each others.
<ide> func (e *Execution) Clean(t testingT, dockerBinary string) {
<del> unpauseAllContainers(t, dockerBinary)
<add> if (e.DaemonPlatform() != "windows") || (e.DaemonPlatform() == "windows" && e.Isolation() == "hyperv") {
<add> unpauseAllContainers(t, dockerBinary)
<add> }
<ide> deleteAllContainers(t, dockerBinary)
<ide> deleteAllImages(t, dockerBinary, e.protectedElements.images)
<ide> deleteAllVolumes(t, dockerBinary) | 1 |
Javascript | Javascript | dismiss all logs on fast refresh | bdd1a675ba6949868d11ef0c3dccaa8f336ca384 | <ide><path>Libraries/LogBox/Data/LogBoxData.js
<ide> export function symbolicateLogLazy(log: LogBoxLog) {
<ide> }
<ide>
<ide> export function clear(): void {
<del> const newLogs = Array.from(logs).filter(log => log.level === 'syntax');
<del> if (newLogs.length !== logs.size) {
<del> logs = new Set(newLogs);
<add> if (logs.size > 0) {
<add> logs = new Set();
<ide> handleUpdate();
<ide> }
<ide> }
<ide> export function clearErrors(): void {
<ide> }
<ide> }
<ide>
<del>export function clearSyntaxErrors(): void {
<del> const newLogs = Array.from(logs).filter(log => log.level !== 'syntax');
<del> if (newLogs.length !== logs.size) {
<del> logs = new Set(newLogs);
<del> handleUpdate();
<del> }
<del>}
<del>
<ide> export function dismiss(log: LogBoxLog): void {
<ide> if (logs.has(log)) {
<ide> logs.delete(log);
<ide><path>Libraries/LogBox/Data/__tests__/LogBoxData-test.js
<ide> describe('LogBoxData', () => {
<ide> expect(registry()[0]).toBeUndefined();
<ide> });
<ide>
<del> it('clears all logs except syntax errors', () => {
<add> it('clears all logs', () => {
<ide> addLogs(['A', 'B']);
<ide> addSoftErrors(['C', 'D']);
<ide> addFatalErrors(['E', 'F']);
<ide> describe('LogBoxData', () => {
<ide> expect(registry().length).toBe(7);
<ide>
<ide> LogBoxData.clear();
<del> expect(registry().length).toBe(1);
<add> expect(registry().length).toBe(0);
<ide> });
<ide>
<ide> it('clears only warnings', () => {
<ide> describe('LogBoxData', () => {
<ide> expect(registry().length).toBe(3);
<ide> });
<ide>
<del> it('clears only syntax errors', () => {
<del> addLogs(['A', 'B']);
<del> addSoftErrors(['C', 'D', 'E']);
<del> addFatalErrors(['F']);
<del> addSyntaxError();
<del> flushLogs();
<del>
<del> expect(registry().length).toBe(7);
<del>
<del> LogBoxData.clearSyntaxErrors();
<del> expect(registry().length).toBe(6);
<del> });
<del>
<del> it('clears all types', () => {
<add> it('clears all types except syntax errors', () => {
<ide> addLogs(['A', 'B']);
<ide> addSoftErrors(['C', 'D', 'E']);
<ide> addFatalErrors(['F']);
<ide> describe('LogBoxData', () => {
<ide>
<ide> LogBoxData.clearErrors();
<ide> LogBoxData.clearWarnings();
<del> LogBoxData.clearSyntaxErrors();
<del> expect(registry().length).toBe(0);
<add> expect(registry().length).toBe(1);
<ide> });
<ide>
<ide> it('keeps logs in chronological order', () => {
<ide> describe('LogBoxData', () => {
<ide> expect(observer.mock.calls.length).toBe(3);
<ide> });
<ide>
<del> it('updates observers when syntax errors cleared', () => {
<del> const {observer} = observe();
<del> expect(observer.mock.calls.length).toBe(1);
<del>
<del> addLogs(['A']);
<del> addSoftErrors(['B']);
<del> addFatalErrors(['C']);
<del> addSyntaxError();
<del> jest.runAllImmediates();
<del> expect(observer.mock.calls.length).toBe(2);
<del>
<del> LogBoxData.clearSyntaxErrors();
<del> jest.runAllImmediates();
<del> expect(observer.mock.calls.length).toBe(3);
<del>
<del> // Does nothing when already empty.
<del> LogBoxData.clearSyntaxErrors();
<del> jest.runAllImmediates();
<del> expect(observer.mock.calls.length).toBe(3);
<del> });
<del>
<ide> it('updates observers when an ignore pattern is added', () => {
<ide> const {observer} = observe();
<ide> expect(observer.mock.calls.length).toBe(1);
<ide><path>Libraries/Utilities/HMRClient.js
<ide> Error: ${e.message}`;
<ide> client.on('update', ({isInitialUpdate}) => {
<ide> if (client.isEnabled() && !isInitialUpdate) {
<ide> dismissRedbox();
<del> LogBoxData.clearSyntaxErrors();
<add> LogBoxData.clear();
<ide> }
<ide> });
<ide> | 3 |
Javascript | Javascript | enable platforms to configure cli commands | a40bfa730e05c68da49e6f217ae0f161dcc7ba98 | <ide><path>local-cli/core/__tests__/findPlugins.spec.js
<ide> describe('findPlugins', () => {
<ide> jest.mock(pjsonPath, () => ({
<ide> dependencies: {'rnpm-plugin-test': '*'},
<ide> }));
<del> expect(findPlugins([ROOT])).toHaveLength(1);
<del> expect(findPlugins([ROOT])[0]).toBe('rnpm-plugin-test');
<add>
<add> expect(findPlugins([ROOT])).toHaveProperty('commands');
<add> expect(findPlugins([ROOT])).toHaveProperty('platforms');
<add> expect(findPlugins([ROOT]).commands).toHaveLength(1);
<add> expect(findPlugins([ROOT]).commands[0]).toBe('rnpm-plugin-test');
<add> expect(findPlugins([ROOT]).platforms).toHaveLength(0);
<ide> });
<ide>
<ide> it('returns an empty array if there are no plugins in this folder', () => {
<ide> jest.mock(pjsonPath, () => ({}));
<del> expect(findPlugins([ROOT])).toHaveLength(0);
<add> expect(findPlugins([ROOT])).toHaveProperty('commands');
<add> expect(findPlugins([ROOT])).toHaveProperty('platforms');
<add> expect(findPlugins([ROOT]).commands).toHaveLength(0);
<add> expect(findPlugins([ROOT]).platforms).toHaveLength(0);
<ide> });
<ide>
<del> it('returns an empty array if there is no package.json in the supplied folder', () => {
<del> expect(Array.isArray(findPlugins(['fake-path']))).toBeTruthy();
<del> expect(findPlugins(['fake-path'])).toHaveLength(0);
<add> it('returns an object with empty arrays if there is no package.json in the supplied folder', () => {
<add> expect(findPlugins(['fake-path'])).toHaveProperty('commands');
<add> expect(findPlugins(['fake-path'])).toHaveProperty('platforms');
<add> expect(findPlugins(['fake-path']).commands).toHaveLength(0);
<add> expect(findPlugins(['fake-path']).platforms).toHaveLength(0);
<ide> });
<ide>
<ide> it('returns plugins from both dependencies and dev dependencies', () => {
<ide> jest.mock(pjsonPath, () => ({
<ide> dependencies: {'rnpm-plugin-test': '*'},
<ide> devDependencies: {'rnpm-plugin-test-2': '*'},
<ide> }));
<del> expect(findPlugins([ROOT])).toHaveLength(2);
<add> expect(findPlugins([ROOT])).toHaveProperty('commands');
<add> expect(findPlugins([ROOT])).toHaveProperty('platforms');
<add> expect(findPlugins([ROOT]).commands).toHaveLength(2);
<add> expect(findPlugins([ROOT]).platforms).toHaveLength(0);
<ide> });
<ide>
<ide> it('returns unique list of plugins', () => {
<ide> jest.mock(pjsonPath, () => ({
<ide> dependencies: {'rnpm-plugin-test': '*'},
<ide> devDependencies: {'rnpm-plugin-test': '*'},
<ide> }));
<del> expect(findPlugins([ROOT])).toHaveLength(1);
<add> expect(findPlugins([ROOT]).commands).toHaveLength(1);
<ide> });
<ide> });
<ide><path>local-cli/core/findPlugins.js
<ide> const findPluginsInReactNativePackage = (pjson) => {
<ide> return path.join(pjson.name, pjson.rnpm.plugin);
<ide> };
<ide>
<add>const findPlatformsInPackage = (pjson) => {
<add> if (!pjson.rnpm || !pjson.rnpm.platform) {
<add> return [];
<add> }
<add>
<add> return path.join(pjson.name, pjson.rnpm.platform);
<add>};
<add>
<ide> const findPluginInFolder = (folder) => {
<ide> const pjson = readPackage(folder);
<ide>
<ide> if (!pjson) {
<del> return [];
<add> return {commands: [], platforms: []};
<ide> }
<ide>
<ide> const deps = union(
<ide> const findPluginInFolder = (folder) => {
<ide>
<ide> return deps.reduce(
<ide> (acc, pkg) => {
<add> let commands = acc.commands;
<add> let platforms = acc.platforms;
<ide> if (isRNPMPlugin(pkg)) {
<del> return acc.concat(pkg);
<add> commands = commands.concat(pkg);
<ide> }
<ide> if (isReactNativePlugin(pkg)) {
<ide> const pkgJson = readPackage(path.join(folder, 'node_modules', pkg));
<del> if (!pkgJson) {
<del> return acc;
<add> if (pkgJson) {
<add> commands = commands.concat(findPluginsInReactNativePackage(pkgJson));
<add> platforms = platforms.concat(findPlatformsInPackage(pkgJson));
<ide> }
<del> return acc.concat(findPluginsInReactNativePackage(pkgJson));
<ide> }
<del> return acc;
<add> return {commands: commands, platforms: platforms};
<ide> },
<del> []
<add> {commands: [], platforms: []}
<ide> );
<ide> };
<ide>
<ide> /**
<ide> * Find plugins in package.json of the given folder
<ide> * @param {String} folder Path to the folder to get the package.json from
<del> * @type {Array} Array of plugins or an empty array if no package.json found
<add> * @type {Object} Object of commands and platform plugins
<ide> */
<ide> module.exports = function findPlugins(folders) {
<del> return uniq(flatten(folders.map(findPluginInFolder)));
<add> const plugins = folders.map(findPluginInFolder);
<add> return {
<add> commands: uniq(flatten(plugins.map(p => p.commands))),
<add> platforms: uniq(flatten(plugins.map(p => p.platforms)))
<add> };
<ide> };
<ide><path>local-cli/core/index.js
<ide> const Config = require('../util/Config');
<ide> const findPlugins = require('./findPlugins');
<ide> const findAssets = require('./findAssets');
<ide> const ios = require('./ios');
<del>const windows = require('./windows');
<ide> const wrapCommands = require('./wrapCommands');
<ide>
<ide> /* $FlowFixMe(>=0.54.0 site=react_native_oss) This comment suppresses an error
<ide> import type {ConfigT} from 'metro';
<ide>
<ide> export type RNConfig = {
<ide> ...ConfigT,
<add> /**
<add> * Returns an object with all platform configurations.
<add> */
<add> getPlatformConfig(): Object,
<ide> /**
<ide> * Returns an array of project commands used by the CLI to load
<ide> */
<ide> const attachPackage = (command, pkg) => Array.isArray(command)
<ide> ? command.map(cmd => attachPackage(cmd, pkg))
<ide> : { ...command, pkg };
<ide>
<add>const appRoot = process.cwd();
<add>const plugins = findPlugins([appRoot]);
<add>const pluginPlatforms = plugins
<add> .platforms
<add> .reduce((acc, pathToPlatforms) => {
<add> // $FlowFixMe non-literal require
<add> return Object.assign(acc, require(path.join(appRoot, 'node_modules', pathToPlatforms)));
<add> },
<add> {});
<add>
<ide> const defaultRNConfig = {
<add>
<ide> getProjectCommands(): Array<CommandT> {
<del> const appRoot = process.cwd();
<del> const plugins = findPlugins([appRoot])
<add> const commands = plugins
<add> .commands
<ide> .map(pathToCommands => {
<ide> const name = pathToCommands.split(path.sep)[0];
<ide>
<ide> const defaultRNConfig = {
<ide> );
<ide> });
<ide>
<del> return flatten(plugins);
<add> return flatten(commands);
<add> },
<add>
<add> getPlatformConfig(): Object {
<add> return {
<add> ios,
<add> android,
<add> ...pluginPlatforms
<add> };
<ide> },
<ide>
<ide> getProjectConfig(): Object {
<add> const platforms = this.getPlatformConfig();
<ide> const folder = process.cwd();
<ide> const rnpm = getRNPMConfig(folder);
<ide>
<del> return Object.assign({}, rnpm, {
<del> ios: ios.projectConfig(folder, rnpm.ios || {}),
<del> android: android.projectConfig(folder, rnpm.android || {}),
<del> windows: windows.projectConfig(folder, rnpm.windows || {}),
<add> let config = Object.assign({}, rnpm, {
<ide> assets: findAssets(folder, rnpm.assets),
<ide> });
<add>
<add> Object.keys(platforms).forEach(key => {
<add> config[key] = platforms[key].projectConfig(folder, rnpm[key] || {});
<add> });
<add>
<add> return config;
<ide> },
<ide>
<ide> getDependencyConfig(packageName: string) {
<add> const platforms = this.getPlatformConfig();
<ide> const folder = path.join(process.cwd(), 'node_modules', packageName);
<ide> const rnpm = getRNPMConfig(
<ide> path.join(process.cwd(), 'node_modules', packageName)
<ide> );
<ide>
<del> return Object.assign({}, rnpm, {
<del> ios: ios.dependencyConfig(folder, rnpm.ios || {}),
<del> android: android.dependencyConfig(folder, rnpm.android || {}),
<del> windows: windows.dependencyConfig(folder, rnpm.windows || {}),
<add> let config = Object.assign({}, rnpm, {
<ide> assets: findAssets(folder, rnpm.assets),
<ide> commands: wrapCommands(rnpm.commands),
<ide> params: rnpm.params || [],
<ide> });
<add>
<add> Object.keys(platforms).forEach(key => {
<add> config[key] = platforms[key].dependencyConfig(folder, rnpm[key] || {});
<add> });
<add>
<add> return config;
<ide> },
<ide> };
<ide>
<ide><path>local-cli/core/windows/findNamespace.js
<del>/**
<del> * Copyright (c) 2015-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>'use strict';
<del>
<del>const fs = require('fs');
<del>const glob = require('glob');
<del>const path = require('path');
<del>
<del>/**
<del> * Gets package's namespace
<del> * by searching for its declaration in all C# files present in the folder
<del> *
<del> * @param {String} folder Folder to find C# files
<del> */
<del>module.exports = function getNamespace(folder) {
<del> const files = glob.sync('**/*.cs', { cwd: folder });
<del>
<del> const packages = files
<del> .map(filePath => fs.readFileSync(path.join(folder, filePath), 'utf8'))
<del> .map(file => file.match(/namespace (.*)[\s\S]+IReactPackage/))
<del> .filter(match => match);
<del>
<del> return packages.length ? packages[0][1] : null;
<del>};
<ide><path>local-cli/core/windows/findPackageClassName.js
<del>/**
<del> * Copyright (c) 2015-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>'use strict';
<del>
<del>const fs = require('fs');
<del>const glob = require('glob');
<del>const path = require('path');
<del>
<del>/**
<del> * Gets package's class name (class that implements IReactPackage)
<del> * by searching for its declaration in all C# files present in the folder
<del> *
<del> * @param {String} folder Folder to find C# files
<del> */
<del>module.exports = function getPackageClassName(folder) {
<del> const files = glob.sync('**/*.cs', { cwd: folder });
<del>
<del> const packages = files
<del> .map(filePath => fs.readFileSync(path.join(folder, filePath), 'utf8'))
<del> .map(file => file.match(/class (.*) : IReactPackage/))
<del> .filter(match => match);
<del>
<del> return packages.length ? packages[0][1] : null;
<del>};
<ide><path>local-cli/core/windows/findProject.js
<del>/**
<del> * Copyright (c) 2015-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>'use strict';
<del>
<del>const glob = require('glob');
<del>const path = require('path');
<del>
<del>/**
<del> * Find an C# project file
<del> *
<del> * @param {String} folder Name of the folder where to seek
<del> * @return {String}
<del> */
<del>module.exports = function findManifest(folder) {
<del> const csprojPath = glob.sync(path.join('**', '*.csproj'), {
<del> cwd: folder,
<del> ignore: ['node_modules/**', '**/build/**', 'Examples/**', 'examples/**'],
<del> })[0];
<del>
<del> return csprojPath ? path.join(folder, csprojPath) : null;
<del>};
<ide><path>local-cli/core/windows/findWindowsSolution.js
<del>/**
<del> * Copyright (c) 2015-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>'use strict';
<del>
<del>const glob = require('glob');
<del>const path = require('path');
<del>
<del>/**
<del> * Glob pattern to look for solution file
<del> */
<del>const GLOB_PATTERN = '**/*.sln';
<del>
<del>/**
<del> * Regexp matching all test projects
<del> */
<del>const TEST_PROJECTS = /test|example|sample/i;
<del>
<del>/**
<del> * Base windows folder
<del> */
<del>const WINDOWS_BASE = 'windows';
<del>
<del>/**
<del> * These folders will be excluded from search to speed it up
<del> */
<del>const GLOB_EXCLUDE_PATTERN = ['**/@(node_modules)/**'];
<del>
<del>/**
<del> * Finds windows project by looking for all .sln files
<del> * in given folder.
<del> *
<del> * Returns first match if files are found or null
<del> *
<del> * Note: `./windows/*.sln` are returned regardless of the name
<del> */
<del>module.exports = function findSolution(folder) {
<del> const projects = glob
<del> .sync(GLOB_PATTERN, {
<del> cwd: folder,
<del> ignore: GLOB_EXCLUDE_PATTERN,
<del> })
<del> .filter(project => {
<del> return path.dirname(project) === WINDOWS_BASE || !TEST_PROJECTS.test(project);
<del> })
<del> .sort((projectA, projectB) => {
<del> return path.dirname(projectA) === WINDOWS_BASE ? -1 : 1;
<del> });
<del>
<del> if (projects.length === 0) {
<del> return null;
<del> }
<del>
<del> return projects[0];
<del>};
<ide><path>local-cli/core/windows/generateGUID.js
<del>const s4 = () => {
<del> return Math.floor((1 + Math.random()) * 0x10000)
<del> .toString(16)
<del> .substring(1);
<del>};
<del>
<del>module.exports = function generateGUID() {
<del> return s4() + s4() + '-' + s4() + '-' + s4() + '-' +
<del> s4() + '-' + s4() + s4() + s4();
<del>};
<ide><path>local-cli/core/windows/index.js
<del>/**
<del> * Copyright (c) 2015-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>'use strict';
<del>
<del>const findWindowsSolution = require('./findWindowsSolution');
<del>const findNamespace = require('./findNamespace');
<del>const findProject = require('./findProject');
<del>const findPackageClassName = require('./findPackageClassName');
<del>const path = require('path');
<del>const generateGUID = require('./generateGUID');
<del>
<del>const relativeProjectPath = (fullProjPath) => {
<del> const windowsPath = fullProjPath
<del> .substring(fullProjPath.lastIndexOf('node_modules') - 1, fullProjPath.length)
<del> .replace(/\//g, '\\');
<del>
<del> return '..' + windowsPath;
<del>};
<del>
<del>const getProjectName = (fullProjPath) => {
<del> return fullProjPath.split('/').slice(-1)[0].replace(/\.csproj/i, '');
<del>};
<del>
<del>/**
<del> * Gets windows project config by analyzing given folder and taking some
<del> * defaults specified by user into consideration
<del> */
<del>exports.projectConfig = function projectConfigWindows(folder, userConfig) {
<del>
<del> const csSolution = userConfig.csSolution || findWindowsSolution(folder);
<del>
<del> if (!csSolution) {
<del> return null;
<del> }
<del>
<del> // expects solutions to be named the same as project folders
<del> const solutionPath = path.join(folder, csSolution);
<del> const windowsAppFolder = csSolution.substring(0, csSolution.lastIndexOf('.sln'));
<del> const src = userConfig.sourceDir || windowsAppFolder;
<del> const sourceDir = path.join(folder, src);
<del> const mainPage = path.join(sourceDir, 'MainPage.cs');
<del> const projectPath = userConfig.projectPath || findProject(folder);
<del>
<del> return {
<del> sourceDir,
<del> solutionPath,
<del> projectPath,
<del> mainPage,
<del> folder,
<del> userConfig,
<del> };
<del>};
<del>
<del>/**
<del> * Same as projectConfigWindows except it returns
<del> * different config that applies to packages only
<del> */
<del>exports.dependencyConfig = function dependencyConfigWindows(folder, userConfig) {
<del>
<del> const csSolution = userConfig.csSolution || findWindowsSolution(folder);
<del>
<del> if (!csSolution) {
<del> return null;
<del> }
<del>
<del> // expects solutions to be named the same as project folders
<del> const windowsAppFolder = csSolution.substring(0, csSolution.lastIndexOf('.sln'));
<del> const src = userConfig.sourceDir || windowsAppFolder;
<del>
<del> if (!src) {
<del> return null;
<del> }
<del>
<del> const sourceDir = path.join(folder, src);
<del> const packageClassName = findPackageClassName(sourceDir);
<del> const namespace = userConfig.namespace || findNamespace(sourceDir);
<del> const csProj = userConfig.csProj || findProject(folder);
<del>
<del> /**
<del> * This module has no package to export or no namespace
<del> */
<del> if (!packageClassName || !namespace) {
<del> return null;
<del> }
<del>
<del> const packageUsingPath = userConfig.packageUsingPath ||
<del> `using ${namespace};`;
<del>
<del> const packageInstance = userConfig.packageInstance ||
<del> `new ${packageClassName}()`;
<del>
<del> const projectGUID = generateGUID();
<del> const pathGUID = generateGUID();
<del> const projectName = getProjectName(csProj);
<del> const relativeProjPath = relativeProjectPath(csProj);
<del>
<del> return {
<del> sourceDir,
<del> packageUsingPath,
<del> packageInstance,
<del> projectName,
<del> csProj,
<del> folder,
<del> projectGUID,
<del> pathGUID,
<del> relativeProjPath,
<del> };
<del>};
<ide><path>local-cli/link/__tests__/link.spec.js
<ide> describe('link', () => {
<ide>
<ide> it('should accept a name of a dependency to link', (done) => {
<ide> const config = {
<add> getPlatformConfig: () => ({ios: {}, android: {}}),
<ide> getProjectConfig: () => ({ assets: [] }),
<ide> getDependencyConfig: sinon.stub().returns({ assets: [], commands: {} }),
<ide> };
<ide> describe('link', () => {
<ide>
<ide> it('should read dependencies from package.json when name not provided', (done) => {
<ide> const config = {
<add> getPlatformConfig: () => ({ios: {}, android: {}}),
<ide> getProjectConfig: () => ({ assets: [] }),
<ide> getDependencyConfig: sinon.stub().returns({ assets: [], commands: {} }),
<ide> };
<ide> describe('link', () => {
<ide> const registerNativeModule = sinon.stub();
<ide> const dependencyConfig = {android: {}, ios: {}, assets: [], commands: {}};
<ide> const config = {
<add> getPlatformConfig: () => ({ios: {}, android: {}}),
<ide> getProjectConfig: () => ({android: {}, ios: {}, assets: []}),
<ide> getDependencyConfig: sinon.stub().returns(dependencyConfig),
<ide> };
<ide> describe('link', () => {
<ide> const registerNativeModule = sinon.stub();
<ide> const dependencyConfig = {ios: {}, android: {}, assets: [], commands: {}};
<ide> const config = {
<add> getPlatformConfig: () => ({ios: {}, android: {}}),
<ide> getProjectConfig: () => ({ ios: {}, android: {}, assets: [] }),
<ide> getDependencyConfig: sinon.stub().returns(dependencyConfig),
<ide> };
<ide> describe('link', () => {
<ide> });
<ide> });
<ide>
<add> it('should register native modules for plugins', (done) => {
<add> const registerNativeModule = sinon.stub();
<add> const dependencyConfig = {ios: {}, android: {}, test: {}, assets: [], commands: {}};
<add> const linkPluginConfig = { isInstalled: () => false, register: registerNativeModule };
<add> const config = {
<add> getPlatformConfig: () => ({ ios: {}, android: {}, test: { linkConfig: () => linkPluginConfig }}),
<add> getProjectConfig: () => ({ ios: {}, android: {}, test: {}, assets: [] }),
<add> getDependencyConfig: sinon.stub().returns(dependencyConfig),
<add> };
<add>
<add> jest.setMock(
<add> '../ios/isInstalled.js',
<add> sinon.stub().returns(true)
<add> );
<add>
<add> jest.setMock(
<add> '../android/isInstalled.js',
<add> sinon.stub().returns(true)
<add> );
<add>
<add> const link = require('../link').func;
<add>
<add> link(['react-native-blur'], config).then(() => {
<add> expect(registerNativeModule.calledOnce).toBeTruthy();
<add> done();
<add> });
<add> });
<add>
<add> it('should not register native modules for plugins when already installed', (done) => {
<add> const registerNativeModule = sinon.stub();
<add> const dependencyConfig = {ios: {}, android: {}, test: {}, assets: [], commands: {}};
<add> const linkPluginConfig = { isInstalled: () => true, register: registerNativeModule};
<add> const config = {
<add> getPlatformConfig: () => ({ ios: {}, android: {}, test: { linkConfig: () => linkPluginConfig }}),
<add> getProjectConfig: () => ({ ios: {}, android: {}, test: {}, assets: [] }),
<add> getDependencyConfig: sinon.stub().returns(dependencyConfig),
<add> };
<add>
<add> jest.setMock(
<add> '../ios/isInstalled.js',
<add> sinon.stub().returns(true)
<add> );
<add>
<add> jest.setMock(
<add> '../android/isInstalled.js',
<add> sinon.stub().returns(true)
<add> );
<add>
<add> const link = require('../link').func;
<add>
<add> link(['react-native-blur'], config).then(() => {
<add> expect(registerNativeModule.callCount).toEqual(0);
<add> done();
<add> });
<add> });
<add>
<ide> it('should run prelink and postlink commands at the appropriate times', (done) => {
<ide> const registerNativeModule = sinon.stub();
<ide> const prelink = sinon.stub().yieldsAsync();
<ide> describe('link', () => {
<ide> );
<ide>
<ide> const config = {
<add> getPlatformConfig: () => ({ ios: {}}),
<ide> getProjectConfig: () => ({ ios: {}, assets: [] }),
<ide> getDependencyConfig: sinon.stub().returns({
<ide> ios: {}, assets: [], commands: { prelink, postlink },
<ide> describe('link', () => {
<ide>
<ide> it('should copy assets from both project and dependencies projects', (done) => {
<ide> const dependencyAssets = ['Fonts/Font.ttf'];
<del> const dependencyConfig = {assets: dependencyAssets, commands: {}};
<add> const dependencyConfig = {assets: dependencyAssets, ios: {}, commands: {}};
<ide> const projectAssets = ['Fonts/FontC.ttf'];
<ide> const copyAssets = sinon.stub();
<ide>
<ide> describe('link', () => {
<ide> );
<ide>
<ide> const config = {
<add> getPlatformConfig: () => ({ ios: {} }),
<ide> getProjectConfig: () => ({ ios: {}, assets: projectAssets }),
<ide> getDependencyConfig: sinon.stub().returns(dependencyConfig),
<ide> };
<ide><path>local-cli/link/link.js
<ide> const chalk = require('chalk');
<ide> const isEmpty = require('lodash').isEmpty;
<ide> const promiseWaterfall = require('./promiseWaterfall');
<ide> const registerDependencyAndroid = require('./android/registerNativeModule');
<del>const registerDependencyWindows = require('./windows/registerNativeModule');
<ide> const registerDependencyIOS = require('./ios/registerNativeModule');
<ide> const registerDependencyPods = require('./pods/registerNativeModule');
<ide> const isInstalledAndroid = require('./android/isInstalled');
<del>const isInstalledWindows = require('./windows/isInstalled');
<ide> const isInstalledIOS = require('./ios/isInstalled');
<ide> const isInstalledPods = require('./pods/isInstalled');
<ide> const copyAssetsAndroid = require('./android/copyAssets');
<ide> const linkDependencyAndroid = (androidProject, dependency) => {
<ide> });
<ide> };
<ide>
<del>const linkDependencyWindows = (windowsProject, dependency) => {
<del>
<del> if (!windowsProject || !dependency.config.windows) {
<del> return null;
<del> }
<del>
<del> const isInstalled = isInstalledWindows(windowsProject, dependency.config.windows);
<del>
<del> if (isInstalled) {
<del> log.info(chalk.grey(`Windows module ${dependency.name} is already linked`));
<del> return null;
<del> }
<del>
<del> return pollParams(dependency.config.params).then(params => {
<del> log.info(`Linking ${dependency.name} windows dependency`);
<del>
<del> registerDependencyWindows(
<del> dependency.name,
<del> dependency.config.windows,
<del> params,
<del> windowsProject
<del> );
<del>
<del> log.info(`Windows module ${dependency.name} has been successfully linked`);
<del> });
<add>const linkDependencyPlatforms = (platforms, project, dependency) => {
<add> const ignorePlatforms = ['android', 'ios'];
<add> Object.keys(platforms || {})
<add> .filter(platform => ignorePlatforms.indexOf(platform) < 0)
<add> .forEach(platform => {
<add> if (!project[platform] || !dependency.config[platform]) {
<add> return null;
<add> }
<add>
<add> const linkConfig = platforms[platform] && platforms[platform].linkConfig && platforms[platform].linkConfig();
<add> if (!linkConfig || !linkConfig.isInstalled || !linkConfig.register) {
<add> return null;
<add> }
<add>
<add> const isInstalled = linkConfig.isInstalled(project[platform], dependency.config[platform]);
<add>
<add> if (isInstalled) {
<add> log.info(chalk.grey(`Platform '${platform}' module ${dependency.name} is already linked`));
<add> return null;
<add> }
<add>
<add> return pollParams(dependency.config.params).then(params => {
<add> log.info(`Linking ${dependency.name} ${platform} dependency`);
<add>
<add> linkConfig.register(
<add> dependency.name,
<add> dependency.config[platform],
<add> params,
<add> project[platform]
<add> );
<add>
<add> log.info(`Platform '${platform}' module ${dependency.name} has been successfully linked`);
<add> });
<add> });
<ide> };
<ide>
<ide> const linkDependencyIOS = (iOSProject, dependency) => {
<ide> const linkDependencyIOS = (iOSProject, dependency) => {
<ide> log.info(`iOS module ${dependency.name} has been successfully linked`);
<ide> };
<ide>
<del>const linkAssets = (project, assets) => {
<add>const linkAssets = (platforms, project, assets) => {
<ide> if (isEmpty(assets)) {
<ide> return;
<ide> }
<ide> const linkAssets = (project, assets) => {
<ide> copyAssetsAndroid(assets, project.android.assetsPath);
<ide> }
<ide>
<add> const ignorePlatforms = ['android', 'ios'];
<add> Object.keys(platforms || {})
<add> .filter(platform => ignorePlatforms.indexOf(platform) < 0)
<add> .forEach(platform => {
<add> const linkConfig = platforms[platform] && platforms[platform].linkConfig && platforms[platform].linkConfig();
<add> if (!linkConfig || !linkConfig.copyAssets) {
<add> return;
<add> }
<add>
<add> log.info(`Linking assets to ${platform} project`);
<add> linkConfig.copyAssets(assets, project[platform]);
<add> });
<add>
<ide> log.info('Assets have been successfully linked to your project');
<ide> };
<ide>
<ide> const linkAssets = (project, assets) => {
<ide> * @param config CLI config, see local-cli/core/index.js
<ide> */
<ide> function link(args: Array<string>, config: RNConfig) {
<del> var project;
<add> let project;
<add> let platforms;
<ide> try {
<ide> project = config.getProjectConfig();
<add> platforms = config.getPlatformConfig();
<ide> } catch (err) {
<ide> log.error(
<ide> 'ERRPACKAGEJSON',
<ide> function link(args: Array<string>, config: RNConfig) {
<ide> return Promise.reject(err);
<ide> }
<ide>
<del> if (!project.android && !project.ios && !project.windows && findReactNativeScripts()) {
<add> const hasProjectConfig = Object.keys(platforms).reduce((acc, key) => acc || key in project, false);
<add> if (!hasProjectConfig && findReactNativeScripts()) {
<ide> throw new Error(
<ide> '`react-native link` can not be used in Create React Native App projects. ' +
<ide> 'If you need to include a library that relies on custom native code, ' +
<ide> function link(args: Array<string>, config: RNConfig) {
<ide> );
<ide>
<ide> const assets = dedupeAssets(dependencies.reduce(
<del> (assets, dependency) => assets.concat(dependency.config.assets),
<add> (acc, dependency) => acc.concat(dependency.config.assets),
<ide> project.assets
<ide> ));
<ide>
<ide> const tasks = flatten(dependencies.map(dependency => [
<ide> () => promisify(dependency.config.commands.prelink || commandStub),
<ide> () => linkDependencyAndroid(project.android, dependency),
<ide> () => linkDependencyIOS(project.ios, dependency),
<del> () => linkDependencyWindows(project.windows, dependency),
<add> () => linkDependencyPlatforms(platforms, project, dependency),
<ide> () => promisify(dependency.config.commands.postlink || commandStub),
<ide> ]));
<ide>
<del> tasks.push(() => linkAssets(project, assets));
<add> tasks.push(() => linkAssets(platforms, project, assets));
<ide>
<ide> return promiseWaterfall(tasks).catch(err => {
<ide> log.error(
<ide><path>local-cli/link/unlink.js
<ide> const log = require('npmlog');
<ide>
<ide> const getProjectDependencies = require('./getProjectDependencies');
<ide> const unregisterDependencyAndroid = require('./android/unregisterNativeModule');
<del>const unregisterDependencyWindows = require('./windows/unregisterNativeModule');
<ide> const unregisterDependencyIOS = require('./ios/unregisterNativeModule');
<ide> const unregisterDependencyPods = require('./pods/unregisterNativeModule');
<ide> const isInstalledAndroid = require('./android/isInstalled');
<del>const isInstalledWindows = require('./windows/isInstalled');
<ide> const isInstalledIOS = require('./ios/isInstalled');
<ide> const isInstalledPods = require('./pods/isInstalled');
<ide> const unlinkAssetsAndroid = require('./android/unlinkAssets');
<ide> const unlinkDependencyAndroid = (androidProject, dependency, packageName) => {
<ide> log.info(`Android module ${packageName} has been successfully unlinked`);
<ide> };
<ide>
<del>const unlinkDependencyWindows = (windowsProject, dependency, packageName) => {
<del> if (!windowsProject || !dependency.windows) {
<del> return;
<del> }
<add>const unlinkDependencyPlatforms = (platforms, project, dependency, packageName) => {
<ide>
<del> const isInstalled = isInstalledWindows(windowsProject, dependency.windows);
<add> const ignorePlatforms = ['android', 'ios'];
<add> Object.keys(platforms || {})
<add> .filter(platform => ignorePlatforms.indexOf(platform) < 0)
<add> .forEach(platform => {
<add> if (!project[platform] || !dependency[platform]) {
<add> return null;
<add> }
<ide>
<del> if (!isInstalled) {
<del> log.info(`Windows module ${packageName} is not installed`);
<del> return;
<del> }
<add> const linkConfig = platforms[platform] && platforms[platform].linkConfig && platforms[platform].linkConfig();
<add> if (!linkConfig || !linkConfig.isInstalled || !linkConfig.unregister) {
<add> return null;
<add> }
<add>
<add> const isInstalled = linkConfig.isInstalled(project[platform], dependency[platform]);
<add>
<add> if (!isInstalled) {
<add> log.info(`Platform '${platform}' module ${packageName} is not installed`);
<add> return;
<add> }
<ide>
<del> log.info(`Unlinking ${packageName} windows dependency`);
<add> log.info(`Unlinking ${packageName} ${platform} dependency`);
<ide>
<del> unregisterDependencyWindows(packageName, dependency.windows, windowsProject);
<add> linkConfig.unregister(
<add> packageName,
<add> dependency[platform],
<add> project[platform]
<add> );
<ide>
<del> log.info(`Windows module ${packageName} has been successfully unlinked`);
<add> log.info(`Platform '${platform}' module ${dependency.name} has been successfully unlinked`);
<add> });
<ide> };
<ide>
<ide> const unlinkDependencyIOS = (iOSProject, dependency, packageName, iOSDependencies) => {
<ide> const unlinkDependencyIOS = (iOSProject, dependency, packageName, iOSDependencie
<ide> function unlink(args, config) {
<ide> const packageName = args[0];
<ide>
<del> var project;
<del> var dependency;
<add> let platforms;
<add> let project;
<add> let dependency;
<ide>
<ide> try {
<add> platforms = config.getPlatformConfig();
<ide> project = config.getProjectConfig();
<ide> } catch (err) {
<ide> log.error(
<ide> function unlink(args, config) {
<ide> () => promisify(dependency.commands.preunlink || commandStub),
<ide> () => unlinkDependencyAndroid(project.android, dependency, packageName),
<ide> () => unlinkDependencyIOS(project.ios, dependency, packageName, iOSDependencies),
<del> () => unlinkDependencyWindows(project.windows, dependency, packageName),
<add> () => unlinkDependencyPlatforms(platforms, project, dependency, packageName),
<ide> () => promisify(dependency.commands.postunlink || commandStub)
<ide> ];
<ide>
<ide><path>local-cli/link/windows/isInstalled.js
<del>const fs = require('fs');
<del>const makeUsingPatch = require('./patches/makeUsingPatch');
<del>
<del>module.exports = function isInstalled(config, dependencyConfig) {
<del> return fs
<del> .readFileSync(config.mainPage)
<del> .indexOf(makeUsingPatch(dependencyConfig.packageUsingPath).patch) > -1;
<del>};
<ide><path>local-cli/link/windows/patches/applyParams.js
<del>/**
<del> * Copyright (c) 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>
<del>const toCamelCase = require('lodash').camelCase;
<del>
<del>module.exports = function applyParams(str, params, prefix) {
<del> return str.replace(
<del> /\$\{(\w+)\}/g,
<del> (pattern, param) => {
<del> const name = toCamelCase(prefix) + '_' + param;
<del>
<del> return params[param]
<del> ? `getResources().getString(R.string.${name})`
<del> : null;
<del> }
<del> );
<del>};
<ide><path>local-cli/link/windows/patches/applyPatch.js
<del>const fs = require('fs');
<del>
<del>module.exports = function applyPatch(file, patch, flip = false) {
<del>
<del> fs.writeFileSync(file, fs
<del> .readFileSync(file, 'utf8')
<del> .replace(patch.pattern, match => {
<del> return flip ? `${patch.patch}${match}` : `${match}${patch.patch}`;
<del> })
<del> );
<del>};
<ide><path>local-cli/link/windows/patches/makePackagePatch.js
<del>const applyParams = require('./applyParams');
<del>
<del>module.exports = function makePackagePatch(packageInstance, params, prefix) {
<del> const processedInstance = applyParams(packageInstance, params, prefix);
<del>
<del> return {
<del> pattern: 'new MainReactPackage()',
<del> patch: ',\n ' + processedInstance,
<del> };
<del>};
<ide><path>local-cli/link/windows/patches/makeProjectPatch.js
<del>module.exports = function makeProjectPatch(windowsConfig) {
<del>
<del> const projectInsert = `<ProjectReference Include="..\\${windowsConfig.relativeProjPath}">
<del> <Project>{${windowsConfig.pathGUID}}</Project>
<del> <Name>${windowsConfig.projectName}</Name>
<del> </ProjectReference>
<del> `;
<del>
<del> return {
<del> pattern: '<ProjectReference Include="..\\..\\node_modules\\react-native-windows\\ReactWindows\\ReactNative\\ReactNative.csproj">',
<del> patch: projectInsert,
<del> unpatch: new RegExp(`<ProjectReference.+\\s+.+\\s+.+${windowsConfig.projectName}.+\\s+.+\\s`),
<del> };
<del>};
<ide><path>local-cli/link/windows/patches/makeSolutionPatch.js
<del>module.exports = function makeSolutionPatch(windowsConfig) {
<del>
<del> const solutionInsert = `Project("{${windowsConfig.projectGUID.toUpperCase()}}") = "${windowsConfig.projectName}", "${windowsConfig.relativeProjPath}", "{${windowsConfig.pathGUID.toUpperCase()}}"
<del>EndProject
<del>`;
<del>
<del> return {
<del> pattern: 'Global',
<del> patch: solutionInsert,
<del> unpatch: new RegExp(`Project.+${windowsConfig.projectName}.+\\s+EndProject\\s+`),
<del> };
<del>};
<ide><path>local-cli/link/windows/patches/makeUsingPatch.js
<del>module.exports = function makeUsingPatch(packageImportPath) {
<del> return {
<del> pattern: 'using ReactNative.Modules.Core;',
<del> patch: '\n' + packageImportPath,
<del> };
<del>};
<ide><path>local-cli/link/windows/patches/revokePatch.js
<del>const fs = require('fs');
<del>
<del>module.exports = function revokePatch(file, patch) {
<del> const unpatch = patch.unpatch || patch.patch;
<del> fs.writeFileSync(file, fs
<del> .readFileSync(file, 'utf8')
<del> .replace(unpatch, '')
<del> );
<del>};
<ide><path>local-cli/link/windows/registerNativeModule.js
<del>const applyPatch = require('./patches/applyPatch');
<del>
<del>const makeProjectPatch = require('./patches/makeProjectPatch');
<del>const makeSolutionPatch = require('./patches/makeSolutionPatch');
<del>const makeUsingPatch = require('./patches/makeUsingPatch');
<del>const makePackagePatch = require('./patches/makePackagePatch');
<del>
<del>module.exports = function registerNativeWindowsModule(
<del> name,
<del> windowsConfig,
<del> params,
<del> projectConfig
<del>) {
<del> applyPatch(projectConfig.projectPath, makeProjectPatch(windowsConfig), true);
<del> applyPatch(projectConfig.solutionPath, makeSolutionPatch(windowsConfig), true);
<del>
<del> applyPatch(
<del> projectConfig.mainPage,
<del> makePackagePatch(windowsConfig.packageInstance, params, name)
<del> );
<del>
<del> applyPatch(
<del> projectConfig.mainPage,
<del> makeUsingPatch(windowsConfig.packageUsingPath)
<del> );
<del>};
<ide><path>local-cli/link/windows/unregisterNativeModule.js
<del>const fs = require('fs');
<del>const toCamelCase = require('lodash').camelCase;
<del>
<del>const revokePatch = require('./patches/revokePatch');
<del>const makeProjectPatch = require('./patches/makeProjectPatch');
<del>const makeSolutionPatch = require('./patches/makeSolutionPatch');
<del>const makeUsingPatch = require('./patches/makeUsingPatch');
<del>const makePackagePatch = require('./patches/makePackagePatch');
<del>
<del>module.exports = function unregisterNativeWindowsModule(
<del> name,
<del> windowsConfig,
<del> projectConfig
<del>) {
<del> revokePatch(projectConfig.projectPath, makeProjectPatch(windowsConfig));
<del> revokePatch(projectConfig.solutionPath, makeSolutionPatch(windowsConfig));
<del>
<del> revokePatch(
<del> projectConfig.mainPage,
<del> makePackagePatch(windowsConfig.packageInstance, {}, name)
<del> );
<del>
<del> revokePatch(
<del> projectConfig.mainPage,
<del> makeUsingPatch(windowsConfig.packageUsingPath)
<del> );
<del>}; | 22 |
Javascript | Javascript | fix bad link typo in view document | e8c8a06329070eef297fcc0d3aaf493e9d330f94 | <ide><path>Libraries/Components/View/View.js
<ide> const View = React.createClass({
<ide> * - `'no-hide-descendants'` - The view is not important for accessibility,
<ide> * nor are any of its descendant views.
<ide> *
<del> * See the [Android `importantForAccessibility` docs]( [http://developer.android.com/reference/android/R.attr.html#importantForAccessibility)
<add> * See the [Android `importantForAccessibility` docs](http://developer.android.com/reference/android/R.attr.html#importantForAccessibility)
<ide> * for reference.
<ide> *
<ide> * @platform android | 1 |
Python | Python | improve coverage for airflow.cli package | 14b3e66ad5e2325eb2ee7b9c6661ff7b636d017d | <ide><path>tests/cli/commands/test_celery_command.py
<ide> def test_worker_failure_gracefull_shutdown(self, mock_celery_app, mock_popen):
<ide> celery_command.worker(args)
<ide> finally:
<ide> mock_popen().terminate.assert_called()
<add>
<add>
<add>@pytest.mark.backend("mysql", "postgres")
<add>class TestFlowerCommand(unittest.TestCase):
<add> @classmethod
<add> def setUpClass(cls):
<add> cls.parser = cli_parser.get_parser()
<add>
<add> @mock.patch('airflow.cli.commands.celery_command.celery_app')
<add> @conf_vars({("core", "executor"): "CeleryExecutor"})
<add> def test_run_command(self, mock_celery_app):
<add> args = self.parser.parse_args(
<add> [
<add> 'celery',
<add> 'flower',
<add> '--basic-auth',
<add> 'admin:admin',
<add> '--broker-api',
<add> 'http://username:password@rabbitmq-server-name:15672/api/',
<add> '--flower-conf',
<add> 'flower_config',
<add> '--hostname',
<add> 'my-hostname',
<add> '--port',
<add> '3333',
<add> '--url-prefix',
<add> 'flower-monitoring',
<add> ]
<add> )
<add>
<add> celery_command.flower(args)
<add> mock_celery_app.start.assert_called_once_with(
<add> [
<add> 'flower',
<add> 'amqp://guest:guest@rabbitmq:5672/',
<add> '--address=my-hostname',
<add> '--port=3333',
<add> '--broker-api=http://username:password@rabbitmq-server-name:15672/api/',
<add> '--url-prefix=flower-monitoring',
<add> '--basic-auth=admin:admin',
<add> '--conf=flower_config',
<add> ]
<add> )
<add>
<add> @mock.patch('airflow.cli.commands.celery_command.TimeoutPIDLockFile')
<add> @mock.patch('airflow.cli.commands.celery_command.setup_locations')
<add> @mock.patch('airflow.cli.commands.celery_command.daemon')
<add> @mock.patch('airflow.cli.commands.celery_command.celery_app')
<add> @conf_vars({("core", "executor"): "CeleryExecutor"})
<add> def test_run_command_daemon(self, mock_celery_app, mock_daemon, mock_setup_locations, mock_pid_file):
<add> mock_setup_locations.return_value = (
<add> mock.MagicMock(name='pidfile'),
<add> mock.MagicMock(name='stdout'),
<add> mock.MagicMock(name='stderr'),
<add> mock.MagicMock(name="INVALID"),
<add> )
<add> args = self.parser.parse_args(
<add> [
<add> 'celery',
<add> 'flower',
<add> '--basic-auth',
<add> 'admin:admin',
<add> '--broker-api',
<add> 'http://username:password@rabbitmq-server-name:15672/api/',
<add> '--flower-conf',
<add> 'flower_config',
<add> '--hostname',
<add> 'my-hostname',
<add> '--log-file',
<add> '/tmp/flower.log',
<add> '--pid',
<add> '/tmp/flower.pid',
<add> '--port',
<add> '3333',
<add> '--stderr',
<add> '/tmp/flower-stderr.log',
<add> '--stdout',
<add> '/tmp/flower-stdout.log',
<add> '--url-prefix',
<add> 'flower-monitoring',
<add> '--daemon',
<add> ]
<add> )
<add> mock_open = mock.mock_open()
<add> with mock.patch('airflow.cli.commands.celery_command.open', mock_open):
<add> celery_command.flower(args)
<add>
<add> mock_celery_app.start.assert_called_once_with(
<add> [
<add> 'flower',
<add> 'amqp://guest:guest@rabbitmq:5672/',
<add> '--address=my-hostname',
<add> '--port=3333',
<add> '--broker-api=http://username:password@rabbitmq-server-name:15672/api/',
<add> '--url-prefix=flower-monitoring',
<add> '--basic-auth=admin:admin',
<add> '--conf=flower_config',
<add> ]
<add> )
<add> assert mock_daemon.mock_calls == [
<add> mock.call.DaemonContext(
<add> pidfile=mock_pid_file.return_value,
<add> stderr=mock_open.return_value,
<add> stdout=mock_open.return_value,
<add> ),
<add> mock.call.DaemonContext().__enter__(),
<add> mock.call.DaemonContext().__exit__(None, None, None),
<add> ]
<add>
<add> assert mock_setup_locations.mock_calls == [
<add> mock.call(
<add> log='/tmp/flower.log',
<add> pid='/tmp/flower.pid',
<add> process='flower',
<add> stderr='/tmp/flower-stderr.log',
<add> stdout='/tmp/flower-stdout.log',
<add> )
<add> ]
<add> mock_pid_file.assert_has_calls([mock.call(mock_setup_locations.return_value[0], -1)])
<add> assert mock_open.mock_calls == [
<add> mock.call(mock_setup_locations.return_value[1], 'w+'),
<add> mock.call().__enter__(),
<add> mock.call(mock_setup_locations.return_value[2], 'w+'),
<add> mock.call().__enter__(),
<add> mock.call().__exit__(None, None, None),
<add> mock.call().__exit__(None, None, None),
<add> ]
<ide><path>tests/cli/commands/test_kerberos_command.py
<add># Licensed to the Apache Software Foundation (ASF) under one
<add># or more contributor license agreements. See the NOTICE file
<add># distributed with this work for additional information
<add># regarding copyright ownership. The ASF licenses this file
<add># to you under the Apache License, Version 2.0 (the
<add># "License"); you may not use this file except in compliance
<add># with the License. You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing,
<add># software distributed under the License is distributed on an
<add># "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
<add># KIND, either express or implied. See the License for the
<add># specific language governing permissions and limitations
<add># under the License.
<add>
<add>import unittest
<add>from unittest import mock
<add>
<add>from airflow.cli import cli_parser
<add>from airflow.cli.commands import kerberos_command
<add>from tests.test_utils.config import conf_vars
<add>
<add>
<add>class TestKerberosCommand(unittest.TestCase):
<add> @classmethod
<add> def setUpClass(cls):
<add> cls.parser = cli_parser.get_parser()
<add>
<add> @mock.patch('airflow.cli.commands.kerberos_command.krb')
<add> @conf_vars({("core", "executor"): "CeleryExecutor"})
<add> def test_run_command(self, mock_krb):
<add> args = self.parser.parse_args(['kerberos', 'PRINCIPAL', '--keytab', '/tmp/airflow.keytab'])
<add>
<add> kerberos_command.kerberos(args)
<add> mock_krb.run.assert_called_once_with(keytab='/tmp/airflow.keytab', principal='PRINCIPAL')
<add>
<add> @mock.patch('airflow.cli.commands.kerberos_command.TimeoutPIDLockFile')
<add> @mock.patch('airflow.cli.commands.kerberos_command.setup_locations')
<add> @mock.patch('airflow.cli.commands.kerberos_command.daemon')
<add> @mock.patch('airflow.cli.commands.kerberos_command.krb')
<add> @conf_vars({("core", "executor"): "CeleryExecutor"})
<add> def test_run_command_daemon(self, mock_krb, mock_daemon, mock_setup_locations, mock_pid_file):
<add> mock_setup_locations.return_value = (
<add> mock.MagicMock(name='pidfile'),
<add> mock.MagicMock(name='stdout'),
<add> mock.MagicMock(name='stderr'),
<add> mock.MagicMock(name="INVALID"),
<add> )
<add> args = self.parser.parse_args(
<add> [
<add> 'kerberos',
<add> 'PRINCIPAL',
<add> '--keytab',
<add> '/tmp/airflow.keytab',
<add> '--log-file',
<add> '/tmp/kerberos.log',
<add> '--pid',
<add> '/tmp/kerberos.pid',
<add> '--stderr',
<add> '/tmp/kerberos-stderr.log',
<add> '--stdout',
<add> '/tmp/kerberos-stdout.log',
<add> '--daemon',
<add> ]
<add> )
<add> mock_open = mock.mock_open()
<add> with mock.patch('airflow.cli.commands.kerberos_command.open', mock_open):
<add> kerberos_command.kerberos(args)
<add>
<add> mock_krb.run.assert_called_once_with(keytab='/tmp/airflow.keytab', principal='PRINCIPAL')
<add> assert mock_daemon.mock_calls == [
<add> mock.call.DaemonContext(
<add> pidfile=mock_pid_file.return_value,
<add> stderr=mock_open.return_value,
<add> stdout=mock_open.return_value,
<add> ),
<add> mock.call.DaemonContext().__enter__(),
<add> mock.call.DaemonContext().__exit__(None, None, None),
<add> ]
<add>
<add> mock_setup_locations.assert_has_calls(
<add> [
<add> mock.call(
<add> 'kerberos',
<add> '/tmp/kerberos.pid',
<add> '/tmp/kerberos-stdout.log',
<add> '/tmp/kerberos-stderr.log',
<add> '/tmp/kerberos.log',
<add> )
<add> ]
<add> )
<add> mock_pid_file.assert_has_calls([mock.call(mock_setup_locations.return_value[0], -1)])
<add> assert mock_open.mock_calls == [
<add> mock.call(mock_setup_locations.return_value[1], 'w+'),
<add> mock.call().__enter__(),
<add> mock.call(mock_setup_locations.return_value[2], 'w+'),
<add> mock.call().__enter__(),
<add> mock.call().__exit__(None, None, None),
<add> mock.call().__exit__(None, None, None),
<add> ]
<ide><path>tests/cli/commands/test_plugins_command.py
<ide>
<ide> import io
<ide> import json
<add>import textwrap
<ide> import unittest
<ide> from contextlib import redirect_stdout
<ide>
<ide> def test_should_display_one_plugins(self):
<ide> 'operator_extra_links': [],
<ide> }
<ide> ]
<add>
<add> @mock_plugin_manager(plugins=[TestPlugin])
<add> def test_should_display_one_plugins_as_table(self):
<add>
<add> with redirect_stdout(io.StringIO()) as temp_stdout:
<add> plugins_command.dump_plugins(self.parser.parse_args(['plugins', '--output=table']))
<add> stdout = temp_stdout.getvalue()
<add>
<add> # Remove leading spaces
<add> stdout = "\n".join(line.rstrip(" ") for line in stdout.splitlines())
<add> # Assert that only columns with values are displayed
<add> expected_output = textwrap.dedent(
<add> """\
<add> name | hooks
<add> ================+===========
<add> test-plugin-cli | PluginHook
<add> """
<add> )
<add> self.assertEqual(stdout, expected_output)
<ide><path>tests/cli/commands/test_rotate_fernet_key_command.py
<add># Licensed to the Apache Software Foundation (ASF) under one
<add># or more contributor license agreements. See the NOTICE file
<add># distributed with this work for additional information
<add># regarding copyright ownership. The ASF licenses this file
<add># to you under the Apache License, Version 2.0 (the
<add># "License"); you may not use this file except in compliance
<add># with the License. You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing,
<add># software distributed under the License is distributed on an
<add># "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
<add># KIND, either express or implied. See the License for the
<add># specific language governing permissions and limitations
<add># under the License.
<add>
<add>import unittest
<add>from unittest import mock
<add>
<add>from cryptography.fernet import Fernet
<add>
<add>from airflow.cli import cli_parser
<add>from airflow.cli.commands import rotate_fernet_key_command
<add>from airflow.hooks.base import BaseHook
<add>from airflow.models import Connection, Variable
<add>from airflow.utils.session import provide_session
<add>from tests.test_utils.config import conf_vars
<add>from tests.test_utils.db import clear_db_connections, clear_db_variables
<add>
<add>
<add>class TestRotateFernetKeyCommand(unittest.TestCase):
<add> @classmethod
<add> def setUpClass(cls):
<add> cls.parser = cli_parser.get_parser()
<add>
<add> def setUp(self) -> None:
<add> clear_db_connections(add_default_connections_back=False)
<add> clear_db_variables()
<add>
<add> def tearDown(self) -> None:
<add> clear_db_connections(add_default_connections_back=False)
<add> clear_db_variables()
<add>
<add> @provide_session
<add> def test_should_rotate_variable(self, session):
<add> fernet_key1 = Fernet.generate_key()
<add> fernet_key2 = Fernet.generate_key()
<add> var1_key = f"{__file__}_var1"
<add> var2_key = f"{__file__}_var2"
<add>
<add> # Create unencrypted variable
<add> with conf_vars({('core', 'fernet_key'): ''}), mock.patch('airflow.models.crypto._fernet', None):
<add> Variable.set(key=var1_key, value="value")
<add>
<add> # Create encrypted variable
<add> with conf_vars({('core', 'fernet_key'): fernet_key1.decode()}), mock.patch(
<add> 'airflow.models.crypto._fernet', None
<add> ):
<add> Variable.set(key=var2_key, value="value")
<add>
<add> # Rotate fernet key
<add> with conf_vars(
<add> {('core', 'fernet_key'): ','.join([fernet_key2.decode(), fernet_key1.decode()])}
<add> ), mock.patch('airflow.models.crypto._fernet', None):
<add> args = self.parser.parse_args(['rotate-fernet-key'])
<add> rotate_fernet_key_command.rotate_fernet_key(args)
<add>
<add> # Assert correctness using a new fernet key
<add> with conf_vars({('core', 'fernet_key'): fernet_key2.decode()}), mock.patch(
<add> 'airflow.models.crypto._fernet', None
<add> ):
<add> var1 = session.query(Variable).filter(Variable.key == var1_key).first()
<add> # Unencrypted variable should be unchanged
<add> assert Variable.get(key=var1_key) == 'value'
<add> assert var1._val == 'value'
<add> assert Variable.get(key=var2_key) == 'value'
<add>
<add> @provide_session
<add> def test_should_rotate_connection(self, session):
<add> fernet_key1 = Fernet.generate_key()
<add> fernet_key2 = Fernet.generate_key()
<add> var1_key = f"{__file__}_var1"
<add> var2_key = f"{__file__}_var2"
<add>
<add> # Create unencrypted variable
<add> with conf_vars({('core', 'fernet_key'): ''}), mock.patch('airflow.models.crypto._fernet', None):
<add> session.add(Connection(conn_id=var1_key, uri="mysql://user:pass@localhost"))
<add> session.commit()
<add>
<add> # Create encrypted variable
<add> with conf_vars({('core', 'fernet_key'): fernet_key1.decode()}), mock.patch(
<add> 'airflow.models.crypto._fernet', None
<add> ):
<add> session.add(Connection(conn_id=var2_key, uri="mysql://user:pass@localhost"))
<add> session.commit()
<add>
<add> # Rotate fernet key
<add> with conf_vars(
<add> {('core', 'fernet_key'): ','.join([fernet_key2.decode(), fernet_key1.decode()])}
<add> ), mock.patch('airflow.models.crypto._fernet', None):
<add> args = self.parser.parse_args(['rotate-fernet-key'])
<add> rotate_fernet_key_command.rotate_fernet_key(args)
<add>
<add> # Assert correctness using a new fernet key
<add> with conf_vars({('core', 'fernet_key'): fernet_key2.decode()}), mock.patch(
<add> 'airflow.models.crypto._fernet', None
<add> ):
<add> # Unencrypted variable should be unchanged
<add> conn1: Connection = BaseHook.get_connection(var1_key)
<add> assert conn1.password == 'pass'
<add> assert conn1._password == 'pass'
<add> assert BaseHook.get_connection(var2_key).password == 'pass' | 4 |
Javascript | Javascript | add emoji poetry to showcase.js | d72f151bfc9e35f77d1fac707be4e9adc8d71555 | <ide><path>website/src/react-native/showcase.js
<ide> var apps = [
<ide> linkPlayStore: 'https://play.google.com/store/apps/details?id=com.eon',
<ide> author: 'Sharath Prabhal',
<ide> },
<add> {
<add> name: 'Emoji Poetry',
<add> icon: 'http://a5.mzstatic.com/us/r30/Purple49/v4/31/b5/09/31b509b2-aaec-760f-ccec-2ce72fe7134e/icon175x175.jpeg',
<add> linkAppStore: 'https://itunes.apple.com/us/app/emoji-poetry/id1068972506?l=en&mt=8',
<add> linkPlayStore: 'https://play.google.com/store/apps/details?id=com.arzamasemoji',
<add> author: 'forforce.com',
<add> blogs: [
<add> 'http://arzamas.academy/special/emoji/english',
<add> ],
<add> },
<ide> {
<ide> name: 'Fan of it',
<ide> icon: 'http://a4.mzstatic.com/us/r30/Purple3/v4/c9/3f/e8/c93fe8fb-9332-e744-f04a-0f4f78e42aa8/icon350x350.png', | 1 |
Java | Java | add missing @nullable annotation | 90df7dd279a3c09ae0a022386319df2aacbb52d6 | <ide><path>spring-core/src/main/java/org/springframework/core/ReactiveAdapterRegistry.java
<ide> public void registerReactiveType(ReactiveTypeDescriptor descriptor,
<ide> /**
<ide> * Get the adapter for the given reactive type.
<ide> */
<del> public ReactiveAdapter getAdapter(Class<?> reactiveType) {
<add> public ReactiveAdapter getAdapter(@Nullable Class<?> reactiveType) {
<ide> return getAdapter(reactiveType, null);
<ide> }
<ide> | 1 |
Mixed | Javascript | add ontoggle event to details tag | ad133f5b3dbf01331f819f3000ab52097bf7d598 | <ide><path>docs/docs/reference-events.md
<ide> The event handlers below are triggered by an event in the bubbling phase. To reg
<ide> - [Image Events](#image-events)
<ide> - [Animation Events](#animation-events)
<ide> - [Transition Events](#transition-events)
<add>- [Other Events](#other-events)
<ide>
<ide> * * *
<ide>
<ide> number deltaZ
<ide> Event names:
<ide>
<ide> ```
<del>onAbort onCanPlay onCanPlayThrough onDurationChange onEmptied onEncrypted
<del>onEnded onError onLoadedData onLoadedMetadata onLoadStart onPause onPlay
<del>onPlaying onProgress onRateChange onSeeked onSeeking onStalled onSuspend
<add>onAbort onCanPlay onCanPlayThrough onDurationChange onEmptied onEncrypted
<add>onEnded onError onLoadedData onLoadedMetadata onLoadStart onPause onPlay
<add>onPlaying onProgress onRateChange onSeeked onSeeking onStalled onSuspend
<ide> onTimeUpdate onVolumeChange onWaiting
<ide> ```
<ide>
<ide> string pseudoElement
<ide> float elapsedTime
<ide> ```
<ide>
<add>* * *
<add>
<add>### Other Events
<add>
<add>Event names:
<add>
<add>```
<add>onToggle
<add>```
<ide><path>src/renderers/dom/fiber/ReactDOMFiberComponent.js
<ide> function trapBubbledEventsLocal(node : Element, tag : string) {
<ide> node
<ide> );
<ide> break;
<add> case 'details':
<add> ReactBrowserEventEmitter.trapBubbledEvent(
<add> 'topToggle',
<add> 'toggle',
<add> node
<add> );
<add> break;
<ide> }
<ide> }
<ide>
<ide> var ReactDOMFiberComponent = {
<ide> case 'object':
<ide> case 'source':
<ide> case 'video':
<add> case 'details':
<ide> trapBubbledEventsLocal(domElement, tag);
<ide> props = rawProps;
<ide> break;
<ide><path>src/renderers/dom/shared/ReactBrowserEventEmitter.js
<ide> var topEventMapping = {
<ide> topSuspend: 'suspend',
<ide> topTextInput: 'textInput',
<ide> topTimeUpdate: 'timeupdate',
<add> topToggle: 'toggle',
<ide> topTouchCancel: 'touchcancel',
<ide> topTouchEnd: 'touchend',
<ide> topTouchMove: 'touchmove',
<ide><path>src/renderers/dom/shared/eventPlugins/SimpleEventPlugin.js
<ide> var topLevelEventsToDispatchConfig: {[key: TopLevelTypes]: DispatchConfig} = {};
<ide> 'submit',
<ide> 'suspend',
<ide> 'timeUpdate',
<add> 'toggle',
<ide> 'touchCancel',
<ide> 'touchEnd',
<ide> 'touchMove',
<ide> var SimpleEventPlugin: PluginModule<MouseEvent> = {
<ide> case 'topSubmit':
<ide> case 'topSuspend':
<ide> case 'topTimeUpdate':
<add> case 'topToggle':
<ide> case 'topVolumeChange':
<ide> case 'topWaiting':
<ide> // HTML Events
<ide><path>src/renderers/dom/stack/client/ReactDOMComponent.js
<ide> function trapBubbledEventsLocal() {
<ide> ),
<ide> ];
<ide> break;
<add> case 'details':
<add> inst._wrapperState.listeners = [
<add> ReactBrowserEventEmitter.trapBubbledEvent(
<add> 'topToggle',
<add> 'toggle',
<add> node
<add> ),
<add> ];
<add> break;
<ide> }
<ide> }
<ide>
<ide> ReactDOMComponent.Mixin = {
<ide> case 'object':
<ide> case 'source':
<ide> case 'video':
<add> case 'details':
<ide> this._wrapperState = {
<ide> listeners: null,
<ide> };
<ide><path>src/renderers/shared/shared/event/EventConstants.js
<ide> var topLevelTypes = {
<ide> topSuspend: null,
<ide> topTextInput: null,
<ide> topTimeUpdate: null,
<add> topToggle: null,
<ide> topTouchCancel: null,
<ide> topTouchEnd: null,
<ide> topTouchMove: null, | 6 |
Python | Python | improve clarify of comment | 5f50628fb0938e47b37722651e6dce1db8a8d856 | <ide><path>keras/engine/training.py
<ide> def __reduce__(self):
<ide> return (deserialize_model_from_bytecode, serialize_model_as_bytecode(self))
<ide> else:
<ide> # SavedModel (and hence serialize_model_as_bytecode) only support built models,
<del> # but if the model is not built, it can be serialized as a plain Python object
<del> # Thus we call up the MRO to get an implementation of __reduce__,
<del> # possibly landing at object.__reduce__
<add> # but if the model is not built,
<add> # it _may_ be possible to serialize as a plain Python object,
<add> # as long as the constituent parts (layers, optimizers, losses, etc.)
<add> # can be serialized as plain Python objects.
<add> # Thus we call up the MRO to get an implementation of __reduce__
<add> # to try to pickle this Model as a plain Python object.
<ide> return super(Model, self).__reduce__()
<ide>
<ide> def __deepcopy__(self, memo): | 1 |
Python | Python | add camembert classes to __init__.py | 694d4fcbb61f15b66781219954112791248d832e | <ide><path>transformers/__init__.py
<ide> from .tokenization_xlm import XLMTokenizer
<ide> from .tokenization_roberta import RobertaTokenizer
<ide> from .tokenization_distilbert import DistilBertTokenizer
<add>from .tokenization_camembert import CamembertTokenizer
<ide>
<ide> # Configurations
<ide> from .configuration_utils import PretrainedConfig
<ide> from .configuration_xlm import XLMConfig, XLM_PRETRAINED_CONFIG_ARCHIVE_MAP
<ide> from .configuration_roberta import RobertaConfig, ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP
<ide> from .configuration_distilbert import DistilBertConfig, DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP
<add>from .configuration_camembert import CamembertConfig, CAMEMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP
<ide>
<ide> # Modeling
<ide> if is_torch_available():
<ide> DistilBertForSequenceClassification, DistilBertForQuestionAnswering,
<ide> DistilBertForTokenClassification,
<ide> DISTILBERT_PRETRAINED_MODEL_ARCHIVE_MAP)
<add> from .modeling_camembert import (CamembertForMaskedLM, CamembertModel,
<add> CamembertForSequenceClassification, CamembertForMultipleChoice,
<add> CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_MAP)
<ide> from .modeling_encoder_decoder import PreTrainedEncoderDecoder, Model2Model
<ide>
<ide> # Optimization | 1 |
Go | Go | fix some nits in newrouter() | 758714ed6dab4d865bde55b3f670478605892099 | <ide><path>api/server/router/grpc/grpc.go
<ide> type grpcRouter struct {
<ide>
<ide> // NewRouter initializes a new grpc http router
<ide> func NewRouter(backends ...Backend) router.Router {
<del> opts := []grpc.ServerOption{grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor), grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor)}
<del> server := grpc.NewServer(opts...)
<del>
<ide> r := &grpcRouter{
<del> h2Server: &http2.Server{},
<del> grpcServer: server,
<add> h2Server: &http2.Server{},
<add> grpcServer: grpc.NewServer(
<add> grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
<add> grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
<add> ),
<ide> }
<ide> for _, b := range backends {
<ide> b.RegisterGRPC(r.grpcServer)
<ide> func NewRouter(backends ...Backend) router.Router {
<ide> }
<ide>
<ide> // Routes returns the available routers to the session controller
<del>func (r *grpcRouter) Routes() []router.Route {
<del> return r.routes
<add>func (gr *grpcRouter) Routes() []router.Route {
<add> return gr.routes
<ide> }
<ide>
<del>func (r *grpcRouter) initRoutes() {
<del> r.routes = []router.Route{
<del> router.NewPostRoute("/grpc", r.serveGRPC),
<add>func (gr *grpcRouter) initRoutes() {
<add> gr.routes = []router.Route{
<add> router.NewPostRoute("/grpc", gr.serveGRPC),
<ide> }
<ide> } | 1 |
Ruby | Ruby | add rdoc to phone_field | b5cc89ae017c540e1921a14b560f845ff31b3a8f | <ide><path>actionpack/lib/action_view/helpers/form_helper.rb
<ide> def search_field(object_name, method, options = {})
<ide> def telephone_field(object_name, method, options = {})
<ide> Tags::TelField.new(object_name, method, self, options).render
<ide> end
<add> # alias of telephone_field
<ide> alias phone_field telephone_field
<ide>
<ide> # Returns a text_field of type "date". | 1 |
Text | Text | add license to xlm-roberta-large-xnli card | a1ac08287940ba1bad9645682947c6299f70278a | <ide><path>model_cards/joeddav/xlm-roberta-large-xnli/README.md
<ide> datasets:
<ide> - xnli
<ide> widget:
<ide> - text: "За кого вы голосуете в 2020 году? <sep> This text is about politique."
<add>license: mit
<ide> ---
<ide>
<ide> # xlm-roberta-large-xnli | 1 |
PHP | PHP | fix double base dir in image() with fullbase | e61f636bc7e6d9bcc53ed2dba7f4041bfd7a42dc | <ide><path>lib/Cake/Test/Case/View/Helper/HtmlHelperTest.php
<ide> public function testImageTag() {
<ide>
<ide> $result = $this->Html->image('test.gif?one=two&three=four');
<ide> $this->assertTags($result, array('img' => array('src' => 'img/test.gif?one=two&three=four', 'alt' => '')));
<add> }
<ide>
<add>/**
<add> * Test that image() works with fullBase and a webroot not equal to /
<add> *
<add> * @return void
<add> */
<add> public function testImageWithFullBase() {
<ide> $result = $this->Html->image('test.gif', array('fullBase' => true));
<ide> $here = $this->Html->url('/', true);
<ide> $this->assertTags($result, array('img' => array('src' => $here . 'img/test.gif', 'alt' => '')));
<ide>
<ide> $result = $this->Html->image('sub/test.gif', array('fullBase' => true));
<ide> $here = $this->Html->url('/', true);
<ide> $this->assertTags($result, array('img' => array('src' => $here . 'img/sub/test.gif', 'alt' => '')));
<add>
<add> $request = $this->Html->request;
<add> $request->webroot = '/myproject/';
<add> $request->base = '/myproject';
<add> Router::setRequestInfo($request);
<add>
<add> $result = $this->Html->image('sub/test.gif', array('fullBase' => true));
<add> $here = $this->Html->url('/', true);
<add> $this->assertTags($result, array('img' => array('src' => $here . 'img/sub/test.gif', 'alt' => '')));
<ide> }
<ide>
<ide> /**
<ide><path>lib/Cake/View/Helper.php
<ide> public function assetUrl($path, $options = array()) {
<ide> $path = h($this->assetTimestamp($this->webroot($path)));
<ide>
<ide> if (!empty($options['fullBase'])) {
<del> if ($path[0] == '/') {
<del> $path = substr($path, 1);
<add> $base = $this->url('/', true);
<add> $len = strlen($this->request->webroot);
<add> if ($len) {
<add> $base = substr($base, 0, -$len);
<ide> }
<del> $path = $this->url('/', true) . $path;
<add> $path = $base . $path;
<ide> }
<ide> }
<ide> | 2 |
Python | Python | update the tag_map | adfd98731655cc3f351e0042353ea850ef7d23c2 | <ide><path>spacy/ja/__init__.py
<ide> def __call__(self, tokens):
<ide> # 1. get raw JP tags
<ide> # 2. add features to tags as necessary for UD
<ide>
<del> # TODO: if the text has been tokenized, this info is already available
<del> # How to set the data when tokenizing or save it for the tagger to find?
<del>
<ide> dtokens = detailed_tokens(self.tokenizer, tokens.text)
<ide> rawtags = list(map(resolve_pos, dtokens))
<ide> self.tagger.tag_from_strings(tokens, rawtags)
<ide><path>spacy/ja/tag_map.py
<ide> "名詞,普通名詞,助数詞可能,*":{POS: NOUN}, # counter / unit
<ide> "名詞,普通名詞,副詞可能,*":{POS: NOUN},
<ide>
<del> "連体詞,*,*,*":{POS: ADJ}, # XXX note この、その etc. should be DET
<add> "連体詞,*,*,*":{POS: ADJ}, # XXX this has exceptions based on literal token
<ide> "連体詞,*,*,*,ADJ":{POS: ADJ},
<add> "連体詞,*,*,*,PRON":{POS: PRON},
<ide> "連体詞,*,*,*,DET":{POS: DET},
<ide> } | 2 |
Python | Python | remove debug message | 5f4b92b88cc5faa3962978a79ea1e7e8725d8b21 | <ide><path>glances/server.py
<ide> def serve_forever(self):
<ide> """Main loop"""
<ide> while not self.finished:
<ide> self.handle_request()
<del> logger.info(self.finished)
<ide>
<ide>
<ide> class GlancesInstance(object): | 1 |
Python | Python | update backends with rnn support | 47ed18a3af8be20dce91286a331d4671074ee0ca | <ide><path>keras/backend/tensorflow_backend.py
<ide> def gradients(loss, variables):
<ide>
<ide> # CONTROL FLOW
<ide>
<del>def rnn(step_function, inputs, initial_states, go_backwards=False):
<del> '''TODO
<add>def rnn(step_function, inputs, initial_states,
<add> go_backwards=False, masking=True):
<add> '''Iterates over the time dimension of a tensor.
<add>
<add> Parameters
<add> ----------
<add> inputs: tensor of temporal data of shape (samples, time, ...)
<add> (at least 3D).
<add> step_function:
<add> Parameters:
<add> input: tensor with shape (samples, ...) (no time dimension),
<add> representing input for the batch of samples at a certain
<add> time step.
<add> states: list of tensors.
<add> Returns:
<add> output: tensor with shape (samples, ...) (no time dimension),
<add> new_states: list of tensors, same length and shapes
<add> as 'states'.
<add> initial_states: tensor with shape (samples, ...) (no time dimension),
<add> containing the initial values for the states used in
<add> the step function.
<add> go_backwards: boolean. If True, do the iteration over
<add> the time dimension in reverse order.
<add> masking: boolean. If true, any input timestep inputs[s, i]
<add> that is all-zeros will be skipped (states will be passed to
<add> the next step unchanged) and the corresponding output will
<add> be all zeros.
<add>
<add> Returns
<add> -------
<add> A tuple (last_output, outputs, new_states).
<add> last_output: the latest output of the rnn, of shape (samples, ...)
<add> outputs: tensor with shape (samples, time, ...) where each
<add> entry outputs[s, t] is the output of the step function
<add> at time t for sample s.
<add> new_states: list of tensors, latest states returned by
<add> the step function, of shape (samples, ...).
<ide> '''
<del> pass
<add> inputs = tf.transpose(inputs, (1, 0, 2))
<add> input_list = tf.unpack(inputs)
<add>
<add> states = initial_states
<add> successive_states = []
<add> successive_outputs = []
<add> if go_backwards:
<add> input_list = input_list.reverse()
<add> for input in input_list:
<add> output, new_states = step_function(input, states)
<add> if masking:
<add> # if all-zero input timestep, return
<add> # all-zero output and unchanged states
<add> switch = tf.reduce_any(input)
<add> output = tf.control_flow_ops.cond(switch,
<add> lambda: output,
<add> lambda: 0. * output)
<add> return_states = []
<add> for state, new_state in zip(states, new_states):
<add> return_states.append(tf.control_flow_ops.cond(switch,
<add> lambda: new_state,
<add> lambda: state))
<add> states = return_states
<add> else:
<add> states = new_states
<add> successive_outputs.append(output)
<add> successive_states.append(states)
<add>
<add> last_output = successive_outputs[-1]
<add> outputs = tf.pack(successive_outputs)
<add> new_states = successive_states[-1]
<add>
<add> outputs = tf.transpose(outputs, (1, 0, 2))
<add> return last_output, outputs, states
<ide>
<ide>
<ide> def switch(condition, then_expression, else_expression):
<ide> def conv2d(x, kernel, strides=(1, 1), border_mode='valid', dim_ordering='th'):
<ide>
<ide> strides = (1,) + strides + (1,)
<ide>
<add> if _FLOATX == 'float64':
<add> # tf conv2d only supports float32
<add> x = tf.cast(x, 'float32')
<add> kernel = tf.cast(kernel, 'float32')
<add>
<ide> if dim_ordering == 'th':
<ide> # TF uses the last dimension as channel dimension,
<ide> # instead of the 2nd one.
<ide> def conv2d(x, kernel, strides=(1, 1), border_mode='valid', dim_ordering='th'):
<ide> x = tf.nn.conv2d(x, kernel, strides, padding=padding)
<ide> else:
<ide> raise Exception('Unknown dim_ordering: ' + str(dim_ordering))
<add>
<add> if _FLOATX == 'float64':
<add> x = tf.cast(x, 'float64')
<ide> return x
<ide>
<ide>
<ide> def maxpool2d(x, pool_size, strides=(1, 1),
<ide> strides = (1,) + strides + (1,)
<ide> pool_size = (1,) + pool_size + (1,)
<ide>
<add> if _FLOATX == 'float64':
<add> # tf max_pool only supports float32
<add> x = tf.cast(x, 'float32')
<add>
<ide> if dim_ordering == 'th':
<ide> # TF uses the last dimension as channel dimension,
<ide> # instead of the 2nd one.
<ide> def maxpool2d(x, pool_size, strides=(1, 1),
<ide> x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
<ide> else:
<ide> raise Exception('Unknown dim_ordering: ' + str(dim_ordering))
<add>
<add> if _FLOATX == 'float64':
<add> x = tf.cast(x, 'float64')
<ide> return x
<ide>
<ide>
<ide><path>keras/backend/theano_backend.py
<ide> def gradients(loss, variables):
<ide>
<ide> # CONTROL FLOW
<ide>
<del>def rnn(step_function, inputs, initial_states, go_backwards=False):
<del> '''TODO
<del>
<del> Wrapper for scan
<add>def rnn(step_function, inputs, initial_states,
<add> go_backwards=False, masking=True):
<add> '''Iterates over the time dimension of a tensor.
<add>
<add> Parameters
<add> ----------
<add> inputs: tensor of temporal data of shape (samples, time, ...)
<add> (at least 3D).
<add> step_function:
<add> Parameters:
<add> input: tensor with shape (samples, ...) (no time dimension),
<add> representing input for the batch of samples at a certain
<add> time step.
<add> states: list of tensors.
<add> Returns:
<add> output: tensor with shape (samples, ...) (no time dimension),
<add> new_states: list of tensors, same length and shapes
<add> as 'states'.
<add> initial_states: tensor with shape (samples, ...) (no time dimension),
<add> containing the initial values for the states used in
<add> the step function.
<add> go_backwards: boolean. If True, do the iteration over
<add> the time dimension in reverse order.
<add> masking: boolean. If true, any input timestep inputs[s, i]
<add> that is all-zeros will be skipped (states will be passed to
<add> the next step unchanged) and the corresponding output will
<add> be all zeros.
<add>
<add> Returns
<add> -------
<add> A tuple (last_output, outputs, new_states).
<add> last_output: the latest output of the rnn, of shape (samples, ...)
<add> outputs: tensor with shape (samples, time, ...) where each
<add> entry outputs[s, t] is the output of the step function
<add> at time t for sample s.
<add> new_states: list of tensors, latest states returned by
<add> the step function, of shape (samples, ...).
<ide> '''
<del> pass
<add> inputs = inputs.dimshuffle((1, 0, 2))
<add>
<add> def _step(*args):
<add> global single_result
<add> input = args[0]
<add> states = args[1:]
<add> output, new_states = step_function(input, states)
<add> if masking:
<add> # if all-zero input timestep, return
<add> # all-zero output and unchanged states
<add> switch = T.any(input)
<add> output = T.switch(switch, output, 0. * output)
<add> return_states = []
<add> for state, new_state in zip(states, new_states):
<add> return_states.append(T.switch(switch, new_state, state))
<add> return [output] + return_states
<add> else:
<add> return [output] + new_states
<add>
<add> results, _ = theano.scan(
<add> _step,
<add> sequences=inputs,
<add> outputs_info=[None] + initial_states,
<add> go_backwards=go_backwards)
<add>
<add> # deal with Theano API inconsistency
<add> if type(results) is list:
<add> outputs = results[0]
<add> states = results[1:]
<add> else:
<add> outputs = results
<add> states = []
<add>
<add> outputs = T.squeeze(outputs)
<add> last_output = outputs[-1]
<add>
<add> outputs = outputs.dimshuffle((1, 0, 2))
<add> states = [T.squeeze(state[-1]) for state in states]
<add> return last_output, outputs, states
<ide>
<ide>
<ide> def switch(condition, then_expression, else_expression):
<ide><path>keras/layers/core.py
<ide> def output_shape(self):
<ide>
<ide> def get_output(self, train=False):
<ide> X = self.get_input(train)
<del> output = self.activation(K.dot(K.permute_dimensions(X, (1, 0, 2)),
<del> self.W) + self.b)
<del> return K.permute_dimensions(output, (1, 0, 2))
<add>
<add> def step(x, states):
<add> output = K.dot(x, self.W) + self.b
<add> return output, []
<add>
<add> last_output, outputs, states = K.rnn(step, X, [], masking=False)
<add>
<add> outputs = self.activation(outputs)
<add> return outputs
<ide>
<ide> def get_config(self):
<ide> config = {"name": self.__class__.__name__,
<ide><path>keras/layers/recurrent.py
<ide> # -*- coding: utf-8 -*-
<ide> from __future__ import absolute_import
<del>import theano
<del>import theano.tensor as T
<ide> import numpy as np
<ide>
<add>from .. import backend as K
<ide> from .. import activations, initializations
<del>from ..utils.theano_utils import shared_scalar, shared_zeros, alloc_zeros_matrix
<del>from ..layers.core import Layer, MaskedLayer
<del>from six.moves import range
<add>from ..layers.core import MaskedLayer
<ide>
<ide>
<ide> class Recurrent(MaskedLayer):
<ide> input_ndim = 3
<ide>
<del> def get_output_mask(self, train=None):
<add> def __init__(self, weights=None,
<add> return_sequences=False, input_dim=None,
<add> input_length=None, go_backwards=False,
<add> stateful=False, **kwargs):
<add> self.return_sequences = return_sequences
<add> self.initial_weights = weights
<add> self.go_backwards = go_backwards
<add> self.stateful = stateful
<add>
<add> self.input_dim = input_dim
<add> self.input_length = input_length
<add> if self.input_dim:
<add> kwargs['input_shape'] = (self.input_length, self.input_dim)
<add> super(Recurrent, self).__init__(**kwargs)
<add>
<add> def get_output_mask(self, train=False):
<ide> if self.return_sequences:
<ide> return super(Recurrent, self).get_output_mask(train)
<ide> else:
<ide> return None
<ide>
<del> def get_padded_shuffled_mask(self, train, X, pad=0):
<del> mask = self.get_input_mask(train)
<del> if mask is None:
<del> mask = T.ones_like(X.sum(axis=-1)) # is there a better way to do this without a sum?
<del>
<del> # TODO: reimplement
<del> # mask is (nb_samples, time)
<del> mask = T.shape_padright(mask) # (nb_samples, time, 1)
<del> mask = T.addbroadcast(mask, -1) # the new dimension (the '1') is made broadcastable
<del> # see http://deeplearning.net/software/theano/library/tensor/basic.html#broadcasting-in-theano-vs-numpy
<del> mask = mask.dimshuffle(1, 0, 2) # (time, nb_samples, 1)
<del>
<del> if pad > 0:
<del> # left-pad in time with 0
<del> padding = alloc_zeros_matrix(pad, mask.shape[1], 1)
<del> mask = T.concatenate([padding, mask], axis=0)
<del> return mask.astype('int8')
<del>
<ide> @property
<ide> def output_shape(self):
<ide> input_shape = self.input_shape
<ide> def output_shape(self):
<ide> else:
<ide> return (input_shape[0], self.output_dim)
<ide>
<add> def step(self, x, states):
<add> raise NotImplementedError
<ide>
<del>class SimpleRNN(Recurrent):
<del> '''
<del> Fully connected RNN where output is to fed back to input.
<del>
<del> Not a particularly useful model,
<del> included for demonstration purposes
<del> (demonstrates how to use theano.scan to build a basic RNN).
<del> '''
<del> def __init__(self, output_dim,
<del> init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', weights=None,
<del> truncate_gradient=-1, return_sequences=False, input_dim=None,
<del> input_length=None, go_backwards=False, **kwargs):
<del> self.output_dim = output_dim
<del> self.init = initializations.get(init)
<del> self.inner_init = initializations.get(inner_init)
<del> self.truncate_gradient = truncate_gradient
<del> self.activation = activations.get(activation)
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<del>
<del> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<del> super(SimpleRNN, self).__init__(**kwargs)
<del>
<del> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<del>
<del> self.W = self.init((input_dim, self.output_dim))
<del> self.U = self.inner_init((self.output_dim, self.output_dim))
<del> self.b = shared_zeros((self.output_dim))
<del> self.params = [self.W, self.U, self.b]
<del>
<del> if self.initial_weights is not None:
<del> self.set_weights(self.initial_weights)
<del> del self.initial_weights
<del>
<del> def _step(self, x_t, mask_tm1, h_tm1, u):
<del> '''
<del> Variable names follow the conventions from:
<del> http://deeplearning.net/software/theano/library/scan.html
<add> def get_output(self, train=False):
<add> # input shape: (nb_samples, time (padded with zeros), input_dim)
<add> X = self.get_input(train)
<add> assert K.ndim(X) == 3
<ide>
<del> '''
<del> return self.activation(x_t + mask_tm1 * T.dot(h_tm1, u))
<add> mask = self.get_output_mask(train)
<add> if mask:
<add> # apply mask
<add> X *= K.expand_dims(mask)
<add> masking = True
<add> else:
<add> masking = False
<ide>
<del> def get_output(self, train=False):
<del> X = self.get_input(train) # shape: (nb_samples, time (padded with zeros), input_dim)
<del> # new shape: (time, nb_samples, input_dim) -> because theano.scan iterates over main dimension
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=1)
<del> X = X.dimshuffle((1, 0, 2))
<del> x = T.dot(X, self.W) + self.b
<del>
<del> # scan = theano symbolic loop.
<del> # See: http://deeplearning.net/software/theano/library/scan.html
<del> # Iterate over the first dimension of the x array (=time).
<del> outputs, updates = theano.scan(
<del> self._step, # this will be called with arguments (sequences[i], outputs[i-1], non_sequences[i])
<del> sequences=[x, dict(input=padded_mask, taps=[-1])], # tensors to iterate over, inputs to _step
<del> # initialization of the output. Input to _step with default tap=-1.
<del> outputs_info=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1),
<del> non_sequences=self.U, # static inputs to _step
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<add> if self.stateful:
<add> initial_states = self.states
<add> else:
<add> # build an all-zero tensor of shape (samples, output_dim)
<add> initial_state = K.zeros_like(X) # (samples, timesteps, input_dim)
<add> initial_state = K.sum(initial_state, axis=1) # (samples, input_dim)
<add> reducer = K.zeros((self.input_dim, self.output_dim))
<add> initial_state = K.dot(initial_state, reducer) # (samples, output_dim)
<add> initial_states = [initial_state for _ in range(len(self.states))]
<add> last_output, outputs, states = K.rnn(self.step, X, initial_states,
<add> go_backwards=self.go_backwards,
<add> masking=masking)
<add> if self.stateful:
<add> for i in range(len(states)):
<add> K.set_value(self.states[i], states[i])
<ide>
<ide> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<add> return outputs
<add> else:
<add> return last_output
<ide>
<ide> def get_config(self):
<ide> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<del> "init": self.init.__name__,
<del> "inner_init": self.inner_init.__name__,
<del> "activation": self.activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<ide> "return_sequences": self.return_sequences,
<ide> "input_dim": self.input_dim,
<ide> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<del> base_config = super(SimpleRNN, self).get_config()
<add> "go_backwards": self.go_backwards,
<add> "stateful": self.stateful}
<add> base_config = super(Recurrent, self).get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<del>class SimpleDeepRNN(Recurrent):
<add>class SimpleRNN(Recurrent):
<ide> '''
<del> Fully connected RNN where the output of multiple timesteps
<del> (up to "depth" steps in the past) is fed back to the input:
<add> Fully-connected RNN where the output is to fed back to input.
<ide>
<del> output = activation( W.x_t + b + inner_activation(U_1.h_tm1) + inner_activation(U_2.h_tm2) + ... )
<add> Takes inputs with shape:
<add> (nb_samples, max_sample_length, input_dim)
<add> (samples shorter than `max_sample_length`
<add> are padded with zeros at the end)
<ide>
<del> This demonstrates how to build RNNs with arbitrary lookback.
<del> Also (probably) not a super useful model.
<add> and returns outputs with shape:
<add> if not return_sequences:
<add> (nb_samples, output_dim)
<add> if return_sequences:
<add> (nb_samples, max_sample_length, output_dim)
<ide> '''
<del> def __init__(self, output_dim, depth=3,
<add> def __init__(self, output_dim,
<ide> init='glorot_uniform', inner_init='orthogonal',
<del> activation='sigmoid', inner_activation='hard_sigmoid',
<del> weights=None, truncate_gradient=-1, return_sequences=False,
<del> input_dim=None, input_length=None, go_backwards=False, **kwargs):
<add> activation='sigmoid', **kwargs):
<ide> self.output_dim = output_dim
<ide> self.init = initializations.get(init)
<ide> self.inner_init = initializations.get(inner_init)
<del> self.truncate_gradient = truncate_gradient
<ide> self.activation = activations.get(activation)
<del> self.inner_activation = activations.get(inner_activation)
<del> self.depth = depth
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<add> super(SimpleRNN, self).__init__(**kwargs)
<ide>
<add> def build(self):
<add> input_shape = self.input_shape
<add> if self.stateful:
<add> if not input_shape[0]:
<add> raise Exception('If a RNN is stateful, a complete ' +
<add> 'input_shape must be provided ' +
<add> '(including batch size).')
<add> self.states = [K.zeros(input_shape[0], self.output_dim)]
<add> else:
<add> # initial states: all-zero tensor of shape (output_dim)
<add> self.states = [None]
<add> input_dim = input_shape[2]
<ide> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<del> super(SimpleDeepRNN, self).__init__(**kwargs)
<ide>
<del> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<ide> self.W = self.init((input_dim, self.output_dim))
<del> self.Us = [self.inner_init((self.output_dim, self.output_dim)) for _ in range(self.depth)]
<del> self.b = shared_zeros((self.output_dim))
<del> self.params = [self.W] + self.Us + [self.b]
<add> self.U = self.inner_init((self.output_dim, self.output_dim))
<add> self.b = K.zeros((self.output_dim))
<add> self.params = [self.W, self.U, self.b]
<ide>
<ide> if self.initial_weights is not None:
<ide> self.set_weights(self.initial_weights)
<ide> del self.initial_weights
<ide>
<del> def _step(self, x_t, *args):
<del> o = x_t
<del> for i in range(self.depth):
<del> mask_tmi = args[i]
<del> h_tmi = args[i + self.depth]
<del> U_tmi = args[i + 2*self.depth]
<del> o += mask_tmi*self.inner_activation(T.dot(h_tmi, U_tmi))
<del> return self.activation(o)
<del>
<del> def get_output(self, train=False):
<del> X = self.get_input(train)
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=self.depth)
<del> X = X.dimshuffle((1, 0, 2))
<del>
<del> x = T.dot(X, self.W) + self.b
<del>
<del> if self.depth == 1:
<del> initial = T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1)
<del> else:
<del> initial = T.unbroadcast(T.unbroadcast(alloc_zeros_matrix(self.depth, X.shape[1], self.output_dim), 0), 2)
<del>
<del> outputs, updates = theano.scan(
<del> self._step,
<del> sequences=[x, dict(
<del> input=padded_mask,
<del> taps=[(-i) for i in range(self.depth)]
<del> )],
<del> outputs_info=[dict(
<del> initial=initial,
<del> taps=[(-i-1) for i in range(self.depth)]
<del> )],
<del> non_sequences=self.Us,
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<del>
<del> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<add> def step(self, x, states):
<add> # states only contains the previous output.
<add> assert len(states) == 1
<add> prev_output = states[0]
<add> h = K.dot(x, self.W) + self.b
<add> output = self.activation(h * K.dot(prev_output, self.U))
<add> return output, [output]
<ide>
<ide> def get_config(self):
<del> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<del> "depth": self.depth,
<add> config = {"output_dim": self.output_dim,
<ide> "init": self.init.__name__,
<ide> "inner_init": self.inner_init.__name__,
<del> "activation": self.activation.__name__,
<del> "inner_activation": self.inner_activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<del> "return_sequences": self.return_sequences,
<del> "input_dim": self.input_dim,
<del> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<del> base_config = super(SimpleDeepRNN, self).get_config()
<add> "activation": self.activation.__name__}
<add> base_config = super(SimpleRNN, self).get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class GRU(Recurrent):
<ide> Acts as a spatiotemporal projection,
<ide> turning a sequence of vectors into a single vector.
<ide>
<del> Eats inputs with shape:
<del> (nb_samples, max_sample_length (samples shorter than this are padded with zeros at the end), input_dim)
<add> Takes inputs with shape:
<add> (nb_samples, max_sample_length, input_dim)
<add> (samples shorter than `max_sample_length`
<add> are padded with zeros at the end)
<ide>
<ide> and returns outputs with shape:
<ide> if not return_sequences:
<ide> class GRU(Recurrent):
<ide> (nb_samples, max_sample_length, output_dim)
<ide>
<ide> References:
<del> On the Properties of Neural Machine Translation: Encoder–Decoder Approaches
<add> On the Properties of Neural Machine Translation:
<add> Encoder–Decoder Approaches
<ide> http://www.aclweb.org/anthology/W14-4012
<del> Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
<add> Empirical Evaluation of Gated Recurrent Neural Networks
<add> on Sequence Modeling
<ide> http://arxiv.org/pdf/1412.3555v1.pdf
<ide> '''
<ide> def __init__(self, output_dim,
<ide> init='glorot_uniform', inner_init='orthogonal',
<ide> activation='sigmoid', inner_activation='hard_sigmoid',
<del> weights=None, truncate_gradient=-1, return_sequences=False,
<del> input_dim=None, input_length=None, go_backwards=False, **kwargs):
<add> **kwargs):
<ide> self.output_dim = output_dim
<ide> self.init = initializations.get(init)
<ide> self.inner_init = initializations.get(inner_init)
<ide> self.activation = activations.get(activation)
<ide> self.inner_activation = activations.get(inner_activation)
<del> self.truncate_gradient = truncate_gradient
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<del>
<del> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<ide> super(GRU, self).__init__(**kwargs)
<ide>
<ide> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<add> input_shape = self.input_shape
<add> input_dim = input_shape[2]
<add> self.input_dim = input_dim
<add> self.input = K.placeholder(input_shape)
<ide>
<ide> self.W_z = self.init((input_dim, self.output_dim))
<ide> self.U_z = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_z = shared_zeros((self.output_dim))
<add> self.b_z = K.zeros((self.output_dim,))
<ide>
<ide> self.W_r = self.init((input_dim, self.output_dim))
<ide> self.U_r = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_r = shared_zeros((self.output_dim))
<add> self.b_r = K.zeros((self.output_dim,))
<ide>
<ide> self.W_h = self.init((input_dim, self.output_dim))
<ide> self.U_h = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_h = shared_zeros((self.output_dim))
<del>
<del> self.params = [
<del> self.W_z, self.U_z, self.b_z,
<del> self.W_r, self.U_r, self.b_r,
<del> self.W_h, self.U_h, self.b_h,
<del> ]
<add> self.b_h = K.zeros((self.output_dim,))
<add>
<add> self.params = [self.W_z, self.U_z, self.b_z,
<add> self.W_r, self.U_r, self.b_r,
<add> self.W_h, self.U_h, self.b_h]
<add>
<add> if self.stateful:
<add> if not input_shape[0]:
<add> raise Exception('If a RNN is stateful, a complete ' +
<add> 'input_shape must be provided ' +
<add> '(including batch size).')
<add> self.states = [K.zeros(input_shape[0], self.output_dim)]
<add> else:
<add> # initial states: all-zero tensor of shape (output_dim)
<add> self.states = [None]
<ide>
<ide> if self.initial_weights is not None:
<ide> self.set_weights(self.initial_weights)
<ide> del self.initial_weights
<ide>
<del> def _step(self,
<del> xz_t, xr_t, xh_t, mask_tm1,
<del> h_tm1,
<del> u_z, u_r, u_h):
<del> h_mask_tm1 = mask_tm1 * h_tm1
<del> z = self.inner_activation(xz_t + T.dot(h_mask_tm1, u_z))
<del> r = self.inner_activation(xr_t + T.dot(h_mask_tm1, u_r))
<del> hh_t = self.activation(xh_t + T.dot(r * h_mask_tm1, u_h))
<del> h_t = z * h_mask_tm1 + (1 - z) * hh_t
<del> return h_t
<add> def step(self, x, states):
<add> assert len(states) == 1
<add> x_z = K.dot(x, self.W_z) + self.b_z
<add> x_r = K.dot(x, self.W_r) + self.b_r
<add> x_h = K.dot(x, self.W_h) + self.b_h
<ide>
<del> def get_output(self, train=False):
<del> X = self.get_input(train)
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=1)
<del> X = X.dimshuffle((1, 0, 2))
<del>
<del> x_z = T.dot(X, self.W_z) + self.b_z
<del> x_r = T.dot(X, self.W_r) + self.b_r
<del> x_h = T.dot(X, self.W_h) + self.b_h
<del> outputs, updates = theano.scan(
<del> self._step,
<del> sequences=[x_z, x_r, x_h, padded_mask],
<del> outputs_info=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1),
<del> non_sequences=[self.U_z, self.U_r, self.U_h],
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<add> h_tm1 = states[0]
<add> z = self.inner_activation(x_z + K.dot(h_tm1, self.U_z))
<add> r = self.inner_activation(x_r + K.dot(h_tm1, self.U_r))
<ide>
<del> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<add> hh = self.inner_activation(x_h + K.dot(r * h_tm1, self.U_h))
<add> h = z * h_tm1 + (1 - z) * hh
<add> return h, [h]
<ide>
<ide> def get_config(self):
<del> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<add> config = {"output_dim": self.output_dim,
<ide> "init": self.init.__name__,
<ide> "inner_init": self.inner_init.__name__,
<ide> "activation": self.activation.__name__,
<del> "inner_activation": self.inner_activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<del> "return_sequences": self.return_sequences,
<del> "input_dim": self.input_dim,
<del> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<add> "inner_activation": self.inner_activation.__name__}
<ide> base_config = super(GRU, self).get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> class LSTM(Recurrent):
<ide> Acts as a spatiotemporal projection,
<ide> turning a sequence of vectors into a single vector.
<ide>
<del> Eats inputs with shape:
<del> (nb_samples, max_sample_length (samples shorter than this are padded with zeros at the end), input_dim)
<add> Takes inputs with shape:
<add> (nb_samples, max_sample_length, input_dim)
<add> (samples shorter than `max_sample_length`
<add> are padded with zeros at the end)
<ide>
<ide> and returns outputs with shape:
<ide> if not return_sequences:
<ide> class LSTM(Recurrent):
<ide> http://www.cs.toronto.edu/~graves/preprint.pdf
<ide> '''
<ide> def __init__(self, output_dim,
<del> init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one',
<del> activation='tanh', inner_activation='hard_sigmoid',
<del> weights=None, truncate_gradient=-1, return_sequences=False,
<del> input_dim=None, input_length=None, go_backwards=False, **kwargs):
<add> init='glorot_uniform', inner_init='orthogonal',
<add> forget_bias_init='one', activation='tanh',
<add> inner_activation='hard_sigmoid', **kwargs):
<ide> self.output_dim = output_dim
<ide> self.init = initializations.get(init)
<ide> self.inner_init = initializations.get(inner_init)
<ide> self.forget_bias_init = initializations.get(forget_bias_init)
<ide> self.activation = activations.get(activation)
<ide> self.inner_activation = activations.get(inner_activation)
<del> self.truncate_gradient = truncate_gradient
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<del>
<del> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<ide> super(LSTM, self).__init__(**kwargs)
<ide>
<ide> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<add> input_shape = self.input_shape
<add> input_dim = input_shape[2]
<add> self.input_dim = input_dim
<add> self.input = K.placeholder(input_shape)
<add>
<add> if self.stateful:
<add> if not input_shape[0]:
<add> raise Exception('If a RNN is stateful, a complete ' +
<add> 'input_shape must be provided ' +
<add> '(including batch size).')
<add> self.states = [K.zeros(input_shape[0], self.output_dim),
<add> K.zeros(input_shape[0], self.output_dim)]
<add> else:
<add> # initial states: 2 all-zero tensor of shape (output_dim)
<add> self.states = [None, None]
<ide>
<ide> self.W_i = self.init((input_dim, self.output_dim))
<ide> self.U_i = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_i = shared_zeros((self.output_dim))
<add> self.b_i = K.zeros((self.output_dim))
<ide>
<ide> self.W_f = self.init((input_dim, self.output_dim))
<ide> self.U_f = self.inner_init((self.output_dim, self.output_dim))
<ide> self.b_f = self.forget_bias_init((self.output_dim))
<ide>
<ide> self.W_c = self.init((input_dim, self.output_dim))
<ide> self.U_c = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_c = shared_zeros((self.output_dim))
<add> self.b_c = K.zeros((self.output_dim))
<ide>
<ide> self.W_o = self.init((input_dim, self.output_dim))
<ide> self.U_o = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_o = shared_zeros((self.output_dim))
<add> self.b_o = K.zeros((self.output_dim))
<ide>
<del> self.params = [
<del> self.W_i, self.U_i, self.b_i,
<del> self.W_c, self.U_c, self.b_c,
<del> self.W_f, self.U_f, self.b_f,
<del> self.W_o, self.U_o, self.b_o,
<del> ]
<add> self.params = [self.W_i, self.U_i, self.b_i,
<add> self.W_c, self.U_c, self.b_c,
<add> self.W_f, self.U_f, self.b_f,
<add> self.W_o, self.U_o, self.b_o]
<ide>
<ide> if self.initial_weights is not None:
<ide> self.set_weights(self.initial_weights)
<ide> del self.initial_weights
<ide>
<del> def _step(self,
<del> xi_t, xf_t, xo_t, xc_t, mask_tm1,
<del> h_tm1, c_tm1,
<del> u_i, u_f, u_o, u_c):
<del> h_mask_tm1 = mask_tm1 * h_tm1
<del> c_mask_tm1 = mask_tm1 * c_tm1
<del>
<del> i_t = self.inner_activation(xi_t + T.dot(h_mask_tm1, u_i))
<del> f_t = self.inner_activation(xf_t + T.dot(h_mask_tm1, u_f))
<del> c_t = f_t * c_mask_tm1 + i_t * self.activation(xc_t + T.dot(h_mask_tm1, u_c))
<del> o_t = self.inner_activation(xo_t + T.dot(h_mask_tm1, u_o))
<del> h_t = o_t * self.activation(c_t)
<del> return h_t, c_t
<add> def step(self, x, states):
<add> assert len(states) == 2
<add> h_tm1 = states[0]
<add> c_tm1 = states[1]
<ide>
<del> def get_output(self, train=False):
<del> X = self.get_input(train)
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=1)
<del> X = X.dimshuffle((1, 0, 2))
<del>
<del> xi = T.dot(X, self.W_i) + self.b_i
<del> xf = T.dot(X, self.W_f) + self.b_f
<del> xc = T.dot(X, self.W_c) + self.b_c
<del> xo = T.dot(X, self.W_o) + self.b_o
<del>
<del> [outputs, memories], updates = theano.scan(
<del> self._step,
<del> sequences=[xi, xf, xo, xc, padded_mask],
<del> outputs_info=[
<del> T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1),
<del> T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1)
<del> ],
<del> non_sequences=[self.U_i, self.U_f, self.U_o, self.U_c],
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<add> x_i = K.dot(x, self.W_i) + self.b_i
<add> x_f = K.dot(x, self.W_f) + self.b_f
<add> x_c = K.dot(x, self.W_c) + self.b_c
<add> x_o = K.dot(x, self.W_o) + self.b_o
<ide>
<del> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<add> i = self.inner_activation(x_i + K.dot(h_tm1, self.U_i))
<add> f = self.inner_activation(x_f + K.dot(h_tm1, self.U_f))
<add> c = f * c_tm1 + i * self.activation(x_c + K.dot(h_tm1, self.U_c))
<add> o = self.inner_activation(x_o + K.dot(h_tm1, self.U_o))
<add> h = o * self.activation(c)
<add> return h, [h, c]
<ide>
<ide> def get_config(self):
<del> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<add> config = {"output_dim": self.output_dim,
<ide> "init": self.init.__name__,
<ide> "inner_init": self.inner_init.__name__,
<ide> "forget_bias_init": self.forget_bias_init.__name__,
<ide> "activation": self.activation.__name__,
<del> "inner_activation": self.inner_activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<del> "return_sequences": self.return_sequences,
<del> "input_dim": self.input_dim,
<del> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<add> "inner_activation": self.inner_activation.__name__}
<ide> base_config = super(LSTM, self).get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<del>class JZS1(Recurrent):
<add>class JZS(Recurrent):
<ide> '''
<del> Evolved recurrent neural network architectures from the evaluation of thousands
<del> of models, serving as alternatives to LSTMs and GRUs. See Jozefowicz et al. 2015.
<del>
<del> This corresponds to the `MUT1` architecture described in the paper.
<del>
<del> Takes inputs with shape:
<del> (nb_samples, max_sample_length (samples shorter than this are padded with zeros at the end), input_dim)
<del>
<del> and returns outputs with shape:
<del> if not return_sequences:
<del> (nb_samples, output_dim)
<del> if return_sequences:
<del> (nb_samples, max_sample_length, output_dim)
<add> Evolved recurrent neural network architectures
<add> from the evaluation of thousands
<add> of models, serving as alternatives to LSTMs and GRUs.
<add> See Jozefowicz et al. 2015.
<ide>
<ide> References:
<ide> An Empirical Exploration of Recurrent Network Architectures
<ide> class JZS1(Recurrent):
<ide> def __init__(self, output_dim,
<ide> init='glorot_uniform', inner_init='orthogonal',
<ide> activation='tanh', inner_activation='sigmoid',
<del> weights=None, truncate_gradient=-1, return_sequences=False,
<del> input_dim=None, input_length=None, go_backwards=False, **kwargs):
<add> **kwargs):
<ide> self.output_dim = output_dim
<ide> self.init = initializations.get(init)
<ide> self.inner_init = initializations.get(inner_init)
<ide> self.activation = activations.get(activation)
<ide> self.inner_activation = activations.get(inner_activation)
<del> self.truncate_gradient = truncate_gradient
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<del>
<del> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<del> super(JZS1, self).__init__(**kwargs)
<add> super(JZS, self).__init__(**kwargs)
<ide>
<ide> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<add> input_shape = self.input_shape
<add> input_dim = input_shape[2]
<add> self.input_dim = input_dim
<add> self.input = K.placeholder(input_shape)
<add>
<add> if self.stateful:
<add> if not input_shape[0]:
<add> raise Exception('If a RNN is stateful, a complete ' +
<add> 'input_shape must be provided ' +
<add> '(including batch size).')
<add> self.states = [K.zeros(input_shape[0], self.output_dim)]
<add> else:
<add> # initial states: all-zero tensor of shape (output_dim)
<add> self.states = [None]
<ide>
<ide> self.W_z = self.init((input_dim, self.output_dim))
<del> self.b_z = shared_zeros((self.output_dim))
<add> self.U_z = self.inner_init((self.output_dim, self.output_dim))
<add> self.b_z = K.zeros((self.output_dim))
<ide>
<ide> self.W_r = self.init((input_dim, self.output_dim))
<ide> self.U_r = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_r = shared_zeros((self.output_dim))
<add> self.b_r = K.zeros((self.output_dim))
<ide>
<add> self.W_h = self.init((input_dim, self.output_dim))
<ide> self.U_h = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_h = shared_zeros((self.output_dim))
<add> self.b_h = K.zeros((self.output_dim))
<ide>
<del> # P_h used to project X onto different dimension, using sparse random projections
<add> # P matrix used to project X onto different dimension,
<add> # using sparse random projections
<ide> if input_dim == self.output_dim:
<del> self.Pmat = theano.shared(np.identity(self.output_dim, dtype=theano.config.floatX), name=None)
<add> self.P = K.variable(np.identity(self.output_dim))
<ide> else:
<del> P = np.random.binomial(1, 0.5, size=(input_dim, self.output_dim)).astype(theano.config.floatX) * 2 - 1
<del> P = 1 / np.sqrt(input_dim) * P
<del> self.Pmat = theano.shared(P, name=None)
<add> P = np.random.binomial(1, 0.5, size=(input_dim, self.output_dim))
<add> P = 1. / np.sqrt(input_dim) * (P * 2. - 1.)
<add> self.P = K.variable(P)
<ide>
<del> self.params = [
<del> self.W_z, self.b_z,
<del> self.W_r, self.U_r, self.b_r,
<del> self.U_h, self.b_h,
<del> self.Pmat
<del> ]
<add> self.params = [self.W_z, self.b_z,
<add> self.W_r, self.U_r, self.b_r,
<add> self.U_h, self.b_h,
<add> self.P]
<ide>
<ide> if self.initial_weights is not None:
<ide> self.set_weights(self.initial_weights)
<ide> del self.initial_weights
<ide>
<del> def _step(self,
<del> xz_t, xr_t, xh_t, mask_tm1,
<del> h_tm1,
<del> u_r, u_h):
<del> h_mask_tm1 = mask_tm1 * h_tm1
<del> z = self.inner_activation(xz_t)
<del> r = self.inner_activation(xr_t + T.dot(h_mask_tm1, u_r))
<del> hh_t = self.activation(xh_t + T.dot(r * h_mask_tm1, u_h))
<del> h_t = hh_t * z + h_mask_tm1 * (1 - z)
<del> return h_t
<del>
<del> def get_output(self, train=False):
<del> X = self.get_input(train)
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=1)
<del> X = X.dimshuffle((1, 0, 2))
<del>
<del> x_z = T.dot(X, self.W_z) + self.b_z
<del> x_r = T.dot(X, self.W_r) + self.b_r
<del> x_h = T.tanh(T.dot(X, self.Pmat)) + self.b_h
<del> outputs, updates = theano.scan(
<del> self._step,
<del> sequences=[x_z, x_r, x_h, padded_mask],
<del> outputs_info=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1),
<del> non_sequences=[self.U_r, self.U_h],
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<del> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<del>
<ide> def get_config(self):
<del> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<add> config = {"output_dim": self.output_dim,
<ide> "init": self.init.__name__,
<ide> "inner_init": self.inner_init.__name__,
<ide> "activation": self.activation.__name__,
<del> "inner_activation": self.inner_activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<del> "return_sequences": self.return_sequences,
<del> "input_dim": self.input_dim,
<del> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<del> base_config = super(JZS1, self).get_config()
<add> "inner_activation": self.inner_activation.__name__}
<add> base_config = super(JZS, self).get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<del>class JZS2(Recurrent):
<add>class JZS1(JZS):
<ide> '''
<del> Evolved recurrent neural network architectures from the evaluation of thousands
<del> of models, serving as alternatives to LSTMs and GRUs. See Jozefowicz et al. 2015.
<add> Evolved recurrent neural network architectures
<add> from the evaluation of thousands
<add> of models, serving as alternatives to LSTMs and GRUs.
<add> See Jozefowicz et al. 2015.
<ide>
<del> This corresponds to the `MUT2` architecture described in the paper.
<del>
<del> Takes inputs with shape:
<del> (nb_samples, max_sample_length (samples shorter than this are padded with zeros at the end), input_dim)
<del>
<del> and returns outputs with shape:
<del> if not return_sequences:
<del> (nb_samples, output_dim)
<del> if return_sequences:
<del> (nb_samples, max_sample_length, output_dim)
<del>
<del> References:
<del> An Empirical Exploration of Recurrent Network Architectures
<del> http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf
<add> This corresponds to the `MUT1` architecture described in the paper.
<ide> '''
<del> def __init__(self, output_dim,
<del> init='glorot_uniform', inner_init='orthogonal',
<del> activation='tanh', inner_activation='sigmoid',
<del> weights=None, truncate_gradient=-1, return_sequences=False,
<del> input_dim=None, input_length=None, go_backwards=False, **kwargs):
<del> self.output_dim = output_dim
<del> self.init = initializations.get(init)
<del> self.inner_init = initializations.get(inner_init)
<del> self.activation = activations.get(activation)
<del> self.inner_activation = activations.get(inner_activation)
<del> self.truncate_gradient = truncate_gradient
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<add> def __init__(self, *args, **kwargs):
<add> super(JZS1, self).__init__(*args, **kwargs)
<ide>
<del> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<del> super(JZS2, self).__init__(**kwargs)
<add> def step(self, x, states):
<add> assert len(states) == 1
<add> h_tm1 = states[0]
<ide>
<del> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<add> x_z = K.dot(x, self.W_z) + self.b_z
<add> x_r = K.dot(x, self.P) + self.b_r
<add> x_h = K.dot(x, self.W_h) + self.b_h
<ide>
<del> self.W_z = self.init((input_dim, self.output_dim))
<del> self.U_z = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_z = shared_zeros((self.output_dim))
<del>
<del> self.U_r = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_r = shared_zeros((self.output_dim))
<add> z = self.inner_activation(x_z)
<add> r = self.inner_activation(x_r + K.dot(h_tm1, self.U_r))
<add> hh = self.activation(x_h + K.dot(r * h_tm1, self.U_h))
<add> h = hh * z + h_tm1 * (1. - z)
<add> return h, [h]
<ide>
<del> self.W_h = self.init((input_dim, self.output_dim))
<del> self.U_h = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_h = shared_zeros((self.output_dim))
<ide>
<del> # P_h used to project X onto different dimension, using sparse random projections
<del> if input_dim == self.output_dim:
<del> self.Pmat = theano.shared(np.identity(self.output_dim, dtype=theano.config.floatX), name=None)
<del> else:
<del> P = np.random.binomial(1, 0.5, size=(input_dim, self.output_dim)).astype(theano.config.floatX) * 2 - 1
<del> P = 1 / np.sqrt(input_dim) * P
<del> self.Pmat = theano.shared(P, name=None)
<add>class JZS2(JZS):
<add> '''
<add> Evolved recurrent neural network architectures
<add> from the evaluation of thousands
<add> of models, serving as alternatives to LSTMs and GRUs.
<add> See Jozefowicz et al. 2015.
<ide>
<del> self.params = [
<del> self.W_z, self.U_z, self.b_z,
<del> self.U_r, self.b_r,
<del> self.W_h, self.U_h, self.b_h,
<del> self.Pmat
<del> ]
<add> This corresponds to the `MUT2` architecture described in the paper.
<add> '''
<add> def __init__(self, *args, **kwargs):
<add> super(JZS2, self).__init__(*args, **kwargs)
<ide>
<del> if self.initial_weights is not None:
<del> self.set_weights(self.initial_weights)
<del> del self.initial_weights
<add> def step(self, x, states):
<add> assert len(states) == 1
<add> h_tm1 = states[0]
<ide>
<del> def _step(self,
<del> xz_t, xr_t, xh_t, mask_tm1,
<del> h_tm1,
<del> u_z, u_r, u_h):
<del> h_mask_tm1 = mask_tm1 * h_tm1
<del> z = self.inner_activation(xz_t + T.dot(h_mask_tm1, u_z))
<del> r = self.inner_activation(xr_t + T.dot(h_mask_tm1, u_r))
<del> hh_t = self.activation(xh_t + T.dot(r * h_mask_tm1, u_h))
<del> h_t = hh_t * z + h_mask_tm1 * (1 - z)
<del> return h_t
<add> x_z = K.dot(x, self.W_z) + self.b_z
<add> x_r = K.dot(x, self.P) + self.b_r
<add> x_h = K.dot(x, self.W_h) + self.b_h
<ide>
<del> def get_output(self, train=False):
<del> X = self.get_input(train)
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=1)
<del> X = X.dimshuffle((1, 0, 2))
<del>
<del> x_z = T.dot(X, self.W_z) + self.b_z
<del> x_r = T.dot(X, self.Pmat) + self.b_r
<del> x_h = T.dot(X, self.W_h) + self.b_h
<del> outputs, updates = theano.scan(
<del> self._step,
<del> sequences=[x_z, x_r, x_h, padded_mask],
<del> outputs_info=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1),
<del> non_sequences=[self.U_z, self.U_r, self.U_h],
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<del>
<del> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<del>
<del> def get_config(self):
<del> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<del> "init": self.init.__name__,
<del> "inner_init": self.inner_init.__name__,
<del> "activation": self.activation.__name__,
<del> "inner_activation": self.inner_activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<del> "return_sequences": self.return_sequences,
<del> "input_dim": self.input_dim,
<del> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<del> base_config = super(JZS2, self).get_config()
<del> return dict(list(base_config.items()) + list(config.items()))
<add> z = self.inner_activation(x_z + K.dot(h_tm1, self.U_z))
<add> r = self.inner_activation(x_r + K.dot(h_tm1, self.U_r))
<add> hh = self.activation(x_h + K.dot(r * h_tm1, self.U_h))
<add> h = hh * z + h_tm1 * (1. - z)
<add> return h, [h]
<ide>
<ide>
<ide> class JZS3(Recurrent):
<ide> '''
<del> Evolved recurrent neural network architectures from the evaluation of thousands
<del> of models, serving as alternatives to LSTMs and GRUs. See Jozefowicz et al. 2015.
<add> Evolved recurrent neural network architectures
<add> from the evaluation of thousands
<add> of models, serving as alternatives to LSTMs and GRUs.
<add> See Jozefowicz et al. 2015.
<ide>
<ide> This corresponds to the `MUT3` architecture described in the paper.
<del>
<del> Takes inputs with shape:
<del> (nb_samples, max_sample_length (samples shorter than this are padded with zeros at the end), input_dim)
<del>
<del> and returns outputs with shape:
<del> if not return_sequences:
<del> (nb_samples, output_dim)
<del> if return_sequences:
<del> (nb_samples, max_sample_length, output_dim)
<del>
<del> References:
<del> An Empirical Exploration of Recurrent Network Architectures
<del> http://www.jmlr.org/proceedings/papers/v37/jozefowicz15.pdf
<ide> '''
<del> def __init__(self, output_dim,
<del> init='glorot_uniform', inner_init='orthogonal',
<del> activation='tanh', inner_activation='sigmoid',
<del> weights=None, truncate_gradient=-1, return_sequences=False,
<del> input_dim=None, input_length=None, go_backwards=False, **kwargs):
<del> self.output_dim = output_dim
<del> self.init = initializations.get(init)
<del> self.inner_init = initializations.get(inner_init)
<del> self.activation = activations.get(activation)
<del> self.inner_activation = activations.get(inner_activation)
<del> self.truncate_gradient = truncate_gradient
<del> self.return_sequences = return_sequences
<del> self.initial_weights = weights
<del> self.go_backwards = go_backwards
<del>
<del> self.input_dim = input_dim
<del> self.input_length = input_length
<del> if self.input_dim:
<del> kwargs['input_shape'] = (self.input_length, self.input_dim)
<del> super(JZS3, self).__init__(**kwargs)
<del>
<del> def build(self):
<del> input_dim = self.input_shape[2]
<del> self.input = T.tensor3()
<del>
<del> self.W_z = self.init((input_dim, self.output_dim))
<del> self.U_z = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_z = shared_zeros((self.output_dim))
<del>
<del> self.W_r = self.init((input_dim, self.output_dim))
<del> self.U_r = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_r = shared_zeros((self.output_dim))
<del>
<del> self.W_h = self.init((input_dim, self.output_dim))
<del> self.U_h = self.inner_init((self.output_dim, self.output_dim))
<del> self.b_h = shared_zeros((self.output_dim))
<del>
<del> self.params = [
<del> self.W_z, self.U_z, self.b_z,
<del> self.W_r, self.U_r, self.b_r,
<del> self.W_h, self.U_h, self.b_h,
<del> ]
<del>
<del> if self.initial_weights is not None:
<del> self.set_weights(self.initial_weights)
<del> del self.initial_weights
<del>
<del> def _step(self,
<del> xz_t, xr_t, xh_t, mask_tm1,
<del> h_tm1,
<del> u_z, u_r, u_h):
<del> h_mask_tm1 = mask_tm1 * h_tm1
<del> z = self.inner_activation(xz_t + T.dot(T.tanh(h_mask_tm1), u_z))
<del> r = self.inner_activation(xr_t + T.dot(h_mask_tm1, u_r))
<del> hh_t = self.activation(xh_t + T.dot(r * h_mask_tm1, u_h))
<del> h_t = hh_t * z + h_mask_tm1 * (1 - z)
<del> return h_t
<del>
<del> def get_output(self, train=False):
<del> X = self.get_input(train)
<del> padded_mask = self.get_padded_shuffled_mask(train, X, pad=1)
<del> X = X.dimshuffle((1, 0, 2))
<del>
<del> x_z = T.dot(X, self.W_z) + self.b_z
<del> x_r = T.dot(X, self.W_r) + self.b_r
<del> x_h = T.dot(X, self.W_h) + self.b_h
<del> outputs, updates = theano.scan(
<del> self._step,
<del> sequences=[x_z, x_r, x_h, padded_mask],
<del> outputs_info=T.unbroadcast(alloc_zeros_matrix(X.shape[1], self.output_dim), 1),
<del> non_sequences=[self.U_z, self.U_r, self.U_h],
<del> truncate_gradient=self.truncate_gradient,
<del> go_backwards=self.go_backwards)
<del>
<del> if self.return_sequences:
<del> return outputs.dimshuffle((1, 0, 2))
<del> return outputs[-1]
<del>
<del> def get_config(self):
<del> config = {"name": self.__class__.__name__,
<del> "output_dim": self.output_dim,
<del> "init": self.init.__name__,
<del> "inner_init": self.inner_init.__name__,
<del> "activation": self.activation.__name__,
<del> "inner_activation": self.inner_activation.__name__,
<del> "truncate_gradient": self.truncate_gradient,
<del> "return_sequences": self.return_sequences,
<del> "input_dim": self.input_dim,
<del> "input_length": self.input_length,
<del> "go_backwards": self.go_backwards}
<del> base_config = super(JZS3, self).get_config()
<del> return dict(list(base_config.items()) + list(config.items()))
<add> def __init__(self, *args, **kwargs):
<add> super(JZS3, self).__init__(*args, **kwargs)
<add>
<add> def step(self, x, states):
<add> assert len(states) == 1
<add> h_tm1 = states[0]
<add>
<add> x_z = K.dot(x, self.W_z) + self.b_z
<add> x_r = K.dot(x, self.P) + self.b_r
<add> x_h = K.dot(x, self.W_h) + self.b_h
<add>
<add> z = self.inner_activation(x_z + K.dot(K.tanh(h_tm1), self.U_z))
<add> r = self.inner_activation(x_r + K.dot(h_tm1, self.U_r))
<add> hh = self.activation(x_h + K.dot(r * h_tm1, self.U_h))
<add> h = hh * z + h_tm1 * (1. - z)
<add> return h, [h]
<ide><path>tests/auto/keras/backend/test_backends.py
<ide> from numpy.testing import assert_allclose
<ide> import numpy as np
<ide>
<del>np.random.seed(1337)
<del>
<ide> from keras.backend import theano_backend as KTH
<ide> from keras.backend import tensorflow_backend as KTF
<ide>
<ide> def test_function(self):
<ide> assert_allclose(new_val_th, new_val_tf, atol=1e-06)
<ide>
<ide> def test_rnn(self):
<del> pass
<add> # implement a simple RNN
<add> input_dim = 8
<add> output_dim = 4
<add> timesteps = 5
<add>
<add> input_val = np.random.random((32, timesteps, input_dim))
<add> init_state_val = np.random.random((32, output_dim))
<add> W_i_val = np.random.random((input_dim, output_dim))
<add> W_o_val = np.random.random((output_dim, output_dim))
<add>
<add> def rnn_step_fn(input_dim, output_dim, K):
<add> W_i = K.variable(W_i_val)
<add> W_o = K.variable(W_o_val)
<add>
<add> def step_function(x, states):
<add> assert len(states) == 1
<add> prev_output = states[0]
<add> output = K.dot(x, W_i) + K.dot(prev_output, W_o)
<add> return output, [output]
<add> return step_function
<add>
<add> th_rnn_step_fn = rnn_step_fn(input_dim, output_dim, KTH)
<add> inputs = KTH.variable(input_val)
<add> initial_states = [KTH.variable(init_state_val)]
<add> last_output, outputs, new_states = KTH.rnn(th_rnn_step_fn, inputs,
<add> initial_states,
<add> go_backwards=False,
<add> masking=False)
<add> th_last_output = KTH.eval(last_output)
<add> th_outputs = KTH.eval(outputs)
<add> assert len(new_states) == 1
<add> th_state = KTH.eval(new_states[0])
<add>
<add> tf_rnn_step_fn = rnn_step_fn(input_dim, output_dim, KTF)
<add> inputs = KTF.variable(input_val)
<add> initial_states = [KTF.variable(init_state_val)]
<add> last_output, outputs, new_states = KTF.rnn(tf_rnn_step_fn, inputs,
<add> initial_states,
<add> go_backwards=False,
<add> masking=False)
<add> tf_last_output = KTF.eval(last_output)
<add> tf_outputs = KTF.eval(outputs)
<add> assert len(new_states) == 1
<add> tf_state = KTF.eval(new_states[0])
<add>
<add> assert_allclose(tf_last_output, th_last_output, atol=1e-06)
<add> assert_allclose(tf_outputs, th_outputs, atol=1e-06)
<add> assert_allclose(tf_state, th_state, atol=1e-06)
<ide>
<ide> def test_switch(self):
<ide> val = np.random.random()
<ide><path>tests/auto/keras/layers/test_recurrent.py
<ide> import unittest
<ide> import numpy as np
<del>import theano
<ide>
<ide> from keras.layers import recurrent
<add>from keras import backend as K
<ide>
<ide> nb_samples, timesteps, input_dim, output_dim = 3, 3, 10, 5
<ide>
<ide>
<ide> def _runner(layer_class):
<ide> """
<del> All the recurrent layers share the same interface, so we can run through them with a single
<del> function.
<add> All the recurrent layers share the same interface,
<add> so we can run through them with a single function.
<ide> """
<ide> for ret_seq in [True, False]:
<del> layer = layer_class(output_dim, return_sequences=ret_seq, weights=None, input_shape=(None, input_dim))
<del> layer.input = theano.shared(value=np.ones((nb_samples, timesteps, input_dim)))
<add> layer = layer_class(output_dim, return_sequences=ret_seq,
<add> weights=None, input_shape=(None, input_dim))
<add> layer.input = K.variable(np.ones((nb_samples, timesteps, input_dim)))
<ide> config = layer.get_config()
<ide>
<ide> for train in [True, False]:
<del> out = layer.get_output(train).eval()
<add> out = K.eval(layer.get_output(train))
<ide> # Make sure the output has the desired shape
<ide> if ret_seq:
<ide> assert(out.shape == (nb_samples, timesteps, output_dim))
<ide> class TestRNNS(unittest.TestCase):
<ide> def test_simple(self):
<ide> _runner(recurrent.SimpleRNN)
<ide>
<del> def test_simple_deep(self):
<del> _runner(recurrent.SimpleDeepRNN)
<del>
<ide> def test_gru(self):
<ide> _runner(recurrent.GRU)
<ide>
<ide> def test_lstm(self):
<ide> _runner(recurrent.LSTM)
<ide>
<del> def test_jzs1(self):
<del> _runner(recurrent.JZS1)
<add> # def test_jzs1(self):
<add> # _runner(recurrent.JZS1)
<ide>
<del> def test_jzs2(self):
<del> _runner(recurrent.JZS2)
<add> # def test_jzs2(self):
<add> # _runner(recurrent.JZS2)
<ide>
<del> def test_jzs3(self):
<del> _runner(recurrent.JZS3)
<add> # def test_jzs3(self):
<add> # _runner(recurrent.JZS3)
<ide>
<ide>
<ide> if __name__ == '__main__': | 6 |
Java | Java | fix javadoc link in defaultcorsprocessor | df291a39b1f857846f870772772809d0a63c4d26 | <ide><path>spring-web/src/main/java/org/springframework/web/cors/DefaultCorsProcessor.java
<ide> protected String checkOrigin(CorsConfiguration config, @Nullable String requestO
<ide> /**
<ide> * Check the HTTP method and determine the methods for the response of a
<ide> * pre-flight request. The default implementation simply delegates to
<del> * {@link org.springframework.web.cors.CorsConfiguration#checkOrigin(String)}.
<add> * {@link org.springframework.web.cors.CorsConfiguration#checkHttpMethod(HttpMethod)}.
<ide> */
<ide> @Nullable
<ide> protected List<HttpMethod> checkMethods(CorsConfiguration config, @Nullable HttpMethod requestMethod) {
<ide><path>spring-web/src/main/java/org/springframework/web/cors/reactive/DefaultCorsProcessor.java
<ide> protected String checkOrigin(CorsConfiguration config, @Nullable String requestO
<ide> /**
<ide> * Check the HTTP method and determine the methods for the response of a
<ide> * pre-flight request. The default implementation simply delegates to
<del> * {@link CorsConfiguration#checkOrigin(String)}.
<add> * {@link CorsConfiguration#checkHttpMethod(HttpMethod)}.
<ide> */
<ide> @Nullable
<ide> protected List<HttpMethod> checkMethods(CorsConfiguration config, @Nullable HttpMethod requestMethod) { | 2 |
Python | Python | remove the double connect hack | 649c70e7029118d82128f2f6e35a22ca8f85be74 | <ide><path>libcloud/common/base.py
<ide> def request(self, action, params=None, data=None, headers=None,
<ide> else:
<ide> url = action
<ide>
<del> # Removed terrible hack...this a less-bad hack that doesn't execute a
<del> # request twice, but it's still a hack.
<del> self.connect()
<ide> try:
<ide> # @TODO: Should we just pass File object as body to request method
<ide> # instead of dealing with splitting and sending the file ourselves? | 1 |
Javascript | Javascript | expose schema validator as `webpack.validateschema | 2bbd3652bf6c9c73209ec070f906a82f05240afb | <ide><path>lib/webpack.js
<ide> webpack.WebpackOptionsApply = WebpackOptionsApply;
<ide> webpack.Compiler = Compiler;
<ide> webpack.MultiCompiler = MultiCompiler;
<ide> webpack.NodeEnvironmentPlugin = NodeEnvironmentPlugin;
<del>webpack.validate = validateWebpackOptions;
<add>webpack.validate = validateWebpackOptions.bind(this, webpackOptionsSchema);
<add>webpack.validateSchema = validateWebpackOptions;
<ide>
<ide> function exportPlugins(exports, path, plugins) {
<ide> plugins.forEach(function(name) { | 1 |
Text | Text | add strtolower function | d6f8ddfdb3523b095ce63c11bec3d29e459ee094 | <ide><path>guide/english/php/functions/index.md
<ide> function makeItBIG($a_lot_of_names) {
<ide> $a_lot_of_names = ['Homer', 'Marge', 'Bart', 'Maggy', 'Lisa'];
<ide> var_dump(makeItBIG($a_lot_of_names));
<ide> ```
<add>## strtolower Function
<add>The strtolower() function converts a string to lowercase.
<add>```
<add><?php
<add> echo strtolower("Hello WORLD."); //hello world.
<add>?>
<add>```
<ide>
<ide> #### More Information:
<ide> | 1 |
Ruby | Ruby | reorganize some checks | ab19242d0484a91ca2b9dac28c8a131be08758d6 | <ide><path>Library/Homebrew/cmd/audit.rb
<ide> def audit_formula_urls f
<ide> problems << " * The homepage should start with http or https."
<ide> end
<ide>
<add> # Google Code homepages should end in a slash
<add> if f.homepage =~ %r[^https?://code\.google\.com/p/[^/]+[^/]$]
<add> problems << " * Google Code homepage should end with a slash."
<add> end
<add>
<ide> urls = [(f.url rescue nil), (f.head rescue nil)].reject {|p| p.nil?}
<ide> urls.uniq! # head-only formulae result in duplicate entries
<ide>
<ide> def audit_formula_urls f
<ide> return problems
<ide> end
<ide>
<add>def audit_formula_specs text
<add> problems = []
<add>
<add> if text =~ /devel .+(url '.+').+(url '.+')/m
<add> problems << " * 'devel' block found before stable 'url'"
<add> end
<add>
<add> if text =~ /devel .+(head '.+')/m
<add> problems << " * 'devel' block found before 'head'"
<add> end
<add>
<add> if text =~ /devel do\s+end/
<add> problems << " * Empty 'devel' block found"
<add> end
<add>
<add> return problems
<add>end
<add>
<ide> def audit_formula_instance f
<ide> problems = []
<ide>
<ide> def audit_formula_instance f
<ide> end
<ide> end
<ide>
<del> # Google Code homepages should end in a slash
<del> if f.homepage =~ %r[^https?://code\.google\.com/p/[^/]+[^/]$]
<del> problems << " * Google Code homepage should end with a slash."
<del> end
<add> problems += [' * invalid or missing version'] if f.version.to_s.empty?
<ide>
<ide> return problems
<ide> end
<ide> def audit
<ide> problems << " * 'DATA' was found, but no '__END__'"
<ide> end
<ide>
<del> problems << " * File should end with a newline" if text =~ /.+\z/
<del>
<del> problems += [' * invalid or missing version'] if f.version.to_s.empty?
<del>
<del> problems << " * 'devel' block found before stable 'url'" if text =~ /devel .+(url '.+').+(url '.+')/m
<del>
<del> problems << " * 'devel' block found before 'head'" if text =~ /devel .+(head '.+')/m
<del>
<del> problems << " * Empty 'devel' block found" if text =~ /devel do\s+end/
<add> # files should end with a newline
<add> if text =~ /.+\z/
<add> problems << " * File should end with a newline"
<add> end
<ide>
<ide> # Don't try remaining audits on text in __END__
<ide> text_without_patch = (text.split("__END__")[0]).strip()
<ide>
<ide> problems += audit_formula_text(f.name, text_without_patch)
<ide> problems += audit_formula_options(f, text_without_patch)
<ide> problems += audit_formula_version(f, text_without_patch)
<add> problems += audit_formula_specs(text_without_patch)
<ide>
<ide> unless problems.empty?
<ide> errors = true | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.