content_type (stringclasses, 8 values) | main_lang (stringclasses, 7 values) | message (string, 1-50 chars) | sha (string, 40 chars) | patch (string, 52-962k chars) | file_count (int64, 1-300)
---|---|---|---|---|---
Text | Text | add changelog entry for | da2c6eff74fda1636446ea1566ee4f8ac9300a50 | <ide><path>activerecord/CHANGELOG.md
<add>* Don't check type when using `if_not_exists` on `add_column`.
<add>
<add> Previously, if a migration called `add_column` with the `if_not_exists` option set to true,
<add> the `column_exists?` check would look for a column with the same name and type as the migration.
<add>
<add> Recently, it was discovered that the type passed to the migration is not always the same type
<add> as the column after migration. For example, a column set to `:mediumblob` in the migration will
<add> be cast to `binary` when calling `column.type`. Since there is no straightforward way to cast
<add> the type to the database type without running the migration, we opted to drop the type check from
<add> `add_column`. This means that migrations adding a duplicate column with a different type will no
<add> longer raise an error.
<add>
<add> *Eileen M. Uchitelle*
<add>
<ide> * Log a warning message when running SQLite in production
<ide>
<ide> Using SQLite in production ENV is generally discouraged. SQLite is also the default adapter | 1 |
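A minimal sketch of the behavior this changelog entry describes, assuming a hypothetical `users` table (the migration class and version tag are illustrative): with the type check gone, re-adding an existing column under `if_not_exists: true` is simply skipped, even when the declared type differs.

```ruby
# Hypothetical migration: suppose `users.ssn` already exists as a :text column.
class AddSsnToUsers < ActiveRecord::Migration[6.1]
  def change
    # Previously this raised because the existing column's type (:text)
    # did not match :string; after this change the existence check
    # ignores type and the statement becomes a no-op.
    add_column :users, :ssn, :string, if_not_exists: true
  end
end
```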
Text | Text | change the instruction text | 79f937ea1d9190a40475113e2b7ecbfd6be730d2 | <ide><path>curriculum/challenges/english/14-responsive-web-design-22/learn-html-forms-by-building-a-registration-form/60f8604682407e0d017bbf7f.md
<ide> dashedName: step-25
<ide>
<ide> # --description--
<ide>
<del>For the terms and conditions, add an `input` with a `type` of `checkbox` to the third `label` element. Also, as we do not want users to sign up, without having read the terms and conditions, make it `required`.
<add>For the terms and conditions, add an `input` with a `type` of `checkbox` to the third `label` element. Make this `input` element `required` because users should not sign up without reading the terms and conditions.
<ide>
<ide> # --hints--
<ide> | 1 |
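A sketch of markup that would satisfy the rewritten instruction; the surrounding attributes and link target are illustrative, not part of the diff.

```html
<label for="terms-and-conditions">
  <!-- `required` blocks submission until the box is checked;
       the href is a placeholder, not from the challenge itself -->
  <input id="terms-and-conditions" type="checkbox" required />
  I accept the <a href="https://example.com/terms">terms and conditions</a>
</label>
```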
Ruby | Ruby | remove base.delete as it's same as relation#delete | 223e2a2709cbd0013d51b024bb4e0f950586c125 | <ide><path>activerecord/lib/active_record/base.rb
<ide> def colorize_logging(*args)
<ide> end
<ide> alias :colorize_logging= :colorize_logging
<ide>
<del> delegate :find, :first, :last, :all, :destroy_all, :exists?, :to => :scoped
<add> delegate :find, :first, :last, :all, :destroy_all, :exists?, :delete, :to => :scoped
<ide> delegate :select, :group, :order, :limit, :joins, :where, :preload, :eager_load, :includes, :from, :lock, :readonly, :having, :to => :scoped
<ide> delegate :count, :average, :minimum, :maximum, :sum, :calculate, :to => :scoped
<ide>
<ide> def update(id, attributes)
<ide> end
<ide> end
<ide>
<del> # Deletes the row with a primary key matching the +id+ argument, using a
<del> # SQL +DELETE+ statement, and returns the number of rows deleted. Active
<del> # Record objects are not instantiated, so the object's callbacks are not
<del> # executed, including any <tt>:dependent</tt> association options or
<del> # Observer methods.
<del> #
<del> # You can delete multiple rows at once by passing an Array of <tt>id</tt>s.
<del> #
<del> # Note: Although it is often much faster than the alternative,
<del> # <tt>#destroy</tt>, skipping callbacks might bypass business logic in
<del> # your application that ensures referential integrity or performs other
<del> # essential jobs.
<del> #
<del> # ==== Examples
<del> #
<del> # # Delete a single row
<del> # Todo.delete(1)
<del> #
<del> # # Delete multiple rows
<del> # Todo.delete([2,3,4])
<del> def delete(id_or_array)
<del> scoped.delete(id_or_array)
<del> end
<del>
<ide> # Destroy an object (or multiple objects) that has the given id, the object is instantiated first,
<ide> # therefore all callbacks and filters are fired off before the object is deleted. This method is
<ide> # less efficient than ActiveRecord#delete but allows cleanup methods and other actions to be run.
<ide><path>activerecord/lib/active_record/relation.rb
<ide> def delete_all
<ide> arel.delete.tap { reset }
<ide> end
<ide>
<add> # Deletes the row with a primary key matching the +id+ argument, using a
<add> # SQL +DELETE+ statement, and returns the number of rows deleted. Active
<add> # Record objects are not instantiated, so the object's callbacks are not
<add> # executed, including any <tt>:dependent</tt> association options or
<add> # Observer methods.
<add> #
<add> # You can delete multiple rows at once by passing an Array of <tt>id</tt>s.
<add> #
<add> # Note: Although it is often much faster than the alternative,
<add> # <tt>#destroy</tt>, skipping callbacks might bypass business logic in
<add> # your application that ensures referential integrity or performs other
<add> # essential jobs.
<add> #
<add> # ==== Examples
<add> #
<add> # # Delete a single row
<add> # Todo.delete(1)
<add> #
<add> # # Delete multiple rows
<add> # Todo.delete([2,3,4])
<ide> def delete(id_or_array)
<ide> where(@klass.primary_key => id_or_array).delete_all
<ide> end | 2 |
Javascript | Javascript | improve explanation of modules | d8e4093b5a75de2aa8d0ffb0aa5b2cdc252b8d2f | <ide><path>src/loader.js
<ide> function setupModuleLoader(window) {
<ide> *
<ide> * # Module
<ide> *
<del> * A module is a collection of services, directives, filters, and configuration information.
<add> * A module is a collection of services, directives, controllers, filters, and configuration information.
<ide> * `angular.module` is used to configure the {@link auto.$injector $injector}.
<ide> *
<ide> * ```js | 1 |
PHP | PHP | add handlerstats method for http client | b57b61c01ea2a5666a5a2242117fa1d3db89ef63 | <ide><path>src/Illuminate/Http/Client/Response.php
<ide> public function effectiveUri()
<ide> return $this->transferStats->getEffectiveUri();
<ide> }
<ide>
<add> /**
<add> * Get the handler stats of the response.
<add> *
<add> * @return array
<add> */
<add> public function handlerStats()
<add> {
<add> return $this->transferStats->getHandlerStats();
<add> }
<add>
<ide> /**
<ide> * Determine if the request was successful.
<ide> * | 1 |
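A hedged usage sketch: `handlerStats()` surfaces Guzzle's `TransferStats::getHandlerStats()`, which for the default cURL handler is an associative array of transfer metrics. The exact keys depend on the handler in use, so the ones below are assumptions.

```php
use Illuminate\Support\Facades\Http;

$response = Http::get('https://example.com');

// Associative array from the underlying handler; with cURL this
// typically includes keys like 'total_time' and 'primary_ip'.
$stats = $response->handlerStats();
$totalTime = $stats['total_time'] ?? null;
```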
Javascript | Javascript | fix typo in error message | 7a4dc0b17eec8a1a103f64845262b4ca9e418a3f | <ide><path>lib/webpack.js
<ide> function webpack(options, callback) {
<ide> }));
<ide> } else if(typeof options === "object") {
<ide> if(!options.entry && !options.plugins) {
<del> throw new Error("Passed 'options' object don't look like a valid webpack configuration");
<add> throw new Error("Passed 'options' object does not look like a valid webpack configuration");
<ide> }
<ide> new WebpackOptionsDefaulter().process(options);
<ide> | 1 |
Python | Python | fix kaiser for m=1 | ced34d27a8eef42e0f963afa4989e2383fc3ca77 | <ide><path>numpy/lib/function_base.py
<ide> def kaiser(M,beta):
<ide>
<ide> """
<ide> from numpy.dual import i0
<add> if M == 1:
<add> return np.array([1.])
<ide> n = arange(0,M)
<ide> alpha = (M-1)/2.0
<ide> return i0(beta * sqrt(1-((n-alpha)/alpha)**2.0))/i0(float(beta))
<ide><path>numpy/lib/tests/test_function_base.py
<ide> def test_simple(self):
<ide> class TestKaiser(TestCase):
<ide> def test_simple(self):
<ide> assert_almost_equal(kaiser(0, 1.0), array([]))
<del> assert isnan(kaiser(1, 1.0))
<add> assert isfinite(kaiser(1, 1.0))
<ide> assert_almost_equal(kaiser(2, 1.0), array([ 0.78984831, 0.78984831]))
<ide> assert_almost_equal(kaiser(5, 1.0),
<ide> array([ 0.78984831, 0.94503323, 1. , | 2 |
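The fix guards the degenerate single-point window: for `M == 1`, `alpha = (M - 1) / 2.0` is zero, so the `(n - alpha) / alpha` term divided by zero and the old code produced NaN. A quick check of the fixed behavior (expected values taken from the updated test):

```python
import numpy as np

print(np.kaiser(1, 1.0))  # array([1.]) -- was NaN before the fix
print(np.kaiser(2, 1.0))  # array([0.78984831, 0.78984831])
```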
Javascript | Javascript | fix e2e tests on jenkins | 5fe823948f3af9d6ef34bb42112d87d4d7ae360f | <ide><path>protractor-jenkins-conf.js
<ide> exports.config = {
<ide>
<ide> browser.addMockModule('disableNgAnimate', disableNgAnimate);
<ide>
<del> require('jasmine-reporters');
<del> jasmine.getEnv().addReporter(
<del> new jasmine.JUnitXmlReporter('test_out/docs-e2e-' + exports.config.capabilities.browserName + '-', true, true));
<add> var reporters = require('jasmine-reporters');
<add> jasmine.getEnv().addReporter(new reporters.JUnitXmlReporter({
<add> savePath: 'test_out/docs-e2e-' + exports.config.capabilities.browserName + '-'
<add> }));
<ide> },
<ide>
<ide> jasmineNodeOpts: { | 1 |
Python | Python | correct a bug on unitary tests | ffcea930793eaf707af4f6a75a545a11fe043943 | <ide><path>glances/unitest.py
<ide> import unittest
<ide> import glances
<ide> import multiprocessing
<add>import time
<ide>
<ide> class TestGlancesStat(unittest.TestCase):
<ide>
<ide> def setUp(self):
<del> self.stats = glances.glancesStats()
<add> self.stats = glances.GlancesStats()
<add> self.stats.update()
<add>
<add> def test_Glances_getSystem(self):
<ide> self.stats.update()
<add> system = self.stats.getSystem()
<add> print "System info: %s" % system
<add> self.assertTrue(len(system) > 1)
<ide>
<ide> def test_Glances_getCore(self):
<del> self.assertEqual(self.stats.getCore(), multiprocessing.cpu_count())
<add> self.stats.update()
<add> core = self.stats.getCore()
<add> print "CPU Core number: %s" % core
<add> self.assertEqual(core, multiprocessing.cpu_count())
<ide>
<ide> def test_Glances_getCpu(self):
<ide> self.stats.update()
<del> self.assertEqual(len(self.stats.getCpu()), 4)
<add> cpu = self.stats.getCpu()
<add> print "CPU stat %s:" % cpu
<add> self.assertTrue(len(cpu) > 1)
<ide>
<ide> def test_Glances_getPerCpu(self):
<ide> self.stats.update()
<del> self.assertEqual(len(self.stats.getPerCpu()), multiprocessing.cpu_count())
<add> percpu = self.stats.getPerCpu()
<add> print "PerCPU stat %s:" % percpu
<add> self.assertEqual(len(percpu), multiprocessing.cpu_count())
<ide>
<ide> def test_Glances_getMem(self):
<ide> self.stats.update()
<del> self.assertTrue(len(self.stats.getMem()) > 2)
<add> mem = self.stats.getMem()
<add> print "Mem stat %s:" % mem
<add> self.assertTrue(len(mem) > 2)
<ide>
<ide> def test_Glances_getMemSwap(self):
<ide> self.stats.update()
<add> memswap = self.stats.getMemSwap()
<add> print "MemSwap stat %s:" % memswap
<ide> self.assertTrue(len(self.stats.getMemSwap()) > 2)
<ide>
<add>
<ide> if __name__ == '__main__':
<ide> unittest.main() | 1 |
PHP | PHP | add deprecation warning | 9af8e608e43803d3ea38c72b972236fe8d648646 | <ide><path>src/Http/Exception/RedirectException.php
<ide> public function __construct(string $target, int $code = 302, array $headers = []
<ide> */
<ide> public function addHeaders(array $headers)
<ide> {
<add> deprecationWarning('RedirectException::addHeaders() is deprecated, use setHeaders() instead.');
<add>
<ide> foreach ($headers as $key => $value) {
<ide> $this->headers[$key][] = $value;
<ide> }
<ide> public function addHeaders(array $headers)
<ide> */
<ide> public function removeHeader(string $key)
<ide> {
<add> deprecationWarning('RedirectException::removeHeader() is deprecated, use setHeaders() instead.');
<add>
<ide> unset($this->headers[$key]);
<ide>
<ide> return $this; | 1 |
Go | Go | set default seccomp profile | 947293a28084cb5ee2e10e4d128c6e2b9d9da89d | <ide><path>daemon/execdriver/native/create.go
<ide> func (d *Driver) createContainer(c *execdriver.Command, hooks execdriver.Hooks)
<ide> if err := d.setCapabilities(container, c); err != nil {
<ide> return nil, err
<ide> }
<add>
<add> if c.SeccompProfile == "" {
<add> container.Seccomp = getDefaultSeccompProfile()
<add> }
<ide> }
<ide> // add CAP_ prefix to all caps for new libcontainer update to match
<ide> // the spec format.
<ide> func (d *Driver) createContainer(c *execdriver.Command, hooks execdriver.Hooks)
<ide> return nil, err
<ide> }
<ide> }
<add>
<ide> if err := execdriver.SetupCgroups(container, c); err != nil {
<ide> return nil, err
<ide> }
<ide><path>daemon/execdriver/native/seccomp.go
<ide> import (
<ide> "github.com/opencontainers/specs"
<ide> )
<ide>
<add>func getDefaultSeccompProfile() *configs.Seccomp {
<add> return defaultSeccompProfile
<add>}
<add>
<ide> func loadSeccompProfile(path string) (*configs.Seccomp, error) {
<ide> f, err := ioutil.ReadFile(path)
<ide> if err != nil {
<ide><path>daemon/execdriver/native/seccomp_default.go
<add>// +build linux
<add>
<add>package native
<add>
<add>import "github.com/opencontainers/runc/libcontainer/configs"
<add>
<add>var defaultSeccompProfile = &configs.Seccomp{
<add> DefaultAction: configs.Allow,
<add> Syscalls: []*configs.Syscall{
<add> {
<add> // Quota and Accounting syscalls which could let containers
<add> // disable their own resource limits or process accounting
<add> Name: "acct",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Prevent containers from using the kernel keyring,
<add> // which is not namespaced
<add> Name: "add_key",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Similar to clock_settime and settimeofday
<add> // Time/Date is not namespaced
<add> Name: "adjtimex",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Time/Date is not namespaced
<add> Name: "clock_settime",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny cloning new namespaces
<add> Name: "clone",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{
<add> {
<add> // flags from sched.h
<add> // CLONE_NEWUTS 0x04000000
<add> // CLONE_NEWIPC 0x08000000
<add> // CLONE_NEWUSER 0x10000000
<add> // CLONE_NEWPID 0x20000000
<add> // CLONE_NEWNET 0x40000000
<add> Index: 0,
<add> Value: uint64(0x04000000),
<add> Op: configs.GreaterThanOrEqualTo,
<add> },
<add> {
<add> // flags from sched.h
<add> // CLONE_NEWNS 0x00020000
<add> Index: 0,
<add> Value: uint64(0x00020000),
<add> Op: configs.EqualTo,
<add> },
<add> },
<add> },
<add> {
<add> // Deny manipulation and functions on kernel modules.
<add> Name: "create_module",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny manipulation and functions on kernel modules.
<add> Name: "delete_module",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny retrieval of exported kernel and module symbols
<add> Name: "get_kernel_syms",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Terrifying syscalls that modify kernel memory and NUMA settings.
<add> // They're gated by CAP_SYS_NICE,
<add> // which we do not retain by default in containers.
<add> Name: "get_mempolicy",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny getting the list of robust futexes
<add> Name: "get_robust_list",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny manipulation and functions on kernel modules.
<add> Name: "init_module",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Prevent containers from modifying kernel I/O privilege levels.
<add> // Already restricted as containers drop CAP_SYS_RAWIO by default.
<add> Name: "ioperm",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Prevent containers from modifying kernel I/O privilege levels.
<add> // Already restricted as containers drop CAP_SYS_RAWIO by default.
<add> Name: "iopl",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Sister syscall of kexec_load that does the same thing,
<add> // slightly different arguments
<add> Name: "kexec_file_load",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny loading a new kernel for later execution
<add> Name: "kexec_load",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Prevent containers from using the kernel keyring,
<add> // which is not namespaced
<add> Name: "keyctl",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Tracing/profiling syscalls,
<add> // which could leak a lot of information on the host
<add> Name: "lookup_dcookie",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Terrifying syscalls that modify kernel memory and NUMA settings.
<add> // They're gated by CAP_SYS_NICE,
<add> // which we do not retain by default in containers.
<add> Name: "mbind",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Terrifying syscalls that modify kernel memory and NUMA settings.
<add> // They're gated by CAP_SYS_NICE,
<add> // which we do not retain by default in containers.
<add> Name: "migrate_pages",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Old syscall only used in 16-bit code,
<add> // and a potential information leak
<add> Name: "modify_ldt",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny mount
<add> Name: "mount",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Terrifying syscalls that modify kernel memory and NUMA settings.
<add> // They're gated by CAP_SYS_NICE,
<add> // which we do not retain by default in containers.
<add> Name: "move_pages",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny interaction with the kernel nfs daemon
<add> Name: "nfsservctl",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Cause of an old container breakout,
<add> // might as well restrict it to be on the safe side
<add> Name: "open_by_handle_at",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Tracing/profiling syscalls,
<add> // which could leak a lot of information on the host
<add> Name: "perf_event_open",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Prevent container from enabling BSD emulation.
<add> // Not inherently dangerous, but poorly tested,
<add> // potential for a lot of kernel vulns in this.
<add> Name: "personality",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny pivot_root
<add> Name: "pivot_root",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Already blocked by dropping CAP_PTRACE
<add> Name: "ptrace",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny manipulation and functions on kernel modules.
<add> Name: "query_module",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Quota and Accounting syscalls which could let containers
<add> // disable their own resource limits or process accounting
<add> Name: "quotactl",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Probably a bad idea to let containers reboot the host
<add> Name: "reboot",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Probably a bad idea to let containers restart
<add> Name: "restart_syscall",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Prevent containers from using the kernel keyring,
<add> // which is not namespaced
<add> Name: "request_key",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // meta, deny seccomp
<add> Name: "seccomp",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Terrifying syscalls that modify kernel memory and NUMA settings.
<add> // They're gated by CAP_SYS_NICE,
<add> // which we do not retain by default in containers.
<add> Name: "set_mempolicy",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // deny associating a thread with a namespace
<add> Name: "setns",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny setting the list of robust futexes
<add> Name: "set_robust_list",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Time/Date is not namespaced
<add> Name: "settimeofday",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny start/stop swapping to file/device
<add> Name: "swapon",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny start/stop swapping to file/device
<add> Name: "swapoff",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny read/write system parameters
<add> Name: "_sysctl",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Deny umount
<add> Name: "umount2",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Same as clone
<add> Name: "unshare",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> {
<add> // Older syscall related to shared libraries, unused for a long time
<add> Name: "uselib",
<add> Action: configs.Errno,
<add> Args: []*configs.Arg{},
<add> },
<add> },
<add>}
<ide><path>integration-cli/docker_cli_run_test.go
<ide> func (s *DockerSuite) TestRunUnshareProc(c *check.C) {
<ide> testRequires(c, Apparmor, DaemonIsLinux, NotUserNamespace)
<ide>
<ide> name := "acidburn"
<del> if out, _, err := dockerCmdWithError("run", "--name", name, "jess/unshare", "unshare", "-p", "-m", "-f", "-r", "--mount-proc=/proc", "mount"); err == nil || !strings.Contains(out, "Permission denied") {
<add> out, _, err := dockerCmdWithError("run", "--name", name, "jess/unshare", "unshare", "-p", "-m", "-f", "-r", "--mount-proc=/proc", "mount")
<add> if err == nil ||
<add> !(strings.Contains(strings.ToLower(out), "permission denied") ||
<add> strings.Contains(strings.ToLower(out), "operation not permitted")) {
<ide> c.Fatalf("unshare with --mount-proc should have failed with permission denied, got: %s, %v", out, err)
<ide> }
<ide>
<ide> name = "cereal"
<del> if out, _, err := dockerCmdWithError("run", "--name", name, "jess/unshare", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc"); err == nil || !strings.Contains(out, "Permission denied") {
<add> out, _, err = dockerCmdWithError("run", "--name", name, "jess/unshare", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc")
<add> if err == nil ||
<add> !(strings.Contains(strings.ToLower(out), "permission denied") ||
<add> strings.Contains(strings.ToLower(out), "operation not permitted")) {
<ide> c.Fatalf("unshare and mount of /proc should have failed with permission denied, got: %s, %v", out, err)
<ide> }
<ide>
<ide> /* Ensure still fails if running privileged with the default policy */
<ide> name = "crashoverride"
<del> if out, _, err := dockerCmdWithError("run", "--privileged", "--security-opt", "apparmor:docker-default", "--name", name, "jess/unshare", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc"); err == nil || !(strings.Contains(strings.ToLower(out), "permission denied") || strings.Contains(strings.ToLower(out), "operation not permitted")) {
<add> out, _, err = dockerCmdWithError("run", "--privileged", "--security-opt", "apparmor:docker-default", "--name", name, "jess/unshare", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc")
<add> if err == nil || !(strings.Contains(strings.ToLower(out), "permission denied") || strings.Contains(strings.ToLower(out), "operation not permitted")) {
<ide> c.Fatalf("privileged unshare with apparmor should have failed with permission denied, got: %s, %v", out, err)
<ide> }
<ide> }
<ide><path>integration-cli/docker_cli_run_unix_test.go
<ide> func (s *DockerSuite) TestRunSeccompProfileDenyChmod(c *check.C) {
<ide> c.Fatalf("expected chmod with seccomp profile denied to fail, got %s", out)
<ide> }
<ide> }
<add>
<add>// TestRunSeccompProfileDenyUserns checks that 'docker run jess/unshare unshare --map-root-user --user sh -c whoami' exits with operation not permitted.
<add>func (s *DockerSuite) TestRunSeccompProfileDenyUserns(c *check.C) {
<add> testRequires(c, SameHostDaemon, seccompEnabled)
<add> // from sched.h
<add> jsonData := fmt.Sprintf(`{
<add> "defaultAction": "SCMP_ACT_ALLOW",
<add> "syscalls": [
<add> {
<add> "name": "unshare",
<add> "action": "SCMP_ACT_ERRNO",
<add> "args": [
<add> {
<add> "index": 0,
<add> "value": %d,
<add> "op": "SCMP_CMP_EQ"
<add> }
<add> ]
<add> }
<add> ]
<add>}`, uint64(0x10000000))
<add> tmpFile, err := ioutil.TempFile("", "profile.json")
<add> defer tmpFile.Close()
<add> if err != nil {
<add> c.Fatal(err)
<add> }
<add>
<add> if _, err := tmpFile.Write([]byte(jsonData)); err != nil {
<add> c.Fatal(err)
<add> }
<add> runCmd := exec.Command(dockerBinary, "run", "--security-opt", "seccomp:"+tmpFile.Name(), "jess/unshare", "unshare", "--map-root-user", "--user", "sh", "-c", "whoami")
<add> out, _, _ := runCommandWithOutput(runCmd)
<add> if !strings.Contains(out, "Operation not permitted") {
<add> c.Fatalf("expected unshare userns with seccomp profile denied to fail, got %s", out)
<add> }
<add>} | 5 |
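The default profile above applies only when no user profile is supplied (`c.SeccompProfile == ""`). A user-supplied profile uses the same JSON shape the new test writes out; the syscall chosen below is illustrative (JSON permits no comments, so all hedging lives here):

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "name": "chmod",
      "action": "SCMP_ACT_ERRNO",
      "args": []
    }
  ]
}
```

It would be passed as `docker run --security-opt seccomp:/path/to/profile.json ...`, mirroring the `"seccomp:"+tmpFile.Name()` usage in the test.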
Ruby | Ruby | use long shfmt arguments | bf34f2106554de49a8d64a0fc17edfb045c29f06 | <ide><path>Library/Homebrew/style.rb
<ide> def run_shfmt(files, fix: false)
<ide> files.delete(HOMEBREW_REPOSITORY/"completions/bash/brew")
<ide> files.delete(HOMEBREW_REPOSITORY/"Dockerfile")
<ide>
<del> # shfmt options:
<del> # -i 2 : indent by 2 spaces
<del> # -ci : indent switch cases
<del> # -ln bash : language variant to parse ("bash")
<del> # -w : write result to file instead of stdout (inplace fixing)
<del> # "--" is needed for `utils/shfmt.sh`
<del> args = ["-i", "2", "-ci", "-ln", "bash", "--", *files]
<del>
<del> # Do inplace fixing
<del> args.unshift("-w") if fix # need to add before "--"
<add> args = ["--language-dialect", "bash", "--indent", "2", "--case-indent", "--", *files]
<add> args.unshift("--write") if fix # need to add before "--"
<ide>
<ide> system shfmt, *args
<ide> $CHILD_STATUS.success? | 1 |
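For reference, the rewritten argument list corresponds to this invocation; the long options map one-to-one onto the short flags the code previously used (`script.sh` is a placeholder file name):

```sh
# -ln bash -> --language-dialect bash
# -i 2     -> --indent 2
# -ci      -> --case-indent
# -w       -> --write (in-place fixing, prepended before "--")
shfmt --language-dialect bash --indent 2 --case-indent --write -- script.sh
```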
Python | Python | allow reversed iteration over sorteddict | c3fabb282d429de931ef10c91cee55700578eb86 | <ide><path>django/utils/datastructures.py
<ide> def __delitem__(self, key):
<ide> def __iter__(self):
<ide> return iter(self.keyOrder)
<ide>
<add> def __reversed__(self):
<add> return reversed(self.keyOrder)
<add>
<ide> def pop(self, k, *args):
<ide> result = super(SortedDict, self).pop(k, *args)
<ide> try: | 1 |
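A small sketch of what the new hook enables, in the Python 2 style of this era's Django:

```python
from django.utils.datastructures import SortedDict

sd = SortedDict()
sd['a'] = 1
sd['b'] = 2
sd['c'] = 3

list(sd)            # ['a', 'b', 'c']  -- insertion order, via keyOrder
list(reversed(sd))  # ['c', 'b', 'a']  -- now delegates to __reversed__
```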
PHP | PHP | fix bug in validate | 9f5050bab13d40f62a2c1db9c98c659bf1bd25c7 | <ide><path>src/Illuminate/Auth/TokenGuard.php
<ide> public function id()
<ide> */
<ide> public function validate(array $credentials = [])
<ide> {
<del> if (! is_null($token)) {
<del> $credentials = [$this->storageKey => $credentials[$this->inputKey]];
<add> $credentials = [$this->storageKey => $credentials[$this->inputKey]];
<ide>
<del> if ($this->provider->retrieveByCredentials($credentials)) {
<del> return true;
<del> }
<add> if ($this->provider->retrieveByCredentials($credentials)) {
<add> return true;
<ide> }
<ide>
<ide> return false; | 1 |
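Before this change, `validate()` referenced an undefined `$token` variable, so the credential check could never run. A hedged usage sketch, assuming the guard's default `api_token` input/storage keys (`$guard` and `$request` stand in for objects supplied by the framework):

```php
// e.g. inside a custom authentication check:
$credentials = ['api_token' => $request->input('api_token')];

if ($guard->validate($credentials)) {
    // a user record with this token exists
}
```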
Javascript | Javascript | improve quaternion closure performance | 563060ae23cc8602b84588e89a49a5c17428fc6c | <ide><path>src/math/Quaternion.js
<ide> Object.assign( Quaternion.prototype, {
<ide>
<ide> // assumes direction vectors vFrom and vTo are normalized
<ide>
<del> var v1, r;
<add> var v1 = new Vector3();
<add> var r;
<ide>
<ide> var EPS = 0.000001;
<ide> | 1 |
Python | Python | require model for test_is_properties | a42fbcf946864ecebcf82b426f5a9e077305ccab | <ide><path>spacy/tests/tokens/test_token_api.py
<ide> def test_str_builtin(EN):
<ide> assert str(tokens[1]) == u'two'
<ide>
<ide>
<add>@pytest.mark.models
<ide> def test_is_properties(EN):
<ide> Hi, comma, my, email, is_, addr = EN(u'Hi, my email is [email protected]')
<ide> assert Hi.is_title | 1 |
Javascript | Javascript | move internal functions to bottom of vector file | ea6fa3596637c9ea9a16c987fd4a2ff69def092f | <ide><path>dist/Immutable.dev.js
<ide> VectorPrototype.cursor = Map.prototype.cursor;
<ide> VectorPrototype.withMutations = Map.prototype.withMutations;
<ide> VectorPrototype.asMutable = Map.prototype.asMutable;
<ide> VectorPrototype.asImmutable = Map.prototype.asImmutable;
<del>function makeVector(origin, size, level, root, tail, ownerID) {
<del> var vect = Object.create(VectorPrototype);
<del> vect.length = size - origin;
<del> vect._origin = origin;
<del> vect._size = size;
<del> vect._level = level;
<del> vect._root = root;
<del> vect._tail = tail;
<del> vect.__ownerID = ownerID;
<del> return vect;
<del>}
<del>function vectorNodeFor(vector, rawIndex) {
<del> if (rawIndex >= getTailOffset(vector._size)) {
<del> return vector._tail;
<del> }
<del> if (rawIndex < 1 << (vector._level + SHIFT)) {
<del> var node = vector._root;
<del> var level = vector._level;
<del> while (node && level > 0) {
<del> node = node.array[(rawIndex >>> level) & MASK];
<del> level -= SHIFT;
<del> }
<del> return node;
<del> }
<del>}
<del>function setVectorBounds(vector, begin, end) {
<del> var owner = vector.__ownerID || new OwnerID();
<del> var oldOrigin = vector._origin;
<del> var oldSize = vector._size;
<del> var newOrigin = oldOrigin + begin;
<del> var newSize = end == null ? oldSize : end < 0 ? oldSize + end : oldOrigin + end;
<del> if (newOrigin === oldOrigin && newSize === oldSize) {
<del> return vector;
<del> }
<del> if (newOrigin >= newSize) {
<del> return vector.clear();
<del> }
<del> var newLevel = vector._level;
<del> var newRoot = vector._root;
<del> var offsetShift = 0;
<del> while (newOrigin + offsetShift < 0) {
<del> newRoot = new VNode(newRoot.array.length ? [, newRoot] : [], owner);
<del> newLevel += SHIFT;
<del> offsetShift += 1 << newLevel;
<del> }
<del> if (offsetShift) {
<del> newOrigin += offsetShift;
<del> oldOrigin += offsetShift;
<del> newSize += offsetShift;
<del> oldSize += offsetShift;
<del> }
<del> var oldTailOffset = getTailOffset(oldSize);
<del> var newTailOffset = getTailOffset(newSize);
<del> while (newTailOffset >= 1 << (newLevel + SHIFT)) {
<del> newRoot = new VNode(newRoot.array.length ? [newRoot] : [], owner);
<del> newLevel += SHIFT;
<del> }
<del> var oldTail = vector._tail;
<del> var newTail = newTailOffset < oldTailOffset ? vectorNodeFor(vector, newSize - 1) : newTailOffset > oldTailOffset ? new VNode([], owner) : oldTail;
<del> if (newTailOffset > oldTailOffset && newOrigin < oldSize && oldTail.array.length) {
<del> newRoot = newRoot.ensureOwner(owner);
<del> var node = newRoot;
<del> for (var level = newLevel; level > SHIFT; level -= SHIFT) {
<del> var idx = (oldTailOffset >>> level) & MASK;
<del> node = node.array[idx] = node.array[idx] ? node.array[idx].ensureOwner(owner) : new VNode([], owner);
<del> }
<del> node.array[(oldTailOffset >>> SHIFT) & MASK] = oldTail;
<del> }
<del> if (newSize < oldSize) {
<del> newTail = newTail.removeAfter(owner, 0, newSize);
<del> }
<del> if (newOrigin >= newTailOffset) {
<del> newOrigin -= newTailOffset;
<del> newSize -= newTailOffset;
<del> newLevel = SHIFT;
<del> newRoot = EMPTY_VNODE;
<del> newTail = newTail.removeBefore(owner, 0, newOrigin);
<del> } else if (newOrigin > oldOrigin || newTailOffset < oldTailOffset) {
<del> var beginIndex,
<del> endIndex;
<del> offsetShift = 0;
<del> do {
<del> beginIndex = ((newOrigin) >>> newLevel) & MASK;
<del> endIndex = ((newTailOffset - 1) >>> newLevel) & MASK;
<del> if (beginIndex === endIndex) {
<del> if (beginIndex) {
<del> offsetShift += (1 << newLevel) * beginIndex;
<del> }
<del> newLevel -= SHIFT;
<del> newRoot = newRoot && newRoot.array[beginIndex];
<del> }
<del> } while (newRoot && beginIndex === endIndex);
<del> if (newRoot && newOrigin > oldOrigin) {
<del> newRoot = newRoot.removeBefore(owner, newLevel, newOrigin - offsetShift);
<del> }
<del> if (newRoot && newTailOffset < oldTailOffset) {
<del> newRoot = newRoot.removeAfter(owner, newLevel, newTailOffset - offsetShift);
<del> }
<del> if (offsetShift) {
<del> newOrigin -= offsetShift;
<del> newSize -= offsetShift;
<del> }
<del> newRoot = newRoot || EMPTY_VNODE;
<del> }
<del> if (vector.__ownerID) {
<del> vector.length = newSize - newOrigin;
<del> vector._origin = newOrigin;
<del> vector._size = newSize;
<del> vector._level = newLevel;
<del> vector._root = newRoot;
<del> vector._tail = newTail;
<del> return vector;
<del> }
<del> return makeVector(newOrigin, newSize, newLevel, newRoot, newTail);
<del>}
<ide> var VNode = function VNode(array, ownerID) {
<ide> this.array = array;
<ide> this.ownerID = ownerID;
<ide> var VectorIterator = function VectorIterator(vector, origin, size, level, root,
<ide> }
<ide> return {done: true};
<ide> }}, {});
<add>function makeVector(origin, size, level, root, tail, ownerID) {
<add> var vect = Object.create(VectorPrototype);
<add> vect.length = size - origin;
<add> vect._origin = origin;
<add> vect._size = size;
<add> vect._level = level;
<add> vect._root = root;
<add> vect._tail = tail;
<add> vect.__ownerID = ownerID;
<add> return vect;
<add>}
<add>function vectorNodeFor(vector, rawIndex) {
<add> if (rawIndex >= getTailOffset(vector._size)) {
<add> return vector._tail;
<add> }
<add> if (rawIndex < 1 << (vector._level + SHIFT)) {
<add> var node = vector._root;
<add> var level = vector._level;
<add> while (node && level > 0) {
<add> node = node.array[(rawIndex >>> level) & MASK];
<add> level -= SHIFT;
<add> }
<add> return node;
<add> }
<add>}
<add>function setVectorBounds(vector, begin, end) {
<add> var owner = vector.__ownerID || new OwnerID();
<add> var oldOrigin = vector._origin;
<add> var oldSize = vector._size;
<add> var newOrigin = oldOrigin + begin;
<add> var newSize = end == null ? oldSize : end < 0 ? oldSize + end : oldOrigin + end;
<add> if (newOrigin === oldOrigin && newSize === oldSize) {
<add> return vector;
<add> }
<add> if (newOrigin >= newSize) {
<add> return vector.clear();
<add> }
<add> var newLevel = vector._level;
<add> var newRoot = vector._root;
<add> var offsetShift = 0;
<add> while (newOrigin + offsetShift < 0) {
<add> newRoot = new VNode(newRoot.array.length ? [, newRoot] : [], owner);
<add> newLevel += SHIFT;
<add> offsetShift += 1 << newLevel;
<add> }
<add> if (offsetShift) {
<add> newOrigin += offsetShift;
<add> oldOrigin += offsetShift;
<add> newSize += offsetShift;
<add> oldSize += offsetShift;
<add> }
<add> var oldTailOffset = getTailOffset(oldSize);
<add> var newTailOffset = getTailOffset(newSize);
<add> while (newTailOffset >= 1 << (newLevel + SHIFT)) {
<add> newRoot = new VNode(newRoot.array.length ? [newRoot] : [], owner);
<add> newLevel += SHIFT;
<add> }
<add> var oldTail = vector._tail;
<add> var newTail = newTailOffset < oldTailOffset ? vectorNodeFor(vector, newSize - 1) : newTailOffset > oldTailOffset ? new VNode([], owner) : oldTail;
<add> if (newTailOffset > oldTailOffset && newOrigin < oldSize && oldTail.array.length) {
<add> newRoot = newRoot.ensureOwner(owner);
<add> var node = newRoot;
<add> for (var level = newLevel; level > SHIFT; level -= SHIFT) {
<add> var idx = (oldTailOffset >>> level) & MASK;
<add> node = node.array[idx] = node.array[idx] ? node.array[idx].ensureOwner(owner) : new VNode([], owner);
<add> }
<add> node.array[(oldTailOffset >>> SHIFT) & MASK] = oldTail;
<add> }
<add> if (newSize < oldSize) {
<add> newTail = newTail.removeAfter(owner, 0, newSize);
<add> }
<add> if (newOrigin >= newTailOffset) {
<add> newOrigin -= newTailOffset;
<add> newSize -= newTailOffset;
<add> newLevel = SHIFT;
<add> newRoot = EMPTY_VNODE;
<add> newTail = newTail.removeBefore(owner, 0, newOrigin);
<add> } else if (newOrigin > oldOrigin || newTailOffset < oldTailOffset) {
<add> var beginIndex,
<add> endIndex;
<add> offsetShift = 0;
<add> do {
<add> beginIndex = ((newOrigin) >>> newLevel) & MASK;
<add> endIndex = ((newTailOffset - 1) >>> newLevel) & MASK;
<add> if (beginIndex === endIndex) {
<add> if (beginIndex) {
<add> offsetShift += (1 << newLevel) * beginIndex;
<add> }
<add> newLevel -= SHIFT;
<add> newRoot = newRoot && newRoot.array[beginIndex];
<add> }
<add> } while (newRoot && beginIndex === endIndex);
<add> if (newRoot && newOrigin > oldOrigin) {
<add> newRoot = newRoot.removeBefore(owner, newLevel, newOrigin - offsetShift);
<add> }
<add> if (newRoot && newTailOffset < oldTailOffset) {
<add> newRoot = newRoot.removeAfter(owner, newLevel, newTailOffset - offsetShift);
<add> }
<add> if (offsetShift) {
<add> newOrigin -= offsetShift;
<add> newSize -= offsetShift;
<add> }
<add> newRoot = newRoot || EMPTY_VNODE;
<add> }
<add> if (vector.__ownerID) {
<add> vector.length = newSize - newOrigin;
<add> vector._origin = newOrigin;
<add> vector._size = newSize;
<add> vector._level = newLevel;
<add> vector._root = newRoot;
<add> vector._tail = newTail;
<add> return vector;
<add> }
<add> return makeVector(newOrigin, newSize, newLevel, newRoot, newTail);
<add>}
<ide> function mergeIntoVectorWith(vector, merger, iterables) {
<ide> var seqs = [];
<ide> for (var ii = 0; ii < iterables.length; ii++) {
<ide><path>src/Vector.js
<ide> VectorPrototype.withMutations = Map.prototype.withMutations;
<ide> VectorPrototype.asMutable = Map.prototype.asMutable;
<ide> VectorPrototype.asImmutable = Map.prototype.asImmutable;
<ide>
<del>function makeVector(origin, size, level, root, tail, ownerID) {
<del> var vect = Object.create(VectorPrototype);
<del> vect.length = size - origin;
<del> vect._origin = origin;
<del> vect._size = size;
<del> vect._level = level;
<del> vect._root = root;
<del> vect._tail = tail;
<del> vect.__ownerID = ownerID;
<del> return vect;
<del>}
<del>
<del>function vectorNodeFor(vector, rawIndex) {
<del> if (rawIndex >= getTailOffset(vector._size)) {
<del> return vector._tail;
<del> }
<del> if (rawIndex < 1 << (vector._level + SHIFT)) {
<del> var node = vector._root;
<del> var level = vector._level;
<del> while (node && level > 0) {
<del> node = node.array[(rawIndex >>> level) & MASK];
<del> level -= SHIFT;
<del> }
<del> return node;
<del> }
<del>}
<del>
<del>function setVectorBounds(vector, begin, end) {
<del> var owner = vector.__ownerID || new OwnerID();
<del> var oldOrigin = vector._origin;
<del> var oldSize = vector._size;
<del> var newOrigin = oldOrigin + begin;
<del> var newSize = end == null ? oldSize : end < 0 ? oldSize + end : oldOrigin + end;
<del> if (newOrigin === oldOrigin && newSize === oldSize) {
<del> return vector;
<del> }
<del>
<del> // If it's going to end after it starts, it's empty.
<del> if (newOrigin >= newSize) {
<del> return vector.clear();
<del> }
<del>
<del> var newLevel = vector._level;
<del> var newRoot = vector._root;
<del>
<del> // New origin might require creating a higher root.
<del> var offsetShift = 0;
<del> while (newOrigin + offsetShift < 0) {
<del> // TODO: why only ever shifting over by 1?
<del> newRoot = new VNode(newRoot.array.length ? [,newRoot] : [], owner);
<del> newLevel += SHIFT;
<del> offsetShift += 1 << newLevel;
<del> }
<del> if (offsetShift) {
<del> newOrigin += offsetShift;
<del> oldOrigin += offsetShift;
<del> newSize += offsetShift;
<del> oldSize += offsetShift;
<del> }
<del>
<del> var oldTailOffset = getTailOffset(oldSize);
<del> var newTailOffset = getTailOffset(newSize);
<del>
<del> // New size might require creating a higher root.
<del> while (newTailOffset >= 1 << (newLevel + SHIFT)) {
<del> newRoot = new VNode(newRoot.array.length ? [newRoot] : [], owner);
<del> newLevel += SHIFT;
<del> }
<del>
<del> // Locate or create the new tail.
<del> var oldTail = vector._tail;
<del> var newTail = newTailOffset < oldTailOffset ?
<del> vectorNodeFor(vector, newSize - 1) :
<del> newTailOffset > oldTailOffset ? new VNode([], owner) : oldTail;
<del>
<del> // Merge Tail into tree.
<del> if (newTailOffset > oldTailOffset && newOrigin < oldSize && oldTail.array.length) {
<del> newRoot = newRoot.ensureOwner(owner);
<del> var node = newRoot;
<del> for (var level = newLevel; level > SHIFT; level -= SHIFT) {
<del> var idx = (oldTailOffset >>> level) & MASK;
<del> node = node.array[idx] = node.array[idx] ? node.array[idx].ensureOwner(owner) : new VNode([], owner);
<del> }
<del> node.array[(oldTailOffset >>> SHIFT) & MASK] = oldTail;
<del> }
<del>
<del> // If the size has been reduced, there's a chance the tail needs to be trimmed.
<del> if (newSize < oldSize) {
<del> newTail = newTail.removeAfter(owner, 0, newSize);
<del> }
<del>
<del> // If the new origin is within the tail, then we do not need a root.
<del> if (newOrigin >= newTailOffset) {
<del> newOrigin -= newTailOffset;
<del> newSize -= newTailOffset;
<del> newLevel = SHIFT;
<del> newRoot = EMPTY_VNODE;
<del> newTail = newTail.removeBefore(owner, 0, newOrigin);
<del>
<del> // Otherwise, if the root has been trimmed, garbage collect.
<del> } else if (newOrigin > oldOrigin || newTailOffset < oldTailOffset) {
<del> var beginIndex, endIndex;
<del> offsetShift = 0;
<del>
<del> // Identify the new top root node of the subtree of the old root.
<del> do {
<del> beginIndex = ((newOrigin) >>> newLevel) & MASK;
<del> endIndex = ((newTailOffset - 1) >>> newLevel) & MASK;
<del> if (beginIndex === endIndex) {
<del> if (beginIndex) {
<del> offsetShift += (1 << newLevel) * beginIndex;
<del> }
<del> newLevel -= SHIFT;
<del> newRoot = newRoot && newRoot.array[beginIndex];
<del> }
<del> } while (newRoot && beginIndex === endIndex);
<del>
<del> // Trim the new sides of the new root.
<del> if (newRoot && newOrigin > oldOrigin) {
<del> newRoot = newRoot.removeBefore(owner, newLevel, newOrigin - offsetShift);
<del> }
<del> if (newRoot && newTailOffset < oldTailOffset) {
<del> newRoot = newRoot.removeAfter(owner, newLevel, newTailOffset - offsetShift);
<del> }
<del> if (offsetShift) {
<del> newOrigin -= offsetShift;
<del> newSize -= offsetShift;
<del> }
<del> // Ensure root is not null.
<del> newRoot = newRoot || EMPTY_VNODE;
<del> }
<del>
<del> if (vector.__ownerID) {
<del> vector.length = newSize - newOrigin;
<del> vector._origin = newOrigin;
<del> vector._size = newSize;
<del> vector._level = newLevel;
<del> vector._root = newRoot;
<del> vector._tail = newTail;
<del> return vector;
<del> }
<del> return makeVector(newOrigin, newSize, newLevel, newRoot, newTail);
<del>}
<ide>
<ide> class VNode {
<ide> constructor(array, ownerID) {
<ide> class VectorIterator {
<ide> }
<ide> }
<ide>
<add>
<add>function makeVector(origin, size, level, root, tail, ownerID) {
<add> var vect = Object.create(VectorPrototype);
<add> vect.length = size - origin;
<add> vect._origin = origin;
<add> vect._size = size;
<add> vect._level = level;
<add> vect._root = root;
<add> vect._tail = tail;
<add> vect.__ownerID = ownerID;
<add> return vect;
<add>}
<add>
<add>function vectorNodeFor(vector, rawIndex) {
<add> if (rawIndex >= getTailOffset(vector._size)) {
<add> return vector._tail;
<add> }
<add> if (rawIndex < 1 << (vector._level + SHIFT)) {
<add> var node = vector._root;
<add> var level = vector._level;
<add> while (node && level > 0) {
<add> node = node.array[(rawIndex >>> level) & MASK];
<add> level -= SHIFT;
<add> }
<add> return node;
<add> }
<add>}
<add>
<add>function setVectorBounds(vector, begin, end) {
<add> var owner = vector.__ownerID || new OwnerID();
<add> var oldOrigin = vector._origin;
<add> var oldSize = vector._size;
<add> var newOrigin = oldOrigin + begin;
<add> var newSize = end == null ? oldSize : end < 0 ? oldSize + end : oldOrigin + end;
<add> if (newOrigin === oldOrigin && newSize === oldSize) {
<add> return vector;
<add> }
<add>
<add> // If it's going to end after it starts, it's empty.
<add> if (newOrigin >= newSize) {
<add> return vector.clear();
<add> }
<add>
<add> var newLevel = vector._level;
<add> var newRoot = vector._root;
<add>
<add> // New origin might require creating a higher root.
<add> var offsetShift = 0;
<add> while (newOrigin + offsetShift < 0) {
<add> // TODO: why only ever shifting over by 1?
<add> newRoot = new VNode(newRoot.array.length ? [,newRoot] : [], owner);
<add> newLevel += SHIFT;
<add> offsetShift += 1 << newLevel;
<add> }
<add> if (offsetShift) {
<add> newOrigin += offsetShift;
<add> oldOrigin += offsetShift;
<add> newSize += offsetShift;
<add> oldSize += offsetShift;
<add> }
<add>
<add> var oldTailOffset = getTailOffset(oldSize);
<add> var newTailOffset = getTailOffset(newSize);
<add>
<add> // New size might require creating a higher root.
<add> while (newTailOffset >= 1 << (newLevel + SHIFT)) {
<add> newRoot = new VNode(newRoot.array.length ? [newRoot] : [], owner);
<add> newLevel += SHIFT;
<add> }
<add>
<add> // Locate or create the new tail.
<add> var oldTail = vector._tail;
<add> var newTail = newTailOffset < oldTailOffset ?
<add> vectorNodeFor(vector, newSize - 1) :
<add> newTailOffset > oldTailOffset ? new VNode([], owner) : oldTail;
<add>
<add> // Merge Tail into tree.
<add> if (newTailOffset > oldTailOffset && newOrigin < oldSize && oldTail.array.length) {
<add> newRoot = newRoot.ensureOwner(owner);
<add> var node = newRoot;
<add> for (var level = newLevel; level > SHIFT; level -= SHIFT) {
<add> var idx = (oldTailOffset >>> level) & MASK;
<add> node = node.array[idx] = node.array[idx] ? node.array[idx].ensureOwner(owner) : new VNode([], owner);
<add> }
<add> node.array[(oldTailOffset >>> SHIFT) & MASK] = oldTail;
<add> }
<add>
<add> // If the size has been reduced, there's a chance the tail needs to be trimmed.
<add> if (newSize < oldSize) {
<add> newTail = newTail.removeAfter(owner, 0, newSize);
<add> }
<add>
<add> // If the new origin is within the tail, then we do not need a root.
<add> if (newOrigin >= newTailOffset) {
<add> newOrigin -= newTailOffset;
<add> newSize -= newTailOffset;
<add> newLevel = SHIFT;
<add> newRoot = EMPTY_VNODE;
<add> newTail = newTail.removeBefore(owner, 0, newOrigin);
<add>
<add> // Otherwise, if the root has been trimmed, garbage collect.
<add> } else if (newOrigin > oldOrigin || newTailOffset < oldTailOffset) {
<add> var beginIndex, endIndex;
<add> offsetShift = 0;
<add>
<add> // Identify the new top root node of the subtree of the old root.
<add> do {
<add> beginIndex = ((newOrigin) >>> newLevel) & MASK;
<add> endIndex = ((newTailOffset - 1) >>> newLevel) & MASK;
<add> if (beginIndex === endIndex) {
<add> if (beginIndex) {
<add> offsetShift += (1 << newLevel) * beginIndex;
<add> }
<add> newLevel -= SHIFT;
<add> newRoot = newRoot && newRoot.array[beginIndex];
<add> }
<add> } while (newRoot && beginIndex === endIndex);
<add>
<add> // Trim the new sides of the new root.
<add> if (newRoot && newOrigin > oldOrigin) {
<add> newRoot = newRoot.removeBefore(owner, newLevel, newOrigin - offsetShift);
<add> }
<add> if (newRoot && newTailOffset < oldTailOffset) {
<add> newRoot = newRoot.removeAfter(owner, newLevel, newTailOffset - offsetShift);
<add> }
<add> if (offsetShift) {
<add> newOrigin -= offsetShift;
<add> newSize -= offsetShift;
<add> }
<add> // Ensure root is not null.
<add> newRoot = newRoot || EMPTY_VNODE;
<add> }
<add>
<add> if (vector.__ownerID) {
<add> vector.length = newSize - newOrigin;
<add> vector._origin = newOrigin;
<add> vector._size = newSize;
<add> vector._level = newLevel;
<add> vector._root = newRoot;
<add> vector._tail = newTail;
<add> return vector;
<add> }
<add> return makeVector(newOrigin, newSize, newLevel, newRoot, newTail);
<add>}
<add>
<ide> function mergeIntoVectorWith(vector, merger, iterables) {
<ide> var seqs = [];
<ide> for (var ii = 0; ii < iterables.length; ii++) { | 2 |
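The relocated helpers back `Vector`'s windowing operations; a usage sketch against the pre-1.0 `Immutable.Vector` API of this era (method names assumed from the surrounding source, not verified against a release):

```js
var Immutable = require('immutable');

// slice() funnels into setVectorBounds(), which re-roots the trie and
// trims the tail instead of copying elements.
var v = Immutable.Vector(1, 2, 3, 4, 5);
var s = v.slice(1, 4);

console.log(s.length);    // 3
console.log(s.toArray()); // [2, 3, 4]
```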
Python | Python | move cupy and cudastream to compat | abf0188b0a45624d97c839d3aedba8521b184315 | <ide><path>spacy/compat.py
<ide> except ImportError:
<ide> import copyreg as copy_reg
<ide>
<add>try:
<add> from cupy.cuda.stream import Stream as CudaStream
<add>except ImportError:
<add> CudaStream = None
<add>
<add>try:
<add> import cupy
<add>except ImportError:
<add> cupy = None
<add>
<add>
<ide> pickle = pickle
<ide> copy_reg = copy_reg
<ide> CudaStream = CudaStream
<ide><path>spacy/util.py
<ide> import textwrap
<ide>
<ide> from .symbols import ORTH
<del>from .compat import path2str, basestring_, input_, unicode_
<add>from .compat import cupy, CudaStream, path2str, basestring_, input_, unicode_
<ide>
<ide>
<ide> LANGUAGES = {}
<ide> _data_path = Path(__file__).parent / 'data'
<del>try:
<del> from cupy.cuda.stream import Stream as CudaStream
<del>except ImportError:
<del> CudaStream = None
<del>
<del>try:
<del> import cupy
<del>except ImportError:
<del> cupy = None
<add>
<ide>
<ide> def get_lang_class(lang):
<ide> """Import and load a Language class. | 2 |
Text | Text | fix docker stack link | 93ed4b35fa6b35a24a79e3e4f9cfaea29982e7a4 | <ide><path>experimental/README.md
<ide> to build a Docker binary with the experimental features enabled:
<ide>
<ide> * [External graphdriver plugins](plugins_graphdriver.md)
<ide> * [Macvlan and Ipvlan Network Drivers](vlan-networks.md)
<del> * [Docker stacks](docker-stacks.md)
<add> * [Docker Stacks and Distributed Application Bundles](docker-stacks-and-bundles.md)
<ide>
<ide> ## How to comment on an experimental feature
<ide> | 1 |
Text | Text | fix inverted definition of controlled component | 8011112cc1db52f48c85b01875c3449bbe9e0805 | <ide><path>docs/docs/07-forms.md
<ide> In this example, we are accepting the value provided by the user and updating th
<ide>
<ide> This would accept user input and truncate the value to the first 140 characters.
<ide>
<del>A **Controlled** component maintains its own internal state; the component renders purely based on props.
<add>A **Controlled** component does not maintain its own internal state; the component renders purely based on props.
<ide>
<ide> ### Potential Issues With Checkboxes and Radio Buttons
<ide> | 1 |
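A minimal sketch of the corrected definition, in the ES5 style these docs used at the time (component and prop names are illustrative): the input's rendered value comes entirely from props, and edits flow upward through a callback.

```js
var ControlledInput = React.createClass({
  render: function() {
    // No internal state: the parent owns the value and hears about edits.
    return (
      <input
        value={this.props.value}
        onChange={this.props.onChange}
      />
    );
  }
});
```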
Javascript | Javascript | correct the portable path location on windows | 9648d8b82fdb06cfaf3f8dcc14e910d06c742eb1 | <ide><path>src/atom-paths.js
<ide> const hasWriteAccess = (dir) => {
<ide> const getAppDirectory = () => {
<ide> switch (process.platform) {
<ide> case 'darwin':
<del> return path.join(process.execPath.substring(0, process.execPath.indexOf('.app')), '..')
<add> return process.execPath.substring(0, process.execPath.indexOf('.app') + 4)
<ide> case 'linux':
<ide> case 'win32':
<ide> return path.join(process.execPath, '..')
<ide> const getAppDirectory = () => {
<ide> module.exports = {
<ide> setAtomHome: (homePath) => {
<ide> // When a read-writeable .atom folder exists above app use that
<del> const portableHomePath = path.join(getAppDirectory(), '.atom')
<add> const portableHomePath = path.join(getAppDirectory(), '..', '.atom')
<ide> if (fs.existsSync(portableHomePath)) {
<ide> if (hasWriteAccess(portableHomePath)) {
<ide> process.env.ATOM_HOME = portableHomePath | 1 |
Text | Text | fix some recent nits | 008a1f6e8c99fb1889e25f1864d35ef71411b721 | <ide><path>doc/api/events.md
<ide> The `Promise` will resolve with an array of all the arguments emitted to the
<ide> given event.
<ide>
<ide> This method is intentionally generic and works with the web platform
<del>[EventTarget](WHATWG-EventTarget) interface, which has no special
<add>[EventTarget][WHATWG-EventTarget] interface, which has no special
<ide> `'error'` event semantics and does not listen to the `'error'` event.
<ide>
<ide> ```js
<ide> async function run() {
<ide>
<ide> run();
<ide> ```
<del>[WHATWG-EventTarget](https://dom.spec.whatwg.org/#interface-eventtarget)
<add>
<add>[WHATWG-EventTarget]: https://dom.spec.whatwg.org/#interface-eventtarget
<ide> [`--trace-warnings`]: cli.html#cli_trace_warnings
<ide> [`EventEmitter.defaultMaxListeners`]: #events_eventemitter_defaultmaxlisteners
<ide> [`domain`]: domain.html
<ide><path>doc/api/fs.md
<ide> then resolves the `Promise` with no arguments upon success.
<ide> This function does not work on AIX versions before 7.1, it will resolve the
<ide> `Promise` with an error using code `UV_ENOSYS`.
<ide>
<del>#### filehandle.write(buffer, offset, length, position)
<add>#### filehandle.write(buffer[, offset[, length[, position]]])
<ide> <!-- YAML
<ide> added: v10.0.0
<ide> -->
<ide><path>doc/api/process.md
<ide> undefined
<ide> true
<ide> > process.emitWarning('test', 'DeprecationWarning');
<ide> Thrown:
<del>{ [DeprecationWarning: test] name: 'DeprecationWarning' }
<add>[DeprecationWarning: test] { name: 'DeprecationWarning' }
<ide> ```
<ide>
<ide> ## process.title
<ide><path>doc/api/tls.md
<ide> See
<ide> [SSL_CIPHER_get_name](https://www.openssl.org/docs/man1.1.1/man3/SSL_CIPHER_get_name.html)
<ide> for more information.
<ide>
<del>### tlsSocket.getSharedSigalgs()
<del><!-- YAML
<del>added: v12.11.0
<del>-->
<del>
<del>* Returns: {Array} List of signature algorithms shared between the server and
<del>the client in the order of decreasing preference.
<del>
<del>See
<del>[SSL_get_shared_sigalgs](https://www.openssl.org/docs/man1.1.1/man3/SSL_get_shared_sigalgs.html)
<del>for more information.
<del>
<ide> ### tlsSocket.getEphemeralKeyInfo()
<ide> <!-- YAML
<ide> added: v5.0.0
<ide> See [Session Resumption][] for more information.
<ide> Note: `getSession()` works only for TLSv1.2 and below. For TLSv1.3, applications
<ide> must use the [`'session'`][] event (it also works for TLSv1.2 and below).
<ide>
<add>### tlsSocket.getSharedSigalgs()
<add><!-- YAML
<add>added: v12.11.0
<add>-->
<add>
<add>* Returns: {Array} List of signature algorithms shared between the server and
<add>the client in the order of decreasing preference.
<add>
<add>See
<add>[SSL_get_shared_sigalgs](https://www.openssl.org/docs/man1.1.1/man3/SSL_get_shared_sigalgs.html)
<add>for more information.
<add>
<ide> ### tlsSocket.getTLSTicket()
<ide> <!-- YAML
<ide> added: v0.11.4
<ide> changes:
<ide> order as their private keys in `key`. If the intermediate certificates are
<ide> not provided, the peer will not be able to validate the certificate, and the
<ide> handshake will fail.
<del> * `sigalgs` {string}` Colon-separated list of supported signature algorithms.
<add> * `sigalgs` {string} Colon-separated list of supported signature algorithms.
<ide> The list can contain digest algorithms (`SHA256`, `MD5` etc.), public key
<ide> algorithms (`RSA-PSS`, `ECDSA` etc.), combination of both (e.g
<ide> 'RSA+SHA384') or TLS v1.3 scheme names (e.g. `rsa_pss_pss_sha512`). | 4 |
Go | Go | add unit test for multiple attach / restart | 0ebdca5e6144b0176a4b161cb1bd40efc0dc7efe | <ide><path>container_test.go
<ide> func TestIdFormat(t *testing.T) {
<ide> }
<ide> }
<ide>
<add>func TestMultipleAttachRestart(t *testing.T) {
<add> runtime, err := newTestRuntime()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer nuke(runtime)
<add> container, err := runtime.Create(
<add> &Config{
<add> Image: GetTestImage(runtime).Id,
<add> Cmd: []string{"/bin/sh", "-c",
<add> "i=1; while [ $i -le 5 ]; do i=`expr $i + 1`; echo hello; done"},
<add> Memory: 33554432,
<add> },
<add> )
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer runtime.Destroy(container)
<add>
<add> // Simulate 3 client attaching to the container and stop/restart
<add>
<add> stdout1, err := container.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> stdout2, err := container.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> stdout3, err := container.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if err := container.Start(); err != nil {
<add> t.Fatal(err)
<add> }
<add> l1, err := bufio.NewReader(stdout1).ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if strings.Trim(l1, " \r\n") != "hello" {
<add> t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l1)
<add> }
<add> l2, err := bufio.NewReader(stdout2).ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if strings.Trim(l2, " \r\n") != "hello" {
<add> t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l2)
<add> }
<add> l3, err := bufio.NewReader(stdout3).ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if strings.Trim(l3, " \r\n") != "hello" {
<add> t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l3)
<add> }
<add>
<add> if err := container.Stop(); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> stdout1, err = container.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> stdout2, err = container.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> stdout3, err = container.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if err := container.Start(); err != nil {
<add> t.Fatal(err)
<add> }
<add> timeout := make(chan bool)
<add> go func() {
<add> l1, err = bufio.NewReader(stdout1).ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if strings.Trim(l1, " \r\n") != "hello" {
<add> t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l1)
<add> }
<add> l2, err = bufio.NewReader(stdout2).ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if strings.Trim(l2, " \r\n") != "hello" {
<add> t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l2)
<add> }
<add> l3, err = bufio.NewReader(stdout3).ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if strings.Trim(l3, " \r\n") != "hello" {
<add> t.Fatalf("Unexpected output. Expected [%s], received [%s]", "hello", l3)
<add> }
<add> timeout <- false
<add> }()
<add> go func() {
<add> time.Sleep(3 * time.Second)
<add> timeout <- true
<add> }()
<add> if <-timeout {
<add> t.Fatalf("Timeout reading from the process")
<add> }
<add>}
<add>
<ide> func TestCommitRun(t *testing.T) {
<ide> runtime, err := newTestRuntime()
<ide> if err != nil {
<ide> func TestCommitRun(t *testing.T) {
<ide> t.Fatal(err)
<ide> }
<ide> defer runtime.Destroy(container2)
<del>
<ide> stdout, err := container2.StdoutPipe()
<ide> stderr, err := container2.StderrPipe()
<ide> if err := container2.Start(); err != nil { | 1 |
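A note on the timeout pattern in the test above: it races a reader goroutine against a sleeper on a shared `bool` channel, which calls `t.Fatal` off the test goroutine and leaks whichever goroutine loses the race. A minimal sketch of the same idea with `select` and `time.After`; this is a standalone example with a stand-in reader, not tied to the Docker test harness.

```go
// Sketch: read-with-timeout via select instead of two racing goroutines.
package main

import (
	"bufio"
	"fmt"
	"strings"
	"time"
)

func main() {
	// Stand-in for a container stdout pipe.
	r := bufio.NewReader(strings.NewReader("hello\n"))
	done := make(chan string, 1) // buffered so the reader never blocks

	go func() {
		line, err := r.ReadString('\n')
		if err != nil {
			done <- "error: " + err.Error()
			return
		}
		done <- strings.TrimSpace(line)
	}()

	select {
	case line := <-done:
		fmt.Println("read:", line)
	case <-time.After(3 * time.Second):
		fmt.Println("timeout reading from the process")
	}
}
```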
PHP | PHP | raise error when no mocks are found | 21ffbdf2f403c10b0bbb4b9edefbab2459d22739 | <ide><path>src/Http/Client/Adapter/Mock.php
<ide> namespace Cake\Http\Client\Adapter;
<ide>
<ide> use Cake\Http\Client\AdapterInterface;
<add>use Cake\Http\Client\Exception\MissingResponseException;
<ide> use Cake\Http\Client\Response;
<ide> use Closure;
<ide> use InvalidArgumentException;
<ide> *
<ide> * This adapter is not intended for production use. Instead
<ide> * it is the backend used by `Client::addMockResponse()`
<add> *
<add> * @internal
<ide> */
<ide> class Mock implements AdapterInterface
<ide> {
<ide> public function addResponse(RequestInterface $request, Response $response, array
<ide> public function send(RequestInterface $request, array $options): array
<ide> {
<ide> $found = null;
<add> $method = $request->getMethod();
<add> $requestUri = (string)$request->getUri();
<add>
<ide> foreach ($this->responses as $index => $mock) {
<del> if ($request->getMethod() !== $mock['request']->getMethod()) {
<add> if ($method !== $mock['request']->getMethod()) {
<ide> continue;
<ide> }
<del> if (!$this->urlMatches($request, $mock['request'])) {
<add> if (!$this->urlMatches($requestUri, $mock['request'])) {
<ide> continue;
<ide> }
<ide> if (isset($mock['options']['match'])) {
<ide> public function send(RequestInterface $request, array $options): array
<ide> return [$mock['response']];
<ide> }
<ide>
<del> return [];
<add> throw new MissingResponseException($method, $requestUri);
<ide> }
<ide>
<ide> /**
<ide> * Check if the request URI matches the mock URI.
<ide> *
<del> * @param \Psr\Http\Message\RequestInterface $request The request being sent.
<add> * @param string $requestUri The request being sent.
<ide> * @param \Psr\Http\Message\RequestInterface $mock The request being mocked.
<ide> * @return bool
<ide> */
<del> protected function urlMatches(RequestInterface $request, RequestInterface $mock): bool
<add> protected function urlMatches(string $requestUri, RequestInterface $mock): bool
<ide> {
<del> $requestUri = (string)$request->getUri();
<ide> $mockUri = (string)$mock->getUri();
<ide> if ($requestUri === $mockUri) {
<ide> return true;
<ide><path>src/Http/Client/Exception/MissingResponseException.php
<add><?php
<add>declare(strict_types=1);
<add>
<add>/**
<add> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<add> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<add> *
<add> * Licensed under The MIT License
<add> * Redistributions of files must retain the above copyright notice.
<add> *
<add> * @copyright Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<add> * @link https://cakephp.org CakePHP(tm) Project
<add> * @since 3.0.0
<add> * @license https://opensource.org/licenses/mit-license.php MIT License
<add> */
<add>namespace Cake\Http\Client\Exception;
<add>
<add>use RuntimeException;
<add>
<add>/**
<add> * Used to indicate that a request did not have a matching mock response.
<add> */
<add>class MissingResponseException extends RuntimeException
<add>{
<add> /**
<add> * Constructor
<add> *
<add> * @param string $method The HTTP method used.
<add> * @param string $url The request URL.
<add> */
<add> public function __construct(string $method, string $url)
<add> {
<add> $message = "Unable to find a mocked response for {$method} to {$url}.";
<add> parent::__construct($message);
<add> }
<add>}
<ide><path>tests/TestCase/Http/ClientTest.php
<ide>
<ide> use Cake\Http\Client;
<ide> use Cake\Http\Client\Adapter\Stream;
<add>use Cake\Http\Client\Exception\MissingResponseException;
<ide> use Cake\Http\Client\Request;
<ide> use Cake\Http\Client\Response;
<ide> use Cake\Http\Cookie\Cookie;
<ide> public function testAddMockResponseMethodMatchFailure(): void
<ide> $stub = new Response(['HTTP/1.0 200'], 'hello world');
<ide> Client::addMockResponse('POST', 'http://example.com/path', $stub);
<ide>
<del> $mock = $this->getMockBuilder(Stream::class)
<del> ->onlyMethods(['send'])
<del> ->getMock();
<del> $mock->expects($this->once())
<del> ->method('send')
<del> ->will($this->throwException(new InvalidArgumentException('No match')));
<del>
<del> $client = new Client(['adapter' => $mock]);
<del> $this->expectException(InvalidArgumentException::class);
<del> $this->expectExceptionMessage('No match');
<add> $client = new Client();
<add> $this->expectException(MissingResponseException::class);
<add> $this->expectExceptionMessage('Unable to find a mock');
<ide>
<ide> $client->get('http://example.com/path');
<ide> }
<ide> public function testAddMockResponseCustomNoMatch(): void
<ide> {
<ide> $stub = new Response(['HTTP/1.0 200'], 'hello world');
<ide> Client::addMockResponse('POST', 'http://example.com/path', $stub, [
<del> 'match' => function ($request) {
<add> 'match' => function () {
<ide> return false;
<ide> },
<ide> ]);
<ide>
<del> $mock = $this->getMockBuilder(Stream::class)
<del> ->onlyMethods(['send'])
<del> ->getMock();
<del> $mock->expects($this->once())
<del> ->method('send')
<del> ->will($this->throwException(new InvalidArgumentException('No match')));
<del>
<del> $client = new Client(['adapter' => $mock]);
<del> $this->expectException(InvalidArgumentException::class);
<del> $this->expectExceptionMessage('No match');
<add> $client = new Client();
<add> $this->expectException(MissingResponseException::class);
<add> $this->expectExceptionMessage('Unable to find a mock');
<ide>
<ide> $client->post('http://example.com/path');
<ide> } | 3 |
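From a test author's point of view, the behavior change above looks roughly like this sketch. The class and method names are taken from the diff itself; the URLs and response body are illustrative.

```php
<?php
use Cake\Http\Client;
use Cake\Http\Client\Exception\MissingResponseException;
use Cake\Http\Client\Response;

Client::addMockResponse('GET', 'http://example.com/path', new Response(['HTTP/1.0 200'], 'ok'));

$client = new Client();
$client->get('http://example.com/path'); // served by the mock adapter

try {
    // No mock registered for this method/URL pair: previously an empty
    // response list, now an explicit failure.
    $client->post('http://example.com/other');
} catch (MissingResponseException $e) {
    // "Unable to find a mocked response for POST to http://example.com/other."
}
```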
Ruby | Ruby | fix the build | b079f7a334137400085e19ec1b3ffc95fd53fc4c | <ide><path>actionpack/lib/action_dispatch/http/cache.rb
<ide> def not_modified?(modified_at)
<ide> end
<ide>
<ide> def etag_matches?(etag)
<add> etag = etag.gsub(/^\"|\"$/, "")
<ide> if_none_match_etags.include?(etag)
<ide> end
<ide> | 1 |
PHP | PHP | update exception message | 37df839aed3dc6f0c29a6a433c090e5678b4d265 | <ide><path>src/Utility/Security.php
<ide> public static function constantEquals($original, $compare): bool
<ide> public static function getSalt(): string
<ide> {
<ide> if (static::$_salt === null) {
<del> throw new RuntimeException('Salt not set. Use Security::setSalt() to set one, ideally in bootstrap.php.');
<add> throw new RuntimeException('Salt not set. Use Security::setSalt() to set one, ideally in `config/bootstrap.php`.');
<ide> }
<ide>
<ide> return static::$_salt; | 1 |
PHP | PHP | apply fixes from styleci | f5f900b2bac0b14c4256dcf8beb28f0450531bd5 | <ide><path>tests/Console/ConsoleEventSchedulerTest.php
<ide> use Illuminate\Console\Command;
<ide> use PHPUnit\Framework\TestCase;
<ide> use Illuminate\Container\Container;
<del>use Illuminate\Config\Repository as Config;
<ide> use Illuminate\Console\Scheduling\Schedule;
<ide> use Illuminate\Console\Scheduling\EventMutex;
<ide> use Illuminate\Console\Scheduling\CacheEventMutex; | 1 |
Javascript | Javascript | add support for skinnedmesh and bone object types | f8aedef34c3d98441585211d03d25a45504c9ec3 | <ide><path>src/loaders/ObjectLoader.js
<ide> Object.assign( ObjectLoader.prototype, {
<ide>
<ide> break;
<ide>
<add> case 'SkinnedMesh':
<add>
<add> var geometry = getGeometry( data.geometry );
<add> var material = getMaterial( data.material );
<add>
<add> object = new SkinnedMesh( geometry, material );
<add>
<add> break;
<add>
<add> case 'Bone':
<add>
<add> object = new Bone();
<add>
<add> break;
<add>
<ide> case 'LOD':
<ide>
<ide> object = new LOD(); | 1 |
Ruby | Ruby | add missing require | b5384d91a4e761023602d7eeb2ad92be0fe44815 | <ide><path>activesupport/lib/active_support/per_thread_registry.rb
<add>require 'active_support/core_ext/module/delegation'
<add>
<ide> module ActiveSupport
<ide> # This module is used to encapsulate access to thread local variables.
<ide> # | 1 |
Ruby | Ruby | remove unnecessary db call when replacing | 774160b9ad6908435bf3485e7ac98633deff76c6 | <ide><path>activerecord/lib/active_record/associations/collection_association.rb
<ide> def replace(other_array)
<ide> if owner.new_record?
<ide> replace_records(other_array, original_target)
<ide> else
<del> transaction { replace_records(other_array, original_target) }
<add> if other_array != original_target
<add> transaction { replace_records(other_array, original_target) }
<add> end
<ide> end
<ide> end
<ide>
<ide><path>activerecord/test/cases/associations/has_many_associations_test.rb
<ide> def test_replace_failure
<ide> assert_equal orig_accounts, firm.accounts
<ide> end
<ide>
<add> def test_replace_with_same_content
<add> firm = Firm.first
<add> firm.clients = []
<add> firm.save
<add>
<add> assert_queries(0, ignore_none: true) do
<add> firm.clients = []
<add> end
<add> end
<add>
<ide> def test_transactions_when_replacing_on_persisted
<ide> good = Client.new(:name => "Good")
<ide> bad = Client.new(:name => "Bad", :raise_on_save => true) | 2 |
Text | Text | move code example right after colon | 099a0f08801257c945235c3cb5d13cff109f7e55 | <ide><path>guides/source/upgrading_ruby_on_rails.md
<ide> end
<ide> If the action is not being used in a public API and you are free to change the
<ide> HTTP method, you can update your route to use `patch` instead of `put`:
<ide>
<del>`PUT` requests to `/users/:id` in Rails 4 get routed to `update` as they are
<del>today. So, if you have an API that gets real PUT requests it is going to work.
<del>The router also routes `PATCH` requests to `/users/:id` to the `update` action.
<del>
<ide> ```ruby
<ide> resources :users do
<ide> patch :update_name, on: :member
<ide> end
<ide> ```
<ide>
<add>`PUT` requests to `/users/:id` in Rails 4 get routed to `update` as they are
<add>today. So, if you have an API that gets real PUT requests it is going to work.
<add>The router also routes `PATCH` requests to `/users/:id` to the `update` action.
<add>
<ide> If the action is being used in a public API and you can't change the HTTP method
<ide> being used, you can update your form to use the `PUT` method instead:
<ide> | 1 |
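The form snippet this sentence introduces is cut off at the row boundary above. A sketch of what such a form typically looks like; the `update_name` member route matches the earlier example, while the `@user` object and the block body are assumptions.

```erb
<%# Hypothetical form forcing PUT for the update_name member route. %>
<%= form_for [:update_name, @user], method: :put do |f| %>
  ...
<% end %>
```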
PHP | PHP | add resource missing option | 62925b64310bb796dff479aea4acc55a1620d470 | <ide><path>src/Illuminate/Routing/PendingResourceRegistration.php
<ide> public function shallow($shallow = true)
<ide> return $this;
<ide> }
<ide>
<add> /**
<add> * Define the callable that should be invoked on a missing model exception.
<add> *
<add> * @param $callback
<add> * @return $this
<add> */
<add> public function missing($callback)
<add> {
<add> $this->options['missing'] = $callback;
<add>
<add> return $this;
<add> }
<add>
<ide> /**
<ide> * Indicate that the resource routes should be scoped using the given binding fields.
<ide> *
<ide><path>src/Illuminate/Routing/ResourceRegistrar.php
<ide> class ResourceRegistrar
<ide> */
<ide> protected $resourceDefaults = ['index', 'create', 'store', 'show', 'edit', 'update', 'destroy'];
<ide>
<add> /**
<add> * Actions that use model binding.
<add> *
<add> * @var string[]
<add> */
<add> protected $modelBoundMethods = ['show', 'edit', 'update', 'destroy'];
<add>
<ide> /**
<ide> * The parameters set for this resource instance.
<ide> *
<ide> protected function getResourceAction($resource, $controller, $method, $options)
<ide> $action['where'] = $options['wheres'];
<ide> }
<ide>
<add> if (isset($options['missing']) && in_array($method, $this->modelBoundMethods)) {
<add> $action['missing'] = $options['missing'];
<add> }
<add>
<ide> return $action;
<ide> }
<ide>
<ide><path>tests/Routing/RouteRegistrarTest.php
<ide> public function testCanRegisterResourcesWithoutOption()
<ide> }
<ide> }
<ide>
<add> public function testCanRegisterResourceWithMissingOption()
<add> {
<add> $this->router->middleware('resource-middleware')
<add> ->resource('users', RouteRegistrarControllerStub::class)
<add> ->missing(function () { return 'missing'; });
<add>
<add> $this->assertIsCallable($this->router->getRoutes()->getByName('users.show')->getMissing());
<add> $this->assertIsCallable($this->router->getRoutes()->getByName('users.edit')->getMissing());
<add> $this->assertIsCallable($this->router->getRoutes()->getByName('users.update')->getMissing());
<add> $this->assertIsCallable($this->router->getRoutes()->getByName('users.destroy')->getMissing());
<add>
<add> $this->assertNull($this->router->getRoutes()->getByName('users.index')->getMissing());
<add> $this->assertNull($this->router->getRoutes()->getByName('users.create')->getMissing());
<add> $this->assertNull($this->router->getRoutes()->getByName('users.store')->getMissing());
<add> }
<add>
<ide> public function testCanAccessRegisteredResourceRoutesAsRouteCollection()
<ide> {
<ide> $resource = $this->router->middleware('resource-middleware') | 3 |
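A sketch of how an application might use the new option: `Route::resource` and the `missing()` call follow the diff, while the route name, controller, and redirect target are illustrative.

```php
<?php
use App\Http\Controllers\PhotoController;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Redirect;
use Illuminate\Support\Facades\Route;

Route::resource('photos', PhotoController::class)
    ->missing(function (Request $request) {
        // Runs instead of a 404 when implicit binding finds no model on the
        // model-bound actions (show/edit/update/destroy, per the diff).
        return Redirect::route('photos.index');
    });
```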
PHP | PHP | apply suggestions from code review | efd6020d9f89c6278f624c2e9ae0b00beea7a6f6 | <ide><path>src/Error/ErrorTrap.php
<ide> class ErrorTrap
<ide> * See the `Error` key in you `config/app.php`
<ide> * for details on the keys and their values.
<ide> *
<del> * @var array
<add> * @var array<string, mixed>
<ide> */
<ide> protected $_defaultConfig = [
<ide> 'errorLevel' => E_ALL,
<ide> class ErrorTrap
<ide> /**
<ide> * Constructor
<ide> *
<del> * @param array $options An options array. See $_defaultConfig.
<add> * @param array<string, mixed> $options An options array. See $_defaultConfig.
<ide> */
<ide> public function __construct(array $options = [])
<ide> { | 1 |
Text | Text | add docs for x-nextjs-cache header | a0924fc7c763797b2927b4c3cb0f321ead49e645 | <ide><path>docs/api-reference/data-fetching/get-static-props.md
<ide> export async function getStaticProps() {
<ide>
<ide> Learn more about [Incremental Static Regeneration](/docs/basic-features/data-fetching/incremental-static-regeneration.md)
<ide>
<add>The cache status of a page leveraging ISR can be determined by reading the value of the `x-nextjs-cache` response header. The possible values are the following:
<add>
<add>- `MISS` - the path is not in the cache (occurs at most once, on the first visit)
<add>- `STALE` - the path is in the cache but exceeded the revalidate time so it will be updated in the background
<add>- `HIT` - the path is in the cache and has not exceeded the revalidate time
<add>
<ide> ### `notFound`
<ide>
<ide> The `notFound` boolean allows the page to return a `404` status and [404 Page](/docs/advanced-features/custom-error-page.md#404-page). With `notFound: true`, the page will return a `404` even if there was a successfully generated page before. This is meant to support use cases like user-generated content getting removed by its author. Note, `notFound` follows the same `revalidate` behavior [described here](/docs/api-reference/data-fetching/get-static-props.md#revalidate)
<ide><path>docs/api-reference/next/image.md
<ide> The following describes the caching algorithm for the default [loader](#loader).
<ide>
<ide> Images are optimized dynamically upon request and stored in the `<distDir>/cache/images` directory. The optimized image file will be served for subsequent requests until the expiration is reached. When a request is made that matches a cached but expired file, the expired image is served stale immediately. Then the image is optimized again in the background (also called revalidation) and saved to the cache with the new expiration date.
<ide>
<add>The cache status of an image can be determined by reading the value of the `x-nextjs-cache` response header. The possible values are the following:
<add>
<add>- `MISS` - the path is not in the cache (occurs at most once, on the first visit)
<add>- `STALE` - the path is in the cache but exceeded the revalidate time so it will be updated in the background
<add>- `HIT` - the path is in the cache and has not exceeded the revalidate time
<add>
<ide> The expiration (or rather Max Age) is defined by either the [`minimumCacheTTL`](#minimum-cache-ttl) configuration or the upstream server's `Cache-Control` header, whichever is larger. Specifically, the `max-age` value of the `Cache-Control` header is used. If both `s-maxage` and `max-age` are found, then `s-maxage` is preferred.
<ide>
<ide> - You can configure [`minimumCacheTTL`](#minimum-cache-ttl) to increase the cache duration when the upstream image does not include `Cache-Control` header or the value is very low. | 2 |
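A quick sketch of inspecting the header described above from script; the URL is a placeholder.

```js
// Sketch: reading the ISR/image cache status of a deployed page.
(async () => {
  const res = await fetch('https://example.com/blog/my-post'); // placeholder URL
  console.log(res.headers.get('x-nextjs-cache')); // 'MISS' | 'STALE' | 'HIT'
})();
```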
Text | Text | change events --since to fit rfc3339nano | d619b5594ba136ab285b93128bbe8fe246678488 | <ide><path>docs/sources/reference/commandline/cli.md
<ide> You'll need two shells for this example.
<ide> 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die
<ide> 2014-09-03T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) stop
<ide>
<del> $ sudo docker events --since '2013-09-03 15:49:29 +0200 CEST'
<add> $ sudo docker events --since '2013-09-03T15:49:29'
<ide> 2014-09-03T15:49:29.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
<ide> 2014-05-10T17:42:14.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
<ide> 2014-05-10T17:42:14.999999999Z07:00 7805c1d35632: (from redis:2.8) die | 1 |
Javascript | Javascript | fix issue #233 | a982beb5c4ec50932c312626f44488d0ee85abec | <ide><path>packages/ember-handlebars/lib/views/metamorph_view.js
<ide> var DOMManager = {
<ide> remove: function(view) {
<ide> var morph = view.morph;
<ide> if (morph.isRemoved()) { return; }
<add> set(view, 'element', null);
<add> set(view, 'lastInsert', null);
<ide> morph.remove();
<ide> },
<ide>
<ide><path>packages/ember-views/lib/views/collection_view.js
<ide> Ember.CollectionView = Ember.ContainerView.extend(
<ide> addedViews.push(emptyView);
<ide> set(this, 'emptyView', emptyView);
<ide> }
<del>
<ide> childViews.replace(start, 0, addedViews);
<ide> },
<ide>
<ide><path>packages/ember-views/lib/views/states/default.js
<ide> Ember.View.states = {
<ide> // Handle events from `Ember.EventDispatcher`
<ide> handleEvent: function() {
<ide> return true; // continue event propagation
<add> },
<add>
<add> destroyElement: function(view) {
<add> set(view, 'element', null);
<add> set(view, 'lastInsert', null);
<add> return view;
<ide> }
<ide> }
<ide> };
<ide><path>packages/ember-views/lib/views/states/in_dom.js
<ide> Ember.View.states.hasElement = {
<ide> setElement: function(view, value) {
<ide> if (value === null) {
<ide> view.invalidateRecursively('element');
<add>
<ide> view.transitionTo('preRender');
<ide> } else {
<ide> throw "You cannot set an element to a non-null value when the element is already in the DOM.";
<ide> Ember.View.states.hasElement = {
<ide>
<ide> // once the view is already in the DOM, destroying it removes it
<ide> // from the DOM, nukes its element, and puts it back into the
<del> // preRender state.
<add> // preRender state if inDOM.
<add>
<ide> destroyElement: function(view) {
<ide> view._notifyWillDestroyElement();
<del>
<ide> view.domManager.remove(view);
<ide> return view;
<ide> },
<ide> Ember.View.states.hasElement = {
<ide> Ember.View.states.inDOM = {
<ide> parentState: Ember.View.states.hasElement,
<ide>
<del> insertElement: function() {
<add> insertElement: function(view, fn) {
<add> if (view.get('lastInsert') !== fn.insertGuid){
<add> return;
<add> }
<ide> throw "You can't insert an element into the DOM that has already been inserted";
<ide> }
<ide> };
<ide><path>packages/ember-views/lib/views/states/pre_render.js
<ide> Ember.View.states.preRender = {
<ide> // a view leaves the preRender state once its element has been
<ide> // created (createElement).
<ide> insertElement: function(view, fn) {
<add> if (view.get('lastInsert') !== fn.insertGuid){
<add> return;
<add> }
<ide> view.createElement();
<ide> view._notifyWillInsertElement(true);
<ide> // after createElement, the view will be in the hasElement state.
<ide><path>packages/ember-views/lib/views/view.js
<ide> Ember.View = Ember.Object.extend(Ember.Evented,
<ide> @param {Function} fn the function that inserts the element into the DOM
<ide> */
<ide> _insertElementLater: function(fn) {
<add> set(this, 'lastInsert', fn.insertGuid = Ember.generateGuid());
<ide> Ember.run.schedule('render', this, this.invokeForState, 'insertElement', fn);
<ide> },
<ide>
<ide> var DOMManager = {
<ide> var elem = get(view, 'element');
<ide>
<ide> set(view, 'element', null);
<add> set(view, 'lastInsert', null);
<ide>
<ide> Ember.$(elem).remove();
<ide> },
<ide><path>packages/ember-views/tests/views/collection_test.js
<ide> test("should allow declaration of itemViewClass as a string", function() {
<ide>
<ide> equal(view.$('.ember-view').length, 3);
<ide> });
<add>
<add>test("should not render the emptyView if content is emptied and refilled in the same run loop", function() {
<add> view = Ember.CollectionView.create({
<add> tagName: 'div',
<add> content: Ember.A(['NEWS GUVNAH']),
<add>
<add> emptyView: Ember.View.create({
<add> tagName: 'kbd',
<add> render: function(buf) {
<add> buf.push("OY SORRY GUVNAH NO NEWS TODAY EH");
<add> }
<add> })
<add> });
<add>
<add> Ember.run(function() {
<add> view.append();
<add> });
<add>
<add> equal(view.$().find('kbd:contains("OY SORRY GUVNAH")').length, 0);
<add>
<add> Ember.run(function() {
<add> view.get('content').popObject();
<add> view.get('content').pushObject('NEWS GUVNAH'); // push the string itself, not a one-element array
<add> });
<add> equal(view.$('div').length, 1);
<add> equal(view.$().find('kbd:contains("OY SORRY GUVNAH")').length, 0);
<add>});
<ide><path>packages/ember-views/tests/views/view/remove_test.js
<ide> test("does nothing if not in parentView", function() {
<ide> });
<ide>
<ide>
<add>test("the DOM element is gone after doing append and remove in two separate runloops", function() {
<add> var view = Ember.View.create();
<add> Ember.run(function() {
<add> view.append();
<add> });
<add> Ember.run(function() {
<add> view.remove();
<add> });
<add>
<add> var viewElem = Ember.$('#'+get(view, 'elementId'));
<add> ok(viewElem.length === 0, "view's element doesn't exist in DOM");
<add>});
<ide>
<add>test("the DOM element is gone after doing append and remove in a single runloop", function() {
<add> var view = Ember.View.create();
<add> Ember.run(function() {
<add> view.append();
<add> view.remove();
<add> });
<add>
<add> var viewElem = Ember.$('#'+get(view, 'elementId'));
<add> ok(viewElem.length === 0, "view's element doesn't exist in DOM");
<add>});
<ide> | 8 |
Python | Python | improve error message when reading vectors | 0ddb152be0f1876e0f80d7f9b6ee3473b9c7eb2c | <ide><path>spacy/cli/init_model.py
<ide>
<ide> from ._messages import Messages
<ide> from ..vectors import Vectors
<del>from ..errors import Warnings, user_warning
<add>from ..errors import Errors, Warnings, user_warning
<ide> from ..util import prints, ensure_path, get_lang_class
<ide>
<ide> try:
<ide> def read_vectors(vectors_loc):
<ide> pieces = line.rsplit(' ', vectors_data.shape[1]+1)
<ide> word = pieces.pop(0)
<ide> if len(pieces) != vectors_data.shape[1]:
<del> print(word, repr(line))
<del> raise ValueError("Bad line in file")
<add> raise ValueError(Errors.E094.format(line_num=i, loc=vectors_loc))
<ide> vectors_data[i] = numpy.asarray(pieces, dtype='f')
<ide> vectors_keys.append(word)
<ide> return vectors_data, vectors_keys
<ide><path>spacy/errors.py
<ide> class Errors(object):
<ide> "Alternatively, it is built from the 'lang' and 'name' keys in "
<ide> "the meta.json. Vector names are required to avoid issue #1660.")
<ide> E093 = ("token.ent_iob values make invalid sequence: I without B\n{seq}")
<add> E094 = ("Error reading line {line_num} in vectors file {loc}.")
<ide>
<ide>
<ide> @add_codes | 2 |
Go | Go | fix race in stats cli and native driver | 77280a87b70d3b2b629cd30ea93464287f346fa1 | <ide><path>api/client/stats.go
<ide> func (s *containerStats) Collect(cli *DockerCli, streamStats bool) {
<ide> }
<ide> stream, _, err := cli.call("GET", "/containers/"+s.Name+"/stats?"+v.Encode(), nil, nil)
<ide> if err != nil {
<add> s.mu.Lock()
<ide> s.err = err
<add> s.mu.Unlock()
<ide> return
<ide> }
<ide> defer stream.Close()
<ide><path>daemon/execdriver/native/driver.go
<ide> func (d *driver) Clean(id string) error {
<ide> }
<ide>
<ide> func (d *driver) Stats(id string) (*execdriver.ResourceStats, error) {
<add> d.Lock()
<ide> c := d.activeContainers[id]
<add> d.Unlock()
<ide> if c == nil {
<ide> return nil, execdriver.ErrNotRunning
<ide> } | 2 |
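The underlying pattern in both hunks above is the same: any state shared across goroutines (the `s.err` field, the `activeContainers` map) must be read and written under the lock. A minimal standalone sketch of the guarded-map half; the type names are illustrative.

```go
package main

import "sync"

type container struct{ id string }

type driver struct {
	mu               sync.Mutex
	activeContainers map[string]*container
}

// lookup takes the lock for the map read; an unguarded read racing a
// concurrent write is a data race that `go test -race` would flag.
func (d *driver) lookup(id string) *container {
	d.mu.Lock()
	defer d.mu.Unlock()
	return d.activeContainers[id]
}

func main() {
	d := &driver{activeContainers: map[string]*container{"abc": {id: "abc"}}}
	_ = d.lookup("abc")
}
```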
Ruby | Ruby | fix method signature | 126d2133ab824072f526bc88830a5b9948247bfb | <ide><path>Library/Homebrew/patch.rb
<ide> require 'erb'
<ide>
<ide> class Patch
<del> def self.create(strip, io=nil, &block)
<del> case strip ||= :p1
<add> def self.create(strip, io, &block)
<add> case strip
<ide> when :DATA, IO, StringIO
<ide> IOPatch.new(strip, :p1)
<ide> when String
<ide><path>Library/Homebrew/test/test_patch.rb
<ide>
<ide> class PatchTests < Homebrew::TestCase
<ide> def test_create_simple
<del> patch = Patch.create(:p2)
<add> patch = Patch.create(:p2, nil)
<ide> assert_kind_of ExternalPatch, patch
<ide> assert_predicate patch, :external?
<ide> assert_equal :p2, patch.strip
<ide> def test_create_io
<ide> end
<ide>
<ide> def test_create_io_without_strip
<del> patch = Patch.create(StringIO.new("foo"))
<add> patch = Patch.create(StringIO.new("foo"), nil)
<ide> assert_kind_of IOPatch, patch
<ide> assert_equal :p1, patch.strip
<ide> end
<ide> def test_create_string
<ide> end
<ide>
<ide> def test_create_string_without_strip
<del> patch = Patch.create("foo")
<add> patch = Patch.create("foo", nil)
<ide> assert_kind_of IOPatch, patch
<ide> assert_equal :p1, patch.strip
<ide> end
<ide> def test_create_DATA
<ide> end
<ide>
<ide> def test_create_DATA_without_strip
<del> patch = Patch.create(:DATA)
<add> patch = Patch.create(:DATA, nil)
<ide> assert_kind_of IOPatch, patch
<ide> assert_equal :p1, patch.strip
<ide> end | 2 |
Text | Text | remove logo, add cake website link to description | 2b1a11f2b0fb6d48163e2846607d9971f6e3dd22 | <ide><path>README.md
<ide> [](http://squizlabs.github.io/PHP_CodeSniffer/analysis/cakephp/cakephp/)
<ide> [](https://packagist.org/packages/cakephp/cakephp)
<ide>
<del>[](http://www.cakephp.org)
<del>
<del>CakePHP is a rapid development framework for PHP which uses commonly known
<del>design patterns like Active Record, Association Data Mapping, Front Controller
<del>and MVC. Our primary goal is to provide a structured framework that enables
<del>PHP users at all levels to rapidly develop robust web applications, without any
<del>loss to flexibility.
<add>[CakePHP](http://www.cakephp.org) is a rapid development framework for PHP which
<add>uses commonly known design patterns like Active Record, Association Data
<add>Mapping, Front Controller and MVC. Our primary goal is to provide a structured
<add>framework that enables PHP users at all levels to rapidly develop robust web
<add>applications, without any loss to flexibility.
<ide>
<ide> ## Installing CakePHP via Composer
<ide> | 1 |
Go | Go | address some displaying issues in docker info | 8ad9438edeab44c8f424113bc96fa12d76e4fdc6 | <ide><path>api/client/system/info.go
<ide> func runInfo(dockerCli *client.DockerCli) error {
<ide> fmt.Fprintf(dockerCli.Out(), " ClusterID: %s\n", info.Swarm.Cluster.ID)
<ide> fmt.Fprintf(dockerCli.Out(), " Managers: %d\n", info.Swarm.Managers)
<ide> fmt.Fprintf(dockerCli.Out(), " Nodes: %d\n", info.Swarm.Nodes)
<del> fmt.Fprintf(dockerCli.Out(), " Name: %s\n", info.Swarm.Cluster.Spec.Annotations.Name)
<ide> fmt.Fprintf(dockerCli.Out(), " Orchestration:\n")
<del> fmt.Fprintf(dockerCli.Out(), " Task History Retention: %d\n", info.Swarm.Cluster.Spec.Orchestration.TaskHistoryRetentionLimit)
<add> fmt.Fprintf(dockerCli.Out(), " Task History Retention Limit: %d\n", info.Swarm.Cluster.Spec.Orchestration.TaskHistoryRetentionLimit)
<ide> fmt.Fprintf(dockerCli.Out(), " Raft:\n")
<ide> fmt.Fprintf(dockerCli.Out(), " Snapshot interval: %d\n", info.Swarm.Cluster.Spec.Raft.SnapshotInterval)
<ide> fmt.Fprintf(dockerCli.Out(), " Heartbeat tick: %d\n", info.Swarm.Cluster.Spec.Raft.HeartbeatTick) | 1 |
Javascript | Javascript | enforce 20s timeout for all unit tests | df2a22ee6149595b212ef7dca73535a853969f7b | <ide><path>test/data/testrunner.js
<del>jQuery.noConflict(); // Allow the test to run with other libs or jQuery's.
<add>/**
<add> * Allow the test suite to run with other libs or jQuery's.
<add> */
<add>jQuery.noConflict();
<ide>
<del>// jQuery-specific QUnit.reset
<add>/**
<add> * QUnit hooks
<add> */
<ide> (function() {
<add> // jQuery-specific QUnit.reset
<ide> var reset = QUnit.reset,
<ide> ajaxSettings = jQuery.ajaxSettings;
<ide>
<ide> jQuery.noConflict(); // Allow the test to run with other libs or jQuery's.
<ide> };
<ide> })();
<ide>
<del>// load testswarm agent
<add>/**
<add> * QUnit configuration
<add> */
<add>// Max time for stop() and asyncTest() until it aborts the test
<add>// and start()'s the next test.
<add>QUnit.config.testTimeout = 20 * 1000; // 20 seconds
<add>
<add>/**
<add> * Load the TestSwarm listener if swarmURL is in the address.
<add> */
<ide> (function() {
<ide> var url = window.location.search;
<del> url = decodeURIComponent( url.slice( url.indexOf("swarmURL=") + 9 ) );
<add> url = decodeURIComponent( url.slice( url.indexOf("swarmURL=") + "swarmURL=".length ) );
<add>
<ide> if ( !url || url.indexOf("http") !== 0 ) {
<ide> return;
<ide> } | 1 |
Javascript | Javascript | update http2 for new stream api | b07f2e25f414dde014528705ddbc0823d1aeb89f | <ide><path>benchmark/http_simple.js
<ide> path = require("path");
<ide>
<del>libDir = path.join(path.dirname(__filename), "../lib");
<del>require.paths.unshift(libDir);
<del>
<del>var puts = require("sys").puts;
<del>http = require("http");
<add>var puts = require("../lib/sys").puts;
<add>http = require("../lib/http2");
<ide>
<ide> fixed = ""
<ide> for (var i = 0; i < 20*1024; i++) {
<ide><path>lib/http2.js
<ide> function OutgoingMessage () {
<ide> sys.inherits(OutgoingMessage, events.EventEmitter);
<ide> exports.OutgoingMessage = OutgoingMessage;
<ide>
<del>OutgoingMessage.prototype.send = function (data, encoding) {
<add>OutgoingMessage.prototype._send = function (data, encoding) {
<ide> var length = this.output.length;
<ide>
<ide> if (length === 0) {
<ide> OutgoingMessage.prototype.send = function (data, encoding) {
<ide> this.outputEncodings.push(encoding);
<ide> };
<ide>
<del>OutgoingMessage.prototype.sendHeaderLines = function (first_line, headers) {
<add>OutgoingMessage.prototype._sendHeaderLines = function (first_line, headers) {
<ide> var sentConnectionHeader = false;
<ide> var sendContentLengthHeader = false;
<ide> var sendTransferEncodingHeader = false;
<ide> OutgoingMessage.prototype.sendHeaderLines = function (first_line, headers) {
<ide>
<ide> messageHeader += CRLF;
<ide>
<del> this.send(messageHeader);
<add> this._send(messageHeader);
<ide> // wait until the first body chunk, or finish(), is sent to flush.
<ide> };
<ide>
<del>OutgoingMessage.prototype.sendBody = function (chunk, encoding) {
<add>OutgoingMessage.prototype.write = function (chunk, encoding) {
<ide> encoding = encoding || "ascii";
<ide> if (this.chunkEncoding) {
<del> this.send(process._byteLength(chunk, encoding).toString(16));
<del> this.send(CRLF);
<del> this.send(chunk, encoding);
<del> this.send(CRLF);
<add> this._send(process._byteLength(chunk, encoding).toString(16));
<add> this._send(CRLF);
<add> this._send(chunk, encoding);
<add> this._send(CRLF);
<ide> } else {
<del> this.send(chunk, encoding);
<add> this._send(chunk, encoding);
<ide> }
<ide>
<ide> if (this.flushing) {
<ide> OutgoingMessage.prototype.sendBody = function (chunk, encoding) {
<ide> }
<ide> };
<ide>
<add>OutgoingMessage.prototype.sendBody = function () {
<add> throw new Error('sendBody() renamed to write()');
<add>};
<add>
<add>
<ide> OutgoingMessage.prototype.flush = function () {
<ide> this.emit("flush");
<ide> };
<ide>
<del>OutgoingMessage.prototype.finish = function () {
<del> if (this.chunkEncoding) this.send("0\r\n\r\n"); // last chunk
<add>OutgoingMessage.prototype.close = function () {
<add> if (this.chunkEncoding) this._send("0\r\n\r\n"); // last chunk
<ide> this.finished = true;
<ide> this.flush();
<ide> };
<ide> function ServerResponse (req) {
<ide> sys.inherits(ServerResponse, OutgoingMessage);
<ide> exports.ServerResponse = ServerResponse;
<ide>
<del>ServerResponse.prototype.sendHeader = function (statusCode, headers) {
<add>ServerResponse.prototype.writeHead = function (statusCode, headers) {
<ide> var reason = STATUS_CODES[statusCode] || "unknown";
<ide> var status_line = "HTTP/1.1 " + statusCode.toString() + " " + reason + CRLF;
<del> this.sendHeaderLines(status_line, headers);
<add> this._sendHeaderLines(status_line, headers);
<add>};
<add>
<add>ServerResponse.prototype.writeHeader = ServerResponse.prototype.writeHead;
<add>
<add>ServerResponse.prototype.sendHeader = function () {
<add> throw new Error('sendHeader renamed to writeHead()');
<ide> };
<ide>
<ide>
<ide> function ClientRequest (method, url, headers) {
<ide> }
<ide> this.closeOnFinish = true;
<ide>
<del> this.sendHeaderLines(method + " " + url + " HTTP/1.1\r\n", headers);
<add> this._sendHeaderLines(method + " " + url + " HTTP/1.1\r\n", headers);
<ide> }
<ide> sys.inherits(ClientRequest, OutgoingMessage);
<ide> exports.ClientRequest = ClientRequest;
<ide> function flushMessageQueue (socket, queue) {
<ide> var data = message.output.shift();
<ide> var encoding = message.outputEncodings.shift();
<ide>
<del> socket.send(data, encoding);
<add> socket.write(data, encoding);
<ide> }
<ide>
<ide> if (!message.finished) break; | 2 |
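Putting the renames in this patch together (`sendHeader` to `writeHead`, `sendBody` to `write`, `finish` to `close`), a server written against the new surface would look roughly like the sketch below. `createServer` and `listen` are not shown in the hunks above and are assumed to mirror the classic `http` module.

```js
var http = require('../lib/http2'); // in-tree path, as in the benchmark above

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' }); // was sendHeader()
  res.write('hello world\n');                           // was sendBody()
  res.close();                                          // was finish()
}).listen(8000);
```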
Javascript | Javascript | eliminate port collision | 4a3928e125bb1ab7242e8134f911b6c71551bd75 | <add><path>test/sequential/test-cluster-net-listen-ipv6only-rr.js
<del><path>test/parallel/test-cluster-net-listen-ipv6only-rr.js
<ide> if (cluster.isMaster) {
<ide> workers.set(i, worker);
<ide> }
<ide> } else {
<add>  // As the cluster member has the potential to grab any port
<add>  // from the environment, this can cause a collision when the master
<add>  // obtains the port from the cluster member and tries to listen on it.
<add> // So move this to sequential, and provide a static port.
<add> // Refs: https://github.com/nodejs/node/issues/25813
<ide> net.createServer().listen({
<del> host,
<del> port: 0,
<add> host: host,
<add> port: common.PORT,
<ide> ipv6Only: true,
<ide> }, common.mustCall());
<ide> } | 1 |
Javascript | Javascript | add merge strategy for facebook so far | 2f1e39433462ee3d5f900dbd023e99535791c049 | <ide><path>config/passport.js
<ide> passport.use(new LocalStrategy({ usernameField: 'email' }, function(email, passw
<ide> });
<ide> }));
<ide>
<add>/**
<add> * Sign in with Facebook.
<add> *
<add> * Possible authentication states:
<add> *
<add> * 1. User is logged in.
<add> * a. Already signed in with Facebook before. (MERGE ACCOUNTS, EXISTING ACCOUNT HAS PRECEDENCE)
<add> * b. First time signing in with Facebook. (ADD FACEBOOK ID TO EXISTING USER)
<add> * 2. User is not logged in.
<add> * a. Already signed in with Facebook before. (LOGIN)
<add> * b. First time signing in with Facebook. (CREATE ACCOUNT)
<add> */
<add>
<ide> passport.use(new FacebookStrategy(secrets.facebook, function (req, accessToken, refreshToken, profile, done) {
<ide> if (req.user) {
<del> User.findById(req.user.id, function(err, user) {
<del> user.facebook = profile.id;
<del> user.tokens.push({ kind: 'facebook', accessToken: accessToken });
<del> user.profile.name = user.profile.name || profile.displayName;
<del> user.profile.gender = user.profile.gender || profile._json.gender;
<del> user.profile.picture = user.profile.picture || profile._json.profile_image_url;
<del> user.save(function(err) {
<del> done(err, user);
<del> });
<add> User.findOne({ facebook: profile.id }, function(err, existingUser) {
<add> if (existingUser) {
<add> existingUser.github = existingUser.github || req.user.github;
<add> existingUser.google = existingUser.google || req.user.google;
<add> existingUser.twitter = existingUser.twitter || req.user.twitter;
<add> existingUser.email = existingUser.email || req.user.email;
<add> existingUser.password = existingUser.password || req.user.password;
<add> existingUser.profile = existingUser.profile || req.user.profile;
<add> existingUser.tokens = _.union(existingUser.tokens, req.user.tokens);
<add> existingUser.save(function(err) {
<add> User.remove({ _id: req.user.id }, function(err) {
<add> req.flash('info', { msg: 'Your account has been merged with an existing one.' });
<add> return done(null, existingUser);
<add> });
<add> });
<add> } else {
<add> User.findById(req.user.id, function(err, user) {
<add> user.facebook = profile.id;
<add> user.tokens.push({ kind: 'facebook', accessToken: accessToken });
<add> user.profile.name = user.profile.name || profile.displayName;
<add> user.profile.gender = user.profile.gender || profile._json.gender;
<add> user.profile.picture = user.profile.picture || profile._json.profile_image_url;
<add> user.save(function(err) {
<add> done(err, user);
<add> });
<add> });
<add> }
<ide> });
<ide> } else {
<ide> User.findOne({ facebook: profile.id }, function(err, existingUser) {
<add> console.log(profile);
<ide> if (existingUser) return done(null, existingUser);
<ide> var user = new User();
<ide> user.email = profile._json.email; | 1 |
Python | Python | fix an argument order bug and use keyword args | 3f4e92457bf9b40b6626bf9777832557b97e9767 | <ide><path>libcloud/storage/drivers/backblaze_b2.py
<ide> def upload_object(self, file_path, container, object_name, extra=None,
<ide> iterator = read_in_chunks(iterator=iterator)
<ide> data = exhaust_iterator(iterator=iterator)
<ide>
<del> obj = self._perform_upload(data, container, object_name,
<del> extra, verify_hash, headers)
<add> obj = self._perform_upload(data=data, container=container,
<add> object_name=object_name,
<add> extra=extra,
<add> verify_hash=verify_hash,
<add> headers=headers)
<ide>
<ide> return obj
<ide>
<ide> def upload_object_via_stream(self, iterator, container, object_name,
<ide> iterator = read_in_chunks(iterator=iterator)
<ide> data = exhaust_iterator(iterator=iterator)
<ide>
<del> obj = self._perform_upload(data, container,
<del> object_name, extra, headers)
<add> obj = self._perform_upload(data=data, container=container,
<add> object_name=object_name,
<add> extra=extra,
<add> headers=headers)
<ide>
<ide> return obj
<ide> | 1 |
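The motivation for the keyword-argument half of this change, in a simplified sketch (the signature below is trimmed from the real `_perform_upload`): with positional calls, transposing two same-typed arguments is silent, while keywords bind by name regardless of order.

```python
# Simplified stand-in for the driver's upload helper.
def _perform_upload(data, container, object_name, extra=None, headers=None):
    return container, object_name

# Positional call: swapping object_name and extra would go unnoticed until
# the request fails (or worse, succeeds with the wrong key).
_perform_upload(b"...", "bucket", "key.txt", None, None)

# Keyword call, as in the patch: argument order can no longer drift out of
# sync with the signature.
_perform_upload(data=b"...", container="bucket",
                object_name="key.txt", extra=None, headers=None)
```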
Ruby | Ruby | convert bash test to spec | f531e63949683a7bf33ec96014901e3c6d5eaf61 | <ide><path>Library/Homebrew/test/bash_spec.rb
<add>require "open3"
<add>
<add>RSpec::Matchers.define :have_valid_bash_syntax do
<add> match do |file|
<add> stdout, stderr, status = Open3.capture3("/bin/bash", "-n", file)
<add>
<add> @actual = [file, stderr]
<add>
<add> stdout.empty? && status.success?
<add> end
<add>
<add> failure_message do |(file, stderr)|
<add> "expected that #{file} is a valid Bash file:\n#{stderr}"
<add> end
<add>end
<add>
<add>describe "Bash" do
<add> context "brew" do
<add> subject { HOMEBREW_LIBRARY_PATH.parent.parent/"bin/brew" }
<add> it { is_expected.to have_valid_bash_syntax }
<add> end
<add>
<add> context "every `.sh` file" do
<add> it "has valid bash syntax" do
<add> Pathname.glob("#{HOMEBREW_LIBRARY_PATH}/**/*.sh").each do |path|
<add> relative_path = path.relative_path_from(HOMEBREW_LIBRARY_PATH)
<add> next if relative_path.to_s.start_with?("shims/", "test/", "vendor/")
<add>
<add> expect(path).to have_valid_bash_syntax
<add> end
<add> end
<add> end
<add>
<add> context "Bash completion" do
<add> subject { HOMEBREW_LIBRARY_PATH.parent.parent/"completions/bash/brew" }
<add> it { is_expected.to have_valid_bash_syntax }
<add> end
<add>
<add> context "every shim script" do
<add> it "has valid bash syntax" do
<add> # These have no file extension, but can be identified by their shebang.
<add> (HOMEBREW_LIBRARY_PATH/"shims").find do |path|
<add> next if path.directory?
<add> next if path.symlink?
<add> next unless path.executable?
<add> next unless path.read(12) == "#!/bin/bash\n"
<add>
<add> expect(path).to have_valid_bash_syntax
<add> end
<add> end
<add> end
<add>end
<ide><path>Library/Homebrew/test/bash_test.rb
<del>require "testing_env"
<del>
<del>class BashTests < Homebrew::TestCase
<del> def assert_valid_bash_syntax(file)
<del> return unless file.exist?
<del> output = Utils.popen_read("/bin/bash -n #{file} 2>&1")
<del> assert $?.success?, output
<del> end
<del>
<del> def test_bin_brew
<del> assert_valid_bash_syntax HOMEBREW_LIBRARY_PATH.parent.parent/"bin/brew"
<del> end
<del>
<del> def test_bash_code
<del> Pathname.glob("#{HOMEBREW_LIBRARY_PATH}/**/*.sh").each do |pn|
<del> pn_relative = pn.relative_path_from(HOMEBREW_LIBRARY_PATH)
<del> next if pn_relative.to_s.start_with?("shims/", "test/", "vendor/")
<del> assert_valid_bash_syntax pn
<del> end
<del> end
<del>
<del> def test_bash_completion
<del> script = HOMEBREW_LIBRARY_PATH.parent.parent/"completions/bash/brew"
<del> assert_valid_bash_syntax script
<del> end
<del>
<del> def test_bash_shims
<del> # These have no file extension, but can be identified by their shebang.
<del> (HOMEBREW_LIBRARY_PATH/"shims").find do |pn|
<del> next if pn.directory? || pn.symlink?
<del> next unless pn.executable? && pn.read(12) == "#!/bin/bash\n"
<del> assert_valid_bash_syntax pn
<del> end
<del> end
<del>end | 2 |
PHP | PHP | add missing 'after_commit' attribute | 5cfd28df2d550c4d63417ba54f31c96887cd345b | <ide><path>config/queue.php
<ide> 'table' => 'jobs',
<ide> 'queue' => 'default',
<ide> 'retry_after' => 90,
<add> 'after_commit' => false,
<ide> ],
<ide>
<ide> 'beanstalkd' => [
<ide> 'queue' => 'default',
<ide> 'retry_after' => 90,
<ide> 'block_for' => 0,
<add> 'after_commit' => false,
<ide> ],
<ide>
<ide> 'sqs' => [
<ide> 'queue' => env('SQS_QUEUE', 'default'),
<ide> 'suffix' => env('SQS_SUFFIX'),
<ide> 'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
<add> 'after_commit' => false,
<ide> ],
<ide>
<ide> 'redis' => [
<ide> 'queue' => env('REDIS_QUEUE', 'default'),
<ide> 'retry_after' => 90,
<ide> 'block_for' => null,
<add> 'after_commit' => false,
<ide> ],
<ide>
<ide> ], | 1 |
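What the new key controls, as a sketch: the flag's meaning (hold jobs dispatched inside a transaction until the commit succeeds when set to `true`) is the documented Laravel behavior; the job and model classes here are illustrative assumptions.

```php
<?php
use App\Jobs\ProcessPodcast;   // hypothetical job
use App\Models\Podcast;        // hypothetical model
use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    $podcast = Podcast::create([/* ... */]);

    // With 'after_commit' => false (the default added above), a worker may
    // pick this job up before the transaction commits and not find the row.
    // Setting the connection's 'after_commit' to true defers the dispatch
    // until the commit succeeds.
    ProcessPodcast::dispatch($podcast);
});
```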
Go | Go | fix internal macvlan network to work in swarm | b0bce9159ea9209decf5a1350fb4762b44441cc6 | <ide><path>libnetwork/drivers/macvlan/macvlan_network.go
<ide> func parseNetworkOptions(id string, option options.Generic) (*configuration, err
<ide> return nil, err
<ide> }
<ide> }
<del> // setting the parent to "" will trigger an isolated network dummy parent link
<ide> if val, ok := option[netlabel.Internal]; ok {
<ide> if internal, ok := val.(bool); ok && internal {
<ide> config.Internal = true
<del> // empty --parent= and --internal are handled the same.
<del> config.Parent = ""
<ide> }
<ide> }
<ide> | 1 |
Javascript | Javascript | remove redundant initialization of hasOwnProperty | 2c9fef32db5c9a342a1a60c34217ffc9ae087fbb | <ide><path>packages/react-dom/src/client/ReactDOMComponent.js
<ide> import {
<ide> } from '../events/EventRegistry';
<ide>
<ide> import {canUseDOM} from 'shared/ExecutionEnvironment';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> import {
<ide> getValueForAttribute,
<ide> export function createElement(
<ide> !isCustomComponentTag &&
<ide> Object.prototype.toString.call(domElement) ===
<ide> '[object HTMLUnknownElement]' &&
<del> !Object.prototype.hasOwnProperty.call(warnedUnknownTags, type)
<add> !hasOwnProperty.call(warnedUnknownTags, type)
<ide> ) {
<ide> warnedUnknownTags[type] = true;
<ide> console.error(
<ide><path>packages/react-dom/src/server/ReactDOMServerFormatConfig.js
<ide> import warnValidStyle from '../shared/warnValidStyle';
<ide> import escapeTextForBrowser from './escapeTextForBrowser';
<ide> import hyphenateStyleName from '../shared/hyphenateStyleName';
<ide> import invariant from 'shared/invariant';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide> import sanitizeURL from '../shared/sanitizeURL';
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<ide> const isArray = Array.isArray;
<ide>
<ide> // Per response, global state that is not contextual to the rendering subtree.
<ide><path>packages/react-dom/src/server/ReactPartialRenderer.js
<ide> import warnValidStyle from '../shared/warnValidStyle';
<ide> import {validateProperties as validateARIAProperties} from '../shared/ReactDOMInvalidARIAHook';
<ide> import {validateProperties as validateInputProperties} from '../shared/ReactDOMNullInputValuePropHook';
<ide> import {validateProperties as validateUnknownProperties} from '../shared/ReactDOMUnknownPropertyHook';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> export type ServerOptions = {
<ide> identifierPrefix?: string,
<ide> function flattenOptionChildren(children: mixed): ?string {
<ide> return content;
<ide> }
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<ide> const STYLE = 'style';
<ide> const RESERVED_PROPS = {
<ide> children: null,
<ide><path>packages/react-dom/src/shared/DOMProperty.js
<ide> */
<ide>
<ide> import {enableFilterEmptyStringAttributesDOM} from 'shared/ReactFeatureFlags';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> type PropertyType = 0 | 1 | 2 | 3 | 4 | 5 | 6;
<ide>
<ide> export const VALID_ATTRIBUTE_NAME_REGEX = new RegExp(
<ide> '^[' + ATTRIBUTE_NAME_START_CHAR + '][' + ATTRIBUTE_NAME_CHAR + ']*$',
<ide> );
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<ide> const illegalAttributeNameCache = {};
<ide> const validatedAttributeNameCache = {};
<ide>
<ide><path>packages/react-dom/src/shared/ReactDOMInvalidARIAHook.js
<ide> import {ATTRIBUTE_NAME_CHAR} from './DOMProperty';
<ide> import isCustomComponent from './isCustomComponent';
<ide> import validAriaProperties from './validAriaProperties';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> const warnedProperties = {};
<ide> const rARIA = new RegExp('^(aria)-[' + ATTRIBUTE_NAME_CHAR + ']*$');
<ide> const rARIACamel = new RegExp('^(aria)[A-Z][' + ATTRIBUTE_NAME_CHAR + ']*$');
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> function validateProperty(tagName, name) {
<ide> if (__DEV__) {
<ide> if (hasOwnProperty.call(warnedProperties, name) && warnedProperties[name]) {
<ide><path>packages/react-dom/src/shared/ReactDOMUnknownPropertyHook.js
<ide> import {
<ide> } from './DOMProperty';
<ide> import isCustomComponent from './isCustomComponent';
<ide> import possibleStandardNames from './possibleStandardNames';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> let validateProperty = () => {};
<ide>
<ide> if (__DEV__) {
<ide> const warnedProperties = {};
<del> const hasOwnProperty = Object.prototype.hasOwnProperty;
<ide> const EVENT_NAME_REGEX = /^on./;
<ide> const INVALID_EVENT_NAME_REGEX = /^on[^A-Z]/;
<ide> const rARIA = new RegExp('^(aria)-[' + ATTRIBUTE_NAME_CHAR + ']*$');
<ide><path>packages/react-server-dom-relay/src/ReactFlightDOMRelayServerHostConfig.js
<ide> import type {Request, ReactModel} from 'react-server/src/ReactFlightServer';
<ide>
<ide> import JSResourceReference from 'JSResourceReference';
<ide>
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<add>
<ide> export type ModuleReference<T> = JSResourceReference<T>;
<ide>
<ide> import type {
<ide> export function processErrorChunk(
<ide> ];
<ide> }
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> function convertModelToJSON(
<ide> request: Request,
<ide> parent: {+[key: string]: ReactModel} | $ReadOnlyArray<ReactModel>,
<ide><path>packages/react-server-native-relay/src/ReactFlightNativeRelayServerHostConfig.js
<ide> import type {RowEncoding, JSONValue} from './ReactFlightNativeRelayProtocol';
<ide>
<ide> import type {Request, ReactModel} from 'react-server/src/ReactFlightServer';
<del>
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide> import JSResourceReferenceImpl from 'JSResourceReferenceImpl';
<ide>
<ide> export type ModuleReference<T> = JSResourceReferenceImpl<T>;
<ide> export function processErrorChunk(
<ide> ];
<ide> }
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> function convertModelToJSON(
<ide> request: Request,
<ide> parent: {+[key: string]: ReactModel} | $ReadOnlyArray<ReactModel>,
<ide><path>packages/react/src/ReactElement.js
<ide> import getComponentNameFromType from 'shared/getComponentNameFromType';
<ide> import invariant from 'shared/invariant';
<ide> import {REACT_ELEMENT_TYPE} from 'shared/ReactSymbols';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> import ReactCurrentOwner from './ReactCurrentOwner';
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> const RESERVED_PROPS = {
<ide> key: true,
<ide> ref: true,
<ide><path>packages/react/src/ReactElementValidator.js
<ide> import {
<ide> } from './ReactElement';
<ide> import {setExtraStackFrame} from './ReactDebugCurrentFrame';
<ide> import {describeUnknownElementTypeFrameInDEV} from 'shared/ReactComponentStackFrame';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> function setCurrentlyValidatingElement(element) {
<ide> if (__DEV__) {
<ide> if (__DEV__) {
<ide> propTypesMisspellWarningShown = false;
<ide> }
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> function getDeclarationErrorAddendum() {
<ide> if (ReactCurrentOwner.current) {
<ide> const name = getComponentNameFromType(ReactCurrentOwner.current.type);
<ide><path>packages/react/src/jsx/ReactJSXElement.js
<ide>
<ide> import getComponentNameFromType from 'shared/getComponentNameFromType';
<ide> import ReactSharedInternals from 'shared/ReactSharedInternals';
<del>
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide> import {REACT_ELEMENT_TYPE} from 'shared/ReactSymbols';
<ide>
<ide> const ReactCurrentOwner = ReactSharedInternals.ReactCurrentOwner;
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> const RESERVED_PROPS = {
<ide> key: true,
<ide> ref: true,
<ide><path>packages/react/src/jsx/ReactJSXElementValidator.js
<ide> import {
<ide> REACT_ELEMENT_TYPE,
<ide> } from 'shared/ReactSymbols';
<ide> import {warnAboutSpreadingKeyToJSX} from 'shared/ReactFeatureFlags';
<del>
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide> import {jsxDEV} from './ReactJSXElement';
<ide>
<ide> import {describeUnknownElementTypeFrameInDEV} from 'shared/ReactComponentStackFrame';
<ide> if (__DEV__) {
<ide> propTypesMisspellWarningShown = false;
<ide> }
<ide>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<del>
<ide> /**
<ide> * Verifies the object is a ReactElement.
<ide> * See https://reactjs.org/docs/react-api.html#isvalidelement
<ide><path>packages/shared/checkPropTypes.js
<ide> const loggedTypeFailures = {};
<ide> import {describeUnknownElementTypeFrameInDEV} from 'shared/ReactComponentStackFrame';
<ide>
<ide> import ReactSharedInternals from 'shared/ReactSharedInternals';
<add>import hasOwnProperty from 'shared/hasOwnProperty';
<ide>
<ide> const ReactDebugCurrentFrame = ReactSharedInternals.ReactDebugCurrentFrame;
<ide>
<ide> export default function checkPropTypes(
<ide> ): void {
<ide> if (__DEV__) {
<ide> // $FlowFixMe This is okay but Flow doesn't know it.
<del> const has = Function.call.bind(Object.prototype.hasOwnProperty);
<add> const has = Function.call.bind(hasOwnProperty);
<ide> for (const typeSpecName in typeSpecs) {
<ide> if (has(typeSpecs, typeSpecName)) {
<ide> let error;
<ide><path>packages/shared/hasOwnProperty.js
<add>/**
<add> * Copyright (c) Facebook, Inc. and its affiliates.
<add> *
<add> * This source code is licensed under the MIT license found in the
<add> * LICENSE file in the root directory of this source tree.
<add> *
<add> * @flow
<add> */
<add>
<add>const hasOwnProperty = Object.prototype.hasOwnProperty;
<add>
<add>export default hasOwnProperty;
<ide><path>packages/shared/shallowEqual.js
<ide> */
<ide>
<ide> import is from './objectIs';
<del>
<del>const hasOwnProperty = Object.prototype.hasOwnProperty;
<add>import hasOwnProperty from './hasOwnProperty';
<ide>
<ide> /**
<ide> * Performs equality by iterating through keys on an object and returning false | 15 |
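Why a shared cached reference is worth a module: calling `hasOwnProperty` off an arbitrary object breaks for null-prototype objects and for objects that shadow the method. A sketch, using the internal `shared/hasOwnProperty` alias introduced by the diff:

```js
import hasOwnProperty from 'shared/hasOwnProperty';

const bare = Object.create(null);
bare.key = 1;
// bare.hasOwnProperty('key') would throw: the method does not exist here.
hasOwnProperty.call(bare, 'key'); // true

const shadowed = { hasOwnProperty: () => true };
shadowed.hasOwnProperty('nope');       // true (lies)
hasOwnProperty.call(shadowed, 'nope'); // false (trustworthy)
```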
PHP | PHP | add additional information about complex type data | f150d161907910c2ab0679cececc917f32e01e34 | <ide><path>src/View/Input/MultiCheckbox.php
<ide>
<ide> use Cake\Utility\Inflector;
<ide>
<add>/**
<add> * Input widget class for generating multiple checkboxes.
<add> *
<add> */
<ide> class MultiCheckbox {
<ide>
<add>/**
<add> * Template instance to use.
<add> *
<add> * @var Cake\View\StringTemplate
<add> */
<ide> protected $_templates;
<ide>
<ide> /**
<ide> public function __construct($templates) {
<ide> * `[]` will be appended to the name.
<ide> * - `options` An array of options to create checkboxes out of.
<ide> * - `val` Either a string/integer or array of values that should be
<del> * checked.
<add> * checked. Can also be a complex options set.
<ide> * - `disabled` Either a boolean or an array of checkboxes to disable.
<ide> * - `escape` Set to false to disable HTML escaping.
<ide> * - `options` An associative array of value=>labels to generate options for.
<ide> *
<add> * ### Options format
<add> *
<add> * The `options` option can take a variety of data formats depending on
<add> * the complexity of HTML you want generated.
<add> *
<add> * You can generate simple options using a basic associative array:
<add> *
<add> * {{{
<add> * 'options' => ['elk' => 'Elk', 'beaver' => 'Beaver']
<add> * }}}
<add> *
<add> * If you need to define additional attributes on your option elements
<add> * you can use the complex form for options:
<add> *
<add> * {{{
<add> * 'options' => [
<add> * ['value' => 'elk', 'text' => 'Elk', 'data-foo' => 'bar'],
<add> * ]
<add> * }}}
<add> *
<add> * This form **requires** that both the `value` and `text` keys be defined.
<add> * If either is not set options will not be generated correctly.
<add> *
<ide> * @param array $data
<ide> * @return string
<ide> */ | 1 |
Ruby | Ruby | use appropriate type for `rc` option | 92dae5dfda9b72e748bb00f38603e8fda403089a | <ide><path>railties/lib/rails/commands/plugin/plugin_command.rb
<ide> def self.banner(*) # :nodoc:
<ide> "#{executable} new [options]"
<ide> end
<ide>
<del> class_option :rc, type: :boolean, default: File.join("~", ".railsrc"),
<add> class_option :rc, type: :string, default: File.join("~", ".railsrc"),
<ide> desc: "Initialize the plugin command with previous defaults. Uses .railsrc in your home directory by default."
<ide>
<ide> class_option :no_rc, desc: "Skip evaluating .railsrc." | 1 |
Javascript | Javascript | add readonly prop to textinput component | de75a7a22eebbe6b7106377bdd697a2d779b91b0 | <ide><path>Libraries/Components/TextInput/TextInput.js
<ide> export type Props = $ReadOnly<{|
<ide> */
<ide> placeholderTextColor?: ?ColorValue,
<ide>
<add> /** `readOnly` works like the `readonly` attribute in HTML.
<add> * If `true`, text is not editable. The default value is `false`.
<add> * See https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/readonly
<add> * for more details.
<add> */
<add> readOnly?: ?boolean,
<add>
<ide> /**
<ide> * Determines how the return key should look. On Android you can also use
<ide> * `returnKeyLabel`.
<ide> const ExportedForwardRef: React.AbstractComponent<
<ide> allowFontScaling = true,
<ide> rejectResponderTermination = true,
<ide> underlineColorAndroid = 'transparent',
<add> readOnly,
<add> editable,
<ide> ...restProps
<ide> },
<ide> forwardedRef: ReactRefSetter<
<ide> const ExportedForwardRef: React.AbstractComponent<
<ide> allowFontScaling={allowFontScaling}
<ide> rejectResponderTermination={rejectResponderTermination}
<ide> underlineColorAndroid={underlineColorAndroid}
<add> editable={readOnly !== undefined ? !readOnly : editable}
<ide> {...restProps}
<ide> forwardedRef={forwardedRef}
<ide> />
<ide><path>packages/rn-tester/js/examples/TextInput/TextInputExample.android.js
<ide> const styles = StyleSheet.create({
<ide> singleLineWithHeightTextInput: {
<ide> height: 30,
<ide> },
<add> default: {
<add> borderWidth: StyleSheet.hairlineWidth,
<add> borderColor: '#0f0f0f',
<add> flex: 1,
<add> fontSize: 13,
<add> padding: 4,
<add> },
<ide> });
<ide>
<ide> exports.title = 'TextInput';
<ide> exports.examples = ([
<ide> );
<ide> },
<ide> },
<add> {
<add> title: 'Editable and Read only',
<add> render: function (): React.Node {
<add> return (
<add> <View>
<add> <TextInput
<add> placeholder="editable text input using editable prop"
<add> style={styles.default}
<add> editable
<add> />
<add> <TextInput
<add> placeholder="uneditable text input using editable prop"
<add> style={styles.default}
<add> editable={false}
<add> />
<add> <TextInput
<add> placeholder="editable text input using readOnly prop"
<add> style={styles.default}
<add> readOnly={false}
<add> />
<add> <TextInput
<add> placeholder="uneditable text input using readOnly prop"
<add> style={styles.default}
<add> readOnly
<add> />
<add> </View>
<add> );
<add> },
<add> },
<ide> {
<ide> title: 'Fixed number of lines',
<ide> platform: 'android',
<ide><path>packages/rn-tester/js/examples/TextInput/TextInputExample.ios.js
<ide> exports.examples = ([
<ide> );
<ide> },
<ide> },
<add> {
<add> title: 'Editable and Read only',
<add> render: function (): React.Node {
<add> return (
<add> <View>
<add> <TextInput
<add> placeholder="editable text input using editable prop"
<add> style={styles.default}
<add> editable
<add> />
<add> <TextInput
<add> placeholder="uneditable text input using editable prop"
<add> style={styles.default}
<add> editable={false}
<add> />
<add> <TextInput
<add> placeholder="editable text input using readOnly prop"
<add> style={styles.default}
<add> readOnly={false}
<add> />
<add> <TextInput
<add> placeholder="uneditable text input using readOnly prop"
<add> style={styles.default}
<add> readOnly
<add> />
<add> </View>
<add> );
<add> },
<add> },
<ide> {
<ide> title: 'TextInput Intrinsic Size',
<ide> render: function (): React.Node { | 3 |
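
A hedged usage sketch of the new prop; the component below is illustrative. Note from the mapping added above that a defined `readOnly` takes precedence over `editable`:

```js
import React from 'react';
import {TextInput} from 'react-native';

// Illustrative only: `readOnly` is forwarded as `editable={false}`, and a
// defined `readOnly` wins when both props are supplied.
function FrozenField() {
  return (
    <TextInput readOnly defaultValue="selectable, but not editable" />
  );
}

export default FrozenField;
```
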
Python | Python | add html5lib to setup.py to fix six error (see ) | 002ee80ddf1e3616e9d957abbdab76180d45aa27 | <ide><path>setup.py
<ide> def setup_package():
<ide> 'thinc>=6.10.1,<6.11.0',
<ide> 'plac<1.0.0,>=0.9.6',
<ide> 'six',
<add> 'html5lib==1.0b8',
<ide> 'pathlib',
<ide> 'ujson>=1.35',
<ide> 'dill>=0.2,<0.3', | 1 |
PHP | PHP | fix issues with treebehavior and nested deletes | d70730d72200862806209a133b43a2f099e0fa8c | <ide><path>lib/Cake/Model/Behavior/TreeBehavior.php
<ide> class TreeBehavior extends ModelBehavior {
<ide> *
<ide> * @var array
<ide> */
<del> protected $_deletedRow = null;
<add> protected $_deletedRow = array();
<ide>
<ide> /**
<ide> * Initiate Tree behavior
<ide> public function beforeDelete(Model $Model, $cascade = true) {
<ide> 'fields' => array($Model->escapeField($left), $Model->escapeField($right)),
<ide> 'recursive' => -1));
<ide> if ($data) {
<del> $this->_deletedRow = current($data);
<add> $this->_deletedRow[$Model->alias] = current($data);
<ide> }
<ide> return true;
<ide> }
<ide> public function beforeDelete(Model $Model, $cascade = true) {
<ide> */
<ide> public function afterDelete(Model $Model) {
<ide> extract($this->settings[$Model->alias]);
<del> $data = $this->_deletedRow;
<del> $this->_deletedRow = null;
<add> $data = $this->_deletedRow[$Model->alias];
<add> $this->_deletedRow[$Model->alias] = null;
<ide>
<ide> if (!$data[$right] || !$data[$left]) {
<ide> return true; | 1 |
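
A hedged illustration of the failure mode the per-alias keying fixes; the fragment is shown as it would appear inside the behavior, and the alias names and boundary values are made up:

```php
<?php
// With a single shared property, a delete that cascaded into a second tree
// model overwrote the row cached in beforeDelete() before afterDelete() ran
// for the first model, so the wrong gap was closed. Keyed storage keeps one
// pending row per model alias.
$this->_deletedRow['Category'] = ['lft' => 2, 'rght' => 5];
$this->_deletedRow['Menu']     = ['lft' => 4, 'rght' => 7]; // no longer clobbers Category
```
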
Java | Java | avoid exceptions when evaluating validation hints | e7cbe23771a18f959d351c56018ece490320796f | <ide><path>spring-context/src/main/java/org/springframework/validation/annotation/ValidationAnnotationUtils.java
<add>/*
<add> * Copyright 2002-2021 the original author or authors.
<add> *
<add> * Licensed under the Apache License, Version 2.0 (the "License");
<add> * you may not use this file except in compliance with the License.
<add> * You may obtain a copy of the License at
<add> *
<add> * https://www.apache.org/licenses/LICENSE-2.0
<add> *
<add> * Unless required by applicable law or agreed to in writing, software
<add> * distributed under the License is distributed on an "AS IS" BASIS,
<add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add> * See the License for the specific language governing permissions and
<add> * limitations under the License.
<add> */
<add>
<add>package org.springframework.validation.annotation;
<add>
<add>import java.lang.annotation.Annotation;
<add>
<add>import org.springframework.core.annotation.AnnotationUtils;
<add>import org.springframework.lang.Nullable;
<add>
<add>/**
<add> * Utility class for handling validation annotations.
<add> * Mainly for internal use within the framework.
<add> *
<add> * @author Christoph Dreis
<add> * @since 5.3.7
<add> */
<add>public abstract class ValidationAnnotationUtils {
<add>
<add> /**
<add> * Determine any validation hints by the given annotation.
<add> * <p>This implementation checks for {@code @javax.validation.Valid},
<add> * Spring's {@link org.springframework.validation.annotation.Validated},
<add> * and custom annotations whose name starts with "Valid".
<add> * @param ann the annotation (potentially a validation annotation)
<add> * @return the validation hints to apply (possibly an empty array),
<add> * or {@code null} if this annotation does not trigger any validation
<add> * @since 5.3.7
<add> */
<add> @Nullable
<add> public static Object[] determineValidationHints(Annotation ann) {
<add> Class<? extends Annotation> annotationType = ann.annotationType();
<add> String annotationName = annotationType.getName();
<add> if ("javax.validation.Valid".equals(annotationName)) {
<add> return new Object[0];
<add> }
<add> Validated validatedAnn = AnnotationUtils.getAnnotation(ann, Validated.class);
<add> if (validatedAnn != null) {
<add> Object hints = validatedAnn.value();
<add> return convertValidationHints(hints);
<add> }
<add> if (annotationType.getSimpleName().startsWith("Valid")) {
<add> Object hints = AnnotationUtils.getValue(ann);
<add> return convertValidationHints(hints);
<add> }
<add> return null;
<add> }
<add>
<add> private static Object[] convertValidationHints(@Nullable Object hints) {
<add> if (hints == null) {
<add> return new Object[0];
<add> }
<add> return (hints instanceof Object[] ? (Object[]) hints : new Object[]{hints});
<add> }
<add>
<add>}
<ide><path>spring-web/src/main/java/org/springframework/web/method/annotation/ModelAttributeMethodProcessor.java
<ide> /*
<del> * Copyright 2002-2020 the original author or authors.
<add> * Copyright 2002-2021 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> import org.springframework.beans.BeanUtils;
<ide> import org.springframework.beans.TypeMismatchException;
<ide> import org.springframework.core.MethodParameter;
<del>import org.springframework.core.annotation.AnnotationUtils;
<ide> import org.springframework.lang.Nullable;
<ide> import org.springframework.util.Assert;
<ide> import org.springframework.util.StringUtils;
<ide> import org.springframework.validation.Errors;
<ide> import org.springframework.validation.SmartValidator;
<ide> import org.springframework.validation.Validator;
<del>import org.springframework.validation.annotation.Validated;
<add>import org.springframework.validation.annotation.ValidationAnnotationUtils;
<ide> import org.springframework.web.bind.WebDataBinder;
<ide> import org.springframework.web.bind.annotation.ModelAttribute;
<ide> import org.springframework.web.bind.support.WebDataBinderFactory;
<ide> else if (StringUtils.startsWithIgnoreCase(request.getHeader("Content-Type"), "mu
<ide> */
<ide> protected void validateIfApplicable(WebDataBinder binder, MethodParameter parameter) {
<ide> for (Annotation ann : parameter.getParameterAnnotations()) {
<del> Object[] validationHints = determineValidationHints(ann);
<add> Object[] validationHints = ValidationAnnotationUtils.determineValidationHints(ann);
<ide> if (validationHints != null) {
<ide> binder.validate(validationHints);
<ide> break;
<ide> protected void validateValueIfApplicable(WebDataBinder binder, MethodParameter p
<ide> Class<?> targetType, String fieldName, @Nullable Object value) {
<ide>
<ide> for (Annotation ann : parameter.getParameterAnnotations()) {
<del> Object[] validationHints = determineValidationHints(ann);
<add> Object[] validationHints = ValidationAnnotationUtils.determineValidationHints(ann);
<ide> if (validationHints != null) {
<ide> for (Validator validator : binder.getValidators()) {
<ide> if (validator instanceof SmartValidator) {
<ide> protected void validateValueIfApplicable(WebDataBinder binder, MethodParameter p
<ide> }
<ide> }
<ide>
<del> /**
<del> * Determine any validation triggered by the given annotation.
<del> * @param ann the annotation (potentially a validation annotation)
<del> * @return the validation hints to apply (possibly an empty array),
<del> * or {@code null} if this annotation does not trigger any validation
<del> * @since 5.1
<del> */
<del> @Nullable
<del> private Object[] determineValidationHints(Annotation ann) {
<del> Validated validatedAnn = AnnotationUtils.getAnnotation(ann, Validated.class);
<del> if (validatedAnn != null || ann.annotationType().getSimpleName().startsWith("Valid")) {
<del> Object hints = (validatedAnn != null ? validatedAnn.value() : AnnotationUtils.getValue(ann));
<del> if (hints == null) {
<del> return new Object[0];
<del> }
<del> return (hints instanceof Object[] ? (Object[]) hints : new Object[] {hints});
<del> }
<del> return null;
<del> }
<del>
<ide> /**
<ide> * Whether to raise a fatal bind exception on validation errors.
<ide> * <p>The default implementation delegates to {@link #isBindExceptionRequired(MethodParameter)}.
<ide><path>spring-webflux/src/main/java/org/springframework/web/reactive/result/method/annotation/AbstractMessageReaderArgumentResolver.java
<ide> import org.springframework.core.ReactiveAdapter;
<ide> import org.springframework.core.ReactiveAdapterRegistry;
<ide> import org.springframework.core.ResolvableType;
<del>import org.springframework.core.annotation.AnnotationUtils;
<ide> import org.springframework.core.codec.DecodingException;
<ide> import org.springframework.core.codec.Hints;
<ide> import org.springframework.core.io.buffer.DataBuffer;
<ide> import org.springframework.lang.Nullable;
<ide> import org.springframework.util.Assert;
<ide> import org.springframework.validation.Validator;
<del>import org.springframework.validation.annotation.Validated;
<add>import org.springframework.validation.annotation.ValidationAnnotationUtils;
<ide> import org.springframework.web.bind.support.WebExchangeBindException;
<ide> import org.springframework.web.bind.support.WebExchangeDataBinder;
<ide> import org.springframework.web.reactive.BindingContext;
<ide> private ServerWebInputException handleMissingBody(MethodParameter parameter) {
<ide> private Object[] extractValidationHints(MethodParameter parameter) {
<ide> Annotation[] annotations = parameter.getParameterAnnotations();
<ide> for (Annotation ann : annotations) {
<del> Validated validatedAnn = AnnotationUtils.getAnnotation(ann, Validated.class);
<del> if (validatedAnn != null || ann.annotationType().getSimpleName().startsWith("Valid")) {
<del> Object hints = (validatedAnn != null ? validatedAnn.value() : AnnotationUtils.getValue(ann));
<del> return (hints instanceof Object[] ? (Object[]) hints : new Object[] {hints});
<add> Object[] hints = ValidationAnnotationUtils.determineValidationHints(ann);
<add> if (hints != null) {
<add> return hints;
<ide> }
<ide> }
<ide> return null;
<ide><path>spring-webflux/src/main/java/org/springframework/web/reactive/result/method/annotation/ModelAttributeMethodArgumentResolver.java
<ide> /*
<del> * Copyright 2002-2020 the original author or authors.
<add> * Copyright 2002-2021 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> import org.springframework.core.ReactiveAdapter;
<ide> import org.springframework.core.ReactiveAdapterRegistry;
<ide> import org.springframework.core.ResolvableType;
<del>import org.springframework.core.annotation.AnnotationUtils;
<ide> import org.springframework.lang.Nullable;
<ide> import org.springframework.ui.Model;
<ide> import org.springframework.util.Assert;
<ide> import org.springframework.util.ClassUtils;
<ide> import org.springframework.validation.BindingResult;
<ide> import org.springframework.validation.Errors;
<del>import org.springframework.validation.annotation.Validated;
<add>import org.springframework.validation.annotation.ValidationAnnotationUtils;
<ide> import org.springframework.web.bind.annotation.ModelAttribute;
<ide> import org.springframework.web.bind.support.WebExchangeBindException;
<ide> import org.springframework.web.bind.support.WebExchangeDataBinder;
<ide> private boolean hasErrorsArgument(MethodParameter parameter) {
<ide>
<ide> private void validateIfApplicable(WebExchangeDataBinder binder, MethodParameter parameter) {
<ide> for (Annotation ann : parameter.getParameterAnnotations()) {
<del> Validated validatedAnn = AnnotationUtils.getAnnotation(ann, Validated.class);
<del> if (validatedAnn != null || ann.annotationType().getSimpleName().startsWith("Valid")) {
<del> Object hints = (validatedAnn != null ? validatedAnn.value() : AnnotationUtils.getValue(ann));
<del> if (hints != null) {
<del> Object[] validationHints = (hints instanceof Object[] ? (Object[]) hints : new Object[] {hints});
<del> binder.validate(validationHints);
<del> }
<del> else {
<del> binder.validate();
<del> }
<add> Object[] validationHints = ValidationAnnotationUtils.determineValidationHints(ann);
<add> if (validationHints != null) {
<add> binder.validate(validationHints);
<ide> }
<ide> }
<ide> }
<ide><path>spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/AbstractMessageConverterMethodArgumentResolver.java
<ide>
<ide> import org.springframework.core.MethodParameter;
<ide> import org.springframework.core.ResolvableType;
<del>import org.springframework.core.annotation.AnnotationUtils;
<ide> import org.springframework.core.log.LogFormatUtils;
<ide> import org.springframework.http.HttpHeaders;
<ide> import org.springframework.http.HttpInputMessage;
<ide> import org.springframework.util.Assert;
<ide> import org.springframework.util.StreamUtils;
<ide> import org.springframework.validation.Errors;
<del>import org.springframework.validation.annotation.Validated;
<add>import org.springframework.validation.annotation.ValidationAnnotationUtils;
<ide> import org.springframework.web.HttpMediaTypeNotSupportedException;
<ide> import org.springframework.web.bind.WebDataBinder;
<ide> import org.springframework.web.context.request.NativeWebRequest;
<ide> protected ServletServerHttpRequest createInputMessage(NativeWebRequest webReques
<ide> protected void validateIfApplicable(WebDataBinder binder, MethodParameter parameter) {
<ide> Annotation[] annotations = parameter.getParameterAnnotations();
<ide> for (Annotation ann : annotations) {
<del> Validated validatedAnn = AnnotationUtils.getAnnotation(ann, Validated.class);
<del> if (validatedAnn != null || ann.annotationType().getSimpleName().startsWith("Valid")) {
<del> Object hints = (validatedAnn != null ? validatedAnn.value() : AnnotationUtils.getValue(ann));
<del> Object[] validationHints = (hints instanceof Object[] ? (Object[]) hints : new Object[] {hints});
<add> Object[] validationHints = ValidationAnnotationUtils.determineValidationHints(ann);
<add> if (validationHints != null) {
<ide> binder.validate(validationHints);
<ide> break;
<ide> } | 5 |
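
A minimal sketch of the new utility in isolation, mirroring the rewritten call sites above; the surrounding class is assumed for illustration:

```java
import java.lang.annotation.Annotation;

import org.springframework.core.MethodParameter;
import org.springframework.validation.DataBinder;
import org.springframework.validation.annotation.ValidationAnnotationUtils;

final class ValidationHintsExample {

	// The first validation-triggering annotation wins; null means
	// "this annotation does not request validation at all".
	static void validateIfApplicable(DataBinder binder, MethodParameter parameter) {
		for (Annotation ann : parameter.getParameterAnnotations()) {
			Object[] hints = ValidationAnnotationUtils.determineValidationHints(ann);
			if (hints != null) {
				binder.validate(hints); // an empty array selects the default group
				break;
			}
		}
	}
}
```
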
Text | Text | add forgotten changelog entry for | bfc8ffa2327ac6e3949507e71dba04dd3a6c9131 | <ide><path>activerecord/CHANGELOG.md
<add>* Handle single quotes in PostgreSQL default column values.
<add> Fixes #10881.
<add>
<add> *Dylan Markow*
<add>
<ide> * Log the sql that is actually sent to the database.
<ide>
<ide> If I have a query that produces sql | 1 |
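
A hedged sketch of the kind of schema this entry concerns; the table and column names are made up:

```ruby
# PostgreSQL reports string defaults with single quotes doubled, e.g.
# 'O''Reilly'::character varying, and the adapter must un-escape them when
# reading the column back. The migration below is illustrative only.
create_table :publishers do |t|
  t.string :name, default: "O'Reilly"
end
```
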
PHP | PHP | fix strict typing errors | e2b8505cda8f766a12f1b435d0f31b65cb976c9c | <ide><path>src/Console/ConsoleInputArgument.php
<ide> public function xml(SimpleXMLElement $parent): SimpleXMLElement
<ide> $option = $parent->addChild('argument');
<ide> $option->addAttribute('name', $this->_name);
<ide> $option->addAttribute('help', $this->_help);
<del> $option->addAttribute('required', (int)$this->isRequired());
<add> $option->addAttribute('required', (string)(int)$this->isRequired());
<ide> $choices = $option->addChild('choices');
<ide> foreach ($this->_choices as $valid) {
<ide> $choices->addChild('choice', $valid);
<ide><path>src/Console/ConsoleInputOption.php
<ide> public function xml(SimpleXMLElement $parent): SimpleXMLElement
<ide> }
<ide> $option->addAttribute('short', $short);
<ide> $option->addAttribute('help', $this->_help);
<del> $option->addAttribute('boolean', (int)$this->_boolean);
<add> $option->addAttribute('boolean', (string)(int)$this->_boolean);
<ide> $option->addChild('default', $this->_default);
<ide> $choices = $option->addChild('choices');
<ide> foreach ($this->_choices as $valid) {
<ide><path>src/Console/ConsoleOptionParser.php
<ide> public function addOption($name, array $options = []): self
<ide> } else {
<ide> $defaults = [
<ide> 'name' => $name,
<del> 'short' => null,
<add> 'short' => '',
<ide> 'help' => '',
<del> 'default' => null,
<add> 'default' => '',
<ide> 'boolean' => false,
<ide> 'choices' => [],
<ide> ];
<ide><path>src/Routing/Route/Route.php
<ide> public function match(array $url, array $context = []): ?string
<ide> ) {
<ide> $hostOptions += $context;
<ide>
<del> if (getservbyname($hostOptions['_scheme'], 'tcp') === $hostOptions['_port']) {
<add> if ($hostOptions['_scheme'] &&
<add> getservbyname($hostOptions['_scheme'], 'tcp') === $hostOptions['_port']
<add> ) {
<ide> unset($hostOptions['_port']);
<ide> }
<ide> }
<ide><path>tests/TestCase/Routing/Middleware/RoutingMiddlewareTest.php
<ide> public function testCacheRoutes()
<ide> public function testCacheNotUsedIfCacheDisabled()
<ide> {
<ide> $cacheConfigName = '_cake_router_';
<add> Cache::drop($cacheConfigName);
<ide> Cache::disable();
<ide> Cache::setConfig($cacheConfigName, [
<ide> 'engine' => 'File',
<ide> public function testCacheNotUsedIfCacheDisabled()
<ide> public function testCacheConfigNotFound()
<ide> {
<ide> $this->expectException(\InvalidArgumentException::class);
<del> $this->expectExceptionMessage('The "notfound" cache configuration does not exist');
<add> $this->expectExceptionMessage('The "notfound" cache configuration does not exist.');
<ide>
<ide> Cache::setConfig('_cake_router_', [
<ide> 'engine' => 'File',
<ide><path>tests/TestCase/Routing/Route/DashedRouteTest.php
<ide> public function testMatchBasic()
<ide> $result = $route->match([
<ide> 'controller' => 'MyPosts',
<ide> 'action' => 'myView',
<del> 'id' => 1,
<add> 'id' => '1',
<ide> 'slug' => 'the-slug',
<ide> ]);
<ide> $this->assertEquals('/my-posts/my-view/the-slug-1', $result);
<ide><path>tests/TestCase/Routing/Route/InflectedRouteTest.php
<ide> public function testMatchBasic()
<ide> $result = $route->match([
<ide> 'controller' => 'MyPosts',
<ide> 'action' => 'my_view',
<del> 'id' => 1,
<add> 'id' => '1',
<ide> 'slug' => 'the-slug',
<ide> ]);
<ide> $this->assertEquals('/my_posts/my_view/the-slug-1', $result);
<ide><path>tests/TestCase/Routing/Route/RouteTest.php
<ide> public function testRouteCompileSmallPlaceholders()
<ide> $result = $route->match([
<ide> 'controller' => 'Fighters',
<ide> 'action' => 'move',
<del> 'id' => 123,
<del> 'x' => 8,
<del> 'y' => 42,
<add> 'id' => '123',
<add> 'x' => '8',
<add> 'y' => '42',
<ide> ]);
<ide> $this->assertEquals('/fighters/123/move/8/42', $result);
<ide> }
<ide> public function testRouteCompileBraces()
<ide> $result = $route->match([
<ide> 'controller' => 'Fighters',
<ide> 'action' => 'move',
<del> 'id' => 123,
<del> 'x' => 8,
<del> 'y' => 42,
<add> 'id' => '123',
<add> 'x' => '8',
<add> 'y' => '42',
<ide> ]);
<ide> $this->assertEquals('/fighters/123/move/8/42', $result);
<ide>
<ide> public function testRouteCompileBraces()
<ide> $result = $route->match([
<ide> 'controller' => 'Images',
<ide> 'action' => 'view',
<del> 'id' => 123,
<del> 'x' => 8,
<del> 'y' => 42,
<add> 'id' => '123',
<add> 'x' => '8',
<add> 'y' => '42',
<ide> ]);
<ide> $this->assertEquals('/images/123/8x42', $result);
<ide> }
<ide> public function testRouteCompileMixedPlaceholders()
<ide> $result = $route->match([
<ide> 'controller' => 'Fighters',
<ide> 'action' => 'move',
<del> 'id' => 123,
<del> 'x' => 8,
<del> 'y' => 9,
<add> 'id' => '123',
<add> 'x' => '8',
<add> 'y' => '9',
<ide> ]);
<ide> $this->assertEquals('/fighters/123/move/8/:y?y=9', $result);
<ide> }
<ide><path>tests/TestCase/Routing/RouteBuilderTest.php
<ide> public function testResourcesInScope()
<ide> 'controller' => 'Articles',
<ide> 'action' => 'edit',
<ide> '_method' => 'PUT',
<del> 'id' => 99,
<add> 'id' => '99',
<ide> ]);
<ide> $this->assertEquals('/api/articles/99', $url);
<ide>
<ide> public function testResourcesInScope()
<ide> 'action' => 'edit',
<ide> '_method' => 'PUT',
<ide> '_ext' => 'json',
<del> 'id' => 99,
<add> 'id' => '99',
<ide> ]);
<ide> $this->assertEquals('/api/articles/99.json', $url);
<ide> }
<ide><path>tests/TestCase/Routing/RouterTest.php
<ide> public function testGenerateUrlResourceRoute()
<ide> 'controller' => 'Posts',
<ide> 'action' => 'view',
<ide> '_method' => 'GET',
<del> 'id' => 10,
<add> 'id' => '10',
<ide> ]);
<ide> $expected = '/posts/10';
<ide> $this->assertEquals($expected, $result);
<ide> public function testGenerateUrlResourceRoute()
<ide> $expected = '/posts';
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = Router::url(['controller' => 'Posts', 'action' => 'edit', '_method' => 'PUT', 'id' => 10]);
<add> $result = Router::url(['controller' => 'Posts', 'action' => 'edit', '_method' => 'PUT', 'id' => '10']);
<ide> $expected = '/posts/10';
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = Router::url(['controller' => 'Posts', 'action' => 'delete', '_method' => 'DELETE', 'id' => 10]);
<add> $result = Router::url([
<add> 'controller' => 'Posts',
<add> 'action' => 'delete',
<add> '_method' => 'DELETE',
<add> 'id' => '10'
<add> ]);
<ide> $expected = '/posts/10';
<ide> $this->assertEquals($expected, $result);
<ide>
<del> $result = Router::url(['controller' => 'Posts', 'action' => 'edit', '_method' => 'PATCH', 'id' => 10]);
<add> $result = Router::url([
<add> 'controller' => 'Posts',
<add> 'action' => 'edit',
<add> '_method' => 'PATCH',
<add> 'id' => '10'
<add> ]);
<ide> $expected = '/posts/10';
<ide> $this->assertEquals($expected, $result);
<ide> }
<ide> public function testUrlGenerationWithRegexQualifiedParams()
<ide> 'plugin' => 'shows',
<ide> 'controller' => 'shows',
<ide> 'action' => 'calendar',
<del> 'month' => 10,
<del> 'year' => 2007,
<add> 'month' => '10',
<add> 'year' => '2007',
<ide> 'min-forestilling',
<ide> ]);
<ide> $expected = '/forestillinger/10/2007/min-forestilling';
<ide> public function testUrlGenerationWithRegexQualifiedParams()
<ide> 'plugin' => 'shows',
<ide> 'controller' => 'shows',
<ide> 'action' => 'calendar',
<del> 'year' => 2007,
<del> 'month' => 10,
<add> 'year' => '2007',
<add> 'month' => '10',
<ide> 'min-forestilling',
<ide> ]);
<ide> $expected = '/kalender/10/2007/min-forestilling';
<ide> public function testResourcesInScope()
<ide> 'controller' => 'Articles',
<ide> 'action' => 'edit',
<ide> '_method' => 'PUT',
<del> 'id' => 99,
<add> 'id' => '99',
<ide> ]);
<ide> $this->assertEquals('/api/articles/99', $url);
<ide>
<ide> public function testResourcesInScope()
<ide> 'action' => 'edit',
<ide> '_method' => 'PUT',
<ide> '_ext' => 'json',
<del> 'id' => 99,
<add> 'id' => '99',
<ide> ]);
<ide> $this->assertEquals('/api/articles/99.json', $url);
<ide> }
<ide><path>tests/test_app/TestApp/Auth/TestAuthenticate.php
<ide> public function authenticate(ServerRequest $request, Response $response)
<ide> public function afterIdentify(EventInterface $event, array $user)
<ide> {
<ide> $this->callStack[] = __FUNCTION__;
<del> $this->authenticationProvider = $event->getData(1);
<add> $this->authenticationProvider = $event->getData('1');
<ide>
<ide> if (!empty($this->modifiedUser)) {
<ide> return $user + ['extra' => 'foo']; | 11
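
A hedged sketch of the failure the `(string)(int)` casts avoid; the XML document below is illustrative:

```php
<?php
declare(strict_types=1);

// Under strict types, passing an int where SimpleXMLElement::addAttribute()
// expects a string raises a TypeError, so booleans are cast to int (0/1)
// and then stringified before the call.
$xml = new SimpleXMLElement('<options/>');
$option = $xml->addChild('option');
$option->addAttribute('required', (string)(int)true); // attribute value "1"
```
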
Javascript | Javascript | remove unused legacy shims | 32b6244772dc68f9b92fcaee008accb2ed36678e | <ide><path>vendor/ember/shims.js
<del>(function() {
<del>/* globals define, Ember, jQuery */
<del>
<del> function processEmberShims() {
<del> var shims = {
<del> 'ember': {
<del> 'default': Ember
<del> },
<del> 'ember-application': {
<del> 'default': Ember.Application
<del> },
<del> 'ember-array': {
<del> 'default': Ember.Array
<del> },
<del> 'ember-array/mutable': {
<del> 'default': Ember.MutableArray
<del> },
<del> 'ember-array/utils': {
<del> 'A': Ember.A,
<del> 'isEmberArray': Ember.isArray,
<del> 'wrap': Ember.makeArray
<del> },
<del> 'ember-component': {
<del> 'default': Ember.Component
<del> },
<del> 'ember-components/checkbox': {
<del> 'default': Ember.Checkbox
<del> },
<del> 'ember-components/text-area': {
<del> 'default': Ember.TextArea
<del> },
<del> 'ember-components/text-field': {
<del> 'default': Ember.TextField
<del> },
<del> 'ember-controller': {
<del> 'default': Ember.Controller
<del> },
<del> 'ember-controller/inject': {
<del> 'default': Ember.inject.controller
<del> },
<del> 'ember-controller/proxy': {
<del> 'default': Ember.ArrayProxy
<del> },
<del> 'ember-controllers/sortable': {
<del> 'default': Ember.SortableMixin
<del> },
<del> 'ember-debug': {
<del> 'log': Ember.debug,
<del> 'inspect': Ember.inspect,
<del> 'run': Ember.runInDebug,
<del> 'warn': Ember.warn
<del> },
<del> 'ember-debug/container-debug-adapter': {
<del> 'default': Ember.ContainerDebugAdapter
<del> },
<del> 'ember-debug/data-adapter': {
<del> 'default': Ember.DataAdapter
<del> },
<del> 'ember-deprecations': {
<del> 'deprecate': Ember.deprecate,
<del> 'deprecateFunc': Ember.deprecateFunc
<del> },
<del> 'ember-enumerable': {
<del> 'default': Ember.Enumerable
<del> },
<del> 'ember-evented': {
<del> 'default': Ember.Evented
<del> },
<del> 'ember-evented/on': {
<del> 'default': Ember.on
<del> },
<del> 'ember-globals-resolver': {
<del> 'default': Ember.DefaultResolver
<del> },
<del> 'ember-helper': {
<del> 'default': Ember.Helper,
<del> 'helper': Ember.Helper && Ember.Helper.helper
<del> },
<del> 'ember-instrumentation': {
<del> 'instrument': Ember.Instrumentation.instrument,
<del> 'reset': Ember.Instrumentation.reset,
<del> 'subscribe': Ember.Instrumentation.subscribe,
<del> 'unsubscribe': Ember.Instrumentation.unsubscribe
<del> },
<del> 'ember-locations/hash': {
<del> 'default': Ember.HashLocation
<del> },
<del> 'ember-locations/history': {
<del> 'default': Ember.HistoryLocation
<del> },
<del> 'ember-locations/none': {
<del> 'default': Ember.NoneLocation
<del> },
<del> 'ember-map': {
<del> 'default': Ember.Map,
<del> 'withDefault': Ember.MapWithDefault
<del> },
<del> 'ember-metal/destroy': {
<del> 'default': Ember.destroy
<del> },
<del> 'ember-metal/events': {
<del> 'addListener': Ember.addListener,
<del> 'removeListener': Ember.removeListener,
<del> 'send': Ember.sendEvent
<del> },
<del> 'ember-metal/get': {
<del> 'default': Ember.get
<del> },
<del> 'ember-metal/mixin': {
<del> 'default': Ember.Mixin
<del> },
<del> 'ember-metal/observer': {
<del> 'default': Ember.observer,
<del> 'addObserver': Ember.addObserver,
<del> 'removeObserver': Ember.removeObserver
<del> },
<del> 'ember-metal/on-load': {
<del> 'default': Ember.onLoad,
<del> 'run': Ember.runLoadHooks
<del> },
<del> 'ember-metal/set': {
<del> 'default': Ember.set,
<del> 'setProperties': Ember.setProperties,
<del> 'trySet': Ember.trySet
<del> },
<del> 'ember-metal/utils': {
<del> 'aliasMethod': Ember.aliasMethod,
<del> 'assert': Ember.assert,
<del> 'cacheFor': Ember.cacheFor,
<del> 'copy': Ember.copy
<del> },
<del> 'ember-object': {
<del> 'default': Ember.Object
<del> },
<del> 'ember-owner/get': {
<del> 'default': Ember.getOwner
<del> },
<del> 'ember-owner/set': {
<del> 'default': Ember.setOwner
<del> },
<del> 'ember-platform': {
<del> 'assign': Ember.merge,
<del> 'create': Ember.create,
<del> 'defineProperty': Ember.platform.defineProperty,
<del> 'hasAccessors': Ember.platform.hasPropertyAccessors,
<del> 'keys': Ember.keys
<del> },
<del> 'ember-route': {
<del> 'default': Ember.Route
<del> },
<del> 'ember-router': {
<del> 'default': Ember.Router
<del> },
<del> 'ember-runloop': {
<del> 'default': Ember.run,
<del> 'begin': Ember.run.begin,
<del> 'bind': Ember.run.bind,
<del> 'cancel': Ember.run.cancel,
<del> 'debounce': Ember.run.debounce,
<del> 'end': Ember.run.end,
<del> 'join': Ember.run.join,
<del> 'later': Ember.run.later,
<del> 'next': Ember.run.next,
<del> 'once': Ember.run.once,
<del> 'schedule': Ember.run.schedule,
<del> 'scheduleOnce': Ember.run.scheduleOnce,
<del> 'throttle': Ember.run.throttle
<del> },
<del> 'ember-service': {
<del> 'default': Ember.Service
<del> },
<del> 'ember-service/inject': {
<del> 'default': Ember.inject.service
<del> },
<del> 'ember-set/ordered': {
<del> 'default': Ember.OrderedSet
<del> },
<del> 'ember-string': {
<del> 'camelize': Ember.String.camelize,
<del> 'capitalize': Ember.String.capitalize,
<del> 'classify': Ember.String.classify,
<del> 'dasherize': Ember.String.dasherize,
<del> 'decamelize': Ember.String.decamelize,
<del> 'fmt': Ember.String.fmt,
<del> 'htmlSafe': Ember.String.htmlSafe,
<del> 'loc': Ember.String.loc,
<del> 'underscore': Ember.String.underscore,
<del> 'w': Ember.String.w
<del> },
<del> 'ember-utils': {
<del> 'isBlank': Ember.isBlank,
<del> 'isEmpty': Ember.isEmpty,
<del> 'isNone': Ember.isNone,
<del> 'isPresent': Ember.isPresent,
<del> 'tryInvoke': Ember.tryInvoke,
<del> 'typeOf': Ember.typeOf
<del> }
<del> };
<del>
<del> // populate `ember/computed` named exports
<del> shims['ember-computed'] = {
<del> 'default': Ember.computed
<del> };
<del> var computedMacros = [
<del> "empty", "notEmpty", "none", "not", "bool", "match", "equal", "gt", "gte",
<del> "lt", "lte", "alias", "oneWay", "reads", "readOnly", "deprecatingAlias",
<del> "and", "or", "collect", "sum", "min", "max", "map", "sort", "setDiff",
<del> "mapBy", "filter", "filterBy", "uniq", "union", "intersect"
<del> ];
<del>
<del> for (var i = 0, l = computedMacros.length; i < l; i++) {
<del> var key = computedMacros[i];
<del> shims['ember-computed'][key] = Ember.computed[key];
<del> }
<del>
<del> for (var moduleName in shims) {
<del> generateModule(moduleName, shims[moduleName]);
<del> }
<del> }
<del>
<del> function processTestShims() {
<del> if (Ember.Test) {
<del> var testShims = {
<del> 'ember-test': {
<del> 'default': Ember.Test
<del> },
<del> 'ember-test/adapter': {
<del> 'default': Ember.Test.Adapter
<del> },
<del> 'ember-test/qunit-adapter': {
<del> 'default': Ember.Test.QUnitAdapter
<del> }
<del> };
<del>
<del> for (var moduleName in testShims) {
<del> generateModule(moduleName, testShims[moduleName]);
<del> }
<del> }
<del> }
<del>
<del> function generateModule(name, values) {
<del> define(name, [], function() {
<del> 'use strict';
<del>
<del> return values;
<del> });
<del> }
<del>
<del> processEmberShims();
<del> processTestShims();
<del> generateModule('jquery', { 'default': self.jQuery });
<del> generateModule('rsvp', { 'default': Ember.RSVP });
<del>})(); | 1 |
PHP | PHP | apply fixes from styleci | 4b736f75f533e53f677dff52efd80fa5088a39b6 | <ide><path>src/Illuminate/Routing/RouteAction.php
<ide> public static function parse($uri, $action)
<ide> if (is_callable($action)) {
<ide> return ! is_array($action) ? ['uses' => $action] : [
<ide> 'uses' => $action[0].'@'.$action[1],
<del> 'controller' => $action[0].'@'.$action[1]
<add> 'controller' => $action[0].'@'.$action[1],
<ide> ];
<ide> }
<ide> | 1 |
Text | Text | remove incorrect "readonly" example | 244d9c337034b0db030c05189ca8eb2323d92c61 | <ide><path>docs/userguide/dockervolumes.md
<ide> This will create a new volume inside a container at `/webapp`.
<ide> > You can also use the `VOLUME` instruction in a `Dockerfile` to add one or
<ide> > more new volumes to any container created from that image.
<ide>
<del>Docker volumes default to mount in read-write mode, but you can also set it to be mounted read-only.
<del>
<del> $ docker run -d -P --name web -v /opt/webapp:ro training/webapp python app.py
<del>
<del>
<ide> ### Locating a volume
<ide>
<ide> You can locate the volume on the host by utilizing the 'docker inspect' command. | 1 |
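
If a read-only example is still wanted, the supported form (per the `-v` flag's `host:container:mode` syntax) suffixes `:ro` after the container path of a host-directory mount; the host path below is a stand-in:

```
$ docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py
```
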
Javascript | Javascript | prevent infinite loop in mesh.raycast(). | 595987ef5c783603ad93ee39551906a67eb24a8a | <ide><path>src/objects/Mesh.js
<ide> class Mesh extends Object3D {
<ide> const groupMaterial = material[ group.materialIndex ];
<ide>
<ide> const start = Math.max( group.start, drawRange.start );
<del> const end = Math.min( ( group.start + group.count ), ( drawRange.start + drawRange.count ) );
<add> const end = Math.min( index.count, Math.min( ( group.start + group.count ), ( drawRange.start + drawRange.count ) ) );
<ide>
<ide> for ( let j = start, jl = end; j < jl; j += 3 ) {
<ide>
<ide> class Mesh extends Object3D {
<ide> const groupMaterial = material[ group.materialIndex ];
<ide>
<ide> const start = Math.max( group.start, drawRange.start );
<del> const end = Math.min( ( group.start + group.count ), ( drawRange.start + drawRange.count ) );
<add> const end = Math.min( position.count, Math.min( ( group.start + group.count ), ( drawRange.start + drawRange.count ) ) );
<ide>
<ide> for ( let j = start, jl = end; j < jl; j += 3 ) {
<ide> | 1 |
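
A worked illustration of the new clamp (the values are hypothetical). A group created with `geometry.addGroup( 0, Infinity )` made the old upper bound `Infinity`, so `for ( let j = start; j < end; j += 3 )` never terminated:

```js
const group = { start: 0, count: Infinity };
const drawRange = { start: 0, count: Infinity };
const indexCount = 120; // index.count of the geometry being raycast

const start = Math.max( group.start, drawRange.start );
const end = Math.min( indexCount,
	Math.min( group.start + group.count, drawRange.start + drawRange.count ) );

console.log( start, end ); // 0 120 — the loop now stops at the buffer's end
```
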
Mixed | Python | add xception model to keras.applications | 94ee8e15704d76fb3ef06a91c2c9c72aa07678e9 | <ide><path>docs/templates/applications.md
<ide> Weights are downloaded automatically when instantiating a model. They are stored
<ide>
<ide> ### Models for image classification with weights trained on ImageNet:
<ide>
<add>- [Xception](#xception)
<ide> - [VGG16](#vgg16)
<ide> - [VGG19](#vgg19)
<ide> - [ResNet50](#resnet50)
<ide> - [InceptionV3](#inceptionv3)
<ide>
<del>All of these architectures are compatible with both TensorFlow and Theano, and upon instantiation the models will be built according to the image dimension ordering set in your Keras configuration file at `~/.keras/keras.json`. For instance, if you have set `image_dim_ordering=tf`, then any model loaded from this repository will get built according to the TensorFlow dimension ordering convention, "Width-Height-Depth".
<add>All of these architectures (except Xception) are compatible with both TensorFlow and Theano, and upon instantiation the models will be built according to the image dimension ordering set in your Keras configuration file at `~/.keras/keras.json`. For instance, if you have set `image_dim_ordering=tf`, then any model loaded from this repository will get built according to the TensorFlow dimension ordering convention, "Width-Height-Depth".
<add>
<add>The Xception model is only available for TensorFlow, due to its reliance on `SeparableConvolution` layers.
<ide>
<ide> ### Model for music audio file auto-tagging (taking as input Mel-spectrograms):
<ide>
<ide> model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=T
<ide>
<ide> # Documentation for individual models
<ide>
<del>
<add>- [Xception](#xception)
<ide> - [VGG16](#vgg16)
<ide> - [VGG19](#vgg19)
<ide> - [ResNet50](#resnet50)
<ide> model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=T
<ide>
<ide> -----
<ide>
<add>
<add>## Xception
<add>
<add>
<add>```python
<add>keras.applications.xception.Xception(include_top=True, weights='imagenet', input_tensor=None)
<add>```
<add>
<add>Xception V1 model, with weights pre-trained on ImageNet.
<add>
<add>On ImageNet, this model gets to a top-1 validation accuracy of 0.790
<add>and a top-5 validation accuracy of 0.945.
<add>
<add>Note that this model is only available for the TensorFlow backend,
<add>due to its reliance on `SeparableConvolution` layers. Additionally, it only supports
<add>the dimension ordering "tf" (width, height, channels).
<add>
<add>The default input size for this model is 299x299.
<add>
<add>### Arguments
<add>
<add>- include_top: whether to include the fully-connected layer at the top of the network.
<add>- weights: one of `None` (random initialization) or "imagenet" (pre-training on ImageNet).
<add>- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
<add>
<add>### Returns
<add>
<add>A Keras model instance.
<add>
<add>### References
<add>
<add>- [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/abs/1610.02357)
<add>
<add>### License
<add>
<add>These weights were trained by us and are released under the MIT license.
<add>
<add>
<add>-----
<add>
<add>
<ide> ## VGG16
<ide>
<ide> ```python
<ide> keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None)
<ide> ```
<ide>
<add>VGG16 model, with weights pre-trained on ImageNet.
<add>
<add>This model is available for both the Theano and TensorFlow backends, and can be built
<add>with either "th" dim ordering (channels, width, height) or "tf" dim ordering (width, height, channels).
<add>
<add>The default input size for this model is 224x224.
<add>
<ide> ### Arguments
<ide>
<ide> - include_top: whether to include the 3 fully-connected layers at the top of the network.
<ide> These weights are ported from the ones [released by VGG at Oxford](http://www.ro
<ide> keras.applications.vgg19.VGG19(include_top=True, weights='imagenet', input_tensor=None)
<ide> ```
<ide>
<add>
<add>VGG19 model, with weights pre-trained on ImageNet.
<add>
<add>This model is available for both the Theano and TensorFlow backends, and can be built
<add>with either "th" dim ordering (channels, width, height) or "tf" dim ordering (width, height, channels).
<add>
<add>The default input size for this model is 224x224.
<add>
<ide> ### Arguments
<ide>
<ide> - include_top: whether to include the 3 fully-connected layers at the top of the network.
<ide> These weights are ported from the ones [released by VGG at Oxford](http://www.ro
<ide> keras.applications.resnet50.ResNet50(include_top=True, weights='imagenet', input_tensor=None)
<ide> ```
<ide>
<add>
<add>ResNet50 model, with weights pre-trained on ImageNet.
<add>
<add>This model is available for both the Theano and TensorFlow backends, and can be built
<add>with either "th" dim ordering (channels, width, height) or "tf" dim ordering (width, height, channels).
<add>
<add>The default input size for this model is 224x224.
<add>
<add>
<ide> ### Arguments
<ide>
<del>- include_top: whether to include the 3 fully-connected layers at the top of the network.
<add>- include_top: whether to include the fully-connected layer at the top of the network.
<ide> - weights: one of `None` (random initialization) or "imagenet" (pre-training on ImageNet).
<ide> - input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
<ide>
<ide> These weights are ported from the ones [released by Kaiming He](https://github.c
<ide> keras.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None)
<ide> ```
<ide>
<add>Inception V3 model, with weights pre-trained on ImageNet.
<add>
<add>This model is available for both the Theano and TensorFlow backends, and can be built
<add>with either "th" dim ordering (channels, width, height) or "tf" dim ordering (width, height, channels).
<add>
<add>The default input size for this model is 299x299.
<add>
<add>
<ide> ### Arguments
<ide>
<del>- include_top: whether to include the 3 fully-connected layers at the top of the network.
<add>- include_top: whether to include the fully-connected layer at the top of the network.
<ide> - weights: one of `None` (random initialization) or "imagenet" (pre-training on ImageNet).
<ide> - input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
<ide>
<ide><path>keras/applications/__init__.py
<ide> from .vgg19 import VGG19
<ide> from .resnet50 import ResNet50
<ide> from .inception_v3 import InceptionV3
<add>from .xception import Xception
<ide><path>keras/applications/inception_v3.py
<ide> For comparison, VGG16 only gets to 9.9%, quite a bit worse.
<ide>
<ide> Also, do note that the input image format for this model is different than for
<del>other models (299x299 instead of 224x224), and that the input preprocessing function
<del>is also different.
<add>the VGG16 and ResNet models (299x299 instead of 224x224), and that the input preprocessing function
<add>is also different (same as Xception).
<ide>
<ide> # Reference:
<ide>
<ide> def InceptionV3(include_top=True, weights='imagenet',
<ide> Note that the default input image size for this model is 299x299.
<ide>
<ide> # Arguments
<del> include_top: whether to include the 3 fully-connected
<del> layers at the top of the network.
<add> include_top: whether to include the fully-connected
<add> layer at the top of the network.
<ide> weights: one of `None` (random initialization)
<ide> or "imagenet" (pre-training on ImageNet).
<ide> input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
<ide><path>keras/applications/xception.py
<add># -*- coding: utf-8 -*-
<add>'''Xception V1 model for Keras.
<add>
<add>On ImageNet, this model gets to a top-1 validation accuracy of 0.790
<add>and a top-5 validation accuracy of 0.945.
<add>
<add>Do note that the input image format for this model is different from that of
<add>the VGG16 and ResNet models (299x299 instead of 224x224),
<add>and that the input preprocessing function
<add>is also different (same as Inception V3).
<add>
<add>Also do note that this model is only available for the TensorFlow backend,
<add>due to its reliance on `SeparableConvolution` layers.
<add>
<add># Reference:
<add>
<add>- [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/abs/1610.02357)
<add>
<add>'''
<add>from __future__ import print_function
<add>from __future__ import absolute_import
<add>
<add>import warnings
<add>
<add>from ..models import Model
<add>from ..layers import Dense, Input, BatchNormalization, Activation, merge
<add>from ..layers import Conv2D, SeparableConv2D, MaxPooling2D, GlobalAveragePooling2D
<add>from ..utils.data_utils import get_file
<add>from .. import backend as K
<add>from .imagenet_utils import decode_predictions
<add>
<add>
<add>TF_WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.4/xception_weights_tf_dim_ordering_tf_kernels.h5'
<add>TF_WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.4/xception_weights_tf_dim_ordering_tf_kernels_notop.h5'
<add>
<add>
<add>def Xception(include_top=True, weights='imagenet',
<add> input_tensor=None):
<add> '''Instantiate the Xception architecture,
<add> optionally loading weights pre-trained
<add> on ImageNet. This model is available for TensorFlow only,
<add> and can only be used with inputs following the TensorFlow
<add> dimension ordering `(width, height, channels)`.
<add> You should set `image_dim_ordering="tf"` in your Keras config
<add> located at ~/.keras/keras.json.
<add>
<add> Note that the default input image size for this model is 299x299.
<add>
<add> # Arguments
<add> include_top: whether to include the fully-connected
<add> layer at the top of the network.
<add> weights: one of `None` (random initialization)
<add> or "imagenet" (pre-training on ImageNet).
<add> input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
<add> to use as image input for the model.
<add>
<add> # Returns
<add> A Keras model instance.
<add> '''
<add> if weights not in {'imagenet', None}:
<add> raise ValueError('The `weights` argument should be either '
<add> '`None` (random initialization) or `imagenet` '
<add> '(pre-training on ImageNet).')
<add> if K.backend() != 'tensorflow':
<add> raise Exception('The Xception model is only available with '
<add> 'the TensorFlow backend.')
<add> if K.image_dim_ordering() != 'tf':
<add> warnings.warn('The Xception model is only available for the '
<add> 'input dimension ordering "tf" '
<add> '(width, height, channels). '
<add> 'However your settings specify the default '
<add> 'dimension ordering "th" (channels, width, height). '
<add> 'You should set `image_dim_ordering="tf"` in your Keras '
<add> 'config located at ~/.keras/keras.json. '
<add> 'The model being returned right now will expect inputs '
<add> 'to follow the "tf" dimension ordering.')
<add> K.set_image_dim_ordering('tf')
<add> old_dim_ordering = 'th'
<add> else:
<add> old_dim_ordering = None
<add>
<add> # Determine proper input shape
<add> if include_top:
<add> input_shape = (299, 299, 3)
<add> else:
<add> input_shape = (None, None, 3)
<add>
<add> if input_tensor is None:
<add> img_input = Input(shape=input_shape)
<add> else:
<add> if not K.is_keras_tensor(input_tensor):
<add> img_input = Input(tensor=input_tensor, shape=input_shape)
<add> else:
<add> img_input = input_tensor
<add>
<add> x = Conv2D(32, 3, 3, subsample=(2, 2), bias=False, name='block1_conv1')(img_input)
<add> x = BatchNormalization(name='block1_conv1_bn')(x)
<add> x = Activation('relu', name='block1_conv1_act')(x)
<add> x = Conv2D(64, 3, 3, bias=False, name='block1_conv2')(x)
<add> x = BatchNormalization(name='block1_conv2_bn')(x)
<add> x = Activation('relu', name='block1_conv2_act')(x)
<add>
<add> residual = Conv2D(128, 1, 1, subsample=(2, 2),
<add> border_mode='same', bias=False)(x)
<add> residual = BatchNormalization()(residual)
<add>
<add> x = SeparableConv2D(128, 3, 3, border_mode='same', bias=False, name='block2_sepconv1')(x)
<add> x = BatchNormalization(name='block2_sepconv1_bn')(x)
<add> x = Activation('relu', name='block2_sepconv2_act')(x)
<add> x = SeparableConv2D(128, 3, 3, border_mode='same', bias=False, name='block2_sepconv2')(x)
<add> x = BatchNormalization(name='block2_sepconv2_bn')(x)
<add>
<add> x = MaxPooling2D((3, 3), strides=(2, 2), border_mode='same', name='block2_pool')(x)
<add> x = merge([x, residual], mode='sum')
<add>
<add> residual = Conv2D(256, 1, 1, subsample=(2, 2),
<add> border_mode='same', bias=False)(x)
<add> residual = BatchNormalization()(residual)
<add>
<add> x = Activation('relu', name='block3_sepconv1_act')(x)
<add> x = SeparableConv2D(256, 3, 3, border_mode='same', bias=False, name='block3_sepconv1')(x)
<add> x = BatchNormalization(name='block3_sepconv1_bn')(x)
<add> x = Activation('relu', name='block3_sepconv2_act')(x)
<add> x = SeparableConv2D(256, 3, 3, border_mode='same', bias=False, name='block3_sepconv2')(x)
<add> x = BatchNormalization(name='block3_sepconv2_bn')(x)
<add>
<add> x = MaxPooling2D((3, 3), strides=(2, 2), border_mode='same', name='block3_pool')(x)
<add> x = merge([x, residual], mode='sum')
<add>
<add> residual = Conv2D(728, 1, 1, subsample=(2, 2),
<add> border_mode='same', bias=False)(x)
<add> residual = BatchNormalization()(residual)
<add>
<add> x = Activation('relu', name='block4_sepconv1_act')(x)
<add> x = SeparableConv2D(728, 3, 3, border_mode='same', bias=False, name='block4_sepconv1')(x)
<add> x = BatchNormalization(name='block4_sepconv1_bn')(x)
<add> x = Activation('relu', name='block4_sepconv2_act')(x)
<add> x = SeparableConv2D(728, 3, 3, border_mode='same', bias=False, name='block4_sepconv2')(x)
<add> x = BatchNormalization(name='block4_sepconv2_bn')(x)
<add>
<add> x = MaxPooling2D((3, 3), strides=(2, 2), border_mode='same', name='block4_pool')(x)
<add> x = merge([x, residual], mode='sum')
<add>
<add> for i in range(8):
<add> residual = x
<add> prefix = 'block' + str(i + 5)
<add>
<add> x = Activation('relu', name=prefix + '_sepconv1_act')(x)
<add> x = SeparableConv2D(728, 3, 3, border_mode='same', bias=False, name=prefix + '_sepconv1')(x)
<add> x = BatchNormalization(name=prefix + '_sepconv1_bn')(x)
<add> x = Activation('relu', name=prefix + '_sepconv2_act')(x)
<add> x = SeparableConv2D(728, 3, 3, border_mode='same', bias=False, name=prefix + '_sepconv2')(x)
<add> x = BatchNormalization(name=prefix + '_sepconv2_bn')(x)
<add> x = Activation('relu', name=prefix + '_sepconv3_act')(x)
<add> x = SeparableConv2D(728, 3, 3, border_mode='same', bias=False, name=prefix + '_sepconv3')(x)
<add> x = BatchNormalization(name=prefix + '_sepconv3_bn')(x)
<add>
<add> x = merge([x, residual], mode='sum')
<add>
<add> residual = Conv2D(1024, 1, 1, subsample=(2, 2),
<add> border_mode='same', bias=False)(x)
<add> residual = BatchNormalization()(residual)
<add>
<add> x = Activation('relu', name='block13_sepconv1_act')(x)
<add> x = SeparableConv2D(728, 3, 3, border_mode='same', bias=False, name='block13_sepconv1')(x)
<add> x = BatchNormalization(name='block13_sepconv1_bn')(x)
<add> x = Activation('relu', name='block13_sepconv2_act')(x)
<add> x = SeparableConv2D(1024, 3, 3, border_mode='same', bias=False, name='block13_sepconv2')(x)
<add> x = BatchNormalization(name='block13_sepconv2_bn')(x)
<add>
<add> x = MaxPooling2D((3, 3), strides=(2, 2), border_mode='same', name='block13_pool')(x)
<add> x = merge([x, residual], mode='sum')
<add>
<add> x = SeparableConv2D(1536, 3, 3, border_mode='same', bias=False, name='block14_sepconv1')(x)
<add> x = BatchNormalization(name='block14_sepconv1_bn')(x)
<add> x = Activation('relu', name='block14_sepconv1_act')(x)
<add>
<add> x = SeparableConv2D(2048, 3, 3, border_mode='same', bias=False, name='block14_sepconv2')(x)
<add> x = BatchNormalization(name='block14_sepconv2_bn')(x)
<add> x = Activation('relu', name='block14_sepconv2_act')(x)
<add>
<add> if include_top:
<add> x = GlobalAveragePooling2D(name='avg_pool')(x)
<add> x = Dense(1000, activation='softmax', name='predictions')(x)
<add>
<add> # Create model
<add> model = Model(img_input, x)
<add>
<add> # load weights
<add> if weights == 'imagenet':
<add> if include_top:
<add> weights_path = get_file('xception_weights_tf_dim_ordering_tf_kernels.h5',
<add> TF_WEIGHTS_PATH,
<add> cache_subdir='models')
<add> else:
<add> weights_path = get_file('xception_weights_tf_dim_ordering_tf_kernels_notop.h5',
<add> TF_WEIGHTS_PATH_NO_TOP,
<add> cache_subdir='models')
<add> model.load_weights(weights_path)
<add>
<add> if old_dim_ordering:
<add> K.set_image_dim_ordering(old_dim_ordering)
<add> return model
<add>
<add>
<add>def preprocess_input(x):
<add> x /= 255.
<add> x -= 0.5
<add> x *= 2.
<add> return x | 4 |
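
A hedged usage sketch of the new model (requires the TensorFlow backend, per the docstring above; `'elephant.jpg'` is a placeholder image path):

```python
import numpy as np
from keras.applications.xception import Xception, preprocess_input
from keras.applications.imagenet_utils import decode_predictions
from keras.preprocessing import image

model = Xception(weights='imagenet')

img = image.load_img('elephant.jpg', target_size=(299, 299))
x = image.img_to_array(img)
x = preprocess_input(np.expand_dims(x, axis=0))  # scales pixels to [-1, 1]

print(decode_predictions(model.predict(x)))
```
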
Python | Python | add update_exc util function | 66c7348cda2e0e52839564556f62d3a776164474 | <ide><path>spacy/util.py
<ide> def compile_infix_regex(entries):
<ide> return re.compile(expression)
<ide>
<ide>
<add>def update_exc(exc, additions):
<add> overlap = set(exc.keys()).intersection(set(additions))
<add> assert not overlap, overlap
<add> exc.update(additions)
<add>
<add>
<ide> def normalize_slice(length, start, stop, step=None):
<ide> if not (step is None or step == 1):
<ide> raise ValueError("Stepped slices not supported in Span objects." | 1 |
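
A hedged usage sketch of the new helper; the token-attribute format in the exception values is illustrative:

```python
# Merge language-specific tokenizer exceptions into a base table, asserting
# that the additions don't silently overwrite existing keys.
base = {"don't": [{"ORTH": "do"}, {"ORTH": "n't"}]}
extra = {"can't": [{"ORTH": "ca"}, {"ORTH": "n't"}]}

update_exc(base, extra)            # ok: no overlapping keys
# update_exc(base, {"don't": []})  # would trip the assertion with {"don't"}
```
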
Javascript | Javascript | show % change rather than % difference | 035aa6b4cefc0b1d5bab8ffdcb5c63593ef8d190 | <ide><path>benchmark/compare.js
<ide> function compare() {
<ide> var n0 = res[nodes[0]];
<ide> var n1 = res[nodes[1]];
<ide>
<del> var pct = ((n0 - n1) / ((n0 + n1) / 2) * 100).toFixed(2);
<add> var pct = ((n0 - n1) / n1 * 100).toFixed(2);
<ide>
<ide> var g = n0 > n1 ? green : '';
<ide> var r = n0 > n1 ? '' : red; | 1 |
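
A worked example of the formula change with hypothetical throughputs of 150 and 100 ops/s:

```js
var n0 = 150, n1 = 100;

// Old formula — % difference relative to the midpoint of the two values:
var diff = ((n0 - n1) / ((n0 + n1) / 2) * 100).toFixed(2); // "40.00"

// New formula — % change relative to the baseline n1, the usual reading:
var change = ((n0 - n1) / n1 * 100).toFixed(2);            // "50.00"
```
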
PHP | PHP | fix tests that were failing because of icu changes | 853b691a52a6653d6b8c2c3589fe29c622cfa94b |
<ide> public function testCurrency()
<ide>
<ide> $options = ['locale' => 'fr_FR', 'pattern' => 'EUR #,###.00'];
<ide> $result = $this->Number->currency($value, 'EUR', $options);
<del> $expected = 'EUR 100 100 100,00';
<del> $this->assertEquals($expected, $result);
<add> // The following tests use regexps because the whitespace used
<add> // is inconsistent between *nix & Windows.
<add> $expected = '/^EUR\W+100\W+100\W+100,00$/';
<add> $this->assertRegExp($expected, $result);
<ide>
<ide> $options = ['locale' => 'fr_FR', 'pattern' => '#,###.00 ¤¤'];
<ide> $result = $this->Number->currency($value, 'EUR', $options);
<del> $expected = '100 100 100,00 EUR';
<del> $this->assertEquals($expected, $result);
<add> $expected = '/^100\W+100\W+100,00\W+EUR$/';
<add> $this->assertRegExp($expected, $result);
<ide>
<ide> $options = ['locale' => 'fr_FR', 'pattern' => '#,###.00;(¤#,###.00)'];
<ide> $result = $this->Number->currency(-1235.03, 'EUR', $options);
<del> $expected = '(€1 235,03)';
<del> $this->assertEquals($expected, $result);
<add> $expected = '/^\(€1\W+235,03\)$/';
<add> $this->assertRegExp($expected, $result);
<ide>
<ide> $result = $this->Number->currency(0.5, 'USD', ['locale' => 'en_US', 'fractionSymbol' => 'c']);
<ide> $expected = '50c'; | 1 |
Ruby | Ruby | use parser to parse args | 74baf04ad3b0111cace6701caeb5939ef8161bf7 | <ide><path>Library/Homebrew/dev-cmd/pull.rb
<ide> require "net/https"
<ide> require "utils"
<ide> require "json"
<add>require "cli_parser"
<ide> require "formula"
<ide> require "formulary"
<ide> require "tap"
<ide> module GitHub
<ide> module_function
<ide>
<ide> # Return the corresponding test-bot user name for the given GitHub organization.
<del> def test_bot_user(user)
<del> test_bot = ARGV.value "test-bot-user"
<add> def test_bot_user(user, test_bot)
<ide> return test_bot if test_bot
<ide> return "BrewTestBot" if user.casecmp("homebrew").zero?
<ide> "#{user.capitalize}TestBot"
<ide> module Homebrew
<ide> def pull
<ide> odie "You meant `git pull --rebase`." if ARGV[0] == "--rebase"
<ide>
<add> @args = Homebrew::CLI::Parser.parse do
<add> switch "--bottle"
<add> switch "--bump"
<add> switch "--clean"
<add> switch "--ignore-whitespace"
<add> switch "--resolve"
<add> switch "--branch-okay"
<add> switch "--no-pbcopy"
<add> switch "--no-publish"
<add> switch "--warn-on-publish-failure"
<add> flag "--bintray-org", required: true
<add> flag "--test-bot-user", required: true
<add> end
<add>
<ide> if ARGV.named.empty?
<ide> odie "This command requires at least one argument containing a URL or pull request number"
<ide> end
<ide> def pull
<ide> ENV["GIT_COMMITTER_EMAIL"] = ENV["HOMEBREW_GIT_EMAIL"]
<ide> end
<ide>
<del> do_bump = ARGV.include?("--bump") && !ARGV.include?("--clean")
<add> do_bump = @args.bump? && !@args.clean?
<ide>
<ide> # Formulae with affected bottles that were published
<ide> bintray_published_formulae = []
<ide> def pull
<ide> end
<ide> _, testing_job = *testing_match
<ide> url = "https://github.com/Homebrew/homebrew-#{tap.repo}/compare/master...BrewTestBot:testing-#{testing_job}"
<del> odie "Testing URLs require `--bottle`!" unless ARGV.include?("--bottle")
<add> odie "Testing URLs require `--bottle`!" unless @args.bottle?
<ide> elsif (api_match = arg.match HOMEBREW_PULL_API_REGEX)
<ide> _, user, repo, issue = *api_match
<ide> url = "https://github.com/#{user}/#{repo}/pull/#{issue}"
<ide> def pull
<ide> odie "Not a GitHub pull request or commit: #{arg}"
<ide> end
<ide>
<del> if !testing_job && ARGV.include?("--bottle") && issue.nil?
<add> if !testing_job && @args.bottle? && issue.nil?
<ide> odie "No pull request detected!"
<ide> end
<ide>
<ide> def pull
<ide> orig_revision = `git rev-parse --short HEAD`.strip
<ide> branch = `git symbolic-ref --short HEAD`.strip
<ide>
<del> unless branch == "master" || ARGV.include?("--clean") || ARGV.include?("--branch-okay")
<add> unless branch == "master" || @args.clean? || @args.branch_okay?
<ide> opoo "Current branch is #{branch}: do you need to pull inside master?"
<ide> end
<ide>
<del> patch_puller = PatchPuller.new(url)
<add> patch_puller = PatchPuller.new(url, @args)
<ide> patch_puller.fetch_patch
<ide> patch_changes = files_changed_in_patch(patch_puller.patchpath, tap)
<ide>
<ide> def pull
<ide> end
<ide> end
<ide>
<del> if ARGV.include? "--bottle"
<add> if @args.bottle?
<ide> if f.bottle_unneeded?
<ide> ohai "#{f}: skipping unneeded bottle."
<ide> elsif f.bottle_disabled?
<ide> def pull
<ide> end
<ide>
<ide> orig_message = message = `git log HEAD^.. --format=%B`
<del> if issue && !ARGV.include?("--clean")
<add> if issue && !@args.clean?
<ide> ohai "Patch closes issue ##{issue}"
<ide> close_message = "Closes ##{issue}."
<ide> # If this is a pull request, append a close message.
<ide> def pull
<ide> is_bumpable = false
<ide> end
<ide>
<del> is_bumpable = false if ARGV.include?("--clean")
<add> is_bumpable = false if @args.clean?
<ide> is_bumpable = false if ENV["HOMEBREW_DISABLE_LOAD_FORMULA"]
<ide>
<ide> if is_bumpable
<ide> def pull
<ide> odie "No version changes found for #{formula.name}" if bump_subject.nil?
<ide> unless orig_subject == bump_subject
<ide> ohai "New bump commit subject: #{bump_subject}"
<del> pbcopy bump_subject unless ARGV.include? "--no-pbcopy"
<add> pbcopy bump_subject unless @args.no_pbcopy?
<ide> message = "#{bump_subject}\n\n#{message}"
<ide> end
<ide> elsif bump_subject != orig_subject && !bump_subject.nil?
<ide> def pull
<ide> end
<ide> end
<ide>
<del> if message != orig_message && !ARGV.include?("--clean")
<add> if message != orig_message && !@args.clean?
<ide> safe_system "git", "commit", "--amend", "--signoff", "--allow-empty", "-q", "-m", message
<ide> end
<ide>
<ide> def pull
<ide> url
<ide> else
<ide> bottle_branch = "pull-bottle-#{issue}"
<del> "https://github.com/#{GitHub.test_bot_user user}/homebrew-#{tap.repo}/compare/#{user}:master...pr-#{issue}"
<add> bot_username = GitHub.test_bot_user(user, @args.test_bot_user)
<add> "https://github.com/#{bot_username}/homebrew-#{tap.repo}/compare/#{user}:master...pr-#{issue}"
<ide> end
<ide>
<ide> curl "--silent", "--fail", "--output", "/dev/null", "--head", bottle_commit_url
<ide> def pull
<ide> safe_system "git", "branch", "--quiet", "-D", bottle_branch
<ide>
<ide> # Publish bottles on Bintray
<del> unless ARGV.include? "--no-publish"
<add> unless @args.no_publish?
<ide> published = publish_changed_formula_bottles(tap, changed_formulae_names)
<ide> bintray_published_formulae.concat(published)
<ide> end
<ide> def publish_changed_formula_bottles(tap, changed_formulae_names)
<ide> changed_formulae_names.each do |name|
<ide> f = Formula[name]
<ide> next if f.bottle_unneeded? || f.bottle_disabled?
<del> bintray_org = ARGV.value("bintray-org") || tap.user.downcase
<add> bintray_org = @args.bintray_org || tap.user.downcase
<ide> next unless publish_bottle_file_on_bintray(f, bintray_org, bintray_creds)
<ide> published << f.full_name
<ide> end
<ide> def publish_changed_formula_bottles(tap, changed_formulae_names)
<ide> end
<ide>
<ide> def pull_patch(url, description = nil)
<del> PatchPuller.new(url, description).pull_patch
<add> PatchPuller.new(url, @args, description).pull_patch
<ide> end
<ide>
<ide> class PatchPuller
<ide> attr_reader :base_url
<ide> attr_reader :patch_url
<ide> attr_reader :patchpath
<ide>
<del> def initialize(url, description = nil)
<add> def initialize(url, args, description = nil)
<ide> @base_url = url
<ide> # GitHub provides commits/pull-requests raw patches using this URL.
<ide> @patch_url = url + ".patch"
<ide> @patchpath = HOMEBREW_CACHE + File.basename(patch_url)
<ide> @description = description
<add> @args = args
<ide> end
<ide>
<ide> def pull_patch
<ide> def apply_patch
<ide> patch_args = []
<ide> # Normally we don't want whitespace errors, but squashing them can break
<ide> # patches so an option is provided to skip this step.
<del> if ARGV.include?("--ignore-whitespace") || ARGV.include?("--clean")
<add> if @args.ignore_whitespace? || @args.clean?
<ide> patch_args << "--whitespace=nowarn"
<ide> else
<ide> patch_args << "--whitespace=fix"
<ide> def apply_patch
<ide> begin
<ide> safe_system "git", "am", *patch_args
<ide> rescue ErrorDuringExecution
<del> if ARGV.include? "--resolve"
<add> if @args.resolve?
<ide> odie "Patch failed to apply: try to resolve it."
<ide> else
<ide> system "git", "am", "--abort"
<ide> def publish_bottle_file_on_bintray(f, bintray_org, creds)
<ide> "https://api.bintray.com/content/#{bintray_org}/#{repo}/#{package}/#{version}/publish"
<ide> true
<ide> rescue => e
<del> raise unless ARGV.include?("--warn-on-publish-failure")
<add> raise unless @args.warn_on_publish_failure?
<ide> onoe e
<ide> false
<ide> end | 1 |
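
A minimal sketch of the pattern behind this refactor (the real Homebrew `CLI::Parser` is more elaborate; the `Args` class here is a hypothetical stand-in): flags are parsed once into an object that exposes predicate methods, instead of probing the global `ARGV` at every call site.

```ruby
class Args
  def initialize(argv)
    @argv = argv
  end

  def bump?
    @argv.include?("--bump")
  end

  def clean?
    @argv.include?("--clean")
  end
end

args = Args.new(["--bump"])
do_bump = args.bump? && !args.clean? # mirrors the rewritten condition above
puts do_bump # => true
```
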
Python | Python | fix default bool in argparser | c9486fd0f515c38b0a525ceb5348c4b8bf2d4d9c | <ide><path>src/transformers/hf_argparser.py
<ide> def _add_dataclass_arguments(self, dtype: DataClassType):
<ide> # Hack because type=bool in argparse does not behave as we want.
<ide> kwargs["type"] = string_to_bool
<ide> if field.type is bool or (field.default is not None and field.default is not dataclasses.MISSING):
<del> # Default value is True if we have no default when of type bool.
<del> default = True if field.default is dataclasses.MISSING else field.default
<add> # Default value is False if we have no default when of type bool.
<add> default = False if field.default is dataclasses.MISSING else field.default
<ide> # This is the value that will get picked if we don't include --field_name in any way
<ide> kwargs["default"] = default
<ide> # This tells argparse we accept 0 or 1 value after --field_name
<ide><path>tests/test_hf_argparser.py
<ide> def test_basic(self):
<ide> expected.add_argument("--foo", type=int, required=True)
<ide> expected.add_argument("--bar", type=float, required=True)
<ide> expected.add_argument("--baz", type=str, required=True)
<del> expected.add_argument("--flag", type=string_to_bool, default=True, const=True, nargs="?")
<add> expected.add_argument("--flag", type=string_to_bool, default=False, const=True, nargs="?")
<ide> self.argparsersEqual(parser, expected)
<ide>
<add> args = ["--foo", "1", "--baz", "quux", "--bar", "0.5"]
<add> (example,) = parser.parse_args_into_dataclasses(args, look_for_args_file=False)
<add> self.assertFalse(example.flag)
<add>
<ide> def test_with_default(self):
<ide> parser = HfArgumentParser(WithDefaultExample)
<ide> | 2 |
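
A standalone sketch of the behavior under test, using plain `argparse` rather than the real `HfArgumentParser` (`string_to_bool` below is a simplified reimplementation of the helper imported in the test): with `nargs="?"` and `const=True`, a bare `--flag` opts in, so the implicit default must be `False`; with the old `default=True`, omitting the flag could never yield `False`.

```python
import argparse

def string_to_bool(v):  # simplified stand-in for transformers' helper
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError("boolean value expected, got %r" % (v,))

parser = argparse.ArgumentParser()
parser.add_argument("--flag", type=string_to_bool, nargs="?", const=True, default=False)

print(parser.parse_args([]).flag)                # False (the fixed default)
print(parser.parse_args(["--flag"]).flag)        # True  (const applies)
print(parser.parse_args(["--flag", "no"]).flag)  # False (explicit value)
```
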
Python | Python | make top numpy __init__ importable from python3 | 26e1d7f4739e39ee16f73685bbcefdac03ae4866 | <ide><path>numpy/__init__.py
<ide>
<ide> if __NUMPY_SETUP__:
<ide> import sys as _sys
<del> print >> _sys.stderr, 'Running from numpy source directory.'
<add> _sys.stderr.write('Running from numpy source directory.')
<ide> del _sys
<ide> else:
<ide> try:
<ide> from numpy.__config__ import show as show_config
<del> except ImportError, e:
<add> except ImportError:
<ide> msg = """Error importing numpy: you should not try to import numpy from
<ide> its source directory; please exit the numpy source tree, and relaunch
<ide> your python interpreter from there.""" | 1
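
For context, both removed lines use Python 2-only syntax, so under Python 3 the module failed at parse time before any code ran; the replacements are valid on both interpreters. A small illustration (`no_such_module` is a deliberately nonexistent name):

```python
import sys

# Python 2 only: print >> sys.stderr, "message"   -> SyntaxError on Python 3
# Python 2 only: except ImportError, e:           -> SyntaxError on Python 3
sys.stderr.write("Running from numpy source directory.")  # fine on 2 and 3

try:
    import no_such_module
except ImportError:  # drop the ", e" binding; use "as e" when the exception is needed
    pass
```
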
Ruby | Ruby | remove redundant current_branch | d802b3755a6df6a9c1081061017d1d995c34771a | <ide><path>Library/Homebrew/dev-cmd/pr-pull.rb
<ide> def pr_pull
<ide> _, user, repo, pr = *url_match
<ide> odie "Not a GitHub pull request: #{arg}" unless pr
<ide>
<del> current_branch = Utils::Git.current_branch(tap.path)
<add> current_branch = tap.path.git_branch
<ide> origin_branch = Utils::Git.origin_branch(tap.path).split("/").last
<ide>
<ide> if current_branch != origin_branch || args.branch_okay? || args.clean?
<ide><path>Library/Homebrew/utils/git.rb
<ide> def origin_branch(repo)
<ide> end
<ide>
<ide> def current_branch(repo)
<del> Utils.popen_read("git", "-C", repo, "symbolic-ref", "--short", "HEAD").chomp.presence
<add> odeprecated "Utils::Git.current_branch(repo)", "Pathname(repo).git_branch"
<add> Pathname(repo).extend(GitRepositoryExtension).git_branch
<ide> end
<ide>
<ide> # Special case of `git cherry-pick` that permits non-verbose output and | 2 |
Mixed | Ruby | raise irreversiblemigration if no column given | 3771e4d51122e1ec22728029bae00f121d5d4e3b | <ide><path>activerecord/CHANGELOG.md
<add>* While removing index if column option is missing then raise IrreversibleMigration exception.
<add>
<add> Following code should raise `IrreversibleMigration`. But the code was
<add> failing since options is an array and not a hash.
<add>
<add> def change
<add> change_table :users do |t|
<add> t.remove_index [:name, :email]
<add> end
<add> end
<add>
<add> Fix was to check if the options is a Hash before operating on it.
<add>
<add> Fixes #10419.
<add>
<add> *Neeraj Singh*
<add>
<ide> * Do not overwrite manually built records during one-to-one nested attribute assignment
<ide>
<ide> For one-to-one nested associations, if you build the new (in-memory)
<ide><path>activerecord/lib/active_record/migration/command_recorder.rb
<ide> def invert_add_index(args)
<ide>
<ide> def invert_remove_index(args)
<ide> table, options = *args
<del> raise ActiveRecord::IrreversibleMigration, "remove_index is only reversible if given a :column option." unless options && options[:column]
<add>
<add> unless options && options.is_a?(Hash) && options[:column]
<add> raise ActiveRecord::IrreversibleMigration, "remove_index is only reversible if given a :column option."
<add> end
<ide>
<ide> options = options.dup
<ide> [:add_index, [table, options.delete(:column), options]]
<ide><path>activerecord/test/cases/invertible_migration_test.rb
<ide> def change
<ide> end
<ide> end
<ide>
<add> class RemoveIndexMigration1 < SilentMigration
<add> def self.up
<add> create_table("horses") do |t|
<add> t.column :name, :text
<add> t.column :color, :text
<add> t.index [:name, :color]
<add> end
<add> end
<add> end
<add>
<add> class RemoveIndexMigration2 < SilentMigration
<add> def change
<add> change_table("horses") do |t|
<add> t.remove_index [:name, :color]
<add> end
<add> end
<add> end
<add>
<ide> class LegacyMigration < ActiveRecord::Migration
<ide> def self.up
<ide> create_table("horses") do |t|
<ide> def test_no_reverse
<ide> end
<ide> end
<ide>
<add> def test_exception_on_removing_index_without_column_option
<add> RemoveIndexMigration1.new.migrate(:up)
<add> migration = RemoveIndexMigration2.new
<add> migration.migrate(:up)
<add>
<add> assert_raises(IrreversibleMigration) do
<add> migration.migrate(:down)
<add> end
<add> end
<add>
<ide> def test_migrate_up
<ide> migration = InvertibleMigration.new
<ide> migration.migrate(:up) | 3 |
Go | Go | move "image_delete" to daemon/image_delete.go | 7a5e3df1625df24d52e2c863706076c59803cff8 | <ide><path>daemon/daemon.go
<ide> type Daemon struct {
<ide> func (daemon *Daemon) Install(eng *engine.Engine) error {
<ide> // FIXME: rename "delete" to "rm" for consistency with the CLI command
<ide> // FIXME: rename ContainerDestroy to ContainerRm for consistency with the CLI command
<add> // FIXME: remove ImageDelete's dependency on Daemon, then move to graph/
<ide> for name, method := range map[string]engine.Handler{
<ide> "attach": daemon.ContainerAttach,
<ide> "commit": daemon.ContainerCommit,
<ide> func (daemon *Daemon) Install(eng *engine.Engine) error {
<ide> "top": daemon.ContainerTop,
<ide> "unpause": daemon.ContainerUnpause,
<ide> "wait": daemon.ContainerWait,
<add> "image_delete": daemon.ImageDelete, // FIXME: see above
<ide> } {
<ide> if err := eng.Register(name, method); err != nil {
<ide> return err
<ide><path>daemon/image_delete.go
<add>package daemon
<add>
<add>import (
<add> "fmt"
<add> "strings"
<add>
<add> "github.com/docker/docker/engine"
<add> "github.com/docker/docker/graph"
<add> "github.com/docker/docker/image"
<add> "github.com/docker/docker/pkg/parsers"
<add> "github.com/docker/docker/utils"
<add>)
<add>
<add>func (daemon *Daemon) ImageDelete(job *engine.Job) engine.Status {
<add> if n := len(job.Args); n != 1 {
<add> return job.Errorf("Usage: %s IMAGE", job.Name)
<add> }
<add> imgs := engine.NewTable("", 0)
<add> if err := daemon.DeleteImage(job.Eng, job.Args[0], imgs, true, job.GetenvBool("force"), job.GetenvBool("noprune")); err != nil {
<add> return job.Error(err)
<add> }
<add> if len(imgs.Data) == 0 {
<add> return job.Errorf("Conflict, %s wasn't deleted", job.Args[0])
<add> }
<add> if _, err := imgs.WriteListTo(job.Stdout); err != nil {
<add> return job.Error(err)
<add> }
<add> return engine.StatusOK
<add>}
<add>
<add>// FIXME: make this private and use the job instead
<add>func (daemon *Daemon) DeleteImage(eng *engine.Engine, name string, imgs *engine.Table, first, force, noprune bool) error {
<add> var (
<add> repoName, tag string
<add> tags = []string{}
<add> tagDeleted bool
<add> )
<add>
<add> // FIXME: please respect DRY and centralize repo+tag parsing in a single central place! -- shykes
<add> repoName, tag = parsers.ParseRepositoryTag(name)
<add> if tag == "" {
<add> tag = graph.DEFAULTTAG
<add> }
<add>
<add> img, err := daemon.Repositories().LookupImage(name)
<add> if err != nil {
<add> if r, _ := daemon.Repositories().Get(repoName); r != nil {
<add> return fmt.Errorf("No such image: %s:%s", repoName, tag)
<add> }
<add> return fmt.Errorf("No such image: %s", name)
<add> }
<add>
<add> if strings.Contains(img.ID, name) {
<add> repoName = ""
<add> tag = ""
<add> }
<add>
<add> byParents, err := daemon.Graph().ByParent()
<add> if err != nil {
<add> return err
<add> }
<add>
<add> //If delete by id, see if the id belong only to one repository
<add> if repoName == "" {
<add> for _, repoAndTag := range daemon.Repositories().ByID()[img.ID] {
<add> parsedRepo, parsedTag := parsers.ParseRepositoryTag(repoAndTag)
<add> if repoName == "" || repoName == parsedRepo {
<add> repoName = parsedRepo
<add> if parsedTag != "" {
<add> tags = append(tags, parsedTag)
<add> }
<add> } else if repoName != parsedRepo && !force {
<add> // the id belongs to multiple repos, like base:latest and user:test,
<add> // in that case return conflict
<add> return fmt.Errorf("Conflict, cannot delete image %s because it is tagged in multiple repositories, use -f to force", name)
<add> }
<add> }
<add> } else {
<add> tags = append(tags, tag)
<add> }
<add>
<add> if !first && len(tags) > 0 {
<add> return nil
<add> }
<add>
<add> //Untag the current image
<add> for _, tag := range tags {
<add> tagDeleted, err = daemon.Repositories().Delete(repoName, tag)
<add> if err != nil {
<add> return err
<add> }
<add> if tagDeleted {
<add> out := &engine.Env{}
<add> out.Set("Untagged", repoName+":"+tag)
<add> imgs.Add(out)
<add> eng.Job("log", "untag", img.ID, "").Run()
<add> }
<add> }
<add> tags = daemon.Repositories().ByID()[img.ID]
<add> if (len(tags) <= 1 && repoName == "") || len(tags) == 0 {
<add> if len(byParents[img.ID]) == 0 {
<add> if err := daemon.canDeleteImage(img.ID, force, tagDeleted); err != nil {
<add> return err
<add> }
<add> if err := daemon.Repositories().DeleteAll(img.ID); err != nil {
<add> return err
<add> }
<add> if err := daemon.Graph().Delete(img.ID); err != nil {
<add> return err
<add> }
<add> out := &engine.Env{}
<add> out.Set("Deleted", img.ID)
<add> imgs.Add(out)
<add> eng.Job("log", "delete", img.ID, "").Run()
<add> if img.Parent != "" && !noprune {
<add> err := daemon.DeleteImage(eng, img.Parent, imgs, false, force, noprune)
<add> if first {
<add> return err
<add> }
<add>
<add> }
<add>
<add> }
<add> }
<add> return nil
<add>}
<add>
<add>func (daemon *Daemon) canDeleteImage(imgID string, force, untagged bool) error {
<add> var message string
<add> if untagged {
<add> message = " (docker untagged the image)"
<add> }
<add> for _, container := range daemon.List() {
<add> parent, err := daemon.Repositories().LookupImage(container.Image)
<add> if err != nil {
<add> return err
<add> }
<add>
<add> if err := parent.WalkHistory(func(p *image.Image) error {
<add> if imgID == p.ID {
<add> if container.State.IsRunning() {
<add> if force {
<add> return fmt.Errorf("Conflict, cannot force delete %s because the running container %s is using it%s, stop it and retry", utils.TruncateID(imgID), utils.TruncateID(container.ID), message)
<add> }
<add> return fmt.Errorf("Conflict, cannot delete %s because the running container %s is using it%s, stop it and use -f to force", utils.TruncateID(imgID), utils.TruncateID(container.ID), message)
<add> } else if !force {
<add> return fmt.Errorf("Conflict, cannot delete %s because the container %s is using it%s, use -f to force", utils.TruncateID(imgID), utils.TruncateID(container.ID), message)
<add> }
<add> }
<add> return nil
<add> }); err != nil {
<add> return err
<add> }
<add> }
<add> return nil
<add>}
<ide><path>server/image.go
<ide> import (
<ide> "github.com/docker/docker/archive"
<ide> "github.com/docker/docker/builder"
<ide> "github.com/docker/docker/engine"
<del> "github.com/docker/docker/graph"
<ide> "github.com/docker/docker/image"
<ide> "github.com/docker/docker/pkg/parsers"
<ide> "github.com/docker/docker/registry"
<ide> func (srv *Server) ImagePush(job *engine.Job) engine.Status {
<ide> return engine.StatusOK
<ide> }
<ide>
<del>func (srv *Server) DeleteImage(name string, imgs *engine.Table, first, force, noprune bool) error {
<del> var (
<del> repoName, tag string
<del> tags = []string{}
<del> tagDeleted bool
<del> )
<del>
<del> repoName, tag = parsers.ParseRepositoryTag(name)
<del> if tag == "" {
<del> tag = graph.DEFAULTTAG
<del> }
<del>
<del> img, err := srv.daemon.Repositories().LookupImage(name)
<del> if err != nil {
<del> if r, _ := srv.daemon.Repositories().Get(repoName); r != nil {
<del> return fmt.Errorf("No such image: %s:%s", repoName, tag)
<del> }
<del> return fmt.Errorf("No such image: %s", name)
<del> }
<del>
<del> if strings.Contains(img.ID, name) {
<del> repoName = ""
<del> tag = ""
<del> }
<del>
<del> byParents, err := srv.daemon.Graph().ByParent()
<del> if err != nil {
<del> return err
<del> }
<del>
<del> //If delete by id, see if the id belong only to one repository
<del> if repoName == "" {
<del> for _, repoAndTag := range srv.daemon.Repositories().ByID()[img.ID] {
<del> parsedRepo, parsedTag := parsers.ParseRepositoryTag(repoAndTag)
<del> if repoName == "" || repoName == parsedRepo {
<del> repoName = parsedRepo
<del> if parsedTag != "" {
<del> tags = append(tags, parsedTag)
<del> }
<del> } else if repoName != parsedRepo && !force {
<del> // the id belongs to multiple repos, like base:latest and user:test,
<del> // in that case return conflict
<del> return fmt.Errorf("Conflict, cannot delete image %s because it is tagged in multiple repositories, use -f to force", name)
<del> }
<del> }
<del> } else {
<del> tags = append(tags, tag)
<del> }
<del>
<del> if !first && len(tags) > 0 {
<del> return nil
<del> }
<del>
<del> //Untag the current image
<del> for _, tag := range tags {
<del> tagDeleted, err = srv.daemon.Repositories().Delete(repoName, tag)
<del> if err != nil {
<del> return err
<del> }
<del> if tagDeleted {
<del> out := &engine.Env{}
<del> out.Set("Untagged", repoName+":"+tag)
<del> imgs.Add(out)
<del> srv.LogEvent("untag", img.ID, "")
<del> }
<del> }
<del> tags = srv.daemon.Repositories().ByID()[img.ID]
<del> if (len(tags) <= 1 && repoName == "") || len(tags) == 0 {
<del> if len(byParents[img.ID]) == 0 {
<del> if err := srv.canDeleteImage(img.ID, force, tagDeleted); err != nil {
<del> return err
<del> }
<del> if err := srv.daemon.Repositories().DeleteAll(img.ID); err != nil {
<del> return err
<del> }
<del> if err := srv.daemon.Graph().Delete(img.ID); err != nil {
<del> return err
<del> }
<del> out := &engine.Env{}
<del> out.Set("Deleted", img.ID)
<del> imgs.Add(out)
<del> srv.LogEvent("delete", img.ID, "")
<del> if img.Parent != "" && !noprune {
<del> err := srv.DeleteImage(img.Parent, imgs, false, force, noprune)
<del> if first {
<del> return err
<del> }
<del>
<del> }
<del>
<del> }
<del> }
<del> return nil
<del>}
<del>
<del>func (srv *Server) ImageDelete(job *engine.Job) engine.Status {
<del> if n := len(job.Args); n != 1 {
<del> return job.Errorf("Usage: %s IMAGE", job.Name)
<del> }
<del> imgs := engine.NewTable("", 0)
<del> if err := srv.DeleteImage(job.Args[0], imgs, true, job.GetenvBool("force"), job.GetenvBool("noprune")); err != nil {
<del> return job.Error(err)
<del> }
<del> if len(imgs.Data) == 0 {
<del> return job.Errorf("Conflict, %s wasn't deleted", job.Args[0])
<del> }
<del> if _, err := imgs.WriteListTo(job.Stdout); err != nil {
<del> return job.Error(err)
<del> }
<del> return engine.StatusOK
<del>}
<del>
<del>func (srv *Server) canDeleteImage(imgID string, force, untagged bool) error {
<del> var message string
<del> if untagged {
<del> message = " (docker untagged the image)"
<del> }
<del> for _, container := range srv.daemon.List() {
<del> parent, err := srv.daemon.Repositories().LookupImage(container.Image)
<del> if err != nil {
<del> return err
<del> }
<del>
<del> if err := parent.WalkHistory(func(p *image.Image) error {
<del> if imgID == p.ID {
<del> if container.State.IsRunning() {
<del> if force {
<del> return fmt.Errorf("Conflict, cannot force delete %s because the running container %s is using it%s, stop it and retry", utils.TruncateID(imgID), utils.TruncateID(container.ID), message)
<del> }
<del> return fmt.Errorf("Conflict, cannot delete %s because the running container %s is using it%s, stop it and use -f to force", utils.TruncateID(imgID), utils.TruncateID(container.ID), message)
<del> } else if !force {
<del> return fmt.Errorf("Conflict, cannot delete %s because the container %s is using it%s, use -f to force", utils.TruncateID(imgID), utils.TruncateID(container.ID), message)
<del> }
<del> }
<del> return nil
<del> }); err != nil {
<del> return err
<del> }
<del> }
<del> return nil
<del>}
<del>
<ide> func (srv *Server) poolAdd(kind, key string) (chan struct{}, error) {
<ide> srv.Lock()
<ide> defer srv.Unlock()
<ide><path>server/init.go
<ide> func InitServer(job *engine.Job) engine.Status {
<ide> job.Eng.Hack_SetGlobalVar("httpapi.daemon", srv.daemon)
<ide>
<ide> for name, handler := range map[string]engine.Handler{
<del> "tag": srv.ImageTag, // FIXME merge with "image_tag"
<del> "info": srv.DockerInfo,
<del> "log": srv.Log,
<del> "build": srv.Build,
<del> "pull": srv.ImagePull,
<del> "image_delete": srv.ImageDelete,
<del> "events": srv.Events,
<del> "push": srv.ImagePush,
<add> "tag": srv.ImageTag, // FIXME merge with "image_tag"
<add> "info": srv.DockerInfo,
<add> "log": srv.Log,
<add> "build": srv.Build,
<add> "pull": srv.ImagePull,
<add> "events": srv.Events,
<add> "push": srv.ImagePush,
<ide> } {
<ide> if err := job.Eng.Register(name, srv.handlerWrap(handler)); err != nil {
<ide> return job.Error(err) | 4 |
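
The registration block above follows a plain name-to-handler table. A self-contained sketch of that pattern (hypothetical types, not Docker's real `engine` package; `ImageDelete` stays on `Daemon` for now only because it still needs daemon state such as the graph and reference store):

```go
package main

import "fmt"

type Job struct{ Name string }

type Handler func(*Job) error

type Engine struct{ handlers map[string]Handler }

// Register wires a job name to a handler, rejecting duplicates.
func (e *Engine) Register(name string, h Handler) error {
	if _, dup := e.handlers[name]; dup {
		return fmt.Errorf("duplicate handler: %s", name)
	}
	e.handlers[name] = h
	return nil
}

func main() {
	eng := &Engine{handlers: map[string]Handler{}}
	for name, h := range map[string]Handler{
		"image_delete": func(j *Job) error { fmt.Println("ran", j.Name); return nil },
	} {
		if err := eng.Register(name, h); err != nil {
			panic(err)
		}
	}
	_ = eng.handlers["image_delete"](&Job{Name: "image_delete"})
}
```
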
Mixed | Go | add reference filter and deprecated filter param… | 820b809e70df8b9c7af00256182c48d935972a5c | <ide><path>api/server/router/image/backend.go
<ide> import (
<ide>
<ide> "github.com/docker/docker/api/types"
<ide> "github.com/docker/docker/api/types/backend"
<add> "github.com/docker/docker/api/types/filters"
<ide> "github.com/docker/docker/api/types/registry"
<ide> "golang.org/x/net/context"
<ide> )
<ide> type containerBackend interface {
<ide> type imageBackend interface {
<ide> ImageDelete(imageRef string, force, prune bool) ([]types.ImageDelete, error)
<ide> ImageHistory(imageName string) ([]*types.ImageHistory, error)
<del> Images(filterArgs string, filter string, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error)
<add> Images(imageFilters filters.Args, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error)
<ide> LookupImage(name string) (*types.ImageInspect, error)
<ide> TagImage(imageName, repository, tag string) error
<ide> ImagesPrune(config *types.ImagesPruneConfig) (*types.ImagesPruneReport, error)
<ide><path>api/server/router/image/image_routes.go
<ide> import (
<ide> "github.com/docker/docker/api/types"
<ide> "github.com/docker/docker/api/types/backend"
<ide> "github.com/docker/docker/api/types/container"
<add> "github.com/docker/docker/api/types/filters"
<ide> "github.com/docker/docker/api/types/versions"
<ide> "github.com/docker/docker/pkg/ioutils"
<ide> "github.com/docker/docker/pkg/streamformatter"
<ide> func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
<ide> return err
<ide> }
<ide>
<del> // FIXME: The filter parameter could just be a match filter
<del> images, err := s.backend.Images(r.Form.Get("filters"), r.Form.Get("filter"), httputils.BoolValue(r, "all"), false)
<add> imageFilters, err := filters.FromParam(r.Form.Get("filters"))
<add> if err != nil {
<add> return err
<add> }
<add>
<add> version := httputils.VersionFromContext(ctx)
<add> filterParam := r.Form.Get("filter")
<add> if versions.LessThan(version, "1.28") && filterParam != "" {
<add> imageFilters.Add("reference", filterParam)
<add> }
<add>
<add> images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)
<ide> if err != nil {
<ide> return err
<ide> }
<ide><path>api/types/client.go
<ide> type ImageImportOptions struct {
<ide>
<ide> // ImageListOptions holds parameters to filter the list of images with.
<ide> type ImageListOptions struct {
<del> MatchName string
<del> All bool
<del> Filters filters.Args
<add> All bool
<add> Filters filters.Args
<ide> }
<ide>
<ide> // ImageLoadResponse returns information to the client about a load process.
<ide><path>cli/command/image/list.go
<ide> func newListCommand(dockerCli *command.DockerCli) *cobra.Command {
<ide> func runImages(dockerCli *command.DockerCli, opts imagesOptions) error {
<ide> ctx := context.Background()
<ide>
<add> filters := opts.filter.Value()
<add> if opts.matchName != "" {
<add> filters.Add("reference", opts.matchName)
<add> }
<add>
<ide> options := types.ImageListOptions{
<del> MatchName: opts.matchName,
<del> All: opts.all,
<del> Filters: opts.filter.Value(),
<add> All: opts.all,
<add> Filters: filters,
<ide> }
<ide>
<ide> images, err := dockerCli.Client().ImageList(ctx, options)
<ide><path>client/image_list.go
<ide> func (cli *Client) ImageList(ctx context.Context, options types.ImageListOptions
<ide> }
<ide> query.Set("filters", filterJSON)
<ide> }
<del> if options.MatchName != "" {
<del> // FIXME rename this parameter, to not be confused with the filters flag
<del> query.Set("filter", options.MatchName)
<del> }
<ide> if options.All {
<ide> query.Set("all", "1")
<ide> }
<ide><path>client/image_list_test.go
<ide> func TestImageList(t *testing.T) {
<ide> "filters": "",
<ide> },
<ide> },
<del> {
<del> options: types.ImageListOptions{
<del> All: true,
<del> MatchName: "image_name",
<del> },
<del> expectedQueryParams: map[string]string{
<del> "all": "1",
<del> "filter": "image_name",
<del> "filters": "",
<del> },
<del> },
<ide> {
<ide> options: types.ImageListOptions{
<ide> Filters: filters,
<ide><path>daemon/disk_usage.go
<ide> import (
<ide> "github.com/Sirupsen/logrus"
<ide> "github.com/docker/distribution/digest"
<ide> "github.com/docker/docker/api/types"
<add> "github.com/docker/docker/api/types/filters"
<ide> "github.com/docker/docker/layer"
<ide> "github.com/docker/docker/pkg/directory"
<ide> "github.com/docker/docker/volume"
<ide> func (daemon *Daemon) SystemDiskUsage() (*types.DiskUsage, error) {
<ide> }
<ide>
<ide> // Get all top images with extra attributes
<del> allImages, err := daemon.Images("", "", false, true)
<add> allImages, err := daemon.Images(filters.NewArgs(), false, true)
<ide> if err != nil {
<ide> return nil, fmt.Errorf("failed to retrieve image list: %v", err)
<ide> }
<ide><path>daemon/images.go
<ide> package daemon
<ide> import (
<ide> "encoding/json"
<ide> "fmt"
<del> "path"
<ide> "sort"
<ide> "time"
<ide>
<ide> "github.com/pkg/errors"
<ide>
<add> "github.com/docker/distribution/reference"
<ide> "github.com/docker/docker/api/types"
<ide> "github.com/docker/docker/api/types/filters"
<ide> "github.com/docker/docker/container"
<ide> "github.com/docker/docker/image"
<ide> "github.com/docker/docker/layer"
<del> "github.com/docker/docker/reference"
<ide> )
<ide>
<ide> var acceptedImageFilterTags = map[string]bool{
<del> "dangling": true,
<del> "label": true,
<del> "before": true,
<del> "since": true,
<add> "dangling": true,
<add> "label": true,
<add> "before": true,
<add> "since": true,
<add> "reference": true,
<ide> }
<ide>
<ide> // byCreated is a temporary type used to sort a list of images by creation
<ide> func (daemon *Daemon) Map() map[image.ID]*image.Image {
<ide> // filter is a shell glob string applied to repository names. The argument
<ide> // named all controls whether all images in the graph are filtered, or just
<ide> // the heads.
<del>func (daemon *Daemon) Images(filterArgs, filter string, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error) {
<add>func (daemon *Daemon) Images(imageFilters filters.Args, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error) {
<ide> var (
<ide> allImages map[image.ID]*image.Image
<ide> err error
<ide> danglingOnly = false
<ide> )
<ide>
<del> imageFilters, err := filters.FromParam(filterArgs)
<del> if err != nil {
<del> return nil, err
<del> }
<ide> if err := imageFilters.Validate(acceptedImageFilterTags); err != nil {
<ide> return nil, err
<ide> }
<ide> func (daemon *Daemon) Images(filterArgs, filter string, all bool, withExtraAttrs
<ide> var allLayers map[layer.ChainID]layer.Layer
<ide> var allContainers []*container.Container
<ide>
<del> var filterTagged bool
<del> if filter != "" {
<del> filterRef, err := reference.ParseNamed(filter)
<del> if err == nil { // parse error means wildcard repo
<del> if _, ok := filterRef.(reference.NamedTagged); ok {
<del> filterTagged = true
<del> }
<del> }
<del> }
<del>
<ide> for id, img := range allImages {
<ide> if beforeFilter != nil {
<ide> if img.Created.Equal(beforeFilter.Created) || img.Created.After(beforeFilter.Created) {
<ide> func (daemon *Daemon) Images(filterArgs, filter string, all bool, withExtraAttrs
<ide> newImage := newImage(img, size)
<ide>
<ide> for _, ref := range daemon.referenceStore.References(id.Digest()) {
<del> if filter != "" { // filter by tag/repo name
<del> if filterTagged { // filter by tag, require full ref match
<del> if ref.String() != filter {
<del> continue
<add> if imageFilters.Include("reference") {
<add> var found bool
<add> var matchErr error
<add> for _, pattern := range imageFilters.Get("reference") {
<add> found, matchErr = reference.Match(pattern, ref)
<add> if matchErr != nil {
<add> return nil, matchErr
<ide> }
<del> } else if matched, err := path.Match(filter, ref.Name()); !matched || err != nil { // name only match, FIXME: docs say exact
<add> }
<add> if !found {
<ide> continue
<ide> }
<ide> }
<ide> func (daemon *Daemon) Images(filterArgs, filter string, all bool, withExtraAttrs
<ide> //dangling=false case, so dangling image is not needed
<ide> continue
<ide> }
<del> if filter != "" { // skip images with no references if filtering by tag
<add> if imageFilters.Include("reference") { // skip images with no references if filtering by reference
<ide> continue
<ide> }
<ide> newImage.RepoDigests = []string{"<none>@<none>"}
<ide><path>docs/deprecated.md
<ide> The following list of features are deprecated in Engine.
<ide> To learn more about Docker Engine's deprecation policy,
<ide> see [Feature Deprecation Policy](https://docs.docker.com/engine/#feature-deprecation-policy).
<ide>
<add>## `filter` param for `/images/json` endpoint
<add>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/tag/v1.13.0)**
<add>
<add>**Target For Removal In Release: v1.16**
<add>
<add>The `filter` param to filter the list of image by reference (name or name:tag) is now implemented as a regular filter, named `reference`.
<ide>
<ide> ### `repository:shortid` image references
<del>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/)**
<add>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/tag/v1.13.0)**
<ide>
<ide> **Target For Removal In Release: v1.16**
<ide>
<ide> `repository:shortid` syntax for referencing images is very little used, collides with tag references, and can be confused with digest references.
<ide>
<ide> ### `docker daemon` subcommand
<del>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/)**
<add>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/tag/v1.13.0)**
<ide>
<ide> **Target For Removal In Release: v1.16**
<ide>
<ide> The daemon is moved to a separate binary (`dockerd`), and should be used instead.
<ide>
<ide> ### Duplicate keys with conflicting values in engine labels
<del>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/)**
<add>**Deprecated In Release: [v1.13](https://github.com/docker/docker/releases/tag/v1.13.0)**
<ide>
<ide> **Target For Removal In Release: v1.16**
<ide>
<ide><path>docs/reference/api/docker_remote_api_v1.25.md
<ide> references on the command line.
<ide> - `label=key` or `label="key=value"` of an image label
<ide> - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`)
<ide> - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`)
<del>- **filter** - only return images with the specified name
<add> - `reference`=(`<image-name>[:<tag>]`)
<ide>
<ide> ### Build image from a Dockerfile
<ide> | 10 |
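
A hedged usage sketch of the new filter from the Go client of that era (`client.NewEnvClient` was the constructor then; the exact client surface is assumed). The CLI equivalent is `docker images --filter reference='alpine:*'`:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewEnvClient() // constructor of that era
	if err != nil {
		panic(err)
	}
	f := filters.NewArgs()
	f.Add("reference", "alpine:*") // replaces the deprecated `filter` query param
	images, err := cli.ImageList(context.Background(), types.ImageListOptions{
		Filters: f,
	})
	if err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags)
	}
}
```
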
Javascript | Javascript | make .bind() always asynchronous | 332fea5ac1816e498030109c4211bca24a7fa667 | <ide><path>lib/dgram.js
<ide> function isIP(address) {
<ide>
<ide>
<ide> function lookup(address, family, callback) {
<del> // implicit 'bind before send' needs to run on the same tick
<del> var matchedFamily = isIP(address);
<del> if (matchedFamily)
<del> return callback(null, address, matchedFamily);
<del>
<ide> if (!dns)
<ide> dns = require('dns');
<ide>
<ide> exports.createSocket = function(type, listener) {
<ide> };
<ide>
<ide>
<del>Socket.prototype.bind = function(port, address) {
<add>Socket.prototype.bind = function(port, address, callback) {
<ide> var self = this;
<ide>
<ide> self._healthCheck();
<ide>
<add> if (typeof callback === 'function')
<add> self.once('listening', callback);
<add>
<ide> // resolve address first
<ide> self._handle.lookup(address, function(err, ip) {
<del> if (!err) {
<del> if (self._handle.bind(ip, port || 0, /*flags=*/0)) {
<del> err = errnoException(errno, 'bind');
<del> }
<del> else {
<del> self._bound = true;
<del> self._startReceiving();
<del> self.emit('listening');
<del> }
<del> }
<add> if (!self._handle)
<add> return; // handle has been closed in the mean time
<ide>
<ide> if (err) {
<del> // caller may not have had a chance yet to register its
<del> // error event listener so defer the error to the next tick
<del> process.nextTick(function() {
<del> self.emit('error', err);
<del> });
<add> self.emit('error', err);
<add> return;
<add> }
<add>
<add> if (self._handle.bind(ip, port || 0, /*flags=*/ 0)) {
<add> self.emit('error', errnoException(errno, 'bind'));
<add> return;
<ide> }
<add>
<add> self._handle.onmessage = onMessage;
<add> self._handle.recvStart();
<add> self._receiving = true;
<add> self._bound = true;
<add> self.fd = -42; // compatibility hack
<add>
<add> self.emit('listening');
<ide> });
<ide> };
<ide>
<ide> Socket.prototype.send = function(buffer,
<ide> callback = callback || noop;
<ide>
<ide> self._healthCheck();
<del> self._startReceiving();
<add>
<add> if (!self._bound) {
<add> self.bind(0, null);
<add> self.once('listening', function() {
<add> self.send(buffer, offset, length, port, address, callback);
<add> });
<add> return;
<add> }
<ide>
<ide> self._handle.lookup(address, function(err, ip) {
<ide> if (err) {
<ide> Socket.prototype._healthCheck = function() {
<ide> };
<ide>
<ide>
<del>Socket.prototype._startReceiving = function() {
<del> if (this._receiving)
<del> return;
<del>
<del> if (!this._bound) {
<del> this.bind(); // bind to random port
<del>
<del> // sanity check
<del> if (!this._bound)
<del> throw new Error('implicit bind failed');
<del> }
<del>
<del> this._handle.onmessage = onMessage;
<del> this._handle.recvStart();
<del> this._receiving = true;
<del> this.fd = -42; // compatibility hack
<del>};
<del>
<del>
<ide> Socket.prototype._stopReceiving = function() {
<ide> if (!this._receiving)
<ide> return; | 1 |
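
What the change means for callers, shown in the era's style: `bind()` now completes asynchronously, so wait for the callback (or the `'listening'` event) instead of assuming the socket is bound when `bind()` returns; a `send()` before an explicit bind transparently queues behind the implicit bind.

```js
var dgram = require('dgram');
var socket = dgram.createSocket('udp4');

socket.on('error', function(err) {
  console.error('socket error:', err);
  socket.close();
});

socket.bind(41234, '127.0.0.1', function() {
  // runs once the async bind has finished ('listening' has fired)
  console.log('bound to port', socket.address().port);
  socket.close();
});
```
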
Ruby | Ruby | use real basename for output | 398845891938879f3df3772102903827b4e76b17 | <ide><path>Library/Homebrew/download_strategy.rb
<ide> class AbstractDownloadStrategy
<ide>
<ide> module Pourable
<ide> def stage
<del> ohai "Pouring #{cached_location.basename}"
<add> ohai "Pouring #{basename}"
<ide> super
<ide> end
<ide> end
<ide> def clear_cache
<ide> end
<ide>
<ide> def basename
<del> nil
<add> cached_location.basename
<ide> end
<ide>
<ide> private | 1 |
Ruby | Ruby | improve `supported_repos` array syntax | 63a1a078b9b101fd5654ffcdf099632f3f258851 | <ide><path>Library/Homebrew/dev-cmd/contributions.rb
<ide> module Homebrew
<ide>
<ide> module_function
<ide>
<del> SUPPORTED_REPOS = (
<del> %w[brew core cask] +
<del> OFFICIAL_CMD_TAPS.keys.map { |t| t.delete_prefix("homebrew/") } +
<del> OFFICIAL_CASK_TAPS
<del> ).freeze
<add> SUPPORTED_REPOS = [
<add> %w[brew core cask],
<add> OFFICIAL_CMD_TAPS.keys.map { |t| t.delete_prefix("homebrew/") },
<add> OFFICIAL_CASK_TAPS,
<add> ].flatten.freeze
<ide>
<ide> sig { returns(CLI::Parser) }
<ide> def contributions_args | 1 |
Javascript | Javascript | add meridiemhour to locales that need it | 274e83ca00c74d0b48b04dadf503978cbce9ade5 | <ide><path>locale/hi.js
<ide> },
<ide> // Hindi notation for meridiems is quite fuzzy in practice. While there exists
<ide> // a rigid notion of a 'Pahar', it is not used as rigidly in modern Hindi.
<del> meridiemParse: /रात|सुबह|दोपहर|शाम|रात/,
<del> isPM: function (input) {
<del> // TODO: This is incorrect (look at cutoffs). We need a better isPM interface.
<del> return /^(दोपहर|शाम|रात)$/.test(input);
<add> meridiemParse: /रात|सुबह|दोपहर|शाम/,
<add> meridiemHour : function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === 'रात') {
<add> return hour < 4 ? hour : hour + 12;
<add> } else if (meridiem === 'सुबह') {
<add> return hour;
<add> } else if (meridiem === 'दोपहर') {
<add> return hour >= 10 ? hour : hour + 12;
<add> } else if (meridiem === 'शाम') {
<add> return hour + 12;
<add> }
<ide> },
<ide> meridiem : function (hour, minute, isLower) {
<ide> if (hour < 4) {
<ide><path>locale/id.js
<ide> LLLL : 'dddd, D MMMM YYYY [pukul] LT'
<ide> },
<ide> meridiemParse: /pagi|siang|sore|malam/,
<del> isPM: function (input) {
<del> // TODO: This is incorrect (look at cutoffs).
<del> return /^(siang|sore|malam)$/.test(input);
<add> meridiemHour : function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === 'pagi') {
<add> return hour;
<add> } else if (meridiem === 'siang') {
<add> return hour >= 11 ? hour : hour + 12;
<add> } else if (meridiem === 'sore' || meridiem === 'malam') {
<add> return hour + 12;
<add> }
<ide> },
<ide> meridiem : function (hours, minutes, isLower) {
<ide> if (hours < 11) {
<ide><path>locale/mr.js
<ide> return symbolMap[match];
<ide> });
<ide> },
<del> meridiemParse: /रात्री|सकाळी|दुपारी|सायंकाळी|रात्री/,
<del> isPM : function (input) {
<del> // TODO: This is wrong.
<del> return /^(दुपारी|सायंकाळी|रात्री)$/.test(input);
<add> meridiemParse: /रात्री|सकाळी|दुपारी|सायंकाळी/,
<add> meridiemHour : function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === 'रात्री') {
<add> return hour < 4 ? hour : hour + 12;
<add> } else if (meridiem === 'सकाळी') {
<add> return hour;
<add> } else if (meridiem === 'दुपारी') {
<add> return hour >= 10 ? hour : hour + 12;
<add> } else if (meridiem === 'सायंकाळी') {
<add> return hour + 12;
<add> }
<ide> },
<ide> meridiem: function (hour, minute, isLower)
<ide> {
<ide><path>locale/ms-my.js
<ide> LLLL : 'dddd, D MMMM YYYY [pukul] LT'
<ide> },
<ide> meridiemParse: /pagi|tengahari|petang|malam/,
<del> isPM: function (input) {
<del> // TODO: This is wrong.
<del> return /^(tengahari|petang|malam)$/.test(input);
<add> meridiemHour: function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === 'pagi') {
<add> return hour;
<add> } else if (meridiem === 'tengahari') {
<add> return hour >= 11 ? hour : hour + 12;
<add> } else if (meridiem === 'petang' || meridiem === 'malam') {
<add> return hour + 12;
<add> }
<ide> },
<ide> meridiem : function (hours, minutes, isLower) {
<ide> if (hours < 11) {
<ide><path>locale/ne.js
<ide> });
<ide> },
<ide> meridiemParse: /राती|बिहान|दिउँसो|बेलुका|साँझ|राती/,
<del> isPM : function (input) {
<del> // TODO: This is wrong.
<del> return /^(दिउँसो|बेलुका|साँझ|राती)$/.test(input);
<add> meridiemHour : function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === 'राती') {
<add> return hour < 3 ? hour : hour + 12;
<add> } else if (meridiem === 'बिहान') {
<add> return hour;
<add> } else if (meridiem === 'दिउँसो') {
<add> return hour >= 10 ? hour : hour + 12;
<add> } else if (meridiem === 'बेलुका' || meridiem === 'साँझ') {
<add> return hour + 12;
<add> }
<ide> },
<ide> meridiem : function (hour, minute, isLower) {
<ide> if (hour < 3) {
<ide><path>locale/ta.js
<ide>
<ide>
<ide> // refer http://ta.wikipedia.org/s/1er1
<del>
<del> // TODO: This is pretty wrong (when hour is equal to 6 10, 14, 18, 20,
<del> // 24 (0). Also it doesn't split at 12 (noon).
<add> meridiemParse: /யாமம்|வைகறை|காலை|நண்பகல்|எற்பாடு|மாலை/,
<ide> meridiem : function (hour, minute, isLower) {
<del> if (hour >= 6 && hour <= 10) {
<del> return ' காலை';
<del> } else if (hour >= 10 && hour <= 14) {
<del> return ' நண்பகல்';
<del> } else if (hour >= 14 && hour <= 18) {
<del> return ' எற்பாடு';
<del> } else if (hour >= 18 && hour <= 20) {
<del> return ' மாலை';
<del> } else if (hour >= 20 && hour <= 24) {
<del> return ' இரவு';
<del> } else if (hour >= 0 && hour <= 6) {
<del> return ' வைகறை';
<add> if (hour < 2) {
<add> return ' யாமம்';
<add> } else if (hour < 6) {
<add> return ' வைகறை'; // வைகறை
<add> } else if (hour < 10) {
<add> return ' காலை'; // காலை
<add> } else if (hour < 14) {
<add> return ' நண்பகல்'; // நண்பகல்
<add> } else if (hour < 18) {
<add> return ' எற்பாடு'; // எற்பாடு
<add> } else if (hour < 22) {
<add> return ' மாலை'; // மாலை
<add> } else {
<add> return ' யாமம்';
<add> }
<add> },
<add> meridiemHour : function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === 'யாமம்') {
<add> return hour < 2 ? hour : hour + 12;
<add> } else if (meridiem === 'வைகறை' || meridiem === 'காலை') {
<add> return hour;
<add> } else if (meridiem === 'நண்பகல்') {
<add> return hour >= 10 ? hour : hour + 12;
<add> } else {
<add> return hour + 12;
<ide> }
<ide> },
<ide> week : {
<ide><path>locale/zh-cn.js
<ide> llll : 'YYYY年MMMD日ddddLT'
<ide> },
<ide> meridiemParse: /凌晨|早上|上午|中午|下午|晚上/,
<del> isPM: function (input) {
<del> // TODO: This is wrong.
<del> return /^(中午|下午|晚上)$/.test(input);
<add> meridiemHour: function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === '凌晨' || meridiem === '早上' ||
<add> meridiem === '上午') {
<add> return hour;
<add> } else if (meridiem === '下午' || meridiem === '晚上') {
<add> return hour + 12;
<add> } else {
<add> // '中午'
<add> return hour >= 11 ? hour : hour + 12;
<add> }
<ide> },
<ide> meridiem : function (hour, minute, isLower) {
<ide> var hm = hour * 100 + minute;
<ide><path>locale/zh-tw.js
<ide> llll : 'YYYY年MMMD日ddddLT'
<ide> },
<ide> meridiemParse: /早上|上午|中午|下午|晚上/,
<del> isPM: function (input) {
<del> // TODO: This is wrong.
<del> return /^(中午|下午|晚上)$/.test(input);
<add> meridiemHour : function (hour, meridiem) {
<add> if (hour === 12) {
<add> hour = 0;
<add> }
<add> if (meridiem === '早上' || meridiem === '上午') {
<add> return hour;
<add> } else if (meridiem === '中午') {
<add> return hour >= 11 ? hour : hour + 12;
<add> } else if (meridiem === '下午' || meridiem === '晚上') {
<add> return hour + 12;
<add> }
<ide> },
<ide> meridiem : function (hour, minute, isLower) {
<ide> var hm = hour * 100 + minute;
<ide><path>test/locale/hi.js
<ide> exports['locale:hi'] = {
<ide> test.done();
<ide> },
<ide>
<del> 'meridiem' : function (test) {
<add> 'meridiem invariant' : function (test) {
<ide> test.equal(moment([2011, 2, 23, 2, 30]).format('a'), 'रात', 'before dawn');
<ide> test.equal(moment([2011, 2, 23, 9, 30]).format('a'), 'सुबह', 'morning');
<ide> test.equal(moment([2011, 2, 23, 14, 30]).format('a'), 'दोपहर', 'during day');
<ide> exports['locale:hi'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/id.js
<ide> exports['locale:id'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/mr.js
<ide> exports['locale:mr'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/ms-my.js
<ide> exports['locale:ms-my'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'lenient ordinal parsing of number' : function (test) {
<ide> var i, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/ne.js
<ide> exports['locale:ne'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/ta.js
<ide> exports['locale:ta'] = {
<ide> },
<ide>
<ide> 'meridiem' : function (test) {
<add> test.equal(moment([2011, 2, 23, 0, 30]).format('a'), ' யாமம்', '(after) midnight');
<ide> test.equal(moment([2011, 2, 23, 2, 30]).format('a'), ' வைகறை', 'before dawn');
<ide> test.equal(moment([2011, 2, 23, 9, 30]).format('a'), ' காலை', 'morning');
<del> test.equal(moment([2011, 2, 23, 14, 30]).format('a'), ' நண்பகல்', 'during day');
<add> test.equal(moment([2011, 2, 23, 14, 30]).format('a'), ' எற்பாடு', 'during day');
<ide> test.equal(moment([2011, 2, 23, 17, 30]).format('a'), ' எற்பாடு', 'evening');
<ide> test.equal(moment([2011, 2, 23, 19, 30]).format('a'), ' மாலை', 'late evening');
<del> test.equal(moment([2011, 2, 23, 21, 20]).format('a'), ' இரவு', 'night');
<add> test.equal(moment([2011, 2, 23, 23, 30]).format('a'), ' யாமம்', '(before) midnight');
<ide> test.done();
<ide> },
<ide>
<ide> exports['locale:ta'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/zh-cn.js
<ide> exports['locale:zh-cn'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) {
<ide><path>test/locale/zh-tw.js
<ide> exports['locale:zh-tw'] = {
<ide> test.done();
<ide> },
<ide>
<add> 'meridiem invariant' : function (test) {
<add> var h, m, t1, t2;
<add> for (h = 0; h < 24; ++h) {
<add> for (m = 0; m < 60; m += 15) {
<add> t1 = moment.utc([2000, 0, 1, h, m]);
<add> t2 = moment(t1.format('A h:mm'), 'A h:mm');
<add> test.equal(t2.format('HH:mm'), t1.format('HH:mm'),
<add> 'meridiem at ' + t1.format('HH:mm'));
<add> }
<add> }
<add>
<add> test.done();
<add> },
<add>
<ide> 'strict ordinal parsing' : function (test) {
<ide> var i, ordinalStr, testMoment;
<ide> for (i = 1; i <= 31; ++i) { | 16 |
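
What the new `meridiemHour` hooks buy, as a hedged usage sketch (a moment build with these locale files is assumed): parsing a 12-hour time together with a locale meridiem now round-trips to the correct 24-hour time, which is the invariant the added tests sweep every quarter hour.

```js
var moment = require('moment');
require('moment/locale/zh-cn'); // path assumed for a locale-enabled build

moment.locale('zh-cn');
console.log(moment('下午 3:00', 'A h:mm').format('HH:mm')); // "15:00"
console.log(moment('早上 3:00', 'A h:mm').format('HH:mm')); // "03:00"
```
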
Java | Java | add flux<part> serverwebexchange.getparts() | 11c7907a596d97585699440538b478d6e0c7edcc | <ide><path>spring-web/src/main/java/org/springframework/web/server/ServerWebExchange.java
<ide> import java.util.function.Consumer;
<ide> import java.util.function.Function;
<ide>
<add>import reactor.core.publisher.Flux;
<ide> import reactor.core.publisher.Mono;
<ide>
<ide> import org.springframework.context.ApplicationContext;
<ide> default <T> T getAttributeOrDefault(String name, T defaultValue) {
<ide> * cached so that this method is safe to call more than once.
<ide> * <p><strong>Note:</strong>the {@linkplain Part#content() contents} of each
<ide> * part is not cached, and can only be read once.
<add> * @see #getParts()
<ide> */
<ide> Mono<MultiValueMap<String, Part>> getMultipartData();
<ide>
<add> /**
<add> * Return the parts of a multipart request if the Content-Type is
<add> * {@code "multipart/form-data"} or an empty flux otherwise.
<add> * <p><strong>Note:</strong> calling this method causes the request body to
<add> * be read and parsed in full and the resulting {@code Flux} is
<add> * cached so that this method is safe to call more than once.
<add> * <p><strong>Note:</strong>the {@linkplain Part#content() contents} of each
<add> * part is not cached, and can only be read once.
<add> * @since 5.2
<add> * @see #getMultipartData()
<add> */
<add> Flux<Part> getParts();
<add>
<ide> /**
<ide> * Return the {@link LocaleContext} using the configured
<ide> * {@link org.springframework.web.server.i18n.LocaleContextResolver}.
<ide><path>spring-web/src/main/java/org/springframework/web/server/ServerWebExchangeDecorator.java
<ide> /*
<del> * Copyright 2002-2017 the original author or authors.
<add> * Copyright 2002-2019 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> import java.util.Map;
<ide> import java.util.function.Function;
<ide>
<add>import reactor.core.publisher.Flux;
<ide> import reactor.core.publisher.Mono;
<ide>
<ide> import org.springframework.context.ApplicationContext;
<ide> public Mono<MultiValueMap<String, Part>> getMultipartData() {
<ide> return getDelegate().getMultipartData();
<ide> }
<ide>
<add> @Override
<add> public Flux<Part> getParts() {
<add> return getDelegate().getParts();
<add> }
<add>
<ide> @Override
<ide> public boolean isNotModified() {
<ide> return getDelegate().isNotModified();
<ide><path>spring-web/src/main/java/org/springframework/web/server/adapter/DefaultServerWebExchange.java
<ide> /*
<del> * Copyright 2002-2018 the original author or authors.
<add> * Copyright 2002-2019 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> import java.util.concurrent.ConcurrentHashMap;
<ide> import java.util.function.Function;
<ide>
<add>import reactor.core.publisher.Flux;
<ide> import reactor.core.publisher.Mono;
<ide>
<ide> import org.springframework.context.ApplicationContext;
<ide> public class DefaultServerWebExchange implements ServerWebExchange {
<ide> private static final ResolvableType FORM_DATA_TYPE =
<ide> ResolvableType.forClassWithGenerics(MultiValueMap.class, String.class, String.class);
<ide>
<del> private static final ResolvableType MULTIPART_DATA_TYPE = ResolvableType.forClassWithGenerics(
<del> MultiValueMap.class, String.class, Part.class);
<add> private static final ResolvableType PARTS_DATA_TYPE = ResolvableType.forClass(Part.class);
<ide>
<ide> private static final Mono<MultiValueMap<String, String>> EMPTY_FORM_DATA =
<ide> Mono.just(CollectionUtils.unmodifiableMultiValueMap(new LinkedMultiValueMap<String, String>(0)))
<ide> public class DefaultServerWebExchange implements ServerWebExchange {
<ide>
<ide> private final Mono<MultiValueMap<String, Part>> multipartDataMono;
<ide>
<add> private final Flux<Part> partFlux;
<add>
<ide> @Nullable
<ide> private final ApplicationContext applicationContext;
<ide>
<ide> public DefaultServerWebExchange(ServerHttpRequest request, ServerHttpResponse re
<ide> this.sessionMono = sessionManager.getSession(this).cache();
<ide> this.localeContextResolver = localeContextResolver;
<ide> this.formDataMono = initFormData(request, codecConfigurer, getLogPrefix());
<del> this.multipartDataMono = initMultipartData(request, codecConfigurer, getLogPrefix());
<add> this.partFlux = initParts(request, codecConfigurer, getLogPrefix());
<add> this.multipartDataMono = initMultipartData(this.partFlux);
<ide> this.applicationContext = applicationContext;
<ide> }
<ide>
<ide> private static Mono<MultiValueMap<String, String>> initFormData(ServerHttpReques
<ide> }
<ide>
<ide> @SuppressWarnings("unchecked")
<del> private static Mono<MultiValueMap<String, Part>> initMultipartData(ServerHttpRequest request,
<del> ServerCodecConfigurer configurer, String logPrefix) {
<del>
<add> private static Flux<Part> initParts(ServerHttpRequest request, ServerCodecConfigurer configurer, String logPrefix) {
<ide> try {
<ide> MediaType contentType = request.getHeaders().getContentType();
<ide> if (MediaType.MULTIPART_FORM_DATA.isCompatibleWith(contentType)) {
<del> return ((HttpMessageReader<MultiValueMap<String, Part>>) configurer.getReaders().stream()
<del> .filter(reader -> reader.canRead(MULTIPART_DATA_TYPE, MediaType.MULTIPART_FORM_DATA))
<add> return ((HttpMessageReader<Part>)configurer.getReaders().stream()
<add> .filter(reader -> reader.canRead(PARTS_DATA_TYPE, MediaType.MULTIPART_FORM_DATA))
<ide> .findFirst()
<ide> .orElseThrow(() -> new IllegalStateException("No multipart HttpMessageReader.")))
<del> .readMono(MULTIPART_DATA_TYPE, request, Hints.from(Hints.LOG_PREFIX_HINT, logPrefix))
<del> .switchIfEmpty(EMPTY_MULTIPART_DATA)
<add> .read(PARTS_DATA_TYPE, request, Hints.from(Hints.LOG_PREFIX_HINT, logPrefix))
<ide> .cache();
<ide> }
<ide> }
<ide> catch (InvalidMediaTypeException ex) {
<ide> // Ignore
<ide> }
<del> return EMPTY_MULTIPART_DATA;
<add> return Flux.empty();
<add> }
<add>
<add> private static Mono<MultiValueMap<String, Part>> initMultipartData(Flux<Part> parts) {
<add> return parts.collect(
<add> () -> (MultiValueMap<String, Part>) new LinkedMultiValueMap<String, Part>(),
<add> (map, part) -> map.add(part.name(), part))
<add> .switchIfEmpty(EMPTY_MULTIPART_DATA)
<add> .cache();
<ide> }
<ide>
<ide>
<add>
<ide> @Override
<ide> public ServerHttpRequest getRequest() {
<ide> return this.request;
<ide> public Mono<MultiValueMap<String, Part>> getMultipartData() {
<ide> return this.multipartDataMono;
<ide> }
<ide>
<add> @Override
<add> public Flux<Part> getParts() {
<add> return this.partFlux;
<add> }
<add>
<ide> @Override
<ide> public LocaleContext getLocaleContext() {
<ide> return this.localeContextResolver.resolveLocaleContext(this); | 3 |
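
A hedged caller-side sketch, e.g. a WebFlux handler, using the new accessor (the handler shape is assumed): parts stream through as they are parsed, and each part's content can be read only once.

```java
import org.springframework.http.codec.multipart.Part;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

class UploadHandler {

    Mono<Void> handle(ServerWebExchange exchange) {
        return exchange.getParts()      // Flux<Part>, cached per exchange
                .doOnNext(part -> System.out.println("received part: " + part.name()))
                .then();
    }
}
```
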
Ruby | Ruby | use any? instead of !empty? | af64ac4e5ce8406137d5520fa88e8f652ab703e9 | <ide><path>activemodel/lib/active_model/dirty.rb
<ide> module Dirty
<ide> # person.name = 'bob'
<ide> # person.changed? # => true
<ide> def changed?
<del> !changed_attributes.empty?
<add> changed_attributes.any?
<ide> end
<ide>
<ide> # List of attributes with unsaved changes. | 1 |
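
The two forms agree because `Enumerable#any?` without a block asks whether any element is truthy, and a `Hash` yields `[key, value]` pairs, which are always truthy:

```ruby
changed = { "name" => "bob" }
puts changed.any?     # => true
puts !changed.empty?  # => true
puts({}.any?)         # => false, same as !{}.empty?
```
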
PHP | PHP | use swift_transportexception for mailgun & ses | 1563361e1a4f786e2ddfc24914d579639782893d | <ide><path>src/Illuminate/Mail/Transport/MailgunTransport.php
<ide> namespace Illuminate\Mail\Transport;
<ide>
<ide> use GuzzleHttp\ClientInterface;
<add>use GuzzleHttp\Exception\GuzzleException;
<ide> use Swift_Mime_SimpleMessage;
<add>use Swift_TransportException;
<ide>
<ide> class MailgunTransport extends Transport
<ide> {
<ide> public function send(Swift_Mime_SimpleMessage $message, &$failedRecipients = nul
<ide>
<ide> $message->setBcc([]);
<ide>
<del> $response = $this->client->request(
<del> 'POST',
<del> "https://{$this->endpoint}/v3/{$this->domain}/messages.mime",
<del> $this->payload($message, $to)
<del> );
<add> try {
<add> $response = $this->client->request(
<add> 'POST',
<add> "https://{$this->endpoint}/v3/{$this->domain}/messages.mime",
<add> $this->payload($message, $to)
<add> );
<add> } catch (GuzzleException $e) {
<add> throw new Swift_TransportException('Failed to make request to Mailgun API', $e->getCode(), $e);
<add> }
<ide>
<ide> $messageId = $this->getMessageId($response);
<ide>
<ide><path>src/Illuminate/Mail/Transport/SesTransport.php
<ide>
<ide> namespace Illuminate\Mail\Transport;
<ide>
<add>use Aws\Exception\AwsException;
<ide> use Aws\Ses\SesClient;
<ide> use Swift_Mime_SimpleMessage;
<add>use Swift_TransportException;
<ide>
<ide> class SesTransport extends Transport
<ide> {
<ide> public function send(Swift_Mime_SimpleMessage $message, &$failedRecipients = nul
<ide> {
<ide> $this->beforeSendPerformed($message);
<ide>
<del> $result = $this->ses->sendRawEmail(
<del> array_merge(
<del> $this->options, [
<del> 'Source' => key($message->getSender() ?: $message->getFrom()),
<del> 'RawMessage' => [
<del> 'Data' => $message->toString(),
<del> ],
<del> ]
<del> )
<del> );
<add> try {
<add> $result = $this->ses->sendRawEmail(
<add> array_merge(
<add> $this->options, [
<add> 'Source' => key($message->getSender() ?: $message->getFrom()),
<add> 'RawMessage' => [
<add> 'Data' => $message->toString(),
<add> ],
<add> ]
<add> )
<add> );
<add> } catch (AwsException $e) {
<add> throw new Swift_TransportException('Failed to make request to AWS SES API', $e->getCode(), $e);
<add> }
<ide>
<ide> $messageId = $result->get('MessageId');
<ide> | 2 |
Java | Java | make confclasspostpro ordered.highest_precedence | b78dcc59fe0a2f9937c65df1134cc87e0350cb9b | <ide><path>spring-context/src/main/java/org/springframework/context/annotation/ConfigurationClassPostProcessor.java
<ide> * @since 3.0
<ide> */
<ide> public class ConfigurationClassPostProcessor implements BeanDefinitionRegistryPostProcessor,
<del> ResourceLoaderAware, BeanClassLoaderAware, EnvironmentAware, ApplicationContextAware {
<add> ResourceLoaderAware, BeanClassLoaderAware, EnvironmentAware, ApplicationContextAware,
<add> Ordered {
<ide>
<ide> private static final String IMPORT_AWARE_PROCESSOR_BEAN_NAME =
<ide> ConfigurationClassPostProcessor.class.getName() + ".importAwareProcessor";
<ide> public void enhanceConfigurationClasses(ConfigurableListableBeanFactory beanFact
<ide> }
<ide> }
<ide>
<add> @Override
<add> public int getOrder() {
<add> return Ordered.HIGHEST_PRECEDENCE;
<add> }
<add>
<ide>
<ide> private static class ImportAwareBeanPostProcessor implements PriorityOrdered, BeanFactoryAware, BeanPostProcessor {
<ide> | 1 |
Text | Text | change solaris tag to smartos | e8c9f6f0be14c36c54ad2b6d6196d901d71faf18 | <ide><path>doc/onboarding-extras.md
<ide> Please use these when possible / appropriate
<ide> ### Other Labels
<ide>
<ide> * Operating system labels
<del> * `os x`, `windows`, `solaris`, `aix`
<add> * `os x`, `windows`, `smartos`, `aix`
<ide> * No linux, linux is the implied default
<ide> * Architecture labels
<ide> * `arm`, `mips`, `s390`, `ppc` | 1 |
Text | Text | fix typos in asset_pipeline.md [ci skip] | 90eb3746b289e79f38252f01ae127bc99085a9b9 | <ide><path>guides/source/asset_pipeline.md
<ide> assets.
<ide> ### Serving GZipped version of assets
<ide>
<ide> By default, gzipped version of compiled assets will be generated, along
<del>with the non-gzipped version of assets. Gzipped assets help reduce, the transmission of
<del>date over the wire. You can configure this by setting the `gzip` flag.
<add>with the non-gzipped version of assets. Gzipped assets help reduce the transmission of
<add>data over the wire. You can configure this by setting the `gzip` flag.
<ide>
<ide> ```ruby
<ide> config.assets.gzip = false # disable gzipped assets generation | 1 |
PHP | PHP | add exception message to test | 6eb6bea622890afb2b62372e775f054115243abf | <ide><path>tests/TestCase/ORM/Association/BelongsToManyTest.php
<ide> public function testSameSourceTargetJunction()
<ide> ]);
<ide>
<ide> $this->expectException(InvalidArgumentException::class);
<add> $this->expectExceptionMessage('The `This` association on `Articles` cannot target the same table.');
<ide> $assoc->junction();
<ide> }
<ide> | 1 |
Javascript | Javascript | use warning() over console.error() direct call | dd1b7afc14056c1d1415a8eed0781365360ba646 | <ide><path>packages/react-dom/src/events/ReactDOMEventListener.js
<ide> var ReactGenericBatching = require('events/ReactGenericBatching');
<ide> var ReactErrorUtils = require('shared/ReactErrorUtils');
<ide> var ReactFiberTreeReflection = require('shared/ReactFiberTreeReflection');
<ide> var ReactTypeOfWork = require('shared/ReactTypeOfWork');
<add>var warning = require('fbjs/lib/warning');
<ide> var {HostRoot} = ReactTypeOfWork;
<ide>
<ide> var getEventTarget = require('./getEventTarget');
<ide> var ReactDOMEventListener = {
<ide> element.addEventListener(handlerBaseName, callback, true);
<ide> } else {
<ide> if (__DEV__) {
<del> console.error(
<add> warning(
<add> false,
<ide> 'Attempted to listen to events during the capture phase on a ' +
<ide> 'browser that does not support the capture phase. Your application ' +
<ide> 'will not receive some events.', | 1 |
Text | Text | fix missing dash | 3363f26a42e5743540218d3ea46cda160ea4b560 | <ide><path>examples/with-dotenv/README.md
<ide> Execute [`create-next-app`](https://github.com/segmentio/create-next-app) with [
<ide> ```bash
<ide> npx create-next-app --example with-dotenv with-dotenv-app
<ide> # or
<del>yarn create next-app --example with-dotenv with-dotenv-app
<add>yarn create-next-app --example with-dotenv with-dotenv-app
<ide> ```
<ide>
<ide> ### Download manually | 1 |
Text | Text | fix few typos in readme. | 63877ae8499b8bc8152ec38246c4cbdf876b50be | <ide><path>README.md
<ide> Flowable.range(1, 10)
<ide> .blockingSubscribe(System.out::println);
<ide> ```
<ide>
<del>Practically, paralellism in RxJava means running independent flows and merging their results back into a single flow. The operator `flatMap` does this by first mapping each number from 1 to 10 into its own individual `Flowable`, runs them and merges the computed squares.
<add>Practically, parallelism in RxJava means running independent flows and merging their results back into a single flow. The operator `flatMap` does this by first mapping each number from 1 to 10 into its own individual `Flowable`, runs them and merges the computed squares.
<ide>
<ide> Note, however, that `flatMap` doesn't guarantee any order and the end result from the inner flows may end up interleaved. There are alternative operators:
<ide>
<ide> inventorySource.flatMap(inventoryItem ->
<ide>
<ide> ### Continuations
<ide>
<del>Sometimes, when an item has become available, one would like to perform some dependent computations on it. This is sometimes called **continuations** and, depending on what should happen and what types are involed, may involve various operators to accomplish.
<add>Sometimes, when an item has become available, one would like to perform some dependent computations on it. This is sometimes called **continuations** and, depending on what should happen and what types are involved, may involve various operators to accomplish.
<ide>
<ide> #### Dependent
<ide>
<ide> This can get also ambiguous when functional interface types get involved as the
<ide>
<ide> #### Error handling
<ide>
<del>Dataflows can fail, at which point the error is emitted to the consumer(s). Sometimes though, multiple sources may fail at which point there is a choice wether or not wait for all of them to complete or fail. To indicate this opportunity, many operator names are suffixed with the `DelayError` words (while others feature a `delayError` or `delayErrors` boolean flag in one of their overloads):
<add>Dataflows can fail, at which point the error is emitted to the consumer(s). Sometimes though, multiple sources may fail at which point there is a choice whether or not wait for all of them to complete or fail. To indicate this opportunity, many operator names are suffixed with the `DelayError` words (while others feature a `delayError` or `delayErrors` boolean flag in one of their overloads):
<ide>
<ide> ```java
<ide> Flowable<T> concat(Publisher<? extends Publisher<? extends T>> sources); | 1 |
PHP | PHP | fix inflector use in wincacheengine, xcacheengine | 6488669eb2eb52001d5303acdd6789328af4add2 | <ide><path>src/Cache/Engine/WincacheEngine.php
<ide> namespace Cake\Cache\Engine;
<ide>
<ide> use Cake\Cache\CacheEngine;
<add>use Cake\Utility\Inflector;
<ide>
<ide> /**
<ide> * Wincache storage engine for cache
<ide><path>src/Cache/Engine/XcacheEngine.php
<ide> namespace Cake\Cache\Engine;
<ide>
<ide> use Cake\Cache\CacheEngine;
<add>use Cake\Utility\Inflector;
<ide>
<ide> /**
<ide> * Xcache storage engine for cache | 2 |
Python | Python | fix unbound error | f3d661de6676125bc765e286c5dd89e3e10ad82d | <ide><path>flask/app.py
<ide> def __init__(self, import_name, static_path=None, static_url_path=None,
<ide> #: def to_python(self, value):
<ide> #: return value.split(',')
<ide> #: def to_url(self, values):
<del> #: return ','.join(BaseConverter.to_url(value)
<add> #: return ','.join(super(ListConverter, self).to_url(value)
<ide> #: for value in values)
<ide> #:
<ide> #: app = Flask(__name__) | 1 |
Java | Java | add space before cookie attributes | 6e71828a351ae31dec6bb0621266e4cef6e4a42f | <ide><path>spring-test/src/main/java/org/springframework/mock/web/MockHttpServletResponse.java
<ide> private String getCookieHeader(Cookie cookie) {
<ide> StringBuilder buf = new StringBuilder();
<ide> buf.append(cookie.getName()).append('=').append(cookie.getValue() == null ? "" : cookie.getValue());
<ide> if (StringUtils.hasText(cookie.getPath())) {
<del> buf.append(";Path=").append(cookie.getPath());
<add> buf.append("; Path=").append(cookie.getPath());
<ide> }
<ide> if (StringUtils.hasText(cookie.getDomain())) {
<del> buf.append(";Domain=").append(cookie.getDomain());
<add> buf.append("; Domain=").append(cookie.getDomain());
<ide> }
<ide> int maxAge = cookie.getMaxAge();
<ide> if (maxAge >= 0) {
<del> buf.append(";Max-Age=").append(maxAge);
<del> buf.append(";Expires=");
<add> buf.append("; Max-Age=").append(maxAge);
<add> buf.append("; Expires=");
<ide> HttpHeaders headers = new HttpHeaders();
<ide> headers.setExpires(maxAge > 0 ? System.currentTimeMillis() + 1000L * maxAge : 0);
<ide> buf.append(headers.getFirst(HttpHeaders.EXPIRES));
<ide> }
<ide>
<ide> if (cookie.getSecure()) {
<del> buf.append(";Secure");
<add> buf.append("; Secure");
<ide> }
<ide> if (cookie.isHttpOnly()) {
<del> buf.append(";HttpOnly");
<add> buf.append("; HttpOnly");
<ide> }
<ide> return buf.toString();
<ide> }
<ide><path>spring-test/src/test/java/org/springframework/mock/web/MockHttpServletResponseTests.java
<ide> public void cookies() {
<ide>
<ide> response.addCookie(cookie);
<ide>
<del> assertEquals("foo=bar;Path=/path;Domain=example.com;" +
<del> "Max-Age=0;Expires=Thu, 01 Jan 1970 00:00:00 GMT;" +
<del> "Secure;HttpOnly", response.getHeader(HttpHeaders.SET_COOKIE));
<add> assertEquals("foo=bar; Path=/path; Domain=example.com; " +
<add> "Max-Age=0; Expires=Thu, 01 Jan 1970 00:00:00 GMT; " +
<add> "Secure; HttpOnly", response.getHeader(HttpHeaders.SET_COOKIE));
<ide> }
<ide>
<ide> @Test
<ide><path>spring-test/src/test/java/org/springframework/test/web/servlet/htmlunit/MockWebResponseBuilderTests.java
<ide> public void buildResponseHeaders() throws Exception {
<ide> assertThat(header.getValue(), equalTo("value"));
<ide> header = responseHeaders.get(2);
<ide> assertThat(header.getName(), equalTo("Set-Cookie"));
<del> assertThat(header.getValue(), startsWith("cookieA=valueA;Path=/path;Domain=domain;Max-Age=1800;Expires="));
<del> assertThat(header.getValue(), endsWith(";Secure;HttpOnly"));
<add> assertThat(header.getValue(), startsWith("cookieA=valueA; Path=/path; Domain=domain; Max-Age=1800; Expires="));
<add> assertThat(header.getValue(), endsWith("; Secure; HttpOnly"));
<ide> }
<ide>
<ide> // SPR-14169
<ide><path>spring-test/src/test/java/org/springframework/test/web/servlet/result/PrintingResultHandlerTests.java
<ide> public void printResponse() throws Exception {
<ide> assertEquals(2, cookieValues.size());
<ide> assertEquals("cookie=cookieValue", cookieValues.get(0));
<ide> assertTrue("Actual: " + cookieValues.get(1), cookieValues.get(1).startsWith(
<del> "enigma=42;Path=/crumbs;Domain=.example.com;Max-Age=1234;Expires="));
<add> "enigma=42; Path=/crumbs; Domain=.example.com; Max-Age=1234; Expires="));
<ide>
<ide> HttpHeaders headers = new HttpHeaders();
<ide> headers.set("header", "headerValue");
<ide><path>spring-web/src/test/java/org/springframework/mock/web/test/MockHttpServletResponse.java
<ide> private String getCookieHeader(Cookie cookie) {
<ide> StringBuilder buf = new StringBuilder();
<ide> buf.append(cookie.getName()).append('=').append(cookie.getValue() == null ? "" : cookie.getValue());
<ide> if (StringUtils.hasText(cookie.getPath())) {
<del> buf.append(";Path=").append(cookie.getPath());
<add> buf.append("; Path=").append(cookie.getPath());
<ide> }
<ide> if (StringUtils.hasText(cookie.getDomain())) {
<del> buf.append(";Domain=").append(cookie.getDomain());
<add> buf.append("; Domain=").append(cookie.getDomain());
<ide> }
<ide> int maxAge = cookie.getMaxAge();
<ide> if (maxAge >= 0) {
<del> buf.append(";Max-Age=").append(maxAge);
<del> buf.append(";Expires=");
<add> buf.append("; Max-Age=").append(maxAge);
<add> buf.append("; Expires=");
<ide> HttpHeaders headers = new HttpHeaders();
<ide> headers.setExpires(maxAge > 0 ? System.currentTimeMillis() + 1000L * maxAge : 0);
<ide> buf.append(headers.getFirst(HttpHeaders.EXPIRES));
<ide> }
<ide>
<ide> if (cookie.getSecure()) {
<del> buf.append(";Secure");
<add> buf.append("; Secure");
<ide> }
<ide> if (cookie.isHttpOnly()) {
<del> buf.append(";HttpOnly");
<add> buf.append("; HttpOnly");
<ide> }
<ide> return buf.toString();
<ide> } | 5 |
Text | Text | remove period from within links | 102ee601ff8245831b931288c4b47bf0ba47fc66 | <ide><path>guides/source/security.md
<ide> Additional Resources
<ide>
<ide> The security landscape shifts and it is important to keep up to date, because missing a new vulnerability can be catastrophic. You can find additional resources about (Rails) security here:
<ide>
<del>* Subscribe to the Rails security [mailing list.](http://groups.google.com/group/rubyonrails-security)
<add>* Subscribe to the Rails security [mailing list](http://groups.google.com/group/rubyonrails-security).
<ide> * [Brakeman - Rails Security Scanner](http://brakemanscanner.org/) - To perform static security analysis for Rails applications.
<del>* [Keep up to date on the other application layers.](http://secunia.com/) (they have a weekly newsletter, too)
<del>* A [good security blog](https://www.owasp.org) including the [Cross-Site scripting Cheat Sheet.](https://www.owasp.org/index.php/DOM_based_XSS_Prevention_Cheat_Sheet)
<add>* [Keep up to date on the other application layers](http://secunia.com/) (they have a weekly newsletter, too).
<add>* A [good security blog](https://www.owasp.org) including the [Cross-Site scripting Cheat Sheet](https://www.owasp.org/index.php/DOM_based_XSS_Prevention_Cheat_Sheet). | 1 |
Go | Go | use runtime spec modifier for metrics plugin hook | 426e610e43179d58b29c496bc79a53f410a4b1e1 | <ide><path>daemon/metrics.go
<ide> package daemon
<ide>
<ide> import (
<del> "path/filepath"
<ide> "sync"
<ide>
<del> "github.com/docker/docker/pkg/mount"
<ide> "github.com/docker/docker/pkg/plugingetter"
<ide> metrics "github.com/docker/go-metrics"
<ide> "github.com/pkg/errors"
<ide> func (d *Daemon) cleanupMetricsPlugins() {
<ide> }
<ide> }
<ide>
<del>type metricsPlugin struct {
<del> plugingetter.CompatPlugin
<del>}
<del>
<del>func (p metricsPlugin) sock() string {
<del> return "metrics.sock"
<del>}
<del>
<del>func (p metricsPlugin) sockBase() string {
<del> return filepath.Join(p.BasePath(), "run", "docker")
<del>}
<del>
<ide> func pluginStartMetricsCollection(p plugingetter.CompatPlugin) error {
<ide> type metricsPluginResponse struct {
<ide> Err string
<ide> func pluginStopMetricsCollection(p plugingetter.CompatPlugin) {
<ide> if err := p.Client().Call(metricsPluginType+".StopMetrics", nil, nil); err != nil {
<ide> logrus.WithError(err).WithField("name", p.Name()).Error("error stopping metrics collector")
<ide> }
<del>
<del> mp := metricsPlugin{p}
<del> sockPath := filepath.Join(mp.sockBase(), mp.sock())
<del> if err := mount.Unmount(sockPath); err != nil {
<del> if mounted, _ := mount.Mounted(sockPath); mounted {
<del> logrus.WithError(err).WithField("name", p.Name()).WithField("socket", sockPath).Error("error unmounting metrics socket for plugin")
<del> }
<del> }
<ide> }
<ide><path>daemon/metrics_unix.go
<ide> package daemon
<ide> import (
<ide> "net"
<ide> "net/http"
<del> "os"
<ide> "path/filepath"
<ide>
<del> "github.com/docker/docker/pkg/mount"
<ide> "github.com/docker/docker/pkg/plugingetter"
<ide> "github.com/docker/docker/pkg/plugins"
<add> "github.com/docker/docker/plugin"
<ide> metrics "github.com/docker/go-metrics"
<add> specs "github.com/opencontainers/runtime-spec/specs-go"
<ide> "github.com/pkg/errors"
<ide> "github.com/sirupsen/logrus"
<ide> "golang.org/x/sys/unix"
<ide> func (daemon *Daemon) listenMetricsSock() (string, error) {
<ide> return path, nil
<ide> }
<ide>
<del>func registerMetricsPluginCallback(getter plugingetter.PluginGetter, sockPath string) {
<del> getter.Handle(metricsPluginType, func(name string, client *plugins.Client) {
<add>func registerMetricsPluginCallback(store *plugin.Store, sockPath string) {
<add> store.RegisterRuntimeOpt(metricsPluginType, func(s *specs.Spec) {
<add> f := plugin.WithSpecMounts([]specs.Mount{
<add> {Type: "bind", Source: sockPath, Destination: "/run/docker/metrics.sock", Options: []string{"bind", "ro"}},
<add> })
<add> f(s)
<add> })
<add> store.Handle(metricsPluginType, func(name string, client *plugins.Client) {
<ide> // Use lookup since nothing in the system can really reference it, no need
<ide> // to protect against removal
<del> p, err := getter.Get(name, metricsPluginType, plugingetter.Lookup)
<add> p, err := store.Get(name, metricsPluginType, plugingetter.Lookup)
<ide> if err != nil {
<ide> return
<ide> }
<ide>
<del> mp := metricsPlugin{p}
<del> sockBase := mp.sockBase()
<del> if err := os.MkdirAll(sockBase, 0755); err != nil {
<del> logrus.WithError(err).WithField("name", name).WithField("path", sockBase).Error("error creating metrics plugin base path")
<del> return
<del> }
<del>
<del> defer func() {
<del> if err != nil {
<del> os.RemoveAll(sockBase)
<del> }
<del> }()
<del>
<del> pluginSockPath := filepath.Join(sockBase, mp.sock())
<del> _, err = os.Stat(pluginSockPath)
<del> if err == nil {
<del> mount.Unmount(pluginSockPath)
<del> } else {
<del> logrus.WithField("path", pluginSockPath).Debugf("creating plugin socket")
<del> f, err := os.OpenFile(pluginSockPath, os.O_CREATE, 0600)
<del> if err != nil {
<del> return
<del> }
<del> f.Close()
<del> }
<del>
<del> if err := mount.Mount(sockPath, pluginSockPath, "none", "bind,ro"); err != nil {
<del> logrus.WithError(err).WithField("name", name).Error("could not mount metrics socket to plugin")
<del> return
<del> }
<del>
<ide> if err := pluginStartMetricsCollection(p); err != nil {
<del> if err := mount.Unmount(pluginSockPath); err != nil {
<del> if mounted, _ := mount.Mounted(pluginSockPath); mounted {
<del> logrus.WithError(err).WithField("sock_path", pluginSockPath).Error("error unmounting metrics socket from plugin during cleanup")
<del> }
<del> }
<ide> logrus.WithError(err).WithField("name", name).Error("error while initializing metrics plugin")
<ide> }
<ide> })
<ide><path>plugin/defs.go
<ide> import (
<ide>
<ide> "github.com/docker/docker/pkg/plugins"
<ide> "github.com/docker/docker/plugin/v2"
<add> specs "github.com/opencontainers/runtime-spec/specs-go"
<ide> )
<ide>
<ide> // Store manages the plugin inventory in memory and on-disk
<ide> type Store struct {
<ide> sync.RWMutex
<del> plugins map[string]*v2.Plugin
<add> plugins map[string]*v2.Plugin
<add> specOpts map[string][]SpecOpt
<ide> /* handlers are necessary for transition path of legacy plugins
<ide> * to the new model. Legacy plugins use Handle() for registering an
<ide> * activation callback.*/
<ide> type Store struct {
<ide> func NewStore() *Store {
<ide> return &Store{
<ide> plugins: make(map[string]*v2.Plugin),
<add> specOpts: make(map[string][]SpecOpt),
<ide> handlers: make(map[string][]func(string, *plugins.Client)),
<ide> }
<ide> }
<ide>
<add>// SpecOpt is used for subsystems that need to modify the runtime spec of a plugin
<add>type SpecOpt func(*specs.Spec)
<add>
<ide> // CreateOpt is used to configure specific plugin details when created
<ide> type CreateOpt func(p *v2.Plugin)
<ide>
<ide> func WithSwarmService(id string) CreateOpt {
<ide> p.SwarmServiceID = id
<ide> }
<ide> }
<add>
<add>// WithSpecMounts is a SpecOpt which appends the provided mounts to the runtime spec
<add>func WithSpecMounts(mounts []specs.Mount) SpecOpt {
<add> return func(s *specs.Spec) {
<add> s.Mounts = append(s.Mounts, mounts...)
<add> }
<add>}
<ide><path>plugin/store.go
<ide> import (
<ide> "github.com/docker/docker/pkg/plugingetter"
<ide> "github.com/docker/docker/pkg/plugins"
<ide> "github.com/docker/docker/plugin/v2"
<add> specs "github.com/opencontainers/runtime-spec/specs-go"
<ide> "github.com/pkg/errors"
<ide> "github.com/sirupsen/logrus"
<ide> )
<ide> func (ps *Store) GetAll() map[string]*v2.Plugin {
<ide> func (ps *Store) SetAll(plugins map[string]*v2.Plugin) {
<ide> ps.Lock()
<ide> defer ps.Unlock()
<add>
<add> for _, p := range plugins {
<add> ps.setSpecOpts(p)
<add> }
<ide> ps.plugins = plugins
<ide> }
<ide>
<ide> func (ps *Store) SetState(p *v2.Plugin, state bool) {
<ide> p.PluginObj.Enabled = state
<ide> }
<ide>
<add>func (ps *Store) setSpecOpts(p *v2.Plugin) {
<add> var specOpts []SpecOpt
<add> for _, typ := range p.GetTypes() {
<add> opts, ok := ps.specOpts[typ.String()]
<add> if ok {
<add> specOpts = append(specOpts, opts...)
<add> }
<add> }
<add>
<add> p.SetSpecOptModifier(func(s *specs.Spec) {
<add> for _, o := range specOpts {
<add> o(s)
<add> }
<add> })
<add>}
<add>
<ide> // Add adds a plugin to memory and plugindb.
<ide> // An error will be returned if there is a collision.
<ide> func (ps *Store) Add(p *v2.Plugin) error {
<ide> func (ps *Store) Add(p *v2.Plugin) error {
<ide> if v, exist := ps.plugins[p.GetID()]; exist {
<ide> return fmt.Errorf("plugin %q has the same ID %s as %q", p.Name(), p.GetID(), v.Name())
<ide> }
<add>
<add> ps.setSpecOpts(p)
<add>
<ide> ps.plugins[p.GetID()] = p
<ide> return nil
<ide> }
<ide> func (ps *Store) GetAllByCap(capability string) ([]plugingetter.CompatPlugin, er
<ide> return result, nil
<ide> }
<ide>
<add>func pluginType(cap string) string {
<add> return fmt.Sprintf("docker.%s/%s", strings.ToLower(cap), defaultAPIVersion)
<add>}
<add>
<ide> // Handle sets a callback for a given capability. It is only used by network
<ide> // and ipam drivers during plugin registration. The callback registers the
<ide> // driver with the subsystem (network, ipam).
<ide> func (ps *Store) Handle(capability string, callback func(string, *plugins.Client)) {
<del> pluginType := fmt.Sprintf("docker.%s/%s", strings.ToLower(capability), defaultAPIVersion)
<add> typ := pluginType(capability)
<ide>
<ide> // Register callback with new plugin model.
<ide> ps.Lock()
<del> handlers, ok := ps.handlers[pluginType]
<add> handlers, ok := ps.handlers[typ]
<ide> if !ok {
<ide> handlers = []func(string, *plugins.Client){}
<ide> }
<ide> handlers = append(handlers, callback)
<del> ps.handlers[pluginType] = handlers
<add> ps.handlers[typ] = handlers
<ide> ps.Unlock()
<ide>
<ide> // Register callback with legacy plugin model.
<ide> func (ps *Store) Handle(capability string, callback func(string, *plugins.Client
<ide> }
<ide> }
<ide>
<add>// RegisterRuntimeOpt stores a list of SpecOpts for the provided capability.
<add>// These options are applied to the runtime spec before a plugin is started for the specified capability.
<add>func (ps *Store) RegisterRuntimeOpt(cap string, opts ...SpecOpt) {
<add> ps.Lock()
<add> defer ps.Unlock()
<add> typ := pluginType(cap)
<add> ps.specOpts[typ] = append(ps.specOpts[typ], opts...)
<add>}
<add>
<ide> // CallHandler calls the registered callback. It is invoked during plugin enable.
<ide> func (ps *Store) CallHandler(p *v2.Plugin) {
<ide> for _, typ := range p.GetTypes() {
<ide><path>plugin/v2/plugin.go
<ide> import (
<ide> "github.com/docker/docker/pkg/plugingetter"
<ide> "github.com/docker/docker/pkg/plugins"
<ide> "github.com/opencontainers/go-digest"
<add> specs "github.com/opencontainers/runtime-spec/specs-go"
<ide> )
<ide>
<ide> // Plugin represents an individual plugin.
<ide> type Plugin struct {
<ide> Config digest.Digest
<ide> Blobsums []digest.Digest
<ide>
<add> modifyRuntimeSpec func(*specs.Spec)
<add>
<ide> SwarmServiceID string
<ide> }
<ide>
<ide> func (p *Plugin) Acquire() {
<ide> func (p *Plugin) Release() {
<ide> p.AddRefCount(plugingetter.Release)
<ide> }
<add>
<add>// SetSpecOptModifier sets the function to use to modify the the generated
<add>// runtime spec.
<add>func (p *Plugin) SetSpecOptModifier(f func(*specs.Spec)) {
<add> p.mu.Lock()
<add> p.modifyRuntimeSpec = f
<add> p.mu.Unlock()
<add>}
<ide><path>plugin/v2/plugin_linux.go
<ide> import (
<ide> // InitSpec creates an OCI spec from the plugin's config.
<ide> func (p *Plugin) InitSpec(execRoot string) (*specs.Spec, error) {
<ide> s := oci.DefaultSpec()
<add>
<ide> s.Root = &specs.Root{
<ide> Path: p.Rootfs,
<ide> Readonly: false, // TODO: all plugins should be readonly? settable in config?
<ide> func (p *Plugin) InitSpec(execRoot string) (*specs.Spec, error) {
<ide> caps.Inheritable = append(caps.Inheritable, p.PluginObj.Config.Linux.Capabilities...)
<ide> caps.Effective = append(caps.Effective, p.PluginObj.Config.Linux.Capabilities...)
<ide>
<add> if p.modifyRuntimeSpec != nil {
<add> p.modifyRuntimeSpec(&s)
<add> }
<add>
<ide> return &s, nil
<ide> } | 6 |
Javascript | Javascript | fix style issues in core and scales | bddd4cd94bbb2fa40d36029433069fa7950fd3ef | <ide><path>src/core/core.helpers.js
<ide> module.exports = function(Chart) {
<ide> return objClone;
<ide> };
<ide> helpers.extend = function(base) {
<del> var setFn = function(value, key) { base[key] = value; };
<add> var setFn = function(value, key) {
<add> base[key] = value;
<add> };
<ide> for (var i = 1, ilen = arguments.length; i < ilen; i++) {
<ide> helpers.each(arguments[i], setFn);
<ide> }
<ide> module.exports = function(Chart) {
<ide> return value === undefined ? defaultValue : value;
<ide> };
<ide> helpers.indexOf = Array.prototype.indexOf?
<del> function(array, item) { return array.indexOf(item); } :
<add> function(array, item) {
<add> return array.indexOf(item);
<add> }:
<ide> function(array, item) {
<ide> for (var i = 0, ilen = array.length; i < ilen; ++i) {
<ide> if (array[i] === item) {
<ide> module.exports = function(Chart) {
<ide> helpers.where = function(collection, filterCallback) {
<ide> if (helpers.isArray(collection) && Array.prototype.filter) {
<ide> return collection.filter(filterCallback);
<del> } else {
<del> var filtered = [];
<add> }
<add> var filtered = [];
<ide>
<del> helpers.each(collection, function(item) {
<del> if (filterCallback(item)) {
<del> filtered.push(item);
<del> }
<del> });
<add> helpers.each(collection, function(item) {
<add> if (filterCallback(item)) {
<add> filtered.push(item);
<add> }
<add> });
<ide>
<del> return filtered;
<del> }
<add> return filtered;
<ide> };
<ide> helpers.findIndex = Array.prototype.findIndex?
<del> function(array, callback, scope) { return array.findIndex(callback, scope); } :
<add> function(array, callback, scope) {
<add> return array.findIndex(callback, scope);
<add> } :
<ide> function(array, callback, scope) {
<ide> scope = scope === undefined? array : scope;
<ide> for (var i = 0, ilen = array.length; i < ilen; ++i) {
<ide> module.exports = function(Chart) {
<ide> };
<ide> helpers.inherits = function(extensions) {
<ide> // Basic javascript inheritance based on the model created in Backbone.js
<del> var parent = this;
<add> var me = this;
<ide> var ChartElement = (extensions && extensions.hasOwnProperty('constructor')) ? extensions.constructor : function() {
<del> return parent.apply(this, arguments);
<add> return me.apply(this, arguments);
<ide> };
<ide>
<ide> var Surrogate = function() {
<ide> this.constructor = ChartElement;
<ide> };
<del> Surrogate.prototype = parent.prototype;
<add> Surrogate.prototype = me.prototype;
<ide> ChartElement.prototype = new Surrogate();
<ide>
<ide> ChartElement.extend = helpers.inherits;
<ide> module.exports = function(Chart) {
<ide> helpers.extend(ChartElement.prototype, extensions);
<ide> }
<ide>
<del> ChartElement.__super__ = parent.prototype;
<add> ChartElement.__super__ = me.prototype;
<ide>
<ide> return ChartElement;
<ide> };
<ide> module.exports = function(Chart) {
<ide> return array.reduce(function(max, value) {
<ide> if (!isNaN(value)) {
<ide> return Math.max(max, value);
<del> } else {
<del> return max;
<ide> }
<add> return max;
<ide> }, Number.NEGATIVE_INFINITY);
<ide> };
<ide> helpers.min = function(array) {
<ide> return array.reduce(function(min, value) {
<ide> if (!isNaN(value)) {
<ide> return Math.min(min, value);
<del> } else {
<del> return min;
<ide> }
<add> return min;
<ide> }, Number.POSITIVE_INFINITY);
<ide> };
<ide> helpers.sign = Math.sign?
<del> function(x) { return Math.sign(x); } :
<add> function(x) {
<add> return Math.sign(x);
<add> } :
<ide> function(x) {
<ide> x = +x; // convert to a number
<ide> if (x === 0 || isNaN(x)) {
<ide> module.exports = function(Chart) {
<ide> return x > 0 ? 1 : -1;
<ide> };
<ide> helpers.log10 = Math.log10?
<del> function(x) { return Math.log10(x); } :
<add> function(x) {
<add> return Math.log10(x);
<add> } :
<ide> function(x) {
<ide> return Math.log(x) / Math.LN10;
<ide> };
<ide> module.exports = function(Chart) {
<ide> } else {
<ide> niceFraction = 10;
<ide> }
<add> } else if (fraction <= 1.0) {
<add> niceFraction = 1;
<add> } else if (fraction <= 2) {
<add> niceFraction = 2;
<add> } else if (fraction <= 5) {
<add> niceFraction = 5;
<ide> } else {
<del> if (fraction <= 1.0) {
<del> niceFraction = 1;
<del> } else if (fraction <= 2) {
<del> niceFraction = 2;
<del> } else if (fraction <= 5) {
<del> niceFraction = 5;
<del> } else {
<del> niceFraction = 10;
<del> }
<add> niceFraction = 10;
<ide> }
<ide>
<ide> return niceFraction * Math.pow(10, exponent);
<ide> module.exports = function(Chart) {
<ide> return 1 * (7.5625 * (t -= (1.5 / 2.75)) * t + 0.75);
<ide> } else if (t < (2.5 / 2.75)) {
<ide> return 1 * (7.5625 * (t -= (2.25 / 2.75)) * t + 0.9375);
<del> } else {
<del> return 1 * (7.5625 * (t -= (2.625 / 2.75)) * t + 0.984375);
<ide> }
<add> return 1 * (7.5625 * (t -= (2.625 / 2.75)) * t + 0.984375);
<ide> },
<ide> easeInOutBounce: function(t) {
<ide> if (t < 1 / 2) {
<ide> module.exports = function(Chart) {
<ide> }
<ide> };
<ide> helpers.isArray = Array.isArray?
<del> function(obj) { return Array.isArray(obj); } :
<add> function(obj) {
<add> return Array.isArray(obj);
<add> } :
<ide> function(obj) {
<ide> return Object.prototype.toString.call(obj) === '[object Array]';
<ide> };
<ide> module.exports = function(Chart) {
<ide> fn.apply(_tArg, args);
<ide> }
<ide> };
<del> helpers.getHoverColor = function(color) {
<add> helpers.getHoverColor = function(colorValue) {
<ide> /* global CanvasPattern */
<del> return (color instanceof CanvasPattern) ?
<del> color :
<del> helpers.color(color).saturate(0.5).darken(0.1).rgbString();
<add> return (colorValue instanceof CanvasPattern) ?
<add> colorValue :
<add> helpers.color(colorValue).saturate(0.5).darken(0.1).rgbString();
<ide> };
<ide> };
<ide><path>src/core/core.layoutService.js
<ide> module.exports = function(Chart) {
<ide>
<ide> // Function to fit a box
<ide> function fitBox(box) {
<del> var minBoxSize = helpers.findNextWhere(minBoxSizes, function(minBoxSize) {
<del> return minBoxSize.box === box;
<add> var minBoxSize = helpers.findNextWhere(minBoxSizes, function(minBox) {
<add> return minBox.box === box;
<ide> });
<ide>
<ide> if (minBoxSize) {
<ide> module.exports = function(Chart) {
<ide> helpers.each(leftBoxes.concat(rightBoxes), finalFitVerticalBox);
<ide>
<ide> function finalFitVerticalBox(box) {
<del> var minBoxSize = helpers.findNextWhere(minBoxSizes, function(minBoxSize) {
<del> return minBoxSize.box === box;
<add> var minBoxSize = helpers.findNextWhere(minBoxSizes, function(minSize) {
<add> return minSize.box === box;
<ide> });
<ide>
<ide> var scaleMargin = {
<ide><path>src/core/core.legend.js
<ide> module.exports = function(Chart) {
<ide> cursor.line++;
<ide> x = cursor.x = me.left + ((legendWidth - lineWidths[cursor.line]) / 2);
<ide> }
<del> } else {
<del> if (y + itemHeight > me.bottom) {
<del> x = cursor.x = x + me.columnWidths[cursor.line] + labelOpts.padding;
<del> y = cursor.y = me.top;
<del> cursor.line++;
<del> }
<add> } else if (y + itemHeight > me.bottom) {
<add> x = cursor.x = x + me.columnWidths[cursor.line] + labelOpts.padding;
<add> y = cursor.y = me.top;
<add> cursor.line++;
<ide> }
<ide>
<ide> drawLegendBox(x, y, legendItem);
<ide><path>src/core/core.scale.js
<ide> module.exports = function(Chart) {
<ide> if (typeof(rawValue) === 'object') {
<ide> if ((rawValue instanceof Date) || (rawValue.isValid)) {
<ide> return rawValue;
<del> } else {
<del> return this.getRightValue(this.isHorizontal() ? rawValue.x : rawValue.y);
<ide> }
<add> return this.getRightValue(this.isHorizontal() ? rawValue.x : rawValue.y);
<ide> }
<ide>
<ide> // Value is good, return it
<ide> module.exports = function(Chart) {
<ide> var finalVal = me.left + Math.round(pixel);
<ide> finalVal += me.isFullWidth() ? me.margins.left : 0;
<ide> return finalVal;
<del> } else {
<del> var innerHeight = me.height - (me.paddingTop + me.paddingBottom);
<del> return me.top + (index * (innerHeight / (me.ticks.length - 1)));
<ide> }
<add> var innerHeight = me.height - (me.paddingTop + me.paddingBottom);
<add> return me.top + (index * (innerHeight / (me.ticks.length - 1)));
<ide> },
<ide>
<ide> // Utility for getting the pixel location of a percentage of scale
<ide> module.exports = function(Chart) {
<ide> var finalVal = me.left + Math.round(valueOffset);
<ide> finalVal += me.isFullWidth() ? me.margins.left : 0;
<ide> return finalVal;
<del> } else {
<del> return me.top + (decimal * me.height);
<ide> }
<add> return me.top + (decimal * me.height);
<ide> },
<ide>
<ide> getBasePixel: function() {
<ide> module.exports = function(Chart) {
<ide>
<ide> // Common properties
<ide> var tx1, ty1, tx2, ty2, x1, y1, x2, y2, labelX, labelY;
<del> var textAlign, textBaseline = 'middle';
<add> var textAlign = 'middle';
<add> var textBaseline = 'middle';
<ide>
<ide> if (isHorizontal) {
<ide> if (!isRotated) {
<ide> module.exports = function(Chart) {
<ide> labelX = me.right - optionTicks.padding;
<ide> textAlign = 'right';
<ide> }
<add> // right side
<add> } else if (optionTicks.mirror) {
<add> labelX = me.left - optionTicks.padding;
<add> textAlign = 'right';
<ide> } else {
<del> // right side
<del> if (optionTicks.mirror) {
<del> labelX = me.left - optionTicks.padding;
<del> textAlign = 'right';
<del> } else {
<del> labelX = me.left + optionTicks.padding;
<del> textAlign = 'left';
<del> }
<add> labelX = me.left + optionTicks.padding;
<add> textAlign = 'left';
<ide> }
<ide>
<ide> var yLineValue = me.getPixelForTick(index); // xvalues for grid lines
<ide><path>src/core/core.tooltip.js
<ide> module.exports = function(Chart) {
<ide> } else if (xAlign === 'right') {
<ide> pt.x -= paddingAndSize;
<ide> }
<del> } else {
<del> if (xAlign === 'left') {
<del> pt.x -= radiusAndPadding;
<del> } else if (xAlign === 'right') {
<del> pt.x += radiusAndPadding;
<del> }
<add> } else if (xAlign === 'left') {
<add> pt.x -= radiusAndPadding;
<add> } else if (xAlign === 'right') {
<add> pt.x += radiusAndPadding;
<ide> }
<ide>
<ide> return pt;
<ide><path>src/scales/scale.category.js
<ide> module.exports = function(Chart) {
<ide>
<ide> if ((data.xLabels && isHorizontal) || (data.yLabels && !isHorizontal)) {
<ide> return me.getRightValue(data.datasets[datasetIndex].data[index]);
<del> } else {
<del> return me.ticks[index];
<ide> }
<add> return me.ticks[index];
<ide> },
<ide>
<ide> // Used to get data value locations. Value can either be an index or a numerical value
<ide> module.exports = function(Chart) {
<ide> }
<ide>
<ide> return me.left + Math.round(widthOffset);
<del> } else {
<del> var innerHeight = me.height - (me.paddingTop + me.paddingBottom);
<del> var valueHeight = innerHeight / offsetAmt;
<del> var heightOffset = (valueHeight * (index - me.minIndex)) + me.paddingTop;
<del>
<del> if (me.options.gridLines.offsetGridLines && includeOffset) {
<del> heightOffset += (valueHeight / 2);
<del> }
<add> }
<add> var innerHeight = me.height - (me.paddingTop + me.paddingBottom);
<add> var valueHeight = innerHeight / offsetAmt;
<add> var heightOffset = (valueHeight * (index - me.minIndex)) + me.paddingTop;
<ide>
<del> return me.top + Math.round(heightOffset);
<add> if (me.options.gridLines.offsetGridLines && includeOffset) {
<add> heightOffset += (valueHeight / 2);
<ide> }
<add>
<add> return me.top + Math.round(heightOffset);
<ide> },
<ide> getPixelForTick: function(index, includeOffset) {
<ide> return this.getPixelForValue(this.ticks[index], index + this.minIndex, null, includeOffset);
<ide><path>src/scales/scale.linear.js
<ide> module.exports = function(Chart) {
<ide>
<ide> if (opts.stacked) {
<ide> var valuesPerType = {};
<del> var hasPositiveValues = false;
<del> var hasNegativeValues = false;
<ide>
<ide> helpers.each(datasets, function(dataset, datasetIndex) {
<ide> var meta = chart.getDatasetMeta(datasetIndex);
<ide> module.exports = function(Chart) {
<ide>
<ide> if (opts.relativePoints) {
<ide> positiveValues[index] = 100;
<add> } else if (value < 0) {
<add> negativeValues[index] += value;
<ide> } else {
<del> if (value < 0) {
<del> hasNegativeValues = true;
<del> negativeValues[index] += value;
<del> } else {
<del> hasPositiveValues = true;
<del> positiveValues[index] += value;
<del> }
<add> positiveValues[index] += value;
<ide> }
<ide> });
<ide> }
<ide> module.exports = function(Chart) {
<ide> innerDimension = me.width - (paddingLeft + me.paddingRight);
<ide> pixel = me.left + (innerDimension / range * (rightValue - start));
<ide> return Math.round(pixel + paddingLeft);
<del> } else {
<del> innerDimension = me.height - (me.paddingTop + paddingBottom);
<del> pixel = (me.bottom - paddingBottom) - (innerDimension / range * (rightValue - start));
<del> return Math.round(pixel);
<ide> }
<add> innerDimension = me.height - (me.paddingTop + paddingBottom);
<add> pixel = (me.bottom - paddingBottom) - (innerDimension / range * (rightValue - start));
<add> return Math.round(pixel);
<ide> },
<ide> getValueForPixel: function(pixel) {
<ide> var me = this;
<ide><path>src/scales/scale.logarithmic.js
<ide> module.exports = function(Chart) {
<ide> return '0';
<ide> } else if (remain === 1 || remain === 2 || remain === 5 || index === 0 || index === arr.length - 1) {
<ide> return value.toExponential();
<del> } else {
<del> return '';
<ide> }
<add> return '';
<ide> }
<ide> }
<ide> };
<ide><path>src/scales/scale.radialLinear.js
<ide> module.exports = function(Chart) {
<ide> furthestRight = pointPosition.x + textWidth;
<ide> furthestRightIndex = i;
<ide> }
<del> } else {
<del> // More than half the values means we'll right align the text
<del> if (pointPosition.x - textWidth < furthestLeft) {
<del> furthestLeft = pointPosition.x - textWidth;
<del> furthestLeftIndex = i;
<del> }
<add> // More than half the values means we'll right align the text
<add> } else if (pointPosition.x - textWidth < furthestLeft) {
<add> furthestLeft = pointPosition.x - textWidth;
<add> furthestLeftIndex = i;
<ide> }
<ide> }
<ide>
<ide> module.exports = function(Chart) {
<ide> var scalingFactor = me.drawingArea / (me.max - me.min);
<ide> if (me.options.reverse) {
<ide> return (me.max - value) * scalingFactor;
<del> } else {
<del> return (value - me.min) * scalingFactor;
<ide> }
<add> return (value - me.min) * scalingFactor;
<ide> },
<ide> getPointPosition: function(index, distanceFromCenter) {
<ide> var me = this; | 9 |
PHP | PHP | update savemany parameter hint | 4219f0ac29615626be29940a6e20f047cb8204b9 | <ide><path>src/Illuminate/Database/Eloquent/Relations/HasOneOrMany.php
<ide> public function save(Model $model)
<ide> /**
<ide> * Attach a collection of models to the parent instance.
<ide> *
<del> * @param \Illuminate\Database\Eloquent\Collection|array $models
<del> * @return \Illuminate\Database\Eloquent\Collection|array
<add> * @param \Traversable|array $models
<add> * @return \Traversable|array
<ide> */
<ide> public function saveMany($models)
<ide> { | 1 |
PHP | PHP | refactor the mail fake to not be really stupid | b1d8f813d13960096493f3adc3bc32ace66ba2e6 | <ide><path>src/Illuminate/Support/Testing/Fakes/MailFake.php
<ide> public function assertSent($mailable, $callback = null)
<ide> );
<ide> }
<ide>
<del> /**
<del> * Assert if a mailable was sent based on a truth-test callback.
<del> *
<del> * @param mixed $users
<del> * @param string $mailable
<del> * @param callable|null $callback
<del> * @return void
<del> */
<del> public function assertSentTo($users, $mailable, $callback = null)
<del> {
<del> $users = $this->formatRecipients($users);
<del>
<del> return $this->assertSent($mailable, function ($mailable, $to) use ($users, $callback) {
<del> if (! $this->recipientsMatch($users, $this->formatRecipients($to))) {
<del> return false;
<del> }
<del>
<del> if (! is_null($callback)) {
<del> return $callback(...func_get_args());
<del> }
<del>
<del> return true;
<del> });
<del> }
<del>
<del> /**
<del> * Format the recipients into a collection.
<del> *
<del> * @param mixed $recipients
<del> * @return \Illuminate\Support\Collection
<del> */
<del> protected function formatRecipients($recipients)
<del> {
<del> if ($recipients instanceof Collection) {
<del> return $recipients;
<del> }
<del>
<del> return collect(is_array($recipients) ? $recipients : [$recipients]);
<del> }
<del>
<del> /**
<del> * Determine if two given recipient lists match.
<del> *
<del> * @param \Illuminate\Support\Collection $expected
<del> * @param \Illuminate\Support\Collection $recipients
<del> * @return bool
<del> */
<del> protected function recipientsMatch($expected, $recipients)
<del> {
<del> $expected = $expected->map(function ($expected) {
<del> return is_object($expected) ? $expected->email : $expected;
<del> });
<del>
<del> return $recipients->map(function ($recipient) {
<del> if (is_array($recipient)) {
<del> return $recipient['email'];
<del> }
<del>
<del> return is_object($recipient) ? $recipient->email : $recipient;
<del> })->diff($expected)->count() === 0;
<del> }
<del>
<ide> /**
<ide> * Determine if a mailable was sent based on a truth-test callback.
<ide> *
<ide> public function sent($mailable, $callback = null)
<ide> };
<ide>
<ide> return $this->mailablesOf($mailable)->filter(function ($mailable) use ($callback) {
<del> return $callback($mailable->mailable, ...array_values($mailable->getRecipients()));
<add> return $callback($mailable);
<ide> });
<ide> }
<ide>
<ide> public function hasSent($mailable)
<ide> */
<ide> protected function mailablesOf($type)
<ide> {
<del> return collect($this->mailables)->filter(function ($m) use ($type) {
<del> return $m->mailable instanceof $type;
<add> return collect($this->mailables)->filter(function ($mailable) use ($type) {
<add> return $mailable instanceof $type;
<ide> });
<ide> }
<ide>
<ide> protected function mailablesOf($type)
<ide> */
<ide> public function to($users)
<ide> {
<del> $this->mailables[] = $mailable = (new PendingMailFake)->to($users);
<del>
<del> return $mailable;
<add> return (new PendingMailFake($this))->to($users);
<ide> }
<ide>
<ide> /**
<ide> public function to($users)
<ide> */
<ide> public function bcc($users)
<ide> {
<del> $this->mailables[] = $mailable = (new PendingMailFake)->bcc($users);
<del>
<del> return $mailable;
<add> return (new PendingMailFake($this))->bcc($users);
<ide> }
<ide>
<ide> /**
<ide> public function send($view, array $data = [], $callback = null)
<ide> return;
<ide> }
<ide>
<del> Container::getInstance()->call([$view, 'build']);
<del>
<del> $mailable = new PendingMailFake;
<del>
<del> $mailable->mailable = $view;
<del>
<del> if ($recipients = $view->to) {
<del> $mailable->to($recipients);
<del> }
<del>
<del> if ($recipients = $view->bcc) {
<del> $mailable->bcc($recipients);
<del> }
<del>
<del> if ($recipients = $view->cc) {
<del> $mailable->cc($recipients);
<del> }
<del>
<del> $this->mailables[] = $mailable;
<del> }
<del>
<del> /**
<del> * Get the array of failed recipients.
<del> *
<del> * @return array
<del> */
<del> public function failures()
<del> {
<del> //
<add> $this->mailables[] = $view;
<ide> }
<ide>
<ide> /**
<ide> public function queue($view, array $data = [], $callback = null, $queue = null)
<ide> {
<ide> $this->send($view);
<ide> }
<add>
<add> /**
<add> * Get the array of failed recipients.
<add> *
<add> * @return array
<add> */
<add> public function failures()
<add> {
<add> //
<add> }
<ide> }
<ide><path>src/Illuminate/Support/Testing/Fakes/PendingMailFake.php
<ide>
<ide> class PendingMailFake extends PendingMail
<ide> {
<del> /**
<del> * The mailable instance.
<del> *
<del> * @var mixed
<del> */
<del> public $mailable;
<del>
<ide> /**
<ide> * Create a new instance.
<ide> *
<add> * @param \Illuminate\Support\Testing\Fakes\MailFake
<ide> * @return void
<ide> */
<del> public function __construct()
<add> public function __construct($mailer)
<ide> {
<del> //
<add> $this->mailer = $mailer;
<ide> }
<ide>
<ide> /**
<ide> public function send(Mailable $mailable)
<ide> */
<ide> public function sendNow(Mailable $mailable)
<ide> {
<del> $this->mailable = $mailable;
<add> $this->mailer->send($this->fill($mailable));
<ide> }
<ide>
<ide> /**
<ide> public function queue(Mailable $mailable)
<ide> {
<ide> return $this->sendNow($mailable);
<ide> }
<del>
<del> /**
<del> * Get the recipient information for the mailable.
<del> *
<del> * @return array
<del> */
<del> public function getRecipients()
<del> {
<del> return ['to' => $this->to, 'cc' => $this->cc, 'bcc' => $this->bcc];
<del> }
<ide> } | 2 |
Go | Go | use condition variable to wake stats collector | e75e6b0e31428c00047bc814746aff4b4c7c90ad | <ide><path>daemon/stats/collector.go
<ide> import (
<ide> // Collector manages and provides container resource stats
<ide> type Collector struct {
<ide> m sync.Mutex
<add> cond *sync.Cond
<ide> supervisor supervisor
<ide> interval time.Duration
<ide> publishers map[*container.Container]*pubsub.Publisher
<ide> func NewCollector(supervisor supervisor, interval time.Duration) *Collector {
<ide> publishers: make(map[*container.Container]*pubsub.Publisher),
<ide> bufReader: bufio.NewReaderSize(nil, 128),
<ide> }
<add> s.cond = sync.NewCond(&s.m)
<ide>
<ide> platformNewStatsCollector(s)
<ide>
<ide> type supervisor interface {
<ide> // the event loop for collection on the specified interval returning
<ide> // a channel for the subscriber to receive on.
<ide> func (s *Collector) Collect(c *container.Container) chan interface{} {
<del> s.m.Lock()
<del> defer s.m.Unlock()
<add> s.cond.L.Lock()
<add> defer s.cond.L.Unlock()
<add>
<ide> publisher, exists := s.publishers[c]
<ide> if !exists {
<ide> publisher = pubsub.NewPublisher(100*time.Millisecond, 1024)
<ide> s.publishers[c] = publisher
<ide> }
<add>
<add> s.cond.Broadcast()
<ide> return publisher.Subscribe()
<ide> }
<ide>
<ide> func (s *Collector) Run() {
<ide> var pairs []publishersPair
<ide>
<ide> for {
<del> // Put sleep at the start so that it will always be hit,
<del> // preventing a tight loop if no stats are collected.
<del> time.Sleep(s.interval)
<add> s.cond.L.Lock()
<add> for len(s.publishers) == 0 {
<add> s.cond.Wait()
<add> }
<ide>
<ide> // it does not make sense in the first iteration,
<ide> // but saves allocations in further iterations
<ide> pairs = pairs[:0]
<ide>
<del> s.m.Lock()
<ide> for container, publisher := range s.publishers {
<ide> // copy pointers here to release the lock ASAP
<ide> pairs = append(pairs, publishersPair{container, publisher})
<ide> }
<del> s.m.Unlock()
<del> if len(pairs) == 0 {
<del> continue
<del> }
<add>
<add> s.cond.L.Unlock()
<ide>
<ide> onlineCPUs, err := s.getNumberOnlineCPUs()
<ide> if err != nil {
<ide> func (s *Collector) Run() {
<ide> })
<ide> }
<ide> }
<add>
<add> time.Sleep(s.interval)
<ide> }
<ide> }
<ide> | 1 |
Python | Python | preserve array order in np.delete | ed527cd7b28a76c8ecb2bd0e32a74a03767916bb | <ide><path>numpy/lib/function_base.py
<ide> def delete(arr, obj, axis=None):
<ide> if wrap:
<ide> return wrap(arr)
<ide> else:
<del> return arr.copy()
<add> return arr.copy(order=arrorder)
<ide>
<ide> slobj = [slice(None)]*ndim
<ide> N = arr.shape[axis]
<ide> def delete(arr, obj, axis=None):
<ide>
<ide> if numtodel <= 0:
<ide> if wrap:
<del> return wrap(arr.copy())
<add> return wrap(arr.copy(order=arrorder))
<ide> else:
<del> return arr.copy()
<add> return arr.copy(order=arrorder)
<ide>
<ide> # Invert if step is negative:
<ide> if step < 0:
<ide> def insert(arr, obj, values, axis=None):
<ide> warnings.warn(
<ide> "in the future the special handling of scalars will be removed "
<ide> "from insert and raise an error", DeprecationWarning)
<del> arr = arr.copy()
<add> arr = arr.copy(order=arrorder)
<ide> arr[...] = values
<ide> if wrap:
<ide> return wrap(arr)
<ide><path>numpy/lib/tests/test_function_base.py
<ide> class SubClass(np.ndarray):
<ide> assert_(isinstance(delete(a, slice(1, 2)), SubClass))
<ide> assert_(isinstance(delete(a, slice(1, -2)), SubClass))
<ide>
<add> def test_array_order_preserve(self):
<add> # See gh-7113
<add> k = np.arange(10).reshape(2, 5, order='F')
<add> m = delete(k, slice(60, None), axis=1)
<add>
<add> # 'k' is Fortran ordered, and 'm' should have the
<add> # same ordering as 'k' and NOT become C ordered
<add> assert_equal(m.flags.c_contiguous, k.flags.c_contiguous)
<add> assert_equal(m.flags.f_contiguous, k.flags.f_contiguous)
<add>
<ide>
<ide> class TestGradient(TestCase):
<ide> | 2 |
PHP | PHP | move the paths used into options | 442d889f99a11fe4a28f43dd2d35ac77a9fb41b1 | <ide><path>lib/Cake/Model/Model.php
<ide> protected function _findThreaded($state, $query, $results = array()) {
<ide> return $query;
<ide> } elseif ($state === 'after') {
<ide> return Set::nest($results, array(
<del> 'alias' => $this->alias,
<del> 'primaryKey' => $this->primaryKey,
<del> 'parent' => 'parent_id',
<del> 'children' => 'children'
<del> ));
<add> 'alias' => $this->alias,
<add> 'key' => $this->primaryKey,
<add> 'parent' => 'parent_id',
<add> 'children' => 'children'
<add> ));
<ide> }
<ide> }
<ide>
<ide><path>lib/Cake/Utility/Set.php
<ide> public static function apply($path, $data, $callback, $options = array()) {
<ide> * @param mixed $data
<ide> * @param array $options Options are:
<ide> * alias - the first array key to look for
<del> * primaryKey - the key to use to identify the rows
<add> * key - the key to use to identify the rows
<ide> * parentId - the key to use to identify the parent
<ide> * children - the key name to use in the resultset for children
<add> * idPath - the path to a key that identifies each entry
<add> * parentPath - the path to a key that identifies the parent of each entry
<ide> * @return array of results, nested
<ide> * @link
<ide> */
<ide> public static function nest($data, $options = array()) {
<ide>
<ide> $options = array(
<ide> 'alias' => key(current($data)),
<del> 'primaryKey' => 'id',
<add> 'key' => 'id',
<ide> 'parentId' => 'parent_id',
<del> 'children' => 'children'
<add> 'children' => 'children',
<ide> ) + $options;
<ide>
<add> if (empty($options['idPath'])) {
<add> $options['idPath'] = '/' . $options['alias'] . '/' . $options['key'];
<add> }
<add> if (empty($options['parentPath'])) {
<add> $options['parentPath'] = '/' . $options['alias'] . '/' . $options['parentId'];
<add> }
<add>
<ide> $return = $idMap = array();
<del> $ids = Set::extract($data, '{n}.' . $options['alias'] . '.' . $options['primaryKey']);
<add> $ids = Set::extract($data, $options['idPath']);
<ide>
<ide> foreach ($data as $result) {
<ide> $result[$options['children']] = array();
<del> $id = $result[$options['alias']][$options['primaryKey']];
<add> $id = $result[$options['alias']][$options['key']];
<ide> if (isset($result[$options['alias']][$options['parentId']])) {
<ide> $parentId = $result[$options['alias']][$options['parentId']];
<ide> } else {
<ide> $parentId = null;
<ide> }
<del> if (isset($idMap[$id]['children'])) {
<add> if (isset($idMap[$id][$options['children']])) {
<ide> $idMap[$id] = array_merge($result, (array)$idMap[$id]);
<ide> } else {
<del> $idMap[$id] = array_merge($result, array('children' => array()));
<add> $idMap[$id] = array_merge($result, array($options['children'] => array()));
<ide> }
<ide> if (!$parentId || !in_array($parentId, $ids)) {
<ide> $return[] =& $idMap[$id];
<ide> } else {
<del> $idMap[$parentId]['children'][] =& $idMap[$id];
<add> $idMap[$parentId][$options['children']][] =& $idMap[$id];
<ide> }
<ide> }
<ide> if (count($return) > 1) {
<del> $ids = array_unique(Set::extract('/' . $options['alias'] . '/' . $options['parentId'], $return));
<add> $ids = array_unique(Set::extract($options['parentPath'], $return));
<ide> if (count($ids) > 1) {
<ide> $root = $return[0][$options['alias']][$options['parentId']];
<ide> foreach ($return as $key => $value) { | 2 |
Go | Go | dump request when daemon is set to debug | 37dbe075196d638d6bd417716deaf067247ee966 | <ide><path>api/server/middleware.go
<ide> package server
<ide>
<ide> import (
<add> "bytes"
<add> "encoding/json"
<add> "io/ioutil"
<ide> "net/http"
<ide> "runtime"
<ide> "strings"
<ide> func (s *Server) loggingMiddleware(handler httputils.APIFunc) httputils.APIFunc
<ide> }
<ide> }
<ide>
<add>// debugRequestMiddleware dumps the request to logger
<add>// This is implemented separately from `loggingMiddleware` so that we don't have to
<add>// check the logging level or have httputil.DumpRequest called on each request.
<add>// Instead the middleware is only injected when the logging level is set to debug
<add>func (s *Server) debugRequestMiddleware(handler httputils.APIFunc) httputils.APIFunc {
<add> return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
<add> if s.cfg.Logging && r.Method == "POST" {
<add> if err := httputils.CheckForJSON(r); err == nil {
<add> var buf bytes.Buffer
<add> if _, err := buf.ReadFrom(r.Body); err == nil {
<add> r.Body.Close()
<add> r.Body = ioutil.NopCloser(&buf)
<add> var postForm map[string]interface{}
<add> if err := json.Unmarshal(buf.Bytes(), &postForm); err == nil {
<add> if _, exists := postForm["password"]; exists {
<add> postForm["password"] = "*****"
<add> }
<add> logrus.Debugf("form data: %q", postForm)
<add> }
<add> }
<add> }
<add> }
<add> return handler(ctx, w, r, vars)
<add> }
<add>}
<add>
<ide> // userAgentMiddleware checks the User-Agent header looking for a valid docker client spec.
<ide> func (s *Server) userAgentMiddleware(handler httputils.APIFunc) httputils.APIFunc {
<ide> return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
<ide> func (s *Server) handleWithGlobalMiddlewares(handler httputils.APIFunc) httputil
<ide> s.loggingMiddleware,
<ide> }
<ide>
<add> // Only want this on debug level
<add> // this is separate from the logging middleware so that we can do this check here once,
<add> // rather than for each request.
<add> if logrus.GetLevel() == logrus.DebugLevel {
<add> middlewares = append(middlewares, s.debugRequestMiddleware)
<add> }
<add>
<ide> h := handler
<ide> for _, m := range middlewares {
<ide> h = m(h) | 1 |
Python | Python | use python 3 syntax for super() where possible | 4c62e53f28a15d4766548cbe1eaff2cfbe8b877a | <ide><path>keras/api/tests/api_compatibility_test.py
<ide> def _FilterGoldenProtoDict(golden_proto_dict, omit_golden_symbols_map):
<ide> class ApiCompatibilityTest(tf.test.TestCase):
<ide>
<ide> def __init__(self, *args, **kwargs):
<del> super(ApiCompatibilityTest, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide> self._update_golden_warning = file_io.read_file_to_string(
<ide> _UPDATE_WARNING_FILE)
<ide><path>keras/backend.py
<ide> def __init__(self):
<ide> # Constructors for classes subclassing threading.local run once
<ide> # per thread accessing something in the class. Thus, each thread will
<ide> # get a different key.
<del> super(_DummyEagerGraph, self).__init__()
<add> super().__init__()
<ide> self.key = _DummyEagerGraph._WeakReferencableClass()
<ide> self.learning_phase_is_set = False
<ide>
<ide><path>keras/benchmarks/keras_examples_benchmarks/antirectifier_benchmark_test.py
<ide> class AntirectifierBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for Antirectifier using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(AntirectifierBenchmark, self).__init__()
<add> super().__init__()
<ide> (self.x_train, self.y_train), _ = tf.keras.datasets.mnist.load_data()
<ide> self.x_train = self.x_train.reshape(-1, 784)
<ide> self.x_train = self.x_train.astype("float32") / 255
<ide> class Antirectifier(tf.keras.layers.Layer):
<ide> """Build simple custom layer."""
<ide>
<ide> def __init__(self, initializer="he_normal", **kwargs):
<del> super(Antirectifier, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.initializer = tf.keras.initializers.get(initializer)
<ide>
<ide> def build(self, input_shape):
<ide> def call(self, inputs): #pylint: disable=arguments-differ
<ide>
<ide> def get_config(self):
<ide> # Implement get_config to enable serialization. This is optional.
<del> base_config = super(Antirectifier, self).get_config()
<add> base_config = super().get_config()
<ide> config = {"initializer": tf.keras.initializers.serialize(self.initializer)}
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/benchmarks/keras_examples_benchmarks/bidirectional_lstm_benchmark_test.py
<ide> class BidirectionalLSTMBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for Bidirectional LSTM using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(BidirectionalLSTMBenchmark, self).__init__()
<add> super().__init__()
<ide> self.max_feature = 20000
<ide> self.max_len = 200
<ide> (self.imdb_x, self.imdb_y), _ = tf.keras.datasets.imdb.load_data(
<ide><path>keras/benchmarks/keras_examples_benchmarks/cifar10_cnn_benchmark_test.py
<ide> class Cifar10CNNBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for CNN using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(Cifar10CNNBenchmark, self).__init__()
<add> super().__init__()
<ide> self.num_classes = 10
<ide> (self.x_train, self.y_train), _ = tf.keras.datasets.cifar10.load_data()
<ide> self.x_train = self.x_train.astype('float32') / 255
<ide><path>keras/benchmarks/keras_examples_benchmarks/mnist_conv_benchmark_test.py
<ide> class ConvMnistBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for Convnet using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(ConvMnistBenchmark, self).__init__()
<add> super().__init__()
<ide> self.num_classes = 10
<ide> self.input_shape = (28, 28, 1)
<ide> (self.x_train, self.y_train), _ = tf.keras.datasets.mnist.load_data()
<ide><path>keras/benchmarks/keras_examples_benchmarks/mnist_conv_custom_training_benchmark_test.py
<ide> class CustomMnistBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for custom training loop using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(CustomMnistBenchmark, self).__init__()
<add> super().__init__()
<ide> self.num_classes = 10
<ide> self.input_shape = (28, 28, 1)
<ide> self.epochs = 15
<ide><path>keras/benchmarks/keras_examples_benchmarks/mnist_hierarchical_rnn_benchmark_test.py
<ide> class HierarchicalRNNBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for Hierarchical RNN using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(HierarchicalRNNBenchmark, self).__init__()
<add> super().__init__()
<ide> self.num_classes = 10
<ide> self.row_hidden, self.col_hidden = 128, 128
<ide> (self.x_train, self.y_train), _ = tf.keras.datasets.mnist.load_data()
<ide><path>keras/benchmarks/keras_examples_benchmarks/mnist_irnn_benchmark_test.py
<ide> class IRNNMnistBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for IRNN using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(IRNNMnistBenchmark, self).__init__()
<add> super().__init__()
<ide> self.num_classes = 10
<ide> self.hidden_units = 100
<ide> self.learning_rate = 1e-6
<ide><path>keras/benchmarks/keras_examples_benchmarks/reuters_mlp_benchmark_test.py
<ide> class MLPReutersBenchmark(tf.test.Benchmark):
<ide> """Benchmarks for MLP using `tf.test.Benchmark`."""
<ide>
<ide> def __init__(self):
<del> super(MLPReutersBenchmark, self).__init__()
<add> super().__init__()
<ide> self.max_words = 1000
<ide> (self.x_train, self.y_train), _ = tf.keras.datasets.reuters.load_data(
<ide> num_words=self.max_words)
<ide><path>keras/benchmarks/keras_examples_benchmarks/text_classification_transformer_benchmark_test.py
<ide> class TextWithTransformerBenchmark(tf.test.Benchmark):
<ide> """
<ide>
<ide> def __init__(self):
<del> super(TextWithTransformerBenchmark, self).__init__()
<add> super().__init__()
<ide> self.max_feature = 20000
<ide> self.max_len = 200
<ide> (self.imdb_x, self.imdb_y), _ = tf.keras.datasets.imdb.load_data(
<ide> class MultiHeadSelfAttention(tf.keras.layers.Layer):
<ide> """Implement multi head self attention as a Keras layer."""
<ide>
<ide> def __init__(self, embed_dim, num_heads=8):
<del> super(MultiHeadSelfAttention, self).__init__()
<add> super().__init__()
<ide> self.embed_dim = embed_dim
<ide> self.num_heads = num_heads
<ide> if embed_dim % num_heads != 0:
<ide> class TransformerBlock(tf.keras.layers.Layer):
<ide> """Implement a Transformer block as a layer."""
<ide>
<ide> def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
<del> super(TransformerBlock, self).__init__()
<add> super().__init__()
<ide> self.att = MultiHeadSelfAttention(embed_dim, num_heads)
<ide> self.ffn = tf.keras.Sequential([
<ide> tf.keras.layers.Dense(ff_dim, activation='relu'),
<ide> class TokenAndPositionEmbedding(tf.keras.layers.Layer):
<ide> """Implement embedding layer."""
<ide>
<ide> def __init__(self, maxlen, vocab_size, embed_dim):
<del> super(TokenAndPositionEmbedding, self).__init__()
<add> super().__init__()
<ide> self.token_emb = tf.keras.layers.Embedding(
<ide> input_dim=vocab_size, output_dim=embed_dim)
<ide> self.pos_emb = tf.keras.layers.Embedding(
<ide><path>keras/benchmarks/model_components_benchmarks_test.py
<ide> class SubclassedKerasModel(tf.keras.Model):
<ide>
<ide> def __init__(self, initializer="ones"):
<del> super(SubclassedKerasModel, self).__init__()
<add> super().__init__()
<ide> self.layer_a = tf.keras.layers.Dense(
<ide> 64, kernel_initializer=initializer, bias_initializer="zeros")
<ide> self.layer_b = tf.keras.layers.Dense(
<ide><path>keras/callbacks.py
<ide> class BaseLogger(Callback):
<ide> """
<ide>
<ide> def __init__(self, stateful_metrics=None):
<del> super(BaseLogger, self).__init__()
<add> super().__init__()
<ide> self.stateful_metrics = set(stateful_metrics or [])
<ide>
<ide> def on_epoch_begin(self, epoch, logs=None):
<ide> class TerminateOnNaN(Callback):
<ide> """
<ide>
<ide> def __init__(self):
<del> super(TerminateOnNaN, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide>
<ide> def on_batch_end(self, batch, logs=None):
<ide> class ProgbarLogger(Callback):
<ide> """
<ide>
<ide> def __init__(self, count_mode='samples', stateful_metrics=None):
<del> super(ProgbarLogger, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide> if count_mode == 'samples':
<ide> self.use_steps = False
<ide> class History(Callback):
<ide> """
<ide>
<ide> def __init__(self):
<del> super(History, self).__init__()
<add> super().__init__()
<ide> self.history = {}
<ide>
<ide> def on_train_begin(self, logs=None):
<ide> def __init__(self,
<ide> options=None,
<ide> initial_value_threshold=None,
<ide> **kwargs):
<del> super(ModelCheckpoint, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide> self.monitor = monitor
<ide> self.verbose = verbose
<ide> class BackupAndRestore(Callback):
<ide> """
<ide>
<ide> def __init__(self, backup_dir):
<del> super(BackupAndRestore, self).__init__()
<add> super().__init__()
<ide> self.backup_dir = backup_dir
<ide> self._supports_tf_logs = True
<ide> self._supported_strategies = (
<ide> def __init__(self, *args, **kwargs):
<ide> '`tf.keras.callbacks.experimental.BackupAndRestore` endpoint is '
<ide> 'deprecated and will be removed in a future release. Please use '
<ide> '`tf.keras.callbacks.BackupAndRestore`.')
<del> super(BackupAndRestoreExperimental, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide>
<ide> @keras_export('keras.callbacks.EarlyStopping')
<ide> def __init__(self,
<ide> mode='auto',
<ide> baseline=None,
<ide> restore_best_weights=False):
<del> super(EarlyStopping, self).__init__()
<add> super().__init__()
<ide>
<ide> self.monitor = monitor
<ide> self.patience = patience
<ide> def __init__(self,
<ide> field='data',
<ide> headers=None,
<ide> send_as_json=False):
<del> super(RemoteMonitor, self).__init__()
<add> super().__init__()
<ide>
<ide> self.root = root
<ide> self.path = path
<ide> class LearningRateScheduler(Callback):
<ide> """
<ide>
<ide> def __init__(self, schedule, verbose=0):
<del> super(LearningRateScheduler, self).__init__()
<add> super().__init__()
<ide> self.schedule = schedule
<ide> self.verbose = verbose
<ide>
<ide> def __init__(self,
<ide> embeddings_freq=0,
<ide> embeddings_metadata=None,
<ide> **kwargs):
<del> super(TensorBoard, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide> self._validate_kwargs(kwargs)
<ide>
<ide> def __init__(self,
<ide> cooldown=0,
<ide> min_lr=0,
<ide> **kwargs):
<del> super(ReduceLROnPlateau, self).__init__()
<add> super().__init__()
<ide>
<ide> self.monitor = monitor
<ide> if factor >= 1.0:
<ide> def __init__(self, filename, separator=',', append=False):
<ide> self.writer = None
<ide> self.keys = None
<ide> self.append_header = True
<del> super(CSVLogger, self).__init__()
<add> super().__init__()
<ide>
<ide> def on_train_begin(self, logs=None):
<ide> if self.append:
<ide> def __init__(self,
<ide> on_train_begin=None,
<ide> on_train_end=None,
<ide> **kwargs):
<del> super(LambdaCallback, self).__init__()
<add> super().__init__()
<ide> self.__dict__.update(kwargs)
<ide> if on_epoch_begin is not None:
<ide> self.on_epoch_begin = on_epoch_begin
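
All of the built-in callbacks above follow the same convention: call `super().__init__()` first, then initialize subclass state. A custom callback written against the public `tf.keras.callbacks.Callback` API looks the same; the class name and the attribute it tracks below are illustrative, not part of the commit:

```python
import tensorflow as tf

class LossHistory(tf.keras.callbacks.Callback):
    """Collect per-batch training loss."""

    def __init__(self):
        super().__init__()  # set up the state the base Callback expects
        self.batch_losses = []

    def on_train_batch_end(self, batch, logs=None):
        logs = logs or {}
        if "loss" in logs:
            self.batch_losses.append(logs["loss"])

# Usage: model.fit(x, y, callbacks=[LossHistory()])
```
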
<ide><path>keras/callbacks_test.py
<ide> class AddAllOnes(keras.metrics.Metric):
<ide> """A simple metric that adds all the one's in `y_true`."""
<ide>
<ide> def __init__(self, name='add_all_ones', **kwargs):
<del> super(AddAllOnes, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> self.total = self.add_weight(name='total', initializer='zeros')
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> def on_predict_batch_end(self, batch, logs=None):
<ide> class MyCallbackWithTFBatchHooks(keras.callbacks.Callback):
<ide>
<ide> def __init__(self):
<del> super(MyCallbackWithTFBatchHooks, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide>
<ide> class MyCallbackWithoutBatchHooks(keras.callbacks.Callback):
<ide> def _run(self, *args, logs=None):
<ide> class MutateTensorFlowLogs(CallAllHooks):
<ide>
<ide> def __init__(self):
<del> super(MutateTensorFlowLogs, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide>
<ide> def _run(self, *args, logs=None):
<ide> def _run(self, *args, logs=None):
<ide> class AssertTensorFlowLogs(AssertNumpyLogs):
<ide>
<ide> def __init__(self):
<del> super(AssertTensorFlowLogs, self).__init__()
<add> super().__init__()
<ide> self._supports_tf_logs = True
<ide>
<ide> cb_list = keras.callbacks.CallbackList([
<ide> def test_stop_training_batch_level(self):
<ide> class MyCallback(keras.callbacks.Callback):
<ide>
<ide> def __init__(self):
<del> super(MyCallback, self).__init__()
<add> super().__init__()
<ide> self.batch_counter = 0
<ide>
<ide> def on_train_batch_end(self, batch, logs=None):
<ide> class CustomCallback(keras.callbacks.Callback):
<ide> class TestingCallbackList(keras.callbacks.CallbackList):
<ide>
<ide> def __init__(self, *args, **kwargs):
<del> super(TestingCallbackList, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> if ((not isinstance(self.callbacks[0], CustomCallback)) or
<ide> (not isinstance(self.callbacks[1], keras.callbacks.History)) or
<ide> (not isinstance(self.callbacks[2], keras.callbacks.ProgbarLogger))):
<ide> def testKerasModel_subclass(self):
<ide> class SimpleSubclass(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(SimpleSubclass, self).__init__(name='subclass')
<add> super().__init__(name='subclass')
<ide> self.dense = Dense(10, input_shape=(100,))
<ide> self.activation = Activation('relu', name='my_relu')
<ide>
<ide><path>keras/distribute/ctl_correctness_test.py
<ide> class TestDistributionStrategyDnnCorrectness(tf.test.TestCase,
<ide> """Test custom training loop correctness with a simple DNN model."""
<ide>
<ide> def setUp(self):
<del> super(TestDistributionStrategyDnnCorrectness, self).setUp()
<add> super().setUp()
<ide> np.random.seed(_RANDOM_SEED)
<ide> tf.compat.v1.set_random_seed(_RANDOM_SEED)
<ide>
<ide><path>keras/distribute/custom_training_loop_models_test.py
<ide> class CustomModel(tf.Module):
<ide>
<ide> def __init__(self, name=None):
<del> super(CustomModel, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> with self.name_scope:
<ide> self._layers = [
<ide> keras.layers.Dense(4, name="dense"),
<ide> def get_subclass_model():
<ide> class KerasSubclassModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(KerasSubclassModel, self).__init__()
<add> super().__init__()
<ide> self.l = keras.layers.Dense(4, name="dense")
<ide>
<ide> def call(self, x):
<ide> def test_tf_function_jit_compile(self, distribution):
<ide> class CustomDense(keras.layers.Layer):
<ide>
<ide> def __init__(self, num_outputs):
<del> super(CustomDense, self).__init__()
<add> super().__init__()
<ide> self.num_outputs = num_outputs
<ide>
<ide> def build(self, input_shape):
<ide><path>keras/distribute/dataset_creator_model_fit_ps_only_test.py
<ide> def testModelFitCallbackSupportsTFLogs(self, strategy, use_dataset_creator):
<ide> class MyCallback(callbacks_lib.Callback):
<ide>
<ide> def __init__(self):
<del> super(MyCallback, self).__init__()
<add> super().__init__()
<ide> # Fetches the RemoteValues if necessary.
<ide> self._supports_tf_logs = True
<ide>
<ide><path>keras/distribute/distribute_strategy_test.py
<ide> def simple_subclassed_model(num_labels=_NUM_CLASS):
<ide> class _SimpleMLP(keras.Model):
<ide>
<ide> def __init__(self, num_labels):
<del> super(_SimpleMLP, self).__init__()
<add> super().__init__()
<ide> self.dense = keras.layers.Dense(num_labels)
<ide>
<ide> def call(self, inputs):
<ide> def strategy_and_optimizer_combinations():
<ide> class BatchCountingCB(keras.callbacks.Callback):
<ide>
<ide> def __init__(self):
<del> super(BatchCountingCB, self).__init__()
<add> super().__init__()
<ide> self.train_begin_batches = []
<ide> self.train_end_batches = []
<ide> self.test_begin_batches = []
<ide> def build(self, input_shape):
<ide> # Gradients w.r.t. extra_weights are None
<ide> self.extra_weight_1 = self.add_weight('extra_weight_1', shape=(),
<ide> initializer='ones')
<del> super(DenseWithExtraWeight, self).build(input_shape)
<add> super().build(input_shape)
<ide> self.extra_weight_2 = self.add_weight('extra_weight_2', shape=(),
<ide> initializer='ones')
<ide>
<ide> class TestDistributionStrategyWithDatasetsFile(tf.test.TestCase,
<ide> parameterized.TestCase):
<ide>
<ide> def setUp(self):
<del> super(TestDistributionStrategyWithDatasetsFile, self).setUp()
<add> super().setUp()
<ide> self.input_file_name = os.path.join(self.get_temp_dir(), 'input.tfrecord')
<ide> inputs = np.zeros((20, 3), dtype=np.float32)
<ide> input_dataset = tf.data.Dataset.from_tensor_slices(inputs)
<ide> class ToRagged(keras.layers.Layer):
<ide> """Create a ragged tensor based on a given dense tensor."""
<ide>
<ide> def __init__(self, padding, ragged_rank=1, **kwargs):
<del> super(ToRagged, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self._padding = padding
<ide> self._ragged_rank = ragged_rank
<ide>
<ide> class DeterministicModel(keras.Model):
<ide> """
<ide>
<ide> def __init__(self, strategy):
<del> super(DeterministicModel, self).__init__()
<add> super().__init__()
<ide> self.x = None
<ide> self.strategy = strategy
<ide>
<ide><path>keras/distribute/keras_dnn_correctness_test.py
<ide> def test_identity_model_metric_eval_correctness(self, distribution):
<ide> class SubclassedModel(keras.Model):
<ide>
<ide> def __init__(self, initial_weights, input_shapes):
<del> super(SubclassedModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(10, activation='relu', input_shape=(1,))
<ide> self.dense2 = keras.layers.Dense(
<ide> 10, activation='relu', kernel_regularizer=keras.regularizers.l2(1e-4))
<ide><path>keras/distribute/keras_embedding_model_correctness_test.py
<ide> def get_data(self,
<ide> max_word_id=19,
<ide> num_classes=2):
<ide> features_a, labels_a, _ = (
<del> super(DistributionStrategySiameseEmbeddingModelCorrectnessTest,
<del> self).get_data(count, min_words, max_words, max_word_id,
<add> super().get_data(count, min_words, max_words, max_word_id,
<ide> num_classes))
<ide>
<ide> features_b, labels_b, _ = (
<del> super(DistributionStrategySiameseEmbeddingModelCorrectnessTest,
<del> self).get_data(count, min_words, max_words, max_word_id,
<add> super().get_data(count, min_words, max_words, max_word_id,
<ide> num_classes))
<ide>
<ide> y_train = np.zeros((count, 1), dtype=np.float32)
<ide><path>keras/distribute/keras_metrics_test.py
<ide> def testAddMetric(self, distribution, jit_compile):
<ide> class MetricLayer(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(MetricLayer, self).__init__(name="metric_layer")
<add> super().__init__(name="metric_layer")
<ide> self.sum = metrics.Sum(name="sum")
<ide> # Using aggregation for jit_compile results in failure. Thus only set
<ide> # aggregation for PS Strategy for multi-gpu tests.
<ide><path>keras/distribute/keras_save_load_test.py
<ide> class KerasSaveLoadTest(test_base.TestSavedModelBase):
<ide>
<ide> def setUp(self):
<ide> self._root_dir = 'keras_save_load'
<del> super(KerasSaveLoadTest, self).setUp()
<add> super().setUp()
<ide>
<ide> def _save_model(self, model, saved_dir):
<ide> model.save(saved_dir, save_format='tf')
<ide><path>keras/distribute/keras_utils_test.py
<ide> def test_distribution_strategy_on_subclassed_model(
<ide> class _SimpleMLP(keras.Model):
<ide>
<ide> def __init__(self, num_labels):
<del> super(_SimpleMLP, self).__init__()
<add> super().__init__()
<ide> self.dense = keras.layers.Dense(num_labels)
<ide>
<ide> def call(self, inputs):
<ide><path>keras/distribute/mirrored_strategy_test.py
<ide> class MiniModel(keras_training.Model):
<ide> """
<ide>
<ide> def __init__(self):
<del> super(MiniModel, self).__init__(name="")
<add> super().__init__(name="")
<ide> self.fc = keras_core.Dense(1, name="fc", kernel_initializer="ones",
<ide> bias_initializer="ones")
<ide>
<ide><path>keras/distribute/multi_worker_test.py
<ide> def __init__(self, num_epoch, num_worker):
<ide> num_epoch: Number of epochs this Callback is expected to be called for.
<ide> num_worker: Number of workers this Callback is expected to be called from.
<ide> """
<del> super(MultiWorkerVerificationCallback, self).__init__()
<add> super().__init__()
<ide> self._num_epoch = num_epoch
<ide> self._num_worker = num_worker
<ide> self._task_dict = {
<ide><path>keras/distribute/saved_model_mixed_api_test.py
<ide> class SavedModelSaveAndLoadTest(test_base.TestSavedModelBase):
<ide>
<ide> def setUp(self):
<ide> self._root_dir = 'saved_model_save_load'
<del> super(SavedModelSaveAndLoadTest, self).setUp()
<add> super().setUp()
<ide>
<ide> def _save_model(self, model, saved_dir):
<ide> save.save_model(model, saved_dir, save_format='tf')
<ide><path>keras/distribute/saved_model_save_load_test.py
<ide> class SavedModelKerasModelTest(test_base.TestSavedModelBase):
<ide>
<ide> def setUp(self):
<ide> self._root_dir = 'saved_model_save_load'
<del> super(SavedModelKerasModelTest, self).setUp()
<add> super().setUp()
<ide>
<ide> def _save_model(self, model, saved_dir):
<ide> tf.saved_model.save(model, saved_dir)
<ide> class SavedModelTFModuleTest(test_base.TestSavedModelBase):
<ide>
<ide> def setUp(self):
<ide> self._root_dir = 'saved_model_save_load'
<del> super(SavedModelTFModuleTest, self).setUp()
<add> super().setUp()
<ide>
<ide> def _train_model(self, model, x_train, y_train, batch_size):
<ide> pass
<ide><path>keras/distribute/saved_model_test_base.py
<ide> def setUp(self):
<ide> np.random.seed(_RANDOM_SEED)
<ide> tf.compat.v1.set_random_seed(_RANDOM_SEED)
<ide> self._root_dir = 'base'
<del> super(TestSavedModelBase, self).setUp()
<add> super().setUp()
<ide>
<ide> def _save_model(self, model, saved_dir):
<ide> """Save the given model to the given saved_dir.
<ide><path>keras/distribute/sidecar_evaluator.py
<ide> def __init__(self, *args, **kwargs):
<ide> '`tf.keras.experimental.SidecarEvaluator` endpoint is '
<ide> 'deprecated and will be removed in a future release. Please use '
<ide> '`tf.keras.utils.SidecarEvaluator`.')
<del> super(SidecarEvaluatorExperimental, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide><path>keras/distribute/simple_models.py
<ide> def get_batch_size(self):
<ide> class _SimpleModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(_SimpleModel, self).__init__()
<add> super().__init__()
<ide> self._dense_layer = keras.layers.Dense(5, dtype=tf.float32)
<ide>
<ide> def call(self, inputs):
<ide><path>keras/dtensor/initializers_test.py
<ide> class InitializersTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(InitializersTest, self).setUp()
<add> super().setUp()
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide> local_device_ids = np.ravel(global_ids).tolist()
<ide> mesh_dict = {
<ide><path>keras/dtensor/layers_test.py
<ide> class LayersTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(LayersTest, self).setUp()
<add> super().setUp()
<ide> backend.enable_tf_random_generator()
<ide> tf_utils.set_random_seed(1337)
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide><path>keras/dtensor/layout_map_test.py
<ide> class LayoutMapTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(LayoutMapTest, self).setUp()
<add> super().setUp()
<ide> backend.enable_tf_random_generator()
<ide> tf_utils.set_random_seed(1337)
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide> def call(self, inputs, training=None):
<ide> class ObjectPathMappingTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(ObjectPathMappingTest, self).setUp()
<add> super().setUp()
<ide> backend.enable_tf_random_generator()
<ide> tf_utils.set_random_seed(1337)
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide><path>keras/dtensor/lazy_variable.py
<ide> def __init__(
<ide> unique_id) = _infer_shape_dtype_and_create_handle(initial_value, shape,
<ide> dtype, name)
<ide>
<del> super(LazyInitVariable, self).__init__(
<add> super().__init__(
<ide> distribute_strategy=distribute_strategy,
<ide> initial_value=initial_value,
<ide> shape=shape,
<ide> def create_and_initialize(self):
<ide> initial_value, self._shape, self._dtype, self._name)
<ide> self.initialize()
<ide>
<del> super(LazyInitVariable, self).__init__(
<add> super().__init__(
<ide> trainable=self._trainable,
<ide> shape=shape,
<ide> dtype=dtype,
<ide><path>keras/dtensor/metrics_test.py
<ide> class MetricsTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(MetricsTest, self).setUp()
<add> super().setUp()
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide> local_device_ids = np.ravel(global_ids).tolist()
<ide> mesh_dict = {
<ide><path>keras/dtensor/optimizers_test.py
<ide> class OptimizersTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(OptimizersTest, self).setUp()
<add> super().setUp()
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide> local_device_ids = np.ravel(global_ids).tolist()
<ide> mesh_dict = {
<ide><path>keras/dtensor/utils_test.py
<ide> class UtilsTest(test_util.DTensorBaseTest):
<ide>
<ide> def setUp(self):
<del> super(UtilsTest, self).setUp()
<add> super().setUp()
<ide> global_ids = test_util.create_device_ids_array((2, 2))
<ide> local_device_ids = np.ravel(global_ids).tolist()
<ide> mesh_dict = {
<ide><path>keras/engine/base_layer_test.py
<ide> class DynamicLayer(base_layer.Layer):
<ide>
<ide> def __init__(self, dynamic=False, **kwargs):
<del> super(DynamicLayer, self).__init__(dynamic=dynamic, **kwargs)
<add> super().__init__(dynamic=dynamic, **kwargs)
<ide>
<ide> def call(self, inputs):
<ide> samples = tf.TensorArray(
<ide> def test_manual_compute_output_shape(self):
<ide> class BuildCounter(base_layer.Layer):
<ide>
<ide> def __init__(self, *args, **kwargs): # pylint: disable=redefined-outer-name
<del> super(BuildCounter, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> self.build_counter = 0
<ide>
<ide> def build(self, input_shape):
<ide> def test_dynamic_subclassed_model_no_shape_inference(self):
<ide> class MyModel(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__(dynamic=True)
<add> super().__init__(dynamic=True)
<ide> self.layer1 = layers.Dense(3)
<ide> self.layer2 = layers.Dense(3)
<ide>
<ide> def test_dynamic_subclassed_model_with_shape_inference(self):
<ide> class MyModel(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__(dynamic=True)
<add> super().__init__(dynamic=True)
<ide> self.layer1 = layers.Dense(3)
<ide> self.layer2 = layers.Dense(3)
<ide>
<ide> def test_default_add_weight(self):
<ide> class TestLayer(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(TestLayer, self).__init__()
<add> super().__init__()
<ide> self.default_weight = self.add_weight()
<ide> self.weight_without_name = self.add_weight(shape=(3, 4))
<ide> self.regularized_weight_without_name = self.add_weight(
<ide> def test_layer_can_return_variable(self):
<ide> class ComputeSum(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(ComputeSum, self).__init__()
<add> super().__init__()
<ide> self.total = tf.Variable(
<ide> initial_value=tf.zeros((1, 1)), trainable=False)
<ide> if not tf.executing_eagerly():
<ide> def test_raw_variable_assignment(self):
<ide> class RawVariableLayer(base_layer.Layer):
<ide>
<ide> def __init__(self, **kwargs):
<del> super(RawVariableLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> # Test variables in nested structure.
<ide> self.var_list = [tf.Variable(1.), {'a': tf.Variable(2.)}]
<ide>
<ide> def test_get_config_error(self):
<ide> class MyLayer(base_layer.Layer):
<ide>
<ide> def __init__(self, my_kwarg='default', **kwargs):
<del> super(MyLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.my_kwarg = my_kwarg
<ide>
<ide> # `__init__` includes kwargs but `get_config` is not overridden, so
<ide> def __init__(self, my_kwarg='default', **kwargs):
<ide> class MyLayerNew(base_layer.Layer):
<ide>
<ide> def __init__(self, my_kwarg='default', **kwargs):
<del> super(MyLayerNew, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.my_kwarg = my_kwarg
<ide>
<ide> def get_config(self):
<del> config = super(MyLayerNew, self).get_config()
<add> config = super().get_config()
<ide> config['my_kwarg'] = self.my_kwarg
<ide> return config
<ide>
<ide> def get_config(self):
<ide> class MyLayerNew2(base_layer.Layer):
<ide>
<ide> def __init__(self, name='MyLayerName', dtype=None, **kwargs): # pylint:disable=redefined-outer-name
<del> super(MyLayerNew2, self).__init__(name=name, dtype=dtype, **kwargs)
<add> super().__init__(name=name, dtype=dtype, **kwargs)
<ide>
<ide> # Check that if the kwargs in `__init__` are base layer constructor
<ide> # arguments, no error is thrown:
<ide> class CustomLayer(base_layer.Layer):
<ide>
<ide> def build(self, input_shape):
<ide> self.add_weight('w', shape=input_shape[1:])
<del> super(CustomLayer, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> layer = CustomLayer()
<ide> self.assertFalse(layer.built)
<ide> def test_custom_layer_training_arg(self):
<ide> class CustomLayerNoTrainingArg(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerNoTrainingArg, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs):
<ide> def call(self, inputs):
<ide> class CustomLayerDefaultTrainingMissing(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingMissing, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, training):
<ide> def call(self, inputs, training):
<ide> class CustomLayerDefaultTrainingNone(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingNone, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, training=None):
<ide> def call(self, inputs, training=None):
<ide> class CustomLayerDefaultTrainingFalse(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingFalse, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, training=False):
<ide> def call(self, inputs, training=False):
<ide> class CustomLayerDefaultTrainingTrue(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingTrue, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, training=True):
<ide> def test_custom_layer_training_arg_kwargonly(self):
<ide> class CustomLayerNoTrainingArg(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerNoTrainingArg, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs):
<ide> def call(self, inputs):
<ide> class CustomLayerDefaultTrainingMissing(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingMissing, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, *, training):
<ide> def call(self, inputs, *, training):
<ide> class CustomLayerDefaultTrainingNone(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingNone, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, *, training=None):
<ide> def call(self, inputs, *, training=None):
<ide> class CustomLayerDefaultTrainingFalse(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingFalse, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, *, training=False):
<ide> def call(self, inputs, *, training=False):
<ide> class CustomLayerDefaultTrainingTrue(base_layer.Layer):
<ide>
<ide> def __init__(self, nested_layer=None):
<del> super(CustomLayerDefaultTrainingTrue, self).__init__()
<add> super().__init__()
<ide> self._nested_layer = nested_layer or tf.identity
<ide>
<ide> def call(self, inputs, *, training=True):
<ide> def test_tf_module_tracking(self):
<ide> class MyModule(tf.Module):
<ide>
<ide> def __init__(self):
<del> super(MyModule, self).__init__()
<add> super().__init__()
<ide> self.v1 = tf.Variable(1., trainable=True, name='v1')
<ide> self.v2 = tf.Variable(2., trainable=False, name='v2')
<ide>
<ide> def __call__(self, x):
<ide> class MyLayer(base_layer.Layer):
<ide>
<ide> def __init__(self, **kwargs):
<del> super(MyLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.my_modules = {}
<ide> self.my_modules['a'] = MyModule()
<ide>
<ide> def call(self, x):
<ide> class MyModel(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.my_modules = []
<ide> self.my_modules.append(MyModule())
<ide>
<ide> def test_nested_layer_variable_tracking(self):
<ide> class MyLayer(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(MyLayer, self).__init__()
<add> super().__init__()
<ide> self.dense1 = layers.Dense(1)
<ide> self.dense2 = layers.BatchNormalization()
<ide>
<ide> def build(self, _):
<ide> self.v1 = self.add_weight('v1', shape=())
<ide>
<ide> def __init__(self):
<del> super(MyLayer, self).__init__()
<add> super().__init__()
<ide> self.ul1 = UpdateAndLossLayer()
<ide> self.ul2 = UpdateAndLossLayer()
<ide>
<ide> def test_layer_class_not_tracked_as_sublayer(self):
<ide> class LayerWithClassAttribute(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithClassAttribute, self).__init__()
<add> super().__init__()
<ide> self.layer_fn = layers.Dense
<ide>
<ide> layer = LayerWithClassAttribute()
<ide> def test_sequential_model(self):
<ide> class Sequential(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(Sequential, self).__init__()
<add> super().__init__()
<ide> self.dense_layers = [layers.Dense(10), layers.Dense(5)]
<ide>
<ide> def call(self, inputs):
<ide> def test_name_scope_functional_api_nested(self):
<ide> class NestedLayer(base_layer.Layer):
<ide>
<ide> def __init__(self, name='OuterName'):
<del> super(NestedLayer, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self.dense = layers.Dense(10, name='InnerName')
<ide>
<ide> def call(self, inputs):
<ide> def test_conditional_losses_in_call(self):
<ide> class MyLayer(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(MyLayer,
<del> self).__init__(dynamic=test_utils.should_run_eagerly())
<add> super().__init__(dynamic=test_utils.should_run_eagerly())
<ide>
<ide> def call(self, inputs, training=None):
<ide> if training:
<ide> def test_conditional_metrics_in_call(self):
<ide> class MyLayer(base_layer.Layer):
<ide>
<ide> def __init__(self):
<del> super(MyLayer,
<del> self).__init__(dynamic=test_utils.should_run_eagerly())
<add> super().__init__(dynamic=test_utils.should_run_eagerly())
<ide>
<ide> def call(self, inputs, training=None):
<ide> if training:
<ide> def test_conditional_activity_regularizer_in_call(self):
<ide> class TestModel(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(
<add> super().__init__(
<ide> name='test_model', dynamic=test_utils.should_run_eagerly())
<ide> self.layer = layers.Dense(2, activity_regularizer='l2')
<ide>
<ide> def test_conditional_activity_regularizer_with_wrappers_in_call(self):
<ide> class TestModel(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(
<add> super().__init__(
<ide> name='test_model', dynamic=test_utils.should_run_eagerly())
<ide> self.layer = layers.TimeDistributed(
<ide> layers.Dense(2, activity_regularizer='l2'), input_shape=(3, 4))
<ide> class IdentityLayerWithoutAutocast(IdentityLayer):
<ide>
<ide> def __init__(self, *args, **kwargs):
<ide> kwargs['autocast'] = False
<del> super(IdentityLayerWithoutAutocast, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide> layer = IdentityLayerWithoutAutocast(dtype='float64')
<ide> self.assertEqual(layer(self._const('float32')).dtype, 'float32')
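
The `get_config` hunks above exercise the idiomatic serialization pattern: start from `super().get_config()` so base-layer arguments like `name` and `dtype` round-trip, then add the subclass's own constructor arguments. A minimal sketch of the pattern, with a hypothetical layer name:

```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    """Multiply inputs by a serializable scale factor."""

    def __init__(self, scale=1.0, **kwargs):
        super().__init__(**kwargs)  # forwards name/dtype/etc. to the base layer
        self.scale = scale

    def call(self, inputs):
        return inputs * self.scale

    def get_config(self):
        # Base config first so name/dtype round-trip, then our own args.
        config = super().get_config()
        config["scale"] = self.scale
        return config

layer = ScaledDense(scale=2.0, name="scaled")
restored = ScaledDense.from_config(layer.get_config())
assert restored.scale == 2.0
```
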
<ide><path>keras/engine/base_preprocessing_layer.py
<ide> class PreprocessingLayer(Layer, metaclass=abc.ABCMeta):
<ide> _must_restore_from_config = True
<ide>
<ide> def __init__(self, **kwargs):
<del> super(PreprocessingLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self._is_compiled = False
<ide> self._is_adapted = False
<ide>
<ide><path>keras/engine/base_preprocessing_layer_test.py
<ide> class AddingPreprocessingLayer(base_preprocessing_layer.PreprocessingLayer):
<ide>
<ide> def build(self, input_shape):
<del> super(AddingPreprocessingLayer, self).build(input_shape)
<add> super().build(input_shape)
<ide> self.sum = tf.Variable(0., dtype=tf.float32)
<ide>
<ide> def update_state(self, data):
<ide><path>keras/engine/control_flow_test.py
<ide> class NestedControlFlowLayer(base_layer.Layer):
<ide> """Layer nested with a control flow layer."""
<ide>
<ide> def __init__(self, **kwargs):
<del> super(NestedControlFlowLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.layer = ControlFlowLayer1()
<ide>
<ide> def call(self, inputs):
<ide> class NestedControlFlowModel(keras.Model):
<ide> """Model with an `if` condition in call using a control flow layer."""
<ide>
<ide> def __init__(self, **kwargs):
<del> super(NestedControlFlowModel, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.layer = NestedControlFlowLayer()
<ide>
<ide> def call(self, inputs):
<ide><path>keras/engine/correctness_test.py
<ide> class MultiInputSubclassed(keras.Model):
<ide> """Subclassed Model that adds its inputs and then adds a bias."""
<ide>
<ide> def __init__(self):
<del> super(MultiInputSubclassed, self).__init__()
<add> super().__init__()
<ide> self.add = keras.layers.Add()
<ide> self.bias = test_utils.Bias()
<ide>
<ide><path>keras/engine/data_adapter.py
<ide> def __init__(self,
<ide> steps=None,
<ide> shuffle=False,
<ide> **kwargs):
<del> super(TensorLikeDataAdapter, self).__init__(x, y, **kwargs)
<add> super().__init__(x, y, **kwargs)
<ide> x, y, sample_weights = _process_tensorlike((x, y, sample_weights))
<ide> sample_weight_modes = broadcast_sample_weight_modes(
<ide> sample_weights, sample_weight_modes)
<ide> def __init__(self, *args, **kwargs):
<ide> "supported by TensorFlow I/O (https://github.com/tensorflow/io) we "
<ide> "recommend using that to load a Dataset instead.")
<ide>
<del> super(GenericArrayLikeDataAdapter, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide> def slice_inputs(self, indices_dataset, inputs):
<ide> """Slice inputs into a Dataset of batches.
<ide> class DatasetCreatorAdapter(DataAdapter):
<ide> """Adapter that handles dataset functions."""
<ide>
<ide> def __init__(self, x, y, steps=None, distribution_strategy=None, **kwargs):
<del> super(DatasetCreatorAdapter, self).__init__(x, **kwargs)
<add> super().__init__(x, **kwargs)
<ide>
<ide> if not isinstance(x, dataset_creator.DatasetCreator):
<ide> raise TypeError("The input of a `DatasetCreatorAdapter` should be a "
<ide> def __init__(self,
<ide> steps=None,
<ide> shuffle=False,
<ide> **kwargs):
<del> super(CompositeTensorDataAdapter, self).__init__(x, y, **kwargs)
<add> super().__init__(x, y, **kwargs)
<ide> x, y, sample_weights = _process_tensorlike((x, y, sample_weights))
<ide> sample_weight_modes = broadcast_sample_weight_modes(
<ide> sample_weights, sample_weight_modes)
<ide> def __init__(self,
<ide> batch_size=None,
<ide> shuffle=False,
<ide> **kwargs):
<del> super(ListsOfScalarsDataAdapter, self).__init__(x, y, **kwargs)
<add> super().__init__(x, y, **kwargs)
<ide> x = np.asarray(x)
<ide> if y is not None:
<ide> y = np.asarray(y)
<ide> def __init__(self,
<ide> sample_weights=None,
<ide> steps=None,
<ide> **kwargs):
<del> super(DatasetAdapter, self).__init__(x, y, **kwargs)
<add> super().__init__(x, y, **kwargs)
<ide> # Note that the dataset instance is immutable, its fine to reuse the user
<ide> # provided dataset.
<ide> self._dataset = x
<ide> def __init__(self,
<ide> raise ValueError("`sample_weight` argument is not supported when using "
<ide> "python generator as input.")
<ide>
<del> super(GeneratorDataAdapter, self).__init__(x, y, **kwargs)
<add> super().__init__(x, y, **kwargs)
<ide>
<ide> # Since we have to know the dtype of the python generator when we build the
<ide> # dataset, we have to look at a batch to infer the structure.
<ide> def __init__(self,
<ide> self._shuffle_sequence = shuffle
<ide> self._keras_sequence = x
<ide> self._enqueuer = None
<del> super(KerasSequenceAdapter, self).__init__(
<add> super().__init__(
<ide> x,
<ide> shuffle=False, # Shuffle is handed in the _make_callable override.
<ide> workers=workers,
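
The adapter constructors above all consume the keyword arguments they understand and forward the remainder with `super().__init__(x, y, **kwargs)`, so unknown arguments can be validated once at the top of the chain. A stripped-down sketch of that cooperative-constructor pattern (class names are hypothetical):

```python
class Adapter:
    """Base of the chain: rejects any keyword argument nobody consumed."""

    def __init__(self, x, y=None, **kwargs):
        if kwargs:
            raise TypeError(f"Unrecognized arguments: {kwargs!r}")
        self.x, self.y = x, y

class ShufflingAdapter(Adapter):
    def __init__(self, x, y=None, shuffle=False, **kwargs):
        # Consume what this subclass understands, forward the rest.
        super().__init__(x, y, **kwargs)
        self.shuffle = shuffle

adapter = ShufflingAdapter([1, 2, 3], shuffle=True)
assert adapter.shuffle and adapter.x == [1, 2, 3]

# An unknown argument is caught at the top of the chain:
# ShufflingAdapter([1], workers=4)  -> TypeError
```
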
<ide><path>keras/engine/data_adapter_test.py
<ide> def fail_on_convert(x, **kwargs):
<ide> class DataAdapterTestBase(test_combinations.TestCase):
<ide>
<ide> def setUp(self):
<del> super(DataAdapterTestBase, self).setUp()
<add> super().setUp()
<ide> self.batch_size = 5
<ide> self.numpy_input = np.zeros((50, 10))
<ide> self.numpy_target = np.ones(50)
<ide> def _linearly_increasing_batch_size(self):
<ide> class TensorLikeDataAdapterTest(DataAdapterTestBase):
<ide>
<ide> def setUp(self):
<del> super(TensorLikeDataAdapterTest, self).setUp()
<add> super().setUp()
<ide> self.adapter_cls = data_adapter.TensorLikeDataAdapter
<ide>
<ide> def test_can_handle_numpy(self):
<ide> def test_training_with_increasing_batch_size(self):
<ide> class GenericArrayLikeDataAdapterTest(DataAdapterTestBase):
<ide>
<ide> def setUp(self):
<del> super(GenericArrayLikeDataAdapterTest, self).setUp()
<add> super().setUp()
<ide> self.adapter_cls = data_adapter.GenericArrayLikeDataAdapter
<ide>
<ide> def test_can_handle_some_numpy(self):
<ide> def test_partial_batch(
<ide> class DatasetAdapterTest(DataAdapterTestBase):
<ide>
<ide> def setUp(self):
<del> super(DatasetAdapterTest, self).setUp()
<add> super().setUp()
<ide> self.adapter_cls = data_adapter.DatasetAdapter
<ide>
<ide> def test_can_handle(self):
<ide> def test_invalid_sample_weights_argument(self):
<ide> class GeneratorDataAdapterTest(DataAdapterTestBase):
<ide>
<ide> def setUp(self):
<del> super(GeneratorDataAdapterTest, self).setUp()
<add> super().setUp()
<ide> self.adapter_cls = data_adapter.GeneratorDataAdapter
<ide>
<ide> def test_can_handle(self):
<ide> def test_step(self, data):
<ide> class KerasSequenceAdapterTest(DataAdapterTestBase):
<ide>
<ide> def setUp(self):
<del> super(KerasSequenceAdapterTest, self).setUp()
<add> super().setUp()
<ide> self.adapter_cls = data_adapter.KerasSequenceAdapter
<ide>
<ide> def test_can_handle(self):
<ide> def test_invalid_sample_weights_argument(self):
<ide> class KerasSequenceAdapterSparseTest(KerasSequenceAdapterTest):
<ide>
<ide> def setUp(self):
<del> super(KerasSequenceAdapterSparseTest, self).setUp()
<add> super().setUp()
<ide> self.sequence_input = TestSparseSequence(self.batch_size, 10)
<ide>
<ide>
<ide> class KerasSequenceAdapterRaggedTest(KerasSequenceAdapterTest):
<ide>
<ide> def setUp(self):
<del> super(KerasSequenceAdapterRaggedTest, self).setUp()
<add> super().setUp()
<ide> self.sequence_input = TestRaggedSequence(self.batch_size, 10)
<ide>
<ide> self.model = keras.models.Sequential([
<ide> def test_validation_split_none(self):
<ide> class ListsOfScalarsDataAdapterTest(DataAdapterTestBase):
<ide>
<ide> def setUp(self):
<del> super(ListsOfScalarsDataAdapterTest, self).setUp()
<add> super().setUp()
<ide> self.adapter_cls = data_adapter.ListsOfScalarsDataAdapter
<ide>
<ide> def test_can_list_inputs(self):
<ide><path>keras/engine/feature_columns_integration_test.py
<ide> class TestDNNModel(keras.models.Model):
<ide>
<ide> def __init__(self, feature_columns, units, name=None, **kwargs):
<del> super(TestDNNModel, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> self._input_layer = df.DenseFeatures(feature_columns, name='input_layer')
<ide> self._dense_layer = keras.layers.Dense(units, name='dense_layer')
<ide>
<ide><path>keras/engine/functional.py
<ide> def __init__(self, inputs, outputs, name=None, trainable=True,
<ide> if skip_init:
<ide> return
<ide> generic_utils.validate_kwargs(kwargs, {})
<del> super(Functional, self).__init__(name=name, trainable=trainable)
<add> super().__init__(name=name, trainable=trainable)
<ide> # Check if the inputs contain any intermediate `KerasTensor` (not created
<ide> # by tf.keras.Input()). In this case we need to clone the `Node` and
<ide> # `KerasTensor` objects to mimic rebuilding a new model from new inputs.
<ide> def _layer_checkpoint_dependencies(self):
<ide> def _trackable_children(self, save_type='checkpoint', **kwargs):
<ide> dependencies = self._layer_checkpoint_dependencies
<ide> dependencies.update(
<del> super(Functional, self)._trackable_children(save_type, **kwargs))
<add> super()._trackable_children(save_type, **kwargs))
<ide> return dependencies
<ide>
<ide> def _lookup_dependency(self, name):
<ide> layer_dependencies = self._layer_checkpoint_dependencies
<ide> if name in layer_dependencies:
<ide> return layer_dependencies[name]
<del> return super(Functional, self)._lookup_dependency(name)
<add> return super()._lookup_dependency(name)
<ide>
<ide> def _handle_deferred_layer_dependencies(self, layers):
<ide> """Handles layer checkpoint dependencies that are added after init."""
<ide> def _get_save_spec(self, dynamic_batch=True, inputs_only=True):
<ide> # Functional models and Sequential models that have an explicit input
<ide> # shape should use the batch size set by the input layer.
<ide> dynamic_batch = False
<del> return super(Functional, self)._get_save_spec(dynamic_batch, inputs_only)
<add> return super()._get_save_spec(dynamic_batch, inputs_only)
<ide>
<ide>
<ide> def _make_node_key(layer_name, node_index):
<ide> def __init__(self, module, method_name=None, **kwargs):
<ide> Raises:
<ide> ValueError: If `method` is not defined on `module`.
<ide> """
<del> super(ModuleWrapper, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if method_name is None:
<ide> if hasattr(module, '__call__'):
<ide> method_name = '__call__'
<ide><path>keras/engine/functional_test.py
<ide> def testNoneInShape(self):
<ide> class Model(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(Model, self).__init__()
<add> super().__init__()
<ide> self.conv1 = layers.Conv2D(8, 3)
<ide> self.pool = layers.GlobalAveragePooling2D()
<ide> self.fc = layers.Dense(3)
<ide> def testNoneInShapeWithCompoundModel(self):
<ide> class BasicBlock(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(BasicBlock, self).__init__()
<add> super().__init__()
<ide> self.conv1 = layers.Conv2D(8, 3)
<ide> self.pool = layers.GlobalAveragePooling2D()
<ide> self.dense = layers.Dense(3)
<ide> def call(self, x):
<ide> class CompoundModel(training_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(CompoundModel, self).__init__()
<add> super().__init__()
<ide> self.block = BasicBlock()
<ide>
<ide> def call(self, x):
<ide> class BasicBlock(training_lib.Model):
<ide> # inside a model created using functional API.
<ide>
<ide> def __init__(self):
<del> super(BasicBlock, self).__init__()
<add> super().__init__()
<ide> self.conv1 = layers.Conv2D(8, 3)
<ide>
<ide> def call(self, x):
<ide> def test_subclass_model_without_build_method(self):
<ide> class SubclassModel(models.Model):
<ide>
<ide> def __init__(self):
<del> super(SubclassModel, self).__init__()
<add> super().__init__()
<ide> self.w = self.add_weight(shape=(), initializer='ones')
<ide>
<ide> def call(self, inputs):
<ide> class AttrTrackingLayer(base_layer.Layer):
<ide> def __init__(self, *args, **kwargs):
<ide> self.stateful_count = 0
<ide> self.dynamic_count = 0
<del> super(AttrTrackingLayer, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide> @base_layer.Layer.stateful.getter
<ide> def stateful(self):
<ide> self.stateful_count += 1
<del> return super(AttrTrackingLayer, self).stateful
<add> return super().stateful
<ide>
<ide> @property
<ide> def dynamic(self):
<ide> self.dynamic_count += 1
<del> return super(AttrTrackingLayer, self).dynamic
<add> return super().dynamic
<ide>
<ide>
<ide> @test_combinations.generate(test_combinations.combine(mode=['graph', 'eager']))
<ide><path>keras/engine/input_layer.py
<ide> def __init__(self,
<ide> '`input_tensor.dtype` differs from `dtype`. Received: '
<ide> f'input_tensor.dtype={input_tensor.dtype} '
<ide> f'but expected dtype={dtype}')
<del> super(InputLayer, self).__init__(dtype=dtype, name=name)
<add> super().__init__(dtype=dtype, name=name)
<ide> self.built = True
<ide> self.sparse = True if sparse else False
<ide> self.ragged = True if ragged else False
<ide><path>keras/engine/keras_tensor.py
<ide> class RaggedKerasTensor(KerasTensor):
<ide> def _to_placeholder(self):
<ide> ragged_spec = self.type_spec
<ide> if ragged_spec.ragged_rank == 0 or ragged_spec.shape.rank is None:
<del> return super(RaggedKerasTensor, self)._to_placeholder()
<add> return super()._to_placeholder()
<ide>
<ide> flat_shape = ragged_spec.shape[ragged_spec.ragged_rank:]
<ide> result = tf.compat.v1.placeholder(ragged_spec.dtype, flat_shape)
<ide> def __init__(self, user_registered_symbolic_object):
<ide> type_spec = UserRegisteredSpec(x.shape, x.dtype)
<ide> name = getattr(x, 'name', None)
<ide>
<del> super(UserRegisteredTypeKerasTensor, self).__init__(type_spec, name)
<add> super().__init__(type_spec, name)
<ide>
<ide> @classmethod
<ide> def from_tensor(cls, tensor):
<ide><path>keras/engine/sequential.py
<ide> def layers(self):
<ide> # bottom of the stack.
<ide> # `Trackable` manages the `_layers` attributes and does filtering
<ide> # over it.
<del> layers = super(Sequential, self).layers
<add> layers = super().layers
<ide> if layers and isinstance(layers[0], input_layer.InputLayer):
<ide> return layers[1:]
<ide> return layers[:]
<ide> def build(self, input_shape=None):
<ide> if not self.built:
<ide> input_shape = tuple(input_shape)
<ide> self._build_input_shape = input_shape
<del> super(Sequential, self).build(input_shape)
<add> super().build(input_shape)
<ide> self.built = True
<ide>
<ide> def call(self, inputs, training=None, mask=None): # pylint: disable=redefined-outer-name
<ide> def call(self, inputs, training=None, mask=None): # pylint: disable=redefined-o
<ide> if self._graph_initialized:
<ide> if not self.built:
<ide> self._init_graph_network(self.inputs, self.outputs)
<del> return super(Sequential, self).call(inputs, training=training, mask=mask)
<add> return super().call(inputs, training=training, mask=mask)
<ide>
<ide> outputs = inputs # handle the corner case where self.layers is empty
<ide> for layer in self.layers:
<ide> def compute_mask(self, inputs, mask):
<ide>
<ide> def get_config(self):
<ide> layer_configs = []
<del> for layer in super(Sequential, self).layers:
<add> for layer in super().layers:
<ide> # `super().layers` include the InputLayer if available (it is filtered out
<ide> # of `self.layers`). Note that `self._self_tracked_trackables` is managed
<ide> # by the tracking infrastructure and should not be used.
<ide><path>keras/engine/sequential_test.py
<ide> def test_defun_on_call(self):
<ide> class MySequential(keras.Sequential):
<ide>
<ide> def __init__(self, name=None):
<del> super(MySequential, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self.call = tf.function(self.call)
<ide>
<ide> model = MySequential()
<ide><path>keras/engine/training.py
<ide> def __init__(self, *args, **kwargs):
<ide> generic_utils.validate_kwargs(kwargs, {
<ide> 'trainable', 'dtype', 'dynamic', 'name', 'autocast', 'inputs', 'outputs'
<ide> })
<del> super(Model, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> # By default, Model is a subclass model, which is not in graph network.
<ide> self._is_graph_network = False
<ide>
<ide> def _init_batch_counters(self):
<ide>
<ide> def __setattr__(self, name, value):
<ide> if not getattr(self, '_self_setattr_tracking', True):
<del> super(Model, self).__setattr__(name, value)
<add> super().__setattr__(name, value)
<ide> return
<ide>
<ide> if all(
<ide> def __setattr__(self, name, value):
<ide> 'forgot to call `super().__init__()`.'
<ide> ' Always start with this line.')
<ide>
<del> super(Model, self).__setattr__(name, value)
<add> super().__setattr__(name, value)
<ide>
<ide> def __reduce__(self):
<ide> if self.built:
<ide> def __reduce__(self):
<ide> # can be serialized as plain Python objects.
<ide> # Thus we call up the superclass hierarchy to get an implementation of
<ide> # __reduce__ that can pickle this Model as a plain Python object.
<del> return super(Model, self).__reduce__()
<add> return super().__reduce__()
<ide>
<ide> def __deepcopy__(self, memo):
<ide> if self.built:
<ide> def __deepcopy__(self, memo):
<ide> memo[id(self)] = new
<ide> else:
<ide> # See comment in __reduce__ for explanation
<del> deserializer, serialized, *rest = super(Model, self).__reduce__()
<add> deserializer, serialized, *rest = super().__reduce__()
<ide> new = deserializer(*serialized)
<ide> memo[id(self)] = new
<ide> if rest:
<ide> def build(self, input_shape):
<ide> on real tensor data.
<ide> """
<ide> if self._is_graph_network:
<del> super(Model, self).build(input_shape)
<add> super().build(input_shape)
<ide> return
<ide>
<ide> if input_shape is None:
<ide> def build(self, input_shape):
<ide> 'model, call your model on real tensor data (of '
<ide> 'the correct dtype).\n\nThe actual error from '
<ide> f'`call` is: {e}.')
<del> super(Model, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> @traceback_utils.filter_traceback
<ide> def __call__(self, *args, **kwargs):
<ide> def get_weights(self):
<ide> A flat list of Numpy arrays.
<ide> """
<ide> with self.distribute_strategy.scope():
<del> return super(Model, self).get_weights()
<add> return super().get_weights()
<ide>
<ide> @traceback_utils.filter_traceback
<ide> def save(self,
<ide> def _set_save_spec(self, inputs, args=None, kwargs=None):
<ide> inputs_spec.append(
<ide> tf_utils.get_tensor_spec(tensor, dynamic_batch=False, name=name))
<ide> inputs_spec = tf.nest.pack_sequence_as(inputs, inputs_spec)
<del> super(Model, self)._set_save_spec(inputs_spec, args, kwargs)
<add> super()._set_save_spec(inputs_spec, args, kwargs)
<ide>
<ide> # Store the input shapes
<ide> if (self.__class__.__name__ == 'Sequential' and
<ide> def _trackable_children(self, save_type='checkpoint', **kwargs):
<ide> self.predict_function = None
<ide> self.train_tf_function = None
<ide>
<del> children = super(Model, self)._trackable_children(save_type, **kwargs)
<add> children = super()._trackable_children(save_type, **kwargs)
<ide>
<ide> if save_type == 'savedmodel':
<ide> self.train_function = train_function
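
The `Model.__setattr__` hunks above show another spot where the zero-argument form matters: attribute hooks must route writes through the parent implementation or they recurse forever. A minimal sketch of the same pattern outside Keras (hypothetical class):

```python
class Tracked:
    """Record every attribute assignment while still performing it."""

    def __init__(self):
        # Bypass our own __setattr__ so the bookkeeping dict exists
        # before the hook below starts consulting it.
        super().__setattr__("_tracked", {})

    def __setattr__(self, name, value):
        self._tracked[name] = value       # record the assignment
        super().__setattr__(name, value)  # then perform it normally

obj = Tracked()
obj.weight = 3.0
assert obj._tracked == {"weight": 3.0}
```
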
<ide><path>keras/engine/training_arrays_test.py
<ide> def test_dict_float64_input(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__(self)
<add> super().__init__(self)
<ide> self.dense1 = keras.layers.Dense(10, activation="relu")
<ide> self.dense2 = keras.layers.Dense(10, activation="relu")
<ide> self.concat = keras.layers.Concatenate()
<ide> def test_dict_validation_input(self):
<ide> class my_model(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(my_model, self).__init__(self)
<add> super().__init__(self)
<ide> self.hidden_layer_0 = keras.layers.Dense(100, activation="relu")
<ide> self.hidden_layer_1 = keras.layers.Dense(100, activation="relu")
<ide> self.concat = keras.layers.Concatenate()
<ide><path>keras/engine/training_eager_test.py
<ide> def test_dynamic_model_has_trainable_weights(self):
<ide> class DynamicModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(DynamicModel, self).__init__(dynamic=True)
<add> super().__init__(dynamic=True)
<ide> self.dense = keras.layers.Dense(
<ide> 1, kernel_initializer='zeros', bias_initializer='ones')
<ide>
<ide><path>keras/engine/training_test.py
<ide> def call(self, inputs, training=None):
<ide> class ReturnTraining(layers_module.Layer):
<ide>
<ide> def __init__(self, input_shape=None, **kwargs):
<del> super(ReturnTraining, self).__init__(input_shape=input_shape, **kwargs)
<add> super().__init__(input_shape=input_shape, **kwargs)
<ide> self._nested_layer = None
<ide>
<ide> def build(self, input_shape):
<ide> class XYSequence(data_utils.Sequence):
<ide>
<ide> def __init__(self, use_namedtuple):
<ide> self._use_namedtuple = use_namedtuple
<del> super(XYSequence, self).__init__()
<add> super().__init__()
<ide>
<ide> def __getitem__(self, idx):
<ide> x, y = np.ones((4, 1)), np.ones((4, 1))
<ide> class XSequence(data_utils.Sequence):
<ide>
<ide> def __init__(self, use_namedtuple):
<ide> self._use_namedtuple = use_namedtuple
<del> super(XSequence, self).__init__()
<add> super().__init__()
<ide>
<ide> def __getitem__(self, idx):
<ide> x = np.ones((4, 1))
<ide> def __init__(self, dense_to_track):
<ide> # doubling the learning rate if weights are not deduped.
<ide> self._kernel = dense_to_track.kernel
<ide> self._bias = dense_to_track.bias
<del> super(WatchingLayer, self).__init__()
<add> super().__init__()
<ide>
<ide> inp = layers_module.Input(shape=(1,))
<ide> dense_layer = layers_module.Dense(1)
<ide> class AddWeightLayer(layers_module.Layer):
<ide> def __init__(self, trainable_var, non_trainable_var):
<ide> self.trainable_var = trainable_var
<ide> self.non_trainable_var = non_trainable_var
<del> super(AddWeightLayer, self).__init__()
<add> super().__init__()
<ide>
<ide> def call(self, inputs):
<ide> return inputs + self.trainable_var
<ide>
<ide> class LayerWithWeightSharedLayers(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithWeightSharedLayers, self).__init__()
<add> super().__init__()
<ide> shared_trainable_var = tf.Variable(1.)
<ide> shared_non_trainable_var = tf.Variable(
<ide> 1., trainable=False)
<ide> def test_logs_passed_to_callbacks(self):
<ide> class TestCallback(Callback):
<ide>
<ide> def __init__(self):
<del> super(TestCallback, self).__init__()
<add> super().__init__()
<ide> self.epoch_end_logs = None
<ide> self.batch_end_logs = None
<ide> self.epoch_end_call_count = 0
<ide> def call(self, inputs, training=None):
<ide> class ModelWithTrainingArg(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(ModelWithTrainingArg, self).__init__()
<add> super().__init__()
<ide> self.l1 = LayerWithTrainingArg()
<ide>
<ide> def call(self, inputs, training=None):
<ide> class _Optimizer(optimizer_v2.gradient_descent.SGD):
<ide>
<ide> def __init__(self):
<ide> self.aggregate_gradients_called = False
<del> super(_Optimizer, self).__init__(name='MyOptimizer')
<add> super().__init__(name='MyOptimizer')
<ide>
<ide> def _aggregate_gradients(self, grads):
<ide> self.aggregate_gradients_called = True
<del> return super(_Optimizer, self)._aggregate_gradients(grads)
<add> return super()._aggregate_gradients(grads)
<ide>
<ide> mock_optimizer = _Optimizer()
<ide>
<ide> class _OptimizerOverrideApplyGradients(_Optimizer):
<ide> _HAS_AGGREGATE_GRAD = False
<ide>
<ide> def apply_gradients(self, grads_and_vars, name=None): # pylint: disable=useless-super-delegation
<del> return super(_OptimizerOverrideApplyGradients,
<del> self).apply_gradients(grads_and_vars, name)
<add> return super().apply_gradients(grads_and_vars, name)
<ide>
<ide> mock_optimizer = _OptimizerOverrideApplyGradients()
<ide> model.compile(mock_optimizer, 'mse',
<ide> def build(self, input_shape):
<ide> # Gradients w.r.t. extra_weights are None
<ide> self.extra_weight_1 = self.add_weight('extra_weight_1', shape=(),
<ide> initializer='ones')
<del> super(DenseWithExtraWeight, self).build(input_shape)
<add> super().build(input_shape)
<ide> self.extra_weight_2 = self.add_weight('extra_weight_2', shape=(),
<ide> initializer='ones')
<ide>
<ide> class MyLayer(layers_module.Layer):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self, name):
<del> super(MyModel, self).__init__(name=name)
<add> super().__init__(name=name)
<ide>
<ide> self.weight = tf.Variable(0, name=name)
<ide>
<ide> def test_trainable_state_setting(self):
<ide> class UpdateLayer(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(UpdateLayer, self).__init__()
<add> super().__init__()
<ide> self.v = tf.Variable(0., trainable=False)
<ide>
<ide> def call(self, x):
<ide> class MyModel(training_module.Model):
<ide> def train_step(self, data):
<ide> # No tuple wrapping for single x input and no targets.
<ide> test_case.assertIsInstance(data, expected_data_type)
<del> return super(MyModel, self).train_step(data)
<add> return super().train_step(data)
<ide>
<ide> def test_step(self, data):
<ide> test_case.assertIsInstance(data, expected_data_type)
<del> return super(MyModel, self).test_step(data)
<add> return super().test_step(data)
<ide>
<ide> def predict_step(self, data):
<ide> test_case.assertIsInstance(data, expected_data_type)
<del> return super(MyModel, self).predict_step(data)
<add> return super().predict_step(data)
<ide>
<ide> inputs = layers_module.Input(shape=(1,), name='my_input')
<ide> outputs = layers_module.Dense(1)(inputs)
<ide> def sq_diff_plus_x(self, x, y_true, y_pred):
<ide>
<ide> def update_state(self, x, y_true, y_pred, sample_weight=None):
<ide> matches = self.sq_diff_plus_x(x, y_true, y_pred)
<del> return super(CustomMetric, self).update_state(matches)
<add> return super().update_state(matches)
<ide>
<ide> class MyModel(sequential.Sequential):
<ide>
<ide> def compute_metrics(self, x, y, y_pred, sample_weight):
<del> metric_results = super(MyModel,
<del> self).compute_metrics(x, y, y_pred,
<add> metric_results = super().compute_metrics(x, y, y_pred,
<ide> sample_weight)
<ide> self.custom_metric.update_state(x, y, y_pred, sample_weight)
<ide> metric_results['custom_metric_name'] = self.custom_metric.result()
<ide> def test_custom_compute_loss(self):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self, *args, **kwargs):
<del> super(MyModel, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> self.loss_metric = metrics_module.Mean(name='loss')
<ide>
<ide> def compute_loss(self, x, y, y_pred, sample_weight):
<ide> def test_mask_argument_in_layer(self):
<ide> class CustomMaskedLayer(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomMaskedLayer, self).__init__()
<add> super().__init__()
<ide> self.supports_masking = True
<ide>
<ide> def call(self, inputs, mask=None):
<ide> def test_add_metric_in_model_call(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide> self.mean = metrics_module.Mean(name='metric_1')
<ide>
<ide> def test_model_metrics_list(self):
<ide> class LayerWithAddMetric(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithAddMetric, self).__init__()
<add> super().__init__()
<ide> self.dense = layers_module.Dense(1, kernel_initializer='ones')
<ide>
<ide> def __call__(self, inputs):
<ide> def __call__(self, inputs):
<ide> class LayerWithNestedAddMetricLayer(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithNestedAddMetricLayer, self).__init__()
<add> super().__init__()
<ide> self.layer = LayerWithAddMetric()
<ide>
<ide> def call(self, inputs):
<ide> def test_model_metrics_list_in_call(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide>
<ide> def call(self, x):
<ide> def test_multiple_add_metric_calls(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide> self.mean1 = metrics_module.Mean(name='metric_1')
<ide> self.mean2 = metrics_module.Mean(name='metric_2')
<ide> def test_multiple_add_metric_calls_layer(self):
<ide> class TestLayer(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(TestLayer, self).__init__(name='test_layer')
<add> super().__init__(name='test_layer')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide> self.m1 = metrics_module.Mean(name='m_1')
<ide> self.m2 = [
<ide> def test_duplicate_metric_name_in_add_metric(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide> self.mean = metrics_module.Mean(name='metric_1')
<ide> self.mean2 = metrics_module.Mean(name='metric_1')
<ide> def test_add_metric_without_name(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide>
<ide> def call(self, x):
<ide> def call(self, inputs, training=None, mask=None):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self, **kwargs):
<del> super(MyModel, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self._sampler = MyLayer(name='sampler')
<ide>
<ide> def call(self, inputs, training=None, mask=None):
<ide> def test_add_metric_aggregation_mean(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide>
<ide> def call(self, x):
<ide> def test_add_metric_aggregation_none(self):
<ide> class TestModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.dense1 = layers_module.Dense(2, kernel_initializer='ones')
<ide> self.mean = metrics_module.Mean(name='metric_1')
<ide>
<ide> def test_model_with_nested_compiled_model(self):
<ide> class LayerWithAddMetric(layers_module.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithAddMetric, self).__init__()
<add> super().__init__()
<ide> self.dense = layers_module.Dense(1, kernel_initializer='ones')
<ide>
<ide> def call(self, inputs):
<ide> def test_model_with_metric_class_that_returns_dict(self):
<ide> class DictMetric(metrics_module.Metric):
<ide>
<ide> def __init__(self):
<del> super(DictMetric, self).__init__()
<add> super().__init__()
<ide> self.sample_count = tf.Variable(0)
<ide> self.l2_sum = tf.Variable(0.)
<ide>
<ide> def test_build_list_of_inputs(self):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.l1 = layers_module.Dense(1)
<ide> self.l2 = layers_module.Dense(2)
<ide>
<ide> def test_build_single_inputs(self):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.l1 = layers_module.Dense(1)
<ide>
<ide> def call(self, x):
<ide> def test_build_dict_inputs(self):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.l1 = layers_module.Dense(1)
<ide>
<ide> def call(self, inputs):
<ide> def test_save_top_level_model_weights_h5(self):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.class_token = self.add_weight(shape=(1,), name='class_token')
<ide> self.inner_layer = layers_module.Dense(1)
<ide>
<ide><path>keras/engine/training_utils_v1.py
<ide> class MetricsAggregator(Aggregator):
<ide> """
<ide>
<ide> def __init__(self, use_steps, num_samples=None, steps=None):
<del> super(MetricsAggregator, self).__init__(
<add> super().__init__(
<ide> use_steps=use_steps,
<ide> num_samples=num_samples,
<ide> steps=steps,
<ide> class ConcatAggregator(Aggregator):
<ide>
<ide> def __init__(self, batch_size):
<ide> self.composite = None
<del> super(ConcatAggregator, self).__init__(
<add> super().__init__(
<ide> use_steps=True, num_samples=None, steps=None, batch_size=batch_size)
<ide>
<ide> def create(self, batch_element):
<ide> def __init__(self, num_samples, batch_size):
<ide> self._async_copies = []
<ide> self._pool = get_copy_pool()
<ide> self._errors = []
<del> super(SliceAggregator, self).__init__(
<add> super().__init__(
<ide> use_steps=False,
<ide> num_samples=num_samples,
<ide> steps=None,
<ide><path>keras/engine/training_utils_v1_test.py
<ide> class MonitoredPool(multiprocessing.pool.ThreadPool):
<ide> def __init__(self, *args, **kwargs):
<ide> self._apply_counter = 0
<ide> self._func_wrapper = None
<del> super(MonitoredPool, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide> def apply_async(self, func, *args, **kwargs):
<ide> self._apply_counter += 1
<ide> if self._func_wrapper:
<ide> func = self._func_wrapper(func) # pylint: disable=not-callable
<del> return super(MonitoredPool, self).apply_async(func, *args, **kwargs)
<add> return super().apply_async(func, *args, **kwargs)
<ide>
<ide>
<ide> def add_sleep(f):
<ide> def wrapped(batch_element, batch_start, batch_end, is_finished): # pylint: disa
<ide> class AggregationTest(test_combinations.TestCase):
<ide>
<ide> def setUp(self):
<del> super(AggregationTest, self).setUp()
<add> super().setUp()
<ide> self._old_pool = training_utils_v1._COPY_POOL
<ide> self._old_threshold = (
<ide> training_utils_v1.SliceAggregator._BINARY_SIZE_THRESHOLD)
<ide> def setUp(self):
<ide> training_utils_v1._COPY_THREADS)
<ide>
<ide> def tearDown(self):
<del> super(AggregationTest, self).tearDown()
<add> super().tearDown()
<ide> training_utils_v1._COPY_POOL = self._old_pool
<ide> training_utils_v1.SliceAggregator._BINARY_SIZE_THRESHOLD = (
<ide> self._old_threshold)
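The `setUp`/`tearDown` hunks above follow the standard unittest chaining pattern, which the zero-argument form keeps intact: the base-class hook must still run, first in `setUp` and last in `tearDown`. A minimal sketch (hypothetical test case, assuming only the public `tf.test.TestCase` API):

import tensorflow as tf

class ExampleTest(tf.test.TestCase):

  def setUp(self):
    super().setUp()  # base setup runs first
    self.tmp_dir = self.get_temp_dir()

  def tearDown(self):
    self.tmp_dir = None  # release per-test state before base teardown
    super().tearDown()  # base teardown runs last

  def test_temp_dir_is_string(self):
    self.assertIsInstance(self.get_temp_dir(), str)

if __name__ == '__main__':
  tf.test.main()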
<ide><path>keras/engine/training_v1.py
<ide> def call(self, inputs, training=False):
<ide> """
<ide>
<ide> def __init__(self, *args, **kwargs):
<del> super(Model, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> # initializing _distribution_strategy here since it is possible to call
<ide> # predict on a model without compiling it.
<ide> self._distribution_strategy = None
<ide> def load_weights(self, filepath, by_name=False, skip_mismatch=False):
<ide> (not saving_utils.is_hdf5_filepath(filepath))): # pylint: disable=protected-access
<ide> raise ValueError('Load weights is not yet supported with TPUStrategy '
<ide> 'with steps_per_run greater than 1.')
<del> return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
<add> return super().load_weights(filepath, by_name, skip_mismatch)
<ide>
<ide> @tf.__internal__.tracking.no_automatic_dependency_tracking
<ide> def compile(self,
<ide> def metrics(self):
<ide>    # See b/155687393 for more details; the model is created as a v2
<ide>    # instance but converted to v1. Fall back to the base Model to retrieve
<ide>    # the metrics.
<del> return super(Model, self).metrics
<add> return super().metrics
<ide> metrics += self._compile_metric_functions
<ide> metrics.extend(self._metrics)
<ide> metrics.extend(
<ide> def metrics_names(self):
<ide>    # See b/155687393 for more details; the model is created as a v2
<ide>    # instance but converted to v1. Fall back to the base Model to retrieve
<ide>    # the metric names.
<del> return super(Model, self).metrics_names
<add> return super().metrics_names
<ide>
<ide> # Add output loss metric names to the metric names list.
<ide> if len(self._training_endpoints) > 1:
<ide> class DistributedCallbackModel(Model):
<ide> """Model that is used for callbacks with tf.distribute.Strategy."""
<ide>
<ide> def __init__(self, model):
<del> super(DistributedCallbackModel, self).__init__()
<add> super().__init__()
<ide> self.optimizer = model.optimizer
<ide>
<ide> def set_original_model(self, orig_model):
<ide> def __getattr__(self, item):
<ide> logging.warning('You are accessing attribute ' + item + ' of the '
<ide> 'DistributedCallbackModel that may not have been set '
<ide> 'correctly.')
<del> return super(DistributedCallbackModel, self).__getattr__(item)
<add> return super().__getattr__(item)
<ide>
<ide>
<ide> class _TrainingEndpoint:
<ide><path>keras/feature_column/base_feature_layer.py
<ide> def __init__(self,
<ide> name,
<ide> partitioner=None,
<ide> **kwargs):
<del> super(_BaseFeaturesLayer, self).__init__(
<add> super().__init__(
<ide> name=name, trainable=trainable, **kwargs)
<ide> self._feature_columns = _normalize_feature_columns(
<ide> feature_columns)
<ide> def build(self, _):
<ide> with tf.compat.v1.variable_scope(
<ide> _sanitize_column_name_for_variable_scope(column.name)):
<ide> column.create_state(self._state_manager)
<del> super(_BaseFeaturesLayer, self).build(None)
<add> super().build(None)
<ide>
<ide> def _output_shape(self, input_shape, num_elements):
<ide> """Computes expected output shape of the layer or a column's dense tensor.
<ide> def get_config(self):
<ide> config['partitioner'] = generic_utils.serialize_keras_object(
<ide> self._partitioner)
<ide>
<del> base_config = super( # pylint: disable=bad-super-call
<del> _BaseFeaturesLayer, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide><path>keras/feature_column/dense_features.py
<ide> def __init__(self,
<ide> Raises:
<ide> ValueError: if an item in `feature_columns` is not a `DenseColumn`.
<ide> """
<del> super(DenseFeatures, self).__init__(
<add> super().__init__(
<ide> feature_columns=feature_columns,
<ide> trainable=trainable,
<ide> name=name,
<ide> def _tracking_metadata(self):
<ide> Returns:
<ide> A serialized JSON storing information necessary for recreating this layer.
<ide> """
<del> metadata = json.loads(super(DenseFeatures, self)._tracking_metadata)
<add> metadata = json.loads(super()._tracking_metadata)
<ide> metadata['_is_feature_layer'] = True
<ide> return json.dumps(metadata, default=json_utils.get_json_type)
<ide>
<ide><path>keras/feature_column/dense_features_v2.py
<ide> def __init__(self,
<ide> Raises:
<ide> ValueError: if an item in `feature_columns` is not a `DenseColumn`.
<ide> """
<del> super(DenseFeatures, self).__init__(
<add> super().__init__(
<ide> feature_columns=feature_columns,
<ide> trainable=trainable,
<ide> name=name,
<ide><path>keras/feature_column/sequence_feature_column.py
<ide> def __init__(
<ide> ValueError: If any of the `feature_columns` is not a
<ide> `SequenceDenseColumn`.
<ide> """
<del> super(SequenceFeatures, self).__init__(
<add> super().__init__(
<ide> feature_columns=feature_columns,
<ide> trainable=trainable,
<ide> name=name,
<ide><path>keras/initializers/initializers_v1.py
<ide> class RandomNormal(tf.compat.v1.random_normal_initializer):
<ide> """
<ide>
<ide> def __init__(self, mean=0.0, stddev=0.05, seed=None, dtype=tf.float32):
<del> super(RandomNormal, self).__init__(
<add> super().__init__(
<ide> mean=mean, stddev=stddev, seed=seed, dtype=dtype)
<ide>
<ide>
<ide> class RandomUniform(tf.compat.v1.random_uniform_initializer):
<ide>
<ide> def __init__(self, minval=-0.05, maxval=0.05, seed=None,
<ide> dtype=tf.float32):
<del> super(RandomUniform, self).__init__(
<add> super().__init__(
<ide> minval=minval, maxval=maxval, seed=seed, dtype=dtype)
<ide>
<ide>
<ide> def __init__(self, mean=0.0, stddev=0.05, seed=None, dtype=tf.float32):
<ide> dtype: Default data type, used if no `dtype` argument is provided when
<ide> calling the initializer. Only floating point types are supported.
<ide> """
<del> super(TruncatedNormal, self).__init__(
<add> super().__init__(
<ide> mean=mean, stddev=stddev, seed=seed, dtype=dtype)
<ide>
<ide>
<ide> @keras_export(v1=['keras.initializers.lecun_normal'])
<ide> class LecunNormal(tf.compat.v1.variance_scaling_initializer):
<ide>
<ide> def __init__(self, seed=None):
<del> super(LecunNormal, self).__init__(
<add> super().__init__(
<ide> scale=1., mode='fan_in', distribution='truncated_normal', seed=seed)
<ide>
<ide> def get_config(self):
<ide> def get_config(self):
<ide> class LecunUniform(tf.compat.v1.variance_scaling_initializer):
<ide>
<ide> def __init__(self, seed=None):
<del> super(LecunUniform, self).__init__(
<add> super().__init__(
<ide> scale=1., mode='fan_in', distribution='uniform', seed=seed)
<ide>
<ide> def get_config(self):
<ide> def get_config(self):
<ide> class HeNormal(tf.compat.v1.variance_scaling_initializer):
<ide>
<ide> def __init__(self, seed=None):
<del> super(HeNormal, self).__init__(
<add> super().__init__(
<ide> scale=2., mode='fan_in', distribution='truncated_normal', seed=seed)
<ide>
<ide> def get_config(self):
<ide> def get_config(self):
<ide> class HeUniform(tf.compat.v1.variance_scaling_initializer):
<ide>
<ide> def __init__(self, seed=None):
<del> super(HeUniform, self).__init__(
<add> super().__init__(
<ide> scale=2., mode='fan_in', distribution='uniform', seed=seed)
<ide>
<ide> def get_config(self):
<ide><path>keras/initializers/initializers_v2.py
<ide> class GlorotUniform(VarianceScaling):
<ide> """
<ide>
<ide> def __init__(self, seed=None):
<del> super(GlorotUniform, self).__init__(
<add> super().__init__(
<ide> scale=1.0,
<ide> mode='fan_avg',
<ide> distribution='uniform',
<ide> class GlorotNormal(VarianceScaling):
<ide> """
<ide>
<ide> def __init__(self, seed=None):
<del> super(GlorotNormal, self).__init__(
<add> super().__init__(
<ide> scale=1.0,
<ide> mode='fan_avg',
<ide> distribution='truncated_normal',
<ide> class LecunNormal(VarianceScaling):
<ide> """
<ide>
<ide> def __init__(self, seed=None):
<del> super(LecunNormal, self).__init__(
<add> super().__init__(
<ide> scale=1., mode='fan_in', distribution='truncated_normal', seed=seed)
<ide>
<ide> def get_config(self):
<ide> class LecunUniform(VarianceScaling):
<ide> """
<ide>
<ide> def __init__(self, seed=None):
<del> super(LecunUniform, self).__init__(
<add> super().__init__(
<ide> scale=1., mode='fan_in', distribution='uniform', seed=seed)
<ide>
<ide> def get_config(self):
<ide> class HeNormal(VarianceScaling):
<ide> """
<ide>
<ide> def __init__(self, seed=None):
<del> super(HeNormal, self).__init__(
<add> super().__init__(
<ide> scale=2., mode='fan_in', distribution='truncated_normal', seed=seed)
<ide>
<ide> def get_config(self):
<ide> class HeUniform(VarianceScaling):
<ide> """
<ide>
<ide> def __init__(self, seed=None):
<del> super(HeUniform, self).__init__(
<add> super().__init__(
<ide> scale=2., mode='fan_in', distribution='uniform', seed=seed)
<ide>
<ide> def get_config(self):
<ide><path>keras/integration_test/forwardprop_test.py
<ide> def testEmbeddingLayerInFunction(self):
<ide> class M(tf.keras.Model):
<ide>
<ide> def __init__(self):
<del> super(M, self).__init__()
<add> super().__init__()
<ide> self.embed = tf.keras.layers.Embedding(5, 1)
<ide> self.proj = tf.keras.layers.Dense(1)
<ide>
<ide><path>keras/integration_test/function_test.py
<ide> class MiniModel(tf.keras.Model):
<ide> """
<ide>
<ide> def __init__(self):
<del> super(MiniModel, self).__init__(name='')
<add> super().__init__(name='')
<ide> self.fc = tf.keras.layers.Dense(1, name='fc', kernel_initializer='ones',
<ide> bias_initializer='ones')
<ide>
<ide> def call(self, inputs, training=True):
<ide> class ModelWithOptimizer(tf.keras.Model):
<ide>
<ide> def __init__(self):
<del> super(ModelWithOptimizer, self).__init__()
<add> super().__init__()
<ide> self.dense = tf.keras.layers.Dense(1)
<ide> self.optimizer = tf.keras.optimizers.Adam(0.01)
<ide>
<ide><path>keras/integration_test/gradient_checkpoint_test.py
<ide> def test_does_not_raise_oom_exception(self):
<ide> self.assertLen(losses, n_step)
<ide>
<ide> def tearDown(self):
<del> super(GradientCheckpointTest, self).tearDown()
<add> super().tearDown()
<ide>    # Make sure all the models created in keras have been deleted and cleared
<ide>    # from the global keras graph, and force a GC to recycle the GPU memory.
<ide> tf.keras.backend.clear_session()
<ide><path>keras/integration_test/gradients_test.py
<ide> class TestKerasModelClass(tf.keras.Model):
<ide> """A simple tensorflow keras Model class definition."""
<ide>
<ide> def __init__(self, width):
<del> super(TestKerasModelClass, self).__init__()
<add> super().__init__()
<ide> self.width = width
<ide>
<ide> def build(self, input_shape):
<ide> def testLSTMBatchJacobian(self):
<ide> class HasLSTM(tf.keras.Model):
<ide>
<ide> def __init__(self):
<del> super(HasLSTM, self).__init__()
<add> super().__init__()
<ide> self.lstm = tf.keras.layers.LSTM(units=5)
<ide> self.dense = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)
<ide>
<ide><path>keras/integration_test/legacy_rnn_test.py
<ide> class KerasNetworkTFRNNs(tf.keras.Model):
<ide>
<ide> def __init__(self, name=None):
<del> super(KerasNetworkTFRNNs, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self._cell = tf.nn.rnn_cell.MultiRNNCell(
<ide> [tf.nn.rnn_cell.LSTMCell(1) for _ in range(2)])
<ide>
<ide> def call(self, inputs):
<ide> class KerasNetworkKerasRNNs(tf.keras.Model):
<ide>
<ide> def __init__(self, name=None):
<del> super(KerasNetworkKerasRNNs, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self._cell = tf.keras.layers.StackedRNNCells(
<ide> [tf.keras.layers.LSTMCell(1) for _ in range(2)])
<ide>
<ide> def call(self, inputs):
<ide> class LegacyRNNTest(tf.test.TestCase):
<ide>
<ide> def setUp(self):
<del> super(LegacyRNNTest, self).setUp()
<add> super().setUp()
<ide> self._seed = 23489
<ide> np.random.seed(self._seed)
<ide>
<ide><path>keras/integration_test/parameter_server_custom_training_loop_test.py
<ide> def create_in_process_cluster(self, num_workers, num_ps):
<ide> return cluster_spec
<ide>
<ide> def setUp(self):
<del> super(ParameterServerCustomTrainingLoopTest, self).setUp()
<add> super().setUp()
<ide>
<ide> cluster_spec = self.create_in_process_cluster(num_workers=3, num_ps=2)
<ide> cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
<ide><path>keras/integration_test/parameter_server_keras_preprocessing_test.py
<ide> def create_in_process_cluster(num_workers, num_ps):
<ide> class KPLTest(tf.test.TestCase, parameterized.TestCase):
<ide>
<ide> def setUp(self):
<del> super(KPLTest, self).setUp()
<add> super().setUp()
<ide>
<ide> cluster_spec = create_in_process_cluster(num_workers=3, num_ps=2)
<ide> cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
<ide> class KPLCreatedInDatasetsFromFunctionTest(tf.test.TestCase,
<ide> parameterized.TestCase):
<ide>
<ide> def setUp(self):
<del> super(KPLCreatedInDatasetsFromFunctionTest, self).setUp()
<add> super().setUp()
<ide>
<ide> cluster_spec = create_in_process_cluster(num_workers=3, num_ps=2)
<ide> cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
<ide><path>keras/integration_test/saved_model_test.py
<ide> def test_optimizer(self, cycles):
<ide> class _HasOptimizer(tf.Module):
<ide>
<ide> def __init__(self):
<del> super(_HasOptimizer, self).__init__()
<add> super().__init__()
<ide> self.layer = tf.keras.layers.Dense(1)
<ide> self.optimizer = tf.keras.optimizers.Adam(0.01)
<ide>
<ide><path>keras/layers/__init__.py
<ide> def __getattr__(self, name):
<ide> serialization.populate_deserializable_objects()
<ide> if name in serialization.LOCAL.ALL_OBJECTS:
<ide> return serialization.LOCAL.ALL_OBJECTS[name]
<del> return super(VersionAwareLayers, self).__getattr__(name)
<add> return super().__getattr__(name)
<ide><path>keras/layers/activation/elu.py
<ide> class ELU(Layer):
<ide> """
<ide>
<ide> def __init__(self, alpha=1.0, **kwargs):
<del> super(ELU, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if alpha is None:
<ide> raise ValueError(
<ide> 'Alpha of an ELU layer cannot be None, expecting a float. '
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'alpha': float(self.alpha)}
<del> base_config = super(ELU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/activation/leaky_relu.py
<ide> class LeakyReLU(Layer):
<ide> """
<ide>
<ide> def __init__(self, alpha=0.3, **kwargs):
<del> super(LeakyReLU, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if alpha is None:
<ide> raise ValueError(
<ide> 'The alpha value of a Leaky ReLU layer cannot be None, '
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'alpha': float(self.alpha)}
<del> base_config = super(LeakyReLU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/activation/prelu.py
<ide> def __init__(self,
<ide> alpha_constraint=None,
<ide> shared_axes=None,
<ide> **kwargs):
<del> super(PReLU, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.supports_masking = True
<ide> self.alpha_initializer = initializers.get(alpha_initializer)
<ide> self.alpha_regularizer = regularizers.get(alpha_regularizer)
<ide> def get_config(self):
<ide> 'alpha_constraint': constraints.serialize(self.alpha_constraint),
<ide> 'shared_axes': self.shared_axes
<ide> }
<del> base_config = super(PReLU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/activation/relu.py
<ide> class ReLU(Layer):
<ide> """
<ide>
<ide> def __init__(self, max_value=None, negative_slope=0., threshold=0., **kwargs):
<del> super(ReLU, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if max_value is not None and max_value < 0.:
<ide> raise ValueError('max_value of a ReLU layer cannot be a negative '
<ide> f'value. Received: {max_value}')
<ide> def get_config(self):
<ide> 'negative_slope': self.negative_slope,
<ide> 'threshold': self.threshold
<ide> }
<del> base_config = super(ReLU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/activation/softmax.py
<ide> class Softmax(Layer):
<ide> """
<ide>
<ide> def __init__(self, axis=-1, **kwargs):
<del> super(Softmax, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.supports_masking = True
<ide> self.axis = axis
<ide>
<ide> def call(self, inputs, mask=None):
<ide>
<ide> def get_config(self):
<ide> config = {'axis': self.axis}
<del> base_config = super(Softmax, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/activation/thresholded_relu.py
<ide> class ThresholdedReLU(Layer):
<ide> """
<ide>
<ide> def __init__(self, theta=1.0, **kwargs):
<del> super(ThresholdedReLU, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if theta is None:
<ide> raise ValueError(
<ide> 'Theta of a Thresholded ReLU layer cannot be None, expecting a float.'
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'theta': float(self.theta)}
<del> base_config = super(ThresholdedReLU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/attention/additive_attention.py
<ide> class AdditiveAttention(BaseDenseAttention):
<ide> """
<ide>
<ide> def __init__(self, use_scale=True, **kwargs):
<del> super(AdditiveAttention, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.use_scale = use_scale
<ide>
<ide> def build(self, input_shape):
<ide> def build(self, input_shape):
<ide> trainable=True)
<ide> else:
<ide> self.scale = None
<del> super(AdditiveAttention, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> def _calculate_scores(self, query, key):
<ide> """Calculates attention scores as a nonlinear sum of query and key.
<ide> def _calculate_scores(self, query, key):
<ide>
<ide> def get_config(self):
<ide> config = {'use_scale': self.use_scale}
<del> base_config = super(AdditiveAttention, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/attention/attention.py
<ide> class Attention(BaseDenseAttention):
<ide> """
<ide>
<ide> def __init__(self, use_scale=False, score_mode='dot', **kwargs):
<del> super(Attention, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.use_scale = use_scale
<ide> self.score_mode = score_mode
<ide> if self.score_mode not in ['dot', 'concat']:
<ide> def build(self, input_shape):
<ide> trainable=True)
<ide> else:
<ide> self.concat_score_weight = None
<del> super(Attention, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> def _calculate_scores(self, query, key):
<ide> """Calculates attention scores as a query-key dot product.
<ide> def _calculate_scores(self, query, key):
<ide>
<ide> def get_config(self):
<ide> config = {'use_scale': self.use_scale, 'score_mode': self.score_mode}
<del> base_config = super(Attention, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/attention/base_dense_attention.py
<ide> class BaseDenseAttention(base_layer.BaseRandomLayer):
<ide> """
<ide>
<ide> def __init__(self, causal=False, dropout=0.0, **kwargs):
<del> super(BaseDenseAttention, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.causal = causal
<ide> self.dropout = dropout
<ide> self.supports_masking = True
<ide> def get_config(self):
<ide> 'causal': self.causal,
<ide> 'dropout': self.dropout,
<ide> }
<del> base_config = super(BaseDenseAttention, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/layers/attention/multi_head_attention.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(MultiHeadAttention, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self._num_heads = num_heads
<ide> self._key_dim = key_dim
<ide> self._value_dim = value_dim if value_dim else key_dim
<ide> def get_config(self):
<ide> "key_shape": self._key_shape,
<ide> "value_shape": self._value_shape,
<ide> }
<del> base_config = super(MultiHeadAttention, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide><path>keras/layers/attention/multi_head_attention_test.py
<ide> def test_initializer(self):
<ide> class TestModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel, self).__init__()
<add> super().__init__()
<ide> self.attention = keras.layers.MultiHeadAttention(
<ide> num_heads=3,
<ide> key_dim=4,
<ide><path>keras/layers/convolutional/base_conv.py
<ide> def __init__(self,
<ide> name=None,
<ide> conv_op=None,
<ide> **kwargs):
<del> super(Conv, self).__init__(
<add> super().__init__(
<ide> trainable=trainable,
<ide> name=name,
<ide> activity_regularizer=regularizers.get(activity_regularizer),
<ide> def get_config(self):
<ide> 'bias_constraint':
<ide> constraints.serialize(self.bias_constraint)
<ide> }
<del> base_config = super(Conv, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def _compute_causal_padding(self, inputs):
<ide><path>keras/layers/convolutional/base_depthwise_conv.py
<ide> def __init__(self,
<ide> depthwise_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(DepthwiseConv, self).__init__(
<add> super().__init__(
<ide> rank,
<ide> filters=None,
<ide> kernel_size=kernel_size,
<ide> def call(self, inputs):
<ide> raise NotImplementedError
<ide>
<ide> def get_config(self):
<del> config = super(DepthwiseConv, self).get_config()
<add> config = super().get_config()
<ide> config.pop('filters')
<ide> config.pop('kernel_initializer')
<ide> config.pop('kernel_regularizer')
<ide><path>keras/layers/convolutional/base_separable_conv.py
<ide> def __init__(self,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(SeparableConv, self).__init__(
<add> super().__init__(
<ide> rank=rank,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> def get_config(self):
<ide> 'bias_constraint':
<ide> constraints.serialize(self.bias_constraint)
<ide> }
<del> base_config = super(SeparableConv, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/convolutional/conv1d.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Conv1D, self).__init__(
<add> super().__init__(
<ide> rank=1,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/convolutional/conv1d_transpose.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Conv1DTranspose, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def compute_output_shape(self, input_shape):
<ide> return tf.TensorShape(output_shape)
<ide>
<ide> def get_config(self):
<del> config = super(Conv1DTranspose, self).get_config()
<add> config = super().get_config()
<ide> config['output_padding'] = self.output_padding
<ide> return config
<ide>
<ide><path>keras/layers/convolutional/conv2d.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Conv2D, self).__init__(
<add> super().__init__(
<ide> rank=2,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/convolutional/conv2d_transpose.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Conv2DTranspose, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def compute_output_shape(self, input_shape):
<ide> return tf.TensorShape(output_shape)
<ide>
<ide> def get_config(self):
<del> config = super(Conv2DTranspose, self).get_config()
<add> config = super().get_config()
<ide> config['output_padding'] = self.output_padding
<ide> return config
<ide>
<ide><path>keras/layers/convolutional/conv3d.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Conv3D, self).__init__(
<add> super().__init__(
<ide> rank=3,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/convolutional/conv3d_transpose.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Conv3DTranspose, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def compute_output_shape(self, input_shape):
<ide> return tf.TensorShape(output_shape)
<ide>
<ide> def get_config(self):
<del> config = super(Conv3DTranspose, self).get_config()
<add> config = super().get_config()
<ide> config.pop('dilation_rate')
<ide> config['output_padding'] = self.output_padding
<ide> return config
<ide><path>keras/layers/convolutional/depthwise_conv1d.py
<ide> def __init__(self,
<ide> depthwise_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(DepthwiseConv1D, self).__init__(
<add> super().__init__(
<ide> 1,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide><path>keras/layers/convolutional/depthwise_conv2d.py
<ide> def __init__(self,
<ide> depthwise_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(DepthwiseConv2D, self).__init__(
<add> super().__init__(
<ide> 2,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide><path>keras/layers/convolutional/separable_conv1d.py
<ide> def __init__(self,
<ide> pointwise_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(SeparableConv1D, self).__init__(
<add> super().__init__(
<ide> rank=1,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/convolutional/separable_conv2d.py
<ide> def __init__(self,
<ide> pointwise_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(SeparableConv2D, self).__init__(
<add> super().__init__(
<ide> rank=2,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/core/activation.py
<ide> class Activation(Layer):
<ide> """
<ide>
<ide> def __init__(self, activation, **kwargs):
<del> super(Activation, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.supports_masking = True
<ide> self.activation = activations.get(activation)
<ide>
<ide> def compute_output_shape(self, input_shape):
<ide>
<ide> def get_config(self):
<ide> config = {'activation': activations.serialize(self.activation)}
<del> base_config = super(Activation, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/core/dense.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(Dense, self).__init__(
<add> super().__init__(
<ide> activity_regularizer=activity_regularizer, **kwargs)
<ide>
<ide> self.units = int(units) if not isinstance(units, int) else units
<ide> def compute_output_shape(self, input_shape):
<ide> return input_shape[:-1].concatenate(self.units)
<ide>
<ide> def get_config(self):
<del> config = super(Dense, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'units': self.units,
<ide> 'activation': activations.serialize(self.activation),
<ide><path>keras/layers/core/einsum_dense.py
<ide> def __init__(self,
<ide> kernel_constraint=None,
<ide> bias_constraint=None,
<ide> **kwargs):
<del> super(EinsumDense, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.equation = equation
<ide> if isinstance(output_shape, int):
<ide> self.partial_output_shape = [output_shape]
<ide> def build(self, input_shape):
<ide> trainable=True)
<ide> else:
<ide> self.bias = None
<del> super(EinsumDense, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> def compute_output_shape(self, _):
<ide> return tf.TensorShape(self.full_output_shape)
<ide> def get_config(self):
<ide> "kernel_constraint": constraints.serialize(self.kernel_constraint),
<ide> "bias_constraint": constraints.serialize(self.bias_constraint),
<ide> }
<del> base_config = super(EinsumDense, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def call(self, inputs):
<ide><path>keras/layers/core/embedding.py
<ide> def __init__(self,
<ide> # before casting to int32 might cause the int32 values to be different due
<ide> # to a loss of precision.
<ide> kwargs['autocast'] = False
<del> super(Embedding, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> self.input_dim = input_dim
<ide> self.output_dim = output_dim
<ide> def get_config(self):
<ide> 'mask_zero': self.mask_zero,
<ide> 'input_length': self.input_length
<ide> }
<del> base_config = super(Embedding, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/core/lambda_layer.py
<ide> def __init__(self,
<ide> mask=None,
<ide> arguments=None,
<ide> **kwargs):
<del> super(Lambda, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> self.arguments = arguments or {}
<ide> self.function = function
<ide> def compute_output_shape(self, input_shape):
<ide> # `add_loss`.
<ide> with tf.__internal__.eager_context.eager_mode():
<ide> try:
<del> return super(Lambda, self).compute_output_shape(input_shape)
<add> return super().compute_output_shape(input_shape)
<ide> except NotImplementedError:
<ide> raise NotImplementedError(
<ide> 'We could not automatically infer the shape of the Lambda\'s '
<ide> def get_config(self):
<ide> })
<ide> config['arguments'] = self.arguments
<ide>
<del> base_config = super(Lambda, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def _serialize_function_to_config(self, inputs, allow_raw=False):
<ide><path>keras/layers/core/masking.py
<ide> class Masking(Layer):
<ide> """
<ide>
<ide> def __init__(self, mask_value=0., **kwargs):
<del> super(Masking, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.supports_masking = True
<ide> self.mask_value = mask_value
<ide> self._compute_output_and_mask_jointly = True
<ide> def compute_output_shape(self, input_shape):
<ide>
<ide> def get_config(self):
<ide> config = {'mask_value': self.mask_value}
<del> base_config = super(Masking, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/core/tf_op_layer.py
<ide> def __init__(self, cls_ref, method_name, **kwargs):
<ide> # Do not individually trace op layers in the SavedModel.
<ide> self._must_restore_from_config = True
<ide>
<del> super(ClassMethod, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> # Preserve all argument data structures when saving/loading a config
<ide> # (e.g., don't unnest lists that contain one element)
<ide> def get_config(self):
<ide> 'public TensorFlow API symbols can be serialized.')
<ide>
<ide> config = {'cls_symbol': self.cls_symbol, 'method_name': self.method_name}
<del> base_config = super(ClassMethod, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def __init__(self, attr_name, **kwargs):
<ide> # Do not individually trace op layers in the SavedModel.
<ide> self._must_restore_from_config = True
<ide>
<del> super(InstanceProperty, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> # Preserve all argument data structures when saving/loading a config
<ide> # (e.g., don't unnest lists that contain one element)
<ide> def call(self, obj):
<ide>
<ide> def get_config(self):
<ide> config = {'attr_name': self.attr_name}
<del> base_config = super(InstanceProperty, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def _call_wrapper(*args, **kwargs):
<ide> # Do not individually trace op layers in the SavedModel.
<ide> self._must_restore_from_config = True
<ide>
<del> super(TFOpLambda, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> # Preserve all argument data structures when saving/loading a config
<ide> # (e.g., don't unnest lists that contain one element)
<ide> def get_config(self):
<ide> 'public TensorFlow API symbols can be serialized.')
<ide> config = {'function': self.symbol}
<ide>
<del> base_config = super(TFOpLambda, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> class SlicingOpLambda(TFOpLambda):
<ide>
<ide> @tf.__internal__.tracking.no_automatic_dependency_tracking
<ide> def __init__(self, function, **kwargs):
<del> super(SlicingOpLambda, self).__init__(function, **kwargs)
<add> super().__init__(function, **kwargs)
<ide>
<ide> original_call = self.call
<ide>
<ide><path>keras/layers/kernelized.py
<ide> def __init__(self,
<ide> if scale is not None and scale <= 0.0:
<ide> raise ValueError('When provided, `scale` should be a positive float. '
<ide> f'Received: {scale}')
<del> super(RandomFourierFeatures, self).__init__(
<add> super().__init__(
<ide> trainable=trainable, name=name, **kwargs)
<ide> self.output_dim = output_dim
<ide> self.kernel_initializer = kernel_initializer
<ide> def build(self, input_shape):
<ide> initializer=tf.compat.v1.constant_initializer(self.scale),
<ide> trainable=True,
<ide> constraint='NonNeg')
<del> super(RandomFourierFeatures, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> def call(self, inputs):
<ide> inputs = tf.convert_to_tensor(inputs, dtype=self.dtype)
<ide> def get_config(self):
<ide> 'kernel_initializer': kernel_initializer,
<ide> 'scale': self.scale,
<ide> }
<del> base_config = super(RandomFourierFeatures, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/layers/locally_connected/locally_connected1d.py
<ide> def __init__(self,
<ide> bias_constraint=None,
<ide> implementation=1,
<ide> **kwargs):
<del> super(LocallyConnected1D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.filters = filters
<ide> self.kernel_size = conv_utils.normalize_tuple(kernel_size, 1, 'kernel_size')
<ide> self.strides = conv_utils.normalize_tuple(
<ide> def get_config(self):
<ide> 'implementation':
<ide> self.implementation
<ide> }
<del> base_config = super(LocallyConnected1D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/locally_connected/locally_connected2d.py
<ide> def __init__(self,
<ide> bias_constraint=None,
<ide> implementation=1,
<ide> **kwargs):
<del> super(LocallyConnected2D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.filters = filters
<ide> self.kernel_size = conv_utils.normalize_tuple(kernel_size, 2, 'kernel_size')
<ide> self.strides = conv_utils.normalize_tuple(
<ide> def get_config(self):
<ide> 'implementation':
<ide> self.implementation
<ide> }
<del> base_config = super(LocallyConnected2D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/merging/base_merge.py
<ide> def __init__(self, **kwargs):
<ide> Args:
<ide> **kwargs: standard layer keyword arguments.
<ide> """
<del> super(_Merge, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.supports_masking = True
<ide>
<ide> def _merge_function(self, inputs):
<ide><path>keras/layers/merging/concatenate.py
<ide> def __init__(self, axis=-1, **kwargs):
<ide> axis: Axis along which to concatenate.
<ide> **kwargs: standard layer keyword arguments.
<ide> """
<del> super(Concatenate, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.axis = axis
<ide> self.supports_masking = True
<ide> self._reshape_required = False
<ide> def get_config(self):
<ide> config = {
<ide> 'axis': self.axis,
<ide> }
<del> base_config = super(Concatenate, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/layers/merging/dot.py
<ide> def __init__(self, axes, normalize=False, **kwargs):
<ide> is the cosine proximity between the two samples.
<ide> **kwargs: Standard layer keyword arguments.
<ide> """
<del> super(Dot, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if not isinstance(axes, int):
<ide> if not isinstance(axes, (list, tuple)):
<ide> raise TypeError(
<ide> def get_config(self):
<ide> 'axes': self.axes,
<ide> 'normalize': self.normalize,
<ide> }
<del> base_config = super(Dot, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/layers/merging/subtract.py
<ide> class Subtract(_Merge):
<ide>
<ide> @tf_utils.shape_type_conversion
<ide> def build(self, input_shape):
<del> super(Subtract, self).build(input_shape)
<add> super().build(input_shape)
<ide> if len(input_shape) != 2:
<ide> raise ValueError(
<ide> 'A `Subtract` layer should be called on exactly 2 inputs. '
<ide><path>keras/layers/normalization/batch_normalization.py
<ide> def __init__(self,
<ide> adjustment=None,
<ide> name=None,
<ide> **kwargs):
<del> super(BatchNormalizationBase, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> if isinstance(axis, (list, tuple)):
<ide> self.axis = axis[:]
<ide> elif isinstance(axis, int):
<ide> def get_config(self):
<ide> 'layer cannot be serialized and has been omitted from '
<ide> 'the layer config. It will not be included when '
<ide> 're-creating the layer from the saved config.')
<del> base_config = super(BatchNormalizationBase, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> '`fused` argument cannot be True for SyncBatchNormalization.')
<ide>
<ide> # Currently we only support aggregating over the global batch size.
<del> super(SyncBatchNormalization, self).__init__(
<add> super().__init__(
<ide> axis=axis,
<ide> momentum=momentum,
<ide> epsilon=epsilon,
<ide> def __init__(self,
<ide> beta_constraint=None,
<ide> gamma_constraint=None,
<ide> **kwargs):
<del> super(BatchNormalization, self).__init__(
<add> super().__init__(
<ide> axis=axis,
<ide> momentum=momentum,
<ide> epsilon=epsilon,
<ide><path>keras/layers/normalization/batch_normalization_test.py
<ide> def test_eager_batchnorm_in_custom_model_call_with_tf_function(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.bn = keras.layers.BatchNormalization()
<ide>
<ide> @tf.function()
<ide><path>keras/layers/normalization/layer_normalization.py
<ide> def __init__(self,
<ide> beta_constraint=None,
<ide> gamma_constraint=None,
<ide> **kwargs):
<del> super(LayerNormalization, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> if isinstance(axis, (list, tuple)):
<ide> self.axis = list(axis)
<ide> elif isinstance(axis, int):
<ide> def get_config(self):
<ide> 'beta_constraint': constraints.serialize(self.beta_constraint),
<ide> 'gamma_constraint': constraints.serialize(self.gamma_constraint)
<ide> }
<del> base_config = super(LayerNormalization, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/normalization/unit_normalization.py
<ide> def compute_output_shape(self, input_shape):
<ide> return input_shape
<ide>
<ide> def get_config(self):
<del> config = super(UnitNormalization, self).get_config()
<add> config = super().get_config()
<ide> config.update({'axis': self.axis})
<ide> return config
<ide><path>keras/layers/pooling/average_pooling1d.py
<ide> class AveragePooling1D(Pooling1D):
<ide>
<ide> def __init__(self, pool_size=2, strides=None,
<ide> padding='valid', data_format='channels_last', **kwargs):
<del> super(AveragePooling1D, self).__init__(
<add> super().__init__(
<ide> functools.partial(backend.pool2d, pool_mode='avg'),
<ide> pool_size=pool_size,
<ide> strides=strides,
<ide><path>keras/layers/pooling/average_pooling2d.py
<ide> def __init__(self,
<ide> padding='valid',
<ide> data_format=None,
<ide> **kwargs):
<del> super(AveragePooling2D, self).__init__(
<add> super().__init__(
<ide> tf.nn.avg_pool,
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, **kwargs)
<ide><path>keras/layers/pooling/average_pooling3d.py
<ide> def __init__(self,
<ide> padding='valid',
<ide> data_format=None,
<ide> **kwargs):
<del> super(AveragePooling3D, self).__init__(
<add> super().__init__(
<ide> tf.nn.avg_pool3d,
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, **kwargs)
<ide><path>keras/layers/pooling/base_global_pooling1d.py
<ide> class GlobalPooling1D(Layer):
<ide> """Abstract class for different global pooling 1D layers."""
<ide>
<ide> def __init__(self, data_format='channels_last', keepdims=False, **kwargs):
<del> super(GlobalPooling1D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.input_spec = InputSpec(ndim=3)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> self.keepdims = keepdims
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'data_format': self.data_format, 'keepdims': self.keepdims}
<del> base_config = super(GlobalPooling1D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/pooling/base_global_pooling2d.py
<ide> class GlobalPooling2D(Layer):
<ide> """Abstract class for different global pooling 2D layers."""
<ide>
<ide> def __init__(self, data_format=None, keepdims=False, **kwargs):
<del> super(GlobalPooling2D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> self.input_spec = InputSpec(ndim=4)
<ide> self.keepdims = keepdims
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'data_format': self.data_format, 'keepdims': self.keepdims}
<del> base_config = super(GlobalPooling2D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/pooling/base_global_pooling3d.py
<ide> class GlobalPooling3D(Layer):
<ide> """Abstract class for different global pooling 3D layers."""
<ide>
<ide> def __init__(self, data_format=None, keepdims=False, **kwargs):
<del> super(GlobalPooling3D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> self.input_spec = InputSpec(ndim=5)
<ide> self.keepdims = keepdims
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'data_format': self.data_format, 'keepdims': self.keepdims}
<del> base_config = super(GlobalPooling3D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/pooling/base_pooling1d.py
<ide> class Pooling1D(Layer):
<ide> def __init__(self, pool_function, pool_size, strides,
<ide> padding='valid', data_format='channels_last',
<ide> name=None, **kwargs):
<del> super(Pooling1D, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> if data_format is None:
<ide> data_format = backend.image_data_format()
<ide> if strides is None:
<ide> def get_config(self):
<ide> 'padding': self.padding,
<ide> 'data_format': self.data_format,
<ide> }
<del> base_config = super(Pooling1D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/pooling/base_pooling2d.py
<ide> class Pooling2D(Layer):
<ide> def __init__(self, pool_function, pool_size, strides,
<ide> padding='valid', data_format=None,
<ide> name=None, **kwargs):
<del> super(Pooling2D, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> if data_format is None:
<ide> data_format = backend.image_data_format()
<ide> if strides is None:
<ide> def get_config(self):
<ide> 'strides': self.strides,
<ide> 'data_format': self.data_format
<ide> }
<del> base_config = super(Pooling2D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/pooling/base_pooling3d.py
<ide> class Pooling3D(Layer):
<ide> def __init__(self, pool_function, pool_size, strides,
<ide> padding='valid', data_format='channels_last',
<ide> name=None, **kwargs):
<del> super(Pooling3D, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> if data_format is None:
<ide> data_format = backend.image_data_format()
<ide> if strides is None:
<ide> def get_config(self):
<ide> 'strides': self.strides,
<ide> 'data_format': self.data_format
<ide> }
<del> base_config = super(Pooling3D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/pooling/global_average_pooling1d.py
<ide> class GlobalAveragePooling1D(GlobalPooling1D):
<ide> """
<ide>
<ide> def __init__(self, data_format='channels_last', **kwargs):
<del> super(GlobalAveragePooling1D, self).__init__(data_format=data_format,
<del> **kwargs)
<add> super().__init__(data_format=data_format, **kwargs)
<ide> self.supports_masking = True
<ide>
<ide><path>keras/layers/pooling/max_pooling1d.py
<ide> class MaxPooling1D(Pooling1D):
<ide> def __init__(self, pool_size=2, strides=None,
<ide> padding='valid', data_format='channels_last', **kwargs):
<ide>
<del> super(MaxPooling1D, self).__init__(
<add> super().__init__(
<ide> functools.partial(backend.pool2d, pool_mode='max'),
<ide> pool_size=pool_size,
<ide> strides=strides,
<ide><path>keras/layers/pooling/max_pooling2d.py
<ide> def __init__(self,
<ide> padding='valid',
<ide> data_format=None,
<ide> **kwargs):
<del> super(MaxPooling2D, self).__init__(
<add> super().__init__(
<ide> tf.compat.v1.nn.max_pool,
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, **kwargs)
<ide><path>keras/layers/pooling/max_pooling3d.py
<ide> def __init__(self,
<ide> padding='valid',
<ide> data_format=None,
<ide> **kwargs):
<del> super(MaxPooling3D, self).__init__(
<add> super().__init__(
<ide> tf.nn.max_pool3d,
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, **kwargs)
<ide><path>keras/layers/preprocessing/category_encoding.py
<ide> def __init__(self,
<ide> if "dtype" not in kwargs:
<ide> kwargs["dtype"] = backend.floatx()
<ide>
<del> super(CategoryEncoding, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell("CategoryEncoding").set(
<ide> True)
<ide>
<ide> def get_config(self):
<ide> "output_mode": self.output_mode,
<ide> "sparse": self.sparse,
<ide> }
<del> base_config = super(CategoryEncoding, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def call(self, inputs, count_weights=None):
<ide><path>keras/layers/preprocessing/image_preprocessing.py
<ide> def __init__(self,
<ide> self.interpolation = interpolation
<ide> self.crop_to_aspect_ratio = crop_to_aspect_ratio
<ide> self._interpolation_method = image_utils.get_interpolation(interpolation)
<del> super(Resizing, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('Resizing').set(True)
<ide>
<ide> def call(self, inputs):
<ide> def get_config(self):
<ide> 'interpolation': self.interpolation,
<ide> 'crop_to_aspect_ratio': self.crop_to_aspect_ratio,
<ide> }
<del> base_config = super(Resizing, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class CenterCrop(base_layer.Layer):
<ide> def __init__(self, height, width, **kwargs):
<ide> self.height = height
<ide> self.width = width
<del> super(CenterCrop, self).__init__(**kwargs, autocast=False)
<add> super().__init__(**kwargs, autocast=False)
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('CenterCrop').set(True)
<ide>
<ide> def call(self, inputs):
<ide> def get_config(self):
<ide> 'height': self.height,
<ide> 'width': self.width,
<ide> }
<del> base_config = super(CenterCrop, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class RandomCrop(BaseImageAugmentationLayer):
<ide>
<ide> def __init__(self, height, width, seed=None, **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomCrop').set(True)
<del> super(RandomCrop, self).__init__(**kwargs, autocast=False, seed=seed,
<del> force_generator=True)
<add> super().__init__(**kwargs, autocast=False, seed=seed, force_generator=True)
<ide> self.height = height
<ide> self.width = width
<ide> def get_config(self):
<ide> 'width': self.width,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomCrop, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class Rescaling(base_layer.Layer):
<ide> def __init__(self, scale, offset=0., **kwargs):
<ide> self.scale = scale
<ide> self.offset = offset
<del> super(Rescaling, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('Rescaling').set(True)
<ide>
<ide> def call(self, inputs):
<ide> def get_config(self):
<ide> 'scale': self.scale,
<ide> 'offset': self.offset,
<ide> }
<del> base_config = super(Rescaling, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> mode=HORIZONTAL_AND_VERTICAL,
<ide> seed=None,
<ide> **kwargs):
<del> super(RandomFlip, self).__init__(seed=seed, force_generator=True, **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomFlip').set(True)
<ide> self.mode = mode
<ide> if mode == HORIZONTAL:
<ide> def get_config(self):
<ide> config = {
<ide> 'mode': self.mode,
<ide> }
<del> base_config = super(RandomFlip, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomTranslation').set(
<ide> True)
<del> super(RandomTranslation, self).__init__(seed=seed, force_generator=True,
<del> **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> self.height_factor = height_factor
<ide> if isinstance(height_factor, (tuple, list)):
<ide> def get_config(self):
<ide> 'interpolation': self.interpolation,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomTranslation, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomRotation').set(
<ide> True)
<del> super(RandomRotation, self).__init__(seed=seed, force_generator=True,
<del> **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> self.factor = factor
<ide> if isinstance(factor, (tuple, list)):
<ide> def get_config(self):
<ide> 'interpolation': self.interpolation,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomRotation, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> fill_value=0.0,
<ide> **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomZoom').set(True)
<del> super(RandomZoom, self).__init__(seed=seed, force_generator=True, **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> self.height_factor = height_factor
<ide> if isinstance(height_factor, (tuple, list)):
<ide> self.height_lower = height_factor[0]
<ide> def get_config(self):
<ide> 'interpolation': self.interpolation,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomZoom, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class RandomContrast(BaseImageAugmentationLayer):
<ide> def __init__(self, factor, seed=None, **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomContrast').set(
<ide> True)
<del> super(RandomContrast, self).__init__(seed=seed, force_generator=True,
<del> **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> self.factor = factor
<ide> if isinstance(factor, (tuple, list)):
<ide> def get_config(self):
<ide> 'factor': self.factor,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomContrast, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> seed=None,
<ide> **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomHeight').set(True)
<del> super(RandomHeight, self).__init__(seed=seed, force_generator=True,
<del> **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> self.factor = factor
<ide> if isinstance(factor, (tuple, list)):
<ide> def get_config(self):
<ide> 'interpolation': self.interpolation,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomHeight, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> seed=None,
<ide> **kwargs):
<ide> base_preprocessing_layer.keras_kpl_gauge.get_cell('RandomWidth').set(True)
<del> super(RandomWidth, self).__init__(seed=seed, force_generator=True, **kwargs)
<add> super().__init__(seed=seed, force_generator=True, **kwargs)
<ide> self.factor = factor
<ide> if isinstance(factor, (tuple, list)):
<ide> self.width_lower = factor[0]
<ide> def get_config(self):
<ide> 'interpolation': self.interpolation,
<ide> 'seed': self.seed,
<ide> }
<del> base_config = super(RandomWidth, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/preprocessing/integer_lookup.py
<ide> def __init__(self,
<ide> mask_token = None if mask_token is None else np.int64(mask_token)
<ide> oov_token = None if oov_token is None else np.int64(oov_token)
<ide>
<del> super(IntegerLookup, self).__init__(
<add> super().__init__(
<ide> max_tokens=max_tokens,
<ide> num_oov_indices=num_oov_indices,
<ide> mask_token=mask_token,
<ide><path>keras/layers/preprocessing/preprocessing_stage_functional_test.py
<ide> class PL(base_preprocessing_layer.PreprocessingLayer):
<ide> def __init__(self, **kwargs):
<ide> self.adapt_time = None
<ide> self.adapt_count = 0
<del> super(PL, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def adapt(self, data, reset_state=True):
<ide> self.adapt_time = time.time()
<ide><path>keras/layers/preprocessing/preprocessing_stage_test.py
<ide> class PL(base_preprocessing_layer.PreprocessingLayer):
<ide> def __init__(self, **kwargs):
<ide> self.adapt_time = None
<ide> self.adapt_count = 0
<del> super(PL, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def adapt(self, data, reset_state=True):
<ide> self.adapt_time = time.time()
<ide><path>keras/layers/preprocessing/string_lookup.py
<ide> def __init__(self,
<ide>
<ide> self.encoding = encoding
<ide>
<del> super(StringLookup, self).__init__(
<add> super().__init__(
<ide> max_tokens=max_tokens,
<ide> num_oov_indices=num_oov_indices,
<ide> mask_token=mask_token,
<ide> def __init__(self,
<ide>
<ide> def get_config(self):
<ide> config = {"encoding": self.encoding}
<del> base_config = super(StringLookup, self).get_config()
<add> base_config = super().get_config()
<ide> # There is only one valid dtype for strings, so we don't expose this.
<ide> del base_config["vocabulary_dtype"]
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/preprocessing/text_vectorization.py
<ide> def get_config(self):
<ide> "vocabulary": utils.listify_tensors(vocab),
<ide> "idf_weights": utils.listify_tensors(idf_weights),
<ide> }
<del> base_config = super(TextVectorization, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def set_vocabulary(self, vocabulary, idf_weights=None):
<ide><path>keras/layers/regularization/activity_regularization.py
<ide> class ActivityRegularization(Layer):
<ide> """
<ide>
<ide> def __init__(self, l1=0., l2=0., **kwargs):
<del> super(ActivityRegularization, self).__init__(
<add> super().__init__(
<ide> activity_regularizer=regularizers.L1L2(l1=l1, l2=l2), **kwargs)
<ide> self.supports_masking = True
<ide> self.l1 = l1
<ide> def compute_output_shape(self, input_shape):
<ide>
<ide> def get_config(self):
<ide> config = {'l1': self.l1, 'l2': self.l2}
<del> base_config = super(ActivityRegularization, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/regularization/alpha_dropout.py
<ide> class AlphaDropout(base_layer.BaseRandomLayer):
<ide> """
<ide>
<ide> def __init__(self, rate, noise_shape=None, seed=None, **kwargs):
<del> super(AlphaDropout, self).__init__(seed=seed, **kwargs)
<add> super().__init__(seed=seed, **kwargs)
<ide> self.rate = rate
<ide> self.noise_shape = noise_shape
<ide> self.seed = seed
<ide> def dropped_inputs(inputs=inputs, rate=self.rate): # pylint: disable=missing-do
<ide>
<ide> def get_config(self):
<ide> config = {'rate': self.rate, 'seed': self.seed}
<del> base_config = super(AlphaDropout, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/regularization/dropout.py
<ide> class Dropout(base_layer.BaseRandomLayer):
<ide> """
<ide>
<ide> def __init__(self, rate, noise_shape=None, seed=None, **kwargs):
<del> super(Dropout, self).__init__(seed=seed, **kwargs)
<add> super().__init__(seed=seed, **kwargs)
<ide> if isinstance(rate, (int, float)) and not 0 <= rate <= 1:
<ide> raise ValueError(f'Invalid value {rate} received for '
<ide> f'`rate`, expected a value between 0 and 1.')
<ide> def get_config(self):
<ide> 'noise_shape': self.noise_shape,
<ide> 'seed': self.seed
<ide> }
<del> base_config = super(Dropout, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/regularization/gaussian_dropout.py
<ide> class GaussianDropout(base_layer.BaseRandomLayer):
<ide> """
<ide>
<ide> def __init__(self, rate, seed=None, **kwargs):
<del> super(GaussianDropout, self).__init__(seed=seed, **kwargs)
<add> super().__init__(seed=seed, **kwargs)
<ide> self.supports_masking = True
<ide> self.rate = rate
<ide> self.seed = seed
<ide> def noised():
<ide>
<ide> def get_config(self):
<ide> config = {'rate': self.rate, 'seed': self.seed}
<del> base_config = super(GaussianDropout, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/regularization/gaussian_noise.py
<ide> class GaussianNoise(base_layer.BaseRandomLayer):
<ide> """
<ide>
<ide> def __init__(self, stddev, seed=None, **kwargs):
<del> super(GaussianNoise, self).__init__(seed=seed, **kwargs)
<add> super().__init__(seed=seed, **kwargs)
<ide> self.supports_masking = True
<ide> self.stddev = stddev
<ide> self.seed = seed
<ide> def noised():
<ide>
<ide> def get_config(self):
<ide> config = {'stddev': self.stddev, 'seed': self.seed}
<del> base_config = super(GaussianNoise, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @tf_utils.shape_type_conversion
<ide><path>keras/layers/regularization/spatial_dropout1d.py
<ide> class SpatialDropout1D(Dropout):
<ide> """
<ide>
<ide> def __init__(self, rate, **kwargs):
<del> super(SpatialDropout1D, self).__init__(rate, **kwargs)
<add> super().__init__(rate, **kwargs)
<ide> self.input_spec = InputSpec(ndim=3)
<ide>
<ide> def _get_noise_shape(self, inputs):
<ide><path>keras/layers/regularization/spatial_dropout2d.py
<ide> class SpatialDropout2D(Dropout):
<ide> """
<ide>
<ide> def __init__(self, rate, data_format=None, **kwargs):
<del> super(SpatialDropout2D, self).__init__(rate, **kwargs)
<add> super().__init__(rate, **kwargs)
<ide> if data_format is None:
<ide> data_format = backend.image_data_format()
<ide> if data_format not in {'channels_last', 'channels_first'}:
<ide><path>keras/layers/regularization/spatial_dropout3d.py
<ide> class SpatialDropout3D(Dropout):
<ide> """
<ide>
<ide> def __init__(self, rate, data_format=None, **kwargs):
<del> super(SpatialDropout3D, self).__init__(rate, **kwargs)
<add> super().__init__(rate, **kwargs)
<ide> if data_format is None:
<ide> data_format = backend.image_data_format()
<ide> if data_format not in {'channels_last', 'channels_first'}:
<ide><path>keras/layers/reshaping/cropping1d.py
<ide> class Cropping1D(Layer):
<ide> """
<ide>
<ide> def __init__(self, cropping=(1, 1), **kwargs):
<del> super(Cropping1D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.cropping = conv_utils.normalize_tuple(
<ide> cropping, 2, 'cropping', allow_zero=True)
<ide> self.input_spec = InputSpec(ndim=3)
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'cropping': self.cropping}
<del> base_config = super(Cropping1D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/cropping2d.py
<ide> class Cropping2D(Layer):
<ide> """
<ide>
<ide> def __init__(self, cropping=((0, 0), (0, 0)), data_format=None, **kwargs):
<del> super(Cropping2D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> if isinstance(cropping, int):
<ide> self.cropping = ((cropping, cropping), (cropping, cropping))
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'cropping': self.cropping, 'data_format': self.data_format}
<del> base_config = super(Cropping2D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/cropping3d.py
<ide> def __init__(self,
<ide> cropping=((1, 1), (1, 1), (1, 1)),
<ide> data_format=None,
<ide> **kwargs):
<del> super(Cropping3D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> if isinstance(cropping, int):
<ide> self.cropping = ((cropping, cropping), (cropping, cropping), (cropping,
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'cropping': self.cropping, 'data_format': self.data_format}
<del> base_config = super(Cropping3D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/flatten.py
<ide> class Flatten(Layer):
<ide> """
<ide>
<ide> def __init__(self, data_format=None, **kwargs):
<del> super(Flatten, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> self.input_spec = InputSpec(min_ndim=1)
<ide> self._channels_first = self.data_format == 'channels_first'
<ide> def compute_output_shape(self, input_shape):
<ide> return tf.TensorShape(output_shape)
<ide>
<ide> def get_config(self):
<del> config = super(Flatten, self).get_config()
<add> config = super().get_config()
<ide> config.update({'data_format': self.data_format})
<ide> return config
<ide><path>keras/layers/reshaping/permute.py
<ide> class Permute(Layer):
<ide> """
<ide>
<ide> def __init__(self, dims, **kwargs):
<del> super(Permute, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.dims = tuple(dims)
<ide> if sorted(dims) != list(range(1, len(dims) + 1)):
<ide> raise ValueError(
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'dims': self.dims}
<del> base_config = super(Permute, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/repeat_vector.py
<ide> class RepeatVector(Layer):
<ide> """
<ide>
<ide> def __init__(self, n, **kwargs):
<del> super(RepeatVector, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.n = n
<ide> if not isinstance(n, int):
<ide> raise TypeError(f'Expected an integer value for `n`, got {type(n)}.')
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'n': self.n}
<del> base_config = super(RepeatVector, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/reshape.py
<ide> def __init__(self, target_shape, **kwargs):
<ide> samples dimension (batch size).
<ide> **kwargs: Any additional layer keyword arguments.
<ide> """
<del> super(Reshape, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.target_shape = tuple(target_shape)
<ide>
<ide> def _fix_unknown_dimension(self, input_shape, output_shape):
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'target_shape': self.target_shape}
<del> base_config = super(Reshape, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/up_sampling1d.py
<ide> class UpSampling1D(Layer):
<ide> """
<ide>
<ide> def __init__(self, size=2, **kwargs):
<del> super(UpSampling1D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.size = int(size)
<ide> self.input_spec = InputSpec(ndim=3)
<ide>
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'size': self.size}
<del> base_config = super(UpSampling1D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/up_sampling2d.py
<ide> def __init__(self,
<ide> data_format=None,
<ide> interpolation='nearest',
<ide> **kwargs):
<del> super(UpSampling2D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> self.size = conv_utils.normalize_tuple(size, 2, 'size')
<ide> interpolations = {
<ide> def get_config(self):
<ide> 'data_format': self.data_format,
<ide> 'interpolation': self.interpolation
<ide> }
<del> base_config = super(UpSampling2D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/up_sampling3d.py
<ide> def __init__(self, size=(2, 2, 2), data_format=None, **kwargs):
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> self.size = conv_utils.normalize_tuple(size, 3, 'size')
<ide> self.input_spec = InputSpec(ndim=5)
<del> super(UpSampling3D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def compute_output_shape(self, input_shape):
<ide> input_shape = tf.TensorShape(input_shape).as_list()
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'size': self.size, 'data_format': self.data_format}
<del> base_config = super(UpSampling3D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/zero_padding1d.py
<ide> class ZeroPadding1D(Layer):
<ide> """
<ide>
<ide> def __init__(self, padding=1, **kwargs):
<del> super(ZeroPadding1D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.padding = conv_utils.normalize_tuple(
<ide> padding, 2, 'padding', allow_zero=True)
<ide> self.input_spec = InputSpec(ndim=3)
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'padding': self.padding}
<del> base_config = super(ZeroPadding1D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/zero_padding2d.py
<ide> class ZeroPadding2D(Layer):
<ide> """
<ide>
<ide> def __init__(self, padding=(1, 1), data_format=None, **kwargs):
<del> super(ZeroPadding2D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> if isinstance(padding, int):
<ide> self.padding = ((padding, padding), (padding, padding))
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'padding': self.padding, 'data_format': self.data_format}
<del> base_config = super(ZeroPadding2D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/reshaping/zero_padding3d.py
<ide> class ZeroPadding3D(Layer):
<ide> """
<ide>
<ide> def __init__(self, padding=(1, 1, 1), data_format=None, **kwargs):
<del> super(ZeroPadding3D, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.data_format = conv_utils.normalize_data_format(data_format)
<ide> if isinstance(padding, int):
<ide> self.padding = ((padding, padding), (padding, padding), (padding,
<ide> def call(self, inputs):
<ide>
<ide> def get_config(self):
<ide> config = {'padding': self.padding, 'data_format': self.data_format}
<del> base_config = super(ZeroPadding3D, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/rnn/base_conv_lstm.py
<ide> def __init__(self,
<ide> dropout=0.0,
<ide> recurrent_dropout=0.0,
<ide> **kwargs):
<del> super(ConvLSTMCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.rank = rank
<ide> if self.rank > 3:
<ide> raise ValueError(f'Rank {rank} convolutions are not currently '
<ide> def get_config(self):
<ide> 'recurrent_dropout':
<ide> self.recurrent_dropout,
<ide> }
<del> base_config = super(ConvLSTMCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> dropout=dropout,
<ide> recurrent_dropout=recurrent_dropout,
<ide> dtype=kwargs.get('dtype'))
<del> super(ConvLSTM, self).__init__(
<add> super().__init__(
<ide> rank,
<ide> cell,
<ide> return_sequences=return_sequences,
<ide> def __init__(self,
<ide> self.activity_regularizer = regularizers.get(activity_regularizer)
<ide>
<ide> def call(self, inputs, mask=None, training=None, initial_state=None):
<del> return super(ConvLSTM, self).call(
<add> return super().call(
<ide> inputs, mask=mask, training=training, initial_state=initial_state)
<ide>
<ide> @property
<ide> def get_config(self):
<ide> 'bias_constraint': constraints.serialize(self.bias_constraint),
<ide> 'dropout': self.dropout,
<ide> 'recurrent_dropout': self.recurrent_dropout}
<del> base_config = super(ConvLSTM, self).get_config()
<add> base_config = super().get_config()
<ide> del base_config['cell']
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/rnn/base_conv_rnn.py
<ide> def __init__(self,
<ide> 'stack convolutional cells. Only pass a single cell '
<ide> 'instance as the `cell` argument. Received: '
<ide> f'cell={cell}')
<del> super(ConvRNN, self).__init__(cell, return_sequences, return_state,
<del> go_backwards, stateful, unroll, **kwargs)
<add> super().__init__(cell, return_sequences, return_state, go_backwards,
<add> stateful, unroll, **kwargs)
<ide> self.rank = rank
<ide> self.input_spec = [InputSpec(ndim=rank + 3)]
<ide><path>keras/layers/rnn/base_cudnn_rnn.py
<ide> def non_trainable_weights(self):
<ide>
<ide> @property
<ide> def losses(self):
<del> return super(RNN, self).losses
<add> return super(RNN, self).losses # pylint: disable=bad-super-call
<ide>
<ide> def get_losses_for(self, inputs=None):
<ide> return super( # pylint: disable=bad-super-call
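
The `losses` property above deliberately keeps the explicit `super(RNN, self)` and silences pylint rather than switching to the zero-argument form, because the two spellings are not equivalent here: the explicit form starts the MRO search after `RNN`, skipping `RNN`'s own override. A simplified sketch with stand-in class names:

class Layer:
  @property
  def losses(self):
    return ['layer']

class RNN(Layer):
  @property
  def losses(self):
    return ['rnn']

class CuDNNRNN(RNN):
  @property
  def losses(self):
    # super() here would mean super(CuDNNRNN, self) and return RNN's ['rnn'];
    # super(RNN, self) skips RNN and reaches Layer's implementation instead.
    return super(RNN, self).losses

assert CuDNNRNN().losses == ['layer']
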
<ide><path>keras/layers/rnn/base_rnn.py
<ide> def __init__(self,
<ide> kwargs.pop('input_dim', None))
<ide> kwargs['input_shape'] = input_shape
<ide>
<del> super(RNN, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.cell = cell
<ide> self.return_sequences = return_sequences
<ide> self.return_state = return_state
<ide> def _use_input_spec_as_call_signature(self):
<ide> # called with any time step value, as long as it is not None), so it
<ide> # cannot be used as the call function signature when saving to SavedModel.
<ide> return False
<del> return super(RNN, self)._use_input_spec_as_call_signature
<add> return super()._use_input_spec_as_call_signature
<ide>
<ide> @property
<ide> def states(self):
<ide> def __call__(self, inputs, initial_state=None, constants=None, **kwargs):
<ide> inputs, initial_state, constants, self._num_constants)
<ide>
<ide> if initial_state is None and constants is None:
<del> return super(RNN, self).__call__(inputs, **kwargs)
<add> return super().__call__(inputs, **kwargs)
<ide>
<ide> # If any of `initial_state` or `constants` are specified and are Keras
<ide> # tensors, then add them to the inputs and temporarily modify the
<ide> def __call__(self, inputs, initial_state=None, constants=None, **kwargs):
<ide> tf.nest.map_structure(lambda _: None, inputs)) + additional_specs
<ide> # Perform the call with temporarily replaced input_spec
<ide> self.input_spec = full_input_spec
<del> output = super(RNN, self).__call__(full_input, **kwargs)
<add> output = super().__call__(full_input, **kwargs)
<ide> # Remove the additional_specs from input spec and keep the rest. It is
<ide> # important to keep since the input spec was populated by build(), and
<ide> # will be reused in the stateful=True.
<ide> def __call__(self, inputs, initial_state=None, constants=None, **kwargs):
<ide> kwargs['initial_state'] = initial_state
<ide> if constants is not None:
<ide> kwargs['constants'] = constants
<del> return super(RNN, self).__call__(inputs, **kwargs)
<add> return super().__call__(inputs, **kwargs)
<ide>
<ide> def call(self,
<ide> inputs,
<ide> def get_config(self):
<ide> config['zero_output_for_mask'] = self.zero_output_for_mask
<ide>
<ide> config['cell'] = generic_utils.serialize_keras_object(self.cell)
<del> base_config = super(RNN, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
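
The rewrite is not limited to `__init__`; `RNN.__call__` above delegates through `super().__call__` on several branches. A toy sketch of that delegation shape (names illustrative, not the real Keras call path):

class Layer:
  def __call__(self, inputs, **kwargs):
    return f'layer({inputs})'

class RNN(Layer):
  def __call__(self, inputs, initial_state=None, **kwargs):
    if initial_state is None:
      return super().__call__(inputs, **kwargs)
    # Bundle the extra tensors into the inputs, then delegate, as the real
    # implementation does with its temporarily widened input_spec.
    return super().__call__([inputs, initial_state], **kwargs)

assert RNN()('x') == 'layer(x)'
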
<ide><path>keras/layers/rnn/base_rnn_test.py
<ide> class MinimalRNNCell(keras.layers.Layer):
<ide> def __init__(self, units, **kwargs):
<ide> self.units = units
<ide> self.state_size = units
<del> super(MinimalRNNCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape):
<ide> self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
<ide> def call(self, inputs, states):
<ide>
<ide> def get_config(self):
<ide> config = {'units': self.units}
<del> base_config = super(MinimalRNNCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> # Test basic case.
<ide> class MinimalRNNCell(keras.layers.AbstractRNNCell):
<ide>
<ide> def __init__(self, units, **kwargs):
<ide> self.units = units
<del> super(MinimalRNNCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> @property
<ide> def state_size(self):
<ide> class CustomRNNCell(keras.layers.Layer):
<ide> def __init__(self, units, **kwargs):
<ide> self.units = units
<ide> self.state_size = units
<del> super(CustomRNNCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape):
<ide> self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
<ide> def call(self, inputs, states):
<ide>
<ide> def get_config(self):
<ide> config = {'units': self.units}
<del> base_config = super(CustomRNNCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> for cell_class in [keras.layers.SimpleRNNCell,
<ide> def test_stacked_rnn_with_training_param(self):
<ide> class CellWrapper(keras.layers.AbstractRNNCell):
<ide>
<ide> def __init__(self, cell):
<del> super(CellWrapper, self).__init__()
<add> super().__init__()
<ide> self.cell = cell
<ide>
<ide> @property
<ide> class Cell(keras.layers.Layer):
<ide> def __init__(self):
<ide> self.state_size = None
<ide> self.output_size = None
<del> super(Cell, self).__init__()
<add> super().__init__()
<ide>
<ide> def build(self, input_shape):
<ide> self.state_size = input_shape[-1]
<ide> class StatelessCell(keras.layers.Layer):
<ide> def __init__(self):
<ide> self.state_size = ((), [], ())
<ide> self.output_size = None
<del> super(StatelessCell, self).__init__()
<add> super().__init__()
<ide>
<ide> def build(self, input_shape):
<ide> self.output_size = input_shape[-1]
<ide> def __init__(self, units, constant_size, **kwargs):
<ide> self.units = units
<ide> self.state_size = units
<ide> self.constant_size = constant_size
<del> super(RNNCellWithConstants, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape):
<ide> self.input_kernel = self.add_weight(
<ide> def call(self, inputs, states, constants):
<ide>
<ide> def get_config(self):
<ide> config = {'units': self.units, 'constant_size': self.constant_size}
<del> base_config = super(RNNCellWithConstants, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self, unit_a, unit_b, **kwargs):
<ide> self.unit_b = unit_b
<ide> self.state_size = tf.TensorShape([unit_a, unit_b])
<ide> self.output_size = tf.TensorShape([unit_a, unit_b])
<del> super(Minimal2DRNNCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape):
<ide> input_a = input_shape[-2]
<ide> class PlusOneRNNCell(keras.layers.Layer):
<ide>
<ide> def __init__(self, num_unit, **kwargs):
<ide> self.state_size = num_unit
<del> super(PlusOneRNNCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape):
<ide> self.output_size = input_shape[-1]
<ide> def __init__(self, unit_1, unit_2, unit_3, use_tuple=False, **kwargs):
<ide> self.unit_2 = unit_2
<ide> self.unit_3 = unit_3
<ide> self.use_tuple = use_tuple
<del> super(NestedCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> # A nested state.
<ide> if use_tuple:
<ide> self.state_size = NestedState(
<ide><path>keras/layers/rnn/base_wrapper.py
<ide> class Wrapper(Layer):
<ide> def __init__(self, layer, **kwargs):
<ide> assert isinstance(layer, Layer)
<ide> self.layer = layer
<del> super(Wrapper, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape=None):
<ide> if not self.layer.built:
<ide> def activity_regularizer(self):
<ide>
<ide> def get_config(self):
<ide> config = {'layer': generic_utils.serialize_keras_object(self.layer)}
<del> base_config = super(Wrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide><path>keras/layers/rnn/bidirectional.py
<ide> def __init__(self,
<ide> # We don't want to track `layer` since we're already tracking the two copies
<ide> # of it we actually run.
<ide> self._setattr_tracking = False
<del> super(Bidirectional, self).__init__(layer, **kwargs)
<add> super().__init__(layer, **kwargs)
<ide> self._setattr_tracking = True
<ide>
<ide> # Recreate the forward layer from the original layer config, so that it will
<ide> def __call__(self, inputs, initial_state=None, constants=None, **kwargs):
<ide> inputs = inputs[0]
<ide>
<ide> if initial_state is None and constants is None:
<del> return super(Bidirectional, self).__call__(inputs, **kwargs)
<add> return super().__call__(inputs, **kwargs)
<ide>
<ide> # Applies the same workaround as in `RNN.__call__`
<ide> additional_inputs = []
<ide> def __call__(self, inputs, initial_state=None, constants=None, **kwargs):
<ide> # Perform the call with temporarily replaced input_spec
<ide> original_input_spec = self.input_spec
<ide> self.input_spec = full_input_spec
<del> output = super(Bidirectional, self).__call__(full_input, **kwargs)
<add> output = super().__call__(full_input, **kwargs)
<ide> self.input_spec = original_input_spec
<ide> return output
<ide> else:
<del> return super(Bidirectional, self).__call__(inputs, **kwargs)
<add> return super().__call__(inputs, **kwargs)
<ide>
<ide> def call(self,
<ide> inputs,
<ide> def get_config(self):
<ide>
<ide> if hasattr(self, '_backward_layer_config'):
<ide> config['backward_layer'] = self._backward_layer_config
<del> base_config = super(Bidirectional, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide><path>keras/layers/rnn/bidirectional_test.py
<ide> def __init__(self, units, constant_size, **kwargs):
<ide> self.units = units
<ide> self.state_size = units
<ide> self.constant_size = constant_size
<del> super(_RNNCellWithConstants, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def build(self, input_shape):
<ide> self.input_kernel = self.add_weight(
<ide> def call(self, inputs, states, constants):
<ide>
<ide> def get_config(self):
<ide> config = {'units': self.units, 'constant_size': self.constant_size}
<del> base_config = super(_RNNCellWithConstants, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class _ResidualLSTMCell(keras.layers.LSTMCell):
<ide>
<ide> def call(self, inputs, states, training=None):
<del> output, states = super(_ResidualLSTMCell, self).call(inputs, states)
<add> output, states = super().call(inputs, states)
<ide> return output + inputs, states
<ide>
<ide>
<ide> def compute_output_shape(self, input_shape):
<ide> class TestListLayer(TestLayer):
<ide>
<ide> def compute_output_shape(self, input_shape):
<del> shape = super(TestListLayer, self).compute_output_shape(input_shape)
<add> shape = super().compute_output_shape(input_shape)
<ide> return shape.as_list()
<ide>
<ide> class TestTupleLayer(TestLayer):
<ide>
<ide> def compute_output_shape(self, input_shape):
<del> shape = super(TestTupleLayer, self).compute_output_shape(input_shape)
<add> shape = super().compute_output_shape(input_shape)
<ide> return tuple(shape.as_list())
<ide>
<ide> # Layers can specify output shape as list/tuple/TensorShape
<ide><path>keras/layers/rnn/cell_wrappers.py
<ide> class _RNNCellWrapper(AbstractRNNCell):
<ide> """
<ide>
<ide> def __init__(self, cell, *args, **kwargs):
<del> super(_RNNCellWrapper, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> self.cell = cell
<ide> cell_call_spec = tf_inspect.getfullargspec(cell.call)
<ide> self._call_spec.expects_training_arg = (("training"
<ide> def get_config(self):
<ide> "config": self.cell.get_config()
<ide> },
<ide> }
<del> base_config = super(_RNNCellWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def dropout_state_filter_visitor(s):
<ide> raise ValueError("keras LSTM cell does not work with DropoutWrapper. "
<ide> "Please use LSTMCell(dropout=x, recurrent_dropout=y) "
<ide> "instead.")
<del> super(DropoutWrapper, self).__init__(cell, dtype=dtype, **kwargs)
<add> super().__init__(cell, dtype=dtype, **kwargs)
<ide>
<ide> if (dropout_state_filter_visitor is not None and
<ide> not callable(dropout_state_filter_visitor)):
<ide> def get_config(self):
<ide> config.update({"dropout_fn": function,
<ide> "dropout_fn_type": function_type,
<ide> "dropout_fn_module": function_module})
<del> base_config = super(DropoutWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def __init__(self, cell, residual_fn=None, **kwargs):
<ide> and outputs.
<ide> **kwargs: dict of keyword arguments for base layer.
<ide> """
<del> super(ResidualWrapper, self).__init__(cell, **kwargs)
<add> super().__init__(cell, **kwargs)
<ide> self._residual_fn = residual_fn
<ide>
<ide> def _call_wrapped_cell(self, inputs, state, cell_call_fn, **kwargs):
<ide> def get_config(self):
<ide> }
<ide> else:
<ide> config = {}
<del> base_config = super(ResidualWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def __init__(self, cell, device, **kwargs):
<ide> device: A device string or function, for passing to `tf.device`.
<ide> **kwargs: dict of keyword arguments for base layer.
<ide> """
<del> super(DeviceWrapper, self).__init__(cell, **kwargs)
<add> super().__init__(cell, **kwargs)
<ide> self._device = device
<ide>
<ide> def zero_state(self, batch_size, dtype):
<ide> def _call_wrapped_cell(self, inputs, state, cell_call_fn, **kwargs):
<ide>
<ide> def get_config(self):
<ide> config = {"device": self._device}
<del> base_config = super(DeviceWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/layers/rnn/conv_lstm1d.py
<ide> def __init__(self,
<ide> dropout=0.0,
<ide> recurrent_dropout=0.0,
<ide> **kwargs):
<del> super(ConvLSTM1D, self).__init__(
<add> super().__init__(
<ide> rank=1,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/rnn/conv_lstm2d.py
<ide> def __init__(self,
<ide> dropout=0.0,
<ide> recurrent_dropout=0.0,
<ide> **kwargs):
<del> super(ConvLSTM2D, self).__init__(
<add> super().__init__(
<ide> rank=2,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/rnn/conv_lstm3d.py
<ide> def __init__(self,
<ide> dropout=0.0,
<ide> recurrent_dropout=0.0,
<ide> **kwargs):
<del> super(ConvLSTM3D, self).__init__(
<add> super().__init__(
<ide> rank=3,
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide><path>keras/layers/rnn/cudnn_gru.py
<ide> def __init__(self,
<ide> self.units = units
<ide> cell_spec = collections.namedtuple('cell', 'state_size')
<ide> self._cell = cell_spec(state_size=self.units)
<del> super(CuDNNGRU, self).__init__(
<add> super().__init__(
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> go_backwards=go_backwards,
<ide> def cell(self):
<ide> return self._cell
<ide>
<ide> def build(self, input_shape):
<del> super(CuDNNGRU, self).build(input_shape)
<add> super().build(input_shape)
<ide> if isinstance(input_shape, list):
<ide> input_shape = input_shape[0]
<ide> input_dim = int(input_shape[-1])
<ide> def get_config(self):
<ide> constraints.serialize(self.recurrent_constraint),
<ide> 'bias_constraint': constraints.serialize(self.bias_constraint)
<ide> }
<del> base_config = super(CuDNNGRU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/rnn/cudnn_lstm.py
<ide> def __init__(self,
<ide> self.units = units
<ide> cell_spec = collections.namedtuple('cell', 'state_size')
<ide> self._cell = cell_spec(state_size=(self.units, self.units))
<del> super(CuDNNLSTM, self).__init__(
<add> super().__init__(
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> go_backwards=go_backwards,
<ide> def cell(self):
<ide> return self._cell
<ide>
<ide> def build(self, input_shape):
<del> super(CuDNNLSTM, self).build(input_shape)
<add> super().build(input_shape)
<ide> if isinstance(input_shape, list):
<ide> input_shape = input_shape[0]
<ide> input_dim = int(input_shape[-1])
<ide> def get_config(self):
<ide> constraints.serialize(self.recurrent_constraint),
<ide> 'bias_constraint': constraints.serialize(self.bias_constraint)
<ide> }
<del> base_config = super(CuDNNLSTM, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/rnn/dropout_rnn_cell_mixin.py
<ide> class DropoutRNNCellMixin:
<ide>
<ide> def __init__(self, *args, **kwargs):
<ide> self._create_non_trackable_mask_cache()
<del> super(DropoutRNNCellMixin, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide>
<ide> @tf.__internal__.tracking.no_automatic_dependency_tracking
<ide> def _create_non_trackable_mask_cache(self):
<ide> def get_recurrent_dropout_mask_for_cell(self, inputs, training, count=1):
<ide> def __getstate__(self):
<ide> # Used for deepcopy. The caching can't be pickled by python, since it will
<ide> # contain tensor and graph.
<del> state = super(DropoutRNNCellMixin, self).__getstate__()
<add> state = super().__getstate__()
<ide> state.pop('_dropout_mask_cache', None)
<ide> state.pop('_recurrent_dropout_mask_cache', None)
<ide> return state
<ide> def __setstate__(self, state):
<ide> self._create_dropout_mask)
<ide> state['_recurrent_dropout_mask_cache'] = backend.ContextValueCache(
<ide> self._create_recurrent_dropout_mask)
<del> super(DropoutRNNCellMixin, self).__setstate__(state)
<add> super().__setstate__(state)
<ide>
<ide>
<ide> def _generate_dropout_mask(generator, ones, rate, training=None, count=1):
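
`DropoutRNNCellMixin` is composed into concrete cells through multiple inheritance, and the zero-argument `super()` calls above are what keep it cooperative: lookup follows the MRO of the actual class, not of the mixin. A hedged sketch with stand-in names:

class DropoutMixin:
  def __init__(self, *args, **kwargs):
    self._mask_cache = {}
    super().__init__(*args, **kwargs)  # continues along the real class's MRO

class Cell:
  def __init__(self, units):
    self.units = units

class DropoutCell(DropoutMixin, Cell):
  pass

cell = DropoutCell(units=8)
assert cell.units == 8 and cell._mask_cache == {}
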
<ide><path>keras/layers/rnn/gru.py
<ide> def __init__(self,
<ide> self._enable_caching_device = kwargs.pop('enable_caching_device', True)
<ide> else:
<ide> self._enable_caching_device = kwargs.pop('enable_caching_device', False)
<del> super(GRUCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.units = units
<ide> self.activation = activations.get(activation)
<ide> self.recurrent_activation = activations.get(recurrent_activation)
<ide> def get_config(self):
<ide> 'reset_after': self.reset_after
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self))
<del> base_config = super(GRUCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
<ide> def __init__(self,
<ide> dtype=kwargs.get('dtype'),
<ide> trainable=kwargs.get('trainable', True),
<ide> **cell_kwargs)
<del> super(GRU, self).__init__(
<add> super().__init__(
<ide> cell,
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> def get_config(self):
<ide> self.reset_after
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self.cell))
<del> base_config = super(GRU, self).get_config()
<add> base_config = super().get_config()
<ide> del base_config['cell']
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/rnn/gru_v1.py
<ide> def __init__(self,
<ide> recurrent_dropout=0.,
<ide> reset_after=False,
<ide> **kwargs):
<del> super(GRUCell, self).__init__(
<add> super().__init__(
<ide> units,
<ide> activation=activation,
<ide> recurrent_activation=recurrent_activation,
<ide> def __init__(self,
<ide> dtype=kwargs.get('dtype'),
<ide> trainable=kwargs.get('trainable', True),
<ide> **cell_kwargs)
<del> super(GRU, self).__init__(
<add> super().__init__(
<ide> cell,
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> def __init__(self,
<ide> self.input_spec = [InputSpec(ndim=3)]
<ide>
<ide> def call(self, inputs, mask=None, training=None, initial_state=None):
<del> return super(GRU, self).call(
<add> return super().call(
<ide> inputs, mask=mask, training=training, initial_state=initial_state)
<ide>
<ide> @property
<ide> def get_config(self):
<ide> self.reset_after
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self.cell))
<del> base_config = super(GRU, self).get_config()
<add> base_config = super().get_config()
<ide> del base_config['cell']
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/rnn/legacy_cell_wrappers.py
<ide> class _RNNCellWrapperV1(RNNCell):
<ide> """
<ide>
<ide> def __init__(self, cell, *args, **kwargs):
<del> super(_RNNCellWrapperV1, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> assert_like_rnncell("cell", cell)
<ide> self.cell = cell
<ide> if isinstance(cell, tf.__internal__.tracking.Trackable):
<ide> def get_config(self):
<ide> "config": self.cell.get_config()
<ide> },
<ide> }
<del> base_config = super(_RNNCellWrapperV1, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def dropout_state_filter_visitor(s):
<ide> but not `callable`.
<ide> ValueError: if any of the keep_probs are not between 0 and 1.
<ide> """
<del> super(DropoutWrapper, self).__init__(cell, dtype=dtype, **kwargs)
<add> super().__init__(cell, dtype=dtype, **kwargs)
<ide>
<ide> if (dropout_state_filter_visitor is not None and
<ide> not callable(dropout_state_filter_visitor)):
<ide> def get_config(self):
<ide> config.update({"dropout_fn": function,
<ide> "dropout_fn_type": function_type,
<ide> "dropout_fn_module": function_module})
<del> base_config = super(DropoutWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def __init__(self, cell, residual_fn=None, **kwargs):
<ide> and outputs.
<ide> **kwargs: dict of keyword arguments for base layer.
<ide> """
<del> super(ResidualWrapper, self).__init__(cell, **kwargs)
<add> super().__init__(cell, **kwargs)
<ide> self._residual_fn = residual_fn
<ide>
<ide> def _call_wrapped_cell(self, inputs, state, cell_call_fn, **kwargs):
<ide> def get_config(self):
<ide> }
<ide> else:
<ide> config = {}
<del> base_config = super(ResidualWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> def __init__(self, cell, device, **kwargs):
<ide> device: A device string or function, for passing to `tf.device`.
<ide> **kwargs: dict of keyword arguments for base layer.
<ide> """
<del> super(DeviceWrapper, self).__init__(cell, **kwargs)
<add> super().__init__(cell, **kwargs)
<ide> self._device = device
<ide>
<ide> def zero_state(self, batch_size, dtype):
<ide> def _call_wrapped_cell(self, inputs, state, cell_call_fn, **kwargs):
<ide>
<ide> def get_config(self):
<ide> config = {"device": self._device}
<del> base_config = super(DeviceWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/layers/rnn/legacy_cells.py
<ide> class RNNCell(base_layer.Layer):
<ide> """
<ide>
<ide> def __init__(self, trainable=True, name=None, dtype=None, **kwargs):
<del> super(RNNCell, self).__init__(
<add> super().__init__(
<ide> trainable=trainable, name=name, dtype=dtype, **kwargs)
<ide> # Attribute that indicates whether the cell is a TF RNN cell, due the slight
<ide> # difference between TF and Keras RNN cell. Notably the state is not wrapped
<ide> def __call__(self, inputs, state, scope=None):
<ide> if scope is not None:
<ide> with tf.compat.v1.variable_scope(
<ide> scope, custom_getter=self._rnn_get_variable) as scope:
<del> return super(RNNCell, self).__call__(inputs, state, scope=scope)
<add> return super().__call__(inputs, state, scope=scope)
<ide> else:
<ide> scope_attrname = "rnncell_scope"
<ide> scope = getattr(self, scope_attrname, None)
<ide> def __call__(self, inputs, state, scope=None):
<ide> custom_getter=self._rnn_get_variable)
<ide> setattr(self, scope_attrname, scope)
<ide> with scope:
<del> return super(RNNCell, self).__call__(inputs, state)
<add> return super().__call__(inputs, state)
<ide>
<ide> def _rnn_get_variable(self, getter, *args, **kwargs):
<ide> variable = getter(*args, **kwargs)
<ide> def zero_state(self, batch_size, dtype):
<ide>
<ide> # TODO(b/134773139): Remove when contrib RNN cells implement `get_config`
<ide> def get_config(self): # pylint: disable=useless-super-delegation
<del> return super(RNNCell, self).get_config()
<add> return super().get_config()
<ide>
<ide> @property
<ide> def _use_input_spec_as_call_signature(self):
<ide> def __init__(self,
<ide> "is equivalent as `tf.keras.layers.SimpleRNNCell`, "
<ide> "and will be replaced by that in Tensorflow 2.0.",
<ide> stacklevel=2)
<del> super(BasicRNNCell, self).__init__(
<add> super().__init__(
<ide> _reuse=reuse, name=name, dtype=dtype, **kwargs)
<ide> _check_supported_dtypes(self.dtype)
<ide> if tf.executing_eagerly() and tf.config.list_logical_devices("GPU"):
<ide> def get_config(self):
<ide> "activation": activations.serialize(self._activation),
<ide> "reuse": self._reuse,
<ide> }
<del> base_config = super(BasicRNNCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> "is equivalent as `tf.keras.layers.GRUCell`, "
<ide> "and will be replaced by that in Tensorflow 2.0.",
<ide> stacklevel=2)
<del> super(GRUCell, self).__init__(
<add> super().__init__(
<ide> _reuse=reuse, name=name, dtype=dtype, **kwargs)
<ide> _check_supported_dtypes(self.dtype)
<ide>
<ide> def get_config(self):
<ide> "activation": activations.serialize(self._activation),
<ide> "reuse": self._reuse,
<ide> }
<del> base_config = super(GRUCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> "is equivalent as `tf.keras.layers.LSTMCell`, "
<ide> "and will be replaced by that in Tensorflow 2.0.",
<ide> stacklevel=2)
<del> super(BasicLSTMCell, self).__init__(
<add> super().__init__(
<ide> _reuse=reuse, name=name, dtype=dtype, **kwargs)
<ide> _check_supported_dtypes(self.dtype)
<ide> if not state_is_tuple:
<ide> def get_config(self):
<ide> "activation": activations.serialize(self._activation),
<ide> "reuse": self._reuse,
<ide> }
<del> base_config = super(BasicLSTMCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> "is equivalent as `tf.keras.layers.LSTMCell`, "
<ide> "and will be replaced by that in Tensorflow 2.0.",
<ide> stacklevel=2)
<del> super(LSTMCell, self).__init__(
<add> super().__init__(
<ide> _reuse=reuse, name=name, dtype=dtype, **kwargs)
<ide> _check_supported_dtypes(self.dtype)
<ide> if not state_is_tuple:
<ide> def get_config(self):
<ide> "activation": activations.serialize(self._activation),
<ide> "reuse": self._reuse,
<ide> }
<del> base_config = super(LSTMCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self, cells, state_is_tuple=True):
<ide> logging.warning("`tf.nn.rnn_cell.MultiRNNCell` is deprecated. This class "
<ide> "is equivalent as `tf.keras.layers.StackedRNNCells`, "
<ide> "and will be replaced by that in Tensorflow 2.0.")
<del> super(MultiRNNCell, self).__init__()
<add> super().__init__()
<ide> if not cells:
<ide> raise ValueError("Must specify at least one cell for MultiRNNCell.")
<ide> if not tf.nest.is_nested(cells):
<ide> def zero_state(self, batch_size, dtype):
<ide> else:
<ide> # We know here that state_size of each cell is not a tuple and
<ide> # presumably does not contain TensorArrays or anything else fancy
<del> return super(MultiRNNCell, self).zero_state(batch_size, dtype)
<add> return super().zero_state(batch_size, dtype)
<ide>
<ide> @property
<ide> def trainable_weights(self):
<ide><path>keras/layers/rnn/lstm.py
<ide> def __init__(self,
<ide> self._enable_caching_device = kwargs.pop('enable_caching_device', True)
<ide> else:
<ide> self._enable_caching_device = kwargs.pop('enable_caching_device', False)
<del> super(LSTMCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.units = units
<ide> self.activation = activations.get(activation)
<ide> self.recurrent_activation = activations.get(recurrent_activation)
<ide> def get_config(self):
<ide> self.implementation
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self))
<del> base_config = super(LSTMCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
<ide> def __init__(self,
<ide> dtype=kwargs.get('dtype'),
<ide> trainable=kwargs.get('trainable', True),
<ide> **cell_kwargs)
<del> super(LSTM, self).__init__(
<add> super().__init__(
<ide> cell,
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> def get_config(self):
<ide> self.implementation
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self.cell))
<del> base_config = super(LSTM, self).get_config()
<add> base_config = super().get_config()
<ide> del base_config['cell']
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/rnn/lstm_v1.py
<ide> def __init__(self,
<ide> dropout=0.,
<ide> recurrent_dropout=0.,
<ide> **kwargs):
<del> super(LSTMCell, self).__init__(
<add> super().__init__(
<ide> units,
<ide> activation=activation,
<ide> recurrent_activation=recurrent_activation,
<ide> def __init__(self,
<ide> dtype=kwargs.get('dtype'),
<ide> trainable=kwargs.get('trainable', True),
<ide> **cell_kwargs)
<del> super(LSTM, self).__init__(
<add> super().__init__(
<ide> cell,
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> def __init__(self,
<ide> self.input_spec = [InputSpec(ndim=3)]
<ide>
<ide> def call(self, inputs, mask=None, training=None, initial_state=None):
<del> return super(LSTM, self).call(
<add> return super().call(
<ide> inputs, mask=mask, training=training, initial_state=initial_state)
<ide>
<ide> @property
<ide> def get_config(self):
<ide> self.implementation
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self.cell))
<del> base_config = super(LSTM, self).get_config()
<add> base_config = super().get_config()
<ide> del base_config['cell']
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide><path>keras/layers/rnn/simple_rnn.py
<ide> def __init__(self,
<ide> self._enable_caching_device = kwargs.pop('enable_caching_device', True)
<ide> else:
<ide> self._enable_caching_device = kwargs.pop('enable_caching_device', False)
<del> super(SimpleRNNCell, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.units = units
<ide> self.activation = activations.get(activation)
<ide> self.use_bias = use_bias
<ide> def get_config(self):
<ide> self.recurrent_dropout
<ide> }
<ide> config.update(rnn_utils.config_for_enable_caching_device(self))
<del> base_config = super(SimpleRNNCell, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> dtype=kwargs.get('dtype'),
<ide> trainable=kwargs.get('trainable', True),
<ide> **cell_kwargs)
<del> super(SimpleRNN, self).__init__(
<add> super().__init__(
<ide> cell,
<ide> return_sequences=return_sequences,
<ide> return_state=return_state,
<ide> def __init__(self,
<ide> self.input_spec = [InputSpec(ndim=3)]
<ide>
<ide> def call(self, inputs, mask=None, training=None, initial_state=None):
<del> return super(SimpleRNN, self).call(
<add> return super().call(
<ide> inputs, mask=mask, training=training, initial_state=initial_state)
<ide>
<ide> @property
<ide> def get_config(self):
<ide> 'recurrent_dropout':
<ide> self.recurrent_dropout
<ide> }
<del> base_config = super(SimpleRNN, self).get_config()
<add> base_config = super().get_config()
<ide> config.update(rnn_utils.config_for_enable_caching_device(self.cell))
<ide> del base_config['cell']
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide><path>keras/layers/rnn/stacked_rnn_cells.py
<ide> def __init__(self, cells, **kwargs):
<ide> 'be deprecated. Please update the code to work with the '
<ide> 'natural order of states if you rely on the RNN states, '
<ide> 'eg RNN(return_state=True).')
<del> super(StackedRNNCells, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> @property
<ide> def state_size(self):
<ide> def get_config(self):
<ide> for cell in self.cells:
<ide> cells.append(generic_utils.serialize_keras_object(cell))
<ide> config = {'cells': cells}
<del> base_config = super(StackedRNNCells, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide><path>keras/layers/rnn/time_distributed.py
<ide> def __init__(self, layer, **kwargs):
<ide> raise ValueError(
<ide> 'Please initialize `TimeDistributed` layer with a '
<ide> f'`tf.keras.layers.Layer` instance. Received: {layer}')
<del> super(TimeDistributed, self).__init__(layer, **kwargs)
<add> super().__init__(layer, **kwargs)
<ide> self.supports_masking = True
<ide>
<ide> # It is safe to use the fast, reshape-based approach with all of our
<ide> def build(self, input_shape):
<ide> child_input_shape = tf.nest.map_structure(self._remove_timesteps,
<ide> input_shape)
<ide> child_input_shape = tf_utils.convert_shapes(child_input_shape)
<del> super(TimeDistributed, self).build(tuple(child_input_shape))
<add> super().build(tuple(child_input_shape))
<ide> self.built = True
<ide>
<ide> def compute_output_shape(self, input_shape):
<ide><path>keras/layers/rnn/time_distributed_test.py
<ide> def compute_output_shape(self, input_shape):
<ide> class TestListLayer(TestLayer):
<ide>
<ide> def compute_output_shape(self, input_shape):
<del> shape = super(TestListLayer, self).compute_output_shape(input_shape)
<add> shape = super().compute_output_shape(input_shape)
<ide> return shape.as_list()
<ide>
<ide> class TestTupleLayer(TestLayer):
<ide>
<ide> def compute_output_shape(self, input_shape):
<del> shape = super(TestTupleLayer, self).compute_output_shape(input_shape)
<add> shape = super().compute_output_shape(input_shape)
<ide> return tuple(shape.as_list())
<ide>
<ide> # Layers can specify output shape as list/tuple/TensorShape
<ide> def test_TimeDistributed_with_mimo(self):
<ide> class TestLayer(keras.layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(TestLayer, self).__init__()
<add> super().__init__()
<ide> self.dense_1 = dense_1
<ide> self.dense_2 = dense_2
<ide>
<ide><path>keras/legacy_tf_layers/base.py
<ide> def __init__(self, trainable=True, name=None, dtype=None,
<ide> # Mark that legacy layers should not be instrumented as Keras usage
<ide> self._disable_keras_instrumentation = True
<ide>
<del> super(Layer, self).__init__(trainable=trainable, name=name, dtype=dtype,
<del>                             **kwargs)
<add> super().__init__(trainable=trainable, name=name, dtype=dtype,
<add>                  **kwargs)
<ide>
<ide> if _is_in_keras_style_scope():
<ide> def scope_name(self):
<ide> def add_loss(self, losses, inputs=None):
<ide> previous_losses_length = len(self._losses)
<ide> previous_callable_losses_length = len(self._callable_losses)
<del> super(Layer, self).add_loss(losses, inputs=inputs)
<add> super().add_loss(losses, inputs=inputs)
<ide> if not tf.executing_eagerly():
<ide> # TODO(fchollet): deprecate collection below.
<ide> new_losses = self._losses[previous_losses_length:]
<ide> def add_loss(self, losses, inputs=None):
<ide> def _name_scope(self): # pylint: disable=method-hidden
<ide> """Determines op naming for the Layer."""
<ide> if self._keras_style:
<del> return super(Layer, self)._name_scope()
<add> return super()._name_scope()
<ide> return self._current_scope.original_name_scope
<ide>
<ide> def _set_scope(self, scope=None):
<ide> def add_weight(self,
<ide> if kwarg != 'experimental_autocast':
<ide> raise TypeError('Unknown keyword argument:', kwarg)
<ide> if self._keras_style:
<del> return super(Layer, self).add_weight(
<add> return super().add_weight(
<ide> name=name,
<ide> shape=shape,
<ide> dtype=dtype,
<ide> def _should_add_regularizer(variable, existing_variable_set):
<ide> scope.use_resource)
<ide> if initializer is None:
<ide> initializer = scope.initializer
<del> variable = super(Layer, self).add_weight(
<add> variable = super().add_weight(
<ide> name,
<ide> shape,
<ide> dtype=tf.as_dtype(dtype),
<ide> def __call__(self, inputs, *args, **kwargs):
<ide> raise ValueError(
<ide> 'scope argument not allowed when keras style layers are enabled, '
<ide> 'but saw: {}'.format(scope))
<del> return super(Layer, self).__call__(inputs, *args, **kwargs)
<add> return super().__call__(inputs, *args, **kwargs)
<ide>
<ide> self._set_scope(scope)
<ide>
<ide> def __call__(self, inputs, *args, **kwargs):
<ide> kwargs['scope'] = scope
<ide>
<ide> # Actually call layer
<del> outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
<add> outputs = super().__call__(inputs, *args, **kwargs)
<ide>
<ide> if not tf.executing_eagerly():
<ide> # Update global default collections.
<ide><path>keras/legacy_tf_layers/base_test.py
<ide> def testInputSpecNdimCheck(self):
<ide> class CustomerLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomerLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = input_spec.InputSpec(ndim=2)
<ide>
<ide> def call(self, inputs):
<ide> def testInputSpecMinNdimCheck(self):
<ide> class CustomLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = input_spec.InputSpec(min_ndim=2)
<ide>
<ide> def call(self, inputs):
<ide> def testInputSpecMaxNdimCheck(self):
<ide> class CustomerLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomerLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = input_spec.InputSpec(max_ndim=2)
<ide>
<ide> def call(self, inputs):
<ide> def testInputSpecDtypeCheck(self):
<ide> class CustomerLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomerLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = input_spec.InputSpec(dtype='float32')
<ide>
<ide> def call(self, inputs):
<ide> def testInputSpecAxesCheck(self):
<ide> class CustomerLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomerLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = input_spec.InputSpec(axes={-1: 2})
<ide>
<ide> def call(self, inputs):
<ide> def testInputSpecShapeCheck(self):
<ide> class CustomerLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomerLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = input_spec.InputSpec(shape=(None, 3))
<ide>
<ide> def call(self, inputs):
<ide> def testNoInputSpec(self):
<ide> class CustomerLayer(base_tf_layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(CustomerLayer, self).__init__()
<add> super().__init__()
<ide> self.input_spec = None
<ide>
<ide> def call(self, inputs):
<ide><path>keras/legacy_tf_layers/convolutional.py
<ide> def __init__(self, filters,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(Conv1D, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def __init__(self, filters,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(Conv2D, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def __init__(self, filters,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(Conv3D, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def __init__(self, filters,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(SeparableConv1D, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def __init__(self, filters,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(SeparableConv2D, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def __init__(self, filters,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(Conv2DTranspose, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide> def __init__(self,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(Conv3DTranspose, self).__init__(
<add> super().__init__(
<ide> filters=filters,
<ide> kernel_size=kernel_size,
<ide> strides=strides,
<ide><path>keras/legacy_tf_layers/core.py
<ide> def __init__(self, units,
<ide> trainable=True,
<ide> name=None,
<ide> **kwargs):
<del> super(Dense, self).__init__(units=units,
<add> super().__init__(units=units,
<ide> activation=activation,
<ide> use_bias=use_bias,
<ide> kernel_initializer=kernel_initializer,
<ide> def __init__(self, rate=0.5,
<ide> seed=None,
<ide> name=None,
<ide> **kwargs):
<del> super(Dropout, self).__init__(rate=rate,
<del>                               noise_shape=noise_shape,
<del>                               seed=seed,
<del>                               name=name,
<del>                               **kwargs)
<add> super().__init__(rate=rate,
<add>                  noise_shape=noise_shape,
<add>                  seed=seed,
<add>                  name=name,
<add>                  **kwargs)
<ide>
<ide> def call(self, inputs, training=False):
<del> return super(Dropout, self).call(inputs, training=training)
<add> return super().call(inputs, training=training)
<ide>
<ide>
<ide> @keras_export(v1=['keras.__internal__.legacy.layers.dropout'])
<ide><path>keras/legacy_tf_layers/normalization.py
<ide> def __init__(self,
<ide> adjustment=None,
<ide> name=None,
<ide> **kwargs):
<del> super(BatchNormalization, self).__init__(
<add> super().__init__(
<ide> axis=axis,
<ide> momentum=momentum,
<ide> epsilon=epsilon,
<ide> def __init__(self,
<ide> **kwargs)
<ide>
<ide> def call(self, inputs, training=False):
<del> return super(BatchNormalization, self).call(inputs, training=training)
<add> return super().call(inputs, training=training)
<ide>
<ide>
<ide> @keras_export(v1=['keras.__internal__.legacy.layers.batch_normalization'])
<ide><path>keras/legacy_tf_layers/pooling.py
<ide> def __init__(self, pool_size, strides,
<ide> name=None, **kwargs):
<ide> if strides is None:
<ide> raise ValueError('Argument `strides` must not be None.')
<del> super(AveragePooling1D, self).__init__(
<add> super().__init__(
<ide> pool_size=pool_size,
<ide> strides=strides,
<ide> padding=padding,
<ide> def __init__(self, pool_size, strides,
<ide> name=None, **kwargs):
<ide> if strides is None:
<ide> raise ValueError('Argument `strides` must not be None.')
<del> super(MaxPooling1D, self).__init__(
<add> super().__init__(
<ide> pool_size=pool_size,
<ide> strides=strides,
<ide> padding=padding,
<ide> def __init__(self, pool_size, strides,
<ide> name=None, **kwargs):
<ide> if strides is None:
<ide> raise ValueError('Argument `strides` must not be None.')
<del> super(AveragePooling2D, self).__init__(
<add> super().__init__(
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, name=name, **kwargs)
<ide>
<ide> def __init__(self, pool_size, strides,
<ide> name=None, **kwargs):
<ide> if strides is None:
<ide> raise ValueError('Argument `strides` must not be None.')
<del> super(MaxPooling2D, self).__init__(
<add> super().__init__(
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, name=name, **kwargs)
<ide>
<ide> def __init__(self, pool_size, strides,
<ide> name=None, **kwargs):
<ide> if strides is None:
<ide> raise ValueError('Argument `strides` must not be None.')
<del> super(AveragePooling3D, self).__init__(
<add> super().__init__(
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, name=name, **kwargs)
<ide>
<ide> def __init__(self, pool_size, strides,
<ide> name=None, **kwargs):
<ide> if strides is None:
<ide> raise ValueError('Argument `strides` must not be None.')
<del> super(MaxPooling3D, self).__init__(
<add> super().__init__(
<ide> pool_size=pool_size, strides=strides,
<ide> padding=padding, data_format=data_format, name=name, **kwargs)
<ide>
<ide><path>keras/losses.py
<ide> def get_config(self):
<ide> config = {
<ide> 'gamma': self.gamma,
<ide> }
<del> base_config = super(BinaryFocalCrossentropy, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/metrics/base_metric.py
<ide> def result(self):
<ide> """
<ide>
<ide> def __init__(self, name=None, dtype=None, **kwargs):
<del> super(Metric, self).__init__(name=name, dtype=dtype, **kwargs)
<add> super().__init__(name=name, dtype=dtype, **kwargs)
<ide> self.stateful = True # All metric layers are stateful.
<ide> self.built = True
<ide> if not base_layer_utils.v2_dtype_behavior_enabled():
<ide> def add_weight(
<ide> additional_kwargs = {}
<ide>
<ide> with tf.init_scope():
<del> return super(Metric, self).add_weight(
<add> return super().add_weight(
<ide> name=name,
<ide> shape=shape,
<ide> dtype=self._dtype if dtype is None else dtype,
<ide> class Reduce(Metric):
<ide> """
<ide>
<ide> def __init__(self, reduction, name, dtype=None):
<del> super(Reduce, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self.reduction = reduction
<ide> self.total = self.add_weight(
<ide> 'total', initializer='zeros')
<ide> class Sum(Reduce):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='sum', dtype=None):
<del> super(Sum, self).__init__(reduction=metrics_utils.Reduction.SUM,
<del>                           name=name, dtype=dtype)
<add> super().__init__(reduction=metrics_utils.Reduction.SUM,
<add>                  name=name, dtype=dtype)
<ide>
<ide>
<ide> class Mean(Reduce):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='mean', dtype=None):
<del> super(Mean, self).__init__(
<add> super().__init__(
<ide> reduction=metrics_utils.Reduction.WEIGHTED_MEAN, name=name, dtype=dtype)
<ide>
<ide>
<ide> def accuracy(y_true, y_pred):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, fn, name=None, dtype=None, **kwargs):
<del> super(MeanMetricWrapper, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self._fn = fn
<ide> self._fn_kwargs = kwargs
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide>
<ide> ag_fn = tf.__internal__.autograph.tf_convert(self._fn, tf.__internal__.autograph.control_status_ctx())
<ide> matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
<del> return super(MeanMetricWrapper, self).update_state(
<add> return super().update_state(
<ide> matches, sample_weight=sample_weight)
<ide>
<ide> def get_config(self):
<ide> def get_config(self):
<ide>
<ide> for k, v in self._fn_kwargs.items():
<ide> config[k] = backend.eval(v) if is_tensor_or_variable(v) else v
<del> base_config = super(MeanMetricWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> @classmethod
<ide> class MeanTensor(Metric):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='mean_tensor', dtype=None, shape=None):
<del> super(MeanTensor, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self._shape = None
<ide> self._total = None
<ide> self._count = None
<ide> class SumOverBatchSize(Reduce):
<ide> """
<ide>
<ide> def __init__(self, name='sum_over_batch_size', dtype=None):
<del> super(SumOverBatchSize, self).__init__(
<add> super().__init__(
<ide> reduction=metrics_utils.Reduction.SUM_OVER_BATCH_SIZE,
<ide> name=name,
<ide> dtype=dtype)
<ide> def __init__(self, fn, name=None, dtype=None, **kwargs):
<ide> dtype: (Optional) data type of the metric result.
<ide> **kwargs: The keyword arguments that are passed on to `fn`.
<ide> """
<del> super(SumOverBatchSizeMetricWrapper, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self._fn = fn
<ide> self._fn_kwargs = kwargs
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide>
<ide> ag_fn = tf.__internal__.autograph.tf_convert(self._fn, tf.__internal__.autograph.control_status_ctx())
<ide> matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
<del> return super(SumOverBatchSizeMetricWrapper, self).update_state(
<add> return super().update_state(
<ide> matches, sample_weight=sample_weight)
<ide>
<ide> def get_config(self):
<ide> config = {}
<ide> for k, v in self._fn_kwargs.items():
<ide> config[k] = backend.eval(v) if is_tensor_or_variable(v) else v
<del> base_config = super(SumOverBatchSizeMetricWrapper, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/metrics/base_metric_test.py
<ide> def test_in_keras_model(self):
<ide> class ModelWithMetric(Model):
<ide>
<ide> def __init__(self):
<del> super(ModelWithMetric, self).__init__()
<add> super().__init__()
<ide> self.dense1 = layers.Dense(
<ide> 3, activation='relu', kernel_initializer='ones')
<ide> self.dense2 = layers.Dense(
<ide> def call(self, x):
<ide> class BinaryTruePositives(metrics.Metric):
<ide>
<ide> def __init__(self, name='binary_true_positives', **kwargs):
<del> super(BinaryTruePositives, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> self.true_positives = self.add_weight(name='tp', initializer='zeros')
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> def result(self):
<ide> class BinaryTruePositivesViaControlFlow(metrics.Metric):
<ide>
<ide> def __init__(self, name='binary_true_positives', **kwargs):
<del> super(BinaryTruePositivesViaControlFlow, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> self.true_positives = self.add_weight(name='tp', initializer='zeros')
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> def test_metric_not_tracked_as_sublayer_in_layer(self):
<ide> class MyLayer(base_layer.Layer):
<ide>
<ide> def __init__(self, **kwargs):
<del> super(MyLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.mean_obj = metrics.Mean(name='my_mean_obj')
<ide>
<ide> def call(self, x):
<ide> def test_metric_not_tracked_as_sublayer_in_model(self):
<ide> class MyModel(training_module.Model):
<ide>
<ide> def __init__(self, **kwargs):
<del> super(MyModel, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.mean_obj = metrics.Mean(name='my_mean_obj')
<ide>
<ide> def call(self, x):
<ide><path>keras/metrics/metrics.py
<ide> class MeanRelativeError(base_metric.Mean):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, normalizer, name=None, dtype=None):
<del> super(MeanRelativeError, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> normalizer = tf.cast(normalizer, self._dtype)
<ide> self.normalizer = normalizer
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> relative_errors = tf.math.divide_no_nan(
<ide> tf.abs(y_true - y_pred), self.normalizer)
<ide>
<del> return super(MeanRelativeError, self).update_state(
<add> return super().update_state(
<ide> relative_errors, sample_weight=sample_weight)
<ide>
<ide> def get_config(self):
<ide> n = self.normalizer
<ide> config = {'normalizer': backend.eval(n) if is_tensor_or_variable(n) else n}
<del> base_config = super(MeanRelativeError, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class Accuracy(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='accuracy', dtype=None):
<del> super(Accuracy, self).__init__(accuracy, name, dtype=dtype)
<add> super().__init__(accuracy, name, dtype=dtype)
<ide>
<ide>
<ide> @keras_export('keras.metrics.BinaryAccuracy')
<ide> class BinaryAccuracy(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='binary_accuracy', dtype=None, threshold=0.5):
<del> super(BinaryAccuracy, self).__init__(
<add> super().__init__(
<ide> metrics_utils.binary_matches, name, dtype=dtype, threshold=threshold)
<ide>
<ide>
<ide> class CategoricalAccuracy(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='categorical_accuracy', dtype=None):
<del> super(CategoricalAccuracy, self).__init__(
<add> super().__init__(
<ide> lambda y_true, y_pred: metrics_utils.sparse_categorical_matches( # pylint: disable=g-long-lambda
<ide> tf.math.argmax(y_true, axis=-1), y_pred),
<ide> name,
<ide> class SparseCategoricalAccuracy(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='sparse_categorical_accuracy', dtype=None):
<del> super(SparseCategoricalAccuracy, self).__init__(
<add> super().__init__(
<ide> metrics_utils.sparse_categorical_matches, name, dtype=dtype)
<ide>
<ide>
<ide> class TopKCategoricalAccuracy(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, k=5, name='top_k_categorical_accuracy', dtype=None):
<del> super(TopKCategoricalAccuracy, self).__init__(
<add> super().__init__(
<ide> lambda yt, yp, k: metrics_utils.sparse_top_k_categorical_matches( # pylint: disable=g-long-lambda
<ide> tf.math.argmax(yt, axis=-1), yp, k),
<ide> name,
<ide> class SparseTopKCategoricalAccuracy(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, k=5, name='sparse_top_k_categorical_accuracy', dtype=None):
<del> super(SparseTopKCategoricalAccuracy, self).__init__(
<add> super().__init__(
<ide> metrics_utils.sparse_top_k_categorical_matches, name, dtype=dtype, k=k)
<ide>
<ide>
<ide> def __init__(self,
<ide> thresholds=None,
<ide> name=None,
<ide> dtype=None):
<del> super(_ConfusionMatrixConditionCount, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self._confusion_matrix_cond = confusion_matrix_cond
<ide> self.init_thresholds = thresholds
<ide> self.thresholds = metrics_utils.parse_init_thresholds(
<ide> def reset_state(self):
<ide>
<ide> def get_config(self):
<ide> config = {'thresholds': self.init_thresholds}
<del> base_config = super(_ConfusionMatrixConditionCount, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class FalsePositives(_ConfusionMatrixConditionCount):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, thresholds=None, name=None, dtype=None):
<del> super(FalsePositives, self).__init__(
<add> super().__init__(
<ide> confusion_matrix_cond=metrics_utils.ConfusionMatrix.FALSE_POSITIVES,
<ide> thresholds=thresholds,
<ide> name=name,
<ide> class FalseNegatives(_ConfusionMatrixConditionCount):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, thresholds=None, name=None, dtype=None):
<del> super(FalseNegatives, self).__init__(
<add> super().__init__(
<ide> confusion_matrix_cond=metrics_utils.ConfusionMatrix.FALSE_NEGATIVES,
<ide> thresholds=thresholds,
<ide> name=name,
<ide> class TrueNegatives(_ConfusionMatrixConditionCount):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, thresholds=None, name=None, dtype=None):
<del> super(TrueNegatives, self).__init__(
<add> super().__init__(
<ide> confusion_matrix_cond=metrics_utils.ConfusionMatrix.TRUE_NEGATIVES,
<ide> thresholds=thresholds,
<ide> name=name,
<ide> class TruePositives(_ConfusionMatrixConditionCount):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, thresholds=None, name=None, dtype=None):
<del> super(TruePositives, self).__init__(
<add> super().__init__(
<ide> confusion_matrix_cond=metrics_utils.ConfusionMatrix.TRUE_POSITIVES,
<ide> thresholds=thresholds,
<ide> name=name,
<ide> def __init__(self,
<ide> class_id=None,
<ide> name=None,
<ide> dtype=None):
<del> super(Precision, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self.init_thresholds = thresholds
<ide> self.top_k = top_k
<ide> self.class_id = class_id
<ide> def get_config(self):
<ide> 'top_k': self.top_k,
<ide> 'class_id': self.class_id
<ide> }
<del> base_config = super(Precision, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> class_id=None,
<ide> name=None,
<ide> dtype=None):
<del> super(Recall, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self.init_thresholds = thresholds
<ide> self.top_k = top_k
<ide> self.class_id = class_id
<ide> def get_config(self):
<ide> 'top_k': self.top_k,
<ide> 'class_id': self.class_id
<ide> }
<del> base_config = super(Recall, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> class_id=None,
<ide> name=None,
<ide> dtype=None):
<del> super(SensitivitySpecificityBase, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> if num_thresholds <= 0:
<ide> raise ValueError(
<ide> 'Argument `num_thresholds` must be an integer > 0. '
<ide> def reset_state(self):
<ide>
<ide> def get_config(self):
<ide> config = {'class_id': self.class_id}
<del> base_config = super(SensitivitySpecificityBase, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide> def _find_max_under_constraint(self, constrained, dependent, predicate):
<ide> def __init__(self,
<ide> f'Received: specificity={specificity}')
<ide> self.specificity = specificity
<ide> self.num_thresholds = num_thresholds
<del> super(SensitivityAtSpecificity, self).__init__(
<add> super().__init__(
<ide> specificity,
<ide> num_thresholds=num_thresholds,
<ide> class_id=class_id,
<ide> def get_config(self):
<ide> 'num_thresholds': self.num_thresholds,
<ide> 'specificity': self.specificity
<ide> }
<del> base_config = super(SensitivityAtSpecificity, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> f'Received: sensitivity={sensitivity}')
<ide> self.sensitivity = sensitivity
<ide> self.num_thresholds = num_thresholds
<del> super(SpecificityAtSensitivity, self).__init__(
<add> super().__init__(
<ide> sensitivity,
<ide> num_thresholds=num_thresholds,
<ide> class_id=class_id,
<ide> def get_config(self):
<ide> 'num_thresholds': self.num_thresholds,
<ide> 'sensitivity': self.sensitivity
<ide> }
<del> base_config = super(SpecificityAtSensitivity, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> f'Received: recall={recall}')
<ide> self.recall = recall
<ide> self.num_thresholds = num_thresholds
<del> super(PrecisionAtRecall, self).__init__(
<add> super().__init__(
<ide> value=recall,
<ide> num_thresholds=num_thresholds,
<ide> class_id=class_id,
<ide> def result(self):
<ide>
<ide> def get_config(self):
<ide> config = {'num_thresholds': self.num_thresholds, 'recall': self.recall}
<del> base_config = super(PrecisionAtRecall, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> f'Received: precision={precision}')
<ide> self.precision = precision
<ide> self.num_thresholds = num_thresholds
<del> super(RecallAtPrecision, self).__init__(
<add> super().__init__(
<ide> value=precision,
<ide> num_thresholds=num_thresholds,
<ide> class_id=class_id,
<ide> def result(self):
<ide> def get_config(self):
<ide> config = {'num_thresholds': self.num_thresholds,
<ide> 'precision': self.precision}
<del> base_config = super(RecallAtPrecision, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> else:
<ide> self.summation_method = metrics_utils.AUCSummationMethod.from_str(
<ide> summation_method)
<del> super(AUC, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide>
<ide> # Handle multilabel arguments.
<ide> self.multi_label = multi_label
<ide> def get_config(self):
<ide> # were initialized. This ensures that a metric initialized from this
<ide> # config has the same thresholds.
<ide> config['thresholds'] = self.thresholds[1:-1]
<del> base_config = super(AUC, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class CosineSimilarity(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='cosine_similarity', dtype=None, axis=-1):
<del> super(CosineSimilarity, self).__init__(
<add> super().__init__(
<ide> cosine_similarity, name, dtype=dtype, axis=axis)
<ide>
<ide>
<ide> class MeanAbsoluteError(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='mean_absolute_error', dtype=None):
<del> super(MeanAbsoluteError, self).__init__(
<add> super().__init__(
<ide> mean_absolute_error, name, dtype=dtype)
<ide>
<ide>
<ide> class MeanAbsolutePercentageError(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='mean_absolute_percentage_error', dtype=None):
<del> super(MeanAbsolutePercentageError, self).__init__(
<add> super().__init__(
<ide> mean_absolute_percentage_error, name, dtype=dtype)
<ide>
<ide>
<ide> class MeanSquaredError(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='mean_squared_error', dtype=None):
<del> super(MeanSquaredError, self).__init__(
<add> super().__init__(
<ide> mean_squared_error, name, dtype=dtype)
<ide>
<ide>
<ide> class MeanSquaredLogarithmicError(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='mean_squared_logarithmic_error', dtype=None):
<del> super(MeanSquaredLogarithmicError, self).__init__(
<add> super().__init__(
<ide> mean_squared_logarithmic_error, name, dtype=dtype)
<ide>
<ide>
<ide> class Hinge(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='hinge', dtype=None):
<del> super(Hinge, self).__init__(hinge, name, dtype=dtype)
<add> super().__init__(hinge, name, dtype=dtype)
<ide>
<ide>
<ide> @keras_export('keras.metrics.SquaredHinge')
<ide> class SquaredHinge(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='squared_hinge', dtype=None):
<del> super(SquaredHinge, self).__init__(squared_hinge, name, dtype=dtype)
<add> super().__init__(squared_hinge, name, dtype=dtype)
<ide>
<ide>
<ide> @keras_export('keras.metrics.CategoricalHinge')
<ide> class CategoricalHinge(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='categorical_hinge', dtype=None):
<del> super(CategoricalHinge, self).__init__(categorical_hinge, name, dtype=dtype)
<add> super().__init__(categorical_hinge, name, dtype=dtype)
<ide>
<ide>
<ide> @keras_export('keras.metrics.RootMeanSquaredError')
<ide> class RootMeanSquaredError(base_metric.Mean):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='root_mean_squared_error', dtype=None):
<del> super(RootMeanSquaredError, self).__init__(name, dtype=dtype)
<add> super().__init__(name, dtype=dtype)
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> """Accumulates root mean squared error statistics.
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> y_pred, y_true = losses_utils.squeeze_or_expand_dimensions(
<ide> y_pred, y_true)
<ide> error_sq = tf.math.squared_difference(y_pred, y_true)
<del> return super(RootMeanSquaredError, self).update_state(
<add> return super().update_state(
<ide> error_sq, sample_weight=sample_weight)
<ide>
<ide> def result(self):
<ide> class LogCoshError(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='logcosh', dtype=None):
<del> super(LogCoshError, self).__init__(logcosh, name, dtype=dtype)
<add> super().__init__(logcosh, name, dtype=dtype)
<ide>
<ide>
<ide> @keras_export('keras.metrics.Poisson')
<ide> class Poisson(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='poisson', dtype=None):
<del> super(Poisson, self).__init__(poisson, name, dtype=dtype)
<add> super().__init__(poisson, name, dtype=dtype)
<ide>
<ide>
<ide> @keras_export('keras.metrics.KLDivergence')
<ide> class KLDivergence(base_metric.MeanMetricWrapper):
<ide>
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, name='kullback_leibler_divergence', dtype=None):
<del> super(KLDivergence, self).__init__(
<add> super().__init__(
<ide> kullback_leibler_divergence, name, dtype=dtype)
<ide>
<ide>
<ide> class _IoUBase(base_metric.Metric):
<ide> """
<ide>
<ide> def __init__(self, num_classes, name=None, dtype=None):
<del> super(_IoUBase, self).__init__(name=name, dtype=dtype)
<add> super().__init__(name=name, dtype=dtype)
<ide> self.num_classes = num_classes
<ide>
<ide> # Variable to accumulate the predictions in the confusion matrix.
<ide> def __init__(
<ide> name=None,
<ide> dtype=None,
<ide> ):
<del> super(IoU, self).__init__(
<add> super().__init__(
<ide> name=name,
<ide> num_classes=num_classes,
<ide> dtype=dtype,
<ide> def get_config(self):
<ide> 'num_classes': self.num_classes,
<ide> 'target_class_ids': self.target_class_ids,
<ide> }
<del> base_config = super(IoU, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(
<ide> dtype=None,
<ide> ):
<ide>
<del> super(BinaryIoU, self).__init__(
<add> super().__init__(
<ide> num_classes=2,
<ide> target_class_ids=target_class_ids,
<ide> name=name,
<ide> class MeanIoU(IoU):
<ide> @dtensor_utils.inject_mesh
<ide> def __init__(self, num_classes, name=None, dtype=None):
<ide> target_class_ids = list(range(num_classes))
<del> super(MeanIoU, self).__init__(
<add> super().__init__(
<ide> name=name,
<ide> num_classes=num_classes,
<ide> target_class_ids=target_class_ids,
<ide> def __init__(
<ide> name=None,
<ide> dtype=None,
<ide> ):
<del> super(OneHotIoU, self).__init__(
<add> super().__init__(
<ide> num_classes=num_classes,
<ide> target_class_ids=target_class_ids,
<ide> name=name,
<ide> def __init__(
<ide> name=None,
<ide> dtype=None,
<ide> ):
<del> super(OneHotMeanIoU, self).__init__(
<add> super().__init__(
<ide> num_classes=num_classes,
<ide> name=name,
<ide> dtype=dtype,
<ide> def __init__(self,
<ide> dtype=None,
<ide> from_logits=False,
<ide> label_smoothing=0):
<del> super(BinaryCrossentropy, self).__init__(
<add> super().__init__(
<ide> binary_crossentropy,
<ide> name,
<ide> dtype=dtype,
<ide> def __init__(self,
<ide> dtype=None,
<ide> from_logits=False,
<ide> label_smoothing=0):
<del> super(CategoricalCrossentropy, self).__init__(
<add> super().__init__(
<ide> categorical_crossentropy,
<ide> name,
<ide> dtype=dtype,
<ide> def __init__(self,
<ide> dtype=None,
<ide> from_logits=False,
<ide> axis=-1):
<del> super(SparseCategoricalCrossentropy, self).__init__(
<add> super().__init__(
<ide> sparse_categorical_crossentropy,
<ide> name,
<ide> dtype=dtype,
<ide><path>keras/metrics/metrics_test.py
<ide> def test_axis(self):
<ide> class BinaryTruePositives(metrics.Metric):
<ide>
<ide> def __init__(self, name='binary_true_positives', **kwargs):
<del> super(BinaryTruePositives, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> self.true_positives = self.add_weight(name='tp', initializer='zeros')
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide> def result(self):
<ide> class BinaryTruePositivesViaControlFlow(metrics.Metric):
<ide>
<ide> def __init__(self, name='binary_true_positives', **kwargs):
<del> super(BinaryTruePositivesViaControlFlow, self).__init__(name=name, **kwargs)
<add> super().__init__(name=name, **kwargs)
<ide> self.true_positives = self.add_weight(name='tp', initializer='zeros')
<ide>
<ide> def update_state(self, y_true, y_pred, sample_weight=None):
<ide><path>keras/mixed_precision/autocast_variable_test.py
<ide> class AutoCastVariableTest(tf.test.TestCase, parameterized.TestCase):
<ide>
<ide> def setUp(self):
<ide> set_cpu_logical_devices_to_at_least(3)
<del> super(AutoCastVariableTest, self).setUp()
<add> super().setUp()
<ide>
<ide> @tf.__internal__.distribute.combinations.generate(maybe_distribute)
<ide> def test_read(self, distribution):
<ide><path>keras/mixed_precision/layer_correctness_test.py
<ide> def _create_normalization_layer_without_adapt():
<ide> class LayerCorrectnessTest(test_combinations.TestCase):
<ide>
<ide> def setUp(self):
<del> super(LayerCorrectnessTest, self).setUp()
<add> super().setUp()
<ide> # Set two virtual CPUs to test MirroredStrategy with multiple devices
<ide> cpus = tf.config.list_physical_devices('CPU')
<ide> tf.config.set_logical_device_configuration(cpus[0], [
<ide><path>keras/mixed_precision/layer_test.py
<ide> class MultiplyLayerWithFunction(mp_test_util.MultiplyLayer):
<ide>
<ide> @tf.function
<ide> def _multiply(self, x, y):
<del> return super(MultiplyLayerWithFunction, self)._multiply(x, y)
<add> return super()._multiply(x, y)
<ide>
<ide>
<ide> # If called outside any strategy.scope() calls, this will return the default
<ide><path>keras/mixed_precision/loss_scale_optimizer.py
<ide> def __init__(self,
<ide> growth_steps,
<ide> multiplier):
<ide> """Creates the dynamic loss scale."""
<del> super(_DynamicLossScaleState, self).__init__()
<add> super().__init__()
<ide> self._initial_loss_scale = float(initial_loss_scale)
<ide> self._growth_steps = int(growth_steps)
<ide> self._multiplier = float(multiplier)
<ide> def _trackable_children(self, save_type='checkpoint', **kwargs):
<ide> if g == graph_key:
<ide> weights[name] = v
<ide> weights.update(
<del> super(_DynamicLossScaleState,
<del> self)._trackable_children(save_type, **kwargs))
<add> super()._trackable_children(save_type, **kwargs))
<ide> return weights
<ide>
<ide> def _lookup_dependency(self, name):
<ide> """From Trackable. Find a weight in the current graph."""
<del> unconditional = super(_DynamicLossScaleState, self)._lookup_dependency(name)
<add> unconditional = super()._lookup_dependency(name)
<ide> if unconditional is not None:
<ide> return unconditional
<ide> if tf.executing_eagerly():
<ide> def __getattribute__(self, name):
<ide> raise e
<ide>
<ide> def __dir__(self):
<del> result = set(super(LossScaleOptimizer, self).__dir__())
<add> result = set(super().__dir__())
<ide> if '_optimizer' in result:
<ide> result |= self._optimizer._hyper.keys()
<ide> if 'learning_rate' in self._optimizer._hyper.keys():
<ide> def __setattr__(self, name, value):
<ide> and not has_attribute):
<ide> self._optimizer._set_hyper(name, value)
<ide> else:
<del> super(LossScaleOptimizer, self).__setattr__(name, value)
<add> super().__setattr__(name, value)
<ide>
<ide> # Explicitly delegate learning_rate. Normally hyperparameters are delegated in
<ide> # __getattribute__, but if a hyperparameter is not in self._optimizer._hyper
<ide><path>keras/mixed_precision/loss_scale_optimizer_test.py
<ide> def apply_gradients(self,
<ide> experimental_aggregate_gradients=True):
<ide> for grad, _ in grads_and_vars:
<ide> outer_self.assertIsInstance(grad, tf.Tensor)
<del> return super(MyOptimizer,
<del>              self).apply_gradients(grads_and_vars, name,
<del>                                    experimental_aggregate_gradients)
<add> return super().apply_gradients(grads_and_vars, name,
<add>                                experimental_aggregate_gradients)
<ide>
<ide> with create_mirrored_strategy().scope() as strategy:
<ide><path>keras/mixed_precision/mixed_precision_graph_rewrite_test.py
<ide> class MixedPrecisionTest(test_combinations.TestCase):
<ide> IGNORE_PERF_VAR = 'TF_AUTO_MIXED_PRECISION_GRAPH_REWRITE_IGNORE_PERFORMANCE'
<ide>
<ide> def setUp(self):
<del> super(MixedPrecisionTest, self).setUp()
<add> super().setUp()
<ide> # Enable the tests to be run on pre-Volta GPUs by telling the grappler pass
<ide> # to ignore performance and always transform the graph.
<ide> self._original_ignore_perf_value = os.getenv(self.IGNORE_PERF_VAR)
<ide> def tearDown(self):
<ide> del os.environ[self.IGNORE_PERF_VAR]
<ide>
<ide> tf.compat.v1.mixed_precision.disable_mixed_precision_graph_rewrite()
<del> super(MixedPrecisionTest, self).tearDown()
<add> super().tearDown()
<ide>
<ide> @test_combinations.generate(
<ide> test_combinations.combine(mode=['graph', 'eager']))
<ide><path>keras/mixed_precision/test_util.py
<ide> class AssertTypeLayer(base_layer.Layer):
<ide> def __init__(self, assert_type=None, **kwargs):
<ide> self._assert_type = (tf.as_dtype(assert_type).name if assert_type
<ide> else None)
<del> super(AssertTypeLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide>
<ide> def assert_input_types(self, inputs):
<ide> """Asserts `inputs` are of the correct type. Should be called in call()."""
<ide> def __init__(self,
<ide>
<ide> self._use_operator = use_operator
<ide> self._var_name = var_name
<del> super(MultiplyLayer, self).__init__(
<add> super().__init__(
<ide> activity_regularizer=self._activity_regularizer, **kwargs)
<ide>
<ide> def build(self, _):
<ide> def _multiply(self, x, y):
<ide> return tf.multiply(x, y)
<ide>
<ide> def get_config(self):
<del> config = super(MultiplyLayer, self).get_config()
<add> config = super().get_config()
<ide> config['regularizer'] = regularizers.serialize(self._regularizer)
<ide> config['activity_regularizer'] = regularizers.serialize(
<ide> self._activity_regularizer)
<ide><path>keras/models/cloning_test.py
<ide> class TestModel(keras.Model):
<ide>
<ide> def __init__(self, n_outputs=4, trainable=True):
<ide> """A test class with one dense layer and number of outputs as a variable."""
<del> super(TestModel, self).__init__()
<add> super().__init__()
<ide> self.layer1 = keras.layers.Dense(n_outputs)
<ide> self.n_outputs = tf.Variable(n_outputs, trainable=trainable)
<ide>
<ide><path>keras/optimizers/optimizer_experimental/adadelta.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='Adadelta',
<ide> **kwargs):
<del> super(Adadelta, self).__init__(
<add> super().__init__(
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> global_clipnorm=global_clipnorm,
<ide> def rms(x):
<ide> variable.assign_add(lr * delta_var)
<ide>
<ide> def get_config(self):
<del> config = super(Adadelta, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_experimental/adagrad.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='Adagrad',
<ide> **kwargs):
<del> super(Adagrad, self).__init__(
<add> super().__init__(
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> global_clipnorm=global_clipnorm,
<ide> def update_step(self, grad, variable):
<ide> variable.assign_sub(lr * grad / tf.sqrt(accumulator + self.epsilon))
<ide>
<ide> def get_config(self):
<del> config = super(Adagrad, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_experimental/adam.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='Adam',
<ide> **kwargs):
<del> super(Adam, self).__init__(
<add> super().__init__(
<ide> name=name,
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> def update_step(self, gradient, variable):
<ide> variable.assign_sub((m * alpha) / (tf.sqrt(v) + self.epsilon))
<ide>
<ide> def get_config(self):
<del> config = super(Adam, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_experimental/adamax.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='Adamax',
<ide> **kwargs):
<del> super(Adamax, self).__init__(
<add> super().__init__(
<ide> name=name,
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> def update_step(self, gradient, variable):
<ide> variable.assign_sub((lr * m) / ((1 - beta_1_power) * (u + self.epsilon)))
<ide>
<ide> def get_config(self):
<del> config = super(Adamax, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_experimental/adamw.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='AdamW',
<ide> **kwargs):
<del> super(AdamW, self).__init__(
<add> super().__init__(
<ide> name=name,
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> def update_step(self, gradient, variable):
<ide> variable.assign_sub((m * alpha) / (tf.sqrt(v) + self.epsilon))
<ide>
<ide> def get_config(self):
<del> config = super(AdamW, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_experimental/optimizer.py
<ide> def _var_key(self, variable):
<ide> # TODO(b/197554203): replace _distributed_container() with a public api.
<ide> if hasattr(variable, "_distributed_container"):
<ide> variable = variable._distributed_container()
<del> return super(Optimizer, self)._var_key(variable)
<add> return super()._var_key(variable)
<ide>
<ide> def aggregate_gradients(self, grads_and_vars):
<ide> """Aggregate gradients on all devices.
<ide> def apply_grad_to_update_var(var, grad):
<ide> class RestoredOptimizer(Optimizer):
<ide>
<ide> def __init__(self):
<del> super(RestoredOptimizer, self).__init__("RestoredOptimizer")
<add> super().__init__("RestoredOptimizer")
<ide>
<ide> def get_config(self):
<ide> raise NotImplementedError(
<ide><path>keras/optimizers/optimizer_experimental/rmsprop.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='RMSprop',
<ide> **kwargs):
<del> super(RMSprop, self).__init__(
<add> super().__init__(
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> global_clipnorm=global_clipnorm,
<ide> def update_step(self, gradient, variable):
<ide> variable.assign_add(-lr * transformed_grad)
<ide>
<ide> def get_config(self):
<del> config = super(RMSprop, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_experimental/sgd.py
<ide> def __init__(self,
<ide> jit_compile=True,
<ide> name='SGD',
<ide> **kwargs):
<del> super(SGD, self).__init__(
<add> super().__init__(
<ide> name=name,
<ide> clipnorm=clipnorm,
<ide> clipvalue=clipvalue,
<ide> def update_step(self, gradient, variable):
<ide> variable.assign_add(-gradient * lr)
<ide>
<ide> def get_config(self):
<del> config = super(SGD, self).get_config()
<add> config = super().get_config()
<ide>
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter(self._learning_rate),
<ide><path>keras/optimizers/optimizer_v1.py
<ide> class SGD(Optimizer):
<ide> """
<ide>
<ide> def __init__(self, lr=0.01, momentum=0., decay=0., nesterov=False, **kwargs):
<del> super(SGD, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.iterations = backend.variable(0, dtype='int64', name='iterations')
<ide> self.lr = backend.variable(lr, name='lr')
<ide> def get_config(self):
<ide> 'decay': float(backend.get_value(self.decay)),
<ide> 'nesterov': self.nesterov
<ide> }
<del> base_config = super(SGD, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class RMSprop(Optimizer):
<ide> """
<ide>
<ide> def __init__(self, lr=0.001, rho=0.9, epsilon=None, decay=0., **kwargs):
<del> super(RMSprop, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.lr = backend.variable(lr, name='lr')
<ide> self.rho = backend.variable(rho, name='rho')
<ide> def get_config(self):
<ide> 'decay': float(backend.get_value(self.decay)),
<ide> 'epsilon': self.epsilon
<ide> }
<del> base_config = super(RMSprop, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class Adagrad(Optimizer):
<ide> """
<ide>
<ide> def __init__(self, lr=0.01, epsilon=None, decay=0., **kwargs):
<del> super(Adagrad, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.lr = backend.variable(lr, name='lr')
<ide> self.decay = backend.variable(decay, name='decay')
<ide> def get_config(self):
<ide> 'decay': float(backend.get_value(self.decay)),
<ide> 'epsilon': self.epsilon
<ide> }
<del> base_config = super(Adagrad, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> class Adadelta(Optimizer):
<ide> """
<ide>
<ide> def __init__(self, lr=1.0, rho=0.95, epsilon=None, decay=0., **kwargs):
<del> super(Adadelta, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.lr = backend.variable(lr, name='lr')
<ide> self.decay = backend.variable(decay, name='decay')
<ide> def get_config(self):
<ide> 'decay': float(backend.get_value(self.decay)),
<ide> 'epsilon': self.epsilon
<ide> }
<del> base_config = super(Adadelta, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> decay=0.,
<ide> amsgrad=False,
<ide> **kwargs):
<del> super(Adam, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.iterations = backend.variable(0, dtype='int64', name='iterations')
<ide> self.lr = backend.variable(lr, name='lr')
<ide> def get_config(self):
<ide> 'epsilon': self.epsilon,
<ide> 'amsgrad': self.amsgrad
<ide> }
<del> base_config = super(Adam, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> epsilon=None,
<ide> decay=0.,
<ide> **kwargs):
<del> super(Adamax, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.iterations = backend.variable(0, dtype='int64', name='iterations')
<ide> self.lr = backend.variable(lr, name='lr')
<ide> def get_config(self):
<ide> 'decay': float(backend.get_value(self.decay)),
<ide> 'epsilon': self.epsilon
<ide> }
<del> base_config = super(Adamax, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide> def __init__(self,
<ide> epsilon=None,
<ide> schedule_decay=0.004,
<ide> **kwargs):
<del> super(Nadam, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> with backend.name_scope(self.__class__.__name__):
<ide> self.iterations = backend.variable(0, dtype='int64', name='iterations')
<ide> self.m_schedule = backend.variable(1., name='m_schedule')
<ide> def get_config(self):
<ide> 'epsilon': self.epsilon,
<ide> 'schedule_decay': self.schedule_decay
<ide> }
<del> base_config = super(Nadam, self).get_config()
<add> base_config = super().get_config()
<ide> return dict(list(base_config.items()) + list(config.items()))
<ide>
<ide>
<ide><path>keras/optimizers/optimizer_v2/adadelta.py
<ide> def __init__(self,
<ide> epsilon=1e-7,
<ide> name='Adadelta',
<ide> **kwargs):
<del> super(Adadelta, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
<ide> self._set_hyper('decay', self._initial_decay)
<ide> self._set_hyper('rho', rho)
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(v, 'accum_var')
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(Adadelta, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide> apply_state[(var_device, var_dtype)].update(
<ide> dict(
<ide> epsilon=tf.convert_to_tensor(
<ide> def set_weights(self, weights):
<ide> # iteration to 0.
<ide> if len(params) == len(weights) + 1:
<ide> weights = [np.array(0)] + weights
<del> super(Adadelta, self).set_weights(weights)
<add> super().set_weights(weights)
<ide>
<ide> def _resource_apply_dense(self, grad, var, apply_state=None):
<ide> var_device, var_dtype = var.device, var.dtype.base_dtype
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> use_locking=self._use_locking)
<ide>
<ide> def get_config(self):
<del> config = super(Adadelta, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter('learning_rate'),
<ide> 'decay': self._initial_decay,
<ide><path>keras/optimizers/optimizer_v2/adagrad.py
<ide> def __init__(self,
<ide> initial_accumulator_value)
<ide> if epsilon is None:
<ide> epsilon = backend_config.epsilon()
<del> super(Adagrad, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
<ide> self._set_hyper('decay', self._initial_decay)
<ide> self._initial_accumulator_value = initial_accumulator_value
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, 'accumulator', init)
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(Adagrad, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide> apply_state[(var_device, var_dtype)].update(
<ide> dict(
<ide> epsilon=tf.convert_to_tensor(
<ide> def set_weights(self, weights):
<ide> # iteration to 0.
<ide> if len(params) == len(weights) + 1:
<ide> weights = [np.array(0)] + weights
<del> super(Adagrad, self).set_weights(weights)
<add> super().set_weights(weights)
<ide>
<ide> @classmethod
<ide> def from_config(cls, config, custom_objects=None):
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> use_locking=self._use_locking)
<ide>
<ide> def get_config(self):
<del> config = super(Adagrad, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter('learning_rate'),
<ide> 'decay': self._initial_decay,
<ide><path>keras/optimizers/optimizer_v2/adam.py
<ide> def __init__(self,
<ide> amsgrad=False,
<ide> name='Adam',
<ide> **kwargs):
<del> super(Adam, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
<ide> self._set_hyper('decay', self._initial_decay)
<ide> self._set_hyper('beta_1', beta_1)
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, 'vhat')
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(Adam, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide>
<ide> local_step = tf.cast(self.iterations + 1, var_dtype)
<ide> beta_1_t = tf.identity(self._get_hyper('beta_1', var_dtype))
<ide> def set_weights(self, weights):
<ide> num_vars = int((len(params) - 1) / 2)
<ide> if len(weights) == 3 * num_vars + 1:
<ide> weights = weights[:len(params)]
<del> super(Adam, self).set_weights(weights)
<add> super().set_weights(weights)
<ide>
<ide> def _resource_apply_dense(self, grad, var, apply_state=None):
<ide> var_device, var_dtype = var.device, var.dtype.base_dtype
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> return tf.group(*[var_update, m_t, v_t, v_hat_t])
<ide>
<ide> def get_config(self):
<del> config = super(Adam, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter('learning_rate'),
<ide> 'decay': self._initial_decay,
<ide> def __init__(self,
<ide> compatibility, recommended to use `learning_rate` instead.
<ide> """
<ide>
<del> super(NonFusedAdam, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
<ide> self._set_hyper('decay', self._initial_decay)
<ide> self._set_hyper('beta_1', beta_1)
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, 'vhat')
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(NonFusedAdam, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide>
<ide> local_step = tf.cast(self.iterations + 1, var_dtype)
<ide> beta_1_t = tf.identity(self._get_hyper('beta_1', var_dtype))
<ide> def set_weights(self, weights):
<ide> num_vars = int((len(params) - 1) / 2)
<ide> if len(weights) == 3 * num_vars + 1:
<ide> weights = weights[:len(params)]
<del> super(NonFusedAdam, self).set_weights(weights)
<add> super().set_weights(weights)
<ide>
<ide> @tf.function(jit_compile=True)
<ide> def _resource_apply_dense(self, grad, var, apply_state=None):
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> (tf.sqrt(v_hat) + coefficients['epsilon']))
<ide>
<ide> def get_config(self):
<del> config = super(NonFusedAdam, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter('learning_rate'),
<ide> 'decay': self._initial_decay,
<ide><path>keras/optimizers/optimizer_v2/adamax.py
<ide> def __init__(self,
<ide> epsilon=1e-7,
<ide> name='Adamax',
<ide> **kwargs):
<del> super(Adamax, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
<ide> self._set_hyper('decay', self._initial_decay)
<ide> self._set_hyper('beta_1', beta_1)
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, 'v') # Create slots for the second moments.
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(Adamax, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide>
<ide> local_step = tf.cast(self.iterations + 1, var_dtype)
<ide> beta_1_t = tf.identity(self._get_hyper('beta_1', var_dtype))
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> return tf.group(*[var_update, m_t, v_t])
<ide>
<ide> def get_config(self):
<del> config = super(Adamax, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter('learning_rate'),
<ide> 'decay': self._initial_decay,
<ide><path>keras/optimizers/optimizer_v2/ftrl.py
<ide> def __init__(self,
<ide> l2_shrinkage_regularization_strength=0.0,
<ide> beta=0.0,
<ide> **kwargs):
<del> super(Ftrl, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide>
<ide> if initial_accumulator_value < 0.0:
<ide> raise ValueError(
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, 'linear')
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(Ftrl, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide> apply_state[(var_device, var_dtype)].update(
<ide> dict(
<ide> learning_rate_power=tf.identity(
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> use_locking=self._use_locking)
<ide>
<ide> def get_config(self):
<del> config = super(Ftrl, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate':
<ide> self._serialize_hyperparameter('learning_rate'),
<ide><path>keras/optimizers/optimizer_v2/gradient_descent.py
<ide> def __init__(self,
<ide> nesterov=False,
<ide> name="SGD",
<ide> **kwargs):
<del> super(SGD, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
<ide> self._set_hyper("decay", self._initial_decay)
<ide>
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, "momentum")
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(SGD, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide> apply_state[(var_device, var_dtype)]["momentum"] = tf.identity(
<ide> self._get_hyper("momentum", var_dtype))
<ide>
<ide> def _resource_apply_dense(self, grad, var, apply_state=None):
<ide> def _resource_apply_sparse_duplicate_indices(self, grad, var, indices,
<ide> **kwargs):
<ide> if self._momentum:
<del> return super(SGD, self)._resource_apply_sparse_duplicate_indices(
<add> return super()._resource_apply_sparse_duplicate_indices(
<ide> grad, var, indices, **kwargs)
<ide> else:
<ide> var_device, var_dtype = var.device, var.dtype.base_dtype
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> use_nesterov=self.nesterov)
<ide>
<ide> def get_config(self):
<del> config = super(SGD, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> "learning_rate": self._serialize_hyperparameter("learning_rate"),
<ide> "decay": self._initial_decay,
<ide><path>keras/optimizers/optimizer_v2/nadam.py
<ide> def __init__(self,
<ide> 'tf.keras.optimizers.LearningRateSchedules as the '
<ide> 'learning rate.')
<ide>
<del> super(Nadam, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper('learning_rate', kwargs.get('lr', learning_rate))
<ide> self._set_hyper('decay', self._initial_decay)
<ide> self._set_hyper('beta_1', beta_1)
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<ide> def _prepare(self, var_list):
<ide> # Get the value of the momentum cache before starting to apply gradients.
<ide> self._m_cache_read = tf.identity(self._m_cache)
<del> return super(Nadam, self)._prepare(var_list)
<add> return super()._prepare(var_list)
<ide>
<ide> def _resource_apply_dense(self, grad, var, apply_state=None):
<ide> var_device, var_dtype = var.device, var.dtype.base_dtype
<ide> def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
<ide> return tf.group(*[var_update, m_t_bar, v_t])
<ide>
<ide> def get_config(self):
<del> config = super(Nadam, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> 'learning_rate': self._serialize_hyperparameter('learning_rate'),
<ide> 'decay': self._initial_decay,
<ide><path>keras/optimizers/optimizer_v2/optimizer_v2.py
<ide> def _create_all_weights(self, var_list):
<ide> def __getattribute__(self, name):
<ide> """Overridden to support hyperparameter access."""
<ide> try:
<del> return super(OptimizerV2, self).__getattribute__(name)
<add> return super().__getattribute__(name)
<ide> except AttributeError as e:
<ide> # Needed to avoid infinite recursion with __setattr__.
<ide> if name == "_hyper":
<ide> def __getattribute__(self, name):
<ide> raise e
<ide>
<ide> def __dir__(self):
<del> result = set(super(OptimizerV2, self).__dir__())
<add> result = set(super().__dir__())
<ide> if "_hyper" in result:
<ide> result |= self._hyper.keys()
<ide> if "learning_rate" in self._hyper.keys():
<ide> def __setattr__(self, name, value):
<ide> if hasattr(self, "_hyper") and name in self._hyper:
<ide> self._set_hyper(name, value)
<ide> else:
<del> super(OptimizerV2, self).__setattr__(name, value)
<add> super().__setattr__(name, value)
<ide>
<ide> def get_slot_names(self):
<ide> """A list of names for this optimizer's slots."""
<ide> class RestoredOptimizer(OptimizerV2):
<ide> # methods.
<ide>
<ide> def __init__(self):
<del> super(RestoredOptimizer, self).__init__("RestoredOptimizer")
<add> super().__init__("RestoredOptimizer")
<ide> self._hypers_created = True
<ide>
<ide> def get_config(self):
<ide><path>keras/optimizers/optimizer_v2/optimizer_v2_test.py
<ide> def test_subclass_compat(self, optimizer_class, init_kwargs=None):
<ide> class SubclassedOptimizer(optimizer_class):
<ide>
<ide> def _resource_apply_dense(self, grad, var): # pylint: disable=useless-super-delegation
<del> return super(SubclassedOptimizer, self)._resource_apply_dense(grad, var)
<add> return super()._resource_apply_dense(grad, var)
<ide>
<ide> def _resource_apply_sparse(self, grad, var, indices): # pylint: disable=useless-super-delegation
<del> return super(SubclassedOptimizer, self)._resource_apply_sparse(
<add> return super()._resource_apply_sparse(
<ide> grad, var, indices)
<ide>
<ide> init_kwargs = init_kwargs or {}
<ide><path>keras/optimizers/optimizer_v2/rmsprop.py
<ide> def __init__(self,
<ide> different invocations of optimizer functions.
<ide> @end_compatibility
<ide> """
<del> super(RMSprop, self).__init__(name, **kwargs)
<add> super().__init__(name, **kwargs)
<ide> self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
<ide> self._set_hyper("decay", self._initial_decay)
<ide> self._set_hyper("rho", rho)
<ide> def _create_slots(self, var_list):
<ide> self.add_slot(var, "mg")
<ide>
<ide> def _prepare_local(self, var_device, var_dtype, apply_state):
<del> super(RMSprop, self)._prepare_local(var_device, var_dtype, apply_state)
<add> super()._prepare_local(var_device, var_dtype, apply_state)
<ide>
<ide> rho = tf.identity(self._get_hyper("rho", var_dtype))
<ide> apply_state[(var_device, var_dtype)].update(
<ide> def set_weights(self, weights):
<ide> # iteration to 0.
<ide> if len(params) == len(weights) + 1:
<ide> weights = [np.array(0)] + weights
<del> super(RMSprop, self).set_weights(weights)
<add> super().set_weights(weights)
<ide>
<ide> def get_config(self):
<del> config = super(RMSprop, self).get_config()
<add> config = super().get_config()
<ide> config.update({
<ide> "learning_rate": self._serialize_hyperparameter("learning_rate"),
<ide> "decay": self._initial_decay,
<ide><path>keras/optimizers/schedules/learning_rate_schedule.py
<ide> def __init__(
<ide> name: String. Optional name of the operation. Defaults to
<ide> 'ExponentialDecay'.
<ide> """
<del> super(ExponentialDecay, self).__init__()
<add> super().__init__()
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.decay_steps = decay_steps
<ide> self.decay_rate = decay_rate
<ide> def __init__(
<ide> Raises:
<ide> ValueError: if the number of elements in the lists do not match.
<ide> """
<del> super(PiecewiseConstantDecay, self).__init__()
<add> super().__init__()
<ide>
<ide> if len(boundaries) != len(values) - 1:
<ide> raise ValueError(
<ide> def __init__(
<ide> name: String. Optional name of the operation. Defaults to
<ide> 'PolynomialDecay'.
<ide> """
<del> super(PolynomialDecay, self).__init__()
<add> super().__init__()
<ide>
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.decay_steps = decay_steps
<ide> def __init__(
<ide> name: String. Optional name of the operation. Defaults to
<ide> 'InverseTimeDecay'.
<ide> """
<del> super(InverseTimeDecay, self).__init__()
<add> super().__init__()
<ide>
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.decay_steps = decay_steps
<ide> def __init__(
<ide> Minimum learning rate value as a fraction of initial_learning_rate.
<ide> name: String. Optional name of the operation. Defaults to 'CosineDecay'.
<ide> """
<del> super(CosineDecay, self).__init__()
<add> super().__init__()
<ide>
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.decay_steps = decay_steps
<ide> def __init__(
<ide> Minimum learning rate value as a fraction of the initial_learning_rate.
<ide> name: String. Optional name of the operation. Defaults to 'SGDRDecay'.
<ide> """
<del> super(CosineDecayRestarts, self).__init__()
<add> super().__init__()
<ide>
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.first_decay_steps = first_decay_steps
<ide> def __init__(
<ide> name: String. Optional name of the operation. Defaults to
<ide> 'LinearCosineDecay'.
<ide> """
<del> super(LinearCosineDecay, self).__init__()
<add> super().__init__()
<ide>
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.decay_steps = decay_steps
<ide> def __init__(
<ide> name: String. Optional name of the operation. Defaults to
<ide> 'NoisyLinearCosineDecay'.
<ide> """
<del> super(NoisyLinearCosineDecay, self).__init__()
<add> super().__init__()
<ide>
<ide> self.initial_learning_rate = initial_learning_rate
<ide> self.decay_steps = decay_steps
<ide><path>keras/premade_models/linear.py
<ide> def __init__(self,
<ide> self.bias_initializer = initializers.get(bias_initializer)
<ide> self.kernel_regularizer = regularizers.get(kernel_regularizer)
<ide> self.bias_regularizer = regularizers.get(bias_regularizer)
<del> super(LinearModel, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> base_layer.keras_premade_model_gauge.get_cell('Linear').set(True)
<ide>
<ide> def build(self, input_shape):
<ide><path>keras/premade_models/wide_deep.py
<ide> def __init__(self, linear_model, dnn_model, activation=None, **kwargs):
<ide> **kwargs: The keyword arguments that are passed on to BaseLayer.__init__.
<ide> Allowed keyword arguments include `name`.
<ide> """
<del> super(WideDeepModel, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> base_layer.keras_premade_model_gauge.get_cell('WideDeep').set(True)
<ide> self.linear_model = linear_model
<ide> self.dnn_model = dnn_model
<ide><path>keras/saving/losses_serialization_test.py
<ide> class MyMeanAbsoluteError(losses.LossFunctionWrapper):
<ide> def __init__(self,
<ide> reduction=losses_utils.ReductionV2.AUTO,
<ide> name='mean_absolute_error'):
<del> super(MyMeanAbsoluteError, self).__init__(
<add> super().__init__(
<ide> my_mae, name=name, reduction=reduction)
<ide>
<ide>
<ide><path>keras/saving/metrics_serialization_test.py
<ide> class MyMeanAbsoluteError(metrics.MeanMetricWrapper):
<ide>
<ide> def __init__(self, name='my_mae', dtype=None):
<del> super(MyMeanAbsoluteError, self).__init__(_my_mae, name, dtype=dtype)
<add> super().__init__(_my_mae, name, dtype=dtype)
<ide>
<ide>
<ide> # Custom metric function
<ide><path>keras/saving/save_test.py
<ide> class TestSaveModel(tf.test.TestCase, parameterized.TestCase):
<ide>
<ide> def setUp(self):
<del> super(TestSaveModel, self).setUp()
<add> super().setUp()
<ide> self.model = test_utils.get_small_sequential_mlp(1, 2, 3)
<ide> self.subclassed_model = test_utils.get_small_subclass_mlp(1, 2)
<ide>
<ide> def test_saving_optimizer_weights(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.layer = keras.layers.Dense(1)
<ide>
<ide> def call(self, x):
<ide> def test_saving_model_with_name_conflict(self):
<ide> class Sequential(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Sequential, self).__init__()
<add> super().__init__()
<ide> self.layer = keras.layers.Dense(1)
<ide>
<ide> def call(self, x):
<ide> def test_nested_layers(self):
<ide> class MyLayer(keras.layers.Layer):
<ide>
<ide> def __init__(self, sublayers, **kwargs):
<del> super(MyLayer, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.sublayers = sublayers
<ide>
<ide> def get_config(self):
<del> config = super(MyLayer, self).get_config()
<add> config = super().get_config()
<ide> config['sublayers'] = self.sublayers
<ide> return config
<ide>
<ide> def test_shared_objects(self):
<ide> class OuterLayer(keras.layers.Layer):
<ide>
<ide> def __init__(self, inner_layer):
<del> super(OuterLayer, self).__init__()
<add> super().__init__()
<ide> self.inner_layer = inner_layer
<ide>
<ide> def call(self, inputs):
<ide> def from_config(cls, config):
<ide> class InnerLayer(keras.layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(InnerLayer, self).__init__()
<add> super().__init__()
<ide> self.v = self.add_weight(name='v', shape=[], dtype=tf.float32)
<ide>
<ide> def call(self, inputs):
<ide> def _make_sequential_input_shape(input_size, output_size):
<ide> class _make_subclassed(keras.Model): # pylint: disable=invalid-name
<ide>
<ide> def __init__(self, input_size, output_size):
<del> super(_make_subclassed, self).__init__()
<add> super().__init__()
<ide> self._config = {'input_size': input_size, 'output_size': output_size}
<ide> self._hidden_layer = keras.layers.Dense(8, activation='relu', name='hidden')
<ide> self._logits_layer = keras.layers.Dense(output_size, name='logits')
<ide> def from_config(cls, config):
<ide> class _make_subclassed_built(_make_subclassed): # pylint: disable=invalid-name
<ide>
<ide> def __init__(self, input_size, output_size):
<del> super(_make_subclassed_built, self).__init__(input_size, output_size)
<add> super().__init__(input_size, output_size)
<ide> self.build((None, input_size))
<ide>
<ide>
<ide><path>keras/saving/save_weights_test.py
<ide> def test_load_weights_from_saved_model(self):
<ide> class SubclassedModel(training.Model):
<ide>
<ide> def __init__(self):
<del> super(SubclassedModel, self).__init__()
<add> super().__init__()
<ide> self.x_layer = keras.layers.Dense(3)
<ide> self.b_layer = keras.layers.Dense(1)
<ide>
<ide> def test_weight_loading_subclassed_model_added_layer(self):
<ide> class SubclassedModelRestore(training.Model):
<ide>
<ide> def __init__(self):
<del> super(SubclassedModelRestore, self).__init__()
<add> super().__init__()
<ide> self.x_layer = keras.layers.Dense(3)
<ide> self.y_layer = keras.layers.Dense(3)
<ide> self.b_layer = keras.layers.Dense(1)
<ide><path>keras/saving/saved_model/json_utils.py
<ide> def default(self, obj): # pylint: disable=method-hidden
<ide> return get_json_type(obj)
<ide>
<ide> def encode(self, obj):
<del> return super(Encoder, self).encode(_encode_tuple(obj))
<add> return super().encode(_encode_tuple(obj))
<ide>
<ide>
<ide> def _encode_tuple(x):
<ide><path>keras/saving/saved_model/layer_serialization.py
<ide> def object_identifier(self):
<ide>
<ide> def _get_serialized_attributes_internal(self, serialization_cache):
<ide> objects, functions = (
<del> super(RNNSavedModelSaver, self)._get_serialized_attributes_internal(
<add> super()._get_serialized_attributes_internal(
<ide> serialization_cache))
<ide> states = tf.__internal__.tracking.wrap(self.obj.states)
<ide> # SaveModel require all the objects to be Trackable when saving.
<ide><path>keras/saving/saved_model/load_context.py
<ide> class LoadContext(threading.local):
<ide> """A context for loading a model."""
<ide>
<ide> def __init__(self):
<del> super(LoadContext, self).__init__()
<add> super().__init__()
<ide> self._entered_load_context = []
<ide> self._load_options = None
<ide>
<ide><path>keras/saving/saved_model/model_serialization.py
<ide> def object_identifier(self):
<ide> return constants.MODEL_IDENTIFIER
<ide>
<ide> def _python_properties_internal(self):
<del> metadata = super(ModelSavedModelSaver, self)._python_properties_internal()
<add> metadata = super()._python_properties_internal()
<ide> # Network stateful property is dependent on the child layers.
<ide> metadata.pop('stateful')
<ide> metadata['is_graph_network'] = self.obj._is_graph_network # pylint: disable=protected-access
<ide> def _get_serialized_attributes_internal(self, serialization_cache):
<ide> # Other than the default signature function, all other attributes match with
<ide> # the ones serialized by Layer.
<ide> objects, functions = (
<del> super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
<add> super()._get_serialized_attributes_internal(
<ide> serialization_cache))
<ide> functions['_default_save_signature'] = default_signature
<ide> return objects, functions
<ide><path>keras/saving/saved_model/revive_test.py
<ide> class SubclassedModelNoConfig(keras.Model):
<ide>
<ide> def __init__(self, a, b):
<del> super(SubclassedModelNoConfig, self).__init__()
<add> super().__init__()
<ide>
<ide> self.a = a
<ide> self.b = b
<ide> def build(self, input_shape):
<ide> # TODO(b/145029112): Bug with losses when there are shared layers.
<ide> # self.shared, <-- Enable when bug is fixed.
<ide> CustomLayerNoConfig(self.a + 5, self.b + 6)])])
<del> super(SubclassedModelNoConfig, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> def call(self, inputs):
<ide> x = inputs
<ide> def call(self, inputs):
<ide> class SubclassedSparseModelNoConfig(keras.Model):
<ide>
<ide> def __init__(self, a, b):
<del> super(SubclassedSparseModelNoConfig, self).__init__()
<add> super().__init__()
<ide> self.a = a
<ide> self.shared = CustomLayerNoConfig(a, b)
<ide> self.all_layers = [SparseDense(4)]
<ide> def from_config(cls, config):
<ide> class CustomLayerNoConfig(keras.layers.Layer):
<ide>
<ide> def __init__(self, a, b, name=None):
<del> super(CustomLayerNoConfig, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self.a = tf.Variable(a, name='a')
<ide> self.b = b
<ide> def a_regularizer():
<ide> def __init__(self, num_classes, name=None):
<ide> inputs = keras.Input((2, 3), name='inputs')
<ide> x = keras.layers.Flatten(name='flatten')(inputs)
<ide> y = keras.layers.Dense(num_classes, name='outputs')(x)
<del> super(CustomNetworkDefaultConfig, self).__init__(inputs, y, name=name)
<add> super().__init__(inputs, y, name=name)
<ide>
<ide>
<ide> class CustomNetworkWithConfig(CustomNetworkDefaultConfig):
<ide>
<ide> def __init__(self, num_classes, name=None):
<del> super(CustomNetworkWithConfig, self).__init__(num_classes, name=name)
<add> super().__init__(num_classes, name=name)
<ide> self._config_dict = dict(num_classes=num_classes)
<ide>
<ide> def get_config(self):
<ide> def from_config(cls, config):
<ide> class CustomNetworkWithConfigName(CustomNetworkWithConfig):
<ide>
<ide> def __init__(self, num_classes, name=None):
<del> super(CustomNetworkWithConfigName, self).__init__(num_classes, name=name)
<add> super().__init__(num_classes, name=name)
<ide> self._config_dict['name'] = self.name
<ide>
<ide>
<ide> class UnregisteredCustomSequentialModel(keras.Sequential):
<ide> # This class is *not* registered in the CustomObjectScope.
<ide>
<ide> def __init__(self, **kwargs):
<del> super(UnregisteredCustomSequentialModel, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.add(keras.layers.InputLayer(input_shape=(2, 3)))
<ide>
<ide>
<ide> class WideDeepModel(SubclassedModelWithConfig):
<ide> class ReviveTestBase(test_combinations.TestCase):
<ide>
<ide> def setUp(self):
<del> super(ReviveTestBase, self).setUp()
<add> super().setUp()
<ide> self.path = self.get_temp_dir()
<ide> self.addCleanup(shutil.rmtree, self.path, ignore_errors=True)
<ide>
<ide> def test_revive(self):
<ide> class SubclassedModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(SubclassedModel, self).__init__()
<add> super().__init__()
<ide> self.all_layers = [CustomLayerWithConfig(1., 2),
<ide> CustomLayerNoConfig(3., 4),
<ide> SubclassedModelWithConfig(4., 6.),
<ide><path>keras/saving/saved_model/save_impl.py
<ide> def _restore_layer_losses(losses_dict):
<ide> class LayerTracingContext(threading.local):
<ide>
<ide> def __init__(self):
<del> super(LayerTracingContext, self).__init__()
<add> super().__init__()
<ide> self.enable_call_tracing = False
<ide> self.trace_queue = []
<ide>
<ide><path>keras/saving/saved_model/saved_model_test.py
<ide> def test_metadata_input_spec(self):
<ide> class LayerWithNestedSpec(keras.layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithNestedSpec, self).__init__()
<add> super().__init__()
<ide> self.input_spec = {
<ide> 'a': keras.layers.InputSpec(max_ndim=3, axes={-1: 2}),
<ide> 'b': keras.layers.InputSpec(shape=(None, 2, 3), dtype='int32')}
<ide> def call(self, inputs, training=True):
<ide> class Model(keras.models.Model):
<ide>
<ide> def __init__(self):
<del> super(Model, self).__init__()
<add> super().__init__()
<ide> self.layer_with_training_default_none = LayerWithLearningPhase()
<ide> self.layer_with_training_default_true = LayerWithTrainingDefaultTrue()
<ide> self.layer_with_required_training_arg = LayerWithTrainingRequiredArg()
<ide> class CustomAdd(keras.layers.Add):
<ide>
<ide> def build(self, input_shape):
<ide> self.w = self.add_weight('w', shape=[])
<del> super(CustomAdd, self).build(input_shape)
<add> super().build(input_shape)
<ide>
<ide> def call(self, inputs):
<del> outputs = super(CustomAdd, self).call(inputs)
<add> outputs = super().call(inputs)
<ide> return outputs * self.w
<ide>
<ide> input1 = keras.layers.Input(shape=(None, 3), name='input_1')
<ide> def test_wrapped_layer_training(self):
<ide> class Custom(keras.models.Model):
<ide>
<ide> def __init__(self):
<del> super(Custom, self).__init__()
<add> super().__init__()
<ide> self.layer = LayerWithLearningPhase()
<ide>
<ide> def call(self, inputs):
<ide> def __call__(self, inputs):
<ide> class Model(keras.models.Model):
<ide>
<ide> def __init__(self):
<del> super(Model, self).__init__()
<add> super().__init__()
<ide> self.layer = CustomLayer()
<ide>
<ide> @tf.function(
<ide> def test_save_without_tracing(self):
<ide> class DoNotTrace(keras.layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(DoNotTrace, self).__init__()
<add> super().__init__()
<ide> self.input_spec = keras.layers.InputSpec(shape=[None])
<ide> self.built = True
<ide>
<ide> class LayerWithChildLayer(keras.engine.base_layer.Layer):
<ide>
<ide> def __init__(self):
<ide> self.child = LayerWithKwargs()
<del> super(LayerWithChildLayer, self).__init__()
<add> super().__init__()
<ide>
<ide> def call(self, inputs):
<ide> return self.child(inputs)
<ide> def update_state(self, *args): # pylint: disable=useless-super-delegation
<ide> # Sometimes built-in metrics return an op in update_state. Custom
<ide> # metrics don't support returning ops, so wrap the update_state method
<ide> # while returning nothing.
<del> super(CustomMeanMetric, self).update_state(*args)
<add> super().update_state(*args)
<ide>
<ide>
<ide> @test_combinations.generate(test_combinations.combine(mode=['graph', 'eager']))
<ide> def update_state(self, *args): # pylint: disable=useless-super-delegation
<ide> # Sometimes built-in metrics return an op in update_state. Custom
<ide> # metrics don't support returning ops, so wrap the update_state method
<ide> # while returning nothing.
<del> super(CustomMetric, self).update_state(*args)
<add> super().update_state(*args)
<ide>
<ide> with self.cached_session():
<ide> metric = CustomMetric()
<ide> class NegativeMean(keras.metrics.Mean):
<ide> @tf.function(
<ide> input_signature=[tf.TensorSpec(None, tf.float32)])
<ide> def update_state(self, value):
<del> super(NegativeMean, self).update_state(-value)
<add> super().update_state(-value)
<ide>
<ide> metric = NegativeMean()
<ide> self.evaluate([v.initializer for v in metric.variables])
<ide><path>keras/saving/saved_model/utils.py
<ide> def set_training_arg_spec(arg_spec, default_training_value):
<ide> class SaveOptionsContext(threading.local):
<ide>
<ide> def __init__(self):
<del> super(SaveOptionsContext, self).__init__()
<add> super().__init__()
<ide> self.save_traces = True
<ide>
<ide>
<ide><path>keras/saving/saved_model_experimental_test.py
<ide> def test_saving_subclassed_model_raise_error(self):
<ide> class SubclassedModel(model_lib.Model):
<ide>
<ide> def __init__(self):
<del> super(SubclassedModel, self).__init__()
<add> super().__init__()
<ide> self.layer1 = keras.layers.Dense(3)
<ide> self.layer2 = keras.layers.Dense(1)
<ide>
<ide> def sequential_model_without_input_shape(uses_learning_phase=True):
<ide> class Subclassed(keras.models.Model):
<ide>
<ide> def __init__(self):
<del> super(Subclassed, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(2)
<ide> self.dense2 = keras.layers.Dense(3)
<ide>
<ide><path>keras/saving/saving_utils_test.py
<ide> def test_subclassed_model_with_input_signature(self):
<ide> class Model(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Model, self).__init__()
<add> super().__init__()
<ide> self.dense = keras.layers.Dense(3, name='dense')
<ide>
<ide> @tf.function(
<ide> def call(self, inp):
<ide> class BasicAutographedMetricModel(keras.models.Model):
<ide>
<ide> def __init__(self):
<del> super(BasicAutographedMetricModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self._layer = BasicAutographedMetricLayer()
<ide>
<ide> def call(self, inputs, **kwargs):
<ide> def test_extract_model_metrics(self):
<ide> class UnbuiltModelSavingErrorMessageTest(test_combinations.TestCase):
<ide>
<ide> def setUp(self):
<del> super(UnbuiltModelSavingErrorMessageTest, self).setUp()
<add> super().setUp()
<ide> if not tf.__internal__.tf2.enabled():
<ide> self.skipTest('The test does not intend to cover TF1.')
<ide>
<ide><path>keras/testing_infra/test_combinations.py
<ide> class TestCase(tf.test.TestCase, parameterized.TestCase):
<ide>
<ide> def tearDown(self):
<ide> keras.backend.clear_session()
<del> super(TestCase, self).tearDown()
<add> super().tearDown()
<ide>
<ide>
<ide> def run_with_all_saved_model_formats(
<ide><path>keras/testing_infra/test_utils.py
<ide> def __init__(self,
<ide> use_bn=False,
<ide> use_dp=False,
<ide> **kwargs):
<del> super(SmallSubclassMLP, self).__init__(name='test_model', **kwargs)
<add> super().__init__(name='test_model', **kwargs)
<ide> self.use_bn = use_bn
<ide> self.use_dp = use_dp
<ide>
<ide> class _SmallSubclassMLPCustomBuild(models.Model):
<ide> """A subclass model small MLP that uses a custom build method."""
<ide>
<ide> def __init__(self, num_hidden, num_classes):
<del> super(_SmallSubclassMLPCustomBuild, self).__init__()
<add> super().__init__()
<ide> self.layer_a = None
<ide> self.layer_b = None
<ide> self.num_hidden = num_hidden
<ide> def __init__(self, model_layers, *args, **kwargs):
<ide> """
<ide>
<ide> inputs = kwargs.pop('input_tensor', None)
<del> super(_SubclassModel, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> # Note that clone and build doesn't support lists of layers in subclassed
<ide> # models. Adding each layer directly here.
<ide> for i, layer in enumerate(model_layers):
<ide> class _SubclassModelCustomBuild(models.Model):
<ide> """A Keras subclass model that uses a custom build method."""
<ide>
<ide> def __init__(self, layer_generating_func, *args, **kwargs):
<del> super(_SubclassModelCustomBuild, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> self.all_layers = None
<ide> self._layer_generating_func = layer_generating_func
<ide>
<ide> class _MultiIOSubclassModel(models.Model):
<ide>
<ide> def __init__(self, branch_a, branch_b, shared_input_branch=None,
<ide> shared_output_branch=None, name=None):
<del> super(_MultiIOSubclassModel, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self._shared_input_branch = shared_input_branch
<ide> self._branch_a = branch_a
<ide> self._branch_b = branch_b
<ide> class _MultiIOSubclassModelCustomBuild(models.Model):
<ide> def __init__(self, branch_a_func, branch_b_func,
<ide> shared_input_branch_func=None,
<ide> shared_output_branch_func=None):
<del> super(_MultiIOSubclassModelCustomBuild, self).__init__()
<add> super().__init__()
<ide> self._shared_input_branch_func = shared_input_branch_func
<ide> self._branch_a_func = branch_a_func
<ide> self._branch_b_func = branch_b_func
<ide><path>keras/tests/add_loss_correctness_test.py
<ide> def train_step(x, y, w=None):
<ide> class TestAddLossCorrectness(test_combinations.TestCase):
<ide>
<ide> def setUp(self):
<del> super(TestAddLossCorrectness, self).setUp()
<add> super().setUp()
<ide> self.x = np.array([[0.], [1.], [2.]], dtype='float32')
<ide> self.y = np.array([[0.5], [2.], [3.5]], dtype='float32')
<ide> self.w = np.array([[1.25], [0.5], [1.25]], dtype='float32')
<ide> def test_loss_with_sample_weight_in_model_call(self):
<ide> class MyModel(Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.bias = test_utils.Bias()
<ide>
<ide> def call(self, inputs):
<ide> def test_loss_with_sample_weight_in_layer_call(self):
<ide> class MyLayer(layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(MyLayer, self).__init__()
<add> super().__init__()
<ide> self.bias = test_utils.Bias()
<ide>
<ide> def call(self, inputs):
<ide> def call(self, inputs):
<ide> class LayerWithNestedLayerWithLoss(layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithNestedLayerWithLoss, self).__init__()
<add> super().__init__()
<ide> self.loss_layer = LayerWithLoss()
<ide>
<ide> def call(self, inputs):
<ide> def test_clear_losses(self):
<ide> class LayerWithSharedNestedLossLayer(layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(LayerWithSharedNestedLossLayer, self).__init__()
<add> super().__init__()
<ide> self.loss_layer = layers.ActivityRegularization(l2=0.001)
<ide> self.add_weight(shape=(1,), regularizer='l2')
<ide>
<ide><path>keras/tests/automatic_outside_compilation_test.py
<ide> class CustomModel(training.Model):
<ide> """Custom model with summary ops in model call definition."""
<ide>
<ide> def __init__(self, name=None, enable_histograms=True):
<del> super(CustomModel, self).__init__()
<add> super().__init__()
<ide> self._my_layers = [
<ide> layer_lib.Dense(
<ide> 4096,
<ide> def mnist_model(input_shape, enable_histograms=True):
<ide> class AutoOutsideCompilationWithKerasTest(tf.test.TestCase):
<ide>
<ide> def setUp(self):
<del> super(AutoOutsideCompilationWithKerasTest, self).setUp()
<add> super().setUp()
<ide> set_soft_device_placement(True)
<ide> self.summary_dir = self.get_temp_dir()
<ide>
<ide><path>keras/tests/convert_to_constants_test.py
<ide> def testEmbeddings(self):
<ide> class EmbeddingModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(EmbeddingModel, self).__init__()
<add> super().__init__()
<ide> self.shared_weights = self.add_weight(
<ide> "weights",
<ide> shape=(2000, 300),
<ide><path>keras/tests/custom_training_loop_test.py
<ide> def test_learning_phase_propagation(self, defun):
<ide> class MyModel(keras.layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.layer = LayerWithTrainingArg()
<ide>
<ide> def call(self, inputs):
<ide> def test_training_arg_priorities(self, defun):
<ide> class MyModel(keras.layers.Layer):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.layer = LayerWithTrainingArg()
<ide>
<ide> def call(self, inputs, training=False):
<ide><path>keras/tests/memory_test.py
<ide> class SingleLayerNet(keras.Model):
<ide> """Simple keras model used to ensure that there are no leaks."""
<ide>
<ide> def __init__(self):
<del> super(SingleLayerNet, self).__init__()
<add> super().__init__()
<ide> self.fc1 = keras.layers.Dense(5)
<ide>
<ide> def call(self, x):
<ide><path>keras/tests/model_architectures.py
<ide> class MySubclassModel(keras.Model):
<ide> """A subclass model."""
<ide>
<ide> def __init__(self, input_dim=3):
<del> super(MySubclassModel, self).__init__(name='my_subclass_model')
<add> super().__init__(name='my_subclass_model')
<ide> self._config = {'input_dim': input_dim}
<ide> self.dense1 = keras.layers.Dense(8, activation='relu')
<ide> self.dense2 = keras.layers.Dense(2, activation='softmax')
<ide> class NestedSubclassModel(keras.Model):
<ide> """A nested subclass model."""
<ide>
<ide> def __init__(self):
<del> super(NestedSubclassModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(4, activation='relu')
<ide> self.dense2 = keras.layers.Dense(2, activation='relu')
<ide> self.bn = keras.layers.BatchNormalization()
<ide> class NestedFunctionalInSubclassModel(keras.Model):
<ide> """A functional nested in subclass model."""
<ide>
<ide> def __init__(self):
<del> super(NestedFunctionalInSubclassModel, self).__init__(
<add> super().__init__(
<ide> name='nested_functional_in_subclassed_model')
<ide> self.dense1 = keras.layers.Dense(4, activation='relu')
<ide> self.dense2 = keras.layers.Dense(2, activation='relu')
<ide> class SharedLayerSubclassModel(keras.Model):
<ide> """A subclass model with shared layers."""
<ide>
<ide> def __init__(self):
<del> super(SharedLayerSubclassModel, self).__init__(
<add> super().__init__(
<ide> name='shared_layer_subclass_model')
<ide> self.dense = keras.layers.Dense(3, activation='relu')
<ide> self.dp = keras.layers.Dropout(0.5)
<ide><path>keras/tests/model_subclassing_compiled_test.py
<ide> def test_updates(self):
<ide> class BNNet(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(BNNet, self).__init__()
<add> super().__init__()
<ide> self.bn = keras.layers.BatchNormalization(beta_initializer='ones',
<ide> gamma_initializer='ones')
<ide>
<ide> def test_training_and_inference_behavior(self):
<ide> class DPNet(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(DPNet, self).__init__()
<add> super().__init__()
<ide> self.dp = keras.layers.Dropout(0.5)
<ide> self.dense = keras.layers.Dense(1,
<ide> use_bias=False,
<ide> def test_subclass_nested_in_sequential(self):
<ide> class Inner(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Inner, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(32, activation='relu')
<ide> self.dense2 = keras.layers.Dense(num_classes, activation='relu')
<ide> self.bn = keras.layers.BatchNormalization()
<ide> def test_support_for_manual_training_arg(self):
<ide> class DPNet(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(DPNet, self).__init__()
<add> super().__init__()
<ide> self.dp = keras.layers.Dropout(0.5)
<ide> self.dense = keras.layers.Dense(1,
<ide> use_bias=False,
<ide><path>keras/tests/model_subclassing_test.py
<ide> def test_custom_build(self):
<ide> class DummyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(DummyModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(32, activation='relu')
<ide> self.uses_custom_build = False
<ide>
<ide> def test_custom_build_with_fit(self):
<ide> class DummyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(DummyModel, self).__init__()
<add> super().__init__()
<ide> self.layer1 = keras.layers.Dense(10, activation='relu')
<ide>
<ide> def build(self, input_shape):
<ide> def test_dataset_dict_with_fit(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(1)
<ide> self.dense2 = keras.layers.Dense(1)
<ide> self.add = keras.layers.Add()
<ide> class Embedding(keras.layers.Layer):
<ide> """An Embedding layer."""
<ide>
<ide> def __init__(self, vocab_size, embedding_dim, **kwargs):
<del> super(Embedding, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self.vocab_size = vocab_size
<ide> self.embedding_dim = embedding_dim
<ide>
<ide> def call(self, x):
<ide> class EmbedModel(keras.Model):
<ide>
<ide> def __init__(self, vocab_size, embed_size):
<del> super(EmbedModel, self).__init__()
<add> super().__init__()
<ide> self.embed1 = Embedding(vocab_size, embed_size)
<ide>
<ide> def call(self, inputs):
<ide> def test_single_time_step_rnn_build(self):
<ide> class SimpleRNNModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(SimpleRNNModel, self).__init__()
<add> super().__init__()
<ide> self.lstm = keras.layers.LSTM(units)
<ide>
<ide> def call(self, inputs):
<ide> def test_no_dependency(self):
<ide> class Foo(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Foo, self).__init__()
<add> super().__init__()
<ide> self.isdep = keras.layers.Dense(1)
<ide> self.notdep = data_structures.NoDependency(keras.layers.Dense(2))
<ide> self.notdep_var = data_structures.NoDependency(
<ide> def test_extra_variable(self):
<ide> class ExtraVar(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(ExtraVar, self).__init__()
<add> super().__init__()
<ide> self.dense = keras.layers.Dense(1)
<ide> self.var = tf.Variable(1.)
<ide> self.not_trainable_var = tf.Variable(2., trainable=False)
<ide> def test_add_weight_in_model(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.b = self.add_weight('bias', (10,))
<ide> self.c = self.add_weight('bias2', (10,), trainable=False)
<ide>
<ide> def test_add_update_in_model(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.b = self.add_weight('bias', (10,))
<ide> self.c = self.add_weight('bias2', (10,))
<ide>
<ide> def test_updates_and_losses_for_nested_models_in_subclassed_model(self):
<ide> class TestModel1(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel1, self).__init__()
<add> super().__init__()
<ide> self.fc = keras.layers.Dense(10, input_shape=(784,),
<ide> activity_regularizer='l1')
<ide> self.bn = keras.Sequential([keras.layers.BatchNormalization(axis=1)])
<ide> def call(self, x):
<ide> class TestModel2(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel2, self).__init__()
<add> super().__init__()
<ide> self.fc = keras.layers.Dense(10, input_shape=(784,),
<ide> activity_regularizer='l1')
<ide> self.bn = keras.Sequential(
<ide> def call(self, x):
<ide> class TestModel3(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(TestModel3, self).__init__()
<add> super().__init__()
<ide> self.fc = keras.layers.Dense(10, input_shape=(784,),
<ide> activity_regularizer='l1')
<ide> self.bn = bn
<ide> def test_deepcopy(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.my_variable = tf.Variable(0.0, trainable=False)
<ide> self.layer = keras.layers.Dense(4)
<ide>
<ide> def test_batch_counters_not_in_variables(self):
<ide> class MyModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self.layer = keras.layers.Dense(4)
<ide>
<ide> def call(self, obs):
<ide><path>keras/tests/model_subclassing_test_util.py
<ide> class SimpleConvTestModel(keras.Model):
<ide>
<ide> def __init__(self, num_classes=10):
<del> super(SimpleConvTestModel, self).__init__(name='test_model')
<add> super().__init__(name='test_model')
<ide> self.num_classes = num_classes
<ide>
<ide> self.conv1 = keras.layers.Conv2D(32, (3, 3), activation='relu')
<ide> class NestedTestModel1(keras.Model):
<ide> """
<ide>
<ide> def __init__(self, num_classes=2):
<del> super(NestedTestModel1, self).__init__(name='nested_model_1')
<add> super().__init__(name='nested_model_1')
<ide> self.num_classes = num_classes
<ide> self.dense1 = keras.layers.Dense(32, activation='relu')
<ide> self.dense2 = keras.layers.Dense(num_classes, activation='relu')
<ide> class NestedTestModel2(keras.Model):
<ide> """
<ide>
<ide> def __init__(self, num_classes=2):
<del> super(NestedTestModel2, self).__init__(name='nested_model_2')
<add> super().__init__(name='nested_model_2')
<ide> self.num_classes = num_classes
<ide> self.dense1 = keras.layers.Dense(32, activation='relu')
<ide> self.dense2 = keras.layers.Dense(num_classes, activation='relu')
<ide> def get_nested_model_3(input_dim, num_classes):
<ide> class Inner(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Inner, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(32, activation='relu')
<ide> self.dense2 = keras.layers.Dense(5, activation='relu')
<ide> self.bn = keras.layers.BatchNormalization()
<ide> def call(self, inputs):
<ide> class CustomCallModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(CustomCallModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(1, activation='relu')
<ide> self.dense2 = keras.layers.Dense(1, activation='softmax')
<ide>
<ide> def call(self, first, second, fiddle_with_output='no', training=True):
<ide> class TrainingNoDefaultModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(TrainingNoDefaultModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(1)
<ide>
<ide> def call(self, x, training):
<ide> def call(self, x, training):
<ide> class TrainingMaskingModel(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(TrainingMaskingModel, self).__init__()
<add> super().__init__()
<ide> self.dense1 = keras.layers.Dense(1)
<ide>
<ide> def call(self, x, training=False, mask=None):
<ide><path>keras/tests/saved_model_test.py
<ide> def call(self, x, y):
<ide> class MemoryTests(tf.test.TestCase):
<ide>
<ide> def setUp(self):
<del> super(MemoryTests, self).setUp()
<add> super().setUp()
<ide> self._model = _ModelWithOptimizerUsingDefun()
<ide>
<ide> @tf_test_utils.assert_no_garbage_created
<ide><path>keras/tests/saver_test.py
<ide> class NonLayerTrackable(tf.Module):
<ide>
<ide> def __init__(self):
<del> super(NonLayerTrackable, self).__init__()
<add> super().__init__()
<ide> self.a_variable = trackable_utils.add_variable(
<ide> self, name="a_variable", shape=[])
<ide>
<ide> class MyModel(training.Model):
<ide> """A concrete Model for testing."""
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self._named_dense = core.Dense(1, use_bias=True)
<ide> self._second = core.Dense(1, use_bias=False)
<ide> # We can still track Trackables which aren't Layers.
<ide><path>keras/tests/tracking_test.py
<ide> class HasList(training.Model):
<ide>
<ide> def __init__(self):
<del> super(HasList, self).__init__()
<add> super().__init__()
<ide> self.layer_list = tf.__internal__.tracking.wrap([core.Dense(3)])
<ide> self.layer_list.append(core.Dense(4))
<ide> self.layer_list.extend(
<ide> def testSubSequentialTracking(self):
<ide> class _Subclassed(training.Model):
<ide>
<ide> def __init__(self, wrapped):
<del> super(_Subclassed, self).__init__()
<add> super().__init__()
<ide> self._wrapped = wrapped
<ide>
<ide> def call(self, x):
<ide> def testLayerTrackedThroughSequential(self):
<ide> class AttrDict(dict):
<ide>
<ide> def __init__(self, *args, **kwargs):
<del> super(AttrDict, self).__init__(*args, **kwargs)
<add> super().__init__(*args, **kwargs)
<ide> self.__dict__ = self
<ide>
<ide> def ffnet(layer_sizes, name):
<ide> def ffnet(layer_sizes, name):
<ide> class MyModel2(training.Model):
<ide>
<ide> def __init__(self, config, name="my_model_2"):
<del> super(MyModel2, self).__init__(name=name)
<add> super().__init__(name=name)
<ide> self._num_tokens = config.num_tokens
<ide>
<ide> # list of sub-models
<ide> def testModelContainersCompareEqual(self):
<ide> class HasEqualContainers(training.Model):
<ide>
<ide> def __init__(self):
<del> super(HasEqualContainers, self).__init__()
<add> super().__init__()
<ide> self.l1 = []
<ide> self.l2 = []
<ide>
<ide> def testTensorConversion(self):
<ide> class ListToTensor(training.Model):
<ide>
<ide> def __init__(self):
<del> super(ListToTensor, self).__init__()
<add> super().__init__()
<ide> self.l = [1., 2., 3.]
<ide>
<ide> self.assertAllEqual(
<ide> def testLayerCollectionWithExternalMutation(self):
<ide> class HasMapping(training.Model):
<ide>
<ide> def __init__(self):
<del> super(HasMapping, self).__init__()
<add> super().__init__()
<ide> self.layer_dict = tf.__internal__.tracking.wrap(dict(output=core.Dense(7)))
<ide> self.layer_dict["norm"] = tf.__internal__.tracking.wrap([])
<ide> self.layer_dict["dense"] = tf.__internal__.tracking.wrap([])
<ide> def testIter(self):
<ide> class HasTuple(training.Model):
<ide>
<ide> def __init__(self):
<del> super(HasTuple, self).__init__()
<add> super().__init__()
<ide> self.layer_list = (
<ide> core.Dense(3), core.Dense(4),
<ide> core.Dense(5, kernel_regularizer=tf.reduce_sum))
<ide> def testSubSequentialTracking(self):
<ide> class _Subclassed(training.Model):
<ide>
<ide> def __init__(self, wrapped):
<del> super(_Subclassed, self).__init__()
<add> super().__init__()
<ide> self._wrapped = wrapped
<ide>
<ide> def call(self, x):
<ide> def testModelContainersCompareEqual(self):
<ide> class HasEqualContainers(training.Model):
<ide>
<ide> def __init__(self):
<del> super(HasEqualContainers, self).__init__()
<add> super().__init__()
<ide> self.l1 = ()
<ide> self.l2 = ()
<ide>
<ide> def testTensorConversion(self):
<ide> class TupleToTensor(training.Model):
<ide>
<ide> def __init__(self):
<del> super(TupleToTensor, self).__init__()
<add> super().__init__()
<ide> self.l = (1., 2., 3.)
<ide>
<ide> self.assertAllEqual(
<ide> class NoDependencyModel(training.Model):
<ide>
<ide> @tf.__internal__.tracking.no_automatic_dependency_tracking
<ide> def __init__(self):
<del> super(NoDependencyModel, self).__init__()
<add> super().__init__()
<ide> self.a = []
<ide> self.b = tf.Module()
<ide>
<ide><path>keras/tests/tracking_util_test.py
<ide> class MyModel(training.Model):
<ide> """A concrete Model for testing."""
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self._named_dense = core.Dense(1, use_bias=True)
<ide> self._second = core.Dense(1, use_bias=False)
<ide> # We can still track Trackables which aren't Layers.
<ide> def call(self, values):
<ide> class NonLayerTrackable(tf.Module):
<ide>
<ide> def __init__(self):
<del> super(NonLayerTrackable, self).__init__()
<add> super().__init__()
<ide> self.a_variable = trackable_utils.add_variable(
<ide> self, name="a_variable", shape=[])
<ide>
<ide> def testAnonymousVarsInInit(self):
<ide> class Model(training.Model):
<ide>
<ide> def __init__(self):
<del> super(Model, self).__init__()
<add> super().__init__()
<ide> self.w = tf.Variable(0.0)
<ide> self.b = tf.Variable(0.0)
<ide> self.vars = [self.w, self.b]
<ide><path>keras/tests/tracking_util_with_v1_optimizers_test.py
<ide> class NonLayerTrackable(tf.Module):
<ide>
<ide> def __init__(self):
<del> super(NonLayerTrackable, self).__init__()
<add> super().__init__()
<ide> self.a_variable = trackable_utils.add_variable(
<ide> self, name="a_variable", shape=[])
<ide>
<ide> class MyModel(training.Model):
<ide> """A concrete Model for testing."""
<ide>
<ide> def __init__(self):
<del> super(MyModel, self).__init__()
<add> super().__init__()
<ide> self._named_dense = core.Dense(1, use_bias=True)
<ide> self._second = core.Dense(1, use_bias=False)
<ide> # We can still track Trackables which aren't Layers.
<ide> def testAnonymousVarsInInit(self):
<ide> class Model(training.Model):
<ide>
<ide> def __init__(self):
<del> super(Model, self).__init__()
<add> super().__init__()
<ide> self.w = tf.Variable(0.0)
<ide> self.b = tf.Variable(0.0)
<ide> self.vars = [self.w, self.b]
<ide><path>keras/tests/tracking_util_xla_test.py
<ide> class NonLayerTrackable(tf.Module):
<ide>
<ide> def __init__(self):
<del> super(NonLayerTrackable, self).__init__()
<add> super().__init__()
<ide> self.a_variable = trackable_utils.add_variable(
<ide> self, name="a_variable", shape=[])
<ide>
<ide> class Subclassed(training.Model):
<ide> """A concrete Model for testing."""
<ide>
<ide> def __init__(self):
<del> super(Subclassed, self).__init__()
<add> super().__init__()
<ide> self._named_dense = core.Dense(1, use_bias=True)
<ide> self._second = core.Dense(1, use_bias=False)
<ide> # We can still track Trackables which aren't Layers.
<ide><path>keras/utils/composite_tensor_support_test.py
<ide> class ToDense(Layer):
<ide> """Create a dense (standard) tensor from the given input tensor."""
<ide>
<ide> def __init__(self, default_value, **kwargs):
<del> super(ToDense, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self._default_value = default_value
<ide>
<ide> def call(self, inputs):
<ide> class ToRagged(Layer):
<ide> """Create a ragged tensor based on a given dense tensor."""
<ide>
<ide> def __init__(self, padding, ragged_rank=1, **kwargs):
<del> super(ToRagged, self).__init__(**kwargs)
<add> super().__init__(**kwargs)
<ide> self._padding = padding
<ide> self._ragged_rank = ragged_rank
<ide>
<ide> class _SubclassModel(keras.Model):
<ide> """A Keras subclass model."""
<ide>
<ide> def __init__(self, layers, i_layer=None):
<del> super(_SubclassModel, self).__init__()
<add> super().__init__()
<ide> # Note that clone and build doesn't support lists of layers in subclassed
<ide> # models. Adding each layer directly here.
<ide> for i, layer in enumerate(layers):
<ide><path>keras/utils/data_utils.py
<ide> class OrderedEnqueuer(SequenceEnqueuer):
<ide> """
<ide>
<ide> def __init__(self, sequence, use_multiprocessing=False, shuffle=False):
<del> super(OrderedEnqueuer, self).__init__(sequence, use_multiprocessing)
<add> super().__init__(sequence, use_multiprocessing)
<ide> self.shuffle = shuffle
<ide>
<ide> def _get_executor_init(self, workers):
<ide> class GeneratorEnqueuer(SequenceEnqueuer):
<ide> def __init__(self, generator,
<ide> use_multiprocessing=False,
<ide> random_seed=None):
<del> super(GeneratorEnqueuer, self).__init__(generator, use_multiprocessing)
<add> super().__init__(generator, use_multiprocessing)
<ide> self.random_seed = random_seed
<ide>
<ide> def _get_executor_init(self, workers):
<ide><path>keras/utils/generic_utils.py
<ide> class SharedObjectConfig(dict):
<ide> def __init__(self, base_config, object_id, **kwargs):
<ide> self.ref_count = 1
<ide> self.object_id = object_id
<del> super(SharedObjectConfig, self).__init__(base_config, **kwargs)
<add> super().__init__(base_config, **kwargs)
<ide>
<ide> def increment_ref_count(self):
<ide> # As soon as we've seen the object more than once, we want to attach the
<ide> class LazyLoader(python_types.ModuleType):
<ide> def __init__(self, local_name, parent_module_globals, name):
<ide> self._local_name = local_name
<ide> self._parent_module_globals = parent_module_globals
<del> super(LazyLoader, self).__init__(name)
<add> super().__init__(name)
<ide>
<ide> def _load(self):
<ide> """Load the module and insert it into the parent's globals."""
<ide><path>keras/utils/layer_utils_test.py
<ide> def test_summary_subclass_model_expand_nested(self):
<ide> class Sequential(keras.Model):
<ide>
<ide> def __init__(self, *args):
<del> super(Sequential, self).__init__()
<add> super().__init__()
<ide> self.module_list = list(args) if args else []
<ide>
<ide> def call(self, x):
<ide> def call(self, x):
<ide> class Block(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Block, self).__init__()
<add> super().__init__()
<ide> self.module = Sequential(
<ide> keras.layers.Dense(10),
<ide> keras.layers.Dense(10),
<ide> def call(self, input_tensor):
<ide> class Base(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Base, self).__init__()
<add> super().__init__()
<ide> self.module = Sequential(Block(), Block())
<ide>
<ide> def call(self, input_tensor):
<ide> def call(self, input_tensor):
<ide> class Network(keras.Model):
<ide>
<ide> def __init__(self):
<del> super(Network, self).__init__()
<add> super().__init__()
<ide> self.child = Base()
<ide>
<ide> def call(self, inputs):
<ide> def test_property_cache(self):
<ide> class MyObject(tf.__internal__.tracking.AutoTrackable):
<ide>
<ide> def __init__(self):
<del> super(MyObject, self).__init__()
<add> super().__init__()
<ide> self._frozen = True
<ide>
<ide> def __setattr__(self, key, value):
<ide> """Enforce that cache does not set attribute on MyObject."""
<ide> if getattr(self, '_frozen', False):
<ide> raise ValueError('Cannot mutate when frozen.')
<del> return super(MyObject, self).__setattr__(key, value)
<add> return super().__setattr__(key, value)
<ide>
<ide> @property
<ide> @layer_utils.cached_per_instance
<ide><path>keras/utils/object_identity.py
<ide> class _WeakObjectIdentityWrapper(_ObjectIdentityWrapper):
<ide> __slots__ = ()
<ide>
<ide> def __init__(self, wrapped):
<del> super(_WeakObjectIdentityWrapper, self).__init__(weakref.ref(wrapped))
<add> super().__init__(weakref.ref(wrapped))
<ide>
<ide> @property
<ide> def unwrapped(self):
<ide><path>keras/utils/tf_utils_test.py
<ide> def _fn(*fargs, **fkwargs):
<ide> d.shape = x.shape
<ide> d.get_shape = x.get_shape
<ide> return d, x
<del> super(PlumbingLayer, self).__init__(_fn, **kwargs)
<add> super().__init__(_fn, **kwargs)
<ide> self._enter_dunder_call = False
<ide>
<ide> def __call__(self, inputs, *args, **kwargs):
<ide> self._enter_dunder_call = True
<del> d, _ = super(PlumbingLayer, self).__call__(inputs, *args, **kwargs)
<add> d, _ = super().__call__(inputs, *args, **kwargs)
<ide> self._enter_dunder_call = False
<ide> return d
<ide>
<ide> def call(self, inputs, *args, **kwargs):
<del> d, v = super(PlumbingLayer, self).call(inputs, *args, **kwargs)
<add> d, v = super().call(inputs, *args, **kwargs)
<ide> if self._enter_dunder_call:
<ide> return d, v
<ide> return d
<ide><path>keras/wrappers/scikit_learn.py
<ide> def fit(self, x, y, **kwargs):
<ide> else:
<ide> raise ValueError('Invalid shape for y: ' + str(y.shape))
<ide> self.n_classes_ = len(self.classes_)
<del> return super(KerasClassifier, self).fit(x, y, **kwargs)
<add> return super().fit(x, y, **kwargs)
<ide>
<ide> def predict(self, x, **kwargs):
<ide> """Returns the class predictions for the given test data. | 260 |