Column | Type | Range
---|---|---
id | string | length 1 to 8
text | string | length 72 to 9.81M
addition_count | int64 | 0 to 10k
commit_subject | string | length 0 to 3.7k
deletion_count | int64 | 0 to 8.43k
file_extension | string | length 0 to 32
lang | string | length 1 to 94
license | string | 10 classes
repo_name | string | length 9 to 59
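Each value in the `text` column below concatenates a file name, the file contents before the change, the commit message, and the diff, delimited by the `<NME>`, `<BEF>`, `<MSG>`, and `<DFF>` markers. A minimal sketch of splitting one such value into its sections follows; the function name and the toy example row are illustrative assumptions, not part of the dataset or an official loader.

```python
# A minimal sketch (assumption: every `text` value uses the four markers seen
# in the rows below; this is illustrative, not an official loader).
MARKERS = ("<NME>", "<BEF>", "<MSG>", "<DFF>")

def split_text(text: str) -> dict:
    """Split a row's `text` field into its marked sections."""
    # Record where each marker occurs, then slice between consecutive markers.
    positions = sorted((text.find(m), m) for m in MARKERS if m in text)
    parts = {}
    for i, (start, marker) in enumerate(positions):
        end = positions[i + 1][0] if i + 1 < len(positions) else len(text)
        parts[marker.strip("<>")] = text[start + len(marker):end].strip()
    return parts

# Toy row shaped like the ones below (hypothetical content):
row_text = "<NME> setup.py\n<BEF> # file before the change\n<MSG> Fix docs link\n<DFF> @@ -1 +1 @@"
print(split_text(row_text)["MSG"])  # -> "Fix docs link"
```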
id | text | addition_count | commit_subject | deletion_count | file_extension | lang | license | repo_name
---|---|---|---|---|---|---|---|---
1800 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitch.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> DOCS : Fixed bad link to docs.
<DFF> @@ -43,7 +43,7 @@ setup(name="hitch",
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
- url='https://hitch.readthedocs.org/',
+ url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
| 1 | DOCS : Fixed bad link to docs. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1801 | <NME> index.rst
<BEF> Hitch
=====
Hitch is a framework for :doc:`/glossary/integration_testing`.
Features
--------
* Runs reliably without modification on Mac OS X, Ubuntu/Debian, Fedora, CentOS and Arch and in Docker.
* Automates its own deployment and does not interfere with your system other than to install packages.
* Provides boilerplate and tools to substantially minimize the problem of :doc:`/glossary/brittle_tests`.
* Readable :doc:`/glossary/hitch_test_description_language` that doesn't require you to write regular expressions.
* Built-in :doc:`/glossary/service_orchestration` library for running groups of services (databases, webservers, microservices) together.
* Built-in :doc:`/glossary/step_library` for common tasks (interacting with browsers, command line & emails).
* Provides a suitable environment for :doc:`/glossary/acceptance_test_driven_development` complete with debugging tools.
quickstart/index
faq/index
glossary/index
plugins/index
Documentation
-------------
.. toctree::
:maxdepth: 2
quickstart/index
howto/index
faq/index
api/index
misc/index
See the full :doc:`/glossary/index` here.
<MSG> DOCS : Overhaul of APIs
<DFF> @@ -17,4 +17,6 @@ Contents:
quickstart/index
faq/index
+ api/index
glossary/index
+
| 2 | DOCS : Overhaul of APIs | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1802 | <NME> index.rst
<BEF> Getting started quickly with Hitch
==================================
This is a basic introduction to getting your first hitch test up and running.
Install prerequisites
---------------------
You should have a reasonably up to date Ubuntu, Debian, Arch, Fedora or Mac.
On Ubuntu/Debian::
$ sudo apt-get install python3 python-pip python-virtualenv
$ sudo pip install --upgrade hitch
On Mac OS X::
$ brew install python python3
$ pip install --upgrade hitch virtualenv
On Arch::
$ sudo pacman -Sy python python-virtualenv
$ sudo pip install --upgrade hitch
On Fedora/RHEL/CentOS::
$ sudo yum install python3 python-virtualenv python-pip python3
$ sudo pip install --upgrade hitch
.. note::
The 'hitch' package (the bootstrapper) is a small python package with no dependencies.
Create your test directory
--------------------------
Create a directory inside the root of your project to put your tests in. For example::
~/yourproject$ mkdir tests
~/yourproject$ cd tests
~/yourproject/tests$
If you already have a tests directory you can call it something else.
Create the hitch environment
----------------------------
To initialize a hitch environment, run hitch init in your tests directory::
~/yourproject/tests$ hitch init
This will:
* Install any necessary system packages required to run hitch.
* Create a .hitch directory, create a python 3 virtualenv in it and install all the necessary packages to run hitch tests there.
* Ask you some basic questions about the project which you are testing.
* Create a skeleton hitch project template for you to use based upon the answers.
The skeleton template will include all of the following:
* :doc:`/glossary/hitchreqs.txt`
* :doc:`/glossary/engine.py`
* tdd.settings (:doc:`/glossary/hitch_settings`)
* ci.settings
* all.settings
* :doc:`/glossary/stub.test`
* README.rst
You might want to take a look around these files. They all try to be self-explanatory.
Running your first test
-----------------------
You can now run the stub test. Try running it in test driven development mode::
$ hitch test stub.test --settings tdd.settings
The first time you run this command it *may take a while* (up to 25 minutes depending upon what you answered).
In [1]:
This is the interactive prompt that appears during the pause step. This is an :doc:`glossary/ipython`
prompt that can be used to interact with your app, inspect logs and try out test
steps.
`twitter feed <https://twitter.com/testhitch>`_ for updates and news.
Back?
-----
.. note::
If the stub test failed, please `raise an issue <https://github.com/hitchtest/hitch/issues/new>`_.
.. note::
Was there anything that confused you? Please tell us! Help with :doc:`misc/clarifying_documentation`.
Further reading
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
SUCCESS
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
* :doc:`/howto/continuous_integration`
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
.. note::
Was there anything that went wrong or was confusing? Please tell us! Help with :doc:`/misc/clarifying_documentation`.
Further reading
---------------
* :doc:`/howto/web_applications`
* :doc:`/howto/command_line_applications`
Advanced topics
---------------
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
* :doc:`/howto/external_apis`
* :doc:`/howto/continuous_integration`
Plugin Documentation
--------------------
.. toctree::
:glob:
:maxdepth: 1
/plugins/*
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
<MSG> DOCS : Updated README, glossary and some tutorials.
<DFF> @@ -84,7 +84,7 @@ Once the test run is done setting up and running things, if there were no proble
In [1]:
-This is the interactive prompt that appears during the pause step. This is an :doc:`glossary/ipython`
+This is the interactive prompt that appears during the pause step. This is an :doc:`/glossary/ipython`
prompt that can be used to interact with your app, inspect logs and try out test
steps.
@@ -101,7 +101,7 @@ Happy testing!
.. note::
- Was there anything that confused you? Please tell us! Help with :doc:`misc/clarifying_documentation`.
+ Was there anything that went wrong or was confusing? Please tell us! Help with :doc:`/misc/clarifying_documentation`.
Further reading
@@ -115,8 +115,19 @@ Advanced topics
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
+* :doc:`/howto/external_apis`
* :doc:`/howto/continuous_integration`
+Plugin Documentation
+--------------------
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ /plugins/*
+
+
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
| 13 | DOCS : Updated README, glossary and some tutorials. | 2 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1803 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
cli.add_command(clean)
cli.add_command(freeze)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> BUG : Fixed python 2.6.x bug caused by using {} instead of {0} in string.format().
<DFF> @@ -243,7 +243,7 @@ def run():
cli.add_command(clean)
cli.add_command(freeze)
cli.add_command(init)
- cli.help = "Hitch test runner for:\n\n {}.".format(hitchdir.get_hitch_directory())
+ cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
| 1 | BUG : Fixed python 2.6.x bug caused by using {} instead of {0} in string.format(). | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1804 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
-----
Dragon is a **C**(Computation)**G**(Graph)**V**(Virtual)**M**(Machine) based distributed deep learning framework.
Our goal is to reduce the unnecessary structures or interfaces. Therefore, in addition to feed or fetch, the last thing is designing a objective function through available operators.
Besides, we demonstrate a cross-frameworks frontend(**Deep Learning VirtualBox**) is feasible, and further more, will get benefit from all participating crucial interfaces especially when one is not reasonable.
## News
Dragon 0.2.1 Released - The preliminary documentation, and massive known bugs are fixed.
## License and Citation
Dragon is released under the [BSD 2-Clause license](https://github.com/neopenx/Dragon/blob/master/LICENSE).
Please cite Dragon in your publications if it helps your research:
Journal = {arXiv preprint arXiv:1707.08265},
Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
Year = {2017}
}
<MSG> Add ND-Crop & ND-Pad support
<DFF> @@ -5,16 +5,15 @@
-----
Dragon is a **C**(Computation)**G**(Graph)**V**(Virtual)**M**(Machine) based distributed deep learning framework.
-Our goal is to reduce the unnecessary structures or interfaces. Therefore, in addition to feed or fetch, the last thing is designing a objective function through available operators.
+Our goal is to reduce the unnecessary structures or interfaces. Therefore, in addition to feed or fetch, the last thing is designing a objective function through all available operators.
-Besides, we demonstrate a cross-frameworks frontend(**Deep Learning VirtualBox**) is feasible, and further more, will get benefit from all participating crucial interfaces especially when one is not reasonable.
+Besides, we demonstrate that a cross-frameworks frontend(**Deep Learning VirtualBox**) is feasible, and further more, will get benefit from all participating crucial interfaces especially when one is not reasonable.
## News
Dragon 0.2.1 Released - The preliminary documentation, and massive known bugs are fixed.
## License and Citation
-
Dragon is released under the [BSD 2-Clause license](https://github.com/neopenx/Dragon/blob/master/LICENSE).
Please cite Dragon in your publications if it helps your research:
@@ -24,4 +23,5 @@ Please cite Dragon in your publications if it helps your research:
Journal = {arXiv preprint arXiv:1707.08265},
Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
Year = {2017}
+
}
| 3 | Add ND-Crop & ND-Pad support | 3 | .md | md | bsd-2-clause | neopenx/Dragon |
1805 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
-----
Dragon is a **C**(Computation)**G**(Graph)**V**(Virtual)**M**(Machine) based distributed deep learning framework.
Our goal is to reduce the unnecessary structures or interfaces. Therefore, in addition to feed or fetch, the last thing is designing a objective function through available operators.
Besides, we demonstrate a cross-frameworks frontend(**Deep Learning VirtualBox**) is feasible, and further more, will get benefit from all participating crucial interfaces especially when one is not reasonable.
## News
Dragon 0.2.1 Released - The preliminary documentation, and massive known bugs are fixed.
## License and Citation
Dragon is released under the [BSD 2-Clause license](https://github.com/neopenx/Dragon/blob/master/LICENSE).
Please cite Dragon in your publications if it helps your research:
Journal = {arXiv preprint arXiv:1707.08265},
Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
Year = {2017}
}
<MSG> Add ND-Crop & ND-Pad support
<DFF> @@ -5,16 +5,15 @@
-----
Dragon is a **C**(Computation)**G**(Graph)**V**(Virtual)**M**(Machine) based distributed deep learning framework.
-Our goal is to reduce the unnecessary structures or interfaces. Therefore, in addition to feed or fetch, the last thing is designing a objective function through available operators.
+Our goal is to reduce the unnecessary structures or interfaces. Therefore, in addition to feed or fetch, the last thing is designing a objective function through all available operators.
-Besides, we demonstrate a cross-frameworks frontend(**Deep Learning VirtualBox**) is feasible, and further more, will get benefit from all participating crucial interfaces especially when one is not reasonable.
+Besides, we demonstrate that a cross-frameworks frontend(**Deep Learning VirtualBox**) is feasible, and further more, will get benefit from all participating crucial interfaces especially when one is not reasonable.
## News
Dragon 0.2.1 Released - The preliminary documentation, and massive known bugs are fixed.
## License and Citation
-
Dragon is released under the [BSD 2-Clause license](https://github.com/neopenx/Dragon/blob/master/LICENSE).
Please cite Dragon in your publications if it helps your research:
@@ -24,4 +23,5 @@ Please cite Dragon in your publications if it helps your research:
Journal = {arXiv preprint arXiv:1707.08265},
Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
Year = {2017}
+
}
| 3 | Add ND-Crop & ND-Pad support | 3 | .md | md | bsd-2-clause | neopenx/Dragon |
1806 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class RedisSentinelManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381";
private String host = DEFAULT_HOST;
private static final String DEFAULT_MASTER_NAME = "mymaster";
private String masterName = DEFAULT_MASTER_NAME;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private GenericObjectPoolConfig genericObjectPoolConfig = new GenericObjectPoolConfig();
private void init() {
synchronized (this) {
protected Jedis getJedis() {
String[] sentinelHosts = host.split(",\\s+");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, genericObjectPoolConfig, timeout, password, database);
}
}
}
if (jedisPool == null) {
synchronized (RedisSentinelManager.class) {
if (jedisPool == null) {
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, getJedisPoolConfig(), timeout, soTimeout, password, database);
}
}
}
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
this.masterName = masterName;
}
public GenericObjectPoolConfig getGenericObjectPoolConfig() {
return genericObjectPoolConfig;
}
public void setGenericObjectPoolConfig(GenericObjectPoolConfig genericObjectPoolConfig) {
this.genericObjectPoolConfig = genericObjectPoolConfig;
}
}
}
}
<MSG> Merge branch 'master' of https://github.com/alexxiyang/shiro-redis
# Conflicts:
# src/main/java/org/crazycake/shiro/IRedisManager.java
# src/main/java/org/crazycake/shiro/RedisCache.java
# src/main/java/org/crazycake/shiro/RedisManager.java
# src/main/java/org/crazycake/shiro/RedisSentinelManager.java
<DFF> @@ -1,6 +1,6 @@
package org.crazycake.shiro;
-import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
+import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
@@ -19,11 +19,14 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
+ // timeout for jedis try to read data from redis server
+ private int soTimeout = Protocol.DEFAULT_TIMEOUT;
+
private String password;
private int database = Protocol.DEFAULT_DATABASE;
- private GenericObjectPoolConfig genericObjectPoolConfig = new GenericObjectPoolConfig();
+ private JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
private void init() {
synchronized (this) {
@@ -31,7 +34,7 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
String[] sentinelHosts = host.split(",\\s+");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
- jedisPool = new JedisSentinelPool(masterName, sentinels, genericObjectPoolConfig, timeout, password, database);
+ jedisPool = new JedisSentinelPool(masterName, sentinels, jedisPoolConfig, timeout, soTimeout, password, database);
}
}
}
@@ -85,11 +88,19 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
this.masterName = masterName;
}
- public GenericObjectPoolConfig getGenericObjectPoolConfig() {
- return genericObjectPoolConfig;
+ public JedisPoolConfig getJedisPoolConfig() {
+ return jedisPoolConfig;
+ }
+
+ public void setJedisPoolConfig(JedisPoolConfig jedisPoolConfig) {
+ this.jedisPoolConfig = jedisPoolConfig;
+ }
+
+ public int getSoTimeout() {
+ return soTimeout;
}
- public void setGenericObjectPoolConfig(GenericObjectPoolConfig genericObjectPoolConfig) {
- this.genericObjectPoolConfig = genericObjectPoolConfig;
+ public void setSoTimeout(int soTimeout) {
+ this.soTimeout = soTimeout;
}
}
| 18 | Merge branch 'master' of https://github.com/alexxiyang/shiro-redis | 7 | .java | java | mit | alexxiyang/shiro-redis |
1807 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
python3 = check_output(["which", "python3"]).replace("\n", "")
if hitchdir.hitch_exists():
stderr.write("Hitch has already been initialized in this directory.\n")
stderr.flush()
exit(1)
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
cli.add_command(uninstall)
cli.add_command(clean)
cli.add_command(freeze)
cli.help = "Hitch test runner for:\n\n {}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : hitch init gives a clearer warning message.
<DFF> @@ -30,7 +30,8 @@ def init():
python3 = check_output(["which", "python3"]).replace("\n", "")
if hitchdir.hitch_exists():
- stderr.write("Hitch has already been initialized in this directory.\n")
+ stderr.write("Hitch has already been initialized in this directory or a directory above it.\n")
+ stderr.write("If you wish to re-initialize hitch in this directory, delete the .hitch directory and run hitch init here again.\n")
stderr.flush()
exit(1)
@@ -167,6 +168,7 @@ def run():
cli.add_command(uninstall)
cli.add_command(clean)
cli.add_command(freeze)
+ cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
| 3 | FEATURE : hitch init gives a clearer warning message. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1808 | <NME> what_does_the_init_script_do.rst
<BEF> What does the init script do?
=============================
.. note::
This script tries to respect your existing environment as much as possible and
avoids the use of sudo except where necessary to install packages via your
system's package manager.
The init script is a one step method of setting up a hitch environment and running
all the tests in a directory. It is intended to be a low friction way of:
On Ubuntu/Debian::
$ sudo apt-get install python python3 python-dev python-setuptools python-virtualenv python3-dev automake libtool
On Fedora/Red Hat/CentOS::
$ sudo yum install python python-devel python-setuptools python-virtualenv python3 python3-devel automake libtool
On Arch::
$ sudo pacman -Sy python python-setuptools python-virtualenv python automake libtool
2. Install or upgrades the hitch bootstrap script (may require sudo)
--------------------------------------------------------------------
If pipsi is found, the script will attempt::
$ pipsi install --upgrade hitch
On the Mac it will run::
On Arch::
$ sudo pacman -Sy python python-setuptools python-virtualenv python automake libtool
$ sudo pip install --upgrade hitch
This script has zero package dependencies.
See also:
2. Install or upgrades the hitch bootstrap script (may require sudo)
--------------------------------------------------------------------
Takes approximately: 5 seconds
This is a small python script with no dependencies that bootstraps your testing
environment and lets you trigger test runs. It installs a single command ('hitch')
on your system's path.
On the Mac the init script will run::
$ pip install --upgrade hitch
On Linux::
$ sudo pip install --upgrade hitch
See also:
* :doc:`/faq/what_does_the_hitch_bootstrap_script_do`
3. Runs "hitch clean", "hitch cleanpkg" and "hitch init" in the current directory (may require sudo)
----------------------------------------------------------------------------------------------------
During the course of running the tests, the test may attempt to use sudo to install
necessary packages. It will always print the exact command it is trying to run
(e.g. sudo apt-get install xvfb). If you run this command in another terminal, it
won't complain. If the packages are already installed, hitch will not attempt to
install them.
See also:
so it can be rebuilt::
$ hitch cleanpkg
This builds a .hitch directory in the current directory and installs any more required
system packages via unixpackage. This asks to install system packages specified in
hitch plugins and packages specified in the system.packages file::
$ hitch init
* :doc:`/faq/what_does_hitch_init_do`
4. Run "hitch test ." to run all tests (does not require sudo)
--------------------------------------------------------------
Takes approximately: 15 minutes (subsequent test runs will be quicker)
If there are tests in the directory where the init script is run, it will run all
of them.
During the course of running the tests it will attempt to download and compile
certain pieces of software (e.g. postgres). The software will be installed in the
"~/.hitchpkg" directory. This does not require sudo and it will not interfere
with software you may already have installed.
See also:
* :doc:`why_is_my_test_downloading_and_compiling_software`
* :doc:`why_does_the_first_test_run_take_so_long`
All software installed there can easily be removed by deleting the "~/.hitchpkg"
directory or running the command "hitch cleanpkg".
See also:
* :doc:`how_do_i_uninstall_hitch_completely`
<MSG> DOCS : Updated the FAQ on what the init script does
<DFF> @@ -13,23 +13,25 @@ If you'd prefer instead to perform the steps manually, you can use this as a gui
On Ubuntu/Debian::
- $ sudo apt-get install python python3 python-dev python-setuptools python-virtualenv python3-dev automake libtool
+ $ sudo apt-get install -y python python3 python-dev python-setuptools python-virtualenv python3-dev automake libtool
On Fedora/Red Hat/CentOS::
- $ sudo yum install python python-devel python-setuptools python-virtualenv python3 python3-devel automake libtool
+ $ yum -y install python python-devel python-setuptools python-virtualenv python-pip python3 python3-devel automake libtool gcc-c++
On Arch::
- $ sudo pacman -Sy python python-setuptools python-virtualenv python automake libtool
+ $ pacman -Sy python python-setuptools python-virtualenv python automake libtool
+On Mac OS X::
-2. Install or upgrades the hitch bootstrap script (may require sudo)
---------------------------------------------------------------------
+ $ brew install python python3 libtool automake cmake
+
+ $ pip install --upgrade pip setuptools virtualenv
-If pipsi is found, the script will attempt::
- $ pipsi install --upgrade hitch
+2. Install or upgrades the hitch bootstrap script (may require sudo)
+--------------------------------------------------------------------
On the Mac it will run::
@@ -39,7 +41,7 @@ Or on Linux::
$ sudo pip install --upgrade hitch
-This script has zero package dependencies.
+This is a small python script with zero dependencies.
See also:
@@ -72,9 +74,9 @@ of them.
During the course of running the tests, the test may attempt to use sudo to install
necessary packages. It will always print the exact command it is trying to run
-(e.g. sudo apt-get install xvfb). If you run this command in another terminal, it
-won't complain. If the packages are already installed, hitch will not attempt to
-install them.
+(e.g. sudo apt-get install xvfb).
+
+If the packages are already installed, hitch will not attempt to install them.
See also:
| 13 | DOCS : Updated the FAQ on what the init script does | 11 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1809 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class RedisSentinelManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381";
private String host = DEFAULT_HOST;
private static final String DEFAULT_MASTER_NAME = "mymaster";
private String masterName = DEFAULT_MASTER_NAME;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// timeout for jedis try to read data from redis server
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private volatile JedisSentinelPool jedisPool;
private void init() {
synchronized (this) {
if (jedisPool == null) {
String[] sentinelHosts = host.split(",\\s+");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, jedisPoolConfig, timeout, soTimeout, password, database);
}
private void init() {
if (jedisPool == null) {
synchronized (RedisSentinelManager.class) {
if (jedisPool == null) {
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, getJedisPoolConfig(), timeout, soTimeout, password, database);
}
}
}
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
public void setMasterName(String masterName) {
this.masterName = masterName;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
public JedisSentinelPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisSentinelPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> Fix host parse bug
<DFF> @@ -31,7 +31,7 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
private void init() {
synchronized (this) {
if (jedisPool == null) {
- String[] sentinelHosts = host.split(",\\s+");
+ String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, jedisPoolConfig, timeout, soTimeout, password, database);
| 1 | Fix host parse bug | 1 | .java | java | mit | alexxiyang/shiro-redis |
1810 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
- Run 3rdparty/setup_mpi.sh
```Shell
sudo ./setup_mpi.sh
```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
- Microsoft MPI is intergrated into 3rdparty and you should do nothing
<MSG> speedup io
<DFF> @@ -65,9 +65,13 @@
- Run 3rdparty/setup_mpi.sh
```Shell
- sudo ./setup_mpi.sh
+ ./setup_mpi.sh
```
+ - Install
+ ```Shell
+ sudo cp openmpi/install/bin/mpirun /usr/bin
+ ```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
- Microsoft MPI is intergrated into 3rdparty and you should do nothing
| 5 | speedup io | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1811 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
- Run 3rdparty/setup_mpi.sh
```Shell
sudo ./setup_mpi.sh
```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
- Microsoft MPI is intergrated into 3rdparty and you should do nothing
<MSG> speedup io
<DFF> @@ -65,9 +65,13 @@
- Run 3rdparty/setup_mpi.sh
```Shell
- sudo ./setup_mpi.sh
+ ./setup_mpi.sh
```
+ - Install
+ ```Shell
+ sudo cp openmpi/install/bin/mpirun /usr/bin
+ ```
#### Windows:
- We use Microsoft MPI which can perfectly run at lastest Windows10
- Microsoft MPI is intergrated into 3rdparty and you should do nothing
| 5 | speedup io | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1812 | <NME> common.py
<BEF> # --------------------------------------------------------
# Caffe @ Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
import dragon.ops as ops
from dragon.core.tensor import Tensor
from ..layer import Layer
class InnerProductLayer(Layer):
"""The implementation of ``InnerProductLayer``.
Parameters
----------
num_output : int
The output dim. Refer `InnerProductParameter.num_output`_.
bias_term : boolean
Whether to use bias. Refer `InnerProductParameter.bias_term`_.
weight_filler : caffe_pb2.FillerParameter
The filler of weight. Refer `InnerProductParameter.weight_filler`_.
bias_filler : caffe_pb2.FillerParameter
The filler of bias. Refer `InnerProductParameter.bias_filler`_.
axis : int
The start axis to calculate. Refer `InnerProductParameter.axis`_.
transpose : boolean
Whether to transpose the weights. Refer `InnerProductParameter.transpose`_.
"""
def __init__(self, LayerParameter):
super(InnerProductLayer, self).__init__(LayerParameter)
param = LayerParameter.inner_product_param
self._param = {'axis': param.axis,
'num_output': param.num_output,
'TransW': not param.transpose}
weight = Tensor(LayerParameter.name + '@param0')
weight_diff = Tensor(LayerParameter.name + '@param0_grad')
self.Fill(weight, param, 'weight_filler')
self._blobs.append({'data': weight, 'diff': weight_diff})
if param.bias_term:
bias = Tensor(LayerParameter.name + '@param1')
bias_diff = Tensor(LayerParameter.name + '@param1_grad')
self.Fill(bias, param, 'bias_filler')
self._blobs.append({'data': bias, 'diff': bias_diff})
def Setup(self, bottom):
super(InnerProductLayer, self).Setup(bottom)
return ops.InnerProduct(bottom + [blob['data'] for blob in self._blobs], **self._param)
class AccuracyLayer(Layer):
"""The implementation of ``AccuracyLayer``.
Parameters
----------
top_k : int
The top-k accuracy. Refer `AccuracyParameter.top_k`_.
axis : int
The axis of classes. Refer `AccuracyParameter.axis`_.
ignore_label : int
The label to ignore. Refer `AccuracyParameter.ignore_label`_.
"""
def __init__(self, LayerParameter):
super(AccuracyLayer, self).__init__(LayerParameter)
param = LayerParameter.accuracy_param
self._param = {'top_k': param.top_k,
'ignore_labels': [param.ignore_label]
if param.HasField('ignore_label') else []}
def Setup(self, bottom):
super(AccuracyLayer, self).Setup(bottom)
return ops.Accuracy(bottom, **self._param)
class PythonLayer(Layer):
"""The implementation of ``PythonLayer``.
Parameters
----------
module : str
The module. Refer `PythonParameter.module`_.
layer : str
The class name of layer. Refer `PythonParameter.layer`_.
param_str : str
The str describing parameters. Refer `PythonParameter.param_str`_.
"""
def __init__(self, LayerParameter):
super(PythonLayer, self).__init__(LayerParameter)
param = LayerParameter.python_param
self._param = {'module': param.module,
'op': param.layer,
'param_str': param.param_str}
def Setup(self, bottom):
super(PythonLayer, self).Setup(bottom)
return ops.Run(bottom, nout=len(self._top), **self._param)
class EltwiseLayer(Layer):
"""The implementation of ``EltwiseLayer``.
Parameters
----------
operation : EltwiseParameter.EltwiseOp
The operation. Refer `EltwiseParameter.operation`_.
coeff : list of float
The coefficients. Refer `EltwiseParameter.coeff`_.
"""
def __init__(self, LayerParameter):
super(EltwiseLayer, self).__init__(LayerParameter)
param = LayerParameter.eltwise_param
self._param = {'operation': {0: 'PROD', 1: 'SUM', 2: 'MAX'}[param.operation],
'coeffs': [element for element in param.coeff]
if len(param.coeff) > 0 else None}
def Setup(self, bottom):
super(EltwiseLayer, self).Setup(bottom)
return ops.Eltwise(bottom, **self._param)
class AddLayer(Layer):
"""
The extended implementation of ``EltwiseLayer``.
"""
def __init__(self, LayerParameter):
super(AddLayer, self).__init__(LayerParameter)
def Setup(self, bottom):
super(AddLayer, self).Setup(bottom)
return ops.Add(bottom, **self._param)
class ConcatLayer(Layer):
"""The implementation of ``ConcatLayer``.
Parameters
----------
axis : int
The axis to concatenate. Refer `ConcatParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ConcatLayer, self).__init__(LayerParameter)
param = LayerParameter.concat_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(ConcatLayer, self).Setup(bottom)
return ops.Concat(bottom, **self._param)
class DenseConcatLayer(Layer):
"""The extended implementation for `DenseNet`_.
Parameters
----------
axis : int
The axis to concatenate. Refer `ConcatParameter.axis`_.
growth_rate : int
The growth rate.
"""
def __init__(self, LayerParameter):
super(DenseConcatLayer, self).__init__(LayerParameter)
param = LayerParameter.dense_concat_param
self._param = {'axis': param.axis,
'growth_rate': param.growth_rate}
def Setup(self, bottom):
super(DenseConcatLayer, self).Setup(bottom)
return ops.DenseConcat(bottom, **self._param)
class CropLayer(Layer):
"""The implementation of ``CropLayer``.
Parameters
----------
axis : int
The start axis. Refer `CropParameter.axis`_.
offset : list of int
The offsets. Refer `CropParameter.offset`_.
"""
def __init__(self, LayerParameter):
super(CropLayer, self).__init__(LayerParameter)
param = LayerParameter.crop_param
self._param = {'start_axis': param.axis,
'offsets': [int(element) for element in param.offset]}
def Setup(self, bottom):
super(CropLayer, self).Setup(bottom)
self._param['shape_like'] = bottom[1]
self._param['starts'] = self._param['ends'] = None
return ops.Crop(bottom[0], **self._param)
class ReshapeLayer(Layer):
"""The implementation of ``ReshapeLayer``.
Parameters
----------
shape : list of int
The output shape. Refer `ReshapeParameter.shape`_.
"""
def __init__(self, LayerParameter):
super(ReshapeLayer, self).__init__(LayerParameter)
param = LayerParameter.reshape_param
shape = param.shape
self._param = {'shape': [int(element) for element in shape.dim]}
def Setup(self, bottom):
super(ReshapeLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Reshape(input, **self._param)
class PermuteLayer(Layer):
"""The implementation of ``PermuteLayer``.
Parameters
----------
order : list of int
The permutation. Refer `PermuteParameter.order`_.
"""
def __init__(self, LayerParameter):
super(PermuteLayer, self).__init__(LayerParameter)
param = LayerParameter.permute_param
self._param = {'perms': [int(element) for element in param.order]}
def Setup(self, bottom):
super(PermuteLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Transpose(input, **self._param)
class FlattenLayer(Layer):
"""The implementation of ``FlattenLayer``.
Parameters
----------
axis : int
The start axis. Refer `FlattenParameter.axis`_.
end_axis : int
The end axis. Refer `FlattenParameter.end_axis`_.
"""
def __init__(self, LayerParameter):
super(FlattenLayer, self).__init__(LayerParameter)
param = LayerParameter.flatten_param
        axis = param.axis
        end_axis = param.end_axis
num_axes = end_axis - axis + 1 if end_axis != -1 else -1
'eps': param.eps}
mean = Tensor(LayerParameter.name + '@param0').Constant()
var = Tensor(LayerParameter.name + '@param1').Constant()
scale = Tensor(LayerParameter.name + '@param2').Uniform(low=0.0, high=1.0)
bias = Tensor(LayerParameter.name + '@param3').Constant(value=0.0)
self.norm_blobs = [{'data': mean, 'diff': None},
{'data': var, 'diff': None}]
class GatherLayer(Layer):
"""The extended implementation of ``GatherOp``.
Parameters
----------
axis : int
The axis for gathering. Refer ``GatherParameter.axis``.
"""
def __init__(self, LayerParameter):
super(GatherLayer, self).__init__(LayerParameter)
param = LayerParameter.gather_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(GatherLayer, self).Setup(bottom)
return ops.Gather(bottom[0], indices=bottom[1], **self._param)
class SoftmaxLayer(Layer):
"""The implementation of ``SoftmaxLayer``.
Parameters
----------
axis : int
The axis to perform softmax. Refer `SoftmaxParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(SoftmaxLayer, self).__init__(LayerParameter)
param = LayerParameter.softmax_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(SoftmaxLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Softmax(input, **self._param)
class ArgMaxLayer(Layer):
"""The implementation of ``ArgMaxLayer``.
Parameters
----------
top_k : int
The top k results to keep. Refer `ArgMaxParameter.top_k`_.
axis : int
The axis to perform argmax. Refer `ArgMaxParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ArgMaxLayer, self).__init__(LayerParameter)
param = LayerParameter.argmax_param
self._param = {'top_k': param.top_k,
'axis': param.axis,
'keep_dims': True}
def Setup(self, bottom):
super(ArgMaxLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Argmax(input, **self._param)
class BatchNormLayer(Layer):
"""The implementation of ``BatchNormLayer``.
Parameters
----------
use_global_stats : boolean
Refer `BatchNormParameter.use_global_stats`_.
moving_average_fraction : float
Refer `BatchNormParameter.moving_average_fraction`_.
eps : float
Refer `BatchNormParameter.eps`_.
"""
def __init__(self, LayerParameter):
super(BatchNormLayer, self).__init__(LayerParameter)
param = LayerParameter.batch_norm_param
self._param = {'use_stats': int(param.use_global_stats)
if param.HasField('use_global_stats') else -1,
'momentum': param.moving_average_fraction,
'eps': param.eps,
'axis': 1,
'mode': 'CAFFE'}
# mean, var, factor are set to 0 in order to do statistics
mean = Tensor(LayerParameter.name + '@param0').Constant(value=0.0)
var = Tensor(LayerParameter.name + '@param1').Constant(value=0.0)
factor = Tensor(LayerParameter.name + '@param2').Constant(value=0.0)
        # in dragon, setting 'diff' to None skips the gradient computation automatically
# but in bvlc-caffe1, you must set lr_mult = 0 manually
self._blobs.append({'data': mean, 'diff': None})
self._blobs.append({'data': var, 'diff': None})
self._blobs.append({'data': factor, 'diff': None})
def Setup(self, bottom):
super(BatchNormLayer, self).Setup(bottom)
return ops.BatchNorm(bottom + [blob['data'] for blob in self._blobs], **self._param)
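# Note on the parameter mapping above: 'use_stats' becomes 1 or 0 when use_global_stats
# is set explicitly in the prototxt, and -1 when it is omitted, in which case the
# backend presumably falls back to choosing by phase (moving statistics in TRAIN,
# the accumulated mean/var in TEST).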
class BatchRenormLayer(Layer):
"""The implementation of ``BatchRenormLayer``.
Parameters
----------
use_global_stats : boolean
Refer ``BatchRenormParameter.use_global_stats``.
moving_average_fraction : float
Refer ``BatchRenormParameter.moving_average_fraction``.
eps : float
Refer ``BatchRenormParameter.eps``.
r_max : float
Refer ``BatchRenormParameter.r_max``.
d_max : float
Refer ``BatchRenormParameter.d_max``.
t_delta : float
Refer ``BatchRenormParameter.t_delta``.
"""
def __init__(self, LayerParameter):
super(BatchRenormLayer, self).__init__(LayerParameter)
param = LayerParameter.batch_renorm_param
self._param = {'use_stats': int(param.use_global_stats)
if param.HasField('use_global_stats') else -1,
'momentum': param.moving_average_fraction,
'eps': param.eps,
'r_max': float(param.r_max),
'd_max': float(param.d_max),
't_delta': float(param.t_delta),
'axis': 1,
'mode': 'CAFFE'}
mean = Tensor(LayerParameter.name + '@param0').Constant(value=0.0)
var = Tensor(LayerParameter.name + '@param1').Constant(value=0.0)
factor = Tensor(LayerParameter.name + '@param2').Constant(value=0.0)
self._blobs.append({'data': mean, 'diff': None})
self._blobs.append({'data': var, 'diff': None})
self._blobs.append({'data': factor, 'diff': None})
def Setup(self, bottom):
super(BatchRenormLayer, self).Setup(bottom)
return ops.BatchRenorm(bottom + [blob['data'] for blob in self._blobs], **self._param)
class InstanceNormLayer(Layer):
"""
The implementation of ``InstanceNormLayer``.
    Introduced by `[Ulyanov et al., 2016] <https://arxiv.org/abs/1607.08022>`_
"""
def __init__(self, LayerParameter):
super(InstanceNormLayer, self).__init__(LayerParameter)
self._param = {'axis': 1}
def Setup(self, bottom):
super(InstanceNormLayer, self).Setup(bottom)
return ops.InstanceNorm(bottom[0], **self._param)
class ScaleLayer(Layer):
"""The implementation of ``ScaleLayer``.
Parameters
----------
axis : int
The start axis. Refer `ScaleParameter.axis`_.
num_axes : int
The number of axes. Refer `ScaleParameter.num_axes`_.
filler : FillerParameter
The filler of scale parameter. Refer `ScaleParameter.filler`_.
bias_term : boolean
Whether to use bias. Refer `ScaleParameter.bias_term`_.
bias_filler : FillerParameter
The filler of bias parameter. Refer `ScaleParameter.bias_filler`_.
"""
def __init__(self, LayerParameter):
super(ScaleLayer, self).__init__(LayerParameter)
param = LayerParameter.scale_param
self._param = {'axis': param.axis,
'num_axes': param.num_axes}
scale = Tensor(LayerParameter.name + '@param0')
scale_diff = Tensor(LayerParameter.name + '@param0_grad')
if param.HasField('filler'):
self.Fill(scale, param, 'filler')
else: scale.Constant(value=1.0)
self._blobs.append({'data': scale, 'diff': scale_diff})
if param.bias_term:
bias = Tensor(LayerParameter.name + '@param1')
bias_diff = Tensor(LayerParameter.name + '@param1_grad')
            # automatically fill with 0 if bias_filler is not specified
self.Fill(bias, param, 'bias_filler')
self._blobs.append({'data': bias, 'diff': bias_diff})
def Setup(self, bottom):
super(ScaleLayer, self).Setup(bottom)
return ops.Scale(bottom + [blob['data'] for blob in self._blobs], **self._param)
class BNLayer(Layer):
"""The implementation of ``BNLayer``.
Parameters
----------
use_global_stats : boolean
Refer `BatchNormParameter.use_global_stats`_.
moving_average_fraction : float
Refer `BatchNormParameter.moving_average_fraction`_.
eps : float
Refer `BatchNormParameter.eps`_.
filler : FillerParameter
The filler of scale parameter. Refer `ScaleParameter.filler`_.
bias_filler : FillerParameter
The filler of bias parameter. Refer `ScaleParameter.bias_filler`_.
"""
def __init__(self, LayerParameter):
super(BNLayer, self).__init__(LayerParameter)
bn_param = LayerParameter.batch_norm_param
scale_param = LayerParameter.scale_param
self._param = {'use_stats': int(bn_param.use_global_stats)
if bn_param.HasField('use_global_stats') else -1,
'momentum': bn_param.moving_average_fraction,
'eps': bn_param.eps,
'axis': 1}
mean = Tensor(LayerParameter.name + '@param0').Constant(value=0.0)
var = Tensor(LayerParameter.name + '@param1').Constant(value=0.0)
scale = Tensor(LayerParameter.name + '@param2')
scale_diff = Tensor(LayerParameter.name + '@param2_grad')
bias = Tensor(LayerParameter.name + '@param3')
bias_diff = Tensor(LayerParameter.name + '@param3_grad')
if scale_param.HasField('filler'):
self.Fill(scale, scale_param, 'filler')
else: scale.Constant(value=1.0)
self.Fill(bias, scale_param, 'bias_filler')
self.norm_blobs = [{'data': mean, 'diff': None},
{'data': var, 'diff': None}]
self.scale_blobs = [{'data': scale, 'diff': scale_diff},
{'data': bias, 'diff': bias_diff}]
self._blobs.extend(self.norm_blobs)
self._blobs.extend(self.scale_blobs)
def Setup(self, bottom):
super(BNLayer, self).Setup(bottom)
return ops.FusedBatchNorm(bottom + [blob['data'] for blob in self._blobs], **self._param)
class NormalizeLayer(Layer):
"""The implementation of ``NormalizeLayer``.
Parameters
----------
across_spatial : boolean
        Whether to compute the statistic across all spatial locations. Refer `NormalizeParameter.across_spatial`_.
scale_filler : FillerParameter
The filler of scale parameter. Refer `NormalizeParameter.scale_filler`_.
channel_shared : boolean
Whether to scale across channels. Refer `NormalizeParameter.channel_shared`_.
eps : float
The eps. Refer `NormalizeParameter.eps`_.
"""
def __init__(self, LayerParameter):
super(NormalizeLayer, self).__init__(LayerParameter)
param = LayerParameter.normalize_param
self._l2norm_param = {'axis': 1,
'num_axes': -1 if param.across_spatial else 1,
'eps': param.eps}
self._scale_param = {'axis': 1,
'num_axes': 0 if param.channel_shared else 1}
scale = Tensor(LayerParameter.name + '@param0')
if param.HasField('scale_filler'):
self.Fill(scale, param, 'scale_filler')
else: scale.Constant(value=1.0)
self.scale_blobs = [{'data': scale, 'diff': Tensor(scale.name + '_grad')}]
self._blobs.extend(self.scale_blobs)
def Setup(self, bottom):
super(NormalizeLayer, self).Setup(bottom)
norm_out = [ops.L2Norm(bottom[0], **self._l2norm_param)]
scale_out = ops.Scale(norm_out + [blob['data'] for blob in self.scale_blobs],
**self._scale_param)
return scale_out
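# Illustrative note: NormalizeLayer is a composition of two ops -- the bottom blob is
# first passed through ops.L2Norm and the result is rescaled with ops.Scale using the
# learned '@param0' scale blob -- mirroring an SSD-style "conv4_3_norm" layer; that
# layer name is only an example.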
class TileLayer(Layer):
"""The extended implementation of ``TileLayer``.
Parameters
----------
multiples : caffe_pb2.BlobShape
The multiples. Refer `TileParameter.multiples`_.
"""
def __init__(self, LayerParameter):
super(TileLayer, self).__init__(LayerParameter)
param = LayerParameter.tile_param
multiples = param.multiples
self._param = {'multiples': [int(multiple) for multiple in multiples.dim]}
def Setup(self, bottom):
super(TileLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Tile(input, **self._param)
class ReductionLayer(Layer):
"""The extended implementation of ``ReductionLayer``.
Parameters
----------
operation : caffe_pb2.ReductionOp
The operation. Refer `ReductionParameter.operation`_.
axis : int
        The axis to reduce. Refer `ReductionParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ReductionLayer, self).__init__(LayerParameter)
param = LayerParameter.reduction_param
if param.axis < 0:
if param.axis != -1:
                raise ValueError('The negative axis can only be -1 (reduce all).')
self._param = {'operation': {1: 'SUM', 4: 'MEAN'}[param.operation],
'axis': param.axis}
def Setup(self, bottom):
super(ReductionLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Reduce(input, **self._param)
class ExpandDimsLayer(Layer):
"""The implementation of ``ExpandDimsLayer``.
Parameters
----------
axis : int
        The axis to expand at. Refer `ExpandDimsParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ExpandDimsLayer, self).__init__(LayerParameter)
param = LayerParameter.expand_dims_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(ExpandDimsLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.ExpandDims(input, **self._param)
class StopGradientLayer(Layer):
"""
The implementation of ``StopGradientLayer``.
"""
def __init__(self, LayerParameter):
super(StopGradientLayer, self).__init__(LayerParameter)
def Setup(self, bottom):
super(StopGradientLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.StopGradient(input, **self._param)
class ProposalLayer(Layer):
"""The implementation of ``ProposalLayer``.
Parameters
----------
stride : list of int
The stride of anchors. Refer ``ProposalParameter.stride``.
scale : list of float
The scales of anchors. Refer `ProposalParameter.scale`_.
ratio : list of float
The ratios of anchors. Refer `ProposalParameter.ratio`_.
pre_nms_top_n : int
The num of anchors before nms. Refer `ProposalParameter.pre_nms_topn`_.
post_nms_top_n : int
The num of anchors after nms. Refer `ProposalParameter.post_nms_topn`_.
nms_thresh : float
The threshold of nms. Refer `ProposalParameter.nms_thresh`_.
min_size : int
The min size of anchors. Refer `ProposalParameter.min_size`_.
min_level : int
Finest level of the FPN pyramid. Refer ``ProposalParameter.min_level``.
max_level : int
Coarsest level of the FPN pyramid. Refer ``ProposalParameter.max_level``.
canonical_scale : int
The baseline scale of mapping policy. Refer ``ProposalParameter.canonical_scale``.
canonical_level : int
Heuristic level of the canonical scale. Refer ``ProposalParameter.canonical_level``.
"""
def __init__(self, LayerParameter):
super(ProposalLayer, self).__init__(LayerParameter)
param = LayerParameter.proposal_param
self._param = {'strides': param.stride,
'ratios': param.ratio,
'scales': param.scale,
'pre_nms_top_n': param.pre_nms_top_n,
'post_nms_top_n': param.post_nms_top_n,
'nms_thresh': param.nms_thresh,
'min_size': param.min_size,
'min_level': param.min_level,
'max_level': param.max_level,
'canonical_scale': param.canonical_scale,
'canonical_level': param.canonical_level}
def Setup(self, bottom):
super(ProposalLayer, self).Setup(bottom)
return ops.Proposal(bottom, **self._param)
<MSG> fix bugs of issue https://github.com/neopenx/Dragon/issues/5
<DFF> @@ -260,7 +260,7 @@ class BNLayer(Layer):
'eps': param.eps}
mean = Tensor(LayerParameter.name + '@param0').Constant()
var = Tensor(LayerParameter.name + '@param1').Constant()
- scale = Tensor(LayerParameter.name + '@param2').Uniform(low=0.0, high=1.0)
+ scale = Tensor(LayerParameter.name + '@param2').Constant(value=1.0)
bias = Tensor(LayerParameter.name + '@param3').Constant(value=0.0)
self.norm_blobs = [{'data': mean, 'diff': None},
{'data': var, 'diff': None}]
| 1 | fix bugs of issue https://github.com/neopenx/Dragon/issues/5 | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1813 | <NME> common.py
<BEF> # --------------------------------------------------------
# Caffe @ Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
import dragon.ops as ops
from dragon.core.tensor import Tensor
from ..layer import Layer
class InnerProductLayer(Layer):
"""The implementation of ``InnerProductLayer``.
Parameters
----------
num_output : int
The output dim. Refer `InnerProductParameter.num_output`_.
bias_term : boolean
Whether to use bias. Refer `InnerProductParameter.bias_term`_.
weight_filler : caffe_pb2.FillerParameter
The filler of weight. Refer `InnerProductParameter.weight_filler`_.
bias_filler : caffe_pb2.FillerParameter
The filler of bias. Refer `InnerProductParameter.bias_filler`_.
axis : int
The start axis to calculate. Refer `InnerProductParameter.axis`_.
transpose : boolean
Whether to transpose the weights. Refer `InnerProductParameter.transpose`_.
"""
def __init__(self, LayerParameter):
super(InnerProductLayer, self).__init__(LayerParameter)
param = LayerParameter.inner_product_param
self._param = {'axis': param.axis,
'num_output': param.num_output,
'TransW': not param.transpose}
weight = Tensor(LayerParameter.name + '@param0')
weight_diff = Tensor(LayerParameter.name + '@param0_grad')
self.Fill(weight, param, 'weight_filler')
self._blobs.append({'data': weight, 'diff': weight_diff})
if param.bias_term:
bias = Tensor(LayerParameter.name + '@param1')
bias_diff = Tensor(LayerParameter.name + '@param1_grad')
self.Fill(bias, param, 'bias_filler')
self._blobs.append({'data': bias, 'diff': bias_diff})
def Setup(self, bottom):
super(InnerProductLayer, self).Setup(bottom)
return ops.InnerProduct(bottom + [blob['data'] for blob in self._blobs], **self._param)
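# Illustrative usage sketch for the layer above (hypothetical names): a prototxt entry
#     layer { type: "InnerProduct" inner_product_param { num_output: 4096 bias_term: true } }
# yields, with the default axis of 1, self._param = {'axis': 1, 'num_output': 4096,
# 'TransW': True}, and Setup([x]) then calls
# ops.InnerProduct([x, weight, bias], axis=1, num_output=4096, TransW=True).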
class AccuracyLayer(Layer):
"""The implementation of ``AccuracyLayer``.
Parameters
----------
top_k : int
The top-k accuracy. Refer `AccuracyParameter.top_k`_.
axis : int
The axis of classes. Refer `AccuracyParameter.axis`_.
ignore_label : int
The label to ignore. Refer `AccuracyParameter.ignore_label`_.
"""
def __init__(self, LayerParameter):
super(AccuracyLayer, self).__init__(LayerParameter)
param = LayerParameter.accuracy_param
self._param = {'top_k': param.top_k,
'ignore_labels': [param.ignore_label]
if param.HasField('ignore_label') else []}
def Setup(self, bottom):
super(AccuracyLayer, self).Setup(bottom)
return ops.Accuracy(bottom, **self._param)
class PythonLayer(Layer):
"""The implementation of ``PythonLayer``.
Parameters
----------
module : str
The module. Refer `PythonParameter.module`_.
layer : str
The class name of layer. Refer `PythonParameter.layer`_.
param_str : str
The str describing parameters. Refer `PythonParameter.param_str`_.
"""
def __init__(self, LayerParameter):
super(PythonLayer, self).__init__(LayerParameter)
param = LayerParameter.python_param
self._param = {'module': param.module,
'op': param.layer,
'param_str': param.param_str}
def Setup(self, bottom):
super(PythonLayer, self).Setup(bottom)
return ops.Run(bottom, nout=len(self._top), **self._param)
class EltwiseLayer(Layer):
"""The implementation of ``EltwiseLayer``.
Parameters
----------
operation : EltwiseParameter.EltwiseOp
The operation. Refer `EltwiseParameter.operation`_.
coeff : list of float
The coefficients. Refer `EltwiseParameter.coeff`_.
"""
def __init__(self, LayerParameter):
super(EltwiseLayer, self).__init__(LayerParameter)
param = LayerParameter.eltwise_param
self._param = {'operation': {0: 'PROD', 1: 'SUM', 2: 'MAX'}[param.operation],
'coeffs': [element for element in param.coeff]
if len(param.coeff) > 0 else None}
def Setup(self, bottom):
super(EltwiseLayer, self).Setup(bottom)
return ops.Eltwise(bottom, **self._param)
class AddLayer(Layer):
"""
The extended implementation of ``EltwiseLayer``.
"""
def __init__(self, LayerParameter):
super(AddLayer, self).__init__(LayerParameter)
def Setup(self, bottom):
super(AddLayer, self).Setup(bottom)
return ops.Add(bottom, **self._param)
class ConcatLayer(Layer):
"""The implementation of ``ConcatLayer``.
Parameters
----------
axis : int
The axis to concatenate. Refer `ConcatParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ConcatLayer, self).__init__(LayerParameter)
param = LayerParameter.concat_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(ConcatLayer, self).Setup(bottom)
return ops.Concat(bottom, **self._param)
class DenseConcatLayer(Layer):
"""The extended implementation for `DenseNet`_.
Parameters
----------
axis : int
The axis to concatenate. Refer `ConcatParameter.axis`_.
growth_rate : int
The growth rate.
"""
def __init__(self, LayerParameter):
super(DenseConcatLayer, self).__init__(LayerParameter)
param = LayerParameter.dense_concat_param
self._param = {'axis': param.axis,
'growth_rate': param.growth_rate}
def Setup(self, bottom):
super(DenseConcatLayer, self).Setup(bottom)
return ops.DenseConcat(bottom, **self._param)
class CropLayer(Layer):
"""The implementation of ``CropLayer``.
Parameters
----------
axis : int
The start axis. Refer `CropParameter.axis`_.
offset : list of int
The offsets. Refer `CropParameter.offset`_.
"""
def __init__(self, LayerParameter):
super(CropLayer, self).__init__(LayerParameter)
param = LayerParameter.crop_param
self._param = {'start_axis': param.axis,
'offsets': [int(element) for element in param.offset]}
def Setup(self, bottom):
super(CropLayer, self).Setup(bottom)
self._param['shape_like'] = bottom[1]
self._param['starts'] = self._param['ends'] = None
return ops.Crop(bottom[0], **self._param)
class ReshapeLayer(Layer):
"""The implementation of ``ReshapeLayer``.
Parameters
----------
shape : list of int
The output shape. Refer `ReshapeParameter.shape`_.
"""
def __init__(self, LayerParameter):
super(ReshapeLayer, self).__init__(LayerParameter)
param = LayerParameter.reshape_param
shape = param.shape
self._param = {'shape': [int(element) for element in shape.dim]}
def Setup(self, bottom):
super(ReshapeLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Reshape(input, **self._param)
class PermuteLayer(Layer):
"""The implementation of ``PermuteLayer``.
Parameters
----------
order : list of int
The permutation. Refer `PermuteParameter.order`_.
"""
def __init__(self, LayerParameter):
super(PermuteLayer, self).__init__(LayerParameter)
param = LayerParameter.permute_param
self._param = {'perms': [int(element) for element in param.order]}
def Setup(self, bottom):
super(PermuteLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Transpose(input, **self._param)
class FlattenLayer(Layer):
"""The implementation of ``FlattenLayer``.
Parameters
----------
axis : int
The start axis. Refer `FlattenParameter.axis`_.
end_axis : int
The end axis. Refer `FlattenParameter.end_axis`_.
"""
def __init__(self, LayerParameter):
super(FlattenLayer, self).__init__(LayerParameter)
param = LayerParameter.flatten_param
        axis = param.axis
        end_axis = param.end_axis
num_axes = end_axis - axis + 1 if end_axis != -1 else -1
'eps': param.eps}
mean = Tensor(LayerParameter.name + '@param0').Constant()
var = Tensor(LayerParameter.name + '@param1').Constant()
scale = Tensor(LayerParameter.name + '@param2').Uniform(low=0.0, high=1.0)
bias = Tensor(LayerParameter.name + '@param3').Constant(value=0.0)
self.norm_blobs = [{'data': mean, 'diff': None},
{'data': var, 'diff': None}]
class GatherLayer(Layer):
"""The extended implementation of ``GatherOp``.
Parameters
----------
axis : int
The axis for gathering. Refer ``GatherParameter.axis``.
"""
def __init__(self, LayerParameter):
super(GatherLayer, self).__init__(LayerParameter)
param = LayerParameter.gather_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(GatherLayer, self).Setup(bottom)
return ops.Gather(bottom[0], indices=bottom[1], **self._param)
class SoftmaxLayer(Layer):
"""The implementation of ``SoftmaxLayer``.
Parameters
----------
axis : int
The axis to perform softmax. Refer `SoftmaxParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(SoftmaxLayer, self).__init__(LayerParameter)
param = LayerParameter.softmax_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(SoftmaxLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Softmax(input, **self._param)
class ArgMaxLayer(Layer):
"""The implementation of ``ArgMaxLayer``.
Parameters
----------
top_k : int
The top k results to keep. Refer `ArgMaxParameter.top_k`_.
axis : int
The axis to perform argmax. Refer `ArgMaxParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ArgMaxLayer, self).__init__(LayerParameter)
param = LayerParameter.argmax_param
self._param = {'top_k': param.top_k,
'axis': param.axis,
'keep_dims': True}
def Setup(self, bottom):
super(ArgMaxLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Argmax(input, **self._param)
class BatchNormLayer(Layer):
"""The implementation of ``BatchNormLayer``.
Parameters
----------
use_global_stats : boolean
Refer `BatchNormParameter.use_global_stats`_.
moving_average_fraction : float
Refer `BatchNormParameter.moving_average_fraction`_.
eps : float
Refer `BatchNormParameter.eps`_.
"""
def __init__(self, LayerParameter):
super(BatchNormLayer, self).__init__(LayerParameter)
param = LayerParameter.batch_norm_param
self._param = {'use_stats': int(param.use_global_stats)
if param.HasField('use_global_stats') else -1,
'momentum': param.moving_average_fraction,
'eps': param.eps,
'axis': 1,
'mode': 'CAFFE'}
# mean, var, factor are set to 0 in order to do statistics
mean = Tensor(LayerParameter.name + '@param0').Constant(value=0.0)
var = Tensor(LayerParameter.name + '@param1').Constant(value=0.0)
factor = Tensor(LayerParameter.name + '@param2').Constant(value=0.0)
        # in dragon, setting 'diff' to None skips the gradient computation automatically
# but in bvlc-caffe1, you must set lr_mult = 0 manually
self._blobs.append({'data': mean, 'diff': None})
self._blobs.append({'data': var, 'diff': None})
self._blobs.append({'data': factor, 'diff': None})
def Setup(self, bottom):
super(BatchNormLayer, self).Setup(bottom)
return ops.BatchNorm(bottom + [blob['data'] for blob in self._blobs], **self._param)
class BatchRenormLayer(Layer):
"""The implementation of ``BatchRenormLayer``.
Parameters
----------
use_global_stats : boolean
Refer ``BatchRenormParameter.use_global_stats``.
moving_average_fraction : float
Refer ``BatchRenormParameter.moving_average_fraction``.
eps : float
Refer ``BatchRenormParameter.eps``.
r_max : float
Refer ``BatchRenormParameter.r_max``.
d_max : float
Refer ``BatchRenormParameter.d_max``.
t_delta : float
Refer ``BatchRenormParameter.t_delta``.
"""
def __init__(self, LayerParameter):
super(BatchRenormLayer, self).__init__(LayerParameter)
param = LayerParameter.batch_renorm_param
self._param = {'use_stats': int(param.use_global_stats)
if param.HasField('use_global_stats') else -1,
'momentum': param.moving_average_fraction,
'eps': param.eps,
'r_max': float(param.r_max),
'd_max': float(param.d_max),
't_delta': float(param.t_delta),
'axis': 1,
'mode': 'CAFFE'}
mean = Tensor(LayerParameter.name + '@param0').Constant(value=0.0)
var = Tensor(LayerParameter.name + '@param1').Constant(value=0.0)
factor = Tensor(LayerParameter.name + '@param2').Constant(value=0.0)
self._blobs.append({'data': mean, 'diff': None})
self._blobs.append({'data': var, 'diff': None})
self._blobs.append({'data': factor, 'diff': None})
def Setup(self, bottom):
super(BatchRenormLayer, self).Setup(bottom)
return ops.BatchRenorm(bottom + [blob['data'] for blob in self._blobs], **self._param)
class InstanceNormLayer(Layer):
"""
The implementation of ``InstanceNormLayer``.
    Introduced by `[Ulyanov et al., 2016] <https://arxiv.org/abs/1607.08022>`_
"""
def __init__(self, LayerParameter):
super(InstanceNormLayer, self).__init__(LayerParameter)
self._param = {'axis': 1}
def Setup(self, bottom):
super(InstanceNormLayer, self).Setup(bottom)
return ops.InstanceNorm(bottom[0], **self._param)
class ScaleLayer(Layer):
"""The implementation of ``ScaleLayer``.
Parameters
----------
axis : int
The start axis. Refer `ScaleParameter.axis`_.
num_axes : int
The number of axes. Refer `ScaleParameter.num_axes`_.
filler : FillerParameter
The filler of scale parameter. Refer `ScaleParameter.filler`_.
bias_term : boolean
Whether to use bias. Refer `ScaleParameter.bias_term`_.
bias_filler : FillerParameter
The filler of bias parameter. Refer `ScaleParameter.bias_filler`_.
"""
def __init__(self, LayerParameter):
super(ScaleLayer, self).__init__(LayerParameter)
param = LayerParameter.scale_param
self._param = {'axis': param.axis,
'num_axes': param.num_axes}
scale = Tensor(LayerParameter.name + '@param0')
scale_diff = Tensor(LayerParameter.name + '@param0_grad')
if param.HasField('filler'):
self.Fill(scale, param, 'filler')
else: scale.Constant(value=1.0)
self._blobs.append({'data': scale, 'diff': scale_diff})
if param.bias_term:
bias = Tensor(LayerParameter.name + '@param1')
bias_diff = Tensor(LayerParameter.name + '@param1_grad')
            # automatically fill with 0 if bias_filler is not specified
self.Fill(bias, param, 'bias_filler')
self._blobs.append({'data': bias, 'diff': bias_diff})
def Setup(self, bottom):
super(ScaleLayer, self).Setup(bottom)
return ops.Scale(bottom + [blob['data'] for blob in self._blobs], **self._param)
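# Note on the blobs registered above: with bias_term enabled, Setup receives the bottom
# blob plus the '@param0' scale blob and the '@param1' bias blob, so ops.Scale gets three
# inputs; an unfilled scale defaults to a constant 1.0 (identity) and the bias is filled
# through 'bias_filler' (0 by default, per the comment above).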
class BNLayer(Layer):
"""The implementation of ``BNLayer``.
Parameters
----------
use_global_stats : boolean
Refer `BatchNormParameter.use_global_stats`_.
moving_average_fraction : float
Refer `BatchNormParameter.moving_average_fraction`_.
eps : float
Refer `BatchNormParameter.eps`_.
filler : FillerParameter
The filler of scale parameter. Refer `ScaleParameter.filler`_.
bias_filler : FillerParameter
The filler of bias parameter. Refer `ScaleParameter.bias_filler`_.
"""
def __init__(self, LayerParameter):
super(BNLayer, self).__init__(LayerParameter)
bn_param = LayerParameter.batch_norm_param
scale_param = LayerParameter.scale_param
self._param = {'use_stats': int(bn_param.use_global_stats)
if bn_param.HasField('use_global_stats') else -1,
'momentum': bn_param.moving_average_fraction,
'eps': bn_param.eps,
'axis': 1}
mean = Tensor(LayerParameter.name + '@param0').Constant(value=0.0)
var = Tensor(LayerParameter.name + '@param1').Constant(value=0.0)
scale = Tensor(LayerParameter.name + '@param2')
scale_diff = Tensor(LayerParameter.name + '@param2_grad')
bias = Tensor(LayerParameter.name + '@param3')
bias_diff = Tensor(LayerParameter.name + '@param3_grad')
if scale_param.HasField('filler'):
self.Fill(scale, scale_param, 'filler')
else: scale.Constant(value=1.0)
self.Fill(bias, scale_param, 'bias_filler')
self.norm_blobs = [{'data': mean, 'diff': None},
{'data': var, 'diff': None}]
self.scale_blobs = [{'data': scale, 'diff': scale_diff},
{'data': bias, 'diff': bias_diff}]
self._blobs.extend(self.norm_blobs)
self._blobs.extend(self.scale_blobs)
def Setup(self, bottom):
super(BNLayer, self).Setup(bottom)
return ops.FusedBatchNorm(bottom + [blob['data'] for blob in self._blobs], **self._param)
class NormalizeLayer(Layer):
"""The implementation of ``NormalizeLayer``.
Parameters
----------
across_spatial : boolean
        Whether to compute the statistic across all spatial locations. Refer `NormalizeParameter.across_spatial`_.
scale_filler : FillerParameter
The filler of scale parameter. Refer `NormalizeParameter.scale_filler`_.
channel_shared : boolean
Whether to scale across channels. Refer `NormalizeParameter.channel_shared`_.
eps : float
The eps. Refer `NormalizeParameter.eps`_.
"""
def __init__(self, LayerParameter):
super(NormalizeLayer, self).__init__(LayerParameter)
param = LayerParameter.normalize_param
self._l2norm_param = {'axis': 1,
'num_axes': -1 if param.across_spatial else 1,
'eps': param.eps}
self._scale_param = {'axis': 1,
'num_axes': 0 if param.channel_shared else 1}
scale = Tensor(LayerParameter.name + '@param0')
if param.HasField('scale_filler'):
self.Fill(scale, param, 'scale_filler')
else: scale.Constant(value=1.0)
self.scale_blobs = [{'data': scale, 'diff': Tensor(scale.name + '_grad')}]
self._blobs.extend(self.scale_blobs)
def Setup(self, bottom):
super(NormalizeLayer, self).Setup(bottom)
norm_out = [ops.L2Norm(bottom[0], **self._l2norm_param)]
scale_out = ops.Scale(norm_out + [blob['data'] for blob in self.scale_blobs],
**self._scale_param)
return scale_out
class TileLayer(Layer):
"""The extended implementation of ``TileLayer``.
Parameters
----------
multiples : caffe_pb2.BlobShape
The multiples. Refer `TileParameter.multiples`_.
"""
def __init__(self, LayerParameter):
super(TileLayer, self).__init__(LayerParameter)
param = LayerParameter.tile_param
multiples = param.multiples
self._param = {'multiples': [int(multiple) for multiple in multiples.dim]}
def Setup(self, bottom):
super(TileLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Tile(input, **self._param)
class ReductionLayer(Layer):
"""The extended implementation of ``ReductionLayer``.
Parameters
----------
operation : caffe_pb2.ReductionOp
The operation. Refer `ReductionParameter.operation`_.
axis : int
        The axis to reduce. Refer `ReductionParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ReductionLayer, self).__init__(LayerParameter)
param = LayerParameter.reduction_param
if param.axis < 0:
if param.axis != -1:
                raise ValueError('The negative axis can only be -1 (reduce all).')
self._param = {'operation': {1: 'SUM', 4: 'MEAN'}[param.operation],
'axis': param.axis}
def Setup(self, bottom):
super(ReductionLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.Reduce(input, **self._param)
class ExpandDimsLayer(Layer):
"""The implementation of ``ExpandDimsLayer``.
Parameters
----------
axis : int
        The axis to expand at. Refer `ExpandDimsParameter.axis`_.
"""
def __init__(self, LayerParameter):
super(ExpandDimsLayer, self).__init__(LayerParameter)
param = LayerParameter.expand_dims_param
self._param = {'axis': param.axis}
def Setup(self, bottom):
super(ExpandDimsLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.ExpandDims(input, **self._param)
class StopGradientLayer(Layer):
"""
The implementation of ``StopGradientLayer``.
"""
def __init__(self, LayerParameter):
super(StopGradientLayer, self).__init__(LayerParameter)
def Setup(self, bottom):
super(StopGradientLayer, self).Setup(bottom)
input = bottom[0] if isinstance(bottom, list) else bottom
return ops.StopGradient(input, **self._param)
class ProposalLayer(Layer):
"""The implementation of ``ProposalLayer``.
Parameters
----------
stride : list of int
The stride of anchors. Refer ``ProposalParameter.stride``.
scale : list of float
The scales of anchors. Refer `ProposalParameter.scale`_.
ratio : list of float
The ratios of anchors. Refer `ProposalParameter.ratio`_.
pre_nms_top_n : int
The num of anchors before nms. Refer `ProposalParameter.pre_nms_topn`_.
post_nms_top_n : int
The num of anchors after nms. Refer `ProposalParameter.post_nms_topn`_.
nms_thresh : float
The threshold of nms. Refer `ProposalParameter.nms_thresh`_.
min_size : int
The min size of anchors. Refer `ProposalParameter.min_size`_.
min_level : int
Finest level of the FPN pyramid. Refer ``ProposalParameter.min_level``.
max_level : int
Coarsest level of the FPN pyramid. Refer ``ProposalParameter.max_level``.
canonical_scale : int
The baseline scale of mapping policy. Refer ``ProposalParameter.canonical_scale``.
canonical_level : int
Heuristic level of the canonical scale. Refer ``ProposalParameter.canonical_level``.
"""
def __init__(self, LayerParameter):
super(ProposalLayer, self).__init__(LayerParameter)
param = LayerParameter.proposal_param
self._param = {'strides': param.stride,
'ratios': param.ratio,
'scales': param.scale,
'pre_nms_top_n': param.pre_nms_top_n,
'post_nms_top_n': param.post_nms_top_n,
'nms_thresh': param.nms_thresh,
'min_size': param.min_size,
'min_level': param.min_level,
'max_level': param.max_level,
'canonical_scale': param.canonical_scale,
'canonical_level': param.canonical_level}
def Setup(self, bottom):
super(ProposalLayer, self).Setup(bottom)
return ops.Proposal(bottom, **self._param)
<MSG> fix bugs of issue https://github.com/neopenx/Dragon/issues/5
<DFF> @@ -260,7 +260,7 @@ class BNLayer(Layer):
'eps': param.eps}
mean = Tensor(LayerParameter.name + '@param0').Constant()
var = Tensor(LayerParameter.name + '@param1').Constant()
- scale = Tensor(LayerParameter.name + '@param2').Uniform(low=0.0, high=1.0)
+ scale = Tensor(LayerParameter.name + '@param2').Constant(value=1.0)
bias = Tensor(LayerParameter.name + '@param3').Constant(value=0.0)
self.norm_blobs = [{'data': mean, 'diff': None},
{'data': var, 'diff': None}]
| 1 | fix bugs of issue https://github.com/neopenx/Dragon/issues/5 | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1814 | <NME> RedisSessionDAOTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
import org.junit.Before;
import org.junit.Test;
import org.junit.jupiter.api.Test;
import java.io.Serializable;
import java.util.Collection;
import java.util.Date;
import java.util.HashSet;
import java.util.Set;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.*;
public class RedisSessionDAOTest {
private IRedisManager redisManager;
private StringSerializer keySerializer = new StringSerializer();
private ObjectSerializer valueSerializer = new ObjectSerializer();
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisSessionDAO mountRedisSessionDAO(Integer expire) {
RedisSessionDAO redisSessionDAO = new RedisSessionDAO();
if (expire != null) {
redisSessionDAO.setExpire(expire);
}
redisSessionDAO.setKeyPrefix("student:");
redisSessionDAO.setRedisManager(redisManager);
return redisSessionDAO;
}
@Test
public void testUpdate() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(99, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:99"), valueSerializer.serialize(session), 2);
}
@Test
public void testUpdateByCustomExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(3);
StudentSession session = new StudentSession(98, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:98"), valueSerializer.serialize(session), 3);
}
@Test
public void testUpdateByNoExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(-1);
StudentSession session = new StudentSession(97, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:97"), valueSerializer.serialize(session), -1);
}
@Test
public void testDelete() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(96, 1000);
sessionDAO.delete(session);
verify(redisManager).del(keySerializer.serialize("student:96"));
}
@Test
public void testGetActiveSessions() throws SerializationException {
Set<byte[]> mockKeys = new HashSet<byte[]>();
mockKeys.add(keySerializer.serialize("student:1"));
mockKeys.add(keySerializer.serialize("student:2"));
when(redisManager.keys(keySerializer.serialize("student:*"))).thenReturn(mockKeys);
StudentSession mockSession1 = new StudentSession(1, 2000);
StudentSession mockSession2 = new StudentSession(2, 2000);
when(redisManager.get(keySerializer.serialize("student:1"))).thenReturn(valueSerializer.serialize(mockSession1));
when(redisManager.get(keySerializer.serialize("student:2"))).thenReturn(valueSerializer.serialize(mockSession2));
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
assertThat(sessionDAO.getActiveSessions().size(), is(2));
}
}
class StudentSession implements Session, Serializable {
private Integer id;
private long timeout;
public StudentSession(Integer id, long timeout) {
this.id = id;
this.timeout = timeout;
}
@Override
public Serializable getId() {
return id;
}
@Override
public Date getStartTimestamp() {
return null;
}
@Override
public Date getLastAccessTime() {
return null;
}
@Override
public long getTimeout() throws InvalidSessionException {
return timeout;
}
@Override
public void setTimeout(long l) throws InvalidSessionException {
}
@Override
public String getHost() {
return null;
}
@Override
public void touch() throws InvalidSessionException {
}
@Override
public void stop() throws InvalidSessionException {
}
@Override
public Collection<Object> getAttributeKeys() throws InvalidSessionException {
return null;
}
@Override
public Object getAttribute(Object o) throws InvalidSessionException {
return null;
}
@Override
public void setAttribute(Object o, Object o1) throws InvalidSessionException {
}
@Override
public Object removeAttribute(Object o) throws InvalidSessionException {
return null;
}
}
<MSG> Add AuthCachePrincipal to get the cache key for storing authorization object in Redis
<DFF> @@ -2,6 +2,7 @@ package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
+import org.crazycake.shiro.exception.SerializationException;
import org.junit.Before;
import org.junit.Test;
| 1 | Add AuthCachePrincipal to get the cache key for storing authorization object in Redis | 0 | .java | java | mit | alexxiyang/shiro-redis |
1815 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.3",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.3",
+ version="0.4",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1816 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class RedisSentinelManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381";
private String host = DEFAULT_HOST;
private static final String DEFAULT_MASTER_NAME = "mymaster";
private String masterName = DEFAULT_MASTER_NAME;
    // timeout for Jedis trying to connect to the Redis server (not the key expire time!), in milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
    // timeout for Jedis trying to read data from the Redis server, in milliseconds
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
    // requirepass
    private String password;
    private int database = Protocol.DEFAULT_DATABASE;
private JedisSentinelPool jedisPool;
@Override
protected Jedis getJedis() {
if (jedisPool == null) {
init();
}
return jedisPool.getResource();
}
private void init() {
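        // Double-checked locking: the outer null check avoids synchronizing on every call,
        // while the inner check prevents two threads from both creating the pool. For this
        // pattern to publish the pool safely, jedisPool should be declared volatile, which
        // is the change this pull request makes.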
if (jedisPool == null) {
synchronized (RedisSentinelManager.class) {
if (jedisPool == null) {
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, getJedisPoolConfig(), timeout, soTimeout, password, database);
}
}
}
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
public void setMasterName(String masterName) {
this.masterName = masterName;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
public JedisSentinelPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisSentinelPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> Merge pull request #131 from pluone/patch-1
<DFF> @@ -26,7 +26,7 @@ public class RedisSentinelManager extends WorkAloneRedisManager implements IRedi
private int database = Protocol.DEFAULT_DATABASE;
- private JedisSentinelPool jedisPool;
+ private volatile JedisSentinelPool jedisPool;
@Override
protected Jedis getJedis() {
| 1 | Merge pull request #131 from pluone/patch-1 | 1 | .java | java | mit | alexxiyang/shiro-redis |
1817 | <NME> RedisClusterManager.java
<BEF> ADDFILE
<MSG> Merge pull request #42 from xchendeveloper/pr/8
add Redis cluster manager
<DFF> @@ -0,0 +1,211 @@
+package org.crazycake.shiro;
+
+import redis.clients.jedis.*;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+public class RedisClusterManager implements IRedisManager {
+
+ protected String ip = "127.0.0.1";
+
+ protected String host = ip + ":" + Protocol.DEFAULT_PORT ;
+
+ protected static final int DEFAULT_EXPIRE = 3600;
+
+ // expire time in seconds
+ protected int expire = DEFAULT_EXPIRE;
+
+ // timeout for jedis try to connect to redis server, not expire time! In milliseconds
+ protected int timeout = Protocol.DEFAULT_TIMEOUT;
+
+ // timeout for jedis try to read data from redis server
+ protected int soTimeout = Protocol.DEFAULT_TIMEOUT;
+
+ // requirepass
+ protected String password;
+
+ // default select database
+ protected int database = Protocol.DEFAULT_DATABASE;
+
+ //scan numbers each time
+ protected int count = 100;
+
+
+ // max attempts to connect to server
+ private int maxAttempts = 3;
+
+ private volatile JedisCluster jedisCluster = null;
+
+ private void init() {
+ synchronized (this) {
+ if (jedisCluster == null) {
+ jedisCluster = new JedisCluster(getHostAndPortSet(), timeout, soTimeout, maxAttempts, password, new JedisPoolConfig());
+ }
+ }
+ }
+
+ private Set<HostAndPort> getHostAndPortSet() {
+ String[] hostAndPortArr = host.split(",");
+ Set<HostAndPort> hostAndPorts = new HashSet<HostAndPort>();
+ for (String hostAndPortStr : hostAndPortArr) {
+ String[] hostAndPort = hostAndPortStr.split(":");
+ hostAndPorts.add(new HostAndPort(hostAndPort[0], Integer.parseInt(hostAndPort[1])));
+ }
+ return hostAndPorts;
+ }
+
+
+ protected JedisCluster getJedisCluster() {
+ if (jedisCluster == null) {
+ init();
+ }
+ return jedisCluster;
+ }
+
+ public byte[] get(byte[] key) {
+ if (key == null) {
+ return null;
+ }
+ return getJedisCluster().get(key);
+ }
+
+ public byte[] set(byte[] key, byte[] value) {
+ if (key == null) {
+ return null;
+ }
+ getJedisCluster().set(key, value);
+ if (this.expire != 0) {
+ getJedisCluster().expire(key, this.expire);
+ }
+ return value;
+ }
+
+ public byte[] set(byte[] key, byte[] value, int expire) {
+ if (key == null) {
+ return null;
+ }
+ getJedisCluster().set(key, value);
+ if (this.expire != 0) {
+ getJedisCluster().expire(key, expire);
+ }
+ return value;
+ }
+
+ public void del(byte[] key) {
+ if (key == null) {
+ return;
+ }
+ getJedisCluster().del(key);
+ }
+
+ public Long dbSize() {
+ Long dbSize = 0L;
+ Map<String, JedisPool> clusterNodes = getJedisCluster().getClusterNodes();
+ for (String k : clusterNodes.keySet()) {
+ JedisPool jp = clusterNodes.get(k);
+ Jedis connection = jp.getResource();
+ try {
+ dbSize += connection.dbSize();
+ } catch (Exception e) {
+ e.printStackTrace();
+ } finally {
+ connection.close();
+ }
+ }
+ return dbSize;
+ }
+
+ public Set<byte[]> keys(byte[] pattern) {
+ Set<byte[]> keys = new HashSet<byte[]>();
+ ScanParams params = new ScanParams();
+ params.count(count);
+ params.match(pattern);
+ byte[] cursor = ScanParams.SCAN_POINTER_START_BINARY;
+ ScanResult<byte[]> scanResult;
+ do {
+ scanResult = getJedisCluster().scan(cursor, params);
+ keys.addAll(scanResult.getResult());
+ cursor = scanResult.getCursorAsBytes();
+ } while (scanResult.getStringCursor().compareTo(ScanParams.SCAN_POINTER_START) > 0);
+
+ return keys;
+ }
+
+ public int getMaxAttempts() {
+ return maxAttempts;
+ }
+
+ public void setMaxAttempts(int maxAttempts) {
+ this.maxAttempts = maxAttempts;
+ }
+
+ public String getIp() {
+ return ip;
+ }
+
+ public void setIp(String ip) {
+ this.ip = ip;
+ }
+
+ public String getHost() {
+ return host;
+ }
+
+ public void setHost(String host) {
+ this.host = host;
+ }
+
+ public int getExpire() {
+ return expire;
+ }
+
+ public void setExpire(int expire) {
+ this.expire = expire;
+ }
+
+ public int getTimeout() {
+ return timeout;
+ }
+
+ public void setTimeout(int timeout) {
+ this.timeout = timeout;
+ }
+
+ public int getSoTimeout() {
+ return soTimeout;
+ }
+
+ public void setSoTimeout(int soTimeout) {
+ this.soTimeout = soTimeout;
+ }
+
+ public String getPassword() {
+ return password;
+ }
+
+ public void setPassword(String password) {
+ this.password = password;
+ }
+
+ public int getDatabase() {
+ return database;
+ }
+
+ public void setDatabase(int database) {
+ this.database = database;
+ }
+
+ public int getCount() {
+ return count;
+ }
+
+ public void setCount(int count) {
+ this.count = count;
+ }
+
+ public void setJedisCluster(JedisCluster jedisCluster) {
+ this.jedisCluster = jedisCluster;
+ }
+}
| 211 | Merge pull request #42 from xchendeveloper/pr/8 | 0 | .java | java | mit | alexxiyang/shiro-redis |
1818 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
Shiro only provides support for Ehcache and ConcurrentHashMap out of the box. Here is a Redis-based cache implementation that can be used with Shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Custom your redis key prefix for cache management, if you doesn't define this parameter, shiro-redis will use 'shiro_redis_session:' as default prefix
# Note: Remember to add colon at the end of prefix.
cacheManager.keyPrefix = shiro:cache:
For example, you need to change the charset of keySerializer.
```properties
#=====================================
# Redis-based cache configuration
#=====================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
cacheManagerKeySerializer = org.crazycake.shiro.StringSerializer
# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
cacheManagerKeySerializer.charset = UTF-16
cacheManager.keySerializer = $cacheManagerKeySerializer
```
These 4 Serializers are replaceable:
<MSG> Update README.md
<DFF> @@ -104,6 +104,17 @@ securityManager.sessionManager = $sessionManager
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
+# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
+#
+# cacheManagerKeySerializer = org.crazycake.shiro.StringSerializer
+
+# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
+# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
+#
+# cacheManagerKeySerializer.charset = UTF-8
+
+# cacheManager.keySerializer = $cacheManagerKeySerializer
+
# Custom your redis key prefix for cache management, if you doesn't define this parameter, shiro-redis will use 'shiro_redis_session:' as default prefix
# Note: Remember to add colon at the end of prefix.
cacheManager.keyPrefix = shiro:cache:
@@ -241,17 +252,16 @@ You can use your own custom serializer, as long as this custom serializer implem
For example, you need to change the charset of keySerializer.
```properties
-#=====================================
-# Redis-based cache configuration
-#=====================================
-# Create cacheManager
-cacheManager = org.crazycake.shiro.RedisCacheManager
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
-cacheManagerKeySerializer = org.crazycake.shiro.StringSerializer
+#
+# cacheManagerKeySerializer = org.crazycake.shiro.StringSerializer
+
# Refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
-cacheManagerKeySerializer.charset = UTF-16
-cacheManager.keySerializer = $cacheManagerKeySerializer
+#
+# cacheManagerKeySerializer.charset = UTF-8
+
+# cacheManager.keySerializer = $cacheManagerKeySerializer
```
These 4 Serializers are replaceable:
| 18 | Update README.md | 8 | .md | md | mit | alexxiyang/shiro-redis |
1819 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.4.4",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.4.4",
+ version="0.4.5",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1820 | <NME> index.rst
<BEF> Hitch
=====
Hitch is a testing framework to help you write functional tests that are:
* Simple
* Easy to read and maintain
* Loosely coupled
* Isolated
* Integrated
* Self-bootstrapping
Contents:
:maxdepth: 2
plugins/index
Documentation
-------------
.. toctree::
:maxdepth: 2
quickstart/index
howto/index
faq/index
api/index
misc/index
See the full :doc:`/glossary/index` here.
<MSG> Updated the intro description.
<DFF> @@ -1,14 +1,20 @@
Hitch
=====
-Hitch is a testing framework to help you write functional tests that are:
-
-* Simple
-* Easy to read and maintain
-* Loosely coupled
-* Isolated
-* Integrated
-* Self-bootstrapping
+Hitch is a testing framework for writing integration tests that are:
+
+* Loosely coupled to your code
+* Realistic
+* Readable
+* Reliable
+* Fail fast and fail clearly
+
+Additionally, the framework aims to let you easily create your own test driven development environment that:
+
+* Automates its own deployment
+* Requires no system configuration changes to set up
+* Runs without modification on Mac OS X, Ubuntu/Debian, Fedora, CentOS and Arch.
+* Provides first class tools to test and debug a wide variety of languages, frameworks and environments.
Contents:
| 14 | Updated the intro description. | 8 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1821 | <NME> README.md
<BEF> shiro-redis
[](https://travis-ci.org/alexxiyang/shiro-redis)
===========
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<MSG> Update README.md
change readme
<DFF> @@ -2,7 +2,6 @@
[](https://travis-ci.org/alexxiyang/shiro-redis)
-===========
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
| 0 | Update README.md | 1 | .md | md | mit | alexxiyang/shiro-redis |
1822 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
===========
You can chose these 2 ways to include shiro-redis into your project
* directly download jar file
Download shiro-redis.jar in bin folder and add it into your classpath.
* add maven dependency
```xml
<dependency>
<version>2.4.2-RELEASE</version>
</dependency>
```xml
<MSG> Update README.md
update readme
<DFF> @@ -7,9 +7,7 @@ How to use it?
===========
You can chose these 2 ways to include shiro-redis into your project
-* directly download jar file
-Download shiro-redis.jar in bin folder and add it into your classpath.
-* add maven dependency
+
```xml
<dependency>
@@ -18,3 +16,36 @@ Download shiro-redis.jar in bin folder and add it into your classpath.
<version>2.4.2-RELEASE</version>
</dependency>
```xml
+
+Edit shiro.ini
+
+```properties
+#redisManager
+redisManager = org.crazycake.shiro.RedisManager
+#optional if you don't specify host the default value is 127.0.0.1
+redisManager.host = 127.0.0.1
+#optional , default value: 6379
+redisManager.port = 6379
+#optional, default value:0 .The expire time is in second
+redisManager.expire = 30
+
+#============redisSessionDAO=============
+redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
+redisSessionDAO.redisManager = $redisManager
+sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
+sessionManager.sessionDAO = $redisSessionDAO
+securityManager.sessionManager = $sessionManager
+
+#============redisCacheManager===========
+cacheManager = org.crazycake.shiro.RedisCacheManager
+cacheManager.redisManager = $redisManager
+#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
+shiroCacheManager.keyPrefix = users:security:authz:
+securityManager.cacheManager = $cacheManager
+```
+
+
+If you found any bugs
+===========
+
+Please send email to [email protected]
| 34 | Update README.md | 3 | .md | md | mit | alexxiyang/shiro-redis |
1823 | <NME> RedisSessionDAOTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.InvalidSessionException;
import org.apache.shiro.session.Session;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.io.Serializable;
import java.util.Collection;
import java.util.Date;
import java.util.HashSet;
import java.util.Set;
public class RedisSessionDAOTest {
private RedisManager redisManager;
private RedisSessionDAO redisSessionDAO;
private StringSerializer keySerializer;
private String testKey;
public class RedisSessionDAOTest {
private IRedisManager redisManager;
private StringSerializer keySerializer = new StringSerializer();
private ObjectSerializer valueSerializer = new ObjectSerializer();
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisSessionDAO mountRedisSessionDAO(Integer expire) {
RedisSessionDAO redisSessionDAO = new RedisSessionDAO();
if (expire != null) {
redisSessionDAO.setExpire(expire);
}
redisSessionDAO.setKeyPrefix("student:");
redisSessionDAO.setRedisManager(redisManager);
return redisSessionDAO;
}
@Test
public void testUpdate() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(99, 2000);
testValues.add(paulSession);
billySession = new FakeSession(3, "billy");
testValues.add(billySession);
redisManager = mock(RedisManager.class);
when(redisManager.dbSize()).thenReturn(2L);
when(redisManager.get(keySerializer.serialize(testPrefix + testKey))).thenReturn(valueSeralizer.serialize(testValue));
when(redisManager.keys(keySerializer.serialize(testPrefix + "*"))).thenReturn(testSet);
StudentSession session = new StudentSession(98, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:98"), valueSerializer.serialize(session), 3);
}
@Test
public void testUpdateByNoExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(-1);
StudentSession session = new StudentSession(97, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:97"), valueSerializer.serialize(session), -1);
}
@Test
public void testDelete() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(96, 1000);
sessionDAO.delete(session);
verify(redisManager).del(keySerializer.serialize("student:96"));
}
@Test
public void testGetActiveSessions() throws SerializationException {
Set<byte[]> mockKeys = new HashSet<byte[]>();
mockKeys.add(keySerializer.serialize("student:1"));
mockKeys.add(keySerializer.serialize("student:2"));
when(redisManager.keys(keySerializer.serialize("student:*"))).thenReturn(mockKeys);
StudentSession mockSession1 = new StudentSession(1, 2000);
StudentSession mockSession2 = new StudentSession(2, 2000);
when(redisManager.get(keySerializer.serialize("student:1"))).thenReturn(valueSerializer.serialize(mockSession1));
when(redisManager.get(keySerializer.serialize("student:2"))).thenReturn(valueSerializer.serialize(mockSession2));
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
assertThat(sessionDAO.getActiveSessions().size(), is(2));
}
}
class StudentSession implements Session, Serializable {
private Integer id;
private long timeout;
public StudentSession(Integer id, long timeout) {
this.id = id;
this.timeout = timeout;
}
@Override
public Serializable getId() {
return id;
}
@Override
public Date getStartTimestamp() {
return null;
}
@Override
public Date getLastAccessTime() {
return null;
}
@Override
public long getTimeout() throws InvalidSessionException {
return timeout;
}
@Override
public void setTimeout(long l) throws InvalidSessionException {
}
@Override
public String getHost() {
return null;
}
@Override
public void touch() throws InvalidSessionException {
}
@Override
public void stop() throws InvalidSessionException {
}
@Override
public Collection<Object> getAttributeKeys() throws InvalidSessionException {
return null;
}
@Override
public Object getAttribute(Object o) throws InvalidSessionException {
return null;
}
@Override
public void setAttribute(Object o, Object o1) throws InvalidSessionException {
}
@Override
public Object removeAttribute(Object o) throws InvalidSessionException {
return null;
}
}
<MSG> modify test case
<DFF> @@ -16,7 +16,7 @@ import static org.mockito.Mockito.when;
public class RedisSessionDAOTest {
- private RedisManager redisManager;
+ private RedisSingletonManager redisManager;
private RedisSessionDAO redisSessionDAO;
private StringSerializer keySerializer;
private String testKey;
@@ -47,7 +47,7 @@ public class RedisSessionDAOTest {
testValues.add(paulSession);
billySession = new FakeSession(3, "billy");
testValues.add(billySession);
- redisManager = mock(RedisManager.class);
+ redisManager = mock(RedisSingletonManager.class);
when(redisManager.dbSize()).thenReturn(2L);
when(redisManager.get(keySerializer.serialize(testPrefix + testKey))).thenReturn(valueSeralizer.serialize(testValue));
when(redisManager.keys(keySerializer.serialize(testPrefix + "*"))).thenReturn(testSet);
| 2 | modify test case | 2 | .java | java | mit | alexxiyang/shiro-redis |
1824 | <NME> setup.py
<BEF> from distutils.core import setup
import os.path, sys
import shutil
packages = []
def find_packages(root_dir):
filenames = os.listdir(root_dir)
for filename in filenames:
filepath = os.path.join(root_dir, filename)
if os.path.isdir(filepath):
find_packages(filepath)
else:
if filename == '__init__.py':
packages.append(root_dir)
def find_modules():
dragon_c_lib_win32 = '../lib/dragon.dll'
dragon_c_lib_other = '../lib/libdragon.so'
if os.path.exists(dragon_c_lib_win32):
shutil.copy(dragon_c_lib_win32, 'dragon/libdragon.pyd')
elif os.path.exists(dragon_c_lib_other):
shutil.copy(dragon_c_lib_other, 'dragon/libdragon.so')
else:
print('ERROR: Unable to find modules. built Dragon using CMake.')
sys.exit()
def find_resources():
c_lib = ['libdragon.*']
protos = ['protos/*.proto', 'vm/caffe/proto/*.proto']
others = []
return c_lib + protos + others
find_packages('dragon')
find_modules()
setup(name = 'dragon',
version='0.2.1.5',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
license='BSD 2-Clause',
packages=packages,
package_dir={'dragon': 'dragon'},
package_data={'dragon': find_resources()})
<MSG> VM.TensorFlow Preview
<DFF> @@ -36,7 +36,7 @@ find_packages('dragon')
find_modules()
setup(name = 'dragon',
- version='0.2.1.5',
+ version='0.2.1.6',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
| 1 | VM.TensorFlow Preview | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1825 | <NME> setup.py
<BEF> from distutils.core import setup
import os.path, sys
import shutil
packages = []
def find_packages(root_dir):
filenames = os.listdir(root_dir)
for filename in filenames:
filepath = os.path.join(root_dir, filename)
if os.path.isdir(filepath):
find_packages(filepath)
else:
if filename == '__init__.py':
packages.append(root_dir)
def find_modules():
dragon_c_lib_win32 = '../lib/dragon.dll'
dragon_c_lib_other = '../lib/libdragon.so'
if os.path.exists(dragon_c_lib_win32):
shutil.copy(dragon_c_lib_win32, 'dragon/libdragon.pyd')
elif os.path.exists(dragon_c_lib_other):
shutil.copy(dragon_c_lib_other, 'dragon/libdragon.so')
else:
print('ERROR: Unable to find modules. built Dragon using CMake.')
sys.exit()
def find_resources():
c_lib = ['libdragon.*']
protos = ['protos/*.proto', 'vm/caffe/proto/*.proto']
others = []
return c_lib + protos + others
find_packages('dragon')
find_modules()
setup(name = 'dragon',
version='0.2.1.5',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
license='BSD 2-Clause',
packages=packages,
package_dir={'dragon': 'dragon'},
package_data={'dragon': find_resources()})
<MSG> VM.TensorFlow Preview
<DFF> @@ -36,7 +36,7 @@ find_packages('dragon')
find_modules()
setup(name = 'dragon',
- version='0.2.1.5',
+ version='0.2.1.6',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
| 1 | VM.TensorFlow Preview | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
1826 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
dragon.config.EnableCUDA(device_id, use_cudnn=True)
```
### Automatic Memory Optimization(AMC)
```Shell
import dragon.config
dragon.config.SetDebugMode(False)
```
This option will make all gradients share a global tensor(debugging is intractable).
which prefers a 50% memory-usage and 15% slower solution during training phase.
### Scope
- NameScope
```Shell
<MSG> add memonger for Dragon
<DFF> @@ -164,19 +164,38 @@ dragon.config.EnableCPU()
dragon.config.EnableCUDA(device_id, use_cudnn=True)
```
-### Automatic Memory Optimization(AMC)
+### Memonger
+Dragon is a extremely memory efficient framework.
+
+It is supported to drop intermediate results(mirrow stage) during forward phase, and share grads during backward phase,
+
+takes 25% and 50% memory-usage comparing caffe and tensorflow respectively.
+
+To use it, just:
+
```Shell
-import dragon.config
-dragon.config.SetDebugMode(False)
+import dragon.memonger as opt
+```
+
+- ShareGrads
+
+```Shell
+opt.share_grads()
```
-This option will make all gradients share a global tensor(debugging is intractable).
+- Drop
+
+```Shell
+import dragon.ops as ops
+y = opt.drop(ops.Relu, x)
+```
-which prefers a 50% memory-usage and 15% slower solution during training phase.
### Scope
+As a graph based framework, Dragon supports various scopes.
+
- NameScope
```Shell
| 24 | add memonger for Dragon | 5 | .md | md | bsd-2-clause | neopenx/Dragon |
1827 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
dragon.config.EnableCUDA(device_id, use_cudnn=True)
```
### Automatic Memory Optimization(AMC)
```Shell
import dragon.config
dragon.config.SetDebugMode(False)
```
This option will make all gradients share a global tensor(debugging is intractable).
which prefers a 50% memory-usage and 15% slower solution during training phase.
### Scope
- NameScope
```Shell
<MSG> add memonger for Dragon
<DFF> @@ -164,19 +164,38 @@ dragon.config.EnableCPU()
dragon.config.EnableCUDA(device_id, use_cudnn=True)
```
-### Automatic Memory Optimization(AMC)
+### Memonger
+Dragon is a extremely memory efficient framework.
+
+It is supported to drop intermediate results(mirrow stage) during forward phase, and share grads during backward phase,
+
+takes 25% and 50% memory-usage comparing caffe and tensorflow respectively.
+
+To use it, just:
+
```Shell
-import dragon.config
-dragon.config.SetDebugMode(False)
+import dragon.memonger as opt
+```
+
+- ShareGrads
+
+```Shell
+opt.share_grads()
```
-This option will make all gradients share a global tensor(debugging is intractable).
+- Drop
+
+```Shell
+import dragon.ops as ops
+y = opt.drop(ops.Relu, x)
+```
-which prefers a 50% memory-usage and 15% slower solution during training phase.
### Scope
+As a graph based framework, Dragon supports various scopes.
+
- NameScope
```Shell
| 24 | add memonger for Dragon | 5 | .md | md | bsd-2-clause | neopenx/Dragon |
1828 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
* @create 2018-02-26 11:16
**/
public class RedisSentinelManager implements IRedisManager{
private String host = "127.0.0.1:26379";
private String masterName = "mymaster";
private static final int DEFAULT_EXPIRE = 3600;
// expire time in seconds
private int expire = DEFAULT_EXPIRE;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// timeout for jedis try to read data from redis server
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private volatile JedisSentinelPool jedisSentinelPool = null;
private void init() {
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// timeout for jedis try to read data from redis server
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private volatile JedisSentinelPool jedisPool;
@Override
return hostAndPorts;
}
private void checkAndInit() {
if (jedisSentinelPool == null) {
init();
}
}
/**
* get value from redis
* @param key
* @return
*/
public byte[] get(byte[] key){
checkAndInit();
if (key == null) {
return null;
}
byte[] value = null;
Jedis jedis = jedisSentinelPool.getResource();
try{
value = jedis.get(key);
}finally{
jedis.close();
}
return value;
}
/**
* set
* @param key
* @param value
* @return
*/
public byte[] set(byte[] key,byte[] value){
checkAndInit();
if (key == null) {
return null;
}
Jedis jedis = jedisSentinelPool.getResource();
try{
jedis.set(key,value);
if(this.expire != 0){
jedis.expire(key, this.expire);
}
}finally{
jedis.close();
}
return value;
}
/**
* set
* @param key
* @param value
* @param expire
* @return
*/
public byte[] set(byte[] key,byte[] value,int expire){
checkAndInit();
if (key == null) {
return null;
}
Jedis jedis = jedisSentinelPool.getResource();
try{
jedis.set(key,value);
if(expire != 0){
jedis.expire(key, expire);
}
}finally{
jedis.close();
}
return value;
}
/**
* del
* @param key
*/
public void del(byte[] key){
checkAndInit();
if (key == null) {
return;
}
Jedis jedis = jedisSentinelPool.getResource();
try{
jedis.del(key);
}finally{
jedis.close();
}
}
/**
* size
*/
public Long dbSize(){
checkAndInit();
Long dbSize = 0L;
Jedis jedis = jedisSentinelPool.getResource();
try{
dbSize = jedis.dbSize();
}finally{
jedis.close();
}
return dbSize;
}
/**
* keys
* @param pattern
* @return
*/
public Set<byte[]> keys(byte[] pattern){
checkAndInit();
Set<byte[]> keys = null;
Jedis jedis = jedisSentinelPool.getResource();
try{
keys = jedis.keys(pattern);
}finally{
jedis.close();
}
return keys;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public String getMasterName() {
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, getJedisPoolConfig(), timeout, soTimeout, password, database);
}
this.masterName = masterName;
}
public int getExpire() {
return expire;
}
public void setExpire(int expire) {
this.expire = expire;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
public void setMasterName(String masterName) {
this.masterName = masterName;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
public JedisSentinelPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisSentinelPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> 1. add PropertiesRedisManager , manage public properties used by redis
2. add JedisManager , Whether JedisPool or JedisSentinelPool is used, we are going to operate redis by acquiring Jedis objects. The subclass realizes the way to get Jedis objects by realizing the getJedis () method of JedisManager.
3. restract class Redismanager and RedisSentinelManager , make them extends JedisManager, implements getJedis() method
<DFF> @@ -10,25 +10,10 @@ import java.util.Set;
* @create 2018-02-26 11:16
**/
-public class RedisSentinelManager implements IRedisManager{
+public class RedisSentinelManager extends JedisManager{
- private String host = "127.0.0.1:26379";
private String masterName = "mymaster";
- private static final int DEFAULT_EXPIRE = 3600;
- // expire time in seconds
- private int expire = DEFAULT_EXPIRE;
-
- // timeout for jedis try to connect to redis server, not expire time! In milliseconds
- private int timeout = Protocol.DEFAULT_TIMEOUT;
-
- // timeout for jedis try to read data from redis server
- private int soTimeout = Protocol.DEFAULT_TIMEOUT;
-
- private String password;
-
- private int database = Protocol.DEFAULT_DATABASE;
-
private volatile JedisSentinelPool jedisSentinelPool = null;
private void init() {
@@ -48,134 +33,12 @@ public class RedisSentinelManager implements IRedisManager{
return hostAndPorts;
}
- private void checkAndInit() {
+ @Override
+ protected Jedis getJedis() {
if (jedisSentinelPool == null) {
init();
}
- }
-
- /**
- * get value from redis
- * @param key
- * @return
- */
- public byte[] get(byte[] key){
- checkAndInit();
- if (key == null) {
- return null;
- }
- byte[] value = null;
- Jedis jedis = jedisSentinelPool.getResource();
- try{
- value = jedis.get(key);
- }finally{
- jedis.close();
- }
- return value;
- }
-
- /**
- * set
- * @param key
- * @param value
- * @return
- */
- public byte[] set(byte[] key,byte[] value){
- checkAndInit();
- if (key == null) {
- return null;
- }
- Jedis jedis = jedisSentinelPool.getResource();
- try{
- jedis.set(key,value);
- if(this.expire != 0){
- jedis.expire(key, this.expire);
- }
- }finally{
- jedis.close();
- }
- return value;
- }
-
- /**
- * set
- * @param key
- * @param value
- * @param expire
- * @return
- */
- public byte[] set(byte[] key,byte[] value,int expire){
- checkAndInit();
- if (key == null) {
- return null;
- }
- Jedis jedis = jedisSentinelPool.getResource();
- try{
- jedis.set(key,value);
- if(expire != 0){
- jedis.expire(key, expire);
- }
- }finally{
- jedis.close();
- }
- return value;
- }
-
- /**
- * del
- * @param key
- */
- public void del(byte[] key){
- checkAndInit();
- if (key == null) {
- return;
- }
- Jedis jedis = jedisSentinelPool.getResource();
- try{
- jedis.del(key);
- }finally{
- jedis.close();
- }
- }
-
- /**
- * size
- */
- public Long dbSize(){
- checkAndInit();
- Long dbSize = 0L;
- Jedis jedis = jedisSentinelPool.getResource();
- try{
- dbSize = jedis.dbSize();
- }finally{
- jedis.close();
- }
- return dbSize;
- }
-
- /**
- * keys
- * @param pattern
- * @return
- */
- public Set<byte[]> keys(byte[] pattern){
- checkAndInit();
- Set<byte[]> keys = null;
- Jedis jedis = jedisSentinelPool.getResource();
- try{
- keys = jedis.keys(pattern);
- }finally{
- jedis.close();
- }
- return keys;
- }
-
- public String getHost() {
- return host;
- }
-
- public void setHost(String host) {
- this.host = host;
+ return jedisSentinelPool.getResource();
}
public String getMasterName() {
@@ -186,44 +49,4 @@ public class RedisSentinelManager implements IRedisManager{
this.masterName = masterName;
}
- public int getExpire() {
- return expire;
- }
-
- public void setExpire(int expire) {
- this.expire = expire;
- }
-
- public int getTimeout() {
- return timeout;
- }
-
- public void setTimeout(int timeout) {
- this.timeout = timeout;
- }
-
- public int getSoTimeout() {
- return soTimeout;
- }
-
- public void setSoTimeout(int soTimeout) {
- this.soTimeout = soTimeout;
- }
-
- public String getPassword() {
- return password;
- }
-
- public void setPassword(String password) {
- this.password = password;
- }
-
- public int getDatabase() {
- return database;
- }
-
- public void setDatabase(int database) {
- this.database = database;
- }
-
}
| 4 | 1. add PropertiesRedisManager , manage public properties used by redis 2. add JedisManager , Whether JedisPool or JedisSentinelPool is used, we are going to operate redis by acquiring Jedis objects. The subclass realizes the way to get Jedis objects by realizing the getJedis () method of JedisManager. 3. restract class Redismanager and RedisSentinelManager , make them extends JedisManager, implements getJedis() method | 181 | .java | java | mit | alexxiyang/shiro-redis |
1829 | <NME> RedisCacheTest.java
<BEF> package org.crazycake.shiro;
import org.apache.commons.lang3.math.NumberUtils;
import org.apache.shiro.subject.PrincipalCollection;
import org.crazycake.shiro.exception.CacheManagerPrincipalIdNotAssignedException;
import org.crazycake.shiro.exception.PrincipalInstanceException;
import org.crazycake.shiro.model.*;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
import java.util.Properties;
import java.util.Set;
import static fixture.TestFixture.turnUserToFakeAuth;
import static org.junit.Assert.fail;
import static fixture.TestFixture.*;
/**
* input key, value (java)
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisCache<PrincipalCollection, FakeAuth> redisCache;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithPrincipalIdFieldName;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithEmptyPrincipalIdFieldName;
private Properties properties = loadProperties("shiro-standalone.ini");
private PrincipalCollection user1;
private PrincipalCollection user2;
private PrincipalCollection user3;
private Set users1_2_3;
private String prefix;
RedisCache rc = mountRedisCache();
Object value = rc.put("foo", "bar");
assertThat(value, is("bar"));
verify(redisManager).set(keySerializer.serialize("employee:foo"), valueSerializer.serialize("bar"), 1);
PrincipalCollection principal = new EmployeePrincipal(3);
rc.put(principal, "account information");
redisCache = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME);
redisCacheWithPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), properties.getProperty("cacheManager.principalIdFieldName"));
redisCacheWithEmptyPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), "");
user1 = scaffoldAuthKey(scaffoldUser());
user2 = scaffoldAuthKey(scaffoldUser());
user3 = scaffoldAuthKey(scaffoldUser());
users1_2_3 = scaffoldKeys(user1, user2, user3);
}
public int getId() {
return this.id;
}
}
class EmployeePrincipal implements PrincipalCollection {
private Employee primaryPrincipal;
public EmployeePrincipal(int id) {
this.primaryPrincipal = new Employee(id);
}
@Override
public Employee getPrimaryPrincipal() {
return this.primaryPrincipal;
}
@Override
public <T> T oneByType(Class<T> aClass) {
return null;
}
@Override
public <T> Collection<T> byType(Class<T> aClass) {
return null;
}
@Override
public List asList() {
return null;
}
@Override
public Set asSet() {
return null;
}
@Override
public Collection fromRealm(String s) {
return null;
}
FakeAuth fakeAuth = redisCache.get(user1);
assertAuthEquals(fakeAuth, turnUserToFakeAuth((UserInfo)user1.getPrimaryPrincipal()));
}
@Test
public void testSize() throws InterruptedException {
return null;
}
}
<MSG> Merge pull request #90 from theo/support-string
#89 Add support for strings being the Principal
<DFF> @@ -1,20 +1,24 @@
package org.crazycake.shiro;
+import static fixture.TestFixture.*;
+import static org.junit.Assert.fail;
+
+import java.util.Properties;
+import java.util.Set;
+
import org.apache.commons.lang3.math.NumberUtils;
import org.apache.shiro.subject.PrincipalCollection;
+import org.apache.shiro.subject.SimplePrincipalCollection;
import org.crazycake.shiro.exception.CacheManagerPrincipalIdNotAssignedException;
import org.crazycake.shiro.exception.PrincipalInstanceException;
-import org.crazycake.shiro.model.*;
+import org.crazycake.shiro.model.FakeAuth;
+import org.crazycake.shiro.model.UserInfo;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
-import java.util.Properties;
-import java.util.Set;
-import static fixture.TestFixture.turnUserToFakeAuth;
-import static org.junit.Assert.fail;
-import static fixture.TestFixture.*;
+import com.github.javafaker.Faker;
/**
* input key, value (java)
@@ -25,10 +29,14 @@ public class RedisCacheTest {
private RedisCache<PrincipalCollection, FakeAuth> redisCache;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithPrincipalIdFieldName;
private RedisCache<PrincipalCollection, FakeAuth> redisCacheWithEmptyPrincipalIdFieldName;
+ private RedisCache<PrincipalCollection, String> redisCacheWithStrings;
+
private Properties properties = loadProperties("shiro-standalone.ini");
private PrincipalCollection user1;
private PrincipalCollection user2;
private PrincipalCollection user3;
+ private PrincipalCollection user4;
+
private Set users1_2_3;
private String prefix;
@@ -42,9 +50,11 @@ public class RedisCacheTest {
redisCache = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), RedisCacheManager.DEFAULT_PRINCIPAL_ID_FIELD_NAME);
redisCacheWithPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), properties.getProperty("cacheManager.principalIdFieldName"));
redisCacheWithEmptyPrincipalIdFieldName = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), "");
+ redisCacheWithStrings = scaffoldRedisCache(redisManager, new StringSerializer(), new ObjectSerializer(), prefix, NumberUtils.toInt(properties.getProperty("cacheManager.expire")), properties.getProperty("cacheManager.principalIdFieldName"));
user1 = scaffoldAuthKey(scaffoldUser());
user2 = scaffoldAuthKey(scaffoldUser());
user3 = scaffoldAuthKey(scaffoldUser());
+ user4 = new SimplePrincipalCollection(Faker.instance().gameOfThrones().character(), Faker.instance().gameOfThrones().city());
users1_2_3 = scaffoldKeys(user1, user2, user3);
}
@@ -94,6 +104,13 @@ public class RedisCacheTest {
FakeAuth fakeAuth = redisCache.get(user1);
assertAuthEquals(fakeAuth, turnUserToFakeAuth((UserInfo)user1.getPrimaryPrincipal()));
}
+
+ @Test
+ public void testPutString() {
+ redisCacheWithStrings.put(user4, user4.getPrimaryPrincipal().toString());
+ String auth = redisCacheWithStrings.get(user4);
+ assertEquals(auth, user4.getPrimaryPrincipal());
+ }
@Test
public void testSize() throws InterruptedException {
| 23 | Merge pull request #90 from theo/support-string | 6 | .java | java | mit | alexxiyang/shiro-redis |
1830 | <NME> index.rst
<BEF> Getting started quickly with Hitch
==================================
This is a basic introduction to getting your first hitch test up and running.
Install prerequisites
---------------------
You should have a reasonably up to date Ubuntu, Debian, Arch, Fedora or Mac.
On Ubuntu/Debian::
$ sudo apt-get install python3 python-pip python-virtualenv
$ sudo pip install --upgrade hitch
On Mac OS X::
$ brew install python python3
$ pip install --upgrade hitch virtualenv
On Arch::
$ sudo pacman -Sy python python-virtualenv
$ sudo pip install --upgrade hitch
On Fedora/RHEL/CentOS::
$ sudo yum install python3 python-virtualenv python-pip python3
$ sudo pip install --upgrade hitch
.. note::
The 'hitch' package (the bootstrapper) is a small python package with no dependencies.
Apart from installing all of the required packages and creating a .hitch directory,
the following files are created in your tests directory:
* :doc:`glossary/hitchreqs.txt`
* :doc:`glossary/engine.py`
* tdd.settings (:doc:`glossary/hitch_settings`)
* ci.settings
* all.settings
* :doc:`stub.test`
* README.rst
You might want to take a look around these files. They all try to be self-explanatory.
Create the hitch environment
----------------------------
To initialize a hitch environment, run hitch init in your tests directory::
~/yourproject/tests$ hitch init
This will:
* Install any necessary system packages required to run hitch.
* Create a .hitch directory, create a python 3 virtualenv in it and install all the necessary packages to run hitch tests there.
* Ask you some basic questions about the project which you are testing.
* Create a skeleton hitch project template for you to use based upon the answers.
The skeleton template will include all of the following:
* :doc:`/glossary/hitchreqs.txt`
* :doc:`/glossary/engine.py`
* tdd.settings (:doc:`/glossary/hitch_settings`)
* ci.settings
* all.settings
* :doc:`/glossary/stub.test`
* README.rst
You might want to take a look around these files. They all try to be self-explanatory.
Running your first test
-----------------------
You can now run the stub test. Try running it in test driven development mode::
$ hitch test stub.test --settings tdd.settings
The first time you run this command it *may take a while* (up to 25 minutes depending upon what you answered).
.. note::
:doc:`/faq/why_does_the_first_test_run_take_so_long`
This might be a good time to take a break.
While you're at it, subscribe to the `hitch subreddit <https://reddit.com/r/hitchtest>`_ and
`twitter feed <https://twitter.com/testhitch>`_ for updates and news.
Back?
-----
.. note::
If the stub test failed, please `raise an issue <https://github.com/hitchtest/hitch/issues/new>`_.
Once the test run is done setting up, if there were no problems, you should see this::
Python 3.4.3 (default, Jul 28 2015, 18:20:59)
Type "copyright", "credits" or "license" for more information.
Further reading
---------------
* :doc:`howto/web_applications`
* :doc:`howto/command_line_applications`
Advanced topics
---------------
* :doc:`howto/test_driven_development`
* :doc:`howto/parameterize_test_cases`
* :doc:`howto/continuous_integration`
.. note::
The components you selected during the set up should also be running. For example, if you
chose postgres, the latest version of postgres will have been installed in the ~/.hitchpkg
directory and it will be running and accessible.
To exit, simply hit ctrl-D.
This will shut everything down and then quit.
You're now ready to start writing new tests.
Happy testing!
.. note::
Was there anything that went wrong or was confusing? Please tell us! Help with :doc:`/misc/clarifying_documentation`.
Further reading
---------------
* :doc:`/howto/web_applications`
* :doc:`/howto/command_line_applications`
Advanced topics
---------------
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
* :doc:`/howto/external_apis`
* :doc:`/howto/continuous_integration`
Plugin Documentation
--------------------
.. toctree::
:glob:
:maxdepth: 1
/plugins/*
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
<MSG> DOCS : Fix for quickstart docs
<DFF> @@ -36,12 +36,12 @@ mostly requiring a yes or no answer and will then generate a skeleton project te
Apart from installing all of the required packages and creating a .hitch directory,
the following files are created in your tests directory:
-* :doc:`glossary/hitchreqs.txt`
-* :doc:`glossary/engine.py`
-* tdd.settings (:doc:`glossary/hitch_settings`)
+* :doc:`/glossary/hitchreqs.txt`
+* :doc:`/glossary/engine.py`
+* tdd.settings (:doc:`/glossary/hitch_settings`)
* ci.settings
* all.settings
-* :doc:`stub.test`
+* :doc:`/glossary/stub.test`
* README.rst
You might want to take a look around these files. They all try to be self-explanatory.
@@ -107,15 +107,15 @@ Happy testing!
Further reading
---------------
-* :doc:`howto/web_applications`
-* :doc:`howto/command_line_applications`
+* :doc:`/howto/web_applications`
+* :doc:`/howto/command_line_applications`
Advanced topics
---------------
-* :doc:`howto/test_driven_development`
-* :doc:`howto/parameterize_test_cases`
-* :doc:`howto/continuous_integration`
+* :doc:`/howto/test_driven_development`
+* :doc:`/howto/parameterize_test_cases`
+* :doc:`/howto/continuous_integration`
.. note::
| 9 | DOCS : Fix for quickstart docs | 9 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1831 | <NME> .travis.yml
<BEF> language: java
<MSG> test travis ci redis
<DFF> @@ -1,1 +1,3 @@
-language: java
\ No newline at end of file
+language: java
+services:
+ - redis-server
\ No newline at end of file
| 3 | test travis ci redis | 1 | .yml | travis | mit | alexxiyang/shiro-redis |
1832 | <NME> README.rst
<BEF> Hitch
=====
.. image:: https://badges.gitter.im/Join%20Chat.svg
:alt: Join the chat at https://gitter.im/hitchtest/hitch
:target: https://gitter.im/hitchtest/hitch?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
Hitch is a UNIX-based testing framework for writing integration tests with an emphasis on:
* Minimizing and eliminating `brittle tests <https://hitchtest.readthedocs.org/en/latest/glossary/brittle_tests.html>`_
* `Test readability <https://hitchtest.readthedocs.org/en/latest/glossary/test_readability.html>`_
* `Loose coupling <https://hitchtest.readthedocs.org/en/latest/glossary/loose_coupling.html>`_
* `Test realism <https://hitchtest.readthedocs.org/en/latest/glossary/test_realism.html>`_
* Tests that `fail fast <https://hitchtest.readthedocs.org/en/latest/glossary/fail_fast.html>`_ and `fail clearly <https://hitchtest.readthedocs.org/en/latest/glossary/fail_clearly.html>`_
Available plugins
-----------------
Hitch comes with a variety of plugins to aid you to realistically testing various
kinds of software, components and scenarios, including:
* `Python <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpython.html>`_ (includes Django and Celery service definitions)
* `Postgresql <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpostgres.html>`_
* `Redis <https://hitchtest.readthedocs.org/en/latest/plugins/hitchredis.html>`_
* `Web apps (using selenium) <https://hitchtest.readthedocs.org/en/latest/plugins/hitchselenium.html>`_
* Command line apps (using pexpect)
* `Cron <https://hitchtest.readthedocs.org/en/latest/plugins/hitchcron.html>`_
* MySQL
* RabbitMQ
* Elastic Search
`Plugin documentation <https://hitchtest.readthedocs.org/en/latest/plugins/>`_
Getting started
---------------
See the `quickstart tutorial <https://hitchtest.readthedocs.org/en/latest/quickstart/index.html>`_ on how to
get started testing an existing project.
Also check out `cookiecutter-django <https://github.com/pydanny/cookiecutter-django>`_
if you want to start a new Django project with tests.
Status
------
Hitch is currently in beta.
It is regression tested on:
* Operating Systems : Mac OS X Yosemite, Ubuntu, Debian, Fedora and Arch Linux.
* Python versions : 3.5.0, 3.4.3, 3.4.0 and 3.3.0 `(what about python 2?) <https://hitchtest.readthedocs.org/en/latest/faq/what_about_python2.html>`_
It does not currently work on Windows.
See `tested on <https://hitchtest.readthedocs.org/en/latest/misc/tested_on.html>`_ for more details on
how the framework is tested (with itself, naturally).
Contents of this project
------------------------
This project contains:
* The code for the bootstrapper script
* Documentation for the whole project (`hosted at readthedocs <https://hitchtest.readthedocs.org/en/latest/>`_)
* Code for other components is at: https://github.com/hitchtest/
Or::
$ pipsi_ install hitch (if you do not want to use root)
This is a very simple script (with one dependency: click), which creates its own
virtualenv that you can use to install all the other components.
There is currently one tutorial for Hitch:
* Getting started testing with Hitch and Django, Celery, Cron, Redis and Postgresql
Components
* HitchTest_ - simple declarative test description language based on YAML and jinja2.
* HitchServe_ - simple service orchestration to let you easily write functional tests for service based applications.
* HitchEnvironment_ - plugin to let you define the environment your tests will run on.
* HitchSMTP_ - Mock SMTP server - to test email sending in your functional tests.
* HitchCron_ - Mock cron server - to test applications require cron-like behavior.
* HitchSelenium_ - Simple wrapper around selenium to let you mock browser usage.
.. _roadmap: https://github.com/hitchtest/hitch/ROADMAP.rst
.. _HitchTest: https://github.com/hitchtest/hitchtest
.. _HitchServe: https://github.com/hitchtest/hitchserve
.. _HitchEnvironment: https://github.com/hitchtest/hitchenvironment
.. _HitchSMTP: https://github.com/hitchtest/hitchsmtp
.. _HitchCron: https://github.com/hitchtest/hitchcron
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
<MSG> DOCS : Updated README.
<DFF> @@ -68,14 +68,12 @@ You can install the hitch bootstrapping script with::
Or::
- $ pipsi_ install hitch (if you do not want to use root)
+ $ pipsi install hitch (if you do not want to use root)
This is a very simple script (with one dependency: click), which creates its own
virtualenv that you can use to install all the other components.
-There is currently one tutorial for Hitch:
-
-* Getting started testing with Hitch and Django, Celery, Cron, Redis and Postgresql
+Documentation is here : http://hitchtest.readthedocs.org/
Components
@@ -86,7 +84,6 @@ together, or not at all. Those are:
* HitchTest_ - simple declarative test description language based on YAML and jinja2.
* HitchServe_ - simple service orchestration to let you easily write functional tests for service based applications.
-* HitchEnvironment_ - plugin to let you define the environment your tests will run on.
* HitchSMTP_ - Mock SMTP server - to test email sending in your functional tests.
* HitchCron_ - Mock cron server - to test applications require cron-like behavior.
* HitchSelenium_ - Simple wrapper around selenium to let you mock browser usage.
@@ -111,7 +108,6 @@ See the roadmap_ for planned future features.
.. _roadmap: https://github.com/hitchtest/hitch/ROADMAP.rst
.. _HitchTest: https://github.com/hitchtest/hitchtest
.. _HitchServe: https://github.com/hitchtest/hitchserve
-.. _HitchEnvironment: https://github.com/hitchtest/hitchenvironment
.. _HitchSMTP: https://github.com/hitchtest/hitchsmtp
.. _HitchCron: https://github.com/hitchtest/hitchcron
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
| 2 | DOCS : Updated README. | 6 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1833 | <NME> web_applications.rst
<BEF> mHow to test web applications
============================
.. note::
This tutorial assumes that you have the :doc:`glossary/hitch_plugin` :doc:`plugins/hitchselenium`
installed and its step library is set up.
If you followed the quickstart tutorial and said yes to testing a webapp, this should already be done for you.
.. warning::
This tutorial is a work in progress. It is not currently complete.
Writing a step that clicks on a button or link
----------------------------------------------
To click on an individual item, you need to use the step "click" like so::
- Click: register
This is telling hitch to click on an HTML element with the HTML ID "register".
.. note::
This part is sometimes controversial. If you disagree, read :doc:`faq/why_just_html_ids_and_classes` for the rationale.
Now, there's a good chance that:
* Your HTML element does not have that ID - in which case you should *change the HTML itself* so that it does have that ID.
* That button has a different HTML ID - in which case you should use that ID instead (bookmark :doc:`/howto/refactor_your_tests` for later).
Writing a step that clicks on an item that is part of a group
-------------------------------------------------------------
Sometimes the thing that you want to click on is part of a group, or a group of groups.
For instance, you may want to click on the first link in a list of links. To do that you use the same step::
- Click: first friend-link
Here, "friend-link" is an *HTML class*.
As before, if the list of elements do not have a readable HTML class signifying what they are, you should *add* a class in the HTML itself.
Elements can have multiple classes, so if an element already has a class but it does not clearly identify all of the items
in the list, you should add a class that does.
If you want to click on the 2nd item::
- Click: 2nd friend-link
Or the last::
- Click: Last friend-link
To click on an item that is part of a group which is *also* itself part of a group, you can specify two classes::
- Click: First calendar day-31
Try to keep the test steps readable by using appropriately named classes where possible.
Verifying an element exists on the page - e.g. an error message
---------------------------------------------------------------
The same pattern can be used to wait for elements to be visible on the page. e.g.::
- Wait to appear: first friend-link
- Wait to appear: 2nd friend-link
- Wait to appear: Last friend-link
- Wait to appear: First calendar day-31
This is the recommended approach for items which signify certain things that you want to happen.
If, for example, you are testing for the presence of an error message indicating that a user must enter a ZIP code,
the following is a good way of doing it::
- Wait to appear: error-message-zip-code
Waiting for text to appear
--------------------------
Note that waiting for specific text to appear is *not* a good approach for detecting error messages,
or, indeed, any other kind of text which is decided upon by the application. Why? Translations.
If an application is translated and you test the same scenario by checking for IDs, the test will
continue to work. If you just check for the presence of text, it will break.
Nonetheless, waiting for text to appear is often a good way to determine if text entered by the user
in a test shows up in the right place.
Waiting for text to appear also follows the same pattern as above::
- Wait to contain:
item: first username
text: django
- Wait to appear:
item: second username
text: django
- Wait to appear:
item: last username
text: django
- Wait to appear:
item: first user username
text: django
<MSG> DOCS : Fixes for bad links.
<DFF> @@ -1,16 +1,16 @@
-mHow to test web applications
+How to test web applications
============================
.. note::
- This tutorial assumes that you have the :doc:`glossary/hitch_plugin` :doc:`plugins/hitchselenium`
+ This tutorial assumes that you have the :doc:`/glossary/hitch_plugin` :doc:`/plugins/hitchselenium`
installed and its step library is set up.
If you followed the quickstart tutorial and said yes to testing a webapp, this should already be done for you.
.. warning::
- This tutorial is a work in progress. It is not currently complete.
+ This tutorial is a work in progress. It is usable, but more is coming soon.
Writing a step that clicks on a button or link
@@ -24,7 +24,7 @@ This is telling hitch to click on an HTML element with the HTML ID "register".
.. note::
- This part is sometimes controversial. If you disagree, read :doc:`faq/why_just_html_ids_and_classes` for the rationale.
+ This part is sometimes controversial. If you disagree, read :doc:`/faq/why_just_html_ids_and_classes` for the rationale.
Now, there's a good chance that:
| 4 | DOCS : Fixes for bad links. | 4 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1834 | <NME> generic_service_api.rst
<BEF> Generic Service API
===================
.. note::
This documentation applies to the latest version of hitchserve.
All of the services listed are run using the generic service API. This API lets
you start, monitor and stop any kind of process during a test.
Defining a Service Bundle
-------------------------
To run one or more services together during your tests, you must first define a
:doc:`/glossary/service_bundle` which will run them all together.
The definition of your service bundle should go in your :doc:`/glossary/test_setup` in the
:doc:`/glossary/execution_engine`:
.. code-block:: python
# Create a service bundle
self.services = hitchserve.ServiceBundle(
project_directory=PROJECT_DIRECTORY, # Default directory all of your services are started in
startup_timeout=15.0, # How long to wait for all of the services to startup
shutdown_timeout=5.0, # How long to wait for all of the services to shutdown before killing
)
Once your service bundle is defined, you can start defining services.
Defining a Service to Run
--------------------------
Next, your code needs to define services to run with your service bundle. You define
generic services like this:
.. code-block:: python
self.services['MyService'] = hitchserve.Service(
command=["command", "arg1", "arg2", "arg3"], # Mandatory - command to run the service
log_line_ready_checker=lambda line: line == "READY", # Mandatory - function used to ascertain readiness of the service
directory="/directory/to/run/command/in", # Optional
no_libfaketime=False, # Optional (if set to True, the service is run without libfaketime)
env_vars={'A':1, 'B':2}, # Optional (dictionary of environment variables to feed to the service)
needs=[self.services['Django']], # Optional (services to start and wait for before starting this one)
)
Starting a service bundle
-------------------------
Once all of your services are defined, they still aren't started. To start your services
you must call the startup method:
.. code-block:: python
self.services.startup(interactive=False)
.. note::
interactive=False should be the default for all tests. However, if you want to run this
command in an :doc:`/glossary/ipython` console, use interactive=True.
interactive=False will take over the console and start printing logs as they arrive.
interactive=True will not take over the console.
Interacting with a Service Bundle: Switching to Interactive Mode
----------------------------------------------------------------
If you want your service bundle to stop logging to the screen (e.g. so you can launch
IPython), you can start the interactive mode.
.. code-block:: python
self.services.start_interactive_mode()
# Do interactive stuff here
self.services.stop_interactive_mode()
If you just want to print a log message during your test alongside all of the
other logs, however, you can just use:
.. code-block:: python
self.services.log("Your message here")
self.services.warn("A bad thing just happened")
.. warning::
Avoid using the print("") command to log messages. It will cause an error.
Interacting with a Service Bundle: Logs
---------------------------------------
Most services output information about what they are doing. In UNIX, there are two
'pipes' known as stdout and stderr where processes can log regular information
and errors.
During normal operation in a test, both of these are logged to the screen, alongside
the name of the service. E.g.::
[ Django] Performing system checks...
[ Django] System check identified no issues (0 silenced).
[ Django] July 11, 2015 - 10:36:58
[ Django] Django version 1.8, using settings 'remindme.settings'
[ Django] Starting development server at http://127.0.0.1:18080/
[ Django] Quit the server with CONTROL-C.
[ Err Django] [11/Jul/2015 10:36:59]"GET / HTTP/1.1" 500 99545
[ Err Django] [11/Jul/2015 10:36:59]"GET /favicon.ico HTTP/1.1" 404 2416
[ Err Django] [11/Jul/2015 10:36:59]"GET /favicon.ico HTTP/1.1" 404 2416
This will hopefully tell you most of what you need to know about why your services
are reporting errors.
While a test is paused and interactive mode is switched off, you can access
these logs via the log object::
In [1]: self.service['Django'].logs
[ Prints all of the logs ]
You can see the stdout and stderr individually, too::
In [2]: self.service['Django'].logs.out
[ Prints all of the logs ]
In [3]: self.service['Django'].logs.err
[ Prints all of the logs ]
As with the UNIX console, you can also tail your logs. This is a useful debugging
tool::
In [4]: self.service['Django'].logs.tail.follow(lines_back=2)
[ Prints logs from two lines before the command starts. ]
[ Continues logging in real time until you hit ctrl-C ]
Interacting with a Service Bundle: JSON Logs
--------------------------------------------
Some services, such as the HitchSMTP mock SMTP server, write their log lines as JSON.
Hitchserve parses these lines into dictionaries so that your test can inspect them directly.
An email received by HitchSMTP, for instance, comes back as something like::
    [{'sent_from': 'noreply@localhost',
      'sent_to': ['[email protected]'],
      'subject': 'Reminder'}]
This is a useful feature for verifying interactions with mock services went according to plan.
You can also tail the logs until a specific condition is met in a JSON line, for instance::
In [5]: self.services['HitchSMTP'].logs.out.tail.until_json(
lambda email: containing in email['payload'] or containing in email['subject'],
timeout=15,
lines_back=1,
)
[ returns full dict representation of JSON snippet representing email once it has been received ]
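In practice you will usually wrap a call like this in an engine step so that tests can simply
say something like "- Wait for email: ...". A minimal sketch of such a step (the step name and
timeout value are illustrative choices, not part of the hitchserve API):
.. code-block:: python
    def wait_for_email(self, containing=None):
        """Wait for the mock SMTP server to receive a matching email."""
        self.services['HitchSMTP'].logs.out.tail.until_json(
            lambda email: containing in email['payload'] or containing in email['subject'],
            timeout=25,
            lines_back=1,
        )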
Interacting with a Service Bundle: Time Travel
----------------------------------------------
Many bugs and test scenarios often cannot be realistically replicated without
jumping through time.
The example application - a reminders app - is one example. To test that a reminder
is really sent after 30 days, the application must *think* that 30 days have actually
passed.
You can mimic these scenarios for services run using your service bundle by
calling the time_travel API, which can be used like so::
In [1]: self.services.time_travel(days=1)
Time traveling to 23 hours from now
In [2]: self.services.time_travel(hours=25)
Time traveling to 2 days from now
In [3]: self.services.time_travel(minutes=60)
Time traveling to 2 days from now
In [4]: self.services.time_travel(seconds=60)
Time traveling to 2 days from now
In [5]: from datetime import timedelta
In [6]: self.services.time_travel(timedelta=timedelta(hours=1))
Time traveling to 2 days from now
If you forgot where you are, you can get the current (mocked) time via::
In [7]: self.services.now()
Out[7]: datetime.datetime(2015, 7, 19, 16, 21, 33, 703669)
To move to an absolute time::
In [8]: from datetime import datetime
In [9]: self.services.time_travel(datetime=datetime.now())
Time traveling to now
Note that if no_libfaketime is set to True for a service, it will not pick up on the new time.
.. warning::
This feature relies upon a C library called libfaketime.
Libfaketime sometimes causes buggy and unpredictable behavior in some programs (e.g. node.js and Java)
on some platforms.
If you see problems when running a service, you may need to switch it off with 'no_libfaketime=True'.
Some programs will also work fine (e.g. firefox), but they will not pick up on the time being fed
to them.
Libfaketime works well with python and postgresql.
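If you use time travel a lot, it is convenient to expose it as a test step by adding a small
method to your execution engine, along these lines (a minimal sketch; the step name is just an
example):
.. code-block:: python
    def time_travel_days(self, days):
        """Make all services believe the given number of days has passed."""
        self.services.time_travel(days=int(days))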
Interacting with a Service Bundle: Connecting to a service's IPython Kernel
---------------------------------------------------------------------------
IPython kernels are a great way of debugging your code. They give you access
to a REPL which you can use to inspect variables and run commands to see their
effect.
With python code, you can invoke a kernel by putting the following line of
code in your application:
.. code-block:: python
import IPython ; IPython.embed_kernel()
Hitch provides a convenience function which you can use to listen to a service's
logs and detect the presence of a recently embedded kernel and then connect
directly to it and launch an interpreter in interactive mode.
.. code-block:: python
def connect_to_kernel(self, service_name):
self.services.connect_to_ipykernel(service_name)
This is a step that can be called just by adding ::
- Connect to kernel: Celery
Note that if you are connecting to a kernel after clicking a button in a web
app, be sure to replace 'click' with the following step::
- Click and dont wait for page load: button-id
The regular click step will wait for the next page to load before continuing,
which will never happen because your app paused on loading it due to the embed_kernel.
Interacting with a Service Bundle: The Process API
--------------------------------------------------
To see a service's process ID::
In [1]: self.services['HitchSMTP'].pid
Out[1]: 43215
To interact with or inspect the service's process::
In [1]: self.services['HitchSMTP'].process.<TAB>
self.services['HitchSMTP'].process.as_dict self.services['HitchSMTP'].process.is_running self.services['HitchSMTP'].process.pid
self.services['HitchSMTP'].process.children self.services['HitchSMTP'].process.kill self.services['HitchSMTP'].process.ppid
self.services['HitchSMTP'].process.cmdline self.services['HitchSMTP'].process.memory_info self.services['HitchSMTP'].process.resume
self.services['HitchSMTP'].process.connections self.services['HitchSMTP'].process.memory_info_ex self.services['HitchSMTP'].process.rlimit
self.services['HitchSMTP'].process.cpu_affinity self.services['HitchSMTP'].process.memory_maps self.services['HitchSMTP'].process.send_signal
self.services['HitchSMTP'].process.cpu_percent self.services['HitchSMTP'].process.memory_percent self.services['HitchSMTP'].process.status
self.services['HitchSMTP'].process.cpu_times self.services['HitchSMTP'].process.name self.services['HitchSMTP'].process.suspend
self.services['HitchSMTP'].process.create_time self.services['HitchSMTP'].process.nice self.services['HitchSMTP'].process.terminal
self.services['HitchSMTP'].process.cwd self.services['HitchSMTP'].process.num_ctx_switches self.services['HitchSMTP'].process.terminate
self.services['HitchSMTP'].process.exe self.services['HitchSMTP'].process.num_fds self.services['HitchSMTP'].process.threads
self.services['HitchSMTP'].process.gids self.services['HitchSMTP'].process.num_threads self.services['HitchSMTP'].process.uids
self.services['HitchSMTP'].process.io_counters self.services['HitchSMTP'].process.open_files self.services['HitchSMTP'].process.username
self.services['HitchSMTP'].process.ionice self.services['HitchSMTP'].process.parent self.services['HitchSMTP'].process.wait
The psutil Process class API can be used to inspect the CPU usage of the service, its memory usage, list open files and much much more.
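For example, to check the service's current memory usage or CPU utilization (both methods come straight from psutil)::
In [2]: self.services['HitchSMTP'].process.memory_info()
In [3]: self.services['HitchSMTP'].process.cpu_percent(interval=1.0)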
The full API docs for psutil's Process class are here: https://pythonhosted.org/psutil/#process-class
Interacting with a Service Bundle: Service Sub-commands
-------------------------------------------------------
Many services have special commands which are run during their operation.
For example, Django has the manage command, Redis has redis-cli and
Postgresql has psql.
Hitch provides an API to let you run these commands in the same environment
as the service you are running. This means that they will inherit the same
environment variables and time::
In [1]: self.services['Django'].manage("help").run()
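The same pattern works for the helper commands of other services. For instance, assuming the Postgres service exposes a psql wrapper, you could run an ad hoc query (the query here is only illustrative)::
In [2]: self.services['Postgres'].psql("-c", "SELECT 1;").run()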
Running Arbitrary Code Before and After Starting a Service
----------------------------------------------------------
Some services can just be started and stopped, but others require special
code to be run before and after. A good example of this is postgresql,
which requires initdb be run before starting the database service, and CREATE
USER / CREATE DATABASE to be run after.
If your service has special requirements like this, you can subclass the
hitchserve Service object and override the setup and poststart
methods:
.. code-block:: python
from hitchserve import Service
import signal
class MyService(Service):
def __init__(self, **kwargs):
kwargs['log_line_ready_checker'] = lambda line: "line in logs that signals readiness" in line
kwargs['command'] = ["start_service_command", "arg1", "arg2", "arg3", ]
super(MyService, self).__init__(**kwargs)
def setup(self):
"""This is where you run all of the code you want run before starting the service."""
pass
def poststart(self):
"""This is where you put all of the code you want run after the service is ready."""
pass
<MSG> DOCS : Updated generic service API docs.
<DFF> @@ -145,3 +145,34 @@ The psutil Process class API can be used to inspect the CPU usage of the process
The full API docs for psutil's Process class are here: https://pythonhosted.org/psutil/#process-class
+
+Running Arbitrary Code Before and After Starting
+------------------------------------------------
+
+Some services can just be started and stopped, but others require special
+code to be run before and/or after. A good example of this is postgresql,
+which needs initdb run before starting the database service, and CREATE
+USER / CREATE DATABASE to be run after.
+
+If your service has special requirements, you can subclass the hitchserve
+Service object:
+
+.. code-block:: python
+
+ from hitchserve import Service
+ import signal
+
+
+ class MyService(Service):
+ def __init__(self, **kwargs):
+ kwargs['log_line_ready_checker'] = lambda line: "line in logs that signals readiness" in line
+ kwargs['command'] = ["start_service_command", "arg1", "arg2", "arg3", ]
+ super(MyService, self).__init__(**kwargs)
+
+ def setup(self):
+ """This is where you run all of the code you want run before starting the service."""
+ pass
+
+ def poststart(self):
+ """This is where you run all of the code you want run after starting the service."""
+ pass
| 31 | DOCS : Updated generic service API docs. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1835 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.4.7",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.4.7",
+ version="0.4.8",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1836 | <NME> gen_lmdb.py
<BEF> # --------------------------------------------------------
# Cifar-10 for Dragon
# Copyright(c) 2017 SeetaTech
# Written by Ting Pan
# --------------------------------------------------------
""" Generate database """
import os
import sys
import time
import tarfile
import numpy as np
from six.moves import range as xrange
from dragon.tools.db import LMDB
from dragon.vm.caffe.proto import caffe_pb2
ZFILL = 8
def untar(tar_file):
t = tarfile.open(tar_file)
t.extractall(path='data')
def wrapper_str(raw_str):
if sys.version_info >= (3, 0):
return raw_str.encode()
return raw_str
def extract_images():
prefix = 'data/cifar-10-batches-py'
def extract_images():
prefix = 'data/cifar-10-batches-py'
extract_path = 'data/extract'
if not os.path.exists(os.path.join(extract_path, 'JPEGImages')):
os.makedirs(os.path.join(extract_path, 'JPEGImages'))
if not os.path.exists(os.path.join(extract_path, 'ImageSets')):
os.makedirs(os.path.join(extract_path, 'ImageSets'))
batches = [os.path.join(prefix, 'data_batch_{}'.format(i)) for i in xrange(1, 6)]
batches += [os.path.join(prefix, 'test_batch')]
# process batches
for batch in batches:
with open(batch, 'rb') as f:
if sys.version_info >= (3, 0):
import pickle
with open(batch, 'rb') as f:
dict = pickle.load(f, encoding='bytes')
else:
import cPickle
with open(batch, 'rb') as f:
dict = cPickle.load(f)
for item_idx in xrange(len(dict[wrapper_str('labels')])):
im = dict[wrapper_str('data')][item_idx].reshape((3, 32, 32))
label = dict[wrapper_str('labels')][item_idx]
im = im.transpose((1, 2, 0))
im = im[:, :, ::-1]
label = dict[wrapper_str('labels')][item_idx]
im = im.transpose((1, 2, 0))
im = im[:, :, ::-1]
filename = str(total_idx).zfill(ZFILL) + '.jpg'
cv2.imwrite(os.path.join(extract_path, 'JPEGImages', filename), im)
images_list.append((filename, str(label)))
total_idx += 1
# make list
with open(os.path.join(extract_path, 'ImageSets', 'train.txt'), 'w') as f:
for i in xrange(50000):
item = images_list[i][0] + ' ' + images_list[i][1]
if i != 49999: item += '\n'
f.write(item)
with open(os.path.join(extract_path, 'ImageSets', 'test.txt'), 'w') as f:
for i in xrange(50000, 60000):
item = images_list[i][0] + ' ' + images_list[i][1]
if i != 59999: item += '\n'
f.write(item)
def make_db(image_path, label_path, database_path, pad=0):
if os.path.isfile(label_path) is False:
raise ValueError('input path is empty or wrong.')
if os.path.isdir(database_path) is True:
raise ValueError('the database path is already exist.')
db.open(database_path, mode='w')
db = LMDB(max_commit=10000)
db.open(database_path, mode='w')
total_line = sum(1 for line in open(label_path))
count = 0
zfill_flag = '{0:0%d}' % (ZFILL)
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 95]
start_time = time.time()
with open(label_path, 'r') as input_file:
for record in input_file:
count += 1
if count % 10000 == 0:
now_time = time.time()
print('{0} / {1} in {2:.2f} sec'.format(
count, total_line, now_time - start_time))
db.commit()
record = record.split()
path = record[0]
label = record[1]
img = cv2.imread(os.path.join(image_path ,path))
if pad > 0:
pad_img = np.zeros((img.shape[0] + 2 * pad,
img.shape[1] + 2 * pad, 3), dtype=np.uint8)
pad_img[pad : pad + img.shape[0],
pad : pad + img.shape[1], :] = img
img = pad_img
result, imgencode = cv2.imencode('.jpg', img, encode_param)
datum = caffe_pb2.Datum()
datum.height, datum.width, datum.channels = img.shape
datum.label = int(label)
datum.encoded = True
datum.data = imgencode.tostring()
db.put(zfill_flag.format(count - 1), datum.SerializeToString())
now_time = time.time()
print('{0} / {1} in {2:.2f} sec'.format(count, total_line, now_time - start_time))
print('The size of database is {0} MB.'.format(
float(os.path.getsize(database_path + '/data.mdb') / 1000 / 1000)))
db.commit()
db.close()
shutil.copy(label_path, database_path + '/image_list.txt')
end_time = time.time()
print('{0} images have been stored in the database.'.format(total_line))
print('This task finishes within {0:.2f} seconds.'.format(
images_list = extract_images()
make_db(images_list[0:50000], 'data/train_lmdb')
make_db(images_list[50000:60000], 'data/test_lmdb')
untar('data/cifar-10-python.tar.gz')
extract_images()
make_db('data/extract/JPEGImages',
'data/extract/ImageSets/train.txt',
'data/train_lmdb')
make_db('data/extract/JPEGImages',
'data/extract/ImageSets/test.txt',
'data/test_lmdb')
<MSG> remove default inplace for densenet
<DFF> @@ -32,12 +32,6 @@ def wrapper_str(raw_str):
def extract_images():
prefix = 'data/cifar-10-batches-py'
- extract_path = 'data/extract'
- if not os.path.exists(os.path.join(extract_path, 'JPEGImages')):
- os.makedirs(os.path.join(extract_path, 'JPEGImages'))
- if not os.path.exists(os.path.join(extract_path, 'ImageSets')):
- os.makedirs(os.path.join(extract_path, 'ImageSets'))
-
batches = [os.path.join(prefix, 'data_batch_{}'.format(i)) for i in xrange(1, 6)]
batches += [os.path.join(prefix, 'test_batch')]
@@ -60,28 +54,13 @@ def extract_images():
label = dict[wrapper_str('labels')][item_idx]
im = im.transpose((1, 2, 0))
im = im[:, :, ::-1]
- filename = str(total_idx).zfill(ZFILL) + '.jpg'
- cv2.imwrite(os.path.join(extract_path, 'JPEGImages', filename), im)
- images_list.append((filename, str(label)))
+ images_list.append((im, str(label)))
total_idx += 1
- # make list
- with open(os.path.join(extract_path, 'ImageSets', 'train.txt'), 'w') as f:
- for i in xrange(50000):
- item = images_list[i][0] + ' ' + images_list[i][1]
- if i != 49999: item += '\n'
- f.write(item)
-
- with open(os.path.join(extract_path, 'ImageSets', 'test.txt'), 'w') as f:
- for i in xrange(50000, 60000):
- item = images_list[i][0] + ' ' + images_list[i][1]
- if i != 59999: item += '\n'
- f.write(item)
+ return images_list
-def make_db(image_path, label_path, database_path, pad=0):
- if os.path.isfile(label_path) is False:
- raise ValueError('input path is empty or wrong.')
+def make_db(images_list, database_path, pad=0):
if os.path.isdir(database_path) is True:
raise ValueError('the database path is already exist.')
@@ -90,42 +69,35 @@ def make_db(image_path, label_path, database_path, pad=0):
db = LMDB(max_commit=10000)
db.open(database_path, mode='w')
- total_line = sum(1 for line in open(label_path))
+ total_line = len(images_list)
count = 0
zfill_flag = '{0:0%d}' % (ZFILL)
- encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 95]
-
start_time = time.time()
- with open(label_path, 'r') as input_file:
- for record in input_file:
- count += 1
- if count % 10000 == 0:
- now_time = time.time()
- print('{0} / {1} in {2:.2f} sec'.format(
- count, total_line, now_time - start_time))
- db.commit()
-
- record = record.split()
- path = record[0]
- label = record[1]
-
- img = cv2.imread(os.path.join(image_path ,path))
- if pad > 0:
- pad_img = np.zeros((img.shape[0] + 2 * pad,
- img.shape[1] + 2 * pad, 3), dtype=np.uint8)
- pad_img[pad : pad + img.shape[0],
+ for record in images_list:
+ count += 1
+ if count % 10000 == 0:
+ now_time = time.time()
+ print('{0} / {1} in {2:.2f} sec'.format(
+ count, total_line, now_time - start_time))
+ db.commit()
+
+ img = record[0]
+ label = record[1]
+ if pad > 0:
+ pad_img = np.zeros((img.shape[0] + 2 * pad,
+ img.shape[1] + 2 * pad, 3), dtype=np.uint8)
+ pad_img[pad : pad + img.shape[0],
pad : pad + img.shape[1], :] = img
- img = pad_img
- result, imgencode = cv2.imencode('.jpg', img, encode_param)
+ img = pad_img
- datum = caffe_pb2.Datum()
- datum.height, datum.width, datum.channels = img.shape
- datum.label = int(label)
- datum.encoded = True
- datum.data = imgencode.tostring()
- db.put(zfill_flag.format(count - 1), datum.SerializeToString())
+ datum = caffe_pb2.Datum()
+ datum.height, datum.width, datum.channels = img.shape
+ datum.label = int(label)
+ datum.encoded = False
+ datum.data = img.tostring()
+ db.put(zfill_flag.format(count - 1), datum.SerializeToString())
now_time = time.time()
print('{0} / {1} in {2:.2f} sec'.format(count, total_line, now_time - start_time))
@@ -134,7 +106,6 @@ def make_db(image_path, label_path, database_path, pad=0):
db.commit()
db.close()
- shutil.copy(label_path, database_path + '/image_list.txt')
end_time = time.time()
print('{0} images have been stored in the database.'.format(total_line))
print('This task finishes within {0:.2f} seconds.'.format(
@@ -147,12 +118,8 @@ if __name__ == '__main__':
untar('data/cifar-10-python.tar.gz')
- extract_images()
+ images_list = extract_images()
- make_db('data/extract/JPEGImages',
- 'data/extract/ImageSets/train.txt',
- 'data/train_lmdb')
+ make_db(images_list[0:50000], 'data/train_lmdb')
- make_db('data/extract/JPEGImages',
- 'data/extract/ImageSets/test.txt',
- 'data/test_lmdb')
+ make_db(images_list[50000:60000], 'data/test_lmdb')
| 28 | remove default inplace for densenet | 61 | .py | py | bsd-2-clause | neopenx/Dragon |
1838 | <NME> hitch_test_cli.rst
<BEF> Hitch Command Line Interface
============================
.. note::
This documentation applies to the latest version of hitch and hitchtest.
hitch init
----------
If not already initialized, running::
$ hitch init
Will do the following:
* Create a .hitch directory
* Check for and ask you to install the following system packages: python-dev, python3-dev, libtool, automake, cmake (or equivalents).
* Check that the system packages specified in the file system.packages are installed. Attempt to install them if not.
* Installs all hitch plugin packages specified in hitchreqs.txt.
* Ask you to install any system packages required by plugins and not already installed.
If already initialized, hitch init will:
* Install any package versions specified in hitchreqs.txt but not installed in your environment.
* Verify again that all system packages required by system.packages and by your installed hitch plugins are installed.
You should run this command whenever hitchreqs.txt is changed (for example if you do a git pull).
.. note::
If you are running tests in a continuous integration environment, you should ensure that hitch init is run before
every test run and that the user it is run as has passwordless sudo.
You should also run hitch init on your development environment if somebody else changes your hitchreqs.txt.
$ hitch test . --extra '{"failfast":true}'
Note that you must specify the settings using JSON.
For more about settings and configuration see :doc:`/api/settings`
To run *all* of the tests in a directory (and its subdirectories), e.g.::
If you are using Jinja2 to parameterize your tests, you may want
to just display the YAML generated by it to check that your jinja2 is
generating the YAML correctly.
You can do that using the --yaml switch::
hitch test [ tests ] --tags
---------------------------
To only run tests that match specified tags, you can use the --tag switch to specify
the tests you want to run::
$ hitch test . --tags tag1,tag2,tag3
hitch test [ tests ] --settings
-------------------------------
You will at some point want to run the same tests in different environments. This is
what hitch settings are for.
To use a different settings file, you can use the --settings switch to specify
the settings file to use. For example, for a continuous integration environment you might run::
$ hitch test . --settings ci.settings
Whereas in your test driven development environment you might run::
$ hitch test . --settings tdd.settings
These commands will ensure that your tests are run with settings from both all.settings
and the YAML settings file specified on the command line.
You can also specify settings using JSON on the command line by using the --extra switch, e.g.::
$ hitch test . --settings ci.settings --extra '{"failfast":true}'
For more about settings and configuration see :doc:`/api/settings`
hitch test [ tests ] --yaml
---------------------------
If you are using Jinja2 to parameterize your tests, you may want to just display the YAML
generated by it to check that your jinja2 has no problems and is templating your tests correctly.
You can do that using the --yaml switch::
$ hitch test simple_reminder.test --yaml
hitch install
-------------
Once you have an environment set up, you can install hitch plugin packages to test different kinds
of things. For example::
$ hitch install hitchselenium
This will install the hitch plugin package responsible for testing web applications with the
firefox browser and selenium.
Hitch install will do the following when you run 'hitch install':
* Download the hitch plugin package from pypi and install it in your environment.
* Update hitchreqs.txt with the plugin package as well as any packages it depends upon.
* Attempt to install any *system packages* required by the plugin package. For instance, hitchselenium needs firefox and xvfb.
.. note::
Note that hitch plugin packages are just regular python packages downloaded from pypi. Running
this command is roughly equivalent to running pip install.
If you want to install any other python packages to use with your engine.py, you can use this command.
hitch uninstall
---------------
This lets you uninstall plugin packages from your environment. For example::
$ hitch uninstall hitchselenium
It will also update hitchreqs.txt afterwards.
hitch upgrade
-------------
See : :doc:`/howto/upgrade_hitch`
This command will upgrade all of the plugins installed in your hitch directory
and save all of their newly fixed versions to hitchreqs.txt.
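For example::
$ hitch upgrade
Commit the updated hitchreqs.txt afterwards so that other developers and your CI environment pick up the same versions the next time they run hitch init.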
hitch clean
-----------
This command removes the .hitch folder. It is a good idea to run this command and
run hitch init after if you suspect your .hitch directory might have been corrupted.
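For example, to rebuild a suspect environment from scratch::
$ hitch clean
$ hitch init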
hitch cleanpkg
--------------
This command removes the ~/.hitchpkg folder, which contains all of the hitch
packages downloaded, compiled and installed in order to run tests.
It is a good idea to run this command and re-run your tests after if you suspect
your ~/.hitchpkg may have been corrupted.
<MSG> DOCS : Updated CLI docs
<DFF> @@ -38,7 +38,7 @@ To specify individual settings on the command line, you can use the --extra swit
$ hitch test . --extra '{"failfast":true}'
-Note that you must specify the settings using JSON.
+Note that you must specify the settings using JSON via --extra and YAML in a .settings file.
For more about settings and configuration see :doc:`/api/settings`
@@ -48,7 +48,7 @@ Display Test YAML
If you are using Jinja2 to parameterize your tests, you may want
to just display the YAML generated by it to check that your jinja2 is
-generating the YAML correctly.
+generating the tests correctly.
You can do that using the --yaml switch::
| 2 | DOCS : Updated CLI docs | 2 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1839 | <NME> README.rst
<BEF> Hitch
=====
.. image:: https://badges.gitter.im/Join%20Chat.svg
:alt: Join the chat at https://gitter.im/hitchtest/hitch
:target: https://gitter.im/hitchtest/hitch?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
Hitch is a UNIX-based testing framework for writing integration tests with an emphasis on:
* Minimizing and eliminating `brittle tests <https://hitchtest.readthedocs.org/en/latest/glossary/brittle_tests.html>`_
* `Test readability <https://hitchtest.readthedocs.org/en/latest/glossary/test_readability.html>`_
* `Loose coupling <https://hitchtest.readthedocs.org/en/latest/glossary/loose_coupling.html>`_
* `Test realism <https://hitchtest.readthedocs.org/en/latest/glossary/test_realism.html>`_
* Tests that `fail fast <https://hitchtest.readthedocs.org/en/latest/glossary/fail_fast.html>`_ and `fail clearly <https://hitchtest.readthedocs.org/en/latest/glossary/fail_clearly.html>`_
Available plugins
-----------------
Hitch comes with a variety of plugins to aid you to realistically testing various
kinds of software, components and scenarios, including:
* `Python <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpython.html>`_ (includes Django and Celery service definitions)
* `Postgresql <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpostgres.html>`_
* `Redis <https://hitchtest.readthedocs.org/en/latest/plugins/hitchredis.html>`_
* `Web apps (using selenium) <https://hitchtest.readthedocs.org/en/latest/plugins/hitchselenium.html>`_
* Command line apps (using pexpect)
* `Cron <https://hitchtest.readthedocs.org/en/latest/plugins/hitchcron.html>`_
* MySQL
* RabbitMQ
* Elastic Search
`Plugin documentation <https://hitchtest.readthedocs.org/en/latest/plugins/>`_
Getting started
---------------
See the `quickstart tutorial <https://hitchtest.readthedocs.org/en/latest/quickstart/index.html>`_ on how to
get started testing an existing project.
Also check out `cookiecutter-django <https://github.com/pydanny/cookiecutter-django>`_
if you want to start a new Django project with tests.
Status
------
Hitch is currently in beta.
It is regression tested on:
* Operating Systems : Mac OS X Yosemite, Ubuntu, Debian, Fedora and Arch Linux.
* Python versions : 3.5.0, 3.4.3, 3.4.0 and 3.3.0 `(what about python 2?) <https://hitchtest.readthedocs.org/en/latest/faq/what_about_python2.html>`_
It does not currently work on Windows.
See `tested on <https://hitchtest.readthedocs.org/en/latest/misc/tested_on.html>`_ for more details on
how the framework is tested (with itself, naturally).
Contents of this project
------------------------
This project contains:
* The code for the bootstrapper script
* Documentation for the whole project (`hosted at readthedocs <https://hitchtest.readthedocs.org/en/latest/>`_)
* Code for other components is at: https://github.com/hitchtest/
* HitchPostgres_ - Simple wrapper around Postgres.
* HitchSelenium_ - Simple wrapper around Selenium.
* HitchRedis_ - Simple wrapper around Redis.
* HitchDjango_ - Simple wrapper around Django.
* HitchCelery_ - Simple wrapper around Celery.
More coming soon.
.. _HitchCron: https://github.com/hitchtest/hitchcron
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
.. _HitchRedis: https://github.com/hitchtest/hitchredis
.. _HitchDjango: https://github.com/hitchtest/hitchdjango
.. _HitchPostgres: https://github.com/hitchtest/hitchpostgres
.. _HitchCelery: https://github.com/hitchtest/hitchcelery
.. _pipsi: https://github.com/mitsuhiko/pipsi
<MSG> Remove links to obsolete projects and link to hitchpython instead.
This should clean up the README file.
<DFF> @@ -94,8 +94,7 @@ together, or not at all. Those are:
* HitchPostgres_ - Simple wrapper around Postgres.
* HitchSelenium_ - Simple wrapper around Selenium.
* HitchRedis_ - Simple wrapper around Redis.
-* HitchDjango_ - Simple wrapper around Django.
-* HitchCelery_ - Simple wrapper around Celery.
+* HitchPython_ - Simple wrapper around python programs including Django and Celery.
More coming soon.
@@ -116,8 +115,7 @@ See the roadmap_ for planned future features.
.. _HitchCron: https://github.com/hitchtest/hitchcron
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
.. _HitchRedis: https://github.com/hitchtest/hitchredis
-.. _HitchDjango: https://github.com/hitchtest/hitchdjango
.. _HitchPostgres: https://github.com/hitchtest/hitchpostgres
-.. _HitchCelery: https://github.com/hitchtest/hitchcelery
+.. _HitchPython: https://github.com/hitchtest/hitchpython
.. _pipsi: https://github.com/mitsuhiko/pipsi
| 2 | Remove links to obsolete projects and link to hitchpython instead. | 4 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1840 | <NME> RedisSessionDAO.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
import org.apache.shiro.session.mgt.eis.AbstractSessionDAO;
import org.crazycake.shiro.common.SessionInMemory;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.RedisSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.Serializable;
import java.util.*;
private static Logger logger = LoggerFactory.getLogger(RedisSessionDAO.class);
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
private RedisManager redisManager;
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
/**
* doReadSession be called about 10 times when login.
* Save Session in ThreadLocal to resolve this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal.
* The default value is 1000 milliseconds (1s).
* Most of time, you don't need to change it.
*
* You can turn it off by setting sessionInMemoryEnabled to false
*/
private static final long DEFAULT_SESSION_IN_MEMORY_TIMEOUT = 1000L;
private long sessionInMemoryTimeout = DEFAULT_SESSION_IN_MEMORY_TIMEOUT;
private static final boolean DEFAULT_SESSION_IN_MEMORY_ENABLED = true;
private boolean sessionInMemoryEnabled = DEFAULT_SESSION_IN_MEMORY_ENABLED;
private static ThreadLocal sessionsInThread = new ThreadLocal();
/**
* expire time in seconds.
* NOTE: Please make sure expire is longer than session.getTimeout(),
* otherwise you might need the issue that session in Redis got erased when the Session is still available
*
* DEFAULT_EXPIRE: use the timeout of session instead of setting it by yourself
* NO_EXPIRE: never expire
*/
private static final int DEFAULT_EXPIRE = -2;
private static final int NO_EXPIRE = -1;
private int expire = DEFAULT_EXPIRE;
private static final int MILLISECONDS_IN_A_SECOND = 1000;
/**
* redisManager used for communicate with Redis
*/
private IRedisManager redisManager;
/**
* Serializer of key
*/
private RedisSerializer keySerializer = new StringSerializer();
/**
* Serializer of value
*/
private RedisSerializer valueSerializer = new ObjectSerializer();
/**
* save/update session
* @param session
* @throws UnknownSessionException
*/
@Override
public void update(Session session) throws UnknownSessionException {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
this.saveSession(session);
if (this.sessionInMemoryEnabled) {
this.setSessionToThreadLocal(session.getId(), session);
}
}
private void saveSession(Session session) throws UnknownSessionException {
if (session == null || session.getId() == null) {
logger.error("session or session id is null");
throw new UnknownSessionException("session or session id is null");
}
byte[] key;
byte[] value;
try {
key = keySerializer.serialize(getRedisSessionKey(session.getId()));
value = valueSerializer.serialize(session);
} catch (SerializationException e) {
logger.error("serialize session error. session id=" + session.getId());
throw new UnknownSessionException(e);
}
if (expire == DEFAULT_EXPIRE) {
redisManager.set(key, value, (int) (session.getTimeout() / MILLISECONDS_IN_A_SECOND));
return;
}
if (expire != NO_EXPIRE && expire * MILLISECONDS_IN_A_SECOND < session.getTimeout()) {
logger.warn("Redis session expire time: "
+ (expire * MILLISECONDS_IN_A_SECOND)
+ " is less than Session timeout: "
+ session.getTimeout()
+ " . It may cause some problems.");
}
redisManager.set(key, value, expire);
}
/**
* delete session
* @param session
return this.keyPrefix + sessionId;
}
public RedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(RedisManager redisManager) {
this.redisManager = redisManager;
}
this.delSessionFromThreadLocal(session.getId());
}
try {
redisManager.del(keySerializer.serialize(getRedisSessionKey(session.getId())));
} catch (SerializationException e) {
logger.error("delete session error. session id=" + session.getId());
}
}
/**
* get all active sessions
* @return
*/
@Override
public Collection<Session> getActiveSessions() {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
Set<Session> sessions = new HashSet<Session>();
try {
Set<byte[]> keys = redisManager.keys(keySerializer.serialize(this.keyPrefix + "*"));
if (keys != null && keys.size() > 0) {
for (byte[] key:keys) {
Session s = (Session) valueSerializer.deserialize(redisManager.get(key));
sessions.add(s);
}
}
} catch (SerializationException e) {
logger.error("get active sessions error.");
}
return sessions;
}
@Override
protected Serializable doCreate(Session session) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (session == null) {
logger.error("session is null");
throw new UnknownSessionException("session is null");
}
Serializable sessionId = this.generateSessionId(session);
this.assignSessionId(session, sessionId);
this.saveSession(session);
return sessionId;
}
/**
 * Read the session from the ThreadLocal cache first, falling back to Redis when it is not cached.
* @param sessionId
* @return
*/
@Override
protected Session doReadSession(Serializable sessionId) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (sessionId == null) {
logger.warn("session id is null");
return null;
}
if (this.sessionInMemoryEnabled) {
Session session = getSessionFromThreadLocal(sessionId);
if (session != null) {
return session;
}
}
Session session = null;
try {
String sessionRedisKey = getRedisSessionKey(sessionId);
logger.debug("read session: " + sessionRedisKey + " from Redis");
session = (Session) valueSerializer.deserialize(redisManager.get(keySerializer.serialize(sessionRedisKey)));
if (this.sessionInMemoryEnabled) {
setSessionToThreadLocal(sessionId, session);
}
} catch (SerializationException e) {
logger.error("read session error. sessionId: " + sessionId);
}
return session;
}
private void setSessionToThreadLocal(Serializable sessionId, Session session) {
this.initSessionsInThread();
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
sessionMap.put(sessionId, this.createSessionInMemory(session));
}
private void delSessionFromThreadLocal(Serializable sessionId) {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
sessionMap.remove(sessionId);
}
private SessionInMemory createSessionInMemory(Session session) {
SessionInMemory sessionInMemory = new SessionInMemory();
sessionInMemory.setCreateTime(new Date());
sessionInMemory.setSession(session);
return sessionInMemory;
}
private void initSessionsInThread() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
sessionMap = new HashMap<Serializable, SessionInMemory>();
sessionsInThread.set(sessionMap);
}
}
private void removeExpiredSessionInMemory() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
Iterator<Serializable> it = sessionMap.keySet().iterator();
while (it.hasNext()) {
Serializable sessionId = it.next();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
it.remove();
continue;
}
long liveTime = getSessionInMemoryLiveTime(sessionInMemory);
if (liveTime > sessionInMemoryTimeout) {
it.remove();
}
}
if (sessionMap.size() == 0) {
sessionsInThread.remove();
}
}
private Session getSessionFromThreadLocal(Serializable sessionId) {
if (sessionsInThread.get() == null) {
return null;
}
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
return null;
}
logger.debug("read session from memory");
return sessionInMemory.getSession();
}
private long getSessionInMemoryLiveTime(SessionInMemory sessionInMemory) {
Date now = new Date();
return now.getTime() - sessionInMemory.getCreateTime().getTime();
}
private String getRedisSessionKey(Serializable sessionId) {
return this.keyPrefix + sessionId;
}
public IRedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
public String getKeyPrefix() {
return keyPrefix;
}
public void setKeyPrefix(String keyPrefix) {
this.keyPrefix = keyPrefix;
}
public RedisSerializer getKeySerializer() {
return keySerializer;
}
public void setKeySerializer(RedisSerializer keySerializer) {
this.keySerializer = keySerializer;
}
public RedisSerializer getValueSerializer() {
return valueSerializer;
}
public void setValueSerializer(RedisSerializer valueSerializer) {
this.valueSerializer = valueSerializer;
}
public long getSessionInMemoryTimeout() {
return sessionInMemoryTimeout;
}
public void setSessionInMemoryTimeout(long sessionInMemoryTimeout) {
this.sessionInMemoryTimeout = sessionInMemoryTimeout;
}
public int getExpire() {
return expire;
}
public void setExpire(int expire) {
this.expire = expire;
}
public boolean getSessionInMemoryEnabled() {
return sessionInMemoryEnabled;
}
public void setSessionInMemoryEnabled(boolean sessionInMemoryEnabled) {
this.sessionInMemoryEnabled = sessionInMemoryEnabled;
}
public static ThreadLocal getSessionsInThread() {
return sessionsInThread;
}
}
<MSG> Merge pull request #36 from xchendeveloper/redis-sentinel
add redis sentinel support
<DFF> @@ -16,7 +16,7 @@ public class RedisSessionDAO extends AbstractSessionDAO {
private static Logger logger = LoggerFactory.getLogger(RedisSessionDAO.class);
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
- private RedisManager redisManager;
+ private IRedisManager redisManager;
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
@@ -119,11 +119,11 @@ public class RedisSessionDAO extends AbstractSessionDAO {
return this.keyPrefix + sessionId;
}
- public RedisManager getRedisManager() {
+ public IRedisManager getRedisManager() {
return redisManager;
}
- public void setRedisManager(RedisManager redisManager) {
+ public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
| 3 | Merge pull request #36 from xchendeveloper/redis-sentinel | 3 | .java | java | mit | alexxiyang/shiro-redis |
1841 | <NME> README.md
<BEF> shiro-redis
===========
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
<MSG> Merge branch 'master' of https://github.com/alexxiyang/shiro-redis
<DFF> @@ -2,3 +2,28 @@ shiro-redis
===========
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
+
+How to use it?
+===========
+
+copy /bin/shiro-redis.jar to your classpath, such as 'webapp/WEB-INF/lib'
+
+edit in shiro.ini
+
+```properties
+#required
+cacheManager = org.yqr.shiro.RedisCacheManager
+#optional if you don't specify host the default value is 127.0.0.1
+cacheManager.host=127.0.0.1
+#optional , default value: 6379
+cacheManager.port=6379
+#optional, default value:0 .The expire time is in second
+cacheManager.expire=5
+#required
+securityManager.cacheManager = $cacheManager
+```
+
+If you found any bugs
+===========
+
+Please send email to [email protected]
| 25 | Merge branch 'master' of https://github.com/alexxiyang/shiro-redis | 0 | .md | md | mit | alexxiyang/shiro-redis |
1842 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
Shiro only provides support for EhCache and ConcurrentHashMap out of the box. Here is a Redis-based cache implementation that can be used with Shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
securityManager.cacheManager = $cacheManager
```
If you found any bugs
===========
<MSG> try to remember username
<DFF> @@ -23,6 +23,7 @@ cacheManager.expire=5
securityManager.cacheManager = $cacheManager
```
+
If you found any bugs
===========
| 1 | try to remember username | 0 | .md | md | mit | alexxiyang/shiro-redis |
1843 | <NME> README.rst
<BEF> Hitch
=====
.. image:: https://badges.gitter.im/Join%20Chat.svg
:alt: Join the chat at https://gitter.im/hitchtest/hitch
:target: https://gitter.im/hitchtest/hitch?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
Hitch is a UNIX-based testing framework for writing integration tests with an emphasis on:
* Minimizing and eliminating `brittle tests <https://hitchtest.readthedocs.org/en/latest/glossary/brittle_tests.html>`_
* `Test readability <https://hitchtest.readthedocs.org/en/latest/glossary/test_readability.html>`_
* `Loose coupling <https://hitchtest.readthedocs.org/en/latest/glossary/loose_coupling.html>`_
* `Test realism <https://hitchtest.readthedocs.org/en/latest/glossary/test_realism.html>`_
* Tests that `fail fast <https://hitchtest.readthedocs.org/en/latest/glossary/fail_fast.html>`_ and `fail clearly <https://hitchtest.readthedocs.org/en/latest/glossary/fail_clearly.html>`_
Available plugins
-----------------
Hitch comes with a variety of plugins to aid you to realistically testing various
kinds of software, components and scenarios, including:
* `Python <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpython.html>`_ (includes Django and Celery service definitions)
* `Postgresql <https://hitchtest.readthedocs.org/en/latest/plugins/hitchpostgres.html>`_
* `Redis <https://hitchtest.readthedocs.org/en/latest/plugins/hitchredis.html>`_
* `Web apps (using selenium) <https://hitchtest.readthedocs.org/en/latest/plugins/hitchselenium.html>`_
* Command line apps (using pexpect)
* `Cron <https://hitchtest.readthedocs.org/en/latest/plugins/hitchcron.html>`_
* MySQL
* RabbitMQ
* Elastic Search
`Plugin documentation <https://hitchtest.readthedocs.org/en/latest/plugins/>`_
Getting started
---------------
See the `quickstart tutorial <https://hitchtest.readthedocs.org/en/latest/quickstart/index.html>`_ on how to
get started testing an existing project.
Also check out `cookiecutter-django <https://github.com/pydanny/cookiecutter-django>`_
if you want to start a new Django project with tests.
Status
------
Hitch is currently in beta.
It is regression tested on:
* Operating Systems : Mac OS X Yosemite, Ubuntu, Debian, Fedora and Arch Linux.
* Python versions : 3.5.0, 3.4.3, 3.4.0 and 3.3.0 `(what about python 2?) <https://hitchtest.readthedocs.org/en/latest/faq/what_about_python2.html>`_
It does not currently work on Windows.
See `tested on <https://hitchtest.readthedocs.org/en/latest/misc/tested_on.html>`_ for more details on
how the framework is tested (with itself, naturally).
Contents of this project
------------------------
This project contains:
* The code for the bootstrapper script
* Documentation for the whole project (`hosted at readthedocs <https://hitchtest.readthedocs.org/en/latest/>`_)
* Code for other components is at: https://github.com/hitchtest/
* HitchPostgres_ - Simple wrapper around Postgres.
* HitchSelenium_ - Simple wrapper around Selenium.
* HitchRedis_ - Simple wrapper around Redis.
* HitchDjango_ - Simple wrapper around Django.
* HitchCelery_ - Simple wrapper around Celery.
More coming soon.
.. _HitchCron: https://github.com/hitchtest/hitchcron
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
.. _HitchRedis: https://github.com/hitchtest/hitchredis
.. _HitchDjango: https://github.com/hitchtest/hitchdjango
.. _HitchPostgres: https://github.com/hitchtest/hitchpostgres
.. _HitchCelery: https://github.com/hitchtest/hitchcelery
.. _pipsi: https://github.com/mitsuhiko/pipsi
<MSG> Merge branch 'master' of github.com:hitchtest/hitch
<DFF> @@ -94,8 +94,7 @@ together, or not at all. Those are:
* HitchPostgres_ - Simple wrapper around Postgres.
* HitchSelenium_ - Simple wrapper around Selenium.
* HitchRedis_ - Simple wrapper around Redis.
-* HitchDjango_ - Simple wrapper around Django.
-* HitchCelery_ - Simple wrapper around Celery.
+* HitchPython_ - Simple wrapper around python programs including Django and Celery.
More coming soon.
@@ -116,8 +115,7 @@ See the roadmap_ for planned future features.
.. _HitchCron: https://github.com/hitchtest/hitchcron
.. _HitchSelenium: https://github.com/hitchtest/hitchselenium
.. _HitchRedis: https://github.com/hitchtest/hitchredis
-.. _HitchDjango: https://github.com/hitchtest/hitchdjango
.. _HitchPostgres: https://github.com/hitchtest/hitchpostgres
-.. _HitchCelery: https://github.com/hitchtest/hitchcelery
+.. _HitchPython: https://github.com/hitchtest/hitchpython
.. _pipsi: https://github.com/mitsuhiko/pipsi
| 2 | Merge branch 'master' of github.com:hitchtest/hitch | 4 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1844 | <NME> RedisManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.Protocol;
public class RedisManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:6379";
private String host = DEFAULT_HOST;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// 0 - never expire
private int expire = 0;
private static JedisPool jedisPool = null;
public RedisManager(){
String[] hostAndPort = host.split(":");
jedisPool = new JedisPool(getJedisPoolConfig(), hostAndPort[0], Integer.parseInt(hostAndPort[1]), timeout, password, database);
}
}
}
*/
public void init(){
if(jedisPool == null){
jedisPool = new JedisPool(new JedisPoolConfig(), host, port);
}
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public JedisPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisPool jedisPool) {
this.jedisPool = jedisPool;
}
}
public void setExpire(int expire) {
this.expire = expire;
}
<MSG> add password and timeout support
<DFF> @@ -15,6 +15,11 @@ public class RedisManager {
// 0 - never expire
private int expire = 0;
+ //timeout for jedis try to connect to redis server, not expire time! In milliseconds
+ private int timeout = 0;
+
+ private String password = "";
+
private static JedisPool jedisPool = null;
public RedisManager(){
@@ -26,7 +31,14 @@ public class RedisManager {
*/
public void init(){
if(jedisPool == null){
- jedisPool = new JedisPool(new JedisPoolConfig(), host, port);
+ if(password != null && !"".equals(password)){
+ jedisPool = new JedisPool(new JedisPoolConfig(), host, port, timeout, password);
+ }else if(timeout != 0){
+ jedisPool = new JedisPool(new JedisPoolConfig(), host, port,timeout);
+ }else{
+ jedisPool = new JedisPool(new JedisPoolConfig(), host, port);
+ }
+
}
}
@@ -163,6 +175,22 @@ public class RedisManager {
public void setExpire(int expire) {
this.expire = expire;
}
+
+ public int getTimeout() {
+ return timeout;
+ }
+
+ public void setTimeout(int timeout) {
+ this.timeout = timeout;
+ }
+
+ public String getPassword() {
+ return password;
+ }
+
+ public void setPassword(String password) {
+ this.password = password;
+ }
| 29 | add password and timeout support | 1 | .java | java | mit | alexxiyang/shiro-redis |
1845 | <NME> op_kernel.cu <BEF> #ifdef WITH_CUDA #include <cmath> #include "core/context_cuda.h" #include "core/tensor.h" #include "utils/cuda_device.h" #include "utils/op_kernel.h" #include "utils/math_functions.h" namespace dragon { namespace kernel { template <typename T> __global__ void _Empty() { } template<> void Empty<float, CUDAContext>() { _Empty<float> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } template<> void Empty<float16, CUDAContext>() { _Empty<float16> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } /******************** activation.dropout ********************/ template<typename T> __global__ void _Dropout(const int count, const uint32_t thresh, const T scale, const T* x, const uint32_t* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] * (mask[idx] > thresh) * scale; } } template<> void Dropout<float, CUDAContext>(const int count, float prob, float scale, const float* x, uint32_t* mask, float* y, CUDAContext* context) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); math::RandomUniform<uint32_t, CUDAContext>(count, float(0), float(UINT_MAX), mask); _Dropout<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, x, mask, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _DropoutGrad(const int count, const uint32_t thresh, const T scale, const T* dy, const uint32_t* mask, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * (mask[idx] > thresh) * scale; } } template<> void DropoutGrad<float, CUDAContext>(const int count, float prob, float scale, const float* dy, const uint32_t* mask, float* dx) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); _DropoutGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, dy, mask, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.elu ********************/ template <typename T> __global__ void _Elu(const int count, const T* x, const float alpha, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : alpha * (std::exp(x[idx]) - 1); } } template<> void Elu<float, CUDAContext>(const int count, const float* x, const float alpha, float* y) { _Elu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, alpha, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _EluGrad(const int count, const T* dy, const T* y, const float alpha, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = y[idx] > 0 ? dy[idx] : dy[idx] * (y[idx] + alpha); } } template<> void EluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float alpha, float* dx) { _EluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, alpha, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.relu ********************/ template <typename T> __global__ void _Relu(const int count, const T* x, const float slope, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : x[idx] * slope; } } template<> void Relu<float, CUDAContext>(const int count, const float* x, const float slope, float* y) { _Relu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, slope, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ReluHalf(const int count, const half* x, const float slope, half* y) { const half kSlope = __float2half(slope); const half kZero = __float2half(0.0); CUDA_KERNEL_LOOP(idx, count) { #if __CUDA_ARCH__ >= 530 y[idx] = __hgt(x[idx], kZero) ? 
x[idx] : __hmul(x[idx], kSlope); #endif } } template<> void Relu<float16, CUDAContext>(const int count, const float16* x, const float slope, float16* y) { _ReluHalf<half> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(x), slope, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ReluGrad(const int count, const T* dy, const T* y, const float slope, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((y[idx] > 0) + slope * (y[idx] <= 0)); } } template<> void ReluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float slope, float* dx) { _ReluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, slope, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.sigmoid ********************/ template <typename T> __device__ T _SigmoidUnit(const T x) { return T(1) / (T(1) + exp(-x)); } template <typename T> __global__ void _Sigmoid(const int n, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, n) { y[idx] = _SigmoidUnit<T>(x[idx]); } } template<> void Sigmoid<float, CUDAContext>(const int count, const float* x, float* y) { _Sigmoid<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * y[idx] * (1 - y[idx]); } } template<> void SigmoidGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _SigmoidGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.softmax ********************/ template <typename T> __global__ void _SoftmaxMaxClass(const int outer_dim, const int classes, const int inner_dim, const T* x, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T max_val = -FLT_MAX; for (int c = 0; c < classes; c++) max_val = max(x[(o_idx * classes + c) * inner_dim + i_idx], max_val); scale[idx] = max_val; } } template <typename T> __global__ void _SoftmaxSubtract(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] -= scale[o_idx * inner_dim + i_idx]; } } template <typename T> __global__ void _SoftmaxExp(const int count, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = std::exp(y[idx]); } } template <typename T> __global__ void _SoftmaxSumClass(const int outer_dim, const int classes, const int inner_dim, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T sum = 0; for (int c = 0; c < classes; c++) sum += y[(o_idx * classes + c) * inner_dim + i_idx]; scale[idx] = sum; } } template <typename T> __global__ void _SoftmaxDiv(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] /= scale[o_idx * inner_dim + i_idx]; } } template<> void Softmax<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* x, float* scale, float* y, CUDAContext* context) { const int num_preds = inner_dim * outer_dim; _SoftmaxMaxClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, 
inner_dim, x, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); _SoftmaxExp<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, y); _SoftmaxSumClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, y, scale); _SoftmaxDiv<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SoftmaxDot(const int outer_dim, const int classes, const int inner_dim, const T* dy, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T dot = 0; for (int c = 0; c < classes; c++) dot += (y[(o_idx * classes + c) * inner_dim + i_idx] * dy[(o_idx * classes + c) * inner_dim + i_idx]); scale[idx] = dot; } } template<> void SoftmaxGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* dy, const float* y, float* scale, float* dx) { const int num_preds = inner_dim * outer_dim; _SoftmaxDot<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, dy, y, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, dx); math::Mul<float, CUDAContext>(count, dx, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.tanh ********************/ template <typename T> __global__ void _Tanh(const int count, const T* x, T* y) { CUDA_KERNEL_LOOP(i, count) { y[i] = std::tanh(x[i]); } } template<> void Tanh<float, CUDAContext>(const int count, const float* x, float* y) { _Tanh<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TanhGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(i, count) { dx[i] = dy[i] * (1 - y[i] * y[i]); } } template<> void TanhGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _TanhGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** arithmetic.bias_add ********************/ template <typename T> __global__ void _BiasAdd_NCHW(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int bias_idx = (idx / inner_dim) % dim; y[idx] += bias[bias_idx]; } } template <typename T> __global__ void _BiasAdd_NHWC(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] += bias[idx % dim]; } } template<> void BiasAdd<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const string& data_format, const float* bias, const float* bias_multiplier, float* y) { if (data_format == "NCHW") { _BiasAdd_NCHW<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else if (data_format == "NHWC") { _BiasAdd_NHWC<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else LOG(FATAL) << "Unknown data format: " << data_format; } /******************** arithmetic.clip ********************/ template <typename T> __global__ void _Clip(const int count, const T low, const T high, const T* x, T* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { mask[idx] = 1.0; if (x[idx] > high || x[idx] < low) mask[idx] = 0.0; y[idx] = x[idx] > high ? high : x[idx]; y[idx] = x[idx] < low ? 
low : x[idx]; } } template <> void Clip<float, CUDAContext>(const int count, const float low, const float high, const float* x, float* mask, float* y) { _Clip<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, low, high, x, mask, y); } /******************** arithmetic.scale ********************/ template <typename T> __global__ void _ScaleWithoutBias(const int n, const T* x, const T* scale, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx]; } } template <typename T> __global__ void _ScaleWithBias(const int n, const T* x, const T* scale, const T* bias, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx] + bias[scale_idx]; } } template<> void Scale<float, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float, CUDAContext>(); auto* Ydata = y->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); auto* Bdata = beta != nullptr ? beta->data<float, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, Bdata, scale_dim, inner_dim, Ydata); else _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, scale_dim, inner_dim, Ydata); } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ScaleWithoutBiasHalf(const int n, const half* x, const half* scale, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hmul(x[idx], scale[scale_idx]); #endif } } template <typename T> __global__ void _ScaleWithBiasHalf(const int n, const half* x, const half* scale, const half* bias, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hadd(__hmul(x[idx], scale[scale_idx]), bias[scale_idx]); #endif } } template<> void Scale<float16, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float16, CUDAContext>(); auto* Ydata = y->mutable_data<float16, CUDAContext>(); auto* Sdata = gamma->data<float16, CUDAContext>(); auto* Bdata = beta != nullptr ? 
beta->data<float16, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), reinterpret_cast<const half*>(Bdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); else _ScaleWithoutBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); } #endif template <> void ScaleGrad<float, CUDAContext>(const int axis, Tensor* dy, Tensor* gamma, Tensor* dx) { const int count = dx->count(); const int inner_dim = dx->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* dYdata = dy->data<float, CUDAContext>(); auto* dXdata = dx->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dYdata, Sdata, scale_dim, inner_dim, dXdata); } /******************** cast.float2half ********************/ #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _FloatToHalfKernel(const int count, const float* x, half* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = __float2half(x[idx]); } } template <> void Float2Half<float, CUDAContext>(const int count, const float* x, float16* y) { _FloatToHalfKernel<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif /******************** control_flow.compare ********************/ template <typename T> __global__ void _Equal(const int count, const T* a, const T* b, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = fabs(a[idx] - b[idx]) < FLT_EPSILON ? 1.0 : 0.0; } } template <> void Equal<float, CUDAContext>(const int count, const float* a, const float* b, float* y) { _Equal<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, a, b, y); CUDA_POST_KERNEL_CHECK; } /******************** loss.l1_loss ********************/ template <typename T> __global__ void _AbsGrad(const int count, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; // val > 0: 1 | val == 0: 0 | val < 0: -1 dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void AbsGrad<float, CUDAContext>(const int count, const float* dy, float* dx) { _AbsGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.sigmoid_cross_entropy ********************/ template <typename T> __global__ void _SigmoidCrossEntropy(const int count, const T* x, const T* target, T* loss, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { loss[idx] = 0.; valid[idx] = 0.; } else { loss[idx] = std::log(1 + std::exp(x[idx] - 2 * x[idx] * (x[idx] >= 0))) + x[idx] * ((x[idx] >= 0) - target[idx]); valid[idx] = 1.; } } } template <> void SigmoidCrossEntropy<float, CUDAContext>(const int count, const float* x, const float* target, float* loss, float* valid) { _SigmoidCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, loss, valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidCrossEntropyGrad(const int count, const T* x, const T* target, T* dx, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { dx[idx] = 0.; valid[idx] = 0.; } else { dx[idx] = 1. / (1. 
+ expf(-x[idx])) - target[idx]; valid[idx] = 1.; } } } template <> void SigmoidCrossEntropyGrad<float, CUDAContext>(const int count, const float* x, const float* target, float* dx, float* valid) { _SigmoidCrossEntropyGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, dx, valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.smooth_l1_loss ********************/ template <typename T> __global__ void _SmoothL1(const int count, const float sigma2, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const T val = x[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) y[idx] = 0.5 * val * val * sigma2; else y[idx] = abs_val - 0.5 / sigma2; } } template<> void SmoothL1<float, CUDAContext>(const int count, const float sigma2, const float* x, float* y) { _SmoothL1<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SmoothL1Grad(const int count, const float sigma2, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) dx[idx] = val * sigma2; // val > 0: 1 | val == 0: 0 | val < 0: -1 else dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void SmoothL1Grad<float, CUDAContext>(const int count, const float sigma2, const float* dy, float* dx) { _SmoothL1Grad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.softmax_cross_entropy ********************/ template <typename T> __global__ void _SoftmaxCrossEntropy(const int count, const T* prob, const T* target, T* loss) { CUDA_KERNEL_LOOP(idx, count) { loss[idx] = -target[idx] * log(max(prob[idx], FLT_MIN)); } } template <> void SoftmaxCrossEntropy<float, CUDAContext>(const int count, const float* prob, const float* target, float* loss) { _SoftmaxCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, prob, target, loss); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_cross_entropy ********************/ template <typename T> __global__ void _SparseSoftmaxCrossEntropy(const int count, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { loss[idx] = -log(max(prob[(o_idx * classes + label) * inner_dim + i_idx], FLT_MIN)); valid[idx] = 1; } } } template <> void SparseSoftmaxCrossEntropy<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? 
ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropy<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxCrossEntropyGrad(const int count, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { dx[(o_idx * classes + label) * inner_dim + i_idx] -= 1; valid[idx] = 1; } } } template<> void SparseSoftmaxCrossEntropyGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropyGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_focal_loss ********************/ template <typename T> __global__ void _SparseSoftmaxFocalScale(const int count, const float gamma, const T* prob, T* scale) { CUDA_KERNEL_LOOP(idx, count) { scale[idx] = std::pow((1.0f - prob[idx]), gamma); } } template <typename T> __global__ void _SparseSoftmaxFocalLoss(const int count, const float pos_alpha, const float neg_alpha, const int neg_id, T* scale, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; scale[t_] = label > neg_id ? pos_alpha * scale[t_] : neg_alpha * scale[t_]; loss[idx] = -scale[t_] * std::log(max(prob[t_], FLT_MIN)); valid[idx] = label > neg_id ? 1 : 0; } } } template <> void SparseSoftmaxFocalLoss<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float pos_alpha, const float neg_alpha, const float gamma, const int neg_id, const float* prob, const float* labels, float* scale, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? 
ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalScale<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, gamma, prob, scale); _SparseSoftmaxFocalLoss<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, pos_alpha, neg_alpha, neg_id, scale, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxFocalLossGrad(const int count, const float gamma, const int neg_id, const float eps, const T* scale, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; T grad = -gamma * (scale[t_] / max((1.0f - prob[t_]), eps)) * std::log(max(prob[t_], FLT_MIN)) * prob[t_] + scale[t_]; for (int c = 0; c < classes; c++) { const int i_ = (o_idx * classes + c) * inner_dim + i_idx; if (c == label) { dx[i_] = grad * (prob[t_] - 1); } else { dx[i_] = grad * prob[i_]; } } valid[idx] = label > neg_id ? 1 : 0; } } } template<> void SparseSoftmaxFocalLossGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float gamma, const int neg_id, const float eps, const float* scale, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? 
ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalLossGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, gamma, neg_id, eps, scale, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** misc.image_data ********************/ template <typename Tx, typename Ty> __global__ void _ImageData_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; Ty raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageData_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; Ty raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; float raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; float raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <> void ImageData<float, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << 
"Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ImageData<float, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Arange(const int count, const int start, const int step, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = start + idx * step; } } template<> void Arange<float, CUDAContext>(const int count, const int start, const int step, float* y) { _Arange<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } template<> void Arange<int, CUDAContext>(const int count, const int start, const int step, int* y) { _Arange<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Argmax(const int count, const int axis_dim, const int inner_dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { T max_val = -FLT_MAX; int max_idx = -1; for (int j = 0; j < axis_dim; ++j) { const T val = x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; if (val > max_val) { max_val = val; max_idx = j; } } y[idx] = max_idx; } } template<> void Argmax<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not supported with CUDA"; _Argmax<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmin ********************/ template <typename T> __global__ void _Argmin(const int count, const int axis_dim, const int inner_dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { T min_val = FLT_MAX; int min_idx = -1; for (int j = 0; j < axis_dim; ++j) { const T val = x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; if (val < min_val) { min_val = val; min_idx = j; } } y[idx] = min_idx; } } template<> void Argmin<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not 
supported with CUDA"; _Argmin<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.gather ********************/ template <typename T> __global__ void _CanonicalAxis(const int count, const int dim, T* y) { CUDA_KERNEL_LOOP(idx, count) { if (y[idx] < 0) y[idx] += dim; } } template <> void CanonicalAxis<int, CUDAContext>(const int count, const int dim, int* y) { _CanonicalAxis<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _Gather(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int slice_idx = idx % inner_dim; const int y_idx_offset = (idx / inner_dim) % y_slice_dim; const int x_idx_offset = indices[y_idx_offset]; const int x_idx = (outer_idx * x_slice_dim + x_idx_offset) * inner_dim + slice_idx; y[idx] = x[x_idx]; } } template <> void Gather<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const float* x, float* y, CUDAContext* context) { _Gather<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, x, y); CUDA_POST_KERNEL_CHECK; } template <> void Gather<int, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const int* x, int* y, CUDAContext* context) { _Gather<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _GatherGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int slice_idx = idx % inner_dim; const int y_idx_offset = (idx / inner_dim) % y_slice_dim; const int x_idx_offset = indices[y_idx_offset]; const int x_idx = (outer_idx * x_slice_dim + x_idx_offset) * inner_dim + slice_idx; atomicAdd(dx + x_idx, dy[idx]); } } template <> void GatherGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const float* dy, float* dx) { _GatherGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } template <> void GatherGrad<int, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const int* dy, int* dx) { _GatherGrad<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.concat ********************/ template <typename T> __global__ void _Concat(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + 
concat_offset) * inner_dim + concat_idx; y[y_idx] = x[idx]; } } template <> void Concat<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* x, float* y, CUDAContext* context) { _Concat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Concat<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* x, float16* y, CUDAContext* context) { _Concat<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ConcatGrad(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + concat_offset) * inner_dim + concat_idx; dx[idx] = dy[y_idx]; } } template <> void ConcatGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* dy, float* dx, CUDAContext* context) { _ConcatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ConcatGrad<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* dy, float16* dx, CUDAContext* context) { _ConcatGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.crop ********************/ template<typename T> __global__ void _Crop1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; y[idx] = x[(o * dim + ex_d + start) * inner_dim + i]; } } template<> void Crop1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const float* x, float* y, CUDAContext* context) { _Crop1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, x, y); CUDA_POST_KERNEL_CHECK; } template<typename T> __global__ void _Crop1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const int end, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int d = (idx / inner_dim) % dim; const int o = idx / inner_dim / dim; if (d >= start && d < end) dx[idx] = dy[(o * ex_dim + d - start) * inner_dim + i]; } } template<> void Crop1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const 
int end, const float* dy, float* dx, CUDAContext* context) { _Crop1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, end, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.pad ********************/ template <typename T> __global__ void _ConstPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T value, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = ex_d - pad_l; y[idx] = (d < 0 || d >= dim) ? value : x[(o * dim + d) * inner_dim + i]; } } template <> void ConstPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float value, const float* x, float* y, CUDAContext* context) { _ConstPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, value, x, y); } template <typename T> __global__ void _ReflectPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void ReflectPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _ReflectPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _EdgePad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void EdgePad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _EdgePad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _ConstPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % dim + pad_l; const int o = idx / inner_dim / dim; dx[idx] = dy[(o * ex_dim + ex_d) * inner_dim + i]; } } template <> void ConstPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _ConstPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _ReflectPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> 
void ReflectPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx) { _ReflectPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _EdgePad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> void EdgePad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _EdgePad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } /******************** ndarray.one_hot ********************/ template <typename T> __global__ void _OneHot(const int count, const int depth, const int on_value, const float* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { const int val = x[idx]; y[idx * depth + val] = on_value; } } template <> void OneHot<float, CUDAContext>(const int count, const int depth, const int on_value, const float* x, float* y) { _OneHot<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, depth, on_value, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.reduce ********************/ template <typename T> __global__ void _Sum(const int count, const int axis_dim, const int inner_dim, const T* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { T sum_val = 0.0; for (int j = 0; j < axis_dim; j++) sum_val += x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; y[idx] = sum_val; } } template<> void Sum<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float* x, float* y) { _Sum<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SumGrad(const int count, const int axis_dim, const int inner_dim, const T coeff, const T* dy, float* dx) { CUDA_KERNEL_LOOP(idx, count) { for (int j = 0; j < axis_dim; j++) dx[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim] = dy[idx] * coeff; } } template<> void SumGrad<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float coeff, const float* dy, float* dx) { _SumGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, coeff, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.repeat ********************/ template <typename T> __global__ void _Repeat(const int count, const int inner_dim, const int repeats, const int dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim / repeats) % dim; const int n = idx / inner_dim / repeats / dim; const int x_idx = (n * dim + b) * inner_dim + d; y[idx] = x[x_idx]; } } template <> void Repeat<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* x, float* y, CUDAContext* context) { _Repeat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _RepeatGrad(const int count, const int inner_dim, 
const int repeats, const int dim, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim) % dim; const int n = idx / inner_dim / dim; T gradient = 0; for (int t = 0; t < repeats; t++) gradient += dy[(((n * dim + b) * repeats) + t) * inner_dim + d]; dx[idx] = gradient; } } template <> void RepeatGrad<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* dy, float* dx, CUDAContext* context) { _RepeatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.slice ********************/ template <typename T> __global__ void _Slice(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; y[idx] = x[x_idx]; } } template <> void Slice<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* x, float* y, CUDAContext* context) { _Slice<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SliceGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; dx[x_idx] = dy[idx]; } } template <> void SliceGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* dy, float* dx, CUDAContext* context) { _SliceGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.tile ********************/ template <typename T> __global__ void _Tile(const int count, const int ex_inner_dim, const int multiple, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim / multiple; const int x_idx = n * ex_inner_dim + d; y[idx] = x[x_idx]; } } template <> void Tile<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* x, float* y, CUDAContext* context) { _Tile<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TileGrad(const int count, const int ex_inner_dim, const int multiple, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim; T gradient = 0; for (int t = 0; t < multiple; t++) gradient += dy[(n * multiple + t) * ex_inner_dim + d]; dx[idx] = gradient; } } template <> void TileGrad<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* dy, float* dx, 
CUDAContext* context) { _TileGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.transpose ********************/ template <typename T> __global__ void _Transpose(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } y[idx] = x[x_idx]; } } template <> void Transpose<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* x, float* y) { _Transpose<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Transpose<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* x, float16* y) { _Transpose<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _TransposeGrad(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } dx[x_idx] = dy[idx]; } } template <> void TransposeGrad<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* dy, float* dx) { _TransposeGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void TransposeGrad<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* dy, float16* dx) { _TransposeGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** recurrent.lstm_uint ********************/ template <typename T> __global__ void _LSTMUnitAct(const int count, const int channels, const int g_offset, const int x_offset, const T* x, T* x_act) { CUDA_KERNEL_LOOP(idx, count) { const int ch_4 = idx % x_offset; if (ch_4 < g_offset) x_act[idx] = _SigmoidUnit<float>(x[idx]); else x_act[idx] = std::tanh(x[idx]); } } template <typename T> __global__ void _LSTMUnit(const int count, const int channels, const int o_offset, const int g_offset, const int x_offset, const T* c_1, T* x_act, const T* cont, T* c, T* h) { CUDA_KERNEL_LOOP(idx, count) { const int n = idx / channels; const int ch = idx % channels; T* x_act_ = x_act + n * x_offset; const T i = x_act_[ch]; if (cont != nullptr && cont[n] != T(1)) x_act_[channels + ch] *= cont[n]; const T f = x_act_[channels + ch]; const T o = x_act_[o_offset + ch]; const T g = x_act_[g_offset + ch]; const T c_ = c[idx] = f * c_1[idx] + i * g; h[idx] = o * std::tanh(c_); } } template <> void LSTMUnit<float, CUDAContext>(const int count, const int num, const int channels, const float* c_1, const float* x, const float* cont, float* x_act, 
float* c, float* h) { const int o_offset = 2 * channels, g_offset = 3 * channels; const int x_offset = 4 * channels, y_count = count / 4; _LSTMUnitAct<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, g_offset, x_offset, x, x_act); _LSTMUnit<float> << <GET_BLOCKS(y_count), CUDA_NUM_THREADS >> >(y_count, channels, o_offset, g_offset, x_offset, c_1, x_act, cont, c, h); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _LSTMUnitGrad(const int count, const int channels, const int o_offset, const int g_offset, const int x_offset, const T* c_1, const T* x_act, const T* c, const T* dc, const T* dh, T* dc_1, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int n = idx / channels; const int ch = idx % channels; const T* x_act_ = x_act + n * x_offset; T* dx_ = dx + n * x_offset; const T i = x_act_[ch]; const T f = x_act_[channels + ch]; const T o = x_act_[o_offset + ch]; const T g = x_act_[g_offset + ch]; T* p_di = dx_ + ch; T* p_df = dx_ + channels + ch; T* p_do = dx_ + o_offset + ch; T* p_dg = dx_ + g_offset + ch; const T tanh_c_t = tanh(c[idx]); const T dc_1_sum_term = dh[idx] * o * (1 - tanh_c_t * tanh_c_t) + dc[idx]; dc_1[idx] = dc_1_sum_term * f; *p_di = dc_1_sum_term * g; *p_df = dc_1_sum_term * c_1[idx]; *p_do = dh[idx] * tanh_c_t; *p_dg = dc_1_sum_term * i; } } template <typename T> __global__ void _LSTMUnitGradAct(const int count, const int channels, const int g_offset, const int x_offset, const T* x_act, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int ch_4 = idx % x_offset; const T x_act_ = x_act[idx]; if (ch_4 < g_offset) dx[idx] = dx[idx] * x_act_ * (T(1) - x_act_); else dx[idx] = dx[idx] * (T(1) - x_act_ * x_act_); } } template <> void LSTMUnitGrad<float, CUDAContext>(const int count, const int num, const int channels, const float* c_1, const float* x_act, const float* c, const float* dc, const float* dh, float* dc_1, float* dx) { const int o_offset = 2 * channels, g_offset = 3 * channels; const int x_offset = 4 * channels, y_count = count / 4; _LSTMUnitGrad<float> << <GET_BLOCKS(y_count), CUDA_NUM_THREADS >> >(y_count, channels, o_offset, g_offset, x_offset, c_1, x_act, c, dc, dh, dc_1, dx); _LSTMUnitGradAct<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, g_offset, x_offset, x_act, dx); CUDA_POST_KERNEL_CHECK; } /******************** update.adam_update ********************/ template <typename T> __global__ void _AdamUpdate(const int n, T* g, T* m, T* v, const T beta1, const T beta2, const T eps, const T lr) { CUDA_KERNEL_LOOP(i, n) { T gi = g[i]; T mi = m[i] = m[i] * beta1 + gi * (1 - beta1); T vi = v[i] = v[i] * beta2 + gi * gi * (1 - beta2); g[i] = lr * mi / (sqrt(vi) + eps); } } template <> void AdamUpdate<float, CUDAContext>(Tensor* x, Tensor* m, Tensor* v, Tensor* t, const float beta1, const float beta2, const float eps, const float lr) { TIndex count = x->count(); auto* Xdata = x->mutable_data<float, CUDAContext>(); auto* Mdata = m->mutable_data<float, CUDAContext>(); auto* Vdata = v->mutable_data<float, CUDAContext>(); _AdamUpdate<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Mdata, Vdata, beta1, beta2, eps, lr); CUDA_POST_KERNEL_CHECK; } /******************** update.nesterov_update ********************/ template <typename T> __global__ void _NesterovUpdate(const int n, T* g, T* h, const T momentum, const T lr) { CUDA_KERNEL_LOOP(i, n) { T hi = h[i]; T hi_new = h[i] = momentum * hi + lr * g[i]; g[i] = (1 + momentum) * hi_new - momentum * hi; } } template <> void NesterovUpdate<float, CUDAContext>(const int 
count, float* x, float* h, Tensor* t, const float momentum, const float lr, CUDAContext* ctx) { _NesterovUpdate<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, h, momentum, lr); CUDA_POST_KERNEL_CHECK; } /******************** update.rmsprop_update ********************/ template <typename T> __global__ void _RMSPropUpdate(const int n, T* g, T* h, const
| 396 | Add SELU & PReLU support | 151 | .cu | cu | bsd-2-clause | neopenx/Dragon |
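The entry above ("Add SELU & PReLU support") has its <MSG> and <DFF> sections truncated out of the dump, so the actual change is not shown. Purely as a hedged illustration of what a SELU forward kernel in this file's style could look like (the _SElu/SElu names are hypothetical; 1.0507 and 1.7581 are the standard SELU lambda and lambda*alpha constants; CUDA_KERNEL_LOOP, GET_BLOCKS and CUDA_NUM_THREADS are the macros already used throughout op_kernel.cu), a sketch might be:

/******************** activation.selu (illustrative sketch, not the commit's code) ********************/

// Hypothetical sketch: y = lambda * x for x > 0, y = lambda * alpha * (exp(x) - 1) otherwise,
// written in the same style as the activation kernels dumped above.
template <typename T>
__global__ void _SElu(const int count, const T* x, T* y) {
    CUDA_KERNEL_LOOP(idx, count) {
        y[idx] = x[idx] > 0 ? 1.0507 * x[idx]
                            : 1.7581 * (std::exp(x[idx]) - 1);
    }
}

template<> void SElu<float, CUDAContext>(const int count, const float* x, float* y) {
    _SElu<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y);
    CUDA_POST_KERNEL_CHECK;
}

A PReLU kernel would follow the same pattern, but would read a learned slope (shared or per-channel) instead of the fixed constants above.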
1846 | <NME> op_kernel.cu <BEF> #ifdef WITH_CUDA #include <cmath> #include "core/context_cuda.h" #include "core/tensor.h" #include "utils/cuda_device.h" #include "utils/op_kernel.h" #include "utils/math_functions.h" namespace dragon { namespace kernel { template <typename T> __global__ void _Empty() { } template<> void Empty<float, CUDAContext>() { _Empty<float> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } template<> void Empty<float16, CUDAContext>() { _Empty<float16> << <1, 1 >> >(); CUDA_POST_KERNEL_CHECK; } /******************** activation.dropout ********************/ template<typename T> __global__ void _Dropout(const int count, const uint32_t thresh, const T scale, const T* x, const uint32_t* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] * (mask[idx] > thresh) * scale; } } template<> void Dropout<float, CUDAContext>(const int count, float prob, float scale, const float* x, uint32_t* mask, float* y, CUDAContext* context) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); math::RandomUniform<uint32_t, CUDAContext>(count, float(0), float(UINT_MAX), mask); _Dropout<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, x, mask, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _DropoutGrad(const int count, const uint32_t thresh, const T scale, const T* dy, const uint32_t* mask, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * (mask[idx] > thresh) * scale; } } template<> void DropoutGrad<float, CUDAContext>(const int count, float prob, float scale, const float* dy, const uint32_t* mask, float* dx) { uint32_t thresh = static_cast<uint32_t>(UINT_MAX * prob); _DropoutGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, thresh, scale, dy, mask, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.elu ********************/ template <typename T> __global__ void _Elu(const int count, const T* x, const float alpha, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : alpha * (std::exp(x[idx]) - 1); } } template<> void Elu<float, CUDAContext>(const int count, const float* x, const float alpha, float* y) { _Elu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, alpha, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _EluGrad(const int count, const T* dy, const T* y, const float alpha, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = y[idx] > 0 ? dy[idx] : dy[idx] * (y[idx] + alpha); } } template<> void EluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float alpha, float* dx) { _EluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, alpha, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.relu ********************/ template <typename T> __global__ void _Relu(const int count, const T* x, const float slope, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = x[idx] > 0 ? x[idx] : x[idx] * slope; } } template<> void Relu<float, CUDAContext>(const int count, const float* x, const float slope, float* y) { _Relu<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, slope, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ReluHalf(const int count, const half* x, const float slope, half* y) { const half kSlope = __float2half(slope); const half kZero = __float2half(0.0); CUDA_KERNEL_LOOP(idx, count) { #if __CUDA_ARCH__ >= 530 y[idx] = __hgt(x[idx], kZero) ? 
x[idx] : __hmul(x[idx], kSlope); #endif } } template<> void Relu<float16, CUDAContext>(const int count, const float16* x, const float slope, float16* y) { _ReluHalf<half> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(x), slope, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ReluGrad(const int count, const T* dy, const T* y, const float slope, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * ((y[idx] > 0) + slope * (y[idx] <= 0)); } } template<> void ReluGrad<float, CUDAContext>(const int count, const float* dy, const float* y, const float slope, float* dx) { _ReluGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, slope, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.sigmoid ********************/ template <typename T> __device__ T _SigmoidUnit(const T x) { return T(1) / (T(1) + exp(-x)); } template <typename T> __global__ void _Sigmoid(const int n, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, n) { y[idx] = _SigmoidUnit<T>(x[idx]); } } template<> void Sigmoid<float, CUDAContext>(const int count, const float* x, float* y) { _Sigmoid<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(idx, count) { dx[idx] = dy[idx] * y[idx] * (1 - y[idx]); } } template<> void SigmoidGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _SigmoidGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.softmax ********************/ template <typename T> __global__ void _SoftmaxMaxClass(const int outer_dim, const int classes, const int inner_dim, const T* x, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T max_val = -FLT_MAX; for (int c = 0; c < classes; c++) max_val = max(x[(o_idx * classes + c) * inner_dim + i_idx], max_val); scale[idx] = max_val; } } template <typename T> __global__ void _SoftmaxSubtract(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] -= scale[o_idx * inner_dim + i_idx]; } } template <typename T> __global__ void _SoftmaxExp(const int count, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = std::exp(y[idx]); } } template <typename T> __global__ void _SoftmaxSumClass(const int outer_dim, const int classes, const int inner_dim, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T sum = 0; for (int c = 0; c < classes; c++) sum += y[(o_idx * classes + c) * inner_dim + i_idx]; scale[idx] = sum; } } template <typename T> __global__ void _SoftmaxDiv(const int count, const int classes, const int inner_dim, const T* scale, T* y) { CUDA_KERNEL_LOOP(idx, count) { int o_idx = idx / inner_dim / classes; int i_idx = idx % inner_dim; y[idx] /= scale[o_idx * inner_dim + i_idx]; } } template<> void Softmax<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* x, float* scale, float* y, CUDAContext* context) { const int num_preds = inner_dim * outer_dim; _SoftmaxMaxClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, 
inner_dim, x, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); _SoftmaxExp<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, y); _SoftmaxSumClass<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, y, scale); _SoftmaxDiv<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SoftmaxDot(const int outer_dim, const int classes, const int inner_dim, const T* dy, const T* y, T* scale) { CUDA_KERNEL_LOOP(idx, outer_dim * inner_dim) { int o_idx = idx / inner_dim; int i_idx = idx % inner_dim; T dot = 0; for (int c = 0; c < classes; c++) dot += (y[(o_idx * classes + c) * inner_dim + i_idx] * dy[(o_idx * classes + c) * inner_dim + i_idx]); scale[idx] = dot; } } template<> void SoftmaxGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* sum_multiplier, const float* dy, const float* y, float* scale, float* dx) { const int num_preds = inner_dim * outer_dim; _SoftmaxDot<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(outer_dim, classes, inner_dim, dy, y, scale); _SoftmaxSubtract<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, classes, inner_dim, scale, dx); math::Mul<float, CUDAContext>(count, dx, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** activation.tanh ********************/ template <typename T> __global__ void _Tanh(const int count, const T* x, T* y) { CUDA_KERNEL_LOOP(i, count) { y[i] = std::tanh(x[i]); } } template<> void Tanh<float, CUDAContext>(const int count, const float* x, float* y) { _Tanh<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TanhGrad(const int count, const T* dy, const T* y, T* dx) { CUDA_KERNEL_LOOP(i, count) { dx[i] = dy[i] * (1 - y[i] * y[i]); } } template<> void TanhGrad<float, CUDAContext>(const int count, const float* dy, const float* y, float* dx) { _TanhGrad<float> << < GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, y, dx); CUDA_POST_KERNEL_CHECK; } /******************** arithmetic.bias_add ********************/ template <typename T> __global__ void _BiasAdd_NCHW(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int bias_idx = (idx / inner_dim) % dim; y[idx] += bias[bias_idx]; } } template <typename T> __global__ void _BiasAdd_NHWC(const int count, const int dim, const int inner_dim, const T* bias, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] += bias[idx % dim]; } } template<> void BiasAdd<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const string& data_format, const float* bias, const float* bias_multiplier, float* y) { if (data_format == "NCHW") { _BiasAdd_NCHW<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else if (data_format == "NHWC") { _BiasAdd_NHWC<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, inner_dim, bias, y); } else LOG(FATAL) << "Unknown data format: " << data_format; } /******************** arithmetic.clip ********************/ template <typename T> __global__ void _Clip(const int count, const T low, const T high, const T* x, T* mask, T* y) { CUDA_KERNEL_LOOP(idx, count) { mask[idx] = 1.0; if (x[idx] > high || x[idx] < low) mask[idx] = 0.0; y[idx] = x[idx] > high ? high : x[idx]; y[idx] = x[idx] < low ? 
low : x[idx]; } } template <> void Clip<float, CUDAContext>(const int count, const float low, const float high, const float* x, float* mask, float* y) { _Clip<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, low, high, x, mask, y); } /******************** arithmetic.scale ********************/ template <typename T> __global__ void _ScaleWithoutBias(const int n, const T* x, const T* scale, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx]; } } template <typename T> __global__ void _ScaleWithBias(const int n, const T* x, const T* scale, const T* bias, const int scale_dim, const int inner_dim, T* y) { CUDA_KERNEL_LOOP(idx, n) { const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = x[idx] * scale[scale_idx] + bias[scale_idx]; } } template<> void Scale<float, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float, CUDAContext>(); auto* Ydata = y->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); auto* Bdata = beta != nullptr ? beta->data<float, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, Bdata, scale_dim, inner_dim, Ydata); else _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Sdata, scale_dim, inner_dim, Ydata); } #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _ScaleWithoutBiasHalf(const int n, const half* x, const half* scale, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hmul(x[idx], scale[scale_idx]); #endif } } template <typename T> __global__ void _ScaleWithBiasHalf(const int n, const half* x, const half* scale, const half* bias, const int scale_dim, const int inner_dim, half* y) { CUDA_KERNEL_LOOP(idx, n) { #if __CUDA_ARCH__ >= 530 const int scale_idx = (idx / inner_dim) % scale_dim; y[idx] = __hadd(__hmul(x[idx], scale[scale_idx]), bias[scale_idx]); #endif } } template<> void Scale<float16, CUDAContext>(const int axis, Tensor* x, Tensor* gamma, Tensor* beta, Tensor* BMul, Tensor* y) { const int count = x->count(); const int inner_dim = x->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* Xdata = x->data<float16, CUDAContext>(); auto* Ydata = y->mutable_data<float16, CUDAContext>(); auto* Sdata = gamma->data<float16, CUDAContext>(); auto* Bdata = beta != nullptr ? 
beta->data<float16, CUDAContext>() : nullptr; if (Bdata != nullptr) _ScaleWithBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), reinterpret_cast<const half*>(Bdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); else _ScaleWithoutBiasHalf<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, reinterpret_cast<const half*>(Xdata), reinterpret_cast<const half*>(Sdata), scale_dim, inner_dim, reinterpret_cast<half*>(Ydata)); } #endif template <> void ScaleGrad<float, CUDAContext>(const int axis, Tensor* dy, Tensor* gamma, Tensor* dx) { const int count = dx->count(); const int inner_dim = dx->count(axis + gamma->ndim()); const int scale_dim = gamma->count(); auto* dYdata = dy->data<float, CUDAContext>(); auto* dXdata = dx->mutable_data<float, CUDAContext>(); auto* Sdata = gamma->data<float, CUDAContext>(); _ScaleWithoutBias<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dYdata, Sdata, scale_dim, inner_dim, dXdata); } /******************** cast.float2half ********************/ #ifdef WITH_CUDA_FP16 template <typename T> __global__ void _FloatToHalfKernel(const int count, const float* x, half* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = __float2half(x[idx]); } } template <> void Float2Half<float, CUDAContext>(const int count, const float* x, float16* y) { _FloatToHalfKernel<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif /******************** control_flow.compare ********************/ template <typename T> __global__ void _Equal(const int count, const T* a, const T* b, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = fabs(a[idx] - b[idx]) < FLT_EPSILON ? 1.0 : 0.0; } } template <> void Equal<float, CUDAContext>(const int count, const float* a, const float* b, float* y) { _Equal<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, a, b, y); CUDA_POST_KERNEL_CHECK; } /******************** loss.l1_loss ********************/ template <typename T> __global__ void _AbsGrad(const int count, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; // val > 0: 1 | val == 0: 0 | val < 0: -1 dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void AbsGrad<float, CUDAContext>(const int count, const float* dy, float* dx) { _AbsGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.sigmoid_cross_entropy ********************/ template <typename T> __global__ void _SigmoidCrossEntropy(const int count, const T* x, const T* target, T* loss, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { loss[idx] = 0.; valid[idx] = 0.; } else { loss[idx] = std::log(1 + std::exp(x[idx] - 2 * x[idx] * (x[idx] >= 0))) + x[idx] * ((x[idx] >= 0) - target[idx]); valid[idx] = 1.; } } } template <> void SigmoidCrossEntropy<float, CUDAContext>(const int count, const float* x, const float* target, float* loss, float* valid) { _SigmoidCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, loss, valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SigmoidCrossEntropyGrad(const int count, const T* x, const T* target, T* dx, T* valid) { CUDA_KERNEL_LOOP(idx, count) { if (target[idx] < 0) { dx[idx] = 0.; valid[idx] = 0.; } else { dx[idx] = 1. / (1. 
+ expf(-x[idx])) - target[idx]; valid[idx] = 1.; } } } template <> void SigmoidCrossEntropyGrad<float, CUDAContext>(const int count, const float* x, const float* target, float* dx, float* valid) { _SigmoidCrossEntropyGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, target, dx, valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.smooth_l1_loss ********************/ template <typename T> __global__ void _SmoothL1(const int count, const float sigma2, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const T val = x[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) y[idx] = 0.5 * val * val * sigma2; else y[idx] = abs_val - 0.5 / sigma2; } } template<> void SmoothL1<float, CUDAContext>(const int count, const float sigma2, const float* x, float* y) { _SmoothL1<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SmoothL1Grad(const int count, const float sigma2, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const T val = dy[idx]; const T abs_val = abs(val); if (abs_val < 1.0 / sigma2) dx[idx] = val * sigma2; // val > 0: 1 | val == 0: 0 | val < 0: -1 else dx[idx] = (val > T(0)) - (val < T(0)); } } template<> void SmoothL1Grad<float, CUDAContext>(const int count, const float sigma2, const float* dy, float* dx) { _SmoothL1Grad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, sigma2, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** loss.softmax_cross_entropy ********************/ template <typename T> __global__ void _SoftmaxCrossEntropy(const int count, const T* prob, const T* target, T* loss) { CUDA_KERNEL_LOOP(idx, count) { loss[idx] = -target[idx] * log(max(prob[idx], FLT_MIN)); } } template <> void SoftmaxCrossEntropy<float, CUDAContext>(const int count, const float* prob, const float* target, float* loss) { _SoftmaxCrossEntropy<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, prob, target, loss); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_cross_entropy ********************/ template <typename T> __global__ void _SparseSoftmaxCrossEntropy(const int count, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { loss[idx] = -log(max(prob[(o_idx * classes + label) * inner_dim + i_idx], FLT_MIN)); valid[idx] = 1; } } } template <> void SparseSoftmaxCrossEntropy<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? 
ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropy<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxCrossEntropyGrad(const int count, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { dx[(o_idx * classes + label) * inner_dim + i_idx] -= 1; valid[idx] = 1; } } } template<> void SparseSoftmaxCrossEntropyGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxCrossEntropyGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** loss.sparse_softmax_focal_loss ********************/ template <typename T> __global__ void _SparseSoftmaxFocalScale(const int count, const float gamma, const T* prob, T* scale) { CUDA_KERNEL_LOOP(idx, count) { scale[idx] = std::pow((1.0f - prob[idx]), gamma); } } template <typename T> __global__ void _SparseSoftmaxFocalLoss(const int count, const float pos_alpha, const float neg_alpha, const int neg_id, T* scale, const T* prob, const T* labels, T* loss, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) { if (label == ignores[k]) { loss[idx] = valid[idx] = 0; break; } } if (k == ignore_num) { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; scale[t_] = label > neg_id ? pos_alpha * scale[t_] : neg_alpha * scale[t_]; loss[idx] = -scale[t_] * std::log(max(prob[t_], FLT_MIN)); valid[idx] = label > neg_id ? 1 : 0; } } } template <> void SparseSoftmaxFocalLoss<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float pos_alpha, const float neg_alpha, const float gamma, const int neg_id, const float* prob, const float* labels, float* scale, float* loss, float* valid, Tensor* ignore) { const int* ignores = ignore->count() > 0 ? 
ignore->data<int, CUDAContext>() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalScale<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, gamma, prob, scale); _SparseSoftmaxFocalLoss<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, pos_alpha, neg_alpha, neg_id, scale, prob, labels, loss, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SparseSoftmaxFocalLossGrad(const int count, const float gamma, const int neg_id, const float eps, const T* scale, const T* prob, const T* labels, T* dx, const int classes, const int inner_dim, const int* ignores, const int ignore_num, T* valid) { CUDA_KERNEL_LOOP(idx, count) { const int o_idx = idx / inner_dim; const int i_idx = idx % inner_dim; const int label = labels[o_idx * inner_dim + i_idx]; int k; for (k = 0; k < ignore_num; k++) if (label == ignores[k]) break; if (k != ignore_num) { for (int c = 0; c < classes; c++) dx[(o_idx * classes + c) * inner_dim + i_idx] = 0; valid[idx] = 0; } else { const int t_ = (o_idx * classes + label) * inner_dim + i_idx; T grad = -gamma * (scale[t_] / max((1.0f - prob[t_]), eps)) * std::log(max(prob[t_], FLT_MIN)) * prob[t_] + scale[t_]; for (int c = 0; c < classes; c++) { const int i_ = (o_idx * classes + c) * inner_dim + i_idx; if (c == label) { dx[i_] = grad * (prob[t_] - 1); } else { dx[i_] = grad * prob[i_]; } } valid[idx] = label > neg_id ? 1 : 0; } } } template<> void SparseSoftmaxFocalLossGrad<float, CUDAContext>(const int count, const int classes, const int outer_dim, const int inner_dim, const float gamma, const int neg_id, const float eps, const float* scale, const float* prob, const float* labels, float* valid, Tensor* ignore, float* dXdata) { const int* ignores = ignore->count() > 0 ? 
ignore->data <int, CUDAContext >() : nullptr; const int num_preds = outer_dim * inner_dim; _SparseSoftmaxFocalLossGrad<float> << <GET_BLOCKS(num_preds), CUDA_NUM_THREADS >> >(num_preds, gamma, neg_id, eps, scale, prob, labels, dXdata, classes, inner_dim, ignores, ignore->count(), valid); CUDA_POST_KERNEL_CHECK; } /******************** misc.image_data ********************/ template <typename Tx, typename Ty> __global__ void _ImageData_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; Ty raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageData_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; Ty raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = raw_value; } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NCHW(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int w = idx % W; const int h = (idx / W) % H; const int c = (idx / W / H) % C; const int n = idx / W / H / C; float raw_value = x[((n * H + h) * W + w) * C + c]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <typename Tx, typename Ty> __global__ void _ImageDataHalf_NHWC(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const Tx* x, Ty* y) { CUDA_KERNEL_LOOP(idx, count) { const int c = idx % C; float raw_value = x[idx]; if (mean_values != nullptr) raw_value -= mean_values[c]; if (std_values != nullptr) raw_value /= std_values[c]; y[idx] = __float2half(raw_value); } } template <> void ImageData<float, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<float, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float* y) { if (data_format == "NCHW") { _ImageData_NCHW<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else if (data_format == "NHWC") { _ImageData_NHWC<uint8_t, float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, y); } else LOG(FATAL) << 
"Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ImageData<float, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const float* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<float, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } template <> void ImageData<uint8_t, float16, CUDAContext>(const int count, const int N, const int C, const int H, const int W, const float* mean_values, const float* std_values, const string& data_format, const uint8_t* x, float16* y) { if (data_format == "NCHW") { _ImageDataHalf_NCHW<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else if (data_format == "NHWC") { _ImageDataHalf_NHWC<uint8_t, half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, N, C, H, W, mean_values, std_values, x, reinterpret_cast<half*>(y)); } else LOG(FATAL) << "Unknown data format: " << data_format; CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Arange(const int count, const int start, const int step, T* y) { CUDA_KERNEL_LOOP(idx, count) { y[idx] = start + idx * step; } } template<> void Arange<float, CUDAContext>(const int count, const int start, const int step, float* y) { _Arange<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } template<> void Arange<int, CUDAContext>(const int count, const int start, const int step, int* y) { _Arange<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, start, step, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmax ********************/ template <typename T> __global__ void _Argmax(const int count, const int axis_dim, const int inner_dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { T max_val = -FLT_MAX; int max_idx = -1; for (int j = 0; j < axis_dim; ++j) { const T val = x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; if (val > max_val) { max_val = val; max_idx = j; } } y[idx] = max_idx; } } template<> void Argmax<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not supported with CUDA"; _Argmax<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.argmin ********************/ template <typename T> __global__ void _Argmin(const int count, const int axis_dim, const int inner_dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { T min_val = FLT_MAX; int min_idx = -1; for (int j = 0; j < axis_dim; ++j) { const T val = x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; if (val < min_val) { min_val = val; min_idx = j; } } y[idx] = min_idx; } } template<> void Argmin<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const int top_k, const float* x, float* y) { CHECK_EQ(top_k, 1) << "top_k > 1 is not 
supported with CUDA"; _Argmin<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.gather ********************/ template <typename T> __global__ void _CanonicalAxis(const int count, const int dim, T* y) { CUDA_KERNEL_LOOP(idx, count) { if (y[idx] < 0) y[idx] += dim; } } template <> void CanonicalAxis<int, CUDAContext>(const int count, const int dim, int* y) { _CanonicalAxis<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _Gather(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int slice_idx = idx % inner_dim; const int y_idx_offset = (idx / inner_dim) % y_slice_dim; const int x_idx_offset = indices[y_idx_offset]; const int x_idx = (outer_idx * x_slice_dim + x_idx_offset) * inner_dim + slice_idx; y[idx] = x[x_idx]; } } template <> void Gather<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const float* x, float* y, CUDAContext* context) { _Gather<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, x, y); CUDA_POST_KERNEL_CHECK; } template <> void Gather<int, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const int* x, int* y, CUDAContext* context) { _Gather<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _GatherGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int outer_idx = idx / inner_dim / y_slice_dim; const int slice_idx = idx % inner_dim; const int y_idx_offset = (idx / inner_dim) % y_slice_dim; const int x_idx_offset = indices[y_idx_offset]; const int x_idx = (outer_idx * x_slice_dim + x_idx_offset) * inner_dim + slice_idx; atomicAdd(dx + x_idx, dy[idx]); } } template <> void GatherGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const float* dy, float* dx) { _GatherGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } template <> void GatherGrad<int, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int* indices, const int* dy, int* dx) { _GatherGrad<int> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, indices, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.concat ********************/ template <typename T> __global__ void _Concat(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + 
concat_offset) * inner_dim + concat_idx; y[y_idx] = x[idx]; } } template <> void Concat<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* x, float* y, CUDAContext* context) { _Concat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Concat<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* x, float16* y, CUDAContext* context) { _Concat<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _ConcatGrad(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = x_concat_dim * inner_dim; const int outer_idx = idx / tmp; const int concat_idx = idx % tmp; const int y_idx = (outer_idx * y_concat_dim + concat_offset) * inner_dim + concat_idx; dx[idx] = dy[y_idx]; } } template <> void ConcatGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float* dy, float* dx, CUDAContext* context) { _ConcatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void ConcatGrad<float16, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_concat_dim, const int y_concat_dim, const int concat_offset, const float16* dy, float16* dx, CUDAContext* context) { _ConcatGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_concat_dim, y_concat_dim, concat_offset, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** ndarray.crop ********************/ template<typename T> __global__ void _Crop1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; y[idx] = x[(o * dim + ex_d + start) * inner_dim + i]; } } template<> void Crop1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const float* x, float* y, CUDAContext* context) { _Crop1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, x, y); CUDA_POST_KERNEL_CHECK; } template<typename T> __global__ void _Crop1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const int end, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int d = (idx / inner_dim) % dim; const int o = idx / inner_dim / dim; if (d >= start && d < end) dx[idx] = dy[(o * ex_dim + d - start) * inner_dim + i]; } } template<> void Crop1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int start, const 
int end, const float* dy, float* dx, CUDAContext* context) { _Crop1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, start, end, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.pad ********************/ template <typename T> __global__ void _ConstPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T value, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = ex_d - pad_l; y[idx] = (d < 0 || d >= dim) ? value : x[(o * dim + d) * inner_dim + i]; } } template <> void ConstPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float value, const float* x, float* y, CUDAContext* context) { _ConstPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, value, x, y); } template <typename T> __global__ void _ReflectPad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void ReflectPad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _ReflectPad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _EdgePad1D(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); y[idx] = x[(o * dim + d) * inner_dim + i]; } } template <> void EdgePad1D<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* x, float* y, CUDAContext* context) { _EdgePad1D<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, x, y); } template <typename T> __global__ void _ConstPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % dim + pad_l; const int o = idx / inner_dim / dim; dx[idx] = dy[(o * ex_dim + ex_d) * inner_dim + i]; } } template <> void ConstPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _ConstPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _ReflectPad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; int d = ex_d - pad_l; d = max(d, -d); d = min(d, 2 * dim - d - 2); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> 
void ReflectPad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx) { _ReflectPad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } template <typename T> __global__ void _EdgePad1DGrad(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int i = idx % inner_dim; const int ex_d = (idx / inner_dim) % ex_dim; const int o = idx / inner_dim / ex_dim; const int d = min(dim - 1, max(ex_d - pad_l, 0)); atomicAdd(&dx[(o * dim + d) * inner_dim + i], dy[idx]); } } template <> void EdgePad1DGrad<float, CUDAContext>(const int count, const int dim, const int ex_dim, const int inner_dim, const int pad_l, const float* dy, float* dx, CUDAContext* context) { _EdgePad1DGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, dim, ex_dim, inner_dim, pad_l, dy, dx); } /******************** ndarray.one_hot ********************/ template <typename T> __global__ void _OneHot(const int count, const int depth, const int on_value, const float* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { const int val = x[idx]; y[idx * depth + val] = on_value; } } template <> void OneHot<float, CUDAContext>(const int count, const int depth, const int on_value, const float* x, float* y) { _OneHot<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, depth, on_value, x, y); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.reduce ********************/ template <typename T> __global__ void _Sum(const int count, const int axis_dim, const int inner_dim, const T* x, float* y) { CUDA_KERNEL_LOOP(idx, count) { T sum_val = 0.0; for (int j = 0; j < axis_dim; j++) sum_val += x[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim]; y[idx] = sum_val; } } template<> void Sum<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float* x, float* y) { _Sum<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SumGrad(const int count, const int axis_dim, const int inner_dim, const T coeff, const T* dy, float* dx) { CUDA_KERNEL_LOOP(idx, count) { for (int j = 0; j < axis_dim; j++) dx[(idx / inner_dim * axis_dim + j) * inner_dim + idx % inner_dim] = dy[idx] * coeff; } } template<> void SumGrad<float, CUDAContext>(const int count, const int axis_dim, const int inner_dim, const float coeff, const float* dy, float* dx) { _SumGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, axis_dim, inner_dim, coeff, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.repeat ********************/ template <typename T> __global__ void _Repeat(const int count, const int inner_dim, const int repeats, const int dim, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim / repeats) % dim; const int n = idx / inner_dim / repeats / dim; const int x_idx = (n * dim + b) * inner_dim + d; y[idx] = x[x_idx]; } } template <> void Repeat<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* x, float* y, CUDAContext* context) { _Repeat<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _RepeatGrad(const int count, const int inner_dim, 
const int repeats, const int dim, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % inner_dim; const int b = (idx / inner_dim) % dim; const int n = idx / inner_dim / dim; T gradient = 0; for (int t = 0; t < repeats; t++) gradient += dy[(((n * dim + b) * repeats) + t) * inner_dim + d]; dx[idx] = gradient; } } template <> void RepeatGrad<float, CUDAContext>(const int count, const int outer_dim, const int dim, const int inner_dim, const int repeats, const float* dy, float* dx, CUDAContext* context) { _RepeatGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, inner_dim, repeats, dim, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.slice ********************/ template <typename T> __global__ void _Slice(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; y[idx] = x[x_idx]; } } template <> void Slice<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* x, float* y, CUDAContext* context) { _Slice<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _SliceGrad(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int tmp = y_slice_dim * inner_dim; const int outer_idx = idx / tmp; const int slice_idx = idx % tmp; const int x_idx = (outer_idx * x_slice_dim + slice_offset) * inner_dim + slice_idx; dx[x_idx] = dy[idx]; } } template <> void SliceGrad<float, CUDAContext>(const int count, const int outer_dim, const int inner_dim, const int x_slice_dim, const int y_slice_dim, const int slice_offset, const float* dy, float* dx, CUDAContext* context) { _SliceGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, outer_dim, inner_dim, x_slice_dim, y_slice_dim, slice_offset, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.tile ********************/ template <typename T> __global__ void _Tile(const int count, const int ex_inner_dim, const int multiple, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim / multiple; const int x_idx = n * ex_inner_dim + d; y[idx] = x[x_idx]; } } template <> void Tile<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* x, float* y, CUDAContext* context) { _Tile<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, x, y); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _TileGrad(const int count, const int ex_inner_dim, const int multiple, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int d = idx % ex_inner_dim; const int n = idx / ex_inner_dim; T gradient = 0; for (int t = 0; t < multiple; t++) gradient += dy[(n * multiple + t) * ex_inner_dim + d]; dx[idx] = gradient; } } template <> void TileGrad<float, CUDAContext>(const int count, const int outer_dim, const int ex_inner_dim, const int multiple, const float* dy, float* dx, 
CUDAContext* context) { _TileGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ex_inner_dim, multiple, dy, dx); CUDA_POST_KERNEL_CHECK; } /******************** ndarray.transpose ********************/ template <typename T> __global__ void _Transpose(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* x, T* y) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } y[idx] = x[x_idx]; } } template <> void Transpose<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* x, float* y) { _Transpose<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, x, y); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void Transpose<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* x, float16* y) { _Transpose<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(x), reinterpret_cast<half*>(y)); CUDA_POST_KERNEL_CHECK; } #endif template <typename T> __global__ void _TransposeGrad(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const T* dy, T* dx) { CUDA_KERNEL_LOOP(idx, count) { int x_idx = 0, y_idx = idx; for (int j = 0; j < ndim; ++j) { int k = order[j]; x_idx += (y_idx / new_steps[j]) * old_steps[k]; y_idx %= new_steps[j]; } dx[x_idx] = dy[idx]; } } template <> void TransposeGrad<float, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float* dy, float* dx) { _TransposeGrad<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, dy, dx); CUDA_POST_KERNEL_CHECK; } #ifdef WITH_CUDA_FP16 template <> void TransposeGrad<float16, CUDAContext>(const int count, const int ndim, const int* order, const int* old_steps, const int* new_steps, const float16* dy, float16* dx) { _TransposeGrad<half> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, ndim, order, old_steps, new_steps, reinterpret_cast<const half*>(dy), reinterpret_cast<half*>(dx)); CUDA_POST_KERNEL_CHECK; } #endif /******************** recurrent.lstm_uint ********************/ template <typename T> __global__ void _LSTMUnitAct(const int count, const int channels, const int g_offset, const int x_offset, const T* x, T* x_act) { CUDA_KERNEL_LOOP(idx, count) { const int ch_4 = idx % x_offset; if (ch_4 < g_offset) x_act[idx] = _SigmoidUnit<float>(x[idx]); else x_act[idx] = std::tanh(x[idx]); } } template <typename T> __global__ void _LSTMUnit(const int count, const int channels, const int o_offset, const int g_offset, const int x_offset, const T* c_1, T* x_act, const T* cont, T* c, T* h) { CUDA_KERNEL_LOOP(idx, count) { const int n = idx / channels; const int ch = idx % channels; T* x_act_ = x_act + n * x_offset; const T i = x_act_[ch]; if (cont != nullptr && cont[n] != T(1)) x_act_[channels + ch] *= cont[n]; const T f = x_act_[channels + ch]; const T o = x_act_[o_offset + ch]; const T g = x_act_[g_offset + ch]; const T c_ = c[idx] = f * c_1[idx] + i * g; h[idx] = o * std::tanh(c_); } } template <> void LSTMUnit<float, CUDAContext>(const int count, const int num, const int channels, const float* c_1, const float* x, const float* cont, float* x_act, 
float* c, float* h) { const int o_offset = 2 * channels, g_offset = 3 * channels; const int x_offset = 4 * channels, y_count = count / 4; _LSTMUnitAct<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, g_offset, x_offset, x, x_act); _LSTMUnit<float> << <GET_BLOCKS(y_count), CUDA_NUM_THREADS >> >(y_count, channels, o_offset, g_offset, x_offset, c_1, x_act, cont, c, h); CUDA_POST_KERNEL_CHECK; } template <typename T> __global__ void _LSTMUnitGrad(const int count, const int channels, const int o_offset, const int g_offset, const int x_offset, const T* c_1, const T* x_act, const T* c, const T* dc, const T* dh, T* dc_1, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int n = idx / channels; const int ch = idx % channels; const T* x_act_ = x_act + n * x_offset; T* dx_ = dx + n * x_offset; const T i = x_act_[ch]; const T f = x_act_[channels + ch]; const T o = x_act_[o_offset + ch]; const T g = x_act_[g_offset + ch]; T* p_di = dx_ + ch; T* p_df = dx_ + channels + ch; T* p_do = dx_ + o_offset + ch; T* p_dg = dx_ + g_offset + ch; const T tanh_c_t = tanh(c[idx]); const T dc_1_sum_term = dh[idx] * o * (1 - tanh_c_t * tanh_c_t) + dc[idx]; dc_1[idx] = dc_1_sum_term * f; *p_di = dc_1_sum_term * g; *p_df = dc_1_sum_term * c_1[idx]; *p_do = dh[idx] * tanh_c_t; *p_dg = dc_1_sum_term * i; } } template <typename T> __global__ void _LSTMUnitGradAct(const int count, const int channels, const int g_offset, const int x_offset, const T* x_act, T* dx) { CUDA_KERNEL_LOOP(idx, count) { const int ch_4 = idx % x_offset; const T x_act_ = x_act[idx]; if (ch_4 < g_offset) dx[idx] = dx[idx] * x_act_ * (T(1) - x_act_); else dx[idx] = dx[idx] * (T(1) - x_act_ * x_act_); } } template <> void LSTMUnitGrad<float, CUDAContext>(const int count, const int num, const int channels, const float* c_1, const float* x_act, const float* c, const float* dc, const float* dh, float* dc_1, float* dx) { const int o_offset = 2 * channels, g_offset = 3 * channels; const int x_offset = 4 * channels, y_count = count / 4; _LSTMUnitGrad<float> << <GET_BLOCKS(y_count), CUDA_NUM_THREADS >> >(y_count, channels, o_offset, g_offset, x_offset, c_1, x_act, c, dc, dh, dc_1, dx); _LSTMUnitGradAct<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, channels, g_offset, x_offset, x_act, dx); CUDA_POST_KERNEL_CHECK; } /******************** update.adam_update ********************/ template <typename T> __global__ void _AdamUpdate(const int n, T* g, T* m, T* v, const T beta1, const T beta2, const T eps, const T lr) { CUDA_KERNEL_LOOP(i, n) { T gi = g[i]; T mi = m[i] = m[i] * beta1 + gi * (1 - beta1); T vi = v[i] = v[i] * beta2 + gi * gi * (1 - beta2); g[i] = lr * mi / (sqrt(vi) + eps); } } template <> void AdamUpdate<float, CUDAContext>(Tensor* x, Tensor* m, Tensor* v, Tensor* t, const float beta1, const float beta2, const float eps, const float lr) { TIndex count = x->count(); auto* Xdata = x->mutable_data<float, CUDAContext>(); auto* Mdata = m->mutable_data<float, CUDAContext>(); auto* Vdata = v->mutable_data<float, CUDAContext>(); _AdamUpdate<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, Xdata, Mdata, Vdata, beta1, beta2, eps, lr); CUDA_POST_KERNEL_CHECK; } /******************** update.nesterov_update ********************/ template <typename T> __global__ void _NesterovUpdate(const int n, T* g, T* h, const T momentum, const T lr) { CUDA_KERNEL_LOOP(i, n) { T hi = h[i]; T hi_new = h[i] = momentum * hi + lr * g[i]; g[i] = (1 + momentum) * hi_new - momentum * hi; } } template <> void NesterovUpdate<float, CUDAContext>(const int 
count, float* x, float* h, Tensor* t, const float momentum, const float lr, CUDAContext* ctx) { _NesterovUpdate<float> << <GET_BLOCKS(count), CUDA_NUM_THREADS >> >(count, x, h, momentum, lr); CUDA_POST_KERNEL_CHECK; } /******************** update.rmsprop_update ********************/ template <typename T> __global__ void _RMSPropUpdate(const int n, T* g, T* h, const | 396 | Add SELU & PReLU support | 151 | .cu | cu | bsd-2-clause | neopenx/Dragon |
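For reference, the `_AdamUpdate` and `_NesterovUpdate` kernels near the end of the op_kernel.cu text above apply standard per-element update rules. The single-threaded C++ sketch below restates the same arithmetic for readability; the function names and plain loops are illustrative only, mirroring the kernels' argument names rather than any actual Dragon API.

```cpp
#include <cmath>
#include <cstddef>

// Mirrors _AdamUpdate: m and v accumulate the first/second gradient moments,
// then g is overwritten with the scaled step, exactly as in the kernel body above.
void AdamUpdateSketch(std::size_t n, float* g, float* m, float* v,
                      float beta1, float beta2, float eps, float lr) {
    for (std::size_t i = 0; i < n; ++i) {
        const float gi = g[i];
        m[i] = m[i] * beta1 + gi * (1.0f - beta1);
        v[i] = v[i] * beta2 + gi * gi * (1.0f - beta2);
        g[i] = lr * m[i] / (std::sqrt(v[i]) + eps);
    }
}

// Mirrors _NesterovUpdate: h keeps the momentum buffer and g becomes the lookahead step.
void NesterovUpdateSketch(std::size_t n, float* g, float* h, float momentum, float lr) {
    for (std::size_t i = 0; i < n; ++i) {
        const float hi = h[i];
        const float hi_new = h[i] = momentum * hi + lr * g[i];
        g[i] = (1.0f + momentum) * hi_new - momentum * hi;
    }
}
```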
1847 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provides support for ehcache and concurrentHashMap. Here is an implementation of a redis cache that can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
<version>2.4.6</version>
</dependency>
```
<MSG> Change README.md to use 2.4.8
<DFF> @@ -16,7 +16,7 @@ You can choose these 2 ways to include shiro-redis into your project
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
- <version>2.4.6</version>
+ <version>2.4.8</version>
</dependency>
```
| 1 | Change README.md to use 2.4.8 | 1 | .md | md | mit | alexxiyang/shiro-redis |
1848 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provides support for ehcache and concurrentHashMap. Here is an implementation of a redis cache that can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
securityManager.cacheManager = $cacheManager
```
If you found any bugs
===========
<MSG> try to remember
<DFF> @@ -23,6 +23,7 @@ cacheManager.expire=5
securityManager.cacheManager = $cacheManager
```
+
If you found any bugs
===========
| 1 | try to remember | 0 | .md | md | mit | alexxiyang/shiro-redis |
1849 | <NME> l2_norm_op.cc
<BEF> #include "operators/norm/l2_norm_op.h"
#include "core/workspace.h"
#include "utils/math_functions.h"
namespace dragon {
template <class Context> template <typename T>
void L2NormOp<Context>::RunWithType() {
INIT_MULTIPLIER(multiplier, dim);
// normalize by outer dim independently
buffer = ws()->GetBuffer();
vector<TIndex> dims = input(0).dims();
for (int i = 0; i < axis; i++) dims[i] = 1;
buffer->Reshape(dims);
// normalize by inner_dim independently if not across it
norm = ws()->CreateTensor("/mnt/" + anchor() + "/l2norm_normalizer");
dims = input(0).dims();
for (int i = axis; i < end_axis; i++) dims[i] = 1;
norm->Reshape(dims);
auto* Xdata = input(0).template data<T, Context>();
auto* DMuldata = multiplier->data<T, Context>();
auto* Ydata = output(0)->template mutable_data<T, Context>();
auto* Bdata = buffer->template mutable_data<T, Context>();
auto* Ndata = norm->template mutable_data<T, Context>();
for (int n = 0; n < outer_dim; n++) {
if (across_inner) {
auto* Ndata_ = norm->template mutable_data<float, CPUContext>();
float sum_of_sqr = math::Dot<T, Context>(buffer->count(), Xdata, Xdata);
if (mode == "MEAN") sum_of_sqr = sum_of_sqr / dim;
Ndata_[n] = pow(sum_of_sqr + eps, 0.5);
math::Scale<T, Context>(buffer->count(), 1.0 / Ndata_[n], Xdata, Ydata);
} else {
math::Set<T, Context>(norm->count(), dragon_cast<T, float>(eps), Ndata);
math::Square<T, Context>(buffer->count(), Xdata, Bdata);
// compute T1 = \sum_{i} x_{i,j}^{2}
math::Gemv<T, Context>(CblasTrans, dim, inner_dim,
mode == "MEAN" ? 1.0 / dim : 1.0,
Bdata, DMuldata,
1.0,
Ndata);
// compute T2 = \sqrt{T1}
math::Sqrt<T, Context>(inner_dim, Ndata, Ndata);
// compute T3 = x / [(T2)]_{dim}
math::Gemm<T, Context>(CblasNoTrans, CblasNoTrans, dim, inner_dim, 1,
1.0,
DMuldata, Ndata,
0.0,
Bdata);
math::Div<T, Context>(buffer->count(), Xdata, Bdata, Ydata);
Ndata += inner_dim;
}
Xdata += buffer->count();
Ydata += buffer->count();
}
// release buffer
ws()->ReleaseBuffer(buffer);
}
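// Reference for RunWithType above, read off the kernel calls rather than any spec:
// for each outer_dim slice, with s = sum of x^2 over the reduced axes (divided by dim
// when mode == "MEAN"), the normalizer sqrt(s + eps) is cached in the
// "/mnt/<anchor>/l2norm_normalizer" tensor and the output is y = x / sqrt(s + eps).
// When across_inner is false, the same normalizer is kept per inner_dim position
// instead of one scalar per slice.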
template <class Context>
void L2NormOp<Context>::RunOnDevice() {
if (num_axes >= 0) {
if (num_axes == 0) num_axes += 1;
} else num_axes = (int)input(0).ndim() - axis;
end_axis = axis + num_axes;
CHECK_LE(end_axis, int(input(0).ndim()));
// do statistics through [axis, end_axis)
outer_dim = input(0).count(0, axis);
dim = input(0).count(axis, axis + num_axes);
inner_dim = input(0).count(axis + num_axes);
if (inner_dim == 1) across_inner = true;
else across_inner = false;
output(0)->ReshapeLike(input(0));
if (input(0).template IsType<float>()) RunWithType<float>();
#ifdef WITH_CUDA_FP16
else if (input(0).template IsType<float16>()) RunWithType<float16>();
#endif
else LOG(FATAL) << "Unsupported input types.";
}
DEPLOY_CPU(L2Norm);
#ifdef WITH_CUDA
DEPLOY_CUDA(L2Norm);
#endif
OPERATOR_SCHEMA(L2Norm).NumInputs(1).NumOutputs(1);
template <class Context> template <typename T>
void L2NormGradientOp<Context>::RunWithType() {
INIT_MULTIPLIER(multiplier, dim);
// normalize by inner_dim independently if not across it
norm = ws()->GetTensor("/mnt/" + anchor() + "/l2norm_normalizer");
buffer = ws()->GetBuffer();
vector<TIndex> dims = input(0).dims();
for (int i = 0; i < axis; i++) dims[i] = 1;
buffer->Reshape(dims);
buffer_inner = ws()->GetBuffer();
buffer_inner->Reshape(vector<TIndex>(1, inner_dim));
auto* Xdata = input(0).template data<T, Context>();
auto* dYdata = input(-1).template data<T, Context>();
auto* DMuldata = multiplier->data<T, Context>();
auto* Ndata = norm->template data<T, Context>();
auto* dXdata = output(0)->template mutable_data<T, Context>();
auto* Bdata = buffer->template mutable_data<T, Context>();
auto* BInnerdata = buffer_inner->template mutable_data<T, Context>();
for (int n = 0; n < outer_dim; n++) {
if (across_inner) {
Ndata = norm->template data<T, CPUContext>();
T sum_of_x_mul_dy = math::Dot<T, Context>(buffer->count(), Xdata, dYdata);
if (mode == "MEAN") sum_of_x_mul_dy = sum_of_x_mul_dy / dim;
math::Scale<T, Context>(buffer->count(), sum_of_x_mul_dy / Ndata[n] / Ndata[n], Xdata, dXdata);
math::Sub<T, Context>(buffer->count(), dYdata, dXdata, dXdata);
math::Scal<T, Context>(buffer->count(), T(1.0 / Ndata[n]), dXdata);
} else {
// compute \sum_{i} x_{i, j}dy_{i, j}
math::Mul<T, Context>(buffer->count(), Xdata, dYdata, Bdata);
math::Gemv<T, Context>(CblasTrans, dim, inner_dim,
mode == "MEAN" ? 1.0 / dim : 1.0,
Bdata, DMuldata,
0.0,
BInnerdata);
// compute T1 = x[(\sum_{i} x_{i, j}dy_{i, j})]_{dim}
math::Gemm<T, Context>(CblasNoTrans, CblasNoTrans, dim, inner_dim, 1,
1.0,
DMuldata, BInnerdata,
0.0,
Bdata);
math::Mul<T, Context>(buffer->count(), Xdata, Bdata, dXdata);
// compute T2 = T1 / Normalizer^{2}
math::Pow<T, Context>(inner_dim, 2.0, Ndata, BInnerdata);
math::Gemm<T, Context>(CblasNoTrans, CblasNoTrans, dim, inner_dim, 1,
1.0,
DMuldata, BInnerdata,
0.0,
Bdata);
math::Div<T, Context>(buffer->count(), dXdata, Bdata, dXdata);
// compute T3 = (dy - T2) / Normalizer
math::Sub<T, Context>(buffer->count(), dYdata, dXdata, dXdata);
math::Gemm<T, Context>(CblasNoTrans, CblasNoTrans, dim, inner_dim, 1,
1.0,
DMuldata, Ndata,
0.0,
Bdata);
math::Div<T, Context>(buffer->count(), dXdata, Bdata, dXdata);
Ndata += inner_dim;
}
Xdata += buffer->count();
dYdata += buffer->count();
dXdata += buffer->count();
}
// release buffer
ws()->ReleaseBuffer(buffer_inner);
ws()->ReleaseBuffer(buffer);
}
template <class Context>
void L2NormGradientOp<Context>::RunOnDevice() {
if (num_axes >= 0) {
if (num_axes == 0) num_axes += 1;
} else { num_axes = (int)input(0).ndim() - axis; }
end_axis = axis + num_axes;
CHECK_LE(end_axis, int(input(0).ndim()));
// do statistics through [axis, end_axis)
outer_dim = input(0).count(0, axis);
dim = input(0).count(axis, axis + num_axes);
inner_dim = input(0).count(axis + num_axes);
if (inner_dim == 1) across_inner = true;
else across_inner = false;
output(0)->ReshapeLike(input(0));
if (input(0).template IsType<float>()) RunWithType<float>();
else LOG(FATAL) << "Unsupported input types.";
}
DEPLOY_CPU(L2NormGradient);
#ifdef WITH_CUDA
DEPLOY_CUDA(L2NormGradient);
#endif
OPERATOR_SCHEMA(L2NormGradient).NumInputs(2).NumOutputs(1);
class GetL2NormGradient final : public GradientMakerBase {
public:
GRADIENT_MAKER_CTOR(GetL2NormGradient);
vector<OperatorDef> MakeDefs() override {
return SingleDef(def.type() + "Gradient", "",
vector<string> {I(0), GO(0)},
vector<string> {GI(0)});
}
};
REGISTER_GRADIENT(L2Norm, GetL2NormGradient);
} // namespace dragon
<MSG> Fix the potential crash on L2NormOp
<DFF> @@ -25,6 +25,7 @@ void L2NormOp<Context>::RunWithType() {
auto* Ydata = output(0)->template mutable_data<T, Context>();
auto* Bdata = buffer->template mutable_data<T, Context>();
auto* Ndata = norm->template mutable_data<T, Context>();
+ math::Set<T, Context>(norm->count(), dragon_cast<T, float>(eps), Ndata);
for (int n = 0; n < outer_dim; n++) {
if (across_inner) {
@@ -34,7 +35,6 @@ void L2NormOp<Context>::RunWithType() {
Ndata_[n] = pow(sum_of_sqr + eps, 0.5);
math::Scale<T, Context>(buffer->count(), 1.0 / Ndata_[n], Xdata, Ydata);
} else {
- math::Set<T, Context>(norm->count(), dragon_cast<T, float>(eps), Ndata);
math::Square<T, Context>(buffer->count(), Xdata, Bdata);
// compute T1 = \sum_{i} x_{i,j}^{2}
math::Gemv<T, Context>(CblasTrans, dim, inner_dim,
| 1 | Fix the potential crash on L2NormOp | 1 | .cc | cc | bsd-2-clause | neopenx/Dragon |
1851 | <NME> hitchvagrant.rst
<BEF> HitchVagrant
============
.. note::
This documentation applies to the latest version of hitchvagrant.
Installation
------------
If it is not already installed, install the hitch vagrant package::
$ hitch install hitchvagrant
Setup
-----
To use, define the service after initializing the :doc:`/api/service_bundle`:
Like so:
.. code-block:: python
import hitchvagrant
# Service definition in engine's setUp:
self.services['MyVM'] = hitchvagrant.VagrantService(
directory="vagrantubuntu/", # Directory containing Vagrantfile (optional)
)
Interaction
-----------
Once it is running, you can run ssh commands against the machine::
In [1]: self.services['MyVM'].ssh("pwd").run()
/vagrant
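
A custom engine step built on this call might look roughly like the sketch below. Only ``ssh(...).run()`` is taken from the documentation above; the function name and the command are illustrative:

.. code-block:: python

    def install_package_in_vm(services, package):
        # 'services' is the engine's self.services mapping from the setUp example above.
        # Any shell command can be passed to ssh(); this one is just an example.
        services['MyVM'].ssh("sudo apt-get install -y {0}".format(package)).run()
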
<MSG> DOCS : Fixed some bad links
<DFF> @@ -17,7 +17,11 @@ If it is not already installed, install the hitch vagrant package::
Setup
-----
-To use, define the service after initializing the :doc:`/api/service_bundle`:
+.. note::
+
+ See also: :doc:`/api/generic_service_api`
+
+To use, define the service after initializing the :doc:`/glossary/service_bundle`:
Like so:
| 5 | DOCS : Fixed some bad links | 1 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1852 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from sys import stderr, exit, modules, argv
from os import path, makedirs, listdir, kill
from functools import partial
import hitchdir
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).replace("\n", "")
if hitchdir.hitch_exists():
stderr.write("Hitch has already been initialized in this directory or a directory above it.\n")
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"])
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Made the hitch bootstrap script work on python 3.
<DFF> @@ -4,7 +4,7 @@ from click import command, group, argument, option
from sys import stderr, exit, modules, argv
from os import path, makedirs, listdir, kill
from functools import partial
-import hitchdir
+from hitch import hitchdir
import shutil
import signal
import copy
@@ -27,7 +27,7 @@ def init():
stderr.flush()
exit(1)
- python3 = check_output(["which", "python3"]).replace("\n", "")
+ python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
if hitchdir.hitch_exists():
stderr.write("Hitch has already been initialized in this directory or a directory above it.\n")
@@ -55,7 +55,7 @@ def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
- pip_freeze = check_output([pip, "freeze"]).split('\n')
+ pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
@@ -63,7 +63,7 @@ def update_requirements():
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
- pip_freeze = check_output([pip, "freeze"])
+ pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
| 4 | FEATURE : Made the hitch bootstrap script work on python 3. | 4 | .py | py | agpl-3.0 | hitchtest/hitch |
1853 | <NME> languagestrings.py
<BEF> ADDFILE
<MSG> BUG : Check for directory integrity properly and make the script work in python 2.6
<DFF> @@ -0,0 +1,50 @@
+
+
+SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH = """\
+Create hitch virtualenv using specific python version
+(e.g. /usr/bin/python3). Defaults to using python3 on the system path."""
+
+SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH = """\
+Create hitch virtualenv using specific virtualenv
+(e.g. /usr/bin/virtualenv). Defaults to using virtualenv on the system path."""
+
+YOU_MUST_HAVE_VIRTUALENV_INSTALLED = """\
+You must have virtualenv installed to use hitch.
+"""
+
+YOU_MUST_HAVE_PYTHON3_INSTALLED = """\
+To use Hitch, you must have python 3 installed on your system
+and available. If your python3 is not on the system path with
+the name python3, specify its exact location using --python.
+"""
+
+YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33 = """\
+Hitch must have python 3.3 or higher installed to run.
+Your app can run with earlier versions of python, but the tests can't.
+"""
+
+HITCH_ALREADY_INITIALIZED = """\
+Hitch has already been initialized in this directory or a directory above it.
+If you wish to re-initialize hitch in this directory, run 'hitch clean' first.
+"""
+
+ERROR_INITIALIZING_HITCH = """\
+\nError initializing hitch. Problem checklist:\n
+* Was there a problem with your internet?
+* Was there a python package being installed that couldn't compile?\n
+Try searching for any errors printed above or raising an issue at:
+http://github.com/hitchtest/hitch/issues/
+"""
+
+HITCH_DIRECTORY_MOVED = """\
+The hitch directory '{0}' was moved.
+"Run 'hitch clean' then run 'hitch init' in this directory:
+==> {1}
+"""
+
+HITCH_NOT_INITIALIZED = """\
+Hitch has not been initialized in this directory, or any of the directories beneath it:\n"""
+
+SOMETHING_CORRUPTED = """\
+WARNING: Hitch directory was corrupted. Run 'hitch clean' and hitch init again.\n
+"""
| 50 | BUG : Check for directory integrity properly and make the script work in python 2.6 | 0 | .py | py | agpl-3.0 | hitchtest/hitch |
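For context, the constants added above are written to stderr by the bootstrapper CLI (the commandline.py record earlier in this dump shows the same pattern). A minimal sketch of that usage, assuming the module is importable as ``languagestrings`` (the import path inside the hitch package may differ):

```python
from sys import stderr, version_info, exit

import languagestrings  # the module created by the diff above; path is assumed

if version_info[:2] < (3, 3):
    stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
    exit(1)
```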
1854 | <NME> refactor_your_tests.rst
<BEF> ADDFILE
<MSG> DOCS : Update to settings docs and added refactoring docs plus debugging docs.
<DFF> @@ -0,0 +1,57 @@
+How to refactor your tests
+==========================
+
+A good, although uncommon, development practice when building features or
+fixing bugs is refactoring.
+
+Refactoring means making small, incremental changes that improve the
+quality of code.
+
+This usually means:
+
+* De-duplicating code
+* Decoupling code
+* Minimizing the amount of imperative code (e.g. python code) in favor of declarative code (e.g. YAML configuration).
+* Improving the code's readability (changing names, adding comments)
+
+Tests are code too, so it's good practice to refactor your tests to gradually improve :doc:`/glossary/test_quality`
+as you write new tests or fix existing ones.
+
+Of course, you should only refactor *passing* tests and you should always run passing tests
+after refactoring to ensure that they are still passing.
+
+Here is a list of things which commonly need refactoring in tests.
+
+
+De-duplicate duplicated tests
+-----------------------------
+
+You may find after a while that your test suite has a lot of duplication - for example,
+tests that do almost the same thing in two or three slightly different ways.
+
+See :doc:`/glossary/parameterize_test_cases` for how to remove some of that duplication.
+
+
+Move configuration from engine.py to all.settings
+-------------------------------------------------
+
+Your execution engine should be kept as short as possible yet still capable. If you have
+any long lists in your engine.py, moving them into all.settings will help to keep it clean.
+
+
+Change HTML IDs and Classes to make them more readable
+------------------------------------------------------
+
+Beware! This might be best left to a developer since it may require code changes as well
+if the ID is used in many places (code, javascript, CSS, etc.) as well as changing
+other IDs to accommodate.
+
+If you have test steps that look like this::
+
+ - Click: btn-rgstr-a1
+
+Because the registration button had the HTML ID 'btn-rgstr-a1' when you wrote the test,
+it might be worth changing the ID in the test and in the HTML code to make it more
+readable, e.g. to something like::
+
+ - Click: register
| 57 | DOCS : Update to settings docs and added refactoring docs plus debugging docs. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
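As a concrete illustration of the "move configuration from engine.py to all.settings" refactor described in the record above: the ``browsers`` key and its values are invented for this example, and the exact way hitchtest exposes loaded settings can vary by version, but the shape of the change is the same.

```python
# Before: data baked into engine.py as a long literal list.
BROWSERS = ["firefox", "chrome", "chromium", "phantomjs"]

# After: engine.py only reads what all.settings supplies (a browsers: [...] key in YAML).
class Engine(object):
    # Stand-in for the settings dict hitchtest loads from all.settings / tdd.settings.
    settings = {"browsers": ["firefox", "chrome", "chromium", "phantomjs"]}

    def browsers_under_test(self):
        return self.settings.get("browsers", [])
```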
1855 | <NME> tested_on.rst
<BEF> ADDFILE
<MSG> DOCS : Updates to documentation.
<DFF> @@ -0,0 +1,77 @@
+Tested on
+=========
+
+Working on
+----------
+
+This is a list of environments that Hitch is tested on
+and has been found to work on. If you use one of these environments
+and you discover a problem, that's a bug. Please raise an issue.
+
+OSes:
+
+* Ubuntu 14.04
+* Mac OS X Mavericks (with brew, Xcode version X)
+* Mac OS X Yosemite (with brew, Xcode version X)
+* Debian Jessie
+* Fedora 20
+* CentOS 6.4
+* Arch (latest rolling release)
+
+Additionally, the following environments have been tested with and
+seem to run Hitch okay:
+
+* Docker
+* Jenkins
+* Travis CI
+
+
+Not working on
+--------------
+
+This is a list of other UNIX systems that Hitch will not currently work on, but
+might be made to function with some work. If you really want one of these systems
+to run hitch, that's a feature request. It may happen if you raise an issue.
+
+* Mandriva
+* OpenSUSE
+* Mac OS X with macports
+* BSD
+* Gentoo
+* Mac OS X with no package manager
+* Slackware
+* LFS
+
+
+.. note::
+
+ 95% of getting these environments to work involves getting unixpackage to work with them.
+ Please consider helping out on that project.
+
+This is a list of environments that probably aren't happening for the foreseeable
+future, due to a combination of being hard to support and not being worth the hassle:
+
+* Cygwin
+* Windows
+
+
+Untested but should still work
+------------------------------
+
+This is a list of environments that Hitch has *not* been tested on, but
+which should hopefully still work; I'm just not as confident about these.
+
+If it doesn't work, that's a bug. Please raise an issue if you find a problem.
+
+If you have access to one of these, *please* try out the example project
+and submit a pull request adding your environment if it works or raise
+an issue if it doesn't:
+
+* Mac OS X El Capitan
+* Variations on Mac OS X Yosemite/Mavericks (e.g. with different versions of Xcode)
+* Red Hat
+* Ubuntu/Debian based distros like Trisquel, Kali or Mint
+* Red Hat based distros like Oracle Linux
+* Basically any Linux system not mentioned above
+* Any continuous integration system not mentioned above
+* Any Linux system mentioned above but a slightly different version
| 77 | DOCS : Updates to documentation. | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1856 | <NME> IRedisManager.java
<BEF> package org.crazycake.shiro;
import java.util.Set;
/**
* redisManager interface
*
**/
public interface IRedisManager {
/**
* get value from redis
* @param key key
* @return value
*/
byte[] get(byte[] key);
/**
* set
* @param key key
* @param value value
* @return value
*/
byte[] set(byte[] key, byte[] value, int expire);
/**
* del
* @param key key
*/
void del(byte[] key);
/**
* dbsize
* @param pattern pattern
* @return key-value size
*/
Long dbSize(byte[] pattern);
/**
* keys
* @param pattern key pattern
* @return key set
*/
Set<byte[]> keys(byte[] pattern);
}
<MSG> Fix checkstyle error
<DFF> @@ -17,9 +17,10 @@ public interface IRedisManager {
byte[] get(byte[] key);
/**
- * set
- * @param key key
+ * set value
+ * @param key key
* @param value value
+ * @param expire expire
* @return value
*/
byte[] set(byte[] key, byte[] value, int expire);
| 3 | Fix checkstyle error | 2 | .java | java | mit | alexxiyang/shiro-redis |
1857 | <NME> setup.py
<BEF> from distutils.core import setup
import os.path, sys
import shutil
packages = []
def find_packages(root_dir):
filenames = os.listdir(root_dir)
for filename in filenames:
filepath = os.path.join(root_dir, filename)
if os.path.isdir(filepath):
find_packages(filepath)
else:
if filename == '__init__.py':
packages.append(root_dir)
def find_modules():
dragon_c_lib_win32 = '../lib/dragon.dll'
dragon_c_lib_other = '../lib/libdragon.so'
if os.path.exists(dragon_c_lib_win32):
shutil.copy(dragon_c_lib_win32, 'dragon/libdragon.pyd')
elif os.path.exists(dragon_c_lib_other):
shutil.copy(dragon_c_lib_other, 'dragon/libdragon.so')
else:
print('ERROR: Unable to find modules. built Dragon using CMake.')
sys.exit()
def find_resources():
c_lib = ['libdragon.*']
protos = ['protos/*.proto', 'vm/caffe/proto/*.proto']
others = []
return c_lib + protos + others
find_packages('dragon')
find_modules()
setup(name = 'dragon',
version='0.2.1.1',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
license='BSD 2-Clause',
packages=packages,
package_dir={'dragon': 'dragon'},
package_data={'dragon': find_resources()})
<MSG> Fix the crash when terminating processes
<DFF> @@ -36,7 +36,7 @@ find_packages('dragon')
find_modules()
setup(name = 'dragon',
- version='0.2.1.1',
+ version='0.2.1.2',
description = 'Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework',
url='https://github.com/neopenx/Dragon',
author='Ting Pan',
| 1 | Fix the crash when terminating processes | 1 | .py | py | bsd-2-clause | neopenx/Dragon |
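A side note on the record above: the hand-rolled recursive ``find_packages``/``os.listdir`` walk reimplements what the setuptools helper of the same name already does. Whether that was avoided deliberately is not stated, so treat this as an optional alternative rather than a fix:

```python
from setuptools import find_packages

# Collects every directory under 'dragon' that contains an __init__.py,
# which is what the manual recursion above computes by hand.
packages = find_packages(include=["dragon", "dragon.*"])
```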
1859 | <NME> brittle_tests.rst
<BEF> Brittle Tests
=============
Brittleness is a property of tests which renders them likely to break easily
Brittleness can lead to :doc:`test_failure_habituation` and :doc:`test_abandonment`.
Solving brittleness in tests is a *hard* engineering problem. `Even Google can't seem to do it right. <http://googletesting.blogspot.ch/2015/04/just-say-no-to-more-end-to-end-tests.html>`_
The following are real life examples of tests failures caused by brittle tests:
* A lack of :doc:`data_isolation`.
* A lack of :doc:`environment_isolation`.
* A lack of :doc:`process_isolation`.
* A lack of :doc:`package_isolation`.
* Tight :doc:`coupling` between tests and code.
* :doc:`sleep_oriented_testing`.
* :doc:`event_oriented_testing`
* The :doc:`hitch_package`
* :doc:`environment_checks`
* Enforced :doc:`loose_coupling`
* Enforced :doc:`isolation`
* `Google not-so-wisely calling their own brittle tests a 'fact of life' <http://googletesting.blogspot.ch/2015/04/just-say-no-to-more-end-to-end-tests.html>`_
* :doc:`indeterminacy`.
* :doc:`sleep_oriented_testing`
* :doc:`tightly_coupled_tests`
Are you having a problem with brittle tests? `We can help you <https://hitchtest.com/consulting.html>`_
<MSG> DOCS : Updated the page on brittle tests.
<DFF> @@ -5,7 +5,7 @@ Brittleness is a property of tests which renders them liable to break easily des
Brittleness can lead to :doc:`test_failure_habituation` and :doc:`test_abandonment`.
-Solving brittleness in tests is a *hard* engineering problem. `Even Google can't seem to do it right. <http://googletesting.blogspot.ch/2015/04/just-say-no-to-more-end-to-end-tests.html>`_
+Solving brittleness in tests is a *hard* engineering problem. `Even Google has this problem. <http://googletesting.blogspot.ch/2015/04/just-say-no-to-more-end-to-end-tests.html>`_
The following are real life examples of tests failures caused by brittle tests:
@@ -18,7 +18,7 @@ Hitch *substantially* minimizes brittleness / false positives with the following
* :doc:`event_oriented_testing`
* The :doc:`hitch_package`
-* :doc:`environment_checks`
+* :doc:`/api/environment` checks
* Enforced :doc:`loose_coupling`
* Enforced :doc:`isolation`
@@ -27,5 +27,3 @@ See also:
* :doc:`indeterminacy`.
* :doc:`sleep_oriented_testing`
* :doc:`tightly_coupled_tests`
-
-Are you having a problem with brittle tests? `We can help you <https://hitchtest.com/consulting.html>`_
| 2 | DOCS : Updated the page on brittle tests. | 4 | .rst | rst | agpl-3.0 | hitchtest/hitch |
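The sleep-oriented versus event-oriented distinction drawn in the page above can be summarised with a generic, non-hitch sketch; the readiness condition and timings are placeholders:

```python
import time

def wait_with_sleep():
    # Sleep-oriented: brittle, because 5 seconds is sometimes too short and usually too long.
    time.sleep(5)

def wait_for(condition, timeout=30.0, interval=0.1):
    # Event-oriented: poll an explicit readiness signal (log line, open port, health check).
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise RuntimeError("condition not met within {0} seconds".format(timeout))
```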
1860 | <NME> hitchnode.rst
<BEF> ADDFILE
<MSG> Merge pull request #33 from kkarimi/master
DOCS: Basic usage of hitchnode
<DFF> @@ -0,0 +1,59 @@
+HitchNode
+==========
+
+.. note::
+
+ This documentation applies to the latest version of hitchnode.
+
+HitchNode is a :doc:`/glossary/hitch_plugin` created to make testing applications that use Node easier.
+
+It contains:
+
+* A :doc:`/glossary/hitch_package` to download and install specified version(s) of Node.
+* A :doc:`/glossary/service` to set up an isolated Node environment and run it.
+
+Note: the Node service destroys and sets up a new node environment during each test run in order
+to provide strict :doc:`/glossary/isolation` for your tests.
+
+Installation
+------------
+
+First, install the plugin in your tests directory::
+
+ $ hitch install hitchnode
+
+
+Set up Node
+------------
+
+In your test, define the Node installation you want to test with:
+
+.. code-block:: python
+
+ import hitchnode
+
+ node_package = hitchnode.NodePackage(
+ version="5.0.0" # Optional (default is the latest version of Node)
+ )
+
+ # Downloads & installs Node to ~/.hitchpkg if not already installed by previous test
+ node_package.build()
+
+
+To use, define the service after initializing the :doc:`/glossary/service_bundle` but before starting it:
+
+.. note::
+
+ See also: :doc:`/api/generic_service_api`
+
+.. code-block:: python
+
+ # Check if a package (e.g. less) is already installed:
+ import hitchtest
+ if not path.exists(path.join(
+ hitchtest.utils.get_hitch_directory(),
+ "node_modules", "less", "bin", "lessc"
+ )):
+ # Install the package is not found:
+ chdir(hitchtest.utils.get_hitch_directory())
+ check_call([node_package.npm, "install", "less"])
| 59 | Merge pull request #33 from kkarimi/master | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
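The check-then-install pattern at the end of the diff above can be factored into a helper. This sketch reuses only the calls already shown there (``path.exists``, ``check_call``, ``node_package.npm``, ``hitchtest.utils.get_hitch_directory``); the helper name and its arguments are made up, not part of hitchnode's API:

```python
from os import path, chdir
from subprocess import check_call

import hitchtest

def ensure_npm_package(node_package, name, binary):
    # Install an npm package into the hitch directory only if its binary is missing.
    hitch_dir = hitchtest.utils.get_hitch_directory()
    if not path.exists(path.join(hitch_dir, "node_modules", name, "bin", binary)):
        chdir(hitch_dir)
        check_call([node_package.npm, "install", name])
```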
1861 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
author_email='[email protected]',
url='https://hitch.readthedocs.org/',
license='AGPL',
install_requires=['click', ],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> FEATURE : Absorbed click package into hitch so that it is dependency-less.
<DFF> @@ -45,7 +45,7 @@ setup(name="hitch",
author_email='[email protected]',
url='https://hitch.readthedocs.org/',
license='AGPL',
- install_requires=['click', ],
+ install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
| 1 | FEATURE : Absorbed click package into hitch so that it is dependency-less. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1862 | <NME> RedisManager.java
<BEF> package org.crazycake.shiro;
import org.crazycake.shiro.common.WorkAloneRedisManager;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.Protocol;
public class RedisManager extends WorkAloneRedisManager implements IRedisManager {
public class RedisManager extends JedisManager {
private volatile JedisPool jedisPool = null;
private void init() {
synchronized (this) {
if (jedisPool == null) {
String[] hostAndPort = host.split(":");
jedisPool = new JedisPool(new JedisPoolConfig(), hostAndPort[0], Integer.parseInt(hostAndPort[1]), timeout, password, database);
}
}
}
}
}
}
@Override
protected Jedis getJedis() {
return jedisPool.getResource();
}
}
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public JedisPool getJedisPool() {
return jedisPool;
}
public void setJedisPool(JedisPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> support host:port (eg. host=127.0.0.1:6379) config style , keep old config sytle (eg. host=127.0.0.1 port=6379)
<DFF> @@ -10,12 +10,19 @@ import java.util.Set;
public class RedisManager extends JedisManager {
private volatile JedisPool jedisPool = null;
+ private int port;
private void init() {
synchronized (this) {
if (jedisPool == null) {
- String[] hostAndPort = host.split(":");
- jedisPool = new JedisPool(new JedisPoolConfig(), hostAndPort[0], Integer.parseInt(hostAndPort[1]), timeout, password, database);
+ if(port == 0){
+ // support host:port config style
+ String[] hostAndPort = host.split(":");
+ jedisPool = new JedisPool(new JedisPoolConfig(), hostAndPort[0], Integer.parseInt(hostAndPort[1]), timeout, password, database);
+ }else{
+ jedisPool = new JedisPool(new JedisPoolConfig(), host, port, timeout, password, database);
+ }
+
}
}
}
@@ -28,4 +35,19 @@ public class RedisManager extends JedisManager {
return jedisPool.getResource();
}
+ public JedisPool getJedisPool() {
+ return jedisPool;
+ }
+
+ public void setJedisPool(JedisPool jedisPool) {
+ this.jedisPool = jedisPool;
+ }
+
+ public int getPort() {
+ return port;
+ }
+
+ public void setPort(int port) {
+ this.port = port;
+ }
}
| 24 | support host:port (eg. host=127.0.0.1:6379) config style , keep old config sytle (eg. host=127.0.0.1 port=6379) | 2 | .java | java | mit | alexxiyang/shiro-redis |
1863 | <NME> why_yaml.rst
<BEF> Why YAML?
=========
YAML is a markup language for presenting structured data. It is
a more readable version of JSON.
Hitch uses YAML as a declarative description language for integration
tests.
While python could be used to write integration tests instead,
YAML is more suitable as it lets your tests adhere to the following
two principles more easily:
* https://en.wikipedia.org/wiki/Rule_of_least_power - YAML is a *less* powerful language than python, so using it instead will keep your tests simpler.
* https://en.wikipedia.org/wiki/Separation_of_concerns - YAML provides a 'language barrier' that lets you maintain a strict separation of concerns between the code which describes your tests and the code which runs them.
For more powerful and customized behavior, you can write python code in the test engine.
The use of YAML and Jinja2 in Hitch was inspired somewhat by Ansible: https://en.wikipedia.org/wiki/Ansible_%28software%29
<MSG> DOCS : Updated Why YAML
<DFF> @@ -1,19 +1,57 @@
Why YAML?
=========
-YAML is a markup language for presenting structured data. It is
-a more readable version of JSON.
+The :doc:`/glossary/hitch_test_description_language` is built upon YAML -
+a markup language for presenting structured data.
+All tests are a subset of YAML.
-Hitch uses YAML as a declarative description language for integration
-tests.
+Hitch also provides you with the popular templating tool
+Jinja2 to let you *generate* a lot of very similar
+test cases without copying and pasting.
-While python could be used to write integration tests instead,
-YAML is more suitable as it lets your tests adhere to the following
-two principles more easily:
-* https://en.wikipedia.org/wiki/Rule_of_least_power - YAML is a *less* powerful language than python, so using it instead will keep your tests simpler.
-* https://en.wikipedia.org/wiki/Separation_of_concerns - YAML provides a 'language barrier' that lets you maintain a strict separation of concerns between the code which describes your tests and the code which runs them.
+Why write tests with YAML instead of Python?
+--------------------------------------------
-For more powerful and customized behavior, you can write python code in the test engine.
+The hitch test description language is an *intentionally dumb language*
+with a restricted feature set.
+
+It is *not* a full programming language like python. You cannot use
+conditionals, loops, etc. It is just a simple sequence of steps and
+some data. It's essentially just configuration, in other words.
+
+This may feel like handcuffs to a good programmer, but there's a good
+reason for it: less powerful languages are easier to understand.
+
+This means that it becomes easier to maintain, and easier to keep
+free from bugs and also even means that you can show it to customers
+or product managers/owners. With some training they may even be
+able to write in it (it's really no more complicated than handling
+a spreadsheet).
+
+Jinja2 adds additional complexity, but it helps you to prevent your
+test suite from becoming repetitive. See: :doc:`/glossary/DRY`.
+
+Again, Jinja2 itself is not a powerful language (more powerful
+than YAML but less powerful than python/java/ruby/etc.) and should be
+something that somebody who uses advanced features on a spreadsheet
+could pick up in a day.
+
+
+Why YAML instead of Gherkin?
+----------------------------
+
+Some readers may be familiar with Gherkin, which follows the same
+pattern. It is another intentionally dumb (i.e. non-Turing-complete)
+language.
+
+
+
+
+Related reading
+---------------
+
+* https://en.wikipedia.org/wiki/Rule_of_least_power
+* https://en.wikipedia.org/wiki/Separation_of_concerns
The use of YAML and Jinja2 in Hitch was inspired somewhat by Ansible: https://en.wikipedia.org/wiki/Ansible_%28software%29
| 48 | DOCS : Updated Why YAML | 10 | .rst | rst | agpl-3.0 | hitchtest/hitch |
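To make the Jinja2 point in the record above concrete, the snippet below renders one YAML test case per parameter set using the jinja2 library directly. Hitch wires templating into its test loader for you, so this is only an illustration of the idea, and the step names in the template are invented:

```python
from jinja2 import Template

TEMPLATE = Template("""\
- name: sign up as {{ email }}
  steps:
  - load page: signup
  - fill form:
      email: {{ email }}
  - click: register
""")

for email in ["[email protected]", "[email protected]"]:
    print(TEMPLATE.render(email=email))
```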
1864 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.2",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -22,7 +22,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.5.2",
+ version="0.5.3",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1865 | <NME> index.rst
<BEF> Getting started quickly with Hitch
==================================
This is a basic introduction to getting your first hitch test up and running.
Create your test directory
--------------------------
Create a directory inside the root of your project to put your tests in. For example::
~/yourproject$ mkdir tests
~/yourproject$ cd tests
~/yourproject/tests$
If you already have a tests directory you can call it something else.
Create the hitch environment
----------------------------
If you have hitch installed already, run the following command::
~/yourproject/tests$ hitch init
If you don't, run the init script by copying and pasting the following line::
~/yourproject/tests$ curl -sSL https://hitchtest.com/init.sh > init.sh ; chmod +x init.sh ; ./init.sh
.. note::
This can be used as a guide to install hitch instead: :doc:`/faq/what_does_the_init_script_do`
Once the installation has completed, it will ask you a few basic questions about your project,
mostly requiring a yes or no answer and will then generate a skeleton project template for you.
Apart from installing all of the required packages and creating a .hitch directory,
the following files are created in your tests directory:
* :doc:`/glossary/hitchreqs.txt`
* :doc:`/glossary/engine.py`
* tdd.settings (:doc:`/glossary/hitch_settings`)
* ci.settings
* all.settings
* :doc:`/glossary/stub.test`
* README.rst
You might want to take a look around these files. They all try to be self-explanatory.
Running your first test
-----------------------
You can now run the stub test. Try running it in test driven development mode::
$ hitch test stub.test --settings tdd.settings
The first time you run this command it may take a while (up to 25 minutes depending upon what you configured).
Time for coffee?
While you're at it, check out the hitch subreddit and subscribe to the twitter feed!
.. note::
:doc:`/faq/why_does_the_first_test_run_take_so_long`
Back?
-----
Once the test run is done setting up and running things, if there were no problems, you should see this::
Python 3.4.3 (default, Jul 28 2015, 18:20:59)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
SUCCESS
In [1]:
This is the interactive prompt that appears during the pause step. This is an :doc:`/glossary/ipython`
prompt that can be used to interact with your app, inspect logs and try out test
steps.
The components you selected during the set up should also be running. For example, if you
chose postgres, postgres will be running.
To exit, simply hit ctrl-D.
This will shut everything down and then quit.
You're now ready to start writing new tests.
Happy testing!
.. note::
Was there anything that went wrong or was confusing? Please tell us! Help with :doc:`/misc/clarifying_documentation`.
Further reading
---------------
* :doc:`/howto/web_applications`
* :doc:`/howto/command_line_applications`
Advanced topics
---------------
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
* :doc:`/howto/external_apis`
* :doc:`/howto/continuous_integration`
Plugin Documentation
--------------------
.. toctree::
:glob:
:maxdepth: 1
/plugins/*
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
<MSG> DOCS : Added more glossary terms, updated service API docs and updated front page.
<DFF> @@ -3,6 +3,36 @@ Getting started quickly with Hitch
This is a basic introduction to getting your first hitch test up and running.
+Install prerequisites
+---------------------
+
+You should have a reasonably up to date Ubuntu, Debian, Arch, Fedora or Mac.
+
+On Ubuntu/Debian::
+
+ $ sudo apt-get install python3 python-pip python-virtualenv
+ $ sudo pip install --upgrade hitch
+
+On Mac OS X::
+
+ $ brew install python python3
+ $ pip install --upgrade hitch virtualenv
+
+On Arch::
+
+ $ sudo pacman -Sy python python-virtualenv
+ $ sudo pip install --upgrade hitch
+
+On Fedora/RHEL/CentOS::
+
+ $ sudo yum install python3 python-virtualenv python-pip python3
+ $ sudo pip install --upgrade hitch
+
+.. note::
+
+ The 'hitch' package (the bootstrapper) is a small python package with no dependencies.
+
+
Create your test directory
--------------------------
@@ -18,23 +48,18 @@ If you already have a tests directory you can call it something else.
Create the hitch environment
----------------------------
-If you have hitch installed already, run the following command::
+To initialize a hitch environment, run hitch init in your tests directory::
~/yourproject/tests$ hitch init
-If you don't, run the init script by copying and pasting the following line::
-
- ~/yourproject/tests$ curl -sSL https://hitchtest.com/init.sh > init.sh ; chmod +x init.sh ; ./init.sh
-
-.. note::
+This will:
- This can be used as a guide to install hitch instead: :doc:`/faq/what_does_the_init_script_do`
+* Install any necessary system packages required to run hitch.
+* Create a .hitch directory, create a python 3 virtualenv in it and install all the necessary packages to run hitch tests there.
+* Ask you some basic questions about the project which you are testing.
+* Create a skeleton hitch project template for you to use based upon the answers.
-Once the installation has completed, it will ask you a few basic questions about your project,
-mostly requiring a yes or no answer and will then generate a skeleton project template for you.
-
-Apart from installing all of the required packages and creating a .hitch directory,
-the following files are created in your tests directory:
+The skeleton template will include all of the following:
* :doc:`/glossary/hitchreqs.txt`
* :doc:`/glossary/engine.py`
@@ -54,21 +79,27 @@ You can now run the stub test. Try running it in test driven development mode::
$ hitch test stub.test --settings tdd.settings
-The first time you run this command it may take a while (up to 25 minutes depending upon what you configured).
-
-Time for coffee?
-
-While you're at it, check out the hitch subreddit and subscribe to the twitter feed!
+The first time you run this command it *may take a while* (up to 25 minutes depending upon what you answered).
.. note::
:doc:`/faq/why_does_the_first_test_run_take_so_long`
+This might be a good time to take a break.
+
+While you're at it, subscribe to the `hitch subreddit <https://reddit.com/r/hitchtest>`_ and
+`twitter feed <https://twitter.com/testhitch>`_ for updates and news.
+
+
Back?
-----
-Once the test run is done setting up and running things, if there were no problems, you should see this::
+.. note::
+
+ If the stub test failed, please `raise an issue <https://github.com/hitchtest/hitch/issues/new>`_.
+
+Once the test run is done setting up, if there were no problems, you should see this::
Python 3.4.3 (default, Jul 28 2015, 18:20:59)
Type "copyright", "credits" or "license" for more information.
@@ -89,7 +120,8 @@ prompt that can be used to interact with your app, inspect logs and try out test
steps.
The components you selected during the set up should also be running. For example, if you
-chose postgres, postgres will be running.
+chose postgres, the latest version of postgres will have been installed in the ~/.hitchpkg
+directory and it will be running and accessible.
To exit, simply hit ctrl-D.
| 51 | DOCS : Added more glossary terms, updated service API docs and updated front page. | 19 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1866 | <NME> index.rst
<BEF> ADDFILE
<MSG> DOCS : Updated docs
<DFF> @@ -0,0 +1,112 @@
+Hitch Quickstart
+================
+
+This is a basic introduction to getting your first test up and running.
+
+Prerequisites
+-------------
+
+To begin, you need at minimum python3 and virtualenv installed on your system.
+
+On Ubuntu::
+
+ $ sudo apt-get install python3 python-virtualenv
+
+On a Mac::
+
+ $ brew install python3
+
+ $ pip install -U setuptools pip virtualenv
+
+Install
+-------
+
+The first thing that you need to install after this is the hitch bootstrap
+script::
+
+ $ pip install hitch
+
+or::
+
+ $ sudo pip install hitch
+
+
+See :doc:`faq/why_install_hitch_on_the_system_path`.
+
+
+Create your test directory
+--------------------------
+
+First create a directory inside your project to put your tests in. For example::
+
+ $ mkdir tests
+ $ cd tests
+
+Inside this directory, run the following command to initialize hitch::
+
+ $ hitch init
+
+This will create a file called hitchreqs.txt, which contains a list of
+pypi requirements to use in the hitch virtualenv. These are the packages
+required to run your *testing* code - there is no need to add packages
+here which your application needs to run. It will run in its own segregated
+virtualenv.
+
+The .hitch directory contains all the necessary generated files
+required to run your tests, including the testing virtualenv.
+
+This directory should be gitignored. If you delete it, you can regenerate
+it just by running hitch init again.
+
+Create your first test and engine
+---------------------------------
+
+To run your first test, you need an engine. An engine file simply contains
+a class with a lot of methods that your tests can invoke.
+
+Create an engine called 'engine.py' like so::
+
+ import hitchtest
+ from os import path
+
+ PROJECT_DIRECTORY = path.abspath(path.join(path.dirname(__file__), '..'))
+
+ class YourProjectTestExecutionEngine(hitchtest.ExecutionEngine):
+ def set_up(self):
+ pass
+
+ def pause(self):
+ hitchtest.ipython_embed()
+
+ def tear_down(self):
+ pass
+
+And a test called 'stub.test', written in YAML, like so::
+
+ - name: Stub
+ engine: engine.py:YourProjectTestExecutionEngine
+ scenario:
+ - Pause
+
+You can run this test by running the command inside your tests directory::
+
+ $ hitch test stub.test
+
+And voila, you should see an IPython prompt.
+
+It runs "set_up", followed by "pause" (as specified in the scenario), which
+enters IPython and finally runs "tear_down".
+
+You can exit the IPython prompt by typing ctrl-D.
+
+Now that you have the skeleton of a test, you can continue building the
+other necessary parts of your testing infrastructure.
+
+
+Contents:
+
+.. toctree::
+ :glob:
+ :maxdepth: 2
+
+ *
| 112 | DOCS : Updated docs | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1867 | <NME> why_was_hitch_behavior_changed.rst
<BEF> ADDFILE
<MSG> DOCS : Updates to docs
<DFF> @@ -0,0 +1,41 @@
+Why was hitch behavior changed?
+===============================
+
+This FAQ gives explanations for behaviors that were changed in Hitch.
+
+
+Why will my tests no longer run with the default filename "settings.yml"?
+-------------------------------------------------------------------------
+
+settings.yml used to be the default hitch settings filename. If no other settings file was specified,
+your test settings would be pulled from this filename. If you specified another filename on
+the command line, settings would be pulled from that *instead*.
+
+The behavior has now changed. Settings are now *always* pulled from the filename 'all.settings'
+if it exists, but settings in any filename specified via --settings will always take precedence.
+
+Settings are now taken from (in order of precedence):
+
+* The JSON specified on the command line via --extra.
+* The YAML filename specified via --settings.
+* all.settings
+
+See :doc:`/api/settings` for more information.
+
+
+Why did you remove the --quiet switch?
+--------------------------------------
+
+--quiet is more appropriate as a setting, so it has been deprecated as a
+command line switch and will be removed entirely in version 1.0.
+
+You can still run your tests quietly by setting the property to true
+on the command line via --extra::
+
+ $ hitch test . --extra '{"quiet":true}'
+
+Or by adding the setting to a specified settings file, e.g. ::
+
+ quiet: True
+
+See :doc:`/api/settings` for more information.
| 41 | DOCS : Updates to docs | 0 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1868 | <NME> RedisSessionDAO.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.Session;
import org.apache.shiro.session.UnknownSessionException;
import org.apache.shiro.session.mgt.eis.AbstractSessionDAO;
import org.crazycake.shiro.common.SessionInMemory;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.RedisSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.Serializable;
import java.util.*;
public class RedisSessionDAO extends AbstractSessionDAO {

    private static Logger logger = LoggerFactory.getLogger(RedisSessionDAO.class);
    private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
    private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
/**
* doReadSession be called about 10 times when login.
* Save Session in ThreadLocal to resolve this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal.
* The default value is 1000 milliseconds (1s).
* Most of time, you don't need to change it.
*
* You can turn it off by setting sessionInMemoryEnabled to false
*/
private static final long DEFAULT_SESSION_IN_MEMORY_TIMEOUT = 1000L;
private long sessionInMemoryTimeout = DEFAULT_SESSION_IN_MEMORY_TIMEOUT;
private static final boolean DEFAULT_SESSION_IN_MEMORY_ENABLED = true;
private boolean sessionInMemoryEnabled = DEFAULT_SESSION_IN_MEMORY_ENABLED;
private static ThreadLocal sessionsInThread = new ThreadLocal();
/**
* expire time in seconds.
* NOTE: Please make sure expire is longer than session.getTimeout(),
     * otherwise you might run into the issue that the session in Redis is erased while the Session is still in use
*
* DEFAULT_EXPIRE: use the timeout of session instead of setting it by yourself
* NO_EXPIRE: never expire
*/
private static final int DEFAULT_EXPIRE = -2;
private static final int NO_EXPIRE = -1;
private int expire = DEFAULT_EXPIRE;
private static final int MILLISECONDS_IN_A_SECOND = 1000;
/**
* redisManager used for communicate with Redis
*/
private IRedisManager redisManager;
/**
* Serializer of key
*/
private RedisSerializer keySerializer = new StringSerializer();
/**
* Serializer of value
*/
private RedisSerializer valueSerializer = new ObjectSerializer();
/**
* save/update session
* @param session
* @throws UnknownSessionException
*/
@Override
public void update(Session session) throws UnknownSessionException {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
this.saveSession(session);
if (this.sessionInMemoryEnabled) {
this.setSessionToThreadLocal(session.getId(), session);
}
}
private void saveSession(Session session) throws UnknownSessionException {
if (session == null || session.getId() == null) {
logger.error("session or session id is null");
throw new UnknownSessionException("session or session id is null");
}
byte[] key;
byte[] value;
try {
key = keySerializer.serialize(getRedisSessionKey(session.getId()));
value = valueSerializer.serialize(session);
} catch (SerializationException e) {
logger.error("serialize session error. session id=" + session.getId());
throw new UnknownSessionException(e);
}
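        // DEFAULT_EXPIRE: reuse the session's own timeout (milliseconds) as the Redis TTL (in seconds).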
if (expire == DEFAULT_EXPIRE) {
redisManager.set(key, value, (int) (session.getTimeout() / MILLISECONDS_IN_A_SECOND));
return;
}
if (expire != NO_EXPIRE && expire * MILLISECONDS_IN_A_SECOND < session.getTimeout()) {
logger.warn("Redis session expire time: "
+ (expire * MILLISECONDS_IN_A_SECOND)
+ " is less than Session timeout: "
+ session.getTimeout()
+ " . It may cause some problems.");
}
redisManager.set(key, value, expire);
}
/**
     * delete session
     * @param session
     */
    @Override
    public void delete(Session session) {
        if (this.sessionInMemoryEnabled) {
            this.removeExpiredSessionInMemory();
        }
        if (session == null || session.getId() == null) {
            logger.error("session or session id is null");
            return;
        }
        if (this.sessionInMemoryEnabled) {
            this.delSessionFromThreadLocal(session.getId());
        }
try {
redisManager.del(keySerializer.serialize(getRedisSessionKey(session.getId())));
} catch (SerializationException e) {
logger.error("delete session error. session id=" + session.getId());
}
}
/**
* get all active sessions
* @return
*/
@Override
public Collection<Session> getActiveSessions() {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
Set<Session> sessions = new HashSet<Session>();
try {
Set<byte[]> keys = redisManager.keys(keySerializer.serialize(this.keyPrefix + "*"));
if (keys != null && keys.size() > 0) {
for (byte[] key:keys) {
Session s = (Session) valueSerializer.deserialize(redisManager.get(key));
sessions.add(s);
}
}
} catch (SerializationException e) {
logger.error("get active sessions error.");
}
return sessions;
}
@Override
protected Serializable doCreate(Session session) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (session == null) {
logger.error("session is null");
throw new UnknownSessionException("session is null");
}
Serializable sessionId = this.generateSessionId(session);
this.assignSessionId(session, sessionId);
this.saveSession(session);
return sessionId;
}
/**
     * Read session by id - checks the in-memory (ThreadLocal) copy first, then falls back to Redis.
* @param sessionId
* @return
*/
@Override
protected Session doReadSession(Serializable sessionId) {
if (this.sessionInMemoryEnabled) {
this.removeExpiredSessionInMemory();
}
if (sessionId == null) {
logger.warn("session id is null");
return null;
}
if (this.sessionInMemoryEnabled) {
Session session = getSessionFromThreadLocal(sessionId);
if (session != null) {
return session;
}
}
Session session = null;
try {
String sessionRedisKey = getRedisSessionKey(sessionId);
logger.debug("read session: " + sessionRedisKey + " from Redis");
session = (Session) valueSerializer.deserialize(redisManager.get(keySerializer.serialize(sessionRedisKey)));
if (this.sessionInMemoryEnabled) {
setSessionToThreadLocal(sessionId, session);
}
} catch (SerializationException e) {
logger.error("read session error. sessionId: " + sessionId);
}
return session;
}
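    // Per-thread cache helpers: doReadSession is called many times per request, so recently read
    // sessions are kept in a ThreadLocal map (see sessionInMemoryEnabled / sessionInMemoryTimeout above).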
private void setSessionToThreadLocal(Serializable sessionId, Session session) {
this.initSessionsInThread();
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
sessionMap.put(sessionId, this.createSessionInMemory(session));
}
private void delSessionFromThreadLocal(Serializable sessionId) {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
sessionMap.remove(sessionId);
}
private SessionInMemory createSessionInMemory(Session session) {
SessionInMemory sessionInMemory = new SessionInMemory();
sessionInMemory.setCreateTime(new Date());
sessionInMemory.setSession(session);
return sessionInMemory;
}
private void initSessionsInThread() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
sessionMap = new HashMap<Serializable, SessionInMemory>();
sessionsInThread.set(sessionMap);
}
}
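    // Evict cached sessions older than sessionInMemoryTimeout and drop the map once it is empty.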
private void removeExpiredSessionInMemory() {
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
if (sessionMap == null) {
return;
}
Iterator<Serializable> it = sessionMap.keySet().iterator();
while (it.hasNext()) {
Serializable sessionId = it.next();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
it.remove();
continue;
}
long liveTime = getSessionInMemoryLiveTime(sessionInMemory);
if (liveTime > sessionInMemoryTimeout) {
it.remove();
}
}
if (sessionMap.size() == 0) {
sessionsInThread.remove();
}
}
private Session getSessionFromThreadLocal(Serializable sessionId) {
if (sessionsInThread.get() == null) {
return null;
}
Map<Serializable, SessionInMemory> sessionMap = (Map<Serializable, SessionInMemory>) sessionsInThread.get();
SessionInMemory sessionInMemory = sessionMap.get(sessionId);
if (sessionInMemory == null) {
return null;
}
logger.debug("read session from memory");
return sessionInMemory.getSession();
}
private long getSessionInMemoryLiveTime(SessionInMemory sessionInMemory) {
Date now = new Date();
return now.getTime() - sessionInMemory.getCreateTime().getTime();
}
private String getRedisSessionKey(Serializable sessionId) {
return this.keyPrefix + sessionId;
}
public IRedisManager getRedisManager() {
return redisManager;
}
public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
public String getKeyPrefix() {
return keyPrefix;
}
public void setKeyPrefix(String keyPrefix) {
this.keyPrefix = keyPrefix;
}
public RedisSerializer getKeySerializer() {
return keySerializer;
}
public void setKeySerializer(RedisSerializer keySerializer) {
this.keySerializer = keySerializer;
}
public RedisSerializer getValueSerializer() {
return valueSerializer;
}
public void setValueSerializer(RedisSerializer valueSerializer) {
this.valueSerializer = valueSerializer;
}
public long getSessionInMemoryTimeout() {
return sessionInMemoryTimeout;
}
public void setSessionInMemoryTimeout(long sessionInMemoryTimeout) {
this.sessionInMemoryTimeout = sessionInMemoryTimeout;
}
public int getExpire() {
return expire;
}
public void setExpire(int expire) {
this.expire = expire;
}
public boolean getSessionInMemoryEnabled() {
return sessionInMemoryEnabled;
}
public void setSessionInMemoryEnabled(boolean sessionInMemoryEnabled) {
this.sessionInMemoryEnabled = sessionInMemoryEnabled;
}
public static ThreadLocal getSessionsInThread() {
return sessionsInThread;
}
}
<MSG> add redis cluster support
<DFF> @@ -16,7 +16,7 @@ public class RedisSessionDAO extends AbstractSessionDAO {
private static Logger logger = LoggerFactory.getLogger(RedisSessionDAO.class);
private static final String DEFAULT_SESSION_KEY_PREFIX = "shiro:session:";
- private RedisManager redisManager;
+ private IRedisManager redisManager;
private String keyPrefix = DEFAULT_SESSION_KEY_PREFIX;
private RedisSerializer keySerializer = new StringSerializer();
private RedisSerializer valueSerializer = new ObjectSerializer();
@@ -119,11 +119,11 @@ public class RedisSessionDAO extends AbstractSessionDAO {
return this.keyPrefix + sessionId;
}
- public RedisManager getRedisManager() {
+ public IRedisManager getRedisManager() {
return redisManager;
}
- public void setRedisManager(RedisManager redisManager) {
+ public void setRedisManager(IRedisManager redisManager) {
this.redisManager = redisManager;
}
| 3 | add redis cluster support | 3 | .java | java | mit | alexxiyang/shiro-redis |
1869 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
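    # Ignore Ctrl-C while hitchsystem installs packages, then restore the normal interrupt handler.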
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
    else:
        if path.exists(python):
            python3 = python
        else:
            stderr.write("{0} not found.\n".format(python))
            exit(1)
    python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
    replacements = ('Python ', ''), ('\n', '')
    str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
    tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
    if tuple_version < (3, 3):
        stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
        exit(1)
    if hitchdir.hitch_exists():
        hitchdir.check_hitch_directory_integrity()
        update_requirements()
        exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
    try:
        check_call([
            virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
        ])
        check_call([pip, "install", "--upgrade", "pip"])
        check_call([pip, "install", "--upgrade", "setuptools"])
        check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> BUG : More fixes for python 3.
<DFF> @@ -99,7 +99,7 @@ def uninstall(package):
pip = get_pip()
call([pip, "uninstall", package] )
- pip_freeze = check_output([pip, "freeze"])
+ pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@@ -111,7 +111,7 @@ def install(package):
pip = get_pip()
call([pip, "install", package, "-U", ])
- pip_freeze = check_output([pip, "freeze"])
+ pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@@ -129,7 +129,7 @@ def upgrade():
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
- pip_freeze = check_output([pip, "freeze"])
+ pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
| 3 | BUG : More fixes for python 3. | 3 | .py | py | agpl-3.0 | hitchtest/hitch |
1870 | <NME> README.md
<BEF> shiro-redis
[![Build Status]][Travis CI]
===========
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
<MSG> Update README.md
add travis build image
<DFF> @@ -1,6 +1,6 @@
shiro-redis
-[![Build Status]][Travis CI]
+[](https://travis-ci.org/alexxiyang/shiro-redis)
===========
| 1 | Update README.md | 1 | .md | md | mit | alexxiyang/shiro-redis |
1871 | <NME> UserMock.java
<BEF> ADDFILE
<MSG> add RedisManager testcase
<DFF> @@ -0,0 +1,126 @@
+/*
+ * Copyright (C), 1996-2014
+ * FileName: User.java
+ * Author: 王华君
+ * Date: Nov 19, 2014 2:47:57 PM
+ * Description: // module purpose and functional description
+ * History: // modification records
+ * <author> <time> <version> <desc>
+ * modifier name      modification time      version      description
+ */
+package org.crazycake.shiro;
+
+import java.io.Serializable;
+
+/**
+ * 〈一句话功能简述〉<br>
+ * 〈功能详细描述〉
+ *
+ * @author 王华君
+ * @see [相关类/方法](可选)
+ * @since [产品/模块版本] (可选)
+ */
+public class UserMock implements Serializable{
+ /**
+ */
+ private static final long serialVersionUID = 1L;
+ private String id;
+ private String username;
+ private String password;
+ private String salt;
+ public static final String OBJECT_KEY = "USER";
+
+ private Boolean locked = Boolean.FALSE;
+
+ /**
+ * @return the id
+ */
+ public String getId() {
+ return id;
+ }
+
+ /**
+ * @param id
+ * the id to set
+ */
+ public void setId(String id) {
+ this.id = id;
+ }
+
+ /**
+ * @return the username
+ */
+ public String getUsername() {
+ return username;
+ }
+
+ /**
+ * @param username
+ * the username to set
+ */
+ public void setUsername(String username) {
+ this.username = username;
+ }
+
+ /**
+ * @return the password
+ */
+ public String getPassword() {
+ return password;
+ }
+
+ /**
+ * @param password
+ * the password to set
+ */
+ public void setPassword(String password) {
+ this.password = password;
+ }
+
+ /**
+ * @return the salt
+ */
+ public String getSalt() {
+ return salt;
+ }
+
+ /**
+ * @param salt
+ * the salt to set
+ */
+ public void setSalt(String salt) {
+ this.salt = salt;
+ }
+
+ /**
+ * @return the locked
+ */
+ public Boolean getLocked() {
+ return locked;
+ }
+
+ /**
+ * @param locked
+ * the locked to set
+ */
+ public void setLocked(Boolean locked) {
+ this.locked = locked;
+ }
+
+
+ public String getCredentialsSalt() {
+ return username + salt;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "User [id=" + id + ", username=" + username + ", password=" + password + ", salt=" + salt + ", locked="
+ + locked + "]";
+ }
+
+}
| 126 | add RedisManager testcase | 0 | .java | java | mit | alexxiyang/shiro-redis |
1872 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
unixpackage = path.abspath(path.join(".hitch", "virtualenv", "bin", "unixpackage"))
try:
check_call([virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3])
check_call([pip, "install", "-U", "pip"])
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "unixpackage"])
check_call([pip, "install", "hitchtest"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
if path.exists("system.packages"):
check_call([unixpackage, "-r", "system.packages"])
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Install unixpackage in the .hitch virtualenv and install all packages in file 'system.packages' if it exists.
<DFF> @@ -86,25 +86,33 @@ def init(python, virtualenv):
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
- unixpackage = path.abspath(path.join(".hitch", "virtualenv", "bin", "unixpackage"))
try:
- check_call([virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3])
+ check_call([
+ virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
+ ])
check_call([pip, "install", "-U", "pip"])
+ check_call([pip, "install", "unixpackage"])
+
+ unixpackage = path.abspath(path.join(".hitch", "virtualenv", "bin", "unixpackage"))
+ if path.exists("system.packages"):
+ check_call([unixpackage, "install", "--polite", "-r", "system.packages"])
+
+ check_call([
+ unixpackage, "install", "--polite",
+ "python-dev", "python3-dev", "libtool", "automake", "cmake"
+ ])
+
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
- check_call([pip, "install", "unixpackage"])
check_call([pip, "install", "hitchtest"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
-
- if path.exists("system.packages"):
- check_call([unixpackage, "-r", "system.packages"])
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
| 14 | FEATURE : Install unixpackage in the .hitch virtualenv and install all packages in file 'system.packages' if it exists. | 6 | .py | py | agpl-3.0 | hitchtest/hitch |
1873 | <NME> shiro.ini
<BEF> ADDFILE
<MSG> Added JUnit test
<DFF> @@ -0,0 +1,46 @@
+# =============================================================================
+# Tutorial INI configuration
+#
+# Usernames/passwords are based on the classic Mel Brooks' film "Spaceballs" :)
+# =============================================================================
+
+# -----------------------------------------------------------------------------
+# Users and their (optional) assigned roles
+# username = password, role1, role2, ..., roleN
+# -----------------------------------------------------------------------------
+[users]
+root = secret, admin
+guest = guest, guest
+presidentskroob = 12345, president
+darkhelmet = ludicrousspeed, darklord, schwartz
+lonestarr = vespa, goodguy, schwartz
+
+# -----------------------------------------------------------------------------
+# Roles with assigned permissions
+# roleName = perm1, perm2, ..., permN
+# -----------------------------------------------------------------------------
+[roles]
+admin = *
+schwartz = lightsaber:*
+goodguy = winnebago:drive:eagle5
+
+[main]
+
+redisManager = org.crazycake.shiro.RedisManager
+redisManager.host = localhost
+redisManager.port = 6379
+
+shiroCacheManager = org.crazycake.shiro.RedisCacheManager
+shiroCacheManager.redisManager = $redisManager
+shiroCacheManager.keyPrefix = users:security:authz:
+
+sessionDAO = org.crazycake.shiro.RedisSessionDAO
+sessionDAO.redisManager = $redisManager
+sessionDAO.keyPrefix = users:security:sessions:
+
+sessionManager = org.apache.shiro.session.mgt.DefaultSessionManager
+sessionManager.sessionDAO = $sessionDAO
+
+# Use the configured native session manager:
+securityManager.sessionManager = $sessionManager
+securityManager.cacheManager = $shiroCacheManager
| 46 | Added JUnit test | 0 | .ini | ini | mit | alexxiyang/shiro-redis |
1874 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
### Compile Requirements for C++
0. Google Protocol Buffer
<MSG> add omp optimization
<DFF> @@ -1,5 +1,6 @@
# Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
-
+
+-----
### Compile Requirements for C++
0. Google Protocol Buffer
| 2 | add omp optimization | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1875 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
### Compile Requirements for C++
0. Google Protocol Buffer
<MSG> add omp optimization
<DFF> @@ -1,5 +1,6 @@
# Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework
-
+
+-----
### Compile Requirements for C++
0. Google Protocol Buffer
| 2 | add omp optimization | 1 | .md | md | bsd-2-clause | neopenx/Dragon |
1876 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provides support for ehcache and concurrentHashMap. Here is an implementation of a redis cache that can be used by shiro. Hope it will help you!

How to use it?
===========
You can choose these 2 ways to include shiro-redis into your project
+ directly download jar file
Download shiro-redis.jar in bin folder and add it into your classpath.
+ add maven dependency
------------------------------------
<dependency>
<groupId>org.crazycake</groupId>
<MSG> Update README.md
update readme
<DFF> @@ -7,9 +7,10 @@ How to use it?
===========
 You can choose these 2 ways to include shiro-redis into your project
-+ directly download jar file
+* directly download jar file
Download shiro-redis.jar in bin folder and add it into your classpath.
-+ add maven dependency
+* add maven dependency
+
------------------------------------
<dependency>
<groupId>org.crazycake</groupId>
| 3 | Update README.md | 2 | .md | md | mit | alexxiyang/shiro-redis |
1877 | <NME> RedisSentinelManager.java
<BEF> package org.crazycake.shiro;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
public class RedisSentinelManager extends WorkAloneRedisManager implements IRedisManager {
private static final String DEFAULT_HOST = "127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381";
private String host = DEFAULT_HOST;
private static final String DEFAULT_MASTER_NAME = "mymaster";
private String masterName = DEFAULT_MASTER_NAME;
// timeout for jedis try to connect to redis server, not expire time! In milliseconds
private int timeout = Protocol.DEFAULT_TIMEOUT;
// timeout for jedis try to read data from redis server
private int soTimeout = Protocol.DEFAULT_TIMEOUT;
private String password;
private int database = Protocol.DEFAULT_DATABASE;
private JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
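    // Lazily build the sentinel pool: split the comma-separated sentinel host list and hand it to JedisSentinelPool.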
private void init() {
synchronized (this) {
synchronized (RedisSentinelManager.class) {
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
jedisPool = new JedisSentinelPool(masterName, sentinels, jedisPoolConfig, timeout, soTimeout, password, database);
}
}
}
@Override
protected void checkAndInit() {
if (jedisPool == null) {
init();
}
}
public String getHost() {
return host;
}
return host;
}
public void setHost(String host) {
this.host = host;
}
public int getTimeout() {
return timeout;
}
public void setTimeout(int timeout) {
this.timeout = timeout;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public int getDatabase() {
return database;
}
public void setDatabase(int database) {
this.database = database;
}
public String getMasterName() {
return masterName;
}
public void setMasterName(String masterName) {
this.masterName = masterName;
}
public JedisPoolConfig getJedisPoolConfig() {
return jedisPoolConfig;
}
public void setJedisPoolConfig(JedisPoolConfig jedisPoolConfig) {
this.jedisPoolConfig = jedisPoolConfig;
}
public int getSoTimeout() {
return soTimeout;
}
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
}
return jedisPool;
}
public void setJedisPool(JedisSentinelPool jedisPool) {
this.jedisPool = jedisPool;
}
}
<MSG> refactor
<DFF> @@ -1,5 +1,6 @@
package org.crazycake.shiro;
+import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;
import redis.clients.jedis.Protocol;
@@ -26,7 +27,15 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
private int database = Protocol.DEFAULT_DATABASE;
- private JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
+ private JedisSentinelPool jedisPool;
+
+ @Override
+ protected Jedis getJedis() {
+ if(jedisPool == null){
+ init();
+ }
+ return jedisPool.getResource();
+ }
private void init() {
synchronized (this) {
@@ -34,18 +43,11 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
String[] sentinelHosts = host.split(",\\s*");
Set<String> sentinels = new HashSet<String>();
Collections.addAll(sentinels, sentinelHosts);
- jedisPool = new JedisSentinelPool(masterName, sentinels, jedisPoolConfig, timeout, soTimeout, password, database);
+ jedisPool = new JedisSentinelPool(masterName, sentinels, new JedisPoolConfig(), timeout, soTimeout, password, database);
}
}
}
- @Override
- protected void checkAndInit() {
- if (jedisPool == null) {
- init();
- }
- }
-
public String getHost() {
return host;
}
@@ -88,14 +90,6 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
this.masterName = masterName;
}
- public JedisPoolConfig getJedisPoolConfig() {
- return jedisPoolConfig;
- }
-
- public void setJedisPoolConfig(JedisPoolConfig jedisPoolConfig) {
- this.jedisPoolConfig = jedisPoolConfig;
- }
-
public int getSoTimeout() {
return soTimeout;
}
@@ -103,4 +97,5 @@ public class RedisSentinelManager extends BaseRedisManager implements IRedisMana
public void setSoTimeout(int soTimeout) {
this.soTimeout = soTimeout;
}
+
}
| 12 | refactor | 17 | .java | java | mit | alexxiyang/shiro-redis |
1878 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def freeze():
"""List installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
call([pip, "freeze", ])
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> BUG : Fixed kill import issue.
<DFF> @@ -2,7 +2,7 @@
from subprocess import call, check_output, PIPE, CalledProcessError, Popen
from click import command, group, argument, option
from sys import stderr, exit, modules, argv
-from os import path, makedirs, listdir, getpgrp, killpg
+from os import path, makedirs, listdir, kill
from functools import partial
import hitchdir
import shutil
| 1 | BUG : Fixed kill import issue. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1879 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.4",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> BUG : Fix for reduce missing when the code is run with python 3.
<DFF> @@ -22,7 +22,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.5.4",
+ version="0.5.5",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | BUG : Fix for reduce missing when the code is run with python 3. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1880 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
#
# redisSessionDAO.expire = <expire>
# Custom your redis key prefix for session management, if you doesn't define this parameter, shiro-redis will use 'shiro_redis_session:' as default prefix
# Note: Remember to add colon at the end of prefix.
#
# redisSessionDAO.keyPrefix = <session keyprefix>
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
# cacheManager.keySerializer = $cacheManagerKeySerializer
# Custom your redis key prefix for cache management, if you doesn't define this parameter, shiro-redis will use 'shiro_redis_session:' as default prefix
# Note: Remember to add colon at the end of prefix.
cacheManager.keyPrefix = shiro:cache:
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
<!-- optional properties:
<property name="timeout" value="10000"/>
<property name="password" value="123456"/>
<property name="database" value="1"/>
<!-- Redis-based session configuration -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
<property name="expire" value="-2"/>
<property name="keyPrefix" value="shiro:session:" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
<!-- Redis-based cache configuration -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
<property name="expire" value="1800"/>
<property name="keyPrefix" value="shiro:cache:" />
<property name="principalIdFieldName" value="id" />
</bean>
<!-- securityManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
<!-- optional properties:、
<property name="timeout" value="2000"/>
<property name="soTimeout" value="2000"/>
<property name="password" value=""/>
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
<!-- optional properties:
<property name="timeout" value="10000"/>
<property name="soTimeout" value="10000"/>
<property name="maxAttempts" value="2"/>
<MSG> Update README.md
Update README.md
<DFF> @@ -139,10 +139,11 @@ redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
#
# redisSessionDAO.expire = <expire>
-# Custom your redis key prefix for session management, if you doesn't define this parameter, shiro-redis will use 'shiro_redis_session:' as default prefix
+# Custom your redis key prefix for session management
+# Default value is "shiro:session:"
# Note: Remember to add colon at the end of prefix.
#
-# redisSessionDAO.keyPrefix = <session keyprefix>
+# redisSessionDAO.keyPrefix = <session key prefix>
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
@@ -188,9 +189,11 @@ cacheManager = org.crazycake.shiro.RedisCacheManager
# cacheManager.keySerializer = $cacheManagerKeySerializer
-# Custom your redis key prefix for cache management, if you doesn't define this parameter, shiro-redis will use 'shiro_redis_session:' as default prefix
+# Custom your redis key prefix for cache management
+# Default value is "shiro:cache:"
# Note: Remember to add colon at the end of prefix.
-cacheManager.keyPrefix = shiro:cache:
+#
+# cacheManager.keyPrefix = <cache key prefix>
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
@@ -285,7 +288,7 @@ spring.xml:
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
- <!-- optional properties:
+ <!-- optional properties
<property name="timeout" value="10000"/>
<property name="password" value="123456"/>
<property name="database" value="1"/>
@@ -297,8 +300,10 @@ spring.xml:
<!-- Redis-based session configuration -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
+ <!-- optional properties
<property name="expire" value="-2"/>
<property name="keyPrefix" value="shiro:session:" />
+ -->
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
@@ -307,9 +312,11 @@ spring.xml:
<!-- Redis-based cache configuration -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
+ <!-- optional properties
<property name="expire" value="1800"/>
<property name="keyPrefix" value="shiro:cache:" />
<property name="principalIdFieldName" value="id" />
+ -->
</bean>
<!-- securityManager -->
@@ -334,7 +341,7 @@ If you use redis sentinel, here is an example of configuration :
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
- <!-- optional properties:、
+ <!-- optional properties
<property name="timeout" value="2000"/>
<property name="soTimeout" value="2000"/>
<property name="password" value=""/>
@@ -351,7 +358,7 @@ If you use redis cluster, here is an example of configuration :
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
- <!-- optional properties:
+ <!-- optional properties
<property name="timeout" value="10000"/>
<property name="soTimeout" value="10000"/>
<property name="maxAttempts" value="2"/>
| 14 | Update README.md | 7 | .md | md | mit | alexxiyang/shiro-redis |
1881 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
shiroCacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
<MSG> fixing typo in example shiro.ini
<DFF> @@ -51,7 +51,7 @@ securityManager.sessionManager = $sessionManager
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
-shiroCacheManager.keyPrefix = users:security:authz:
+cacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
| 1 | fixing typo in example shiro.ini | 1 | .md | md | mit | alexxiyang/shiro-redis |
1882 | <NME> SessionInMemory.java
<BEF> ADDFILE
<MSG> Add expire time to session in ThreadLocal
<DFF> @@ -0,0 +1,26 @@
+package org.crazycake.shiro;
+
+import org.apache.shiro.session.Session;
+
+import java.util.Date;
+
+public class SessionInMemory {
+ private Session session;
+ private Date createTime;
+
+ public Session getSession() {
+ return session;
+ }
+
+ public void setSession(Session session) {
+ this.session = session;
+ }
+
+ public Date getCreateTime() {
+ return createTime;
+ }
+
+ public void setCreateTime(Date createTime) {
+ this.createTime = createTime;
+ }
+}
| 26 | Add expire time to session in ThreadLocal | 0 | .java | java | mit | alexxiyang/shiro-redis |
1883 | <NME> why_should_my_tests_set_up_their_own_python_environments.rst
<BEF> Why should my tests set up their own python environments?
=========================================================
Most python testing frameworks run using the same python environment
that the application code runs in.
Hitch is different.
Hitch is designed to test at as high a level as possible and to isolate
all possible sources of bugs and test :doc:`/glossary/indeterminacy`.
The specific version and environment of python you test with is such
a potential source of bugs.
Thus, to ensure the stability and repeatability of the test, this environment
should ideally be brought under the test's control.
Hitch includes a plugin (based upon pyenv) which downloads and compiles
different versions of python which you can use to test your application in.
You can even download and create multiple versions of python to test your
application in - for instance, if you want to test your app using python 2
and python 3, or even *all* different versions of python.
<MSG> DOCS : Updates to documentation.
<DFF> @@ -4,7 +4,33 @@ Why should my tests set up their own python environments?
Most python testing frameworks run using the same python environment
that the application code runs in.
-Hitch is different.
+Hitch is different::
+
+ $ sudo pip install hitch
+
+The *bootstrap* script should ideally installed using the system python.
+This is a very small script with just one dependency.
+
+This script sets up virtual environment to run the test code in::
+
+ $ hitch init
+
+This runs using your system python3. This is the environment all of your
+testing code will run in.
+
+Your tests will set up *another* environment to run your code in:
+
+.. code-block:: python
+
+ import hitchpython
+
+ python_package = hitchpython.PythonPackage(version="2.7.9")
+ python_package.build() # Takes about 5 minutes during the first run. Instantaneous thereafter.
+ python_package.verify()
+
+This not only ensures that the packages required to run your tests do
+not interfere with the packages required to run your code, it lets you be
+be specific about the version of python your test runs with.
Hitch is designed to test at as high a level as possible and to isolate
all possible sources of bugs and test :doc:`/glossary/indeterminacy`.
@@ -15,9 +41,7 @@ a potential source of bugs.
Thus, to ensure the stability and repeatability of the test, this environment
should ideally be brought under the test's control.
-Hitch includes a plugin (based upon pyenv) which downloads and compiles
-different versions of python which you can use to test your application in.
-
You can even download and create multiple versions of python to test your
-application in - for instance, if you want to test your app using python 2
-and python 3, or even *all* different versions of python.
+application in - for instance, if you want to test your app using python 2.7.9
+and python 3.4.3. You can even easily test your code in every single version
+of python.
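As a concrete illustration of the paragraph above, a test could build both interpreters with the same `hitchpython` API already shown in this entry — a minimal sketch, assuming `PythonPackage`, `build()` and `verify()` behave as documented above:

```python
# Illustrative sketch only: build and verify two python versions for one test run,
# reusing the PythonPackage API shown earlier in this entry.
import hitchpython

python2 = hitchpython.PythonPackage(version="2.7.9")
python3 = hitchpython.PythonPackage(version="3.4.3")

for package in (python2, python3):
    package.build()    # first build compiles the interpreter; later runs reuse the cached copy
    package.verify()
```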
| 30 | DOCS : Updates to documentation. | 6 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1884 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is 127.0.0.1:6379. If you run redis in sentinel mode or cluster mode, separate host names with comma, like 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381 |
| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
<MSG> Update README.md
<DFF> @@ -300,7 +300,7 @@ These 4 Serializers are replaceable:
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
-| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is 127.0.0.1:6379. If you run redis in sentinel mode or cluster mode, separate host names with comma, like 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381 |
+| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| 1 | Update README.md | 1 | .md | md | mit | alexxiyang/shiro-redis |
1885 | <NME> shiro-standalone.ini
<BEF> ADDFILE
<MSG> Merge pull request #1 from alexxiyang/master
更新alexxiyang的提交
<DFF> @@ -0,0 +1,4 @@
+redisManager.host = 127.0.0.1:6379
+redisSessionDAO.expire = 3000
+cacheManager.expire = 3000
+cacheManager.principalIdFieldName = userId
\ No newline at end of file
| 4 | Merge pull request #1 from alexxiyang/master | 0 | .ini | ini | mit | alexxiyang/shiro-redis |
1886 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
How to use it?
===========
* Download shiro-redis.jar in bin folder and add it into your classpath.
* Add depende'.
Edit shiro.ini
```properties
#============redisCacheManager===========
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
```
<MSG> Update README.md
update readme
<DFF> @@ -6,9 +6,17 @@ shiro only provide the support of ehcache and concurrentHashMap. Here is an impl
How to use it?
===========
-* Download shiro-redis.jar in bin folder and add it into your classpath.
-* Add depende'.
-
+You can chose these 2 ways to include shiro-redis into your project
+1. directly download jar file
+Download shiro-redis.jar in bin folder and add it into your classpath.
+2. add maven dependency
+------------------------------------
+ <dependency>
+ <groupId>org.crazycake</groupId>
+ <artifactId>shiro-redis</artifactId>
+ <version>2.4.2-RELEASE</version>
+ </dependency>
+------------------------------------
Edit shiro.ini
```properties
@@ -31,6 +39,8 @@ securityManager.sessionManager = $sessionManager
#============redisCacheManager===========
cacheManager = org.crazycake.shiro.RedisCacheManager
cacheManager.redisManager = $redisManager
+#custom your redis key prefix, if you doesn't define this parameter shiro-redis will use 'shiro_redis_session:' as default prefix
+shiroCacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
| 13 | Update README.md | 3 | .md | md | mit | alexxiyang/shiro-redis |
1887 | <NME> why_yaml.rst
<BEF> Why YAML?
=========
YAML is a markup language for presenting structured data. It is
effectively a more readable version of JSON.
It is used for configuration - for the settings file - because
it is
It is used both for writing test scripts and for settings files.
Python developers who are new to Hitch might wonder why it uses YAML
as a test description language, rather than python itself.
Python is what is known as a turing complete language. This means
that is is very powerful and can theoretically perform any task
that any other programming language can.
This is good if you want to write powerful, capable programs.
This comes with a significant down side, however. Turing complete
languages are significantly more susceptible to technical debt
and are less readable by non-programmers.
YAML is *not* turing complete language. However, since all test
descriptions are effectively declarative and linear - step 1 is
followed by step 2, which is followed by step 3, etc. you do not
actually *need* a turing complete language to describe the test in.
Thus by using this language to describe tests, they can be kept
simple, readable and (mostly) free of technical debt.
For more powerful and customized behavior, you can still
write python code in the test engine.
some data. It is essentially just configuration, in fact.
This may feel like handcuffs to a good programmer, but there's a good
reason for it: less powerful languages are easier to understand and
since tests are just series of steps, you do not *need* a powerful
language to describe them.
Less powerful, easier to understand languages also have the following benefits:
* They are easier to maintain
* They are easier to keep free from bugs.
* You can *template* them and they are *still* relatively easy to understand, maintain and keep bug free.
Easier to understand also means that advanced programming skills are
not necessary to write them. Some training and understanding is
required - it's certainly not written English (for good reason),
but it's more like learning how to use a spreadsheet rather than
learning how to program in a turing complete language like python.
Jinja2 adds additional complexity, but it helps you to prevent your
test suite from becoming repetitive. See: :doc:`/glossary/DRY`.
Again, Jinja2 is not a powerful language. It is more powerful
than YAML but less powerful than python/java/ruby/etc. and should be
something that a non programmer could pick up and use productively
with a minimum of training.
Related reading
---------------
* https://en.wikipedia.org/wiki/Rule_of_least_power
* https://en.wikipedia.org/wiki/Separation_of_concerns
The use of YAML and Jinja2 in Hitch was inspired somewhat by Ansible: https://en.wikipedia.org/wiki/Ansible_%28software%29
<MSG> DOCS : Updated why YAML.
<DFF> @@ -2,33 +2,18 @@ Why YAML?
=========
YAML is a markup language for presenting structured data. It is
-effectively a more readable version of JSON.
+a more readable version of JSON.
-It is used for configuration - for the settings file - because
-it is
+Hitch uses YAML as a declarative description language for integration
+tests.
-It is used both for writing test scripts and for settings files.
+While python could be used to write integration tests instead,
+YAML is more suitable as it lets your tests adhere to the following
+two principles more easily:
-Python developers who are new to Hitch might wonder why it uses YAML
-as a test description language, rather than python itself.
+* https://en.wikipedia.org/wiki/Rule_of_least_power - YAML is a *less* powerful language than python, so using it instead will keep your tests simpler.
+* https://en.wikipedia.org/wiki/Separation_of_concerns - YAML provides a 'language barrier' that lets you maintain a strict separation of concerns between the code which describes your tests and the code which runs them.
-Python is what is known as a turing complete language. This means
-that is is very powerful and can theoretically perform any task
-that any other programming language can.
+For more powerful and customized behavior, you can write python code in the test engine.
-This is good if you want to write powerful, capable programs.
-
-This comes with a significant down side, however. Turing complete
-languages are significantly more susceptible to technical debt
-and are less readable by non-programmers.
-
-YAML is *not* turing complete language. However, since all test
-descriptions are effectively declarative and linear - step 1 is
-followed by step 2, which is followed by step 3, etc. you do not
-actually *need* a turing complete language to describe the test in.
-
-Thus by using this language to describe tests, they can be kept
-simple, readable and (mostly) free of technical debt.
-
-For more powerful and customized behavior, you can still
-write python code in the test engine.
+The use of YAML and Jinja2 in Hitch was inspired somewhat by Ansible: https://en.wikipedia.org/wiki/Ansible_%28software%29
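To make the YAML/python split concrete: the declarative steps live in YAML, while anything that needs real logic is written as methods on the test engine. A rough, illustrative sketch (the class layout follows the usual hitchtest convention; the step names here are made up):

```python
# Illustrative sketch: step methods that a YAML scenario could call by name.
import hitchtest

class ExecutionEngine(hitchtest.ExecutionEngine):
    def set_up(self):
        # start the services the scenario needs
        pass

    def load_website(self):
        # a custom, more powerful step implemented in python rather than YAML
        pass
```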
| 10 | DOCS : Updated why YAML. | 25 | .rst | rst | agpl-3.0 | hitchtest/hitch |
1888 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
## Documentation
Official documentation [is located here](http://alexxiyang.github.io/shiro-redis/).
</dependency>
```
Edit shiro.ini
```properties
#redisManager
securityManager.cacheManager = $cacheManager
```
If you found any bugs
===========
<MSG> Update README.md
add spring.xml way
<DFF> @@ -20,7 +20,11 @@ You can choose these 2 ways to include shiro-redis into your project
</dependency>
```
-Edit shiro.ini
+How to configure ?
+===========
+You can choose 2 ways : shiro.ini or spring-*.xml
+
+shiro.ini:
```properties
#redisManager
@@ -51,6 +55,82 @@ shiroCacheManager.keyPrefix = users:security:authz:
securityManager.cacheManager = $cacheManager
```
+spring.xml:
+```xml
+<!-- shiro filter -->
+<bean id="ShiroFilter" class="org.apache.shiro.spring.web.ShiroFilterFactoryBean">
+ <property name="securityManager" ref="securityManager"/>
+
+ <!--
+ <property name="loginUrl" value="/login.jsp"/>
+ <property name="successUrl" value="/home.jsp"/>
+ <property name="unauthorizedUrl" value="/unauthorized.jsp"/>
+ -->
+ <!-- The 'filters' property is not necessary since any declared javax.servlet.Filter bean -->
+ <!-- defined will be automatically acquired and available via its beanName in chain -->
+ <!-- definitions, but you can perform instance overrides or name aliases here if you like: -->
+ <!-- <property name="filters">
+ <util:map>
+ <entry key="anAlias" value-ref="someFilter"/>
+ </util:map>
+ </property> -->
+ <property name="filterChainDefinitions">
+ <value>
+ /login.jsp = anon
+ /user/** = anon
+ /register/** = anon
+ /unauthorized.jsp = anon
+ /css/** = anon
+ /js/** = anon
+
+ /** = authc
+ </value>
+ </property>
+</bean>
+
+<!-- shiro securityManager -->
+<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
+
+ <!-- Single realm app. If you have multiple realms, use the 'realms' property instead. -->
+
+ <!-- sessionManager -->
+ <property name="sessionManager" ref="sessionManager" />
+
+ <!-- cacheManager -->
+ <property name="cacheManager" ref="cacheManager" />
+
+ <!-- By default the servlet container sessions will be used. Uncomment this line
+ to use shiro's native sessions (see the JavaDoc for more): -->
+ <!-- <property name="sessionMode" value="native"/> -->
+</bean>
+<bean id="lifecycleBeanPostProcessor" class="org.apache.shiro.spring.LifecycleBeanPostProcessor"/>
+
+<!-- shiro redisManager -->
+<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
+ <property name="host" value="127.0.0.1"/>
+ <property name="port" value="6379"/>
+ <property name="expire" value="1800"/>
+ <!-- optional properties:
+ <property name="timeout" value="10000"/>
+ <property name="password" value="123456"/>
+ -->
+</bean>
+
+<!-- redisSessionDAO -->
+<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
+ <property name="redisManager" ref="redisManager" />
+</bean>
+
+<!-- sessionManager -->
+<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
+ <property name="sessionDAO" ref="redisSessionDAO" />
+</bean>
+
+<!-- cacheManager -->
+<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
+ <property name="redisManager" ref="redisManager" />
+</bean>
+```
If you found any bugs
===========
| 81 | Update README.md | 1 | .md | md | mit | alexxiyang/shiro-redis |
1889 | <NME> commandline.py
<BEF> """High level command line interface to hitch."""
from subprocess import call, PIPE, STDOUT, Popen
from hitch.click import command, group, argument, option
from os import path, makedirs, listdir, kill, remove
from sys import stderr, stdout, exit, modules, argv
from functools import partial, reduce
from hitch import hitchdir, languagestrings
import shutil
import signal
import copy
class CalledProcessError(Exception):
"""Re-implemented CalledProcessError, since it is not available < python 2.7."""
pass
def check_output(command, stdout=PIPE, stderr=PIPE):
"""Re-implemented subprocess.check_output since it is not available < python 2.7."""
return Popen(command, stdout=stdout, stderr=stderr).communicate()[0]
def check_call(command, shell=False):
"""Re-implemented subprocess.check_call since it is not available < python 2.7."""
process = Popen(command, shell=shell)
process.communicate()
if process.returncode != 0:
raise CalledProcessError
return
def stop_everything(sig, frame):
"""Exit hitch."""
exit(1)
def installpackages():
"""Install packages with hitchsystem."""
hitchsystem = path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchsystem"))
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([hitchsystem, "installpackages", ])
signal.signal(signal.SIGINT, stop_everything)
def update_requirements():
"""Check hitchreqs.txt match what's installed via pip freeze. If not, update."""
stdout.write(languagestrings.UPDATING_REQUIREMENTS)
pip = path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
hitchreqs_filename = path.join(hitchdir.get_hitch_directory_or_fail(), "..", "hitchreqs.txt")
pip_freeze = check_output([pip, "freeze"]).decode('utf8').split('\n')
hitchreqs_handle = ""
with open(hitchreqs_filename, "r") as hitchreqs_handle:
hitchreqs = hitchreqs_handle.read().split('\n')
if not sorted(pip_freeze) == sorted(hitchreqs):
call([pip, "install", "-r", "hitchreqs.txt"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
@group()
def cli():
pass
@command()
@option(
'-p', '--python', default=None,
help=languagestrings.SPECIFY_PYTHON_TO_CREATE_VIRTUALENV_WITH
)
@option(
'-v', '--virtualenv', default=None,
help=languagestrings.SPECIFY_VIRTUALENV_TO_CREATE_HITCH_WITH
)
def init(python, virtualenv):
"""Initialize hitch in this directory."""
if virtualenv is None:
if call(["which", "virtualenv"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_VIRTUALENV_INSTALLED)
stderr.flush()
exit(1)
virtualenv = check_output(["which", "virtualenv"]).decode('utf8').replace("\n", "")
else:
if path.exists(virtualenv):
if python is None:
python = path.join(path.dirname(virtualenv), "python")
else:
stderr.write("{0} not found.\n".format(virtualenv))
if python is None:
if call(["which", "python3"], stdout=PIPE, stderr=PIPE) != 0:
stderr.write(languagestrings.YOU_MUST_HAVE_PYTHON3_INSTALLED)
stderr.flush()
exit(1)
python3 = check_output(["which", "python3"]).decode('utf8').replace("\n", "")
else:
if path.exists(python):
python3 = python
else:
stderr.write("{0} not found.\n".format(python))
exit(1)
python_version = check_output([python3, "-V"], stderr=STDOUT).decode('utf8')
replacements = ('Python ', ''), ('\n', '')
str_version = reduce(lambda a, kv: a.replace(*kv), replacements, python_version)
tuple_version = tuple([int(x) for x in str_version.split('.')[:2]])
if tuple_version < (3, 3):
stderr.write(languagestrings.YOU_MUST_HAVE_VERSION_ABOVE_PYTHON33)
exit(1)
if hitchdir.hitch_exists():
hitchdir.check_hitch_directory_integrity()
update_requirements()
exit(0)
makedirs(".hitch")
# Store absolute directory in .hitch directory to guard against the directory being moved
hitch_dir = path.abspath(".hitch")
with open(path.join(hitch_dir, "absdir"), "w") as absdir_handle:
absdir_handle.write(hitch_dir)
pip = path.abspath(path.join(".hitch", "virtualenv", "bin", "pip"))
try:
check_call([
virtualenv, ".hitch/virtualenv", "--no-site-packages", "--distribute", "-p", python3
])
check_call([pip, "install", "--upgrade", "pip"])
check_call([pip, "install", "--upgrade", "setuptools"])
check_call([pip, "install", "unixpackage", "hitchsystem"])
installpackages()
if path.exists("hitchreqs.txt"):
check_call([pip, "install", "-r", "hitchreqs.txt"])
else:
check_call([pip, "install", "hitchtest"])
check_call([pip, "install", "hitchquickstart"])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
signal.signal(signal.SIGINT, signal.SIG_IGN)
check_call([path.abspath(path.join(".hitch", "virtualenv", "bin", "hitchquickstart")), ])
signal.signal(signal.SIGINT, stop_everything)
installpackages()
except CalledProcessError:
stderr.write(languagestrings.ERROR_INITIALIZING_HITCH)
hitchdir.remove_hitch_directory_if_exists()
exit(1)
def get_pip():
"""Get the file path to the hitch pip."""
return path.join(hitchdir.get_hitch_directory_or_fail(), "virtualenv", "bin", "pip")
@command(context_settings={'help_option_names':[],'ignore_unknown_options':True}, help="dd")
@argument('arguments', nargs=-1)
def runpackage(arguments):
# Generic method to run any installed app in the virtualenv whose name starts with hitch*
hitchdir.check_hitch_directory_integrity()
binfile = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "hitch{0}".format(argv[1]))
command = [binfile, ] + argv[2:]
# When receiving an exit signal, just forward it to process child.
def forward_signal_to_child(pid, signum, frame):
kill(pid, signum)
process = Popen(command)
signal.signal(signal.SIGINT, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGTERM, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGHUP, partial(forward_signal_to_child, process.pid))
signal.signal(signal.SIGQUIT, partial(forward_signal_to_child, process.pid))
return_code = process.wait()
exit(return_code)
@command()
@argument('package', required=True)
def uninstall(package):
"""Uninstall hitch package."""
hitchdir.check_hitch_directory_integrity()
pip = get_pip()
call([pip, "uninstall", package] )
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
update_requirements()
@command()
@argument('package', required=True)
def install(package):
"""Install hitch package."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
@command()
def upgrade():
"""Upgrade all installed hitch packages."""
hitchdir.check_hitch_directory_integrity()
update_requirements()
pip = get_pip()
package_list = [
p for p in check_output([pip, "freeze"]).decode('utf8').split('\n')
if p != "" and "==" in p
]
version_fixed_package_list = [p.split("==")[0] for p in package_list]
for package in version_fixed_package_list:
call([pip, "install", package, "-U", ])
pip_freeze = check_output([pip, "freeze"]).decode('utf8')
with open("hitchreqs.txt", "w") as hitchreqs_handle:
hitchreqs_handle.write(pip_freeze)
installpackages()
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
@command()
def clean():
"""Remove the hitch directory entirely."""
if hitchdir.hitch_exists():
hitchdir.remove_hitch_directory_if_exists()
else:
stderr.write("No hitch directory found. Doing nothing.\n")
stderr.flush()
@command()
@option(
'-p', '--packages', default=None, help=(
"Specify precise packages to remove - "
"e.g. postgresql, postgresql-9.3.9, python, python2.6.8"
)
)
def cleanpkg(packages):
"""Remove installed packages from the .hitchpkg directory."""
hitchpkg = path.join(path.expanduser("~"), ".hitchpkg")
if path.exists(hitchpkg):
if packages is None:
shutil.rmtree(hitchpkg)
else:
for file_or_dir in listdir(hitchpkg):
if file_or_dir.startswith(packages):
if path.isdir(path.join(hitchpkg, file_or_dir)):
shutil.rmtree(path.join(hitchpkg, file_or_dir))
else:
remove(path.join(hitchpkg, file_or_dir))
def run():
"""Run hitch bootstrap CLI"""
signal.signal(signal.SIGINT, stop_everything)
signal.signal(signal.SIGTERM, stop_everything)
signal.signal(signal.SIGHUP, stop_everything)
signal.signal(signal.SIGQUIT, stop_everything)
if hitchdir.hitch_exists():
# Get packages from bin folder that are hitch related
python_bin = path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin", "python")
if path.exists(python_bin):
packages = [
package.replace("hitch", "") for package in listdir(
path.join(hitchdir.get_hitch_directory(), "virtualenv", "bin")
)
if package.startswith("hitch") and package != "hitch"
]
# Add commands that start with "hitch" to the list of commands available (e.g. hitchtest, hitchsmtp)
for package in packages:
cmd = copy.deepcopy(runpackage)
cmd.name = package
try:
description = check_output([
python_bin, '-c',
'import sys;sys.stdout.write(__import__("hitch{0}").commandline.cli.help)'.format(
package
)
]).decode('utf8')
except CalledProcessError:
description = ""
cmd.help = description
cmd.short_help = description
cli.add_command(cmd)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(upgrade)
cli.add_command(freeze)
else:
stderr.write(languagestrings.SOMETHING_CORRUPTED)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.add_command(init)
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
cli.add_command(clean)
cli.add_command(cleanpkg)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
if __name__ == '__main__':
run()
<MSG> FEATURE : Allow hitch clean to be run even if no hitch directory is detected.
<DFF> @@ -246,6 +246,7 @@ def run():
cli.help = "Hitch test runner for:\n\n {0}.".format(hitchdir.get_hitch_directory())
else:
cli.add_command(init)
+ cli.add_command(clean)
cli.help = "Hitch bootstrapper - '.hitch' directory not detected here."
cli()
| 1 | FEATURE : Allow hitch clean to be run even if no hitch directory is detected. | 0 | .py | py | agpl-3.0 | hitchtest/hitch |
1890 | <NME> README.md
<BEF> # Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework

-----
## Deprecated. See [seetaresearch/Dragon](http://github.com/seetaresearch/Dragon).
4. Configure Dragon/CMakeLists.txt
- Select optional libraries [CUDA / CUDNN / BLAS / SSE / MPI / MPI_CUDA_AWARE / CUDA_FP16]
- Set 3rdparty path (recommend to keep defualt)
- Set python path
- Set cuda compiling architectures if necessary
5. Environment Variables
### Linux(Only for OpenMPI):
8. Deploy
```Shell
cp Dragon/libs/libdragon.so Dragon/python
cp Dragon/python /usr/lib/python2.7/dist-packages/dragon (For Python)
cp Dragon/python ANACONDA_DIR/libs/python2.7/dist-packages/dragon (For Anaconda)
```
----
Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
Year = {2017}
}
<MSG> add setup.py
<DFF> @@ -31,8 +31,9 @@
4. Configure Dragon/CMakeLists.txt
- Select optional libraries [CUDA / CUDNN / BLAS / SSE / MPI / MPI_CUDA_AWARE / CUDA_FP16]
- Set 3rdparty path (recommend to keep defualt)
- - Set python path
+ - Set python & numpy root path
- Set cuda compiling architectures if necessary
+ - GCC version(4.8+, 5.0-) should add ``-std=c++11`` to ``CUDA_NVCC_FLAGS``, if ``nullptr`` is not found.
5. Environment Variables
### Linux(Only for OpenMPI):
@@ -100,9 +101,7 @@
8. Deploy
```Shell
- cp Dragon/libs/libdragon.so Dragon/python
- cp Dragon/python /usr/lib/python2.7/dist-packages/dragon (For Python)
- cp Dragon/python ANACONDA_DIR/libs/python2.7/dist-packages/dragon (For Anaconda)
+ python Dragon/setup.py install
```
----
@@ -189,4 +188,3 @@ Please cite Dragon in your publications if it helps your research:
Title = {Dragon: A Computation Graph Virtual Machine Based Deep Learning Framework},
Year = {2017}
}
-
| 3 | add setup.py | 5 | .md | md | bsd-2-clause | neopenx/Dragon |
1892 | <NME> README.md
<BEF> shiro-redis
=============
## Introduction
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
edit in shiro.ini
#required
cacheManager = org.yqr.shiro.RedisCacheManager
#optional if you don't specify host the default value is 127.0.0.1
cacheManager.expire=5
#required
securityManager.cacheManager = $cacheManager
<MSG> Update README.md
<DFF> @@ -8,6 +8,7 @@ How to use it?
edit in shiro.ini
+```properties
#required
cacheManager = org.yqr.shiro.RedisCacheManager
#optional if you don't specify host the default value is 127.0.0.1
@@ -18,3 +19,4 @@ cacheManager.port=6379
cacheManager.expire=5
#required
securityManager.cacheManager = $cacheManager
+```
| 2 | Update README.md | 0 | .md | md | mit | alexxiyang/shiro-redis |
1893 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.4.5",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.7",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -13,7 +13,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.4.5",
+ version="0.4.6",
description="Loosely coupled testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1894 | <NME> RedisSessionDAOTest.java
<BEF> package org.crazycake.shiro;
import org.apache.shiro.session.InvalidSessionException;
import org.apache.shiro.session.Session;
import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
import java.io.Serializable;
import java.util.*;
import java.util.HashSet;
import java.util.Set;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.CoreMatchers.*;
public class RedisSessionDAOTest {
private IRedisManager redisManager;
private StringSerializer keySerializer = new StringSerializer();
private ObjectSerializer valueSerializer = new ObjectSerializer();
@BeforeEach
public void setUp() {
redisManager = mock(IRedisManager.class);
}
private RedisSessionDAO mountRedisSessionDAO(Integer expire) {
RedisSessionDAO redisSessionDAO = new RedisSessionDAO();
if (expire != null) {
redisSessionDAO.setExpire(expire);
}
redisSessionDAO.setKeyPrefix("student:");
redisSessionDAO.setRedisManager(redisManager);
return redisSessionDAO;
}
@Test
public void testUpdate() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
StudentSession session = new StudentSession(99, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:99"), valueSerializer.serialize(session), 2);
}
@Test
public void testUpdateByCustomExpire() throws SerializationException {
RedisSessionDAO sessionDAO = mountRedisSessionDAO(3);
StudentSession session = new StudentSession(98, 2000);
sessionDAO.update(session);
verify(redisManager).set(keySerializer.serialize("student:98"), valueSerializer.serialize(session), 3);
}
verify(redisManager, times(0)).set(eq(keySerializer.serialize("abc:" + sessionId)), any((new byte[0]).getClass()), eq(2));
}
@Test
public void testUpdate() throws SerializationException {
FakeSession testSession = new FakeSession(1, "jack");
@Test
public void testGetActiveSessions() throws SerializationException {
Set<byte[]> mockKeys = new HashSet<byte[]>();
mockKeys.add(keySerializer.serialize("student:1"));
mockKeys.add(keySerializer.serialize("student:2"));
when(redisManager.keys(keySerializer.serialize("student:*"))).thenReturn(mockKeys);
StudentSession mockSession1 = new StudentSession(1, 2000);
StudentSession mockSession2 = new StudentSession(2, 2000);
when(redisManager.get(keySerializer.serialize("student:1"))).thenReturn(valueSerializer.serialize(mockSession1));
when(redisManager.get(keySerializer.serialize("student:2"))).thenReturn(valueSerializer.serialize(mockSession2));
RedisSessionDAO sessionDAO = mountRedisSessionDAO(null);
assertThat(sessionDAO.getActiveSessions().size(), is(2));
}
}
class StudentSession implements Session, Serializable {
private Integer id;
private long timeout;
public StudentSession(Integer id, long timeout) {
this.id = id;
this.timeout = timeout;
}
@Override
public Serializable getId() {
return id;
}
@Override
public Date getStartTimestamp() {
return null;
}
@Override
public Date getLastAccessTime() {
return null;
}
@Override
public long getTimeout() throws InvalidSessionException {
return timeout;
}
@Override
public void setTimeout(long l) throws InvalidSessionException {
}
@Override
public String getHost() {
return null;
}
@Override
public void touch() throws InvalidSessionException {
}
@Override
public void stop() throws InvalidSessionException {
}
@Override
public Collection<Object> getAttributeKeys() throws InvalidSessionException {
return null;
}
@Override
public Object getAttribute(Object o) throws InvalidSessionException {
return null;
}
@Override
public void setAttribute(Object o, Object o1) throws InvalidSessionException {
}
@Override
public Object removeAttribute(Object o) throws InvalidSessionException {
return null;
}
}
<MSG> - Make RedisSessionDAO use session timeout as expire as default
<DFF> @@ -7,6 +7,7 @@ import org.crazycake.shiro.serializer.ObjectSerializer;
import org.crazycake.shiro.serializer.StringSerializer;
import org.junit.Before;
import org.junit.Test;
+import org.mockito.ArgumentCaptor;
import java.io.Serializable;
import java.util.*;
@@ -57,6 +58,17 @@ public class RedisSessionDAOTest {
verify(redisManager, times(0)).set(eq(keySerializer.serialize("abc:" + sessionId)), any((new byte[0]).getClass()), eq(2));
}
+ @Test
+ public void testDoCreateWithSessionTimeout() {
+ redisSessionDAO.setExpire(-2);
+ FakeSession fakeSession = new FakeSession(2, "Jack");
+ redisSessionDAO.doCreate(fakeSession);
+
+ ArgumentCaptor<Integer> expireArg = ArgumentCaptor.forClass(Integer.class);
+ verify(redisManager).set(any((new byte[0]).getClass()), any((new byte[0]).getClass()), expireArg.capture());
+ assertThat(expireArg.getValue(), is(1800));
+ }
+
@Test
public void testUpdate() throws SerializationException {
FakeSession testSession = new FakeSession(1, "jack");
| 12 | - Make RedisSessionDAO use session timeout as expire as default | 0 | .java | java | mit | alexxiyang/shiro-redis |
1895 | <NME> README.md
<BEF> shiro-redis
=============
[](https://travis-ci.org/alexxiyang/shiro-redis)
[](https://maven-badges.herokuapp.com/maven-central/org.crazycake/shiro-redis)
shiro only provide the support of ehcache and concurrentHashMap. Here is an implement of redis cache can be used by shiro. Hope it will help you!
# Download
You use either of the following 2 ways to include `shiro-redis` into your project
* use `git clone https://github.com/alexxiyang/shiro-redis.git` to clone project to your local workspace and build jar file by your self
* add maven dependency
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis</artifactId>
<version>3.3.1</version>
</dependency>
```
> **Note:**\
> 3.3.0 is compiled in java11 by mistake.
> **注意**:\
> 请不要使用3.1.0以下版本
## Jedis Version Comparison Charts
| shiro-redis | jedis |
| :----------------:| :-------: |
| 3.2.3 | 2.9.0 |
| 3.3.0(Unrelease) | 3.3.0 |
# Before use
Here is the first thing you need to know. Shiro-redis needs an id field to identify your authorization object in Redis. So please make sure your principal class has a field which you can get unique id of this object. Please setting this id field name by `cacheManager.principalIdFieldName = <your id field name of principal object>`
For example:
If you create `SimpleAuthenticationInfo` like this:
```java
@Override
protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken token) throws AuthenticationException {
UsernamePasswordToken usernamePasswordToken = (UsernamePasswordToken)token;
UserInfo userInfo = new UserInfo();
userInfo.setUsername(usernamePasswordToken.getUsername());
return new SimpleAuthenticationInfo(userInfo, "123456", getName());
}
```
Then the `userInfo` object is your principal object. You need to make sure `UserInfo` has an unique field for Redis to identify it. Take `userId` as an example:
```java
public class UserInfo implements Serializable{
private Integer userId
private String username;
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public Integer getUserId() {
return this.userId;
}
}
```
Put userId as the value of `cacheManager.principalIdFieldName`, like this:
```properties
cacheManager.principalIdFieldName = userId
```
If you're using Spring, the configuration should be
```xml
<property name="principalIdFieldName" value="userId" />
```
Then `shiro-redis` will call `userInfo.getUserId()` to get the id it uses when saving the object to Redis.
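For intuition, the lookup boils down to finding and invoking the getter named after `principalIdFieldName`. The sketch below is illustrative only — it is not the library's actual implementation, and the helper and class names are made up — but it shows the reflective pattern involved:

```java
import java.lang.reflect.Method;

public class PrincipalIdLookupSketch {

    // Hypothetical helper: "userId" -> getUserId() -> invoke it on the principal.
    static Object readPrincipalId(Object principal, String principalIdFieldName) throws Exception {
        String getter = "get"
                + principalIdFieldName.substring(0, 1).toUpperCase()
                + principalIdFieldName.substring(1);
        Method method = principal.getClass().getMethod(getter); // fails if the getter is missing
        return method.invoke(principal);
    }

    // Minimal stand-in for the UserInfo principal described above.
    public static class UserInfo {
        private Integer userId = 42;
        public Integer getUserId() {
            return userId;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readPrincipalId(new UserInfo(), "userId")); // prints 42
    }
}
```

If the configured getter does not exist on the principal class, a lookup like this fails — which is why the getter requirement above matters.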
# How to configure?
You can configure `shiro-redis` either in `shiro.ini` or in `spring-*.xml`
## shiro.ini
Here is the configuration example for shiro.ini.
### Redis Standalone
If you are running Redis in Standalone mode
```properties
[main]
#====================================
# shiro-redis configuration [start]
#====================================
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisManager
# Redis host. If you don't specify host the default value is 127.0.0.1:6379
redisManager.host = 127.0.0.1:6379
#===================================
# Redis Manager [end]
#===================================
#=========================================
# Redis session DAO [start]
#=========================================
# Create redisSessionDAO
redisSessionDAO = org.crazycake.shiro.RedisSessionDAO
# Use redisManager as cache manager
redisSessionDAO.redisManager = $redisManager
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
sessionManager.sessionDAO = $redisSessionDAO
securityManager.sessionManager = $sessionManager
#=========================================
# Redis session DAO [end]
#=========================================
#==========================================
# Redis cache manager [start]
#==========================================
# Create cacheManager
cacheManager = org.crazycake.shiro.RedisCacheManager
# Principal id field name. The field from which you can get a unique id to identify this principal.
# For example, if you use UserInfo as the Principal class, the id field may be `id`, `userId`, `email`, etc.
# Remember to add a getter for this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.
# Default value is id, which means your principal object must have a method called `getId()`
cacheManager.principalIdFieldName = id
# Use redisManager as cache manager
cacheManager.redisManager = $redisManager
securityManager.cacheManager = $cacheManager
#==========================================
# Redis cache manager [end]
#==========================================
#=================================
# shiro-redis configuration [end]
#=================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-tutorial) for you to understand how to configure `shiro-redis` in `shiro.ini`.
### Redis Sentinel
If you're using Redis Sentinel, please replace the `redisManager` configuration of the standalone version with the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisSentinelManager
# Sentinel host. If you don't specify host the default value is 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
redisManager.host = 127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381
# Sentinel master name
redisManager.masterName = mymaster
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you're using redis cluster, please replace the `redisManager` configuration of the standalone version into the following:
```properties
#===================================
# Redis Manager [start]
#===================================
# Create redisManager
redisManager = org.crazycake.shiro.RedisClusterManager
# Redis host and port list
redisManager.host = 192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005
#===================================
# Redis Manager [end]
#===================================
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Spring
If you are using Spring
### Redis Standalone
If you are running Redis in Standalone mode
```xml
<!-- shiro-redis configuration [start] -->
<!-- Redis Manager [start] -->
<bean id="redisManager" class="org.crazycake.shiro.RedisManager">
<property name="host" value="127.0.0.1:6379"/>
</bean>
<!-- Redis Manager [end] -->
<!-- Redis session DAO [start] -->
<bean id="redisSessionDAO" class="org.crazycake.shiro.RedisSessionDAO">
<property name="redisManager" ref="redisManager" />
</bean>
<bean id="sessionManager" class="org.apache.shiro.web.session.mgt.DefaultWebSessionManager">
<property name="sessionDAO" ref="redisSessionDAO" />
</bean>
<!-- Redis session DAO [end] -->
<!-- Redis cache manager [start] -->
<bean id="cacheManager" class="org.crazycake.shiro.RedisCacheManager">
<property name="redisManager" ref="redisManager" />
</bean>
<!-- Redis cache manager [end] -->
<bean id="securityManager" class="org.apache.shiro.web.mgt.DefaultWebSecurityManager">
<property name="sessionManager" ref="sessionManager" />
<property name="cacheManager" ref="cacheManager" />
<!-- other configurations -->
<property name="realm" ref="exampleRealm"/>
<property name="rememberMeManager.cipherKey" value="kPH+bIxk5D2deZiIxcaaaA==" />
</bean>
<!-- shiro-redis configuration [end] -->
```
For complete configurable options list, check [Configurable Options](#configurable-options).
Here is a [tutorial project](https://github.com/alexxiyang/shiro-redis-spring-tutorial) for you to understand how to configure `shiro-redis` in spring configuration file.
### Redis Sentinel
If you use redis sentinel, please replace the `redisManager` configuration of the standalone version into the following:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisSentinelManager">
<property name="host" value="127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381"/>
<property name="masterName" value="mymaster"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
### Redis Cluster
If you use redis cluster, please replace the `redisManager` configuration of the standalone version into the following:
```xml
<!-- shiro-redis configuration [start] -->
<!-- shiro redisManager -->
<bean id="redisManager" class="org.crazycake.shiro.RedisClusterManager">
<property name="host" value="192.168.21.3:7000,192.168.21.3:7001,192.168.21.3:7002,192.168.21.3:7003,192.168.21.3:7004,192.168.21.3:7005"/>
</bean>
```
For complete configurable options list, check [Configurable Options](#configurable-options).
## Serializer
Since Redis only accepts `byte[]`, there is a serialization problem.
Shiro-redis uses `StringSerializer` as the key serializer and `ObjectSerializer` as the value serializer.
You can use your own custom serializer, as long as this custom serializer implements `org.crazycake.shiro.serializer.RedisSerializer`
For example, we can change the charset of keySerializer like this
```properties
# If you want change charset of keySerializer or use your own custom serializer, you need to define serializer first
#
# cacheManagerKeySerializer = org.crazycake.shiro.serializer.StringSerializer
# Supported encodings refer to https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html
# UTF-8, UTF-16, UTF-32, ISO-8859-1, GBK, Big5, etc
#
# cacheManagerKeySerializer.charset = UTF-8
# cacheManager.keySerializer = $cacheManagerKeySerializer
```
These are the 4 options that you can replace with your own custom serializers (a sketch follows the list below):
- cacheManager.keySerializer
- cacheManager.valueSerializer
- redisSessionDAO.keySerializer
- redisSessionDAO.valueSerializer
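As a rough sketch of what such a custom serializer could look like — assuming only the `RedisSerializer` interface used by the built-in serializers above; the class name here is a made-up example:

```java
import java.nio.charset.Charset;

import org.crazycake.shiro.exception.SerializationException;
import org.crazycake.shiro.serializer.RedisSerializer;

// Example custom key serializer: a String serializer with a configurable charset.
// Wire it in place of the built-in StringSerializer, e.g. via cacheManager.keySerializer.
public class MyStringSerializer implements RedisSerializer<String> {

    private String charset = "UTF-8";

    @Override
    public byte[] serialize(String value) throws SerializationException {
        return value == null ? new byte[0] : value.getBytes(Charset.forName(charset));
    }

    @Override
    public String deserialize(byte[] bytes) throws SerializationException {
        return (bytes == null || bytes.length == 0) ? null : new String(bytes, Charset.forName(charset));
    }

    public String getCharset() {
        return charset;
    }

    public void setCharset(String charset) {
        this.charset = charset;
    }
}
```

It would then be assigned to any of the four options above in the same way as the built-in serializers.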
## Configurable Options
Here are all the available options you can use in the `shiro-redis` configuration file.
### RedisManager
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| masterName | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| soTimeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| maxAttempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| password | | Redis password |
| database | `0` | Redis database. Default value is 0 |
| jedisPoolConfig | `new redis.clients.jedis.JedisPoolConfig()` | JedisPoolConfig. You can create your own JedisPoolConfig instance and set attributes as you wish<br>Most of time, you don't need to set jedisPoolConfig<br>Here is an example.<br>`jedisPoolConfig = redis.clients.jedis.JedisPoolConfig`<br>`jedisPoolConfig.testWhileIdle = false`<br>`redisManager.jedisPoolConfig = jedisPoolConfig` |
| count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| jedisPool | `null` | **Only used for sentinel mode or single mode**<br>You can create your own JedisPool instance and set attributes as you wish |
### RedisSessionDAO
| Title | Default | Description |
| :------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| keyPrefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| sessionInMemoryTimeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| sessionInMemoryEnabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
### CacheManager
| Title | Default | Description |
| :--------------------| :------------------- | :---------------------------|
| redisManager | | RedisManager which you just configured above (Required) |
| principalIdFieldName | `id` | Principal id field name. The field from which you can get a unique id to identify this principal.<br>For example, if you use UserInfo as the Principal class, the id field may be `id`, `userId`, `email`, etc.<br>Remember to add a getter for this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.<br>Default value is `id`, which means your principal object must have a method called `getId()` |
| expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| keyPrefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
| keySerializer | `org.crazycake.shiro.serializer.StringSerializer` | The key serializer of cache manager<br>You can change the implement of key serializer or the encoding of StringSerializer.<br>Supported encodings refer to [Supported Encodings](https://docs.oracle.com/javase/8/docs/technotes/guides/intl/encoding.doc.html). Such as `UTF-8`, `UTF-16`, `UTF-32`, `ISO-8859-1`, `GBK`, `Big5`, etc<br>For more detail, check [Serializer](#serializer) |
| valueSerializer | `org.crazycake.shiro.serializer.ObjectSerializer` | The value serializer of cache manager<br>You can change the implement of value serializer<br>For more detail, check [Serializer](#serializer) |
# Spring boot starter
Using `Spring-Boot` integration is the easiest way to integrate `shiro-redis` into a Spring-based application.
> Note: `shiro-redis-spring-boot-starter` version `3.2.1` is based on `shiro-spring-boot-web-starter` version `1.4.0-RC2`
First include the `shiro-redis` Spring boot starter dependency in you application classpath
```xml
<dependency>
<groupId>org.crazycake</groupId>
<artifactId>shiro-redis-spring-boot-starter</artifactId>
<version>3.3.1</version>
</dependency>
```
The next step depends on whether you've created your own `SessionManager` or `SessionsSecurityManager`.
## If you haven't created your own `SessionManager` or `SessionsSecurityManager`
If you don't have your own `SessionManager` or `SessionsSecurityManager` in your configuration, `shiro-redis-spring-boot-starter` will create `RedisSessionDAO` and `RedisCacheManager` for you. Then inject them into `SessionManager` and `SessionsSecurityManager` automatically.
So, You are all set. Enjoy it!
## If you have created your own `SessionManager` or `SessionsSecurityManager`
If you have created your own `SessionManager` or `SessionsSecurityManager` like this:
```java
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
// other stuff...
return securityManager;
}
```
Then inject the `redisSessionDAO` and `redisCacheManager` beans that `shiro-redis-spring-boot-starter` has already created
```java
@Autowired
RedisSessionDAO redisSessionDAO;
@Autowired
RedisCacheManager redisCacheManager;
```
Inject them into your own `SessionManager` and `SessionsSecurityManager`
```java
@Bean
public SessionManager sessionManager() {
DefaultWebSessionManager sessionManager = new DefaultWebSessionManager();
// inject redisSessionDAO
sessionManager.setSessionDAO(redisSessionDAO);
// other stuff...
return sessionManager;
}
@Bean
public SessionsSecurityManager securityManager(List<Realm> realms, SessionManager sessionManager) {
DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(realms);
//inject sessionManager
securityManager.setSessionManager(sessionManager);
// inject redisCacheManager
securityManager.setCacheManager(redisCacheManager);
// other stuff...
return securityManager;
}
```
For full example, see [shiro-redis-spring-boot-tutorial](https://github.com/alexxiyang/shiro-redis-spring-boot-tutorial)
### Configuration Properties
Here are all available options you can use in Spring-boot starter configuration
| Title | Default | Description |
| :--------------------------------------------------| :------------------- | :---------------------------|
| shiro-redis.enabled | `true` | Enables shiro-redis’s Spring module |
| shiro-redis.redis-manager.deploy-mode | `standalone` | Redis deploy mode. Options: `standalone`, `sentinel`, `cluster` |
| shiro-redis.redis-manager.host | `127.0.0.1:6379` | Redis host. If you don't specify host the default value is `127.0.0.1:6379`. If you run redis in sentinel mode or cluster mode, separate host names with comma, like `127.0.0.1:26379,127.0.0.1:26380,127.0.0.1:26381` |
| shiro-redis.redis-manager.master-name | `mymaster` | **Only used for sentinel mode**<br>The master node of Redis sentinel mode |
| shiro-redis.redis-manager.timeout | `2000` | Redis connect timeout. Timeout for jedis try to connect to redis server(In milliseconds) |
| shiro-redis.redis-manager.so-timeout | `2000` | **Only used for sentinel mode or cluster mode**<br>The timeout for jedis try to read data from redis server |
| shiro-redis.redis-manager.max-attempts | `3` | **Only used for cluster mode**<br>Max attempts to connect to server |
| shiro-redis.redis-manager.password | | Redis password |
| shiro-redis.redis-manager.database | `0` | Redis database. Default value is 0 |
| shiro-redis.redis-manager.count | `100` | Scan count. Shiro-redis use Scan to get keys, so you can define the number of elements returned at every iteration. |
| shiro-redis.session-dao.expire | `-2` | Redis cache key/value expire time. The expire time is in second.<br>Special values:<br>`-1`: no expire<br>`-2`: the same timeout with session<br>Default value: `-2`<br>**Note**: Make sure expire time is longer than session timeout. |
| shiro-redis.session-dao.key-prefix | `shiro:session:` | Custom your redis key prefix for session management<br>**Note**: Remember to add colon at the end of prefix. |
| shiro-redis.session-dao.session-in-memory-timeout | `1000` | When we do signin, `doReadSession(sessionId)` will be called by shiro about 10 times. So shiro-redis save Session in ThreadLocal to remit this problem. sessionInMemoryTimeout is expiration of Session in ThreadLocal. <br>Most of time, you don't need to change it. |
| shiro-redis.session-dao.session-in-memory-enabled | `true` | Whether or not enable temporary save session in ThreadLocal |
| shiro-redis.cache-manager.principal-id-field-name | `id` | Principal id field name. The field from which you can get a unique id to identify this principal.<br>For example, if you use UserInfo as the Principal class, the id field may be `id`, `userId`, `email`, etc.<br>Remember to add a getter for this id field. For example, `getId()`, `getUserId()`, `getEmail()`, etc.<br>Default value is `id`, which means your principal object must have a method called `getId()` |
| shiro-redis.cache-manager.expire | `1800` | Redis cache key/value expire time. <br>The expire time is in second. |
| shiro-redis.cache-manager.key-prefix | `shiro:cache:` | Custom your redis key prefix for cache management<br>**Note**: Remember to add colon at the end of prefix. |
## Working with `spring-boot-devtools`
If you are using `shiro-redis` with `spring-boot-devtools`, please add this line to `resources/META-INF/spring-devtools.properties` (create the file if it does not exist):
```ini
restart.include.shiro-redis=/shiro-[\\w-\\.]+jar
```
# If you find any bugs
Please create an issue
You can write the issue in Chinese
<MSG> update shiro-core/jedis Version Comparison Charts
update shiro-core/jedis Version Comparison Charts
<DFF> @@ -25,12 +25,12 @@ You use either of the following 2 ways to include `shiro-redis` into your projec
> **Note**:\
> Please do not use any version below 3.1.0
-## Jedis Version Comparison Charts
+## shiro-core/jedis Version Comparison Charts
-| shiro-redis | jedis |
-| :----------------:| :-------: |
-| 3.2.3 | 2.9.0 |
-| 3.3.0(Unrelease) | 3.3.0 |
+| shiro-redis | jedis | jedis |
+| :----------------:| :-------: | :-------: |
+| 3.2.3 | 1.3.2 | 2.9.0 |
+| 3.3.0(Unrelease) | 1.6.0 | 3.3.0 |
# Before use
Here is the first thing you need to know. Shiro-redis needs an id field to identify your authorization object in Redis. So please make sure your principal class has a field which you can get unique id of this object. Please setting this id field name by `cacheManager.principalIdFieldName = <your id field name of principal object>`
| 5 | update shiro-core/jedis Version Comparison Charts | 5 | .md | md | mit | alexxiyang/shiro-redis |
1896 | <NME> setup.py
<BEF> # -*- coding: utf-8 -*
from setuptools.command.install import install
from setuptools import find_packages
from setuptools import setup
from sys import version_info, stderr, exit
import codecs
import sys
import os
if sys.platform == "win32" or sys.platform == "cygwin":
stderr.write("Hitch will not work on Windows. Sorry.\n")
exit(1)
if version_info[0] == 2:
if version_info[1] < 6:
stderr.write("The hitch bootstrapper will not run on versions of python below v2.6.\n")
exit(1)
if version_info[0] == 3:
if version_info[1] < 3:
stderr.write("The hitch bootstrapper will not run on python 3.0.x, 3.1.x or 3.2.x.\n")
exit(1)
def read(*parts):
# intentionally *not* adding an encoding option to open
# see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
version="0.5.5",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Operating System :: Unix',
'Environment :: Console',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
],
keywords='hitch testing framework bdd tdd declarative tests bootstrap virtualenv',
author='Colm O\'Connor',
author_email='[email protected]',
url='https://hitchtest.readthedocs.org/',
license='AGPL',
install_requires=[],
packages=find_packages(exclude=["docs", ]),
package_data={},
entry_points=dict(console_scripts=['hitch=hitch:commandline.run',]),
zip_safe=False,
include_package_data=True,
)
<MSG> RELEASE : Bumped version.
<DFF> @@ -31,7 +31,7 @@ def read(*parts):
return codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts), 'r').read()
setup(name="hitch",
- version="0.5.5",
+ version="0.5.6",
description="Bootstrapper for hitchtest - the loosely coupled integration testing framework",
long_description=read('README.rst'),
classifiers=[
| 1 | RELEASE : Bumped version. | 1 | .py | py | agpl-3.0 | hitchtest/hitch |
1897 | <NME> ObjectSerializer.java
<BEF> package org.crazycake.shiro.serializer;
import org.crazycake.shiro.exception.SerializationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.*;
public class ObjectSerializer implements RedisSerializer<Object> {
private static Logger logger = LoggerFactory.getLogger(ObjectSerializer.class);
public static final int BYTE_ARRAY_OUTPUT_STREAM_SIZE = 128;
@Override
public byte[] serialize(Object object) throws SerializationException {
byte[] result = new byte[0];
if (object == null) {
return result;
}
ByteArrayOutputStream byteStream = new ByteArrayOutputStream(BYTE_ARRAY_OUTPUT_STREAM_SIZE);
if (!(object instanceof Serializable)) {
throw new SerializationException("requires a Serializable payload "
+ "but received an object of type [" + object.getClass().getName() + "]");
}
try {
ObjectOutputStream objectOutputStream = new ObjectOutputStream(byteStream);
objectOutputStream.writeObject(object);
objectOutputStream.flush();
result = byteStream.toByteArray();
} catch (IOException e) {
throw new SerializationException("serialize error, object=" + object, e);
}
return result;
}
@Override
public Object deserialize(byte[] bytes) throws SerializationException {
Object result = null;
if (bytes == null || bytes.length == 0) {
return result;
}
try {
ByteArrayInputStream byteStream = new ByteArrayInputStream(bytes);
ObjectInputStream objectInputStream = new ObjectInputStream(byteStream);
result = objectInputStream.readObject();
} catch (IOException e) {
throw new SerializationException("deserialize error", e);
} catch (ClassNotFoundException e) {
throw new SerializationException("deserialize error", e);
}
return result;
}
}
<MSG> Add multi-ClassLoader deserialization support
<DFF> @@ -1,13 +1,18 @@
package org.crazycake.shiro.serializer;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.Serializable;
+
import org.crazycake.shiro.exception.SerializationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-import java.io.*;
-
public class ObjectSerializer implements RedisSerializer<Object> {
- private static Logger logger = LoggerFactory.getLogger(ObjectSerializer.class);
+ private static Logger log = LoggerFactory.getLogger(ObjectSerializer.class);
public static final int BYTE_ARRAY_OUTPUT_STREAM_SIZE = 128;
@@ -45,7 +50,7 @@ public class ObjectSerializer implements RedisSerializer<Object> {
try {
ByteArrayInputStream byteStream = new ByteArrayInputStream(bytes);
- ObjectInputStream objectInputStream = new ObjectInputStream(byteStream);
+ ObjectInputStream objectInputStream = new MultiClassLoaderObjectInputStream(byteStream);
result = objectInputStream.readObject();
} catch (IOException e) {
throw new SerializationException("deserialize error", e);
| 9 | Add multi-ClassLoader deserialization support | 4 | .java | java | mit | alexxiyang/shiro-redis |
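As an aside on the diff in the entry above: the `MultiClassLoaderObjectInputStream` it introduces is not shown here. A stream of that kind typically overrides `resolveClass` to try more than one class loader; the following is a hypothetical sketch of the idea, not the project's actual class:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

// Hypothetical sketch: resolve classes against several class loaders, falling back in order.
public class MultiClassLoaderObjectInputStreamSketch extends ObjectInputStream {

    public MultiClassLoaderObjectInputStreamSketch(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
        String name = desc.getName();
        // 1. thread context class loader (e.g. a web application's class loader)
        try {
            return Class.forName(name, false, Thread.currentThread().getContextClassLoader());
        } catch (ClassNotFoundException ignored) {
            // fall through
        }
        // 2. the class loader that loaded this class
        try {
            return Class.forName(name, false, getClass().getClassLoader());
        } catch (ClassNotFoundException ignored) {
            // fall through
        }
        // 3. default ObjectInputStream behaviour
        return super.resolveClass(desc);
    }
}
```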
1898 | <NME> TestUtils.java
<BEF> ADDFILE
<MSG> - Fix defect about can't clean object after logout
- Add Serializers
- Upgrade Jedis version to 2.7.2
- Upgrade support Redis version to 2.4.6
- Refactor codes
- Add testcases
<DFF> @@ -0,0 +1,12 @@
+package org.crazycake.shiro;
+
+import java.lang.reflect.Field;
+
+public class TestUtils {
+
+ public static void setPrivateField(Object obj, String fieldName, Object value) throws NoSuchFieldException, IllegalAccessException {
+ Field field = obj.getClass().getDeclaredField(fieldName);
+ field.setAccessible(true);
+ field.set(obj, value);
+ }
+}
| 12 | - Fix defect about can't clean object after logout - Add Serializers - Upgrade Jedis version to 2.7.2 - Upgrade support Redis version to 2.4.6 - Refactor codes - Add testcases | 0 | .java | java | mit | alexxiyang/shiro-redis |
1899 | <NME> index.rst
<BEF> 1: Creating a skeleton test
===========================
This is a basic introduction to getting your first hitch test up and running.
Install prerequisites
---------------------
You should have a reasonably up to date Ubuntu, Debian, Arch, Fedora or Mac.
On Ubuntu/Debian::
$ sudo apt-get install python3 python-pip python-virtualenv
$ sudo pip install --upgrade hitch
On Mac OS X::
$ brew install python python3
$ pip install --upgrade hitch virtualenv
On Arch::
$ sudo pacman -Sy python python-virtualenv
$ sudo pip install --upgrade hitch
On Fedora/RHEL/CentOS::
.. note::
:doc:`/faq/what_does_the_init_script_do` instead.
Once the installation has completed, it will ask you a few basic questions about your project,
mostly requiring a yes or no answer and will then generate a skeleton project template for you.
Create your test directory
--------------------------
Create a directory inside the root of your project to put your tests in. For example::
~/yourproject$ mkdir tests
~/yourproject$ cd tests
~/yourproject/tests$
If you already have a tests directory you can call it something else.
Create the hitch environment
----------------------------
To initialize a hitch environment, run hitch init in your tests directory::
~/yourproject/tests$ hitch init
This will:
* Install any necessary system packages required to run hitch.
* Create a .hitch directory, create a python 3 virtualenv in it and install all the necessary packages to run hitch tests there.
* Ask you some basic questions about the project which you are testing.
* Create a skeleton hitch project template for you to use based upon the answers.
The skeleton template will include all of the following:
* :doc:`/glossary/hitchreqs.txt`
* :doc:`/glossary/engine.py`
* tdd.settings (:doc:`/glossary/hitch_settings`)
* ci.settings
* all.settings
* :doc:`/glossary/stub.test`
* README.rst
You might want to take a look around these files. They all try to be self-explanatory.
Running your first test
-----------------------
You can now run the stub test. Try running it in test driven development mode::
$ hitch test stub.test --settings tdd.settings
The first time you run this command it *may take a while* (up to 25 minutes depending upon what you answered).
.. note::
:doc:`/faq/why_does_the_first_test_run_take_so_long`
This might be a good time to take a break.
While you're at it, subscribe to the `hitch subreddit <https://reddit.com/r/hitchtest>`_ and
`twitter feed <https://twitter.com/testhitch>`_ for updates and news.
Back?
-----
.. note::
If the stub test failed, please `raise an issue <https://github.com/hitchtest/hitch/issues/new>`_.
Once the test run is done setting up, if there were no problems, you should see this::
Python 3.4.3 (default, Jul 28 2015, 18:20:59)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
SUCCESS
In [1]:
This is the interactive prompt that appears during the pause step. This is an :doc:`/glossary/ipython`
prompt that can be used to interact with your app, inspect logs and try out test
steps.
The components you selected during the set up should also be running. For example, if you
chose postgres, the latest version of postgres will have been installed in the ~/.hitchpkg
directory and it will be running and accessible.
To exit, simply hit ctrl-D.
This will shut everything down and then quit.
You're now ready to start writing new tests.
Happy testing!
.. note::
Was there anything that went wrong or was confusing? Please tell us! Help with :doc:`/misc/clarifying_documentation`.
Further reading
---------------
* :doc:`/howto/web_applications`
* :doc:`/howto/command_line_applications`
Advanced topics
---------------
* :doc:`/howto/test_driven_development`
* :doc:`/howto/parameterize_test_cases`
* :doc:`/howto/external_apis`
* :doc:`/howto/continuous_integration`
Plugin Documentation
--------------------
.. toctree::
:glob:
:maxdepth: 1
/plugins/*
.. note::
Need tutorials for any other topics? `Please raise a ticket <https://github.com/hitchtest/hitch/issues/new>`_.
<MSG> DOCS : Fixes for bad links in documentation
<DFF> @@ -1,5 +1,5 @@
-1: Creating a skeleton test
-===========================
+Getting started quickly with Hitch
+==================================
This is a basic introduction to getting your first hitch test up and running.
@@ -28,7 +28,7 @@ If you don't, run the init script by copying and pasting the following line::
.. note::
- :doc:`/faq/what_does_the_init_script_do` instead.
+ This can be used as a guide to instal hitch instead: :doc:`/faq/what_does_the_init_script_do`
Once the installation has completed, it will ask you a few basic questions about your project,
mostly requiring a yes or no answer and will then generate a skeleton project template for you.
| 3 | DOCS : Fixes for bad links in documentation | 3 | .rst | rst | agpl-3.0 | hitchtest/hitch |