id (int64, 20-338k) | vocab_size (int64, 2-671) | ast_levels (int64, 4-32) | nloc (int64, 1-451) | n_ast_nodes (int64, 12-5.6k) | n_identifiers (int64, 1-186) | n_ast_errors (int64, 0-10) | n_words (int64, 2-2.17k) | n_whitespaces (int64, 2-13.8k) | fun_name (string, 2-73 chars) | commit_message (string, 51-15.3k chars) | url (string, 31-59 chars) | code (string, 51-31k chars) | ast_errors (string, 0-1.46k chars) | token_counts (int64, 6-3.32k) | file_name (string, 5-56 chars) | language (string, 1 class) | path (string, 7-134 chars) | commit_id (string, 40 chars) | repo (string, 3-28 chars) | complexity (int64, 1-153) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
259,478 | 60 | 10 | 20 | 271 | 27 | 0 | 93 | 270 | plot | FEA Add DecisionBoundaryDisplay (#16061)
Co-authored-by: Guillaume Lemaitre <[email protected]>
Co-authored-by: Olivier Grisel <[email protected]>
Co-authored-by: Loïc Estève <[email protected]> | https://github.com/scikit-learn/scikit-learn.git | def plot(self, plot_method="contourf", ax=None, xlabel=None, ylabel=None, **kwargs):
check_matplotlib_support("DecisionBoundaryDisplay.plot")
import matplotlib.pyplot as plt # noqa
if plot_method not in ("contourf", "contour", "pcolormesh"):
raise ValueError(
"plot_method must be 'contourf', 'contour', or 'pcolormesh'"
)
if ax is None:
_, ax = plt.subplots()
plot_func = getattr(ax, plot_method)
self.surface_ = plot_func(self.xx0, self.xx1, self.response, **kwargs)
if xlabel is not None or not ax.get_xlabel():
xlabel = self.xlabel if xlabel is None else xlabel
ax.set_xlabel(xlabel)
if ylabel is not None or not ax.get_ylabel():
ylabel = self.ylabel if ylabel is None else ylabel
ax.set_ylabel(ylabel)
self.ax_ = ax
self.figure_ = ax.figure
return self
| 169 | decision_boundary.py | Python | sklearn/inspection/_plot/decision_boundary.py | d400723a2112f15c5d5b4d40dfac2ed8a19cca5c | scikit-learn | 9 |
|
299,567 | 6 | 6 | 3 | 22 | 4 | 0 | 6 | 20 | name | Add application credentials platform (#69148)
* Initial developer credentials scaffolding
- Support websocket list/add/delete
- Add developer credentials protocol from yaml config
- Handle OAuth credential registration and de-registration
- Tests for websocket and integration based registration
* Fix pydoc text
* Remove translations and update owners
* Update homeassistant/components/developer_credentials/__init__.py
Co-authored-by: Paulus Schoutsen <[email protected]>
* Update homeassistant/components/developer_credentials/__init__.py
Co-authored-by: Paulus Schoutsen <[email protected]>
* Remove _async_get_developer_credential
* Rename to application credentials platform
* Fix race condition and add import support
* Increase code coverage (92%)
* Increase test coverage 93%
* Increase test coverage (94%)
* Increase test coverage (97%)
* Increase test coverage (98%)
* Increase test coverage (99%)
* Increase test coverage (100%)
* Remove http router frozen comment
* Remove auth domain override on import
* Remove debug statement
* Don't import the same client id multiple times
* Add auth dependency for local oauth implementation
* Revert older oauth2 changes from merge
* Update homeassistant/components/application_credentials/__init__.py
Co-authored-by: Martin Hjelmare <[email protected]>
* Move config credential import to its own fixture
* Override the mock_application_credentials_integration fixture instead per test
* Update application credentials
* Add dictionary typing
* Use f-strings as per feedback
* Add additional structure needed for an MVP application credential
Add additional structure needed for an MVP, including a target
component Xbox
* Add websocket to list supported integrations for frontend selector
* Application credentials config
* Import xbox credentials
* Remove unnecessary async calls
* Update script/hassfest/application_credentials.py
Co-authored-by: Martin Hjelmare <[email protected]>
* Update script/hassfest/application_credentials.py
Co-authored-by: Martin Hjelmare <[email protected]>
* Update script/hassfest/application_credentials.py
Co-authored-by: Martin Hjelmare <[email protected]>
* Update script/hassfest/application_credentials.py
Co-authored-by: Martin Hjelmare <[email protected]>
* Import credentials with a fixed auth domain
Resolve an issue with compatibility of existing config entries when importing
client credentials
Co-authored-by: Paulus Schoutsen <[email protected]>
Co-authored-by: Martin Hjelmare <[email protected]> | https://github.com/home-assistant/core.git | def name(self) -> str:
return self.client_id
| 12 | __init__.py | Python | homeassistant/components/application_credentials/__init__.py | 00b5d30e24dccebcc61839be7cf6ca9d87b2a3de | core | 1 |
|
300,691 | 9 | 8 | 3 | 45 | 6 | 0 | 9 | 18 | test_timestamp_to_utc | Sync event timed_fired and the context ulid time (#71854) | https://github.com/home-assistant/core.git | def test_timestamp_to_utc():
utc_now = dt_util.utcnow()
assert dt_util.utc_to_timestamp(utc_now) == utc_now.timestamp()
| 25 | test_dt.py | Python | tests/util/test_dt.py | ebce5660e3f80ceb95c21d8fe231f792ad0dfd7f | core | 1 |
|
93,712 | 29 | 13 | 13 | 167 | 20 | 0 | 31 | 178 | request_hook | ref(Jira): Split Jira Cloud and Jira Server (#37034)
* Split Jira Cloud and Jira Server | https://github.com/getsentry/sentry.git | def request_hook(self, method, path, data, params, **kwargs):
if "auth" not in kwargs:
kwargs["auth"] = OAuth1(
client_key=self.credentials["consumer_key"],
rsa_key=self.credentials["private_key"],
resource_owner_key=self.credentials["access_token"],
resource_owner_secret=self.credentials["access_token_secret"],
signature_method=SIGNATURE_RSA,
signature_type="auth_header",
)
request_spec = kwargs.copy()
request_spec.update(dict(method=method, path=path, data=data, params=params))
return request_spec
| 107 | client.py | Python | src/sentry/integrations/jira_server/client.py | 2fbf550ec05c8501cbc9eca62e73526e717dcbdf | sentry | 2 |
|
64,175 | 16 | 9 | 6 | 77 | 9 | 0 | 19 | 13 | update_product_bundle_rate | refactor: Price fetching and updation logic
- fetch price from price list, use item master valuation rate as fallback for packed item
- use an item code, item row name map to maintain cumulative price
- reset table if item in a row is replaced
- loop over items table only to set price, lesser iterations than packed items table | https://github.com/frappe/erpnext.git | def update_product_bundle_rate(parent_items_price, pi_row):
key = (pi_row.parent_item, pi_row.parent_detail_docname)
rate = parent_items_price.get(key)
if not rate:
parent_items_price[key] = 0.0
parent_items_price[key] += flt(pi_row.rate)
| 50 | packed_item.py | Python | erpnext/stock/doctype/packed_item/packed_item.py | 2f4d266ee132e34d81034321f47a0aca96ee1774 | erpnext | 2 |
|
19,508 | 22 | 11 | 6 | 96 | 14 | 0 | 22 | 48 | download_file | Code reorg utils into utils module reduces complexity (#4990)
* Split apart the massive utils.py into a utils module | https://github.com/pypa/pipenv.git | def download_file(url, filename, max_retries=1):
r = _get_requests_session(max_retries).get(url, stream=True)
if not r.ok:
raise OSError("Unable to download file")
with open(filename, "wb") as f:
f.write(r.content)
| 56 | internet.py | Python | pipenv/utils/internet.py | 3387881a6d4fc2d8bdc0f05c484cb2f7222acfb8 | pipenv | 2 |
|
269,992 | 21 | 14 | 11 | 81 | 11 | 0 | 24 | 177 | _check_counts | Reformatting the codebase with black.
PiperOrigin-RevId: 450093126 | https://github.com/keras-team/keras.git | def _check_counts(self, counter, expected_counts):
for method_name, expected_count in expected_counts.items():
self.assertEqual(
counter.method_counts[method_name],
expected_count,
msg="For method {}: expected {}, got: {}".format(
method_name,
expected_count,
counter.method_counts[method_name],
),
)
| 54 | callbacks_test.py | Python | keras/callbacks_test.py | 84afc5193d38057e2e2badf9c889ea87d80d8fbf | keras | 2 |
|
130,577 | 14 | 8 | 22 | 58 | 11 | 0 | 14 | 42 | to_modin | [CI] Format Python code with Black (#21975)
See #21316 and #21311 for the motivation behind these changes. | https://github.com/ray-project/ray.git | def to_modin(self) -> "modin.DataFrame":
from modin.distributed.dataframe.pandas.partitions import from_partitions
pd_objs = self.to_pandas_refs()
return from_partitions(pd_objs, axis=0)
| 36 | dataset.py | Python | python/ray/data/dataset.py | 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | ray | 1 |
|
178,942 | 31 | 10 | 19 | 112 | 16 | 0 | 32 | 156 | addMacOSCodeSignature | macOS: Add support for specifying signing identity and access to protected resources. | https://github.com/Nuitka/Nuitka.git | def addMacOSCodeSignature(filenames):
# Weak signing.
identity = getMacOSSigningIdentity()
command = [
"codesign",
"-s",
identity,
"--force",
"--deep",
"--preserve-metadata=entitlements",
]
assert type(filenames) is not str
command.extend(filenames)
with withMadeWritableFileMode(filenames):
executeToolChecked(
logger=postprocessing_logger,
command=command,
absence_message=macos_codesign_usage,
stderr_filter=_filterSigntoolErrorOutput,
)
| 66 | Signing.py | Python | nuitka/utils/Signing.py | 51ca460bd8c382cc165cbb1325e7cb65895d1a0b | Nuitka | 1 |
|
156,127 | 26 | 18 | 15 | 139 | 14 | 0 | 36 | 173 | functions_of | absolufy-imports - No relative - PEP8 (#8796)
Conversation in https://github.com/dask/distributed/issues/5889 | https://github.com/dask/dask.git | def functions_of(task):
funcs = set()
work = [task]
sequence_types = {list, tuple}
while work:
new_work = []
for task in work:
if type(task) in sequence_types:
if istask(task):
funcs.add(unwrap_partial(task[0]))
new_work.extend(task[1:])
else:
new_work.extend(task)
work = new_work
return funcs
| 84 | optimization.py | Python | dask/optimization.py | cccb9d8d8e33a891396b1275c2448c352ef40c27 | dask | 5 |
|
215,072 | 38 | 18 | 24 | 286 | 29 | 1 | 50 | 265 | test_minion_module_refresh_beacons_refresh | Fix test cases with PermissionError on /var/cache/salt
When running the test cases without root permission, some test cases fail:
```
$ python3 -m pytest -ra tests/pytests/unit/state/test_state_compiler.py tests/pytests/unit/test_minion.py
[...]
FAILED tests/pytests/unit/state/test_state_compiler.py::test_render_requisite_require_disabled - PermissionError: [Errno 13] Permission denied: '/var/cache/salt'
FAILED tests/pytests/unit/state/test_state_compiler.py::test_render_requisite_require_in_disabled - PermissionError: [Errno 13] Permission denied: '/var/cache/salt'
FAILED tests/pytests/unit/test_minion.py::test_minion_module_refresh - PermissionError: [Errno 13] Permission denied: '/var/cache/salt'
FAILED tests/pytests/unit/test_minion.py::test_minion_module_refresh_beacons_refresh - PermissionError: [Errno 13] Permission denied: '/var/cache/salt'
```
Fix these test cases by using a temporary directory as cache directory.
Signed-off-by: Benjamin Drung <[email protected]> | https://github.com/saltstack/salt.git | def test_minion_module_refresh_beacons_refresh(tmp_path):
with patch("salt.minion.Minion.ctx", MagicMock(return_value={})), patch(
"salt.utils.process.SignalHandlingProcess.start",
MagicMock(return_value=True),
), patch(
"salt.utils.process.SignalHandlingProcess.join",
MagicMock(return_value=True),
):
try:
mock_opts = salt.config.DEFAULT_MINION_OPTS.copy()
mock_opts["cachedir"] = str(tmp_path)
minion = salt.minion.Minion(
mock_opts,
io_loop=salt.ext.tornado.ioloop.IOLoop(),
)
minion.schedule = salt.utils.schedule.Schedule(mock_opts, {}, returners={})
assert not hasattr(minion, "beacons")
minion.module_refresh()
assert hasattr(minion, "beacons")
assert hasattr(minion.beacons, "beacons")
assert "service.beacon" in minion.beacons.beacons
minion.destroy()
finally:
minion.destroy()
@pytest.mark.slow_test | @pytest.mark.slow_test | 164 | test_minion.py | Python | tests/pytests/unit/test_minion.py | fae21e4698d9bb45a407345e7dff5ce3b69f799d | salt | 2 |
125,024 | 6 | 7 | 3 | 23 | 4 | 0 | 6 | 20 | has_batch | [Datasets] [Local Shuffle - 1/N] Add local shuffling option. (#26094)
Co-authored-by: Eric Liang <[email protected]>
Co-authored-by: matthewdeng <[email protected]>
Co-authored-by: Matthew Deng <[email protected]>
Co-authored-by: Richard Liaw <[email protected]> | https://github.com/ray-project/ray.git | def has_batch(self) -> bool:
raise NotImplementedError()
| 12 | batcher.py | Python | python/ray/data/_internal/batcher.py | 864af14f410ab12c7553332dd3a62e716f24a667 | ray | 1 |
|
300,189 | 21 | 12 | 12 | 78 | 15 | 0 | 25 | 89 | test_supported_features | Add ws66i core integration (#56094)
* Add ws66i core integration
* Remove all ws66i translations
* Update ws66i unit tests to meet minimum code coverage
* Update ws66i based on @bdraco review
* General improvements after 2nd PR review
* Disable entities if amp shutoff, set default source names, set 30sec polling
* Add _attr_ and change async_on_unload
* Improve entity generation
* Implement coordinator
* Made options fields required, retry connection on failed attempts, use ZoneStatus for attributes
* Refactor WS66i entity properties, raise HomeAssistantError on restore service if no snapshot
* Update to pyws66i v1.1
* Add quality scale of silver to manifest
* Update config_flow test | https://github.com/home-assistant/core.git | async def test_supported_features(hass):
await _setup_ws66i(hass, MockWs66i())
state = hass.states.get(ZONE_1_ID)
assert (
SUPPORT_VOLUME_MUTE
| SUPPORT_VOLUME_SET
| SUPPORT_VOLUME_STEP
| SUPPORT_TURN_ON
| SUPPORT_TURN_OFF
| SUPPORT_SELECT_SOURCE
== state.attributes["supported_features"]
)
| 46 | test_media_player.py | Python | tests/components/ws66i/test_media_player.py | 5e737bfe4fbc5a724f5fdf04ea9319c2224cb114 | core | 1 |
|
276,189 | 25 | 9 | 8 | 122 | 16 | 0 | 26 | 89 | test_trainable_custom_model_false | Reformatting the codebase with black.
PiperOrigin-RevId: 450093126 | https://github.com/keras-team/keras.git | def test_trainable_custom_model_false(self):
# Set all layers to *not* be trainable.
model = test_utils.SmallSubclassMLP(1, 4, trainable=False)
model.compile(loss="mse", optimizer="rmsprop")
self._train_model(model, use_dataset=False)
loaded = self._save_and_load(model)
self._test_evaluation(model, loaded)
self.assertEmpty(model.trainable_variables)
self.assertEmpty(loaded.trainable_variables)
| 74 | saved_model_test.py | Python | keras/saving/saved_model/saved_model_test.py | 84afc5193d38057e2e2badf9c889ea87d80d8fbf | keras | 1 |
|
249,238 | 17 | 10 | 15 | 93 | 13 | 0 | 18 | 126 | test_requester_is_no_admin | Use literals in place of `HTTPStatus` constants in tests (#13479)
Replace
- `HTTPStatus.NOT_FOUND`
- `HTTPStatus.FORBIDDEN`
- `HTTPStatus.UNAUTHORIZED`
- `HTTPStatus.CONFLICT`
- `HTTPStatus.CREATED`
Signed-off-by: Dirk Klimpel <[email protected]> | https://github.com/matrix-org/synapse.git | def test_requester_is_no_admin(self) -> None:
channel = self.make_request(
"GET",
self.url,
access_token=self.other_user_tok,
)
self.assertEqual(
403,
channel.code,
msg=channel.json_body,
)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
| 59 | test_event_reports.py | Python | tests/rest/admin/test_event_reports.py | 1595052b2681fb86c1c1b9a6028c1bc0d38a2e4b | synapse | 1 |
|
256,253 | 134 | 15 | 45 | 548 | 46 | 0 | 195 | 628 | tokenize_batch_question_answering | Apply black formatting (#2115)
* Testing black on ui/
* Applying black on docstores
* Add latest docstring and tutorial changes
* Create a single GH action for Black and docs to reduce commit noise to the minimum, slightly refactor the OpenAPI action too
* Remove comments
* Relax constraints on pydoc-markdown
* Split temporary black from the docs. Pydoc-markdown was obsolete and needs a separate PR to upgrade
* Fix a couple of bugs
* Add a type: ignore that was missing somehow
* Give path to black
* Apply Black
* Apply Black
* Relocate a couple of type: ignore
* Update documentation
* Make Linux CI run after applying Black
* Triggering Black
* Apply Black
* Remove dependency, does not work well
* Remove manually double trailing commas
* Update documentation
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> | https://github.com/deepset-ai/haystack.git | def tokenize_batch_question_answering(pre_baskets, tokenizer, indices):
assert len(indices) == len(pre_baskets)
assert tokenizer.is_fast, (
"Processing QA data is only supported with fast tokenizers for now.\n"
"Please load Tokenizers with 'use_fast=True' option."
)
baskets = []
# # Tokenize texts in batch mode
texts = [d["context"] for d in pre_baskets]
tokenized_docs_batch = tokenizer.batch_encode_plus(
texts, return_offsets_mapping=True, return_special_tokens_mask=True, add_special_tokens=False, verbose=False
)
# Extract relevant data
tokenids_batch = tokenized_docs_batch["input_ids"]
offsets_batch = []
for o in tokenized_docs_batch["offset_mapping"]:
offsets_batch.append(np.array([x[0] for x in o]))
start_of_words_batch = []
for e in tokenized_docs_batch.encodings:
start_of_words_batch.append(_get_start_of_word_QA(e.words))
for i_doc, d in enumerate(pre_baskets):
document_text = d["context"]
# # Tokenize questions one by one
for i_q, q in enumerate(d["qas"]):
question_text = q["question"]
tokenized_q = tokenizer.encode_plus(
question_text, return_offsets_mapping=True, return_special_tokens_mask=True, add_special_tokens=False
)
# Extract relevant data
question_tokenids = tokenized_q["input_ids"]
question_offsets = [x[0] for x in tokenized_q["offset_mapping"]]
question_sow = _get_start_of_word_QA(tokenized_q.encodings[0].words)
external_id = q["id"]
# The internal_id depends on unique ids created for each process before forking
internal_id = f"{indices[i_doc]}-{i_q}"
raw = {
"document_text": document_text,
"document_tokens": tokenids_batch[i_doc],
"document_offsets": offsets_batch[i_doc],
"document_start_of_word": start_of_words_batch[i_doc],
"question_text": question_text,
"question_tokens": question_tokenids,
"question_offsets": question_offsets,
"question_start_of_word": question_sow,
"answers": q["answers"],
}
# TODO add only during debug mode (need to create debug mode)
raw["document_tokens_strings"] = tokenized_docs_batch.encodings[i_doc].tokens
raw["question_tokens_strings"] = tokenized_q.encodings[0].tokens
baskets.append(SampleBasket(raw=raw, id_internal=internal_id, id_external=external_id, samples=None))
return baskets
| 331 | tokenization.py | Python | haystack/modeling/model/tokenization.py | a59bca366174d9c692fa19750c24d65f47660ef7 | haystack | 8 |
|
81,351 | 22 | 13 | 10 | 102 | 17 | 0 | 27 | 69 | test_activity_stream_related | Optimize object creation by getting fewer empty relationships (#12508)
This optimizes the ActivityStreamSerializer by only getting many-to-many
relationships that are speculatively non-empty
based on information we have in other fields
We run this every time we create an object as an on_commit action
so it is expected this will have a major impact on response times for launching jobs | https://github.com/ansible/awx.git | def test_activity_stream_related():
serializer_related = set(
ActivityStream._meta.get_field(field_name).related_model
for field_name, stuff in ActivityStreamSerializer()._local_summarizable_fk_fields(None)
if hasattr(ActivityStream, field_name)
)
models = set(activity_stream_registrar.models)
models.remove(Setting)
missing_models = models - serializer_related
assert not missing_models
| 62 | test_activity_stream_serializer.py | Python | awx/main/tests/unit/api/serializers/test_activity_stream_serializer.py | 2d310dc4e50c6f7cd298f9fb8af69da258cd9ea6 | awx | 3 |
|
168,320 | 36 | 13 | 11 | 92 | 8 | 0 | 41 | 153 | _validate_scalar | ENH: Make categories setitem error more readable (#48087) | https://github.com/pandas-dev/pandas.git | def _validate_scalar(self, fill_value):
if is_valid_na_for_dtype(fill_value, self.categories.dtype):
fill_value = -1
elif fill_value in self.categories:
fill_value = self._unbox_scalar(fill_value)
else:
raise TypeError(
"Cannot setitem on a Categorical with a new "
f"category ({fill_value}), set the categories first"
) from None
return fill_value
# -------------------------------------------------------------
| 52 | categorical.py | Python | pandas/core/arrays/categorical.py | 06dd5dab93ff4a55377309c0315aa767fdf9937e | pandas | 3 |
|
153,056 | 26 | 10 | 6 | 105 | 16 | 0 | 30 | 79 | reduce | REFACTOR-#2656: Update modin to fit algebra (code only) (#3717)
Co-authored-by: Yaroslav Igoshev <[email protected]>
Co-authored-by: Vasily Litvinov <[email protected]>
Co-authored-by: Alexey Prutskov <[email protected]>
Co-authored-by: Devin Petersohn <[email protected]>
Signed-off-by: Rehan Durrani <[email protected]> | https://github.com/modin-project/modin.git | def reduce(self, func):
keys = [partition.get_key() for partition in self.partitions]
gpu = self.partitions[0].get_gpu_manager()
# FIXME: Method `gpu_manager.reduce_key_list` does not exist.
key = gpu.reduce_key_list.remote(keys, func)
key = ray.get(key)
return cuDFOnRayDataframePartition(gpu_manager=gpu, key=key)
| 66 | axis_partition.py | Python | modin/core/execution/ray/implementations/cudf_on_ray/partitioning/axis_partition.py | 58bbcc37477866d19c8b092a0e1974a4f0baa586 | modin | 2 |
|
20,189 | 6 | 6 | 3 | 22 | 4 | 0 | 6 | 20 | site_config_dir | check point progress on only bringing in pip==22.0.4 (#4966)
* vendor in pip==22.0.4
* updating vendor packaging version
* update pipdeptree to fix pipenv graph with new version of pip.
* Vendoring of pip-shims 0.7.0
* Vendoring of requirementslib 1.6.3
* Update pip index safety restrictions patch for pip==22.0.4
* Update patches
* exclude pyptoject.toml from black to see if that helps.
* Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4 | https://github.com/pypa/pipenv.git | def site_config_dir(self) -> str:
return self.user_config_dir
| 12 | android.py | Python | pipenv/patched/notpip/_vendor/platformdirs/android.py | f3166e673fe8d40277b804d35d77dcdb760fc3b3 | pipenv | 1 |
|
249,299 | 10 | 10 | 17 | 39 | 5 | 0 | 10 | 31 | test_trace_decorator_async | Allow use of both `@trace` and `@tag_args` stacked on the same function (#13453)
```py
@trace
@tag_args
async def get_oldest_event_ids_with_depth_in_room(...)
...
```
Before this PR, you would see a warning in the logs and the span was not exported:
```
2022-08-03 19:11:59,383 - synapse.logging.opentracing - 835 - ERROR - GET-0 - @trace may not have wrapped EventFederationWorkerStore.get_oldest_event_ids_with_depth_in_room correctly! The function is not async but returned a coroutine.
``` | https://github.com/matrix-org/synapse.git | def test_trace_decorator_async(self) -> None:
reactor = MemoryReactorClock()
with LoggingContext("root context"):
| 94 | test_opentracing.py | Python | tests/logging/test_opentracing.py | 1b09b0832ed56bfc994deadb3315755d0c20433b | synapse | 2 |
|
189,494 | 99 | 18 | 52 | 612 | 38 | 0 | 182 | 888 | _text2settings | Hide more private methods from the docs. (#2468)
* hide privs from text_mobject.py
* hide privs from tex_mobject.py
* hide privs from code_mobject.py
* hide privs from svg_mobject.py
* remove SVGPath and utils from __init__.py
* don't import string_to_numbers
* hide privs from geometry.py
* hide privs from matrix.py
* hide privs from numbers.py
* hide privs from three_dimensions.py
* forgot underscore under set_stroke_width_from_length
* there were more i missed
* unhidea method that was used in docs
* forgot other text2hash
* remove svg_path from docs | https://github.com/ManimCommunity/manim.git | def _text2settings(self):
t2xs = [
(self.t2f, "font"),
(self.t2s, "slant"),
(self.t2w, "weight"),
(self.t2c, "color"),
]
setting_args = {arg: getattr(self, arg) for _, arg in t2xs}
settings = self._get_settings_from_t2xs(t2xs)
settings.extend(self._get_settings_from_gradient(setting_args))
# Handle overlaps
settings.sort(key=lambda setting: setting.start)
for index, setting in enumerate(settings):
if index + 1 == len(settings):
break
next_setting = settings[index + 1]
if setting.end > next_setting.start:
new_setting = self._merge_settings(setting, next_setting, setting_args)
new_index = index + 1
while (
new_index < len(settings)
and settings[new_index].start < new_setting.start
):
new_index += 1
settings.insert(new_index, new_setting)
# Set all text settings (default font, slant, weight)
temp_settings = settings.copy()
start = 0
for setting in settings:
if setting.start != start:
temp_settings.append(TextSetting(start, setting.start, **setting_args))
start = setting.end
if start != len(self.text):
temp_settings.append(TextSetting(start, len(self.text), **setting_args))
settings = sorted(temp_settings, key=lambda setting: setting.start)
if re.search(r"\n", self.text):
line_num = 0
for start, end in self._find_indexes("\n", self.text):
for setting in settings:
if setting.line_num == -1:
setting.line_num = line_num
if start < setting.end:
line_num += 1
new_setting = copy.copy(setting)
setting.end = end
new_setting.start = end
new_setting.line_num = line_num
settings.append(new_setting)
settings.sort(key=lambda setting: setting.start)
break
for setting in settings:
if setting.line_num == -1:
setting.line_num = 0
return settings
| 389 | text_mobject.py | Python | manim/mobject/svg/text_mobject.py | 902e7eb4f0147b5882a613b67467e38a1d47f01e | manim | 17 |
|
212,626 | 60 | 13 | 23 | 312 | 36 | 0 | 104 | 401 | update | Docstring changes for all Element.update methods to indicate that the change will not be visible until Window.refresh or Window.read is called | https://github.com/PySimpleGUI/PySimpleGUI.git | def update(self, current_count=None, max=None, bar_color=None, visible=None):
if not self._widget_was_created(): # if widget hasn't been created yet, then don't allow
return False
if self.ParentForm.TKrootDestroyed:
return False
if visible is False:
self.TKProgressBar.TKProgressBarForReal.pack_forget()
elif visible is True:
self.TKProgressBar.TKProgressBarForReal.pack(padx=self.pad_used[0], pady=self.pad_used[1])
if visible is not None:
self._visible = visible
if bar_color is not None:
bar_color = _simplified_dual_color_to_tuple(bar_color, default=DEFAULT_PROGRESS_BAR_COLOR)
self.BarColor = bar_color
style = ttk.Style()
style.configure(self.ttk_style_name, background=bar_color[0], troughcolor=bar_color[1])
if current_count is not None:
self.TKProgressBar.Update(current_count, max=max)
try:
self.ParentForm.TKroot.update()
except:
# Window._DecrementOpenCount()
# _my_windows.Decrement()
return False
return True
Update = update
UpdateBar = update_bar
PBar = ProgressBar
Prog = ProgressBar
Progress = ProgressBar
# ---------------------------------------------------------------------- #
# Image #
# ---------------------------------------------------------------------- # | 182 | PySimpleGUI.py | Python | PySimpleGUI.py | 9c80a060e2463bcf4534d388f48003b538deb64b | PySimpleGUI | 9 |
|
112,868 | 16 | 10 | 5 | 80 | 11 | 0 | 18 | 53 | _earlystop_notify_tuner | Support multiple HPO experiments in one process (#4855) | https://github.com/microsoft/nni.git | def _earlystop_notify_tuner(self, data):
_logger.debug('Early stop notify tuner data: [%s]', data)
data['type'] = MetricType.FINAL
data['value'] = dump(data['value'])
self.enqueue_command(CommandType.ReportMetricData, data)
| 46 | msg_dispatcher.py | Python | nni/runtime/msg_dispatcher.py | 98c1a77f61900d486f46d284c49fb65675dbee6a | nni | 1 |
|
258,603 | 21 | 11 | 9 | 104 | 16 | 0 | 25 | 56 | test_feature_agglomeration_feature_names_out | ENH Adds get_feature_names to cluster module (#22255) | https://github.com/scikit-learn/scikit-learn.git | def test_feature_agglomeration_feature_names_out():
X, _ = make_blobs(n_features=6, random_state=0)
agglo = FeatureAgglomeration(n_clusters=3)
agglo.fit(X)
n_clusters = agglo.n_clusters_
names_out = agglo.get_feature_names_out()
assert_array_equal(
[f"featureagglomeration{i}" for i in range(n_clusters)], names_out
)
| 61 | test_feature_agglomeration.py | Python | sklearn/cluster/tests/test_feature_agglomeration.py | 5219b6f479d79bf201ccbc6210607d6190ebbed4 | scikit-learn | 2 |
|
154,139 | 70 | 16 | 23 | 273 | 16 | 0 | 147 | 518 | _validate_axes_lengths | FEAT-#4725: Make index and columns lazy in Modin DataFrame (#4726)
Co-authored-by: Mahesh Vashishtha <[email protected]>
Co-authored-by: Yaroslav Igoshev <[email protected]>
Signed-off-by: Vasily Litvinov <[email protected]> | https://github.com/modin-project/modin.git | def _validate_axes_lengths(self):
if self._row_lengths_cache is not None and len(self.index) > 0:
# An empty frame can have 0 rows but a nonempty index. If the frame
# does have rows, the number of rows must equal the size of the
# index.
num_rows = sum(self._row_lengths_cache)
if num_rows > 0:
ErrorMessage.catch_bugs_and_request_email(
num_rows != len(self._index_cache),
f"Row lengths: {num_rows} != {len(self._index_cache)}",
)
ErrorMessage.catch_bugs_and_request_email(
any(val < 0 for val in self._row_lengths_cache),
f"Row lengths cannot be negative: {self._row_lengths_cache}",
)
if self._column_widths_cache is not None and len(self.columns) > 0:
# An empty frame can have 0 column but a nonempty column index. If
# the frame does have columns, the number of columns must equal the
# size of the columns.
num_columns = sum(self._column_widths_cache)
if num_columns > 0:
ErrorMessage.catch_bugs_and_request_email(
num_columns != len(self._columns_cache),
f"Column widths: {num_columns} != {len(self._columns_cache)}",
)
ErrorMessage.catch_bugs_and_request_email(
any(val < 0 for val in self._column_widths_cache),
f"Column widths cannot be negative: {self._column_widths_cache}",
)
| 142 | dataframe.py | Python | modin/core/dataframe/pandas/dataframe/dataframe.py | adb16a17f721048005520388080627975c6852d8 | modin | 9 |
|
42,592 | 9 | 17 | 3 | 81 | 9 | 0 | 9 | 34 | load_wiki_q | Support both iso639-3 codes and BCP-47 language tags (#3060)
* Add support for iso639-3 language codes
* Add support for retired language codes
* Move langnames.py to the top-level
* Add langcode() function
* Add iso639retired dictionary
* Improve wrapper functions
* Add module docstring with doctest
* Add 2-letter language codes
* Add regular expression check
* Improve inverse lookup of retired codes
* Support BCP-47
* Avoid deprecated langcodes
* Set stack level for warnings to warn on the langname call
Now it throws e.g.
```
...\nltk_3060.py:9: UserWarning: Shortening 'smo' to 'sm'
print(f"{lang}: {langname(code)}")
```
Rather than
```
...\nltk\langnames.py:64: UserWarning: Shortening zha to za
warn(f"Shortening {code} to {code2}")
```
* Dict key membership is equivalent to dict membership
* Resolve bug: subtag -> tag
* Capitalize BCP47 in CorpusReader name
* Reimplement removed type hint changes from #3081
Co-authored-by: Tom Aarsen <[email protected]> | https://github.com/nltk/nltk.git | def load_wiki_q(self):
with self.open("cldr/tools-cldr-rdf-external-entityToCode.tsv") as fp:
self.wiki_q = self.wiki_dict(fp.read().strip().split("\n")[1:])
| 43 | bcp47.py | Python | nltk/corpus/reader/bcp47.py | f019fbedb3d2b6a2e6b58ec1b38db612b106568b | nltk | 1 |
|
168,252 | 12 | 12 | 13 | 69 | 13 | 0 | 12 | 73 | _start | PERF cache find_stack_level (#48023)
cache stacklevel | https://github.com/pandas-dev/pandas.git | def _start(self) -> int:
warnings.warn(
self._deprecation_message.format("_start", "start"),
FutureWarning,
stacklevel=find_stack_level(inspect.currentframe()),
)
return self.start
| 41 | range.py | Python | pandas/core/indexes/range.py | 2f8d0a36703e81e4dca52ca9fe4f58c910c1b304 | pandas | 1 |
|
168,718 | 7 | 10 | 32 | 34 | 7 | 0 | 7 | 13 | is_float_dtype | DOC: Remove mention that is_float_dtype is private (#48156) | https://github.com/pandas-dev/pandas.git | def is_float_dtype(arr_or_dtype) -> bool:
return _is_dtype_type(arr_or_dtype, classes(np.floating))
| 20 | common.py | Python | pandas/core/dtypes/common.py | 6d458eefdf9ae17dceff39471853e4af136ab495 | pandas | 1 |
|
215,754 | 61 | 13 | 31 | 359 | 15 | 0 | 104 | 277 | acl_create | [merge jam] Master port 49261 - consul modules (#58101)
* add consul states and acl function present/absent
* add consul to states doc index
* refact/fix consul states
* fix doc, fix states
* fix name parameter for acl_changes
* fixing pylint errors
* small changes after review by @rallytime
* fix header count
* Update consul.py
* fix acl_exists description, fix when both id and name are missing
* Adding some tests for consul module and consul state module. Some additional fixes in the consul module.
* Fixing tests.
* Fixing failing tests on Windows.
* Adding changelog.
* Adding some tests for consul module and consul state module. Some additional fixes in the consul module.
* moving tests to pytest.
* manual black changes.
* One more manual black change.
* fixing formatting. Adding versionadded for state module.
Co-authored-by: Rémi Jouannet <[email protected]>
Co-authored-by: Mike Place <[email protected]>
Co-authored-by: Daniel Wozniak <[email protected]>
Co-authored-by: Wayne Werner <[email protected]> | https://github.com/saltstack/salt.git | def acl_create(consul_url=None, token=None, **kwargs):
ret = {}
data = {}
if not consul_url:
consul_url = _get_config()
if not consul_url:
log.error("No Consul URL found.")
ret["message"] = "No Consul URL found."
ret["res"] = False
return ret
if "id" in kwargs:
data["id"] = kwargs["id"]
if "name" in kwargs:
data["Name"] = kwargs["name"]
else:
raise SaltInvocationError('Required argument "name" is missing.')
if "type" in kwargs:
data["Type"] = kwargs["type"]
if "rules" in kwargs:
data["Rules"] = kwargs["rules"]
function = "acl/create"
res = _query(
consul_url=consul_url, token=token, data=data, method="PUT", function=function
)
if res["res"]:
ret["res"] = True
ret["message"] = "ACL {} created.".format(kwargs["name"])
else:
ret["res"] = False
ret["message"] = "Removing Catalog item {} failed.".format(kwargs["name"])
return ret
| 196 | consul.py | Python | salt/modules/consul.py | fb825aa760fa0585a2c8fdafc6e62be8aec8cecf | salt | 8 |
|
265,491 | 8 | 8 | 4 | 40 | 4 | 0 | 9 | 41 | destinations | Fixes #9778: Fix exception during cable deletion after deleting a connected termination | https://github.com/netbox-community/netbox.git | def destinations(self):
if not self.is_complete:
return []
return self.path_objects[-1]
| 23 | cables.py | Python | netbox/dcim/models/cables.py | 367bf25618d1be55c10b7e707101f2759711e855 | netbox | 2 |
|
323,170 | 11 | 10 | 5 | 43 | 5 | 0 | 12 | 35 | total_processes_number | [Trainer] Add init version of paddlenlp trainer and apply finetune for ernie-1.0 pretraining. (#1761)
* add some datasets for finetune.
* support fine tune for all tastks.
* add trainer prototype.
* init verison for paddlenlp trainer.
* refine trainer.
* update for some details.
* support multi-cards training evaluation.
* support load from ckpt.
* support for export inference model.
* first version of trainer.
* seq cls support clue.
* trainer support for token classification and question answersing tasks.
* fix as reviews.
Co-authored-by: Zeyu Chen <[email protected]> | https://github.com/PaddlePaddle/PaddleNLP.git | def total_processes_number(local_rank):
if local_rank != -1:
import paddle
return paddle.distributed.get_world_size()
return 1
| 24 | trainer_utils.py | Python | paddlenlp/trainer/trainer_utils.py | 44a290e94d1becd1f09fddc3d873f9e19c9d6919 | PaddleNLP | 2 |
|
159,359 | 6 | 6 | 3 | 26 | 5 | 0 | 6 | 20 | required_components | Add Logistic Regression to our NLU classifiers. (#10650)
* added-logistic-regression
* added
* d0h! gotta copy the imports correctly
* run black
* black issues fixed
* stash
* added tolerance hyperparam
* added random seed
* fixed testing path
* ran black
* use joblib directly
* insurance against sklearn changes
* added try except
* ran black
* make code more DRY
* flake8
* added type information
* add train -> persists -> load -> load
* add to test_train.py
* fixed style issues
* actually persist model
* persist, i insist
* fixed-bug
* added-documentation
* black
* added changelog
* added
* moar-whitespace
* removed stale param
* added comments | https://github.com/RasaHQ/rasa.git | def required_components(cls) -> List[Type]:
return [Featurizer]
| 15 | logistic_regression_classifier.py | Python | rasa/nlu/classifiers/logistic_regression_classifier.py | dc762814317ce46873a5226ee09033031a7d3604 | rasa | 1 |
|
247,527 | 41 | 13 | 27 | 254 | 23 | 0 | 48 | 241 | test_blacklisted_ip_range_whitelisted_ip | Add type hints to `tests/rest`. (#12208)
Co-authored-by: Patrick Cloke <[email protected]> | https://github.com/matrix-org/synapse.git | def test_blacklisted_ip_range_whitelisted_ip(self) -> None:
self.lookups["example.com"] = [(IPv4Address, "1.1.1.1")]
channel = self.make_request(
"GET",
"preview_url?url=http://example.com",
shorthand=False,
await_result=False,
)
self.pump()
client = self.reactor.tcpClients[0][2].buildProtocol(None)
server = AccumulatingProtocol()
server.makeConnection(FakeTransport(client, self.reactor))
client.makeConnection(FakeTransport(server, self.reactor))
client.dataReceived(
b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\nContent-Type: text/html\r\n\r\n"
% (len(self.end_content),)
+ self.end_content
)
self.pump()
self.assertEqual(channel.code, 200)
self.assertEqual(
channel.json_body, {"og:title": "~matrix~", "og:description": "hi"}
)
| 149 | test_url_preview.py | Python | tests/rest/media/v1/test_url_preview.py | 32c828d0f760492711a98b11376e229d795fd1b3 | synapse | 1 |
|
268,038 | 13 | 12 | 4 | 87 | 15 | 1 | 13 | 33 | serialize | ansible-test - Use more native type hints. (#78435)
* ansible-test - Use more native type hints.
Simple search and replace to switch from comments to native type hints for return types of functions with no arguments.
* ansible-test - Use more native type hints.
Conversion of simple single-line function annotation type comments to native type hints.
* ansible-test - Use more native type hints.
Conversion of single-line function annotation type comments with default values to native type hints.
* ansible-test - Use more native type hints.
Manual conversion of type annotation comments for functions which have pylint directives. | https://github.com/ansible/ansible.git | def serialize(self) -> t.Tuple[str, t.Dict[str, t.Any]]:
name = type(self).__name__[3:].lower()
return name, self.__dict__
@dataclasses.dataclass(frozen=True) | @dataclasses.dataclass(frozen=True) | 46 | python_requirements.py | Python | test/lib/ansible_test/_internal/python_requirements.py | 3eb0485dd92c88cc92152d3656d94492db44b183 | ansible | 1 |
180,946 | 5 | 6 | 2 | 18 | 3 | 0 | 5 | 19 | postprocess | Add docs to blocks context postprocessing function (#2332)
Co-authored-by: Ian Gonzalez <[email protected]> | https://github.com/gradio-app/gradio.git | def postprocess(self, y):
return y
| 10 | blocks.py | Python | gradio/blocks.py | 027bbc0180051076c266bcc43c79918c74e922f4 | gradio | 1 |
|
269,085 | 28 | 14 | 13 | 120 | 15 | 0 | 38 | 71 | _serialize_function_to_config | Refactor RNN classes such that V2 cells and layers no longer depend on V1 counterparts.
V2 GRU and LSTM cells no longer extend their V1 counterpart; instead the inheritance is the other way around.
V2 GRU and LSTM layers no longer extend their V1 counterpart; instead the common code was duplicated.
V2 cell wrappers and legacy cell wrappers no longer have a complex hierarchy with multiple inheritance to share code; instead, the common code was duplicated.
Unit tests for GRU and LSTM layers were reorganized so that all generic tests that work for both V1 and V2 are in `gru_test.py` and `lstm_test.py`. The only tests in `gru_v1_test.py` and `lstm_v1_test.py` are the ones that compare V1 and V2 for accuracy or performance, and V1 specific tests.
Also made cell wrappers API more consistent, all wrappers now expose a `wrapped_cell` property, not just `DropoutWrapper`.
PiperOrigin-RevId: 432554966 | https://github.com/keras-team/keras.git | def _serialize_function_to_config(function):
if isinstance(function, python_types.LambdaType):
output = generic_utils.func_dump(function)
output_type = "lambda"
module = function.__module__
elif callable(function):
output = function.__name__
output_type = "function"
module = function.__module__
else:
raise ValueError(
f"Unrecognized function type for input: {type(function)}")
return output, output_type, module
| 65 | cell_wrappers.py | Python | keras/layers/rnn/cell_wrappers.py | 62aab556c6252e54b9f3ee3fa65243aecd6aea52 | keras | 3 |
|
153,974 | 9 | 9 | 2 | 40 | 7 | 0 | 9 | 23 | wait_partitions | FIX-#4491: Wait for all partitions in parallel in benchmark mode (#4656)
* FIX-#4491: Wait for all partitions in parallel in benchmark mode
Signed-off-by: Jonathan Shi <[email protected]> | https://github.com/modin-project/modin.git | def wait_partitions(cls, partitions):
wait([partition._data for partition in partitions], return_when="ALL_COMPLETED")
| 24 | partition_manager.py | Python | modin/core/execution/dask/implementations/pandas_on_dask/partitioning/partition_manager.py | 7a36071c0b00e0392615a0dd9d5c2ddd5f7c0d27 | modin | 2 |
|
155,409 | 7 | 8 | 10 | 29 | 5 | 0 | 7 | 13 | q1_sql | FEAT-#5223: Execute SQL queries on the HDK backend (#5224)
Signed-off-by: Andrey Pavlenko <[email protected]> | https://github.com/modin-project/modin.git | def q1_sql(df):
sql =
return query(sql, trips=df)
| 17 | nyc-taxi-hdk.py | Python | examples/docker/modin-hdk/nyc-taxi-hdk.py | 26e10c2ccc0eb670e61e32f08eacb61ca8414f95 | modin | 1 |
|
138,348 | 36 | 11 | 21 | 103 | 13 | 0 | 42 | 198 | _get_all_child_nodes | [Serve] Address incremental memory leak due to _PyObjScanner (#31317) | https://github.com/ray-project/ray.git | def _get_all_child_nodes(self) -> List["DAGNode"]:
scanner = _PyObjScanner()
# we use List instead of Set here, reason explained
# in `_get_toplevel_child_nodes`.
children = []
for n in scanner.find_nodes(
[
self._bound_args,
self._bound_kwargs,
self._bound_other_args_to_resolve,
]
):
if n not in children:
children.append(n)
scanner.clear()
return children
| 62 | dag_node.py | Python | python/ray/dag/dag_node.py | 01b19bafb224ddd2b7cc3aef557274ffb38c9c42 | ray | 3 |
|
247,309 | 9 | 11 | 4 | 69 | 4 | 0 | 10 | 31 | test_bad_alias | Add type hints to `tests/rest/client` (#12108)
* Add type hints to `tests/rest/client`
* newsfile
* fix imports
* add `test_account.py`
* Remove one type hint in `test_report_event.py`
* change `on_create_room` to `async`
* update new functions in `test_third_party_rules.py`
* Add `test_filter.py`
* add `test_rooms.py`
* change to `assertEquals` to `assertEqual`
* lint | https://github.com/matrix-org/synapse.git | def test_bad_alias(self) -> None:
self._set_canonical_alias({"alias": "@unknown:test"}, expected_code=400)
self._set_canonical_alias({"alt_aliases": ["@unknown:test"]}, expected_code=400)
| 38 | test_rooms.py | Python | tests/rest/client/test_rooms.py | 2ffaf30803f93273a4d8a65c9e6c3110c8433488 | synapse | 1 |
|
209,482 | 65 | 20 | 27 | 384 | 32 | 0 | 107 | 431 | command | Kerberos: documentation + various fixes + demo (#3693)
* MS-PAC, more key usage numbers
* Properly document Kerberos
* Fix command() for lists
* More doc, examples, Kerberos AS client
* Python 2.7 fix
* Add great schema | https://github.com/secdev/scapy.git | def command(self):
# type: () -> str
f = []
for fn, fv in six.iteritems(self.fields):
fld = self.get_field(fn)
if isinstance(fv, (list, dict, set)) and len(fv) == 0:
continue
if isinstance(fv, Packet):
fv = fv.command()
elif fld.islist and fld.holds_packets and isinstance(fv, list):
fv = "[%s]" % ",".join(map(Packet.command, fv))
elif fld.islist and isinstance(fv, list):
fv = "[%s]" % ", ".join(
getattr(x, 'command', lambda: repr(x))()
for x in fv
)
elif isinstance(fld, FlagsField):
fv = int(fv)
elif callable(getattr(fv, 'command', None)):
fv = fv.command()
else:
fv = repr(fv)
f.append("%s=%s" % (fn, fv))
c = "%s(%s)" % (self.__class__.__name__, ", ".join(f))
pc = self.payload.command()
if pc:
c += "/" + pc
return c
| 233 | packet.py | Python | scapy/packet.py | 5a527a90ab3928e86497cd9ab0e5779159cf1244 | scapy | 14 |
|
266,496 | 9 | 6 | 2 | 17 | 2 | 0 | 9 | 24 | is_content_root | ansible-test - Improve help for unsupported cwd. (#76866)
* ansible-test - Improve help for unsupported cwd.
* The `--help` option is now available when an unsupported cwd is in use.
* The `--help` output now shows the same instructions about cwd as would be shown in error messages if the cwd is unsupported.
* Add `--version` support to show the ansible-core version.
* The explanation about cwd usage has been improved to explain more clearly what is required.
Resolves https://github.com/ansible/ansible/issues/64523
Resolves https://github.com/ansible/ansible/issues/67551 | https://github.com/ansible/ansible.git | def is_content_root(path): # type: (str) -> bool
return False
| 8 | unsupported.py | Python | test/lib/ansible_test/_internal/provider/layout/unsupported.py | de5f60e374524de13fe079b52282cd7a9eeabd5f | ansible | 1 |
|
244,119 | 78 | 16 | 33 | 510 | 26 | 1 | 123 | 415 | model_scaling | [Feature] Support efficientnet in mmdetection. (#7514)
* Initial implementation
* Add missing import
* Add MemoryEfficientSwishImplementation. Add docstrings
* Add efficientnet2mmdet tool
* Add config folder
* Flake8
* Flake8
* Flake8
* Fix config
* Requested changes
* docformatter
* Update train config from https://github.com/google/automl/blob/master/efficientdet
* Run pre-commit
* Fix schedule
* Set by_epoch=False in scheduler
* Train 80 epochs
* Remove duplicated arg
* Update README.md
* efficient3 efficient0
* efficientNet imports
* efficientNet
* config edit path for eff3 and dropout for eff0
* efficientnet review2
* fix model_converter location and drop path
* fix model converter and efficientnet import
* register memoryefficietnswish
* eff0, eff3
* fix flake8 yapf isort
* same padding in tensorflow and edit drop path rate
* fix init of utils
* Align mmdet utils with mmcls
* Align mmdet.models.utils with mmcls
* Use mmcls efficientnet backbone
* Update
* Update
* Update metafile
Co-authored-by: David de la Iglesia Castro <[email protected]>
Co-authored-by: David de la Iglesia Castro <[email protected]>
Co-authored-by: jiangyitong <[email protected]>
Co-authored-by: jiangyitong <[email protected]> | https://github.com/open-mmlab/mmdetection.git | def model_scaling(layer_setting, arch_setting):
# scale width
new_layer_setting = copy.deepcopy(layer_setting)
for layer_cfg in new_layer_setting:
for block_cfg in layer_cfg:
block_cfg[1] = make_divisible(block_cfg[1] * arch_setting[0], 8)
# scale depth
split_layer_setting = [new_layer_setting[0]]
for layer_cfg in new_layer_setting[1:-1]:
tmp_index = [0]
for i in range(len(layer_cfg) - 1):
if layer_cfg[i + 1][1] != layer_cfg[i][1]:
tmp_index.append(i + 1)
tmp_index.append(len(layer_cfg))
for i in range(len(tmp_index) - 1):
split_layer_setting.append(layer_cfg[tmp_index[i]:tmp_index[i +
1]])
split_layer_setting.append(new_layer_setting[-1])
num_of_layers = [len(layer_cfg) for layer_cfg in split_layer_setting[1:-1]]
new_layers = [
int(math.ceil(arch_setting[1] * num)) for num in num_of_layers
]
merge_layer_setting = [split_layer_setting[0]]
for i, layer_cfg in enumerate(split_layer_setting[1:-1]):
if new_layers[i] <= num_of_layers[i]:
tmp_layer_cfg = layer_cfg[:new_layers[i]]
else:
tmp_layer_cfg = copy.deepcopy(layer_cfg) + [layer_cfg[-1]] * (
new_layers[i] - num_of_layers[i])
if tmp_layer_cfg[0][3] == 1 and i != 0:
merge_layer_setting[-1] += tmp_layer_cfg.copy()
else:
merge_layer_setting.append(tmp_layer_cfg.copy())
merge_layer_setting.append(split_layer_setting[-1])
return merge_layer_setting
@BACKBONES.register_module() | @BACKBONES.register_module() | 325 | efficientnet.py | Python | mmdet/models/backbones/efficientnet.py | 3f0f2a059743593fd07b629c261b609bd9a767e6 | mmdetection | 13 |
126,957 | 43 | 14 | 28 | 300 | 32 | 0 | 66 | 404 | test_dqn_compilation | [RLlib] Move learning_starts logic from buffers into `training_step()`. (#26032) | https://github.com/ray-project/ray.git | def test_dqn_compilation(self):
num_iterations = 1
config = (
dqn.dqn.DQNConfig()
.rollouts(num_rollout_workers=2)
.training(num_steps_sampled_before_learning_starts=0)
)
for _ in framework_iterator(config, with_eager_tracing=True):
# Double-dueling DQN.
print("Double-dueling")
plain_config = deepcopy(config)
trainer = dqn.DQN(config=plain_config, env="CartPole-v0")
for i in range(num_iterations):
results = trainer.train()
check_train_results(results)
print(results)
check_compute_single_action(trainer)
trainer.stop()
# Rainbow.
print("Rainbow")
rainbow_config = deepcopy(config).training(
num_atoms=10, noisy=True, double_q=True, dueling=True, n_step=5
)
trainer = dqn.DQN(config=rainbow_config, env="CartPole-v0")
for i in range(num_iterations):
results = trainer.train()
check_train_results(results)
print(results)
check_compute_single_action(trainer)
trainer.stop()
| 181 | test_dqn.py | Python | rllib/algorithms/dqn/tests/test_dqn.py | 0dceddb912ed92286032b5563dd2e541a8a7031f | ray | 4 |
|
256,077 | 22 | 15 | 9 | 142 | 15 | 0 | 30 | 75 | get_dependency_links | Introduce readonly DCDocumentStore (without labels support) (#1991)
* minimal DCDocumentStore
* support filters
* implement get_documents_by_id
* handle not existing documents
* add docstrings
* auth added
* add tests
* generate docs
* Add latest docstring and tutorial changes
* add responses to dev dependencies
* fix tests
* support query() and quey_by_embedding()
* Add latest docstring and tutorial changes
* query tests added
* read api_key and api_endpoint from env
* Add latest docstring and tutorial changes
* support query() and quey_by_embedding()
* query tests added
* Add latest docstring and tutorial changes
* Add latest docstring and tutorial changes
* support dynamic similarity and return_embedding values
* Add latest docstring and tutorial changes
* adjust KeywordDocumentStore description
* refactoring
* Add latest docstring and tutorial changes
* implement get_document_count and raise on all not implemented methods
* Add latest docstring and tutorial changes
* don't use abbreviation DC in comments and errors
* Add latest docstring and tutorial changes
* docstring added to KeywordDocumentStore
* Add latest docstring and tutorial changes
* enhanced api key set
* split tests into two parts
* change setup.py in order to work around build cache
* added link
* Add latest docstring and tutorial changes
* rename DCDocumentStore to DeepsetCloudDocumentStore
* Add latest docstring and tutorial changes
* remove dc.py
* reinsert link to docs
* fix imports
* Add latest docstring and tutorial changes
* better test structure
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: ArzelaAscoIi <[email protected]> | https://github.com/deepset-ai/haystack.git | def get_dependency_links(filename):
with open(filename) as file:
parsed_requirements = file.read().splitlines()
dependency_links = list()
for line in parsed_requirements:
line = line.strip()
if line.startswith('--find-links'):
dependency_links.append(line.split('=')[1])
return dependency_links
dependency_links = get_dependency_links('requirements.txt')
parsed_requirements = parse_requirements('requirements.txt')
| 66 | setup.py | Python | setup.py | 8a32d8da92e4548e308bc971910e94bedb320029 | haystack | 3 |
|
299,314 | 18 | 13 | 10 | 97 | 16 | 0 | 18 | 93 | cleanup | Skip invalid segments in stream recorder (#70896)
* Skip segment if duration is None
* Copy segments deque before passing to thread | https://github.com/home-assistant/core.git | def cleanup(self) -> None:
_LOGGER.debug("Starting recorder worker thread")
thread = threading.Thread(
name="recorder_save_worker",
target=recorder_save_worker,
args=(self.video_path, self._segments.copy()),
)
thread.start()
super().cleanup()
| 57 | recorder.py | Python | homeassistant/components/stream/recorder.py | 9281f46bcd9b76e88f9e490f16c6677f8b5ca738 | core | 1 |
|
113,152 | 29 | 7 | 9 | 41 | 6 | 0 | 34 | 55 | evaluate | [Compression] lightning & legacy evaluator - step 1 (#4950) | https://github.com/microsoft/nni.git | def evaluate(self) -> float | None | Tuple[float, Any] | Tuple[None, Any]:
# Note that the first item of the returned value will be used as the default metric used by NNI.
raise NotImplementedError
| 26 | evaluator.py | Python | nni/algorithms/compression/v2/pytorch/utils/evaluator.py | 5a3d82e842906dc8f695fafe52434fde781615be | nni | 1 |
|
306,790 | 13 | 8 | 5 | 43 | 8 | 0 | 13 | 32 | test_convert_from_cubic_meters | Refactor distance, speed and volume utils (#77952)
* Refactor distance util
* Fix bmw connected drive tests
* Adjust here travel time tests
* Adjust waze travel time tests
* Adjust test_distance
* Adjust rounding values
* Adjust more tests
* Adjust volume conversions
* Add tests | https://github.com/home-assistant/core.git | def test_convert_from_cubic_meters():
cubic_meters = 5
assert volume_util.convert(
cubic_meters, VOLUME_CUBIC_METERS, VOLUME_CUBIC_FEET
) == pytest.approx(176.5733335)
| 28 | test_volume.py | Python | tests/util/test_volume.py | 9490771a8737892a7a86afd866a3520b836779fd | core | 1 |
|
319,215 | 41 | 13 | 13 | 108 | 16 | 0 | 49 | 115 | scan_file_for_seperating_barcodes | add first tests for barcode reader
Signed-off-by: florian on nixos (Florian Brandes) <[email protected]> | https://github.com/paperless-ngx/paperless-ngx.git | def scan_file_for_seperating_barcodes(filepath) -> list:
seperator_page_numbers = [ ]
# use a temporary directory in case the file os too big to handle in memory
with tempfile.TemporaryDirectory() as path:
pages_from_path = convert_from_path(filepath, output_folder=path)
for current_page_number, page in enumerate(pages_from_path):
current_barcodes = barcode_reader(page)
if current_barcodes.isin("PATCHT"):
seperator_page_numbers = seperator_page_numbers + current_page_number
return seperator_page_numbers
| 62 | tasks.py | Python | src/documents/tasks.py | 76e43bcb89dc96f18c966fab273f439f7efe2e12 | paperless-ngx | 3 |
|
139,700 | 25 | 12 | 14 | 61 | 3 | 0 | 26 | 172 | get_invalid_runtime_envs | [Serve] Add deployment graph `import_path` and `runtime_env` to `ServeApplicationSchema` (#24814)
A newly planned version of the Serve schema (used in the REST API and CLI) requires the user to pass in their deployment graph's`import_path` and optionally a runtime_env containing that graph. This new schema can then pick up any `init_args` and `init_kwargs` values directly from the graph, instead of requiring them to be serialized and passed explicitly into the REST request.
This change:
* Adds the `import_path` and `runtime_env` fields to the `ServeApplicationSchema`.
* Updates or disables outdated unit tests.
Follow-up changes should:
* Update the status schemas (i.e. `DeploymentStatusSchema` and `ServeApplicationStatusSchema`).
* Remove deployment-level `import_path`s.
* Process the new `import_path` and `runtime_env` fields instead of silently ignoring them.
* Remove `init_args` and `init_kwargs` from `DeploymentSchema` afterwards.
Co-authored-by: Edward Oakes <[email protected]> | https://github.com/ray-project/ray.git | def get_invalid_runtime_envs() -> List[Dict]:
return [
# Local URIs in working_dir and py_modules
{
"working_dir": ".",
"py_modules": [
"/Desktop/my_project",
(
"https://github.com/shrekris-anyscale/"
"test_deploy_group/archive/HEAD.zip"
),
],
}
]
| 31 | test_schema.py | Python | python/ray/serve/tests/test_schema.py | 3a2bd16ecae15d6e26585c32c113dcfe7469ccd7 | ray | 1 |
|
20,795 | 14 | 9 | 7 | 74 | 7 | 0 | 23 | 73 | elapsed | check point progress on only bringing in pip==22.0.4 (#4966)
* vendor in pip==22.0.4
* updating vendor packaging version
* update pipdeptree to fix pipenv graph with new version of pip.
* Vendoring of pip-shims 0.7.0
* Vendoring of requirementslib 1.6.3
* Update pip index safety restrictions patch for pip==22.0.4
* Update patches
* exclude pyptoject.toml from black to see if that helps.
* Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4 | https://github.com/pypa/pipenv.git | def elapsed(self) -> Optional[float]:
if self.start_time is None:
return None
if self.stop_time is not None:
return self.stop_time - self.start_time
return self.get_time() - self.start_time
| 46 | progress.py | Python | pipenv/patched/notpip/_vendor/rich/progress.py | f3166e673fe8d40277b804d35d77dcdb760fc3b3 | pipenv | 3 |
|
37,512 | 8 | 9 | 2 | 33 | 5 | 0 | 8 | 14 | custom_tokenizers | Update all require decorators to use skipUnless when possible (#16999) | https://github.com/huggingface/transformers.git | def custom_tokenizers(test_case):
return unittest.skipUnless(_run_custom_tokenizers, "test of custom tokenizers")(test_case)
| 18 | testing_utils.py | Python | src/transformers/testing_utils.py | 57e6464ac9a31156f1c93e59107323e6ec01309e | transformers | 1 |
|
257,821 | 55 | 18 | 24 | 210 | 20 | 0 | 78 | 232 | to_dict | refactor: improve support for dataclasses (#3142)
* refactor: improve support for dataclasses
* refactor: refactor class init
* refactor: remove unused import
* refactor: testing 3.7 diffs
* refactor: checking meta where is Optional
* refactor: reverting some changes on 3.7
* refactor: remove unused imports
* build: manual pre-commit run
* doc: run doc pre-commit manually
* refactor: post initialization hack for 3.7-3.10 compat.
TODO: investigate another method to improve 3.7 compatibility.
* doc: force pre-commit
* refactor: refactored for both Python 3.7 and 3.9
* docs: manually run pre-commit hooks
* docs: run api docs manually
* docs: fix wrong comment
* refactor: change no type-checked test code
* docs: update primitives
* docs: api documentation
* docs: api documentation
* refactor: minor test refactoring
* refactor: remove unused enumeration on test
* refactor: remove unneeded dir in gitignore
* refactor: exclude all private fields and change meta def
* refactor: add pydantic comment
* refactor : fix for mypy on Python 3.7
* refactor: revert custom init
* docs: update docs to new pydoc-markdown style
* Update test/nodes/test_generator.py
Co-authored-by: Sara Zan <[email protected]> | https://github.com/deepset-ai/haystack.git | def to_dict(self, field_map={}) -> Dict:
inv_field_map = {v: k for k, v in field_map.items()}
_doc: Dict[str, str] = {}
for k, v in self.__dict__.items():
# Exclude internal (Pydantic, ...) fields from the conversion process
if k.startswith("__"):
continue
if k == "content":
# Convert pd.DataFrame to list of rows for serialization
if self.content_type == "table" and isinstance(self.content, pd.DataFrame):
v = [self.content.columns.tolist()] + self.content.values.tolist()
k = k if k not in inv_field_map else inv_field_map[k]
_doc[k] = v
return _doc
| 130 | schema.py | Python | haystack/schema.py | 621e1af74c9c7d04b79ca5f5826ddcc06e1237f0 | haystack | 8 |
|
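In `to_dict` above, `field_map` maps an external field name to the internal attribute name, and the inverted map renames keys on the way out. A self-contained sketch of just that renaming step, using plain dictionaries rather than a Haystack `Document`:

```python
# field_map maps external name -> internal attribute, e.g. expose "content" as "text".
field_map = {"text": "content"}
inv_field_map = {v: k for k, v in field_map.items()}  # {"content": "text"}

internal = {"content": "some passage", "id": "42"}
exported = {inv_field_map.get(k, k): v for k, v in internal.items()}
print(exported)  # {'text': 'some passage', 'id': '42'}
```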
250,081 | 24 | 11 | 8 | 138 | 11 | 0 | 27 | 90 | test_shutdown | Require types in tests.storage. (#14646)
Adds missing type hints to `tests.storage` package
and does not allow untyped definitions. | https://github.com/matrix-org/synapse.git | def test_shutdown(self) -> None:
# Acquire two locks
lock = self.get_success(self.store.try_acquire_lock("name", "key1"))
self.assertIsNotNone(lock)
lock2 = self.get_success(self.store.try_acquire_lock("name", "key2"))
self.assertIsNotNone(lock2)
# Now call the shutdown code
self.get_success(self.store._on_shutdown())
self.assertEqual(self.store._live_tokens, {})
| 79 | test_lock.py | Python | tests/storage/databases/main/test_lock.py | 3ac412b4e2f8c5ba11dc962b8a9d871c1efdce9b | synapse | 1 |
|
153,672 | 114 | 14 | 37 | 515 | 55 | 1 | 176 | 366 | test_export_unaligned_at_chunks | FEAT-#4244: Implement dataframe exchange protocol for OmniSci (#4269)
Co-authored-by: Yaroslav Igoshev <[email protected]>
Co-authored-by: Vasily Litvinov <[email protected]>
Signed-off-by: Dmitry Chigarev <[email protected]> | https://github.com/modin-project/modin.git | def test_export_unaligned_at_chunks(data_has_nulls):
# Modin DataFrame constructor can't process PyArrow's category when using `from_arrow`, so exclude it
data = get_data_of_all_types(has_nulls=data_has_nulls, exclude_dtypes=["category"])
pd_df = pandas.DataFrame(data)
# divide columns in 3 groups: unchunked, 2-chunked, 7-chunked
chunk_groups = [1, 2, 7]
chunk_col_ilocs = [
slice(
i * len(pd_df.columns) // len(chunk_groups),
(i + 1) * len(pd_df.columns) // len(chunk_groups),
)
for i in range(len(chunk_groups))
]
pd_chunk_groups = [
split_df_into_chunks(pd_df.iloc[:, cols], n_chunks)
for n_chunks, cols in zip(chunk_groups, chunk_col_ilocs)
]
at_chunk_groups = [
pa.concat_tables([pa.Table.from_pandas(pd_df) for pd_df in chunk_group])
for chunk_group in pd_chunk_groups
]
chunked_at = at_chunk_groups[0]
# TODO: appending columns one by one looks inefficient, is there a better way?
for _at in at_chunk_groups[1:]:
for field in _at.schema:
chunked_at = chunked_at.append_column(field, _at[field.name])
md_df = from_arrow(chunked_at)
# verify that test generated the correct chunking
internal_at = md_df._query_compiler._modin_frame._partitions[0][0].get()
for n_chunks_group, cols in zip(chunk_groups, chunk_col_ilocs):
for col in internal_at.select(range(cols.start, cols.stop)).columns:
assert len(col.chunks) == n_chunks_group
n_chunks = md_df.__dataframe__().num_chunks()
exported_df = export_frame(md_df)
df_equals(md_df, exported_df)
exported_df = export_frame(md_df, n_chunks=n_chunks)
df_equals(md_df, exported_df)
exported_df = export_frame(md_df, n_chunks=n_chunks * 2)
df_equals(md_df, exported_df)
exported_df = export_frame(md_df, n_chunks=n_chunks * 3)
df_equals(md_df, exported_df)
@pytest.mark.parametrize("data_has_nulls", [True, False]) | @pytest.mark.parametrize("data_has_nulls", [True, False]) | 310 | test_protocol.py | Python | modin/test/exchange/dataframe_protocol/omnisci/test_protocol.py | 0c1a2129df64cf45bf1ff49c8ed92c510fdb1c82 | modin | 9 |
13,596 | 137 | 14 | 85 | 469 | 16 | 0 | 291 | 854 | mixin_pod_runtime_args_parser | refactor: inject dependencies at construction with runtime args (#5418)
Co-authored-by: Joan Fontanals <[email protected]>
Co-authored-by: Jina Dev Bot <[email protected]> | https://github.com/jina-ai/jina.git | def mixin_pod_runtime_args_parser(arg_group, pod_type='worker'):
port_description = (
'The port for input data to bind to, default is a random port between [49152, 65535]. '
'In the case of an external Executor (`--external` or `external=True`) this can be a list of ports, separated by commas. '
'Then, every resulting address will be considered as one replica of the Executor.'
)
if pod_type != 'gateway':
arg_group.add_argument(
'--port',
'--port-in',
type=str,
default=helper.random_port(),
action=CastToIntAction,
help=port_description,
)
else:
arg_group.add_argument(
'--port',
'--port-expose',
'--port-in',
'--ports',
action=CastToIntAction,
type=str,
nargs='+',
default=[helper.random_port()],
help=port_description,
)
arg_group.add_argument(
'--monitoring',
action='store_true',
default=False,
help='If set, spawn an http server with a prometheus endpoint to expose metrics',
)
arg_group.add_argument(
'--port-monitoring',
type=str,
default=str(helper.random_port()),
dest='port_monitoring',
help=f'The port on which the prometheus server is exposed, default is a random port between [49152, 65535]',
)
arg_group.add_argument(
'--retries',
type=int,
default=-1,
dest='retries',
help=f'Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)',
)
arg_group.add_argument(
'--tracing',
action='store_true',
default=False,
help='If set, the sdk implementation of the OpenTelemetry tracer will be available and will be enabled for automatic tracing of requests and customer span creation. '
'Otherwise a no-op implementation will be provided.',
)
arg_group.add_argument(
'--traces-exporter-host',
type=str,
default=None,
help='If tracing is enabled, this hostname will be used to configure the trace exporter agent.',
)
arg_group.add_argument(
'--traces-exporter-port',
type=int,
default=None,
help='If tracing is enabled, this port will be used to configure the trace exporter agent.',
)
arg_group.add_argument(
'--metrics',
action='store_true',
default=False,
help='If set, the sdk implementation of the OpenTelemetry metrics will be available for default monitoring and custom measurements. '
'Otherwise a no-op implementation will be provided.',
)
arg_group.add_argument(
'--metrics-exporter-host',
type=str,
default=None,
help='If tracing is enabled, this hostname will be used to configure the metrics exporter agent.',
)
arg_group.add_argument(
'--metrics-exporter-port',
type=int,
default=None,
help='If tracing is enabled, this port will be used to configure the metrics exporter agent.',
)
| 283 | pod.py | Python | jina/parsers/orchestrate/pod.py | beecc7863e5a6b2580a0e1f10095253549279614 | jina | 2 |
|
158,210 | 31 | 12 | 6 | 116 | 16 | 0 | 35 | 60 | evaluate_loss | [PaddlePaddle] Merge master into Paddle branch (#1186)
* change 15.2 title in chinese version (#1109)
change the 15.2 title from ’情感分析:使用递归神经网络‘ (recursive neural networks) to ’情感分析:使用循环神经网络‘ (recurrent neural networks)
* Revise some wording for clarity (#1105)
* Update r0.17.5 (#1120)
* Bump versions in installation
* Line 94 typo: (“bert.mall”) -> (“bert.small”) (#1129)
* line 313: "bert.mall" -> "bert.small" (#1130)
* fix: update language as native reader (#1114)
* Fix the translation of "stride" (#1115)
* Update index.md (#1118)
Revise some wording for clarity
* Update self-attention-and-positional-encoding.md (#1133)
Following the book's translation conventions, render "pooling" as "汇聚"
* maybe a comment false (#1149)
* maybe a little false
* maybe a little false
* A minor bug in the rcnn section (Chinese edition) (#1148)
* Update bert.md (#1137)
Fix a typo:
# assuming batch_size=2 and num_pred_positions=3,
# batch_idx should be np.repeat([0, 1], 3) = [0, 0, 0, 1, 1, 1]
* Update calculus.md (#1135)
* fix typo in git documentation (#1106)
* fix: Update the Chinese translation in lr-scheduler.md (#1136)
* Update lr-scheduler.md
* Update chapter_optimization/lr-scheduler.md
Co-authored-by: goldmermaid <[email protected]>
Co-authored-by: goldmermaid <[email protected]>
* fix translation for kaggle-house-price.md (#1107)
* fix translation for kaggle-house-price.md
* fix translation for kaggle-house-price.md
Signed-off-by: sunhaizhou <[email protected]>
* Update weight-decay.md (#1150)
* Update weight-decay.md
For the "k multichoose d" part, a combinatorial explanation may be easier for Chinese readers to follow.
The sentence "given k variables, the number of degrees is ..." is ambiguous and reads awkwardly; it should say "the number of terms of degree d is ...".
Also added a sentence explaining "therefore even a small change in degree, say from $2$ to $3$, significantly increases the complexity of our model",
clarifying why the complexity grows and why a fine-grained tool is needed.
* Update chapter_multilayer-perceptrons/weight-decay.md
yep
Co-authored-by: goldmermaid <[email protected]>
* Update chapter_multilayer-perceptrons/weight-decay.md
yep
Co-authored-by: goldmermaid <[email protected]>
Co-authored-by: goldmermaid <[email protected]>
* Fix a spelling error (#1161)
* Update gru.md (#1152)
The key distinction between vanilla RNNs and GRUs is that the latter support gating of the hidden state.
Fix a translation error
* Unify the function naming (#1113)
Unify naming of the function 'init_xavier()'.
* Update mlp-concise.md (#1166)
* Update mlp-concise.md
The sentence did not read smoothly
* Update environment.md
Fix abnormal word order
* Update config.ini
* fix the imprecise description (#1168)
Co-authored-by: yuande <yuande>
* fix typo in chapter_natural-language-processing-pretraining/glove.md (#1175)
* Fix some typos. (#1163)
* Update batch-norm.md (#1170)
fixing typos u->x in article
* Update linear-regression.md (#1090)
We invoke Stuart Russell and Peter Norvig who, in their classic AI text book Artificial Intelligence: A Modern Approach :cite:Russell.Norvig.2016, pointed out that
The original translation rendered "who" literally as well.
* Update mlp.md (#1117)
* Update mlp.md
Revise some wording for clarity
* Update chapter_multilayer-perceptrons/mlp.md
Co-authored-by: goldmermaid <[email protected]>
* Update chapter_multilayer-perceptrons/mlp.md
Co-authored-by: Aston Zhang <[email protected]>
Co-authored-by: goldmermaid <[email protected]>
* Correct a translation error. (#1091)
* Correct a translation error.
* Update chapter_computer-vision/image-augmentation.md
Co-authored-by: Aston Zhang <[email protected]>
* Update aws.md (#1121)
* Update aws.md
* Update chapter_appendix-tools-for-deep-learning/aws.md
Co-authored-by: Aston Zhang <[email protected]>
* Update image-augmentation.md (#1093)
* Update anchor.md (#1088)
fix a minor issue in code
* Update anchor.md
* Update image-augmentation.md
* fix typo and improve translation in chapter_linear-networks\softmax-regression.md (#1087)
* Avoid `torch.meshgrid` user warning (#1174)
Avoids the following user warning:
```python
~/anaconda3/envs/torch/lib/python3.10/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
```
* bump to 2.0.0-beta1
* Update sequence.md
* bump beta1 on readme
* Add latex code block background to config
* BLD: Bump python support version 3.9 (#1183)
* BLD: Bump python support version 3.9
* Remove clear and manually downgrade protobuf 4.21.4 to 3.19.4
* BLD: Bump torch and tensorflow
* Update Jenkinsfile
* Update chapter_installation/index.md
* Update chapter_installation/index.md
Co-authored-by: Aston Zhang <[email protected]>
* Update config.ini
* Update INFO.md
* Update INFO.md
* Drop mint to show code in pdf, use Inconsolata font, apply code cell color (#1187)
* resolve the conflicts
* revise from publisher (#1089)
* revise from publisher
* d2l api
* post_latex
* revise from publisher
* revise ch11
* Delete d2l-Copy1.bib
* clear cache
* rm d2lbook clear
* debug anchor
* keep original d2l doc
Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Aston Zhang <[email protected]>
Co-authored-by: Aston Zhang <[email protected]>
* Duplicated sentence (#1188)
Co-authored-by: Aston Zhang <[email protected]>
* Improve expression for chapter_preliminaries/pandas.md (#1184)
* Update pandas.md
* Improve expression
* Improve expression
* Update chapter_preliminaries/pandas.md
Co-authored-by: Aston Zhang <[email protected]>
* Improce expression for chapter_preliminaries/linear-algebra.md (#1185)
* Improce expression
* Improve code comments
* Update chapter_preliminaries/linear-algebra.md
* Update chapter_preliminaries/linear-algebra.md
* Update chapter_preliminaries/linear-algebra.md
* Update chapter_preliminaries/linear-algebra.md
Co-authored-by: Aston Zhang <[email protected]>
* Fix multibox_detection bugs
* Update d2l to 0.17.5 version
* restore older version
* Upgrade pandas
* change to python3.8
* Test warning log
* relocate warning log
* test logs filtering
* Update gru.md
* Add DeprecationWarning filter
* Test warning log
* Update attention mechanisms & computational performance
* Update multilayer perceptron& linear & convolution networks & computer vision
* Update recurrent&optimition&nlp pretraining & nlp applications
* ignore warnings
* Update index.md
* Update linear networks
* Update multilayer perceptrons&deep learning computation
* Update preliminaries
* Check and Add warning filter
* Update kaggle-cifar10.md
* Update object-detection-dataset.md
* Update ssd.md fcn.md
* Update hybridize.md
* Update hybridize.md
Signed-off-by: sunhaizhou <[email protected]>
Co-authored-by: zhou201505013 <[email protected]>
Co-authored-by: Xinwei Liu <[email protected]>
Co-authored-by: Anirudh Dagar <[email protected]>
Co-authored-by: Aston Zhang <[email protected]>
Co-authored-by: hugo_han <[email protected]>
Co-authored-by: gyro永不抽风 <[email protected]>
Co-authored-by: CanChengZheng <[email protected]>
Co-authored-by: linlin <[email protected]>
Co-authored-by: iuk <[email protected]>
Co-authored-by: yoos <[email protected]>
Co-authored-by: Mr. Justice Lawrence John Wargrave <[email protected]>
Co-authored-by: Chiyuan Fu <[email protected]>
Co-authored-by: Sunhuashan <[email protected]>
Co-authored-by: Haiker Sun <[email protected]>
Co-authored-by: Ming Liu <[email protected]>
Co-authored-by: goldmermaid <[email protected]>
Co-authored-by: silenceZheng66 <[email protected]>
Co-authored-by: Wenchao Yan <[email protected]>
Co-authored-by: Kiki2049 <[email protected]>
Co-authored-by: Krahets <[email protected]>
Co-authored-by: friedmainfunction <[email protected]>
Co-authored-by: Jameson <[email protected]>
Co-authored-by: P. Yao <[email protected]>
Co-authored-by: Yulv-git <[email protected]>
Co-authored-by: Liu,Xiao <[email protected]>
Co-authored-by: YIN, Gang <[email protected]>
Co-authored-by: Joe-HZ <[email protected]>
Co-authored-by: lybloveyou <[email protected]>
Co-authored-by: VigourJiang <[email protected]>
Co-authored-by: zxhd863943427 <[email protected]>
Co-authored-by: LYF <[email protected]>
Co-authored-by: Aston Zhang <[email protected]>
Co-authored-by: xiaotinghe <[email protected]>
Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Holly-Max <[email protected]>
Co-authored-by: HinGwenWoong <[email protected]>
Co-authored-by: Shuai Zhang <[email protected]> | https://github.com/d2l-ai/d2l-zh.git | def evaluate_loss(net, data_iter, loss):
metric = d2l.Accumulator(2) # Sum of losses, no. of examples
for X, y in data_iter:
l = loss(net(X), y)
metric.add(d2l.reduce_sum(l), d2l.size(l))
return metric[0] / metric[1]
DATA_HUB = dict()
DATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'
| 64 | mxnet.py | Python | d2l/mxnet.py | b64b41d8c1ac23c43f7a4e3f9f6339d6f0012ab2 | d2l-zh | 2 |
|
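`evaluate_loss` above accumulates the summed loss and the example count across the whole iterator, so the value it returns is the average per-example loss:

$\bar{\ell} = \dfrac{\sum_{\text{batches}} \sum_i \ell_i}{\sum_{\text{batches}} n_{\text{batch}}}$

i.e. `metric[0]` (total summed loss) divided by `metric[1]` (total number of loss elements).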
224,448 | 7 | 6 | 2 | 22 | 5 | 0 | 7 | 21 | on_template_context | Move plugin events docs into source code + refactor
* Create real (no-op) methods for each event in the base class.
* Refactor event dispatcher to not check for methods' existence, instead just call them.
* Move documentation from Markdown into docstrings of these methods.
* Activate the 'mkdocstrings' plugin.
* Use 'mkdocstrings' to insert documentation from those docstrings into the site. | https://github.com/mkdocs/mkdocs.git | def on_template_context(self, context, template_name, config):
return context
| 14 | plugins.py | Python | mkdocs/plugins.py | f79b34d174e41084391868e7b503f5c61b8b1bdf | mkdocs | 1 |
|
174,578 | 46 | 16 | 15 | 177 | 24 | 0 | 55 | 119 | _create_runnable_pip | Speed up build environment creation
Instead of creating a zip file from the current pip's sources, add the
current copy of pip, to the build environment's interpreter's import
system using `sys.meta_path`. This avoids the overhead of creating the
zipfile, allows us to use the current pip's sources as-is,
meaningfully reduces the size of the build environment and
speeds up the creation of the build environment. | https://github.com/pypa/pip.git | def _create_runnable_pip() -> Generator[str, None, None]:
source = pathlib.Path(pip_location).resolve().parent
# Return the current instance if `source` is not a directory. It likely
# means that this copy of pip is already standalone.
if not source.is_dir():
yield str(source)
return
with TempDirectory(kind="standalone-pip") as tmp_dir:
pip_runner = os.path.join(tmp_dir.path, "__pip-runner__.py")
with open(pip_runner, "w", encoding="utf8") as f:
f.write(PIP_RUNNER.format(source=os.fsdecode(source)))
yield pip_runner
| 100 | build_env.py | Python | src/pip/_internal/build_env.py | d36bd5a96e50c4beee10eb283e4e15688e0d0eb6 | pip | 2 |
|
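The commit message above replaces the zip-up-pip approach with a generated `__pip-runner__.py` that exposes the current pip sources through `sys.meta_path`. A hedged sketch of how a caller might consume the yielded path, assuming the function is wrapped with `contextlib.contextmanager` as its `Generator` return type suggests — the package name is arbitrary and the exact invocation inside pip's `BuildEnvironment` may differ:

```python
import subprocess
import sys

# Hypothetical usage: invoke the standalone runner with the current interpreter.
with _create_runnable_pip() as runner:  # path to __pip-runner__.py (or to pip itself)
    subprocess.run(
        [sys.executable, runner, "install", "--quiet", "wheel"],
        check=True,
    )
```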
104,411 | 7 | 10 | 2 | 46 | 6 | 0 | 7 | 21 | from_pandas | Update docs to new frontend/UI (#3690)
* WIP: update docs to new UI
* make style
* Rm unused
* inject_arrow_table_documentation __annotations__
* hasattr(arrow_table_method, "__annotations__")
* Update task_template.rst
* Codeblock PT-TF-SPLIT
* Convert loading scripts
* Convert docs to mdx
* Fix mdx
* Add <Tip>
* Convert mdx tables
* Fix codeblock
* Rm unneded hashlinks
* Update index.mdx
* Redo dev change
* Rm circle ci `build_doc` & `deploy_doc`
* Rm unneeded files
* Update docs reamde
* Standardize to `Example::`
* mdx logging levels doc
* Table properties inject_arrow_table_documentation
* ``` to ```py mdx
* Add Tips mdx
* important,None -> <Tip warning={true}>
* More misc
* Center imgs
* Update instllation page
* `setup.py` docs section
* Rm imgs since they are in hf.co
* Update docs/source/access.mdx
Co-authored-by: Steven Liu <[email protected]>
* Update index mdx
* Update docs/source/access.mdx
Co-authored-by: Steven Liu <[email protected]>
* just `Dataset` obj
* Addedversion just italics
* Update ReadInstruction doc example syntax
* Change docstring for `prepare_for_task`
* Chore
* Remove `code` syntax from headings
* Rm `code` syntax from headings
* Hashlink backward compatability
* S3FileSystem doc
* S3FileSystem doc updates
* index.mdx updates
* Add darkmode gifs
* Index logo img css classes
* Index mdx dataset logo img size
* Docs for DownloadMode class
* Doc DownloadMode table
* format docstrings
* style
* Add doc builder scripts (#3790)
* add doc builder scripts
* fix docker image
* Docs new UI actions no self hosted (#3793)
* No self hosted
* replace doc injection by actual docstrings
* Docstring formatted
Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: Mishig Davaadorj <[email protected]>
Co-authored-by: Lysandre Debut <[email protected]>
Co-authored-by: Mishig Davaadorj <[email protected]>
* Rm notebooks from docs actions since they don't exist
* Update testing branch
* More docstring
* Chore
* bump up node version
* bump up node
* ``` -> ```py for audio_process.mdx
* Update .github/workflows/build_documentation.yml
Co-authored-by: Quentin Lhoest <[email protected]>
* Update dev doc build
* remove run on PR
* fix action
* Fix gh doc workflow
* forgot this change when merging master
* Update build doc
Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: Lysandre Debut <[email protected]> | https://github.com/huggingface/datasets.git | def from_pandas(cls, *args, **kwargs):
return cls(pa.Table.from_pandas(*args, **kwargs))
| 28 | table.py | Python | src/datasets/table.py | e35be138148333078284b942ccc9ed7b1d826f97 | datasets | 1 |
|
36,547 | 26 | 10 | 10 | 136 | 19 | 1 | 35 | 117 | serving_output | Add TF implementation of GPT-J (#15623)
* Initial commit
* Add TFGPTJModel
* Fix a forward pass
* Add TFGPTJCausalLM
* Add TFGPTJForSequenceClassification
* Add TFGPTJForQuestionAnswering
* Fix docs
* Deal with TF dynamic shapes
* Add Loss parents to models
* Adjust split and merge heads to handle 4 and 5-dim tensors
* Update outputs for @tooslow tests | https://github.com/huggingface/transformers.git | def serving_output(self, output):
pkv = tf.convert_to_tensor(output.past_key_values) if self.config.use_cache else None
hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None
attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None
return TFBaseModelOutputWithPast(
last_hidden_state=output.last_hidden_state,
past_key_values=pkv,
hidden_states=hs,
attentions=attns,
)
@add_start_docstrings(
,
GPTJ_START_DOCSTRING,
) | @add_start_docstrings(
"""
The GPT-J Model transformer with a language modeling head on top.
""",
GPTJ_START_DOCSTRING,
) | 83 | modeling_tf_gptj.py | Python | src/transformers/models/gptj/modeling_tf_gptj.py | ed2ee373d07aa8fd3f97e5f9fac9649511cf46fd | transformers | 4 |
288,892 | 23 | 11 | 13 | 106 | 14 | 0 | 26 | 85 | test_migrate_unique_id | Migrate HomeKit Controller to use stable identifiers (#80064) | https://github.com/home-assistant/core.git | async def test_migrate_unique_id(hass, utcnow):
entity_registry = er.async_get(hass)
aid = get_next_aid()
cover_entry = entity_registry.async_get_or_create(
"cover",
"homekit_controller",
f"homekit-00:00:00:00:00:00-{aid}-8",
)
await setup_test_component(hass, create_garage_door_opener_service)
assert (
entity_registry.async_get(cover_entry.entity_id).unique_id
== f"00:00:00:00:00:00_{aid}_8"
)
| 58 | test_cover.py | Python | tests/components/homekit_controller/test_cover.py | f23b1750e85f07091eb896a0b12b8f95e5646338 | core | 1 |
|
95,082 | 45 | 12 | 16 | 134 | 12 | 0 | 48 | 226 | test_failure_rate_without_transactions | feat(performance): Update performance queries to use generic_metrics dataset [TET-227] (#37855)
If I get a use case id = Performance, I should route to generic_metrics,
whereas if I get use case id=RELEASE_HEALTH, I should route to metrics.
Note for reviewer:
Also i've migrated tests to use `self.store_metric` instead of `self._send_buckets` since it's already has required fix:
https://github.com/getsentry/sentry/blob/d23db50d912d28f4489158fe9ef630525ba46677/src/sentry/testutils/cases.py#L1197-L1200
Regarding granularity hacks:
Since @wmak removed it here [fix(mep): Remove the granularity hacks #36724](https://github.com/getsentry/sentry/pull/36724), I assumed we can ignore it and delegate it to snuba.
Please correct me here if i'm wrong.
Co-authored-by: ahmedetefy <[email protected]>
Co-authored-by: Ahmed Etefy <[email protected]> | https://github.com/getsentry/sentry.git | def test_failure_rate_without_transactions(self):
# Not sending buckets means no project is created automatically. We need
# a project without transaction data, so create one:
self.project
response = self.get_success_response(
self.organization.slug,
field=["transaction.failure_rate"],
statsPeriod="1m",
interval="1m",
useCase="performance",
)
assert response.data["groups"] == [
{
"by": {},
"series": {"transaction.failure_rate": [None]},
"totals": {"transaction.failure_rate": None},
},
]
| 76 | test_organization_metric_data.py | Python | tests/sentry/api/endpoints/test_organization_metric_data.py | 91bbcc1795642a02e1e5ba29558c01fe55e2fa26 | sentry | 1 |
|
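The commit message above states the routing rule directly: a "performance" use case should hit the `generic_metrics` dataset, while release health keeps using `metrics`. A minimal, illustrative dispatch helper — the names below are invented for this sketch and are not the Sentry implementation:

```python
# Hypothetical helper mirroring the routing rule described above.
USE_CASE_TO_DATASET = {
    "performance": "generic_metrics",
    "release_health": "metrics",
}

def dataset_for_use_case(use_case: str) -> str:
    try:
        return USE_CASE_TO_DATASET[use_case.lower()]
    except KeyError:
        raise ValueError(f"unknown use case: {use_case}") from None

assert dataset_for_use_case("performance") == "generic_metrics"
```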
176,434 | 16 | 10 | 6 | 88 | 12 | 0 | 19 | 37 | test_tutte_polynomial_disjoint_C5 | Add Tutte polynomial (#5265)
Add a new polynomial module to algorithms for characteristic polynomials.
Adds the Tutte polynomial, which is computed and ultimate represented as a
sympy expression.
Co-authored-by: Dan Schult <[email protected]>
Co-authored-by: Ross Barnowski <[email protected]> | https://github.com/networkx/networkx.git | def test_tutte_polynomial_disjoint_C5():
g = nx.cycle_graph(5)
t_g = nx.tutte_polynomial(g)
h = nx.disjoint_union(g, g)
t_h = nx.tutte_polynomial(h)
assert sympy.simplify(t_g * t_g).equals(t_h)
| 53 | test_polynomials.py | Python | networkx/algorithms/tests/test_polynomials.py | f11068c0115ede0c7b631f771c10be7efd0b950b | networkx | 1 |
|
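The assertion in the test above — that the polynomial of `disjoint_union(g, g)` equals the square of the polynomial of `g` — is an instance of a standard property of the Tutte polynomial: it is multiplicative over disjoint unions,

$T_{G_1 \sqcup G_2}(x, y) = T_{G_1}(x, y)\,T_{G_2}(x, y)$, so in particular $T_{C_5 \sqcup C_5} = T_{C_5}^{2}$.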
101,260 | 74 | 14 | 24 | 298 | 30 | 0 | 110 | 374 | copy | lib.align updates:
- alignments.py
- Add typed dicts for imported alignments
- Explicitly check for presence of thumb value in alignments dict
- linting
- detected_face.py
- Typing
- Linting
- Legacy support for pre-aligned face
- Update dependencies to new property names | https://github.com/deepfakes/faceswap.git | def copy(self, frame_index, direction):
logger.debug("frame: %s, direction: %s", frame_index, direction)
faces = self._faces_at_frame_index(frame_index)
frames_with_faces = [idx for idx, faces in enumerate(self._detected_faces.current_faces)
if len(faces) > 0]
if direction == "prev":
idx = next((idx for idx in reversed(frames_with_faces)
if idx < frame_index), None)
else:
idx = next((idx for idx in frames_with_faces
if idx > frame_index), None)
if idx is None:
# No previous/next frame available
return
logger.debug("Copying alignments from frame %s to frame: %s", idx, frame_index)
# aligned_face cannot be deep copied, so remove and recreate
to_copy = self._faces_at_frame_index(idx)
for face in to_copy:
face._aligned = None # pylint:disable=protected-access
copied = deepcopy(to_copy)
for old_face, new_face in zip(to_copy, copied):
old_face.load_aligned(None)
new_face.load_aligned(None)
faces.extend(copied)
self._tk_face_count_changed.set(True)
self._globals.tk_update.set(True)
| 187 | detected_faces.py | Python | tools/manual/detected_faces.py | 5e73437be47f2410439a3c6716de96354e6a0c94 | faceswap | 11 |
|
189,043 | 10 | 10 | 7 | 56 | 5 | 0 | 10 | 43 | chdir | add flake8-quotes plugin
Signed-off-by: Giampaolo Rodola <[email protected]> | https://github.com/giampaolo/psutil.git | def chdir(dirname):
curdir = os.getcwd()
try:
os.chdir(dirname)
yield
finally:
os.chdir(curdir)
| 30 | __init__.py | Python | psutil/tests/__init__.py | ddea4072684561fc8fe754a7f2baf0cc6a787c33 | psutil | 2 |
|
245,720 | 8 | 8 | 3 | 30 | 5 | 0 | 9 | 30 | encode | [Refactor] Refactor anchor head and base head with boxlist (#8625)
* Refactor anchor head
* Update
* Update
* Update
* Add a series of boxes tools
* Fix box type to support n x box_dim boxes
* revert box type changes
* Add docstring
* refactor retina_head
* Update
* Update
* Fix comments
* modify docstring of coder and ioucalculator
* Replace with_boxlist with use_box_type | https://github.com/open-mmlab/mmdetection.git | def encode(self, bboxes, gt_bboxes):
gt_bboxes = get_box_tensor(gt_bboxes)
return gt_bboxes
| 18 | pseudo_bbox_coder.py | Python | mmdet/models/task_modules/coders/pseudo_bbox_coder.py | d915740fa8228cf57741b27d9e5d66e358456b8e | mmdetection | 1 |
|
125,487 | 7 | 10 | 6 | 47 | 7 | 0 | 7 | 21 | name | [Datasets] Automatically cast tensor columns when building Pandas blocks. (#26684)
This PR tries to automatically cast tensor columns to our TensorArray extension type when building Pandas blocks, logging a warning and falling back to the opaque object-typed column if the cast fails. This should allow users to remain mostly tensor extension agnostic.
TensorArray now eagerly validates the underlying tensor data, raising an error if e.g. the underlying ndarrays have heterogeneous shapes; previously, TensorArray wouldn't validate this on construction and would instead let failures happen downstream. This means that our internal TensorArray use needs to follow a try-except pattern, falling back to a plain NumPy object column. | https://github.com/ray-project/ray.git | def name(self) -> str:
return f"{type(self).__name__}(shape={self._shape}, dtype={self._dtype})"
| 11 | pandas.py | Python | python/ray/air/util/tensor_extensions/pandas.py | 0c139914bbb3e3557f13738b5f3f9fe8d2d428b4 | ray | 1 |
|
166,957 | 53 | 9 | 7 | 126 | 13 | 0 | 66 | 96 | np_dtype_to_arrays | DOC: Added docstrings to fixtures defined in array module (#47211) | https://github.com/pandas-dev/pandas.git | def np_dtype_to_arrays(any_real_numpy_dtype):
np_dtype = np.dtype(any_real_numpy_dtype)
pa_type = pa.from_numpy_dtype(np_dtype)
# None ensures the creation of a bitmask buffer.
pa_array = pa.array([0, 1, 2, None], type=pa_type)
# Since masked Arrow buffer slots are not required to contain a specific
# value, assert only the first three values of the created np.array
np_expected = np.array([0, 1, 2], dtype=np_dtype)
mask_expected = np.array([True, True, True, False])
return np_dtype, pa_array, np_expected, mask_expected
| 84 | test_arrow_compat.py | Python | pandas/tests/arrays/masked/test_arrow_compat.py | 89be1f053b695c4ce1c0569f737caf3f03c12128 | pandas | 1 |
|
142,458 | 23 | 11 | 6 | 79 | 10 | 0 | 24 | 70 | was_current_actor_reconstructed | [api] Annotate as public / move ray-core APIs to _private and add enforcement rule (#25695)
Enable checking of the ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private and associated fixes. | https://github.com/ray-project/ray.git | def was_current_actor_reconstructed(self):
assert (
not self.actor_id.is_nil()
), "This method shouldn't be called inside Ray tasks."
actor_info = ray._private.state.actors(self.actor_id.hex())
return actor_info and actor_info["NumRestarts"] != 0
| 46 | runtime_context.py | Python | python/ray/runtime_context.py | 43aa2299e6623c8f8c7c4a1b80133459d0aa68b0 | ray | 2 |
|
36,152 | 36 | 9 | 3 | 64 | 10 | 1 | 40 | 59 | _set_gradient_checkpointing | Visual Attention Network (VAN) (#16027)
* encoder works
* addded files
* norm in stage
* convertion script
* tests
* fix copies
* make fix-copies
* fixed __init__
* make fix-copies
* fix
* shapiro test needed
* make fix-copie
* minor changes
* make style + quality
* minor refactor conversion script
* rebase + tests
* removed unused variables
* updated doc
* toctree
* CI
* doc
* Apply suggestions from code review
Co-authored-by: Sylvain Gugger <[email protected]>
* resolved conversations
* make fixup
* config passed to modules
* config passed to modules
* Apply suggestions from code review
Co-authored-by: NielsRogge <[email protected]>
* conversations
* conversations
* copyrights
* normal test
* tests
Co-authored-by: Sylvain Gugger <[email protected]>
Co-authored-by: NielsRogge <[email protected]> | https://github.com/huggingface/transformers.git | def _set_gradient_checkpointing(self, module, value=False):
if isinstance(module, VanModel):
module.gradient_checkpointing = value
VAN_START_DOCSTRING = r
VAN_INPUTS_DOCSTRING = r
@add_start_docstrings(
"The bare VAN model outputting raw features without any specific head on top. Note, VAN does not have an embedding layer.",
VAN_START_DOCSTRING,
) | @add_start_docstrings(
"The bare VAN model outputting raw features without any specific head on top. Note, VAN does not have an embedding layer.",
VAN_START_DOCSTRING,
) | 24 | modeling_van.py | Python | src/transformers/models/van/modeling_van.py | 0a057201a96565df29984d716f660fd8d634329a | transformers | 2 |
291,719 | 6 | 6 | 2 | 20 | 3 | 0 | 6 | 12 | aiohttp_server | Upgrade pytest-aiohttp (#82475)
* Upgrade pytest-aiohttp
* Make sure executors, tasks and timers are closed
Some test will trigger warnings on garbage collect, these warnings
spills over into next test.
Some test trigger tasks that raise errors on shutdown, these spill
over into next test.
This is to mimic older pytest-aiohttp and it's behaviour on test
cleanup.
Discussions on similar changes for pytest-aiohttp are here:
https://github.com/pytest-dev/pytest-asyncio/pull/309
* Replace loop with event_loop
* Make sure time is frozen for tests
* Make sure the ConditionType is not async
/home-assistant/homeassistant/helpers/template.py:2082: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited
def wrapper(*args, **kwargs):
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.
* Increase litejet press tests with a factor 10
The times are simulated anyway, and we can't stop the normal
event from occuring.
* Use async handlers for aiohttp
tests/components/motioneye/test_camera.py::test_get_still_image_from_camera
tests/components/motioneye/test_camera.py::test_get_still_image_from_camera
tests/components/motioneye/test_camera.py::test_get_stream_from_camera
tests/components/motioneye/test_camera.py::test_get_stream_from_camera
tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template
tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template
/Users/joakim/src/hass/home-assistant/venv/lib/python3.9/site-packages/aiohttp/web_urldispatcher.py:189: DeprecationWarning: Bare functions are deprecated, use async ones
warnings.warn(
* Switch to freezegun in modbus tests
The tests allowed clock to tick in between steps
* Make sure skybell objects are fully mocked
Old tests would trigger attempts to post to cloud services:
```
DEBUG:aioskybell:HTTP post https://cloud.myskybell.com/api/v3/login/ Request with headers: {'content-type': 'application/json', 'accept': '*/*', 'x-skybell-app-id': 'd2b542c7-a7e4-4e1e-b77d-2b76911c7c46', 'x-skybell-client-id': '1f36a3c0-6dee-4997-a6db-4e1c67338e57'}
```
* Fix sorting that broke after rebase | https://github.com/home-assistant/core.git | def aiohttp_server(event_loop, aiohttp_server, socket_enabled):
return aiohttp_server
| 12 | test_camera.py | Python | tests/components/motioneye/test_camera.py | c576a68d336bc91fd82c299d9b3e5dfdc1c14960 | core | 1 |
|
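The "Use async handlers for aiohttp" bullet above refers to the quoted `DeprecationWarning` ("Bare functions are deprecated, use async ones"). A minimal sketch of the change it implies — the route and response body here are arbitrary:

```python
from aiohttp import web

# Deprecated style: a bare, synchronous handler triggers the warning.
def old_handler(request):
    return web.Response(text="ok")

# Preferred style: an async handler.
async def new_handler(request):
    return web.Response(text="ok")

app = web.Application()
app.router.add_get("/ok", new_handler)
```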
141,293 | 13 | 6 | 9 | 30 | 7 | 0 | 13 | 34 | to_config | [RLlib] Introduce basic connectors library. (#25311) | https://github.com/ray-project/ray.git | def to_config(self) -> Tuple[str, List[Any]]:
# Must be implemented by each connector.
raise NotImplementedError
| 18 | connector.py | Python | rllib/connectors/connector.py | 9b65d5535df7fcdfb5cb8fd7ad90c328eb1f1aed | ray | 1 |
|
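`to_config` above is an abstract hook: each concrete connector is expected to return a `(name, params)` tuple that can later be used to re-create it. A standalone sketch of that contract — the connector below is invented for illustration and does not subclass the real RLlib base class:

```python
from typing import Any, List, Tuple

# Hypothetical connector showing the (name, params) serialization contract.
class ClipRewardConnector:
    def __init__(self, limit: float = 1.0):
        self.limit = limit

    def to_config(self) -> Tuple[str, List[Any]]:
        # Enough information to rebuild an equivalent connector later.
        return "ClipRewardConnector", [self.limit]

name, params = ClipRewardConnector(limit=5.0).to_config()
assert (name, params) == ("ClipRewardConnector", [5.0])
```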
259,994 | 32 | 16 | 11 | 164 | 14 | 0 | 37 | 106 | test_iforest | TST use global_random_seed in sklearn/ensemble/tests/test_iforest.py (#22901)
Co-authored-by: jeremie du boisberranger <[email protected]>
Co-authored-by: Guillaume Lemaitre <[email protected]>
Co-authored-by: Olivier Grisel <[email protected]> | https://github.com/scikit-learn/scikit-learn.git | def test_iforest(global_random_seed):
X_train = np.array([[0, 1], [1, 2]])
X_test = np.array([[2, 1], [1, 1]])
grid = ParameterGrid(
{"n_estimators": [3], "max_samples": [0.5, 1.0, 3], "bootstrap": [True, False]}
)
with ignore_warnings():
for params in grid:
IsolationForest(random_state=global_random_seed, **params).fit(
X_train
).predict(X_test)
| 109 | test_iforest.py | Python | sklearn/ensemble/tests/test_iforest.py | 6ca1f5e4d0d16bc9a7f28582079a15e14f012719 | scikit-learn | 2 |
|
280,071 | 33 | 13 | 12 | 103 | 12 | 0 | 40 | 151 | clone_model | Move serialization-related logic in utils/generic_utils.py to saving/legacy/serialization.py.
PiperOrigin-RevId: 479688207 | https://github.com/keras-team/keras.git | def clone_model(model, input_tensors=None, clone_function=None):
with serialization.DisableSharedObjectScope():
if clone_function is None:
clone_function = _clone_layer
if isinstance(model, Sequential):
return _clone_sequential_model(
model, input_tensors=input_tensors, layer_fn=clone_function
)
else:
return _clone_functional_model(
model, input_tensors=input_tensors, layer_fn=clone_function
)
# "Clone" a subclassed model by resetting all of the attributes. | 65 | cloning.py | Python | keras/models/cloning.py | c269e3cd8fed713fb54d2971319df0bfe6e1bf10 | keras | 3 |
|
189,882 | 13 | 8 | 10 | 44 | 7 | 0 | 13 | 34 | get_end_anchors | Fix vm.get_end_anchors() docstring (#2755)
Co-authored-by: Benjamin Hackl <[email protected]> | https://github.com/ManimCommunity/manim.git | def get_end_anchors(self) -> np.ndarray:
nppcc = self.n_points_per_cubic_curve
return self.points[nppcc - 1 :: nppcc]
| 26 | vectorized_mobject.py | Python | manim/mobject/types/vectorized_mobject.py | e8124bb95609237e53f0be3e7dd208e3e2dfd255 | manim | 1 |
|
266,730 | 34 | 11 | 11 | 126 | 23 | 0 | 40 | 91 | command_coverage_analyze_targets_expand | ansible-test - Code cleanup and refactoring. (#77169)
* Remove unnecessary PyCharm ignores.
* Ignore intentional undefined attribute usage.
* Add missing type hints. Fix existing type hints.
* Fix docstrings and comments.
* Use function to register completion handler.
* Pass strings to display functions.
* Fix CompositeAction handling of dest argument.
* Use consistent types in expressions/assignments.
* Use custom function to keep linters happy.
* Add missing raise for custom exception.
* Clean up key/value type handling in cloud plugins.
* Use dataclass instead of dict for results.
* Add custom type_guard function to check lists.
* Ignore return type that can't be checked (yet).
* Avoid changing types on local variables. | https://github.com/ansible/ansible.git | def command_coverage_analyze_targets_expand(args): # type: (CoverageAnalyzeTargetsExpandConfig) -> None
host_state = prepare_profiles(args) # coverage analyze targets expand
if args.delegate:
raise Delegate(host_state=host_state)
covered_targets, covered_path_arcs, covered_path_lines = read_report(args.input_file)
report = dict(
arcs=expand_indexes(covered_path_arcs, covered_targets, format_arc),
lines=expand_indexes(covered_path_lines, covered_targets, format_line),
)
if not args.explain:
write_json_file(args.output_file, report, encoder=SortedSetEncoder)
| 81 | expand.py | Python | test/lib/ansible_test/_internal/commands/coverage/analyze/targets/expand.py | a06fa496d3f837cca3c437ab6e9858525633d147 | ansible | 3 |
|
272,366 | 11 | 10 | 4 | 73 | 10 | 0 | 12 | 40 | test_scale_none | Reformatting the codebase with black.
PiperOrigin-RevId: 450093126 | https://github.com/keras-team/keras.git | def test_scale_none(self):
attention_layer = keras.layers.Attention()
attention_layer.build(input_shape=([1, 1, 1], [1, 1, 1]))
self.assertIsNone(attention_layer.scale)
| 47 | attention_test.py | Python | keras/layers/attention/attention_test.py | 84afc5193d38057e2e2badf9c889ea87d80d8fbf | keras | 1 |
|
132,220 | 9 | 8 | 4 | 39 | 7 | 0 | 9 | 41 | __call__ | [CI] Format Python code with Black (#21975)
See #21316 and #21311 for the motivation behind these changes. | https://github.com/ray-project/ray.git | def __call__(self, env):
return self.after_iteration(
env.model, env.iteration, env.evaluation_result_list
)
| 25 | xgboost.py | Python | python/ray/tune/integration/xgboost.py | 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | ray | 1 |
|
194,668 | 18 | 8 | 4 | 50 | 9 | 0 | 21 | 33 | test_kivy_log_mode_marker_on | Support KivyLogMode environment variable for logging testing (#7971)
* Support KivyLogMode for logging testing
Also:
Remove unused imports.
Remove Python 2 only code
Run through Black to canonicalize formatting
* Undo formatting changes
Undo black. | https://github.com/kivy/kivy.git | def test_kivy_log_mode_marker_on():
from kivy.logger import previous_stderr
assert sys.stderr == previous_stderr, "Kivy.logging override stderr"
assert logging.root.parent is None, "Kivy.logging override root logger"
| 29 | test_logger.py | Python | kivy/tests/test_logger.py | 2d9755ad8a82ba0777299cbc1666bed25278db94 | kivy | 1 |
|
22,285 | 26 | 12 | 14 | 143 | 12 | 0 | 43 | 125 | which_pip | Remove other spots that did not use the internal pip version to execute pipenv commands. | https://github.com/pypa/pipenv.git | def which_pip(project):
location = None
if "VIRTUAL_ENV" in os.environ:
location = os.environ["VIRTUAL_ENV"]
pip = project._which("python", location=location)
if pip:
return pip
if not pip:
for p in ("pip", "pip3", "pip2"):
where = system_which(p)
if where:
return where
pip = fallback_which("pip", allow_global=True, location=location)
return pip
| 83 | core.py | Python | pipenv/core.py | 374b670afb206c6e1ae9b2edd27c244dae5d296a | pipenv | 6 |
|
154,654 | 21 | 11 | 7 | 69 | 9 | 0 | 24 | 93 | __getattr__ | REFACTOR-#5026: Change exception names to simplify grepping (#5027)
Signed-off-by: Myachev <[email protected]> | https://github.com/modin-project/modin.git | def __getattr__(self, key):
try:
return object.__getattribute__(self, key)
except AttributeError as err:
if key not in _ATTRS_NO_LOOKUP and key in self.index:
return self[key]
raise err
| 43 | series.py | Python | modin/pandas/series.py | 0a2c0de4451f7e2e8f337a9478d7595473aa348e | modin | 4 |
|
102,230 | 149 | 16 | 42 | 502 | 40 | 0 | 250 | 584 | broadcast_object_list | Prevent sum overflow in broadcast_object_list (#70605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70605
broadcast_object_list cast the sum of all object lengths from long to int, causing overflows.
Test Plan:
Add a Tensor with >2GB storage requirement (in distributed_test.py) to object broadcast.
This Tensor is only added if tests are running at Meta, as GitHub tests will OOM.
Without fix the length will overflow and the program will request a negative sized Tensor:
```
RuntimeError: Trying to create tensor with negative dimension -2147482417: [-2147482417]
```
With fix it will pass the test.
Test used on server with GPUs:
buck test mode/dev-nosan //caffe2/test/distributed:distributed_nccl_spawn --local -- broadcast_object
buck test mode/dev-nosan //caffe2/test/distributed:distributed_gloo_spawn --local -- broadcast_object
Reviewed By: r-barnes
Differential Revision: D33405741
fbshipit-source-id: 972165f8297b3f5d475636e6127ed4a49adacab1 | https://github.com/pytorch/pytorch.git | def broadcast_object_list(object_list, src=0, group=None, device=None):
if _rank_not_in_group(group):
_warn_not_in_group("broadcast_object_list")
return
my_rank = get_rank()
# Serialize object_list elements to tensors on src rank.
if my_rank == src:
tensor_list, size_list = zip(*[_object_to_tensor(obj) for obj in object_list])
object_sizes_tensor = torch.cat(size_list)
else:
object_sizes_tensor = torch.empty(len(object_list), dtype=torch.long)
# Current device selection.
# To preserve backwards compatibility, ``device`` defaults to ``None``
# in which case we run current logic of device selection, i.e.
# ``current_device`` is CUDA if backend is NCCL otherwise CPU device. In the
# case it is not ``None`` we move the size and object tensors to be
# broadcasted to this device.
is_nccl_backend = _check_for_nccl_backend(group)
current_device = None
if device is not None:
if is_nccl_backend and device.type != "cuda":
raise ValueError("device type must be cuda for nccl backend")
current_device = device
else:
current_device = torch.device("cpu")
if is_nccl_backend:
# See note about using torch.cuda.current_device() here in
# docstring. We cannot simply use my_rank since rank == device is
# not necessarily true.
current_device = torch.device("cuda", torch.cuda.current_device())
if is_nccl_backend:
object_sizes_tensor = object_sizes_tensor.to(current_device)
# Broadcast object sizes
broadcast(object_sizes_tensor, src=src, group=group)
# Concatenate and broadcast serialized object tensors
if my_rank == src:
object_tensor = torch.cat(tensor_list)
else:
object_tensor = torch.empty(
torch.sum(object_sizes_tensor).item(), # type: ignore[arg-type]
dtype=torch.uint8,
)
if is_nccl_backend:
object_tensor = object_tensor.to(current_device)
broadcast(object_tensor, src=src, group=group)
# Deserialize objects using their stored sizes.
offset = 0
if my_rank != src:
for i, obj_size in enumerate(object_sizes_tensor):
obj_view = object_tensor[offset : offset + obj_size]
obj_view = obj_view.type(torch.uint8)
if obj_view.device != torch.device("cpu"):
obj_view = obj_view.cpu()
offset += obj_size
object_list[i] = _tensor_to_object(obj_view, obj_size)
| 301 | distributed_c10d.py | Python | torch/distributed/distributed_c10d.py | e1e43c4e710389a3fcf54cd7f3537336e21d3ae5 | pytorch | 14 |
|
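The negative dimension quoted in the commit message above (`-2147482417`) is what 32-bit wraparound produces for a byte count just over `INT32_MAX`; the total size below is the one value in the unsigned 32-bit range consistent with that error, not a figure reported by the test itself:

```python
# Reproduce the wraparound arithmetic behind the quoted error message.
INT32_MAX = 2**31 - 1            # 2_147_483_647
total_bytes = 2_147_484_879      # assumed payload size, 1_232 bytes over INT32_MAX

# Reinterpret the value as a signed 32-bit integer.
wrapped = (total_bytes + 2**31) % 2**32 - 2**31
print(wrapped)                   # -2147482417, the "negative dimension" in the error
```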
160,501 | 7 | 7 | 2 | 35 | 6 | 1 | 7 | 12 | inner | DIC: Misc RST reformatting.
This contains various RST reformatting.
One, moving `(C)` one line up, is specific to a bug in tree-sitter-rst
that mis parses this section. Another is adding one black line for a
similar reason where `..` is seen as section underline by
tree-sitter-rst.
This is some shuffling of section underline: try to be consitant,
`=`, then `-`, then `~`, with this refactor there is also no more
section that use backticks as underline.
Note in particular that non-consitency of underline lead to a problem in
datetime64 section where "weekmasks" (underlined with `-`) were actually
a level-4 heading instead of a level 2 or 3 I guess, and thus were
nested under the `busday_count()` section.
You'll note also 2 formulas that are under double-quotes as they are not
references. | https://github.com/numpy/numpy.git | def inner(a, b):
return (a, b)
@array_function_from_c_func_and_dispatcher(_multiarray_umath.where) | @array_function_from_c_func_and_dispatcher(_multiarray_umath.where) | 14 | multiarray.py | Python | numpy/core/multiarray.py | 84eeca630ec9c5bf580bc456035c87d8591c1389 | numpy | 1 |
179,282 | 25 | 10 | 8 | 90 | 9 | 0 | 31 | 64 | resize_and_crop | Format The Codebase
- black formatting
- isort formatting | https://github.com/gradio-app/gradio.git | def resize_and_crop(img, size, crop_type="center"):
if crop_type == "top":
center = (0, 0)
elif crop_type == "center":
center = (0.5, 0.5)
else:
raise ValueError
return ImageOps.fit(img, size, centering=center)
##################
# Audio
##################
| 57 | processing_utils.py | Python | gradio/processing_utils.py | cc0cff893f9d7d472788adc2510c123967b384fe | gradio | 3 |
|
92,536 | 20 | 13 | 12 | 107 | 17 | 0 | 20 | 128 | update_snuba_subscription | feat(mep): Restructure how we determine entity subscription for alerts (#36605)
Previously we mapped a specific `EntityKey` to all `EntitySubscription` classes. As part of
introducing metric based performance alerts, we want to have the `EntitySubscription` determine the
specific entity that the subscription will run on. This allows us to automatically determine the
correct entity for metric based alerts without having to duplicate logic that parses
aggregates/datasets/etc. | https://github.com/getsentry/sentry.git | def update_snuba_subscription(subscription, old_query_type, old_dataset):
with transaction.atomic():
subscription.update(status=QuerySubscription.Status.UPDATING.value)
update_subscription_in_snuba.apply_async(
kwargs={
"query_subscription_id": subscription.id,
"old_query_type": old_query_type.value,
"old_dataset": old_dataset.value,
},
countdown=5,
)
return subscription
| 65 | subscriptions.py | Python | src/sentry/snuba/subscriptions.py | 06885ee7284a274d02a9dc1f6a0348c8edc07184 | sentry | 1 |
|
118,531 | 17 | 9 | 8 | 98 | 15 | 1 | 22 | 49 | experimental_set_query_params | Rename and refactor `Report` machinery (#4141)
This refactor renames (almost) everything related to the outdated "report" concept with more precise concepts that we use throughout our code, primarily "script run", "session", and "app". | https://github.com/streamlit/streamlit.git | def experimental_set_query_params(**query_params):
ctx = _get_script_run_ctx()
if ctx is None:
return
ctx.query_string = _parse.urlencode(query_params, doseq=True)
msg = _ForwardMsg_pb2.ForwardMsg()
msg.page_info_changed.query_string = ctx.query_string
ctx.enqueue(msg)
@_contextlib.contextmanager | @_contextlib.contextmanager | 54 | __init__.py | Python | lib/streamlit/__init__.py | 704eab3478cf69847825b23dabf15813a8ac9fa2 | streamlit | 2 |
9,928 | 5 | 8 | 3 | 40 | 5 | 0 | 5 | 26 | replace_docs | feat: star routing (#3900)
* feat(proto): adjust proto for star routing (#3844)
* feat(proto): adjust proto for star routing
* feat(proto): generate proto files
* feat(grpc): refactor grpclet interface (#3846)
* feat: refactor connection pool for star routing (#3872)
* feat(k8s): add more labels to k8s deployments
* feat(network): refactor connection pool
* feat(network): refactor k8s pool
* feat: star routing graph gateway (#3877)
* feat: star routing - refactor grpc data runtime (#3887)
* feat(runtimes): refactor grpc dataruntime
* fix(tests): adapt worker runtime tests
* fix(import): fix import
* feat(proto): enable sending multiple lists (#3891)
* feat: star routing gateway (#3893)
* feat: star routing gateway all protocols (#3897)
* test: add streaming and prefetch tests (#3901)
* feat(head): new head runtime for star routing (#3899)
* feat(head): new head runtime
* feat(head): new head runtime
* style: fix overload and cli autocomplete
* feat(network): improve proto comments
Co-authored-by: Jina Dev Bot <[email protected]>
* feat(worker): merge docs in worker runtime (#3905)
* feat(worker): merge docs in worker runtime
* feat(tests): assert after clean up
* feat(tests): star routing runtime integration tests (#3908)
* fix(tests): fix integration tests
* test: test runtimes fast slow request (#3910)
* feat(zmq): purge zmq, zed, routing_table (#3915)
* feat(zmq): purge zmq, zed, routing_table
* style: fix overload and cli autocomplete
* feat(zmq): adapt comment in dependency list
* style: fix overload and cli autocomplete
* fix(tests): fix type tests
Co-authored-by: Jina Dev Bot <[email protected]>
* test: add test gateway to worker connection (#3921)
* feat(pea): adapt peas for star routing (#3918)
* feat(pea): adapt peas for star routing
* style: fix overload and cli autocomplete
* feat(pea): add tests
* feat(tests): add failing head pea test
Co-authored-by: Jina Dev Bot <[email protected]>
* feat(tests): integration tests for peas (#3923)
* feat(tests): integration tests for peas
* feat(pea): remove _inner_pea function
* feat: star routing container pea (#3922)
* test: rescue tests (#3942)
* fix: fix streaming tests (#3945)
* refactor: move docker run to run (#3948)
* feat: star routing pods (#3940)
* feat(pod): adapt pods for star routing
* feat(pods): adapt basepod to star routing
* feat(pod): merge pod and compound pod
* feat(tests): fix tests
* style: fix overload and cli autocomplete
* feat(test): add container pea int test
* feat(ci): remove more unnecessary tests
* fix(tests): remove jinad runtime
* feat(ci): remove latency tracking
* fix(ci): fix ci def
* fix(runtime): enable runtime to be exited
* fix(tests): wrap runtime test in process
* fix(runtimes): remove unused runtimes
* feat(runtimes): improve cancel wait
* fix(ci): build test pip again in ci
* fix(tests): fix a test
* fix(test): run async in its own process
* feat(pod): include shard in activate msg
* fix(pea): dont join
* feat(pod): more debug out
* feat(grpc): manage channels properly
* feat(pods): remove exitfifo
* feat(network): add simple send retry mechanism
* fix(network): await pool close
* fix(test): always close grpc server in worker
* fix(tests): remove container pea from tests
* fix(tests): reorder tests
* fix(ci): split tests
* fix(ci): allow alias setting
* fix(test): skip a test
* feat(pods): address comments
Co-authored-by: Jina Dev Bot <[email protected]>
* test: unblock skipped test (#3957)
* feat: jinad pea (#3949)
* feat: jinad pea
* feat: jinad pea
* test: remote peas
* test: topology tests with jinad
* ci: parallel jobs
* feat(tests): add pod integration tests (#3958)
* feat(tests): add pod integration tests
* fix(tests): make tests less flaky
* fix(test): fix test
* test(pea): remote pea topologies (#3961)
* test(pea): remote pea simple topology
* test: remote pea topologies
* refactor: refactor streamer result handling (#3960)
* feat(k8s): adapt K8s Pod for StarRouting (#3964)
* test: optimize k8s test
* test: increase timeout and use different namespace
* test: optimize k8s test
* test: build and load image when needed
* test: refactor k8s test
* test: fix image name error
* test: fix k8s image load
* test: fix typoe port expose
* test: update tests in connection pool and handling
* test: remove unused fixture
* test: parameterize docker images
* test: parameterize docker images
* test: parameterize docker images
* feat(k8s): adapt k8s pod for star routing
* fix(k8s): dont overwrite add/remove function in pool
* fix(k8s): some fixes
* fix(k8s): some more fixes
* fix(k8s): linting
* fix(tests): fix tests
* fix(tests): fix k8s unit tests
* feat(k8s): complete k8s integration test
* feat(k8s): finish k8s tests
* feat(k8s): fix test
* fix(tests): fix test with no name
* feat(k8s): unify create/replace interface
* feat(k8s): extract k8s port constants
* fix(tests): fix tests
* fix(tests): wait for runtime being ready in tests
* feat(k8s): address comments
Co-authored-by: bwanglzu <[email protected]>
* feat(flow): adapt Flow for StarRouting (#3986)
* feat(flow): add routes
* feat(flow): adapt flow to star routing
* style: fix overload and cli autocomplete
* feat(flow): handle empty topologies
* feat(k8s): allow k8s pool disabling
* style: fix overload and cli autocomplete
* fix(test): fix test with mock
* fix(tests): fix more tests
* feat(flow): clean up tests
* style: fix overload and cli autocomplete
* fix(tests): fix more tests
* feat: add plot function (#3994)
* fix(tests): avoid hanging tests
* feat(flow): add type hinting
* fix(test): fix duplicate exec name in test
* fix(tests): fix more tests
* fix(tests): enable jinad test again
* fix(tests): random port fixture
* fix(style): replace quotes
Co-authored-by: Jina Dev Bot <[email protected]>
Co-authored-by: Joan Fontanals <[email protected]>
* feat(ci): bring back ci (#3997)
* feat(ci): enable ci again
* style: fix overload and cli autocomplete
* feat(ci): add latency tracking
* feat(ci): bring back some tests
* fix(tests): remove invalid port test
* feat(ci): disable daemon and distributed tests
* fix(tests): fix entrypoint in hub test
* fix(tests): wait for gateway to be ready
* fix(test): fix more tests
* feat(flow): do rolling update and scale sequentially
* fix(tests): fix more tests
* style: fix overload and cli autocomplete
* feat: star routing hanging pods (#4011)
* fix: try to handle hanging pods better
* test: hanging pods test work
* fix: fix topology graph problem
* test: add unit test to graph
* fix(tests): fix k8s tests
* fix(test): fix k8s test
* fix(test): fix k8s pool test
* fix(test): fix k8s test
* fix(test): fix k8s connection pool setting
* fix(tests): make runtime test more reliable
* fix(test): fix routes test
* fix(tests): make rolling update test less flaky
* feat(network): guarantee unique ports
* feat(network): do round robin for shards
* fix(ci): increase pytest timeout to 10 min
Co-authored-by: Jina Dev Bot <[email protected]>
Co-authored-by: Joan Fontanals <[email protected]>
* fix(ci): fix ci file
* feat(daemon): jinad pod for star routing
* Revert "feat(daemon): jinad pod for star routing"
This reverts commit ed9b37ac862af2e2e8d52df1ee51c0c331d76f92.
* feat(daemon): remote jinad pod support (#4042)
* feat(daemon): add pod tests for star routing
* feat(daemon): add remote pod test
* test(daemon): add remote pod arguments test
* test(daemon): add async scale test
* test(daemon): add rolling update test
* test(daemon): fix host
* feat(proto): remove message proto (#4051)
* feat(proto): remove message proto
* fix(tests): fix tests
* fix(tests): fix some more tests
* fix(tests): fix more tests
* fix(tests): fix more tests
* fix(tests): fix more tests
* fix(tests): fix more tests
* feat(proto): put docs back in data
* fix(proto): clean up
* feat(proto): clean up
* fix(tests): skip latency tracking
* fix(test): fix hub test
* fix(tests): fix k8s test
* fix(test): some test clean up
* fix(style): clean up style issues
* feat(proto): adjust for rebase
* fix(tests): bring back latency tracking
* fix(tests): fix merge accident
* feat(proto): skip request serialization (#4074)
* feat: add reduce to star routing (#4070)
* feat: add reduce on shards to head runtime
* test: add reduce integration tests with fixed order
* feat: add reduce on needs
* chore: get_docs_matrix_from_request becomes public
* style: fix overload and cli autocomplete
* docs: remove undeterministic results warning
* fix: fix uses_after
* test: assert correct num docs after reducing in test_external_pod
* test: correct asserts after reduce in test_rolling_update
* fix: no reduce if uses_after_address is set
* fix: get_docs_from_request only if needed
* fix: fix tests after merge
* refactor: move reduce from data_request_handler to head
* style: fix overload and cli autocomplete
* chore: apply suggestions
* fix: fix asserts
* chore: minor test fix
* chore: apply suggestions
* test: remove flow tests with external executor (pea)
* fix: fix test_expected_messages_routing
* fix: fix test_func_joiner
* test: adapt k8s test
Co-authored-by: Jina Dev Bot <[email protected]>
* fix(k8s): fix static pool config
* fix: use custom protoc doc generator image (#4088)
* fix: use custom protoc doc generator image
* fix(docs): minor doc improvement
* fix(docs): use custom image
* fix(docs): copy docarray
* fix: doc building local only
* fix: timeout doc building
* fix: use updated args when building ContainerPea
* test: add container PeaFactory test
* fix: force pea close on windows (#4098)
* fix: dont reduce if uses exist (#4099)
* fix: dont use reduce if uses exist
* fix: adjust reduce tests
* fix: adjust more reduce tests
* fix: fix more tests
* fix: adjust more tests
* fix: ignore non jina resources (#4101)
* feat(executor): enable async executors (#4102)
* feat(daemon): daemon flow on star routing (#4096)
* test(daemon): add remote flow test
* feat(daemon): call scale in daemon
* feat(daemon): remove tail args and identity
* test(daemon): rename scalable executor
* test(daemon): add a small delay in async test
* feat(daemon): scale partial flow only
* feat(daemon): call scale directly in partial flow store
* test(daemon): use asyncio sleep
* feat(daemon): enable flow level distributed tests
* test(daemon): fix jinad env workspace config
* test(daemon): fix pod test use new port rolling update
* feat(daemon): enable distributed tests
* test(daemon): remove duplicate tests and zed runtime test
* test(daemon): fix stores unit test
* feat(daemon): enable part of distributed tests
* feat(daemon): enable part of distributed tests
* test: correct test paths
* test(daemon): add client test for remote flows
* test(daemon): send a request with jina client
* test(daemon): assert async generator
* test(daemon): small interval between tests
* test(daemon): add flow test for container runtime
* test(daemon): add flow test for container runtime
* test(daemon): fix executor name
* test(daemon): fix executor name
* test(daemon): use async client fetch result
* test(daemon): finish container flow test
* test(daemon): enable distributed in ci
* test(daemon): enable distributed in ci
* test(daemon): declare flows and pods
* test(daemon): debug ci if else
* test(daemon): debug ci if else
* test(daemon): declare flows and pods
* test(daemon): correct test paths
* test(daemon): add small delay for async tests
* fix: star routing fixes (#4100)
* docs: update docs
* fix: fix Request.__repr__
* docs: update flow remarks
* docs: fix typo
* test: add non_empty_fields test
* chore: remove non_empty_fields test
* feat: polling per endpoint (#4111)
* feat(polling): polling per endpoint configurable
* fix: adjust tests
* feat(polling): extend documentation
* style: fix overload and cli autocomplete
* fix: clean up
* fix: adjust more tests
* fix: remove repeat from flaky test
* fix: k8s test
* feat(polling): address pr feedback
* feat: improve docs
Co-authored-by: Jina Dev Bot <[email protected]>
* feat(grpc): support connect grpc server via ssl tunnel (#4092)
* feat(grpc): support ssl grpc connect if port is 443
* fix(grpc): use https option instead of detect port automatically
* chore: fix typo
* fix: update jina/peapods/networking.py
Co-authored-by: Joan Fontanals <[email protected]>
* fix: update jina/peapods/networking.py
Co-authored-by: Joan Fontanals <[email protected]>
* fix: update jina/peapods/networking.py
Co-authored-by: Joan Fontanals <[email protected]>
* test(networking): add test for peapods networking
* fix: address comments
Co-authored-by: Joan Fontanals <[email protected]>
* feat(polling): unify polling args (#4113)
* fix: several issues for jinad pods (#4119)
* fix: activate for jinad pods
* fix: dont expose worker pod in partial daemon
* fix: workspace setting
* fix: containerized flows
* fix: hub test
* feat(daemon): remote peas on star routing (#4112)
* test(daemon): fix request in peas
* test(daemon): fix request in peas
* test(daemon): fix sync async client test
* test(daemon): enable remote peas test
* test(daemon): replace send message to send request
* test(daemon): declare pea tests in ci
* test(daemon): use pea args fixture
* test(daemon): head pea use default host
* test(daemon): fix peas topologies
* test(daemon): fix pseudo naming
* test(daemon): use default host as host
* test(daemon): fix executor path
* test(daemon): add remote worker back
* test(daemon): skip local remote remote topology
* fix: jinad pea test setup
* fix: jinad pea tests
* fix: remove invalid assertion
Co-authored-by: jacobowitz <[email protected]>
* feat: enable daemon tests again (#4132)
* feat: enable daemon tests again
* fix: remove bogy empty script file
* fix: more jinad test fixes
* style: fix overload and cli autocomplete
* fix: scale and ru in jinad
* fix: fix more jinad tests
Co-authored-by: Jina Dev Bot <[email protected]>
* fix: fix flow test
* fix: improve pea tests reliability (#4136)
Co-authored-by: Joan Fontanals <[email protected]>
Co-authored-by: Jina Dev Bot <[email protected]>
Co-authored-by: Deepankar Mahapatro <[email protected]>
Co-authored-by: bwanglzu <[email protected]>
Co-authored-by: AlaeddineAbdessalem <[email protected]>
Co-authored-by: Zhaofeng Miao <[email protected]> | https://github.com/jina-ai/jina.git | def replace_docs(request, docs):
request.docs.clear()
request.docs.extend(docs)
| 23 | data_request_handler.py | Python | jina/peapods/runtimes/request_handlers/data_request_handler.py | 933415bfa1f9eb89f935037014dfed816eb9815d | jina | 1 |
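A minimal sketch of what the `replace_docs` helper above does, using a stand-in request object — the real jina `DataRequest`/`DocumentArray` types are not reproduced here, and `FakeRequest` plus the plain-list docs are assumptions for illustration only.

```python
# Hypothetical stand-in for a jina DataRequest; only the .docs container matters here.
class FakeRequest:
    def __init__(self, docs):
        self.docs = list(docs)  # real requests hold a DocumentArray, not a plain list


def replace_docs(request, docs):
    # Same two steps as the handler above: drop the old docs, then add the new ones.
    request.docs.clear()
    request.docs.extend(docs)


req = FakeRequest(docs=["old-doc"])
replace_docs(req, ["new-doc-1", "new-doc-2"])
print(req.docs)  # ['new-doc-1', 'new-doc-2']
```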
|
261,647 | 49 | 12 | 32 | 273 | 23 | 0 | 76 | 233 | test_learning_curve_display_plot_kwargs | FEA add LearningCurveDisplay to show plot learning curve (#24084)
Co-authored-by: jeremie du boisberranger <[email protected]>
Co-authored-by: Arturo Amor <[email protected]> | https://github.com/scikit-learn/scikit-learn.git | def test_learning_curve_display_plot_kwargs(pyplot, data):
    X, y = data
    estimator = DecisionTreeClassifier(random_state=0)
    train_sizes = [0.3, 0.6, 0.9]
    std_display_style = "fill_between"
    line_kw = {"color": "red"}
    fill_between_kw = {"color": "red", "alpha": 1.0}
    display = LearningCurveDisplay.from_estimator(
        estimator,
        X,
        y,
        train_sizes=train_sizes,
        std_display_style=std_display_style,
        line_kw=line_kw,
        fill_between_kw=fill_between_kw,
    )
    assert display.lines_[0].get_color() == "red"
    assert_allclose(
        display.fill_between_[0].get_facecolor(),
        [[1.0, 0.0, 0.0, 1.0]],  # trust me, it's red
    )
    std_display_style = "errorbar"
    errorbar_kw = {"color": "red"}
    display = LearningCurveDisplay.from_estimator(
        estimator,
        X,
        y,
        train_sizes=train_sizes,
        std_display_style=std_display_style,
        errorbar_kw=errorbar_kw,
    )
    assert display.errorbar_[0].lines[0].get_color() == "red"
| 188 | test_plot.py | Python | sklearn/model_selection/tests/test_plot.py | 758fe0d9c72ba343097003e7992c9239e58bfc63 | scikit-learn | 1 |
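A caller's-eye sketch of the API the test above exercises; it assumes a scikit-learn version that ships `LearningCurveDisplay` (1.2 or later, per this record's feature commit) and uses a made-up output filename.

```python
# Sketch of LearningCurveDisplay usage; run headless for test-like environments.
import matplotlib
matplotlib.use("Agg")

from sklearn.datasets import load_iris
from sklearn.model_selection import LearningCurveDisplay
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
display = LearningCurveDisplay.from_estimator(
    DecisionTreeClassifier(random_state=0),
    X,
    y,
    train_sizes=[0.3, 0.6, 0.9],
    std_display_style="fill_between",
    line_kw={"color": "red"},
    fill_between_kw={"color": "red", "alpha": 1.0},
)
display.figure_.savefig("learning_curve.png")  # filename is illustrative
```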
|
132,079 | 34 | 12 | 18 | 203 | 16 | 0 | 39 | 121 | trial | [CI] Format Python code with Black (#21975)
See #21316 and #21311 for the motivation behind these changes. | https://github.com/ray-project/ray.git | def trial(request):
    job_id = request.GET.get("job_id")
    trial_id = request.GET.get("trial_id")
    recent_trials = TrialRecord.objects.filter(job_id=job_id).order_by("-start_time")
    recent_results = ResultRecord.objects.filter(trial_id=trial_id).order_by("-date")[
        0:2000
    ]
    current_trial = TrialRecord.objects.filter(trial_id=trial_id).order_by(
        "-start_time"
    )[0]
    context = {
        "job_id": job_id,
        "trial_id": trial_id,
        "current_trial": current_trial,
        "recent_results": recent_results,
        "recent_trials": recent_trials,
    }
    return render(request, "trial.html", context)
| 118 | view.py | Python | python/ray/tune/automlboard/frontend/view.py | 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | ray | 1 |
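The `trial` function above is a plain Django function view; wiring it into a URLconf would look roughly like the sketch below. The module import path and the URL name are assumptions — the automlboard routing itself is not part of this record.

```python
# Hypothetical urls.py entry for the view above; names and paths are illustrative only.
from django.urls import path

from . import view  # assumed module containing trial()

urlpatterns = [
    # e.g. GET /trial?job_id=<job>&trial_id=<trial>
    path("trial", view.trial, name="trial"),
]
```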
|
256,613 | 16 | 10 | 4 | 78 | 12 | 1 | 18 | 29 | delete_feedback | Add `DELETE /feedback` for testing and make the label's id generate server-side (#2159)
* Add DELETE /feedback for testing and make the ID generate server-side
* Make sure to delete only user generated labels
* Reduce fixture scope, was too broad
* Make test a bit more generic
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> | https://github.com/deepset-ai/haystack.git | def delete_feedback():
all_labels = DOCUMENT_STORE.get_all_labels()
user_label_ids = [label.id for label in all_labels if label.origin == "user-feedback"]
DOCUMENT_STORE.delete_labels(ids=user_label_ids)
@router.post("/eval-feedback") | @router.post("/eval-feedback") | 37 | feedback.py | Python | rest_api/controller/feedback.py | be8f50c9e3de4e264b3f345f5f4b9c9ec518ed08 | haystack | 3 |
118,728 | 15 | 11 | 5 | 86 | 10 | 0 | 19 | 62 | bar_chart | Replace static apps with live Cloud apps (#4317)
Co-authored-by: kajarenc <[email protected]> | https://github.com/streamlit/streamlit.git | def bar_chart(self, data=None, width=0, height=0, use_container_width=True):
    if _use_arrow():
        return self.dg._arrow_bar_chart(data, width, height, use_container_width)
    else:
        return self.dg._legacy_bar_chart(data, width, height, use_container_width)
| 59 | dataframe_selector.py | Python | lib/streamlit/elements/dataframe_selector.py | 72703b38029f9358a0ec7ca5ed875a6b438ece19 | streamlit | 2 |
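From the caller's side the Arrow/legacy dispatch above is invisible; user code just calls `st.bar_chart`, roughly as below. The DataFrame columns are made up for illustration.

```python
# Sketch of typical usage; save as app.py and run with `streamlit run app.py`.
import pandas as pd
import streamlit as st

df = pd.DataFrame({"apples": [3, 1, 4], "bananas": [1, 5, 9]})  # illustrative data
st.bar_chart(df, use_container_width=True)
```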
|
189,797 | 2 | 6 | 4 | 13 | 2 | 0 | 2 | 5 | test_animate_with_changed_custom_attribute | Improved handling of attributes when using the ``.animate`` syntax (#2665)
* apply all methods to original mobject after finishing _MethodBuilder animation
* added test to check whether custom attributes are changed
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> | https://github.com/ManimCommunity/manim.git | def test_animate_with_changed_custom_attribute(using_temp_config):
| 21 | test_play_logic.py | Python | tests/test_scene_rendering/test_play_logic.py | b6311098df07c87f3c3991a3aa95847721c202d3 | manim | 1 |
|
176,348 | 36 | 11 | 8 | 128 | 17 | 0 | 42 | 92 | _lg_directed | MAINT: Remove unnecessary helper functions, use inbuilt methods for line graph generator (#5327)
* MAINT: Remove unnecessary helper functions, use inbuilt methods
* Use multigraph key to create node, add tests for multi(di)graphs | https://github.com/networkx/networkx.git | def _lg_directed(G, create_using=None):
    L = nx.empty_graph(0, create_using, default=G.__class__)
    # Create a graph specific edge function.
    get_edges = partial(G.edges, keys=True) if G.is_multigraph() else G.edges
    for from_node in get_edges():
        # from_node is: (u,v) or (u,v,key)
        L.add_node(from_node)
        for to_node in get_edges(from_node[1]):
            L.add_edge(from_node, to_node)
    return L
| 82 | line.py | Python | networkx/generators/line.py | e308b80f17264b89acf8defe185c71c6656d5105 | networkx | 4 |
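`_lg_directed` is the directed-case helper behind the line-graph generator; the usual entry point is the public `nx.line_graph`, sketched below on a tiny made-up digraph. Each node of the line graph is an edge of the original graph.

```python
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 3)])
L = nx.line_graph(G)  # public wrapper; directed inputs end up in the helper above

print(sorted(L.nodes()))  # [(1, 2), (2, 3)]
print(list(L.edges()))    # [((1, 2), (2, 3))] -- edge (1,2) feeds into edge (2,3)
```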
|
296,150 | 13 | 10 | 5 | 49 | 6 | 0 | 14 | 46 | state | Improve typing of deCONZ alarm control panel (#69680)
* Improve typing of deCONZ alarm control panel
* Fix review comments | https://github.com/home-assistant/core.git | def state(self) -> str | None:
    if self._device.panel in DECONZ_TO_ALARM_STATE:
        return DECONZ_TO_ALARM_STATE[self._device.panel]
    return None
| 30 | alarm_control_panel.py | Python | homeassistant/components/deconz/alarm_control_panel.py | 81a55703bfa9d285cd4c9b38c337dc08f3a6bc4b | core | 2 |
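The property above is a straight dictionary lookup with a `None` fallback. The same pattern in isolation, with placeholder values rather than the real deCONZ/Home Assistant state constants:

```python
# Placeholder mapping; the real integration maps deCONZ panel states to HA alarm states.
DECONZ_TO_ALARM_STATE = {"armed_away": "armed_away", "disarmed": "disarmed"}


def alarm_state(panel: str | None) -> str | None:
    # Equivalent shorthand: DECONZ_TO_ALARM_STATE.get(panel)
    if panel in DECONZ_TO_ALARM_STATE:
        return DECONZ_TO_ALARM_STATE[panel]
    return None


print(alarm_state("disarmed"))  # disarmed
print(alarm_state("unknown"))   # None
```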
|
287,765 | 6 | 7 | 3 | 29 | 5 | 0 | 6 | 20 | supported_channels | Clean up Speech-to-text integration and add tests (#79012) | https://github.com/home-assistant/core.git | def supported_channels(self) -> list[AudioChannels]:
return [AudioChannels.CHANNEL_MONO]
| 17 | test_init.py | Python | tests/components/stt/test_init.py | 57746642349a3ca62959de4447a4eb5963a84ae1 | core | 1 |
|
181,631 | 20 | 10 | 8 | 136 | 15 | 0 | 26 | 50 | test_FeatureSetSelector_5 | Revert "Deployed 7ccda9a with MkDocs version: 1.3.0"
This reverts commit bd9629c40e01241766197119b581a99409b07068. | https://github.com/EpistasisLab/tpot.git | def test_FeatureSetSelector_5():
ds = FeatureSetSelector(subset_list="tests/subset_test.csv", sel_subset=0)
ds.fit(test_X, y=None)
transformed_X = ds.transform(test_X)
assert transformed_X.shape[0] == test_X.shape[0]
assert transformed_X.shape[1] != test_X.shape[1]
assert transformed_X.shape[1] == 5
assert np.array_equal(transformed_X, test_X[ds.feat_list].values)
| 88 | feature_set_selector_tests.py | Python | tests/feature_set_selector_tests.py | 388616b6247ca4ea8de4e2f340d6206aee523541 | tpot | 1 |
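A hedged sketch of the `FeatureSetSelector` API the test above exercises. The CSV paths mirror the test fixtures and are assumptions here, as is the exact import path within tpot's builtins.

```python
# Sketch of FeatureSetSelector usage; file paths and fixture layout are assumptions.
import pandas as pd
from tpot.builtins import FeatureSetSelector

X = pd.read_csv("tests/tests.csv")  # feature matrix the tpot tests load (path assumed)
selector = FeatureSetSelector(subset_list="tests/subset_test.csv", sel_subset=0)
selector.fit(X, y=None)
X_subset = selector.transform(X)

print(selector.feat_list)  # column names kept by subset 0
print(X_subset.shape[1])   # 5 for this fixture, per the assertions above
```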
|
68,837 | 31 | 18 | 34 | 157 | 19 | 0 | 39 | 25 | get_mode_of_payments | refactor: DB independent quoting and truthy/falsy values (#31358)
* refactor: DB independent quoting and truthy/falsy values
* style: reformat to black spec
* fix: ifnull -> coalesce
* fix: coalesce -> Coalesce
* fix: revert pypika comparison
* refactor: convert queries to QB
* fix: incorrect value types for query
`=` query makes no sense with list of values
* fix: remove warehouse docstatus condition
* fix: keep using base rate as rate
Co-authored-by: Ankush Menat <[email protected]> | https://github.com/frappe/erpnext.git | def get_mode_of_payments(filters):
    mode_of_payments = {}
    invoice_list = get_invoices(filters)
    invoice_list_names = ",".join("'" + invoice["name"] + "'" for invoice in invoice_list)
    if invoice_list:
        inv_mop = frappe.db.sql(
            .format(
                invoice_list_names=invoice_list_names
            ),
            as_dict=1,
        )
        for d in inv_mop:
            mode_of_payments.setdefault(d["owner"] + cstr(d["posting_date"]), []).append(d.mode_of_payment)
    return mode_of_payments
| 93 | sales_payment_summary.py | Python | erpnext/accounts/report/sales_payment_summary/sales_payment_summary.py | 74a782d81d8f8c4a4d9214a9c06377e5e6e464dd | erpnext | 4 |
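The SQL string itself is elided in this record (the bare `.format(` call), but the grouping step after it is self-contained: rows are keyed by owner plus posting date and accumulated with `setdefault`. The same pattern with made-up rows standing in for the query result:

```python
# Illustration of the setdefault grouping used above; rows stand in for the SQL result.
rows = [
    {"owner": "user-a", "posting_date": "2022-01-01", "mode_of_payment": "Cash"},
    {"owner": "user-a", "posting_date": "2022-01-01", "mode_of_payment": "Card"},
    {"owner": "user-b", "posting_date": "2022-01-02", "mode_of_payment": "Cash"},
]

mode_of_payments = {}
for d in rows:
    # Key by owner + posting date, exactly like the report code above.
    mode_of_payments.setdefault(d["owner"] + str(d["posting_date"]), []).append(d["mode_of_payment"])

print(mode_of_payments)
# {'user-a2022-01-01': ['Cash', 'Card'], 'user-b2022-01-02': ['Cash']}
```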
|
111,232 | 28 | 12 | 16 | 165 | 14 | 0 | 33 | 100 | to_bytes | Fix entity linker batching (#9669)
* Partial fix of entity linker batching
* Add import
* Better name
* Add `use_gold_ents` option, docs
* Change to v2, create stub v1, update docs etc.
* Fix error type
Honestly no idea what the right type to use here is.
ConfigValidationError seems wrong. Maybe a NotImplementedError?
* Make mypy happy
* Add hacky fix for init issue
* Add legacy pipeline entity linker
* Fix references to class name
* Add __init__.py for legacy
* Attempted fix for loss issue
* Remove placeholder V1
* formatting
* slightly more interesting train data
* Handle batches with no usable examples
This adds a test for batches that have docs but not entities, and a
check in the component that detects such cases and skips the update step
as thought the batch were empty.
* Remove todo about data verification
Check for empty data was moved further up so this should be OK now - the
case in question shouldn't be possible.
* Fix gradient calculation
The model doesn't know which entities are not in the kb, so it generates
embeddings for the context of all of them.
However, the loss does know which entities aren't in the kb, and it
ignores them, as there's no sensible gradient.
This has the issue that the gradient will not be calculated for some of
the input embeddings, which causes a dimension mismatch in backprop.
That should have caused a clear error, but with numpyops it was causing
nans to happen, which is another problem that should be addressed
separately.
This commit changes the loss to give a zero gradient for entities not in
the kb.
* add failing test for v1 EL legacy architecture
* Add nasty but simple working check for legacy arch
* Clarify why init hack works the way it does
* Clarify use_gold_ents use case
* Fix use gold ents related handling
* Add tests for no gold ents and fix other tests
* Use aligned ents function (not working)
This doesn't actually work because the "aligned" ents are gold-only. But
if I have a different function that returns the intersection, *then*
this will work as desired.
* Use proper matching ent check
This changes the process when gold ents are not used so that the
intersection of ents in the pred and gold is used.
* Move get_matching_ents to Example
* Use model attribute to check for legacy arch
* Rename flag
* bump spacy-legacy to lower 3.0.9
Co-authored-by: svlandeg <[email protected]> | https://github.com/explosion/spaCy.git | def to_bytes(self, *, exclude=tuple()):
    self._validate_serialization_attrs()
    serialize = {}
    if hasattr(self, "cfg") and self.cfg is not None:
        serialize["cfg"] = lambda: srsly.json_dumps(self.cfg)
    serialize["vocab"] = lambda: self.vocab.to_bytes(exclude=exclude)
    serialize["kb"] = self.kb.to_bytes
    serialize["model"] = self.model.to_bytes
    return util.to_bytes(serialize, exclude)
| 99 | entity_linker.py | Python | spacy/pipeline/legacy/entity_linker.py | 91acc3ea75d219ad07ed2b106e7b8bdcb01516dd | spaCy | 3 |
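The method above builds a name-to-callable map and hands it to `util.to_bytes`. A generic illustration of that serialize-dict pattern, written without spaCy — the JSON packing below is an assumption for demonstration, not spaCy's actual implementation:

```python
# Generic serialize-dict pattern; not spaCy's real util.to_bytes.
import json


def to_bytes(getters, exclude=()):
    # Call each getter that is not excluded and pack the results into one blob.
    payload = {key: getter().hex() for key, getter in getters.items() if key not in exclude}
    return json.dumps(payload).encode("utf-8")


serialize = {
    "cfg": lambda: json.dumps({"labels": ["PERSON"]}).encode("utf-8"),
    "model": lambda: b"fake-model-weights",
}
blob = to_bytes(serialize, exclude=("vocab",))
print(len(blob))  # a single bytes payload combining the cfg and model sections
```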