Dataset schema (column name, dtype, and the min/max shown by the viewer):

| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 20 | 338k |
| vocab_size | int64 | 2 | 671 |
| ast_levels | int64 | 4 | 32 |
| nloc | int64 | 1 | 451 |
| n_ast_nodes | int64 | 12 | 5.6k |
| n_identifiers | int64 | 1 | 186 |
| n_ast_errors | int64 | 0 | 10 |
| n_words | int64 | 2 | 2.17k |
| n_whitespaces | int64 | 2 | 13.8k |
| fun_name | stringlengths | 2 | 73 |
| commit_message | stringlengths | 51 | 15.3k |
| url | stringlengths | 31 | 59 |
| code | stringlengths | 51 | 31k |
| ast_errors | stringlengths | 0 | 1.46k |
| token_counts | int64 | 6 | 3.32k |
| file_name | stringlengths | 5 | 56 |
| language | stringclasses | 1 value | — |
| path | stringlengths | 7 | 134 |
| commit_id | stringlengths | 40 | 40 |
| repo | stringlengths | 3 | 28 |
| complexity | int64 | 1 | 153 |
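To work with the records programmatically, a minimal loading sketch, assuming the table above is the viewer preview of a Hugging Face dataset; the dataset id `user/python-functions` is a hypothetical placeholder, not the real one:

```python
# A minimal sketch, assuming this dump corresponds to a Hugging Face dataset;
# "user/python-functions" is a hypothetical dataset id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/python-functions", split="train")
row = ds[0]
print(row["fun_name"], row["repo"], row["complexity"])
print(row["code"])
```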
---
fun_name: _get_storage_by_url · id: 177,558 · repo: label-studio · language: Python
path: label_studio/tasks/models.py · file_name: models.py · commit_id: 6293c3226e3713bdae678603d6c1300e09c41448
url: https://github.com/heartexlabs/label-studio.git
metrics: vocab_size 34, ast_levels 10, nloc 5, n_ast_nodes 61, n_identifiers 11, n_ast_errors 0, n_words 37, n_whitespaces 106, token_counts 38, complexity 4
commit_message: fix: DEV-1476: Resolving performance for project storages (#1910) * Fix: DEV-1476: Resolving performance for project storages * Rewrite cache * Remove cache completely
code:
```python
def _get_storage_by_url(self, url, storage_objects):
    from io_storages.models import get_storage_classes

    for storage_object in storage_objects:
        # check url is string because task can have int, float, dict, list
        # and 'can_resolve_url' will fail
        if isinstance(url, str) and storage_object.can_resolve_url(url):
            return storage_object
```
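The per-function metrics in each record (nloc, token_counts, complexity) line up with the fields reported by the lizard code analyzer; that lizard actually produced them is an assumption, not stated in the dump. A sketch recomputing them for the record above:

```python
# Hedged sketch: recompute nloc / token count / cyclomatic complexity with the
# lizard analyzer. That these columns came from lizard is an assumption.
import lizard

source = '''
def _get_storage_by_url(self, url, storage_objects):
    from io_storages.models import get_storage_classes
    for storage_object in storage_objects:
        if isinstance(url, str) and storage_object.can_resolve_url(url):
            return storage_object
'''
info = lizard.analyze_file.analyze_source_code("models.py", source)
fn = info.function_list[0]
# The record above reports nloc 5, token_counts 38, complexity 4 for this function.
print(fn.name, fn.nloc, fn.token_count, fn.cyclomatic_complexity)
```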
---
fun_name: _on_leave · id: 108,942 · repo: matplotlib · language: Python
path: lib/matplotlib/backends/backend_wx.py · file_name: backend_wx.py · commit_id: 4e21912d2938b0e8812c4d1f7cd902c080062ff2
url: https://github.com/matplotlib/matplotlib.git
metrics: vocab_size 8, ast_levels 12, nloc 5, n_ast_nodes 59, n_identifiers 8, n_ast_errors 0, n_words 8, n_whitespaces 71, token_counts 35, complexity 1
commit_message: Make it easier to improve UI event metadata. Currently, UI events (MouseEvent, KeyEvent, etc.) are generated by letting the GUI-specific backends massage the native event objects into a list of args/kwargs and then call `FigureCanvasBase.motion_notify_event`/`.key_press_event`/etc. This makes it a bit tricky to improve the metadata on the events, because one needs to change the signature on both the `FigureCanvasBase` method and the event class. Moreover, the `motion_notify_event`/etc. methods are directly bound as event handlers in the gtk3 and tk backends, and thus have incompatible signatures there. Instead, the native GUI handlers can directly construct the relevant event objects and trigger the events themselves; a new `Event._process` helper method makes this even shorter (and allows to keep factoring some common functionality e.g. for tracking the last pressed button or key). As an example, this PR also updates figure_leave_event to always correctly set the event location based on the *current* cursor position, instead of the last triggered location event (which may be outdated); this can now easily be done on a backend-by-backend basis, instead of coordinating the change with FigureCanvasBase.figure_leave_event. This also exposed another (minor) issue, in that resize events often trigger *two* calls to draw_idle -- one in the GUI-specific handler, and one in FigureCanvasBase.draw_idle (now moved to ResizeEvent._process, but should perhaps instead be a callback autoconnected to "resize_event") -- could probably be fixed later.
code:
```python
def _on_leave(self, event):
    event.Skip()
    LocationEvent("figure_leave_event", self, *self._mpl_coords(event),
                  guiEvent=event)._process()
```
---
fun_name: require_cpu · id: 337,616 · repo: accelerate · language: Python
path: src/accelerate/test_utils/testing.py · file_name: testing.py · commit_id: 3b51d6e9ad8a3916e519eb27ed85ec70f6f862fc
url: https://github.com/huggingface/accelerate.git
metrics: vocab_size 10, ast_levels 12, nloc 2, n_ast_nodes 45, n_identifiers 7, n_ast_errors 0, n_words 10, n_whitespaces 16, token_counts 25, complexity 1
commit_message: Fix debug_launcher issues (#413) * change to require_cpu only
code:
```python
def require_cpu(test_case):
    return unittest.skipUnless(not torch.cuda.is_available(), "test requires only a CPU")(test_case)
```
---
fun_name: set_color_by_t2g · id: 189,393 · repo: manim · language: Python
path: manim/mobject/svg/text_mobject.py · file_name: text_mobject.py · commit_id: 540dc70d2fd7a2f759a6da158303ef81a1ae53f8
url: https://github.com/ManimCommunity/manim.git
metrics: vocab_size 18, ast_levels 13, nloc 5, n_ast_nodes 98, n_identifiers 13, n_ast_errors 0, n_words 22, n_whitespaces 69, token_counts 63, complexity 4
commit_message: Update `Text` to use new ManimPango color setting (#2341) * Find indexes in stripped text, not original text * Add regression test * Only run the test in linux environement * Rewrite text2settings in Text to set text color via pango * Make gradient in Text use pango coloring * Bump manimpango to newest version * Update test to use new frames_comparison * Don't remove svg file on exception * Bump manimpango * Fix pre-commit errors * Fix index bug * Deprecate no longer used functions set_color_by_t2x * Remove old commented out code * Update poetry.lock
code:
```python
def set_color_by_t2g(self, t2g=None):
    t2g = t2g if t2g else self.t2g
    for word, gradient in list(t2g.items()):
        for start, end in self.find_indexes(word, self.text):
            self.chars[start:end].set_color_by_gradient(*gradient)
```
---
fun_name: match_files · id: 130,250 · repo: ray · language: Python
path: python/ray/_private/thirdparty/pathspec/pathspec.py · file_name: pathspec.py · commit_id: 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
url: https://github.com/ray-project/ray.git
metrics: vocab_size 25, ast_levels 12, nloc 7, n_ast_nodes 109, n_identifiers 14, n_ast_errors 0, n_words 27, n_whitespaces 84, token_counts 68, complexity 3
commit_message: [CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
code:
```python
def match_files(self, files, separators=None):
    if not util._is_iterable(files):
        raise TypeError("files:{!r} is not an iterable.".format(files))

    file_map = util.normalize_files(files, separators=separators)
    matched_files = util.match_files(self.patterns, iterkeys(file_map))
    for path in matched_files:
        yield file_map[path]
```
---
fun_name: check_related · id: 81,824 · repo: awx · language: Python
path: awx/main/access.py · file_name: access.py · commit_id: 34e8087aeef0de19642e7dd9cd076adcdf5fbe9c
url: https://github.com/ansible/awx.git
metrics: vocab_size 64, ast_levels 12, nloc 22, n_ast_nodes 197, n_identifiers 15, n_ast_errors 0, n_words 102, n_whitespaces 278, token_counts 161, complexity 20
commit_message: DRY edits to access classes for new prompts Remove if-not-data conditional from WFJTnode.can_change these are cannonical for can_add, but this looks like a bug Change JTaccess.can_unattach to call same method in super() previously called can_attach, which is problematic Better consolidate launch config m2m related checks Test and fix pre-existing WFJT node RBAC bug recognize not-provided instance group list on launch, avoiding bug where it fell back to default fix bug where timeout field was saved on WFJT nodes after creating approval node remove labels from schedule serializer summary_fields remove unnecessary prefetch of credentials from WFJT node queryset
code:
```python
def check_related(self, field, Model, data, role_field='admin_role', obj=None, mandatory=False):
    new = None
    changed = True
    if data and 'reference_obj' in data:
        # Use reference object's related fields, if given
        new = getattr(data['reference_obj'], field)
    elif data and field in data:
        new = get_object_from_data(field, Model, data, obj=obj)
    else:
        changed = False

    # Obtain existing related resource
    current = None
    if obj and (changed or mandatory):
        current = getattr(obj, field)

    if obj and new == current:
        # Resource not changed, like a PUT request
        changed = False

    if (not new) and (not obj) and mandatory:
        # Restrict ability to create resource without required field
        return self.user.is_superuser
```
---
fun_name: test_unknown_invalid · id: 247,387 · repo: synapse · language: Python
path: tests/rest/media/v1/test_html_preview.py · file_name: test_html_preview.py · commit_id: 7e91107be1a4287873266e588a3c5b415279f4c8
url: https://github.com/matrix-org/synapse.git
metrics: vocab_size 14, ast_levels 9, nloc 12, n_ast_nodes 59, n_identifiers 6, n_ast_errors 0, n_words 14, n_whitespaces 64, token_counts 33, complexity 1
commit_message: Add type hints to `tests/rest` (#12146) * Add type hints to `tests/rest` * newsfile * change import from `SigningKey`
code:
```python
def test_unknown_invalid(self) -> None:
    encodings = _get_html_media_encodings(
        b,
        'text/html; charset="invalid"',
    )
    self.assertEqual(list(encodings), ["utf-8", "cp1252"])
```
---
fun_name: creates_processes_externally · id: 241,621 · repo: lightning · language: Python
path: pytorch_lightning/plugins/environments/lsf_environment.py · file_name: lsf_environment.py · commit_id: dbf1acd5a553ffc1546734be164cc89cef2b741d
url: https://github.com/Lightning-AI/lightning.git
metrics: vocab_size 6, ast_levels 6, nloc 3, n_ast_nodes 19, n_identifiers 3, n_ast_errors 0, n_words 6, n_whitespaces 20, token_counts 10, complexity 1
commit_message: Modify LSFEnvironment to use more reliable environment variable (#10825) Co-authored-by: thomas chaton <[email protected]> Co-authored-by: Carlos Mocholí <[email protected]> Co-authored-by: Adrian Wälchli <[email protected]> Co-authored-by: Jirka Borovec <[email protected]>
code:
```python
def creates_processes_externally(self) -> bool:
    return True
```
---
fun_name: test_timeslices_partially_overlapping_experiences · id: 145,828 · repo: ray · language: Python
path: rllib/policy/tests/test_multi_agent_batch.py · file_name: test_multi_agent_batch.py · commit_id: c0ade5f0b7cfc9aeba46cde7af3b36068a6420df
url: https://github.com/ray-project/ray.git
metrics: vocab_size 2, ast_levels 6, nloc 60, n_ast_nodes 13, n_identifiers 2, n_ast_errors 0, n_words 2, n_whitespaces 9, token_counts 254, complexity 3
commit_message: [RLlib] Issue 22625: `MultiAgentBatch.timeslices()` does not behave as expected. (#22657)
code (truncated in the dump):
```python
def test_timeslices_partially_overlapping_experiences(self):
```
---
fun_name: test_http_client_aborts · id: 250,614 · repo: mitmproxy · language: Python
path: test/mitmproxy/proxy/layers/http/test_http.py · file_name: test_http.py · commit_id: 372a632161dee642d81542069507826e34466ba1
url: https://github.com/mitmproxy/mitmproxy.git
metrics: vocab_size 12, ast_levels 11, nloc 43, n_ast_nodes 63, n_identifiers 15, n_ast_errors 0, n_words 14, n_whitespaces 26, token_counts 191, complexity 3
commit_message: reintroduce `Flow.live` We previously relied on the state of `Flow.reply` to check if a flow can be killed, but this doesn't work anymore with `Flow.reply` being removed. Instead, we now reintroduce the `Flow.live` attribute, which signals if we are on a live connection. Killing still is not ideal (see comment in `Flow.killable`), but this paves the way.
code (truncated in the dump):
```python
def test_http_client_aborts(tctx, stream):
    server = Placeholder(Server)
    flow = Placeholder(HTTPFlow)
    playbook = Playbook(http.HttpLayer(tctx, HTTPMode.regular), hooks=True)
```
---
fun_name: get_yaxis · id: 109,257 · repo: matplotlib · language: Python
path: lib/matplotlib/axes/_base.py · file_name: _base.py · commit_id: 5af97515b3823b2efa1961253a11e2d77df88637
url: https://github.com/matplotlib/matplotlib.git
metrics: vocab_size 18, ast_levels 7, nloc 2, n_ast_nodes 84, n_identifiers 8, n_ast_errors 0, n_words 25, n_whitespaces 54, token_counts 10, complexity 1
commit_message: Add discouraged admonitions The [*Discouraged*] prefix in the summary line is added in analogy to the [*Deprecated*] prefix we add automatically. We do this so that these "labels" are prominently visible also in summary overviews of the functions in the docs. Since we rarely discourage whole functions, for now I just do this manually.
code:
```python
def get_yaxis(self):
    return self.yaxis

get_xgridlines = _axis_method_wrapper("xaxis", "get_gridlines")
get_xticklines = _axis_method_wrapper("xaxis", "get_ticklines")
get_ygridlines = _axis_method_wrapper("yaxis", "get_gridlines")
get_yticklines = _axis_method_wrapper("yaxis", "get_ticklines")

# Adding and tracking artists
```
---
fun_name: mixin_scalable_deployment_parser · id: 13,348 · repo: jina · language: Python
path: jina/parsers/orchestrate/base.py · file_name: base.py · commit_id: bd8003508da0b35713361484f5801ebc818bd0c3
url: https://github.com/jina-ai/jina.git
metrics: vocab_size 52, ast_levels 10, nloc 37, n_ast_nodes 162, n_identifiers 15, n_ast_errors 0, n_words 68, n_whitespaces 217, token_counts 97, complexity 1
commit_message: refactor: remove unnecessary parser args (#5328) * refactor: refactor deployment mixin and remove polling and shards for gateway * chore: rename executor to pod and move native and array type to worker args * refactor: make exit-on-exceptions just a worker arg * style: fix overload and cli autocomplete * chore: apply suggestion * chore: move native parameter to deployment group * fix: fix pod init * style: fix overload and cli autocomplete * fix: fix shards and replicas in deployment * chore: disable gpu and volumes for gateway * style: fix overload and cli autocomplete * fix: volume and gpus are optional for container pods Co-authored-by: Jina Dev Bot <[email protected]>
code (the `--polling` help string is elided in the dump):
```python
def mixin_scalable_deployment_parser(parser):
    gp = mixin_base_deployment_parser(parser, title='Scalable Deployment')

    gp.add_argument(
        '--polling',
        type=str,
        default=PollingType.ANY.name,
        help=,
    )

    gp.add_argument(
        '--shards',
        type=int,
        default=1,
        help='The number of shards in the deployment running at the same time. For more details check '
        'https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies',
    )

    gp.add_argument(
        '--replicas',
        type=int,
        default=1,
        help='The number of replicas in the deployment',
    )

    gp.add_argument(
        '--native',
        action='store_true',
        default=False,
        help='If set, only native Executors is allowed, and the Executor is always run inside WorkerRuntime.',
    )
```
---
fun_name: make_train_function · id: 271,620 · repo: keras · language: Python
path: keras/engine/training.py · file_name: training.py · commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf
url: https://github.com/keras-team/keras.git
metrics: vocab_size 11, ast_levels 8, nloc 39, n_ast_nodes 41, n_identifiers 4, n_ast_errors 0, n_words 13, n_whitespaces 38, token_counts 204, complexity 10
commit_message: Reformatting the codebase with black. PiperOrigin-RevId: 450093126
code (truncated in the dump):
```python
def make_train_function(self, force=False):
    if self.train_function is not None and not force:
        return self.train_function
```
---
fun_name: dtype · id: 274,931 · repo: keras · language: Python
path: keras/mixed_precision/autocast_variable.py · file_name: autocast_variable.py · commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf
url: https://github.com/keras-team/keras.git
metrics: vocab_size 4, ast_levels 7, nloc 2, n_ast_nodes 22, n_identifiers 3, n_ast_errors 0, n_words 4, n_whitespaces 18, token_counts 12, complexity 1
commit_message: Reformatting the codebase with black. PiperOrigin-RevId: 450093126
code:
```python
def dtype(self):
    return self._variable.dtype
```
---
fun_name: __new__ · id: 171,304 · repo: pandas · language: Python
path: pandas/core/dtypes/dtypes.py · file_name: dtypes.py · commit_id: c7010a7adec1c47a4642fa068544699fc8e1ea6a
url: https://github.com/pandas-dev/pandas.git
metrics: vocab_size 37, ast_levels 12, nloc 17, n_ast_nodes 169, n_identifiers 15, n_ast_errors 0, n_words 59, n_whitespaces 244, token_counts 106, complexity 5
commit_message: STYLE enable pylint's redefined-outer-name (#49671) * fix warning for pandas/core/dtypes/cast.py, pandas/core/dtypes/dtypes.py, pandas/core/indexes/base.py * fix warning for pandas/core/dtypes/cast.py, pandas/core/dtypes/dtypes.py, pandas/core/indexes/base.py * fix warning for pandas/core/dtypes/cast.py, pandas/core/dtypes/dtypes.py, pandas/core/indexes/base.py * fix warning for pandas/core/dtypes/cast.py, pandas/core/dtypes/dtypes.py, pandas/core/indexes/base.py Co-authored-by: bishwas jha <[email protected]>
code:
```python
def __new__(cls, freq=None):
    if isinstance(freq, PeriodDtype):
        return freq
    elif freq is None:
        # empty constructor for pickle compat
        # -10_000 corresponds to PeriodDtypeCode.UNDEFINED
        u = PeriodDtypeBase.__new__(cls, -10_000)
        u._freq = None
        return u

    if not isinstance(freq, BaseOffset):
        freq = cls._parse_dtype_strict(freq)

    try:
        return cls._cache_dtypes[freq.freqstr]
    except KeyError:
        dtype_code = freq._period_dtype_code
        u = PeriodDtypeBase.__new__(cls, dtype_code)
        u._freq = freq
        cls._cache_dtypes[freq.freqstr] = u
        return u
```
---
fun_name: _rewrite_warnings · id: 100,733 · repo: faceswap · language: Python
path: lib/logger.py · file_name: logger.py · commit_id: afec52309326304f4323029039e49bfcf928ef43
url: https://github.com/deepfakes/faceswap.git
metrics: vocab_size 55, ast_levels 11, nloc 9, n_ast_nodes 134, n_identifiers 7, n_ast_errors 0, n_words 79, n_whitespaces 231, token_counts 74, complexity 7
commit_message: Bugfixes: - Stats graph - Handle NaNs in data - logger - de-elevate matplotlib font messages
code:
```python
def _rewrite_warnings(cls, record):
    if record.levelno == 30 and record.funcName == "warn" and record.module == "ag_logging":
        # TF 2.3 in Conda is imported with the wrong gast(0.4 when 0.3.3 should be used). This
        # causes warnings in autograph. They don't appear to impact performance so de-elevate
        # warning to debug
        record.levelno = 10
        record.levelname = "DEBUG"

    if record.levelno == 30 and (record.funcName == "_tfmw_add_deprecation_warning"
                                 or record.module in ("deprecation", "deprecation_wrapper")):
        # Keras Deprecations.
        record.levelno = 10
        record.levelname = "DEBUG"

    return record
```
---
fun_name: get_loc · id: 163,113 · repo: pandas · language: Python
path: pandas/core/indexes/multi.py · file_name: multi.py · commit_id: 46ddb8ef882940fa3da58813e0b7a2df1061031e
url: https://github.com/pandas-dev/pandas.git
metrics: vocab_size 22, ast_levels 11, nloc 51, n_ast_nodes 53, n_identifiers 6, n_ast_errors 0, n_words 24, n_whitespaces 97, token_counts 324, complexity 15
commit_message: BUG: Index.get_loc always raise InvalidIndexError on listlike (#45181)
code (truncated in the dump):
```python
def get_loc(self, key, method=None):
    if method is not None:
        raise NotImplementedError(
            "only the default get_loc method is "
            "currently supported for MultiIndex"
        )

    self._check_indexing_error(key)
```
---
fun_name: bbox_rotate · id: 225,597 · repo: albumentations · language: Python
path: albumentations/augmentations/geometric/functional.py · file_name: functional.py · commit_id: a3a8fd99b564663e26a741c6d59013f2f213c799
url: https://github.com/albumentations-team/albumentations.git
metrics: vocab_size 77, ast_levels 15, nloc 22, n_ast_nodes 422, n_identifiers 32, n_ast_errors 1, n_words 145, n_whitespaces 242, token_counts 280, complexity 3
ast_errors: @angle_2pi_range
commit_message: Implement Ellipse Method For Bounding Box Rotation (#1203) * implement ellipse method * black formatting * fix serialization and update docs * apply reviews
code:
```python
def bbox_rotate(bbox, angle, method, rows, cols):
    x_min, y_min, x_max, y_max = bbox[:4]
    scale = cols / float(rows)
    if method == "largest_box":
        x = np.array([x_min, x_max, x_max, x_min]) - 0.5
        y = np.array([y_min, y_min, y_max, y_max]) - 0.5
    elif method == "ellipse":
        w = (x_max - x_min) / 2
        h = (y_max - y_min) / 2
        data = np.arange(0, 360, dtype=np.float32)
        x = w * np.sin(np.radians(data)) + (w + x_min - 0.5)
        y = h * np.cos(np.radians(data)) + (h + y_min - 0.5)
    else:
        raise ValueError(f"Method {method} is not a valid rotation method.")
    angle = np.deg2rad(angle)
    x_t = (np.cos(angle) * x * scale + np.sin(angle) * y) / scale
    y_t = -np.sin(angle) * x * scale + np.cos(angle) * y
    x_t = x_t + 0.5
    y_t = y_t + 0.5
    x_min, x_max = min(x_t), max(x_t)
    y_min, y_max = min(y_t), max(y_t)
    return x_min, y_min, x_max, y_max


@angle_2pi_range
```
---
fun_name: get_message · id: 68,215 · repo: erpnext · language: Python
path: erpnext/hr/report/monthly_attendance_sheet/monthly_attendance_sheet.py · file_name: monthly_attendance_sheet.py · commit_id: 865204a541651c284979a824576cdfcc4d789056
url: https://github.com/frappe/erpnext.git
metrics: vocab_size 26, ast_levels 11, nloc 12, n_ast_nodes 105, n_identifiers 9, n_ast_errors 0, n_words 32, n_whitespaces 24, token_counts 49, complexity 2
commit_message: feat: add colors for attendance status to lessen the cognitive load - legend with colors and full form for status abbreviations
code (the f-string body is elided in the dump):
```python
def get_message() -> str:
    message = ''
    colors = ['green', 'red', 'orange', 'green', '#318AD8', '', '']
    count = 0
    for status, abbr in status_map.items():
        message += f
        count += 1
    return message
```
---
fun_name: get_losses_for · id: 270,970 · repo: keras · language: Python
path: keras/engine/base_layer_v1.py · file_name: base_layer_v1.py · commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf
url: https://github.com/keras-team/keras.git
metrics: vocab_size 27, ast_levels 10, nloc 7, n_ast_nodes 117, n_identifiers 12, n_ast_errors 0, n_words 50, n_whitespaces 121, token_counts 75, complexity 8
commit_message: Reformatting the codebase with black. PiperOrigin-RevId: 450093126
code:
```python
def get_losses_for(self, inputs):
    if inputs is None:
        # Requesting unconditional losses.
        return [l for l in self.losses if l._unconditional_loss]

    # Requesting input-conditional losses.
    losses = [l for l in self.losses if not l._unconditional_loss]
    inputs = tf.nest.flatten(inputs)
    reachable = tf_utils.get_reachable_from_inputs(inputs, losses)
    return [l for l in losses if l in reachable]
```
---
fun_name: GetIcmpStatistics · id: 209,501 · repo: scapy · language: Python
path: scapy/arch/windows/structures.py · file_name: structures.py · commit_id: 08b1f9d67c8e716fd44036a027bdc90dcb9fcfdf
url: https://github.com/secdev/scapy.git
metrics: vocab_size 70, ast_levels 9, nloc 6, n_ast_nodes 132, n_identifiers 26, n_ast_errors 0, n_words 103, n_whitespaces 101, token_counts 27, complexity 1
commit_message: E275 - Missing whitespace after keyword (#3711) Co-authored-by: Alexander Aring <[email protected]> Co-authored-by: Anmol Sarma <[email protected]> Co-authored-by: antoine.torre <[email protected]> Co-authored-by: Antoine Vacher <[email protected]> Co-authored-by: Arnaud Ebalard <[email protected]> Co-authored-by: atlowl <[email protected]> Co-authored-by: Brian Bienvenu <[email protected]> Co-authored-by: Chris Packham <[email protected]> Co-authored-by: CQ <[email protected]> Co-authored-by: Daniel Collins <[email protected]> Co-authored-by: Federico Maggi <[email protected]> Co-authored-by: Florian Maury <[email protected]> Co-authored-by: _Frky <[email protected]> Co-authored-by: g-mahieux <[email protected]> Co-authored-by: gpotter2 <[email protected]> Co-authored-by: Guillaume Valadon <[email protected]> Co-authored-by: Hao Zheng <[email protected]> Co-authored-by: Haresh Khandelwal <[email protected]> Co-authored-by: Harri Hämäläinen <[email protected]> Co-authored-by: hecke <[email protected]> Co-authored-by: Jan Romann <[email protected]> Co-authored-by: Jan Sebechlebsky <[email protected]> Co-authored-by: jdiog0 <[email protected]> Co-authored-by: jockque <[email protected]> Co-authored-by: Julien Bedel <[email protected]> Co-authored-by: Keith Scott <[email protected]> Co-authored-by: Kfir Gollan <[email protected]> Co-authored-by: Lars Munch <[email protected]> Co-authored-by: ldp77 <[email protected]> Co-authored-by: Leonard Crestez <[email protected]> Co-authored-by: Marcel Patzlaff <[email protected]> Co-authored-by: Martijn Thé <[email protected]> Co-authored-by: Martine Lenders <[email protected]> Co-authored-by: Michael Farrell <[email protected]> Co-authored-by: Michał Mirosław <[email protected]> Co-authored-by: mkaliszan <[email protected]> Co-authored-by: mtury <[email protected]> Co-authored-by: Neale Ranns <[email protected]> Co-authored-by: Octavian Toader <[email protected]> Co-authored-by: Peter Eisenlohr <[email protected]> Co-authored-by: Phil <[email protected]> Co-authored-by: Pierre Lalet <[email protected]> Co-authored-by: Pierre Lorinquer <[email protected]> Co-authored-by: piersoh <[email protected]> Co-authored-by: plorinquer <[email protected]> Co-authored-by: pvinci <[email protected]> Co-authored-by: Rahul Jadhav <[email protected]> Co-authored-by: Robin Jarry <[email protected]> Co-authored-by: romain-perez <[email protected]> Co-authored-by: rperez <rperez@debian> Co-authored-by: Sabrina Dubroca <[email protected]> Co-authored-by: Sebastian Baar <[email protected]> Co-authored-by: sebastien mainand <[email protected]> Co-authored-by: smehner1 <[email protected]> Co-authored-by: speakinghedge <[email protected]> Co-authored-by: Steven Van Acker <[email protected]> Co-authored-by: Thomas Faivre <[email protected]> Co-authored-by: Tran Tien Dat <[email protected]> Co-authored-by: Wael Mahlous <[email protected]> Co-authored-by: waeva <[email protected]>
code:
```python
def GetIcmpStatistics():
    statistics = MIB_ICMP()
    _GetIcmpStatistics(byref(statistics))
    results = _struct_to_dict(statistics)
    del statistics
    return results


##############################
##### Adapters Addresses #####
##############################

# Our GetAdaptersAddresses implementation is inspired by
# @sphaero 's gist: https://gist.github.com/sphaero/f9da6ebb9a7a6f679157
# published under a MPL 2.0 License (GPLv2 compatible)

# from iptypes.h
MAX_ADAPTER_ADDRESS_LENGTH = 8
MAX_DHCPV6_DUID_LENGTH = 130

GAA_FLAG_INCLUDE_PREFIX = 0x0010
GAA_FLAG_INCLUDE_ALL_INTERFACES = 0x0100

# for now, just use void * for pointers to unused structures
PIP_ADAPTER_WINS_SERVER_ADDRESS_LH = VOID
PIP_ADAPTER_GATEWAY_ADDRESS_LH = VOID
PIP_ADAPTER_DNS_SUFFIX = VOID

IF_OPER_STATUS = UINT
IF_LUID = UINT64

NET_IF_COMPARTMENT_ID = UINT32
GUID = BYTE * 16
NET_IF_NETWORK_GUID = GUID
NET_IF_CONNECTION_TYPE = UINT  # enum
TUNNEL_TYPE = UINT  # enum
```
---
fun_name: upnp_factory_mock · id: 292,459 · repo: core · language: Python
path: tests/components/dlna_dms/conftest.py · file_name: conftest.py · commit_id: b19bf9b147f4321e89d1f7f01e68337f2102f460
url: https://github.com/home-assistant/core.git
metrics: vocab_size 50, ast_levels 14, nloc 34, n_ast_nodes 241, n_identifiers 33, n_ast_errors 1, n_words 68, n_whitespaces 366, token_counts 143, complexity 1
ast_errors: @pytest.fixture
commit_message: Add dlna_dms integration to support DLNA Digital Media Servers (#66437)
code:
```python
def upnp_factory_mock() -> Iterable[Mock]:
    with patch(
        "homeassistant.components.dlna_dms.dms.UpnpFactory",
        autospec=True,
        spec_set=True,
    ) as upnp_factory:
        upnp_device = create_autospec(UpnpDevice, instance=True)
        upnp_device.name = MOCK_DEVICE_NAME
        upnp_device.udn = MOCK_DEVICE_UDN
        upnp_device.device_url = MOCK_DEVICE_LOCATION
        upnp_device.device_type = MOCK_DEVICE_TYPE
        upnp_device.available = True
        upnp_device.parent_device = None
        upnp_device.root_device = upnp_device
        upnp_device.all_devices = [upnp_device]
        upnp_device.services = {
            "urn:schemas-upnp-org:service:ContentDirectory:1": create_autospec(
                UpnpService,
                instance=True,
                service_type="urn:schemas-upnp-org:service:ContentDirectory:1",
                service_id="urn:upnp-org:serviceId:ContentDirectory",
            ),
            "urn:schemas-upnp-org:service:ConnectionManager:1": create_autospec(
                UpnpService,
                instance=True,
                service_type="urn:schemas-upnp-org:service:ConnectionManager:1",
                service_id="urn:upnp-org:serviceId:ConnectionManager",
            ),
        }
        seal(upnp_device)
        upnp_factory_instance = upnp_factory.return_value
        upnp_factory_instance.async_create_device.return_value = upnp_device
        yield upnp_factory_instance


@pytest.fixture
```
---
fun_name: forward · id: 211,430 · repo: PaddleDetection · language: Python
path: ppdet/modeling/losses/pose3d_loss.py · file_name: pose3d_loss.py · commit_id: d4e34fe165c09db65fd00113708be1b711ac957c
url: https://github.com/PaddlePaddle/PaddleDetection.git
metrics: vocab_size 27, ast_levels 9, nloc 9, n_ast_nodes 114, n_identifiers 16, n_ast_errors 0, n_words 36, n_whitespaces 126, token_counts 72, complexity 1
commit_message: pose3d metro modeling (#6612) * pose3d metro modeling * delete extra comments
code:
```python
def forward(self, pred3d, pred2d, inputs):
    gt_3d_joints = inputs['joints_3d']
    gt_2d_joints = inputs['joints_2d']
    has_3d_joints = inputs['has_3d_joints']
    has_2d_joints = inputs['has_2d_joints']

    loss_3d = mpjpe(pred3d, gt_3d_joints, has_3d_joints)
    loss_2d = keypoint_2d_loss(self.criterion_2dpose, pred2d, gt_2d_joints,
                               has_2d_joints)
    return self.weight_3d * loss_3d + self.weight_2d * loss_2d
```
---
fun_name: test_stable_diffusion_fp16 · id: 337,064 · repo: diffusers · language: Python
path: tests/test_pipelines.py · file_name: test_pipelines.py · commit_id: 92d70863663662669ee3c376909be1f876e00965
url: https://github.com/huggingface/diffusers.git
metrics: vocab_size 61, ast_levels 11, nloc 24, n_ast_nodes 261, n_identifiers 37, n_ast_errors 0, n_words 81, n_whitespaces 291, token_counts 165, complexity 1
commit_message: [img2img, inpainting] fix fp16 inference (#769) * handle dtype in vae and image2image pipeline * fix inpaint in fp16 * dtype should be handled in add_noise * style * address review comments * add simple fast tests to check fp16 * fix test name * put mask in fp16
code:
```python
def test_stable_diffusion_fp16(self):
    unet = self.dummy_cond_unet
    scheduler = PNDMScheduler(skip_prk_steps=True)
    vae = self.dummy_vae
    bert = self.dummy_text_encoder
    tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")

    # put models in fp16
    unet = unet.half()
    vae = vae.half()
    bert = bert.half()

    # make sure here that pndm scheduler skips prk
    sd_pipe = StableDiffusionPipeline(
        unet=unet,
        scheduler=scheduler,
        vae=vae,
        text_encoder=bert,
        tokenizer=tokenizer,
        safety_checker=self.dummy_safety_checker,
        feature_extractor=self.dummy_extractor,
    )
    sd_pipe = sd_pipe.to(torch_device)
    sd_pipe.set_progress_bar_config(disable=None)

    prompt = "A painting of a squirrel eating a burger"
    generator = torch.Generator(device=torch_device).manual_seed(0)
    image = sd_pipe([prompt], generator=generator, num_inference_steps=2, output_type="np").images

    assert image.shape == (1, 128, 128, 3)
```
---
fun_name: dce_rpc_endianess · id: 209,387 · repo: scapy · language: Python
path: scapy/contrib/dce_rpc.py · file_name: dce_rpc.py · commit_id: 9420c2229bf5330c2cc580f114f63f920a68db10
url: https://github.com/secdev/scapy.git
metrics: vocab_size 17, ast_levels 9, nloc 7, n_ast_nodes 56, n_identifiers 3, n_ast_errors 0, n_words 23, n_whitespaces 58, token_counts 28, complexity 3
commit_message: Add SPDX License identifiers (#3655) * Add SPDX License identifiers * Relicense `ldp.py` with author consent See https://github.com/secdev/scapy/issues/3478 * Apply guedou suggestions * Relicense someim under GPL2 * DCE/RPC licensing
code:
```python
def dce_rpc_endianess(pkt):
    if pkt.endianness == 0:    # big endian
        return ">"
    elif pkt.endianness == 1:  # little endian
        return "<"
    else:
        return "!"
```
---
fun_name: get_results · id: 128,581 · repo: ray · language: Python
path: python/ray/tune/tuner.py · file_name: tuner.py · commit_id: b510640f15a0fa4782b83ec2ea0749386b615b15
url: https://github.com/ray-project/ray.git
metrics: vocab_size 11, ast_levels 14, nloc 24, n_ast_nodes 67, n_identifiers 10, n_ast_errors 0, n_words 12, n_whitespaces 55, token_counts 39, complexity 2
commit_message: [tune] Add Tuner.get_results() to retrieve results after restore (#29083) At the moment, we need to call `tuner.fit()` to retrieve results. This PR adds a method `Tuner.get_results()` that will return the results again after fitting. It can also be used after restoring a run to get results without calling `fit()` (and potentially resuming failed trials). Signed-off-by: Kai Fricke <[email protected]>
code:
```python
def get_results(self) -> ResultGrid:
    if not self._is_ray_client:
        return self._local_tuner.get_results()
    else:
        return ray.get(self._remote_tuner.fit.remote())
```
---
fun_name: _get_tensors · id: 276,451 · repo: keras · language: Python
path: keras/tests/graph_util_test.py · file_name: graph_util_test.py · commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf
url: https://github.com/keras-team/keras.git
metrics: vocab_size 12, ast_levels 9, nloc 4, n_ast_nodes 42, n_identifiers 8, n_ast_errors 0, n_words 12, n_whitespaces 44, token_counts 27, complexity 2
commit_message: Reformatting the codebase with black. PiperOrigin-RevId: 450093126
code:
```python
def _get_tensors(self, sess, tensor_list):
    return [
        sess.graph.get_tensor_by_name(tensor.name) for tensor in tensor_list
    ]
```
---
fun_name: pool_import_helper · id: 46,559 · repo: airflow · language: Python
path: airflow/cli/commands/pool_command.py · file_name: pool_command.py · commit_id: 7418720ce173ca5d0c5f5197c168e43258af8cc3
url: https://github.com/apache/airflow.git
metrics: vocab_size 44, ast_levels 16, nloc 16, n_ast_nodes 207, n_identifiers 27, n_ast_errors 0, n_words 51, n_whitespaces 135, token_counts 120, complexity 5
commit_message: More explicit messages for pools and exceptions (#22569)
code:
```python
def pool_import_helper(filepath):
    api_client = get_current_api_client()
    with open(filepath) as poolfile:
        data = poolfile.read()
    try:
        pools_json = json.loads(data)
    except JSONDecodeError as e:
        raise SystemExit(f"Invalid json file: {e}")
    pools = []
    failed = []
    for k, v in pools_json.items():
        if isinstance(v, dict) and len(v) == 2:
            pools.append(api_client.create_pool(name=k, slots=v["slots"], description=v["description"]))
        else:
            failed.append(k)
    return pools, failed
```
---
fun_name: test_dont_fire_on_unknown_module · id: 314,848 · repo: core · language: Python
path: tests/components/lcn/test_events.py · file_name: test_events.py · commit_id: b7b8feda0ffb7487954545c96c50e7f64e2195bc
url: https://github.com/home-assistant/core.git
metrics: vocab_size 26, ast_levels 10, nloc 10, n_ast_nodes 98, n_identifiers 16, n_ast_errors 0, n_words 28, n_whitespaces 71, token_counts 60, complexity 1
commit_message: Add tests for LCN sensor and binary_sensor platforms (#67263)
code:
```python
async def test_dont_fire_on_unknown_module(hass, lcn_connection):
    inp = ModStatusAccessControl(
        LcnAddr(0, 10, False),  # unknown module
        periphery=AccessControlPeriphery.FINGERPRINT,
        code="aabbcc",
    )

    events = async_capture_events(hass, LCN_FINGERPRINT)
    await lcn_connection.async_process_input(inp)
    await hass.async_block_till_done()
    assert len(events) == 0
```
---
fun_name: extra_state_attributes · id: 310,210 · repo: core · language: Python
path: homeassistant/components/vera/lock.py · file_name: lock.py · commit_id: 03bf2cdd56eb9a0a9ed56d7afb700d5f7d9cf75e
url: https://github.com/home-assistant/core.git
metrics: vocab_size 22, ast_levels 10, nloc 13, n_ast_nodes 101, n_identifiers 13, n_ast_errors 0, n_words 28, n_whitespaces 81, token_counts 63, complexity 3
commit_message: Remove vera from mypy ignore list (#64474) * Remove vera from mypy ignore list * Fix pylint
code:
```python
def extra_state_attributes(self) -> dict[str, Any] | None:
    data = super().extra_state_attributes or {}

    last_user = self.vera_device.get_last_user_alert()
    if last_user is not None:
        data[ATTR_LAST_USER_NAME] = last_user[1]

    data[ATTR_LOW_BATTERY] = self.vera_device.get_low_battery_alert()
    return data
```
---
fun_name: loss_labels · id: 33,645 · repo: transformers · language: Python
path: src/transformers/models/deformable_detr/modeling_deformable_detr.py · file_name: modeling_deformable_detr.py · commit_id: 59407bbeb31fff8340938768051c9daabd38d7a7
url: https://github.com/huggingface/transformers.git
metrics: vocab_size 68, ast_levels 12, nloc 24, n_ast_nodes 337, n_identifiers 36, n_ast_errors 0, n_words 84, n_whitespaces 284, token_counts 226, complexity 3
commit_message: Add Deformable DETR (#17281) * First draft * More improvements * Improve model, add custom CUDA code * Import torch before * Add script that imports custom layer * Add everything in new ops directory * Import custom layer in modeling file * Fix ARCHIVE_MAP typo * Creating the custom kernel on the fly. * Import custom layer in modeling file * More improvements * Fix CUDA loading * More improvements * Improve conversion script * Improve conversion script * Make it work until encoder_outputs * Make forward pass work * More improvements * Make logits match original implementation * Make implementation also support single_scale model * Add support for single_scale and dilation checkpoint * Add support for with_box_refine model * Support also two stage model * Improve tests * Fix more tests * Make more tests pass * Upload all models to the hub * Clean up some code * Improve decoder outputs * Rename intermediate hidden states and reference points * Improve model outputs * Move tests to dedicated folder * Improve model outputs * Fix retain_grad test * Improve docs * Clean up and make test_initialization pass * Improve variable names * Add copied from statements * Improve docs * Fix style * Improve docs * Improve docs, move tests to model folder * Fix rebase * Remove DetrForSegmentation from auto mapping * Apply suggestions from code review * Improve variable names and docstrings * Apply some more suggestions from code review * Apply suggestion from code review * better docs and variables names * hint to num_queries and two_stage confusion * remove asserts and code refactor * add exception if two_stage is True and with_box_refine is False * use f-strings * Improve docs and variable names * Fix code quality * Fix rebase * Add require_torch_gpu decorator * Add pip install ninja to CI jobs * Apply suggestion of @sgugger * Remove DeformableDetrForObjectDetection from auto mapping * Remove DeformableDetrModel from auto mapping * Add model to toctree * Add model back to mappings, skip model in pipeline tests * Apply @sgugger's suggestion * Fix imports in the init * Fix copies * Add CPU implementation * Comment out GPU function * Undo previous change * Apply more suggestions * Remove require_torch_gpu annotator * Fix quality * Add logger.info * Fix logger * Fix variable names * Fix initializaztion * Add missing initialization * Update checkpoint name * Add model to doc tests * Add CPU/GPU equivalence test * Add Deformable DETR to pipeline tests * Skip model for object detection pipeline Co-authored-by: Nicolas Patry <[email protected]> Co-authored-by: Nouamane Tazi <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
code:
```python
def loss_labels(self, outputs, targets, indices, num_boxes, log=True):
    if "logits" not in outputs:
        raise ValueError("No logits were found in the outputs")
    source_logits = outputs["logits"]

    idx = self._get_source_permutation_idx(indices)
    target_classes_o = torch.cat([t["class_labels"][J] for t, (_, J) in zip(targets, indices)])
    target_classes = torch.full(
        source_logits.shape[:2], self.num_classes, dtype=torch.int64, device=source_logits.device
    )
    target_classes[idx] = target_classes_o

    target_classes_onehot = torch.zeros(
        [source_logits.shape[0], source_logits.shape[1], source_logits.shape[2] + 1],
        dtype=source_logits.dtype,
        layout=source_logits.layout,
        device=source_logits.device,
    )
    target_classes_onehot.scatter_(2, target_classes.unsqueeze(-1), 1)

    target_classes_onehot = target_classes_onehot[:, :, :-1]
    loss_ce = (
        sigmoid_focal_loss(source_logits, target_classes_onehot, num_boxes, alpha=self.focal_alpha, gamma=2)
        * source_logits.shape[1]
    )
    losses = {"loss_ce": loss_ce}

    return losses
```
---
fun_name: component · id: 22,665 · repo: Python · language: Python
path: linear-algebra-python/src/lib.py · file_name: lib.py · commit_id: f0af0c43340763724f139fa68aa1e5a9ffe458b4
url: https://github.com/geekcomputers/Python.git
metrics: vocab_size 18, ast_levels 11, nloc 5, n_ast_nodes 61, n_identifiers 6, n_ast_errors 0, n_words 19, n_whitespaces 62, token_counts 36, complexity 3
commit_message: refactor: clean code Signed-off-by: slowy07 <[email protected]>
code:
```python
def component(self, i):
    if i < len(self.__components) and i >= 0:
        return self.__components[i]
    else:
        raise Exception("index out of range")
```
---
fun_name: fit · id: 260,557 · repo: scikit-learn · language: Python
path: sklearn/multioutput.py · file_name: multioutput.py · commit_id: d942600e1f1979c431c24f59933a95155789f324
url: https://github.com/scikit-learn/scikit-learn.git
metrics: vocab_size 82, ast_levels 13, nloc 28, n_ast_nodes 334, n_identifiers 28, n_ast_errors 0, n_words 103, n_whitespaces 367, token_counts 209, complexity 9
commit_message: MAINT add parameter_constraints for MultiOutputClassifier and MultiOutputRegressor (#23902) Co-authored-by: jeremiedbb <[email protected]>
code:
```python
def fit(self, X, y, sample_weight=None, **fit_params):
    self._validate_params()

    if not hasattr(self.estimator, "fit"):
        raise ValueError("The base estimator should implement a fit method")

    y = self._validate_data(X="no_validation", y=y, multi_output=True)

    if is_classifier(self):
        check_classification_targets(y)

    if y.ndim == 1:
        raise ValueError(
            "y must have at least two dimensions for "
            "multi-output regression but has only one."
        )

    if sample_weight is not None and not has_fit_parameter(
        self.estimator, "sample_weight"
    ):
        raise ValueError("Underlying estimator does not support sample weights.")

    fit_params_validated = _check_fit_params(X, fit_params)

    self.estimators_ = Parallel(n_jobs=self.n_jobs)(
        delayed(_fit_estimator)(
            self.estimator, X, y[:, i], sample_weight, **fit_params_validated
        )
        for i in range(y.shape[1])
    )

    if hasattr(self.estimators_[0], "n_features_in_"):
        self.n_features_in_ = self.estimators_[0].n_features_in_
    if hasattr(self.estimators_[0], "feature_names_in_"):
        self.feature_names_in_ = self.estimators_[0].feature_names_in_

    return self
```
---
fun_name: mutate_random_individual · id: 181,911 · repo: tpot · language: Python
path: tpot/gp_deap.py · file_name: gp_deap.py · commit_id: 388616b6247ca4ea8de4e2f340d6206aee523541
url: https://github.com/EpistasisLab/tpot.git
metrics: vocab_size 13, ast_levels 10, nloc 6, n_ast_nodes 74, n_identifiers 12, n_ast_errors 0, n_words 16, n_whitespaces 34, token_counts 46, complexity 1
commit_message: Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
code:
```python
def mutate_random_individual(population, toolbox):
    idx = np.random.randint(0, len(population))
    ind = population[idx]
    ind, = toolbox.mutate(ind)
    del ind.fitness.values
    return ind
```
---
fun_name: test_nav_no_title · id: 224,172 · repo: mkdocs · language: Python
path: mkdocs/tests/structure/nav_tests.py · file_name: nav_tests.py · commit_id: 372384d8102ddb4be6360f44d1bfddb8b45435a4
url: https://github.com/mkdocs/mkdocs.git
metrics: vocab_size 32, ast_levels 11, nloc 21, n_ast_nodes 232, n_identifiers 21, n_ast_errors 0, n_words 42, n_whitespaces 181, token_counts 142, complexity 1
commit_message: Some manual changes ahead of formatting code with Black
code (the `dedent` string body is elided in the dump):
```python
def test_nav_no_title(self):
    nav_cfg = [
        'index.md',
        {'About': 'about.md'},
    ]
    expected = dedent(
    )
    cfg = load_config(nav=nav_cfg, site_url='http://example.com/')
    fs = [
        File(nav_cfg[0], cfg['docs_dir'], cfg['site_dir'], cfg['use_directory_urls']),
        File(nav_cfg[1]['About'], cfg['docs_dir'], cfg['site_dir'], cfg['use_directory_urls'])
    ]
    files = Files(fs)
    site_navigation = get_navigation(files, cfg)
    self.assertEqual(str(site_navigation).strip(), expected)
    self.assertEqual(len(site_navigation.items), 2)
    self.assertEqual(len(site_navigation.pages), 2)
```
---
fun_name: to_proto · id: 60,276 · repo: transferlearning · language: Python
path: code/deep/BJMMD/caffe/python/caffe/net_spec.py · file_name: net_spec.py · commit_id: cc4d0564756ca067516f71718a3d135996525909
url: https://github.com/jindongwang/transferlearning.git
metrics: vocab_size 4, ast_levels 7, nloc 2, n_ast_nodes 21, n_identifiers 2, n_ast_errors 0, n_words 4, n_whitespaces 18, token_counts 11, complexity 1
commit_message: Balanced joint maximum mean discrepancy for deep transfer learning
code:
```python
def to_proto(self):
    return to_proto(self)
```
---
fun_name: get · id: 168,314 · repo: pandas · language: Python
path: pandas/core/strings/accessor.py · file_name: accessor.py · commit_id: d5b4b33f1034b0fb0aa8a76cefe620794e28e851
url: https://github.com/pandas-dev/pandas.git
metrics: vocab_size 8, ast_levels 10, nloc 3, n_ast_nodes 45, n_identifiers 8, n_ast_errors 0, n_words 8, n_whitespaces 29, token_counts 27, complexity 1
commit_message: DOC/TST: Clarify Series.str.get supports passing hashable label (#47918) * gh 47911 * pre-commit issue * add test and fix doc * modified comment * pep 8 * add more elements
code:
```python
def get(self, i):
    result = self._data.array._str_get(i)
    return self._wrap_result(result)
```
---
fun_name: is_connected · id: 296,122 · repo: core · language: Python
path: homeassistant/components/asuswrt/device_tracker.py · file_name: device_tracker.py · commit_id: bc2ba8e1c8c988ae24f6961ce64187782f5ba32d
url: https://github.com/home-assistant/core.git
metrics: vocab_size 6, ast_levels 7, nloc 3, n_ast_nodes 25, n_identifiers 4, n_ast_errors 0, n_words 6, n_whitespaces 20, token_counts 14, complexity 1
commit_message: Add missing type declaration to AsusWrt Scanner Entity (#69773)
code:
```python
def is_connected(self) -> bool:
    return self._device.is_connected
```
---
fun_name: get_namespace · id: 261,022 · repo: scikit-learn · language: Python
path: sklearn/utils/_array_api.py · file_name: _array_api.py · commit_id: 2710a9e7eefd2088ce35fd2fb6651d5f97e5ef8b
url: https://github.com/scikit-learn/scikit-learn.git
metrics: vocab_size 84, ast_levels 12, nloc 16, n_ast_nodes 185, n_identifiers 17, n_ast_errors 0, n_words 113, n_whitespaces 219, token_counts 107, complexity 8
commit_message: ENH Adds Array API support to LinearDiscriminantAnalysis (#22554) Co-authored-by: Olivier Grisel <[email protected]> Co-authored-by: Julien Jerphanion <[email protected]>
code:
```python
def get_namespace(*arrays):
    # `arrays` contains one or more arrays, or possibly Python scalars (accepting
    # those is a matter of taste, but doesn't seem unreasonable).
    # Returns a tuple: (array_namespace, is_array_api)
    if not get_config()["array_api_dispatch"]:
        return _NumPyApiWrapper(), False

    namespaces = {
        x.__array_namespace__() if hasattr(x, "__array_namespace__") else None
        for x in arrays
        if not isinstance(x, (bool, int, float, complex))
    }

    if not namespaces:
        # one could special-case np.ndarray above or use np.asarray here if
        # older numpy versions need to be supported.
        raise ValueError("Unrecognized array input")

    if len(namespaces) != 1:
        raise ValueError(f"Multiple namespaces for array inputs: {namespaces}")

    (xp,) = namespaces
    if xp is None:
        # Use numpy as default
        return _NumPyApiWrapper(), False

    return _ArrayAPIWrapper(xp), True
```
---
fun_name: array_to_img · id: 268,938 · repo: keras · language: Python
path: keras/preprocessing/image.py · file_name: image.py · commit_id: 373ad97c72ed1ac4b6898e85b2cfd7b016e4b469
url: https://github.com/keras-team/keras.git
metrics: vocab_size 113, ast_levels 16, nloc 32, n_ast_nodes 479, n_identifiers 22, n_ast_errors 1, n_words 178, n_whitespaces 316, token_counts 264, complexity 13
ast_errors: @keras_export('keras.utils.img_to_array', 'keras.preprocessing.image.img_to_array')
commit_message: Copy image utils from keras_preprocessing directly into core keras This is not new code, we are just moving these utilities directly into keras from keras-preprocessing. For the library code, just fixed linting errors. For the test code, had to do more major changes to port from pytest, but hopefully any errors have been caught by the tests themselves. PiperOrigin-RevId: 427274651
code:
```python
def array_to_img(x, data_format=None, scale=True, dtype=None):
    if data_format is None:
        data_format = backend.image_data_format()
    if dtype is None:
        dtype = backend.floatx()
    if pil_image is None:
        raise ImportError('Could not import PIL.Image. '
                          'The use of `array_to_img` requires PIL.')
    x = np.asarray(x, dtype=dtype)
    if x.ndim != 3:
        raise ValueError('Expected image array to have rank 3 (single image). '
                         f'Got array with shape: {x.shape}')

    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError(f'Invalid data_format: {data_format}')

    # Original Numpy array x has format (height, width, channel)
    # or (channel, height, width)
    # but target PIL image has format (width, height, channel)
    if data_format == 'channels_first':
        x = x.transpose(1, 2, 0)
    if scale:
        x = x - np.min(x)
        x_max = np.max(x)
        if x_max != 0:
            x /= x_max
        x *= 255
    if x.shape[2] == 4:
        # RGBA
        return pil_image.fromarray(x.astype('uint8'), 'RGBA')
    elif x.shape[2] == 3:
        # RGB
        return pil_image.fromarray(x.astype('uint8'), 'RGB')
    elif x.shape[2] == 1:
        # grayscale
        if np.max(x) > 255:
            # 32-bit signed integer grayscale image. PIL mode "I"
            return pil_image.fromarray(x[:, :, 0].astype('int32'), 'I')
        return pil_image.fromarray(x[:, :, 0].astype('uint8'), 'L')
    else:
        raise ValueError(f'Unsupported channel number: {x.shape[2]}')


@keras_export('keras.utils.img_to_array', 'keras.preprocessing.image.img_to_array')
```
---
fun_name: _infer_inputs · id: 276,040 · repo: keras · language: Python
path: keras/saving/saved_model/load.py · file_name: load.py · commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf
url: https://github.com/keras-team/keras.git
metrics: vocab_size 34, ast_levels 11, nloc 21, n_ast_nodes 177, n_identifiers 23, n_ast_errors 0, n_words 50, n_whitespaces 225, token_counts 113, complexity 4
commit_message: Reformatting the codebase with black. PiperOrigin-RevId: 450093126
code:
```python
def _infer_inputs(self, layer_node_id, convert_to_shapes=False):
    call_fn_id = self._search_for_child_node(
        layer_node_id, ["call_and_return_all_conditional_losses"]
    )
    if call_fn_id is None:
        return None

    concrete_functions = self._proto.nodes[
        call_fn_id
    ].function.concrete_functions
    if not concrete_functions:
        return None
    call_fn_name = concrete_functions[0]
    call_fn_proto = self._proto.concrete_functions[call_fn_name]
    structured_input_signature = tf.__internal__.saved_model.decode_proto(
        call_fn_proto.canonicalized_input_signature
    )
    inputs = structured_input_signature[0][0]
    if convert_to_shapes:
        return tf.nest.map_structure(lambda spec: spec.shape, inputs)
    else:
        return inputs
```
---
fun_name: test_measure_from_end_going_backwards · id: 289,389 · repo: core · language: Python
path: tests/components/history_stats/test_sensor.py · file_name: test_sensor.py · commit_id: 31a787558fd312331b55e5c2c4b33341fc3601fc
url: https://github.com/home-assistant/core.git
metrics: vocab_size 20, ast_levels 10, nloc 73, n_ast_nodes 90, n_identifiers 11, n_ast_errors 0, n_words 34, n_whitespaces 83, token_counts 393, complexity 2
commit_message: Ensure recorder test fixture is setup before hass fixture (#80528) * Ensure recorder test fixture is setup before hass fixture * Adjust more tests
code (truncated in the dump):
```python
async def test_measure_from_end_going_backwards(recorder_mock, hass):
    start_time = dt_util.utcnow() - timedelta(minutes=60)
    t0 = start_time + timedelta(minutes=20)
    t1 = t0 + timedelta(minutes=10)
    t2 = t1 + timedelta(minutes=10)

    # Start     t0        t1        t2        End
    # |--20min--|--20min--|--10min--|--10min--|
    # |---off---|---on----|---off---|---on----|
```
---
fun_name: _gen_candidate_chars · id: 268,596 · repo: ansible · language: Python
path: lib/ansible/plugins/lookup/password.py · file_name: password.py · commit_id: 5d253a13807e884b7ce0b6b57a963a45e2f0322c
url: https://github.com/ansible/ansible.git
metrics: vocab_size 32, ast_levels 15, nloc 6, n_ast_nodes 111, n_identifiers 12, n_ast_errors 0, n_words 36, n_whitespaces 72, token_counts 67, complexity 2
commit_message: fix password lookup's use of f=v settings (#76551) update tests
code:
```python
def _gen_candidate_chars(characters):
    chars = []
    for chars_spec in characters:
        # getattr from string expands things like "ascii_letters" and "digits"
        # into a set of characters.
        chars.append(to_text(getattr(string, to_native(chars_spec), chars_spec), errors='strict'))
    chars = u''.join(chars).replace(u'"', u'').replace(u"'", u'')
    return chars
```
---
fun_name: quality_focal_loss_tensor_target · id: 245,934 · repo: mmdetection · language: Python
path: mmdet/models/losses/gfocal_loss.py · file_name: gfocal_loss.py · commit_id: 380d936098c051a639ed8403667d95a595145c2a
url: https://github.com/open-mmlab/mmdetection.git
metrics: vocab_size 54, ast_levels 11, nloc 20, n_ast_nodes 256, n_identifiers 26, n_ast_errors 1, n_words 76, n_whitespaces 166, token_counts 161, complexity 2
ast_errors: @weighted_loss
commit_message: [Feat]: adjust FocalLoss and QualityFocalLoss to allow different kinds of targets (#9481) * Adjust FocalLoss and QualityFocalLoss for MMYOLO * Adjust FocalLoss and QualityFocalLoss to fit MMYOLO * Adjust FocalLoss and QualityFocalLoss to fit MMYOLO * add comment * Add docstring * refine docstring * add a new quality_focal_loss_tensor_target function to support any dim tensor target * add activated condition * Add a test unit to determine whether two losses are equal
code:
```python
def quality_focal_loss_tensor_target(pred, target, beta=2.0, activated=False):
    # pred and target should be of the same size
    assert pred.size() == target.size()
    if activated:
        pred_sigmoid = pred
        loss_function = F.binary_cross_entropy
    else:
        pred_sigmoid = pred.sigmoid()
        loss_function = F.binary_cross_entropy_with_logits

    scale_factor = pred_sigmoid
    target = target.type_as(pred)

    zerolabel = scale_factor.new_zeros(pred.shape)
    loss = loss_function(
        pred, zerolabel, reduction='none') * scale_factor.pow(beta)

    pos = (target != 0)
    scale_factor = target[pos] - pred_sigmoid[pos]
    loss[pos] = loss_function(
        pred[pos], target[pos],
        reduction='none') * scale_factor.abs().pow(beta)

    loss = loss.sum(dim=1, keepdim=False)
    return loss


@weighted_loss
```
---
fun_name: supported_options · id: 291,850 · repo: core · language: Python
path: homeassistant/components/google_translate/tts.py · file_name: tts.py · commit_id: 5533368171525f00beb7b2355f49c5b774408996
url: https://github.com/home-assistant/core.git
metrics: vocab_size 4, ast_levels 6, nloc 2, n_ast_nodes 16, n_identifiers 3, n_ast_errors 0, n_words 4, n_whitespaces 18, token_counts 8, complexity 1
commit_message: Add dialect support to google_translate (#81768) * Add TLD option support to google_translate * Fix tests for added TLD option in google_translate * Add Language to TLD mapping, Make tld configurable in google_translate * Move const to dedicated file in google_translate
code:
```python
def supported_options(self):
    return SUPPORT_OPTIONS
```
---
fun_name: maybe_append_new_line · id: 34,566 · repo: transformers · language: Python
path: utils/prepare_for_doc_test.py · file_name: prepare_for_doc_test.py · commit_id: 9f831bdeaf965acca6c6097dfffb1364f4416c17
url: https://github.com/huggingface/transformers.git
metrics: vocab_size 26, ast_levels 11, nloc 7, n_ast_nodes 101, n_identifiers 8, n_ast_errors 0, n_words 28, n_whitespaces 68, token_counts 53, complexity 2
commit_message: [DocTests Speech] Add doc tests for all speech models (#15031) * fix_torch_device_generate_test * remove @ * doc tests * up * up * fix doctests * adapt files * finish refactor * up * save intermediate * add more logic * new change * improve * next try * next try * next try * next try * fix final spaces * fix final spaces * improve * renaming * correct more bugs * finish wavlm * add comment * run on test runner * finish all speech models * adapt * finish
code:
```python
def maybe_append_new_line(code):
    lines = code.split("\n")

    if lines[0] in ["py", "python"]:
        # add new line before last line being ```
        last_line = lines[-1]
        lines.pop()
        lines.append("\n" + last_line)

    return "\n".join(lines)
```
---
fun_name: add_to_apply_calls · id: 153,887 · repo: modin · language: Python
path: modin/core/execution/ray/implementations/pandas_on_ray/partitioning/partition.py · file_name: partition.py · commit_id: 4ec7f6347903f9133c65ebc5b6e0e15553b98577
url: https://github.com/modin-project/modin.git
metrics: vocab_size 14, ast_levels 11, nloc 4, n_ast_nodes 55, n_identifiers 8, n_ast_errors 0, n_words 14, n_whitespaces 46, token_counts 37, complexity 1
commit_message: REFACTOR-#4530: Standardize access to physical data in partitions (#4563) Signed-off-by: Alexey Prutskov <[email protected]>
code:
```python
def add_to_apply_calls(self, func, *args, **kwargs):
    return PandasOnRayDataframePartition(
        self._data, call_queue=self.call_queue + [(func, args, kwargs)]
    )
```
---
fun_name: test_remote_scanner · id: 290,592 · repo: core · language: Python
path: tests/components/bluetooth/test_models.py · file_name: test_models.py · commit_id: f584efa0c24df19ef1f805ecf95a95cecec5ff99
url: https://github.com/home-assistant/core.git
metrics: vocab_size 29, ast_levels 12, nloc 62, n_ast_nodes 215, n_identifiers 15, n_ast_errors 0, n_words 46, n_whitespaces 202, token_counts 337, complexity 1
commit_message: Move bluetooth remote scanner implementation into a base class (#82012)
code (truncated in the dump):
```python
async def test_remote_scanner(hass):
    manager = _get_manager()

    switchbot_device = BLEDevice(
        "44:44:33:11:23:45",
        "wohand",
        {},
        rssi=-100,
    )
    switchbot_device_adv = generate_advertisement_data(
        local_name="wohand",
        service_uuids=["050a021a-0000-1000-8000-00805f9b34fb"],
        service_data={"050a021a-0000-1000-8000-00805f9b34fb": b"\n\xff"},
        manufacturer_data={1: b"\x01"},
        rssi=-100,
    )
    switchbot_device_2 = BLEDevice(
        "44:44:33:11:23:45",
        "w",
        {},
        rssi=-100,
    )
    switchbot_device_adv_2 = generate_advertisement_data(
        local_name="wohand",
        service_uuids=["00000001-0000-1000-8000-00805f9b34fb"],
        service_data={"00000001-0000-1000-8000-00805f9b34fb": b"\n\xff"},
        manufacturer_data={1: b"\x01", 2: b"\x02"},
        rssi=-100,
    )
```
---
fun_name: with_class · id: 20,526 · repo: pipenv · language: Python
path: pipenv/patched/notpip/_vendor/pyparsing/actions.py · file_name: actions.py · commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3
url: https://github.com/pypa/pipenv.git
metrics: vocab_size 45, ast_levels 10, nloc 3, n_ast_nodes 84, n_identifiers 14, n_ast_errors 0, n_words 51, n_whitespaces 128, token_counts 32, complexity 2
commit_message: check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
code (the HTML sample leaked out of a stripped docstring; it is restored as a docstring fragment here):
```python
def with_class(classname, namespace=""):
    """
    <div>
    Some text
    <div class="grid">1 4 0 1 0</div>
    <div class="graph">1,3 2,3 1,1</div>
    <div>this &lt;div&gt; has no class</div>
    </div>
    """
    classattr = "{}:class".format(namespace) if namespace else "class"
    return with_attribute(**{classattr: classname})


# pre-PEP8 compatibility symbols
replaceWith = replace_with
removeQuotes = remove_quotes
withAttribute = with_attribute
withClass = with_class
matchOnlyAtCol = match_only_at_col
```
---
fun_name: predict · id: 1,721 · repo: PySyft · language: Python
path: packages/syft/src/syft/core/tensor/nn/model.py · file_name: model.py · commit_id: f9c115a133e58935de7905dd8a19b6b0d6490500
url: https://github.com/OpenMined/PySyft.git
metrics: vocab_size 13, ast_levels 10, nloc 6, n_ast_nodes 56, n_identifiers 8, n_ast_errors 0, n_words 18, n_whitespaces 67, token_counts 34, complexity 2
commit_message: fix pooling layers add notebooks for model training
code:
```python
def predict(self, X):
    x_next = X
    for layer in self.layers[:]:
        x_next = layer.forward(x_next)
    y_pred = x_next
    return y_pred
```
---
fun_name: getSubRectangles · id: 190,823 · repo: thumbor · language: Python
path: thumbor/engines/extensions/pil.py · file_name: pil.py · commit_id: 3c745ef193e9af9244cc406734e67815377472ed
url: https://github.com/thumbor/thumbor.git
metrics: vocab_size 113, ast_levels 14, nloc 25, n_ast_nodes 348, n_identifiers 25, n_ast_errors 0, n_words 164, n_whitespaces 532, token_counts 215, complexity 8
commit_message: Reformat of files using black These files were not properly formatted.
code:
```python
def getSubRectangles(self, ims):
    # Check image count
    if len(ims) < 2:
        return ims, [(0, 0) for i in ims]

    # We need numpy
    if np is None:
        raise RuntimeError("Need Numpy to calculate sub-rectangles. ")

    # Prepare
    ims2 = [ims[0]]
    xy = [(0, 0)]
    # t0 = time.time()

    # Iterate over images
    prev = ims[0]
    for im in ims[1:]:
        # Get difference, sum over colors
        diff = np.abs(im - prev)
        if diff.ndim == 3:
            diff = diff.sum(2)
        # Get begin and end for both dimensions
        X = np.argwhere(diff.sum(0))
        Y = np.argwhere(diff.sum(1))
        # Get rect coordinates
        if X.size and Y.size:
            x0, x1 = X[0], X[-1] + 1
            y0, y1 = Y[0], Y[-1] + 1
        else:
            # No change ... make it minimal
            x0, x1 = 0, 2
            y0, y1 = 0, 2
        # Cut out and store
        im2 = im[y0:y1, x0:x1]
        prev = im
        ims2.append(im2)
        xy.append((x0, y0))

    # Done
    # print('%1.2f seconds to determine subrectangles of %i images' %
    #       (time.time()-t0, len(ims2)) )
    return ims2, xy
```
---
fun_name: split_coco · id: 244,171 · repo: mmdetection · language: Python
path: tools/misc/split_coco.py · file_name: split_coco.py · commit_id: 04db930cec2bb1bf628456ac57ec1aa396204b1b
url: https://github.com/open-mmlab/mmdetection.git
metrics: vocab_size 5, ast_levels 6, nloc 27, n_ast_nodes 19, n_identifiers 5, n_ast_errors 0, n_words 5, n_whitespaces 8, token_counts 214, complexity 5
commit_message: [Feature] Support splitting COCO data for Semi-supervised object detection. (#7431) * Split COCO data for Semi-supervised object detection. * import mmcv and use f-string * add a parser out_dir to set the path of semi-annos * Support multiprocessing * use mmcv.track_parallel_progress * fix * rename some variables
code (truncated in the dump):
```python
def split_coco(data_root, out_dir, percent, fold):
```
---
fun_name: expand_block · id: 22,785 · repo: Python · language: Python
path: sha1.py · file_name: sha1.py · commit_id: f0af0c43340763724f139fa68aa1e5a9ffe458b4
url: https://github.com/geekcomputers/Python.git
metrics: vocab_size 27, ast_levels 16, nloc 5, n_ast_nodes 122, n_identifiers 10, n_ast_errors 0, n_words 36, n_whitespaces 75, token_counts 80, complexity 2
commit_message: refactor: clean code Signed-off-by: slowy07 <[email protected]>
code:
```python
def expand_block(self, block):
    w = list(struct.unpack(">16L", block)) + [0] * 64
    for i in range(16, 80):
        w[i] = self.rotate((w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16]), 1)
    return w
```
---
fun_name: speed_count · id: 309,481 · repo: core · language: Python
path: homeassistant/components/tradfri/fan.py · file_name: fan.py · commit_id: b52a8ba37a5e5e05b80beddff06b116371941d86
url: https://github.com/home-assistant/core.git
metrics: vocab_size 6, ast_levels 6, nloc 12, n_ast_nodes 19, n_identifiers 4, n_ast_errors 0, n_words 6, n_whitespaces 20, token_counts 10, complexity 1
commit_message: Bump pytradfri to 8.0.1 and fix fan preset mode "Auto" bug (#63920) * Move util functions * Fix errors * Revert changes * Fix tests * Use self.async_set_percentage() * Fix calculation functions and associated tests * Handle case of 0 * Update tests/components/tradfri/test_util.py Co-authored-by: Martin Hjelmare <[email protected]> * Update tests/components/tradfri/test_util.py Co-authored-by: Martin Hjelmare <[email protected]> * Update tests/components/tradfri/test_util.py Co-authored-by: Martin Hjelmare <[email protected]> * Handle case of 0 * Update homeassistant/components/tradfri/fan.py Co-authored-by: Martin Hjelmare <[email protected]> Co-authored-by: Martin Hjelmare <[email protected]>
code:
```python
def speed_count(self) -> int:
    return ATTR_MAX_FAN_STEPS
```
---
fun_name: _combine_individual_stats · id: 181,816 · repo: tpot · language: Python
path: tpot/base.py · file_name: base.py · commit_id: 388616b6247ca4ea8de4e2f340d6206aee523541
url: https://github.com/EpistasisLab/tpot.git
metrics: vocab_size 26, ast_levels 8, nloc 7, n_ast_nodes 55, n_identifiers 7, n_ast_errors 0, n_words 29, n_whitespaces 83, token_counts 32, complexity 1
commit_message: Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
code:
```python
def _combine_individual_stats(self, operator_count, cv_score, individual_stats):
    stats = deepcopy(
        individual_stats
    )  # Deepcopy, since the string reference to predecessor should be cloned
    stats["operator_count"] = operator_count
    stats["internal_cv_score"] = cv_score
    return stats
```
---
fun_name: raise_on_deprecated · id: 200,290 · repo: sympy · language: Python
path: sympy/testing/runtests.py · file_name: runtests.py · commit_id: 6d2bbf80752549276a968fd4af78231c569d55c5
url: https://github.com/sympy/sympy.git
metrics: vocab_size 9, ast_levels 11, nloc 4, n_ast_nodes 54, n_identifiers 6, n_ast_errors 0, n_words 9, n_whitespaces 29, token_counts 27, complexity 1
commit_message: runtests.py: Undo auto-formatting, re-add changes to blacklist for scipy, numpy
code:
```python
def raise_on_deprecated():
    with warnings.catch_warnings():
        warnings.filterwarnings('error', '.*', DeprecationWarning, module='sympy.*')
        yield
```
---
fun_name: get_live_trials · id: 142,835 · repo: ray · language: Python
path: python/ray/tune/execution/trial_runner.py · file_name: trial_runner.py · commit_id: 0959f44b6fc217a4f2766ed46a721eb79b067b2c
url: https://github.com/ray-project/ray.git
metrics: vocab_size 4, ast_levels 6, nloc 2, n_ast_nodes 19, n_identifiers 3, n_ast_errors 0, n_words 4, n_whitespaces 18, token_counts 10, complexity 1
commit_message: [tune/structure] Introduce execution package (#26015) Execution-specific packages are moved to tune.execution. Co-authored-by: Xiaowei Jiang <[email protected]>
code:
```python
def get_live_trials(self):
    return self._live_trials
```
---
fun_name: is_guessed_to_be_created_on_project_creation · id: 96,980 · repo: sentry · language: Python
path: src/sentry/rules/conditions/event_frequency.py · file_name: event_frequency.py · commit_id: 654c6627307359956c6d44f83791d6b177841363
url: https://github.com/getsentry/sentry.git
metrics: vocab_size 26, ast_levels 11, nloc 13, n_ast_nodes 75, n_identifiers 12, n_ast_errors 0, n_words 27, n_whitespaces 62, token_counts 45, complexity 2
commit_message: ref(types): Add types to conditions and filters (#32393)
code:
```python
def is_guessed_to_be_created_on_project_creation(self) -> bool:
    # TODO(mgaeta): Bug: Rule is optional.
    delta = abs(self.rule.date_added - self.project.date_added)
    guess: bool = delta.total_seconds() < 30 and self.rule.label == DEFAULT_RULE_LABEL
    return guess
```
---
fun_name: test_check_push_rules_actions · id: 248,089 · repo: synapse · language: Python
path: tests/module_api/test_api.py · file_name: test_api.py · commit_id: 5ef673de4f0bf991402ee29235741a91a7cc9b02
url: https://github.com/matrix-org/synapse.git
metrics: vocab_size 16, ast_levels 12, nloc 12, n_ast_nodes 140, n_identifiers 6, n_ast_errors 0, n_words 18, n_whitespaces 93, token_counts 74, complexity 1
commit_message: Add a module API to allow modules to edit push rule actions (#12406) Co-authored-by: Richard van der Hoff <[email protected]>
code:
```python
def test_check_push_rules_actions(self) -> None:
    with self.assertRaises(InvalidRuleException):
        self.module_api.check_push_rule_actions(["foo"])

    with self.assertRaises(InvalidRuleException):
        self.module_api.check_push_rule_actions({"foo": "bar"})

    self.module_api.check_push_rule_actions(["notify"])

    self.module_api.check_push_rule_actions(
        [{"set_tweak": "sound", "value": "default"}]
    )
```
43,887
53
17
31
292
32
0
83
452
on_task_instance_state_session_flush
Add Listener Plugin API that tracks TaskInstance state changes (#20443) This adds new Plugin API - "listeners". It enables plugin authors to write [pluggy hook implementation][1] that will be called on certain formalized extension points. To differentiate between current Airflow extension points, like plugins, and current Airflow hooks, implementations of those hooks are called listeners. The API is ment to be called across all dags, and all operators - in contrast to current on_success_callback, pre_execute and related family which are meant to provide callbacks for particular dag authors, or operator creators. pluggy mechanism enables us to execute multiple, or none, listeners that implement particular extension point, so that users can use multiple listeners seamlessly. In this PR, three such extension points are added. When TaskInstance's state is changed to RUNNING, on_task_instance_running hook is called. On change toSUCCESS on_task_instance_success is called, similarly on FAILED on_task_instance_failed is called. Actual notification mechanism is be implemented using [SQLAlchemy’s events mechanism][2]. This ensures that plugins will get every change of state, regardless of where in the codebase it happened, and not require manual annotation of TI state changes across the codebase. To make sure that this change is not affecting performance, running this mechanism on scheduler is disabled by default. The SQLAlchemy event mechanism is also not affected by default - the event listener is only added if we have any plugin which actually provides any listener. [1]: https://pluggy.readthedocs.io/en/stable/ [2]: https://docs.sqlalchemy.org/en/13/orm/session_events.html#after-flush Signed-off-by: Maciej Obuchowski <[email protected]>
https://github.com/apache/airflow.git
def on_task_instance_state_session_flush(session, flush_context):
    logger = logging.getLogger(__name__)
    if not get_listener_manager().has_listeners:
        return
    for state in flush_context.states:
        if isinstance(state.object, TaskInstance) and session.is_modified(
            state.object, include_collections=False
        ):
            added, unchanged, deleted = flush_context.get_attribute_history(state, 'state')
            logger.debug(
                "session flush listener: added %s unchanged %s deleted %s - %s",
                added,
                unchanged,
                deleted,
                state.object,
            )
            if not added:
                continue
            previous_state = deleted[0] if deleted else State.NONE
            if State.RUNNING in added:
                get_listener_manager().hook.on_task_instance_running(
                    previous_state=previous_state, task_instance=state.object, session=session
                )
            elif State.FAILED in added:
                get_listener_manager().hook.on_task_instance_failed(
                    previous_state=previous_state, task_instance=state.object, session=session
                )
            elif State.SUCCESS in added:
                get_listener_manager().hook.on_task_instance_success(
                    previous_state=previous_state, task_instance=state.object, session=session
                )
190
events.py
Python
airflow/listeners/events.py
dba00ce6a32b7f50153887c6974f62985ca8023f
airflow
10
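The commit message above describes pluggy-based extension points that plugin authors implement. A minimal sketch of such a listener, assuming the `hookimpl` marker is importable from `airflow.listeners` as in the plugin API this commit introduces; the hook name follows the extension points named in the message:

# A minimal sketch of a listener, assuming `hookimpl` is exposed by
# airflow.listeners (it is a pluggy HookimplMarker in current Airflow).
from airflow.listeners import hookimpl


@hookimpl
def on_task_instance_running(previous_state, task_instance, session):
    # Called whenever a TaskInstance transitions to RUNNING.
    print(f"{task_instance.task_id}: {previous_state} -> running")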
277,101
8
11
5
49
9
0
8
21
sync_to_numpy_or_python_type
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def sync_to_numpy_or_python_type(tensors):
    if isinstance(tensors, tf.distribute.experimental.coordinator.RemoteValue):
        tensors = tensors.fetch()
42
tf_utils.py
Python
keras/utils/tf_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
43,650
23
14
6
83
9
0
25
102
cancel_triggers
Rename `to_delete` to `to_cancel` in TriggerRunner (#20658) The queue's purpose is to track triggers that need to be canceled. The language `to_delete` was a bit confusing because for one it does not actually delete them but cancel them. The deletion work is actually in `cleanup_finished_triggers`. It seems that this method will usually not do anything and it's only for cancelling triggers that are currently running but for whatever reason no longer should be. E.g. when a task is killed and therefore the trigger is no longer needed, or some multi-triggerer scenarios. So putting cancel in the name also highlights that this is about stopping running triggers, not e.g. purging completed ones.
https://github.com/apache/airflow.git
async def cancel_triggers(self):
    while self.to_cancel:
        trigger_id = self.to_cancel.popleft()
        if trigger_id in self.triggers:
            # We only delete if it did not exit already
            self.triggers[trigger_id]["task"].cancel()
        await asyncio.sleep(0)
47
triggerer_job.py
Python
airflow/jobs/triggerer_job.py
c20ad79b40ea2b213f6dca221221c6dbd55bd08f
airflow
3
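A hedged, self-contained illustration of the cancel-queue pattern in `cancel_triggers`; the trigger ids and the toy sleeper task are made up, and the real TriggerRunner tracks much more state:

import asyncio
from collections import deque

async def demo():
    async def sleeper():
        await asyncio.sleep(3600)

    # Trigger 2 already exited, so its id is simply skipped.
    triggers = {1: {"task": asyncio.create_task(sleeper())}}
    to_cancel = deque([1, 2])
    while to_cancel:
        trigger_id = to_cancel.popleft()
        if trigger_id in triggers:
            triggers[trigger_id]["task"].cancel()
        await asyncio.sleep(0)  # yield so cancellation can propagate

asyncio.run(demo())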
166,980
5
6
21
27
4
0
5
8
parametrize_fixture_doc
TYP: pandas/_testing (#47037) * TYP: bunch of type annotations * change not needed
https://github.com/pandas-dev/pandas.git
def parametrize_fixture_doc(*args) -> Callable[[F], F]:
20
_test_decorators.py
Python
pandas/util/_test_decorators.py
c9c6685c51ead26bbbb9a0dd565e82967cd839e8
pandas
1
102,175
10
8
8
47
6
0
11
32
test_empty_backend
Revert "Revert D32498569: allow external backend codegen to toggle whether to generate out= and inplace kernels" (#69950) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69950 This reverts commit f6cad53443704dfe5a20cc62bee14d91e3bffcaa. Test Plan: Imported from OSS Reviewed By: albanD Differential Revision: D33113545 Pulled By: bdhirsh fbshipit-source-id: d6590294662588d36c09662dea65919ad4e1e288
https://github.com/pytorch/pytorch.git
def test_empty_backend(self) -> None:
    yaml_str = 
    output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
    self.assertExpectedInline(output_error, )
26
test_gen_backend_stubs.py
Python
tools/test/test_gen_backend_stubs.py
bb5b4cceb6f737448eaaa6817cd773b6f4b0e77d
pytorch
1
270,204
25
12
9
84
10
0
29
84
normalize_cluster_spec
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def normalize_cluster_spec(cluster_spec):
    if isinstance(cluster_spec, (dict, cluster_pb2.ClusterDef)):
        return tf.train.ClusterSpec(cluster_spec)
    elif not isinstance(cluster_spec, tf.train.ClusterSpec):
        raise ValueError(
            "`cluster_spec' should be dict or a `tf.train.ClusterSpec` or a "
            "`tf.train.ClusterDef` object"
        )
    return cluster_spec
50
distribute_coordinator_utils.py
Python
keras/distribute/distribute_coordinator_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
3
295,908
10
8
7
41
7
0
13
27
available_tones
Add EntityFeature enum to Siren (#69585) Co-authored-by: Franck Nijhof <[email protected]>
https://github.com/home-assistant/core.git
def available_tones(self) -> list[int | str] | dict[int, str] | None:
    return self._attr_available_tones
26
__init__.py
Python
homeassistant/components/siren/__init__.py
a61ac3ddc6d65522dfa1eb599adf73420a9267dc
core
1
126,954
56
14
23
257
37
0
67
330
test_ddpg_compilation
[RLlib] Move learning_starts logic from buffers into `training_step()`. (#26032)
https://github.com/ray-project/ray.git
def test_ddpg_compilation(self):
    config = (
        ddpg.DDPGConfig()
        .training(num_steps_sampled_before_learning_starts=0)
        .rollouts(num_rollout_workers=0, num_envs_per_worker=2)
    )
    explore = config.exploration_config.update({"random_timesteps": 100})
    config.exploration(exploration_config=explore)
    num_iterations = 1

    # Test against all frameworks.
    for _ in framework_iterator(config, with_eager_tracing=True):
        algo = config.build(env="Pendulum-v1")
        for i in range(num_iterations):
            results = algo.train()
            check_train_results(results)
            print(results)
        check_compute_single_action(algo)
        # Ensure apply_gradient_fn is being called and updating global_step
        pol = algo.get_policy()
        if config.framework_str == "tf":
            a = pol.get_session().run(pol.global_step)
        else:
            a = pol.global_step
        check(a, 500)
        algo.stop()
153
test_ddpg.py
Python
rllib/algorithms/ddpg/tests/test_ddpg.py
0dceddb912ed92286032b5563dd2e541a8a7031f
ray
4
297,996
75
16
37
382
49
0
111
521
async_step_user
Add blebox discovery/zeroconf (#83837) Co-authored-by: J. Nick Koston <[email protected]>
https://github.com/home-assistant/core.git
async def async_step_user(self, user_input=None):
    hass = self.hass
    schema = create_schema(user_input)

    if user_input is None:
        return self.async_show_form(
            step_id="user",
            data_schema=schema,
            errors={},
            description_placeholders={},
        )

    addr = host_port(user_input)

    for entry in self._async_current_entries():
        if addr == host_port(entry.data):
            host, port = addr
            return self.async_abort(
                reason=ADDRESS_ALREADY_CONFIGURED,
                description_placeholders={"address": f"{host}:{port}"},
            )

    websession = async_get_clientsession(hass)
    api_host = ApiHost(*addr, DEFAULT_SETUP_TIMEOUT, websession, hass.loop, _LOGGER)

    try:
        product = await Box.async_from_host(api_host)
    except UnsupportedBoxVersion as ex:
        return self.handle_step_exception(
            "user", ex, schema, *addr, UNSUPPORTED_VERSION, _LOGGER.debug
        )
    except Error as ex:
        return self.handle_step_exception(
            "user", ex, schema, *addr, CANNOT_CONNECT, _LOGGER.warning
        )
    except RuntimeError as ex:
        return self.handle_step_exception(
            "user", ex, schema, *addr, UNKNOWN, _LOGGER.error
        )

    # Check if configured but IP changed since
    await self.async_set_unique_id(product.unique_id, raise_on_progress=False)
    self._abort_if_unique_id_configured()

    return self.async_create_entry(title=product.name, data=user_input)
241
config_flow.py
Python
homeassistant/components/blebox/config_flow.py
c737378ee14c12f988118dc9d23f1fc0b1da8ea1
core
7
167,373
61
18
26
240
21
0
96
488
update_info
TYP: some return annotations in pytables.py (#47512)
https://github.com/pandas-dev/pandas.git
def update_info(self, info) -> None:
    for key in self._info_fields:
        value = getattr(self, key, None)
        idx = info.setdefault(self.name, {})

        existing_value = idx.get(key)
        if key in idx and value is not None and existing_value != value:
            # frequency/name just warn
            if key in ["freq", "index_name"]:
                ws = attribute_conflict_doc % (key, existing_value, value)
                warnings.warn(
                    ws, AttributeConflictWarning, stacklevel=find_stack_level()
                )

                # reset
                idx[key] = None
                setattr(self, key, None)
            else:
                raise ValueError(
                    f"invalid info for [{self.name}] for [{key}], "
                    f"existing_value [{existing_value}] conflicts with "
                    f"new value [{value}]"
                )
        else:
            if value is not None or existing_value is not None:
                idx[key] = value
141
pytables.py
Python
pandas/io/pytables.py
7d2f9b8d59908fbf57c6453bc41891efbfe981a6
pandas
8
176,408
4
7
2
21
3
0
4
18
out_degree
Updated MultiDiGraph documentation to include more examples of actually (#5387) using parallel edges, and fixed references to things like G[u, v] where G[u, v, k] is required for a MultiDigraph. Have not made parallel changes in MultiGraph which should maybe also be made? Docs tests pass on my end; no code outside of comments was changed. -Peter Mawhorter
https://github.com/networkx/networkx.git
def out_degree(self):
    return OutMultiDegreeView(self)
11
multidigraph.py
Python
networkx/classes/multidigraph.py
4d4cf1efd44326a858af33711cb0c631abc5105a
networkx
1
264,675
6
8
2
31
4
0
6
12
get_auth_backend_display
Closes #9123: Improve appearance of SSO login providers
https://github.com/netbox-community/netbox.git
def get_auth_backend_display(name):
    return AUTH_BACKEND_ATTRS.get(name, (name, None))
19
authentication.py
Python
netbox/netbox/authentication.py
d6df6b444f1bcc1b77b1b6ae6e726f3024e0abd4
netbox
1
45,743
24
10
9
101
10
0
30
94
unmap
More explicit mapped argument validation (#21933) * More explicit mapped argument validation Instead of always using MagicMock to validate mapped arguments, this implements a more sophisticated protocol that allows an operator to implement a 'validate_mapped_arguments' to provide custom validation logic. If an operator just wants to use __init__ for validation, however, they can set a flag 'mapped_arguments_validated_by_init' to get the behavior easily. (This does *not* use MagicMock, however, since any custom validation logic should be able to handle those on its own). The 'validate_mapped_arguments' flag is currently only set on PythonOperator. It can likely be used on a lot more operators down the road. * Add flag to distinguish a validation-only init There's just too much magic during a task's initialization that tries to add it into the dependency graph. This flag is needed to work around all that, I think.
https://github.com/apache/airflow.git
def unmap(self) -> "BaseOperator":
    dag = self.dag
    if not dag:
        raise RuntimeError("Cannot unmap a task without a DAG")
    dag._remove_task(self.task_id)
    if isinstance(self.operator_class, str):
        raise RuntimeError("Cannot unmap a deserialized operator")
    return self.operator_class(**self._get_unmap_kwargs())
57
mappedoperator.py
Python
airflow/models/mappedoperator.py
b65e52205a7045eb08d471289b85abda587442b7
airflow
3
154,520
26
11
12
150
17
0
39
143
apply
REFACTOR-#5009: use RayWrapper.materialize instead of ray.get (#5010) Signed-off-by: Myachev <[email protected]>
https://github.com/modin-project/modin.git
def apply(self, first, other, func, **kwargs):
    df1 = self.cudf_dataframe_dict[first]
    if not other:
        result = func(df1, **kwargs)
        return self.store_new_df(result)
    if not isinstance(other, int):
        assert isinstance(other, ray.ObjectRef)
        df2 = RayWrapper.materialize(other)
    else:
        df2 = self.cudf_dataframe_dict[other]
    result = func(df1, df2, **kwargs)
    return self.store_new_df(result)
97
gpu_manager.py
Python
modin/core/execution/ray/implementations/cudf_on_ray/partitioning/gpu_manager.py
1dc16415333bf2428ee2b1f4d31ff94e66b9a0a6
modin
3
270,593
10
8
2
60
11
1
10
21
get_default_mesh
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def get_default_mesh(self):
    return self._default_mesh


LayoutMap.get.__doc__ = LayoutMap.__getitem__.__doc__


@keras_export("keras.dtensor.experimental.layout_map_scope", v1=[])
@contextlib.contextmanager
@keras_export("keras.dtensor.experimental.layout_map_scope", v1=[]) @contextlib.contextmanager
10
layout_map.py
Python
keras/dtensor/layout_map.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
308,560
51
13
27
261
18
0
77
246
test_local_push_only
Allow mobile app registrations only supporting websocket push (#63208)
https://github.com/home-assistant/core.git
async def test_local_push_only(hass, hass_ws_client, setup_websocket_channel_only_push):
    with pytest.raises(HomeAssistantError) as e_info:
        assert await hass.services.async_call(
            "notify",
            "mobile_app_websocket_push_name",
            {"message": "Not connected"},
            blocking=True,
        )
    assert str(e_info.value) == "Device not connected to local push notifications"

    client = await hass_ws_client(hass)
    await client.send_json(
        {
            "id": 5,
            "type": "mobile_app/push_notification_channel",
            "webhook_id": "websocket-push-webhook-id",
        }
    )
    sub_result = await client.receive_json()
    assert sub_result["success"]

    assert await hass.services.async_call(
        "notify",
        "mobile_app_websocket_push_name",
        {"message": "Hello world 1"},
        blocking=True,
    )

    msg = await client.receive_json()
    assert msg == {"id": 5, "type": "event", "event": {"message": "Hello world 1"}}
143
test_notify.py
Python
tests/components/mobile_app/test_notify.py
ad8af5fc7a52a66a584bc31c535f100fb7c71919
core
1
241,648
50
16
11
303
25
1
63
220
test_error_raised_with_float_limited_eval_batches
Deprecate `TrainerDataLoadingMixin` and move logic to `DataConnector` (#11282) Co-authored-by: Rohit Gupta <[email protected]> Co-authored-by: Aki Nitta <[email protected]> Co-authored-by: Carlos Mocholí <[email protected]>
https://github.com/Lightning-AI/lightning.git
def test_error_raised_with_float_limited_eval_batches():
    model = BoringModel()
    dl_size = len(model.val_dataloader())
    limit_val_batches = 1 / (dl_size + 2)
    trainer = Trainer(limit_val_batches=limit_val_batches)
    trainer._data_connector.attach_data(model)
    with pytest.raises(
        MisconfigurationException,
        match=fr"{limit_val_batches} \* {dl_size} < 1. Please increase the `limit_val_batches`",
    ):
        trainer._data_connector._reset_eval_dataloader(RunningStage.VALIDATING, model)


@pytest.mark.parametrize(
    "val_dl",
    [
        DataLoader(dataset=RandomDataset(32, 64), shuffle=True),
        CombinedLoader(DataLoader(dataset=RandomDataset(32, 64), shuffle=True)),
        CombinedLoader(
            [DataLoader(dataset=RandomDataset(32, 64)), DataLoader(dataset=RandomDataset(32, 64), shuffle=True)]
        ),
        CombinedLoader(
            {
                "dl1": DataLoader(dataset=RandomDataset(32, 64)),
                "dl2": DataLoader(dataset=RandomDataset(32, 64), shuffle=True),
            }
        ),
    ],
)
@pytest.mark.parametrize(
    "val_dl",
    [
        DataLoader(dataset=RandomDataset(32, 64), shuffle=True),
        CombinedLoader(DataLoader(dataset=RandomDataset(32, 64), shuffle=True)),
        CombinedLoader(
            [DataLoader(dataset=RandomDataset(32, 64)), DataLoader(dataset=RandomDataset(32, 64), shuffle=True)]
        ),
        CombinedLoader(
            {
                "dl1": DataLoader(dataset=RandomDataset(32, 64)),
                "dl2": DataLoader(dataset=RandomDataset(32, 64), shuffle=True),
            }
        ),
    ],
)
71
test_data_loading.py
Python
tests/trainer/test_data_loading.py
5b59c951e28ddc8bb884f044b1f46fb54c23a8b8
lightning
1
8,060
23
11
6
99
15
0
30
45
_get_dataset_configs
Config-first Datasets API (ludwig.datasets refactor) (#2479) * Adds README and stub for reading dataset configs. * Adds __init__.py for configs, moves circular import into function scope in ludwig/datasets/__init__.py * Print config files in datasets folder. * First pass at automatic archive extraction. * Implemented downloading and extract. * Refactor DatasetConfig into its own file. * Fixed bugs downloading kaggle dataset. * Makes registry store dataset instances, not classes. Also comments out import_submodules for testing. * Typo fix. * Only pass data files on to load_unprocessed_dataframe, symlink directories. * Downloading dataset files into existing directory if exists. * Refactor: make datasets fully config-first, lazy load dataset loaders. * Implemented agnews custom loader. * Implements train/validation/test split by files, and globbing support * Adds _glob_multiple * Adds adult_census_income, agnews, allstate_claims_severity. * Implements sha256 verification, adds more datasets up to creditcard_fraud. * Adds checksums, dbpedia, electricity * Fixes gzip file name returned as string not list, adds up to forest_cover dataset. * Adds datasets up to reuters_r8 * Adds all datasets which don't require a custom class. * Restore dataset import behavior by implementing module __getattr__ * Adds KDD datasets. * Adds ieee_fraud. * Adds imbalanced_insurance, insurance_lite. * Adds mnist. * Completes implementation of all of the built-in datasets. * Made cache_dir optional, read from environment variable if set. * Upgrades datasets tests. * Adds test for new dataset config API. Also adds scripts for dataset link checking. * Fixes loading allstate claims severity dataset. * Use @lru_cache(1), @cache not supported in python < 3.9 * Deletes dataset registry, updates automl test utils * Fix imports of datasets API. * Adds more detail to sha256: docstring and basic README * Copy-paste link oops. * Fixes handling of nested archive types like .tar.bz Also adds a LUDWIG_CACHE and export to the README * Adds link for twitter bots. * Fix order of splits in README.md * typo * Adds verify as a phase in doc string. * Support .pqt, .pq extensions for parquet. * Handle nested archives with longer file extensions like .csv.zip * Handle nested .gz types properly too. Check all extensions with .endswith * Handle all archive types with .endswith * Update ludwig/datasets/loaders/split_loaders.py Co-authored-by: Joppe Geluykens <[email protected]> * Adds explanation for export, fixes preserve_paths (should be relative to processed_dataset_dir) * Resolve preserved paths relative to raw dataset dir before move. * Catch runtime exception from extracting sub-archives. Co-authored-by: Daniel Treiman <[email protected]> Co-authored-by: Joppe Geluykens <[email protected]>
https://github.com/ludwig-ai/ludwig.git
def _get_dataset_configs() -> Dict[str, DatasetConfig]:
    import importlib.resources

    config_files = [f for f in importlib.resources.contents(configs) if f.endswith(".yaml")]
    config_objects = [load_dataset_config(f) for f in config_files]
    return {c.name: c for c in config_objects}
63
__init__.py
Python
ludwig/datasets/__init__.py
e4fc06f986e03919d9aef3ab55c05fee5a6b9d3a
ludwig
5
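The loader above enumerates packaged YAML files with `importlib.resources.contents`. A stdlib-only illustration of the same listing call, with the `json` package standing in for `ludwig.datasets.configs`:

import importlib.resources

# Lists resources bundled inside a package; here we filter the json
# package's modules the way the loader filters .yaml config files.
files = [f for f in importlib.resources.contents("json") if f.endswith(".py")]
print(sorted(files))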
291,214
18
11
5
64
9
0
20
52
sound_mode_list
Bump to Arcam 1.0.1 and make strictly typed (#82487) * Make arcam_fmj strictly typed * Add test for invalid UDN
https://github.com/home-assistant/core.git
def sound_mode_list(self) -> list[str] | None:
    if (values := self._state.get_decode_modes()) is None:
        return None
    return [x.name for x in values]
40
media_player.py
Python
homeassistant/components/arcam_fmj/media_player.py
a55fb445b0ed4efd625227b4f13a01a0f469c358
core
3
288,186
17
9
7
74
9
0
19
65
wait_for_ble_connections_free
Wait for disconnect when we are out of connection ble slots in esphome (#79246)
https://github.com/home-assistant/core.git
async def wait_for_ble_connections_free(self) -> int:
    if self.ble_connections_free > 0:
        return self.ble_connections_free
    fut: asyncio.Future[int] = asyncio.Future()
    self._ble_connection_free_futures.append(fut)
    return await fut
44
entry_data.py
Python
homeassistant/components/esphome/entry_data.py
0b5289f7483dde5911f4a268233fea2ce3b417ff
core
2
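A self-contained sketch of the future-based handoff used in `wait_for_ble_connections_free`: the waiter parks on an `asyncio.Future` that other code resolves once a slot frees up. The timer callback here is a stand-in for the disconnect event:

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut: asyncio.Future[int] = loop.create_future()
    loop.call_later(0.1, fut.set_result, 1)  # simulate a freed slot
    print(await fut)  # -> 1

asyncio.run(main())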
110,032
11
13
4
77
11
0
11
64
update_from_data_y
Remove unnecessary np.{,as}array / astype calls. Quite often numpy will call asarray for us, saving us the need to call asarray explicitly. When we do call asarray (or array) ourselves, a dtype can directly be passed in, rather than immediately calling astype immediately after. Passing the dtype makes it unnecessary for asarray to infer the dtype of the passed-in container, and can also save an extra array allocation if asarray first has to allocate an array of a type and astype immediately has to allocate an array of another type.
https://github.com/matplotlib/matplotlib.git
def update_from_data_y(self, y, ignore=None):
    y = np.ravel(y)
    self.update_from_data_xy(np.column_stack([np.ones(y.size), y]),
                             ignore=ignore, updatex=False)
50
transforms.py
Python
lib/matplotlib/transforms.py
1068a6faa19767724437461bcfb88c6852ec435c
matplotlib
1
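The rationale in the commit message, shown concretely: passing `dtype` to `asarray` skips both the dtype-inference pass and the extra `astype` allocation.

import numpy as np

# Two allocations: asarray infers int64, then astype copies to float32.
a = np.asarray([1, 2, 3]).astype(np.float32)
# One allocation: the dtype is supplied up front.
b = np.asarray([1, 2, 3], dtype=np.float32)
assert a.dtype == b.dtype == np.float32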
87,234
5
7
2
28
3
0
5
11
generate_cache_key_for_observed_release
feat(ds): Implements release boosting functionality for ds [TET-496] (#40403) Sets releases that should be boosted with ds into the cache when a transaction is observed in the event manager. The logic is as follows once a transaction from a release that wasn't observed in the previous 24 hours is received, a cache key for that release is set with an expiration of one day and then that release is set into a list of boosted releases into the cache with an expiration of 1h, then the project config is invalidated so we recompute the project config with new dynamic sampling rule to boost that release with a hardcoded interval for one hour. If that release doesn't send any transactions in the next 24 hours i.e. after the 24 hour cache key expires and then starts sending transaction again, we want to start boosting the release again for an hour. This PR is one part of two parts, and only handles the setting of the cache and the invalidation of the project config, but does not include the dynamic sampling rules to be sent to relay. This is by design so we can merge this into production and monitor the performance impact of this logic before committing to adding the dynamic sampling rules As a follow up, add a PR that only runs this logic if the feature flags for dynamic sampling are enabled, however we want to merge this without that check to monitor production load
https://github.com/getsentry/sentry.git
def generate_cache_key_for_observed_release(project_id, release_id):
    return f"ds::p:{project_id}:r:{release_id}"
11
latest_release_booster.py
Python
src/sentry/dynamic_sampling/latest_release_booster.py
0fc7bab05d499d4df4faea2d11f49d2be8214776
sentry
1
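A hedged sketch of the observe-then-boost flow the commit message describes, built on the key format above. The `cache` interface, the boosted-releases key name, and the TTL constants are all illustrative, not Sentry's actual code:

ONE_DAY = 24 * 60 * 60
ONE_HOUR = 60 * 60

def observe_release(cache, project_id, release_id):
    # Sketch only: `cache` is any get/set store with TTL support.
    key = generate_cache_key_for_observed_release(project_id, release_id)
    if cache.get(key) is None:            # first transaction in 24 hours
        cache.set(key, 1, ttl=ONE_DAY)    # suppress re-boosting for a day
        boosted = cache.get(f"ds::p:{project_id}:boosted_releases") or []
        boosted.append(release_id)
        cache.set(f"ds::p:{project_id}:boosted_releases", boosted, ttl=ONE_HOUR)
        # ...then invalidate the project config so the boosting rule is recomputed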
101,208
79
13
15
213
22
0
98
301
_update_file_format
lib.align updates: - alignments.py - Add typed dicts for imported alignments - Explicitly check for presence of thumb value in alignments dict - linting - detected_face.py - Typing - Linting - Legacy support for pre-aligned face - Update dependencies to new property names
https://github.com/deepfakes/faceswap.git
def _update_file_format(self, folder, filename):
    logger.info("Reformatting legacy alignments file...")
    old_location = os.path.join(str(folder), filename)
    new_location = f"{os.path.splitext(old_location)[0]}.{self._serializer.file_extension}"
    if os.path.exists(old_location):
        if os.path.exists(new_location):
            logger.info("Using existing updated alignments file found at '%s'. If you do not "
                        "wish to use this existing file then you should delete or rename it.",
                        new_location)
        else:
            logger.info("Old location: '%s', New location: '%s'", old_location, new_location)
            load_serializer = get_serializer_from_filename(old_location)
            data = load_serializer.load(old_location)
            self._serializer.save(new_location, data)
    return os.path.basename(new_location)

# <Structure>
#
# Alignments were structured: {frame_name: <list of faces>}. We need to be able to store
# information at the frame level, so new structure is: {frame_name: {faces: <list of faces>}}
109
alignments.py
Python
lib/align/alignments.py
5e73437be47f2410439a3c6716de96354e6a0c94
faceswap
3
297,977
16
9
6
47
6
0
16
66
async_sync
String formatting and max line length - Part 5 (#84501) Co-authored-by: jjlawren <[email protected]>
https://github.com/home-assistant/core.git
async def async_sync(self, other_player):
    _LOGGER.warning(
        "Service squeezebox.sync is deprecated; use media_player.join_players"
        " instead"
    )
    await self.async_join_players([other_player])
24
media_player.py
Python
homeassistant/components/squeezebox/media_player.py
f39f3b612a8c1a12504f2f1d54fb1c9872216d12
core
1
10,842
38
12
8
59
10
0
48
134
get_worker_host
refactor: rename pod to deployment (#4230) * refactor: rename pod to deployment * style: fix overload and cli autocomplete * fix: undo daemon mistake * refactor: leftover cleanup * fix: more test fixes * fix: more fixes * fix: more fixes * fix: more fixes * fix: more tests * fix: fix more tests * refactor: fix more tests * refactor: more tests fixes * refactor: rename pea to pod * refactor: adjust docs * refactor: complete pea renaming * refactor: more fixes * fix: pea_type in k8s yamls * fix: adjust pod args name * refactor: rename peapods parser folder * fix: da init Co-authored-by: Jina Dev Bot <[email protected]>
https://github.com/jina-ai/jina.git
def get_worker_host(pod_args, pod, head_pod):
    # Check if the current pod and head are both containerized on the same host
    # If so __docker_host__ needs to be advertised as the worker's address to the head
    worker_host = (
        __docker_host__
        if Deployment._is_container_to_container(pod, head_pod)
        and host_is_local(pod_args.host)
        else pod_args.host
    )
    return worker_host
37
__init__.py
Python
jina/orchestrate/deployments/__init__.py
13edc16d806fb5d77a6849551178ccc75937f25f
jina
3
175,178
105
12
37
278
28
0
142
549
test_co_positions_artificial_instructions
bpo-46202: Remove opcode POP_EXCEPT_AND_RERAISE (GH-30302) * bpo-46202: remove opcode POP_EXCEPT_AND_RERAISE * do not assume that an exception group is truthy
https://github.com/python/cpython.git
def test_co_positions_artificial_instructions(self):
    import dis

    namespace = {}
    exec(textwrap.dedent(), namespace)

    exc = namespace['exc']
    traceback = exc.__traceback__
    code = traceback.tb_frame.f_code

    artificial_instructions = []
    for instr, positions in zip(
        dis.get_instructions(code),
        code.co_positions(),
        strict=True
    ):
        # If any of the positions is None, then all have to
        # be None as well for the case above. There are still
        # some places in the compiler, where the artificial instructions
        # get assigned the first_lineno but they don't have other positions.
        # There is no easy way of inferring them at that stage, so for now
        # we don't support it.
        self.assertTrue(positions.count(None) in [0, 4])

        if not any(positions):
            artificial_instructions.append(instr)

    self.assertEqual(
        [
            (instruction.opname, instruction.argval)
            for instruction in artificial_instructions
        ],
        [
            ("PUSH_EXC_INFO", None),
            ("LOAD_CONST", None),  # artificial 'None'
            ("STORE_NAME", "e"),   # XX: we know the location for this
            ("DELETE_NAME", "e"),
            ("RERAISE", 1),
            ("COPY", 3),
            ("POP_EXCEPT", None),
            ("RERAISE", 1)
        ]
    )
169
test_code.py
Python
Lib/test/test_code.py
a94461d7189d7f1147ab304a332c8684263dc17e
cpython
4
281,540
8
9
31
40
7
0
8
30
print_help
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finished removing rich from Tabulate * add Panel around menu * add GST watermark under feature flag * Fixed 46 tests * Delete test_screener[False].yaml * Delete test_screener[True].yaml * Fixed the rest of the tests * add help and source color vars and use rgb * rich on stocks/options * update rich on disc, dps, sia * rich in gov, ins and scr menus * ba and ca menus with rich * Fixed import issue * Fixed some tests * removed termcolor * Removed prettytable * add rich to remaining stocks menus * FIxed linting issue * Added James' changes * Updated dependencies * Add rich to cryptocurrency menu * refactor economy and forex * refactor etf with rich * refactor mfunds * refactor rich rest * not specify style so default color works well on any background * Fixing mypy issues * Updated tests * More test fixes * James' test fixes * Updating tests : stocks/screener - fix cassettes using BR * Updating tests : crypto * Updating tests : disable DEBUG_MODE * Updating tests : stocks/fa/yfinance * minor fixes that escape * Improve the rich table function (that replaces tabulate :D ) * Fixed bad code * delete rogue file + dcf fix + NoConsole * sia mypy * fuck you linter * fuck you linter pt 2 * skip hehe * i hate the black linter * ubuntu mypy attempt * Update : rich_config + gtff * Updating tests : conftest * Updating tests : stocks * Update : rich_config * Updating : rich_config * make panel configurable for Theodore :b * colors update * Merged * Updating : rich_config + feature_flags * Updating : rich_config * Updating tests : stocks * Updating : feature_flags Co-authored-by: DidierRLopes <[email protected]> Co-authored-by: Chavithra PARANA <[email protected]> Co-authored-by: james <[email protected]> Co-authored-by: jose-donato <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    help_text = 
    console.print(text=help_text, menu="Stocks - Discovery")
21
disc_controller.py
Python
gamestonk_terminal/stocks/discovery/disc_controller.py
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
1
142,459
35
10
7
81
9
0
38
107
task_id
[api] Annotate as public / move ray-core APIs to _private and add enforcement rule (#25695) Enable checking of the ray core module, excluding serve, workflows, and tune, in ./ci/lint/check_api_annotations.py. This required moving many files to ray._private and associated fixes.
https://github.com/ray-project/ray.git
def task_id(self):
    # only worker mode has actor_id
    assert (
        self.worker.mode == ray._private.worker.WORKER_MODE
    ), f"This method is only available when the process is a\
        worker. Current mode: {self.worker.mode}"
    task_id = self.worker.current_task_id
    return task_id if not task_id.is_nil() else None
43
runtime_context.py
Python
python/ray/runtime_context.py
43aa2299e6623c8f8c7c4a1b80133459d0aa68b0
ray
2
203,277
61
12
17
156
13
0
70
250
test_body_after_POST_multipart_form_data
Refs #33476 -- Refactored problematic code before reformatting by Black. In these cases Black produces unexpected results, e.g. def make_random_password( self, length=10, allowed_chars='abcdefghjkmnpqrstuvwxyz' 'ABCDEFGHJKLMNPQRSTUVWXYZ' '23456789', ): or cursor.execute(""" SELECT ... """, [table name], )
https://github.com/django/django.git
def test_body_after_POST_multipart_form_data(self):
    # Because multipart is used for large amounts of data i.e. file uploads,
    # we don't want the data held in memory twice, and we don't want to
    # silence the error by setting body = '' either.
    payload = FakePayload("\r\n".join([
        '--boundary',
        'Content-Disposition: form-data; name="name"',
        '',
        'value',
        '--boundary--'
    ]))
    request = WSGIRequest({
        'REQUEST_METHOD': 'POST',
        'CONTENT_TYPE': 'multipart/form-data; boundary=boundary',
        'CONTENT_LENGTH': len(payload),
        'wsgi.input': payload,
    })
    self.assertEqual(request.POST, {'name': ['value']})
    with self.assertRaises(RawPostDataException):
        request.body
80
tests.py
Python
tests/requests/tests.py
c5cd8783825b5f6384417dac5f3889b4210b7d08
django
1
297,694
26
13
12
165
12
0
37
150
update
Use UnitOfTemperature in integrations (e-h) (#84305)
https://github.com/home-assistant/core.git
def update(self) -> None:
    self.hddtemp.update()

    if self.hddtemp.data and self.disk in self.hddtemp.data:
        self._details = self.hddtemp.data[self.disk].split("|")
        self._attr_native_value = self._details[2]
        if self._details is not None and self._details[3] == "F":
            self._attr_native_unit_of_measurement = UnitOfTemperature.FAHRENHEIT
        else:
            self._attr_native_unit_of_measurement = UnitOfTemperature.CELSIUS
    else:
        self._attr_native_value = None
101
sensor.py
Python
homeassistant/components/hddtemp/sensor.py
9580c4f1ec5e45e5090d927792feea4ecf7c96e7
core
5
269,419
32
14
11
145
14
1
43
131
stack3
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def stack3(x, filters, blocks, stride1=2, groups=32, name=None):
    x = block3(x, filters, stride=stride1, groups=groups, name=name + "_block1")
    for i in range(2, blocks + 1):
        x = block3(
            x,
            filters,
            groups=groups,
            conv_shortcut=False,
            name=name + "_block" + str(i),
        )
    return x


@keras_export(
    "keras.applications.resnet50.ResNet50",
    "keras.applications.resnet.ResNet50",
    "keras.applications.ResNet50",
)
@keras_export(
    "keras.applications.resnet50.ResNet50",
    "keras.applications.resnet.ResNet50",
    "keras.applications.ResNet50",
)
86
resnet.py
Python
keras/applications/resnet.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
308,886
4
6
2
16
2
0
4
11
async_update_group_state
Simplify groups (#63477) * Simplify group * Rename async_update to async_update_group_state and mark it as callback * Simplify _async_start
https://github.com/home-assistant/core.git
def async_update_group_state(self) -> None:
8
__init__.py
Python
homeassistant/components/group/__init__.py
8bf8709d9928b714e70d32a383ba4e1a2849d353
core
1
268,806
43
8
7
78
9
0
56
123
add_locals
Simplify AnsibleJ2Vars by using ChainMap for vars (#78713) Co-authored-by: Matt Martz <[email protected]>
https://github.com/ansible/ansible.git
def add_locals(self, locals):
    if locals is None:
        return self

    current_locals = self.maps[0]
    current_globals = self.maps[2]

    # prior to version 2.9, locals contained all of the vars and not just the current
    # local vars so this was not necessary for locals to propagate down to nested includes
    new_locals = current_locals | locals

    return AnsibleJ2Vars(self._templar, current_globals, locals=new_locals)
49
vars.py
Python
lib/ansible/template/vars.py
60f76436c144a08aa6b74bfefd559ac0188202f6
ansible
2
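`add_locals` relies on `AnsibleJ2Vars` being a `ChainMap` whose first map holds locals and whose third holds globals (what sits in between is the class's concern, not shown here). A small stdlib demonstration of that layering and of the `|` dict union it uses:

from collections import ChainMap

# Locals (maps[0]) shadow globals (maps[2]), which is the lookup order
# a ChainMap provides for free.
local_vars, middle, global_vars = {"x": 1}, {}, {"x": 0, "y": 2}
cm = ChainMap(local_vars, middle, global_vars)
assert cm["x"] == 1 and cm["y"] == 2

new_locals = cm.maps[0] | {"z": 3}  # dict union, as in add_locals
assert new_locals == {"x": 1, "z": 3}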
104,587
36
12
24
131
17
0
42
63
_parse_and_clean_wikicode
Improve Wikipedia Loading Script (#3435) * Improve Wikipedia Loading Script (#3400) * More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for new link text cleaning step * Remove magic words (parser directions like __TOC__ that occasionally occur in text) With support from @albertvillanova * Update wikipedia.py Co-authored-by: Albert Villanova del Moral <[email protected]>
https://github.com/huggingface/datasets.git
def _parse_and_clean_wikicode(raw_content, parser, language):
    wikicode = parser.parse(raw_content)

    # Filters for magic words that are parser instructions -- e.g., __NOTOC__
    re_rm_magic = re.compile("__[A-Z]*__", flags=re.UNICODE)

    # Filters for file/image links.
    media_prefixes = "|".join(["File", "Image", "Media"] + MEDIA_ALIASES.get(language, []))
    re_rm_wikilink = re.compile(f"^(?:{media_prefixes}):", flags=re.IGNORECASE | re.UNICODE)
236
wikipedia.py
Python
datasets/wikipedia/wikipedia.py
7e30308f49f8c85dc7a2ab5aafbff04b5d2f38e2
datasets
6
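A quick check of the media-link regex constructed above, using only the base prefixes (the `MEDIA_ALIASES` table would extend these per language):

import re

media_prefixes = "|".join(["File", "Image", "Media"])
re_rm_wikilink = re.compile(f"^(?:{media_prefixes}):", flags=re.IGNORECASE | re.UNICODE)

assert re_rm_wikilink.match("File:Example.jpg")
assert re_rm_wikilink.match("image:foo.png")  # case-insensitive
assert not re_rm_wikilink.match("Category:Physics")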
275,541
18
15
12
110
15
0
20
188
iterations
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def iterations(self):
    if self._iterations is None:
        with self._distribution_strategy_scope():
            self._iterations = self.add_weight(
                "iter",
                shape=[],
                dtype=tf.int64,
                trainable=False,
                aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA,
            )
        self._weights.append(self._iterations)
    return self._iterations
68
optimizer_v2.py
Python
keras/optimizers/optimizer_v2/optimizer_v2.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
319,942
108
20
43
463
56
0
141
662
update_document_archive_file
Implements a better re-do of OCR by making the document archiver function common. Actually creates updated file now
https://github.com/paperless-ngx/paperless-ngx.git
def update_document_archive_file(document_id):
    document = Document.objects.get(id=document_id)

    mime_type = document.mime_type

    parser_class: Type[DocumentParser] = get_parser_class_for_mime_type(mime_type)

    if not parser_class:
        logger.error(
            f"No parser found for mime type {mime_type}, cannot "
            f"archive document {document} (ID: {document_id})",
        )
        return

    parser: DocumentParser = parser_class(logging_group=uuid.uuid4())

    try:
        parser.parse(document.source_path, mime_type, document.get_public_filename())

        thumbnail = parser.get_thumbnail(
            document.source_path,
            mime_type,
            document.get_public_filename(),
        )

        if parser.get_archive_path():
            with transaction.atomic():
                with open(parser.get_archive_path(), "rb") as f:
                    checksum = hashlib.md5(f.read()).hexdigest()
                # I'm going to save first so that in case the file move
                # fails, the database is rolled back.
                # We also don't use save() since that triggers the filehandling
                # logic, and we don't want that yet (file not yet in place)
                document.archive_filename = generate_unique_filename(
                    document,
                    archive_filename=True,
                )
                Document.objects.filter(pk=document.pk).update(
                    archive_checksum=checksum,
                    content=parser.get_text(),
                    archive_filename=document.archive_filename,
                )
                with FileLock(settings.MEDIA_LOCK):
                    create_source_path_directory(document.archive_path)
                    shutil.move(parser.get_archive_path(), document.archive_path)
                    shutil.move(thumbnail, document.thumbnail_path)

        with index.open_index_writer() as writer:
            index.update_document(writer, document)

    except Exception:
        logger.exception(
            f"Error while parsing document {document} "
            f"(ID: {document_id})",
        )
    finally:
        parser.cleanup()
266
tasks.py
Python
src/documents/tasks.py
ab761e837c4be4974f699c8c97560a4291a8d298
paperless-ngx
5
181,659
8
9
6
38
6
0
8
38
test_sparse1_with_non_sparse_components
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_sparse1_with_non_sparse_components():
    fit_then_transform(
        sparse1_paratial_1h.todense(),
        sparse1,
        categorical_features=[True, False]
    )
23
one_hot_encoder_tests.py
Python
tests/one_hot_encoder_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
267,386
20
11
3
56
8
0
22
35
get_generic_type
ansible-test - Code cleanup. This helps prepare for a future pylint upgrade.
https://github.com/ansible/ansible.git
def get_generic_type(base_type, generic_base_type):  # type: (t.Type, t.Type[TValue]) -> t.Optional[t.Type[TValue]]
    # noinspection PyUnresolvedReferences
    type_arg = t.get_args(base_type.__orig_bases__[0])[0]
    return None if isinstance(type_arg, generic_base_type) else type_arg
35
util.py
Python
test/lib/ansible_test/_internal/util.py
86779cc90376ea70bafa7044b12ce5132409fd63
ansible
2
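What `t.get_args(base_type.__orig_bases__[0])` digs out, shown in isolation with a minimal class hierarchy:

import typing as t

TValue = t.TypeVar("TValue")

class Base(t.Generic[TValue]):
    pass

class Concrete(Base[int]):
    pass

# __orig_bases__ preserves the parametrized base class, so get_args
# recovers the concrete type argument.
assert t.get_args(Concrete.__orig_bases__[0]) == (int,)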
166,227
4
8
2
23
3
0
4
18
__dlpack__
ENH: Implement DataFrame interchange protocol (#46141)
https://github.com/pandas-dev/pandas.git
def __dlpack__(self):
    raise NotImplementedError("__dlpack__")
11
dataframe_protocol.py
Python
pandas/core/exchange/dataframe_protocol.py
90140f055892a46f473bd26affab88a7f171e394
pandas
1
82,287
46
15
22
232
21
0
71
314
for_page
Enabled isort workflow (#7200) * Ran isort * Enabled isort workflow Co-authored-by: Vinit Kumar <[email protected]>
https://github.com/django-cms/django-cms.git
def for_page(self, page):
    # permissions should be managed on the draft page only
    from cms.models import (
        ACCESS_CHILDREN,
        ACCESS_DESCENDANTS,
        ACCESS_PAGE,
        ACCESS_PAGE_AND_CHILDREN,
        ACCESS_PAGE_AND_DESCENDANTS,
    )
    page = page.get_draft_object()
    paths = page.node.get_ancestor_paths()
    # Ancestors
    query = (
        Q(page__node__path__in=paths)
        & (
            Q(grant_on=ACCESS_DESCENDANTS)
            | Q(grant_on=ACCESS_PAGE_AND_DESCENDANTS)
        )
    )

    if page.parent_page:
        # Direct parent
        query |= (
            Q(page=page.parent_page)
            & (
                Q(grant_on=ACCESS_CHILDREN)
                | Q(grant_on=ACCESS_PAGE_AND_CHILDREN)
            )
        )

    query |= Q(page=page) & (
        Q(grant_on=ACCESS_PAGE_AND_DESCENDANTS)
        | Q(grant_on=ACCESS_PAGE_AND_CHILDREN)
        | Q(grant_on=ACCESS_PAGE)
    )
    return self.filter(query).order_by('page__node__depth')
143
managers.py
Python
cms/models/managers.py
a3110e1ff24085373898c7d2a85f628abeb8518d
django-cms
2
20,773
27
15
20
105
10
0
33
250
position_cursor
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def position_cursor(self) -> Control:
    if self._shape is not None:
        _, height = self._shape
        return Control(
            ControlType.CARRIAGE_RETURN,
            (ControlType.ERASE_IN_LINE, 2),
            *(
                (
                    (ControlType.CURSOR_UP, 1),
                    (ControlType.ERASE_IN_LINE, 2),
                )
                * (height - 1)
            )
        )
    return Control()
70
live_render.py
Python
pipenv/patched/notpip/_vendor/rich/live_render.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2