question_id (int64, 59.5M to 79.4M) | creation_date (string, 8 to 10 chars) | link (string, 60 to 163 chars) | question (string, 53 to 28.9k chars) | accepted_answer (string, 26 to 29.3k chars) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482) |
---|---|---|---|---|---|---|
75,709,741 | 2023-3-11 | https://stackoverflow.com/questions/75709741/pydantic-nested-setting-objects-load-env-variables-from-file | Using pydantic setting management, how can I load env variables on nested setting objects on a main settings class? In the code below, the sub_field env variable field doesn't get loaded. field_one and field_two load fine. How can I load an environment file so the values are propagated down to the nested sub_settings object? from typing import Optional from pydantic import BaseSettings, Field class SubSettings(BaseSettings): sub_field: Optional[str] = Field(None, env='SUB_FIELD') class Settings(BaseSettings): field_one: Optional[str] = Field(None, env='FIELD_ONE') field_two: Optional[int] = Field(None, env='FIELD_TWO') sub_settings: SubSettings = SubSettings() settings = Settings(_env_file='local.env') | There are some examples of nested loading of pydantic env variables in the docs. Option 1 If you're willing to adjust your variable names, one strategy is to use env_nested_delimiter to denote nested fields. This appears to be the way that pydantic expects nested settings to be loaded, so it should be preferred when possible. So with a local.env like this: FIELD_ONE=one FIELD_TWO=2 SUB_SETTINGS__SUB_FIELD=value You should be able to load the settings in this way from typing import Optional from pydantic import BaseModel, BaseSettings class SubSettings(BaseModel): # ^ Note that this inherits from BaseModel, not BaseSettings sub_field: Optional[str] class Settings(BaseSettings): field_one: Optional[str] field_two: Optional[int] sub_settings: SubSettings class Config: env_nested_delimiter = '__' Alternate If you don't want to use the env_nested_delimiter functionality, you could load both sets of settings from the same local.env file. Then pass the loaded SubSettings to Settings directly. This can be done by overriding the Settings class __init__ method With this local.env FIELD_ONE=one FIELD_TWO=2 SUB_FIELD=value use the following to load the settings from typing import Optional from pydantic import BaseModel, BaseSettings, Field class SubSettings(BaseSettings): sub_field: Optional[str] class Settings(BaseSettings): field_one: Optional[str] field_two: Optional[int] sub_settings: SubSettings def __init__(self, *args, **kwargs): kwargs['sub_settings'] = SubSettings(_env_file=kwargs['_env_file']) super().__init__(*args, **kwargs) settings = Settings(_env_file='local.env') | 6 | 8 |
75,707,605 | 2023-3-11 | https://stackoverflow.com/questions/75707605/cant-get-telegram-bot-to-send-messages-to-user-and-run-polling-loop-at-the-same | I'm trying to build a telegram bot using the telegram.ext.Application class of python-telegram-bot and I need the bot to send periodic messages to a specific user while also running its polling loop to receive incoming messages from other users. I've done a lot of searching and tried various combinations but I can't seem to get it to work. This is the code I have so far (I removed message handlers to give you the minimum possible code example) import asyncio import logging from telegram.ext import Application token = '1111111111:AAAoOOV_Aaa1aaAAAaA1AA11A1aAaAAaAAA' user_id = 0000000000 application = Application.builder().token(token).build() async def send_message_to_user(user_id, message_text): try: await application.bot.send_message(chat_id=user_id, text=message_text) except Exception as e: logging.error(f"Error while sending message to user {user_id}: {e}") async def main() -> None: await send_message_to_user(user_id, "hello user, dont forget to get your pills") application.run_polling() if __name__ == "__main__": asyncio.run(main()) The send_message_to_user coroutine works fine on its own, but when I try to combine it with the run_polling method (with or without await before it), I get this error RuntimeError("Cannot close a running event loop") RuntimeError: Cannot close a running event loop sys:1: RuntimeWarning: coroutine 'Application.shutdown' was never awaited sys:1: RuntimeWarning: coroutine 'Application.initialize' was never awaited I think run_polling has its own event loop and can't run asynchronously, but unfortunately I'm not experienced with asynchronous programming, so can anyone help me figure out how to send periodic messages to a specific user while also running a telegram bot's polling loop? Thanks in advance for your help! | python-telegram-bot comes with the JobQueue, a built-in scheduling interface that allows you to run repeating tasks seamlessly within the framework. You might also be interested in this wiki section, which covers the general use case of running some asyncio logic in parallel to the bot. Disclaimer: I'm currently the maintainer of python-telegram-bot. | 3 | 4 |
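A minimal sketch of the JobQueue approach described in the accepted answer above, assuming python-telegram-bot v20+ installed with the job-queue extra (`pip install "python-telegram-bot[job-queue]"`); the token, chat id and interval are placeholders, and exact parameter names may differ slightly between releases:

```python
# Hypothetical values: replace the token and chat id with your own.
from telegram.ext import Application, ContextTypes

TOKEN = "YOUR_BOT_TOKEN"
USER_ID = 123456789  # placeholder chat id

async def remind(context: ContextTypes.DEFAULT_TYPE) -> None:
    # Runs inside the bot's own event loop via the job queue.
    await context.bot.send_message(chat_id=context.job.chat_id,
                                   text="hello user, don't forget to take your pills")

def main() -> None:
    application = Application.builder().token(TOKEN).build()
    # Send the reminder every hour, starting 10 seconds after startup.
    application.job_queue.run_repeating(remind, interval=3600, first=10, chat_id=USER_ID)
    # run_polling() drives both update handling and the scheduled job.
    application.run_polling()

if __name__ == "__main__":
    main()
```

Because the job runs on the event loop that run_polling() manages, this avoids the nested-event-loop RuntimeError from the question.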
75,704,418 | 2023-3-11 | https://stackoverflow.com/questions/75704418/parallelization-of-un-bzipping-millions-of-files | I have millions of compressed .bz2 files which I need to uncompress. Can uncompression be parallelized? I have access to a server with many cpu cores for the purpose. I worked with the following code which is correct but it is extremely slow. import os, glob, bz2, shutil files = glob.glob("/data01/*.bz2") for fi in files: fo = fi[:-4] with bz2.BZ2File(fi) as fr, open(fo, "wb") as fw: shutil.copyfileobj(fr, fw) | Multithreading would be ideal for this because it's primarily IO-bound. from concurrent.futures import ThreadPoolExecutor import glob import bz2 import shutil def process(filename): with bz2.BZ2File(filename) as fr, open(filename[:-4], "wb") as fw: shutil.copyfileobj(fr, fw) def main(): with ThreadPoolExecutor() as tpe: tpe.map(process, glob.glob('/data01/*.bz2')) if __name__ == '__main__': main() | 3 | 2 |
75,671,038 | 2023-3-8 | https://stackoverflow.com/questions/75671038/pytest-manually-add-test-to-discovered-tests | # tests/test_assert.py @pytest.mark.mymark def custom_assert(): assert True How do I force pytest to discover this test? In general, how do I dynamically add any test to pytest's list of discovered tests, even if they don't fit in the naming convention? | pytest is fairly customisable, but you'll have to look at its extensive API. Luckily, the code base is statically typed, so you can navigate from functions and classes to other functions and classes fairly easily. To start off, it pays to understand how pytest discovers tests. Recall the configurable discovery naming conventions: # content of pytest.ini # Example 1: have pytest look for "check" instead of "test" [pytest] python_files = check_*.py python_classes = Check python_functions = *_check This implies that, for example, the value to python_functions is used somewhere to filter out functions that are not considered as test functions. Do a quick search on the pytest repository to see this: class PyCollector(PyobjMixin, nodes.Collector): def funcnamefilter(self, name: str) -> bool: return self._matches_prefix_or_glob_option("python_functions", name) PyCollector is a base class for pytest Module objects, and module_: pytest.Module has an obj property which is the types.ModuleType object itself. Along with access to the funcnamefilter::name parameter, you can make a subclass of pytest.Module, pytest.Package, and pytest.Class to override funcnamefilter to accept functions decorated your custom @pytest.mark.mymark decorator as test functions: from __future__ import annotations import types import typing as t import pytest # Static-type-friendliness if t.TYPE_CHECKING: from _pytest.python import PyCollector class _MarkDecorated(t.Protocol): pytestmark: list[pytest.Mark] def __call__(self, *args: object, **kwargs: object) -> None: """Test function callback method""" else: PyCollector: t.TypeAlias = object def _isPytestMarkDecorated(obj: object) -> t.TypeGuard[_MarkDecorated]: """ Decorating `@pytest.mark.mymark` over a function results in this: >>> @pytest.mark.mymark ... def f() -> None: ... pass ... >>> f.pytestmark [Mark(name='mymark', args=(), kwargs={})] where `Mark` is `pytest.Mark`. This function provides a type guard for static typing purposes. """ if ( callable(obj) and hasattr(obj, "pytestmark") and isinstance(obj.pytestmark, list) ): return True return False class _MyMarkMixin(PyCollector): def funcnamefilter(self, name: str) -> bool: underlying_py_obj: object = self.obj assert isinstance(underlying_py_obj, (types.ModuleType, type)) func: object = getattr(underlying_py_obj, name) if _isPytestMarkDecorated(func) and any( mark.name == "mymark" for mark in func.pytestmark ): return True return super().funcnamefilter(name) class MyMarkModule(_MyMarkMixin, pytest.Module): pass The last thing to do is to configure pytest to use your MyMarkModule rather than pytest.Module when collecting test modules. 
You can do this with the per-directory plugin module file conftest.py, where you would override the hook pytest.pycollect.makemodule (please see pytest's implementation on how to write this properly): # conftest.py import typing as t from <...> import MyMarkModule if t.TYPE_CHECKING: import pathlib import pytest def pytest_pycollect_makemodule( module_path: pathlib.Path, parent: object ) -> pytest.Module | None: if module_path.name != "__init__.py": return MyMarkModule.from_parent(parent, path=module_path) # type: ignore[no-any-return] Now you can run pytest <your test file> and you should see all @pytest.mark.mymark functions run as test functions, regardless of whether they're named according to the pytest_functions configuration setting. This is a start on what you need to do, and can do with pytest. You'll have to do this with pytest.Class and pytest.Package as well, if you're planning on using @pytest.mark.mymark elsewhere. | 4 | 4 |
75,698,393 | 2023-3-10 | https://stackoverflow.com/questions/75698393/create-a-raster-from-points-gpd-geodataframe-object-in-python-3-6 | I want to create a raster file (.tif) from a points file using a geopandas.geodataframe.GeoDataFrame object. My dataframe has two columns: [geometry] and [Value]. The goal is to make a 10m resolution raster in [geometry] point with the [Value] value. My dataset is: geometry | Value 0 | POINT (520595.000 5720335.000) | 536.678345 1 | POINT (520605.000 5720335.000) | 637.052185 2 | POINT (520615.000 5720335.000) | 1230.553955 3 | POINT (520625.000 5720335.000) | 944.970642 4 | POINT (520635.000 5720335.000) | 1094.613281 5 | POINT (520645.000 5720335.000) | 1123.185181 6 | POINT (520655.000 5720335.000) | 849.37634 7 | POINT (520665.000 5720335.000) | 1333.459839 8 | POINT (520675.000 5720335.000) | 492.866608 9 | POINT (520685.000 5720335.000) | 960.957214 10 | POINT (520695.000 5720335.000) | 539.401978 11 | POINT (520705.000 5720335.000) | 573.015625 12 | POINT (520715.000 5720335.000) | 970.386536 13 | POINT (520725.000 5720335.000) | 390.315094 14 | POINT (520735.000 5720335.000) | 642.036865 I have tried before and so, I know that with from geocube.api.core import make_geocube I could do it, but due to some libraries I have a limitation and I cannot use make_geocube. Any idea? | Assign x and y columns, convert to xarray, then export to tiff using rioxarray: # do this before sending to xarray # to ensure extension is loaded import rioxarray # assuming your GeoDataFrame is called `gdf` gdf["x"] = gdf.x gdf["y"] = gdf.y da = ( gdf.set_index(["y", "x"]) .Value .to_xarray() ) da.rio.to_raster("myfile.tif") In order for this to work, the points must make up a full regular grid, with values of x and y each repeated for each combination. If this is instead just a collection of arbitrary points converting to xarray with x and y as perpendicular dimensions will explode your memory and the result will be almost entirely NaNs. | 4 | 6 |
75,702,760 | 2023-3-11 | https://stackoverflow.com/questions/75702760/how-to-call-an-imported-module-knowing-its-name-as-a-string | I am writing an application that should test the solution of many students. I have structure like this app/ students/ A/ lab1/ solution.py lab2 solution.py B/ lab1/ solution.py C/ test.py I want to import the solution file in the testing module and run the main method. I know there is __import__ function that returns a module object, but I don't know how to call main method. My code: import os def get_student_names(): return os.listdir('students') def test(name, lab): if lab not in os.listdir('/'.join(['students',name])): return f"{lab} not found" if 'solution.py' not in os.listdir('/'.join(['students',name,lab])): return "solution file not found" module = __import__('.'.join(['students',name,lab,'solution'])) # module.name.lab.solution.main() def main(): student_names = get_student_names() for student in student_names: result = test(student, "lab1") I want to call module.name.lab.solution.main() but the module is students. How to switch to solution submodule? | Your most immediate problem is that you aren't returning anything from test so, result will always be None. The second and most important problem you have is that you don't really have any modules. You actually have a bunch of random folders with arbitrary python scripts in them. This means __import__ is going to be quite unfriendly to you. if we run this line __import__('students.A.lab1.solution').main() we get back this error: AttributeError: module 'students' has no attribute 'main' students? We're supposed to be in solution! That's a problem. There is a way to do this without worrying about converting and maintaining your individual folders as actual modules. First, students must call main in their solution. ex: def main(): print('hello, world') main() Secondly, run their script. If the command fails it is going to tell you the script doesn't exist and move on to the next student. import os #change this to however you call python PY = 'python3' #lab format strings LAB1 = '{} "students/{}/lab1/solution.py"' LAB2 = '{} "students/{}/lab2/solution.py"' def main(lab:str) -> None: for student in os.listdir('students'): os.system(lab.format(PY, student)) main(LAB1) As highlighted by @KarlKnetchel, the above code has all of the same vulnerabilities as your original concept. This is NOT safe. You should definitely consider running this in a sandbox, if you use it. | 5 | 3 |
75,701,643 | 2023-3-10 | https://stackoverflow.com/questions/75701643/remove-an-element-from-a-set-and-return-the-set | I need to iterate over a copy of a set that doesn't contain a certain element. Until now I'm doing this: for element in myset: if element != myelement: ... But I want something like this, in one line: for element in myset.copy().remove(myelement): ... Obviously this doesn't work because the remove method returns None. | Use the set difference operator. for element in myset - {myelement}: ... This creates a new set containing the elements of myset that aren't in {myelement} (namely, myelement itself). | 6 | 12 |
75,699,024 | 2023-3-10 | https://stackoverflow.com/questions/75699024/finding-the-centroid-of-a-polygon-in-python | I want to calculate the centroid of a figure formed by the points: (0,0), (70,0), (70,25), (45, 45), (45, 180), (95, 188), (95, 200), (-25, 200), (-25,188), (25,180), (25,45), (0, 25), (0,0). I know that the correct result for the centroid of this polygon is x = 35 and y = 100.4615 (source), but the code below does not return the correct values (figure of the polygon below). import numpy as np points = np.array([(0,0), (70,0), (70,25), (45,45), (45,180), (95,188), (95,200), (-25,200), (-25, 188), (25,180), (25,45), (0,25), (0,0)]) centroid = np.mean(points, axis=0) print("Centroid:", centroid) Output: Centroid: [32.30769231 98.15384615] How can I correctly calculate the centroid of the polygon? | Fixed: def centroid(vertices): x, y = 0, 0 n = len(vertices) signed_area = 0 for i in range(len(vertices)): x0, y0 = vertices[i] x1, y1 = vertices[(i + 1) % n] # shoelace formula area = (x0 * y1) - (x1 * y0) signed_area += area x += (x0 + x1) * area y += (y0 + y1) * area signed_area *= 0.5 x /= 6 * signed_area y /= 6 * signed_area return x, y x, y = centroid(vertices) print(x, y) Produces: 35.0 100.46145124716553 | 6 | 5 |
75,696,056 | 2023-3-10 | https://stackoverflow.com/questions/75696056/pythons-inspect-getsource-throws-error-if-used-in-a-decorator | I have the following function def foo(): for _ in range(1): print("hello") Now I want to add another print statement to print "Loop iterated" after every loop iteration. For this I define a new function that transforms foo into an ast tree, inserts the corresponding print node and then compiles the ast tree into an executable function: def modify(func): def wrapper(): source_code = inspect.getsource(func) ast_tree = ast.parse(source_code) # insert new node into ast tree for node in ast.walk(ast_tree): if isinstance(node, ast.For): node.body += ast.parse("print('Loop iterated')").body # get the compiled function new_func = compile(ast_tree, '<ast>', 'exec') namespace = {} exec(new_func, globals(), namespace) new_func = namespace[func.__name__] return new_func() return wrapper This works fine as expected when using: foo = modify(foo) foo() However, if I decide to use modify as a decorator: @modify def foo(): for _ in range(1): print("hello") foo() I get the following error: Traceback (most recent call last): File "c:\Users\noinn\Documents\decorator_test\test.py", line 34, in <module> foo() File "c:\Users\noinn\Documents\decorator_test\test.py", line 25, in wrapper return new_func() File "c:\Users\noinn\Documents\decorator_test\test.py", line 11, in wrapper source_code = inspect.getsource(func) File "C:\Users\noinn\AppData\Local\Programs\Python\Python39\lib\inspect.py", line 1024, in getsource lines, lnum = getsourcelines(object) File "C:\Users\noinn\AppData\Local\Programs\Python\Python39\lib\inspect.py", line 1006, in getsourcelines lines, lnum = findsource(object) File "C:\Users\noinn\AppData\Local\Programs\Python\Python39\lib\inspect.py", line 835, in findsource raise OSError('could not get source code') OSError: could not get source code Does anyone know why that error appears? Note that this does not happen If I return the original function and the error only appears once new_func() is called. ------------------------- Solution ---------------------- Simply remove the decorator from the function in the decorator itself using: ast_tree.body[0].decorator_list = [] | After some experimenting, I found out my initial hypothesis is incorrect: inspect.getsource is smart enough to retrieve the source code even if the function name is not yet set in the module globals. (surprisingly). What happens is that the source code is retrieved along with the decorator calls as well, and when the function is called, the decorator runs again - so, it gets some re-entrancy in inspect.getsource, at which points it fails. The solution bellow work: I just strip decorators at the module level in the retrieved source code, before feeding it to ast.parse I also rearranged your decorator, as it would re-read the source code, and reparse the AST at each time the decorated function would be called - the way this example is, there is no need for an inner wrapper function at all. 
If you happen to need to parametrize your decorator, and need the inner wrapper, all the function re-writting parts, up to the creation of new_func, should be outside the wrapper, so that they run only once import inspect import ast import functools def modify(func): func.__globals__[func.__name__] = func source_lines = inspect.getsource(func).splitlines() # strip decorators from the source itself source_code = "\n".join(line for line in source_lines if not line.startswith('@')) ast_tree = ast.parse(source_code) # insert new node into ast tree for node in ast.walk(ast_tree): if isinstance(node, ast.For): node.body += ast.parse("print('Loop iterated')").body # get the compiled function new_func = compile(ast_tree, '<ast>', 'exec') namespace = {} exec(new_func, func.__globals__, namespace) new_func = namespace[func.__name__] return functools.wraps(func)(new_func) @modify def foo(): for _ in range(1): print("hello") #foo = modify(foo) foo() initial answer Left here for the reasoning and simpler workaround: it can't get the source of <module>.foo function due to the simple fact that the name foo will only be defined and bound after the function definition, including all its decorators, is executed. The name foo simply does not exist as a variable in the module at the point the decorator code is run, although it is set as the __name__ attribute in func at this point. There is nothing inspect.getsource can do about it. In other words, decorator syntax is altogether off-limits if one is using inspect.getsource because there is nothing yet getsource can get the source of. The good news is the workaround is simple: one have to give-up the @ decorator syntax, and apply the decorator "in the old way" (as it was before Python 2.3): one declares the function as usual, and reassign its name to the return of manually calling the decorator - this is the same as is written in your working example: def foo(...): ... foo = modify(foo) | 3 | 2 |
75,690,784 | 2023-3-9 | https://stackoverflow.com/questions/75690784/polars-for-python-how-to-get-rid-of-ensure-you-pass-a-path-to-the-file-instead | The statement I'm reading data sets using Polars.read_csv() method via a Python file handler: with gzip.open(os.path.join(getParameters()['rdir'], dataset)) as compressed_file: df = pl.read_csv(compressed_file, sep = '\t', ignore_errors=True) A performance warning keeps popping up: Polars found a filename. Ensure you pass a path to the file instead of a python file object when possible for best performance. Possible solutions I already tried Python warning suppression, but it seems Polars literally just prints out this statement without any default warning associated. Another possibility would be to read using non-handler methods? Any ideas on how to get rid of this annoying message will be highly appreciated. | I had a similar issue with opening from a ZipFile object. The solution was to add a .read() method to the filename. Maybe the same would work in your case? with gzip.open(os.path.join(getParameters()['rdir'], dataset)) as compressed_file: df = pl.read_csv(compressed_file.read(), sep = '\t', ignore_errors=True) | 13 | 6 |
75,690,334 | 2023-3-9 | https://stackoverflow.com/questions/75690334/replace-two-adjacent-duplicate-characters-in-string-with-the-next-character-in-a | I want to write a function, that will find the first occurrence of two adjacent characters that are the same, replace them with a single character that is next in the alphabet and go over the string until there are no duplicates left. In case of "zz" it should go in a circular fashion back to "a". The string can only include characters a-z, that is, no capital letters or non-alphabetical characters. I have written a function that does it, but it is not effective enough for a very long string. def solve(s): i = 1 while i < len(s): if s[i] == s[i-1]: r = s[i+1:] l = s[:i-1] if s[i] == "z": x = "a" else: x = chr(ord(s[i])+1) i = 1 s = l+x+r else: i += 1 return s So for example if s = 'aabbbc' the function should work like aabbbc --> bbbbc --> cbbc --> ccc and finally return dc. How can I make it more efficient? Edit: for example if s = 'ab'*10**4 + 'cc'*10**4 + 'dd'*10**4 this function is taking a lot of time. | As a trivial optimisation: instead of the hard reset i = 1, you can use a softer reset i = i-1. Indeed, if there was no duplicate between 1 and i before the transformation, then there still won't be any duplicate between 1 and i-1 after the transformation. def solve(s): i = 1 while i < len(s): if s[i] == s[i-1]: l = s[:i-1] r = s[i+1:] # I swapped these two lines because I read left-to-right if s[i] == "z": x = "a" else: x = chr(ord(s[i])+1) s = l+x+r i = max(1, i-1) # important change is here else: i += 1 return s s = 'ab'*10**4 + 'cc'*10**4 + 'dd'*10**4 t = solve(s) print(t[2*10**4:]) # rqpnlih print(t == 'ab'*10**4 + 'rqpnlih') # True | 4 | 2 |
75,677,991 | 2023-3-8 | https://stackoverflow.com/questions/75677991/polars-datetime-5-minutes-floor | I have polars dataframe with timestamp folumn of type datetime[ns] which value is 2023-03-08 11:13:07.831 I want to use polars efficiency to round timestamp to 5 minutes floor. Right now I do: import arrow def timestamp_5minutes_floor(ts: int) -> int: return int(arrow.get(ts).timestamp() // 300000 * 300000) df.with_columns([ pl.col("timestamp").apply(lambda x: timestamp_5minutes_floor(x)).alias("ts_floor") ]) It is slow. How to improve it? | You could try to use .dt.truncate: With the sample dataframe df = pl.DataFrame({ "ts": ["2023-03-08 11:01:07.831", "2023-03-08 18:09:01.007"] }).select(pl.col("ts").str.strptime(pl.Datetime, "%Y-%m-%d %H:%M:%S%.3f")) βββββββββββββββββββββββββββ β ts β β --- β β datetime[ms] β βββββββββββββββββββββββββββ‘ β 2023-03-08 11:01:07.831 β β 2023-03-08 18:09:01.007 β βββββββββββββββββββββββββββ this df = df.select(pl.col("ts").dt.truncate("5m")) results in βββββββββββββββββββββββ β ts β β --- β β datetime[ms] β βββββββββββββββββββββββ‘ β 2023-03-08 11:00:00 β β 2023-03-08 18:05:00 β βββββββββββββββββββββββ | 3 | 8 |
75,676,642 | 2023-3-8 | https://stackoverflow.com/questions/75676642/how-to-perform-conditional-join-more-efficiently-in-polars | I have a reasonably large dataframe on hand. Joining it with itself takes some time. But I want to join them with some conditions, which could make the resulting dataframe much smaller. My question is how can I take advantage of such conditions to make the conditional join faster to plain full join? Code below for illustration: import time import numpy as np import polars as pl # example dataframe rng = np.random.default_rng(1) nrows = 3_000_000 df = pl.DataFrame( dict( day=rng.integers(1, 300, nrows), id=rng.integers(1, 5_000, nrows), id2=rng.integers(1, 5, nrows), value=rng.normal(0, 1, nrows), ) ) # joining df with itself takes around 10-15 seconds on a machine with 32 cores. start = time.perf_counter() df.join(df, on=["id", "id2"], how="left") time.perf_counter() - start # joining df with itself with extra conditions - the implementation below that takes very similar time (10-15 seconds). start = time.perf_counter() df.join(df, on=["id", "id2"], how="left").filter( (pl.col("day") < pl.col("day_right")) & (pl.col("day_right") - pl.col("day") <= 30) ) time.perf_counter() - start So, as mentioned above, my question is how can I take advantage of the conditions during join to make the 'conditional join' faster? It should be faster since the resulting dataframe after conditions has 10x less rows than the full join without any conditions. | With lazy and streaming=True, it is faster: In [5]: start = time.perf_counter() ...: df.lazy().join(df.lazy(), on=["id", "id2"], how="left").filter( ...: (pl.col("day") < pl.col("day_right")) & (pl.col("day_right") - pl.col("day") <= 30) ...: ).collect() ...: time.perf_counter() - start Out[5]: 11.083821532000002 In [6]: start = time.perf_counter() ...: df.lazy().join(df.lazy(), on=["id", "id2"], how="left").filter( ...: (pl.col("day") < pl.col("day_right")) & (pl.col("day_right") - pl.col("day") <= 30) ...: ).collect(streaming=True) ...: time.perf_counter() - start Out[6]: 7.110704054997768 | 5 | 5 |
75,680,658 | 2023-3-9 | https://stackoverflow.com/questions/75680658/where-to-put-a-small-utility-function-that-i-would-like-to-use-across-multiple-p | Right now I have one function that would be useful in a number of distinct packages that I work on. The function is only a handful of lines. But I would like to be able to use this code in a number of packages/projects that I work on and are deployed, I would like this code to be version controlled etc. There isn't, for example, one package that all the other packages already have as a requirement, otherwise I could put this code inside of that one and import it that way. During my time coding I've come across this issue a couple times. For concreteness some functions f that have this characteristic might be: A wrapper or context manager which times a block of code with some log statements A function which divides a range of integers into as small number of evenly spaced strides while not exceeding a maximum number of steps A function which converts a center value and a span into a lower and upper limit Basically the options I see are: Put the code f in one of the existing packages R and then make any package A that wants to use f have R as a requirement. The downside here is that A may require nothing from R other than f in which case most of the requirement is wasteful. Copy the code f into every package A that requires it. One question that comes up is then where should f live within A? Because the functionality of f is really outside the scope of package A does. This is sort of a minor problem, the bigger problem is that if an improvement is made to f in one package it would be challenging to maintain uniformity across multiple packages that have f in them. I could make an entire package F dedicated to functions like f and import it into each package that needs f. This seems like technically the best approach from a requirements management and separation of responsibility management perspective. But like I said, right now this would be an entire package dedicated to literally one function with a few lines of code. If there is a stdlib function that has the functionality I want I should definitely use that. If there is not, I may be able to find a 3rd party package that has the functionality I want, but this brings about a zoo of other potential problems that I'd prefer to avoid. What would be the suggested way to do this? Are there other approaches I haven't mentioned? | The entire packaging system is designed to solve exactly this problem - sharing code between different applications. So yes, the ideally you'd want to create a package out of this and add it as a dependency to all the other packages that use this code. There are a few upsides to this option: Package management is clean Future repositories can also include this code Any changes to this code can still be handled with proper versioning and version pinning - thus not breaking code in other places Future such functions f2, f3, etc. 
can potentially be added to this package, allowing you to share them across packages too But this also comes with some (potential) downsides: You now have to maintain an additional package, complete with its deployment pipeline and versioning - this however should not be too much of a hassle if there is already a pipeline in place Poorly managed versioning can cause systems to collapse rather quickly, whenever breaking changes are introduced - this typically is harder to trace Having said that, the option of copying code to each of the packages that use f is still an option. Consider these points: Often, such code is also tweaked over time to adapt it to the requirements of the parent package, in such cases sharing it between packages no longer makes sense - and attempts to generalize it more often than not lead to bad abstractions. If adhering to DRY is your concern, do checkout this talk from Dan Abramov on the 'WET codebase' Regarding maintaining uniformity - you may not have to do so all the time, depending on the usecase. Package A could be using updated code, while package B could be using the older one. Regardless, whatever approach you use, you'd still need to update every package to maintain uniformity - for example if you go with a dedicated package, you'd still need to update the version used everywhere. Regarding where this code will reside in each package's codebase - If f does something very specific, it can reside in an appropriately named file of its own. If nothing else, there is always the notoriously overused util.py Β―\(γ)/Β― Recommendation Begin with copying over the code to all packages. Update them individually as required in every package. Over time if you observe that any updates to f is being propagated to all other packages every time, then put f in a package of its own and replace the code in the other packages with an import from this new package. Finally, don't fret the small things. Most things in software are reversible. Pick one approach and change it to the other if it does not workout. Just remember to not drag the decision - delay too much and you'd be left with a huge mountain of tech debt over time. PS: Someone may recommend using a git submodule for sharing this code - DO NOT do it, managing versions isn't clean and will soon get out of hand - you'd rather just create a new package instead | 3 | 3 |
75,677,363 | 2023-3-8 | https://stackoverflow.com/questions/75677363/does-pip-provide-a-toml-parser-for-python-3-9 | I want to parse TOML files in python 3.9, and I am wondering if I can do so without installing another package. Since pip knows how to work with pyproject.toml files, and I already have pip installed, does pip provide a parser that I can import and use in my own code? | For 3.9, pip vendors tomli: from pip._vendor import tomli For consenting adults, importing the vendored module should work as usual. However, this is an implementation detail of pip, and it could change without a deprecation period at any moment in a future release of pip. Therefore, for anything apart from a quick hack, it would be safer to install tomli (or some other TOML parser) into the site instead. | 10 | 9 |
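A minimal usage sketch for the parser mentioned in the answer above; "pyproject.toml" and the "project" table are just example inputs, the vendored import path only works for pip versions that ship tomli, and on Python 3.11+ the stdlib tomllib exposes the same load/loads interface:

```python
# tomli (and pip's vendored copy) expects the file opened in binary mode.
from pip._vendor import tomli  # for an installed package: import tomli, or import tomllib on 3.11+

with open("pyproject.toml", "rb") as f:  # example path
    data = tomli.load(f)

print(data.get("project", {}).get("name"))
```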
75,649,038 | 2023-3-6 | https://stackoverflow.com/questions/75649038/training-difference-between-lightgbm-api-and-sklearn-api | I'm trying to train a LGBClassifier for multiclass task. I tried first working directly with LightGBM API and set the model and training as follows: LightGBM API train_data = lgb.Dataset(X_train, (y_train-1)) test_data = lgb.Dataset(X_test, (y_test-1)) params = {} params['learning_rate'] = 0.3 params['boosting_type'] = 'gbdt' params['objective'] = 'multiclass' params['metric'] = 'softmax' params['max_depth'] = 10 params['num_class'] = 8 params['num_leaves'] = 500 lgb_train = lgb.train(params, train_data, 200) # AFTER TRAINING THE MODEL y_pred = lgb_train.predict(X_test) y_pred_class = [np.argmax(line) for line in y_pred] y_pred_class = np.asarray(y_pred_class) + 1 This is how the confussion matrix looks: Sklearn API Then I tried to move to Sklearn API to be able to use other tools. This is the code I used: lgb_clf = LGBMClassifier(objective='multiclass', boosting_type='gbdt', max_depth=10, num_leaves=500, learning_rate=0.3, eval_metric=['accuracy','softmax'], num_class=8, n_jobs=-1, early_stopping_rounds=100, num_iterations=500) clf_train = lgb_clf(X_train, (y_train-1), verbose=1, eval_set=[(X_train, (y_train-1)), (X_test, (y_test-1)))]) # TRAINING: I can see overfitting is happening y_pred = clf_train.predict(X_test) y_pred = [np.argmax(line) for line in y_pred] y_pred = np.asarray(y_pred) + 1 And this is the confusion matrix in this case: Notes I need to substract 1 from y_train as my classes start at 1 and LightGBM was complaining about this. When I try a RandomSearch or a GridSearch I always obtain the same result as the last confusion matrix. I have check different questions here but none solve this issue. Questions Is there anything that I'm missing out when implementing the model in Sklearn API? Why do I obtain good results (maybe with overfitting) with LightGBM API? How can I achieve the same results with the two APIs? Thanks in advance. UPDATE It was my mistake. I thought the output in both APIs would be the same but it doesn't seem like that. I just removed the np.argmax() line when predicting with Sklearn API. It seems this API already predict directly the class. Don't remove the question in case someone else is dealing with similar issues. | It was my mistake. I thought the output in both APIs would be the same but it doesn't seem like that. I just removed the np.argmax() line when predicting with Sklearn API. It seems this API already predict directly the class. Don't remove the question in case someone else is dealing with similar issues. | 3 | 0 |
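A short sketch of the fix described in the update above, assuming clf_train is the fitted LGBMClassifier and X_test is the test matrix from the question; the +1 shift undoes the earlier y_train-1 relabelling:

```python
import numpy as np

# The scikit-learn wrapper returns class labels directly, so no argmax is needed.
y_pred = clf_train.predict(X_test)            # labels in 0..7
y_pred_class = np.asarray(y_pred) + 1         # shift back to the original 1..8 classes

# If the per-class probabilities (what lgb.train(...).predict() returns) are needed:
y_proba = clf_train.predict_proba(X_test)     # shape (n_samples, 8)
y_pred_class_alt = np.argmax(y_proba, axis=1) + 1
```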
75,628,413 | 2023-3-3 | https://stackoverflow.com/questions/75628413/cast-column-of-type-list-to-str-in-polars | Currently, using the polars' cast() method on columns of type list[] is not supported. It throws: ComputeError: Cannot cast list type Before I do as usual (use rows(), or convert to pandas, or work with apply()). Is there any trick or best practice to convert polars list[] to strings? Here a quick snippet for you to replicate the error df = pl.from_dict({'foo': [[1,2,3]], 'bar': 'Hello World'}) print(df) ''' shape: (1, 2) βββββββββββββ¬ββββββββββββββ β foo β bar β β --- β --- β β list[i64] β str β βββββββββββββͺββββββββββββββ‘ β [1, 2, 3] β Hello World β βββββββββββββ΄ββββββββββββββ ''' df['foo'].cast(str) # this other workaround wont work neither df.select([pl.col('foo').str]) Here is what i expect to see: ''' shape: (1, 2) βββββββββββββ¬ββββββββββββββ β foo β bar β β --- β --- β β str β str β βββββββββββββͺββββββββββββββ‘ β"[1, 2, 3]"β Hello World β βββββββββββββ΄ββββββββββββββ ''' | You can use the datatype pl.List(pl.String) df.with_columns(pl.col("foo").cast(pl.List(pl.String))) shape: (1, 2) βββββββββββββββββββ¬ββββββββββββββ β foo | bar β β --- | --- β β list[str] | str β βββββββββββββββββββͺββββββββββββββ‘ β ["1", "2", "3"] | Hello World β βββββββββββββββββββ΄ββββββββββββββ To create an actual string - perhaps: df.with_columns("[" + pl.col("foo").cast(pl.List(pl.String)).list.join(", ") + "]" ) There's also pl.format() df.with_columns( pl.format("[{}]", pl.col("foo").cast(pl.List(pl.String)).list.join(", "))) shape: (1, 3) βββββββββββββ¬ββββββββββββββ¬ββββββββββββ β foo | bar | literal β β --- | --- | --- β β list[i64] | str | str β βββββββββββββͺββββββββββββββͺββββββββββββ‘ β [1, 2, 3] | Hello World | [1, 2, 3] β βββββββββββββ΄ββββββββββββββ΄ββββββββββββ | 3 | 6 |
75,654,140 | 2023-3-6 | https://stackoverflow.com/questions/75654140/trouble-with-conversion-of-duration-time-strings | I have some duration type data (lap times) as pl.String that fails to convert using strptime, whereas regular datetimes work as expected. Minutes (before :) and Seconds (before .) are always padded to two digits, Milliseconds are always 3 digits. Lap times are always < 2 min. df = pl.DataFrame({ "lap_time": ["01:14.007", "00:53.040", "01:00.123"] }) df = df.with_columns( # pl.col('release_date').str.to_date("%B %d, %Y"), # works pl.col('lap_time').str.to_time("%M:%S.%3f").cast(pl.Duration), # fails ) So I used the chrono format specifier definitions from https://docs.rs/chrono/latest/chrono/format/strftime/index.html which are used as per the polars docs of strptime the second conversion (for lap_time) always fails, no matter whether I use .%f, .%3f, %.3f. Apparently, strptime doesn't allow creating a pl.Duration directly, so I tried with pl.Time but it fails with error: ComputeError: strict conversion to dates failed, maybe set strict=False but setting strict=False yields all null values for the whole Series. Am I missing something or this some weird behavior on chrono's or python-polars part? | General case In case you have duration that may exceed 24 hours, you can extract data (minutes, seconds and so on) from string using regex pattern. For example: df = pl.DataFrame({ "time": ["+01:14.007", "100:20.000", "-05:00.000"] }) df.with_columns( pl.col("time").str.extract_all(r"([+-]?\d+)") # / # you will get array of length 3 # ["min", "sec", "ms"] ).with_columns( pl.duration( minutes=pl.col("time").list.get(0), seconds=pl.col("time").list.get(1), milliseconds=pl.col("time").list.get(2) ).alias("time") ) ββββββββββββββββ β time β β --- β β duration[ns] β ββββββββββββββββ‘ β 1m 14s 7ms β β 1h 40m 20s β β -5m β ββββββββββββββββ About pl.Time To convert data to pl.Time, you need to specify hours as well. When you add 00 hours to your time, code will work: df = pl.DataFrame({"str_time": ["01:14.007", "01:18.880"]}) df.with_columns( duration = (pl.lit("00:") + pl.col("str_time")) .str.to_time("%T%.3f") .cast(pl.Duration) ) βββββββββββββ¬βββββββββββββββ β str_time β duration β β --- β --- β β str β duration[ΞΌs] β βββββββββββββͺβββββββββββββββ‘ β 01:14.007 β 1m 14s 7ms β β 01:18.880 β 1m 18s 880ms β βββββββββββββ΄βββββββββββββββ | 4 | 3 |
75,601,543 | 2023-3-1 | https://stackoverflow.com/questions/75601543/access-newly-created-column-in-with-columns-when-using-polars | I am new to Polars and I am not sure whether I am using .with_columns() correctly. Here's a situation I encounter frequently: There's a dataframe and in .with_columns(), I apply some operation to a column. For example, I convert some dates from str to date type and then want to compute the duration between start and end date. I'd implement this as follows. import polars as pl pl.DataFrame( { "start": ["01.01.2019", "01.01.2020"], "end": ["11.01.2019", "01.05.2020"], } ).with_columns( pl.col("start").str.to_date(), pl.col("end").str.to_date(), ).with_columns( (pl.col("end") - pl.col("start")).alias("duration"), ) First, I convert the two columns, next I call .with_columns() again. Something shorter like this does not work: pl.DataFrame( { "start": ["01.01.2019", "01.01.2020"], "end": ["11.01.2019", "01.05.2020"], } ).with_columns( pl.col("start").str.to_date(), pl.col("end").str.to_date(), (pl.col("end") - pl.col("start")).alias("duration"), ) # InvalidOperationError: sub operation not supported for dtypes `str` and `str` Is there a way to avoid calling .with_columns() twice and to write this in a more compact way? | The second .with_columns() is needed. From the GitHub Issues I don't want this extra complexity in polars. If you want to use an updated column, you need two with_columns. This makes it much more readable, simple, and explainable. In the given example, passing multiple names to col() could simplify it slightly. (df.with_columns(pl.col("start", "end").str.to_date()) .with_columns(duration = pl.col("end") - pl.col("start")) ) shape: (2, 3) ββββββββββββββ¬βββββββββββββ¬βββββββββββββββ β start β end β duration β β --- β --- β --- β β date β date β duration[ms] β ββββββββββββββͺβββββββββββββͺβββββββββββββββ‘ β 2019-01-01 β 2019-01-11 β 10d β β 2020-01-01 β 2020-05-01 β 121d β ββββββββββββββ΄βββββββββββββ΄βββββββββββββββ | 7 | 13 |
75,633,075 | 2023-3-4 | https://stackoverflow.com/questions/75633075/why-do-image-size-differ-when-vertical-vs-horizontal | Tried to create a random image with PIL as per the example: import numpy from PIL import image a = numpy.random.rand(48,84) img = Image.fromarray(a.astype('uint8')).convert('1') print(len(img.tobytes())) This particular code will output 528. Wen we flip the numbers of the numpy array: a = numpy.random.rand(84,48) The output we get is 504. Why is that? I was expecting for the byte number to be the same, since the numpy arrays are the same size. | When you call tobytes() on the boolean array*, the data is likely encoded per row. In your second example, there are 48 booleans in each row of img. So each row can be represented with 6 bytes (48 bits). 6 bytes * 84 rows = 504 bytes in img. However, in your first example, there are 84 pixels per row, which is not divisible by 8. In this case, the encoder represents each row with 11 bytes (88 bits). There are 4 extra bits of padding per row. So now the total size is 11 bytes * 48 rows = 528 bytes. If you test a bunch of random input shapes for a 2d boolean array to encode, you will find that when the number of elements per row is divisible by 8, the number of total bytes in the encoding is equal to the width * height / 8. However, when the row length is not divisible by 8, the encoding will contain more bytes because it has to pad each row with between 1 and 7 bits. In summary - ideally, we would want to store eight boolean values per byte, but this is complicated by the fact that the row length isn't always divisible by 8, and the encoder serializes the array by row. Edit for clarification: *the PIL.Image object in mode "1" (binary or "bilevel" image) effectively represents a boolean array. In mode 1, the original image (in this case, the numpy array a) is thresholded to convert it to a binary image. | 4 | 5 |
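A small sketch that reproduces the row-padding arithmetic from the answer above and cross-checks it against Pillow; it assumes Pillow and NumPy are installed and that mode "1" packs each row into whole bytes as described:

```python
import math
import numpy as np
from PIL import Image

def packed_size(height, width):
    # One bit per pixel, each row padded up to a whole number of bytes.
    return math.ceil(width / 8) * height

print(packed_size(48, 84))   # 11 bytes/row * 48 rows = 528
print(packed_size(84, 48))   #  6 bytes/row * 84 rows = 504

# Cross-check against Pillow itself
a = np.random.rand(48, 84)
img = Image.fromarray(a.astype("uint8")).convert("1")
print(len(img.tobytes()))    # 528
```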
75,626,974 | 2023-3-3 | https://stackoverflow.com/questions/75626974/is-it-possible-to-load-huggingface-model-which-does-not-have-config-json-file | I am trying to load this semantic segmentation model from HF using the following code: from transformers import pipeline model = pipeline("image-segmentation", model="Carve/u2net-universal", device="cpu") But I get the following error: OSError: tamnvcc/isnet-general-use does not appear to have a file named config.json. Checkout 'https://huggingface.co/tamnvcc/isnet-general-use/main' for available files. Is it even possible to load models from HuggingFace without config.json file provided? I also tried loading the model via: id2label = {0: "background", 1: "target"} label2id = {"background": 0, "target": 1} image_processor = AutoImageProcessor.from_pretrained("Carve/u2net-universal") model = AutoModelForSemanticSegmentation("Carve/u2net-universal", id2label=id2label, label2id=label2id) But got the same error. | TL;DR You will need to make a lot of assumption if you don't have the config.json and the model card doesn't have any documentation After some guessing, possibly it's this: from u2net import U2NET import torch model = U2NET() model.load_state_dict(torch.load('full_weights.pth', map_location=torch.device('cpu'))) In Long Looking at the files available in the model card, we see these files: .gitattributes README.md full_weights.pth A good guess would be that the .pth file is a PyTorch model binary. Given that, we can try: import shutil import requests import torch # Download the .pth file locally url = "https://huggingface.co/Carve/u2net-universal/resolve/main/full_weights.pth" response = requests.get(url, stream=True) with open('full_weights.pth', 'wb') as out_file: shutil.copyfileobj(response.raw, out_file) model = torch.load('full_weights.pth', map_location=torch.device('cpu')) But what you end up with is NOT a usable model, it's just the model parameters/weights (aka checkpoint file), i.e. type(model) [out]: collections.OrderedDict Looking at the layer names, it looks like a rebnconvin model that points to the https://github.com/xuebinqin/U-2-Net code: model.keys() [out]: odict_keys(['stage1.rebnconvin.conv_s1.weight', 'stage1.rebnconvin.conv_s1.bias', 'stage1.rebnconvin.bn_s1.weight', 'stage1.rebnconvin.bn_s1.bias', 'stage1.rebnconvin.bn_s1.running_mean', 'stage1.rebnconvin.bn_s1.running_var', 'stage1.rebnconv1.conv_s1.weight', 'stage1.rebnconv1.conv_s1.bias', 'stage1.rebnconv1.bn_s1.weight', 'stage1.rebnconv1.bn_s1.bias', 'stage1.rebnconv1.bn_s1.running_mean', 'stage1.rebnconv1.bn_s1.running_var', ...]) ASSUMING THAT YOU CAN TRUST THE CODE from the github, you can try installing it with: ! wget https://raw.githubusercontent.com/xuebinqin/U-2-Net/master/model/u2net.py And guessing from the layer names and model name, it looks like a U2Net from https://arxiv.org/abs/2005.09007v3 So you can try: from u2net import U2NET model = U2NET() model.load_state_dict(torch.load('full_weights.pth', map_location=torch.device('cpu'))) | 8 | 5 |
75,617,865 | 2023-3-2 | https://stackoverflow.com/questions/75617865/openai-chat-completions-api-error-invalidrequesterror-unrecognized-request-ar | I am currently trying to use OpenAI's most recent model: gpt-3.5-turbo. I am following a very basic tutorial. I am working from a Google Collab notebook. I have to make a request for each prompt in a list of prompts, which for sake of simplicity looks like this: prompts = ['What are your functionalities?', 'what is the best name for an ice-cream shop?', 'who won the premier league last year?'] I defined a function to do so: import openai # Load your API key from an environment variable or secret management service openai.api_key = 'my_API' def get_response(prompts: list, model = "gpt-3.5-turbo"): responses = [] restart_sequence = "\n" for item in prompts: response = openai.Completion.create( model=model, messages=[{"role": "user", "content": prompt}], temperature=0, max_tokens=20, top_p=1, frequency_penalty=0, presence_penalty=0 ) responses.append(response['choices'][0]['message']['content']) return responses However, when I call responses = get_response(prompts=prompts[0:3]) I get the following error: InvalidRequestError: Unrecognized request argument supplied: messages Any suggestions? Replacing the messages argument with prompt leads to the following error: InvalidRequestError: [{'role': 'user', 'content': 'What are your functionalities?'}] is valid under each of {'type': 'array', 'minItems': 1, 'items': {'oneOf': [{'type': 'integer'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}]}, 'example': '[1, 1313, 451, {"buffer": "abcdefgh", "shape": [1024], "dtype": "float16"}]'}, {'type': 'array', 'minItems': 1, 'maxItems': 2048, 'items': {'oneOf': [{'type': 'string'}, {'type': 'object', 'properties': {'buffer': {'type': 'string', 'description': 'A serialized numpy buffer'}, 'shape': {'type': 'array', 'items': {'type': 'integer'}, 'description': 'Array shape'}, 'dtype': {'type': 'string', 'description': 'Stringified dtype'}, 'token': {'type': 'string'}}}], 'default': '', 'example': 'This is a test.', 'nullable': False}} - 'prompt' | Problem You used the wrong method to get a completion. When using the OpenAI SDK, whether you use Python or Node.js, you need to use the right method. Which method is the right one? It depends on the OpenAI model you want to use. Solution The tables below will help you figure out which method is the right one for a given OpenAI model. STEP 1: Find in the table below which API endpoint is compatible with the OpenAI model you want to use. API endpoint Model group Model name /v1/chat/completions β’ GPT-4 β’ GPT-3.5 β’ gpt-4 and dated model releases β’ gpt-4-32k and dated model releases β’ gpt-4-1106-preview β’ gpt-4-vision-preview β’ gpt-3.5-turbo and dated model releases β’ gpt-3.5-turbo-16k and dated model releases β’ fine-tuned versions of gpt-3.5-turbo /v1/completions (Legacy) β’ GPT-3.5 β’ GPT base β’ gpt-3.5-turbo-instruct β’ babbage-002 β’ davinci-002 /v1/assistants All models except gpt-3.5-turbo-0301 supported. Retrieval tool requires gpt-4-1106-preview or gpt-3.5-turbo-1106. 
/v1/audio/transcriptions Whisper β’ whisper-1 /v1/audio/translations Whisper β’ whisper-1 /v1/audio/speech TTS β’ tts-1 β’ tts-1-hd /v1/fine_tuning/jobs β’ GPT-3.5 β’ GPT base β’ gpt-3.5-turbo β’ babbage-002 β’ davinci-002 /v1/embeddings Embeddings β’ text-embedding-ada-002 /v1/moderations Moderations β’ text-moderation-stable β’ text-moderation-latest STEP 2: Find in the table below which method you need to use for the API endpoint you selected in the table above. Note: Pay attention, because you have to use the method that is compatible with your OpenAI SDK version. API endpoint Method for the Python SDK v0.28.1 Method for the Python SDK >=v1.0.0 Method for the Node.js SDK v3.3.0 Method for the Node.js SDK >=v4.0.0 /v1/chat/completions openai.ChatCompletion.create openai.chat.completions.create openai.createChatCompletion openai.chat.completions.create /v1/completions (Legacy) openai.Completion.create openai.completions.create openai.createCompletion openai.completions.create /v1/assistants / openai.beta.assistants.create / openai.beta.assistants.create /v1/audio/transcriptions openai.Audio.transcribe openai.audio.transcriptions.create openai.createTranscription openai.audio.transcriptions.create /v1/audio/translations openai.Audio.translate openai.audio.translations.create openai.createTranslation openai.audio.translations.create /v1/audio/speech / openai.audio.speech.create / openai.audio.speech.create /v1/fine_tuning/jobs / openai.fine_tuning.jobs.create / openai.fineTuning.jobs.create /v1/embeddings openai.Embedding.create openai.embeddings.create openai.createEmbedding openai.embeddings.create /v1/moderations openai.Moderation.create openai.moderations.create openai.createModeration openai.moderations.create Python SDK v1.0.0 working example for the gpt-3.5-turbo model If you run test.py, the OpenAI API will return the following completion: Hello! How can I assist you today? test.py import os from openai import OpenAI client = OpenAI( api_key = os.getenv("OPENAI_API_KEY"), ) completion = client.chat.completions.create( model = "gpt-3.5-turbo", messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}, ] ) print(completion.choices[0].message.content.strip()) Node.js SDK v4.0.0 working example for the gpt-3.5-turbo model If you run test.js, the OpenAI API will return the following completion: Hello! How can I assist you today? test.js const OpenAI = require("openai"); const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY, }); async function main() { const completion = await openai.chat.completions.create({ model: "gpt-3.5-turbo", messages: [ { role: "system", content: "You are a helpful assistant." }, { role: "user", content: "Hello!" }, ], }); console.log(completion.choices[0].message.content.trim()); } main(); | 38 | 55 |
75,666,486 | 2023-3-7 | https://stackoverflow.com/questions/75666486/the-explicit-passing-of-coroutine-objects-to-asyncio-wait-is-deprecated | From this github link: https://github.com/pyxll/pyxll-examples/blob/master/bitmex/bitmex.py When I run this code, I get the message saying the explicit passing of coroutine objects to asyncio.wait() is deprecated. I've pinpointed this to line 71: await asyncio.wait(tasks), but can't figure out how to resolve the issue. Code below for reference: from pyxll import xl_func, RTD, get_event_loop import websockets import asyncio import json class BitMex: """Class to manage subscriptions to instrument prices.""" URI = "wss://www.bitmex.com/realtime" def __init__(self, loop=None): self.__websocket = None self.__running = False self.__running_task = None self.__subscriptions = {} self.__data = {} self.__lock = asyncio.Lock(loop=loop) async def __connect(self): # Connect to the websocket API and start the __run coroutine self.__running = True self.__websocket = await websockets.connect(self.URI) self.__connecting_task = None self.__running_task = asyncio.create_task(self.__run()) async def __disconnect(self): # Close the websocket and wait for __run to complete self.__running = False await self.__websocket.close() self.__websocket = None await self.__running_task async def __run(self): # Read from the websocket until disconnected while self.__running: msg = await self.__websocket.recv() await self.__process_message(json.loads(msg)) async def __process_message(self, msg): if msg.get("table", None) == "instrument": # Extract the data from the message, update our data dictionary and notify subscribers for data in msg.get("data", []): symbol = data["symbol"] timestamp = data["symbol"] # Update the latest values in our data dictionary and notify any subscribers tasks = [] subscribers = self.__subscriptions.get(symbol, {}) latest = self.__data.setdefault(symbol, {}) for field, value in data.items(): latest[field] = (value, timestamp) # Notify the subscribers with the updated field for subscriber in subscribers.get(field, []): tasks.append(subscriber(symbol, field, value, timestamp)) # await all the tasks from the subscribers if tasks: await asyncio.wait(tasks) async def subscribe(self, symbol, field, callback): """Subscribe to updates for a specific symbol and field. The callback will be called as 'await callback(symbol, field, value, timestamp)' whenever an update is received. 
""" async with self.__lock: # Connect the websocket if necessary if self.__websocket is None: await self.__connect() # Send the subscribe message if we're not already subscribed if symbol not in self.__subscriptions: msg = {"op": "subscribe", "args": [f"instrument:{symbol}"]} await self.__websocket.send(json.dumps(msg)) # Add the subscriber to the dict of subscriptions self.__subscriptions.setdefault(symbol, {}).setdefault(field, []).append(callback) # Call the callback with the latest data data = self.__data.get(symbol, {}) if field in data: (value, timestamp) = data[field] await callback(symbol, field, value, timestamp) async def unsubscribe(self, symbol, field, callback): async with self.__lock: # Remove the subscriber from the list of subscriptions self.__subscriptions[symbol][field].remove(callback) if not self.__subscriptions[symbol][field]: del self.__subscriptions[symbol][field] # Unsubscribe if we no longer have any subscriptions for this instrument if not self.__subscriptions[symbol]: msg = {"op": "unsubscribe", "args": [f"instrument:{symbol}"]} await self.__websocket.send(json.dumps(msg)) del self.__subscriptions[symbol] self.__data.pop(symbol, None) # Disconnect if we no longer have any subscriptions if not self.__subscriptions: async with self.__lock: await self.__disconnect() class BitMexRTD(RTD): """RTD class for subscribing to BitMEX prices using the BitMex class above. """ # Use a single BitMex object for all RTD functions _bitmex = BitMex(get_event_loop()) def __init__(self, symbol, field): super().__init__(value="Waiting...") self.__symbol = symbol self.__field = field async def connect(self): # Subscribe to BitMix updates when Excel connects to the RTD object await self._bitmex.subscribe(self.__symbol, self.__field, self.__update) async def disconnect(self): # Unsubscribe to BitMix updates when Excel disconnects from the RTD object await self._bitmex.unsubscribe(self.__symbol, self.__field, self.__update) async def __update(self, symbol, field, value, timestamp): # Update the value in Excel self.value = value @xl_func("string symbol, string field: rtd", recalc_on_open=True) def bitmex_rtd(symbol, field="lastPrice"): """Subscribe to BitMEX prices for a given symbol.""" return BitMexRTD(symbol, field) if __name__ == "__main__": async def main(): # This is the callback that will be called whenever there's an update async def callback(symbol, field, value, timestamp): print((symbol, field, value, timestamp)) bm = BitMex() await bm.subscribe("XBTUSD", "lastPrice", callback) await asyncio.sleep(60) await bm.unsubscribe("XBTUSD", "lastPrice", callback) print("DONE!") # Run the 'main' function in an asyncio event loop loop = asyncio.get_event_loop() loop.create_task(main()) loop.run_forever() | Don't pass the tasks list to asyncio.wait(). The documentation for asyncio.wait states "Run Future and Task instances in the aws iterable concurrently and block until the condition specified by return_when.". The type returned by an async def ...(...)-like defined function is a coroutine(type(async_function)==types.coroutine#true). Since this is neither a Future nor a Task, the warning is outputted. 
To fix it, just leave asyncio.wait() and the tasks list inside of __process_message out entirely: async def __process_message(self, msg): if msg.get("table", None) == "instrument": # Extract the data from the message, update our data dictionary and notify subscribers for data in msg.get("data", []): symbol = data["symbol"] timestamp = data["symbol"] # Update the latest values in our data dictionary and notify any subscribers subscribers = self.__subscriptions.get(symbol, {}) latest = self.__data.setdefault(symbol, {}) for field, value in data.items(): latest[field] = (value, timestamp) # Notify the subscribers with the updated field for subscriber in subscribers.get(field, []): await subscriber(symbol, field, value, timestamp) | 9 | 3 |
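A note on an alternative fix: if the concurrent fan-out to subscribers is worth keeping rather than awaiting each one in turn as the answer suggests, the coroutines can be wrapped in Task objects before being handed to asyncio.wait(), or passed to asyncio.gather(), which accepts coroutines directly. This is a minimal sketch against the same __process_message body shown above; the rest of the class is assumed unchanged.

# Sketch: keep the concurrent notification instead of awaiting each subscriber in turn.
if tasks:
    # asyncio.wait() wants Task/Future objects, so wrap each coroutine first.
    await asyncio.wait([asyncio.create_task(t) for t in tasks])

# Or, more simply, gather the coroutines directly (gather accepts bare coroutines):
if tasks:
    await asyncio.gather(*tasks)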
75,598,463 | 2023-3-1 | https://stackoverflow.com/questions/75598463/pytest-ordering-of-test-suites | I've a set of test files (.py files) for different UI tests. I want to run these test files using pytest in a specific order. I used the below command python -m pytest -vv -s --capture=tee-sys --html=report.html --self-contained-html ./Tests/test_transTypes.py ./Tests/test_agentBank.py ./Tests/test_bankacct.py The pytest execution is triggered from an AWS Batch job. When the test executions happens it is not executing the test files in the order as specified in the above command. Instead it first runs test_agentBank.py followed by test_bankacct.py, then test_transTypes.py Each of these python files contains bunch of test functions. I also tried decorating the test function class such as @pytest.mark.run(order=1) in the first python file(test_transTypes.py), @pytest.mark.run(order=2) in the 2nd python file(test_agentBank.py) etc. This seems to run the test in the order, but at the end I get a warning PytestUnknownMarkWarning: Unknown pytest.mark.run - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs .pytest.org/en/stable/how-to/mark.html @pytest.mark.run(order=1) What is the correct way of running tests in a specific order in pytest? Each of my "test_" python files are the ones I need to run using pytest. Any help much appreciated. | To specify the order in which tests are run in pytest, you can use the pytest-order plugin. This plugin allows you to customize the order in which your tests are run by providing the order marker, which has attributes that define when your tests should run in relation to each other. You can use absolute attributes (e.g., first, second-to-last) or relative attributes (e.g., run this test before this other test) to specify the order of your tests. Here's an example: import pytest @pytest.mark.order(2) def test_foo(): assert True @pytest.mark.order(1) def test_bar(): assert True | 4 | 4 |
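Since the goal in the question is to order whole test files rather than individual functions, a module-level mark may be closer to the original intent. The sketch below assumes the pytest-order plugin from the answer is installed; pytestmark is standard pytest and applies the mark to every test in the module, while the relative-ordering string in the comment follows pytest-order's documented syntax and should be treated as an assumption to verify.

# Tests/test_transTypes.py
import pytest

pytestmark = pytest.mark.order(1)   # every test in this module runs in the first group

def test_create_transaction_type():
    assert True

# Tests/test_agentBank.py would set pytestmark = pytest.mark.order(2), and so on.
# pytest-order also supports relative ordering, e.g.
# @pytest.mark.order(after="test_transTypes.py::test_create_transaction_type")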
75,648,132 | 2023-3-6 | https://stackoverflow.com/questions/75648132/openai-gpt-3-api-why-do-i-get-only-partial-completion-why-is-the-completion-cu | I tried the following code but got only partial results like [{"light_id": 0, "color I was expecting the full JSON as suggested on this page: https://medium.com/@richardhayes777/using-chatgpt-to-control-hue-lights-37729959d94f import json import os import time from json import JSONDecodeError from typing import List import openai openai.api_key = "xxx" HEADER = """ I have a hue scale from 0 to 65535. red is 0.0 orange is 7281 yellow is 14563 purple is 50971 pink is 54612 green is 23665 blue is 43690 Saturation is from 0 to 254 Brightness is from 0 to 254 Two JSONs should be returned in a list. Each JSON should contain a color and a light_id. The light ids are 0 and 1. The color relates a key "color" to a dictionary with the keys "hue", "saturation" and "brightness". Give me a list of JSONs to configure the lights in response to the instructions below. Give only the JSON and no additional characters. Do not attempt to complete the instruction that I give. Only give one JSON for each light. """ completion = openai.Completion.create(model="text-davinci-003", prompt=HEADER) print(completion.choices[0].text) | In general GPT-3 API (i.e., Completions API) If you get partial completion (i.e., if the completion is cut off), it's because the max_tokens parameter is set too low or you didn't set it at all (in this case, it defaults to 16). You need to set it higher, but the token count of your prompt and completion together cannot exceed the model's context length. See the official OpenAI documentation: GPT-3.5 and GPT-4 API (i.e., Chat Completions API) Compared to the GPT-3 API, the GPT-3.5 and GPT-4 APIs have the max_tokens parameter set to infinite by default. Your case You're using text-davinci-003 (i.e., the GPT-3 API). If you don't set max_tokens = 1024 the completion you get will be cut off. Take a careful look at the tutorial you're referring to once again. If you run test.py, the OpenAI API will return a completion: Light 0 should be red: [{"light_id": 0, "color": {"hue": 0, "saturation": 254, "brightness": 254}},{"light_id": 1, "color": }] Light 1 should be orange: [{"light_id": 0, "color": {"hue": 0, "saturation": 254, "brightness": 254}},{"light_id": 1, "color": {"hue": 7281, "saturation": 254, "brightness": 254}}] test.py import openai import os openai.api_key = os.getenv('OPENAI_API_KEY') HEADER = """ I have a hue scale from 0 to 65535. red is 0.0 orange is 7281 yellow is 14563 purple is 50971 pink is 54612 green is 23665 blue is 43690 Saturation is from 0 to 254 Brightness is from 0 to 254 Two JSONs should be returned in a list. Each JSON should contain a color and a light_id. The light ids are 0 and 1. The color relates a key "color" to a dictionary with the keys "hue", "saturation" and "brightness". Give me a list of JSONs to configure the lights in response to the instructions below. Give only the JSON and no additional characters. Do not attempt to complete the instruction that I give. Only give one JSON for each light. """ completion = openai.Completion.create(model="text-davinci-003", prompt=HEADER, max_tokens=1024) print(completion.choices[0].text) | 4 | 8 |
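To size max_tokens without guesswork, the prompt's token count can be measured so that prompt plus completion stays inside the model's context window. A sketch assuming the tiktoken package is installed and reusing the openai import and HEADER prompt from the question above; the 4097-token window for text-davinci-003 is taken from OpenAI's model documentation at the time and should be treated as an assumption.

import tiktoken

MODEL_CONTEXT_LIMIT = 4097  # assumed context window for text-davinci-003

enc = tiktoken.encoding_for_model("text-davinci-003")
prompt_tokens = len(enc.encode(HEADER))

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=HEADER,
    # leave whatever the prompt does not use for the completion
    max_tokens=MODEL_CONTEXT_LIMIT - prompt_tokens,
)
print(completion.choices[0].text)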
75,651,880 | 2023-3-6 | https://stackoverflow.com/questions/75651880/python-abstract-instance-property-alternative | If I have for example: class Parent(object): @property @abc.abstractmethod def asdf(self) -> str: """ Must be implemented by child """ @dataclass class Children(Parent): asdf = "1234" def some_method(self): self.asdf = "5678" I get an error from mypy saying I am shadowing class attribute in some_method. This is understandable, since asdf is defined in Parent and so becomes a class attribute. But, if I do this instead: class Parent(object): asdf: str No error is generated by mypy, but then the assignment to asdf is not enforced. And also, asdf is still nonetheless a class attribute right? Are class attributes in general not meant to be overridden by methods in children? How can I make a parent class that enforces its children to have a certain instance attribute (and not class attributes)? If I look at other languages like C#, I think these 'class attributes' would be kind of equivalent to properties, but there we can customize its get and set behavior. Why is it not the case with Python? | You might be misunderstanding the purpose of the @property decorator. Generally it's used for a function that is supposed to be accessed / "feel like" a constant to outside code. Python does not do strict type-checking of variables without significant effort. (It does constrain the types of values at runtime, so a variable's type won't change subtly / unexpectedly. However, it will normally allow a string to be passed / returned in any place nominally expecting an integer - or vice versa. The value which is passed at runtime will continue to be used, without conversion as its original type, and without an error being thrown - until & unless a conversion is required & not provided. For example, type hints marking a variable as a string don't prevent assignment of a float to that variable. An error will only be thrown if there's an explicit type check, or if you try to call a method defined for strings but not for floats - e.g. str.join().) The grounding assumption in that is "other developers know what they're doing as much as the current developer does", so you have to do some workarounds to get strict type enforcement of the type you'd see in C#, Java, Scala and so on. Think of the "type hints" more like documentation and help for linters and the IDE than strict type enforcement. This point of view gives a few alternatives, depending on what you want from Children.asdf: How strict should the check be? Is the class-constant string you've shown what you're looking for, or do you want the functionality normally associated with the @property decorator? First draft, as a very non-strict constant string, very much in Python's EAFP tradition: class Parent: ASDF: str = None # Subclasses are expected to define a string for ASDF. class Children(Parent): ASDF = 'MyChildrenASDF' If you want to (somewhat) strictly enforce that behavior, you could do something like this: class Parent: ASDF: str = None # Subclasses are required to define a string for ASDF. def __init__(self): if not isinstance(self.ASDF, str): # This uses the class name of Children or whatever # subclasses Parent in the error message, which makes # debugging easier if you have many subclasses and multiple # developers. 
raise TypeError(f'{self.__class__.__name__}.ASDF must be a string.') class Children(Parent): ASDF = 'MyChildrenASDF' def __init__(self): # This approach does assume the person writing the subclass # remembers to call super().__init__(). That's not enforced # automatically. super().__init__() The second option is about as strict as I'd go personally, except in rare circumstances. If you need greater enforcement, you could write a unit test which loops over all Parent.__subclasses__(), and performs the check each time tests are run. Alternately, you could define a Python metaclass. Note that metaclasses are an "advanced topic" in Python, and the general rule of thumb is "If you don't know whether you need a metaclass, or don't know what a metaclass is, you shouldn't use a metaclass". Basically, metaclasses let you hack the class-definition process: You can inject class attributes which are automatically defined on the fly, throw errors if things aren't defined, and all sorts of other wacky tricks... but it's a deep rabbit hole and probably overkill for most use cases. If you want something which is actually a function, but uses the @property decorator so it feels like an instance property, you could do this: class Parent: @property def asdf(self) -> str: prepared = self._calculate_asdf() if not isinstance(prepared, str): raise TypeError(f'{self.__class__.__name__}._calculate_asdf() must return a string.') def _calculate_asdf(self) -> str: raise NotImplementedError(f'{self.__class__.__name__}._calculate_asdf() must return a string.') class Children(Parent): def _calculate_asdf(self) -> str: # I needed a function here to show what the `@property` could # do. This one seemed convenient. Any function returning a # string would be fine. return self.__class__.__name__.reverse() | 3 | 3 |
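For completeness, __init_subclass__ offers a middle ground between the __init__-time check and the metaclass option discussed above: the check runs once, when the subclass is defined, rather than on every instantiation, and it needs no metaclass machinery. A minimal sketch of the same ASDF contract, not tied to dataclasses:

class Parent:
    ASDF: str = None  # subclasses are required to define a string ASDF

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if not isinstance(cls.ASDF, str):
            # Raised as soon as the subclass body finishes executing.
            raise TypeError(f"{cls.__name__}.ASDF must be a string.")

class Children(Parent):
    ASDF = "MyChildrenASDF"  # omitting this raises TypeError at class-definition time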
75,611,661 | 2023-3-2 | https://stackoverflow.com/questions/75611661/is-there-any-logical-reason-not-to-reuse-a-deleted-slot-immediately-in-hash-tabl | I have seen several implementations of dynamic tables with open addressing using linear probing that do not reuse deleted slots before resizing. Here is one example: https://gist.github.com/EntilZha/5397c02dc6be389c85d8 Is there any logical reason not to reuse a deleted slot immediately? I know why it makes sense not to set the slot's value back to Empty (see "Hash Table: Why deletion is difficult in open addressing scheme"), because that would create a bug in the read operation. However, what is stopping us from writing to this slot? Wouldn't it be better to have as many slots in use as possible, for performance? | No, there's no reason not to fill tombstone slots as soon as you find them. In fact, a recent paper by Bender et al. shows that in the absence of tombstones, primary clustering (where long runs of elements arise because collisions start linking together smaller runs of elements) can largely be eliminated in linear probing tables by periodically inserting additional tombstone elements into the table. | 3 | 1 |
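As a concrete illustration of reusing a tombstone on insert, here is a minimal linear-probing sketch (names and slot layout are made up for the example, and it assumes the table always keeps at least one EMPTY slot): the probe remembers the first deleted slot it passes and falls back to it if the key turns out not to be present.

EMPTY, DELETED = object(), object()   # sentinel values; live slots hold (key, value) tuples

def insert(table, key, value):
    n = len(table)
    i = hash(key) % n
    first_tombstone = None
    while table[i] is not EMPTY:
        if table[i] is DELETED:
            if first_tombstone is None:
                first_tombstone = i          # remember it, but keep probing for the key
        elif table[i][0] == key:
            table[i] = (key, value)          # key already present: overwrite in place
            return
        i = (i + 1) % n
    # Key not found: reuse the earliest tombstone seen, otherwise the empty slot.
    table[first_tombstone if first_tombstone is not None else i] = (key, value)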
75,663,384 | 2023-3-7 | https://stackoverflow.com/questions/75663384/how-to-log-python-code-memory-consumption | Question Hi, I am runnin' a Docker container with a Python application inside. The code performs some computing tasks and I would like to monitor it's memory consumption using logs (so I can see how different parts of the calculations perform). I do not need any charts or continous monitoring - I am okay with the inaccuracy of this approach. How should I do it without loosing performance? Using external (AWS) tools to monitor used memory is not suitable, because I often debug using logs and thus it's very difficult to match logs with performance charts. Also the resolution is too small. Setup using python:3.10 as base docker image using Python 3.10 running in AWS ECS Fargate (but results are similar while testing on local) running the calculation method using asyncio I have read some articles about tracemalloc, but it says it degrades the performance a lot (around 30 %). The article. Tried methods I have tried the following method, however it shows the same memory usage every time called. So I doubt it works the desired way. Using resource import asyncio import resource # Local imports from utils import logger def get_usage(): usage = round(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1000 / 1000, 4) logger.info(f"Current memory usage is: {usage} MB") return usage # Do calculation - EXAMPLE asyncio.run( some_method_to_do_calculations() ) Logs from Cloudwatch Using psutil (in testing) import psutil # Local imports from utils import logger def get_usage(): total = round(psutil.virtual_memory().total / 1000 / 1000, 4) used = round(psutil.virtual_memory().used / 1000 / 1000, 4) pct = round(used / total * 100, 1) logger.info(f"Current memory usage is: {used} / {total} MB ({pct} %)") return True | Fargate is using cgroup for memory limiting. As mentioned here and here, the CPU/memory values provided by /proc refer to the host, not the container. As a result, userspace tools such as top and free report misleading values. You can try with something like: with open('/sys/fs/cgroup/memory/memory.stat', 'r') as f: for line in f: if 'hierarchical_memory_limit ' in line: memory_limit = int(line.split()[1]) if 'total_rss ' in line: memory_usage = int(line.split()[1]) percentage=memory_usage*100/memory_limit | 4 | 1 |
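One caveat to the accepted snippet: newer hosts and images mount cgroup v2, where /sys/fs/cgroup/memory/memory.stat does not exist. Below is a hedged sketch that tries the v2 files first and falls back to v1; the paths are the standard kernel ones and assume the container's cgroup is mounted at /sys/fs/cgroup, which is typical inside Docker and Fargate, but whether Fargate exposes v1 or v2 depends on the platform version.

def get_container_memory_mb():
    try:
        # cgroup v2
        with open("/sys/fs/cgroup/memory.current") as f:
            used = int(f.read())
        with open("/sys/fs/cgroup/memory.max") as f:
            raw = f.read().strip()
            limit = None if raw == "max" else int(raw)
    except FileNotFoundError:
        # cgroup v1, same hierarchy as the memory.stat file used above
        with open("/sys/fs/cgroup/memory/memory.usage_in_bytes") as f:
            used = int(f.read())
        with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
            limit = int(f.read())
    return round(used / 1e6, 1), (round(limit / 1e6, 1) if limit else None)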
75,614,728 | 2023-3-2 | https://stackoverflow.com/questions/75614728/cuda-12-tf-nightly-2-12-could-not-find-cuda-drivers-on-your-machine-gpu-will | tf-nightly version = 2.12.0-dev2023203 Python version = 3.10.6 CUDA drivers version = 525.85.12 CUDA version = 12.0 Cudnn version = 8.5.0 I am using Linux (x86_64, Ubuntu 22.04) I am coding in Visual Studio Code on a venv virtual environment I am trying to run some models on the GPU (NVIDIA GeForce RTX 3050) using tensorflow nightly 2.12 (to be able to use Cuda 12.0). The problem that I have is that apparently every checking that I am making seems to be correct, but in the end the script is not able to detect the GPU. I've dedicated a lot of time trying to see what is happening and nothing seems to work, so any advice or solution will be more than welcomed. The GPU seems to be working for torch as you can see at the very end of the question. I will show some of the most common checkings regarding CUDA that I did (Visual Studio Code terminal), I hope you find them useful: Check CUDA version: $ nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2023 NVIDIA Corporation Built on Fri_Jan__6_16:45:21_PST_2023 Cuda compilation tools, release 12.0, V12.0.140 Build cuda_12.0.r12.0/compiler.32267302_0 Check if the connection with the CUDA libraries is correct: $ echo $LD_LIBRARY_PATH /usr/cuda/lib Check nvidia drivers for the GPU and check if GPU is readable for the venv: $ nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... 
On | 00000000:01:00.0 On | N/A | | N/A 40C P5 6W / 20W | 46MiB / 4096MiB | 22% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 1356 G /usr/lib/xorg/Xorg 45MiB | +-----------------------------------------------------------------------------+ Add cuda/bin PATH and Check it: $ export PATH="/usr/local/cuda/bin:$PATH" $ echo $PATH /usr/local/cuda-12.0/bin:/home/victus-linux/Escritorio/MasterThesis_CODE/to_share/venv_master/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin Custom function to check if CUDA is correctly installed: [function by Sherlock] function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' <<< $LD_LIBRARY_PATH) 2>/dev/null | grep $1; } function check() { lib_installed $1 && echo "$1 is installed" || echo "ERROR: $1 is NOT installed"; } check libcuda check libcudart libcudart.so.12 -> libcudart.so.12.0.146 libcuda.so.1 -> libcuda.so.525.85.12 libcuda.so.1 -> libcuda.so.525.85.12 libcudadebugger.so.1 -> libcudadebugger.so.525.85.12 libcuda is installed libcudart.so.12 -> libcudart.so.12.0.146 libcudart is installed Custom function to check if Cudnn is correctly installed: [function by Sherlock] function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' <<< $LD_LIBRARY_PATH) 2>/dev/null | grep $1; } function check() { lib_installed $1 && echo "$1 is installed" || echo "ERROR: $1 is NOT installed"; } check libcudnn libcudnn_cnn_train.so.8 -> libcudnn_cnn_train.so.8.8.0 libcudnn_cnn_infer.so.8 -> libcudnn_cnn_infer.so.8.8.0 libcudnn_adv_train.so.8 -> libcudnn_adv_train.so.8.8.0 libcudnn.so.8 -> libcudnn.so.8.8.0 libcudnn_ops_train.so.8 -> libcudnn_ops_train.so.8.8.0 libcudnn_adv_infer.so.8 -> libcudnn_adv_infer.so.8.8.0 libcudnn_ops_infer.so.8 -> libcudnn_ops_infer.so.8.8.0 libcudnn is installed So, once I did these previous checkings I used a script to evaluate if everything was finally ok and then the following error appeared: import tensorflow as tf print(f'\nTensorflow version = {tf.__version__}\n') print(f'\n{tf.config.list_physical_devices("GPU")}\n') 2023-03-02 12:05:09.463343: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-02 12:05:09.489911: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-02 12:05:09.490522: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-03-02 12:05:10.066759: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT Tensorflow version = 2.12.0-dev20230203 2023-03-02 12:05:10.748675: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. 
See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355 2023-03-02 12:05:10.771263: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices... [] Extra check: I tried to run a checking script on torch and in here it worked so I guess the problem is related with tensorflow/tf-nightly import torch print(f'\nAvailable cuda = {torch.cuda.is_available()}') print(f'\nGPUs availables = {torch.cuda.device_count()}') print(f'\nCurrent device = {torch.cuda.current_device()}') print(f'\nCurrent Device location = {torch.cuda.device(0)}') print(f'\nName of the device = {torch.cuda.get_device_name(0)}') Available cuda = True GPUs availables = 1 Current device = 0 Current Device location = <torch.cuda.device object at 0x7fbe26fd2ec0> Name of the device = NVIDIA GeForce RTX 3050 Laptop GPU Please, if you know something that might help solve this issue, don't hesitate on telling me. | I think that, as of March 2023, the only tensorflow distribution for cuda 12 is the docker package from NVIDIA. A tf package for cuda 12 should show the following info >>> tf.sysconfig.get_build_info() OrderedDict([('cpu_compiler', '/usr/bin/x86_64-linux-gnu-gcc-11'), ('cuda_compute_capabilities', ['compute_86']), ('cuda_version', '12.0'), ('cudnn_version', '8'), ('is_cuda_build', True), ('is_rocm_build', False), ('is_tensorrt_build', True)]) But if we run tf.sysconfig.get_build_info() on any tensorflow package installed via pip, it stills tells that cuda_version is 11.x So your alternatives are: install docker with the nvidia cloud instructions and run one of the recent containers compile tensorflow from source, either nightly or last release. Caveat, it takes a lot of RAM and some time, as all good compilations do, and the occasional error to be corrected on the run. In my case, to define kFP8, the new 8-bits float. wait | 58 | 34 |
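Before chasing driver or path issues, it is worth confirming which CUDA version the installed wheel was actually built against; if it reports 11.x while only CUDA 12 libraries are present, the behaviour described in the question is expected. A small check built from the same calls shown in the answer:

import tensorflow as tf

build = tf.sysconfig.get_build_info()
print("TF built against CUDA", build.get("cuda_version"),
      "and cuDNN", build.get("cudnn_version"))
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
# A pip wheel reporting cuda_version 11.x cannot load a CUDA 12-only system,
# which matches the "Cannot dlopen some GPU libraries" message above.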
75,660,251 | 2023-3-7 | https://stackoverflow.com/questions/75660251/textual-ui-is-not-updating-after-change-the-value | I am trying to build a simple TUI based app using Python's textual package. I have one left panel where I want to display list of items and on right panel I wan to show details of the selected item from left panel. So I want add items in the left panel using keybinding provided by textual lib but when I add new item to the list it does not updates the UI of the left panel to show newly added item in the list. I am following this [doc][1] [1]: https://textual.textualize.io/guide/reactivity/#__tabbed_2_1 to add that feature but it is not working as expected. I am not able to figure out what's wrong here or my understanding of the things are incorrect. Here is my full code: import uuid from time import monotonic from textual.app import App, ComposeResult from textual.containers import Container from textual.reactive import reactive from textual.widget import Widget from textual.widgets import Button, Header, Footer, Static, ListView, ListItem, Label class LeftPanel(Widget): items = reactive([]) def compose(self) -> ComposeResult: yield Static( "All Request", expand=True, id="left_panel_header" ) yield ListView( *self.items, initial_index=None, ) class RightPanel(Widget): """A stopwatch widget.""" def compose(self) -> ComposeResult: yield ListView( ListItem(Label("4")), ListItem(Label("5")), ListItem(Label("6")), initial_index=None, ) class DebugApp(App): """A Textual app to manage stopwatches.""" CSS_PATH = "main.css" BINDINGS = [ ("d", "toggle_dark", "Toggle dark mode"), ("a", "add_item", "Add new item"), ] def compose(self) -> ComposeResult: """Called to add widgets to the app.""" yield Container(LeftPanel(id="my_list"), id="left_panel") yield Container(RightPanel(), id="right_panel") yield Footer() def action_add_item(self): self.query_one("#my_list").items.append(ListItem(Label(str(uuid.uuid4()), classes="request_item"))) self.dark = not self.dark # This works def action_toggle_dark(self) -> None: """An action to toggle dark mode.""" self.dark = not self.dark def render_ui(): app = DebugApp(watch_css=True) app.run() | It should work if you call the .append method of the ListView instance. To achieve this you can give it its own id: yield = ListView( *self.items, initial_index=None, id = "my_list_LV" ) and then use # ... def action_add_item(self): self.query_one("#my_list_LV").append(ListItem(Label(str(uuid.uuid4()), classes="request_item"))) (I tested this and it worked for me). | 3 | 2 |
75,666,408 | 2023-3-7 | https://stackoverflow.com/questions/75666408/how-can-i-collect-the-results-of-a-repeated-calculation-in-a-list-dictionary-et | There are a great many existing Q&A on Stack Overflow on this general theme, but they are all either poor quality (typically, implied from a beginner's debugging problem) or miss the mark in some other way (generally by being insufficiently general). There are at least two extremely common ways to get the naive code wrong, and beginners would benefit more from a canonical about looping than from having their questions closed as typos or a canonical about what printing entails. So this is my attempt to put all the related information in the same place. Suppose I have some simple code that does a calculation with a value x and assigns it to y: y = x + 1 # Or it could be in a function: def calc_y(an_x): return an_x + 1 Now I want to repeat the calculation for many possible values of x. I know that I can use a for loop if I already have a list (or other sequence) of values to use: xs = [1, 3, 5] for x in xs: y = x + 1 Or I can use a while loop if there is some other logic to calculate the sequence of x values: def next_collatz(value): if value % 2 == 0: return value // 2 else: return 3 * value + 1 def collatz_from_19(): x = 19 while x != 1: x = next_collatz(x) The question is: how can I collect these values and use them after the loop? I tried printing the value inside the loop, but it doesn't give me anything useful: xs = [1, 3, 5] for x in xs: print(x + 1) The results show up on the screen, but I can't find any way to use them in the next part of the code. So I think I should try to store the values in a container, like a list or a dictionary. But when I try that: xs = [1, 3, 5] for x in xs: ys = [] y = x + 1 ys.append(y) or xs = [1, 3, 5] for x in xs: ys = {} y = x + 1 ys[x] = y After either of these attempts, ys only contains the last result. | General approaches There are three ordinary ways to approach the problem: by explicitly using a loop (normally a for loop, but while loops are also possible); by using a list comprehension (or dict comprehension, set comprehension, or generator expression as appropriate to the specific need in context); or by using the built-in map (results of which can be used to construct a list, set or dict explicitly). Using an explicit loop Create a list or dictionary before the loop, and add each value as it's calculated: def make_list_with_inline_code_and_for(): ys = [] for x in [1, 3, 5]: ys.append(x + 1) return ys def next_collatz(value): if value % 2 == 0: return value // 2 else: return 3 * value + 1 def make_dict_with_function_and_while(): x = 19 ys = {} while x != 1: y = next_collatz(x) ys[x] = y # associate each key with the next number in the Collatz sequence. x = y # continue calculating the sequence. return ys In both examples here, the loop was put into a function in order to label the code and make it reusable. These examples return the ys value so that the calling code can use the result. But of course, the computed ys could also be used later in the same function, and loops like these could also be written outside of any function. Use a for loop when there is an existing input, where each element should be processed independently. Use a while loop to create output elements until some condition is met. Python does not directly support running a loop a specific number of times (calculated in advance); the usual idiom is to make a dummy range of the appropriate length and use a for loop with that. 
Using a comprehension or generator expression A list comprehension gives elegant syntax for creating a list from an existing sequence of values. It should be preferred where possible, because it means that the code does not have to focus on the details of how to build the list, making it easier to read. It can also be faster, although this will usually not matter. It can work with either a function call or other calculation (any expression in terms of the "source" elements), and it looks like: xs = [1, 3, 5] ys = [x + 1 for x in xs] # or def calc_y(an_x): return an_x + 1 ys = [calc_y(x) for x in xs] Note that this will not replace a while loop; there is no valid syntax replacing for with while here. In general, list comprehensions are meant for taking existing values and doing a separate calculation on each - not for any kind of logic that involves "remembering" anything from one iteration to the next (although this can be worked around, especially in Python 3.8 and later). Similarly, a dictionary result can be created using a dict comprehension - as long as both a key and value are computed in each iteration. Depending on exact needs, set comprehensions (produce a set, which does not contain duplicate values) and generator expressions (produce a lazily-evaluated result; see below about map and generator expressions) may also be appropriate. Using map This is similar to a list comprehension, but even more specific. map is a built-in function that can apply a function repeatedly to multiple different arguments from some input sequence (or multiple sequences). Getting results equivalent to the previous code looks like: xs = [1, 3, 5] def calc_y(an_x): return an_x + 1 ys = list(map(calc_y, xs)) # or ys = list(map(lambda x: x + 1, xs)) As well as requiring an input sequence (it doesn't replace a while loop), the calculation needs to be done using a function or other callable, such as the lambda shown above (any of these, when passed to map, is a so-called "higher-order function"). In Python 3.x, map is a class, and calling it therefore creates an instance of that class - and that instance is a special kind of iterator (not a list) that can't be iterated more than once. (We can get something similar using a generator expression rather than a list comprehension; simply use () instead of [].) Therefore, the code above explicitly creates a list from the mapped values. In other situations, it might not be necessary to do this (i.e., if it will only be iterated over once). On the other hand, if a set is necessary, the map object can be passed directly to set rather than list in the same way. To produce a dictionary, the map should be set up so that each output element is a (key, value) tuple; then it can be passed to dict, like so: def dict_from_map_example(letters): return dict(map(lambda l: (l, l.upper()), letters)) # equivalent using a dict comprehension: # return {l:l.upper() for l in letters} Generally, map is limited and uncommon compared to list comprehensions, and list comprehensions should be preferred in most code. However, it does offer some advantages. In particular, it can avoid the need to specify and use an iteration variable: when we write list(map(calc_y, xs)), we don't need to make up an x to name the elements of xs, and we don't have to write code to pass it to calc_y (as in the list comprehension equivalent, [calc_y(x) for x in xs] - note the two xs). Some people find this more elegant. | 9 | 12 |
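As a footnote to the comprehension section: the workaround alluded to for carrying state between iterations (Python 3.8 and later) is the assignment expression, or walrus operator. A small sketch computing running totals, something a plain comprehension cannot otherwise express; use it sparingly, since an explicit loop is often clearer.

xs = [1, 3, 5]
total = 0
running_totals = [total := total + x for x in xs]
# running_totals == [1, 4, 9]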
75,666,062 | 2023-3-7 | https://stackoverflow.com/questions/75666062/discrete-date-values-for-x-axis-in-seaborn-objects-plot | I am trying to prepare a bar plot using seaborn.objects with time series data where the x-axis ticks and labels are only on the dates that really appear in the data. import pandas as pd import seaborn.objects as so df1 = pd.DataFrame({'date': pd.to_datetime(['2022-01-01', '2022-02-01']), 'val': [10,20,]}) so.Plot(df1, x='date', y='val').add(so.Bar()) The result is the following graph with a tick mark at 2022-01-15. Going to three entries in the dataframe solves the issue, but how would I do it in the presented case. Adding .scale(x=so.Nominal()) or .scale(x=so.Temporal()) does not help. As a bonus, how would I format the x-axis ticks as "Jan 2022", "Feb 2022" etc.? | You can convert your date column as string: (so.Plot(df1.assign(date=df1['date'].dt.strftime('%b %Y')), x='date', y='val') .add(so.Bar()).show()) # Or, as suggested by @mwaskom: so.Plot(x=df1['date'].dt.strftime('%b %Y'), y=df1['val']).add(so.Bar()).show() Output: | 3 | 2 |
75,660,016 | 2023-3-7 | https://stackoverflow.com/questions/75660016/multiply-scipy-sparse-matrix-with-a-3d-numpy-array | I have the following matrices a = sp.random(150, 150) x = np.random.normal(0, 1, size=(150, 20)) and I would basically like to implement the following formula I can calculate the inner difference like this diff = (x[:, None, :] - x[None, :, :]) ** 2 diff.shape # -> (150, 150, 20) a.shape # -> (150, 150) I would basically like to broadcast the element-wise multiplication between the scipy sparse matrix and each internal numpy array. If A was allowed to be dense, then I could simply do np.einsum("ij,ijk->k", a.toarray(), (x[:, None, :] - x[None, :, :]) ** 2) but A is sparse, and potentially huge, so this isn't an option. Of course, I could just reorder the axes and loop over the diff array with a for loop, but is there a faster way using numpy? As @hpaulj pointed out, the current solution also forms an array of shape (150, 150, 20), which would also immediately lead to problems with memory, so this solution would not be okay either. | import numpy as np import scipy.sparse from numpy.random import default_rng rand = default_rng(seed=0) # \sigma_k = \sum_i^N \sum_j^N A_{i,j} (x_{i,k} - x_{j,k})^2 # Dense method N = 100 x = rand.integers(0, 10, (N, 2)) A = np.clip(rand.integers(0, 100, (N, N)) - 80, a_min=0, a_max=None) diff = (x[:, None, :] - x[None, :, :])**2 product = np.einsum("ij,ijk->k", A, diff) # Loop method s_loop = [0, 0] for i in range(N): for j in range(N): for k in range(2): s_loop[k] += A[i, j]*(x[i, k] - x[j, k])**2 assert np.allclose(product, s_loop) # For any i,j, we trivially know whether A_{i,j} is zero, and highly sparse matrices have more zeros # than nonzeros. Crucially, do not calculate (x_{i,k} - x_{j,k})^2 at all if A_{i,j} is zero. A_i_nz, A_j_nz = A.nonzero() diff = (x[A_i_nz, :] - x[A_j_nz, :])**2 s_semidense = A[A_i_nz, A_j_nz].dot(diff) assert np.allclose(product, s_semidense) # You can see where this is going: A_sparse = scipy.sparse.coo_array(A) diff = (x[A_sparse.row, :] - x[A_sparse.col, :])**2 s_sparse = A_sparse.data.dot(diff) assert np.allclose(product, s_sparse) Seems reasonably fast; this completes in about a second: N = 100_000_000 print(f'Big test: initialising a {N}x{N} array') n_values = 10_000_000 A = scipy.sparse.coo_array( ( rand.integers(0, 100, n_values), rand.integers(0, N, (2, n_values)), ), shape=(N, N), ) x = rand.integers(0, 100, (N, 2)) print('Big test: calculating') s = A.data.dot((x[A.row, :] - x[A.col, :])**2) print(s) | 5 | 2 |
75,668,272 | 2023-3-7 | https://stackoverflow.com/questions/75668272/how-to-assign-numeric-labels-to-all-elements-in-a-list-series-array-based-on-num | I have two lists that contains two series of numbers, such as: A = [1.0, 2.9, 3.4, 4.2, 5.5....100.3] B = [1.1, 1.2, 1.3, 2.5, 3.0, 3.1, 5.2] I would like to create another list of labels based on whether the elements in list B falls in an (any) interval from list A. Something like this: C = [group_1, group_1, group_1, group_1, group_2, group_2, group_3] i.e. 1.1, 1.2, 1.3, 2.5 all fall in the interval of 1.0 - 2.9 from list A, hence group_1; 3.0, 3.1 both fall in the interval of 2.9 - 3.4, hence group_2; and 5.2 falls in the interval of 4.2 - 5.5, hence group_3, etc.. It doesn't matter which interval from list A does the number from list B fall in, the point is to group/label all elements in list B in a consecutive manner. The orginal data is large so it would be impossible to manually assign labels/groups to elements in list B. Any help is appreciated. | So, assuming A is sorted, you can use binary search, which already comes with the python standard library in the (rather clunky) module bisect: >>> A = [1.0, 2.9, 3.4, 4.2, 5.5] >>> B = [1.1, 1.2, 1.3, 2.5, 3.0, 3.1, 5.2] >>> [bisect.bisect_left(A, b) for b in B] [1, 1, 1, 1, 2, 2, 4] This takes O(N * logN) time. Note, take care to read the documentation and how bisect_left and bisect_right behave when a value in B is equal to a value in A, and how items that wouldn't fall anywhere behave. | 3 | 5 |
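To get the literal "group_N" labels asked for, the indices can simply be formatted; for large inputs the same idea works vectorised with numpy.searchsorted, which has the same semantics as bisect_left. Note the numbering below follows the interval index, as in the answer, rather than renumbering groups consecutively; if strictly consecutive labels are needed, map the distinct indices to 1..K afterwards.

import numpy as np

A = [1.0, 2.9, 3.4, 4.2, 5.5]
B = [1.1, 1.2, 1.3, 2.5, 3.0, 3.1, 5.2]

idx = np.searchsorted(A, B)            # array([1, 1, 1, 1, 2, 2, 4])
labels = [f"group_{i}" for i in idx]
# ['group_1', 'group_1', 'group_1', 'group_1', 'group_2', 'group_2', 'group_4']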
75,667,225 | 2023-3-7 | https://stackoverflow.com/questions/75667225/how-to-get-latest-version-of-not-installed-package-using-pip | I want to know, for a package which is not installed in my virtual environment, what is the latest version available. For example, if I had to install requests library, I would want to know the version before installation. I considered pip search but that was deprecated in Python 3.11. Is there any other way to get the release version using some command before installing a Python package ? | You can use the currently-experimental pip index: C:\>pip index versions requests WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning. requests (2.28.2) Available versions: 2.28.2, 2.28.1, 2.28.0, 2.27.1, 2.27.0, 2.26.0, 2.25.1, 2.25.0, 2.24.0, 2.23.0, 2.22.0, 2.21.0, 2.20.1, 2.20.0, 2.19.1, 2.19.0, 2.18.4, 2.18.3, 2.18.2, 2.18.1, 2.18.0, 2.17.3, 2.17.2, 2.17.1, 2.17.0, 2.16.5, 2.16.4, 2.16.3, 2.16.2, 2.16.1, 2.16.0, 2.15.1, 2.14.2, 2.14.1, 2.14.0, 2.13.0, 2.12.5, 2.12.4, 2.12.3, 2.12.2, 2.12.1, 2.12.0, 2.11.1, 2.11.0, 2.10.0, 2.9.2, 2.9.1, 2.9.0, 2.8.1, 2.8.0, 2.7.0, 2.6.2, 2.6.1, 2.6.0, 2.5.3, 2.5.2, 2.5.1, 2.5.0, 2.4.3, 2.4.2, 2.4.1, 2.4.0, 2.3.0, 2.2.1, 2.2.0, 2.1.0, 2.0.1, 2.0.0, 1.2.3, 1.2.2, 1.2.1, 1.2.0, 1.1.0, 1.0.4, 1.0.3, 1.0.2, 1.0.1, 1.0.0, 0.14.2, 0.14.1, 0.14.0, 0.13.9, 0.13.8, 0.13.7, 0.13.6, 0.13.5, 0.13.4, 0.13.3, 0.13.2, 0.13.1, 0.13.0, 0.12.1, 0.12.0, 0.11.2, 0.11.1, 0.10.8, 0.10.7, 0.10.6, 0.10.4, 0.10.3, 0.10.2, 0.10.1, 0.10.0, 0.9.3, 0.9.2, 0.9.1, 0.9.0, 0.8.9, 0.8.8, 0.8.7, 0.8.6, 0.8.5, 0.8.4, 0.8.3, 0.8.2, 0.8.1, 0.8.0, 0.7.6, 0.7.5, 0.7.4, 0.7.3, 0.7.2, 0.7.1, 0.7.0, 0.6.6, 0.6.5, 0.6.4, 0.6.3, 0.6.2, 0.6.1, 0.6.0, 0.5.1, 0.5.0, 0.4.1, 0.4.0, 0.3.4, 0.3.3, 0.3.2, 0.3.1, 0.3.0, 0.2.4, 0.2.3, 0.2.2, 0.2.1, 0.2.0 | 4 | 4 |
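If relying on an experimental pip command is a concern, the same information is available from PyPI's public JSON API, which is stable and easy to script. A small sketch using requests; the endpoint is the documented https://pypi.org/pypi/<name>/json route.

import requests

def latest_version(package: str) -> str:
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    resp.raise_for_status()
    return resp.json()["info"]["version"]

print(latest_version("requests"))  # e.g. '2.28.2' at the time of writing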
75,664,524 | 2023-3-7 | https://stackoverflow.com/questions/75664524/pandas-multiindex-updating-with-derived-values | I am tryng to update a MultiIndex frame with derived data. My multiframe is a time series where 'Vehicle_ID' and 'Frame_ID' are the levels of index and I iterate through each Vehicle_ID in order and compute exponential weighted avgs to clean the data and try to merge the additional columns to the original MultiIndex dataframe. Example Code: v_ids = trajec.index.get_level_values('Vehicle_ID').unique().values for id in v_ids: ewm_x = trajec.loc[(id,), 'Local_X'].ewm(span=T_pos/dt).mean() ewm_y = trajec.loc[(id,), 'Local_Y'].ewm(span=T_pos_x/dt).mean() smooth = pd.DataFrame({'Vehicle_ID': id, 'Frame_ID': ewm_y.index.values, 'ewm_y': ewm_y, 'ewm_x': ewm_x}).set_index(['Vehicle_ID', 'Frame_ID']) trajec.join(smooth) And this works outside of the loop, to join the values to the trajec dataframe. But when implemented in the loop seems to overwrite on each loop. Local_X, Local_Y, v_Length, v_Width, v_Class, v_Vel, v_Acc, Lane_ID, Preceding, Following, Space_Headway, Time_Headway Vehicle_ID Frame_ID 1 12 16.884 48.213 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00 13 16.938 49.463 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00 14 16.991 50.712 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00 15 17.045 51.963 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00 16 17.098 53.213 14.3 6.4 2 12.50 0.0 2 0 0 0.00 0.00 ... ... ... ... ... ... ... ... ... ... ... ... ... ... 2911 8588 53.693 1520.312 14.9 5.9 2 31.26 0.0 5 2910 2915 78.19 2.50 8589 53.719 1523.437 14.9 5.9 2 31.26 0.0 5 2910 2915 78.26 2.50 8590 53.746 1526.564 14.9 5.9 2 31.26 0.0 5 2910 2915 78.41 2.51 8591 53.772 1529.689 14.9 5.9 2 31.26 0.0 5 2910 2915 78.61 2.51 8592 53.799 1532.830 14.9 5.9 2 30.70 5.9 5 2910 2915 78.81 2.57 dataframe exerpt. | You can create an empty dataframe outside the loop to store the results, and then concatenate the results from each iteration to this empty dataframe. v_ids = trajec.index.get_level_values('Vehicle_ID').unique().values results = pd.DataFrame() # empty dataframe to store results for id in v_ids: ewm_x = trajec.loc[(id,), 'Local_X'].ewm(span=T_pos/dt).mean() ewm_y = trajec.loc[(id,), 'Local_Y'].ewm(span=T_pos_x/dt).mean() smooth = pd.DataFrame({'Vehicle_ID': id, 'Frame_ID': ewm_y.index.values, 'ewm_y': ewm_y, 'ewm_x': ewm_x}).set_index(['Vehicle_ID', 'Frame_ID']) results = pd.concat([results, smooth]) # concatenate results from each iteration # join the results to the original dataframe trajec = trajec.join(results) | 3 | 1 |
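An alternative that avoids the Python-level loop entirely is groupby().transform(), which preserves the original MultiIndex alignment and can therefore be assigned straight back as new columns. A sketch assuming the same trajec frame and the T_pos, T_pos_x and dt constants used in the question:

smoothed_x = trajec.groupby(level="Vehicle_ID")["Local_X"].transform(
    lambda s: s.ewm(span=T_pos / dt).mean())
smoothed_y = trajec.groupby(level="Vehicle_ID")["Local_Y"].transform(
    lambda s: s.ewm(span=T_pos_x / dt).mean())

trajec = trajec.assign(ewm_x=smoothed_x, ewm_y=smoothed_y)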
75,664,404 | 2023-3-7 | https://stackoverflow.com/questions/75664404/how-to-extract-hex-color-codes-from-a-colormap | From the below branca colormap import branca color_map = branca.colormap.linear.PuRd_09.scale(0, 250) colormap = color_map.to_step(index=[0, 10, 20, 50, 70, 90, 120, 200]) How can I extract hex colours for all the steps(index) from the above Branca colormap? | You can use matplotlib.colors.to_hex on colormap.colors: from matplotlib.colors import to_hex out = [to_hex(c) for c in colormap.colors] # or out = list(map(to_hex, colormap.colors)) Output: ['#f7f4f9', '#f0ebf4', '#e3d9eb', '#d0aad2', '#d084bf', '#e44199', '#67001f'] | 3 | 2 |
75,663,632 | 2023-3-7 | https://stackoverflow.com/questions/75663632/airflow-exceptions-airflowexception-branch-task-ids-must-contain-only-valid-t | I have a dag which contains 1 custom task, 1 @task.branch task decorator and 1 taskgroup, inside the taskgroup I have multiple tasks that need to be triggered sequentially depending on the outcome of the @task.branch. PROCESS_BATCH_data_FILE = "batch_upload" SINGLE_data_FILE_FIRST_OPERATOR = "validate_data_schema_task" ENSURE_INTEGRITY_TASK = "provide_data_integrity_task" PROCESS_SINGLE_data_FILE = "process_single_data_file_task" default_args = { "retries": 0, "retry_delay": timedelta(seconds=30), "trigger_rule": "none_failed", } default_args = update_default_args(default_args) flow_name = "data_ingestion" with DAG( flow_name, default_args=default_args, start_date= airflow.utils.dates.days_ago(0), schedule=None, dagrun_timeout=timedelta(minutes=180) ) as dag: update_status_running_op = UpdateStatusOperator( task_id="update_status_running_task", ) @task.branch(task_id = 'check_payload_type') def is_batch(**context): # data = context["dag_run"].conf["execution_context"].get("data") if isinstance(data, dict): subdag = "validate_data_schema_task" elif isinstance(data, list): subdag = PROCESS_BATCH_data_FILE return subdag with TaskGroup(group_id='group1') as my_task_group: validate_schema_operator = ValidatedataSchemaOperator(task_id=SINGLE_data_FILE_FIRST_OPERATOR) ensure_integrity_op = EnsuredataIntegrityOperator(task_id=ENSURE_INTEGRITY_TASK) process_single_data_file = ProcessdataOperatorR3(task_id=PROCESS_SINGLE_data_FILE) validate_schema_operator >> ensure_integrity_op >> process_single_data_file update_status_finished_op = UpdateStatusOperator( task_id="update_status_finished_task", dag=dag, trigger_rule="all_done", ) batch_upload = DummyOperator( task_id=PROCESS_BATCH_data_FILE ) for batch in range(0, BATCH_NUMBER): batch_upload >> ProcessdataOperatorR3( task_id=f"process_data_task_{batch + 1}", previous_task_id=f"provide_data_integrity_task_{batch + 1}", batch_number=batch + 1, trigger_rule="none_failed_or_skipped" ) >> update_status_finished_op branch_task = is_batch() update_status_running_op >> branch_task branch_task >> batch_upload branch_task >> my_task_group >> update_status_finished_op When I trigger below tag I get the following error: airflow.exceptions.AirflowException: 'branch_task_ids' must contain only valid task_ids. Invalid tasks found: {'validate_data_schema_task'}. I dont understand why I get the error because 'validate_data_schema_task' is defined at the top of the file. I have tried to hard code 'validate_data_schema_task' as task_id but that gives me the same error. | When referring to a task nested in a task group you need to specify their task _id as "group_id.task_id". This should work: @task.branch(task_id = 'check_payload_type') def is_batch(**context): # data = context["dag_run"].conf["execution_context"].get("data") if isinstance(data, dict): subdag = "group1.validate_data_schema_task" elif isinstance(data, list): subdag = "group1." + PROCESS_BATCH_data_FILE return subdag | 4 | 7 |
75,663,011 | 2023-3-7 | https://stackoverflow.com/questions/75663011/how-to-include-both-ends-of-a-pandas-date-range | From a pair of dates, I would like to create a list of dates at monthly frequency, including the months of both dates indicated. import pandas as pd import datetime # Option 1 pd.date_range(datetime(2022, 1, 13),datetime(2022, 4, 5), freq='M', inclusive='both') # Option 2 pd.date_range("2022-01-13", "2022-04-05", freq='M', inclusive='both') both return the list: DatetimeIndex(['2022-01-31', '2022-02-28', '2022-03-31'], dtype='datetime64[ns]', freq='M'). However, I am expecting the outcome with a list of dates (4 long) with one date for each month: [january, february, mars, april] If now we run: pd.date_range("2022-01-13", "2022-04-05", freq='M', inclusive='right') we still obtain the same result as before. It looks like inclusive has no effect on the outcome. Pandas version. 1.5.3 | using MonthEnd and Day offsets This is because 2022-04-05 is before your month end (2022-04-30). You can use: pd.date_range("2022-01-13", pd.Timestamp("2022-04-05")+pd.offsets.MonthEnd(), freq='M', inclusive='both') A more robust variant to also handle the case in which the input date is already the month end: pd.date_range("2022-01-13", pd.Timestamp("2022-04-05")-pd.offsets.Day()+pd.offsets.MonthEnd(), freq='M', inclusive='both') Output: DatetimeIndex(['2022-01-31', '2022-02-28', '2022-03-31', '2022-04-30'], dtype='datetime64[ns]', freq='M') alternative: using Period pd.date_range(pd.Period('2022-01-13', 'M').to_timestamp(), pd.Period('2022-04-30', 'M').to_timestamp(how='end'), freq='M', inclusive='both') Intermediates: pd.Period('2022-01-13', 'M').to_timestamp() # Timestamp('2022-01-01 00:00:00') pd.Period('2022-04-30', 'M').to_timestamp(how='end') # Timestamp('2022-04-30 23:59:59.999999999') or as periods: period_range pd.period_range('2022-01-13', '2022-04-30', freq='M') Output: PeriodIndex(['2022-01', '2022-02', '2022-03', '2022-04'], dtype='period[M]') | 3 | 3 |
75,600,994 | 2023-3-1 | https://stackoverflow.com/questions/75600994/i-am-getting-access-blocked-error-while-trying-to-get-athentication-code-which-i | here is what is tried I changed the status of O Auth Consent screen from testing to publish and the app scope is external then i created the O Auth client Id token and then i tried this code but this is giving error when i try to authenticate to the app. from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request SCOPES = ['https://www.googleapis.com/auth/drive'] CLIENT_SECRETS_FILE = 'client.json' REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob' flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES, redirect_uri=REDIRECT_URI) auth_url, _ = flow.authorization_url(prompt='consent') print(f'Please go to this URL to authorize the application: {auth_url}') auth_code = input('Enter the authorization code: ') flow.fetch_token(code=auth_code) creds = flow.credentials print(f'Access token: {creds.token}') print(f'Refresh token: {creds.refresh_token}') Can you spot how to do it in python and solve this error. | you cant use urn:ietf:wg:oauth:2.0:oob, this was discontinued. You would have better luck following the official QuickStart This sample will use creds = flow.run_local_server(port=0) to open the consent screen for you rather then asking you to click a link which will probably return a 404 error because you don't have a local web server running. from __future__ import print_function import os.path from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError # If modifying these scopes, delete the file token.json. SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly'] def main(): """Shows basic usage of the Drive v3 API. Prints the names and ids of the first 10 files the user has access to. """ creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: service = build('drive', 'v3', credentials=creds) # Call the Drive v3 API results = service.files().list( pageSize=10, fields="nextPageToken, files(id, name)").execute() items = results.get('files', []) if not items: print('No files found.') return print('Files:') for item in items: print(u'{0} ({1})'.format(item['name'], item['id'])) except HttpError as error: # TODO(developer) - Handle errors from drive API. print(f'An error occurred: {error}') if __name__ == '__main__': main() | 4 | 1 |
75,616,542 | 2023-3-2 | https://stackoverflow.com/questions/75616542/how-to-set-whisper-decodingoptions-language | I'm trying to run Whisper and I want to set DecodingOptions.language to French (instead of relying on its language detection). I have tried to write: options = whisper.DecodingOptions() options.language = "fr" but I'm getting the error: FrozenInstanceError: cannot assign to field 'language' How can I set the language in DecodingOptions? | You can change the language in the DecodingOptions in Whisper with the following command (as, for example, shown here): options = whisper.DecodingOptions(language="en") | 5 | 8 |
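For the asker's actual case (French), the same keyword applies, and the higher-level transcribe() helper accepts it as well. A short sketch; the model size and audio path are placeholders.

import whisper

model = whisper.load_model("base")

# low-level decoding options, as in the answer, but for French
options = whisper.DecodingOptions(language="fr")

# or via the convenience API, which forwards decoding options
result = model.transcribe("audio.mp3", language="fr")
print(result["text"])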
75,657,917 | 2023-3-7 | https://stackoverflow.com/questions/75657917/how-to-define-a-script-in-the-venv-bin-dir-with-pyproject-toml-in-hatch-or-any | Im unsure about the new doc on packaging with hatch and wonder if someone worked out how to define a script in a pip installable package. So in short I need to be able to direct python -m build to make a package with open_foo_bar.py as in the example, install into the (virtual env)/bin dir. my package looks like this (after a python -m build step that generated dist dir) pypi_package/ βββ bin β βββ open_foo_bar.py βββ dist β βββ foo-0.1.0-py3-none-any.whl β βββ foo-0.1.0.tar.gz βββ pyproject.toml βββ README.md βββ test_pkg βββ foolib.py βββ __init__.py Im trying to get bin/open_foo_bar.py installed into the $(virtual env)/bin instead it installs it into the site-packages/bin ./lib/python3.10/site-packages/bin/open_foo_bar.py myproject.toml is [build-system] requires = ["hatchling"] build-backend = "hatchling.build" [project] name = "FOO" version = "0.1.0" authors = [ { name="Mr Foo", email="[email protected]" }, ] description = "a Foo bar without drinks" readme = "README.md" requires-python = ">=3.8" classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ] dependencies = [ 'requests' ] [project.urls] "Homepage" = "http://footopia.s/howto_foo" This used to be easy by defining the scripts section in setup.py setuptools.setup( ... scripts ['bin/script1'], ... ) | In theory the Python spec defines how entrypoints and scripts are supposed to be handled. Hatch has chosen to do it a bit differently, and I'm not sure if this is even considered best-practice. This approach combines Forced Inclusion with Metadata Entrypoints/CLI. Here's how you might achieve your desired outcome: Add a main or run function to your script open_foo_bar.py Force hatch to make your script appear as part of the package once it is installed. You would add the lines: [tool.hatch.build.targets.sdist.force-include] "bin/open_foo_bar.py" = "test_pkg/open_foo_bar.py" [tool.hatch.build.targets.wheel.force-include] "bin/open_foo_bar.py" = "test_pkg/open_foo_bar.py" Create an entrypoint using the scripts section: [project.scripts] open_foo_bar = "test_pkg.open_foo_bar:main" When you install the package, open_foo_bar should be in your virtualenv bin/ directory, and it will call your main function within open_foo_bar.py, which will have been moved to reside within your package in site-packages. It's a hacky solution, but there doesn't appear to be a 1:1 feature for supporting arbitrary scripts. Hatch actually does mention the usage of an Entrypoints section, but it's catered towards plugin management. | 3 | 2 |
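For the entry point in step 3 to resolve, bin/open_foo_bar.py needs an importable function whose name matches the project.scripts value. A minimal sketch of what that file might contain; the body is obviously a placeholder.

# bin/open_foo_bar.py
def main() -> None:
    print("opening foo bar ...")

if __name__ == "__main__":
    main()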
75,653,850 | 2023-3-6 | https://stackoverflow.com/questions/75653850/python-mypy-type-checking-not-working-as-expected | I'm new to python, and am a huge fan of static type checkers. I have some code that handles file uploads with the Bottle framework. See below. def transcribe_upload(upload: FileUpload) -> Alternative: audio:AudioSource = upload_source(upload) ... def upload_source(upload:FileUpload) -> AudioSource: ... I made a really simple mistake and passed upload.file (the file-like object) to upload_source instead of the entire FileUpload object. def transcribe_upload(upload: FileUpload) -> Alternative: audio:AudioSource = upload_source(upload.file) # This is incorrect! The typechecker didn't catch it. In fact, it doesn't catch ANY incorrect parameter passing to upload_source: def transcribe_upload(upload: FileUpload) -> Alternative: audio:AudioSource = upload_source(4) # Why isn't mypy giving me an error? audio:AudioSource = upload_source(upload.asdf) # Why isn't mypy giving me an error? What's going on? I tested some basic functions separately and the typechecker caught when I tried to pass a number to a function that wanted a string, and it worked. What am I missing here? EDIT @kojiro suggsted that FileUpload is equivalent to Any. I think that may be correct. Here is the source for FileUpload. It's imported like this: from bottle import FileUpload. If that's the case, why is it letting me use FileUpload as if it were a type? (If I mess up the name, like FileUpld it does give me an error). More importantly, how do I get real types? I suppose the bottle authors have to add them? | bottle is not a statically-typed library (it offers no type annotations in bottle.py). Its organisation as a single bottle.py (instead of a package) also prevents you from making a py.typed yourself, which would otherwise allow mypy to pick up at least the class attributes, function/method signatures, and global variables (even if they don't have any typing information). The only way to get "real types" is for the bottle maintainers to add them. Assuming this isn't going to happen, you have several options: Generate a skeletal outline (Python stub file, extension .pyi) which will aid in basic code completion. This will catch variable, function, and class existence, but not variable and function signature typing information (simply because bottle.py doesn't have any in the first place). The skeletal outline can be generated in about a second using mypy's shipped stubgen tool, which you should already have if you've installed mypy. Collect runtime types by running pyannotate or MonkeyType on bottle's test suite, and use these tools to generate a .pyi. This is more accurate than just running stubgen, but the final quality is at the mercy of the test suite, and is very likely to result in a lot of false positive warnings by mypy, especially with descriptor classes like bottle's DictProperty and lazy_attribute. Use pytype's type inference to generate a .pyi. This generally results in more usable stubs, at the cost of an extremely long inference procedure that may end up building a dependency graph and scanning most of your site-packages, and will fail quite often if bottle has a lot of complex third-party dependencies. (EDIT: It looks like bottle only depends on the Python standard library, in which case you should get a fairly fast inference by pytype) You should end up getting a single bottle.pyi that you must move to the same directory as bottle.py. 
This should be in the site-packages of your virtual environment. Once you've done 2. or 3., mypy should be able to pick up typing errors properly. | 4 | 5 |
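If you only need the one class typed, a tiny hand-written bottle.pyi is also an option. A sketch, where the listed attributes are based on bottle's documented FileUpload API and should be checked against your bottle.py before relying on them:

from typing import Any, BinaryIO

class FileUpload:
    file: BinaryIO
    name: str
    raw_filename: str
    filename: str
    def save(self, destination: str, overwrite: bool = ..., chunk_size: int = ...) -> None: ...

def __getattr__(name: str) -> Any: ...   # keep everything else importable but untyped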
75,656,976 | 2023-3-7 | https://stackoverflow.com/questions/75656976/whats-the-difference-between-pip-install-bs4-and-pip-install-beautifulsoup4 | When I search about the installation of the BeautifulSoup lib, sometimes I see pip install bs4, and sometimes pip install BeautifulSoup4. What's the difference between these 2 methods of installation? | bs4 is technically a different package; however, it is a dummy package designed to install the correct package: beautifulsoup4. https://pypi.org/project/bs4/ TLDR: You can use either the short name or long name | 3 | 4 |
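Either way, the importable module is named bs4, so the code you write is identical regardless of which distribution name you installed:

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>hello</p>", "html.parser")
print(soup.p.text)   # hello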
75,652,936 | 2023-3-6 | https://stackoverflow.com/questions/75652936/how-to-unit-test-that-a-flask-apps-routes-are-protected-by-authlibs-resourcepr | I've got a Flask app with some routes protected by authlib's ResourceProtector. I want to test that a route I've set up is indeed protected by a authlib.integrations.flask_oauth2.ResourceProtector but I don't want to go to the trouble of creating a valid jwt token or anything like that. I want to avoid feeling like I'm testing authlib, rather than testing my code. All I really need to do is check that when I hit my endpoint, the ResourceProtector is called. This should be possible with mocking of some kind, but I wonder if there's a supported way to do this. from authlib.integrations.flask_oauth2 import ResourceProtector require_auth = ResourceProtector() require_auth.register_token_validator(validator) APP = Flask(__name__) @APP.route("/") @require_auth(None) def home(): return "Authorized!" The test should look something like this: from my_module import APP class TestApi: @property def client(self): return APP.test_client() def test_response_without_oauth(self): response = self.client.get("/") assert response.status_code == 401 def test_response_with_auth0(self): # Do some mocking or some magic response = self.client.get("/") assert response.text == "Authorized!" | The Flask docs subtly refer to Application Factories as having a use case involving testing: So why would you want to do this? Testing. You can have instances of the application with different settings to test every case. ... We can use an application factory to keep code out of the top-level which makes life hard for mocking (because by the time you've imported the module, the thing you're hoping to mock has already been defined.) from authlib.integrations.flask_oauth2 import ResourceProtector def create_app(): """The name `create_app` is magic: you can't rename it""" require_auth = ResourceProtector() require_auth.register_token_validator(validator) app = Flask(__name__) @app.route("/") @require_auth(None) def home(): return "Authorized!" This means we can then test the route as normal like this: from my_module import create_app from authlib.integrations.flask_oauth2 import ResourceProtector class TestApi: def test_endpoint_is_protected(self): client = create_app().test_client() with patch("my_module.ResourceProtector", wraps=ResourceProtector) as mock: client().get(self.ENDPOINT) mock.assert_called_once() To then test the route without having to worry about auth at all, you could tweak the server a little bit to use the optional argument: def create_app(auth_is_optional = False): ... @app.route("/") @require_auth(None, optional=auth_is_optional) def home(): return "Authorized!" We can now ignore auth in testing by calling create_app with auth_is_optional=True. This approach, of having keyword arguments to create_app for config, is recommended by the docs: Make it possible to pass in configuration values for unit tests so that you donβt have to create config files on the filesystem. | 3 | 3 |
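A hedged pytest-style sketch of how the auth_is_optional flag from the answer can be used to exercise the route without touching auth at all; the fixture name here is made up:

import pytest
from my_module import create_app

@pytest.fixture
def no_auth_client():
    app = create_app(auth_is_optional=True)   # auth becomes optional, so no token is needed
    return app.test_client()

def test_home_without_auth(no_auth_client):
    response = no_auth_client.get("/")
    assert response.status_code == 200
    assert response.text == "Authorized!"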
75,647,682 | 2023-3-6 | https://stackoverflow.com/questions/75647682/how-can-i-resolve-flake8-unused-import-error-for-pytest-fixture-imported-from | I wrote pytest fixture in fixtures.py file and using it in my main_test.py . But I am getting this error in flake8: F401 'utils_test.backup_path' imported but unused for this code: @pytest.fixture def backup_path(): ... from fixtures import backup_path def test_filename(backup_path): ... How can I resolve this? | generally you should not do this making fixtures available via import side-effects is an unintentional implementation detail of how fixtures work and may break in the future of pytest if you want to continue doing so, you can use # noqa: F403 on the import, telling flake8 to ignore the unused imports (though the linter is telling you the right thing here!) the supported way to make reusable fixtures is to place them in a conftest.py which is in a directory above all the tests that need it. tests will have these fixtures "in scope" automatically if that file is getting too large, you can write your fixtures in a plugin module and add that plugin module via pytest_plugins = ['module.name'] in your conftest.py disclaimer: I'm the current flake8 maintainer, I'm also a pytest core dev | 11 | 12 |
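A minimal sketch of the conftest.py approach described above; pytest's built-in tmp_path stands in for whatever backup_path really builds, and the test file needs no import at all, so flake8 has nothing to flag:

# conftest.py
import pytest

@pytest.fixture
def backup_path(tmp_path):
    return tmp_path / "backup"

# main_test.py -- no `from fixtures import backup_path` needed
def test_filename(backup_path):
    assert backup_path.name == "backup"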
75,649,804 | 2023-3-6 | https://stackoverflow.com/questions/75649804/create-dictionary-of-each-row-in-polars-dataframe | Lets assume we have below given dataframe. Now for each row I need to create dictionary and pass it to UDF for some logic processing.Is there a way to achieve this using either polars or pyspark dataframe ? | With Polars, you can use: # Dict of lists >>> df.transpose().to_dict(as_series=False) {'column_0': [1.0, 100.0, 1000.0], 'column_1': [2.0, 200.0, None]} # List of dicts >>> df.to_dicts() [{'Account number': 1, 'V1': 100, 'V2': 1000.0}, {'Account number': 2, 'V1': 200, 'V2': None}] Input dataframe: >>> df shape: (2, 3) ββββββββββββββββββ¬ββββββ¬βββββββββ β Account number β V1 β V2 β β --- β --- β --- β β i64 β i64 β f64 β ββββββββββββββββββͺββββββͺβββββββββ‘ β 1 β 100 β 1000.0 β ββββββββββββββββββΌββββββΌβββββββββ€ β 2 β 200 β null β ββββββββββββββββββ΄ββββββ΄βββββββββ | 4 | 6 |
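A hedged sketch of the "pass each row to a UDF" part of the question, building on to_dicts(); my_udf is a placeholder for whatever per-row logic is needed:

def my_udf(row: dict) -> float:
    v1 = row["V1"] or 0
    v2 = row["V2"] or 0
    return v1 + v2

results = [my_udf(row) for row in df.to_dicts()]
print(results)   # [1100.0, 200] for the dataframe above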
75,646,975 | 2023-3-6 | https://stackoverflow.com/questions/75646975/pandas-1-5-3-index-get-indexer-not-working-like-index-get-loc | I recently updated Pandas to the latest version, 1.5.3. I was using a version pre 1.4 previously. With the newest update I am getting a bunch of depreciation notices on index.get_loc and I am to use index.get_indexer. I updated my code to get_indexer, but now I am receiving a bunch of errors. My code uses timestamps as the index, and with get_loc I could simply find the index number by passing in a datetime. Now I get an error: 'datetime.datetime' object is not iterable. I have some simple sample code below: from datetime import datetime, timedelta import pandas as pd import numpy as np expiration_date = datetime(2016,3,24,16,00,0) num_minutes = 50000 date_list = [expiration_date - timedelta(minutes=x) for x in range(num_minutes)] df = pd.DataFrame(index = date_list) searchDate = date_list[10] idx_get_loc = df.index.get_loc(searchDate, method='nearest') print(f'Closes index using get_loc: {idx_get_loc}') idx_get_indexer = df.index.get_indexer(searchDate, method='nearest') print(f'Closes index using get_indexer: {idx_get_indexer}') get_loc will report out the proper index location. get_indexer fails. What am I missing? | Here, the get_indexer() method expects an iterable object such as a list or an array of timestamps, not a single timestamp as an argument. To use the get_indexer() method with a single timestamp, you need to put the timestamp in a list or an array like below. idx_get_indexer = df.index.get_indexer([searchDate], method='nearest') | 4 | 5 |
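Since get_indexer always returns an array (one position per label you pass in), take the first element when you want the single scalar that get_loc used to return:

idx = df.index.get_indexer([searchDate], method='nearest')[0]
print(f'Closest index using get_indexer: {idx}')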
75,627,734 | 2023-3-3 | https://stackoverflow.com/questions/75627734/python-sparse-matrix-c-with-elements-c-ij-sum-j-mina-ij-b-ji-from-sparse-ma | I have two sparse matrices A and B where A and B have the same number of columns (but potentially different numbers of rows). I'm trying to get a sparse matrix C with elements c_ij = sum_j min(a_ij,b_ji), where a_ij and b_ij are elements of A and B, respectively. Can this be done efficiently, and ideally sparsely? Even densely would be okay, but the matrices involved will be huge (tens of thousands of rows) and very sparse. Details: The analogy is with standard (mathematical) matrix multiplication, but instead of summing the product of elements, I want to sum the minimum of elements. An example will help. Using standard matrix multiplication A * B gives elements c_ij = sum_j a_ij * b_ji for elements a_ij in A and b_ij in B with i and j as row and column indices, respectively. For example, import numpy as np from scipy.sparse import csr_matrix A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]]) B = csr_matrix([[2, 0, 0], [0, 0, 3], [6, 0, 0]]) C = A * B print(f"\nC = A * B = \n{C}") gives >>> C = A * B = (0, 2) 6 (0, 0) 2 (1, 0) 18 (2, 0) 38 where c_31 = 38 = sum_j ( a_3j * b_j1 ) = 4 * 2 + 0 * 0 + 5 * 6 = 38. For each element of C, I want the sum of minimums min(a_ij * b_ji) instead of the sum of products a_ij * b_ji: c_31 = sum_j min(a_3j,b_j1) = min(4,2) + min(0,0) + min(5,6) = 7. | In normal dense arrays this would be done more easily with a three-dimensional array, but those are unavailable in sparse form. Instead, import numpy as np from scipy.sparse import csr_array, coo_array # If you're able to start in COO format instead of CSR format, it will make the code below simpler A = csr_array([ [1, 2, 0], [0, 0, 3], [4, 0, 5], [4, 0, 5], [1, 1, 6], ]) B = csr_array([ [2, 0, 0, 8], [0, 0, 3, 2], [6, 0, 0, 4], ]) m, n = A.shape n, p = B.shape # first min argument based on A a_coo = A.tocoo(copy=False) a_row_idx = ((a_coo.col + a_coo.row*(n*p))[None, :] + np.arange(0, n*p, n)[:, None]).ravel() a_min = coo_array(( np.broadcast_to(A.data, (p, A.size)).ravel(), # data: row-major repeat ( a_row_idx, # row indices np.zeros_like(a_row_idx), # column indices ), ), shape=(m*n*p, 2)) # test; deleteme a_min_dense = np.zeros((m*n*p, 2), dtype=int) a_min_dense[:, 0] = np.repeat(A.toarray(), p, axis=0).ravel() assert np.allclose(a_min_dense, a_min.toarray()) # second min argument based on B b_coo = B.tocoo(copy=False) b_row_idx = ((b_coo.row + b_coo.col*n)[None, :] + np.arange(0, m*n*p, n*p)[:, None]).ravel() b_min = coo_array(( np.broadcast_to(B.data, (m, B.size)).ravel(), # data: row-major repeat ( b_row_idx, # row indices np.ones_like(b_row_idx), # column indices ), ), shape=(m*n*p, 2)) # test; deleteme b_min_dense = np.zeros((m*n*p, 2), dtype=int) b_min_dense[:, 1] = np.tile(B.T.toarray().ravel(), (1, m)).ravel() assert np.allclose(b_min_dense, b_min.toarray()) # Alternative to this: use two one-dimensional columns and hstack ab = (a_min + b_min).min(axis=1) addend = ab.reshape((-1, n)) # scipy sparse summation input is sparse, but output is NOT SPARSE. It's an np.matrix. # If that matters, you need to run a more complex operation on 'ab' that manipulates indices. 
# deleteme total = addend.sum(axis=1).reshape((m, p)) # Example true sparse sum separators = np.insert( np.nonzero(np.diff(addend.row))[0] + 1, 0, 0, ) sparse_total = coo_array(( np.add.reduceat(addend.data, separators), ( addend.row[separators], np.zeros_like(separators), ) ), shape=(m*p, 1)).reshape((m, p)) # test; deleteme ab_dense = (a_min_dense + b_min_dense).min(axis=1) sum_dense = ab_dense.reshape((-1, n)).sum(axis=1).reshape((m, p)) assert np.allclose(ab.toarray().ravel(), ab_dense) assert np.allclose(sum_dense, total) assert np.allclose(sum_dense, sparse_total.toarray()) print(sum_dense) Other notes: don't use _matrix; it's deprecated in favour of _array. The above operates on different example matrices from yours, deliberately, because it's impossible to reliably develop array manipulation code when all of the dimensions are of the same size. However, when I modify them to be the same as your original arrays, I get [[1 0 2] [3 0 0] [7 0 0]] | 3 | 1 |
75,645,306 | 2023-3-5 | https://stackoverflow.com/questions/75645306/plot-radial-heatmap-in-python | I have problems when plotting a radial heatmap in Python. I would like to plot values from pandas dataframe that has column angle, distance and count. Values for angle are from -180 to 180. The distance are values from 0 to 20 and count tells the value count of that pair (angle, distance). Example of one dataframe whith df.head(10): angle dist counts 0 -180.0 0.64 1 1 -180.0 0.67 1 2 -180.0 0.68 1 3 -180.0 0.72 1 4 -180.0 0.75 2 5 -180.0 0.76 2 6 -180.0 0.78 1 7 -180.0 0.79 4 8 -180.0 0.80 1 9 -180.0 0.82 2 I would like to discretize these values so that, for example, I have bins with a width of 5 degrees and a distance of 0.25. And in the cell, the count for those values is summarized and then plotted as a radial heatmap. For now I worked like this, I would define a pandas dataframe that imitates a matrix, whose columns are degrees from -180 to 180 (-180, -175, ..., 175, 180) and index values are from 0 to 20 with step of 0.25 (0, 0.25, ..., 19.75, 20) degree_bins = np.array([x for x in range(-180, 181, 5)]) distance_bins = np.array([x*0.25 for x in range(0, 81)]) round_bins = pd.DataFrame(0, index=distance_bins, columns=degree_bins) And then i would sum count values: for row in tot.iterrows(): _, value = row degree = value["angle"] distance = value["distance"] count = value["counts"] degree_bin = np.digitize([degree], degree_bins)[0]-1 distance_bin = np.digitize([distance], distance_bins)[0]-1 round_bins.iloc[distance_bin, degree_bin] += count I also found solution creating bins using groupby and unstack which is much faster than using for loop: counts = df.groupby(['distance_bins', 'angle_bins'])['counts'].sum().unstack() But for now I just want to make ploting work correct. Code used for plotting: n = len(degree_bins) m = len(distance_bins) rad = np.linspace(0, 20.25, m) a = np.linspace(0, 2 * np.pi, n) r, th = np.meshgrid(rad, a) z = round_bins.to_numpy().T plt.figure(figsize=(20, 20)) plt.subplot(projection="polar") plt.pcolormesh(th, r, z, cmap='jet') plt.plot(a, r, ls='none', color='k') plt.grid() plt.colorbar() This code gives me empty histogram. Any help would be appreciated. | You can use np.histogram2d to calculate the counts for each bin. It has a parameter weight= where you can put the given counts per entry. The heatmap can be directly plotted via ax.pcolormesh on a polar subplot. The angles need to be converted to radians to match the polar plot. Optionally you can use ax.set_theta_zero_location('N') to tell where the zero should go and/or ax.set_theta_direction('clockwise') to change the turning direction. 
import matplotlib.pyplot as plt import pandas as pd import numpy as np # first generate some dummy test data df = pd.DataFrame() df['angle'] = np.random.randint(-180, 181, 2000) df['dist'] = np.random.uniform(0, 20, len(df)) df['counts'] = np.random.randint(1, 5, len(df)) # define the bins degree_bins = np.arange(-180, 181, 5) distance_bins = np.arange(0, 20.001, 0.25) # calculate the 2D histogram hist, _, _ = np.histogram2d(df['angle'], df['dist'], bins=(degree_bins, distance_bins), weights=df['counts']) # plot the histogram as a polar heatmap plt.style.use('dark_background') # this makes the text and grid lines white fig, ax = plt.subplots(figsize=(10, 10), subplot_kw={'polar': True}) ax.grid(False) # pcolormesh gives an annoying warning when the grid is on ax.pcolormesh(np.radians(degree_bins), distance_bins, hist.T, cmap='hot') ax.grid(True) plt.tight_layout() plt.show() | 3 | 5 |
75,645,700 | 2023-3-5 | https://stackoverflow.com/questions/75645700/how-do-i-query-a-dynamodb-and-get-all-rows-where-col3-exists-not-0-null-bot | This is my DynamoDB table via serverless framework, added a secondary column index for "aaa_id": Devices: Type: AWS::DynamoDB::Table Properties: TableName: Devices BillingMode: PAY_PER_REQUEST AttributeDefinitions: - AttributeName: serial AttributeType: S - AttributeName: aaa_id AttributeType: N KeySchema: - AttributeName: serial KeyType: HASH GlobalSecondaryIndexes: - IndexName: aaa_id KeySchema: - AttributeName: aaa_id KeyType: HASH Projection: ProjectionType: ALL I want to query my DynamoDB and get all items of the table where the column "aaa_id" exists or isn't 0 (or null, if it's even possible for a Number type column). Some rows don't include it. preferable using the query method instead of scan since i know it's less heavy I've been on this for hours. please help. Some of my few fail attempts: import json import boto3 def lambda_handler(event, context): dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Devices') try: response = table.query( IndexName='aaa_id', FilterExpression='aaa_id <> :empty', ExpressionAttributeValues={':empty': {'N': '0'}} ) items = response['Items'] return { 'statusCode': 200, 'body': json.dumps(items) } except Exception as e: print(e) return { 'statusCode': 500, 'body': json.dumps('Error querying the database') } ################################# import json import boto3 from boto3.dynamodb.conditions import Key, Attr def lambda_handler(event, context): dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Devices') try: response = table.query( IndexName='aaa_id', KeyConditionExpression=Key('aaa_id').gt(0) & Attr('aaa_id').not_exists(), ExpressionAttributeValues={ ':empty': {'N': ''} } ) data = response['Items'] while 'LastEvaluatedKey' in response: response = table.query( IndexName='aaa_id', KeyConditionExpression=Key('aaa_id').gt(0) & Attr('aaa_id').not_exists(), ExpressionAttributeValues={ ':empty': {'N': ''} }, ExclusiveStartKey=response['LastEvaluatedKey'] ) data.extend(response['Items']) return { 'statusCode': 200, 'body': json.dumps(data), 'success': True } except Exception as e: return { 'statusCode': 500, 'body': json.dumps(str(e)), 'success': False } ####################### import json import boto3 from boto3.dynamodb.conditions import Key def lambda_handler(event, context): dynamodb = boto3.resource('dynamodb') table = dynamodb.Table('Devices') try: response = table.query( IndexName='aaa_id-index', KeyConditionExpression=Key('aaa_id').gt(0) ) items = response['Items'] while 'LastEvaluatedKey' in response: response = table.query( IndexName='aaa_id-index', KeyConditionExpression=Key('aaa_id').gt(0), ExclusiveStartKey=response['LastEvaluatedKey'] ) items.extend(response['Items']) return { 'statusCode': 200, 'body': json.dumps(items), 'success': True } except Exception as e: return { 'statusCode': 500, 'body': json.dumps({'error': str(e)}), 'success': False } ################################## import boto3 import json def lambda_handler(event, context): dynamodb = boto3.client('dynamodb') try: response = dynamodb.query( TableName="Devices", IndexName='aaa_id-index', KeyConditionExpression='aaa_id <> :empty', # ExpressionAttributeValues={':empty': {'S': ''}} ) return { 'statusCode': 200, 'body': json.dumps(response['Items']), 'status': 'success' } except Exception as e: return { 'statusCode': 500, 'body': json.dumps({'error': str(e)}), 'status': 'error' } | You're over complicating things. 
Firstly, a Number attribute cannot be null; because DynamoDB is schemaless, you simply omit the attribute altogether. Secondly, indexes can be sparse: since you're not interested in items where the value is 0, simply don't set the attribute on those items, which in turn means they won't exist in the index at all. Then you simply Scan the index, knowing that every value present is neither null nor 0. In this case Scan is efficient because it reads exactly what you want. response = table.scan( IndexName='aaa_id' ) | 3 | 2 |
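A hedged sketch of that Scan with pagination added, since large tables return results in pages via LastEvaluatedKey; the table and index names come from the question:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Devices')

items = []
kwargs = {'IndexName': 'aaa_id'}
while True:
    response = table.scan(**kwargs)
    items.extend(response['Items'])
    if 'LastEvaluatedKey' not in response:
        break
    kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']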
75,645,297 | 2023-3-5 | https://stackoverflow.com/questions/75645297/django-gettext-lazy-not-working-in-string-interpolation-concatenation-inside | I have a dictionary of items with multiple properties. from django.utils.translation import ( gettext_lazy as _, ) {"item1": { "labels": [ _("label1"), "this is" + _("translatethis") + " label2", ] These items are then serialized in DRF. The problem is that _("label1") is being translated but "this is" + _("translatethis") + " label2" is not translated I tried also string interpolation, fstring and .format but nothing worked. When serializer fetches labels, _("translatethis") is not a proxy object. Is the only way to make this work surrounding whole strings in the gettext_lazy ? | The main problem is that _('translatethis') is not a string, but something that promises, when necessary, to be a string. When you however concatenate it with a string, it is time to keep its promise, and it thus presents a string, and it no longer can thus, when needed, check the active language. An option might be to work with a lazy object, like: from django.utils.functional import lazy def text_join(*items): return ''.join(items) text_join_lazy = lazy(text_join, str) { 'item1': { 'labels': [ _('label1'), text_join_lazy('this is ', _("translatethis"), ' label2'), ] } } | 3 | 3 |
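Django also ships a helper for exactly this pattern, django.utils.text.format_lazy, which keeps the whole formatted string lazy; a sketch as an alternative to the hand-rolled lazy() wrapper above:

from django.utils.text import format_lazy
from django.utils.translation import gettext_lazy as _

labels = [
    _("label1"),
    format_lazy("this is {} label2", _("translatethis")),
]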
75,644,077 | 2023-3-5 | https://stackoverflow.com/questions/75644077/pytorch-runtimeerror-zero-dimensional-tensor-at-position-0-cannot-be-concate | I have two tensors: # losses_q tensor(0.0870, device='cuda:0', grad_fn=<SumBackward0>) # this_loss_q tensor([0.0874], device='cuda:0', grad_fn=<AddBackward0>) When I am trying to concat them, PyTorch raises an error: losses_q = torch.cat((losses_q, this_loss_q), dim=0) RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated How to resolve this error? | losses_q is zero dimensional so can't be concatenated with anything. You can reshape it into a one dimensional tensor before concatenation. losses_q = torch.cat((losses_q.reshape(1), this_loss_q), dim=0) | 6 | 8 |
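Two equivalent alternatives in case you prefer not to hard-code the shape: unsqueeze(0) adds the missing dimension without copying data, and torch.atleast_1d works whether or not the tensor is already one-dimensional:

import torch

losses_q = torch.cat((losses_q.unsqueeze(0), this_loss_q), dim=0)
# or, shape-agnostic:
losses_q = torch.cat((torch.atleast_1d(losses_q), this_loss_q), dim=0)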
75,641,884 | 2023-3-5 | https://stackoverflow.com/questions/75641884/caching-behavior-in-jax | I have a function f that takes in a boolean static argument flag and performs some computation based on it's value. Below is a rough outline of this function. @partial(jax.jit, static_argnames=['flag']) def f(x, flag): # Preprocessing if flag: ... else: ... # Postprocessing Each time f is called with a different value of flag, a recompilation of this function should be triggered. However, because flag is a boolean and can take on at most two values, it would be preferable if JAX would cache the compiled version of f for each of the possible values of flag and avoid recompilations. In short, I would like JAX to compile f only two times when running following piece of code: flag = True for i in range(100): f(x, flag) flag = not flag Is there a way to tell JAX not to throw away old compiled versions of f, each time it's called with a new value of flag? And in general, are there any caching mechanisms implemented in JAX for such scenarios? (For instance if flag is an integer, but we know beforehand that it would only ever take k distinct values, and we would like to save the compiled version of f for each of these k values) I know that I can use jax.lax.cond or jax.lax.switch to control the flow inside f and treat flag as a regular argument rather than a static one. But this would make the code much more bloated (and difficult to read) as there are several places within the body of f where I access flag. It would be much cleaner if I declared flag to be a static argument and then controled the caching behavior of jax.jit to avoid recompilations. | If I understand your question correctly, then JAX by default behaves the way you would like it to behave. Each JIT-compiled function has an LRU cache of compilations based on the shape and dtype of dynamic arguments and the hash of static arguments. You can inspect the size of this cache using the _cache_size method of the compiled function. For example: import jax import jax.numpy as jnp from functools import partial @partial(jax.jit, static_argnames=['flag']) def f(x, flag): if flag: return jnp.sin(x) else: return jnp.cos(x) print(f._cache_size()) # 0 x = jnp.arange(10) f(x, True) print(f._cache_size()) # 1 f(x, False) print(f._cache_size()) # 2 # Subsequent calls with the same flag value hit the cache: flag = True for i in range(100): f(x, flag) flag = not flag print(f._cache_size()) # 2 Since the size of the x argument hasn't changed, we get one cache entry for each value of flag, and the cached compilations are used in subsequent calls. Note however if you change the shape or dtype of the dynamic argument, you get new cache entries: x = jnp.arange(100) for i in range(100): f(x, flag) flag = not flag print(f._cache_size()) # 4 The reason this is necessary is that, in general, functions may change its behavior based on these static quantities. | 3 | 3 |
75,642,045 | 2023-3-5 | https://stackoverflow.com/questions/75642045/separate-this-string-using-these-separator-elements-but-without-removing-them-fr | import re input_string = "Sus cosas deben ser llevadas allΓ, ella tomo a sΓ, Lucy la hermana menor, esta muy entusiasmada. por verte hoy por la tarde\n sdsdsd" #result_list = re.split(r"(?:.\s*\n|.|\n|;|,\s*[A-Z])", input_string) result_list = re.split(r"(?=[.,;]|(?<=\s)[A-Z])", input_string) print(result_list) Separate the string input_string using these separators r"(?:.\s*\n|.|\n|;|,\s*[A-Z])" , but without removing them from the substrings of the resulting list. When I use a positive lookahead assertion instead of a non-capturing group. This will split the input string at the positions immediately before the separators, while keeping the separators in the substrings. But I get this wrong output list ['Sus cosas deben ser llevadas allΓ', ', ella tomo a sΓ', ', ', 'Lucy la hermana menor', ', esta muy entusiasmada', '. por verte hoy por la tarde\n sdsdsd'] In order to obtain this correct list of output, when printing ["Sus cosas deben ser llevadas allΓ, ella tomo a sΓ,", " Lucy la hermana menor, esta muy entusiasmada.", " por verte hoy por la tarde\n", " sdsdsd"] | Condense your delimiter list pattern to the following "([.;\n]|,(?=\s*[A-Z]))" and use itertools.zip_longest to combine resulting substrings with followed delimiters: import re from itertools import zip_longest input_string = "Sus cosas deben ser llevadas allΓ, ella tomo a sΓ, Lucy la hermana menor, esta muy entusiasmada. por verte hoy por la tarde\n sdsdsd" res = re.split(r"([.;\n]|,(?=\s*[A-Z]))", input_string) res = list(map(''.join, zip_longest(res[::2], res[1::2], fillvalue=''))) print(res) ['Sus cosas deben ser llevadas allΓ, ella tomo a sΓ,', ' Lucy la hermana menor, esta muy entusiasmada.', ' por verte hoy por la tarde\n', ' sdsdsd'] | 3 | 2 |
75,639,679 | 2023-3-5 | https://stackoverflow.com/questions/75639679/type-hint-for-a-cast-like-function-that-raises-if-casting-is-not-possible | I am having a function safe_cast which casts a value to a given type, but raises if the value fails to comply with the type at runtime: from typing import TypeVar T = TypeVar('T') def safe_cast(t: type[T], value: Any) -> T: if isinstance(value, t): return cast(T, value) raise TypeError() This works nicely with primitive types. But I run into problems if I want to safe_cast against a UnionType: string = "string" casted: str | int = safe_cast(str | int, string) The instance check works with a union type. But my solution does not work, because mypy gives me error: Argument 1 to "safe_cast" has incompatible type "UnionType"; expected "Type[<nothing>]" [arg-type] I figure that <nothing> refers to the unspecified type variable T here. I also figure that apparently mypy cannot resolve Union[str, int] to Type[T]. My question is: How can I solve this? I looked into creating an overload for the UnionType. IIUC, in order to write the overload, I would need to create a Generic Union Type with a variadic number of arguments. I failed to get this done. Is this the right direction? If yes, how do I get it done? If no, how can I solve my problem with safe_casting Union types? | typing.cast is likely to be special-cased in a type-checking implementation, because while this works in mypy, import typing as t if t.TYPE_CHECKING: reveal_type(t.cast(str | int, "string")) # mypy: Revealed type is "Union[builtins.str, builtins.int]" the type annotations for typing.cast don't actually do anything meaningful with union types: @overload def cast(typ: Type[_T], val: Any) -> _T: ... @overload def cast(typ: str, val: Any) -> Any: ... @overload def cast(typ: object, val: Any) -> Any: ... What you'd actually want to do is to exploit the magic that your type-checker-implementation already offers. This is very easily done by making typing.cast your typing API, while making your safe_cast the runtime implementation. import typing as t T = t.TypeVar("T") if t.TYPE_CHECKING: from typing import cast as safe_cast else: # Type annotations don't actually matter here, but # are nice to provide for readability purposes. def safe_cast(t: type[T], value: t.Any) -> T: if isinstance(value, t): return value raise TypeError >>> reveal_type(safe_cast(int | str, "string")) # mypy: Revealed type is "Union[builtins.int, builtins.str]" | 4 | 4 |
75,640,162 | 2023-3-5 | https://stackoverflow.com/questions/75640162/django-showing-error-constraints-refers-to-the-joined-field | I have two models Product and Cart. Product model has maximum_order_quantity. While updating quantity in cart, I'll have to check whether quantity is greater than maximum_order_quantityat database level. For that am comparing quantity with maximum_order_quantity in Cart Model But it throws an error when I try to migrate cart.CartItems: (models.E041) 'constraints' refers to the joined field 'product__maximum_order_quantity'. Below are my models class Products(models.Model): category = models.ForeignKey( Category, on_delete=models.CASCADE, related_name="products" ) product_name = models.CharField(max_length=50, unique=True) base_price = models.IntegerField() product_image = models.ImageField( upload_to="photos/products", null=True, blank=True ) stock = models.IntegerField(validators=[MinValueValidator(0)]) maximum_order_quantity = models.IntegerField(null=True, blank=True) ) class CartItems(models.Model): cart = models.ForeignKey(Cart, on_delete=models.CASCADE) product = models.ForeignKey(Products, on_delete=models.CASCADE) quantity = models.IntegerField() class Meta: verbose_name_plural = "Cart Items" constraints = [ models.CheckConstraint( check=models.Q(quantity__gt=models.F("product__maximum_order_quantity")), name="Quantity cannot be more than maximum order quantity" ) ] Error SystemCheckError: System check identified some issues: ERRORS: cart.CartItems: (models.E041) 'constraints' refers to the joined field 'product__maximum_order_quantity'. | You cannot reference fields on related models in database-level constraints like CheckConstraint. The error message you received is indicating that you cannot reference the maximum_order_quantity field of the Products model in the constraints attribute of the CartItems model. The reason for this is that the maximum_order_quantity field is a property of the Products model, and in the CartItems model, you are referencing it through a foreign key relationship. Specifically, you are using the syntax product__maximum_order_quantity to access the maximum_order_quantity field of the related Products model. However, the constraints attribute can only reference fields that belong to the same model. It cannot reference fields of related models. Currently, you can override the clean() method as: from django.core.exceptions import ValidationError class CartItems(models.Model): cart = models.ForeignKey(Cart, on_delete=models.CASCADE) product = models.ForeignKey(Products, on_delete=models.CASCADE) quantity = models.IntegerField() def clean(self): if self.quantity > self.product.maximum_order_quantity: raise ValidationError('Quantity cannot be more than maximum order quantity') Note: Models in Django don't require s to be added as suffix, since it is added by default. So it is better to change them as CartItem and Product from CartItems and Products respectively. | 11 | 14 |
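One caveat worth knowing: Model.clean() runs automatically only during ModelForm validation, not on a bare save(). A hedged sketch if you want the check enforced on every save as well:

from django.core.exceptions import ValidationError
from django.db import models

class CartItems(models.Model):
    cart = models.ForeignKey('Cart', on_delete=models.CASCADE)
    product = models.ForeignKey('Products', on_delete=models.CASCADE)
    quantity = models.IntegerField()

    def clean(self):
        if self.quantity > self.product.maximum_order_quantity:
            raise ValidationError('Quantity cannot be more than maximum order quantity')

    def save(self, *args, **kwargs):
        self.full_clean()   # runs field validation plus clean() before hitting the database
        super().save(*args, **kwargs)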
75,621,245 | 2023-3-2 | https://stackoverflow.com/questions/75621245/how-to-solve-an-overdetermined-non-linear-set-of-equations-numerically-in-python | I am trying to solve a system of 4 exponential equations with two variables. However If I use fsolve python will only allow me two use as many equations as I have variables. But as I have infinitely many pairs of solutions (if only two equations are used) and I need to find the pair of variables that fits not only two but all four equations, fsolve does not seem to work. So right know my code look something like this: from scipy.optimize import fsolve c_1 = w+(y/(x*R)) c_2 = offset - (c_1/(x*R)) #offset(1.05), w(10) and R(0.35) are constants def equations(p): x,y = p eq1 = (c_1*sp.exp(-y*R*1.017))/(y*R)+c_2-(x*1.017)/(y*R)-(5.1138*2*np.pi) eq2 = (c_1*sp.exp(-y*R*2.35))/(y*R)+c_2-(x*2.35)/(y*R)-(2.02457*4*np.pi) eq3 = (c_1*sp.exp(-y*R*2.683))/(y*R)+c_2-(x*2.683)/(y*R)-(6.0842178*4*np.pi) return (eq1,eq2,eq3) x, y = fsolve(equations, (1,1)) print (equations((x, y))) However this will give me wildly different result depending on the initial guesses I give. I now want to so edit this code so that I can put additional equations to guarantee that I get the correct pair of x,y as a solution. Simply adding a third equation here in the into the function will return a TypeError of course so I was wondering how this can be done. | To add an arbitrary number of equations, I don't think you can use fsolve at all. Just run a least-squares minimisation, and make sure that you vectorise properly instead of rote repetition. The following does run and generate a result, though I have not assessed its accuracy. import numpy as np from scipy.optimize import minimize offset = 1.05 w = 10 R = 0.35 a = np.array((-1.017, -2.350, -2.683)) b = np.array(( -5.1138000*2, -2.0245700*4, -6.0842178*4, ))*np.pi def equations(x: float, y: float) -> np.ndarray: c_1 = w + y/x/R c_2 = offset - c_1/x/R return (c_1*np.exp(a*y*R) + a*x)/y/R + c_2 + b def error(xy: np.ndarray) -> np.ndarray: eqs = equations(*xy) return eqs.dot(eqs) result = minimize( fun=error, x0=(1, 1), method='nelder-mead', ) print(result.message) print('x, y =', result.x) print('equations:', equations(*result.x)) Optimization terminated successfully. x, y = [0.19082078 0.13493941] equations: [ 27.4082168 13.91087443 -42.00388728] | 3 | 2 |
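A hedged alternative using scipy.optimize.least_squares, which consumes the residual vector from the equations() function above directly, so no hand-written sum-of-squares wrapper is needed:

from scipy.optimize import least_squares

result = least_squares(lambda xy: equations(*xy), x0=(1, 1))
print('x, y =', result.x)
print('residuals:', equations(*result.x))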
75,637,278 | 2023-3-4 | https://stackoverflow.com/questions/75637278/why-do-small-changes-have-dramatic-effects-on-the-runtime-of-my-numba-parallel-f | I'm trying to understand why my parallelized numba function is acting the way it does. In particular, why it is so sensitive to how arrays are being used. I have the following function: @njit(parallel=True) def f(n): g = lambda i,j: zeros(3) + sqrt(i*j) x = zeros((n,3)) for i in prange(n): for j in range(n): tmp = g(i,j) x[i] += tmp return x Trust that n is large enough for parallel computing to be useful. For some reason this actually runs faster with fewer cores. Now when I make a small change (x[i] -> x[i, :]). @njit(parallel=True) def f(n): g = lambda i,j: zeros(3) + sqrt(i*j) x = zeros((n,3)) for i in prange(n): for j in range(n): tmp = g(i,j) x[i, :] += tmp return x The performance is significantly better, and it scales properly with the number of cores (ie. more cores is faster). Why does slicing make the performance better? To go even further, another change that makes a big difference is turning the lambda function into and external njit function. @njit def g(i,j): x = zeros(3) + sqrt(i*j) return x @njit(parallel=True) def f(n): x = zeros((n,3)) for i in prange(n): for j in range(n): tmp = g(i,j) x[i, :] += tmp return x This again ruins the performance and scaling, reverting back to runtimes equal to or slower than the first case. Why does this external function ruin the performance? The performance can be recovered with two options shown below. @njit def g(i,j): x = sqrt(i*j) return x @njit(parallel=True) def f(n): x = zeros((n,3)) for i in prange(n): for j in range(n): tmp = zeros(3) + g(i,j) x[i, :] += tmp return x @njit(parallel=True) def f(n): def g(i,j): x = zeros(3) + sqrt(i*j) return x x = zeros((n,3)) for i in prange(n): for j in range(n): tmp = g(i,j) x[i, :] += tmp return x Why is the parallel=True numba decorated function so sensitive to how arrays are being used? I know arrays are not trivially parallelizable, but the exact reason each of these changes dramatically effects performance isn't obvious to me. | TL;DR: allocations and inlining are certainly the source of the performance gap between the different version. Operating on Numpy array is generally a bit more expensive than view in Numba. In this case, the problem appear to be that Numba perform an allocation when using x[i] while it does not with x[i,:]. The thing is allocations are expensive, especially in parallel codes since allocators tends not to scale (due to internal locks or atomic variables serializing the execution). I am not sure this is a missed optimization since x[i] and x[i,:] might have a slightly different behaviour. In addition, Numba uses a JIT compiler (LLVM-Lite) which perform aggressive optimizations. LLVM is able to track allocations so to remove them in simple cases (like a function doing an allocation and freeing data just after in the same scope without side effects). The thing is Numba allocations are calling an external function that the compiler cannot optimize as it does not know the content at compile time (due to the way the Numba runtime interface currently works) and the function could theoretically have side effects. To show what is happening, we need to delve into the assembly code. Overall, Numba generates a function for f calling a xxx_numba_parfor_gufunc_xxx function in N threads. This last function executes the content of the parallel loop. The caller function is the same for both implementation. 
The main computing function is different for the two version. Here is the assembly code on my machine: ----- WITHOUT VIEWS ----- _ZN13_3cdynamic_3e40__numba_parfor_gufunc_0x272d183ed00_2487B104c8tJTC_2fWQAkyW1xhBopo9CCDiCFCDMJHDTCIGCy8IDxIcEFloKEF4UEDC8KBhhWIo6mjgJZwoWpwJVOYNCIkbcG2Ai0vgkhqAgA_3dE5ArrayIyLi1E1C7mutable7alignedE23Literal_5bint_5d_283_29x5ArrayIdLi2E1C7mutable7alignedE: .cfi_startproc pushq %r15 .cfi_def_cfa_offset 16 pushq %r14 .cfi_def_cfa_offset 24 pushq %r13 .cfi_def_cfa_offset 32 pushq %r12 .cfi_def_cfa_offset 40 pushq %rsi .cfi_def_cfa_offset 48 pushq %rdi .cfi_def_cfa_offset 56 pushq %rbp .cfi_def_cfa_offset 64 pushq %rbx .cfi_def_cfa_offset 72 subq $280, %rsp vmovaps %xmm6, 256(%rsp) .cfi_def_cfa_offset 352 .cfi_offset %rbx, -72 .cfi_offset %rbp, -64 .cfi_offset %rdi, -56 .cfi_offset %rsi, -48 .cfi_offset %r12, -40 .cfi_offset %r13, -32 .cfi_offset %r14, -24 .cfi_offset %r15, -16 .cfi_offset %xmm6, -96 movq %rdx, 160(%rsp) movq %rcx, 200(%rsp) movq 504(%rsp), %r14 movq 488(%rsp), %r15 leaq -1(%r15), %rax imulq %r14, %rax xorl %ebp, %ebp testq %rax, %rax movq %rax, %rdx cmovnsq %rbp, %rdx cmpq $1, %r15 cmovbq %rbp, %rdx movq %rdx, 240(%rsp) movq %rax, %rdx sarq $63, %rdx andnq %rax, %rdx, %rax addq 464(%rsp), %rax movq %r15, %rbx subq $1, %rbx movq 440(%rsp), %rcx movq 400(%rsp), %rsi movabsq $NRT_incref, %rdx cmovbq %rbp, %rax movq %rax, 232(%rsp) callq *%rdx movq (%rsi), %rbp movq 8(%rsi), %rdi subq %rbp, %rdi incq %rdi movabsq $NRT_MemInfo_alloc_safe_aligned, %rsi movl $24, %ecx movl $32, %edx callq *%rsi movq %rax, 192(%rsp) movq 24(%rax), %rax movq %rax, 120(%rsp) movl $24, %ecx movl $32, %edx callq *%rsi movq %rax, 64(%rsp) testq %rdi, %rdi jle .LBB6_48 movq %rdi, %r11 movq %rbp, %r8 movq %rbx, %r10 movq %r15, %r9 movq 432(%rsp), %rdx movq 472(%rsp), %rdi movq %r15, %rax imulq 464(%rsp), %rax movq %rax, 208(%rsp) xorl %eax, %eax testq %rdx, %rdx setg %al movq %rdx, %rcx sarq $63, %rcx andnq %rdx, %rcx, %rcx subq %rax, %rcx movq %rcx, 224(%rsp) leaq -4(%r15), %rax movq %rax, 184(%rsp) shrq $2, %rax incq %rax andl $7, %r15d movq %r9, %r13 andq $-8, %r13 movq %r9, %rcx andq $-4, %rcx movq %rcx, 176(%rsp) movl %eax, %ecx andl $7, %ecx movq %rbp, %rdx imulq %r9, %rdx movq %rcx, 168(%rsp) shlq $5, %rcx movq %rcx, 152(%rsp) andq $-8, %rax addq $-8, %rax movq %rax, 144(%rsp) movq %rax, %rcx shrq $3, %rcx incq %rcx movq %rcx, %rax movq %rcx, 136(%rsp) andq $-2, %rcx movq %rcx, 128(%rsp) vxorps %xmm6, %xmm6, %xmm6 movq 64(%rsp), %rax movq 24(%rax), %rax movq %rax, 248(%rsp) leaq 56(%rdi,%rdx,8), %rsi leaq 224(%rdi,%rdx,8), %rcx leaq (,%r9,8), %rax movq %rax, 88(%rsp) leaq (%rdi,%rdx,8), %rax addq $480, %rax movq %rax, 80(%rsp) xorl %eax, %eax movq %rax, 96(%rsp) movq %rdx, 216(%rsp) movq %rdx, 112(%rsp) movq %rbx, 56(%rsp) jmp .LBB6_3 .p2align 4, 0x90 .LBB6_2: leaq -1(%r11), %rax incq %r8 addq %r9, 112(%rsp) movq 104(%rsp), %rcx leaq (%rcx,%r9,8), %rcx incq 96(%rsp) movq 88(%rsp), %rdx addq %rdx, %rsi addq %rdx, 80(%rsp) cmpq $2, %r11 movq %rax, %r11 jl .LBB6_48 .LBB6_3: movq %rcx, 104(%rsp) movq %r8, %rax imulq %r9, %rax movq 472(%rsp), %rdi leaq (%rdi,%rax,8), %rbp movq 240(%rsp), %rax addq %rbp, %rax movq 232(%rsp), %rcx addq %rbp, %rcx movq %r8, %rdx imulq 496(%rsp), %rdx movq 464(%rsp), %rbx addq %rdx, %rbx testq %r9, %r9 cmoveq %r9, %rdx cmoveq %r9, %rbx addq %rdi, %rdx addq %rdi, %rbx cmpq %rbx, %rax setb 39(%rsp) cmpq %rcx, %rdx setb %al cmpq $0, 432(%rsp) jle .LBB6_2 cmpq 424(%rsp), %r9 jne .LBB6_46 movq 96(%rsp), %rcx imulq %r9, %rcx addq 216(%rsp), %rcx andb 
%al, 39(%rsp) movq 472(%rsp), %rax leaq (%rax,%rcx,8), %rax movq %rax, 72(%rsp) movl $1, %eax movq 224(%rsp), %rbx xorl %r12d, %r12d .p2align 4, 0x90 .LBB6_6: imulq %r8, %r12 vcvtsi2sd %r12, %xmm2, %xmm0 vsqrtsd %xmm0, %xmm0, %xmm0 movq 120(%rsp), %rcx vmovups %xmm6, (%rcx) movq $0, 16(%rcx) movq 248(%rsp), %rdx vmovsd %xmm0, (%rdx) vaddsd (%rbp), %xmm0, %xmm1 vmovsd %xmm1, (%rbp) vaddsd 8(%rcx), %xmm0, %xmm1 vmovsd %xmm1, 8(%rdx) vaddsd 8(%rbp), %xmm1, %xmm1 vmovsd %xmm1, 8(%rbp) vaddsd 16(%rcx), %xmm0, %xmm0 vmovsd %xmm0, 16(%rdx) movq %rax, %r12 vaddsd 16(%rbp), %xmm0, %xmm0 vmovsd %xmm0, 16(%rbp) cmpb $0, 39(%rsp) jne .LBB6_7 testq %r9, %r9 jle .LBB6_28 cmpq $7, %r10 jae .LBB6_19 xorl %eax, %eax movq %rbp, %rdi testq %r15, %r15 jne .LBB6_23 jmp .LBB6_26 .p2align 4, 0x90 .LBB6_19: movq %rbp, %rcx xorl %eax, %eax .p2align 4, 0x90 .LBB6_20: movq (%rcx), %rdx movq %rdx, -56(%rsi,%rax,8) leaq (%r14,%rcx), %rdx movq (%r14,%rcx), %rcx movq %rcx, -48(%rsi,%rax,8) leaq (%r14,%rdx), %rcx movq (%r14,%rdx), %rdx movq %rdx, -40(%rsi,%rax,8) leaq (%r14,%rcx), %rdx movq (%r14,%rcx), %rcx movq %rcx, -32(%rsi,%rax,8) leaq (%r14,%rdx), %rcx movq (%r14,%rdx), %rdx movq %rdx, -24(%rsi,%rax,8) leaq (%r14,%rcx), %rdx movq (%r14,%rcx), %rcx movq %rcx, -16(%rsi,%rax,8) leaq (%r14,%rdx), %rdi movq (%r14,%rdx), %rcx movq %rcx, -8(%rsi,%rax,8) leaq (%r14,%rdi), %rcx movq (%r14,%rdi), %rdx movq %rdx, (%rsi,%rax,8) addq $8, %rax addq %r14, %rcx cmpq %rax, %r13 jne .LBB6_20 movq %r13, %rax movq %rbp, %rdi testq %r15, %r15 je .LBB6_26 .LBB6_23: movq 112(%rsp), %rcx addq %rax, %rcx imulq %r14, %rax addq %rbp, %rax movq 472(%rsp), %rdx leaq (%rdx,%rcx,8), %rcx xorl %edx, %edx .p2align 4, 0x90 .LBB6_24: movq (%rax), %rdi movq %rdi, (%rcx,%rdx,8) incq %rdx addq %r14, %rax cmpq %rdx, %r15 jne .LBB6_24 movq %rbp, %rdi .LBB6_26: cmpb $0, 39(%rsp) jne .LBB6_27 .LBB6_28: xorl %eax, %eax testq %rbx, %rbx setg %al movq %rbx, %rcx subq %rax, %rcx addq %r12, %rax testq %rbx, %rbx movq %rcx, %rbx jg .LBB6_6 jmp .LBB6_2 .LBB6_7: movq %r11, 48(%rsp) movq %r8, 40(%rsp) movq 208(%rsp), %rcx movabsq $NRT_Allocate, %rax vzeroupper callq *%rax movq 488(%rsp), %r9 movq %rax, %rdi testq %r9, %r9 jle .LBB6_8 movq 56(%rsp), %r10 cmpq $7, %r10 movq 48(%rsp), %r11 jae .LBB6_11 xorl %eax, %eax testq %r15, %r15 jne .LBB6_15 jmp .LBB6_17 .LBB6_8: movq 40(%rsp), %r8 movq 48(%rsp), %r11 .LBB6_27: movq %r8, 40(%rsp) movq %rdi, %rcx movq %r11, %rdi movabsq $NRT_Free, %rax vzeroupper callq *%rax movq %rdi, %r11 movq 40(%rsp), %r8 movq 56(%rsp), %r10 movq 488(%rsp), %r9 jmp .LBB6_28 .LBB6_11: movq %rbp, %rcx xorl %eax, %eax .p2align 4, 0x90 .LBB6_12: movq (%rcx), %rdx movq %rdx, (%rdi,%rax,8) leaq (%r14,%rcx), %rdx movq (%r14,%rcx), %rcx movq %rcx, 8(%rdi,%rax,8) leaq (%r14,%rdx), %rcx movq (%r14,%rdx), %rdx movq %rdx, 16(%rdi,%rax,8) leaq (%r14,%rcx), %rdx movq (%r14,%rcx), %rcx movq %rcx, 24(%rdi,%rax,8) leaq (%r14,%rdx), %rcx movq (%r14,%rdx), %rdx movq %rdx, 32(%rdi,%rax,8) leaq (%r14,%rcx), %rdx movq (%r14,%rcx), %rcx movq %rcx, 40(%rdi,%rax,8) leaq (%r14,%rdx), %r8 movq (%r14,%rdx), %rcx movq %rcx, 48(%rdi,%rax,8) leaq (%r14,%r8), %rcx movq (%r14,%r8), %rdx movq %rdx, 56(%rdi,%rax,8) addq $8, %rax addq %r14, %rcx cmpq %rax, %r13 jne .LBB6_12 movq %r13, %rax testq %r15, %r15 je .LBB6_17 .LBB6_15: leaq (%rdi,%rax,8), %r8 imulq %r14, %rax addq %rbp, %rax xorl %edx, %edx .p2align 4, 0x90 .LBB6_16: movq (%rax), %rcx movq %rcx, (%r8,%rdx,8) incq %rdx addq %r14, %rax cmpq %rdx, %r15 jne .LBB6_16 .LBB6_17: testq %r9, %r9 jle .LBB6_18 cmpq $3, %r9 movq 
40(%rsp), %r8 ja .LBB6_32 xorl %eax, %eax jmp .LBB6_31 .LBB6_32: cmpq $28, 184(%rsp) jae .LBB6_34 xorl %eax, %eax jmp .LBB6_40 .LBB6_34: cmpq $0, 144(%rsp) je .LBB6_35 movq 128(%rsp), %rcx xorl %eax, %eax movq 80(%rsp), %rdx .p2align 4, 0x90 .LBB6_37: vmovups (%rdi,%rax,8), %ymm0 vmovups %ymm0, -480(%rdx,%rax,8) vmovups 32(%rdi,%rax,8), %ymm0 vmovups %ymm0, -448(%rdx,%rax,8) vmovups 64(%rdi,%rax,8), %ymm0 vmovups %ymm0, -416(%rdx,%rax,8) vmovups 96(%rdi,%rax,8), %ymm0 vmovups %ymm0, -384(%rdx,%rax,8) vmovups 128(%rdi,%rax,8), %ymm0 vmovups %ymm0, -352(%rdx,%rax,8) vmovups 160(%rdi,%rax,8), %ymm0 vmovups %ymm0, -320(%rdx,%rax,8) vmovups 192(%rdi,%rax,8), %ymm0 vmovups %ymm0, -288(%rdx,%rax,8) vmovups 224(%rdi,%rax,8), %ymm0 vmovups %ymm0, -256(%rdx,%rax,8) vmovups 256(%rdi,%rax,8), %ymm0 vmovups %ymm0, -224(%rdx,%rax,8) vmovups 288(%rdi,%rax,8), %ymm0 vmovups %ymm0, -192(%rdx,%rax,8) vmovups 320(%rdi,%rax,8), %ymm0 vmovups %ymm0, -160(%rdx,%rax,8) vmovups 352(%rdi,%rax,8), %ymm0 vmovups %ymm0, -128(%rdx,%rax,8) vmovups 384(%rdi,%rax,8), %ymm0 vmovups %ymm0, -96(%rdx,%rax,8) vmovups 416(%rdi,%rax,8), %ymm0 vmovups %ymm0, -64(%rdx,%rax,8) vmovups 448(%rdi,%rax,8), %ymm0 vmovups %ymm0, -32(%rdx,%rax,8) vmovupd 480(%rdi,%rax,8), %ymm0 vmovupd %ymm0, (%rdx,%rax,8) addq $64, %rax addq $-2, %rcx jne .LBB6_37 testb $1, 136(%rsp) je .LBB6_40 .LBB6_39: vmovups (%rdi,%rax,8), %ymm0 movq 104(%rsp), %rcx vmovups %ymm0, -224(%rcx,%rax,8) vmovups 32(%rdi,%rax,8), %ymm0 vmovups %ymm0, -192(%rcx,%rax,8) vmovups 64(%rdi,%rax,8), %ymm0 vmovups %ymm0, -160(%rcx,%rax,8) vmovups 96(%rdi,%rax,8), %ymm0 vmovups %ymm0, -128(%rcx,%rax,8) vmovups 128(%rdi,%rax,8), %ymm0 vmovups %ymm0, -96(%rcx,%rax,8) vmovups 160(%rdi,%rax,8), %ymm0 vmovups %ymm0, -64(%rcx,%rax,8) vmovups 192(%rdi,%rax,8), %ymm0 vmovups %ymm0, -32(%rcx,%rax,8) vmovupd 224(%rdi,%rax,8), %ymm0 vmovupd %ymm0, (%rcx,%rax,8) addq $32, %rax .LBB6_40: cmpq $0, 168(%rsp) je .LBB6_42 movq 72(%rsp), %rcx leaq (%rcx,%rax,8), %rcx leaq (%rdi,%rax,8), %rdx movq 152(%rsp), %r8 movabsq $memcpy, %rax vzeroupper callq *%rax movq 48(%rsp), %r11 movq 40(%rsp), %r8 movq 488(%rsp), %r9 .LBB6_42: movq 176(%rsp), %rcx movq %rcx, %rax cmpq %r9, %rcx movq 56(%rsp), %r10 je .LBB6_26 .LBB6_31: movq 72(%rsp), %rcx leaq (%rcx,%rax,8), %rcx leaq (%rdi,%rax,8), %rdx shlq $3, %rax movq 88(%rsp), %r8 subq %rax, %r8 movabsq $memcpy, %rax vzeroupper callq *%rax movq 48(%rsp), %r11 movq 40(%rsp), %r8 movq 56(%rsp), %r10 movq 488(%rsp), %r9 jmp .LBB6_26 .LBB6_35: xorl %eax, %eax testb $1, 136(%rsp) jne .LBB6_39 jmp .LBB6_40 .LBB6_18: movq 40(%rsp), %r8 jmp .LBB6_26 .LBB6_48: movabsq $NRT_decref, %rsi movq 440(%rsp), %rcx vzeroupper callq *%rsi movq 192(%rsp), %rcx callq *%rsi movq 64(%rsp), %rcx callq *%rsi movq 200(%rsp), %rax movq $0, (%rax) xorl %eax, %eax jmp .LBB6_47 .LBB6_46: vxorps %xmm0, %xmm0, %xmm0 movq 120(%rsp), %rax vmovups %xmm0, (%rax) movq $0, 16(%rax) movq 440(%rsp), %rcx movabsq $NRT_incref, %rax vzeroupper callq *%rax movabsq $.const.picklebuf.2691622873664, %rax movq 160(%rsp), %rcx movq %rax, (%rcx) movl $1, %eax .LBB6_47: vmovaps 256(%rsp), %xmm6 addq $280, %rsp popq %rbx popq %rbp popq %rdi popq %rsi popq %r12 popq %r13 popq %r14 popq %r15 retq .Lfunc_end6: .size _ZN13_3cdynamic_3e40__numba_parfor_gufunc_0x272d183ed00_2487B104c8tJTC_2fWQAkyW1xhBopo9CCDiCFCDMJHDTCIGCy8IDxIcEFloKEF4UEDC8KBhhWIo6mjgJZwoWpwJVOYNCIkbcG2Ai0vgkhqAgA_3dE5ArrayIyLi1E1C7mutable7alignedE23Literal_5bint_5d_283_29x5ArrayIdLi2E1C7mutable7alignedE, 
.Lfunc_end6-_ZN13_3cdynamic_3e40__numba_parfor_gufunc_0x272d183ed00_2487B104c8tJTC_2fWQAkyW1xhBopo9CCDiCFCDMJHDTCIGCy8IDxIcEFloKEF4UEDC8KBhhWIo6mjgJZwoWpwJVOYNCIkbcG2Ai0vgkhqAgA_3dE5ArrayIyLi1E1C7mutable7alignedE23Literal_5bint_5d_283_29x5ArrayIdLi2E1C7mutable7alignedE .cfi_endproc .weak NRT_incref .p2align 4, 0x90 .type NRT_incref,@function NRT_incref: testq %rcx, %rcx je .LBB7_1 lock incq (%rcx) retq .LBB7_1: retq .Lfunc_end7: ----- WITH VIEWS ----- __gufunc__._ZN13_3cdynamic_3e40__numba_parfor_gufunc_0x272bef6ed00_2491B104c8tJTC_2fWQAkyW1xhBopo9CCDiCFCDMJHDTCIGCy8IDxIcEFloKEF4UEDC8KBhhWIo6mjgJZwoWpwJVOYNCIkbcG2Ai0vgkhqAgA_3dE5ArrayIyLi1E1C7mutable7alignedE23Literal_5bint_5d_283_29x5ArrayIdLi2E1C7mutable7alignedE: .cfi_startproc pushq %r15 .cfi_def_cfa_offset 16 pushq %r14 .cfi_def_cfa_offset 24 pushq %r13 .cfi_def_cfa_offset 32 pushq %r12 .cfi_def_cfa_offset 40 pushq %rsi .cfi_def_cfa_offset 48 pushq %rdi .cfi_def_cfa_offset 56 pushq %rbp .cfi_def_cfa_offset 64 pushq %rbx .cfi_def_cfa_offset 72 subq $168, %rsp vmovaps %xmm6, 144(%rsp) .cfi_def_cfa_offset 240 .cfi_offset %rbx, -72 .cfi_offset %rbp, -64 .cfi_offset %rdi, -56 .cfi_offset %rsi, -48 .cfi_offset %r12, -40 .cfi_offset %r13, -32 .cfi_offset %r14, -24 .cfi_offset %r15, -16 .cfi_offset %xmm6, -96 movq (%rdx), %rax movq 24(%rdx), %r12 movq (%rcx), %rdx movq %rdx, 120(%rsp) movq 8(%rcx), %rdx movq %rdx, 112(%rsp) movq (%r8), %rdx movq %rdx, 104(%rsp) movq 8(%r8), %rdx movq %rdx, 96(%rsp) movq 16(%rcx), %rdx movq %rdx, 88(%rsp) movq 16(%r8), %rdx movq %rdx, 80(%rsp) movq 24(%rcx), %rcx movq %rcx, 64(%rsp) movq 24(%r8), %rcx movq %rcx, 56(%rsp) movl $0, 36(%rsp) movq %rax, 72(%rsp) testq %rax, %rax jle .LBB5_12 cmpq $4, %r12 movl $3, %eax cmovlq %r12, %rax movq %rax, 48(%rsp) movq %r12, %rbx sarq $63, %rbx andq %r12, %rbx xorl %eax, %eax vxorps %xmm6, %xmm6, %xmm6 jmp .LBB5_2 .p2align 4, 0x90 .LBB5_9: movq 136(%rsp), %rcx movabsq $NRT_decref, %rsi movq %r9, %rdi callq *%rsi movq %rdi, %rcx callq *%rsi movq 40(%rsp), %rax incq %rax cmpq 72(%rsp), %rax je .LBB5_12 .LBB5_2: movq %rax, %rbp imulq 104(%rsp), %rbp movq %rax, %rcx imulq 96(%rsp), %rcx movq 112(%rsp), %rdx movq (%rdx,%rcx), %rcx movq %rcx, 128(%rsp) movq %rax, 40(%rsp) movq %rax, %rcx imulq 80(%rsp), %rcx movq 88(%rsp), %rdx movq (%rdx,%rcx), %rdi movq 120(%rsp), %rcx movq (%rbp,%rcx), %r13 movq 8(%rbp,%rcx), %r14 subq %r13, %r14 incq %r14 movl $24, %ecx movl $32, %edx movabsq $NRT_MemInfo_alloc_safe_aligned, %rsi callq *%rsi movq %rax, 136(%rsp) movq 24(%rax), %r15 movl $24, %ecx movl $32, %edx callq *%rsi movq %rax, %r9 testq %r14, %r14 jle .LBB5_9 xorl %edx, %edx testq %rdi, %rdi setg %r8b testq %rdi, %rdi jle .LBB5_9 movq 128(%rsp), %rax cmpq %rax, 48(%rsp) jne .LBB5_10 movq 40(%rsp), %rax imulq 56(%rsp), %rax addq 64(%rsp), %rax movq 24(%r9), %rcx movb %r8b, %dl negq %rdx leaq (%rdi,%rdx), %rsi incq %rsi .p2align 4, 0x90 .LBB5_6: movq %r13, %rdi imulq %r12, %rdi addq %rbx, %rdi movq %rsi, %rdx xorl %ebp, %ebp .p2align 4, 0x90 .LBB5_7: vcvtsi2sd %rbp, %xmm2, %xmm0 vsqrtsd %xmm0, %xmm0, %xmm0 vmovups %xmm6, (%r15) movq $0, 16(%r15) vmovsd %xmm0, (%rcx) vaddsd (%rax,%rdi,8), %xmm0, %xmm1 vmovsd %xmm1, (%rax,%rdi,8) vaddsd 8(%r15), %xmm0, %xmm1 vmovsd %xmm1, 8(%rcx) vaddsd 8(%rax,%rdi,8), %xmm1, %xmm1 vmovsd %xmm1, 8(%rax,%rdi,8) vaddsd 16(%r15), %xmm0, %xmm0 vmovsd %xmm0, 16(%rcx) vaddsd 16(%rax,%rdi,8), %xmm0, %xmm0 vmovsd %xmm0, 16(%rax,%rdi,8) addq %r13, %rbp decq %rdx testq %rdx, %rdx jg .LBB5_7 leaq -1(%r14), %rdx incq %r13 cmpq $1, %r14 movq %rdx, %r14 jg .LBB5_6 jmp 
.LBB5_9 .LBB5_10: vxorps %xmm0, %xmm0, %xmm0 vmovups %xmm0, (%r15) movq $0, 16(%r15) movabsq $numba_gil_ensure, %rax leaq 36(%rsp), %rcx callq *%rax movabsq $PyErr_Clear, %rax callq *%rax movabsq $.const.pickledata.2691858029760, %rcx movabsq $.const.pickledata.2691858029760.sha1, %r8 movabsq $numba_unpickle, %rax movl $180, %edx callq *%rax testq %rax, %rax je .LBB5_11 movabsq $numba_do_raise, %rdx movq %rax, %rcx callq *%rdx .LBB5_11: movabsq $numba_gil_release, %rax leaq 36(%rsp), %rcx callq *%rax .LBB5_12: vmovaps 144(%rsp), %xmm6 addq $168, %rsp popq %rbx popq %rbp popq %rdi popq %rsi popq %r12 popq %r13 popq %r14 popq %r15 retq .Lfunc_end5: The code of the first version is huge compared to the second part. Overall, we can see that the most computational part is about the same: ----- WITHOUT VIEWS ----- .LBB6_6: imulq %r8, %r12 vcvtsi2sd %r12, %xmm2, %xmm0 vsqrtsd %xmm0, %xmm0, %xmm0 movq 120(%rsp), %rcx vmovups %xmm6, (%rcx) movq $0, 16(%rcx) movq 248(%rsp), %rdx vmovsd %xmm0, (%rdx) vaddsd (%rbp), %xmm0, %xmm1 vmovsd %xmm1, (%rbp) vaddsd 8(%rcx), %xmm0, %xmm1 vmovsd %xmm1, 8(%rdx) vaddsd 8(%rbp), %xmm1, %xmm1 vmovsd %xmm1, 8(%rbp) vaddsd 16(%rcx), %xmm0, %xmm0 vmovsd %xmm0, 16(%rdx) movq %rax, %r12 vaddsd 16(%rbp), %xmm0, %xmm0 vmovsd %xmm0, 16(%rbp) cmpb $0, 39(%rsp) jne .LBB6_7 ----- WITH VIEWS ----- .LBB5_7: vcvtsi2sd %rbp, %xmm2, %xmm0 vsqrtsd %xmm0, %xmm0, %xmm0 vmovups %xmm6, (%r15) movq $0, 16(%r15) vmovsd %xmm0, (%rcx) vaddsd (%rax,%rdi,8), %xmm0, %xmm1 vmovsd %xmm1, (%rax,%rdi,8) vaddsd 8(%r15), %xmm0, %xmm1 vmovsd %xmm1, 8(%rcx) vaddsd 8(%rax,%rdi,8), %xmm1, %xmm1 vmovsd %xmm1, 8(%rax,%rdi,8) vaddsd 16(%r15), %xmm0, %xmm0 vmovsd %xmm0, 16(%rcx) vaddsd 16(%rax,%rdi,8), %xmm0, %xmm0 vmovsd %xmm0, 16(%rax,%rdi,8) addq %r13, %rbp decq %rdx testq %rdx, %rdx jg .LBB5_7 While the code of the first version is a bit less efficient than the second one, the difference is certainly far from being sufficient to explain the huge gap in the timings (~65 ms VS <0.6ms). We can also see that the function calls in the assembly code are different between the two versions: ----- WITHOUT VIEWS ----- memcpy NRT_Allocate NRT_Free NRT_decref NRT_incref NRT_MemInfo_alloc_safe_aligned ----- WITH VIEWS ----- numba_do_raise numba_gil_ensure numba_gil_release numba_unpickle PyErr_Clear NRT_decref NRT_MemInfo_alloc_safe_aligned The NRT_Allocate, NRT_Free, NRT_decref, NRT_incref function calls indicate that the compiled code create a new Python object in the middle of the hot loop which is very inefficient. Meanwhile, the second version does not perform any NRT_incref and I suspect NRT_decref is never actually called (or maybe just once). The second code performs no Numpy array allocations. It looks like the calls to PyErr_Clear, numba_do_raise and numba_unpickle are made to manage exception that can possibly be raised (but not in the first version surprizingly so it is likely related to the use of views). Finally, the call to memcpy in the first version shows that the newly created array is certainly copied to the x. The allocation and the copy makes the first version very inefficient. I am pretty surprized that Numba does not generate allocations for zeros(3). This is grea, but you should really avoid creating arrays in hot loops like this since there is no garantee Numba will always optimize such a call. In fact, it often does not. You can use a basic loop to copy all items of a slice so to avoid any allocation. This is often faster if the size of the slice is known at compile time. 
Slice copies could be faster since the compiler might vectorize the code better, but in practice such copy loops are auto-vectorized reasonably well. One can note that the vsqrtsd instruction appears in the code of both versions, so the lambda is actually inlined. When you move the lambda out of the function and put its content in another jitted function, LLVM may not inline it. You can request Numba to inline the function manually before the intermediate representation (IR code) is generated, so that LLVM should produce similar code. This can be done using the inline="always" flag. This tends to increase the compilation time though (since the code is essentially copy-pasted into the caller). Inlining is critical for applying many further optimizations (constant propagation, SIMD vectorization, etc.), which can result in a huge performance boost. | 3 | 4 |
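To make the "avoid creating arrays in hot loops" advice concrete, here is a hedged sketch of the original function with the temporary zeros(3) removed entirely; each thread only ever writes to its own row of x:

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def f(n):
    x = np.zeros((n, 3))
    for i in prange(n):
        for j in range(n):
            v = np.sqrt(i * j)
            for k in range(3):   # explicit per-component update, no temporary array
                x[i, k] += v
    return x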
75,634,466 | 2023-3-4 | https://stackoverflow.com/questions/75634466/include-or-exclude-license-files-from-package-data-with-pyproject-toml-and-set | TL;DR How does one reliably include files from LICENSES/ (REUSE-style) in source archive and wheels for a Python package with a src/ layout? How does one exclude specific files? Details I have a project structure that looks like . βββ pyproject.toml βββ LICENSES β βββ MAIN.txt β βββ SECUNDARY.txt βββ MANIFEST.in βββ random_package β βββ __init__.py β βββ foo1.cpp β βββ foo2.cpp β βββ submodule1 β β βββ __init__.py β β βββ bar1.cpp β βββ submodule2 β β βββ __init__.py β β βββ bar2.cpp The pyproject.toml looks like [build-system] requires = ["setuptools>=61.0"] build-backend = "setuptools.build_meta" [project] name = "random_package" version = "0.1.0" license = {file = "LICENSES/MAIN.txt"} [metadata] # EDIT: metadata was the issue license-files = ["LICENSES/*.txt"] # this line should be in [tool.setuptools] [tool.setuptools] package-dir = {"" = "."} include-package-data = true # tried both true and false [tool.setuptools.packages.find] where = ["."] include = ["random_package*"] How do I include all cpp files except submodule1/bar1.cpp into the installation? I have tried the following entries in the toml (one at a time): [tool.setuptools.exclude-package-data] "*" = ["bar1.cpp"] "random_package.submodule1" = ["bar1.cpp"] I even set include-package-data to false and entered cpp files manually (except bar1.cpp) and even that did not work for both source and wheels. Nothing works reliably: for any and all combinations of these options, I always get bar1.cpp in either the zip/tar.gz archive or the wheel when I do python -m build. As for the license files, I get LICENSE/MAIN.txt in the source build, but not the others and no licenses are present in the wheels. Partial solution I have something that works for source dist using a MANIFEST.in with an include for the LICENSES/*.txt files and a manual include for the .cpp files instead of the data options in pyproject.toml but even this does not work for the wheel: I don't get the licenses in random_package-0.1.0.dist-info. Am I wrong in expecting the license files in the wheel? With the old setup.py scheme, back when I was using a single License.txt file, I did get the license file in there... And is there no way to do that with the toml alone? | It turns out that I was mistaken about the location of license-files (I first saw it in the "metadata" section on the doc); it must actually be in [tool.setuptools]. The other data include issue was maybe a cache issue, it seems to work in the following pyproject.toml: [build-system] requires = ["setuptools>=61.0"] build-backend = "setuptools.build_meta" [project] name = "random_package" license = {file = "LICENSES/MAIN.txt"} version = "0.1.0" [tool.setuptools] package-dir = {"" = "."} include-package-data = false license-files = ["LICENSES/*.txt"] [tool.setuptools.packages.find] where = ["."] include = ["random_package*"] [tool.setuptools.package-data] random_package = ["*.cpp"] [tool.setuptools.exclude-package-data] "*" = ["bar1.cpp"] With this, no MANIFEST.in file is required. | 5 | 5 |
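A quick hedged way to confirm the license texts actually ended up inside the built wheel; the exact .dist-info path can vary between setuptools/wheel versions, so just list anything license-like:

import glob
import zipfile

wheel = glob.glob("dist/random_package-0.1.0-*.whl")[0]
with zipfile.ZipFile(wheel) as wf:
    for name in wf.namelist():
        if "license" in name.lower():
            print(name)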
75,632,978 | 2023-3-4 | https://stackoverflow.com/questions/75632978/correct-way-to-use-venv-and-pip-in-pypy | I've been using cpython forever, but I'm new to pypy. In cpython, this is how I use virtual environments and pip. python3 -m venv venv source venv/bin/activate python3 -m pip install <package> I recently started using pypy for a project, and noticed that the following works. pypy3 -m venv venv source venv/bin/activate pypy3 -m pip install <package> Questions: Are there any differences between cpython venv/pip and pypy venv/pip? Can I create a venv using cpython, and use it with pypy, or vice-versa? Similarly, can I install packages using cpython's pip, and use them from pypy interpreter, or vice-versa? Is what I'm doing "correct", or are there any downsides/issues I'll face in future if I go down this road. Reasons why I prefer the python3 -m ... invocations: venv is present in std. lib, so I don't have to globally install virtualenv. Less ambiguous than using pip and pip3. References: What is the difference between venv, pyvenv, pyenv, virtualenv, virtualenvwrapper, pipenv, etc? Should I use pip or pip3? EDIT: Tried to share venv's between cpython and venv doesn't work (seems obvious in hindsight). It's still possible to create two separate venv's like python3 -m venv cpython_venv and pypy3 -m venv pypy_venv and switch between them as needed. python will be bound to cpython or pypy based on which virtual env is active, and pypi packages need to be installed in every venv where it's needed. | Are there any differences between cpython venv/pip and pypy venv/pip? Yes, PyPy make some changes in the venv Python code, so they may have some differences. Example for 3.7: CPython stdilb: https://github.com/python/cpython/blob/v3.7.13/Lib/venv/__init__.py PyPy stdlib: https://github.com/mozillazg/pypy/blob/release-pypy3.7-v7.3.9/lib-python/3/venv/__init__.py Can I create a venv using cpython, and use it with pypy, or vice-versa? I wouldn't recommend that, since they presumably have good reasons for patching the stdlib venv code. Similarly, can I install packages using cpython's pip, and use them from pypy interpreter, or vice-versa? I wouldn't recommend that, for several reasons. In the case of binary distributions with compatibility tags, the installer may select a wheel file which is specific to the Python interpreter that pip was running in. This package could be totally broken for a different Python runtime. Use python3 -m pip debug --verbose or pypy3 -m pip debug --verbose to list the supported compatibility tags of each runtime. Even for pure-python packages with no compiled extensions you're not safe - it's also the job of the installer to generate bytecode (.pyc files) at installation time. If you install with a different interpreter, you'll get incompatible bytecode. Python packages can and do specify conditional dependencies using environment markers in the packaging metadata. It's possible for dependency trees to be different between CPython and PyPy based on the platform_python_implementation environment marker. Is what I'm doing "correct", or are there any downsides/issues I'll face in future if I go down this road. Your usage shown in the question is correct. | 6 | 3 |
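As a small illustration of the interpreter-specific points above (editor's addition, not part of the original answer; the package name in the comment is a placeholder), you can check which implementation an environment uses, which is what wheel compatibility tags and environment markers key off:

import platform
import sys

print(platform.python_implementation())   # "CPython" or "PyPy"
print(sys.implementation.name)             # "cpython" or "pypy"

# Conditional dependencies in requirements files or packaging metadata can use
# the matching PEP 508 environment marker, e.g.:
#   somepackage>=1.0 ; platform_python_implementation == "PyPy"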
75,602,771 | 2023-3-1 | https://stackoverflow.com/questions/75602771/attributeerror-readonlyworksheet-object-has-no-attribute-defined-names | I am trying to open an xlsx file with Pandas and i recieve this error: Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[20], line 1 ----> 1 df = pd.read_excel("C:/Users/*****/OneDrive - *******/Bureau/file_name.xlsx", sheet_name="") File c:\Users\*******\OneDrive - ********\Bureau\file_name\venv\lib\site-packages\pandas\util\_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs) 209 else: 210 kwargs[new_arg_name] = new_arg_value --> 211 return func(*args, **kwargs) File c:\Users\******\OneDrive - *******\Bureau\file_name\venv\lib\site-packages\pandas\util\_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs) 325 if len(args) > num_allow_args: 326 warnings.warn( 327 msg.format(arguments=_format_argument_list(allow_args)), 328 FutureWarning, 329 stacklevel=find_stack_level(), 330 ) --> 331 return func(*args, **kwargs) File c:\Users\******\OneDrive - ******\Bureau\file_name\venv\lib\site-packages\pandas\io\excel\_base.py:482, in read_excel(io, sheet_name, header, names, index_col, usecols, squeeze, dtype, engine, converters, true_values, false_values, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, parse_dates, date_parser, thousands, decimal, comment, skipfooter, convert_float, mangle_dupe_cols, storage_options) 480 if not isinstance(io, ExcelFile): 481 should_close = True --> 482 io = ExcelFile(io, storage_options=storage_options, engine=engine) 483 elif engine and engine != io.engine: 484 raise ValueError( ... --> 109 sheet.defined_names[name] = defn 111 elif reserved == "Print_Titles": 112 titles = PrintTitles.from_string(defn.value) AttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names' " I was using python 3.10. versions and tried it with 3.11.2 also and version of pandas is 1.5.3 also i tried the solution in this topic too; 'ReadOnlyWorksheet' object has no attribute 'defined_names' i downgraded openpyxl and upgraded it again, tried several times but nope. still getting the same error. I tried different file xlsm, worked fine but not with xlsx. so i am blocked atm! | Restart Kernel. Had the same problem, downgraded openpyxl version to the 3.1.0, still wasn't working. Did a restart and runned it again. Worked! | 3 | 1 |
75,627,989 | 2023-3-3 | https://stackoverflow.com/questions/75627989/git-filter-repo-loses-the-remotes | I previously got this rewrite of history commit messages to work: #!/bin/bash # Create a temporary file to store the commit messages temp_file=$(mktemp) main_head_hash=$(git rev-parse main) suffix="β οΈ rebased since!" # Use git log to retrieve the commit messages and store them in the temporary file git log --pretty=format:%s $main_head_hash.. | grep 'π build version' | grep -v "$suffix" > $temp_file # Create a file to store the replacements echo > replacements.txt # Iterate over the commit messages in the temporary file while read commit_message; do # Print the replacement message to the replacements.txt file echo "$commit_message==>$commit_message $suffix" >> replacements.txt done < $temp_file # # β οΈβ οΈ Rewriting history β οΈβ οΈ git filter-repo --replace-message replacements.txt --force # # Remove the temporary files rm $temp_file rm replacements.txt but I have no began to notice that it causes my git repo to lose the remotes setup. Af first I thought it was some other git tool I was using, but now I have this script being the suspect π΅οΈ How do I avoid losing the remotes? | Removing the remote is by design. Quote from the author: filter-repo defaults to rewriting the whole repository and as such it creates history incompatible with the original. There are lots of things that can go wrong with pushing a rewritten history back up to the original location, something that other tools did not discuss or warn about in detail and which will trip up many people. Removing the origin remote helps avoid these errors; it is encouragement for folks to stop and read the docs to make sure they are prepared and understand the ramifications (and take appropriate preventative steps) before they do something that may bite them. (They can still push to the remote by just setting it back up, can just re-fetch, etc., so it's easy to circumvent, but I want them to at least notice something is different). Note it's easy to add the remote back: git remote add <repo-url-here> Or, you can also use the --refs or --partial options to avoid removing it during the filter. Note these options do more than just "not removing the remote", so you should decide whether they are acceptable to you: --partial Do a partial history rewrite, resulting in the mixture of old and new history. This implies a default of update-no-add for --replace-refs, disables rewriting refs/remotes/origin/* to refs/heads/*, disables removing of the origin remote, disables removing unexported refs, disables expiring the reflog, and disables the automatic post-filter gc. Also, this modifies --tag-rename and --refname-callback options such that instead of replacing old refs with new refnames, it will instead create new refs and keep the old ones around. Use with caution. --refs <refs+> Limit history rewriting to the specified refs. Implies --partial. In addition to the normal caveats of --partial (mixing old and new history, no automatic remapping of refs/remotes/origin/* to refs/heads/*, etc.), this also may cause problems for pruning of degenerate empty merge commits when negative revisions are specified. | 3 | 4 |
75,629,940 | 2023-3-3 | https://stackoverflow.com/questions/75629940/how-to-replace-particular-characters-of-a-string-with-the-elements-of-a-list-in | There is a string: input_str = 'The substring of "python" from index @ to index @ inclusive is "tho"' and a list of indices: idx_list = [2, 4] I want to replace the character @ in str_input with each element of the idx_list to have the following output: output_str = 'The substring of "python" from index 2 to index 4 inclusive is "tho"' So I have coded it as follows: def replace_char(input_str, idx_list): output_str = "" idx = 0 for i in range(0, len(input_str)): if input_str[i] == '@': output_str += str(idx_list[idx]) idx += 1 else: output_str += input_str[i] return output_str I wonder if there is any shorter and faster way than the concatenation that I have used? | One concise approach uses re.sub with a callback function: input_str = 'The substring of "python" from index @ to index @ inclusive is "tho"' idx_list = [2, 4] output_str = re.sub(r'\bindex @', lambda m: str(idx_list.pop(0)), input_str) print(output_str) # The substring of "python" from 2 to 4 inclusive is "tho" The idea here is that every time a match of index @ is found, we replace with the first entry in the list of indices. We also then pop that first index, so that it doesn't get used again. | 3 | 1 |
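A hedged variation on the accepted answer (editor's addition): replacing only the '@' placeholder keeps the word "index" in the output, matching the desired string exactly, and pulling values from an iterator avoids mutating idx_list:

import re

input_str = 'The substring of "python" from index @ to index @ inclusive is "tho"'
idx_list = [2, 4]

values = iter(idx_list)   # consume one index per match without mutating the list
output_str = re.sub('@', lambda m: str(next(values)), input_str)
print(output_str)
# The substring of "python" from index 2 to index 4 inclusive is "tho"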
75,628,106 | 2023-3-3 | https://stackoverflow.com/questions/75628106/python-creating-plot-based-on-observation-dates-not-as-a-time-series | I have the following dataset df id medication_date 1 2000-01-01 1 2000-01-04 1 2000-01-06 2 2000-04-01 2 2000-04-02 2 2000-04-03 I would like to first reshape the data set into days after the first observation per patient: id day1 day2 day3 day4 1 yes no no yes 2 yes yes yes no in order to ultimately create a plot with the above table: columns the dates and in black if yes, and white if not. any help really appreciated it | Transform the sparse Series ('yes' medication) to dense Series by adding missing days ('no' medication) then reset the Series index (2000-01-01 -> 0, 2000-04-01 -> 0). Finally, reshape your dataframe. def f(sr): # Create missing dates dti = pd.date_range(sr.min(), sr.max(), freq='D') # Fill the Series with 'yes' or 'no' return (pd.Series('yes', index=sr.tolist()) .reindex(dti, fill_value='no') .reset_index(drop=True)) df['medication_date'] = pd.to_datetime(df['medication_date']) out = (df.groupby('id')['medication_date'].apply(f).unstack(fill_value='no') .rename(columns=lambda x: f'day{x+1}').reset_index()) Output: >>> out id day1 day2 day3 day4 day5 day6 0 1 yes no no yes no yes 1 2 yes yes yes no no no Update import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap colors = ["white", "black"] cmap = LinearSegmentedColormap.from_list('Custom', colors, len(colors)) plt.matshow(out.set_index('id').eq('yes').astype(int), cmap=cmap) plt.show() | 3 | 2 |
75,627,029 | 2023-3-3 | https://stackoverflow.com/questions/75627029/how-to-organise-dataframe-columns | I am trying to organise DataFrame columns based on specific rules, but I don't know how. For example, I have a DataFrame related to chemistry as shown below. Each row shows the number of chemical bonds in a chemical compound. OH HO CaO OCa OO NaMg MgNa 0 2 3 2 0 1 1 1 1 0 2 3 4 5 2 0 2 1 2 3 0 0 0 0 In chemistry, an OH (Oxygen-Hydrogen) bond is the same as an HO (Hydrogen-Oxygen) bond, and a CaO (Calcium-Oxygen) bond is the same as an OCa (Oxygen-Calcium) bond. Thus, I'd like to organise the DataFrame as shown below. OH CaO OO NaMg 0 5 2 1 2 1 2 7 9 2 2 3 3 0 0 I'm struggling because: there are a variety of chemical bonds in my real DataFrame, so it is impossible to organise the information one by one (the number of columns is more than 3,000 and I don't know which kinds of chemical bonds exist and which are duplicates), and the number of letters depends on each element symbol, with some symbols including lowercase (e.g. Hydrogen: H (one letter, only uppercase), Calcium: Ca (two letters, uppercase & lowercase)). I looked for the same question online and wrote code myself, but I was not able to find a way. I would like to know code that solves my problem. | You can use str.findall to extract the individual elements of each bond and sort them to reorganize the pairs. A frozenset would not work here because for OO the second element would be lost. Then you can group by these sorted tuples and apply sum: # Modified from https://www.johndcook.com/blog/2016/02/04/regular-expression-to-match-a-chemical-element/ pat = r'(A[cglmrstu]|B[aehikr]?|C[adeflmnorsu]?|D[bsy]|E[rsu]|F[elmr]?|G[ade]|H[efgos]?|I[nr]?|Kr?|L[airuv]|M[dgnot]|N[abdeiop]?|Os?|P[abdmortu]?|R[abefghnu]|S[bcegimnr]?|T[abcehilm]|U(?:u[opst])?|V|W|Xe|Yb?|Z[nr])' grp = df.columns.str.findall(pat).map(lambda x: tuple(sorted(x))) out = df.groupby(grp, axis=1).sum().rename(columns=''.join) Output: >>> out CaO HO MgNa OO 0 2 5 2 1 1 7 2 2 5 2 3 3 0 0 | 6 | 6 |
75,617,527 | 2023-3-2 | https://stackoverflow.com/questions/75617527/why-do-similar-signal-slot-connections-behave-differently-after-pyqt5-to-pyside6 | I'm porting a pyqt5 GUI to pyside6 and I'm running into an issue I don't understand. I have two QSpinboxes that control two GUI parameters: One spinbox controls the weight of a splitter, the other controls the row height in a QtableView. This is the code in pyqt5: spinbox1.valueChanged.connect(my_splitter.setHandleWidth) spinbox2.valueChanged.connect(my_view.verticalHeader().setDefaultSectionSize) In Pyside6 spinbox1 works fine, but spinbox2 doesn't do its job and there is this warning: You can't add dynamic slots on an object originated from C++. The issue can be solved by changing the second line of code to: spinbox2.valueChanged.connect(lambda x: my_view.verticalHeader().setDefaultSectionSize(x)) It's nice to have found a solution, but I would also like to understand why the two connections behave differently in PySide6 and why using he lambda solves the issue. The warning message probably holds a clue but I have no idea what dynamic slots are (and a quick google didn't help me much). Edit: Since I was changing two things: Qt5 > QT6, And pyqt > pyside I looked at this in 4 python wrappers (pyqt5, pyqt6, pyside2, pyside6) to see which of the changes caused the issue. And I can tell that both pyside 2 and 6 show this behaviour, and none of the pyqt's | It looks like PySide (or, to be precise, Shiboken, the wrapper that allows Python access to Qt objects) is not able to directly connect to slots of objects directly created by Qt. That seems a PySide bug: PyQt does not show that behavior, meaning that it's completely possible to achieve it. A similar behavior sometimes happens with PyQt as well, but that's only in very specific cases: for protected methods of objects directly created by Qt. For example, trying to call initStyleOption() of the default delegate of an item view raises a RuntimeError ("no access to protected functions or signals for objects not created from Python"). Still, that should not happen for public functions like setDefaultSectionSize() is. There are possible workarounds for that, though. Using a lambda As you already found out, you can just use a lambda. This will force PySide to connect to a python function instead of a Qt one: PySide always allows that kind of connection. The drawback of this approach is the common problem with lambdas: if you directly use it as the connect() argument, you completely lose any reference to it, so there is no way to specifically disconnect from that function, unless you disconnect all functions for that signal (or the whole object). Lambdas can be referenced to, though: header = my_view.verticalHeader() header.setDefaultSectionSize_ = lambda s: header.setDefaultSectionSize(s) spinbox2.valueChanged.connect(header.setDefaultSectionSize_) With the code above, you can disconnect the function, since you now have a persistent reference to it. Using a method This is similar to the above, with the difference that we create a specific method to handle that, assuming that you keep a reference to the header (or, better, the view): spinbox2.valueChanged.connect(self.updateMyViewSectionSize) def updateMyViewSectionSize(self, size): self.my_view.verticalHeader().setDefaultSectionSize(size) It might be a bit more verbose, but it's also a better approach, since it consider the dynamic nature of verticalHeader() and provides public access to a function you may need to call in other cases. 
Explicitly set the header This is a trick I normally use for the delegate issue mentioned above: whenever I need to get some info from the delegate based on its initStyleOption(), I just create a new "dummy" delegate; since it has been created in Python, the problem doesn't occur anymore. The same works for this case too: create and set a dummy header view. my_view.setVerticalHeader(QHeaderView(Qt.Vertical, my_view)) Note that both the orientation and arguments are mandatory, and that the above should always be done as soon as possible (right after the table widget has been created). I would still suggest you to file a report in the Qt bug tracker, hoping they will be able to fix it at least for PySide6 (PySide2 will probably be ignored, but you never know). | 4 | 6 |
75,618,364 | 2023-3-2 | https://stackoverflow.com/questions/75618364/why-does-true-2-in-python | I am completely perplexed. We came across a bug, which we easily fixed, but we are perplexed as to why the value the bug was generating created the output it did. Specifically: Why does ~True equal -2 in python? ~True >> -2 Shouldn't the bitwise operator ~ only return binary? (Python v3.8) | True is a specialization of int. In python, integers are signed and unbounded. If one were to invert a fixed-size integer like the 16 bit 0x0001, you'd get 0xfffe which is -2 signed. But python needs a different definition of that operation because it is not size bounded. In Unary arithmetic and bitwise operations python defines unary inversion as The unary ~ (invert) operator yields the bitwise inversion of its integer argument. The bitwise inversion of x is defined as -(x+1). It only applies to integral numbers or to custom objects that override the invert() special method. This has the same effect as fixed-size bit inversion without messing around with infinity. Sure enough >>> -(True+1) -2 | 3 | 6 |
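A short follow-up illustration (editor's addition, not from the original answer) contrasting bitwise inversion, logical negation, and masking to get a fixed-width unsigned result:

print(~True)       # -2, because ~x is defined as -(x + 1) for Python ints
print(not True)    # False -> `not` is the operator for logical negation
print(~5 & 0xFF)   # 250 -> masking emulates an 8-bit unsigned bitwise NOT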
75,616,052 | 2023-3-2 | https://stackoverflow.com/questions/75616052/sort-list-based-on-dictionary-values-even-with-missing-keys | I am trying to sort a list based on dictionary values. The only problem is that if a list contains something that doesn't exist in the dictionary, it won't sort it. Here is what I have: d = { "hello0" : 0, "hello1" : 1, "hello2" : 2, "hello3" : 3 } l1 = ["hello1","hello3","hello0","hello2"] #list 1 l2 = ["hello4","hello1","hello3","hello0","hello2"] #list 2 try: sorted_l1 = sorted(l1,key=lambda x : d[x]) sorted_l2 = sorted(l2,key=lambda x : d[x]) #print(sorted_l1) print(sorted_l2) except Exception as e: print(f"Keyerror {e}") List 1 will get sorted just fine because everything in that list exists in the dictionary, but list 2 will not get sorted and will raise a KeyError. How do I sort list 2 so that the missing key gets added to the end of the sorted list 2? Do I use {}.get, or is there another way? | max_value_dict = max(d.values()) sorted_l2 = sorted(l2,key=lambda x : d.get(x, max_value_dict + 1)) Whenever x is not a key in d, the value associated to x by d.get in the sort is strictly greater than every value in d. Thus x is placed at the end. | 3 | 4 |
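An alternative sketch (editor's addition, not part of the accepted answer) that pushes unknown keys to the end without computing the maximum value first, using the fact that False sorts before True:

d = {"hello0": 0, "hello1": 1, "hello2": 2, "hello3": 3}
l2 = ["hello4", "hello1", "hello3", "hello0", "hello2"]

# Known keys get (False, value) and sort first by value; unknown keys get (True, 0) and go last.
sorted_l2 = sorted(l2, key=lambda x: (x not in d, d.get(x, 0)))
print(sorted_l2)   # ['hello0', 'hello1', 'hello2', 'hello3', 'hello4']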
75,605,184 | 2023-3-1 | https://stackoverflow.com/questions/75605184/left-align-the-titles-of-each-plotly-subplot | I have a facet wraped group of plotly express barplots , each with a title. How can I left align each subplot's title with the left of its plot window? import lorem import plotly.express as px import numpy as np import random items = np.repeat([lorem.sentence() for i in range(10)], 5) response = list(range(1,6)) * 10 n = [random.randint(0, 10) for i in range(50)] ( px.bar(x=response, y=n, facet_col=items, facet_col_wrap=4, height=1300) .for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) .for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True)) .for_each_yaxis(lambda yaxis: yaxis.update(showticklabels=True)) .show() ) I tried adding .for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1], x=0)) but it results in: | There's a few major challenges here: annotations are not aware of their column in the facet plot (which makes determining the x value difficult) the order in which annotations are created are from left to right, bottom to top (therefore, the annotations at the bottom of the plot are created first) if the number of subplots doesn't divide the number of columns evenly, then the first {remainder} number of annotations are made in the bottom row, and then subsequent annotations start the next row up, where the {remainder} = {number of annotations} % {number of columns} To help visualize: x x x x x x x x β followed by these ones left to right, ... x x β these annotations are created first left to right This means we would like to iterate through (annotation1, x_location_of_col1), (annotation2, x_location_of_col2), (annotation3, x_location_of_col1)... because the placement of the title is based on knowing when the column restarts. In your facet plot there are 10 annotations, 12 total subplots, and 4 columns. Therefore we want to keep track of the x values for each subplot which we can extract from the layouts: the information inside fig.layout['xaxis'], fig.layout['xaxis2']... contain the starting x-values for each subplot in paper coordinates, and we can use the information up to 'xaxis4' (since we have 4 columns), and store this info inside x_axis_start_positions which in our case is [0.0, 0.255, 0.51, 0.7649999999999999] (this is derived in my code, we definitely don't want to hardcode this) Then using the fact that there are 4 columns, 10 annotations, and 12 plots, we can work out that there are 10 // 4 = 2 full rows of plots, and the first row has 10 % 4 = 2 plots in the first row of annotations that are created. We can distill this information into the x starting positions we iterate through for the placement of each title: x_axis_start_positions_iterator = x_axis_start_positions[:remainder] + x_axis_start_positions*number_of_full_rows # [0.0, 0.255, 0.0, 0.255, 0.51, 0.7649999999999999, 0.0, 0.255, 0.51, 0.7649999999999999] Then we iterate through all 10 annotations and 10 starting positions for the titles, overwriting the positions of the automatically generated annotations. Update: I have wrapped this in a function called left_align_facet_plot_titles that takes a facet plot fig as an input, figures out the number of columns, and left aligns each title. 
import lorem import plotly.express as px import numpy as np import random items = np.repeat([lorem.sentence() for i in range(10)], 5) response = list(range(1,6)) * 10 n = [random.randint(0, 10) for i in range(50)] fig = ( px.bar(x=response, y=n, facet_col=items, facet_col_wrap=4, height=1300) .for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1])) .for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True)) .for_each_yaxis(lambda yaxis: yaxis.update(showticklabels=True)) ) def left_align_facet_plot_titles(fig): ## figure out number of columns in each facet facet_col_wrap = len(np.unique([a['x'] for a in fig.layout.annotations])) # x x x x # x x x x <-- then these annotations # x x <-- these annotations are created first ## we need to know the remainder ## because when we iterate through annotations ## they need to know what column they are in ## (and annotations natively contain no such information) remainder = len(fig.data) % facet_col_wrap number_of_full_rows = len(fig.data) // facet_col_wrap annotations = fig.layout.annotations xaxis_col_strings = list(range(1, facet_col_wrap+1)) xaxis_col_strings[0] = '' x_axis_start_positions = [fig.layout[f'xaxis{i}']['domain'][0] for i in xaxis_col_strings] if remainder == 0: x_axis_start_positions_iterator = x_axis_start_positions*number_of_full_rows else: x_axis_start_positions_iterator = x_axis_start_positions[:remainder] + x_axis_start_positions*number_of_full_rows for a, x in zip(annotations, x_axis_start_positions_iterator): a['x'] = x a['xanchor'] = 'left' fig.layout.annotations = annotations return fig fig = left_align_facet_plot_titles(fig) fig.show() And if we change the number of columns in the figure with fig = px.bar(..., facet_col_wrap=3, ...) and pass this figure to the function, the results are also as expected: | 3 | 6 |
75,601,631 | 2023-3-1 | https://stackoverflow.com/questions/75601631/nested-loop-over-all-row-pairs-in-a-pandas-dataframe | I have a dataframe in the following format with ~80K rows. df = pd.DataFrame({'Year': [1900, 1902, 1903], 'Name': ['Tom', 'Dick', 'Harry']}) Year Name 0 1900 Tom 1 1902 Dick 2 1903 Harry I need to call a function with each combination of the name column as parameters. Currently I am doing this with the following code (substituting print for function call): for i, n1 in enumerate(df.itertuples()): for n2 in df[i:].itertuples(): print(n1.Name, n2.Name) Is there a way to speed this up that I am missing? PS: I need to keep track of the indices for each name pair. So if I run itertools.combinations on the index, then I still have to make costly df.loc calls. | Another solution to keep track of index/years would be to use a cross join: import pandas as pd df = pd.DataFrame({'Year': [1900, 1902, 1903], 'Name': ['Tom', 'Dick', 'Harry']}) df = df.reset_index() print(df.join(df, how='cross', lsuffix='_1', rsuffix='_2')) Output: index_1 Year_1 Name_1 index_2 Year_2 Name_2 0 0 1900 Tom 0 1900 Tom 1 0 1900 Tom 1 1902 Dick 2 0 1900 Tom 2 1903 Harry 3 1 1902 Dick 0 1900 Tom 4 1 1902 Dick 1 1902 Dick 5 1 1902 Dick 2 1903 Harry 6 2 1903 Harry 0 1900 Tom 7 2 1903 Harry 1 1902 Dick 8 2 1903 Harry 2 1903 Harry | 3 | 3 |
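Building on the cross-join answer above, a small sketch (editor's addition) that keeps each unordered pair only once — reproducing the original df[i:] loop, which included self-pairs — by filtering on the tracked indices:

import pandas as pd

df = pd.DataFrame({'Year': [1900, 1902, 1903], 'Name': ['Tom', 'Dick', 'Harry']})
df = df.reset_index()

pairs = df.join(df, how='cross', lsuffix='_1', rsuffix='_2')
pairs = pairs[pairs['index_1'] <= pairs['index_2']]   # keep each unordered pair once

for n1, n2 in zip(pairs['Name_1'], pairs['Name_2']):
    print(n1, n2)   # call your function here instead of printing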
75,530,375 | 2023-2-22 | https://stackoverflow.com/questions/75530375/polars-vs-pandas-size-and-speed-difference | I have a parquet file (~1.5 GB) which I want to process with polars. The resulting dataframe has 250k rows and 10 columns. One column has large chunks of texts in it. I have just started using polars, because I heard many good things about it. One of which is that it is significantly faster than pandas. Here is my issue / question: The preprocessing of the dataframe is rather slow, so I started comparing to pandas. Am I doing something wrong or is polars for this particular use case just slower? If so: is there a way to speed this up? Here is my code in polars import polars as pl df = (pl.scan_parquet("folder/myfile.parquet") .filter((pl.col("type")=="Urteil") | (pl.col("type")=="Beschluss")) .collect() ) df.head() The entire code takes roughly 1 minute whereas just the filtering part takes around 13 seconds. My code in pandas: import pandas as pd df = (pd.read_parquet("folder/myfile.parquet") .query("type == 'Urteil' | type == 'Beschluss'") ) df.head() The entire code also takes roughly 1 minute whereas just the querying part takes <1 second. The dataframe has the following types for the 10 columns: i64 str struct[7] str (for all remaining) As mentioned: a column "content" stores large texts (1 to 20 pages of text) which I need to preprocess and the store differently I guess. EDIT: removed the size part of the original post as the comparison was not like for like and does not appear to be related to my question. | Edit 30 January 2025: The answer below isn't true anymore and Polars switched to a faster implementation of the string type. As mentioned: a column "content" stores large texts (1 to 20 pages of text) which I need to preprocess and the store differently I guess. This is where polars must do much more work than pandas. Polars uses arrow memory format for string data. When you filter your DataFrame all the columns are recreated for where the mask evaluates to true. That means that all the text bytes in the string columns need to be moved around. Whereas for pandas they can just move the pointers to the python objects around, e.g. a few bytes. This only hurts if you have really large values as strings. E.g. when you are storing whole webpages for instance. You can speed this up by converting to categoricals. | 7 | 9 |
75,554,856 | 2023-2-24 | https://stackoverflow.com/questions/75554856/how-do-you-fill-missing-dates-in-a-polars-dataframe-python | I do not seem to find an equivalent for Polars library. But basically, what I want to do is fill missing dates between two dates for a big dataframe. It has to be Polars because of the size of the data (> 100 mill). Below is the code I use for Pandas, but how can I do the same thing for Polars? import janitor import pandas as pd from datetime import datetime, timedelta def missing_date_filler(d): df = d.copy() time_back = 1 # Look back in days td = pd.to_datetime(datetime.now().strftime("%Y-%m-%d")) helper = timedelta(days=time_back) max_date = (td - helper).strftime("%Y-%m-%d") # Takes todays date minus 1 day df_date = dict(Date = pd.date_range(df.Date.min(), max_date, freq='1D')) # Adds the full date range between the earliest date up until yesterday df = df.complete(['Col_A', 'Col_B'], df_date).sort_values("Date") # Filling the missing dates return df | It sounds like you're looking for .upsample() Note that you can use the group_by parameter to perform the operation on a per-group basis. import polars as pl from datetime import datetime df = pl.DataFrame({ "date": [datetime(2023, 1, 2), datetime(2023, 1, 5)], "value": [1, 2] }) shape: (2, 2) βββββββββββββββββββββββ¬ββββββββ β date | value β β --- | --- β β datetime[ΞΌs] | i64 β βββββββββββββββββββββββͺββββββββ‘ β 2023-01-02 00:00:00 | 1 β β 2023-01-05 00:00:00 | 2 β βββββββββββββββββββββββ΄ββββββββ >>> df.upsample(time_column="date", every="1d") shape: (4, 2) βββββββββββββββββββββββ¬ββββββββ β date | value β β --- | --- β β datetime[ΞΌs] | i64 β βββββββββββββββββββββββͺββββββββ‘ β 2023-01-02 00:00:00 | 1 β β 2023-01-03 00:00:00 | null β β 2023-01-04 00:00:00 | null β β 2023-01-05 00:00:00 | 2 β βββββββββββββββββββββββ΄ββββββββ | 4 | 4 |
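A small sketch of the group_by parameter mentioned in the answer (editor's addition; depending on your Polars version the parameter may be named by instead of group_by, and the frame must be sorted within each group first):

import polars as pl
from datetime import datetime

df = pl.DataFrame({
    "id": ["a", "a", "b", "b"],
    "date": [datetime(2023, 1, 1), datetime(2023, 1, 4),
             datetime(2023, 1, 2), datetime(2023, 1, 4)],
    "value": [1, 2, 3, 4],
}).sort("id", "date")

out = (
    df.upsample(time_column="date", every="1d", group_by="id", maintain_order=True)
      .with_columns(pl.col("id").forward_fill())   # re-fill the group key on inserted rows; harmless if already filled
)
print(out)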
75,588,250 | 2023-2-28 | https://stackoverflow.com/questions/75588250/what-are-the-downsides-to-relying-purely-on-pyproject-toml | Say you have a Python program that you can successfully package using only pyproject.toml. What are the downsides? Why use setup.py or setup.cfg in this case? | There is no downside in not having a setup.py. It is just that in some particular cases some elements of the packaging can not be expressed in a descriptive manner (which means without code) in setup.cfg or pyproject.toml. This can range from some custom dynamic package metadata to the handling of packaging for custom non-Python code and many other things. My recommendation is: Avoid using setup.py as much as possible, if you can do without this file completely, then do without it Place the standard parts in pyproject.toml: [build-system] section, it is strongly recommended to have this section independently of whether or not you have a setup.cfg file and/or setup.py script [project] section, and avoid dynamic fields as much as possible For the parts that are specific to setuptools, you can choose to have them in pyproject.toml (under the [tool.setuptools] section) or in setup.cfg, as you prefer Related: Is setup.py deprecated? How to modernize a setup.py based project? | 5 | 6 |
75,595,957 | 2023-2-28 | https://stackoverflow.com/questions/75595957/how-to-flatten-split-a-tuple-of-arrays-and-calculate-column-means-in-polars-data | I have a dataframe as follows: df = pl.DataFrame( {"a": [([1, 2, 3], [2, 3, 4], [6, 7, 8]), ([1, 2, 3], [3, 4, 5], [5, 7, 9])]} ) Basically, each cell of a is a tuple of three arrays of the same length. I want to fully split them to separate columns (one scalar resides in one column) like the shape below: shape: (2, 9) βββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ β field_0 β field_1 β field_2 β field_3 β ... β field_5 β field_6 β field_7 β field_8 β β --- β --- β --- β --- β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β β i64 β i64 β i64 β i64 β βββββββββββͺββββββββββͺββββββββββͺββββββββββͺββββββͺββββββββββͺββββββββββͺββββββββββͺββββββββββ‘ β 1 β 2 β 3 β 2 β ... β 4 β 6 β 7 β 8 β β 1 β 2 β 3 β 3 β ... β 5 β 5 β 7 β 9 β βββββββββββ΄ββββββββββ΄ββββββββββ΄ββββββββββ΄ββββββ΄ββββββββββ΄ββββββββββ΄ββββββββββ΄ββββββββββ One way I have tried is to use list.to_struct and unnest two times to fully flatten the two nested levels. Two levels is fine here, but if there are a variety of nested levels and the number could not be determined ahead, the code will be so long. Is there any simpler (or more systematic) way to achieve this? | Piggybacking on some of the concepts in the other answers... We can see how much nesting there is by checking the inner datatype of the pl.List datatype for the column. This can be the condition for a loop. Here's an example with one further nesting of lists than the initial question: df = pl.DataFrame({"a": [([[1, 2], [2, 4], [3, 6]], [[2, 4], [3, 6], [4, 8]], [[6, 12], [7, 14], [8, 16]]), ([[1, 2], [2, 4], [3, 6]], [[3, 6], [4, 8], [5, 10]], [[5, 10], [7, 14], [9, 18]])]}) shape: (2, 1) βββββββββββββββββββββββββββββββββββ β a β β --- β β list[list[list[i64]]] β βββββββββββββββββββββββββββββββββββ‘ β [[[1, 2], [2, 4], [3, 6]], [[2β¦ β β [[[1, 2], [2, 4], [3, 6]], [[3β¦ β βββββββββββββββββββββββββββββββββββ expr = pl.col('a') dtype = df.schema['a'] # alternative: dtype.base_type() == pl.List while isinstance(dtype.inner, pl.List): expr = expr.list.eval(pl.element().list.explode()) dtype = dtype.inner df.lazy().select(expr.list.to_struct()).collect().unnest('a') shape: (2, 18) βββββββββββ¬ββββββββββ¬ββββββββββ¬ββββββββββ¬ββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ¬βββββββββββ β field_0 β field_1 β field_2 β field_3 β β¦ β field_14 β field_15 β field_16 β field_17 β β --- β --- β --- β --- β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β β i64 β i64 β i64 β i64 β βββββββββββͺββββββββββͺββββββββββͺββββββββββͺββββͺβββββββββββͺβββββββββββͺβββββββββββͺβββββββββββ‘ β 1 β 2 β 2 β 4 β β¦ β 7 β 14 β 8 β 16 β β 1 β 2 β 2 β 4 β β¦ β 7 β 14 β 9 β 18 β βββββββββββ΄ββββββββββ΄ββββββββββ΄ββββββββββ΄ββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ΄βββββββββββ | 4 | 1 |
75,569,521 | 2023-2-26 | https://stackoverflow.com/questions/75569521/why-does-math-cosmath-pi-2-not-return-zero | I came across some weird behavior by math.cos() (Python 3.11.0): >>> import math >>> math.cos(math.pi) # expected to get -1 -1.0 >>> math.cos(math.pi/2) # expected to get 0 6.123233995736766e-17 I suspect that floating point math might play a role in this, but I'm not sure how. And if it did, I'd assume Python just checks if the parameter equaled math.pi/2 to begin with. I found this answer by Jon Skeet, who said: Basically, you shouldn't expect binary floating point operations to be exactly right when your inputs can't be expressed as exact binary values - which pi/2 can't, given that it's irrational. But if this is true, then math.cos(math.pi) shouldn't work either, because it also uses the math.pi approximation. My question is: why does this issue only show up when math.pi/2 is used? | Any error in math.pi vs. Ο (there always is some) makes very little difference in one case math.cos(math.pi) and is quite significant in math.cos(math.pi/2). The curve is flat When math.cos(x) is very near 1.0, the curve is very flat: the slope is "close" to zero. About 47 million floating point x values near Ο have a cos(x) mathematically more than -1.0, yet their value is closer to -1.0 than the next encodable value of -0.99999999999999988897... The curve's slope is 1 With x near Ο and math.cos(x/2) near 0.0, the cosine curve has a |slope| "close" to one. Both the next smaller and next larger encodable x have a different cos(x/2). Conclusion When the |result| of sin(x) or cos(x) is near 1.0, many nearby x values will report 1.0. This would be true even if some x value was incredible close to Ο. For x near Ο (like math.pi) and y = |cos(x)|, we need about twice the precision in y to see an imprecision in x. | 5 | 8 |
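A quick numeric check of the slope argument above (editor's addition): math.pi is off from the true π by roughly 1.22e-16; near π/2 (slope ≈ -1) that input error shows up directly in the result, while near π (slope ≈ 0) it is invisible:

import math
from decimal import Decimal, getcontext

getcontext().prec = 40
PI = Decimal("3.141592653589793238462643383279502884197")

delta = PI - Decimal(math.pi)     # Decimal(float) is exact, so this is the true rounding error
print(delta)                      # roughly 1.2246e-16

print(float(delta) / 2)           # roughly 6.123e-17
print(math.cos(math.pi / 2))      # 6.123233995736766e-17 -> the input error passes straight through
print(math.cos(math.pi))          # -1.0 -> the same input error makes no visible difference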
75,568,011 | 2023-2-25 | https://stackoverflow.com/questions/75568011/filter-polars-dataframe-based-on-when-rows-whose-specific-columns-contain-pairs | In this example, on columns ["foo", "ham"], I want rows 1 and 4 to be removed since they match a pair in the list df = pl.DataFrame( { "foo": [1, 1, 2, 2, 3, 3, 4], "bar": [6, 7, 8, 9, 10, 11, 12], "ham": ["a", "b", "c", "d", "e", "f", "b"] } ) pairs = [(1,"b"),(3,"e"),(4,"g")] The following worked for me but I think this will be problematic when the dataframe and list of pairs are large. for a, b in pairs: df = df.filter(~(pl.col('foo') == a) | ~(pl.col('ham') == b)) I think this is the pandas implementation for this problem Pandas: How to remove rows from a dataframe based on a list of tuples representing values in TWO columns? I am not sure what the Polars implementation of it is. (I think this problem can be generalized to any number of selected columns and any number of elements in a group. For instance, rather than a list of pairs, it can be another dataframe. You get the 'set difference', in terms of rows, of the two dataframes based on specific columns.) | It looks like an ANTI JOIN schema = ["foo", "ham"] (df.with_row_index() # just to show what "row numbers" were "removed" .join( pl.DataFrame(pairs, orient="row", schema=schema), on = schema, how = "anti" ) ) shape: (5, 4) βββββββββ¬ββββββ¬ββββββ¬ββββββ β index β foo β bar β ham β β --- β --- β --- β --- β β u32 β i64 β i64 β str β βββββββββͺββββββͺββββββͺββββββ‘ β 0 β 1 β 6 β a β β 2 β 2 β 8 β c β β 3 β 2 β 9 β d β β 5 β 3 β 11 β f β β 6 β 4 β 12 β b β βββββββββ΄ββββββ΄ββββββ΄ββββββ | 4 | 7 |
75,550,124 | 2023-2-23 | https://stackoverflow.com/questions/75550124/python-polars-how-to-add-a-progress-bar-to-map-elements-map-groups | Is it possible to add a progress bar to a Polars apply loop with a custom function? For example, how would I add a progress bar to the following toy example: df = pl.DataFrame( { "team": ["A", "A", "A", "B", "B", "C"], "conference": ["East", "East", "East", "West", "West", "East"], "points": [11, 8, 10, 6, 6, 5], "rebounds": [7, 7, 6, 9, 12, 8] } ) df.group_by("team").map_groups(lambda x: x.select(pl.col("points").mean())) Edit 1: After help from @Jcurious, I have the following 'tools' that can be re-used for other functions, however it does not print to console correctly. def pl_progress_applier(func, task_id, progress, **kwargs): progress.update(task_id, advance=1, refresh=True) return func(**kwargs) def pl_groupby_progress_apply(data, group_by, func, drop_cols=[], **kwargs): global progress with Progress() as progress: num_groups = len(data.select(group_by).unique()) task_id = progress.add_task('Applying', total=num_groups) return ( data .group_by(group_by) .map_groups(lambda x: pl_progress_applier( x=x.drop(drop_cols), func=func, task_id=task_id, progress=progress, **kwargs) ) ) # and using the function custom_func, we can return a table, howevef the progress bar jumps to 100% def custom_func(x): return x.select(pl.col('points').mean()) pl_groupby_progress_apply( data=df, group_by='team', func=custom_func ) Any ideas on how to get the progress bar to actually work? Edit 2: It seems like the above functions do indeed work, however if you're using PyCharm (like me), then it does not work. Enjoy non-PyCharm users! | I like the progress bars from Rich (which also comes bundled with pip) There's probably a neater way to package this up, but something like: from pip._vendor.rich.progress import ( Progress, SpinnerColumn, TimeElapsedColumn ) def polars_bar(total, title="Processing", transient=True): bar = Progress( SpinnerColumn(), *Progress.get_default_columns(), TimeElapsedColumn(), transient=transient # remove bar when finished ) def _run(func, *args, **kwargs): task_id = bar.add_task(title, total=total) def _execute(*args, **kwargs): bar.update(task_id, advance=1) return func(*args, **kwargs) return lambda self: _execute(self, *args, **kwargs) bar.run = _run return bar Examples .map_groups() def my_custom_group_udf(group, expr): time.sleep(.7) return group.select(expr) num_groups = df["team"].n_unique() with polars_bar(total=num_groups) as bar: (df.group_by("team") .map_groups( bar.run( my_custom_group_udf, expr=pl.col("points").mean().name.suffix("_mean") ) ) ) .map_elements() def my_custom_udf(points, multiplier=1): time.sleep(.3) # simulate some work return (points + 100) * multiplier with polars_bar(total=df.height) as bar: df.with_columns( pl.col("points").map_elements( bar.run(my_custom_udf, multiplier=5), return_dtype = pl.Int64 ) .alias("udf") ) Note: tqdm also has Rich support: https://tqdm.github.io/docs/rich/ | 5 | 7 |
75,534,590 | 2023-2-22 | https://stackoverflow.com/questions/75534590/how-to-smooth-adjacent-polygons-in-python | I'm looking for a way to smooth polygons such that adjacent/touching polygons remain touching. Individual polygons can be smoothed easily, e.g., with PAEK or Bezier interpolation (https://pro.arcgis.com/en/pro-app/latest/tool-reference/cartography/smooth-polygon.htm), which naturally changes their boundary edge. But how to smooth all polygons such that touching polygons remain that way? I'm looking for a Python solution ideally, so it can easily be automated. I found an equivalent question for Arcgis (https://gis.stackexchange.com/questions/183718/how-to-smooth-adjacent-polygons), where the top answer outlines a good strategy (converting polygon edges to lines from polygon-junction to junction), smoothing these and then reconstructing the polygons). Perhaps this would the best strategy, but I'm not sure how to convert shared polygon boundaries to individual polylines in Python. Here is some example code that shows what I'm trying to do for just 2 polygons (but I've created the 'smoothed' polygons by hand): import matplotlib.pyplot as plt import geopandas as gpd from shapely import geometry x_min, x_max, y_min, y_max = 0, 20, 0, 20 ## Create original (coarse) polygons: staircase_points = [[(ii, ii), (ii, ii + 1)] for ii in range(x_max)] staircase_points_flat = [coord for double_coord in staircase_points for coord in double_coord] + [(x_max, y_max)] list_points = {1: staircase_points_flat + [(x_max, y_min)], 2: staircase_points_flat[1:-1] + [(x_min, y_max)]} pols_coarse = {} for ind_pol in [1, 2]: list_points[ind_pol] = [geometry.Point(x) for x in list_points[ind_pol]] pols_coarse[ind_pol] = geometry.Polygon(list_points[ind_pol]) df_pols_coarse = gpd.GeoDataFrame({'geometry': pols_coarse.values(), 'id': pols_coarse.keys()}) ## Create smooth polygons (manually): pols_smooth = {1: geometry.Polygon([geometry.Point(x) for x in [(x_min, y_min), (x_max, y_min), (x_max, y_max)]]), 2: geometry.Polygon([geometry.Point(x) for x in [(x_min, y_min), (x_min, y_max), (x_max, y_max)]])} df_pols_smooth = gpd.GeoDataFrame({'geometry': pols_smooth.values(), 'id': pols_smooth.keys()}) ## Plot fig, ax = plt.subplots(1, 2, figsize=(10, 4)) df_pols_coarse.plot(column='id', ax=ax[0]) df_pols_smooth.plot(column='id', ax=ax[1]) ax[0].set_title('Original polygons') ax[1].set_title('Smoothed polygons'); Update: Using the suggestion from Mountain below and this post, I think the problem could be broken down in the following steps: Find boundary edges between each pair of touching polygons (e.g., using this suggestion). Transform these into numpy arrays and smooth as per Mountain's bspline suggestion Reconstruct polygons using updated/smoothed edges. Also note that for single (shapely.geometry) polygons, they can be smoothed using: pol.simplify() using Douglas-Peucker algorithm. | You can do this with the topojson library. 
Sample script: import matplotlib.pyplot as plt import geopandas as gpd from shapely import geometry import topojson x_min, x_max, y_min, y_max = 0, 20, 0, 20 ## Create original (coarse) polygons: staircase_points = [[(ii, ii), (ii, ii + 1)] for ii in range(x_max)] staircase_points_flat = [ coord for double_coord in staircase_points for coord in double_coord ] + [(x_max, y_max)] list_points = { 1: staircase_points_flat + [(x_max, y_min)], 2: staircase_points_flat[1:-1] + [(x_min, y_max)], } pols_coarse = {} for ind_pol in [1, 2]: list_points[ind_pol] = [geometry.Point(x) for x in list_points[ind_pol]] pols_coarse[ind_pol] = geometry.Polygon(list_points[ind_pol]) df_pols_coarse = gpd.GeoDataFrame( {"geometry": pols_coarse.values(), "id": pols_coarse.keys()} ) ## Create smooth polygons: topo = topojson.Topology(df_pols_coarse) topo_smooth = topo.toposimplify(1) df_pols_smooth = topo_smooth.to_gdf() ## Plot fig, ax = plt.subplots(1, 2, figsize=(10, 4)) df_pols_coarse.plot(column="id", ax=ax[0]) df_pols_smooth.plot(column="id", ax=ax[1]) ax[0].set_title("Original polygons") ax[1].set_title("Smoothed polygons") plt.show() | 4 | 1 |
75,583,626 | 2023-2-27 | https://stackoverflow.com/questions/75583626/a-comprehensive-way-to-perform-step-by-step-debugging-of-python-code | I was wondering if there is a good and comprehensive way to debug Python code step by step so that I can have a better idea of all the variables involved, their dimensions and values? What can be done to do step-by-step debugging? | JupyterLab is the current generation of the Jupyter interface and has a debugger mode that allows you to step through code. See this post here and a related one here. Documentation is here. You can try it out without installing anything on your own system by clicking here to launch a temporary Jupyter session running on a remote computer via the MyBinder service, and then open a new notebook file from the launcher. You can step through the sections 'Debug code in notebook' and 'Explore the code state' there in the documentation. For %%debug, which comes from IPython (from which Jupyter inherits much), see here and other posts in that thread. | 6 | 5 |
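For a non-Jupyter workflow, a minimal sketch of step-by-step debugging with the standard library's pdb (editor's addition; the function here is made up for illustration):

def scale(values, factor):
    result = []
    for v in values:
        breakpoint()              # drops into pdb (Python 3.7+), like `import pdb; pdb.set_trace()`
        result.append(v * factor)
    return result

scale([1, 2, 3], factor=10)

# At the (Pdb) prompt:
#   p v, factor, result    print variables
#   n                      run the next line
#   s                      step into a function call
#   c                      continue to the next breakpoint
#   q                      quit the debugger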
75,575,346 | 2023-2-26 | https://stackoverflow.com/questions/75575346/how-can-i-configure-my-tools-to-ignore-or-prevent-updates-to-the-execution-count | I'm using the Jupyter extension (v2022.9.1303220346) in Visual Studio Code (v1.73.1). To reproduce this issue, make any modification to the notebook and check it into git. You'll observe that you get an extra difference for execution_count. For example (display from Git Gui): - "execution_count": 7, + "execution_count": 9, The execution count doesn't appear to be useful and is noise in the git history. Can Jupyter or VS Code be configured to stop updating this value or (better) ignore it altogether? | Can Jupyter or VS Code be configured to stop updating this value or (better) ignore it altogether? I'm not sure about VS Code, and I think the answer for VS Code config options might be no after reading some discussions in GitHub feature-request issue tickets for Jupyter notebooks, where the fact that they are feature-requests indicates to me that the answer also currently seems to be no, but also that there are plenty of approaches to tackling the problem: In jupyter/notebook: Suggestion: Separate file for notebook executed cell outputs. #5677 I think it would be nice to have a separate file (something like .ipynb.output) that links output to their cells in the .ipynb json file. This would make it significantly easier to exclude notebook outputs in source control systems like git. - jbursey Its not a bad idea. But if keeping cell output out of source control is your primary concern, the easiest solution is to just clear the outputs before committing. There are a few ways to do that: Use a commit hook as outlined in Jupyter docs. Use Jupyter's shortcut to "clear all cell output" Use nbconvert to clear the notebook outputs before committing. You could also just write your own shell script to clear outputs. I wrote one using jq to do that and it is fairly easy. Some folks also choose to just convert the notebook to python using nbconvert and then just commit that. If you search for "How to version control jupyter notebooks" you will see a bunch of posts on the topic. - gitjeff05 Alternatively, Jupytext could be helpful for your case. It allows you to save notebooks as code. Then you only need to commit the code to git, whilst you can ignore the notebooks for version control. Their paired notebooks avoid the need for automatically saving and converting the notebooks. - IvoMerchiers In jupyterlab/jupyterlab: Using a notebook & git creates too many diff #9444 It would be much simpler if we had an option to save only the input cells, not the output ones. And to reset the cell index (execution_count) to 0 without restarting the kernel. 
- sylvain-bougnoux I think that you can configure the underlying nbdiff to ignore outputs, see: https://nbdime.readthedocs.io/en/latest/config.html#configuring-ignores - krassowski In jupyterlab/jupyterlab-git: Cleaning Notebook cell outputs #392 Notebooks cell outputs can be a hindrance in Version Control while reviewing the diff of a commit to see what changed (either in a PR or historically) Some ideas on how we could enable users to deal with outputs in cell in jupyterlab-git Enable a Command Palette option to easily install a Git filter with nbstripout Prompt the user to remove outputs from cells if we detect that there are cell outputs during a git push Use the JupyterLab settings registry to let the user specify that all Notebook outputs must be cleaned on a git push - jaipreet-s With #700, it is now possible to add nbstripout (for example) when initializing a git repository. - fcollonval For your learning purposes / reference, I found this info by googling "github issues jupyter notebook put execution_count in separate file" and looking through the top search results and linked GitHub issues in their discussion threads. Someone in the issue ticket mentioned the extension "paired notebooks" which allows pairing a text notebook with a ipynb file, with the intention that the text notebook be used for version control. I have no affiliation with this extension and have not tried it. Just mentioning it in case you find it useful. | 7 | 4 |
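A dependency-free sketch in the spirit of the nbstripout approach quoted above (editor's addition — the script and file names are illustrative; real setups usually register nbstripout as a git filter or pre-commit hook instead):

import json
import sys

# Usage: python strip_outputs.py notebook.ipynb
path = sys.argv[1]

with open(path, encoding="utf-8") as f:
    nb = json.load(f)

for cell in nb.get("cells", []):
    if cell.get("cell_type") == "code":
        cell["execution_count"] = None   # the field that creates the diff noise
        cell["outputs"] = []

with open(path, "w", encoding="utf-8") as f:
    json.dump(nb, f, indent=1, ensure_ascii=False)
    f.write("\n")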
75,539,007 | 2023-2-22 | https://stackoverflow.com/questions/75539007/custom-validation-for-fastapis-query-parameter-using-pydatinc-causes-internal-s | My GET endpoint receives a query parameter that needs to meet the following criteria: be an int between 0 and 10 be even number 1. is straight forward using Query(gt=0, lt=10). However, it is not quiet clear to me how to extend Query to do extra custom validation such as 2.. The documentation ultimately leads to pydantic. But, my application runs into internal server error when the second validation 2. fails. Below is a minimal scoped example from fastapi import FastAPI, Depends, Query from pydantic import BaseModel, ValidationError, validator app = FastAPI() class CommonParams(BaseModel): n: int = Query(default=..., gt=0, lt=10) @validator('n') def validate(cls, v): if v%2 != 0: raise ValueError("Number is not even :( ") return v @app.get("/") async def root(common: CommonParams = Depends()): return {"n": common.n} Below are requests that work as expected and ones that break: # requsts that work as expected localhost:8000?n=-4 localhost:8000?n=-3 localhost:8000?n=2 localhost:8000?n=8 localhost:8000?n=99 # request that break server localhost:8000?n=1 localhost:8000?n=3 localhost:8000?n=5 | Option 1 Raise HTTPException directly instead of ValueError, as demonstrated in Option 1 of this answer. Example: from fastapi import FastAPI, Depends, Query, HTTPException from pydantic import BaseModel, validator app = FastAPI() class CommonParams(BaseModel): n: int = Query(default=..., gt=0, lt=10) @validator('n') def prevent_odd_numbers(cls, v): if v % 2 != 0: raise HTTPException(status_code=422, detail='Input number is not even') return v @app.get('/') async def root(common: CommonParams = Depends()): return {'n': common.n} Server Response (when the input number is not even, e.g., n = 1): # 422 Error: Unprocessable Entity { "detail": "Input number is not even" } Option 2 Use a custom exception handler, in order to handle ValueError exceptions, similar to this answer and this answer. Example: from fastapi import FastAPI, Request, Depends, Query, status from fastapi.responses import JSONResponse from fastapi.encoders import jsonable_encoder from pydantic import BaseModel, validator app = FastAPI() @app.exception_handler(ValueError) async def validation_exception_handler(request: Request, exc: ValueError): return JSONResponse( status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, content=jsonable_encoder({"detail": exc.errors(), # optionally, include the pydantic errors "custom msg": {"Your error message"}}), # optionally, return a custom message ) class CommonParams(BaseModel): n: int = Query(default=..., gt=0, lt=10) @validator('n') def prevent_odd_numbers(cls, v): if v % 2 != 0: raise ValueError('Input number is not even') return v @app.get('/') async def root(common: CommonParams = Depends()): return {'n': common.n} Server Response (when the input number is not even, e.g., n = 1): # 422 Error: Unprocessable Entity { "detail": [ { "loc": [ "n" ], "msg": "Input number is not even", "type": "value_error" } ], "custom msg": [ "Your error message" ] } Update Please note that in Pydantic V2, @validator has been deprecated and was replaced by @field_validator. Please have a look at this answer for more details and examples. | 6 | 4 |
75,548,444 | 2023-2-23 | https://stackoverflow.com/questions/75548444/polars-dataframe-drop-nans | I need to drop rows that have a nan value in any column. As for null values with drop_nulls() df.drop_nulls() but for nans. I have found that the method drop_nans exist for Series but not for DataFrames df['A'].drop_nans() Pandas code that I'm using: df = pd.DataFrame( { 'A': [0, 0, 0, 1,None, 1], 'B': [1, 2, 2, 1,1, np.nan] } ) df.dropna() | Another definition is: to keep rows where all values are not NaN For that, we can use: .is_not_nan() to test for "not nan" pl.col(pl.Float32, pl.Float64) to select only float columns .all_horizontal() to compute a row-wise True/False comparison DataFrame.filter to keep only the "True" rows df = pl.from_repr(""" βββββββ¬ββββββ¬ββββββ β A β B β C β β --- β --- β --- β β f64 β f64 β str β βββββββͺββββββͺββββββ‘ β 0.0 β 1.0 β a β β 0.0 β 2.0 β b β β 0.0 β 2.0 β c β β 1.0 β 1.0 β d β β NaN β 1.0 β e β β 1.0 β NaN β g β βββββββ΄ββββββ΄ββββββ """) df.filter( pl.all_horizontal(pl.col(pl.Float32, pl.Float64).is_not_nan()) ) shape: (4, 3) βββββββ¬ββββββ¬ββββββ β A β B β C β β --- β --- β --- β β f64 β f64 β str β βββββββͺββββββͺββββββ‘ β 0.0 β 1.0 β a β β 0.0 β 2.0 β b β β 0.0 β 2.0 β c β β 1.0 β 1.0 β d β βββββββ΄ββββββ΄ββββββ polars.selectors has also since been added which provides cs.float() df.filter( pl.all_horizontal(cs.float().is_not_nan()) ) | 7 | 6 |
75,559,239 | 2023-2-24 | https://stackoverflow.com/questions/75559239/how-do-i-write-a-polars-dataframe-to-an-external-database | I have a Polars dataframe that I want to write to an external database (SQLite). How can I do it? In Pandas you have to_sql() but I couldn't find any equivalent in Polars. | You can use the DataFrame.write_database method. | 4 | 3 |
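A hedged usage sketch (editor's addition, not from the original answer): the exact parameter names and accepted option values of write_database differ between Polars versions, and a SQLAlchemy-compatible driver must be installed, so treat this as an outline rather than a definitive call:

import polars as pl

df = pl.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]})

df.write_database(
    table_name="my_table",                    # target table in the external database
    connection="sqlite:///my_database.db",    # SQLAlchemy-style URI
    if_table_exists="replace",                # option name/values vary across versions
)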
75,563,256 | 2023-2-25 | https://stackoverflow.com/questions/75563256/after-the-mambaforge-install-completes-how-do-i-get-to-be-able-to-run-mamba-com | I downloaded the mambaforge install file for Windows, ran it, and it successfully completed. Mamba has a quickstart guide with CLI commands here: https://mamba.readthedocs.io/en/latest/user_guide/mamba.html But I can't figure out WHERE to enter the commands (e.g. mamba create -n envname). Is there supposed to be a Start Menu shortcut for Mambaforge or something similar? I checked the option to create Start Menu shortcuts during the install, but I don't see any even though the install completed without errors. I tried running mamba create -n envname from the cmd prompt and it returns: 'mamba' is not recognized as an internal or external command, operable program or batch file. Clearly I missed a step somewhere but I can't for the life of me figure out what. What I tried already: I tried running the Mambaforge-Windows-x86_64.exe and checked the option to create Start Menu shortcuts. The install completed successfully. I found the mambaforge directory (which has a _conda.exe and python.exe among others) I was expecting it to create a "mambaforge" start menu shortcut. As far as I can tell no start menu shortcuts were created. The Windows installation instructions are literally a single line: "Download the installer and double click it on the file browser." https://github.com/conda-forge/miniforge | I was expecting it to create a "mambaforge" start menu shortcut. As far as I can tell no start menu shortcuts were created. The start menu shortcut is called "miniforge prompt" and behaves as you expect from Anaconda prompt. Mambaforge is based on miniforge, and the prompt stems from that. | 11 | 7 |
75,529,492 | 2023-2-22 | https://stackoverflow.com/questions/75529492/importerror-cannot-import-name-ordereddict-from-typing | C:\Users\jpala\.conda\envs\tf\python.exe C:\Users\jpala\Documents\ML\train.py Traceback (most recent call last): File "C:\Users\jpala\Documents\ML\train.py", line 5, in <module> import tensorflow as tf File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\__init__.py", line 37, in <module> from tensorflow.python.tools import module_util as _module_util File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\__init__.py", line 42, in <module> from tensorflow.python import data File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\data\__init__.py", line 21, in <module> from tensorflow.python.data import experimental File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\data\experimental\__init__.py", line 96, in <module> from tensorflow.python.data.experimental import service File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py", line 419, in <module> from tensorflow.python.data.experimental.ops.data_service_ops import distribute File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py", line 25, in <module> from tensorflow.python.data.ops import dataset_ops File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 29, in <module> from tensorflow.python.data.ops import iterator_ops File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 34, in <module> from tensorflow.python.training.saver import BaseSaverBuilder File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\training\saver.py", line 32, in <module> from tensorflow.python.checkpoint import checkpoint_management File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\checkpoint\__init__.py", line 3, in <module> from tensorflow.python.checkpoint import checkpoint_view File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\checkpoint\checkpoint_view.py", line 19, in <module> from tensorflow.python.checkpoint import trackable_view File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\checkpoint\trackable_view.py", line 20, in <module> from tensorflow.python.trackable import converter File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\trackable\converter.py", line 18, in <module> from tensorflow.python.eager.polymorphic_function import saved_model_utils File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\eager\polymorphic_function\saved_model_utils.py", line 36, in <module> from tensorflow.python.trackable import resource File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\trackable\resource.py", line 22, in <module> from tensorflow.python.eager import def_function File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\eager\def_function.py", line 20, in <module> from tensorflow.python.eager.polymorphic_function.polymorphic_function import set_dynamic_variable_creation File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\eager\polymorphic_function\polymorphic_function.py", line 76, in <module> from tensorflow.python.eager.polymorphic_function import function_spec as function_spec_lib File 
"C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\python\eager\polymorphic_function\function_spec.py", line 25, in <module> from tensorflow.core.function.polymorphism import function_type as function_type_lib File "C:\Users\jpala\.conda\envs\tf\lib\site-packages\tensorflow\core\function\polymorphism\function_type.py", line 19, in <module> from typing import Any, Callable, Dict, Mapping, Optional, Sequence, Tuple, OrderedDict ImportError: cannot import name 'OrderedDict' from 'typing' (C:\Users\jpala\.conda\envs\tf\lib\typing.py) Process finished with exit code 1 I got this error while trying to install and run tensorflow for gpu following this tutorial https://www.youtube.com/watch?v=hHWkvEcDBO0 I have python 3.7.4 What am I doing wrong here, is it a version issue? | According to [Python.docs]: class typing.OrderedDict(collections.OrderedDict, MutableMapping[KT, VT]) (emphasis is mine): New in version 3.7.2. Example: (base) [cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q075529492]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> conda env list # conda environments: # F:\Install\pc032\Intel\OneAPI\Version\intelpython\python3.7 F:\Install\pc032\Intel\OneAPI\Version\intelpython\python3.7\envs\2021.1.1 base * f:\Install\pc064\Anaconda\Anaconda\Version py_pc032_03_06_02 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc032_03_06_02 py_pc064_03_06_02 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_06_02 py_pc064_03_07_04 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_07_04 py_pc064_03_08_08 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_08_08 py_pc064_03_10_00 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_10_00 py_pc064_03_10_06 f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_03_10_06 [prompt]> [prompt]> conda activate py_pc064_03_07_04 (py_pc064_03_07_04) [prompt]> (py_pc064_03_07_04) [prompt]> python -c "import sys, typing;print(\"{:}\n{:}\nDone.\n\".format(sys.version, \"OrderedDict\" in dir(typing)")) 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] True Done. There it is: Python 3.7.4 from Anaconda, whose typing module has OrderedDict. The only logical conclusion (well, excluding a broken environment with an messed up typing version) one could draw is that you're actually running Python < v3.7.2. The fix is running Python >= v3.7.2. Might want to also check: [SO]: PyCharm doesn't recognize installed module (@CristiFati's answer) [SO]: On Windows, running "import tensorflow" generates No module named "_pywrap_tensorflow" error (@CristiFati's answer) | 4 | 4 |
75,552,364 | 2023-2-24 | https://stackoverflow.com/questions/75552364/callback-to-subset-geometry-data-dash-plotly | I'm hoping to include a dropdown bar with a callback function that allows the user to display specific points within smaller areas. Initially, I want to use all point geometry data as a default. I'm then aiming to include a dropdown bar and callback function that returns smaller subsets from this main df. This is accomplished by merging the point data within a specific polygon area. Using below, the default df is labelled gdf_all. This contains point data across a large region. The smaller polygon files are subset from gdf_poly. These include African and European continents. These are used within a function to only return point data if it intersects within the polygon shape. I've hard-coded the outputs below. 1) uses gdf_all and 2) uses a subset from African contintent. Ideally, the dropdown bar will be used to input the desired point data to be visualised within the figures. import geopandas as gpd import plotly.express as px import dash from dash import dcc from dash import html import dash_bootstrap_components as dbc # point data gdf_all = gpd.read_file(gpd.datasets.get_path("naturalearth_cities")) # polygon data gdf_poly = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) gdf_poly = gdf_poly.drop('name', axis = 1) gdf_all['LON'] = gdf_all['geometry'].x gdf_all['LAT'] = gdf_all['geometry'].y # subset African continent Afr_gdf_area = gdf_poly[gdf_poly['continent'] == 'Africa'].reset_index(drop = True) # subset European continent Eur_gdf_area = gdf_poly[gdf_poly['continent'] == 'Europe'].reset_index(drop = True) # function to merge point data within selected polygon area def merge_withinboundary(gdf1, gdf2): # spatial join data within larger boundary gdf_out = gpd.sjoin(gdf1, gdf2, predicate = 'within', how = 'left').reset_index(drop = True) return gdf_out gdf_Africa = merge_withinboundary(gdf_all, Afr_gdf_area) gdf_Europe = merge_withinboundary(gdf_all, Eur_gdf_area) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) # function to return selected df for plotting def update_dataset(df): if df == 'gdf_Africa': gdf = gdf_Africa elif df == 'gdf_Europe': gdf = gdf_Europe else: gdf = gdf_all return gdf nav_bar = html.Div([ html.P("area-dropdown:"), dcc.Dropdown( id='data', value='data', options=[{'value': 'gdf_all', 'label': 'gdf_all'}, {'value': 'gdf_Africa', 'label': 'gdf_Africa'}, {'value': 'gdf_Europe', 'label': 'gdf_Europe'} ], clearable=False ), ]) # output 1 df = gdf_all # output 2 #df = gdf_Africa scatter = px.scatter_mapbox(data_frame = df, lat = 'LAT', lon = 'LON', zoom = 2, mapbox_style = 'carto-positron', ) count = df['name'].value_counts() bar = px.bar(x = count.index, y = count.values, color = count.index, ) app.layout = dbc.Container([ dbc.Row([ dbc.Col(html.Div(nav_bar), width=2), dbc.Col([ dbc.Row([ dbc.Col(dcc.Graph(figure = scatter)) ]), dbc.Row([ dbc.Col(dcc.Graph(figure = bar)) ]), ], width=5), dbc.Col([ ], width=5), ]) ], fluid=True) if __name__ == '__main__': app.run_server(debug=True, port = 8051) Output 1: Output 2: | I think the main task is converting update_dataset into a callback function that takes the dropdown selection as an input, and outputs both the newly updated scatter mapbox and bar chart. In order to do this, you need to provide an id argument in dcc.Graph for both of those figures. 
Then inside the callback, you can recreate both the scatter mapbox and bar chart depending on the dropdown selection. Also I am not sure about your use case, but for the purpose of this example, I modifed your merge_withinboundary function to perform an inner join instead of a left join (with the sample data you've provided, if you do a left join, you will always end up with gdf_all if that is the first argument to merge_withinboundary β because Afr_gdf_area and Eur_gdf_area are both completely contained inside gdf_all). However, I will leave that decision up to you β perhaps a left join is what you want for your actual data set. For the purpose of this demonstration, I also set zoom = 0 as the default, so when the user selects gdf_all from the dropdown, all of the points are visible. import geopandas as gpd import plotly.express as px import dash from dash import dcc, html, Input, Output import dash_bootstrap_components as dbc # point data gdf_all = gpd.read_file(gpd.datasets.get_path("naturalearth_cities")) # polygon data gdf_poly = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) gdf_poly = gdf_poly.drop('name', axis = 1) gdf_all['LON'] = gdf_all['geometry'].x gdf_all['LAT'] = gdf_all['geometry'].y # subset African continent Afr_gdf_area = gdf_poly[gdf_poly['continent'] == 'Africa'].reset_index(drop = True) # subset European continent Eur_gdf_area = gdf_poly[gdf_poly['continent'] == 'Europe'].reset_index(drop = True) # function to merge point data within selected polygon area def merge_withinboundary(gdf1, gdf2): # spatial join data within larger boundary gdf_out = gpd.sjoin(gdf1, gdf2, predicate = 'within', how = 'inner').reset_index(drop = True) return gdf_out gdf_Africa = merge_withinboundary(gdf_all, Afr_gdf_area) gdf_Europe = merge_withinboundary(gdf_all, Eur_gdf_area) external_stylesheets = [dbc.themes.SPACELAB, dbc.icons.BOOTSTRAP] app = dash.Dash(__name__, external_stylesheets = external_stylesheets) # function to return selected df for plotting @app.callback(Output('scatter-mapbox', 'figure'), Output('bar', 'figure'), Input('data', 'value'), prevent_initial_call=True) # function to return df using smaller areas def update_dataset(dropdown_selection): if dropdown_selection == 'gdf_Africa': gdf = gdf_Africa zoom = 2 elif dropdown_selection == 'gdf_Europe': gdf = gdf_Europe zoom = 2 else: gdf = gdf_all zoom = 0 scatter_subset = px.scatter_mapbox(data_frame = gdf, lat = 'LAT', lon = 'LON', zoom = zoom, mapbox_style = 'carto-positron', ) count = gdf['name'].value_counts() bar_subset = px.bar(x = count.index, y = count.values, color = count.index, ) return scatter_subset, bar_subset nav_bar = html.Div([ html.P("area-dropdown:"), dcc.Dropdown( id='data', value='data', options=[{'value': 'gdf_all', 'label': 'gdf_all'}, {'value': 'gdf_Africa', 'label': 'gdf_Africa'}, {'value': 'gdf_Europe', 'label': 'gdf_Europe'} ], clearable=False ), ]) # output 1 df = gdf_all # output 2 #df = gdf_Africa scatter = px.scatter_mapbox(data_frame = df, lat = 'LAT', lon = 'LON', zoom = 0, mapbox_style = 'carto-positron', ) count = df['name'].value_counts() bar = px.bar(x = count.index, y = count.values, color = count.index, ) app.layout = dbc.Container([ dbc.Row([ dbc.Col(html.Div(nav_bar), width=2), dbc.Col([ dbc.Row([ dbc.Col(dcc.Graph(figure = scatter, id = 'scatter-mapbox')) ]), dbc.Row([ dbc.Col(dcc.Graph(figure = bar, id = 'bar')) ]), ], width=5), dbc.Col([ ], width=5), ]) ], fluid=True) if __name__ == '__main__': app.run_server(debug=True, port = 8051) | 6 | 2 |
75,596,471 | 2023-2-28 | https://stackoverflow.com/questions/75596471/error-while-working-with-instapy-modulenotfounderror-no-module-named-clarifa | from instapy import InstaPy session = InstaPy(username='name', password='password') session.login() (I use VSC) my code breaks at the first line with error: from clarifai.rest import ClarifaiApp, Workflow ModuleNotFoundError: No module named 'clarifai.rest' I tried reinstalling instapy, that specific module, but nothing changed. Edit: Yes, I have tried reinstalling clarifai and it didn't help. | The solution is here I tested Franchen Bao's comment and it worked for me. I had this error after the emoji error. Looks like this project is a little unstable and can only handle certain versions of each package, like Franchen Bao said. pip uninstall clarifai pip install clarifai==2.6.2 | 4 | 7 |
75,533,746 | 2023-2-22 | https://stackoverflow.com/questions/75533746/custom-component-in-streamlit-is-having-trouble-loading | I'm testing out a custom authentication component for my Streamlit app. However, when using the component in production, it fails to render for some reason. I've managed to get i to work in dev mode by forking the code and adding it to my Streamlit project - but I still can't make it run in production. Upon digging a bit, it seems to me that the declaration of the component fails since the build path for some reason doens't work. It looks like the assertion fails since the module is none as per the following traceback venv\lib\site-packages\streamlit\components\v1\components.py:284, in declare_component(name, path, url) 281 # Get the caller's module name. `__name__` gives us the module's 282 # fully-qualified name, which includes its package. 283 module = inspect.getmodule(caller_frame) --> 284 assert module is not None ... 288 # user executed `python my_component.py`), then this name will be 289 # "__main__" instead of the actual package name. In this case, we use 290 # the main module's filename, sans `.py` extension, as the component name. AssertionError: To obtain the build_path I use the following: root_dir = os.path.dirname(os.path.abspath(__file__)) build_dir = os.path.join(root_dir, "frontend" , "dist") This returns: 'c:\\Users\\initials\\xxx\\Desktop\\Absence importer\\absense_importer\\frontend\\dist' The component is declared like this: _USE_WEB_DEV_SERVER = os.getenv("USE_WEB_DEV_SERVER", False) _WEB_DEV_SERVER_URL = os.getenv("WEB_DEV_SERVER_URL", "http://localhost:5173") COMPONENT_NAME = "msal_authentication" root_dir = os.path.dirname(os.path.abspath(__file__)) build_dir = os.path.join(root_dir, "frontend" , "dist") if _USE_WEB_DEV_SERVER: _component_func = components.declare_component(name=COMPONENT_NAME, url=_WEB_DEV_SERVER_URL) else: _component_func = components.declare_component(name=COMPONENT_NAME, path=build_dir) I've also tried to wrap everything inside a Linux Docker container, but to no avail, unfortunately. Can anyone spot my error? I'm on Python 3.10.7 and using Streamlit 1.18.1. EDIT: Figured out my browswer has issues reading the compiled frontend code, due to a mismatch in MIME types. I'm not sure, what's wrong. By either adding the MIME types manually like import mimetypes mimetypes.add_type('application/javascript', '.js') mimetypes.add_type('text/css', '.css') Or using the sample-library from the author worked. | I am the author of said custom Streamlit Component, and I have made a small sample project that utilizes the component. I just ran the project in Docker, and it works for me. I have yet to discover what can cause the troubles on Windows. Essentially, the sample project is just a barebone Streamlit dashboard consisting of the following script. import streamlit as st from msal_streamlit_authentication import msal_authentication token_response = msal_authentication( auth={ "clientId": "aaaaaaa-bbbb-cccc-dddd-eeeeeeeeeee", "authority": "https://login.microsoftonline.com/xxxxxxx-bbbb-cccc-dddd-eeeeeeeeeee", "redirectUri": "/", "postLogoutRedirectUri": "/" }, cache={ "cacheLocation": "sessionStorage", "storeAuthStateInCookie": False }, login_request={ "scopes": ["yyyyyy-bbbb-cccc-dddd-eeeeeeeeeee/.default"] }, key=1) st.write("Received", token_response) You should insert clientId, tenant/authority and scope values in accordance with your situation. Note that my test is based on the Authorization Code (PKCE) flow. | 5 | 1 |
75,582,039 | 2023-2-27 | https://stackoverflow.com/questions/75582039/how-to-optimize-a-function-involving-max | I am having problems minimizing a simple if slightly idiosyncratic function. I have scipy.optimize.minimize but I can't get consistent results. Here is the full code: from math import log, exp, sqrt from bisect import bisect_left from scipy.optimize import minimize from scipy.optimize import Bounds import numpy as np def new_inflection(x0, x1): return log((exp(x0)+exp(x1) + sqrt(exp(2*x0)+6*exp(x0+x1)+exp(2*x1)))/2) def make_pairs(points): new_points = [] for i in range(len(points)): for j in range(i+1, len(points)): new_point = new_inflection(points[i], points[j]) new_points.append(new_point) return new_points def find_closest_number(numbers, query): index = bisect_left(numbers, query) if index == 0: return numbers[0] if index == len(numbers): return numbers[-1] before = numbers[index - 1] after = numbers[index] if after - query < query - before: return after else: return before def max_distance(target_points): pair_points = make_pairs(target_points) target_points = sorted(target_points) dists = [] return max(abs(point - find_closest_number(target_points, point)) for point in pair_points) num_points = 20 points = np.random.rand(num_points)*10 print("Starting score:", max_distance(points)) bounds = Bounds([0]*num_points, [num_points] * num_points) res = minimize(max_distance, points, bounds = bounds, options={'maxiter': 100}, method="SLSQP") print([round(x,2) for x in res.x]) print(res) Every time I run it I get quite different results. This is despite the output saying Optimization terminated successfully. An example output: message: Optimization terminated successfully success: True status: 0 fun: 0.4277378933292031 x: [ 5.710e+00 1.963e+00 ... 1.479e+00 6.775e+00] nit: 15 jac: [ 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00] nfev: 364 njev: 15 Sometimes I get a result as low as 0.40 and other times as high as 0.51. Is there any way to optimize this function properly in Python? | The problem here is that you are exploring immense non convex search space by trying to optimize 20 variables at the same time, so usual optimization methods such as gradient descent related etc. will potentially be trapped in a local minima. In your case, as you are starting from a random coordinates each time, you end in a different local minima each time. If the problem has no analytical solution, there is no optimizer (at least that I know) that can solve this kind of problem and guarantee you that you are in the global minimum, but that doesn't mean that we can get a very good approximations. Approach One Try different types of algorithms more suited for non convex high dimension space optimization, within scipy you have a global optimizer function: from scipy import optimize optimize.shgo(eggholder, bounds) In my case that was very slow, but maybe it can help you. Update: The global optimizer basinhopping seems to give good results faster: from scipy.optimize import basinhopping res = basinhopping(max_distance, points, minimizer_kwargs={"method": "SLSQP", "bounds": bounds}, niter=100) print([round(x,2) for x in res.x]) print(res) With this optimizer you can also reach a fitness of 3.9 consistently. Approach Two I would give a try to genetic algorithms using pygad, here is my trial and it reaches 3.9 fitness and consistently bellow 4.1 (though I change the sign for the optimizer). 
import pygad def fitness_func(solution, solution_idx): return -max_distance(solution) fitness_function = fitness_func num_generations = 500 num_parents_mating = 5 sol_per_pop = 100 num_genes = len(points) init_range_low = 0 init_range_high = 20 parent_selection_type = "sss" keep_parents = 5 crossover_type = "single_point" mutation_type = "random" mutation_percent_genes = 10 ga_instance = pygad.GA(num_generations=num_generations, num_parents_mating=num_parents_mating, fitness_func=fitness_function, sol_per_pop=sol_per_pop, num_genes=num_genes, gene_space = {'low': 0, 'high': 20}, init_range_low=init_range_low, init_range_high=init_range_high, parent_selection_type=parent_selection_type, keep_parents=keep_parents, crossover_type=crossover_type, mutation_type=mutation_type, mutation_percent_genes=mutation_percent_genes) ga_instance.run() To show the solution: solution, solution_fitness, solution_idx = ga_instance.best_solution() print("Parameters of the best solution : {solution}".format(solution=solution)) print("Fitness value of the best solution = {solution_fitness}".format(solution_fitness=solution_fitness)) # plots the optimization process ga_instance.plot_fitness(title="PyGAD with Adaptive Mutation") Approach Three Probably the worst idea but also maybe is everything you need, Just iterate many times the algorithm you already have, with different initial conditions saving the best solution. Update As the OP commented, somewhat similar to this is what the algorithm basinhopping is doing (with a bit more complexity) but it archives very good results, see approach one. Conclusion There is very little you can do for having consistent results in the optimization algorithm in a problem like this one (if not fixing the seed, which I strongly discourage), but at least you can choose an algorithm that is more suited for the task maximizing the search space, so you get closer and closer to the global minimum. On the other hand you should notice that there may be multiple/infinite solutions with the same fitness, but that can look very different, specially if there are hidden symmetries in the problem, this problem for instance seems invariant under permutations, so it would make sense to sort the solution list. Also in this case little changes in one element of the 20 numbers doesn't necessarily change the fitness. | 4 | 6 |
75,590,142 | 2023-2-28 | https://stackoverflow.com/questions/75590142/decorators-to-configure-sentry-error-and-trace-rates | I am using Sentry and sentry_sdk to monitor errors and traces in my Python application. I want to configure the error and trace rates for different routes in my FastAPI API. To do this, I want to write two decorators called sentry_error_rate and sentry_trace_rate that will allow me to set the sample rates for errors and traces, respectively. The sentry_error_rate decorator should take a single argument errors_sample_rate (a float between 0 and 1) and apply it to a specific route. The sentry_trace_rate decorator should take a single argument traces_sample_rate (also a float between 0 and 1) and apply it to a specific route. def sentry_trace_rate(traces_sample_rate: float = 0.0) -> callable: """ Decorator to set the traces_sample_rate for a specific route. This is useful for routes that are called very frequently, but we want to sample them to reduce the amount of data we send to Sentry. Args: traces_sample_rate (float): The sample rate to use for this route. """ def decorator(func): @wraps(func) async def wrapper(*args, **kwargs): # Do something here ? return await func(*args, **kwargs) return wrapper return decorator def sentry_error_rate(errors_sample_rate: float = 0.0) -> callable: """ Decorator to set the errors_sample_rate for a specific route. This is useful for routes that are called very frequently, but we want to sample them to reduce the amount of data we send to Sentry. Args: errors_sample_rate (float): The sample rate to use for this route. """ def decorator(func): @wraps(func) async def wrapper(*args, **kwargs): # Do something here ? return await func(*args, **kwargs) return wrapper return decorator Does someone have an idea if this is possible and how it could be done ? | I finally managed to do it using a registry mechanism. Each route with a decorator are registred in a dictionary with their trace/error rate. I then used a trace_sampler/before_send function as indicated here: Setting a sampling function Filtering error events Here's my sentry_wrapper.py: import asyncio import random from functools import wraps from typing import Callable, Union from fastapi import APIRouter _route_traces_entrypoints = {} _route_errors_entrypoints = {} _fn_traces_entrypoints = {} _fn_errors_entrypoints = {} _fn_to_route_entrypoints = {} def sentry_trace_rate(trace_sample_rate: float = 0.0) -> Callable: """Decorator to set the sentry trace rate for a specific endpoint. This is useful for endpoints that are called very frequently, and we don't want to report all traces. Args: trace_sample_rate (float): The rate to sample traces. 0.0 to disable traces. """ def decorator(fn: Callable) -> Callable: # Assert there is not twice function with the same nam if fn.__name__ in _fn_traces_entrypoints: raise ValueError(f"Two function have the same name: {fn.__name__} | {fn.__file__}") # Add fn entrypoint _fn_traces_entrypoints[fn.__name__] = trace_sample_rate # Check for coroutines and return the right wrapper if asyncio.iscoroutinefunction(fn): @wraps(fn) async def wrapper(*args, **kwargs) -> Callable: return await fn(*args, **kwargs) return wrapper else: @wraps(fn) def wrapper(*args, **kwargs) -> Callable: return fn(*args, **kwargs) return wrapper return decorator def sentry_error_rate(error_sample_rate: float = 0.0) -> Callable: """Decorator to set the sentry error rate for a specific endpoint. 
This is useful for endpoints that are called very frequently, and we don't want to report all errors. Args: error_sample_rate (float): The rate to sample errors. 0.0 to disable errors. """ def decorator(fn: Callable) -> Callable: # Assert there is not twice function with the same nam if fn.__name__ in _fn_errors_entrypoints: raise ValueError(f"Two function have the same name: {fn.__name__} | {fn.__file__}") # Add fn entrypoint _fn_errors_entrypoints[fn.__name__] = error_sample_rate # Check for coroutines and return the right wrapper if asyncio.iscoroutinefunction(fn): @wraps(fn) async def wrapper(*args, **kwargs) -> Callable: return await fn(*args, **kwargs) return wrapper else: @wraps(fn) def wrapper(*args, **kwargs) -> Callable: return fn(*args, **kwargs) return wrapper return decorator def register_traces_disabler(router: APIRouter) -> None: """Register all the entrypoints for the traces disabler Args: router (APIRouter): The router to register """ for route in router.routes: if route.name in _fn_traces_entrypoints: _route_traces_entrypoints[route.path] = _fn_traces_entrypoints[route.name] def register_errors_disabler(router: APIRouter) -> None: """Register all the entrypoints for the errors disabler Args: router (APIRouter): The router to register """ for route in router.routes: if route.name in _fn_errors_entrypoints: _route_errors_entrypoints[route.path] = _fn_errors_entrypoints[route.name] class TracesSampler: """Class to sample traces for sentry Args: default_traces_sample_rate (float, optional): The default sample rate for traces. Defaults to 1.0. """ def __init__(self, default_traces_sample_rate: float = 1.0) -> None: self.default_traces_sample_rate = default_traces_sample_rate def __call__(self, sampling_context) -> float: return _route_traces_entrypoints.get(sampling_context["asgi_scope"]["path"], self.default_traces_sample_rate) class BeforeSend: """Class to sample event before sending them to sentry Args: default_errors_sample_rate (float, optional): The default sample rate for errors. Defaults to 1.0. """ def __init__(self, default_errors_sample_rate: float = 1.0) -> None: self.default_errors_sample_rate = default_errors_sample_rate def __call__(self, event: dict, hint: dict) -> Union[dict, None]: # Get the sample rate for this route, or use the default if it's not defined sample_rate = _route_errors_entrypoints.get(event["transaction"], self.default_errors_sample_rate) # Generate a random number between 0 and 1, and discard the event if it's greater than the sample rate if random.random() > sample_rate: return None # Return the event if it should be captured return event I have then ro register some routes: @router.get("/route") @sentry_wrapper.sentry_trace_rate(trace_sample_rate=0.5) # limit traces to 50% @sentry_wrapper.sentry_error_rate(error_sample_rate=0.25) # limit error to 25% def route_fn(): pass And don't forget to register each route at the end of the file: from app.services.sentry_wrapper import register_errors_disabler, register_traces_disabler register_traces_disabler(router) register_errors_disabler(router) | 3 | 2 |
75,590,032 | 2023-2-28 | https://stackoverflow.com/questions/75590032/how-do-i-create-a-model-instance-using-raw-sql-in-async-sqlalchemy | I have asyncio sqlalchemy code: import asyncio from sqlalchemy.orm import sessionmaker, declarative_base from sqlalchemy import text, Column, Integer, String from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession async_engine = create_async_engine(f'mysql+aiomysql://root:[email protected]:3306/spy-bot') AsyncSession = sessionmaker(async_engine, class_=AsyncSession, expire_on_commit=False) Base = declarative_base() class User(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) username = Column(String(255)) async def main(): async with AsyncSession() as session: stmt = text('SELECT * FROM `users` LIMIT 1') result = await session.execute(stmt) user = result.one() print(type(user), user) asyncio.run(main()) how do I make session query return instances on User class while still using raw sql? On sync version this would look like result = session.query(User).from_statement(text(query)) | You would do it in the same way, albeit using 2.0-style syntax as the legacy Query class is not supported in asyncio: result = await session.scalars(select(User).from_statement(text_object)) The docs at Getting ORM Results from Textual Statements apply. The complete function might look like this: import sqlalchemy as sa ... async def main(): async with AsyncSession() as session: stmt = text('SELECT * FROM `users` LIMIT 1') result = await session.scalars(sa.select(User).from_statement(stmt)) user = result.one() print(type(user), user) await async_engine.dispose() | 3 | 5 |
75,596,780 | 2023-2-28 | https://stackoverflow.com/questions/75596780/error-when-trying-to-display-colorbar-using-matplotlib-library-python-3-9 | I'm currently learning to use the librosa library and when trying to display a colorbar on an associated spectrogram, I get an inexplicable error. I'm not that familiar with matplotlib I've searched everywhere for a solution and I can't help but feel I'm missing something here. The following code: n_fft = 2048 #number of samples (window of performing an FFT) hop_length = 512 #amount shifting each fourier transform to the right stft = librosa.core.stft(signal, hop_length=hop_length, n_fft=n_fft) spectogram = np.abs(stft) log_spectogram = librosa.amplitude_to_db(spectogram) librosa.display.specshow(log_spectogram, sr=sr, hop_length=hop_length) plt.xlabel("Time") plt.ylabel("Frequency") plt.colorbar() plt.show() causes an AttributeError: AttributeError: module 'matplotlib' has no attribute 'axes'. Did you mean: 'axis'? However, if I remove the line plt.colorbar() I get the following result, which is OK, but I really need the colorbar. SpectrogramNoColorBar I need this: SpectrogramWithColorBar I tried using the object oriented interface for displaying the spectrogram as in the librosa documentation : https://librosa.org/doc/main/auto_examples/plot_display.html, but to no success. Please help me. EDIT: Here is the full code of my script: import librosa, librosa.display import matplotlib.pyplot as plt import numpy as np file = "Audio_ML\\blues_sample.wav" #waveform #loading the audio file with sample rate=22050(fine for audio data) signal, sr = librosa.load(file, sr=22050) # -> signal(numpy array) containing sr*T -> 22050 * 96 # librosa.display.waveshow(signal, sr=sr) # plt.xlabel("Time") # plt.ylabel("Amplitude") # plt.show() #FTT -> spectrum (FAST FOURIER TRANSFORM TO GO FROM TIME DOMAIN TO FREQUENCY DOMAIN) fft = np.fft.fft(signal) magnitude = np.abs(fft) #magnitudes of each frequency frequency = np.linspace(0, sr, len(magnitude)) left_frequency = frequency[:int(len(frequency)/2)] left_magnitude = magnitude[:int(len(frequency)/2)] # plt.plot(left_frequency, left_magnitude) # plt.xlabel("Frequency") # plt.ylabel("Magnitude") # plt.show() #STFT -> spectogram (SHORT TIME FOURIER TRANSFORM) n_fft = 2048 #number of samples (window of performing an FFT) hop_length = 512 #amount shifting each fourier transform to the right stft = librosa.core.stft(signal, hop_length=hop_length, n_fft=n_fft) spectogram = np.abs(stft) log_spectogram = librosa.amplitude_to_db(spectogram) librosa.display.specshow(log_spectogram, sr=sr, hop_length=hop_length) plt.xlabel("Time") plt.ylabel("Frequency") plt.colorbar() plt.show() #MFCCs MFCCs = librosa.feature.mfcc(y=signal, n_fft=n_fft, hop_length=hop_length, n_mfcc=13) I am using matplotlib 3.7 and librosa 0.10.0 | It seems the problem was to do with the most recent version of matplotlib. I managed to fix the problem by downgrading matplotlib 3.7.0 to matplotlib 3.6.0 using pip install matplotlib==3.6.0 I got the desired colorbar after doing this. Thank you for trying to help! | 3 | 3 |
75,598,282 | 2023-2-28 | https://stackoverflow.com/questions/75598282/why-is-matplotlib-cutting-off-my-very-specific-axis-label | Here's the bottom line: I need to make publication-quality plots with Greek letters and subscripts in axis labels. Yet, in some rather specific cases, the label gets cut off near the tick labels. For example, I really need a label of $\alpha_-$ for some plots, but the bottom of the $\alpha$ gets cut off. Interestingly, this is not a problem for $\alpha_+$ (or many other labels). On one hand, this seems like a display issue when using default font sizes -- the label is cut off when looking at output from Matplotlib, but looks ok after plt.savefig(). On the other hand, I need nice large font sizes for publication, and apparently the issue persists after doing plt.savefig() with large font size (say 22 pt). I am aware of some previous posts that may seem related (e.g. overlaps, savefig cutoff, second axis), but they aren't quite what I'm asking about here. A very short MRE shows what I'm getting at (Matplotlib version 3.5.1): import matplotlib.pyplot as plt plt.rc("axes", labelsize=22) plt.plot([0,1],[0,1]) plt.ylabel(r"$\alpha_-$") plt.show() A few things I've tried: Include plt.tight_layout() to adjust things automatically. Not helpful in this case. Increase padding, e.g. plt.rc('axes', labelpad=10). This simply puts a large space between the deformed label and tick labels / axis. (Plus, this affects any x-axis label as well.) Insert a new line, e.g. plt.ylabel(r"$\alpha_-$" + "\n"). Similarly, just results in some whitespace without fixing the label. I also noticed an odd behavior from a typo version of the last point: keeping the newline '\n' with the raw text block, plt.ylabel(r"$\alpha_-$\n"), shows the label as expected. But, obviously, I don't want some '\n' in my label. I realize I could possibly just save that figure and crop out the unwanted '\n' -- that would be a forced workaround and certainly not ideal. This is clearly a very specific question -- I cannot reproduce the issue with any character or subscript besides $\alpha_-$. But what's going on? Why does Matplotlib understand how to display $\gamma_-$ (and so many other choices) yet not the label I really need? Could this be fixed with a different version of Matplotlib? Since padding adjustment doesn't fix the issue, what can I do? | If there is a "bigger" subscript, it seems to work better import matplotlib.pyplot as plt plt.rc("axes", labelsize=22) plt.plot([0,1],[0,1]) plt.ylabel(r"$\alpha_{x}$") plt.show() It's when the subscript is "-" that we get problems If you become desperate, here is a hack that makes the whole alpha visible plt.ylabel(r"$\alpha_{-_{_{_{.}}}}$") Only if you look very closely do you see the tiny "."! If that is still too large, you could add more layers of _{}. Or you could find a character that appears blank. However the space charactor does not work for this purpose - I guess TeX is clever enough to know that it doesn't print any pixels. One-off workaround With your character being a minus, it does look like an underscore. So why not print it as a (non-subscripted) underscore? Would that make you feel dirty? If not: import matplotlib.pyplot as plt plt.rc("axes", labelsize=22) plt.plot([0,1],[0,1]) plt.ylabel(r"$\alpha\_$") plt.show() | 4 | 2 |
75,587,445 | 2023-2-28 | https://stackoverflow.com/questions/75587445/no-module-named-sqlalchemy-databases | While trying to install the requirements to use magic sql, I encounter this error, No module named 'sqlalchemy.databases', with this code: import pandas as pd from sqlalchemy.engine import create_engine # Presto engine = create_engine('presto://localhost:8080/system/runtime') #Read Presto Data query into a DataFrame df = pd.read_sql('select * from queries limit 1', engine) df.head() Thanks a lot for your reply. | ipython-sql 0.5.0 depends on sqlalchemy 2 which deprecated the sqlalchemy.databases package in favor of the sqlalchemy.dialects package (details). My solution was to pin both of these libraries to the following versions: %pip install ipython-sql==0.4.1 %pip install sqlalchemy==1.4.46 | 3 | 3 |
75,594,651 | 2023-2-28 | https://stackoverflow.com/questions/75594651/cant-create-exe-with-pyinstaller-when-openais-whisper-is-imported | I am trying to create a small program that works with OpenAI's Whisper. I then build the Python script to the .exe file using PyInstaller (auto-py-to-exe). When I run the .exe file, I get the following error: Traceback (most recent call last): File "transformers\utils\versions.py", line 108, in require_version File "importlib\metadata\__init__.py", line 996, in version File "importlib\metadata\__init__.py", line 969, in distribution File "importlib\metadata\__init__.py", line 548, in from_name importlib.metadata.PackageNotFoundError: No package metadata was found for tqdm During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Whisper.py", line 2, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "whisper\__init__.py", line 12, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "whisper\decoding.py", line 11, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "whisper\tokenizer.py", line 8, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\__init__.py", line 30, in <module> File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "transformers\dependency_versions_check.py", line 41, in <module> File "transformers\utils\versions.py", line 123, in require_version_core File "transformers\utils\versions.py", line 110, in require_version importlib.metadata.PackageNotFoundError: No package metadata was found for The 'tqdm>=4.27' distribution was not found and is required by this application. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main [10092] Failed to execute script 'Whisper' due to unhandled exception! This is the source code of my program: import whisper import os from timeit import default_timer as timer start = timer() audio_file = "MyAudioFile.wav" model = whisper.load_model("small") options = {"language": "de"} res = model.transcribe(f"C:/Users/Username/Desktop/Transcribe/{audio_file}", **options) end = timer() os.system("cls") print(res["text"]) print("\nFinished in " + str(end-start) + " seconds!") input("Press ENTER to close the program.") Everything works without problems when I run the Python script from the console or from VS code. However, unfortunately not when I start it as .exe. I have already tried to reinstall the packages such as tqdm with pip, without success. I am grateful for any help and hints! | My problem was solved by adding --recursive-copy-metadata "openai-whisper" to the PyInstaller command. | 4 | 2 |
75,591,496 | 2023-2-28 | https://stackoverflow.com/questions/75591496/why-autopep8-dont-format-on-save | Autopep8 don't working at all. Here's my setting: { "editor.defaultFormatter": "ms-python.autopep8", "editor.formatOnSave": true, "editor.formatOnPaste": true, "editor.formatOnType": true, "autopep8.showNotifications": "always", "indentRainbow.colorOnWhiteSpaceOnly": true, "[python]": { "editor.formatOnType": true }, "workbench.colorTheme": "Monokai Dimmed", "editor.codeActionsOnSave": { }, "autopep8.args": [ "--in-place --aggressive --aggressive" ] } For exaple I using this code: import math, sys; def example1(): ####This is a long comment. This should be wrapped to fit within 72 characters. some_tuple=( 1,2, 3,'a' ); some_variable={'long':'Long code lines should be wrapped within 79 characters.', 'other':[math.pi, 100,200,300,9876543210,'This is a long string that goes on'], 'more':{'inner':'This whole logical line should be wrapped.',some_tuple:[1, 20,300,40000,500000000,60000000000000000]}} return (some_tuple, some_variable) def example2(): return {'has_key() is deprecated':True}.has_key({'f':2}.has_key('')); class Example3( object ): def __init__ ( self, bar ): #Comments should have a space after the hash. if bar : bar+=1; bar=bar* bar ; return bar else: some_string = """ Indentation in multiline strings should not be touched. Only actual code should be reindented. """ return (sys.path, some_string) The only way I can use autopep8 is import it as module and run autopep8.fix_code('''code example''') and I can't get result that I want. That am i doing wrong? Thanks! P. S. I'm using VS Code 1.75.1, Python 3.9.16, autopep8 v2022.2.0. Excepted result: import math import sys def example1(): # This is a long comment. This should be wrapped to fit within 72 # characters. some_tuple = (1, 2, 3, 'a') some_variable = { 'long': 'Long code lines should be wrapped within 79 characters.', 'other': [ math.pi, 100, 200, 300, 9876543210, 'This is a long string that goes on'], 'more': { 'inner': 'This whole logical line should be wrapped.', some_tuple: [ 1, 20, 300, 40000, 500000000, 60000000000000000]}} return (some_tuple, some_variable) def example2(): return ('' in {'f': 2}) in {'has_key() is deprecated': True} class Example3(object): def __init__(self, bar): # Comments should have a space after the hash. if bar: bar += 1 bar = bar * bar return bar else: some_string = """ Indentation in multiline strings should not be touched. Only actual code should be reindented. """ return (sys.path, some_string) | Add "python.formatting.provider": "autopep8", to your settings.json. Read document for more information. | 3 | 2 |
75,597,598 | 2023-2-28 | https://stackoverflow.com/questions/75597598/how-to-efficiently-remove-duplicates-from-list-of-lists-nested-containing-dict | I have a list of lists, where each list contains a dictionary and integer. Sometimes duplicate lists occur, and I wish to remove these from the parent list directly. Currently, I am creating a new list and iterating over the old list to ensure only unique values are appended, but I feel this is bad practice. Can this be rewritten to a one-liner with list comprehension, or can the original list be filtered directly instead, for performance enhancement? TRIAL=[[{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 0], [{'http': '66.70.178.214:9300', 'https': '66.70.178.214:9300'}, 0], [{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 0], [{'http': '157.245.27.9:3128', 'https': '157.245.27.9:3128'}, 0], [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0], [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0], [{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 1]] #We have some duplicates which we want to filter out if there with function temporary_list=[] for i in TRIAL: if i[0] not in [item[0] for item in temporary_list]: temporary_list.append(i) temporary_list (desired outcome) [[{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 0], [{'http': '66.70.178.214:9300', 'https': '66.70.178.214:9300'}, 0], [{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 0], [{'http': '157.245.27.9:3128', 'https': '157.245.27.9:3128'}, 0], [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0]] | If you can use pandas: import pandas as pd df = pd.DataFrame(TRIAL, columns=['d', 'i']) out = df.groupby(df['d'].apply(dict.values).apply(tuple)).last().values.tolist() print(out) [[{'http': '130.61.100.135:80', 'https': '130.61.100.135:80'}, 1], [{'http': '157.245.27.9:3128', 'https': '157.245.27.9:3128'}, 0], [{'http': '185.246.84.7:8080', 'https': '185.246.84.7:8080'}, 0], [{'http': '46.101.160.223:80', 'https': '46.101.160.223:80'}, 0], [{'http': '66.70.178.214:9300', 'https': '66.70.178.214:9300'}, 0]] | 3 | 5 |
75,567,769 | 2023-2-25 | https://stackoverflow.com/questions/75567769/vscode-does-not-recognize-my-flask-dependency-installed-with-poetry | I am learning how to use VSCode for Python development by building a simple Flask app. I use Poetry for dependency management. I have a simple app.py where I'm importing flask and defining a basic route like so: from flask import Flask app = Flask(__name__) @app.route("/") def home(): return "Hello, Flask!" In the VSCode terminal, I run poetry shell and then python -m flask run, and the server starts up no problem. I'm able to hit the endpoint and get a response. However, my VSCode editor is telling me that it's unable to import flask. I'm unable to run the VSCode debugger for this reason. I'm missing some step. Does anybody have any idea? Thank you in advance! | Re-starting VSCode did the trick. I wasn't seeing the Poetry interpreter when I looked for it via cmd+shift+p and Python: Select Interpreter prior to re-starting, and now I do. Selecting the Poetry interpreter it fixes the issue! | 4 | 2 |
75,593,816 | 2023-2-28 | https://stackoverflow.com/questions/75593816/simplify-expressions-given-by-sympy-sympify | sp.simplify doesn't seem to work when your expression is given by sp.sympify. How can I change that? import sympy as sp r = sp.Symbol('r', real = True) f_str = 'sqrt(1/r**4)' f1 = sp.sympify( f_str ) f2 = sp.sqrt(1/r**4) for f in f1,f2: sp.pprint(sp.simplify(f)) which outputs ____ β± 1 β± ββ # f1 β± 4 β²β± r 1 ββ # f2 2 r I was expecting that given a real value (r), a sympify expression could get simplified | The r in f1 isn't the same as the r symbol you defined: >>> f1.free_symbols == f2.free_symbols False In particular, this means that the assumption that r is real doesn't carry through, which is necessary for the simplification you want. You can remedy this postmortem by substituting the r in f1 with your r symbol: >>> f1 # old r, no assumptions sqrt(r**(-4)) >>> f1.subs("r", r) # your r, with real assumption r**(-2) In general, you can specify assumptions for string inputs at construction time by passing a dictionary mapping string symbols to your desired SymPy Symbols: >>> f3 = sp.sympify(f_str, {"r": r}) >>> f3 r**(-2) >>> f3.free_symbols == f2.free_symbols True | 3 | 6 |