Dataset columns (each record below repeats these fields in this order): question_id (int64, roughly 59.5M to 79.4M), creation_date (string, 8 to 10 characters), link (string, 60 to 163 characters), question (string, 53 to 28.9k characters), accepted_answer (string, 26 to 29.3k characters), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482).
74,328,303
2022-11-5
https://stackoverflow.com/questions/74328303/cannot-import-name-pagecoroutine-from-scrapy-playwright-page
I am trying to use scrapy and playwright to scrape dynamic webpages. I installed scrapy and playwright; however, when I try to run my spider, I get this error: ImportError: cannot import name 'PageCoroutine' from 'scrapy_playwright.page' (C:\Ali\DataCamp\Web Scraping in Python\Scrapy\venv\lib\site-packages\scrapy_playwright\page.py) This is my code (it's a test code): import scrapy from scrapy_playwright.page import PageCoroutine class PwspiderSpider(scrapy.Spider): name = 'pwspider' def start_requests(self): yield scrapy.Request("https://shoppable-campaign-demo.netlify.app/#/", meta=dict(playwright=True, playwright_include_page=True, playwright_page_coroutine=[PageCoroutine('wait_for_selector', 'div#productListing')])) async def parse(self, response): yield {'text': response.text} I even added the DOWNLOAD_HANDLERS and the TWISTED_REACTOR in the settings file.
PageCoroutine is deprecated/obsolute. Use playwright_page_methods instead. Working code as an example: import scrapy from scrapy_playwright.page import PageMethod class TestSpider(scrapy.Spider): name = "test" def start_requests(self): yield scrapy.Request( url="https://shoppable-campaign-demo.netlify.app/#/", callback=self.parse, meta={ "playwright": True, "playwright_page_methods": [ PageMethod("wait_for_selector", '.card-body'), ], }, ) def parse(self, response): products = response.xpath('//*[@class="card-body"]') for product in products: yield { 'title':product.xpath('.//*[@class="card-title"]/text()').get() } Output: {'title': 'Oxford Loafers'} 2022-11-05 20:40:40 [scrapy.core.scraper] DEBUG: Scraped from <200 https://shoppable-campaign-demo.netlify.app/#/> {'title': 'Ankle-length Slack'} 2022-11-05 20:40:40 [scrapy.core.scraper] DEBUG: Scraped from <200 https://shoppable-campaign-demo.netlify.app/#/> {'title': 'White Baseball Cap'} 2022-11-05 20:40:40 [scrapy.core.scraper] DEBUG: Scraped from <200 https://shoppable-campaign-demo.netlify.app/#/> {'title': 'Triangle Bikini Top'} 2022-11-05 20:40:40 [scrapy.core.scraper] DEBUG: Scraped from <200 https://shoppable-campaign-demo.netlify.app/#/> {'title': 'Short Blazer'} 2022-11-05 20:40:40 [scrapy.core.engine] INFO: Closing spider (finished) 2022-11-05 20:40:40 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 235, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 39851, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 41.370211, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2022, 11, 5, 14, 40, 40, 261151), 'item_scraped_count': 5,
3
6
74,316,151
2022-11-4
https://stackoverflow.com/questions/74316151/color-correction-using-least-square-method
I have tried color correcting an image using the least square method. I don't understand why it doesn't work, this is supposed to be the standard way of color calibration. First I pull in the above image in CR3 format, convert it to RGB space then crop out the four color patches using the OpenCV boundingRect and inRange functions, saving these four patches in an array called coloursRect. vstack is used so that the array storing each pixel's colour transforms from 3d to 2d. So, for example, colour0 stores every pixels' RGB value of the 'red patch'. colour0 = np.vstack(coloursRect[0]) colour1 = np.vstack(coloursRect[1]) colour2 = np.vstack(coloursRect[2]) colour3 = np.vstack(coloursRect[3]) lstsq_a = np.array(np.vstack((colour0,colour1,colour2,colour3))) Then I declare the original reference colours in RGB. r_ref = [240,0,22] y_ref = [252,222,10] g_ref = [30,187,22] b_ref = [26,0,165] ref_patches = [r_ref,y_ref,g_ref, b_ref] The number of each reference color is multiplied according to the number of pixels in that actual image color patch, so, for example, r_ref is multiplied by the length of colour0 array. (I understand this is a bad way to manipulate the data, but this should work theoretically) lstsq_b_0to255 = np.array(np.vstack(([ref_patches[0]]*colour0.shape[0],[ref_patches[1]]*colour1.shape[0],[ref_patches[2]]*colour2.shape[0],[ref_patches[3]]*colour3.shape[0]))) Least square is computed, and multiplied with the image. lstsq_x_0to255 = np.linalg.lstsq(lstsq_a, lstsq_b_0to255)[0] img_shape = img.shape img_s = img.reshape((-1, 3)) img_corr_s = img_s @ lstsq_x_0to255 img_corr = img_corr_s.reshape(img_shape).astype('uint8') However this color correction method does not work and the colours in the image are incorrect. May I know what is the problem? Edit: using RGB instead of HSV for the reference colours
Ignoring the fact that the image ICC profile is not properly decoded here, this is the expected result given your reference RGB values and using Colour: import colour import numpy as np # Reference values a likely non-linear 8-bit sRGB values. # "colour.cctf_decoding" uses the sRGB EOTF by default. REFERENCE_RGB = colour.cctf_decoding( np.array( [ [240, 0, 22], [252, 222, 10], [30, 187, 22], [26, 0, 165], ] ) / 255 ) colour.plotting.plot_multi_colour_swatches(colour.cctf_encoding(REFERENCE_RGB)) IMAGE = colour.cctf_decoding(colour.read_image("/Users/kelsolaar/Downloads/EKcv1.jpeg")) # Measured test values, the image is not properly decoded as it has a very specific ICC profile. TEST_RGB = np.array( [ [0.578, 0.0, 0.144], [0.895, 0.460, 0.0], [0.0, 0.183, 0.074], [0.067, 0.010, 0.070], ] ) colour.plotting.plot_image( colour.cctf_encoding(colour.colour_correction(IMAGE, REFERENCE_RGB, TEST_RGB)) ) The main functions, available in this module are as follows: def least_square_mapping_MoorePenrose(y: ArrayLike, x: ArrayLike) -> NDArray: """ Compute the *least-squares* mapping from dependent variable :math:`y` to independent variable :math:`x` using *Moore-Penrose* inverse. Parameters ---------- y Dependent and already known :math:`y` variable. x Independent :math:`x` variable(s) values corresponding with :math:`y` variable. Returns ------- :class:`numpy.ndarray` *Least-squares* mapping. References ---------- :cite:`Finlayson2015` Examples -------- >>> prng = np.random.RandomState(2) >>> y = prng.random_sample((24, 3)) >>> x = y + (prng.random_sample((24, 3)) - 0.5) * 0.5 >>> least_square_mapping_MoorePenrose(y, x) # doctest: +ELLIPSIS array([[ 1.0526376..., 0.1378078..., -0.2276339...], [ 0.0739584..., 1.0293994..., -0.1060115...], [ 0.0572550..., -0.2052633..., 1.1015194...]]) """ y = np.atleast_2d(y) x = np.atleast_2d(x) return np.dot(np.transpose(x), np.linalg.pinv(np.transpose(y))) def matrix_augmented_Cheung2004( RGB: ArrayLike, terms: Literal[3, 5, 7, 8, 10, 11, 14, 16, 17, 19, 20, 22] = 3, ) -> NDArray: """ Perform polynomial expansion of given *RGB* colourspace array using *Cheung et al. (2004)* method. Parameters ---------- RGB *RGB* colourspace array to expand. terms Number of terms of the expanded polynomial. Returns ------- :class:`numpy.ndarray` Expanded *RGB* colourspace array. Notes ----- - This definition combines the augmented matrices given in :cite:`Cheung2004` and :cite:`Westland2004`. References ---------- :cite:`Cheung2004`, :cite:`Westland2004` Examples -------- >>> RGB = np.array([0.17224810, 0.09170660, 0.06416938]) >>> matrix_augmented_Cheung2004(RGB, terms=5) # doctest: +ELLIPSIS array([ 0.1722481..., 0.0917066..., 0.0641693..., 0.0010136..., 1...]) """ RGB = as_float_array(RGB) R, G, B = tsplit(RGB) tail = ones(R.shape) existing_terms = np.array([3, 5, 7, 8, 10, 11, 14, 16, 17, 19, 20, 22]) closest_terms = as_int(closest(existing_terms, terms)) if closest_terms != terms: raise ValueError( f'"Cheung et al. (2004)" method does not define an augmented ' f"matrix with {terms} terms, closest augmented matrix has " f"{closest_terms} terms!" 
) if terms == 3: return RGB elif terms == 5: return tstack( [ R, G, B, R * G * B, tail, ] ) elif terms == 7: return tstack( [ R, G, B, R * G, R * B, G * B, tail, ] ) elif terms == 8: return tstack( [ R, G, B, R * G, R * B, G * B, R * G * B, tail, ] ) elif terms == 10: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, tail, ] ) elif terms == 11: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, tail, ] ) elif terms == 14: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, R**3, G**3, B**3, tail, ] ) elif terms == 16: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, R**2 * G, G**2 * B, B**2 * R, R**3, G**3, B**3, ] ) elif terms == 17: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, R**2 * G, G**2 * B, B**2 * R, R**3, G**3, B**3, tail, ] ) elif terms == 19: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, R**2 * G, G**2 * B, B**2 * R, R**2 * B, G**2 * R, B**2 * G, R**3, G**3, B**3, ] ) elif terms == 20: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, R**2 * G, G**2 * B, B**2 * R, R**2 * B, G**2 * R, B**2 * G, R**3, G**3, B**3, tail, ] ) elif terms == 22: return tstack( [ R, G, B, R * G, R * B, G * B, R**2, G**2, B**2, R * G * B, R**2 * G, G**2 * B, B**2 * R, R**2 * B, G**2 * R, B**2 * G, R**3, G**3, B**3, R**2 * G * B, R * G**2 * B, R * G * B**2, ] ) def matrix_colour_correction_Cheung2004( M_T: ArrayLike, M_R: ArrayLike, terms: Literal[3, 5, 7, 8, 10, 11, 14, 16, 17, 19, 20, 22] = 3, ) -> NDArray: """ Compute a colour correction matrix from given :math:`M_T` colour array to :math:`M_R` colour array using *Cheung et al. (2004)* method. Parameters ---------- M_T Test array :math:`M_T` to fit onto array :math:`M_R`. M_R Reference array the array :math:`M_T` will be colour fitted against. terms Number of terms of the expanded polynomial. Returns ------- :class:`numpy.ndarray` Colour correction matrix. References ---------- :cite:`Cheung2004`, :cite:`Westland2004` Examples -------- >>> prng = np.random.RandomState(2) >>> M_T = prng.random_sample((24, 3)) >>> M_R = M_T + (prng.random_sample((24, 3)) - 0.5) * 0.5 >>> matrix_colour_correction_Cheung2004(M_T, M_R) # doctest: +ELLIPSIS array([[ 1.0526376..., 0.1378078..., -0.2276339...], [ 0.0739584..., 1.0293994..., -0.1060115...], [ 0.0572550..., -0.2052633..., 1.1015194...]]) """ return least_square_mapping_MoorePenrose( matrix_augmented_Cheung2004(M_T, terms), M_R ) def colour_correction_Cheung2004( RGB: ArrayLike, M_T: ArrayLike, M_R: ArrayLike, terms: Literal[3, 5, 7, 8, 10, 11, 14, 16, 17, 19, 20, 22] = 3, ) -> NDArray: """ Perform colour correction of given *RGB* colourspace array using the colour correction matrix from given :math:`M_T` colour array to :math:`M_R` colour array using *Cheung et al. (2004)* method. Parameters ---------- RGB *RGB* colourspace array to colour correct. M_T Test array :math:`M_T` to fit onto array :math:`M_R`. M_R Reference array the array :math:`M_T` will be colour fitted against. terms Number of terms of the expanded polynomial. Returns ------- :class:`numpy.ndarray` Colour corrected *RGB* colourspace array. 
References ---------- :cite:`Cheung2004`, :cite:`Westland2004` Examples -------- >>> RGB = np.array([0.17224810, 0.09170660, 0.06416938]) >>> prng = np.random.RandomState(2) >>> M_T = prng.random_sample((24, 3)) >>> M_R = M_T + (prng.random_sample((24, 3)) - 0.5) * 0.5 >>> colour_correction_Cheung2004(RGB, M_T, M_R) # doctest: +ELLIPSIS array([ 0.1793456..., 0.1003392..., 0.0617218...]) """ RGB = as_float_array(RGB) shape = RGB.shape RGB = np.reshape(RGB, (-1, 3)) RGB_e = matrix_augmented_Cheung2004(RGB, terms) CCM = matrix_colour_correction_Cheung2004(M_T, M_R, terms) return np.reshape(np.transpose(np.dot(CCM, np.transpose(RGB_e))), shape) I would probably recommend using Colour directly as there are multiple methods that gives different result depending on the training set. That being said, I would not expect great results given that you really only have 4 chromatic colours and none achromatic. The minimum recommended chart for that kind of calibration is the ColorChecker Classic with 24 patches.
3
4
74,324,986
2022-11-5
https://stackoverflow.com/questions/74324986/can-i-set-the-index-value-of-a-list-to-a-changing-variable-in-this-example
I am a total beginner to Python and recently had a question while experimenting with lists. I have a for loop that increases a variable 'x' and generates a random number every time. I want to add this random number to a list, but when I try assigning the index value of the random number to x, I get this error message: Traceback (most recent call last): File "<stdin>", line 6, in <module> IndexError: list assignment index out of range Any help would be greatly appreciated! My code: import random as number x = 0 values = [] while x < 9: values[x] = number.randint(1,91) x += 1 print(values)
import random as number values = [number.randint(1,91) for _ in range(9)]
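For comparison, a minimal sketch (assuming the goal is still nine random integers) that keeps the question's while loop but uses append, since assigning to values[x] fails when the list does not yet have an element at that index:

import random as number

x = 0
values = []
while x < 9:
    values.append(number.randint(1, 91))  # append grows the list; values[x] = ... raises IndexError on an empty list
    x += 1
print(values)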
3
3
74,324,901
2022-11-5
https://stackoverflow.com/questions/74324901/how-does-inheritance-of-class-variables-work-in-python
I'm trying to get back to Python, but I don't get why the following code doesn't work as intended. class Cat: age = 0 class Dog(Cat): pass Dog.age = 1 Cat.age = 2 print(Dog.age, Cat.age) My output is: 1 2 But why doesn't Dog.age equal 2? Dog is a subclass of Cat, and modifying the class variable of the superclass Cat would normally affect every subclass that inherits the variable as well.
Any property of Dog will override a property inherited from Cat. You can re-define a value in Cat, but it won't matter because it has already been overridden by the child. For example: class Cat: age = 0 # Cat.age = 0 class Dog(Cat): pass # Dog.age = Cat.age = 0 Dog.age=1 # Dog.age = 1, and Dog.age no longer points to Cat.age Cat.age=2 # Cat.age = 2 print(Dog.age, Cat.age) # Dog.age is no longer Cat.age. They are completely different Contrast that with this: class Cat: age = 0 # Cat.age = 0 class Dog(Cat): pass # Dog.age = Cat.age = 0 Cat.age = 10 # Cat.age = 10 print(Dog.age, Cat.age) # Dog.age points to Cat.age, so Dog.age resolves to 10
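As a small supplement (a sketch, not part of the original answer), the shadowing can be made visible by checking each class's own attribute dictionary:

class Cat:
    age = 0

class Dog(Cat):
    pass

print('age' in Dog.__dict__)  # False: Dog.age is resolved by falling back to Cat.age
Dog.age = 1                   # creates Dog's own 'age' entry, shadowing the inherited one
Cat.age = 2
print('age' in Dog.__dict__)  # True
print(Dog.age, Cat.age)       # 1 2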
5
3
74,314,778
2022-11-4
https://stackoverflow.com/questions/74314778/nameerror-name-glpushmatrix-is-not-defined
I tried to run test code for Stable Baselines3 with Gym: import gym from stable_baselines3 import A2C env = gym.make("CartPole-v1") model = A2C("MlpPolicy", env, verbose=1) model.learn(total_timesteps=10_000) obs = env.reset() for i in range(100): action, _state = model.predict(obs, deterministic=True) obs, reward, done, info = env.step(action) env.render() if done: obs = env.reset() I got the error "NameError: name 'glPushMatrix' is not defined" Traceback (most recent call last): File "test_cart_pole.py", line 14, in <module> env.render() File "/Users/xxx/opt/anaconda3/lib/python3.8/site-packages/gym/core.py", line 295, in render return self.env.render(mode, **kwargs) File "/Users/xxx/opt/anaconda3/lib/python3.8/site-packages/gym/envs/classic_control/cartpole.py", line 229, in render return self.viewer.render(return_rgb_array=mode == "rgb_array") File "/Users/xxx/opt/anaconda3/lib/python3.8/site-packages/gym/envs/classic_control/rendering.py", line 126, in render self.transform.enable() File "/Users/xxx/opt/anaconda3/lib/python3.8/site-packages/gym/envs/classic_control/rendering.py", line 232, in enable glPushMatrix() NameError: name 'glPushMatrix' is not defined I tried "pip install PyOpenGL PyOpenGL_accelerate", which didn't help. I also uninstalled pyglet and installed it again, which didn't work either. Any ideas?
Just had the same problem. Fixed it by installing an older version of pyglet: $ pip install pyglet==1.5.27 I don't know if this is the latest version that avoids the problem, but it works.
20
45
74,322,894
2022-11-4
https://stackoverflow.com/questions/74322894/get-item-from-set-python
I have a set which contains objects which I have the __eq__ and __hash__ functions defined for. I would like to be able to check if an object with the same hash is in the set and if it is in the set to return the object from the set as I need the reference to the object. class SetObject(): def __init__( self, a: int, b: int, c: int ): self.a = a self.b = b self.c = c def __repr__(self): return f"SetObject ({self.a} {self.b} {self.c}) (id: {id(self)}" def __eq__(self, other): return isinstance(other, SetObject) \ and self.__hash__() == other.__hash__() def __hash__(self): # Hash only depends on a, b return hash( (self.a,self.b) ) x = SetObject(1,2,3) y = SetObject(4,5,6) object_set = set([x,y]) print(f"{object_set=}") z = SetObject(1,2,7) print(f"{z=}") if z in object_set: print("Is in set") # Get the object in set which is equal to z for element in object_set: if element == z: print(element) z = element print(f"{z=}")
Instead of using a set, use a dictionary where the keys and values are the same element. Then you can look use the value as a key and return the element. x = SetObject(1,2,3) y = SetObject(4,5,6) object_set = dict([(x, x),(y, y)]) print(f"{object_set=}") z = SetObject(1,2,7) print(f"{z=}") if z in object_set: print("Is in set") z = object_set[z] print(f"{z=}") If you want to simplify this, you could define a subclass of dict that automatically makes the values the same as the keys.
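A minimal sketch of that dict-subclass idea (the class name is illustrative, not from the original answer), reusing the SetObject instances x, y, and z from the question:

class KeyedSet(dict):
    """A dict used like a set: each element is stored as both key and value."""
    def add(self, element):
        self[element] = element

pool = KeyedSet()
pool.add(x)
pool.add(y)

z = SetObject(1, 2, 7)
if z in pool:
    z = pool[z]  # returns the stored object that hashes/compares equal to z
print(z)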
5
5
74,316,523
2022-11-4
https://stackoverflow.com/questions/74316523/kwarg-unpacking-with-mypy
I have a function, that accepts inputs of different types, and they all have default values. For example, def my_func(a: str = 'Hello', b: bool = False, c: int = 1): return a, b, c What I want to do is define the non-default kwargs in a dictionary, and then pass this to the function. For example, input_kwargs = {'b': True} result = my_func(**input_kwargs) This works fine in that it gives me the output I want. However, mypy complains giving me an error like: error: Argument 1 to "my_func" has incompatible type "**Dict[str, bool]"; expected "str". Is there any way to get around this. I don't want to manually enter the keywords each time, as these input_kwargs would be used multiple times across different functions.
You can create a class (by inheriting from TypedDict) with class variables that match with my_func parameters. To make all these class variables as optional, you can set total=False. Then use this class as the type for input_kwargs to make mypy happy :) from typing import TypedDict class Params(TypedDict, total=False): a: str b: bool c: int def my_func(a: str = 'Hello', b: bool = False, c: int = 1): return a, b, c input_kwargs: Params = {'b': True} result = my_func(**input_kwargs)
4
4
74,311,394
2022-11-4
https://stackoverflow.com/questions/74311394/is-there-any-way-to-monkey-patch-builtin-parentheses-behaviour-in-p
I am just checking where is the limit of changing python using python (without modifying interpreter and/or C code). I know that I can basically monkey patch every builtin function like this: import builtins int(1) # 1 def new_int(number): return number + 1 builtins.int = new_int int(1) # 2 I know I can turn python classes upside down using special methods like __new__, __get__ etc. and that I can overload any operator. but is there any way to monkey patch parentheses? so that instead of creating list, python will do something else with elements between [...] like creating tuple. # normal old = [1,2,3,4] print(old) # [1, 2, 3, 4] type(old) # <class 'list'> # some strange code that monkeypatches [] so that instead of list, it creates tuple def monkey_patch_list(values): return tuple(values) [] = monkey_patch_list # of course it is wrong # new new = [1,2,3,4] print(new) # (1, 2, 3, 4) type(old) # <class 'tuple'> Probably there is no way to do it just in python but maybe somewhere hidden in python code there is a definition of handling [] so that I can mess it up. If there is anyone crazy like me and knows how to do it I will appreciate help. Disclaimer: Don't worry just for fun, thanks ;)
This can be done as a function decorator as long as the lists to be replaced as tuples are defined in a function. To do that, use ast.NodeTransformer to replace any ast.List node with an equivalent ast.Tuple node in the function's AST: import ast import inspect from textwrap import dedent class ForceTuples(ast.NodeTransformer): def visit_List(self, node): return ast.Tuple(**vars(node)) # remove 'force_tuples' from the function's decorator list to avoid re-decorating during exec def visit_FunctionDef(self, node): node.decorator_list = [ decorator for decorator in node.decorator_list if not isinstance(decorator, ast.Name) or decorator.id != 'force_tuples' ] self.generic_visit(node) return node def force_tuples(func): tree = ForceTuples().visit(ast.parse(dedent(inspect.getsource(func)))) ast.fix_missing_locations(tree) scope = {} exec(compile(tree, inspect.getfile(func), 'exec'), func.__globals__, scope) return scope[func.__name__] so that: @force_tuples def foo(): bar = [1, 2, 3, 4] print(bar) print(type(bar)) foo() outputs: (1, 2, 3, 4) <class 'tuple'> Demo: https://replit.com/@blhsing/MellowLinearSystemsanalysis
4
3
74,311,319
2022-11-4
https://stackoverflow.com/questions/74311319/adjusting-the-plotly-colorbar-for-each-subplot-according-to-their-min-and-max
I wanted to find out how to have three different colorbars for my plotly 3 subplots and be able to adjust each of the three colorbars according to their min and max. (attached snapshot). Does anyone know how to have each colorbar on the right side of each subplot? Also, for some reasons, the plot sizes are not perfect and they dont appear with the subtitles specified in the code! Lastly, I wonder if there is a way to synchronize the subplots together so that when we zoom in or out on each of the subplots, they all move together. This is a sample of my code: import pandas as pd import plotly.graph_objects as go from plotly.subplots import make_subplots # load dataset Real_df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/volcano.csv") Model_df = (df_1[np.random.default_rng(seed=42).permutation(df_1.columns.values)]) Error_df = pd.DataFrame(np.random.randint(0,5,size=(87, 61)), columns=df_2.columns) # create figure #fig = go.Figure() # Add surface trace fig = make_subplots(rows=1, cols=3, specs=[[{'is_3d': True}, {'is_3d': True}, {'is_3d': True}]], subplot_titles=['True', 'Model', 'Error Percentage'], ) fig.add_trace(go.Surface(z=Real_df.values.tolist(), colorscale="jet"), 1, 1) fig.add_trace(go.Surface(z=Model_df.values.tolist(), colorscale="jet"), 1, 2) fig.add_trace(go.Surface(z=Error_df.values.tolist(), colorscale="jet"), 1, 3) # Update plot sizing fig.update_layout( width=800, height=900, autosize=False, margin=dict(t=0, b=0, l=0, r=0), template="plotly_white", ) # Update 3D scene options fig.update_scenes( aspectratio=dict(x=1, y=1, z=0.7), aspectmode="manual" ) fig.update_layout(1, 3, coloraxis={"cmin": 0, "cmax": 2}) fig.show()
To display a color bar for each subplot, the x-axis position must be set for each subplot. Also, for the subplot titles, the top margin is set to 0, which hides the text display area, so I set the top margin to 50. Finally, there does not seem to be a way to synchronize the zoom of the subplots at this time; the plotly community has mentioned synchronizing the camera viewpoint as an answer, but I am unsure if that is available in the current version. import pandas as pd import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots # load dataset Real_df = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/volcano.csv") Model_df = (Real_df[np.random.default_rng(seed=42).permutation(Real_df.columns.values)]) Error_df = pd.DataFrame(np.random.randint(0,5,size=(87, 61)), columns=Real_df.columns) # Add surface trace fig = make_subplots(rows=1, cols=3, specs=[[{'is_3d': True}, {'is_3d': True}, {'is_3d': True}]], subplot_titles=['True', 'Model', 'Error Percentage'], ) fig.add_trace(go.Surface(z=Real_df.values.tolist(), colorscale="jet", colorbar_x=0.3), 1, 1) fig.add_trace(go.Surface(z=Model_df.values.tolist(), colorscale="jet", colorbar_x=0.65), 1, 2) fig.add_trace(go.Surface(z=Error_df.values.tolist(), colorscale="jet", colorbar_x=1.05, cmax=2, cmin=0), 1, 3) # Update plot sizing fig.update_layout( width=2000, height=900, autosize=False, margin=dict(t=50, b=0, l=0, r=0), template="plotly_white", ) # Update 3D scene options fig.update_scenes( aspectratio=dict(x=1, y=1, z=0.7), aspectmode="manual" ) fig.show()
3
4
74,311,275
2022-11-4
https://stackoverflow.com/questions/74311275/modulenotfounderror-no-module-named-openai
import requests from bs4 import BeautifulSoup import openai #write each line of nuclear.txt to a list with open('nuclear.txt', 'r') as f: lines = f.readlines() #remove the newline character from each line lines = [line.rstrip() for line in lines] #gather the text from each website and add it to a new txt file for line in lines: r = requests.get(line) soup = BeautifulSoup(r.text, 'html.parser') text = soup.get_text() with open('nuclear_text.txt', 'a') as f: f.write(text) I'm trying to import openai, however it keeps throwing the error module not found. I have done pip install openai and it downloads, but it appears to be the wrong version of python. How do I select the correct one for pip to install to? I am using VSCode pip install openai
Follow the steps below to install the openai package for the current interpreter run the following code import sys print(sys.executable) get the current interpreter path Copy the path and install openai using the following command in the terminal C:\WorkSpace\pytest10\.venv\Scripts\python.exe -m pip install openai Modify the path in the above command to the interpreter path you got.
6
14
74,310,338
2022-11-3
https://stackoverflow.com/questions/74310338/get-keys-of-dictionary-based-on-rules
given a dictionary dictionary = {'Animal 1': {'Dog': 'Yes', 'Cat': 'No', 'Color': 'Black'}, 'Animal 2': {'Dog': 'Yes', 'Cat': 'No', 'Color': 'Brown'}, 'Animal 3': {'Dog': 'No', 'Cat': 'Yes', 'Color': 'Grey'}} How do I select the Animals that are dogs? expected output ['Animal 1','Animal 2'] I could use: pd.DataFrame.from_dict(dictionary).T.loc[pd.DataFrame.from_dict(dictionary).T["Dog"]=='Yes',:].index.to_list() but it looks very ugly
You can use list comprehension: dictionary = { "Animal 1": {"Dog": "Yes", "Cat": "No", "Color": "Black"}, "Animal 2": {"Dog": "Yes", "Cat": "No", "Color": "Brown"}, "Animal 3": {"Dog": "No", "Cat": "Yes", "Color": "Grey"}, } out = [k for k, d in dictionary.items() if d.get("Dog") == "Yes"] print(out) Prints: ['Animal 1', 'Animal 2']
3
3
74,307,236
2022-11-3
https://stackoverflow.com/questions/74307236/python-why-do-functools-partial-functions-not-become-bound-methods-when-set-as
I was reading about how functions become bound methods when being set as class attributes. I then observed that this is not the case for functions that are wrapped by functools.partial. What is the explanation for this? Simple example: from functools import partial def func1(): print("foo") func1_partial = partial(func1) class A: f = func1 g = func1_partial a = A() a.f() # TypeError: func1() takes 0 positional arguments but 1 was given a.g() # prints "foo" I kind of expected them both to behave in the same way.
The trick that allows functions to become bound methods is the __get__ magic method. To very briefly summarize that page, when you access a field on an instance, say foo.bar, Python first checks whether bar exists in foo's __dict__ (or __slots__, if it has one). If it does, we return it, no harm done. If not, then we look on type(foo). However, when we access the field Foo.bar on the class Foo through an instance, something magical happens. When we write foo.bar, assuming there is no bar on foo's __dict__ (resp. __slots__), then we actually call Foo.bar.__get__(foo, Foo). That is, Python calls a magic method asking the object how it would like to be retrieved. This is how properties are implemented, and it's also how bound methods are implemented. Somewhere deep down (probably written in C), there's a __get__ function on the type function that binds the method when accessed through an instance. functools.partial, despite looking a lot like a function, is not an instance of the type function. It's just a random class that happens to implement __call__, and it doesn't implement __get__. Why doesn't it? Well, they probably just didn't think it was worth it, or it's possible nobody even considered it. Regardless, the "bound method" trick applies to the type called function, not to all callable objects. Another useful resource on magic methods, and __get__ in particular: https://rszalski.github.io/magicmethods/#descriptor
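As an illustration of the same point (a sketch, not from the original answer): functools.partialmethod does implement __get__, so it binds the instance the way a plain function does, while functools.partial does not:

from functools import partial, partialmethod

def greet(self, name):
    print(f"hello {name} from {self!r}")

class A:
    f = partial(greet, name="foo")        # plain attribute without __get__: nothing gets bound
    g = partialmethod(greet, name="foo")  # descriptor: accessing a.g binds the instance as self

a = A()
a.g()        # prints: hello foo from <__main__.A object at 0x...>
try:
    a.f()    # greet() is called without self and raises TypeError
except TypeError as exc:
    print(exc)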
9
7
74,294,527
2022-11-2
https://stackoverflow.com/questions/74294527/what-do-line2d-objects-returned-by-seaborn-lineplot-with-hue-represent
import seaborn as sns import matplotlib.pyplot as plt import numpy as np import pandas as pd # generate data rng = np.random.default_rng(12) x = np.linspace(0, np.pi, 50) y1, y2 = np.sin(x[:25]), np.cos(x[25:]) cat = rng.choice(["a", "b", "c"], 50) data = pd.DataFrame({"x": x , "y" : y1.tolist() + y2.tolist(), "cat": cat}) # generate figure fig, ax = plt.subplots(figsize=(8, 5), ncols=2) g0 = sns.lineplot(x="x", y="y", data=data, hue='cat', ls="--", ax=ax[0]) g1 = sns.lineplot(x="x", y="y", data=data, hue='cat', ls="--", ax=ax[1]) g1.get_lines()[0].set_color('black') print(ax[0].get_lines()) for line in ax[0].get_lines(): print(line.get_label()) plt.tight_layout() plt.show() This returns a list: <a list of 6 Line2D objects> of which the first three objects are the coloured dotted lines in the left subplot in the figure, confirmed by changing the color of one of these lines in the right subplot. But I am unable to understand what the last three Line2D objects are in the list ax[0].get_lines(). If I try to access the labels of the Line2D objects using [line.get_label() for line in ax[0].get_lines()] it gives ['_child0', '_child1', '_child2', 'b', 'a', 'c']. But the last three Line2D objects in the list don't behave like an usual Line2D object since ax[0].get_lines()[-1].set_lw(0.2) did not change anything perceivable in the figure. I expected ax[0].get_lines()[-1].remove() would remove green colored legend line in the left subplot, but it had no effect. So, what do the last three Line2D objects in the list ax[0].get_lines(), which do not have the string _child in their labels, represent? Generated with matplotlib (v3.5.1) and seaborn (v0.11.2).
These are dummy lines used to create the legend. Seaborn's legends can be quite complex, due to the many options. You might want to check out this github issue to get an idea about Seaborn trying to create more elaborate legends than what matplotlib currently allows. If you change the properties of these dummy lines after the legend has been created, you won't see an effect. However, creating the legend again afterwards will show the change. import seaborn as sns import matplotlib.pyplot as plt import numpy as np import pandas as pd # generate data rng = np.random.default_rng(12) x = np.linspace(0, np.pi, 50) y1, y2 = np.sin(x[:25]), np.cos(x[25:]) x = np.round(x / 2, 1) * 2 cat = rng.choice(["a", "b", "c"], 50) data = pd.DataFrame({"x": x, "y": y1.tolist() + y2.tolist(), "cat": cat}) # generate figure fig, ax = plt.subplots(figsize=(8, 5), ncols=2) g0 = sns.lineplot(x="x", y="y", data=data, hue='cat', ls="--", ax=ax[0]) g1 = sns.lineplot(x="x", y="y", data=data, hue='cat', ls="--", ax=ax[1]) g1.get_lines()[0].set_color('black') g1.get_lines()[3].set_color('purple') g1.get_lines()[3].set_linewidth(5) g1.legend() # create the legend again plt.tight_layout() plt.show() PS: If you just want to change the legend symbols (called "handles" in matplotlib), you can change them directly, e.g.: for h in g1.legend_.legendHandles: h.set_linestyle("--") The reason that, in this example, the legend doesn't automatically apply the linestyle, is that the ls= parameter gets sent directly to matplotlib for drawing the lines, but not to the code to create seaborn's legend. In a future seaborn version, this will probably be tackled. As seaborn allows many combinations of options, it is non-trivial to get this working satisfactory for all those combinations. See also open issue 2861.
5
4
74,304,457
2022-11-3
https://stackoverflow.com/questions/74304457/raise-an-error-if-type-hint-is-violated-ignored-in-python
After looking at this question I learned that the type hints are, by default, not enforced whilst executing Python code. One can detect some discrepancies between the type hints and actual argument types using a slightly convoluted process of running pyannotate to generate stubs whilst running Python code, and scanning for differences after applying these stubs to the code. However, it would be more convenient/faster to directly raise an exception if an incoming argument is not of the type included in the type hint. This can be achieved by manually including: if not isinstance(some_argument, the_type_hint_type): raise TypeError("Argument:{argument} is not of type:{the_type_hint_type}") However, that is quite labour intensive. Hence, I was curious, is it possible to make Python raise an error if a type-hint is violated, using an CLI argument or pip package or something like that?
The edit queue for the Answer by @Surya_1897 is full, hence I will include a more detailed description of the solution here. Typeguard does exactly what I was looking for. The following requirements apply: Install typeguard with: pip install typeguard Import typeguard into each script, and add the @typechecked decorator above each function. Example: Change: """Some file description.""" def add_two(x:int): """Adds two to an incoming int.""" return x+2 somevar:float=42.1 add_two(somevar) To: """Some file description.""" from typeguard import typechecked @typechecked def add_two(x:int): """Adds two to an incoming int.""" return x+2 somevar:float=42.1 add_two(somevar) The latter will then throw an error: TypeError: type of argument "x" must be int; got float instead
5
2
74,301,529
2022-11-3
https://stackoverflow.com/questions/74301529/how-to-get-the-indices-of-at-least-two-consecutive-values-that-are-all-greater-t
For example, let's consider the following numpy array: [1, 5, 0, 5, 4, 6, 1, -1, 5, 10] Also, let's suppose that the threshold is equal to 3. That is to say that we are looking for sequences of at least two consecutive values that are all above the threshold. The output would be the indices of those values, which in our case is: [[3, 4, 5], [8, 9]] If the output array was flattened that would work as well! [3, 4, 5, 8, 9] Output Explanation In our initial array we can see that for index = 1 we have the value 5, which is greater than the threshold, but is not part of a sequence (of at least two values) where every value is greater than the threshold. That's why this index would not make it to our output. On the other hand, for indices [3, 4, 5] we have a sequence of (at least two) neighboring values [5, 4, 6] where each and every of them are above the threshold and that's the reason that their indices are included in the final output! My Code so far I have approached the issue with something like this: (arr > 3).nonzero() The above command gathers the indices of all the items that are above the threshold. However, I cannot determine if they are consecutive or not. I have thought of trying a diff on the outcome of the above snippet and then may be locating ones (that is to say that indices are one after the other). Which would give us: np.diff((arr > 3).nonzero()) But I'd still be missing something here.
If you convolve a boolean array with a window full of 1 of size win_size ([1] * win_size), then you will obtain an array where there is the value win_size where the condition held for win_size items: import numpy as np def groups(arr, *, threshold, win_size, merge_contiguous=False, flat=False): conv = np.convolve((arr >= threshold).astype(int), [1] * win_size, mode="valid") indexes_start = np.where(conv == win_size)[0] indexes = [np.arange(index, index + win_size) for index in indexes_start] if flat or merge_contiguous: indexes = np.unique(indexes) if merge_contiguous: indexes = np.split(indexes, np.where(np.diff(indexes) != 1)[0] + 1) return indexes arr = np.array([1, 5, 0, 5, 4, 6, 1, -1, 5, 10]) threshold = 3 win_size = 2 print(groups(arr, threshold=threshold, win_size=win_size)) print(groups(arr, threshold=threshold, win_size=win_size, merge_contiguous=True)) print(groups(arr, threshold=threshold, win_size=win_size, flat=True)) [array([3, 4]), array([4, 5]), array([8, 9])] [array([3, 4, 5]), array([8, 9])] [3 4 5 8 9]
5
4
74,301,098
2022-11-3
https://stackoverflow.com/questions/74301098/python-tempfile-temporarydirectory-cleanup-crashes-with-permissionerror-and-no
Premise I'm trying to convert some PDF to images via pdf2image and poppler, to then run some computervision tasks on. The conversion itself works fine. However, the conversion creates some artifacts for each page in the pdf as it is being converted, which I would like to be deleted at the end of the function. To facilitate this, I am using tempfile.TemporaryDirectory(). The function looks as follow: with tempfile.TemporaryDirectory() as path: images_from_path: [Image] = convert_from_path( os.path.join(path_superfolder, "calibration_target.pdf"), size=(2480, 3508), output_folder=path, poppler_path=r'E:\poppler-22.04.0\Library\bin') if len(images_from_path) >= page: images_from_path[page - 1].save(os.path.join(path_superfolder, "result.jpg")) Problem The trouble is, that the program always crashes with the following errors, after transforming the PDF and writing the required image to a file. Traceback (most recent call last): File "C:\Python310\lib\shutil.py", line 617, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file, because it is being used by another process: 'C:\\Users\\tobia\\AppData\\Local\\Temp\\tmp24c4bmzv\\bd76d834-672e-49fc-ac30-7751b7b660d0-01.ppm' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Python310\lib\tempfile.py", line 843, in onerror _os.unlink(path) PermissionError: [WinError 32] The process cannot access the file, because it is being used by another process: 'C:\\Users\\tobia\\AppData\\Local\\Temp\\tmp24c4bmzv\\bd76d834-672e-49fc-ac30-7751b7b660d0-01.ppm' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Python310\lib\code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "E:\PyCharm 2022.2.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "E:\PyCharm 2022.2.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "D:\Dokumente\Uni\Informatik\BA_Thesis\tumexam-scheduling-codebase\generate_data.py", line 393, in <module> extract_calibration_page_as_image_from_pdf() File "D:\Dokumente\Uni\Informatik\BA_Thesis\tumexam-scheduling-codebase\generate_data.py", line 190, in extract_calibration_page_as_image_from_pdf tmp_dir.cleanup() File "C:\Python310\lib\tempfile.py", line 873, in cleanup self._rmtree(self.name, ignore_errors=self._ignore_cleanup_errors) File "C:\Python310\lib\tempfile.py", line 855, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Python310\lib\shutil.py", line 749, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Python310\lib\shutil.py", line 619, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Python310\lib\tempfile.py", line 846, in onerror cls._rmtree(path, ignore_errors=ignore_errors) File "C:\Python310\lib\tempfile.py", line 855, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Python310\lib\shutil.py", line 749, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Python310\lib\shutil.py", line 600, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Python310\lib\shutil.py", line 597, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] Directory name invalid: 
'C:\\Users\\tobia\\AppData\\Local\\Temp\\tmp24c4bmzv\\bd76d834-672e-49fc-ac30-7751b7b660d0-01.ppm' When stepping through the cleanup routine, everything seems fine, the path is correct and it starts deleting files, until at some point the internal path variable gets jumbled up and the routine crashes, because obviously a file is not a directory. To me it seems like a race condition is causing problems here. What I have already tried Rewriting the function to not use with and instead explicitly call the routine with tmp_dir.cleanup() Just creating the directory without populating it with the conversion artifacts. The cleanup works in this case. The documentation for tempfile mentions Permission errors occuring when files are open. The files are however only used in this function and if this is what is causing the error, I am unsure where the files are still opened or which function is causing this. My suspicion of course would be the conversion function.
While experimenting some more and writing this question, I found a working solution: with tempfile.TemporaryDirectory() as path: images_from_path: [Image] = convert_from_path( os.path.join(path_superfolder, f"calibration_target_{exam_type}.pdf"), size=(2480, 3508), output_folder=path, poppler_path=r'E:\poppler-22.04.0\Library\bin') if len(images_from_path) >= page: images_from_path[page - 1].save(os.path.join(path_superfolder, "result.jpg")) images_from_path = [] It seems that somehow the routine had trouble cleaning up because the converted images are actually the artifacts created by pdf2image and were still being held by my data structure. Resetting the data structure before implicitly initiating the cleanup fixed the issue. If there is a better way of tackling this issue, please do not hesitate to inform me.
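An alternative sketch of the same idea (an assumption on my part, not from the original post): explicitly close the PIL images before leaving the with block, since they may still hold handles to the temporary .ppm files that Windows then refuses to delete:

# inside the `with tempfile.TemporaryDirectory() as path:` block, after saving the wanted page
for image in images_from_path:
    image.close()  # releases the handle on the underlying temporary file so cleanup can remove it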
4
1
74,298,091
2022-11-3
https://stackoverflow.com/questions/74298091/pandas-sorting-by-datetime
I have a pandas dataframe filled with time-stamped data. It is out of order, and I am trying to sort by date, hours, and minutes. The pandas dataframe will organize by date, but not by hours and minutes. My dataframe is loaded in ('df'), and the column 'dttime' was converted from integer numbers into a datetime: df['dttime'] = pd.to_datetime(df['dttime'], format='%y%m%d%H%M%S') I re-sort it with: df.sort_values(by='dttime') but that does not seem to produce the right ordering of the hours, minutes, and seconds.
I tried with some dummy data and it doesn't look like an issue to me. Please check the below code. Note that sort_values returns a new, sorted DataFrame by default, so the result has to be assigned back (df = df.sort_values(...)), as done below. import pandas as pd data = ['221011141200', '221011031200', '221011191200', '221011131600'] df = pd.DataFrame(data, columns=['dttime']) df['dttime'] = pd.to_datetime(df['dttime'], format='%y%m%d%H%M%S') # Before sorting print(df) # After sorting df = df.sort_values(by='dttime') print(df) Output is as follows: dttime 0 2022-10-11 14:12:00 1 2022-10-11 03:12:00 2 2022-10-11 19:12:00 3 2022-10-11 13:16:00 dttime 1 2022-10-11 03:12:00 3 2022-10-11 13:16:00 0 2022-10-11 14:12:00 2 2022-10-11 19:12:00
6
10
74,296,722
2022-11-2
https://stackoverflow.com/questions/74296722/delete-repeated-vowels-in-string
Here is my code: def del_rep_vow(s): ''' >>> del_rep_vow('adaptation') 'adpttion' >>> del_rep_vow('repetitions') 'reptitons' >>> del_rep_vow('kingkong') 'kingkong' >>> del_rep_vow('aeiouaeiou') 'aeiou' ''' final_list = [] for i in s: if i not in final_list: final_list.append(i) return ''.join(final_list) if __name__ == "__main__": import doctest doctest.testmod(verbose=True) I don't know how to make the restriction that only repeated vowels have to be deleted once they are in the list. Any help will be very welcome. Output of 'adaptation' must be 'adpttion', for example.
You should maintain a Python set of vowel characters already seen. For each new encountered letter as you walk down the string, only append a vowel if it is not in the set. def del_rep_vow(s): vowels_seen = set() final_list = [] for i in s: if i in ['a', 'e', 'i', 'o', 'u']: if i not in vowels_seen: final_list.append(i) vowels_seen.add(i) else: final_list.append(i) return ''.join(final_list) print(del_rep_vow('adaptation')) # 'adpttion' print(del_rep_vow('repetitions')) # 'reptitons' print(del_rep_vow('kingkong')) # 'kingkong' print(del_rep_vow('aeiouaeiou')) # 'aeiou'
3
3
74,294,107
2022-11-2
https://stackoverflow.com/questions/74294107/python-pandas-making-a-contingency-table-with-multiple-variables
My dataframe has 4 columns (one dependent variable and 3 independent). Here's a sample: My desired output is a contingency table, as follows: I can only seem to get a contingency table using one independent variable, using the following code (my df is called 'table'): pd.crosstab(index=table['Dvar'],columns=table['Var1']) I can't seem to add any other variables to this... Is the only way to achieve this to make a separate contingency table for each var (1 to 3) and then merge/join them?
First of all, a contingency table is for showing the correlation between features. If you want to see the correlation between the independent and dependent features, use this code: pd.crosstab([table['Var1'],table['Var2'],table['Var3']], table['Dvar'], margins = False) But, as you mention, to get your desired output use a pandas.DataFrame.groupby statement: table.groupby('Dvar').sum()
3
3
74,292,510
2022-11-2
https://stackoverflow.com/questions/74292510/how-to-create-a-deployable-python-lamba-zip-using-poetry
I've been spending a few days trying to figure out how best to build a Python Lambda bundle when using Poetry. I found a few blogs that that outline the same technique but those didn't work in my situation. The solution provided in the blogs is to use pip install to install the needed dependencies into a specific directory and zip it up. poetry run pip install -t dist/lambda . cd dist/lambda zip -r ../lambda.zip . However, this doesn't work if you use path dependencies with Poetry. You get an error from pip stating pip._vendor.pkg_resources.RequirementParseError: Invalid URL: for any local dependency. I did run into the Poetry Bundle Plugin and it looked promising. Using it did work in that it installed the needed dependencies and the project itself into the chosen target directory. poetry self add poetry-plugin-bundle poetry bundle venv .venv-lambda cd .venv-lambda/lib/python*/site-packages/ zip -r ../../../../dist/lambda.zip . The problem with this approach is that it installs more than just the mainline dependencies, but also the dev and test dependencies. There is no option to specify which dependency group to include or exclude. There is an open issue with a PR that is waiting to be merged to resolve this. Once that is resolved, this is likely the ideal solution. Until then, I need something different/better.
Ultimately I found this documentation from AWS for how to create a lambda archive from a Python virtual environment. Using Poetry's install command, I was able to install just the main runtime dependencies into the Poetry projects created virtual environment, including any local path based dependencies. However, this doesn't install the project itself so the source code needs to be copied in before being archived. In my case, I use a dedicated source directory/module for my code. poetry install --only main --sync mkdir -p dist/lambda-package cp --recursive .venv/lib/python*/site-packages/* dist/lambda-package/ cp --recursive my_project_source_directory dist/lambda-package/ cd dist/lambda-package zip -r ../dist/lambda.zip . The above commands are what I use on my CI build. The local .venv directory is used because the following Poetry setting virtualenvs.in-project is set to true. The other thing necessary for this to work is to not use editable path based dependencies, or at least just do that locally during development. Marking them as editable will not install the dependency into the virtual environment, but will rather just create a link to the project source code. This will not get picked up when creating the zip file. This isn't perfect as there is likely more that gets bundled than necessary but it does remove any dev and test dependencies from the Poetry plugin solution. Also, because on my CI build server, I cache the installed dependencies in the virtual environment, this means at the end of my build, none of the dev or test dependencies are present to be cached and get installed on every run. I hope this helps someone else in a similar situation.
6
6
74,285,167
2022-11-2
https://stackoverflow.com/questions/74285167/how-to-make-output-from-web-scraping-python-selenium-in-div-class-to-output-text
This is my code: from attr import attr import requests from bs4 import BeautifulSoup import csv datas = [] key = 'sepatu' jenis = 'teplek' url = 'https://website.com/search/?term={}+{}'.format(key,jenis) headers = { 'user-agent' : 'Mozilla/5.0 (X11; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0' } req = requests.get(url, headers=headers) soup = BeautifulSoup (req.text, 'html.parser') sepatu = soup.find_all('div', 'element_1') for it in sepatu: harga = it.find('div','element').__str__ datas.append([harga]) hasil = ['Harga'] write = csv.writer(open('result/{}_{}.csv'.format(key,jenis), 'w', newline='')) write.writerow (hasil) for d in datas: write.writerow(d) This is the output from this code: Column A <bound method Tag.unicode of Rp 88.000 > <bound method Tag.unicode of Rp 200.000 > How do I turn that output into this output? Column A Rp 88.000 Rp 200.000 I tried changing harga = it.find('div','element').__str__ to harga = it.find('div','element').text but I got the error AttributeError: 'NoneType' object has no attribute 'text'. I am trying to learn web scraping with Python and Selenium, but I am stuck on converting the output into text; I want all of the output as text.
You can add .text to that line: harga = soup.find("div", {"class": "db gM ei b hE be f16-360-o ff vb uT ellipsis-1"}).text Then you will get output like this: Nama Sepatu Harga Sepatu A Rp.24.000
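For the AttributeError mentioned in the question, a hedged sketch (the tag and class names follow the question's code and are assumptions) that guards against elements .find() does not locate:

for it in sepatu:
    harga_el = it.find('div', 'element')
    # .find() returns None when nothing matches, so check before using .text / .get_text()
    harga = harga_el.get_text(strip=True) if harga_el else ''
    datas.append([harga])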
4
1
74,284,758
2022-11-2
https://stackoverflow.com/questions/74284758/how-to-sort-python-strings-both-alphabetically-by-prefix-and-numerically-by-suff
I need to sort a list of strings, of the form: ["ccc_3.23", "b_0.00", "b_-1.10", "aa_-2.37", "aa_3.05", "aa_-2.11", "ccc_9.8"] first by prefix, then by suffix, such that the sorted list is: ["aa_-2.37", "aa_-2.11", "aa_3.05", "b_-1.10", "b_0.00", "ccc_3.23", "ccc_9.8"] The prefixes only contain standard English letters, and can be of any length. The suffixes, on the other hand, are always signed floats, with potentially varying numbers of digits. -1.3232, 0.0, and 7.98 are all valid suffixes. The list needs to be sorted without altering any of its elements in the process. I've tried a number of different approaches from across the web, but none seem to cover every edge case here. Unfortunately, the list is formatted as-is, and I cannot do anything about it. Is there any way that I could go about doing this?
You need to use key in sort a = ["ccc_3.23", "b_0.00", "b_-1.10", "aa_-2.37", "aa_3.05", "aa_-2.11", "ccc_9.8"] a.sort(key=lambda x: (x.split("_")[0], float(x.split("_")[1]))) a # output : ['aa_-2.37', 'aa_-2.11', 'aa_3.05', 'b_-1.10', 'b_0.00', 'ccc_3.23', 'ccc_9.8']
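A small variant (a sketch) that splits each string only once per element instead of twice inside the lambda:

def sort_key(s):
    prefix, suffix = s.rsplit("_", 1)  # split once, from the right, on the last underscore
    return (prefix, float(suffix))

a.sort(key=sort_key)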
3
7
74,261,401
2022-10-31
https://stackoverflow.com/questions/74261401/how-to-get-routes-name-using-fastapi-starlette
How can I get the name of a route/endpoint using FastAPI/Starlette? I have access to the Request object and I need this information in one of my middlewares. For example, if I hit services/1, I should then be able to get the abc name. Is this possible in FastAPI? @app.get("/services/{service}", name="abc") async def list_services() -> dict: do something Update 1: Output of request.scope {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 8001), 'client': ('127.0.0.1', 56670), 'scheme': 'http', 'root_path': '', 'headers': [(b'user-agent', b'PostmanRuntime/7.29.2'), (b'accept', b'*/*'), (b'postman-token', b'f2da2d0f-e721-44c8-b14f-e19750ea8a68'), (b'host', b'localhost:8001'), (b'accept-encoding', b'gzip, deflate, br'), (b'connection', b'keep-alive')], 'method': 'GET', 'path': '/health', 'raw_path': b'/health', 'query_string': b'', 'app': <fastapi.applications.FastAPI object at 0x1036d5790>} Update 2: Providing middleware code where request.scope["route"] is breaking. from fastapi import FastAPI,Request app = FastAPI() @app.middleware("http") async def logging_middleware(request: Request, call_next): print(request.scope['route'].name) response = await call_next(request) return response @app.get('/', name='abc') def get_name(request: Request): return request.scope['route'].name
Option 1 You can get the name value inside an endpoint as follows: from fastapi import FastAPI,Request app = FastAPI() @app.get('/', name='abc') def get_name(request: Request): return request.scope['route'].name Option 2 Inside a middleware, make sure to get the route's name after calling call_next(request), otherwise you would be faced with KeyError: 'route', as the route key/object would not yet exist in the scope dictionary. Example: @app.middleware("http") async def some_middleware(request: Request, call_next): response = await call_next(request) print(request.scope['route'].name) return response Option 3 Instead of a middleware, you could create a custom APIRoute class, which would allow you to get the route's name before processing the request and getting the response (if that's a requirement for your app). You could add an endpoint that you would like to be handled by that APIRoute class using @<name_of_router_instance> instead of @app (e.g., @router.get('/', name='abc')). More than one endpoint could be added in the same way. Example: from fastapi import APIRouter, FastAPI, Request, Response from typing import Callable from fastapi.routing import APIRoute class CheckNameRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: print(request.scope['route'].name) response = await original_route_handler(request) return response return custom_route_handler app = FastAPI() router = APIRouter(route_class=CheckNameRoute) @router.get('/', name='abc') def get_name(request: Request): return request.scope['route'].name app.include_router(router) To get the route path of an endpoint instead, please see this answer.
5
6
74,209,110
2022-10-26
https://stackoverflow.com/questions/74209110/how-to-convert-date-to-timezone-aware-datetime-in-polars
Let's say I have df = pl.DataFrame({ "date": pl.Series(["2022-01-01", "2022-01-02"]).cast(pl.Date) }) How do I localize that to a specific timezone and make it a datetime? I tried: df.select(pl.col('date').cast(pl.Datetime(time_zone='America/New_York'))) but that gives me shape: (2, 1) ┌────────────────────────────────┐ │ date │ │ --- │ │ datetime[μs, America/New_York] │ ╞════════════════════════════════╡ │ 2021-12-31 19:00:00 EST │ │ 2022-01-01 19:00:00 EST │ └────────────────────────────────┘ so it looks like it's starting from the presumption that the naïve datetimes are UTC and then applying the conversion. I set os.environ['TZ']='America/New_York' but I got the same result. I looked through the polars config options in the API guide to see if there's something else to set but couldn't find anything about default timezone.
As of polars 0.16.3, you can do: df.select( pl.col('date').cast(pl.Datetime).dt.replace_time_zone("America/New_York") )
3
9
74,231,254
2022-10-28
https://stackoverflow.com/questions/74231254/how-to-filter-empty-strings-from-a-list-column-of-python-polars-dataframe
I have a Python polars dataframe: df_pol = pl.DataFrame({'test_names':[['Mallesham','','Bhavik','Jagarini','Jose','Fernando'], ['','','','ABC','','XYZ']]}) I would like to get a count of elements from each list in the test_names field, not considering the empty values. df_pol.with_columns(pl.col('test_names').list.len().alias('tot_names')) Here it is including the empty strings in the count, which is why we see 6 names in list 2 when it actually has only two names. Required output:
You can use list.eval to run any polars expression on the list's elements. In a list.eval expression, you can use pl.element() to refer to the list's elements and then apply an expression. Next we simply use a filter expression to prune the values we don't need. df = pl.DataFrame({ "test_names":[ ["Mallesham","","Bhavik","Jagarini","Jose","Fernando"], ["","","","ABC","","XYZ"] ] }) df.with_columns( pl.col("test_names").list.eval(pl.element().filter(pl.element() != "")) ) shape: (2, 1) ┌─────────────────────────────────────┐ │ test_names │ │ --- │ │ list[str] │ ╞═════════════════════════════════════╡ │ ["Mallesham", "Bhavik", ... "Fer... │ │ ["ABC", "XYZ"] │ └─────────────────────────────────────┘
4
2
74,280,212
2022-11-1
https://stackoverflow.com/questions/74280212/polars-scan-s3-multi-part-parquet-files
I have a multipart partitioned parquet on s3. Each partition contains multiple parquet files. The below code narrows in on a single partition which may contain somewhere around 30 parquet files. When I use scan_parquet on a s3 address that includes *.parquet wildcard, it only looks at the first file in the partition. I verified this with the count of customers. It has the count from just the first file in the partition. Is there a way that it can scan across files? import polars as pl s3_loc = "s3://some_bucket/some_parquet/some_partion=123/*.parquet" df = pl.scan_parquet(s3_loc) cus_count = df.select(pl.count('customers')).collect() If I leave off the *.parquet from the s3 address then I get the following error. exceptions.ArrowErrorException: ExternalFormat("File out of specification: A parquet file must containt a header and footer with at least 12 bytes")
New Answer polars can natively load files from AWS, Azure, GCP, or plain old http and no longer uses fsspec (very much, if at all). Instead, it uses the object_store under the hood. The syntax to use it is. pl.scan_parquet( "s3://some_bucket/some_parquet/some_partion=123/*.parquet", storage_options= dict_of_credentials) The dict_of_credentials might be a bit different than what you normally feed to initiate an s3fs.S3FileSystem(). Additionally, you can exclude storage_options if you've got your credentials as env variables but the env variables that work for s3fs might not be exactly what object_store is looking for. For instance, this shows what env variables it will look for. Old Answer using pyarrow.dataset It looks like from the user guide on multiple files that to do so requires a loop creating many lazy dfs that you then combine together. Another approach is to use the scan_ds function which takes a pyarrow dataset object. import polars as pl import s3fs import pyarrow.dataset as ds fs = s3fs.S3FileSystem() # you can also make a file system with anything fsspec supports # S3FileSystem is just a wrapper for fsspec s3_loc = "s3://some_bucket/some_parquet/some_partion=123" myds = ds.dataset(s3_loc, filesystem=fs) lazy_df = pl.scan_pyarrow_dataset(myds) cus_count = lazy_df.select(pl.count('customers')).collect()
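As a rough sketch of the new approach — the exact credential key names below are an assumption on my part (they follow common object_store conventions), so check the documentation for your polars version; you can also drop storage_options entirely if the right environment variables are set:

import polars as pl

# Hypothetical credentials; replace with your own values
storage_options = {
    "aws_access_key_id": "YOUR_KEY_ID",
    "aws_secret_access_key": "YOUR_SECRET",
    "aws_region": "us-east-1",
}

df = pl.scan_parquet(
    "s3://some_bucket/some_parquet/some_partion=123/*.parquet",
    storage_options=storage_options,
)
cus_count = df.select(pl.count("customers")).collect()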
6
6
74,206,034
2022-10-26
https://stackoverflow.com/questions/74206034/how-do-uvicorn-workers-work-and-how-many-do-i-need-for-a-slim-machine
The application I deploy is FastAPI with Uvicorn under K8s. While figuring out how to Dockerize the application, I decided I want to run Uvicorn without Gunicorn and add a system that scales up/down based on the request load the application is getting. I did a lot of load testing and discovered that with the default of 1 Uvicorn worker I get 3.5 RPS, while with 8 workers I can easily get 22 RPS (I didn't check for more since those are great results for me). Now, what I was expecting regarding resources is that I would have to provide a CPU limit of 8 (I assume every worker runs in its own process and thread), but I only saw an increase in memory usage and barely any in CPU. Maybe that's because the app doesn't use much CPU, but is it possible for it to use more than 1 CPU? So far it hasn't used more than one CPU. How do Uvicorn workers work? How should I calculate how many workers I need for the app? I didn't find any useful information. Again, my goal is to keep a slim machine of 1 CPU, with an autoscaling system.
When using uvicorn and applying the --workers argument greater than 1, then uvicorn will spawn subprocesses internally using multiprocessing. You have to remember that uvicorn is asynchronous and that HTTP servers generally are bottle necked by network latency instead of computation. So, it could be that your workloads aren't particularly CPU bound and are IO bound. Without knowing more about the type of work being done by the server on each request, the best way to determine how many workers you will need will be through empirical experimentation. In other words, just test it until you hit a limit. Though the FastAPI documentation does include some guidance for your use case: If you have a cluster of machines with Kubernetes, Docker Swarm Mode, Nomad, or another similar complex system to manage distributed containers on multiple machines, then you will probably want to handle replication at the cluster level instead of using a process manager (like Gunicorn with workers) in each container. One of those distributed container management systems like Kubernetes normally has some integrated way of handling replication of containers while still supporting load balancing for the incoming requests. All at the cluster level. In those cases, you would probably want to build a Docker image from scratch as explained above, installing your dependencies, and running a single Uvicorn process instead of running something like Gunicorn with Uvicorn workers. - FastAPI Docs Emphasis mine.
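As a small illustration (not part of the original answer), Uvicorn can also be started programmatically with a configurable worker count, which makes it easy to experiment while load testing; the module path "main:app" and the environment variable name are placeholders:

# run.py - start the app with a tunable number of worker processes
import os
import uvicorn

if __name__ == "__main__":
    workers = int(os.getenv("WEB_WORKERS", "1"))  # tune empirically under load
    # workers > 1 requires passing the app as an import string
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=workers)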
9
16
74,267,784
2022-10-31
https://stackoverflow.com/questions/74267784/i-cant-authorize-gmail-api-application-in-google-colaboratory
I'm running the quickstart code from https://developers.google.com/people/quickstart/python in a colab notebook. # [START people_quickstart] from __future__ import print_function import os.path from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError # If modifying these scopes, delete the file token.json. SCOPES = ['https://www.googleapis.com/auth/contacts.readonly'] def main(): """Shows basic usage of the People API. Prints the name of the first 10 connections. """ creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: service = build('people', 'v1', credentials=creds) # Call the People API print('List 10 connection names') results = service.people().connections().list( resourceName='people/me', pageSize=10, personFields='names,emailAddresses').execute() connections = results.get('connections', []) for person in connections: names = person.get('names', []) if names: name = names[0].get('displayName') print(name) except HttpError as err: print(err) if __name__ == '__main__': main() # [END people_quickstart] but it fails the authentication at this stage: http://localhost:52591/?state=K8nzFjxOrWJkPEqjeG1AZiGpsT5DSx&code=4/0ARtbsJoAH2rD9UYgHOKJ__UdJcq87d2vuFjEAqcI3aKJpj1rLJ-93TXR0_v-LnBR4Fytsg&scope=https://www.googleapis.com/auth/gmail.readonly Why is it redirected to localhost? Is there a simple way to send e-mail from Google Colab, with or without using Gmail? I'm using Google Colab in the Opera browser. Can anyone help me send a simple e-mail from Google Colab without lowering the Gmail security level? T.T
Today I came across the same problem, and the way I found to fix it was to run the code in a local Jupyter Notebook, save the token that was generated there, and upload it to Colab. I ran the code below, and a JSON file named 'token' was generated in the folder where my notebook is located: import os.path import google.auth.exceptions from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.errors import HttpError SCOPES = ["https://www.googleapis.com/auth/gmail.send"] creds = None if os.path.exists('token.json'): try: creds = Credentials.from_authorized_user_file('token.json', SCOPES) creds.refresh(Request()) except google.auth.exceptions.RefreshError as error: # if refresh token fails, reset creds to none. creds = None print(f'An error occurred: {error}') # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES) InstalledAppFlow.from_client_secrets_file('credentials_web.json', SCOPES) creds = flow.run_local_server(port=0) with open('token.json', 'w') as token: token.write(creds.to_json()) When this is done, just run the same code in Colab and make sure you upload the token into the environment, and that's it. Not simple, but it works
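To get the locally generated token.json into the Colab runtime, one simple option (a sketch; the file name is taken from the snippet above) is Colab's built-in upload helper:

# Run this cell in Colab and pick the token.json you generated locally
from google.colab import files

uploaded = files.upload()   # opens a file picker in the browser
print(list(uploaded))       # should list 'token.json'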
4
2
74,275,058
2022-11-1
https://stackoverflow.com/questions/74275058/importerror-missing-optional-dependency-openpyxl-use-pip-or-conda-to-install
I am trying to run the following pandas code to create a df by reading an excel. However I receive the error below. (I pip-installed the openpyxl but I get the same error.) import pandas as pd import numpy as np df = pd.read_excel("test.xlsx") return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked ModuleNotFoundError: No module named 'openpyxl' ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.
Jupyter (with Anaconda) uses a specific Python environment, independent of the local Python installation on your computer. First make sure you are installing the packages into the correct interpreter. If you want to install it into Anaconda (for the Jupyter Notebook), try: conda activate followed by pip install openpyxl Otherwise, make sure you don't have different versions of Python 3, or install the package for the specific version you are using. For example, if I'm running the code with Python 3.9 I'll run the command below: python3.9 -m pip install openpyxl
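A quick way to confirm which interpreter your notebook or script is actually running, and to install into exactly that interpreter, is the generic sketch below (not part of the original answer):

import sys
print(sys.executable)   # path of the interpreter executing this code
print(sys.version)      # its exact Python version

# Install openpyxl into this same interpreter, bypassing PATH ambiguity
import subprocess
subprocess.run([sys.executable, "-m", "pip", "install", "openpyxl"], check=True)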
19
9
74,253,820
2022-10-30
https://stackoverflow.com/questions/74253820/cannot-catch-requests-exceptions-connectionerror-with-try-except
It feels like I am slowly losing my sanity. I am unable to catch a connection error in a REST-API request. I read at least 20 similar questions on stackoverflow, tried every possible except statement I could think of and simplified the code as much as I could to rule out certain other libraries. I am using Python 3.7 and requests 2.25.1. It is a very basic call to an API on my own server, which sometimes fails, but it only fails once in a while: try: response = requests.get(url, headers=api_headers, auth=HTTPBasicAuth(username, password)) except requests.exceptions.ConnectionError: print("Connection error!") I am sorry I cannot supply a full working example, as I am not connecting to an publicly accessible API, so I had to remove url, username and password. Even though I try to catch the connection error, the script fails with following traceback: Traceback (most recent call last): File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 706, in urlopen chunked=chunked, File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 382, in _make_request self._validate_conn(conn) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 1010, in _validate_conn conn.connect() File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 421, in connect tls_in_tls=tls_in_tls, File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\ssl_.py", line 429, in ssl_wrap_socket sock, context, tls_in_tls, server_hostname=server_hostname File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\ssl_.py", line 472, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\ssl.py", line 412, in wrap_socket session=session File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\ssl.py", line 850, in _create self.do_handshake() File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\ssl.py", line 1108, in do_handshake self._sslobj.do_handshake() TimeoutError: [WinError 10060] Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\adapters.py", line 449, in send timeout=timeout File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 756, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\retry.py", line 532, in increment raise six.reraise(type(error), error, _stacktrace) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\packages\six.py", line 734, in reraise raise value.with_traceback(tb) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 706, in urlopen chunked=chunked, File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 382, in _make_request 
self._validate_conn(conn) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 1010, in _validate_conn conn.connect() File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 421, in connect tls_in_tls=tls_in_tls, File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\ssl_.py", line 429, in ssl_wrap_socket sock, context, tls_in_tls, server_hostname=server_hostname File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\ssl_.py", line 472, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\ssl.py", line 412, in wrap_socket session=session File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\ssl.py", line 850, in _create self.do_handshake() File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\ssl.py", line 1108, in do_handshake self._sslobj.do_handshake() urllib3.exceptions.ProtocolError: ('Connection aborted.', TimeoutError(10060, 'Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat', None, 10060, None)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\adapters.py", line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(10060, 'Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat', None, 10060, None)) I don't understand how it is possible for the script to fail with requests.exceptions.ConnectionError if I am catching that very error? If I understand that traceback correctly, the error is not thrown in my code, and therefore I am not able to catch it? All I see is python libraries like ssl.py and urllib and request, but not a line from my code. So how do I catch that? Any help is highly appreciated! EDIT (because this is not possible in a comment). @Thomas made a helpful comment to connect to httpstat.us:81 to debug. So I tried replacing my order_response = requests.get() call with response = requests.get("http://httpstat.us:81"). This is the exact block in my code: try: order_response = requests.get(order_access_url, headers=api_headers, auth=HTTPBasicAuth(username, password)) if order_response.status_code == 200: order_content = json.loads(order_response.text) else: order_content = "" except requests.exceptions.ConnectionError: print("Connection error!") If I am trying to connect to http://httpstat.us:81 it actually catches the error. 
If I intentionally not catch it, the error looks like it: Traceback (most recent call last): File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 170, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\connection.py", line 96, in create_connection raise err File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\connection.py", line 86, in create_connection sock.connect(sa) TimeoutError: [WinError 10060] Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 706, in urlopen chunked=chunked, File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 394, in _make_request conn.request(method, url, **httplib_request_kw) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 234, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1229, in request self._send_request(method, url, body, headers, encode_chunked) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1275, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1224, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1016, in _send_output self.send(msg) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 956, in send self.connect() File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 200, in connect conn = self._new_conn() File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connection.py", line 182, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x00000223F9B42860>: Failed to establish a new connection: [WinError 10060] Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\adapters.py", line 449, in send timeout=timeout File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\connectionpool.py", line 756, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: 
HTTPConnectionPool(host='httpstat.us', port=81): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000223F9B42860>: Failed to establish a new connection: [WinError 10060] Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Daten\cloud.bss-archery.com\BSS\_Twain\modules\order_extracts_api.py", line 50, in create_order_analysis response = requests.get("http://httpstat.us:81") File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\api.py", line 76, in get return request('get', url, params=params, **kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "C:\Users\Tilman\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='httpstat.us', port=81): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000223F9B42860>: Failed to establish a new connection: [WinError 10060] Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat')) So I am still very confused because the last entry in the traceback is in fact the same, requests.exceptions.ConnectionError but it is not caught in my real world application. It is, however, raised by a different line in \lib\site-packages\requests\adapters.py
Okay, I could figure it out myself. Kind of. A huge problem was that the traceback doesn't point to the line of my code where the exception is raised. I still don't know why that is and if this should be considered a bug in requests or not. But in any case: requests raises a ConnectionError in adapters.py but the origin is a protocol or socket error. This is line 497 in adapters.py: except (ProtocolError, socket.error) as err: raise ConnectionError(err, request=request) The TimeoutError: [WinError 10060] in the traceback actually points to a socket error. From https://hstechdocs.helpsystems.com/manuals/globalscape/archive/cuteftp8/Socket_errors_10060_10061_10064_10065.htm: A socket error in the 10060 range is a Winsock error. It is generally caused by either outgoing connection problems or connection problems on the host end. That is why I wasn't able to reproduce the error with httpstat.us. The solution was to catch it as an OSError: try: response = requests.get(url, headers=api_headers, auth=HTTPBasicAuth(username, password)) except OSError as e: print(e) It's a bit frustrating to be honest, as I still don't know why ProtocolError or socket.error in requests that raises a ConnectionError needs to be caught with "OSError" but at this point, I am just glad I could find ANY solution.
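For reference, requests' own exceptions ultimately derive from IOError/OSError (RequestException subclasses IOError), which is presumably why the broad except clause works here. Below is a small retry sketch along the lines of the accepted fix — the retry count and delay are arbitrary values chosen for illustration:

import time
import requests
from requests.auth import HTTPBasicAuth

def get_with_retry(url, api_headers, username, password, attempts=3, delay=5):
    for attempt in range(1, attempts + 1):
        try:
            return requests.get(url, headers=api_headers,
                                auth=HTTPBasicAuth(username, password), timeout=30)
        except OSError as e:  # covers requests.exceptions.ConnectionError and raw socket errors
            print(f"Attempt {attempt} failed: {e}")
            if attempt == attempts:
                raise
            time.sleep(delay)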
6
5
74,229,178
2022-10-27
https://stackoverflow.com/questions/74229178/stable-baselines3-runtimeerror-mat1-and-mat2-must-have-the-same-dtype
I am trying to implement SAC with a custom environment in Stable Baselines3 and I keep getting the error in the title. The error occurs with any off policy algorithm not just SAC. Traceback: File "<MY PROJECT PATH>\src\main.py", line 70, in <module> main() File "<MY PROJECT PATH>\src\main.py", line 66, in main model.learn(total_timesteps=timesteps, reset_num_timesteps=False, tb_log_name=f"sac_{num_cars}_cars") File "<MY PROJECT PATH>\venv\lib\site-packages\stable_baselines3\sac\sac.py", line 309, in learn return super().learn( File "<MY PROJECT PATH>\venv\lib\site-packages\stable_baselines3\common\off_policy_algorithm.py", line 375, in learn self.train(batch_size=self.batch_size, gradient_steps=gradient_steps) File "<MY PROJECT PATH>\venv\lib\site-packages\stable_baselines3\sac\sac.py", line 256, in train current_q_values = self.critic(replay_data.observations, replay_data.actions) File "<MY PROJECT PATH>\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "<MY PROJECT PATH>\venv\lib\site-packages\stable_baselines3\common\policies.py", line 885, in forward return tuple(q_net(qvalue_input) for q_net in self.q_networks) File "<MY PROJECT PATH>\venv\lib\site-packages\stable_baselines3\common\policies.py", line 885, in <genexpr> return tuple(q_net(qvalue_input) for q_net in self.q_networks) File "<MY PROJECT PATH>\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "<MY PROJECT PATH>\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward input = module(input) File "<MY PROJECT PATH>\venv\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "<MY PROJECT PATH>\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 must have the same dtype Action and observation spaces: self.action_space = Box(low=-1., high=1., shape=(2,), dtype=np.float) self.observation_space = Box( np.array( [-np.inf] * (9 * 40) + [-np.inf] * 3 + [-np.inf] * 3 + [-np.inf] * 3 + [0.] + [0.] + [0.] + [-1.] + [0.] * 4 + [0.] * 4 + [0.] * 4, dtype=np.float ), np.array( [np.inf] * (9 * 40) + [np.inf] * 3 + [np.inf] * 3 + [np.inf] * 3 + [np.inf] + [1.] + [1.] + [1.] + [1.] * 4 + [np.inf] * 4 + [np.inf] * 4, dtype=np.float ), dtype=np.float ) Observations are returned in the step and reset methods as a numpy array of floats. Is there something I'm missing which is causing this error? If I use one of the environments that come with gym such as pendulum it works fine which is why I think I have a problem with my custom environment. Thanks in advance for any help and please let me know if more information is required.
Change the inputs to float32; by default the loader sets the type to float64. inputs = inputs.to(torch.float32)
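In a custom gym environment the same fix usually means declaring the spaces and returning the observations as float32. A rough sketch (the shape value below is a placeholder, not taken from the question):

import numpy as np
from gym import spaces

OBS_DIM = 8  # placeholder; use your real flattened observation length

action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(OBS_DIM,), dtype=np.float32)

def build_obs(raw_values):
    # Ensure every observation returned by reset()/step() matches the space dtype
    return np.asarray(raw_values, dtype=np.float32)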
9
21
74,262,112
2022-10-31
https://stackoverflow.com/questions/74262112/dataclasses-how-to-ignore-default-values-using-asdict
I would like to ignore the default values after calling asdict() @dataclass class A: a: str b: bool = True so if I call a = A("1") result = asdict(a, ignore_default=True) assert {"a": "1"} == result # the "b": True should be deleted
The dataclasses module doesn't appear to have support for detecting default values in asdict(), however the dataclass-wizard library does -- via skip_defaults argument. Example: from dataclasses import dataclass from dataclass_wizard import asdict @dataclass class A: a: str b: bool = True a = A("1") result = asdict(a, skip_defaults=True) assert {"a": "1"} == result # the "b": True should be deleted Further, results show it is close to 2x faster than an approach with dataclasses.adict(). I've added benchmark code I used for testing below. from dataclasses import dataclass, asdict as asdict_orig, MISSING from timeit import timeit from dataclass_wizard import asdict @dataclass class A: a: str b: bool = True def asdict_factory(cls): def factory(obj: list[tuple]) -> dict: d = {} for k, v in obj: field_value = cls.__dataclass_fields__[k].default if field_value is MISSING or field_value != v: d[k] = v return d return factory a = A("1") A_fact = asdict_factory(A) print('dataclass_wizard.asdict(): ', timeit('asdict(a, skip_defaults=True)', globals=globals())) print('dataclasses.asdict(): ', timeit('asdict_orig(a, dict_factory=A_fact)', globals=globals())) result1 = asdict(a, skip_defaults=True) result2 = asdict_orig(a, dict_factory=A_fact) assert {"a": "1"} == result1 == result2 a2 = A("1", True) a3 = A("1", False) assert asdict(a2, skip_defaults=True) == asdict_orig(a2, dict_factory=A_fact) assert asdict(a3, skip_defaults=True) == asdict_orig(a3, dict_factory=A_fact) Disclaimer: I am the creator and maintainer of this library.
8
3
74,226,436
2022-10-27
https://stackoverflow.com/questions/74226436/hdf5-error-when-opening-nc-files-in-python-with-xarray
I'm attempting to open MERRA-2 files using xarray, as my title suggests. The specific error I am encountering occurs when I attempt to view the values in a certain variable using a print statement. The error is as follows: HDF5-DIAG: Error detected in HDF5 (1.12.2) thread 5: #000: H5A.c line 528 in H5Aopen_by_name(): can't open attribute major: Attribute minor: Can't open object #001: H5VLcallback.c line 1091 in H5VL_attr_open(): attribute open failed major: Virtual Object Layer minor: Can't open object #002: H5VLcallback.c line 1058 in H5VL__attr_open(): attribute open failed major: Virtual Object Layer minor: Can't open object #003: H5VLnative_attr.c line 130 in H5VL__native_attr_open(): can't open attribute major: Attribute minor: Can't open object #004: H5Aint.c line 545 in H5A__open_by_name(): unable to load attribute info from object header major: Attribute minor: Unable to initialize object #005: H5Oattribute.c line 476 in H5O__attr_open_by_name(): can't open attribute major: Attribute minor: Can't open object #006: H5Adense.c line 394 in H5A__dense_open(): can't locate attribute in name index major: Attribute minor: Object not found I believe that this error (warning?) is thrown for every file I attempt to open. If I wait long enough, the command I ran does actually go through, albeit much longer than it should take. So, my code is below. import xarray as xr data = xr.open_mfdataset('/path/to/my/data/*.nc') print(data.OMEGA.values) This should print a matrix containing several submatrices (my dimensions are time, lat, lon, level), printing the value at every coordinate. Again, it does eventually do this, but not before giving me the above warning for every single file in my directory. I've looked at plenty of stackoverflow, HDFforum, and github posts, and none of their solutions have worked/been applicable to my problem.
This error occurs due to conflicting non-python (e.g. fortran/C/C++) dependencies. This commonly happens when you install packages using conda with conflicting channels. This happens a lot when using Anaconda. Anaconda is a nice place to start, because it gives you a pre-built bundle (or "distribution") of data science packages. However, these all come from the defaults channel, and if you later install a package from a different channel into the base environment, you can end up with nasty conflicts like this. I'd recommend uninstalling anaconda (by deleting your anaconda directory), and installing one of the following: miniconda - similar to anaconda, but with no packages installed by default. miniforge - a variant of miniconda which prioritizes the conda-forge channel mambaforge - mamba is a compiled variant of conda that runs in parallel. It's a little less stable than conda and the error messages tend to be less helpful, so if you run into trouble, you can always just run the same command using conda to see what's happening - they're completely interchangeable. But mamba is much, much faster. My top recommendation would be mambaforge - it's really fast and by default helps you avoid combining different channels by setting conda-forge as the priority channel. When using any of these, don't install packages into your base environment, unless they are cross-environment utilities which themselves can activate your various kernel environments, like jupyterlab or an IDE like VSCode or Spyder. So, after deleting your environment, I'd recommend installing one of these, esp. mambaforge, and then re-installing into a new environment, e.g. with mamba create -n myenv python=3.10 xarray dask netCDF4 bottleneck matplotlib scipy pandas [...].
6
6
74,248,955
2022-10-29
https://stackoverflow.com/questions/74248955/how-to-display-the-coordinates-of-the-points-clicked-on-the-image-in-google-cola
I need to locate the mouse click location on an image in a google colab notebook. I tried the following script but nothing happened. The following code should work in Jupyter notebooks but it doesn't work on google colab: import matplotlib matplotlib.use('TKAgg') import matplotlib.pyplot as plt import matplotlib.image as mpimg f1 = 'sample.tif' fig = plt.figure(figsize=(20,30)) img = mpimg.imread(f1) def onclick(event): ix, iy = event.xdata, event.ydata print(ix, iy) cid = fig.canvas.mpl_connect('button_press_event', onclick) imgplot = plt.imshow(img) plt.show()
You need to use an interactive IPython backend, e.g. ipympl: Installation in Colab: !pip install ipympl from google.colab import output output.enable_custom_widget_manager() setup matplotlib to use it: %matplotlib ipympl test it: import matplotlib import matplotlib.pyplot as plt fig, ax = plt.subplots() def onclick(event): ix, iy = event.xdata, event.ydata print(ix, iy) cid = fig.canvas.mpl_connect('button_press_event', onclick) Output:
3
4
74,244,578
2022-10-29
https://stackoverflow.com/questions/74244578/how-can-i-reshape-a-2d-array-into-1d-in-python
Let me edit my question again. I know how flatten works, but I am asking whether it is possible to remove the inside braces and keep just the two outside braces, like in MATLAB, while maintaining the same shape of (3,4). Here it is arrays inside an array, and I want to have just one array so I can plot it easily and also get the same results as in MATLAB. For example, I have the following matrix (which is arrays inside an array): s=np.arange(12).reshape(3,4) print(s) [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] Is it possible to reshape or flatten() it and get results like this: [ 0 1 2 3 4 5 6 7 8 9 10 11]
First answer If I understood correctly your question (and 4 other answers say I didn't), your problem is not how to flatten() or reshape(-1) an array, but how to ensure that even after reshaping, it still display with 4 elements per line. I don't think you can, strictly speaking. Arrays are just a bunch of elements. They don't contain indication about how we want to see them. That's a printing problem, you are supposed to solve when printing. You can see [here][1] that people who want to do that... start with reshaping array in 2D. That being said, without creating your own printing function, you can control how numpy display arrays, using np.set_printoptions. Still, it is tricky so, because this function allows you only to specify how many characters, not elements, are printed per line. So you need to know how many chars each element will need, to force linebreaks. In your example: np.set_printoptions(formatter={"all":lambda x:"{:>6}".format(x)}, linewidth=7+(6+2)*4) The formatter ensure that each number use 6 chars. The linewidth, taking into account "array([" part, and the closing "])" (9 chars) plus the 2 ", " between each elements, knowing we want 4 elements, must be 9+6×4+2×3: 9 chars for "array([...])", 6×4 for each 4 numbers, 2×3 for each 3 ", " separator. Or 7+(6+2)×4. You can use it only for one printing with np.printoptions(formatter={"all":lambda x:"{:>6}".format(x)}, linewidth=7+(6+2)*4): print(s.reshape(-1)) Edit after some times : subclass Another method that came to my mind, would be to subclass ndarray, to make it behave as you would want import numpy as np class MyArr(np.ndarray): # To create a new array, with args ls: number of element to print per line, and arr, normal array to take data from def __new__(cls, ls, arr): n=np.ndarray.__new__(MyArr, (len(arr,))) n.ls=ls n[:]=arr[:] return n def __init__(self, *args): pass # So that this .ls is viral: when ever the array is created from an operation from an array that has this .ls, the .ls is copyied in the new array def __array_finalize__(self, obj): if not hasattr(self, 'ls') and type(obj)==MyArr and hasattr(obj, 'ls'): self.ls=obj.ls # Function to print an array with .ls elements per line def __repr__(self): # For other than 1D array, just use standard representation if len(self.shape)!=1: return super().__repr__() mxsize=max(len(str(s)) for s in self) s='[' for i in range(len(self)): if i%self.ls==0 and i>0: s+='\n ' s+=f'{{:{mxsize}}}'.format(self[i]) if i+1<len(self): s+=', ' s+=']' return s Now you can use this MyArr to build your own 1D array MyArr(4, range(12)) shows [ 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0] And you can use it anywhere a 1d ndarray is legal. 
And most of the time, the .ls attribute will follow (I say "most of the time", because I cannot guarantee that some functions won't build a new ndarray, and fill it with the data from this one) a=MyArr(4, range(12)) a*2 #[ 0.0, 2.0, 4.0, 6.0, # 8.0, 10.0, 12.0, 14.0, # 16.0, 18.0, 20.0, 22.0] a*a #[ 0.0, 1.0, 4.0, 9.0, # 16.0, 25.0, 36.0, 49.0, # 64.0, 81.0, 100.0, 121.0] a[8::-1] #[8.0, 7.0, 6.0, 5.0, # 4.0, 3.0, 2.0, 1.0, # 0.0] # It even resists reshaping b=a.reshape((3,4)) b #MyArr([[ 0., 1., 2., 3.], # [ 4., 5., 6., 7.], # [ 8., 9., 10., 11.]]) b.reshape((12,)) #[ 0.0, 1.0, 2.0, 3.0, # 4.0, 5.0, 6.0, 7.0, # 8.0, 9.0, 10.0, 11.0] # Or fancy indexing a[np.array([1,2,5,5,5])] #[1.0, 2.0, 5.0, 5.0, # 5.0] # Or matrix operations M=np.eye(12,k=1)+2*np.identity(12) # Just a matrix M@a #[ 1.0, 4.0, 7.0, 10.0, # 13.0, 16.0, 19.0, 22.0, # 25.0, 28.0, 31.0, 22.0] np.diag(M*a) #[ 0.0, 2.0, 4.0, 6.0, # 8.0, 10.0, 12.0, 14.0, # 16.0, 18.0, 20.0, 22.0] # But of course, sometimes you lose the MyArr class import pandas as pd pd.DataFrame(a, columns=['v']).v.values #array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.]) [1]: https://stackoverflow.com/questions/25991666/how-to-efficiently-output-n-items-per-line-from-numpy-array
6
0
74,273,757
2022-11-1
https://stackoverflow.com/questions/74273757/pipenv-packages-do-not-match-the-hashes-from-the-requirements-file
Pipfile I recently installed O365 and shortuuid packages using following command, which got executed with no problems on Mac M1. pipenv install --keep-outdated o365 shortuuid [[source]] name = "pypi" url = "https://pypi.org/simple" verify_ssl = true [dev-packages] black = "*" [packages] django = "~=3.2" djangorestframework = "~=3.12" django-extensions = "~=3.1" django-cors-headers = "~=3.7" drf-yasg = "~=1.13" drf-writable-nested = "~=0.6" email-validator = "~=1.0" isodate = "~=0.6.0" coverage = "*" flake8 = "*" pep8-naming = "*" pyparsing = "*" pydot = "*" django-money = "*" django-storages = "~=1.7" boto3 = "~=1.9" requests = "~=2.22" sentry-sdk = "~=1.4.3" packaging = "~=21.3" django-mathfilters = "~=0.4.0" pdfkit = "~=0.6.1" django-phone-field = "~=1.8.0" nltk = "~=3.4.5" textract = "~=1.6.3" django-restql = "~=0.8.2" openpyxl = "~=3.0.3" toml = "~=0.10.0" django-filter = "~=2.4" pyjq = "~=2.4" html-testrunner = "~=1.2.1" forex-python = "~=1.6" django-tabulate = "*" pynacl = "~=1.4.0" phonenumbers = "~=8.12.9" pycountry = "~=20.7.3" tblib = "~=1.7" html2text = "~=2020.1.16" memoization = "~=0.3.2" django-ipware = "~=3.0.2" gunicorn = "~=20.1" whitenoise = "~=5.2" django-debug-toolbar = "*" celery = "~=5.1.0" django-celery-results = "~=2.1.0" kombu = "~=5.1.0" redis = "~=3.5.3" faker = "~=8.10.1" psutil = "~=5.8.0" django-cryptography = "~=1.0" channels = "~=3.0.4" channels-redis = "~=3.3.0" uvicorn = {extras = ["standard"], version = "~=0.14.0"} psycopg2-binary = "~=2.9.1" django-silk = "~=4.2.0" django-server-timing = {ref = "master", git = "https://github.com/vtemian/django-server-timing.git"} pre-commit = "~=2.17.0" icalendar = "~=4.0.9" jsonschema = "==4.4.0" shortuuid = "==1.0.9" o365 = "==2.0.21" [requires] python_version = "3.7" [pipenv] allow_prereleases = true Even though package version is not updated, pipenv is giving the following error during pipeline build stage which is running on Ubuntu 18.04 OS. Pipfile.lock has expected old version and it's a sha256 hash. #0 453.9 [pipenv.exceptions.InstallError]: ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them. 
#0 453.9 [pipenv.exceptions.InstallError]: pyrsistent==0.18.1 from https://test-files.pythonhosted.org/packages/46/e2/03adc9d2f17f1a144418d9776414d2a12d692fe6dac18bd1b946bd5a0aad/pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (from -r /tmp/pipenv-4mqic6yg-requirements/pipenv-plxp2qsv-hashed-reqs.txt (line 129)): #0 453.9 [pipenv.exceptions.InstallError]: Expected sha256 0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c #0 453.9 [pipenv.exceptions.InstallError]: Expected or 1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc #0 453.9 [pipenv.exceptions.InstallError]: Expected or 4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e #0 453.9 [pipenv.exceptions.InstallError]: Expected or 5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26 #0 453.9 [pipenv.exceptions.InstallError]: Expected or 636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec #0 453.9 [pipenv.exceptions.InstallError]: Expected or 6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286 #0 453.9 [pipenv.exceptions.InstallError]: Expected or 6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045 #0 453.9 [pipenv.exceptions.InstallError]: Expected or 7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec #0 453.9 [pipenv.exceptions.InstallError]: Expected or 7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8 #0 453.9 [pipenv.exceptions.InstallError]: Expected or 914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c #0 453.9 [pipenv.exceptions.InstallError]: Expected or b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca #0 453.9 [pipenv.exceptions.InstallError]: Expected or cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22 #0 453.9 [pipenv.exceptions.InstallError]: Expected or d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a #0 454.0 [pipenv.exceptions.InstallError]: Expected or d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96 #0 454.0 [pipenv.exceptions.InstallError]: Expected or d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc #0 454.0 [pipenv.exceptions.InstallError]: Expected or df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1 #0 454.0 [pipenv.exceptions.InstallError]: Expected or e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07 #0 454.0 [pipenv.exceptions.InstallError]: Expected or e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6 #0 454.0 [pipenv.exceptions.InstallError]: Expected or e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b #0 454.0 [pipenv.exceptions.InstallError]: Expected or f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5 #0 454.0 [pipenv.exceptions.InstallError]: Expected or fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6 #0 454.0 [pipenv.exceptions.InstallError]: Got 2462a2c81ebc4358d169a868f87eb9df71a7d676009fa9180632546456c37015 Any help to resolve this would be highly appreciated.
The package pyrsistent is a dependency of jsonschema, which is pinned to version 4.4.0 in your Pipfile. There are several possible explanations for a hash mismatch when installing from PyPI. One way to simply fix the error would be to regenerate your Pipfile.lock: % pipenv lock And then reinstall all packages specified in Pipfile.lock: % pipenv sync
3
3
74,281,446
2022-11-1
https://stackoverflow.com/questions/74281446/pyarrow-is-not-installed-snowpark-stored-procedure-with-python
I have created this basic stored procedure to query a Snowflake table based on a customer id: CREATE OR REPLACE PROCEDURE SP_Snowpark_Python_Revenue_2(site_id STRING) RETURNS STRING LANGUAGE PYTHON RUNTIME_VERSION = '3.8' PACKAGES = ('snowflake-snowpark-python') HANDLER = 'run' AS $$ from snowflake.snowpark.functions import * def run(session, site_id): df_rev_tmp = session.table("revenue").select(col("site_id"), col("subscription_id"), col("country_name"), col("product_name")) df_rev_final = df_rev_tmp.filter(col("site_id") == site_id) return "SUCCESS" $$; It works fine but I would like my sproc to return a JSON object for the whole result set. I modified it thusly: CREATE OR REPLACE PROCEDURE SP_Snowpark_Python_Revenue_3(site_id STRING) RETURNS STRING LANGUAGE PYTHON RUNTIME_VERSION = '3.8' PACKAGES = ('snowflake-snowpark-python') HANDLER = 'run' AS $$ from snowflake.snowpark.functions import * def run(session, site_id): df_rev = session.table("revenue").select(col("site_id"), col("subscription_id"), col("country_name"), col("product_name")) df_rev_tmp = df_rev.filter(col("site_id") == site_id) df_rev_final = df_rev_tmp.to_pandas() df_rev_json = df_rev_final.to_json(orient = 'columns') return df_rev_json $$; It compiles without errors but fails at runtime with this error: CALL SP_Snowpark_Python_Revenue_3('dfgerr6223')..... 255002: Optional dependency: 'pyarrow' is not installed... What am I missing here?
You need to ask for pyarrow as a package: PACKAGES = ('snowflake-snowpark-python', 'pyarrow') But to get these packages, someone in your org will need to approve the Anaconda terms of service, or you'll get the following error: SQL compilation error: Anaconda terms must be accepted by ORGADMIN to use Anaconda 3rd party packages. Please follow the instructions at https://… Someone with ORGADMIN role can follow these steps: https://docs.snowflake.com/en/sql-reference/stored-procedures-python.html#getting-started
3
2
74,278,889
2022-11-1
https://stackoverflow.com/questions/74278889/how-can-i-count-occurrences-of-words-specified-in-an-array-in-python
I am working on a small program in which the user enters text and I would like to check how many times the given words occur in the given input. # Read user input print("Input your code: \n") user_input = sys.stdin.read() print(user_input) For example, the text that I input into the program is: a=1 b=3 if (a == 1): print("A is a number 1") elif(b == 3): print ("B is 3") else: print("A isn't 1 and B isn't 3") The words to look for are specified in an array. wordsToFind = ["if", "elif", "else", "for", "while"] And basically I would like to print how many "if", "elif" and "else" have occurred in the input. How can I count occurrences of words like "if", "elif", "else", "for", "while" in a given string of user input?
I think the best option is to use the tokenize built-in module of python: # Let's say this is tokens.py import sys from collections import Counter from io import BytesIO from tokenize import tokenize # Get input from stdin code_text = sys.stdin.read() # Tokenize the input as python code tokens = tokenize(BytesIO(code_text.encode("utf-8")).readline) # Filter the ones in wordsToFind wordsToFind = ["if", "elif", "else", "for", "while"] words = [token.string for token in tokens if token.string in wordsToFind] # Count the occurrences counter = Counter(words) print(counter) Test Let's say you have a test.py: a=1 b=3 if (a == 1): print("A is a number 1") elif(b == 3): print ("B is 3") else: print("A isn't 1 and B isn't 3") and then you run: cat test.py | python tokens.py Output: Counter({'if': 1, 'elif': 1, 'else': 1}) Advantages Only correct python (syntactically) will be parsed You only will be counting the python keywords (not every if occurrence in the code text, for example, you can have an line like a = "if inside str" That if should not be counted I think
3
3
74,276,093
2022-11-1
https://stackoverflow.com/questions/74276093/how-to-get-all-the-indexes-of-leading-zeroes-using-regex-in-python
Using regex in Python (the re library only), I want to create a function that gives me the position of all leading 0s in a string. For example, if the string was: My house has 01 garden and 003 rooms. I would want the function to return 13, 27 and 28. I tried for example: import re string = "My house has 01 garden and 003 rooms." pattern = "(0+)[1-9]\d*" print(re.findall(pattern,string)) Obviously, the output gives me the matches but no position...
You can do the following: import re text = "My house has 01 garden and 003 rooms." pattern = re.compile(r"\b0+") def leading_zeros_index(s: str) -> list: return [i for m in pattern.finditer(s) for i in range(m.start(), m.end())] print(leading_zeros_index(text)) output: [13, 27, 28] Basically you use .finditer() in order to get the match object, then you create a range() object from match object's .start() and .end(). I used \b0+ as the pattern. There is no need to check the other characters come after zeros. \b is word boundary, here means, zeros should be at the start of the words.
3
5
74,275,522
2022-11-1
https://stackoverflow.com/questions/74275522/count-the-arrays-in-a-large-array
I wrote the following code but when the rows are large it is slow import numpy as np array = np.array([ [1,2],[1,2],[2,3], [1,2],[2,3],[5,2]]) d={} for l in array: t = tuple(l) if t in d: d[t]+=1 else: d[t]=1 print(d) result: `{(1, 2): 3, (2, 3): 2, (5, 2): 1}` Is there a faster way to do this?
Use np.unique elements, counts = np.unique(array, axis=0, return_counts=True) In your case, elements will be [[1, 2], [2, 3], [5, 2]] and counts will be [3, 2, 1]
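If you still need the result as a dictionary like in the original loop, you can zip the two arrays back together; a small sketch based on the arrays above:

import numpy as np

array = np.array([[1, 2], [1, 2], [2, 3], [1, 2], [2, 3], [5, 2]])
elements, counts = np.unique(array, axis=0, return_counts=True)

# Rebuild the {tuple: count} mapping from the unique rows and their counts
d = {tuple(int(x) for x in row): int(c) for row, c in zip(elements, counts)}
print(d)  # should give {(1, 2): 3, (2, 3): 2, (5, 2): 1}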
3
4
74,268,552
2022-10-31
https://stackoverflow.com/questions/74268552/how-to-sort-pandas-crosstab-columns-by-sum-of-values
I have a crosstab table with 4 rows and multiple columns, containing numeric values (representing the number of dataset elements at the intersection of two factors). I want to sort the order of columns in the crosstab by the sum of values in each column. e.g. I have: ct = pd.crosstab(df_flt_reg['experience'], df_flt_reg['region']) | a| b| c| d| e| 0 | 1| 0| 7| 3| 6| 1 | 2| 4| 1| 5| 4| 2 | 3| 5| 0| 7| 2| 3 | 1| 3| 1| 9| 1| (sum)| 7| 12| 9| 24| 13| # row doesn't exist, written here to make clear the logic What do I want: | d| e| b| c| a| 0 | 3| 6| 0| 7| 1| 1 | 5| 4| 4| 1| 2| 2 | 7| 2| 5| 0| 3| 3 | 9| 1| 3| 1| 1| (sum)| 24| 13| 12| 9| 7| I succeeded only in sorting the columns by their names (in alphabetical order), but that's not what I need. I tried to sum those values separately, made a list of properly ordered indexes and then passed them to crosstab.sort_values() via the "by" parameter, but it was sorted in alphabetical order again. Also I tried to create a new row "sum", but only managed to create a new column -_- So I am humbly asking for the community's help.
Calculate the sum and sort the values. Once you have the sorted series get the index and reorder your columns with it. sorted_df = ct[ct.sum().sort_values(ascending=False).index] d e b c a 0 3 6 0 7 1 1 5 4 4 1 2 2 7 2 5 0 3 3 9 1 3 1 1
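Since the question also mentions wanting a totals row, one way to add it after sorting (a small sketch building on the line above; the row label is arbitrary) is:

sorted_df = ct[ct.sum().sort_values(ascending=False).index].copy()

# Optionally append the "(sum)" row from the question for reference
sorted_df.loc["(sum)"] = sorted_df.sum()
print(sorted_df)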
3
5
74,260,802
2022-10-31
https://stackoverflow.com/questions/74260802/different-aggregate-function-based-on-value-of-column-pandas
I have the following dataframe import pandas as pd test = pd.DataFrame({'y':[1,2,3,4,5,6], 'label': ['bottom', 'top','bottom', 'top','bottom', 'top']}) y label 0 1 bottom 1 2 top 2 3 bottom 3 4 top 4 5 bottom 5 6 top I would like to add a new column, agg_y, which would be the max(y) if label=="bottom" and min(y) if label=="top". I have tried this test['min_y'] = test.groupby('label').y.transform('min') test['max_y'] = test.groupby('label').y.transform('max') test['agg_y'] = np.where(test.label == "bottom", test.max_y, test.min_y) test.drop(columns=['min_y', 'max_y'], inplace=True) which gives the correct result y label agg_y 0 1 bottom 5 1 2 top 2 2 3 bottom 5 3 4 top 2 4 5 bottom 5 5 6 top 2 I am just looking for a one-liner solution, if possible
Your solution as a one-liner is: test['agg_y'] = np.where(test.label == "bottom", test.groupby('label').y.transform('max'), test.groupby('label').y.transform('min')) Solution without groupby, thank you @ouroboros1: test['agg_y'] = np.where(test.label == 'bottom', test.loc[test.label.eq('bottom'), 'y'].max(), test.loc[test.label.ne('bottom'), 'y'].min()) Another idea is mapping values; the idea is similar to ouroboros1's solution: d = {'bottom':'max', 'top':'min'} test['agg_y'] = test['label'].map({val:test.loc[test.label.eq(val),'y'].agg(func) for val, func in d.items()}) print (test) y label agg_y 0 1 bottom 5 1 2 top 2 2 3 bottom 5 3 4 top 2 4 5 bottom 5 5 6 top 2
3
4
74,260,188
2022-10-31
https://stackoverflow.com/questions/74260188/adding-key-and-value-inside-a-list-of-lists-python
I have data that looks like this [[{'title': 'Line'}], [{'title': 'asd'}]]. I want to add a new key and value to every list inside the list of lists. I have tried this but I'm getting the error 'list' object is not a mapping. Any suggestions? data = [[{'title': 'Line'}], [{'title': 'asd'}]] titleID = [{'id': 373}, {'id': 374}] combine = [{**dict_1, **dict_2} for dict_1, dict_2 in zip(char_id, data )] the output I want is like this: [[{'id': 373, 'title': 'Line'}], [{'id': 374, 'title': 'asd'}]]
Try this list comprehension and unpacking data = [[{'title': 'Line'}], [{'title': 'asd'}]] titleID = [{'id': 373}, {'id': 374}] [[{**i[0], **j}] for i,j in zip(data, titleID)] Output [[{'title': 'Line', 'id': 373}], [{'title': 'asd', 'id': 374}]]
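If mutating the original structure in place is acceptable, an alternative sketch is to update each inner dict directly:

data = [[{'title': 'Line'}], [{'title': 'asd'}]]
titleID = [{'id': 373}, {'id': 374}]

# Merge each id dict into the single dict inside each inner list, in place
for inner, extra in zip(data, titleID):
    inner[0].update(extra)

print(data)  # [[{'title': 'Line', 'id': 373}], [{'title': 'asd', 'id': 374}]]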
3
1
74,257,818
2022-10-31
https://stackoverflow.com/questions/74257818/how-to-redirect-logged-in-user-with-url-params
In my Django project I am trying to make http://127.0.0.1:8000/ (the home page) redirect to the login page if the user is not logged in; however, if a user is logged in, I want http://127.0.0.1:8000/ to become http://127.0.0.1:8000/username/. I have tried different answers but nothing specific led to the result I want. Here is the login view: class LoginView(LoginView): template_name = 'login.html' def get_success_url(self): user=self.request.user.username return f'/{user}/' Here is the login urls: path('accounts/login/', LoginView.as_view(redirect_authenticated_user=True,template_name='users/login.html'), name='login'), Here is the home views: class home(LoginRequiredMixin, ListView): model = Item template_name = 'app/home.html' context_object_name = 'items' Here is the app urls: path('<str:username>/', home.as_view(), name='home'), My question: how do I redirect the home page to http://127.0.0.1:8000/username/ if the user is logged in, and to the login page if not?
You can use settings.LOGIN_URL and settings.LOGIN_REDIRECT_URL so: class LoginView(LoginView): template_name = 'login.html' login_url='login' def get_success_url(self): user=self.request.user.username return reverse('home', args=(user,)) # args must be a sequence, hence the trailing comma In settings.py: LOGIN_URL='some_app_name:login' #Redirect to login page if not logged in. LOGIN_REDIRECT_URL='some_app_name:home' #Redirect to home page after successful login It defaults to LOGIN_URL and LOGIN_REDIRECT_URL of settings.py if not specified in views. Note: It's not good practice to give your view the same name as the actual authentication view, so it should be MyLoginView or any other name you choose instead of the actual view name.
5
5
74,237,285
2022-10-28
https://stackoverflow.com/questions/74237285/dimension-error-by-using-patch-embedding-for-video-processing
I am working on one of the transformer models that has been proposed for video classification. My input tensor has the shape [batch=16, channels=3, frames=16, H=224, W=224], and to apply the patch embedding to the input tensor it uses the following approach: patch_dim = in_channels * patch_size ** 2 self.to_patch_embedding = nn.Sequential( Rearrange('b t c (h p1) (w p2) -> b t (h w) (p1 p2 c)', p1 = patch_size, p2 = patch_size), nn.Linear(patch_dim, dim), ***** (Root of the error)****** ) The parameters that I am using are as follows: patch_size =16 dim = 192 in_channels = 3 Unfortunately I receive the following error that corresponds to the line that has been marked in the code: Exception has occurred: RuntimeError mat1 and mat2 shapes cannot be multiplied (9408x4096 and 768x192) I have thought a lot about the reason for the error but I couldn't figure it out. How can I solve the problem?
The input tensor has shape [batch=16, channels=3, frames=16, H=224, W=224], while the Rearrange pattern expects dimensions in the order [b t c h w]. In the position where the pattern expects channels you are passing frames, so c is read as 16 and the last dimension becomes (p1 * p2 * c) = 16 * 16 * 16 = 4096 instead of 16 * 16 * 3 = 768, which no longer matches nn.Linear(768, 192). Align the positions of channels and frames: from torch import torch, nn from einops.layers.torch import Rearrange patch_size = 16 dim = 192 b, f, c, h, w = 16, 16, 3, 224, 224 input_tensor = torch.randn(b, f, c, h, w) patch_dim = c * patch_size ** 2 m = nn.Sequential( Rearrange('b t c (h p1) (w p2) -> b t (h w) (p1 p2 c)', p1=patch_size, p2=patch_size), nn.Linear(patch_dim, dim) ) print(m(input_tensor).size()) Output: torch.Size([16, 16, 196, 192])
4
5
74,245,043
2022-10-29
https://stackoverflow.com/questions/74245043/find-palindrome-python-space-complexity
Given the following code in python which checks whether an string of n size is palindromic: def is_palindromic(s): return all(s[i] == s[~i] for i in range(len(s) // 2)) What is the space complexity of this code? Is it O(1) or O(n)? The all function gets an iterable as a parameter; So does it mean the s[i] == s[~i] for i in range(len(s) // 2) expression is an iterable (container) that stores n values in memory? Or maybe it behaves like an iterator that computes and returns the values one by one without any additional space?
You are using a generator expression, which only needs enough memory to store one item at a time. The other parts, len, range and the all function itself, are all O(1) too, so your suggestion that "it behaves like an iterator that computes and returns the values one by one without any additional space" is correct. If you used a list comprehension instead all([s[i] == s[~i] for i in range(len(s) // 2)]) it would briefly use O(n) memory to build the list, which would be passed as a parameter to all.
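A quick way to see the difference, as a rough sketch (the sizes reported by sys.getsizeof are implementation details, so the exact numbers are only illustrative):

import sys

s = "a" * 1_000_000
gen = (s[i] == s[~i] for i in range(len(s) // 2))   # generator: constant size
lst = [s[i] == s[~i] for i in range(len(s) // 2)]   # list: grows with n

print(sys.getsizeof(gen))  # a few hundred bytes, independent of len(s)
print(sys.getsizeof(lst))  # roughly proportional to len(s) // 2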
3
2
74,252,768
2022-10-30
https://stackoverflow.com/questions/74252768/missinggreenlet-greenlet-spawn-has-not-been-called
I am trying to get the number of rows matched in a one to many relationship. When I try parent.children_count I get : sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/14/xd2s) I added expire_on_commit=False but still get the same error. How can I fix this? import asyncio from uuid import UUID, uuid4 from sqlmodel import SQLModel, Relationship, Field from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession class Parent(SQLModel, table=True): id: UUID = Field(default_factory=uuid4, primary_key=True) children: list["Child"] = Relationship(back_populates="parent") @property def children_count(self): return len(self.children) class Child(SQLModel, table=True): id: UUID = Field(default_factory=uuid4, primary_key=True) parent_id: UUID = Field(default=None, foreign_key=Parent.id) parent: "Parent" = Relationship(back_populates="children") async def main(): engine = create_async_engine("sqlite+aiosqlite://") async with engine.begin() as conn: await conn.run_sync(SQLModel.metadata.create_all) async with AsyncSession(engine) as session: parent = Parent() session.add(parent) await session.commit() await session.refresh(parent) print(parent.children_count) # I expect 0 here, as of now this parent has no children asyncio.run(main())
I think the problem here is that by default SQLAlchemy lazy-loads relationships, so accessing parent.children_count implicitly triggers a database query, leading to the reported error. One way around this would be to specify a load strategy other than "lazy" in the relationship definition. Using SQLModel, this would look like: children: list['Child'] = Relationship( back_populates='parent', sa_relationship_kwargs={'lazy': 'selectin'} ) This will cause SQLAlchemy to issue an additional query to fetch the relationship while still in "async mode". Another option would be to pass {'lazy': 'joined'}, which would cause SQLAlchemy to fetch all the results in a single JOIN query. If configuring the relationship is undesirable, you could issue a query specifying the option: from sqlalchemy.orm import selectinload from sqlmodel import select ... async with AsyncSession(engine) as session: parent = Parent() session.add(parent) await session.commit() result = await session.scalars( select(Parent).options(selectinload(Parent.children)) ) parent = result.first() print( parent.children_count ) # I expect 0 here, as of now this parent has no children
28
46
74,255,173
2022-10-30
https://stackoverflow.com/questions/74255173/how-can-i-use-a-seed-inside-a-loop-to-get-the-same-random-samples-everytime-the
I want to generate data using random numbers and then generate random samples with replacement from the generated data. The problem is that using np.random.seed(10) only fixes the initial random numbers for the generated data but does not fix the random samples generated inside the loop: every time I run the code I get the same generated data but different random samples, and I would like to get the same random samples in order to get reproducible results. The code is the following: import numpy as np import random np.random.seed(10) data = list(np.random.binomial(size = 215 , n=1, p= 0.3)) sample_mean = [] for i in range(1000): sample = random.choices(data, k=215) mean = np.mean(sample) sample_mean.append(mean) print(np.mean(sample_mean)) np.mean(sample_mean) should return the same value every time the code is run, but it does not. I tried typing random.seed(i) inside the loop but it didn't work.
Fixing the seed for np.random doesn't fix the seed for random... So adding a simple line for fixing both seeds will give you reproducible results: import numpy as np import random np.random.seed(10) random.seed(10) data = list(np.random.binomial(size=215, n=1, p=0.3)) sample_mean = [] for i in range(1000): sample = random.choices(data, k=215) mean = np.mean(sample) sample_mean.append(mean) print(np.mean(sample_mean)) Or, alternatively, you can use np.random.choice (which samples with replacement by default and is controlled by np.random.seed) instead of random.choices.
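A sketch of the NumPy-only variant, using the modern Generator API so a single seeded object drives both the data generation and the resampling (the seed value 10 is simply the one from the question):

import numpy as np

rng = np.random.default_rng(10)          # one seeded source of randomness
data = rng.binomial(n=1, p=0.3, size=215)

sample_mean = []
for _ in range(1000):
    sample = rng.choice(data, size=215, replace=True)  # bootstrap resample
    sample_mean.append(sample.mean())

print(np.mean(sample_mean))              # identical on every run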
3
3
74,254,642
2022-10-30
https://stackoverflow.com/questions/74254642/how-to-install-pipenv-on-windows
I need to install pipenv on Windows and I am following this tutorial. However, I get an error. I use Python 3.9.13 and pip 22.3. I installed pipenv with the command pip install pipenv, then I was supposed to do this: but I didn't get it, so I skipped it and entered the command pipenv -h. Finally I got this error: Could you help me please?
You must add the Scripts directory to the PATH environment variable, as in the example below:
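If you are unsure where that Scripts directory actually is, a small sketch like this can print the candidate locations (the per-user path matters when pipenv was installed with pip install --user):

import sysconfig

# Scripts directory of the Python installation itself
print(sysconfig.get_path("scripts"))

# Per-user Scripts directory used by "pip install --user" on Windows
print(sysconfig.get_path("scripts", scheme="nt_user"))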
3
4
74,252,067
2022-10-30
https://stackoverflow.com/questions/74252067/efficiently-sample-batches-from-only-one-class-at-each-iteration-with-pytorch
I want to train a classifier on ImageNet dataset (1000 classes) and I need each batch to contain 64 images from the same class and consecutive batches from different classes. So far based on @shai's suggestion and this post I have import torchvision.transforms as transforms import torchvision.datasets as datasets from torch.utils.data import DataLoader from torch.utils.data import Dataset import numpy as np import random import argparse import torch import os class DS(Dataset): def __init__(self, data, num_classes): super(DS, self).__init__() self.data = data self.indices = [[] for _ in range(num_classes)] for i, (data, class_label) in enumerate(data): # create a list of lists, where every sublist containts the indices of # the samples that belong to the class_label self.indices[class_label].append(i) def classes(self): return self.indices def __getitem__(self, index): return self.data[index] class BatchSampler: def __init__(self, classes, batch_size): # classes is a list of lists where each sublist refers to a class and contains # the sample ids that belond to this class self.classes = classes self.n_batches = sum([len(x) for x in classes]) // batch_size self.min_class_size = min([len(x) for x in classes]) self.batch_size = batch_size self.class_range = list(range(len(self.classes))) random.shuffle(self.class_range) assert batch_size < self.min_class_size, 'batch_size should be at least {}'.format(self.min_class_size) def __iter__(self): batches = [] for j in range(self.n_batches): if j < len(self.class_range): batch_class = self.class_range[j] else: batch_class = random.choice(self.class_range) batches.append(np.random.choice(self.classes[batch_class], self.batch_size)) return iter(batches) def main(): # Code about _train_dataset = DS(train_dataset, train_dataset.num_classes) _batch_sampler = BatchSampler(_train_dataset.classes(), batch_size=args.batch_size) _train_loader = DataLoader(dataset=_train_dataset, batch_sampler=_batch_sampler) labels = [] for i, (inputs, _labels) in enumerate(_train_loader): labels.append(torch.unique(_labels).item()) print("Unique labels: {}".format(torch.unique(_labels).item())) labels = set(labels) print('Length of traversed unique labels: {}'.format(len(labels))) if __name__ == "__main__": parser = argparse.ArgumentParser(description='PyTorch ImageNet Training') parser.add_argument('--data', metavar='DIR', nargs='?', default='imagenet', help='path to dataset (default: imagenet)') parser.add_argument('--dummy', action='store_true', help="use fake data to benchmark") parser.add_argument('-b', '--batch-size', default=64, type=int, metavar='N', help='mini-batch size (default: 256), this is the total ' 'batch size of all GPUs on the current node when ' 'using Data Parallel or Distributed Data Parallel') parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 4)') args = parser.parse_args() if args.dummy: print("=> Dummy data is used!") num_classes = 100 train_dataset = datasets.FakeData(size=12811, image_size=(3, 224, 224), num_classes=num_classes, transform=transforms.ToTensor()) val_dataset = datasets.FakeData(5000, (3, 224, 224), num_classes, transforms.ToTensor()) else: traindir = os.path.join(args.data, 'train') valdir = os.path.join(args.data, 'val') normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) train_dataset = datasets.ImageFolder( traindir, transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), 
transforms.ToTensor(), normalize, ])) val_dataset = datasets.ImageFolder( valdir, transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize, ])) # Samplers are initialized to None and train_sampler will be replaced train_sampler, val_sampler = None, None train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.workers, pin_memory=True, sampler=train_sampler) val_loader = torch.utils.data.DataLoader( val_dataset, batch_size=args.batch_size, shuffle=False, num_workers=args.workers, pin_memory=True, sampler=val_sampler) main() which prints: Length of traversed unique labels: 100. However, creating self.indices in the for loop takes a lot of time. Is there a more efficient way to construct this sampler? EDIT: yield implementation import torchvision.transforms as transforms import torchvision.datasets as datasets from torch.utils.data import DataLoader from torch.utils.data import Dataset import numpy as np import random import argparse import torch import os from tqdm import tqdm import os.path class DS(Dataset): def __init__(self, data, num_classes): super(DS, self).__init__() self.data = data self.data_len = len(data) indices = [[] for _ in range(num_classes)] for i, (_, class_label) in tqdm(enumerate(data), total=len(data), miniters=1, desc='Building class indices dataset..'): indices[class_label].append(i) self.indices = indices def per_class_sample_indices(self): return self.indices def __getitem__(self, index): return self.data[index] def __len__(self): return self.data_len class BatchSampler: def __init__(self, per_class_sample_indices, batch_size): # classes is a list of lists where each sublist refers to a class and contains # the sample ids that belond to this class self.per_class_sample_indices = per_class_sample_indices self.n_batches = sum([len(x) for x in per_class_sample_indices]) // batch_size self.min_class_size = min([len(x) for x in per_class_sample_indices]) self.batch_size = batch_size self.class_range = list(range(len(self.per_class_sample_indices))) random.shuffle(self.class_range) def __iter__(self): for j in range(self.n_batches): if j < len(self.class_range): batch_class = self.class_range[j] else: batch_class = random.choice(self.class_range) if self.batch_size <= len(self.per_class_sample_indices[batch_class]): batch = np.random.choice(self.per_class_sample_indices[batch_class], self.batch_size) # batches.append(np.random.choice(self.per_class_sample_indices[batch_class], self.batch_size)) else: batch = self.per_class_sample_indices[batch_class] yield batch def n_batches(self): return self.n_batches def main(): file_path = 'a_file_path' file_name = 'per_class_sample_indices.pt' if not os.path.exists(os.path.join(file_path, file_name)): print('File: {} does not exists. Create it.'.format(file_name)) per_class_sample_indices = DS(train_dataset, num_classes).per_class_sample_indices() torch.save(per_class_sample_indices, os.path.join(file_path, file_name)) else: per_class_sample_indices = torch.load(os.path.join(file_path, file_name)) print('File: {} exists. 
Do not create it.'.format(file_name)) batch_sampler = BatchSampler(per_class_sample_indices, batch_size=args.batch_size) train_loader = torch.utils.data.DataLoader( train_dataset, # batch_size=args.batch_size, # shuffle=(train_sampler is None), num_workers=args.workers, pin_memory=True, # sampler=train_sampler, batch_sampler=batch_sampler ) # We do not use sampler for the validation # val_loader = torch.utils.data.DataLoader( # val_dataset, batch_size=args.batch_size, shuffle=False, # num_workers=args.workers, pin_memory=True, sampler=None) labels = [] for i, (inputs, _labels) in enumerate(train_loader): labels.append(torch.unique(_labels).item()) print("Unique labels: {}".format(torch.unique(_labels).item())) labels = set(labels) print('Length of traversed unique labels: {}'.format(len(labels))) if __name__ == "__main__": parser = argparse.ArgumentParser(description='PyTorch ImageNet Training') parser.add_argument('--data', metavar='DIR', nargs='?', default='imagenet', help='path to dataset (default: imagenet)') parser.add_argument('--dummy', action='store_true', help="use fake data to benchmark") parser.add_argument('-b', '--batch-size', default=64, type=int, metavar='N', help='mini-batch size (default: 256), this is the total ' 'batch size of all GPUs on the current node when ' 'using Data Parallel or Distributed Data Parallel') parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', help='number of data loading workers (default: 4)') args = parser.parse_args() if args.dummy: print("=> Dummy data is used!") num_classes = 100 train_dataset = datasets.FakeData(size=12811, image_size=(3, 224, 224), num_classes=num_classes, transform=transforms.ToTensor()) val_dataset = datasets.FakeData(5000, (3, 224, 224), num_classes, transforms.ToTensor()) else: traindir = os.path.join(args.data, 'train') valdir = os.path.join(args.data, 'val') normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) train_dataset = datasets.ImageFolder( traindir, transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize, ])) val_dataset = datasets.ImageFolder( valdir, transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize, ])) num_classes = len(train_dataset.classes) main() A similar post but in TensorFlow can be found here
You should write your own batch_sampler class for the DataLoader.
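Since the answer above is only a pointer, here is a minimal sketch of such a batch sampler, independent of the larger implementation in the question (class and variable names are my own):

import random
from torch.utils.data import DataLoader

class SingleClassBatchSampler:
    """Yields batches of indices where every index belongs to the same class."""
    def __init__(self, labels, batch_size):
        self.batch_size = batch_size
        self.per_class = {}
        for idx, label in enumerate(labels):
            self.per_class.setdefault(label, []).append(idx)

    def __iter__(self):
        batches = []
        for c, indices in self.per_class.items():
            indices = indices[:]
            random.shuffle(indices)
            for start in range(0, len(indices), self.batch_size):
                batches.append(indices[start:start + self.batch_size])
        random.shuffle(batches)  # so consecutive batches come from different classes
        return iter(batches)

    def __len__(self):
        return sum(-(-len(v) // self.batch_size) for v in self.per_class.values())

# usage: loader = DataLoader(dataset, batch_sampler=SingleClassBatchSampler(labels, 64))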
5
2
74,242,076
2022-10-29
https://stackoverflow.com/questions/74242076/way-to-simplify-enum
Is there a better way to initialize all this boilerplate? class Type(Enum): Null=auto() Bool=auto() Int=auto() Float=auto() Decimal=auto() String=auto() Bytes=auto() Date=auto() Time=auto() Datetime=auto() Timestamp=auto() Interval=auto() Struct=auto() Array=auto() Json=auto() I wanted to do something like the following but unfortunately it kind of screws up how Pylance works (everything shows up as an error referencing related types): _Types = ['Null','Bool','Int','Float','Decimal','String','Bytes','Date','Time','Datetime','Timestamp','Interval','Struct','Array','Json'] Type = Enum('Type', {v:i for i,v in enumerate(_Types)})
I can't speak for Pylance, but if you want auto implementations, you can just pass your list of types directly to the Enum function. Type = Enum('Type', ['Null','Bool','Int','Float','Decimal','String','Bytes', 'Date','Time','Datetime','Timestamp','Interval','Struct', 'Array','Json'])
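A brief usage note on what the functional form gives you; this mirrors what the class-based definition with auto() would produce, with values starting at 1 (shortened member list for brevity):

from enum import Enum

Type = Enum('Type', ['Null', 'Bool', 'Int', 'Float', 'String'])

print(Type.Int)          # Type.Int
print(Type.Int.value)    # 3
print(Type['String'])    # lookup by name -> Type.String
print(list(Type)[:2])    # [<Type.Null: 1>, <Type.Bool: 2>]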
5
4
74,237,517
2022-10-28
https://stackoverflow.com/questions/74237517/is-there-a-way-to-draw-borders-around-a-figure-in-plotly
I have the following bar chart. I would like to put it around a border. Does anyone know how to do it? data = {'Programming Languages': ['Python', 'C#', 'Java', 'Ruby', 'C++', 'C', 'JavaScript', 'PHP', 'TypeScript', 'Haskell', 'Closure', 'Kotlin'], 'Responses': [12,11,10,5,3,2,2,1,1,1,1,1]} df = pd.DataFrame(data, columns = ['Programming Languages', 'Responses']) df = df.sort_values(by='Responses', ascending=True) fig = px.bar(df, x='Responses', y='Programming Languages', color_discrete_sequence=["black"]) fig.update_layout({'plot_bgcolor': 'rgba(0, 0, 0, 0)'}, font=dict(size=15)) fig.write_image("../images/pl2.pdf") fig.show() Unfortunately, I cannot paste the figure as it is a pdf file.
fig.update_xaxes(showline=True, linewidth=1, linecolor='black', mirror=True) fig.update_yaxes(showline=True, linewidth=1, linecolor='black', mirror=True) fig.show()
4
7
74,209,148
2022-10-26
https://stackoverflow.com/questions/74209148/automatically-generate-api-reference-for-all-subpackages-modules
I am using mkdocs with the mkdocstrings plugin to generate the documentation of my Python package. My package is organized in a standard fashion - setup.py - mkdocs.yaml - docs/ - mypackage/ - __init__.py - module1.py - module2.py - subpackage1/ - __init__.py - submodule1.py - submodule2.py - [...] in mkdocs.yml: plugins: - mkdocstrings nav: - API: references.md and in references.md: # API references ::: mypackage This does not generate any documentation (i.e. the 'API' page remains blanck) On the other hand, this works: ::: mypackage.module1 Running mkdocs build -v: DEBUG - mkdocstrings: Matched '::: mypackage' DEBUG - mkdocstrings: Using handler 'python' DEBUG - mkdocstrings: Collecting data DEBUG - griffe: Found mypackage: loading DEBUG - griffe: Loading path mypackage/__init__.py DEBUG - griffe: Loading path mypackages/module1.py [...] DEBUG - griffe: Iteration 1 finished, 0 aliases resolved, still 0 to go DEBUG - mkdocstrings: Updating renderer s env DEBUG - mkdocstrings: Rendering templates DEBUG - mkdocstrings: python/templates/material/module.html: Rendering mypackage DEBUG - mkdocstrings: python/templates/material/children.html: Rendering children of mypackage DEBUG - Running 1 `page_content` events DEBUG - mkdocs_autorefs.plugin: Mapping identifiers to URLs for page references.md [...] mkdocs_autorefs.plugin: Fixing references in page index.md DEBUG - Building page references.md DEBUG - Running 1 `page_context` events DEBUG - Running 2 `post_page` events DEBUG - mkdocs_autorefs.plugin: Fixing references in page references.md I am not familiar with mkdocs and mkdocstrings, but it seems this does not indicate any error, and shows the sources are found. I guess I am missing something? (Or should I manually list the path to all modules?)
Ah, it's because by default submodules are not rendered. Try setting the show_submodules option to true. Globally: plugins: - mkdocstrings: handlers: python: options: show_submodules: true Locally: ::: mypackage options: show_submodules: true
7
9
74,229,470
2022-10-28
https://stackoverflow.com/questions/74229470/i-have-issue-at-install-scikit-image-with-python-3-11-what-should-i-do
Good evening everyone, I have an issue while installing scikit-image. I installed Python 3.11 on Windows 11 and ran pip install scikit-image, and I got the issues shown in the screenshot below. EDIT: I installed Microsoft Visual C++ 14 but I got another issue, shown in the second screenshot.
The issue is that no wheels are available for scikit-image for Python 3.11 Python libraries can have code that is not written in Python (like C, C++, FORTRAN, Rust, ...) and in order to use those, you need to compile that code for your version of Python, your OS and CPU architecture. Fortunately, Python offers a packaging mechanism known as wheel to provide that precompiled code. Package maintainers can generate those wheels to avoid the end user having to compile it from sources. In your case, as Python 3.11 just got out (beginning of the week), the maintainers of scikit-image didn't have the time to package the wheel and pip tries to install it from sources (hence the message asking you to install Microsoft Visual C++ 14). Your options are: stay on Python 3.10 until the maintainers of scikit-image provides a wheel for Python 3.11 (I would recommend that) check https://github.com/scikit-image/scikit-image to see how to compile the package from sources
4
4
74,232,611
2022-10-28
https://stackoverflow.com/questions/74232611/how-to-apply-operations-on-each-array-item-in-a-column-in-presto
We want to check if the array items in text_array column start with "a" in the input table, and store the results into the third column, so we get the following output table. My first question is: Is there any way to get output table from input table using presto? In python we can define a function for it, like: def myFunc(text): out = [] for word in text: out.append(word.startswith("a")) return out My second question is: Is there any way to use the python function with presto? I was looking for something like this: SELECT id, text_array, myFunc(text_array) AS starts_with_a FROM t1
You can use transform from array functions: - sample data with dataset(id, text_array) AS ( values (1, array['ax', 'by']), (2, array['ax', 'ay', 'cz']) ) -- query select *, transform(text_array, el -> el like 'a%') starts_with_a from dataset; Output: id text_array starts_with_a 1 [ax, by] [true, false] 2 [ax, ay, cz] [true, true, false]
3
3
74,209,534
2022-10-26
https://stackoverflow.com/questions/74209534/check-which-decorator-was-applied-to-a-function
Following this question I have an idea how to check whether my function was decorated or not. Only that I need further information, namely the decorators that were actually applied onto the function (or called when the function was called if it suits better). For being safe from the danger mentioned in this answer, I am using functools.wraps. This way I don't have to be careful for any naming redefinition of the wrapper used. This is what I have so far: from functools import wraps def decorator_wraps(function): @wraps(function) def _wrapper(*a, **kw): ... return _wrapper def is_decorated(func): return hasattr(func, '__wrapped__') @decorator_wraps def foo(x, y): ... print(is_decorated(foo)) # True But what I need: from functools import wraps def decorator_wraps_1(function): @wraps(function) def _wrapper(*a, **kw): ... return _wrapper def decorator_wraps_2(function): @wraps(function) def _wrapper(*a, **kw): ... return _wrapper def decorators(func): # returns list of decorators on `func` # OR def is_decorated_by(func, decorator): # returns True if `func` is decorated by `decorator` @decorator_wraps_1 @decorator_wraps_2 def foo(x, y): ... print(decorators(foo)) # [decorator_wraps_1, decorator_wraps_2] print(is_decorated_by(foo, decorator_wraps_1)) # True TLDR I want to decide if my function was decorated and I need the names of these decorator functions as well. Any idea how to achieve this?
TL;DR Roll your own @wraps. import functools def update_wrapper(wrapper, wrapped, decorator, **kwargs): wrapper = functools.update_wrapper(wrapper, wrapped, **kwargs) if decorator is not None: __decorators__ = getattr(wrapper, "__decorators__", []) setattr(wrapper, "__decorators__", __decorators__ + [decorator]) return wrapper def wraps(wrapped, decorator, **kwargs): return functools.partial( update_wrapper, wrapped=wrapped, decorator=decorator, **kwargs ) def get_decorators(func): return getattr(func, "__decorators__", []) def is_decorated_by(func, decorator): return decorator in get_decorators(func) Usage: def test_decorator_1(function): @wraps(function, test_decorator_1) def wrapper(*args, **kwargs): return function(*args, **kwargs) return wrapper def test_decorator_2(function): @wraps(function, test_decorator_2) def wrapper(*args, **kwargs): return function(*args, **kwargs) return wrapper @test_decorator_1 @test_decorator_2 def foo(x: str, y: int) -> None: print(x, y) assert get_decorators(foo) == [test_decorator_2, test_decorator_1] assert is_decorated_by(foo, test_decorator_1) Custom @wraps Concept There is no built-in way for this as far as I know. All it takes to create a (functional) decorator is to define a function that takes another function as argument and returns a function. No information about that "outer" function is magically imprinted onto the returned function by virtue of decoration. However we can lean on the functools.wraps approach and simply roll our own variation of it. We can define it in such a way that it takes not just a reference to the wrapped function as argument, but also a reference to the outer decorator. The same way that functools.update_wrapper defines the additional __wrapped__ attribute on the wrapper it outputs, we can define our own custom __decorators__ attribute, which will be simply a list of all the decorators in the order of application (the reverse order of notation). Code The proper type annotations are a bit tricky, but here is a full working example: import functools from collections.abc import Callable from typing import Any, ParamSpec, TypeAlias, TypeVar P = ParamSpec("P") T = TypeVar("T") AnyFunc: TypeAlias = Callable[..., Any] def update_wrapper( wrapper: Callable[P, T], wrapped: AnyFunc, decorator: AnyFunc | None = None, assigned: tuple[str, ...] = functools.WRAPPER_ASSIGNMENTS, updated: tuple[str, ...] = functools.WRAPPER_UPDATES, ) -> Callable[P, T]: """ Same as `functools.update_wrapper`, but can also add `__decorators__`. If provided a `decorator` argument, it is appended to the the `__decorators__` attribute of `wrapper` before returning it. If `wrapper` has no `__decorators__` attribute, a list with just `decorator` in it is created and set as that attribute on `wrapper`. """ wrapper = functools.update_wrapper( wrapper, wrapped, assigned=assigned, updated=updated, ) if decorator is not None: __decorators__ = getattr(wrapper, "__decorators__", []) setattr(wrapper, "__decorators__", __decorators__ + [decorator]) return wrapper def wraps( wrapped: AnyFunc, decorator: AnyFunc | None, assigned: tuple[str, ...] = functools.WRAPPER_ASSIGNMENTS, updated: tuple[str, ...] 
= functools.WRAPPER_UPDATES ) -> Callable[[Callable[P, T]], Callable[P, T]]: """Same as `functools.wraps`, but uses custom `update_wrapper` inside.""" return functools.partial( update_wrapper, # type: ignore[arg-type] wrapped=wrapped, decorator=decorator, assigned=assigned, updated=updated, ) def get_decorators(func: AnyFunc) -> list[AnyFunc]: return getattr(func, "__decorators__", []) def is_decorated_by(func: AnyFunc, decorator: AnyFunc) -> bool: return decorator in get_decorators(func) def test() -> None: def test_decorator_1(function: Callable[P, T]) -> Callable[P, T]: @wraps(function, test_decorator_1) def wrapper(*args: P.args, **kwargs: P.kwargs) -> T: print(f"Called wrapper from {test_decorator_1.__name__}") return function(*args, **kwargs) return wrapper def test_decorator_2(function: Callable[P, T]) -> Callable[P, T]: @wraps(function, test_decorator_2) def wrapper(*args: P.args, **kwargs: P.kwargs) -> T: print(f"Called wrapper from {test_decorator_2.__name__}") return function(*args, **kwargs) return wrapper @test_decorator_1 @test_decorator_2 def foo(x: str, y: int) -> None: print(x, y) assert get_decorators(foo) == [test_decorator_2, test_decorator_1] assert is_decorated_by(foo, test_decorator_1) assert hasattr(foo, "__wrapped__") foo("a", 1) if __name__ == '__main__': test() The output is of course: Called wrapper from test_decorator_1 Called wrapper from test_decorator_2 a 1 Details and caveats With this approach, none of the original functionality of functools.wraps should be lost. Just like the original, this @wraps decorator obviously relies on you passing the correct arguments for the entire affair to make sense in the end. If you pass a nonsense argument to @wraps, it will add nonsense information to your wrapper. The difference is you now have to provide two function references instead of one, namely the function being wrapped (as before) and the outer decorator (or None if you want to suppress that information for some reason). So you would typically use it as @wraps(function, decorator). If you don't like that the decorator argument is mandatory, you could have it default to None. But I thought it was better this way, since the whole point is to have a consistent way of tracking who decorated whom, so omitting the decorator reference should be a conscious choice. Note that I chose to implement __decorators__ in that order because while they are written in the reverse order, they are applied in that order. So in this example foo is decorated with @test_decorator_2 first and then the wrapper that comes out of that is decorated with @test_decorator_1. It made more sense to me for our list to reflect that order. Static type checks With the given type annotations mypy --strict is happy as well and any IDE should still provide the auto-suggestions as expected. The only thing that threw me off, was that mypy complained at my usage of update_wrapper as argument for functools.partial. I could not figure out, why that was, so I added a # type: ignore there. NOTE: If you are on Python <3.10, you'll probably need to adjust the imports and take for example ParamSpec from typing_extensions instead. Also instead of T | None, you'll need to use typing.Optional[T] instead. Or upgrade your Python version. 🙂
5
1
74,233,332
2022-10-28
https://stackoverflow.com/questions/74233332/dataclass-optional-field-that-is-inferred-if-missing
I want my dataclass to have a field that can either be provided manually, or if it isn't, it is inferred at initialization from the other fields. MWE: from collections.abc import Sized from dataclasses import dataclass from typing import Optional @dataclass class Foo: data: Sized index: Optional[list[int]] = None def __post_init__(self): if self.index is None: self.index = list(range(len(self.data))) reveal_type(Foo.index) # Union[None, list[int]] reveal_type(Foo([1,2,3]).index) # Union[None, list[int]] How can this be implemented in a way such that: It complies with mypy type checking index is guaranteed to be of type list[int] I considered using default_factory(list), however, then how does one distinguish the User passing index=[] from the sentinel value? Is there a proper solution besides doing index: list[int] = None # type: ignore[assignment]
Use NotImplemented from collections.abc import Sized from dataclasses import dataclass @dataclass class Foo: data: Sized index: list[int] = NotImplemented def __post_init__(self): if self.index is NotImplemented: self.index = list(range(len(self.data)))
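A short usage check of the sentinel approach above, assuming the same Foo definition:

f1 = Foo(data=[10, 20, 30])
print(f1.index)              # [0, 1, 2]  -- inferred in __post_init__

f2 = Foo(data=[10, 20, 30], index=[])
print(f2.index)              # []  -- an explicit empty list is preserved

f3 = Foo(data="abc", index=[5])
print(f3.index)              # [5]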
4
2
74,227,476
2022-10-27
https://stackoverflow.com/questions/74227476/groupby-with-condition-count-mean
I'm having the dataframe: test = pd.DataFrame({'Date': [2020 - 12 - 30, 2020 - 12 - 30, 2020 - 12 - 30, 2020 - 12 - 31, 2020 - 12 - 31, 2021 - 0o1 - 0o1, 2021 - 0o1 - 0o1], 'label': ['Positive', 'Positive', 'Negative', 'Negative','Negative', 'Positive', 'Positive'], 'score': [70, 80, 50, 50, 30, 90, 70]}) Output: Date label score 2020-12-30 Positive 70 2020-12-30 Positive 80 2020-12-30 Negative 50 2020-12-31 Negative 50 2020-12-31 Negative 30 2021-01-01 Positive 90 2021-01-01 Positive 70 My goal is to group by the date and count the labels. In addition, the score should be computed only over the scores of whichever label is more frequent on that day. For example, if there are more positives than negatives on a day, it should calculate the mean of the positive scores without the negative scores, and the other way around. I thought about the function new_df = test.groupby(['Date', 'label']).agg({'label' : 'count', 'score' : 'mean'}) The output should be like this. Maybe the .pivot function would help? Date label new_score count_pos count_neg 2020-12-30 Positive 150 2 1 2020-12-31 Negative 80 0 2 2021-01-01 Positive 160 2 0 new_score = 70 + 80 = 150 of the two days with positive label count_pos = count "Positive" at day X count_neg = count "Negative" at day X I'm a beginner in Python and any help or hints on how to tackle this task are appreciated! Thanks!
Another possible solution, which is based on the following ideas: Doing a pivot table indexing only on Date and aggregating with sum and count. Using the pivot table to construct a dataframe with the wanted result. aux = df.pivot_table(index=['Date'], columns='label', values='score', aggfunc=[ 'sum', 'count'], fill_value=0) (pd.DataFrame({'label': aux['sum'].idxmax(axis=1), 'score': aux['sum'].max(axis=1), 'count_pos': aux['count']['Positive'], 'count_neg': aux['count']['Negative']}) .reset_index()) Output: Date label score count_pos count_neg 0 2020-12-30 Positive 150 2 1 1 2020-12-31 Negative 80 0 2 2 2021-01-01 Positive 160 2 0
5
1
74,224,196
2022-10-27
https://stackoverflow.com/questions/74224196/pythonmodelcontext-object-returned-from-mlflow-pyfunc-load-model-how-to-retr
I am creating a custom myflow.pyfunc object that I would like to save to MLFlow and retrieve later. I don't understand the relationship between the object that is saved with mlflow.pyfunc.save_model(), and the one that is retrieved with mlflow.pyfunc.load_model(). The loaded model is a 'PythonModelContext' object rather than my original python class. When I try to use the predict method in the loaded version I get an error. Here I initialise MLflow and create a dummy example of my class # load import os import tempfile from pathlib import Path import pandas as pd import mlflow from mlflow.tracking import MlflowClient import mlflow.pyfunc from mlflow.pyfunc import PythonModelContext # initialise MLFlow mlflow_var = os.getenv('HYMIND_REPO_TRACKING_URI') mlflow.set_tracking_uri(mlflow_var) client = MlflowClient() # Define the class that will be used for fit and predict (dummy example) class PredictSpeciality(mlflow.pyfunc.PythonModel): def fit(self): print('fit') d = {'col1': [1, 2], 'col2': [3, 4]} df = pd.DataFrame(data=d) return df def predict(self, X, y=None): print('predict') print(X.shape) return If I now run the class as it is the predict method works: # Use of this predictor before saving works fine m = PredictSpeciality() df = m.fit() m.predict(df) But if I save the model to the registry, and then re-load it, the predict method no longer works: counter +=1 exp_name = 'MLflow-test-' + str(counter) os.environ["MLFLOW_EXPERIMENT_NAME"] = exp_name experiment_id = mlflow.create_experiment(exp_name) mlflow.set_experiment(exp_name) experiment = dict(mlflow.get_experiment_by_name(exp_name)) experiment_id = experiment['experiment_id'] with mlflow.start_run(): # dummy code here for fitting a model m = PredictSpeciality() df = m.fit() # mark best run runs = mlflow.search_runs() best_run_id = runs['run_id'][0] # tag the best run and save model with mlflow.start_run(run_id=best_run_id): mlflow.set_tag('best_run_', 1) mlflow_model_path = f'/data/hymind/repo/{experiment_id}/{best_run_id}/artifacts/model/' mlflow.pyfunc.save_model(path=mlflow_model_path, python_model=m) # end experiment and register best model model_name = 'MLflow-test' + str(counter) registered_model = mlflow.register_model(f'runs:/{best_run_id}/model', model_name) # now attempt to make a prediction using the loaded model model_version = 1 m = mlflow.pyfunc.load_model(f"models:/{model_name}/{model_version}") m.predict(df) In this case, I get the attribute error AttributeError: 'PythonModelContext' object has no attribute 'shape' How do I get the original model back from the 'PythonModelContext' object?
If you take a close look at the signature of the abstract method predict() in the mlflow.pyfunc.PythonModel class that you are extending, you will see that has 3 parameters: def predict(self, context, model_input): So, if you change your simple class to have the extra parameter context, your example should work: class PredictSpeciality(mlflow.pyfunc.PythonModel): def fit(self): print('fit') d = {'col1': [1, 2], 'col2': [3, 4]} df = pd.DataFrame(data=d) return df def predict(self, context, X, y=None): print('predict') print(X.shape) return To elaborate a bit more on what is going on here: There are 2 classes at play: mlflow.pyfunc.PythonModel and mlflow.pyfunc.PyFuncModel. The mlflow.pyfunc.PythonModel is being wrapped by the mlflow.pyfunc.PyFuncModel. The former is doing the actual work and the latter is dealing with the metadata, packaging, conda environment, etc. In the documentation it is explained like so: Python function models are loaded as an instance of mlflow.pyfunc.PyFuncModel, which is an MLflow wrapper around the model implementation and model metadata (MLmodel file). Unfortunately, the documentation also states that you cannot create a PyFuncModel directly, but only Wrapper around model implementation and metadata. This class is not meant to be constructed directly. Instead, instances of this class are constructed and returned from mlflow.pyfunc.load_model(). I find that quite limiting and am unsure why it was designed this way, however, there are 2 things that you can do here: Pass in an extra parameter when directly dealing with your wrapped class: m.predict(None, df) Save and load the model to get an mlflow.pyfunc.PyFuncModel: mlflow.pyfunc.save_model(path="temp_model", python_model=m) m2 = mlflow.pyfunc.load_model("temp_model") m2.predict(df) I know it isn't elegant, but I actually have been using #2 in the past. It would be good if someone from the MLFlow team could comment on why direct creation of a mlflow.pyfunc.PyFuncModel is not supported.
4
2
74,229,371
2022-10-27
https://stackoverflow.com/questions/74229371/how-to-get-previous-min-and-max-values-in-new-columns-for-each-group
In each new row of dataframe, I need to keep track of min and max values for a previous group of records. Create dataframe with input data: import pandas as pd columns = ['timestamp','groupid','value'] data = [['2022-10-14 11:47:38',1000,200], ['2022-10-14 11:47:39',1000,210], ['2022-10-14 11:47:40',1000,220], ['2022-10-14 11:47:41',1000,230], ['2022-10-14 11:47:42',1001,240], ['2022-10-14 11:47:43',1001,250], ['2022-10-14 11:47:44',1002,260], ['2022-10-14 11:47:45',1002,270]] df = pd.DataFrame(data=data,columns=columns) print(df) Calculate min and max values for each group: df['min'] = df.groupby('groupid')['value'].transform('min') df['max'] = df.groupby('groupid')['value'].transform('max') Create new columns in dataframe: df['pmin'] = 0 df['pmax'] = 0 df['ppmin'] = 0 df['ppmax'] = 0 df['ppmin'] = df.groupby('groupid')['pmin'].transform('first').shift(1) df['ppmax'] = df.groupby('groupid')['pmax'].transform('first').shift(1) df['pmin'] = df.groupby('groupid')['min'].transform('first').shift(1) df['pmax'] = df.groupby('groupid')['max'].transform('first').shift(1) print(df) The code in step 3 fails to return expected result as shown below: columns2 = ['timestamp','groupid','value','min','max','pmin','pmax','ppmin','ppmax'] data2 = [['2022-10-14 11:47:38',1000,200,200,230,0,0,0,0], ['2022-10-14 11:47:39',1000,210,200,230,0,0,0,0], ['2022-10-14 11:47:40',1000,220,200,230,0,0,0,0], ['2022-10-14 11:47:41',1000,230,200,230,0,0,0,0], ['2022-10-14 11:47:42',1001,240,240,250,200,230,0,0], ['2022-10-14 11:47:43',1001,250,240,250,200,230,0,0], ['2022-10-14 11:47:44',1002,260,260,270,240,250,200,230], ['2022-10-14 11:47:45',1002,270,260,270,240,250,200,230]] df = pd.DataFrame(data=data2,columns=columns2) print(df)
Maybe not the prettiest solution: df["tmp"] = (df["groupid"] != df["groupid"].shift()).cumsum() grps = df.groupby("groupid")["value"] df["min"] = grps.transform("min") df["max"] = grps.transform("max") mapper = df.drop_duplicates("tmp").set_index("tmp")[["min", "max"]] tmp1 = df["tmp"] - 1 tmp2 = df["tmp"] - 2 df["pmin"] = tmp1.map(mapper["min"]) df["pmax"] = tmp1.map(mapper["max"]) df["ppmin"] = tmp2.map(mapper["min"]) df["ppmax"] = tmp2.map(mapper["max"]) print(df.fillna(0).drop(columns="tmp")) Prints: timestamp groupid value min max pmin pmax ppmin ppmax 0 2022-10-14 11:47:38 1000 200 200 230 0.0 0.0 0.0 0.0 1 2022-10-14 11:47:39 1000 210 200 230 0.0 0.0 0.0 0.0 2 2022-10-14 11:47:40 1000 220 200 230 0.0 0.0 0.0 0.0 3 2022-10-14 11:47:41 1000 230 200 230 0.0 0.0 0.0 0.0 4 2022-10-14 11:47:42 1001 240 240 250 200.0 230.0 0.0 0.0 5 2022-10-14 11:47:43 1001 250 240 250 200.0 230.0 0.0 0.0 6 2022-10-14 11:47:44 1002 260 260 270 240.0 250.0 200.0 230.0 7 2022-10-14 11:47:45 1002 270 260 270 240.0 250.0 200.0 230.0
3
1
74,230,719
2022-10-28
https://stackoverflow.com/questions/74230719/how-to-use-unique-constraint-for-same-models
I want to create only one object for the same users. class MyModel(models.Model): user1 = models.ForeignKey(settings.AUTH_USER_MODEL,...) user2 = models.ForeignKey(settings.AUTH_USER_MODEL,...) class Meta: constraints = [ UniqueConstraint( fields=['user1', 'user2'], name='user_unique', ), # UniqueConstraint( # fields=['user2', 'user1'], # name='user_unique2', # ), ] I can solve the problem in another way, I just want to know how to do it with UniqueConstraint. Adding another UniqueConstraint and moving the fields didn't solve the problem. For example, for users X and Y, I only need one object.
Since Django 4.0, constraints support expressions, which lets us use database functions in a constraint and therefore build a unique constraint over the sorted pair of fields: from django.db.models.functions import Least, Greatest class MyModel(models.Model): user1 = models.ForeignKey(settings.AUTH_USER_MODEL,...) user2 = models.ForeignKey(settings.AUTH_USER_MODEL,...) class Meta: constraints = [ UniqueConstraint( Least("user1", "user2"), Greatest("user1", "user2"), name='user_unique', ) ]
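The reason this works, sketched in plain Python (the database evaluates Least/Greatest the same way, so (X, Y) and (Y, X) normalize to the same key):

def normalized_pair(user1_id, user2_id):
    # same idea as Least(...) / Greatest(...) inside the constraint
    return min(user1_id, user2_id), max(user1_id, user2_id)

print(normalized_pair(7, 3))  # (3, 7)
print(normalized_pair(3, 7))  # (3, 7)  -- a duplicate under the unique constraint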
5
3
74,202,814
2022-10-26
https://stackoverflow.com/questions/74202814/in-python-create-index-from-flat-representation-of-nested-structure-in-a-list
I have lists where each entry is representing a nested structure, where / represents each level in the structure. ['a','a/b/a','a/b','a/b/d',....] I want to take such a list and return an index list where each level is sorted in alphabetical order. If we had the following list ['a','a/b','a/b/a','a/c','a/c/a','b'] It represents the nested structure 'a': #1 'b': #1.1 'a': ... #1.1.1 'c': #1.2 'a': ... #1.2.1 'b' : ... #2 I am trying to get the output ['1','1.1','1.1.1', '1.2','1.2.1','2'] But I am having real issue on how to tackle the problem, would it be solved recursively? Or what would be a way to solve this for any generic list where each level is separated by /? The list is originally not necessarily sorted, and each level can be any generic word.
Since the goal is to simply convert the paths to indices according to their respective positions against other paths of the same prefix, there is no need to build a tree at all. Instead, iterate over the paths in alphabetical order while using a dict of sets to keep track of the prefixes at each level of paths, and join the lengths of sets at each level for output: def indices(paths): output = {} names = {} for index, path in sorted(enumerate(paths), key=lambda t: t[1]): counts = [] prefixes = tuple(path.split('/')) for level, name in enumerate(prefixes): prefix = prefixes[:level] names.setdefault(prefix, set()).add(name) counts.append(len(names[prefix])) output[index] = '.'.join(map(str, counts)) return list(map(output.get, range(len(output)))) so that: print(indices(['a', 'a/b', 'a/b/a', 'a/c', 'a/c/a', 'b'])) print(indices(['a', 'c', 'b', 'a/b'])) print(indices(['a/b/c/d', 'a/b/d', 'a/b/c'])) print(indices(['abc/d', 'bcc/d'])) print(indices(['apple/cat','apple/dog', 'banana/dog'])) outputs: ['1', '1.1', '1.1.1', '1.2', '1.2.1', '2'] ['1', '3', '2', '1.1'] ['1.1.1.1', '1.1.2', '1.1.1'] ['1.1', '2.1'] ['1.1', '1.2', '2.1'] Demo: https://replit.com/@blhsing/StainedMassivePi
5
1
74,214,343
2022-10-26
https://stackoverflow.com/questions/74214343/minimum-amount-of-numbers-to-change-inside-of-nn-matrix-to-make-it-symmetrical
There is a n*n matrix made of numbers 0 - 9. For example: 6 0 0 8 9 6 1 5 1 6 8 1 1 0 4 2 1 3 7 1 5 8 8 6 6 2 5 2 7 9 4 6 9 6 4 1 4 7 8 5 3 8 9 4 8 3 9 2 9 I need to find the minimum amount of numbers to change (inside the matrix) to make it symmetrical about multiple lines (/, \, -, |) at once. I made four functions for every symmetry (/, \, -, |). They make lists of two numbers that need to have the same value. The functions look like this: import math """lenght = how many numbers in one line""" def symmetry_horizontaln(lenght): for i in range(math.ceil(lenght/2)): for j in range(lenght): round_result = [] round_result.append((i, j)) round_result.append((lenght-i-1, j)) def symmetry_vertical(lenght): for i in range(lenght): for j in range(lenght): if j < math.ceil(lenght/2): round_result= [] round_result.append((i, j)) round_result.append((i, lenght-j-1)) def symmetry_main_diagonal(lenght): for i in range(lenght): for j in range(lenght): if j <= i: round_result= [] round_result.append((i, j)) round_result.append((j, i)) def symmetry_second_diagonal(lenght): for i in range(lenght): for j in range(lenght): if j <= lenght-i-1: round_result= [] round_result.append((i, j)) round_result.append((lenght-j-1, lenght-i-1)) Now i need to somehow put the round results together so i get lists of points in the matrix that need to have the same value. For example if there is a matrix like this: 6 0 0 8 9 6 1 5 1 6 8 1 1 0 4 2 1 3 7 1 5 8 8 6 6 2 5 2 7 9 4 6 9 6 4 1 4 7 8 5 3 8 9 4 8 3 9 2 9 and it needs to be symatrical about these lines: - and /, the final list would look like this: [[(6, 2), (4, 0), (0, 4), (4, 6), (2, 0), (6, 4), (0, 2), (2, 6)], [(2, 3), (3, 2), (3, 4), (4, 3)], [(3, 1), (5, 3), (1, 3), (3, 5)], [(1, 2), (2, 1), (5, 4), (1, 4), (4, 5), (2, 5), (4, 1), (5, 2)], [(1, 1), (5, 5), (5, 1), (1, 5)], [(0, 1), (6, 5), (6, 1), (5, 0), (5, 6), (0, 5), (1, 0), (1, 6)], [(4, 4), (2, 4), (4, 2), (2, 2)], [(6, 3), (0, 3), (3, 6), (3, 0)], [(3, 3)], [(6, 6), (6, 0), (0, 6), (0, 0)]] from this list i know how to get the minimum amount of numbers to change to make i symmetrical about those lines. I think its 30. But i have no idea how to get that list. This is what i tried: I modified the previous functions like this (last two lines in every function are new): def symmetry_horizontaln(lenght): for i in range(math.ceil(lenght/2)): for j in range(lenght): round_result = [] round_result.append((i, j)) round_result.append((lenght-i-1, j)) function((i, j), round_result) function((delka-i-1, j), round_result) def symetry_horizontal(lenght): ... function((i, j), round_result) function((i, lenght-j-1), round_result) ... and made a new function to get the final list: x = [] def function(point, round_result): if round_result[0] == round_result[1]: round_result = list(set(round_result)) if not x: x.append(round_result) else: for i in x: if i == round_result: pass elif point in i: i.append(round_result[0]) i.append(round_result[1]) else: x.append(round_result) break print(x) but it desn´t work. Any help is appreciated!
You can try this: import numpy as np import scipy a = np.array([[6, 0, 0, 8, 9, 6, 1], [5, 1, 6, 8, 1, 1, 0], [4, 2, 1, 3, 7, 1, 5], [8, 8, 6, 6, 2, 5, 2], [7, 9, 4, 6, 9, 6, 4], [1, 4, 7, 8, 5, 3, 8], [9, 4, 8, 3, 9, 2, 9]]) def symm(a, pattern="hvdc"): b = np.stack([np.rot90(a, k=i) for i in range(4)]) b = np.concatenate([b, b.swapaxes(1, 2)]) subgps = {frozenset(s) : [0, i] for i, s in enumerate("dvch", 4)} subgps |= {frozenset("vh"): [0, 2, 5, 7], frozenset("dc"): [0, 2, 4, 6]} if (p := frozenset(pattern)) in subgps: b = b[subgps[p]] return scipy.stats.mode(b, keepdims=False)[0] This function returns a matrix which has all specified symmetries: h = horizontal, v = vertical, d = diagonal (i.e. \), c = cross-diagonal (i.e. /). The characters -|/\ are not convenient as a function argument, since e.g. "\" is not a valid Python string. For example, to get a matrix symmetric with respect to both diagonals one can use print(symm(a, "dc")) Which gives: [[6 0 4 8 5 0 1] [0 1 6 8 1 1 0] [4 6 1 6 4 1 5] [8 8 6 6 6 8 8] [5 1 4 6 1 6 4] [0 1 1 8 6 1 0] [1 0 5 8 4 0 6]] The symmetrized matrix obtained in this way has the minimal number of entries changed. Such matrix is not unique, since in some places there may be a choice which number to use. Then one can check how many entries were changed: print((a != symm(a, "dc")).sum()) It gives: 26 The list with coordinates of entries that will have the same value after symmetrization can be obtained as follows (in this example for the "dc" symmetries, i.e. with respect to both diagonals): n = a.shape[0] b = symm(np.arange(n**2).reshape(n, n), "dc") orbits = [list(zip(*np.where(b == i))) for i in np.unique(b)] for c in orbits: print(c) This gives: [(0, 0), (6, 6)] [(0, 1), (1, 0), (5, 6), (6, 5)] [(0, 2), (2, 0), (4, 6), (6, 4)] [(0, 3), (3, 0), (3, 6), (6, 3)] [(0, 4), (2, 6), (4, 0), (6, 2)] [(0, 5), (1, 6), (5, 0), (6, 1)] [(0, 6), (6, 0)] [(1, 1), (5, 5)] [(1, 2), (2, 1), (4, 5), (5, 4)] [(1, 3), (3, 1), (3, 5), (5, 3)] [(1, 4), (2, 5), (4, 1), (5, 2)] [(1, 5), (5, 1)] [(2, 2), (4, 4)] [(2, 3), (3, 2), (3, 4), (4, 3)] [(2, 4), (4, 2)] [(3, 3)]
4
2
74,228,995
2022-10-27
https://stackoverflow.com/questions/74228995/how-to-pass-a-json-or-dict-into-a-dataframe-with-pandas
I have this JSON/DICT in Python and I need to pass it to a dataframe: { "filters": [ { "field": "example1", "operation": "like", "values": [ "Completed" ] }, { "field": "example2", "operation": "like", "values": [ "value1", "value2", "value3", ] } ] } DF that I need: example1 example2 Completed ["value1","value2","value3"]
Try: dct = { "filters": [ {"field": "example1", "operation": "like", "values": ["Completed"]}, { "field": "example2", "operation": "like", "values": [ "value1", "value2", "value3", ], }, ] } df = pd.DataFrame( [ { f["field"]: f["values"][0] if len(f["values"]) == 1 else f["values"] for f in dct["filters"] } ] ) print(df) Prints: example1 example2 0 Completed [value1, value2, value3]
3
3
74,225,875
2022-10-27
https://stackoverflow.com/questions/74225875/functools-cache-notify-that-the-result-is-cached
import functools @functools.cache def get_some_results(): return results Is there a way to notify the user of the function that the results they are getting are a cached version of the original for any other time they are calling the function?
This isn't a perfect approach, but you could use a custom decorator instead of @functools.cache which would then wrap your function with functools.cache and gather the cache stats before and after the call to determine if the lookup resulted in a cache hit. This was hastily thrown together but seems to work: def cache_notify(func): func = functools.cache(func) def notify_wrapper(*args, **kwargs): stats = func.cache_info() hits = stats.hits results = func(*args, **kwargs) stats = func.cache_info() if stats.hits > hits: print(f"NOTE: {func.__name__}() results were cached") return results return notify_wrapper As an example of a simple function: @cache_notify def f(x): return -x print("calling f") print(f"f(1) returned {f(1)}") print("calling f again") print(f"f(1) returned {f(1)}") Results: calling f f(1) returned -1 calling f again NOTE: f() results were cached f(1) returned -1 How to "notify the user" can be tailored as needed. Also note that it's possible for the cache stats to be a bit misleading in a multi-threaded environment; see Python lru_cache: how can currsize < misses < maxsize? for details.
3
5
74,226,092
2022-10-27
https://stackoverflow.com/questions/74226092/how-to-unpack-a-variable-in-a-lambda-function
Given an input tuple, the goal is to produce a dictionary with some pre-defined keys, e.g. in this case, we have an add_header lambda and use the unpacking inside when calling the function. >>> z = (2, 1) >>> add_header = lambda x, y: {"EVEN": x, "ODD": y} >>> add_header(*z) {'EVEN': 2, 'ODD': 1} My question is, is there way where the unpacking doesn't need to done when calling the add_header function? E.g. I can change avoid the lambda and do it in a normal function: >>> def add_header(w): ... x, y = w ... return {"EVEN": x, "ODD":y} ... >>> z = (2, 1) >>> add_header(z) {'EVEN': 2, 'ODD': 1} Or I could not use the unpacking and use the index of the tuple, i.e. the w[0] and w[1]: >>> z = (2, 1) >>> add_header = lambda w: {"EVEN": w[0], "ODD": w[1]} >>> add_header(z) {'EVEN': 2, 'ODD': 1} But is there some way to: Use the lambda Don't explicitly use the indexing in the tuple Don't unpack with * when calling the add_header() function but it's okay to be inside the lambda function. and still achieve the same {'EVEN': 2, 'ODD': 1} output given z = (2, 1) input? I know this won't work but does something like this exist? z = (2, 1) add_header = lambda x, y from *w: {"EVEN": x, "ODD": y} add_header(z)
You can try using dict() with zip(): z = (2, 1) add_header = lambda tpl: dict(zip(("EVEN", "ODD"), tpl)) print(add_header(z)) Prints: {'EVEN': 2, 'ODD': 1}
3
2
74,222,179
2022-10-27
https://stackoverflow.com/questions/74222179/how-to-split-a-numpy-array-of-integers-into-chunks-that-have-successive-values
I have the following numpy array with positive integers, in ascending order: import numpy as np arr = np.array([222, 225, 227, 228, 230, 232, 241, 243, 244, 245, 252, 253, 258]) I want to split it, into parts, where at each part, each number has maximum difference of 2 from the next one. So the following array should be split as: [[222], [225,227,228,230,232], [241, 243, 244, 245], [252, 253], [258]] Any ideas how I can I achieve that ?
You can compute the diff, get the indices of differences above threshold with flatnonzero, and split with array_split: threshold = 2 out = np.array_split(arr, np.flatnonzero(np.diff(arr)>threshold)+1) output: [array([222]), array([225, 227, 228, 230, 232]), array([241, 243, 244, 245]), array([252, 253]), array([258])]
4
5
74,221,419
2022-10-27
https://stackoverflow.com/questions/74221419/pandas-merging-two-dfs-with-different-amount-of-rows
I have two dataframes that both have a column that can have the same number/value in it. One 'small df with ~300 rows (which is my leading file) and 1 df with ~ 5000 rows. I want to merge on 1 column but I cannot get the same amount of rows when I print the data. first (small) dataframe (left): import pandas as pd df1 = pd.read_excel('./file.xlsx') df1 = df.replace(' ', np.nan) df1.head() col1 row1 123456 row2 123457 row3 123458 row4 123459 row5 123450 second (big) df (right): import pandas as pd df2 = pd.read_excel('./file2.xlsx') df2 = df.replace(' ', np.nan) df2 col1 col2 row1 123456 hello1 row2 123457 hello2 row3 123458 hello3 row4 123459 hello4 row5 123450 hello4 row7 555555 street1 row8 666666 street1 row9 777777 street1 I tried: merged = pd.merge(left=df1, right=df2, how='inner', left_on='col1', right_on='col1') print("Orginele data", len(df1)) print("Merged data", len(df2)) When I print I get like 30k rows in the left df but I only want to see the rows used in the left df (~300 rows). Most of them are NaN's. I tried changing the 'how=' but that did not work. I also checked the post "Merging 101" but can't seem to figure this out. Expected result in left (small) dataframe: col1 col2 row1 123456 hello1 row2 123457 hello2 row3 123458 hello3 row4 123459 hello4 row5 123450 hello4 Appreciate the help and effort. Thank you!
Try DataFrame.join; you can specify how='left', which is the default: import pandas as pd df = pd.DataFrame({"a": [0,0,1,1,2,2,2,]}) df2 = pd.DataFrame({"a": [0, 1,2,3,4,5,6,7,8,9], "b": list("abcdefghij")}) df.join(df2, on="a", lsuffix="df_a", rsuffix="df_b") # output adf_a adf_b b 0 0 0 a 1 0 0 a 2 1 1 b 3 1 1 b 4 2 2 c 5 2 2 c 6 2 2 c
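For the column layout in the question itself, a left merge keyed on the small frame gives one output row per row of df1 (a sketch with made-up data assuming both frames have a col1 column, as in the question):

import pandas as pd

df1 = pd.DataFrame({'col1': [123456, 123457, 123458]})
df2 = pd.DataFrame({'col1': [123456, 123457, 123458, 555555, 666666],
                    'col2': ['hello1', 'hello2', 'hello3', 'street1', 'street1']})

merged = df1.merge(df2, on='col1', how='left')
print(merged)        # exactly len(df1) rows, provided col1 is unique in df2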
4
5
74,206,978
2022-10-26
https://stackoverflow.com/questions/74206978/why-does-this-specific-code-run-faster-in-python-3-11
I have the following code in a Python file called benchmark.py: source = """ for i in range(1000): a = len(str(i)) """ import timeit print(timeit.timeit(stmt=source, number=100000)) When I tried to run with multiple python versions I am seeing a drastic performance difference. C:\Users\Username\Desktop>py -3.10 benchmark.py 16.79652149998583 C:\Users\Username\Desktop>py -3.11 benchmark.py 10.92280820000451 As you can see this code runs faster with python 3.11 than previous Python versions. I tried to disassemble the bytecode to understand the reason for this behaviour but I could only see a difference in opcode names (CALL_FUNCTION is replaced by PRECALL and CALL opcodes). I am quite not sure if that's the reason for this performance change. so I am looking for an answer that justifies with reference to cpython source code. python 3.11 bytecode: 0 0 RESUME 0 2 2 PUSH_NULL 4 LOAD_NAME 0 (range) 6 LOAD_CONST 0 (1000) 8 PRECALL 1 12 CALL 1 22 GET_ITER >> 24 FOR_ITER 22 (to 70) 26 STORE_NAME 1 (i) 3 28 PUSH_NULL 30 LOAD_NAME 2 (len) 32 PUSH_NULL 34 LOAD_NAME 3 (str) 36 LOAD_NAME 1 (i) 38 PRECALL 1 42 CALL 1 52 PRECALL 1 56 CALL 1 66 STORE_NAME 4 (a) 68 JUMP_BACKWARD 23 (to 24) 2 >> 70 LOAD_CONST 1 (None) 72 RETURN_VALUE python 3.10 bytecode: 2 0 LOAD_NAME 0 (range) 2 LOAD_CONST 0 (1000) 4 CALL_FUNCTION 1 6 GET_ITER >> 8 FOR_ITER 8 (to 26) 10 STORE_NAME 1 (i) 3 12 LOAD_NAME 2 (len) 14 LOAD_NAME 3 (str) 16 LOAD_NAME 1 (i) 18 CALL_FUNCTION 1 20 CALL_FUNCTION 1 22 STORE_NAME 4 (a) 24 JUMP_ABSOLUTE 4 (to 8) 2 >> 26 LOAD_CONST 1 (None) 28 RETURN_VALUE PS: I understand that python 3.11 introduced bunch of performance improvements but I am curios to understand what optimization makes this code run faster in python 3.11
There's a big section in the "what's new" page labeled "faster runtime". It looks like the most likely cause of the speedup here is PEP 659, which is a first start towards JIT optimization (perhaps not quite JIT compilation, but definitely JIT optimization). Particularly, the lookup and call for len and str now bypass a lot of dynamic machinery in the overwhelmingly common case where the built-ins aren't shadowed or overridden. The global and builtin dict lookups to resolve the name get skipped in a fast path, and the underlying C routines for len and str are called directly, instead of going through the general-purpose function call handling. You wanted source references, so here's one. The str call will get specialized in specialize_class_call: if (tp->tp_flags & Py_TPFLAGS_IMMUTABLETYPE) { if (nargs == 1 && kwnames == NULL && oparg == 1) { if (tp == &PyUnicode_Type) { _Py_SET_OPCODE(*instr, PRECALL_NO_KW_STR_1); return 0; } where it detects that the call is a call to the str builtin with 1 positional argument and no keywords, and replaces the corresponding PRECALL opcode with PRECALL_NO_KW_STR_1. The handling for the PRECALL_NO_KW_STR_1 opcode in the bytecode evaluation loop looks like this: TARGET(PRECALL_NO_KW_STR_1) { assert(call_shape.kwnames == NULL); assert(cframe.use_tracing == 0); assert(oparg == 1); DEOPT_IF(is_method(stack_pointer, 1), PRECALL); PyObject *callable = PEEK(2); DEOPT_IF(callable != (PyObject *)&PyUnicode_Type, PRECALL); STAT_INC(PRECALL, hit); SKIP_CALL(); PyObject *arg = TOP(); PyObject *res = PyObject_Str(arg); Py_DECREF(arg); Py_DECREF(&PyUnicode_Type); STACK_SHRINK(2); SET_TOP(res); if (res == NULL) { goto error; } CHECK_EVAL_BREAKER(); DISPATCH(); } which consists mostly of a bunch of safety prechecks and reference fiddling wrapped around a call to PyObject_Str, the C routine for calling str on an object. Python 3.11 includes many other performance enhancements besides the above, including optimizations to stack frame creation, method lookup, common arithmetic operations, interpreter startup, and more. Most code should run much faster now, barring things like I/O-bound workloads and code that spent most of its time in C library code (like NumPy).
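If you want to watch this specialization happen on your own machine, the dis module in Python 3.11 can display the adaptive (specialized) instructions once the code has warmed up. A small sketch (the exact opcode names may differ between 3.11.x releases):
import dis

def f(i):
    return len(str(i))

for i in range(1000):       # run it enough times for the specializing interpreter to kick in
    f(i)

dis.dis(f, adaptive=True)   # Python 3.11+: shows specialized forms such as PRECALL_NO_KW_STR_1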
6
7
74,214,619
2022-10-26
https://stackoverflow.com/questions/74214619/how-to-use-tkinter-after-method-to-delay-a-loop-instead-time-sleep
I´m trying to create a simple thing: a loop with a delay of x seconds between iterations, triggered by a Tkinter button command. The obvious answer is to use time.sleep(), however, this actually freezes the mainloop process, avoiding other events to be captured. I´ve searched and the recommendation is to use the tkinter.after() method, however, I still can't make the loop take time between iterations. Any help? Simplified code is below. import tkinter as tk import tkinter.scrolledtext as st import time # function to be activated by button def do_some_action(): for i in range(10): # just write some variable text to check if it is working txt_bigtextlog.insert(tk.END,'Row text {} off 10\n'.format(i)) # tk.END to point the scrolling text to latest line txt_bigtextlog.see(tk.END) # I´ve tried w/o success (1000 is miliseconds): # mywindowapp.after(1000) # btn_action.after(1000) time.sleep(1) mywindowapp.update() return() # Create the application main window mywindowapp = tk.Tk() # create some label, just to visualize something lbl_justsomelabel = tk.Label(text='Just some label here') lbl_justsomelabel.grid(row=0,column=0,sticky='NSEW',padx=10,pady=10) # create a button, just so simulate loop triggering btn_action = tk.Button(text='Start process',command=do_some_action) btn_action.grid(row=1,column=0,sticky='NSEW',padx=10,pady=10) # create a scrolling text just to do some example iterable action txt_bigtextlog = st.ScrolledText(mywindowapp,width = 30,height = 8) txt_bigtextlog.grid(row=2,column = 0, columnspan=3,sticky='NSEW', pady = 10, padx = 10) txt_bigtextlog.insert(tk.INSERT,'') mywindowapp.mainloop()
I would suggest you to use the tksleep for this task. While @Bryan Oakley's answer is the canonical and produces lesser overhead, tksleep can ease things out by a lot. You can have a control flow over multiple for-loops or even while loops are possible with this technique. Take this example text here: example = ''' Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Elit ut aliquam purus sit amet luctus venenatis. Viverra tellus in hac habitasse platea dictumst vestibulum. Dignissim suspendisse in est ante in nibh mauris. Rhoncus aenean vel elit scelerisque mauris pellentesque pulvinar. Lorem ipsum dolor sit amet consectetur adipiscing elit ut aliquam. Volutpat maecenas volutpat blandit aliquam etiam erat. At urna condimentum mattis pellentesque id nibh tortor. Eu ultrices vitae auctor eu augue ut lectus arcu. Et odio pellentesque diam volutpat commodo sed egestas. Nam libero justo laoreet sit amet cursus sit. Arcu bibendum at varius vel. Aliquam id diam maecenas ultricies mi eget mauris pharetra et. Eget nullam non nisi est sit. Volutpat ac tincidunt vitae semper quis lectus. Enim sit amet venenatis urna. Vitae proin sagittis nisl rhoncus mattis. Justo eget magna fermentum iaculis eu non diam phasellus vestibulum. Scelerisque in dictum non consectetur a erat nam. Neque vitae tempus quam pellentesque nec nam aliquam sem et. Nunc faucibus a pellentesque sit amet porttitor. A scelerisque purus semper eget duis. Proin libero nunc consequat interdum varius sit. Nunc sed velit dignissim sodales ut eu sem integer vitae. Elementum curabitur vitae nunc sed velit. Vulputate odio ut enim blandit volutpat maecenas volutpat. Etiam dignissim diam quis enim lobortis scelerisque fermentum dui faucibus. Eu nisl nunc mi ipsum faucibus vitae aliquet nec ullamcorper. Varius duis at consectetur lorem donec. Suspendisse potenti nullam ac tortor vitae. Nisi porta lorem mollis aliquam ut porttitor leo a. Vitae sapien pellentesque habitant morbi tristique senectus et. Porttitor massa id neque aliquam vestibulum. Dui accumsan sit amet nulla facilisi morbi tempus iaculis urna. ''' and look how easy it is, to write an animation of writing with just a few lines of code. Of course it's still based on .after but think about the workarounds you would need to write code that behaves exactly the same as the following with only after and you will notice, that this technique has some serious benefits. import tkinter as tk def tksleep(t): 'emulating time.sleep(seconds)' ms = int(t*1000) root = tk._get_default_root() var = tk.IntVar(root) root.after(ms, lambda: var.set(1)) root.wait_variable(var) def animated_write(line): for char in line: textbox.insert(tk.END, char) tksleep(0.1) textbox.insert(tk.END, '\n') def read_text(text): for line in text.splitlines(): animated_write(line) root = tk.Tk() textbox = tk.Text(root) textbox.pack() read_text(example) root.mainloop()
4
2
74,217,557
2022-10-27
https://stackoverflow.com/questions/74217557/the-precision-of-decimal-library-in-python
From the documentation page of Decimal, I thought that once we use decimal to compute, it'll be a correct result without any floating error. But when I try this equation from decimal import Decimal, getcontext getcontext().prec = 250 a = Decimal('6') b = Decimal('500000') b = a ** b print('prec: ' + str(getcontext().prec) + ', ', end='') print(b.ln() / a.ln()) It gives me different result! I want to calculate the digit of 6**500000 in base-6 representation, so my expect result would be int(b.ln() / a.ln()) + 1, which should be 500001. However, when I set the prec to 250, it gives me the wrong result. How can I solve this? Also, if I want to output the result without the scientific notation (i.e. 5E+5), what should I do?
The documentation shows the parameters of the context: simply executing getcontext() displays its built-in parameters.
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
Setting getcontext().prec = 250 only overwrites the prec value. The other parameter that affects the result is rounding, right after prec, which defaults to rounding=ROUND_HALF_EVEN. It cannot be set to None (that raises an error), so no matter what you do, every intermediate result is rounded according to this mode and the final digits can still change slightly.
Note: your result may also be affected by the other built-in parameters.
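To make this concrete, here is a minimal sketch (with a deliberately small prec just for speed, reusing the 6**500000 computation from the question) showing that the rounding mode is part of the context and can be changed per computation with localcontext:
from decimal import Decimal, getcontext, localcontext, ROUND_FLOOR

getcontext().prec = 50
a = Decimal(6)
b = a ** 500000                    # rounded to 50 significant digits, not exact

print(getcontext())                # shows prec *and* rounding=ROUND_HALF_EVEN, among others

with localcontext() as ctx:        # temporarily use a tweaked context
    ctx.rounding = ROUND_FLOOR
    print(b.ln() / a.ln())         # the same ratio under a different rounding mode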
6
2
74,214,488
2022-10-26
https://stackoverflow.com/questions/74214488/check-that-function-return-types-match-the-def-statements-in-pr-test-in-python
I have a Github Action that runs unit tests on each Pull Request (PR). It effectively runs pytest. Seeing that we leverage the type hints introduced in PEP 484, I'd like a method like this to cause a PR check fail: def return_an_int() -> int: return 'not an int' Is there a simple way to run such a "compilation" test (I know python is interpreted) for all the .py files in the project?
This is the kind of type checking that mypy does, so you could include running mypy in your Action workflow. If your module and its dependencies are already installed in your workflow, this step should do the trick: - name: Run mypy run: | # needed if mypy isn't already installed pip install mypy # "mypy somedir" looks for and checks all Python files under somedir # I assume your code is in some source directory <src-dir> mypy <src-dir> And that should be all you need, placed after the steps that install your code base.
4
4
74,215,857
2022-10-27
https://stackoverflow.com/questions/74215857/how-to-set-a-pandas-periodindex-with-yearly-frequency
I am able to create quarterly and monthly PeriodIndex like so: idx = pd.PeriodIndex(year=[2000, 2001], quarter=[1,2], freq="Q") # quarterly idx = pd.PeriodIndex(year=[2000, 2001], month=[1,2], freq="M") # monthly I would expect to be able to create a yearly PeriodIndex like so: idx = pd.PeriodIndex(year=[2000, 2001], freq="Y") Instead this throws the following error: Traceback (most recent call last): File ".../script.py", line 3, in <module> idx = pd.PeriodIndex(year=[2000, 2001], freq="Y") File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/core/indexes/period.py", line 250, in __new__ data, freq2 = PeriodArray._generate_range(None, None, None, freq, fields) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/core/arrays/period.py", line 316, in _generate_range subarr, freq = _range_from_fields(freq=freq, **fields) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pandas/core/arrays/period.py", line 1160, in _range_from_fields ordinals.append(libperiod.period_ordinal(y, mth, d, h, mn, s, 0, 0, base)) File "pandas/_libs/tslibs/period.pyx", line 1109, in pandas._libs.tslibs.period.period_ordinal TypeError: an integer is required It seems like something that should be very easy to do but yet I cannot understand what is going wrong. Can anybody help?
month and year are both required "fields" due to the current implementation (through pandas 1.5.1 at least). Most other field values will be configured with a default value, however, neither month or year will be defined if a value is not provided. Therefore, in this case, month will remain None which causes the error shown TypeError: an integer is required Here is a link to the relevant section of the source code where default values are defined. Omitting the month field results in [None, None] (in this case) which cannot be converted to a Periodindex. A correct index can be built as follows. idx = pd.PeriodIndex(year=[2000, 2001], month=[1, 1], freq='Y') Resulting in: PeriodIndex(['2000', '2001'], dtype='period[A-DEC]') Depending on the number of years, it may also make sense to programmatically generate the list of months: years = [2000, 2001] idx = pd.PeriodIndex(year=years, month=[1] * len(years), freq='Y') As an alternative, it may be easier to use to_datetime + to_period to create the Period index from a Datetime index instead (as it is already in a compatible form) pd.to_datetime([2000, 2001], format='%Y').to_period('Y') Resulting in the same PeriodIndex: PeriodIndex(['2000', '2001'], dtype='period[A-DEC]')
3
3
74,214,615
2022-10-26
https://stackoverflow.com/questions/74214615/how-to-update-python-version-in-terminal
I've updated my version of Python to 3.11, but Terminal is printing different versions, depending on what command I enter. Entering python3 --version prints Python 3.9.13. Entering python --version prints Python 3.9.6. When I go to the actual Python framework, I can see that 3.11 is installed and is the current version, per the shortcut there. There are multiple versions of Python--Python 3.7, 3.8, etc.--there; perhaps this is the issue. I've looked into uninstalling some or all versions of Python, but I worry that will just make it worse--I'm not the most experienced programmer. I've also tried adding an alias to the .zshrc file per other posts, but it didn't work. I did save the file correctly, for what it's worth. Any advice is appreciated. macOS Ventura 13.0 Fix: I created a new user on my computer, which isn't a true fix, but it allowed me to bypass this issue (messy PATH) with relative ease.
I think you might be missing some foundational knowledge about how versions are selected from a terminal. You'll want to learn about the PATH environment variable and how it relates to Python versions.
https://www.tutorialspoint.com/python/python_environment.htm#:~:text=The%20path%20is%20stored%20in,sensitive%3B%20Windows%20is%20not).
The PATH in your terminal is probably resolving to one version of Python, while the settings of whatever program you are using as the "Python framework" point to a different installation.
Also look a little into the .bashrc and .zshrc files.
Python environments can be a bit annoying to manage at times. I'd recommend reading about pyenv and anaconda so that you can manage them. Good luck :)
TL;DR: Make sure the PATH of the terminal you are using points to the installation of the Python version you want to use.
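One quick way to see exactly which installation a given python or python3 command resolves to is to ask the interpreter itself, for example:
import sys

print(sys.version)      # version string of the interpreter that is actually running
print(sys.executable)   # path of that interpreter on disk
Running this one-liner with both commands (python -c "import sys; print(sys.executable)" and python3 -c "...") makes it obvious when the two commands point at different installations.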
9
-1
74,210,613
2022-10-26
https://stackoverflow.com/questions/74210613/upgrading-python-version-to-3-9-on-macos-now-gives-variable-not-defined-error
I just upgraded from Python 3.7 to 3.9.14 and it now gives a variable not defined error. The same code works fine locally and remotely where Python 3.9.2 is installed but now locally it gives an error in Python 3.9.14 version. Below is the code: def check(url): result = None product = Product(url, user_agents) if product.is_connected(): result = product.parse() return result if __name__ == '__main__': user_agents = [] with open('user-agents.txt', encoding='utf8') as f: user_agents = f.readlines() if len(links) > 0: print('Starting with the Pool count = ', PRODUCT_POOL_COUNT) with Pool(PRODUCT_POOL_COUNT) as p: result = p.map(check, links) result = list(filter(None, result)) # Remove Empty Below is the error message: Traceback (most recent call last): File "/Users/Me/.pyenv/versions/3.9.14/lib/python3.9/multiprocessing/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/Users/Me/.pyenv/versions/3.9.14/lib/python3.9/multiprocessing/pool.py", line 48, in mapstar return list(map(*args)) File "/Users/Me/Data/Clients/App/Etsy/products/parse_product.py", line 12, in check product = Product(url, user_agents) NameError: name 'user_agents' is not defined """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/Me/Data/Clients/App/Etsy/products/parse_product.py", line 126, in <module> result = p.map(check, links) File "/Users/Me/.pyenv/versions/3.9.14/lib/python3.9/multiprocessing/pool.py", line 364, in map return self._map_async(func, iterable, mapstar, chunksize).get() File "/Users/Me/.pyenv/versions/3.9.14/lib/python3.9/multiprocessing/pool.py", line 771, in get raise self._value NameError: name 'user_agents' is not defined
You must be on macOS (OS-X). On that OS, Python changed the default method for creating sub-processes across these versions - that also explains why it "works on Python 3.9 remotely": the remote deployment must be on Linux or some Unix other than macOS. Bear with me: the default child-process creation method used to be "fork" for all Unixes. When "fork" is used, the new process is an exact copy of its parent, including all declared global variables - so the global variable user_agents exists and is visible in the target function.
The new default on macOS is "spawn": the new process starts your project code over and re-executes all the lines, except for the lines guarded by the if __name__ == "__main__": statement. In the child processes, the variable __name__ contains the module's actual name, as it is no longer the __main__ module of the running Python program (the original process is).
Motivations for this change apart (they are easily searchable), the fix is simple: just declare your global variables outside of the guarded block:
(...)
user_agents = open("user-agents.txt", encoding="utf-8").readlines()

if __name__ == '__main__':
    if len(links) > 0:
        print('Starting with the Pool count = ', PRODUCT_POOL_COUNT)
        with Pool(PRODUCT_POOL_COUNT) as p:
            result = p.map(check, links)
            result = list(filter(None, result)) # Remove Empty
(Also, there is no need to go through the whole "with open" ceremony if you are reading the file in a single gulp.)
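As an aside, if you specifically want the old fork semantics back instead of restructuring the globals, the start method can also be selected explicitly. A minimal sketch (note that CPython switched macOS to "spawn" because fork can be unsafe with some system frameworks, so the fix above is the preferred route):
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    mp.set_start_method("fork")   # opt back into the old "fork" behaviour on macOS
    with mp.Pool(2) as p:
        print(p.map(square, [1, 2, 3]))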
3
7
74,209,153
2022-10-26
https://stackoverflow.com/questions/74209153/pandas-split-column-if-condition-else-null
I want to split a column. If it has a letter (any letter) at the end, this will be the value for the second column. Otherwise, the second column should be null import pandas as pd data = pd.DataFrame({"data": ["0.00I", "0.01E", "99.99", "0.14F"]}) desired result: a b 0 0.00 I 1 0.01 E 2 99.99 None 3 0.14 F
You can use str.extract with the (\d+(?:\.\d+)?)(\D)? regex: out = data['data'].str.extract(r'(\d+(?:\.\d+)?)(\D)?').set_axis(['a', 'b'], axis=1) Or, if you want to remove the original 'data' column while adding new columns in place: data[['a', 'b']] = data.pop('data').str.extract('(\d+(?:\.\d+)?)(\D)?') output: a b 0 0.00 I 1 0.01 E 2 99.99 NaN 3 0.14 F regex demo (\d+(?:\.\d+)?) # capture a number (with optional decimal) (\D)? # optionally capture a non-digit
3
3
74,207,589
2022-10-26
https://stackoverflow.com/questions/74207589/how-to-make-a-pyramid-using-recursion-in-python
I need to make a pyramid in python using recursion. Already made it, but I need help making it with recursion. def pyramid(n): for i in range(0, n): for j in range(0, i+1): print("* ",end="") print("\r") pyramid(5)
Recursion = the repeated application of a recursive procedure. Code: def pyramid(n): if n==0: return else: pyramid(n-1) print("* "*n) n = 10 pyramid(n) This just repeats the function until n = 0.
3
5
74,201,826
2022-10-26
https://stackoverflow.com/questions/74201826/return-day-of-the-year-with-a-for-loop-that-takes-the-month-day-as-input
I need to write a function called day_of_the_year that takes a month and day as input and returns the associated day of the year. Let the month by a number from 1 (representing January) to 12 (representing December). For example: day_of_the_year(1, 1) = 1 day_of_the_year(2, 1) = 32 day_of_the_year(3, 1) = 60 Use a loop to add up the full number of days for all months before the one you're interested in, then add in the remaining days. For example, to find the day of the year for March 5, add up the first two entries in days_per_month to get the total number of days in January and February, then add in 5 more days. So far, I have made a list: days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] This is my code so far: def day_of_the_year(month,day): days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] for number in day_of_the_year: total = days_per_month[0] + days_per_month[1] + 5 return total Is there something that I am missing?
Yes, you are exiting your loop on the first pass every time. What you should do is define the total variable outside of the for loop and then increment it on each iteration. You also only need to iterate to the month that is specified so use the range function to loop. And since day_of_the_year is the name of the function, it will cause an error if you try to put it in a for loop. Then once the loop has finished you can add the days to total and return it. def day_of_the_year(month,day): days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] total = 0 for number in range(month - 1): total += days_per_month[number] return total + day print(day_of_the_year(1,1)) print(day_of_the_year(12,25)) output: 1 359 Michael's solution is the better solution to getting the answer, I just want to help you understand what you were missing and how to make it work.
4
1
74,201,807
2022-10-26
https://stackoverflow.com/questions/74201807/numpy-reshape-the-matrix
Does anyone can tell me how to use Numpy to reshape the Matrix [1,2,3,4] [5,6,7,8] [9,10,11,12] [13,14,15,16] to [16,15,14,13] [12,11,10,9] [8,7,6,5] [4,3,2,1] Thanks:) python 3.8 numpy 1.21.5 an example of my matrixs: [[ 1.92982258e+00 1.96782439e+00 2.00233048e-01 3.95128552e-01 4.21665915e-01 -1.10885581e-01 3.15967524e-01 1.86851601e-01] [ 5.82581567e-01 3.85242821e-01 6.52345512e-01 6.96774921e-01 4.46925274e-01 1.10208991e-01 -1.78544580e-02 2.63118328e-01] [ 1.18591189e-01 -8.87084649e-02 3.35701398e-01 3.81145692e-01 2.11622312e-02 3.10028567e-01 2.04480529e-01 4.45985566e-01] [ 5.59840625e-01 2.01962111e-01 5.34994738e-01 2.48421290e-01 2.42632687e-01 2.13238611e-01 3.96632085e-01 4.94549692e-01] [-7.69809051e-02 -3.00706661e-04 1.44790257e-01 3.49158021e-01 1.10096226e-01 2.03164938e-01 -3.45361600e-01 -3.33408510e-02] [ 2.33273192e-01 4.39144490e-01 -6.11938054e-02 -6.93128853e-02 -9.55871298e-02 -1.97338746e-02 -6.54788754e-02 2.81574199e-01] [ 6.61742595e-01 4.04149752e-01 2.33536310e-01 8.86332882e-02 -2.21808751e-01 -5.48789656e-03 5.49503834e-01 -1.22011728e-01] [-9.58502481e-03 2.36994437e-01 -1.28777627e-01 3.99751917e-01 -1.92452263e-02 -2.58119080e-01 3.40399940e-01 -2.20455571e-01]]
You can rotate the matrix with numpy.rot90(). To get two rotations, as in your example, pass in k=2:
import numpy as np

a = np.array([
    [1,2,3,4],
    [5,6,7,8],
    [9,10,11,12],
    [13,14,15,16],
])

np.rot90(a, k=2)
returning:
array([[16, 15, 14, 13],
       [12, 11, 10,  9],
       [ 8,  7,  6,  5],
       [ 4,  3,  2,  1]])
Note that the docs say it returns a view of the original. This means the rotated matrix shares data with the original: mutating one affects the other.
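To make the view behaviour concrete, a small illustration (use .copy() if you need an independent array):
import numpy as np

a = np.array([[1, 2], [3, 4]])
rotated = np.rot90(a, k=2)    # a view onto the same data
rotated[0, 0] = 99            # writes through to a[1, 1]
print(a)                      # [[ 1  2] [ 3 99]]

independent = np.rot90(a, k=2).copy()   # detached from the original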
3
4
74,168,582
2022-10-23
https://stackoverflow.com/questions/74168582/how-to-read-the-request-body-using-orjson-library-in-fastapi
I am writing code to receive a JSON payload in FastAPI. Here is my code: from fastapi import FastAPI, status, Request from fastapi.responses import ORJSONResponse import uvicorn import asyncio import orjson app = FastAPI() @app.post("/", status_code = status.HTTP_200_OK) async def get_data(request: Request): param = await request.json() return param However, what I want is request.json() to be used with orjson instead of the default json library of Python. Any idea how to address this problem? Please help me, thanks.
Reading request data using orjson When calling await request.json(), FastAPI (actually Starlette) first reads the body (using the .body() method of the Request object), and then calls json.loads() (using the standard json library of Python) to return a dict/list object to you inside the endpoint—it doesn't use json.dumps(), as you mentioned in the comments section beneath your question, as that method is used to serialize a Python object into JSON instead. It is worth noting that, as can been seen in the relevant implementation, Starlette does not run the json.loads() (which is a blocking operation) in a separate thread, using await run_in_threadpool() for instance, as decribed in this answer, meaning that if a request body is rather large that would take a lot of time to be deserialized, the event loop (essentially, the entire server) would be blocked, until that operation is completed. It should be noted that orjson.loads() in the first example below, as well as orjson.dumps() in the second example later on, are blocking operations as well, and thus, using them instead, one could have those operations run in a separate thread/process that can be awaited, as decribed in the linked answer above, instead of calling them directly, when using an async def endpoint (see the linked answer above for more details on def vs async def in FastAPI). Alternatively, you could have the endpoint defined with normal def, and have the raw request body retrieved witihin an async dependency, as demonstrated in Update 2 of this answer, while keeping the blocking orjson.loads() operation inside the def endpoint. In this way, the endpoint will run in a separate thread from the extrernal threadpool that will then be awaited, and thus, the blocking operation inside won't block the event loop (again, please have a look at the linked answers above for more details). It should be noted that, in theory, there is no limit to how big a JSON request/response can be, but in practice there definetely are. As mentioned earlier, if you are deserializing/serializing rather large data within an async def endpoint, the event loop will get blocked, if you don't have these operations run in a separate thread/process, as explained in a linked answer earlier. Also, you might be running out of memory (on either, or both, server and client sides), if the data can't fit into the device's available RAM. Additionally, client timeouts could be another issue, if a response is not returned within a predefined time to clients (i.e., web browsers, Python HTTP clients, etc.). Thus, for rather large request/response bodies, one should rather think about saving the result in a file, and having the user to upload/download the file instead (again, any blocking operations within an async def endpoint could be executed in a separate thread/process, or use libraries that provide async support, if available, or define the endpoint with normal def)—see related answers here, here, as well as here and here. You might not need to worry that much about the event loop getting blocked, if you have a blocking operation, such as orjson.dumps() or json.dumps(), executed directly within an async def endpoint, as long as this is about quick actions, i.e., dealing with small amounts of JSON data (however, you should always perform benchmark tests—see this answer—and choose the approach that suits you best, based on the requirements of your project, the expected traffic, as well as your server's resources). 
Note that, if your API is publicly accessible, you should always consider applying some limitation on the request body size, in order to prevent malicious actors that would attempt to overload your server etc., either through a reverse proxy server, such as nginx, or through your application, as demonstrated in this answer, by consuming the request body in chunks, using request.stream() (instead of await request.body(), etc.), and calculating the body length as the chunks arrive. from fastapi import FastAPI, Request, HTTPException import orjson app = FastAPI() @app.post('/submit') async def submit(request: Request): try: # orjson.loads() could be run in a separate thread/process data = orjson.loads(await request.body()) except orjson.JSONDecodeError: raise HTTPException(status_code=400, detail='Invalid JSON data') return "success" Returning response data using orjson When returning data such as dict, list, etc, FastAPI will automatically convert that returned value into JSON, using the Python standard json.dumps(), after inspecting every item inside and making sure it is JSON serializable, using the JSON Compatible Encoder (i.e., jsonable_encoder())—see this answer for more details, as well as FastAPI/Starlette's JSONResponse implementation, where you could see that the content you are returning is serialized using json.dumps(), which, it is worth noting, is a blocking operation as well, and Starlette does not execute it in a separate thread. Hence, if you would like to use the orjson library instead—thus, using a faster JSON encoder, as well as avoiding the jsonable_encoder(), given that you are certain that the content you would be serializing is serializable with JSON; otherwise, orjson has a default parameter (that may be a function, lambda, or callable class instance) for the caller to specify how to serialize arbitrary types, which you could use, e.g., orjson.dumps(data, default=str)—you would need to send a custom Response directly, as described in this answer and as shown below, which would result in responding much faster. Note that, as mentioned earlier, orjson.loads() and orjson.dumps() are both blocking operations, and since an async def endpoint is used in the example below, it might be a good idea, depending on the size of data you are expecting to deserialize/serialize, to have these operations run in a separate thread/process, or use a def endpoint in the same way described earlier in the previous section. Otherwise, if you are dealing with small amounts of JSON data, it might not worth spawning a new thread or process for such operations (as described earlier, always perform tests to find the best suited approach, based on your project's requirements and device's resources). 
from fastapi import FastAPI, Request, Response, HTTPException import orjson app = FastAPI() @app.get('/item') async def get_item(request: Request): try: # orjson.dumps() could be run in a separate thread/process return Response(orjson.dumps({"item_id": "foo"}), media_type='application/json') except TypeError: raise HTTPException(status_code=500, detail='Unable to serialize the object') To test this using the submitted data shown in the previous seciton, you could use: @app.post('/submit') async def submit(request: Request): try: # orjson.loads() could be run in a separate thread/process data = orjson.loads(await request.body()) # orjson.dumps() could be run in a separate thread/process return Response(orjson.dumps(data), media_type='application/json') except orjson.JSONDecodeError: raise HTTPException(status_code=400, detail='Invalid JSON data') except TypeError: raise HTTPException(status_code=500, detail='Unable to serialize the object') Returning response data using FastAPI's ORJSONResponse Alternatively, you could use the ORJSONResponse provided by FastAPI (still make sure you have the orjson libray installed, as well as the content that you are returning is serializable with JSON). Have a look at futher documentation here and here on how to customize and/or set ORJSONResponse as the default response class (the implementation of the ORJSONResponse could be found here). Note that when using the response_class parameter in the endpoint's decorator to set the Response class, one does not necessarily need to use that response class when returning the data from the endpoint as well. This is because FastAPI, behind the scenes, will encode/serialize the data based on the response_class you set, as well as use it to define the "media type" of the response. Hence, one could either set response_class=ORJSONResponse or return ORJSONResponse(...). Having said that, it wouldn't hurt using both, as shown in the example below and in some examples provided in the official FastAPI documentation, not only for clarity purposes, but also for proper Swagger UI documentation purposes. The response_class will inform Swagger UI/OpenAPI docs for the expected media type of a successful response—one could confirm that by looking at the expected "Responses" and their media type under an endpoint in /docs (see FastAPI's documentation for declaring additional responses in OpenAPI). It should also be noted, as described earlier, that ORJSONResponse uses the same orjson.dumps()/etc. operations, behind the scenes, which are blocking operations. Hence, if an async def endpoint was used, and you expected to return some rather large and complex JSON data that would require some time to be serialized, which would render the server unresponsive until they did so, you should rather follow the approach in the previous example, instead of using ORJSONResponse, and have the blocking operation run in a separate thread/process. Otherwise, you could define the endpoint with normal def and use an async dependency, in the same way it was described in the first section of this answer. from fastapi import FastAPI, Request from fastapi.responses import ORJSONResponse app = FastAPI() @app.get('/item', response_class=ORJSONResponse) async def get_item(): return ORJSONResponse({"item_id": "foo"}) In the case of the example above, one wouldn't notice any difference in Swagger UI autodocs whether or not using the response_class=ORJSONResponse, as application/json is the default media type for FastAPI endpoints, regardless. 
However, in a following example of this answer, where HTMLResponse is returned from a specific endpoint, if one didn't use response_class=HTMLResponse in the decorator, Swagger UI/OpenAPI docs would incorrectly indicate that an application/json response is expected from that endpoint (in case of a successful response). Setting ORJSONResponse as the default_response_class As explained in the documentation, one can define the default response class for their FastAPI application as shown in the example below. In that way, every response from FastAPI will be encoded using ORJSONResponse in the example below; thus, there would be no need for you to either set response_class=ORJSONResponse or use return ORJSONResponse(...). from fastapi import FastAPI from fastapi.responses import ORJSONResponse app = FastAPI(default_response_class=ORJSONResponse) @app.get("/item") async def get_item(): return {"item_id": "foo"} You would still be able to override the default response class in an endpoint, if you had to, as shown in the examples earlier, by either setting the response_class parameter in the endpoint's decorator or returning that Response class directly, or using both methods, as explained earlier, so that Swagger UI/OpenAPI documentation is informed for the expected response type of that endpoint. For instance: from fastapi import FastAPI from fastapi.responses import ORJSONResponse, HTMLResponse app = FastAPI(default_response_class=ORJSONResponse) @app.get("/item") async def get_item(): return {"item_id": "foo"} @app.get("/html", response_class=HTMLResponse) async def get_html(): html_content = """ <html> <body> <h1>HTML!</h1> </body> </html> """ return HTMLResponse(content=html_content, status_code=200) Please make sure to have a look here, here, as well as here and here to learn about the various approaches of sending JSON data to a FastAPI backend, and how to define an endpoint to expect and validate JSON data, instead of relying on using await request.json() (which is useful when the app requires passing arbitrary JSON data, but does not perform any validation on the data).
7
9
74,200,120
2022-10-25
https://stackoverflow.com/questions/74200120/how-to-use-a-polars-column-with-offset-string-to-add-to-another-date-column
Suppose you have df=pl.DataFrame( { "date":["2022-01-01", "2022-01-02"], "hroff":[5,2], "minoff":[1,2] }).with_columns(pl.col('date').str.to_date()) and you want to make a new column that adds the hour and min offsets to the date column. The only thing I saw was the dt.offset_by method. I made an extra column df=df.with_columns(pl.format('{}h{}m','hroff','minoff').alias('offset')) and then tried df.with_columns(pl.col('date') \ .cast(pl.Datetime).dt.convert_time_zone('UTC') \ .dt.offset_by(pl.col('offset')).alias('newdate')) but that doesn't work because dt.offset_by only takes a fixed string, not another column. What's the best way to do that?
Use pl.duration: import polars as pl df = pl.DataFrame({ "date": pl.Series(["2022-01-01", "2022-01-02"]).str.to_date(), "hroff": [5, 2], "minoff": [1, 2] }) print(df.select( pl.col("date") + pl.duration(hours=pl.col("hroff"), minutes=pl.col("minoff")) )) shape: (2, 1) ┌─────────────────────┐ │ date │ │ --- │ │ datetime[μs] │ ╞═════════════════════╡ │ 2022-01-01 05:01:00 │ │ 2022-01-02 02:02:00 │ └─────────────────────┘
3
4
74,183,293
2022-10-24
https://stackoverflow.com/questions/74183293/how-do-i-add-the-result-of-an-apply-map-rows-as-a-new-column-in-polars
I have a wide dataframe, and I'm applying some custom logic to some columns to generate a new column. This works, and returns a dataframe with a single column with my desired values. How can I get this as a new column in the original dataframe? I tried various forms of .with_columns but none did the trick; and without row identifiers I don't feel at ease doing a concatenation. Any ideas? I'm trying to solve for a generic UDF, not one that I plan to express as a polars expression. df = pl.DataFrame({ "foo": [16, 28, 0 ], "bar": [None, 4,17 ], "yat": [41, 174,15 ], "tar": [None, 4,0 ], }) def udf(row: tuple[float])->str: '''This code is illustrative - meant to be a row-wise UDF''' return ' + '.join([f'{x}x{"^"+str(i) if i>0 else ""}' for i,x in enumerate(row) if x!=0 and x is not None]) df.select('foo','bar', 'yat', 'tar').map_rows(udf).rename({'map':'poly'}) shape: (3, 1) ┌────────────────────────────┐ │ poly │ │ --- │ │ str │ ╞════════════════════════════╡ │ 16x + 41x^2 │ │ 28x + 4x^1 + 174x^2 + 4x^3 │ │ 17x^1 + 15x^2 │ └────────────────────────────┘ My desired output is shape: (3, 5) ┌─────┬──────┬─────┬──────┬────────────────────────────┐ │ foo ┆ bar ┆ yat ┆ tar ┆ poly │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 ┆ str │ ╞═════╪══════╪═════╪══════╪════════════════════════════╡ │ 16 ┆ null ┆ 41 ┆ null ┆ 16x + 41x^2 │ │ 28 ┆ 4 ┆ 174 ┆ 4 ┆ 28x + 4x^1 + 174x^2 + 4x^3 │ │ 0 ┆ 17 ┆ 15 ┆ 0 ┆ 17x^1 + 15x^2 │ └─────┴──────┴─────┴──────┴────────────────────────────┘ In pandas you'd do this as: df.assign(poly = lambda x: x.apply(udf, axis=1)) SOLVED: (df.with_columns( pl.struct('foo','bar','yat','tar') .map_elements(lambda x: udf(x.values())) .alias('poly')) ) Or (df.with_columns( pl.struct(pl.all()) .map_elements(lambda x: udf(x.values())) .alias('poly')) )
If you want to apply a function over multiple columns you need to pack them into a struct type. This packing is free, but it is needed to satisfy the expression rules: every expression's input must consist of a single datatype, i.e. an expression is Fn(Expr) -> Expr.
Below is an example of using map_elements to compute a horizontal sum, next to the more idiomatic way to compute one.
import polars as pl

df = pl.DataFrame({
    "foo": [1, 2],
    "bar": [.1, .2],
})

def mysum(row: tuple[float]) -> float:
    '''This code is illustrative - meant to be a row-wise UDF'''
    return sum(row)

df.with_columns(
    # horizontal sum with a custom UDF
    pl.struct("foo", "bar").map_elements(lambda x: mysum((x["foo"], x["bar"]))).alias("foo+bar (slow)"),
    # idiomatic way to do a horizontal sum
    pl.sum_horizontal("foo", "bar").alias("foo+bar (fast)")
)
Folds
If you want to do more complicated horizontal aggregations, but want to keep the code fast (a Python function in map_elements is not), you can use fold. Below I show how to compute a horizontal sum with a fold.
df.with_columns(
    pl.fold(
        acc=0,
        function=lambda a, b: a + b,
        exprs=pl.all()
    ).alias("foo+bar")
)
3
7
74,165,901
2022-10-22
https://stackoverflow.com/questions/74165901/polars-add-substract-utc-offset-from-datetime-object
I wanted to add/subtract the UTC offset (usually in hours) to/from the datetime object in polars but I don't seem to see a way to do this. the UTC offset can be dynamic given there's Day Light Saving period comes into play in a calendar year. (e.g., EST/EDT maps to 5/4 hours of UTC offset respectively). from datetime import datetime import pytz import polars as pl from datetime import date # Make a datetime-only dataframe that covers DST period of year, in UTC time first. df = pl.DataFrame( pl.date_range(low=date(2022,1,3), high=date(2022,9,30), interval="5m", time_unit="ns", time_zone="UTC") .alias("timestamp") ) # Convert timezone to "America/New_York", which covers both EST and EDT. us_df = df.with_column( pl.col("timestamp") .dt .cast_time_zone(tz="America/New_York") .alias("datetime") ) # Check us_df output us_df # output, here `polars` is showing US time without the UTC offset # Before 0.14.22 `polars` is showing time with UTC offset # i.e., `23:45:00 UTC` should be `19:45:00 EDT` # Now `polars` is showing `15:45:00 EDT`, without 4 hours of offset ┌─────────────────────────┬────────────────────────────────┐ │ timestamp ┆ datetime │ │ --- ┆ --- │ │ datetime[ns, UTC] ┆ datetime[ns, America/New_York] │ ╞═════════════════════════╪════════════════════════════════╡ │ 2022-01-03 00:00:00 UTC ┆ 2022-01-02 14:00:00 EST │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-01-03 00:05:00 UTC ┆ 2022-01-02 14:05:00 EST │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-01-03 00:10:00 UTC ┆ 2022-01-02 14:10:00 EST │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-01-03 00:15:00 UTC ┆ 2022-01-02 14:15:00 EST │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ ... ┆ ... │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-09-29 23:45:00 UTC ┆ 2022-09-29 15:45:00 EDT │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-09-29 23:50:00 UTC ┆ 2022-09-29 15:50:00 EDT │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-09-29 23:55:00 UTC ┆ 2022-09-29 15:55:00 EDT │ ├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤ │ 2022-09-30 00:00:00 UTC ┆ 2022-09-29 16:00:00 EDT │ └─────────────────────────┴────────────────────────────────┘ Converting to_pandas, we should observe that the underlying datetime object does not include that 4 hours of offset in the actual time as well (remember EST is also in this dataframe, and it has a 5-hour offset). # Convert to pandas us_pd = us_df.to_pandas() us_pd # output timestamp datetime 0 2022-01-03 00:00:00+00:00 2022-01-02 14:00:00-05:00 1 2022-01-03 00:05:00+00:00 2022-01-02 14:05:00-05:00 2 2022-01-03 00:10:00+00:00 2022-01-02 14:10:00-05:00 3 2022-01-03 00:15:00+00:00 2022-01-02 14:15:00-05:00 4 2022-01-03 00:20:00+00:00 2022-01-02 14:20:00-05:00 ... ... ... 77756 2022-09-29 23:40:00+00:00 2022-09-29 15:40:00-04:00 77757 2022-09-29 23:45:00+00:00 2022-09-29 15:45:00-04:00 77758 2022-09-29 23:50:00+00:00 2022-09-29 15:50:00-04:00 77759 2022-09-29 23:55:00+00:00 2022-09-29 15:55:00-04:00 77760 2022-09-30 00:00:00+00:00 2022-09-29 16:00:00-04:00 What I wanted was to include the UTC offset into the actual time, such that I can do filtering on the time (in a natural way). For instance, if I am seeing 2300UTC is 1900EDT, I can filter using 1900 directly (please note I can't just add/substract the UTC offset on the fly during filtering, as the number of hours is dynamic given DST). 
The underlying python datetime does have utcoffset function, which can be applied on each datetime object, but I'd need to convert polars to pandas first (I don't see how to do this within polars). I've also observed this peculiar difference: us_pd.datetime[us_pd.shape[0]-1].to_pydatetime() # We can see it is identical to what's already in `polars` and `pandas` dataframe. datetime.datetime(2022, 9, 29, 16, 0, tzinfo=<DstTzInfo 'America/New_York' EDT-1 day, 20:00:00 DST>) # Now we create a single datetime object with arbitrary UTC time and convert it to New York time datetime(2022, 9, 30, 22, 45, 0,0, pytz.utc).astimezone(pytz.timezone("America/New_York")) # The representation here is actually the correct New York time (as in, the offset has been included) datetime.datetime(2022, 9, 30, 18, 45, tzinfo=<DstTzInfo 'America/New_York' EDT-1 day, 20:00:00 DST>)
It seems you're looking for convert_time_zone. Ex: from datetime import date import polars as pl df = pl.DataFrame( pl.datetime_range( start=date(2022, 1, 3), end=date(2022, 9, 30), interval="5m", time_unit="ns", time_zone="UTC", eager=True ).alias("timestamp") ) us_df = df.with_columns( pl.col("timestamp").dt.convert_time_zone(time_zone="America/New_York").alias("datetime") ) shape: (77_761, 2) ┌─────────────────────────┬────────────────────────────────┐ │ timestamp ┆ datetime │ │ --- ┆ --- │ │ datetime[ns, UTC] ┆ datetime[ns, America/New_York] │ ╞═════════════════════════╪════════════════════════════════╡ │ 2022-01-03 00:00:00 UTC ┆ 2022-01-02 19:00:00 EST │ │ 2022-01-03 00:05:00 UTC ┆ 2022-01-02 19:05:00 EST │ │ 2022-01-03 00:10:00 UTC ┆ 2022-01-02 19:10:00 EST │ │ 2022-01-03 00:15:00 UTC ┆ 2022-01-02 19:15:00 EST │ │ 2022-01-03 00:20:00 UTC ┆ 2022-01-02 19:20:00 EST │ │ … ┆ … │ │ 2022-09-29 23:40:00 UTC ┆ 2022-09-29 19:40:00 EDT │ │ 2022-09-29 23:45:00 UTC ┆ 2022-09-29 19:45:00 EDT │ │ 2022-09-29 23:50:00 UTC ┆ 2022-09-29 19:50:00 EDT │ │ 2022-09-29 23:55:00 UTC ┆ 2022-09-29 19:55:00 EDT │ │ 2022-09-30 00:00:00 UTC ┆ 2022-09-29 20:00:00 EDT │ └─────────────────────────┴────────────────────────────────┘
3
5
74,126,454
2022-10-19
https://stackoverflow.com/questions/74126454/how-to-get-only-the-index-in-numpy-where-instead-of-a-tuple
I have an array of strings arr in which I want to search for elements and get the index of element. Numpy has a method where to search element and return index in a tuple form. arr = numpy.array(["string1","string2","string3"]) print(numpy.where(arr == "string1") It prints: (array([0], dtype=int64),) But I only want the index number 0. I tried this: i = numpy.where(arr == "string1") print("idx = {}".format(i[0])) which has output: i = [0] Is there any way to get the index number without using replace or slicing method?
TL;DR Use:
try:
    i = numpy.where(arr == "string1")[0][0]
except IndexError:
    # handle the case where "string1" was not found in arr
or
indices = list(numpy.where(arr == "string1")[0])
Details
Finding elements in NumPy arrays is not intuitive the first time you try to do it. Let's decompose the operation:
>>> arr = numpy.array(["string1","string2","string3"])
>>> arr == "string1"
array([ True, False, False])
Notice how just doing arr == "string1" is already doing the search: it's returning an array of booleans of the same shape as arr telling us where the condition is true.
Then, you're using numpy.where which, when used with only one parameter (the condition), returns where its input is non-zero. With booleans, that means not False.
>>> numpy.where(numpy.array([ True, False, False]))
(array([0], dtype=int64),)
>>> numpy.where(arr == "string1")
(array([0], dtype=int64),)
It's not obvious at first why this gives you a tuple of arrays for a 1-D input, but when you use this syntax with a 2-D input, it makes more sense. In any case, what you're getting here is a tuple containing a list of indices where the condition matches. Notice it has to be a list, because you might have multiple matches.
For your code, you want numpy.where(arr == "string1")[0][0], because you know "string1" occurs in the list, but the inner list may also contain zero or more than one values, depending on how many times the string is found.
>>> arr2 = numpy.array(["string1","string2","string3","string1", "string3"])
>>> numpy.where(arr2 == "foo")
(array([], dtype=int64),)
>>> numpy.where(arr2 == "string3")
(array([2, 4], dtype=int64),)
So when you want to use these indices, you should simply treat numpy.where(arr == "string1")[0] as a list (it's really a 1-D array, though) and continue from there.
Now, just using numpy.where(arr == "some string")[0][0] is risky, because it will throw an IndexError exception if the string is not found in arr. If you really want to do that, do it in a try/except block. If you need the list of indices as a Python list, you can do this:
indices = list(numpy.where(arr == "string1")[0])
4
4
74,184,818
2022-10-24
https://stackoverflow.com/questions/74184818/pandas-type-object-is-not-subscriptable
I am trying to type a a function which receives a Series from typing import Any from pandas import Series def func(w: Series[Any], v: Series[Any]) -> int: However I got the error TypeError: 'type' object is not subscriptable What I am doing wrong?
Adding quotes around the type will fix this issue. In the code you provided:
from typing import Any
from pandas import Series

def func(w: 'Series[Any]', v: 'Series[Any]') -> int:
    # your code
    pass
Modern editors will identify the type and raise appropriate warnings or errors if it is violated. You are not doing anything "wrong": this would not have raised any warnings or errors for built-in types such as list, dict, set, etc. However, using quotes is the safer choice since it works in all of those cases as well as yours.
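Another way to get the same effect, if you are on Python 3.7+, is postponed evaluation of annotations, which stops the annotations from being evaluated at runtime altogether (whether a type checker understands Series[Any] still depends on the pandas stubs you have installed):
from __future__ import annotations

from typing import Any
from pandas import Series

def func(w: Series[Any], v: Series[Any]) -> int:
    # annotations are no longer evaluated at runtime, so this does not raise
    return len(w) + len(v)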
7
1
74,179,020
2022-10-24
https://stackoverflow.com/questions/74179020/the-unique-method-must-be-invoked-on-this-result-exception-raised-after-sqlalc
I have an issue with SQLAlchemy and I cannot figure out the cause of this error: so my class definition is: class PricingFrequency(enum.Enum): month = 'month' year = 'year' class PlanPricing(Base): __tablename__ = "PlansPricing" pricing_id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) ..... subscription_plan = relationship("SubscriptionPlan", back_populates="plans_pricing") plan_id = Column(UUID(as_uuid=True), ForeignKey("SubscriptionPlans.plan_id")) created_on = Column(DateTime, server_default=func.now()) updated_on = Column(DateTime, server_default=func.now(), server_onupdate=func.now()) class SubscriptionPlanOption(Base): __tablename__ = "SubscriptionPlanOptions" option_id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) ..... subscription_plan = relationship("SubscriptionPlan", back_populates="options_plan") plan_id = Column(UUID(as_uuid=True), ForeignKey("SubscriptionPlans.plan_id")) created_on = Column(DateTime, server_default=func.now()) updated_on = Column(DateTime, server_default=func.now(), server_onupdate=func.now()) class SubscriptionPlan(Base): __tablename__ = "SubscriptionPlans" plan_id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) plan_name = Column(String) plan_description = Column(String) is_popular = Column(Boolean, default=False) plans_pricing: List[Any] = relationship("PlanPricing", back_populates="subscription_plan") # , lazy='joined') options_plan: List[Any] = relationship("SubscriptionPlanOption", back_populates="subscription_plan") # lazy='joined') created_on = Column(DateTime, server_default=func.now()) updated_on = Column(DateTime, server_default=func.now(), server_onupdate=func.now()) When I make this query : query = ( select(SubscriptionPlan) .options(joinedload(SubscriptionPlan.options_plan, innerjoin=True), joinedload(SubscriptionPlan.plans_pricing.and_(PlanPricing.pricing_id == pricing_id), innerjoin=True)) ) items = await session.execute(query) items = items.scalars().all() I got this error message: **The unique() method must be invoked on this Result, as it contains results that include joined eager loads against collections** Note : session is AsyncSession Can anyone explain the source of this issue? Thanks
You are getting this error: sqlalchemy.exc.InvalidRequestError: The unique() method must be invoked on this Result, as it contains results that include joined eager loads against collections Reason: Quoting the documentation in Joined Eager Loading: When including joinedload() in reference to a one-to-many or many-to-many collection, the Result.unique() method must be applied to the returned result, which will uniquify the incoming rows by primary key that otherwise are multiplied out by the join. The ORM will raise an error if this is not present. This is not automatic in modern SQLAlchemy, as it changes the behavior of the result set to return fewer ORM objects than the statement would normally return in terms of number of rows. Therefore SQLAlchemy keeps the use of Result.unique() explicit, so there’s no ambiguity that the returned objects are being uniqified on primary key. So, you need to chain unique() to your Result: session.execute(query).unique()
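Applied to the asynchronous session from the question, that would look something like this sketch (placed right after building query):
items = await session.execute(query)
items = items.unique().scalars().all()   # de-duplicates the joined-eager rows by primary key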
16
20
74,139,174
2022-10-20
https://stackoverflow.com/questions/74139174/how-to-mock-mongo-with-python
How to create a mocked mongo db object to test my software using python? I tried https://pytest-mock-resources.readthedocs.io/en/latest/mongo.html but got error. First, i tried the code below: def insert_into_customer(mongodb_connection): collection = mongodb_connection['customer'] to_insert = {"name": "John", "address": "Highway 37"} collection.insert_one(to_insert) from pytest_mock_resources import create_mongo_fixture mongo = create_mongo_fixture() def test_insert_into_customer(mongo): insert_into_customer(mongo) collection = mongo['customer'] returned = collection.find_one() assert returned == {"name": "John", "address": "Highway 37"} test_insert_into_customer(mongo) I got the error below: Traceback (most recent call last): File "/home/ehasan-karbasian/Desktop/NationalEliteFoundation/serp_matcher/src/mock_mongo.py", line 19, in <module> test_insert_into_customer(mongo) File "/home/ehasan-karbasian/Desktop/NationalEliteFoundation/serp_matcher/src/mock_mongo.py", line 11, in test_insert_into_customer insert_into_customer(mongo) File "/home/ehasan-karbasian/Desktop/NationalEliteFoundation/serp_matcher/src/mock_mongo.py", line 2, in insert_into_customer collection = mongodb_connection['customer'] TypeError: 'function' object is not subscriptable And then i tried the code: def insert_into_customer(mongodb_connection): collection = mongodb_connection['customer'] to_insert = {"name": "John", "address": "Highway 37"} collection.insert_one(to_insert) from pymongo import MongoClient from pytest_mock_resources import create_mongo_fixture mongo = create_mongo_fixture() def test_create_custom_connection(mongo): client = MongoClient(**mongo.pmr_credentials.as_mongo_kwargs()) db = client[mongo.config["database"]] collection = db["customers"] to_insert = [ {"name": "John"}, {"name": "Viola"}, ] collection.insert_many(to_insert) result = collection.find().sort("name") returned = [row for row in result] assert returned == to_insert test_create_custom_connection(mongo) and got the error: Traceback (most recent call last): File "/home/ehasan-karbasian/Desktop/NationalEliteFoundation/serp_matcher/src/mock_mongo.py", line 30, in <module> test_create_custom_connection(mongo) File "/home/ehasan-karbasian/Desktop/NationalEliteFoundation/serp_matcher/src/mock_mongo.py", line 14, in test_create_custom_connection client = MongoClient(**mongo.pmr_credentials.as_mongo_kwargs()) AttributeError: 'function' object has no attribute 'pmr_credentials' It looks like mongo is a function not a MongoClient. How can i use the library pytest_mock_resources to mock mongo? Is there a better library to mock monodb to test with pytest?
import mongomock

client = mongomock.MongoClient()
database = client["my_database"]        # or client.my_database
collection = database["my_collection"]  # or database.my_collection
You can create as many clients, databases and collections as you need at the same time.
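A sketch of how this drops into the test from the question (the database name here is just a placeholder; mongomock supports the operations used, such as insert_one and find_one with a projection):
import mongomock

def insert_into_customer(mongodb_connection):
    collection = mongodb_connection["customer"]
    collection.insert_one({"name": "John", "address": "Highway 37"})

def test_insert_into_customer():
    client = mongomock.MongoClient()            # purely in-memory, no real server needed
    db = client["testdb"]
    insert_into_customer(db)
    returned = db["customer"].find_one({}, {"_id": 0})   # drop the generated _id
    assert returned == {"name": "John", "address": "Highway 37"}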
5
4
74,120,614
2022-10-19
https://stackoverflow.com/questions/74120614/how-to-play-sound-in-an-android-app-created-by-beeware-using-python
I used the BeeWare environment to create a simple MahJong game (find & click pairs to remove them) using Python (with Toga as layout tool) for Android. Now I would like to have some buttons give a "click sound" when pressed: Anyone have a helping hint (or even working example)?
If you're using Briefcase 0.3.10 or newer (which uses Chaquopy to support Python on Android), then you could use the Chaquopy Python API to play audio files using SoundPool. For example, the code from this answer could be written in Python as follows: from android.media import AudioManager, SoundPool from os.path import dirname, join soundPool = SoundPool(5, AudioManager.STREAM_MUSIC, 0) soundId = soundPool.load(join(dirname(__file__), "filename.mp3"), 1) soundPool.play(soundId, 1, 1, 0, 0, 1) This will play the file "filename.mp3" from the same directory as the Python source file.
4
4
74,127,871
2022-10-19
https://stackoverflow.com/questions/74127871/how-do-i-detect-an-ean-13-5-supplement-barcode-using-python-or-nodejs
I'm trying to find a way to get the UPC plus the 5 number supplement barcode using Python or NodeJS. So far I've tried using pyzbar in Python via this code. img = Image.open(requests.get(url, stream=True).raw) img = ImageOps.grayscale(img) results = decode(img) That only returns the main UPC code. Not the supplemental code. This is an example of an image I'm trying to read this from.
Install the dependencies (on mac + python): $ brew install zbar $ pip install pyzbar $ pip install opencv-python You can define the symbols to ZBar. Please try this python code: import cv2 import pyzbar.pyzbar as pyzbar from pyzbar.wrapper import ZBarSymbol img = cv2.imread('barcode.png') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) barcodes = pyzbar.decode(gray, [ZBarSymbol.EAN13, ZBarSymbol.EAN5]) for barcode in barcodes: print('{}: {}'.format(barcode.type, barcode.data)) I've tested it with this image: The result is: EAN5: b'07900' EAN13: b'9791234567896' (BTW, for your image this code scans just the EAN13 part. Maybe you have to use a bigger image, or increase the contrast)
4
12
74,163,185
2022-10-22
https://stackoverflow.com/questions/74163185/send-premium-emoji-with-pyrogram
I need to send a premium emoji from the user's account using Pyrogram. I tried passing send_message() a list of MessageEntityCustomEmoji and of MessageEntity objects. The first gave the error 'MessageEntityCustomEmoji' object has no attribute '_client', and the second sent a message without an emoji. How do I send them without errors?
After lots of research and suffering, the answer was:
...
my_emoji_str = "<emoji id=5310129635848103696>✅</emoji> And this is a custom emoji in the text"
await app.send_message(message.chat.id, my_emoji_str)
This is basically text formatting; the example uses Pyrogram's HTML parse mode. Pyrogram supports both Markdown and HTML formatting out of the box, so you don't even need to specify anything in the parse_mode argument. The id is the custom_emoji_id, which you can find in the message object if you don't know it yet. Here is what to look for:
...
"text": "✅ Hello",
"entities": [
    {
        "_": "MessageEntity",
        "type": "MessageEntityType.CUSTOM_EMOJI",
        "offset": 0,
        "length": 1,
        "custom_emoji_id": 5310129635848103696  # <-- here you go
    }
],
...
And... that's it!
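If you still need to discover a custom_emoji_id, a small sketch along these lines (a hypothetical handler; it assumes Pyrogram v2 with pyrogram.enums available) prints the ids of any premium emoji the account receives:
from pyrogram import Client, filters
from pyrogram.enums import MessageEntityType

app = Client("my_account")

@app.on_message(filters.text)
async def show_custom_emoji_ids(client, message):
    # message.entities is None when the text carries no entities at all
    for entity in message.entities or []:
        if entity.type == MessageEntityType.CUSTOM_EMOJI:
            print("custom_emoji_id:", entity.custom_emoji_id)

app.run()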
4
4
74,163,301
2022-10-22
https://stackoverflow.com/questions/74163301/how-to-properly-use-regex-in-cors-middleware-for-fastapi
I have an app that uses a FastAPI backend and a Next.js frontend. In development and on production with stable origins, I am able to use the CORSMiddleware with no issues. However, I have deployed the Next.js frontend with Vercel, and want to take advantage of the automatic Preview deployments that Vercel makes with each git commit to allow for staging-type qualitative testing and sanity checks. I'm running into CORS issues on the Preview deployments: since each Preview deployment uses an auto-generated URL of the pattern: <project-name>-<unique-hash>-<scope-slug>.vercel.app, I can't add them directly to the allow_origins argument of the CORSMiddleware. Instead I am trying to add the pattern to the allow_origin_regex argument. I am very new to regex, but was able to figure out a pattern that I've tested to work in REPL. However, because I'm having issues, I've switched to use an ultra-permissive regex of '.*' just to get anything to work but that has failed also. main.py (relevant portions) from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware app = FastAPI() origins = [ "http://localhost", "http://localhost:8080", "http://localhost:3000", "https://my-project-name.vercel.app" ] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_origin_regex=".*", allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) I've looked at the FastAPI/Starlette cors.py file to see how it ingests and uses the origin regex and don't see where the problem would be. I've tested the same methods in REPL with no issues. I'm at a loss as to the next avenue to investigate in order to resolve this issue. Any assistance or pointers or "hey dummy you forgot this" comments are welcome.
Whenever a new deployment is created, Vercel will automatically generate a unique URL that is publicly available, and which is composed of the following pieces:
<project-name>-<unique-hash>-<scope-slug>.vercel.app
To allow requests from any Vercel deployment, use:
allow_origin_regex=r'https://.*\.vercel\.app'
To allow requests from a specific Vercel project, use:
allow_origin_regex=r'https://<project-name>-.*\.vercel\.app'
for instance:
allow_origin_regex = r'https://my-site-.*\.vercel\.app'
(The raw-string prefix r'' keeps the backslashes from being treated as string escape sequences.)
The example below is based on how FastAPI/Starlette's CORSMiddleware works internally (see implementation here). The example shows that, using the above regex, a match for an origin such as https://my-site-xadvghg2z-acme.vercel.app is found.
import re

origin = 'https://my-site-xadvghg2z-acme.vercel.app'
allow_origin_regex = r'https://my-site-.*\.vercel\.app'
compiled_allow_origin_regex = re.compile(allow_origin_regex)

if (compiled_allow_origin_regex is not None
        and compiled_allow_origin_regex.fullmatch(origin)):
    print('Match found')
else:
    print('No match found')
Please make sure to specify the correct protocol (e.g., http, https) and port (80, 8000, 3000) in allow_origin_regex.
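Tying this back to the middleware configuration from the question (the project slug my-project-name is taken from the question's origins list; adjust it to your own), a sketch could be:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

origins = [
    "http://localhost",
    "http://localhost:8080",
    "http://localhost:3000",
    "https://my-project-name.vercel.app",
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    # match the preview deployments of this one Vercel project only
    allow_origin_regex=r"https://my-project-name-.*\.vercel\.app",
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)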
3
3
74,191,241
2022-10-25
https://stackoverflow.com/questions/74191241/multiple-imshow-on-the-same-plot-with-opacity-slider
With Plotly, I'd like to display two imshow on the same page, at the same place, with opacity. This nearly works: import plotly.express as px, numpy as np from skimage import io img = io.imread('https://upload.wikimedia.org/wikipedia/commons/thumb/0/00/Crab_Nebula.jpg/240px-Crab_Nebula.jpg') fig = px.imshow(img) x = np.random.random((100, 200)) fig2 = px.imshow(x) fig.show() fig2.show() but it displays the two imshow images in two different tabs. How to display the two "imshow" on the same plot, with an opacity slider for both layers? For reference, here is the matplotlib equivalent: import numpy as np, matplotlib.pyplot as plt, matplotlib.widgets as mpwidgets, scipy.misc x = scipy.misc.face(gray=False) # shape (768, 1024, 3) y = np.random.random((100, 133)) # shape (100, 133) fig, (ax0, ax1) = plt.subplots(2, 1, gridspec_kw={'height_ratios': [5, 1]}) OPACITY = 0.5 img0 = ax0.imshow(x, cmap="jet") img1 = ax0.imshow(y, cmap="jet", alpha=OPACITY, extent=img0.get_extent()) slider0 = mpwidgets.Slider(ax=ax1, label='opacity', valmin=0, valmax=1, valinit=OPACITY) slider0.on_changed(lambda value: img1.set_alpha(value)) plt.show()
As pointed out by the OP, since opacity is a style attribute applied by the client regardless of the trace (data) which is associated to a given image, there is no need to precompute one trace for each image variation, nor to redraw anything when the slider moves. Using simple slider controls, we should be able to apply such (minor) changes in real-time. Option 1 : The first option is to let Plotly.js handle the changes by specifying the proper method and args in the slider' steps configuration (ie. using the restyle method, Plotly is smart enough to not redraw the whole plot) : from PIL import Image import plotly.graph_objects as go import numpy as np import scipy.misc imgA = scipy.misc.face() imgB = Image.fromarray(np.random.random(imgA.shape[:2])*255).convert('RGB') fig = go.Figure([ go.Image(name='raccoon', z=imgA, opacity=1), # trace 0 go.Image(name='noise', z=imgB, opacity=0.5) # trace 1 ]) slider = { 'active': 50, 'currentvalue': {'prefix': 'Noise: '}, 'steps': [{ 'value': step/100, 'label': f'{step}%', 'visible': True, 'execute': True, 'method': 'restyle', 'args': [{'opacity': step/100}, [1]] # apply to trace [1] only } for step in range(101)] } fig.update_layout(sliders=[slider]) fig.show(renderer='browser') Option 2 : The second option demonstrates a more general case where one wants to bypass Plotly API on slider events and trigger his own code instead. For example, what if the restyle method were not efficient enough for this task ? In this situation, one can hook into the plotly_sliderchange event and apply the changes manually in order to obtain smooth transitions. The browser (default) renderer makes it possible by using the post_script parameter, which you can pass to the show() method. It allows to add javascript snippets which are executed just after plot creation. Once one knows that (documentation should emphasize this part!), it is just a matter of binding the appropriate handler. For example (JS syntax highlighting) : // {plot_id} is a placeholder for the graphDiv id. const gd = document.getElementById('{plot_id}'); // Retrieve the image (easier with d3, cf. 1st revision, see comments) const trace = gd.calcdata.find(data => data[0].trace.name === 'noise')[0]; const group = Object.keys(trace).find(prop => (/node[0-4]/).test(prop)); const img = trace[group][0][0].firstChild; // Listen for slider events and apply changes on the fly gd.on('plotly_sliderchange', event => img.style.opacity = event.step.value); Now for the slider configuration, the important thing is to set execute=False and method='skip' to bypass API calls and prevent redrawing the plot when the slider changes : slider = { 'active': 50, 'currentvalue': {'prefix': 'Noise: '}, 'steps': [{ 'value': step/100, 'label': f'{step}%', 'visible': True, 'execute': False, 'method': 'skip', } for step in range(101)] } js = ''' const gd = document.getElementById('{plot_id}'); const trace = gd.calcdata.find(data => data[0].trace.name === 'noise')[0]; const group = Object.keys(trace).find(prop => (/node[0-4]/).test(prop)); const img = trace[group][0][0].firstChild; gd.on('plotly_sliderchange', event => img.style.opacity = event.step.value); ''' fig.update_layout(sliders=[slider]) fig.show(renderer='browser', post_script=[js])
4
2
74,134,920
2022-10-20
https://stackoverflow.com/questions/74134920/why-does-my-conda-deactivate-doesnt-work
I am having trouble with my conda installation on a cluster. It seems that I can't deactivate any of my environments. It goes so far that I have to close my terminal because it froze. I am working on multiple servers with a common home directory, so I can access the same conda installation from different servers. Interestingly, I can deactivate the conda envs on a different server, but not on my main server, which I work on most of the time. I use conda 22.9.0. Is there a way to solve this without reinstalling everything? (Can I reinstall conda itself without losing the already set up environments?) Thanks
Thanks to @merv I could easily solve the problem. Running conda init bash and restarting the terminal seems to clean something up, and now everything works again: I can see my prompt again and the environments deactivate properly.
yeroslaviz@hpcl8001:~$ conda init bash
modified /fs/home/yeroslaviz/miniconda3/condabin/conda
modified /fs/home/yeroslaviz/miniconda3/bin/conda
modified /fs/home/yeroslaviz/miniconda3/bin/conda-env
no change /fs/home/yeroslaviz/miniconda3/bin/activate
no change /fs/home/yeroslaviz/miniconda3/bin/deactivate
no change /fs/home/yeroslaviz/miniconda3/etc/profile.d/conda.sh
no change /fs/home/yeroslaviz/miniconda3/etc/fish/conf.d/conda.fish
no change /fs/home/yeroslaviz/miniconda3/shell/condabin/Conda.psm1
no change /fs/home/yeroslaviz/miniconda3/shell/condabin/conda-hook.ps1
no change /fs/home/yeroslaviz/miniconda3/lib/python3.7/site-packages/xontrib/conda.xsh
no change /fs/home/yeroslaviz/miniconda3/etc/profile.d/conda.csh
no change /fs/home/yeroslaviz/.bashrc
==> For changes to take effect, close and re-open your current shell. <==
yeroslaviz@hpcl8001:~$ exit
yeroslaviz@hpcl8001:~$ conda activate base
(base) yeroslaviz@hpcl8001:~$
(base) yeroslaviz@hpcl8001:~$ conda deactivate
yeroslaviz@hpcl8001:~$
3
8
74,194,672
2022-10-25
https://stackoverflow.com/questions/74194672/cmake-problems-after-upgrading-to-macos-13-0
As mentioned in the title, CMake seems to be broken after upgrading to macOS 13.0. Trying to install something that requires CMake takes unusually long, and then the following pop-up shows up.
"CMake" is damaged and can't be opened. You should move it to the Trash.
This file was downloaded on an unknown date. # this text is grey and in a smaller font
Pop-up Options
1. Move to Trash
2. Cancel
Steps to reproduce the error
Cloned EasyOCR
1.1 git clone ...
Made Python venv
2.0. cd EasyOCR/
2.1. python3 -m venv venv
2.2. source venv/bin/activate
2.3. venv info
python --version && pip --version
# output
Python 3.10.6
pip 22.3 from ... # path to venv dir
pip install -r requirements.txt
# requirements.txt content
torch
torchvision>=0.5
opencv-python-headless<=4.5.4.60
scipy
numpy
Pillow
scikit-image
python-bidi
PyYAML
Shapely
pyclipper
ninja
After a while, the aforementioned pop-up shows up. Clicking on either option results in the following error.
Building wheels for collected packages: opencv-python-headless
Building wheel for opencv-python-headless (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for opencv-python-headless (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [9 lines of output]
File "/private/var/folders/5h/36chnb_s3b5fpqmqgt_7cz_m0000gn/T/pip-build-env-ea_5u80v/overlay/lib/python3.10/site-packages/skbuild/setuptools_wrap.py", line 613, in setup
cmkr = cmaker.CMaker(cmake_executable)
File "/private/var/folders/5h/36chnb_s3b5fpqmqgt_7cz_m0000gn/T/pip-build-env-ea_5u80v/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 141, in __init__
self.cmake_version = get_cmake_version(self.cmake_executable)
File "/private/var/folders/5h/36chnb_s3b5fpqmqgt_7cz_m0000gn/T/pip-build-env-ea_5u80v/overlay/lib/python3.10/site-packages/skbuild/cmaker.py", line 95, in get_cmake_version
raise SKBuildError(
Traceback (most recent call last):
Problem with the CMake installation, aborting build. CMake executable is cmake
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for opencv-python-headless
Failed to build opencv-python-headless
ERROR: Could not build wheels for opencv-python-headless, which is required to install pyproject.toml-based projects
Any thoughts on how to fix or get around this? This is the first time I have seen this pop-up.
The pip package is broken on macOS 13 prior to CMake 3.24.2 due to improper code signing. You should upgrade CMake in your virtual environment by running: $ python -m pip install -U pip setuptools wheel $ python -m pip install -U 'cmake>=3.24.2' As CMake is extremely backwards compatible, it should be safe. You can also add cmake>=3.24.2 to your requirements.txt.
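For instance, based on the requirements.txt shown in the question, the pin could simply be added at the top (a sketch; the rest of the file stays as it was):
# requirements.txt
cmake>=3.24.2
torch
torchvision>=0.5
opencv-python-headless<=4.5.4.60
scipy
numpy
Pillow
scikit-image
python-bidi
PyYAML
Shapely
pyclipper
ninja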
7
4
74,186,452
2022-10-24
https://stackoverflow.com/questions/74186452/python-typing-overload-based-on-length-of-tuple-argument
I would like to add overloaded type annotations to an existing API that has semantics something like this: def f(x: Tuple[int, ...]) -> Union[int, List[int]]: if len(x) == 1: return x[0] return list(x) The argument is a tuple, and the return type is either int or List[int] depending on whether the tuple has length 1. To encode the output type's dependence on the input type, I've tried the following: from typing import overload, List, Tuple @overload def f(x: Tuple[int]) -> int: ... @overload def f(x: Tuple[int, ...]) -> List[int]: ... def f(x: Tuple[int, ...]) -> Union[int, List[int]]: if len(x) == 1: return x[0] return list(x) But when type-checked with mypy, this gives the following error: script.py:4: error: Overloaded function signatures 1 and 2 overlap with incompatible return types Found 1 error in 1 file (checked 1 source file) To address this I would need the second overload to have a type that indicates "tuples of any length except 1", but it seems that this sort of exclusive type doesn't exist (PEP 484: exclusive type for type hint) What would be the best way to define overloaded annotations for this function?
The "overlap with incompatible return types" error message can sometimes be a little bit of a lint, as opposed to something broken. If you add a # type: ignore comment to the offending overload, mypy will still do what you want. (I'm a maintainer of mypy)
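Concretely, a sketch of the question's code with the warning suppressed might look like the following (mypy reports the overlap on the first overload; depending on your mypy version you may need to put the comment on whichever line the error points at):
from typing import List, Tuple, Union, overload

@overload
def f(x: Tuple[int]) -> int:  # type: ignore
    ...

@overload
def f(x: Tuple[int, ...]) -> List[int]:
    ...

def f(x: Tuple[int, ...]) -> Union[int, List[int]]:
    if len(x) == 1:
        return x[0]
    return list(x)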
4
1
74,155,189
2022-10-21
https://stackoverflow.com/questions/74155189/how-to-log-uncaught-exceptions-in-flask-routes-with-logging
What is the standard way to log uncaught exceptions in Flask routes with logging? This nearly works:
import logging, sys, flask
logging.basicConfig(filename='test.log', filemode='a', format='%(asctime)s %(levelname)s %(message)s')
sys.excepthook = lambda exctype, value, tb: logging.error("", exc_info=(exctype, value, tb))
logging.warning("hello")
app = flask.Flask('hello')
@app.route('/')
def index():
    sjkfq  # uncaught exception
    return "hello"
app.run()
but there are some ANSI escape characters in the log: [31m[1m etc. (probably for console colors) See here in the produced test.log file:
2022-10-21 16:23:06,817 WARNING hello
2022-10-21 16:23:07,096 INFO [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
 * Running on http://127.0.0.1:5000
2022-10-21 16:23:07,097 INFO [33mPress CTRL+C to quit[0m
2022-10-21 16:23:07,691 ERROR Exception on / [GET]
Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\flask\app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Python38\lib\site-packages\flask\app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Python38\lib\site-packages\flask\app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Python38\lib\site-packages\flask\app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "D:\test.py", line 10, in index
    sjkfq
NameError: name 'sjkfq' is not defined
2022-10-21 16:23:07,694 INFO 127.0.0.1 - - [21/Oct/2022 16:23:07] "[35m[1mGET / HTTP/1.1[0m" 500 -
Note that this is totally reproducible if you run the same code. Is there a documented way to do proper logging of uncaught exceptions in Flask routes, with Python logging? I didn't exactly find this in https://flask.palletsprojects.com/en/2.2.x/logging/. (Note: this code shows it's not the right way to do logging. We shouldn't intercept a logging string, do some reverse engineering to clean the escape characters, and then log to a file. There surely is a standard logging way.)
How to log uncaught exceptions in Flask routes with logging? Flask is a popular web framework for Python that allows you to create web applications easily and quickly. However, sometimes your Flask routes may encounter uncaught exceptions that cause your application to crash or return an error response. To debug and fix these errors, you need to log them using the logging module. Logging uncaught exceptions in Flask routes The logging module is a standard library module that provides a flexible and powerful way to handle and record different levels of events, errors, and messages in your Python programs. You can use logging to configure different handlers, formatters, and levels for your log messages, and send them to different destinations, such as files, consoles, emails, or web services. To log uncaught exceptions in Flask routes, you need to do two things: Configure a logger object with a handler and a formatter that suit your needs Register an error handler function that logs the exception information using the logger object Configuring a logger object To configure a logger object, you can use the logging.basicConfig() function, which sets up a default handler and formatter for the root logger. The root logger is the parent of all other loggers and it handles all the log messages that are not handled by any other logger. You can pass different parameters to the basicConfig() function, such as: level: the minimum level of severity that the logger will handle, such as logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, or logging.CRITICAL filename: the name of the file where the log messages will be written filemode: the mode of opening the file, such as 'a' for append or 'w' for write format: the format string that specifies how the log messages will be displayed, such as '%(asctime)s - %(name)s - %(levelname)s - %(message)s' datefmt: the format string that specifies how the date and time will be displayed, such as '%Y-%m-%d %H:%M:%S' For example, you can configure a logger object with a file handler and a simple formatter like this: import logging logging.basicConfig(level=logging.ERROR, filename='app.log', filemode='a', format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S') This will create a file named app.log in the same directory as your Flask app, and write all the log messages with level ERROR or higher to it, using the specified format and date format. Registering an error handler function To register an error handler function, you can use the app.errorhandler() decorator, which takes an HTTP status code or an exception class as an argument, and wraps a function that handles the error. The function should take an exception object as a parameter, and return a response object or a tuple of (response, status code). Inside the error handler function, you can use the logging.exception() method, which logs a message with level ERROR and adds the exception information to the log message. You can pass a custom message as an argument, or use the default message 'Exception occurred'. 
For example, you can register an error handler function for the generic Exception class like this: from flask import Flask, render_template app = Flask(__name__) @app.errorhandler(Exception) def handle_exception(e): # log the exception logging.exception('Exception occurred') # return a custom error page or message return render_template('error.html'), 500 This will log any uncaught exception that occurs in your Flask routes, and return a custom error page with status code 500. Example of logging uncaught exceptions in Flask routes To demonstrate how logging uncaught exceptions in Flask routes works, let's create a simple Flask app that has two routes: one that returns a normal response, and one that raises a ZeroDivisionError. We will use the same logger configuration and error handler function as above. from flask import Flask, render_template app = Flask(__name__) # configure the logger logging.basicConfig(level=logging.ERROR, filename='app.log', filemode='a', format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S') # register the error handler @app.errorhandler(Exception) def handle_exception(e): # log the exception logging.exception('Exception occurred') # return a custom error page or message return render_template('error.html'), 500 # define the normal route @app.route('/') def index(): return 'Hello, world!' # define the route that raises an exception @app.route('/error') def error(): # this will cause a ZeroDivisionError x = 1 / 0 return 'This will never be returned' # run the app if __name__ == '__main__': app.run(debug=True) If we run this app and visit the / route, we will see the normal response 'Hello, world!'. However, if we visit the /error route, we will see the custom error page that says 'Something went wrong'. We can also check the app.log file and see the log message that contains the exception information, such as: 2021-07-07 12:34:56 - werkzeug - ERROR - Exception occurred Traceback (most recent call last): File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__ return self.wsgi_app(environ, start_response) File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app response = self.handle_exception(e) File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception reraise(exc_type, exc_value, tb) File "/home/user/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise raise value File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/home/user/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise raise value File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/home/user/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/home/user/flask_app.py", line 31, in error x = 1 / 0 ZeroDivisionError: division by zero This way, we can easily identify and debug the source of the error, and improve our Flask app accordingly. 
Logging uncaught exceptions in Flask routes is a good practice that can help you maintain and troubleshoot your web applications. See also: Flask - Handling Application Errors and Unhandled exceptions.
5
6
74,184,794
2022-10-24
https://stackoverflow.com/questions/74184794/how-to-find-a-distribution-function-from-the-max-min-and-average-of-a-sample
Given that I know the max, min, and average of a sample (I don't have access to the sample itself), I would like to write a generic function to generate a sample with the same characteristics. From this answer I gather that this is no simple task, since many distributions can be found with the same characteristics.
max, min, average = [411, 1, 20.98]
I'm trying to use scipy.norm, but unsuccessfully. I can't seem to understand if I can pass the arguments mentioned above or if they are just returned values from an already generated function. I'm pretty new to Python stats, so this might be something quite easy to solve.
A triangular distribution should perform your desired task, since it takes three parameters (min, mode, max) as inputs, which roughly match your criteria. You can think of other distributions such as the normal or the uniform, but their input parameters only partially capture the three values you mention above. If I were in your position, I would go with the triangular distribution, because leaving out even one of those parameters loses information.
import numpy as np
import matplotlib.pyplot as plt

h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200, density=True)
plt.show()
Numpy - Triangular Distribution
3
4
74,151,442
2022-10-21
https://stackoverflow.com/questions/74151442/how-to-incorporate-individual-measurement-uncertainties-into-gaussian-process
I have a set of observations, f_i=f(x_i), and I want to construct a probabilistic surrogate, f(x) ~ N[mu(x), sigma(x)], where N is a normal distribution. Each observed output, f_i, is associated with a measurement uncertainty, sigma_i. I would like to incorporate these measurement uncertainties into my surrogate, f_i, so that mu(x) predicts the observations, f_i(x_i), and that the predicted standard deviation, sigma(x_i), envelops the uncertainty in the observed output, epsilon_i. The only way I can think of to accomplish this is through a combination of Monte Carlo sampling and Gaussian Process modeling. It would be ideal to accomplish this with a single Gaussian process, without Monte Carlo samples, but I can not make this work. I show three attempts to accomplish my goal. The first two avoid Monte Carlo sampling, but do not predict an average of f(x_i) with uncertainty bands that envelop epsilon(x_i). The third approach uses Monte Carlo sampling and accomplishes what I want to do. Is there a way to create a Gaussian Process that on average predicts the mean observed output, with uncertainty that will envelop uncertainty in the observed output, without using this Monte Carlo approach? import matplotlib.pyplot as plt import numpy as np import matplotlib from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import RBF, Matern, ExpSineSquared, WhiteKernel # given a set of inputs, x_i, and corresponding outputs, f_i, I want to make a surrogate f(x). # each f_i is measured with a different instrument, that has a different uncertainty. # measured inputs xs = np.array([44, 77, 125]) # measured outputs fs = [8.64, 10.73, 12.13] # uncertainty in measured outputs errs = np.array([0.1, 0.2, 0.3]) # inputs to predict finex = np.linspace(20, 200, 200) ############# ### approach 1: uncertainty in kernel # - the kernel is constant and cannot change as a function of the input # - uncertainty in measurements can be incorporated using a whitenoisekernel # - the white noise uncertainty can be specified as the average of the observation error # RBF + whitenoise kernel kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3)) + WhiteKernel(errs.mean(), noise_level_bounds=(errs.mean() - 1e-8, errs.mean() + 1e-8)) gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True) gaussian_process.fit((np.atleast_2d(xs).T), (fs)) mu, std = gaussian_process.predict((np.atleast_2d(finex).T), return_std=True) plt.scatter(xs, fs, zorder=3, s=30) plt.fill_between(finex, (mu - std), (mu + std), facecolor='grey') plt.plot(finex, mu, c='w') plt.errorbar(xs, fs, yerr=errs, ls='none') plt.xlabel('input') plt.ylabel('output') plt.title('White Noise Kernel - assumes uniform sensor error') plt.savefig('gp_whitenoise') plt.clf() #################### ### Aproach 2: incorporate measurement uncertainty in the likelihood function # - the likelihood function can be altered throught the alpha parameter # - this assumes gaussian uncertainty in the measured input kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3)) gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True, alpha=errs) gaussian_process.fit((np.atleast_2d(xs).T), (fs)) mu, std = gaussian_process.predict((np.atleast_2d(finex).T), return_std=True) plt.scatter(xs, fs, zorder=3, s=30) plt.fill_between(finex, (mu - std), (mu + std), facecolor='grey') plt.plot(finex, mu, c='w') plt.errorbar(xs, fs, yerr=errs, ls='none') 
plt.xlabel('input') plt.ylabel('output') plt.title('uncertainty in likelihood - assumes measurements may be innacruate') plt.savefig('gp_alpha') plt.clf() #################### ### Aproach 3: Monte Carlo of measurement uncertainty + GP # - The Gaussian process represents uncertainty in creating the surrogate f(x) # - The uncertainty in observed inputs can be propogated using Monte Carlo # - downside: less computationally efficient, no analytic solution for mean or uncertainty kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3)) posterior_history = np.zeros((finex.size, 100 * 50)) for sample in range(100): simulatedSamples = fs + np.random.normal(0, errs) gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True) gaussian_process.fit((np.atleast_2d(xs).T), (simulatedSamples)) posterior_sample = gaussian_process.sample_y((np.atleast_2d(finex).T), 50) plt.plot(finex, posterior_sample, c='orange', alpha=0.005) posterior_history[:, sample * 50 : (sample + 1) * 50] = posterior_sample plt.plot(finex, posterior_history.mean(1), c='w') plt.fill_between(finex, posterior_history.mean(1) - posterior_history.std(1), posterior_history.mean(1) + posterior_history.std(1), facecolor='grey', alpha=1, zorder=5) plt.scatter(xs, fs, zorder=6, s=30) plt.errorbar(xs, fs, yerr=errs, ls='none', zorder=6) plt.xlabel('input') plt.ylabel('output') plt.title('Monte Carlo + RBF Gaussian Process. Accurate but expensive.') plt.savefig('gp_monteCarlo') plt.clf()
Using your second approach, only slightly changing alpha: in scikit-learn, alpha is added to the diagonal of the kernel matrix and is interpreted as the variance of additional Gaussian measurement noise, so the standard deviations errs need to be squared.
kernel = 1 * RBF(length_scale=9, length_scale_bounds=(10, 1e3))
gaussian_process = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True, alpha=errs**2)
gaussian_process.fit((np.atleast_2d(xs).T), (fs))
mu, std = gaussian_process.predict((np.atleast_2d(finex).T), return_std=True)
8
1
74,171,275
2022-10-23
https://stackoverflow.com/questions/74171275/azure-function-not-running-on-m1
Running import logging import azure.functions as func def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') name = req.params.get('name') if not name: try: req_body = req.get_json() except ValueError: pass else: name = req_body.get('name') if name: return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.") else: return func.HttpResponse( "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", status_code=200 ) in vscode pyenv shell 3.9.12 Requirement already satisfied: azure-functions in ./.venv/lib/python3.9/site-packages (from -r requirements.txt (line 5)) (1.12.0) WARNING: You are using pip version 21.2.4; however, version 22.3 is available. You should consider upgrading via the '/Users/deniz/Library/CloudStorage/OneDrive-DualCitizen/Dev/2ndfunction/.venv/bin/python -m pip install --upgrade pip' command. * Terminal will be reused by tasks, press any key to close it. * Executing task: . .venv/bin/activate && func host start Found Python version 3.9.10 (python3). Azure Functions Core Tools Core Tools Version: 4.0.4829 Commit hash: N/A (64-bit) Function Runtime Version: 4.11.2.19273 [2022-10-23T09:38:17.290Z] Failed to initialize worker provider for: /opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4829/workers/python [2022-10-23T09:38:17.290Z] Microsoft.Azure.WebJobs.Script: Architecture Arm64 is not supported for language python. [2022-10-23T09:38:17.818Z] Failed to initialize worker provider for: /opt/homebrew/Cellar/azure-functions-core-tools@4/4.0.4829/workers/python [2022-10-23T09:38:17.818Z] Microsoft.Azure.WebJobs.Script: Architecture Arm64 is not supported for language python. [2022-10-23T09:38:17.957Z] A host error has occurred during startup operation '29ac434c-0276-4c0b-a85f-c9e4863577dc'. [2022-10-23T09:38:17.957Z] Microsoft.Azure.WebJobs.Script: Did not find functions with language [python]. [2022-10-23T09:38:17.964Z] Failed to stop host instance '3d492f60-0fd3-48dd-bce0-d7cc99da71c8'. [2022-10-23T09:38:17.964Z] Microsoft.Azure.WebJobs.Host: The host has not yet started. Value cannot be null. (Parameter 'provider') [2022-10-23T09:38:17.986Z] A host error has occurred during startup operation '1327f3e2-25e9-4318-bfea-0bac013aff02'. [2022-10-23T09:38:17.986Z] Microsoft.Extensions.DependencyInjection: Cannot access a disposed object. [2022-10-23T09:38:17.986Z] Object name: 'IServiceProvider'. * Terminal will be reused by tasks, press any key to close it. I tried a bunch of things incl: Create a conda environment with supported python version Go to the root directory of the project, Remove .venv folder Activate the newly created conda environment Create new virtual environment using python3 -m venv .venv/ I also tried to apply the Rosetta fix as outlined here How to run the Homebrew installer under Rosetta 2 on M1 Macbook but wasn't successfull. How can I fix this?
I kept running into problems with python on my M1 Mac until I went completely to Rosetta on the command line. For that, I did the following: Update Rosetta: In a Terminal type: softwareupdate --install-rosetta In Finder, type ⇧⌘G and go to /Applications/Utilities. Then duplicate Terminal: Rename the second Terminal to "Rosetta" (or whatever you like) and have it execute in Rosetta by checking "Open using Rosetta" in the "Get Info" dialogue: Open a Rosetta Terminal and make sure it shows i386 when you issue the command arch: In that terminal, install homebrew (per the homebrew homepage): /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Once homebrew has been installed, install miniconda using homebrew: brew install --cask miniconda Create a conda environment, for instance here a python 3.9 env named azure: conda create -n azure python=3.9 Then activate the environment: conda activate azure From here on out, you have a fully functioning i386 Python system. This has resolved all problems that I had with Azure, Numpy, Pandas, etc. on my M1 Mac.
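Once the environment is active, one possible follow-up (a sketch, reusing the commands from the question) is to reinstall the function's dependencies inside that environment and start the host again:
arch                              # should report i386 in the Rosetta terminal
pip install -r requirements.txt
func host start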
5
5
74,200,729
2022-10-25
https://stackoverflow.com/questions/74200729/is-there-a-way-to-make-an-inherited-abstract-property-a-required-constructor-arg
I'm using Python dataclasses with inheritance and I would like to make an inherited abstract property into a required constructor argument. Using an inherited abstract property as a optional constructor argument works as expected, but I've been having real trouble making the argument required. Below is a minimal working example, test_1() fails with TypeError: Can't instantiate abstract class Child1 with abstract methods inherited_attribute, test_2() fails with AttributeError: can't set attribute, and test_3() works as promised. Does anyone know a way I can achieve this behavior while still using dataclasses? import abc import dataclasses @dataclasses.dataclass class Parent(abc.ABC): @property @abc.abstractmethod def inherited_attribute(self) -> int: pass @dataclasses.dataclass class Child1(Parent): inherited_attribute: int @dataclasses.dataclass class Child2(Parent): inherited_attribute: int = dataclasses.field() @dataclasses.dataclass class Child3(Parent): inherited_attribute: int = None def test_1(): Child1(42) def test_2(): Child2(42) def test_3(): Child3(42)
So, the thing is, you declared an abstract property. Not an abstract constructor argument, or an abstract instance dict entry - abc has no way to specify such things. Abstract properties are really supposed to be overridden by concrete properties, but the abc machinery will consider it overridden if there is a non-abstract entry in the subclass's class dict. Your Child1 doesn't create a class dict entry for inherited_attribute - the annotation only creates an entry in the annotation dict. Child2 does create an entry in the class dict, but then the dataclass machinery removes it, because it's a field with no default value. This changes the abstractness status of Child2, which is undefined behavior below Python 3.10, but Python 3.10 added abc.update_abstractmethods to support that, and dataclasses uses that function on Python 3.10. Child3 creates an entry in the class dict, and since the dataclass machinery sees this entry as a default value, it leaves the entry there, so the abstract property is considered overridden. So you've got a few courses of action here. The first is to remove the abstract property. You don't want to force your subclasses to have a property - you want your subclasses to have an accessible inherited_attribute instance attribute, and it's totally fine if this attribute is implemented as an instance dict entry. abc doesn't support that, and using an abstract property is wrong, so just document the requirement instead of trying to use abc to enforce it. With the abstract property removed, Parent isn't actually abstract any more, and in fact doesn't really do anything, so at that point, you can just take Parent out entirely. Option 2, if you really want to stick with the abstract property, would be to give your subclasses a concrete property, properly overriding the abstract property: @dataclasses.dataclass class Child(Parent): _hidden_field: int @property def inherited_attribute(self): return self._hidden_field This would require you to give the field a different name from the attribute name you wanted, with consequences for the constructor argument names, the repr output, and anything else that cares about field names. The third option is to get something else into the class dict to shadow the inherited_attribute name, in a way that doesn't get treated as a default value. Python 3.10 added slots support in dataclasses, so you could do @dataclasses.dataclass(slots=True) class Child(Parent): inherited_attribute: int and the generated slot descriptor would shadow the abstract property, without being treated as a default value. However, this would not give the usual memory savings of slots, because your classes inherit from Parent, which doesn't use slots. Overall, I would recommend option 1. Abstract properties don't mean what you want, so just don't use them.
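For completeness, a minimal sketch of option 1 (no abstract property and no Parent at all; the requirement is simply documented) could be as small as:
import dataclasses

@dataclasses.dataclass
class Child:
    inherited_attribute: int  # required constructor argument; the old abstract-property contract is now just documented

Child(42)  # works as a required argument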
3
5