Columns: question_id (int64, 59.5M to 79.4M), creation_date (string, length 8 to 10), link (string, length 60 to 163), question (string, length 53 to 28.9k), accepted_answer (string, length 26 to 29.3k), question_vote (int64, 1 to 410), answer_vote (int64, -9 to 482)
76,613,542
2023-7-4
https://stackoverflow.com/questions/76613542/make-a-categorical-column-which-has-categories-a-b-c-in-polars
How do I make a Categorical column which has: elements: ['a', 'b', 'a', 'a'] categories ['a', 'b', 'c'] in polars? In pandas, I would do: In [31]: pd.Series(pd.Categorical(['a', 'b', 'a', 'a'], categories=['a', 'b', 'c'])) Out[31]: 0 a 1 b 2 a 3 a dtype: category Categories (3, object): ['a', 'b', 'c'] I have no idea how to do this in polars, the docs for Categorical look completely empty: https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.Categorical.html
You can use the StringCache with pl.StringCache(): pl.Series(['a', 'b', 'c'], dtype=pl.Categorical()) s = pl.Series(['a', 'b', 'a', 'a','z'], dtype=pl.Categorical()) Everything in the StringCache context will share the same index/value mapping so the first line initialized the mapping with the categories you want. The second line is the Series you want to keep. I added an extra 'z' so that we can see: s.to_physical() shape: (5,) Series: '' [u32] [ 0 1 0 0 3 ] Note that the s series skips 2 as it doesn't have a c value in it.
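For reference, here is the answer's approach as one runnable sketch (a minimal sketch, assuming a polars version in which pl.StringCache() works as a context manager and pl.Categorical is available):

```python
import polars as pl

# Minimal sketch of the StringCache trick described above.
with pl.StringCache():
    # Seed the global string cache with the full category set first.
    pl.Series(["a", "b", "c"], dtype=pl.Categorical)
    # This series shares the same string-to-index mapping as the seed.
    s = pl.Series(["a", "b", "a", "a"], dtype=pl.Categorical)

print(s.to_physical())  # indices refer back to the seeded categories
```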
3
2
76,593,284
2023-7-1
https://stackoverflow.com/questions/76593284/delaunay-triangulation-of-a-fibonacci-sphere
I have generated a set of (x,y,z) coordinates on a unit sphere using the Fibonacci sphere algorithm. Plotted with a 3d scatter plot, they look alright: https://i.imgur.com/OsQo0CC.gif I now want to connect them with edges, i.e. a triangulation. As suggested in How do i draw triangles between the points of a fibonacci sphere? I went for Delaunay triangulation. For that I used the stripy Python package, which provides triangulations on a sphere. First I convert the coords to spherical (degrees) by iterating over the points and using the following formula: r = float(sqrt(x * x + y * y + z * z)) theta = float(acos(z / r)) # to degrees phi = float(atan2(y, x)) return r, theta, phi I obtain vertices_spherical, an array of shape (n, 3) where n is the number of points. We don't need the radius, so I discard it, and I have an array of shape (n, 2). Then I convert to radians and build the triangulation, then make a graph out of it: vertices_lon = np.radians(vertices_spherical.T[0]) vertices_lat = np.radians(vertices_spherical.T[1]) spherical_triangulation = stripy.sTriangulation(lons=vertices_lon, lats=vertices_lat, permute=True) # Build the graph graph: List[Node] = [] for i in range(spherical_triangulation.npoints): node = Node(name=f'{vertices_spherical.T[0][i]}, {vertices_spherical.T[1][i]}', lon=spherical_triangulation.lons[i], lat=spherical_triangulation.lats[i]) graph.append(node) segs = spherical_triangulation.identify_segments() for s1, s2 in segs: graph[s1].add_neighbor(graph[s2]) return graph (Node is a simple class with a name, lon, lat, and neighbors) I then convert the coordinates back to cartesian, then scatter plot them. For each node, I iterate over its neighbors and draw a line between them. And to my surprise, I get the following result. It seems kind of walnut- or brain-shaped, where there are two hemispheres where the triangulation worked fine, but for some reason the middle is sorta scrunched up along one plane: https://i.imgur.com/AIlLTmS.gif What could be causing this? Is it simply because of some limitation in how triangulation works? Is it because the points on a Fibonacci sphere are not periodic in some way? Or some mistake in my code? Kind of at a loss here, since the conversion to spherical and back seems to work fine, and there are no surprises with the plotting.
Fixed it! The issue was that my latitudes were between 0 and pi, whereas stripy.sTriangulation expects them to be between -pi/2 and pi/2. If this happens to anyone else, just know that stripy.sTriangulation expects latitudes between -pi/2 and pi/2, and longitudes between 0 and 2pi. So check if that's what you are providing it. Here's an animation showing the fixed sphere: https://imgur.com/JiGyegj.gif
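A small sketch of the fix described above: converting the colatitude returned by acos (in [0, pi]) to the latitude range stripy expects. The helper name and the use of vectorized numpy are illustrative assumptions, not the original poster's code:

```python
import numpy as np

def to_lon_lat(points):
    # points: array of shape (n, 3) with (x, y, z) on (or near) the unit sphere.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arccos(z / r)              # colatitude in [0, pi]
    lat = np.pi / 2 - theta               # latitude in [-pi/2, pi/2]
    lon = np.arctan2(y, x) % (2 * np.pi)  # longitude in [0, 2*pi)
    return lon, lat
```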
3
1
76,599,694
2023-7-2
https://stackoverflow.com/questions/76599694/what-is-the-fastest-way-to-append-unique-integers-to-a-list-while-keeping-the-li
I need to append values to an existing list one by one, while maintaining the following two conditions at all times: Every value in the list is unique The list is always sorted in ascending order The values are all integers, and there is a huge amount of them (literally millions, no exaggeration), and the list is dynamically constructed, values are added or removed depending on a changing condition, and I really need a very efficient way to do this. sorted(set(lst)) doesn't qualify as a solution because first the list is not pre-existing and I need to mutate it after that, and the solution isn't efficient by itself: to maintain the two above conditions I would need to repeat the inefficient method after every mutation, which is impractical and would take an unimaginable amount of time to process millions of numbers. One way to do it is to maintain a set with the same elements as the list, do membership checking using the set and only add elements to the list if they aren't in the set, and add the same elements to both at the same time. This maintains uniqueness. To maintain the order, use binary search to calculate the insertion index required and insert the element at that index. An example implementation I came up with without much thought: from bisect import bisect class Sorting_List: def __init__(self): self.data = [] self.unique = set() def add(self, n): if n in self.unique: return self.unique.add(n) if not self.data: self.data.append(n) return if n > self.data[-1]: self.data.append(n) elif n < self.data[0]: self.data.insert(0, n) elif len(self.data) == 2: self.data.insert(1, n) else: self.data.insert(bisect(self.data, n), n) I am not satisfied with this solution, because I have to maintain a set, which isn't memory efficient, and I don't think this is the most time efficient solution.
I have done some tests: from timeit import timeit def test(n): setup = f'''from bisect import bisect from random import choice c = choice(range({n//2}, {n})) numbers = list(range({n}))''' linear = timeit('c in numbers', setup) / 1e6 binary = timeit('bisect(numbers, c) - 1 == c ', setup) / 1e6 return linear, binary In [182]: [test(i) for i in range(1, 24)] Out[182]: [(3.1215199967846275e-08, 9.411800000816583e-08), (4.0730200009420514e-08, 9.4089699909091e-08), (5.392530001699925e-08, 1.0571250005159527e-07), (5.4071999969892203e-08, 1.111615999834612e-07), (5.495569994673133e-08, 1.3055420003365725e-07), (7.999380002729595e-08, 1.2215890001971274e-07), (6.739119999110698e-08, 1.1633279989473522e-07), (1.1775600002147258e-07, 1.2142769992351532e-07), (9.138470003381372e-08, 1.1602859990671277e-07), (1.212503999704495e-07, 1.2919300002977253e-07), (1.4093979995232076e-07, 1.1543070001062005e-07), (1.3911779993213713e-07, 1.1900339997373521e-07), (1.641304000513628e-07, 1.2721199996303767e-07), (2.2550319996662438e-07, 1.3572790008038284e-07), (2.0048839994706214e-07, 1.2690539995674044e-07), (2.0169020001776515e-07, 1.3345349999144673e-07), (1.482249000109732e-07, 1.2819399998988957e-07), (1.777580000925809e-07, 1.2856919993646443e-07), (1.5940839995164425e-07, 1.2710969999898224e-07), (2.772621000185609e-07, 1.4048079994972795e-07), (2.014727999921888e-07, 1.4225799997802823e-07), (2.851358000189066e-07, 1.3718699989840387e-07), (2.607858000556007e-07, 1.4413580007385463e-07)] So membership checking of lists is done using a linear search, by iterating through the list one by one and performing equality checking; this is inefficient but beats binary search for a small number of elements (n <= 12) and is much slower for larger amounts. In [183]: test(256) Out[183]: (2.5505281999940053e-06, 1.7594210000243037e-07) And set membership checking is faster than binary search, which is much faster than linear search: In [188]: lst = list(range(256)) In [189]: s = set(lst) In [190]: %timeit 199 in s 42.5 ns ± 0.946 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [191]: %timeit bisect(lst, 199) 159 ns ± 1.38 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) In [192]: %timeit 199 in lst 2.53 µs ± 31.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) How can we devise a faster method? I am well aware that Python sets are unordered, and sorting sets yields lists. But perhaps there are some external libraries that provide ordered sets; if there are, I am not aware of them and I haven't used them. Solutions using such ordered sets are welcome so long as they are performant, but I am not asking for recommendations, so please don't close the question for that reason. Nevertheless, ordered sets only fulfill the first criterion: only uniqueness is maintained, and I need the orderedness to be maintained as well. list here is just terminology; in plain Python, list is the only ordered mutable sequence. Other data types are welcome. About SQLite: I haven't tested in-memory transient databases yet, but for a local HDD based database, the I/O delay is many milliseconds, which is unacceptable. I have just performed yet another test: In [243]: numbers = random.choices(range(4294967295), k=1048576) In [244]: sl = Sorting_List() In [245]: ss = SortedSet() In [246]: %timeit for i in numbers: ss.add(i) 306 ms ± 8.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [247]: %timeit for i in numbers: sl.add(i) --------------------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) ... In [248]: sls = SortedList() In [249]: s = set() In [250]: %%timeit ...: for i in numbers: ...: if i not in s: ...: sls.add(i) ...: s.add(i) 145 ms ± 3.24 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [251]: len(numbers) Out[251]: 1048576 It seems my custom implementation is at least as performant as SortedSet for reasonable amounts of data, but I need much more efficient methods. For a million elements my method becomes very inefficient indeed, while SortedSet and SortedList both remain competitive. And now it seems a SortedList plus a set for membership checking is the most time efficient method, but obviously not very memory efficient, just like my custom implementation. But my custom implementation seems to outperform SortedSet, which seems to be memory efficient. Testing SortedList by adding 2^20 elements one by one, while keeping the elements unique by doing membership checking using the container itself: In [252]: sls = SortedList() In [253]: %%timeit ...: for i in numbers: ...: if i not in sls: ...: sls.add(i) 1.93 s ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) (numbers is defined above) The test clearly shows SortedList membership checking is slow; as suspected, it uses linear search.
I like data rather than guessing so I profiled the approaches suggested by @Kelly Bundy from bisect import bisect from sortedcontainers import SortedList, SortedSet class Sorting_List: # The Original def __init__(self): self.data = [] self.unique = set() def add(self, n): if n in self.unique: return self.unique.add(n) if not self.data: self.data.append(n) return if n > self.data[-1]: self.data.append(n) elif n < self.data[0]: self.data.insert(0, n) elif len(self.data) == 2: self.data.insert(1, n) else: self.data.insert(bisect(self.data, n), n) class Sorting_List_Kelly: def __init__(self): self.data = [] self.unique = set() def add(self, n): if n in self.unique: return self.unique.add(n) self.data[bisect(self.data, n) : 0] = (n,) class SortedListWrapper: def __init__(self): self.data = SortedList() self.unique = set() def add(self, n): if n in self.unique: return self.unique.add(n) self.data.add(n) import random from performance_measurement import run_performance_comparison def original(values): x = Sorting_List() for value in values: x.add(value) return x.data def kelly(values): x = Sorting_List_Kelly() for value in values: x.add(value) return x.data def sortedset(values): x = SortedSet() for value in values: x.add(value) return x def sortedlist(values): x = SortedListWrapper() for value in values: x.add(value) return x def generate_lists(N): return [random.sample(range(N), N)] data_size = [1000, 2000, 3000, 4000, 5000, 10000, 20000, 30000, 40000, 50000] approaches = [original, kelly, sortedset, sortedlist] run_performance_comparison(approaches, data_size, setup=generate_lists) Especially for small problem instances @Kelly Bundy's improved approach outperforms SortedList with a wrapper and SortedSet from sortedcontainers. Profiling code: import timeit import matplotlib.pyplot as plt from typing import List, Dict, Callable from contextlib import contextmanager @contextmanager def data_provider(data_size, setup=lambda N: N, teardown=lambda: None): data = setup(data_size) yield data teardown() def run_performance_comparison(approaches: List[Callable], data_size: List[int], setup=lambda N: N, teardown=lambda: None, number_of_repetitions=5, title='N'): approach_times: Dict[Callable, List[float]] = {approach: [] for approach in approaches} for N in data_size: with data_provider(N, setup, teardown) as data: for approach in approaches: approach_time = timeit.timeit(lambda: approach(*data), number=number_of_repetitions) approach_times[approach].append(approach_time) for approach in approaches: plt.plot(data_size, approach_times[approach], label=approach.__name__) plt.xlabel(title) plt.ylabel('Execution Time (seconds)') plt.title('Performance Comparison') plt.legend() plt.show()
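As a compact illustration of the set-plus-bisect idea benchmarked above (essentially Sorting_List_Kelly under a hypothetical name), here is a self-contained sketch with a tiny usage example:

```python
from bisect import bisect

class SortedUniqueList:
    """Keeps a sorted list of unique integers; the set gives O(1) membership checks."""
    def __init__(self):
        self.data = []       # always sorted
        self.unique = set()  # mirrors self.data for fast "in" checks

    def add(self, n):
        if n in self.unique:
            return
        self.unique.add(n)
        i = bisect(self.data, n)
        self.data[i:i] = (n,)  # empty-slice assignment inserts at position i

s = SortedUniqueList()
for v in [5, 1, 5, 3, 2, 3]:
    s.add(v)
print(s.data)  # [1, 2, 3, 5]
```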
6
4
76,595,013
2023-7-1
https://stackoverflow.com/questions/76595013/assertionerror-applications-must-write-bytes-when-streaming-data-using-python
I wrote the following Python code in the app_consumer.py and tasks_consumer.py files to stream data to the consumer.html Jinja template. app_consumer.py from flask import render_template, Response, request from flask_socketio import join_room from init_consumer import app, socketio import tasks_consumer import uuid def render_template_stream(template_name, **context): app.update_template_context(context) t = app.jinja_env.get_template(template_name) rv = t.stream(context) rv.enable_buffering(5) return rv @app.before_request def initialize_params(): if not hasattr(app.config,'uid'): sid = str(uuid.uuid4()) app.config['uid'] = sid print("initialize_params - Session ID stored =", sid) @app.route("/", methods=['GET']) def index(): return render_template('consumer.html', stockInfo = {}) @app.route('/consumetasks', methods=['GET','POST']) def getStockStatus(): if request.method == 'POST': print("Retrieving stock status") return Response(render_template_stream('consumer.html', stockInfo = tasks_consumer.sendStockStatus())) elif request.method == 'GET': return ''' <!doctype html> <html> <head> <title>Stock Sheet</title> <meta name="viewport" content="width=device-width, initial-scale=1.0"/> </head> <body class="container"> <h1>Stock Sheet</h1> <div> <button id="consumeTasks">Check stock status</button> </div> </body> </html> ''' # Run using port 5001 if __name__ == "__main__": socketio.run(app,host='localhost', port=5001,debug=True) tasks_consumer.py import csv from flask import request, stream_with_context from init_consumer import app, socketio import json # Receive the webhook requests and emit a SocketIO event back to the client def send_message(data): status_code = 0 if request.method == 'POST': roomid = app.config['uid'] msg = json.dumps(data) event = "Send_stock_status" socketio.emit(event, msg, namespace = '/collectHooks', room = roomid) status_code = 200 else: status_code = 405 # Method not allowed return status_code # Retrieve the stock status of the products sent through the webhook requests and return them back to the client. 
@app.route('/consumetasks', methods=['POST']) def sendStockStatus(): stockList = [] # List of products in stock with open("NZ_NVJ_Apparel_SKUs_sheet.csv", newline='') as csvFile: stockReader = csv.reader(csvFile, delimiter=',', quotechar='"') for row in stockReader: stockList.append(row[0]) stockSheet = {} # Dictionary of products sent in the request and their stock status def generateStockStatus(): request_data = request.get_json() if request_data: if 'SKU' in request_data: stockRequest = request_data['SKU'] # List of products sent in the request for stock in stockRequest: if stock in stockList: stockStatus = "In Stock" stockSheet.update({str(stock):stockStatus}) send_message(stockSheet) yield stock, stockStatus else: stockStatus = "Out of Stock" stockSheet.update({str(stock):stockStatus}) send_message(stockSheet) yield stock, stockStatus return stream_with_context(generateStockStatus()) When I ran the app_consumer.py file, I got the following output: 127.0.0.1 - - [02/Jul/2023 00:28:02] "GET / HTTP/1.1" 200 - initialize_params - Session ID stored = 69b0e5e8-d5ea-4279-88d1-9653007662d5 emitting event "Send_stock_status" to 69b0e5e8-d5ea-4279-88d1-9653007662d5 [/collectHooks] 127.0.0.1 - - [02/Jul/2023 00:28:05] "POST /consumetasks HTTP/1.1" 200 - followed by the error: Error on request: Traceback (most recent call last): File "C:\Users\yuanl\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\serving.py", line 364, in run_wsgi execute(self.server.app) File "C:\Users\yuanl\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\serving.py", line 328, in execute write(data) File "C:\Users\yuanl\AppData\Local\Programs\Python\Python311\Lib\site-packages\werkzeug\serving.py", line 296, in write assert isinstance(data, bytes), "applications must write bytes" AssertionError: applications must write bytes Note that both the stock and stockStatus variables are of type string. A sample stockSheet looks like this: {'PFMTSHIRT_CN_WM_CLS_L_OLIVEDRAB': 'Out of Stock'} I initially thought the error was generated by the yield statements in the generateStockStatus() function in tasks_consumer.py. Therefore I tried to change the yield statements to yield str(stock), str(stockStatus), yield stock.encode('utf-8'), stockStatus.encode('utf-8') and yield bytes(stock, 'utf-8'), bytes(stockStatus, 'utf-8'), however the error persisted. I then thought the error was generated by the return statement after the elif request.method == 'GET': statement in the app_consumer.py file, therefore I added the .encode() method on the string returned, however I got a new error: TypeError: Object of type bytes is not JSON serializable. Could anyone point me in the right direction in regards to fixing the error?
You should yield only one value at a time, such as this: yield stock yield stockStatus For more information, you can check this answer.
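A minimal sketch of what that could look like for the generator in tasks_consumer.py -- yielding one string chunk at a time (which Flask encodes to bytes) instead of tuples. The chunk format and helper name are illustrative assumptions, not the original code:

```python
def generate_stock_status(stock_request, stock_list):
    # Yield a single encodable chunk per product; the WSGI layer asserts
    # that the response body consists of bytes, and Flask encodes each
    # yielded str for us -- tuples cannot be encoded, hence the error.
    for stock in stock_request:
        status = "In Stock" if stock in stock_list else "Out of Stock"
        yield f"{stock}: {status}\n"
```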
3
3
76,615,802
2023-7-4
https://stackoverflow.com/questions/76615802/what-are-the-tradeoffs-between-jax-lax-map-and-jax-vmap
This Github issue hints that there are tradeoffs in performance / memory / compilation time when choosing between jax.lax.map and jax.vmap. What are the specific details of these tradeoffs with respect to both GPUs and CPUs?
The main difference is that jax.vmap is a vectorizing transformation, while lax.map is an iterative transformation. Let's look at an example. Example function: vector_dot Suppose you have implemented a simple function that takes 1D vectors as inputs. For simplicity let's make it a simple dot product, but one that asserts the inputs are one-dimensional: import jax import jax.numpy as jnp import numpy as np def vector_dot(x, y): assert x.ndim == y.ndim == 1, "vector inputs required" return jnp.dot(x, y) We can create some random 1D vectors to test this: rng = np.random.default_rng(8675309) x = rng.uniform(size=50) y = rng.uniform(size=50) print(vector_dot(x, y)) # 14.919376 To see what JAX is doing with this function under the hood, we can print the jaxpr, which is JAX's intermediate-level representation of a function: print(jax.make_jaxpr(vector_dot)(x, y)) # { lambda ; a:f32[50] b:f32[50]. let # c:f32[] = dot_general[dimension_numbers=(([0], [0]), ([], []))] a b # in (c,) } This shows that JAX lowers this code to a single call to dot_general, the primitive for generalized dot products in JAX and XLA. Iterating over vector_dot Now, suppose you have a 2D input, and you'd like to apply this function to each row. There are several ways you could imagine doing this: three examples are using a Python for loop, using jax.vmap, or using jax.lax.map: def batched_dot_for_loop(x_batched, y): return jnp.array([vector_dot(x, y) for x in x_batched]) def batched_dot_lax_map(x_batched, y): return jax.lax.map(lambda x: vector_dot(x, y), x_batched) batched_dot_vmap = jax.vmap(vector_dot, in_axes=(0, None)) Applying these three functions to a batched input yields the same results, to within floating point precision: x_batched = rng.uniform(size=(4, 50)) print(batched_dot_for_loop(x_batched, y)) # [11.964929 12.485695 13.683528 12.9286175] print(batched_dot_lax_map(x_batched, y)) # [11.964929 12.485695 13.683528 12.9286175] print(batched_dot_vmap(x_batched, y)) # [11.964927 12.485697 13.683528 12.9286175] But if we look at the jaxpr for each, we can see that the three approaches lead to very different computational characteristics. The for loop solution looks like this: print(jax.make_jaxpr(batched_dot_for_loop)(x_batched, y)) { lambda ; a:f32[4,50] b:f32[50]. let c:f32[1,50] = slice[ limit_indices=(1, 50) start_indices=(0, 0) strides=(1, 1) ] a d:f32[50] = squeeze[dimensions=(0,)] c e:f32[] = dot_general[dimension_numbers=(([0], [0]), ([], []))] d b f:f32[1,50] = slice[ limit_indices=(2, 50) start_indices=(1, 0) strides=(1, 1) ] a g:f32[50] = squeeze[dimensions=(0,)] f h:f32[] = dot_general[dimension_numbers=(([0], [0]), ([], []))] g b i:f32[1,50] = slice[ limit_indices=(3, 50) start_indices=(2, 0) strides=(1, 1) ] a j:f32[50] = squeeze[dimensions=(0,)] i k:f32[] = dot_general[dimension_numbers=(([0], [0]), ([], []))] j b l:f32[1,50] = slice[ limit_indices=(4, 50) start_indices=(3, 0) strides=(1, 1) ] a m:f32[50] = squeeze[dimensions=(0,)] l n:f32[] = dot_general[dimension_numbers=(([0], [0]), ([], []))] m b o:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] e p:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] h q:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] k r:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] n s:f32[4] = concatenate[dimension=0] o p q r in (s,) } The key feature is that the iterations in the for loop are unrolled into a single long program. 
The lax.map version looks like this: print(jax.make_jaxpr(batched_dot_lax_map)(x_batched, y)) { lambda ; a:f32[4,50] b:f32[50]. let c:f32[4] = scan[ jaxpr={ lambda ; d:f32[50] e:f32[50]. let f:f32[] = dot_general[dimension_numbers=(([0], [0]), ([], []))] e d in (f,) } length=4 linear=(False, False) num_carry=0 num_consts=1 reverse=False unroll=1 ] b a in (c,) } The key feature is that the loop is lowered to a scan primitive, which is XLA's native static loop operation. The vmap version looks like this: print(jax.make_jaxpr(batched_dot_vmap)(x_batched, y)) { lambda ; a:f32[4,50] b:f32[50]. let c:f32[4] = dot_general[dimension_numbers=(([1], [0]), ([], []))] a b in (c,) } The key feature here is that the vmap transformation is able to recognize that a batched 1D dot product is equivalent to a 2D dot product, so the result is a single extremely efficient native operation. Performance considerations These three approaches can have very different performance characteristics. The details will depend on the specifics of the original function (here vector_dot) but in broad strokes, we can consider three aspects: Compilation Cost If you JIT-compile your program, you'll find: The for-loop based solution will have compilation times that grow super-linearly with the number of iterations. This is due to the unrolling seen in the jaxpr above. The lax.map and jax.vmap solutions will have fast compilation time, which under normal circumstances will not grow with the size of the batch dimension. Runtime In terms of runtime: The for loop solution can be very fast, because XLA can often fuse operations between the unrolled iterations. This is the flip side of the long compilation times. The lax.map solution will generally be slow, because it is always executed sequentially with no possibility of fusing/parallelization between iterations. The jax.vmap solution will generally be the fastest, especially on accelerators like GPU or TPU, because it can make use of native batching parallelism on the device. Memory Cost The for loop and lax.map solutions generally have good memory performance, because they execute sequentially and don't require storage of large intermediate results. The main downside of the jax.vmap solution is that it can cause memory to blow up because the entire problem must fit into memory at once. This is not an issue with the simple vector_dot function used here, but can be for more complicated functions. Benchmarks You can see these general principles at play when benchmarking the above functions. The following timings are on a Colab T4 GPU: y = rng.uniform(size=1000) x_batched = rng.uniform(size=(200, 1000)) %time jax.jit(batched_dot_for_loop).lower(x_batched, y).compile() # CPU times: user 4.96 s, sys: 55 ms, total: 5.01 s # Wall time: 7.24 s %timeit jax.jit(batched_dot_for_loop)(x_batched, y).block_until_ready() # 1.09 ms ± 149 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %time jax.jit(batched_dot_lax_map).lower(x_batched, y).compile() # CPU times: user 117 ms, sys: 2.71 ms, total: 120 ms # Wall time: 172 ms %timeit jax.jit(batched_dot_lax_map)(x_batched, y).block_until_ready() # 2.67 ms ± 56.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) %time jax.jit(batched_dot_vmap).lower(x_batched, y).compile() # CPU times: user 51 ms, sys: 941 µs, total: 52 ms # Wall time: 103 ms %timeit jax.jit(batched_dot_vmap)(x_batched, y).block_until_ready() # 719 µs ± 129 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
6
10
76,578,147
2023-6-29
https://stackoverflow.com/questions/76578147/dst-temporal-feature-from-timestamp-using-polars
I'm migrating code to polars from pandas. I have time-series data consisting of timestamp and value columns and I need to compute a bunch of features, i.e. from datetime import datetime, timedelta df = pl.DataFrame({ "timestamp": pl.datetime_range( datetime(2017, 1, 1), datetime(2018, 1, 1), timedelta(minutes=15), time_zone="Australia/Sydney", time_unit="ms", eager=True), }) value = np.random.normal(0, 1, len(df)) df = df.with_columns([pl.Series(value).alias("value")]) I need to generate a column containing an indicator of whether the timestamp is in standard or daylight time. I'm currently using map_elements because as far as I can see there isn't a temporal expression for this, i.e. my current code is def dst(timestamp:datetime): return int(timestamp.dst().total_seconds()!=0) df = df.with_columns(pl.struct("timestamp").map_elements(lambda x: dst(**x)).alias("dst")) (this uses a trick that effectively checks if the tzinfo.dst(dt) offset is zero or not) Is there a (fast) way of doing this using polars expressions rather than (slow) map_elements?
With polars>=0.18.5 the following works df = df.with_columns((pl.col("timestamp").dt.dst_offset()==0).cast(pl.Int32).alias("dst"))
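Note that the original map_elements version returns 1 when the DST offset is non-zero (i.e. daylight time is in effect), so to reproduce that exact definition the comparison would be flipped. A sketch, assuming polars >= 0.18.5 where Expr.dt.dst_offset and pl.duration are available:

```python
import polars as pl

# 1 when daylight saving time is in effect, 0 otherwise
# (mirrors int(timestamp.dst().total_seconds() != 0) from the question).
df = df.with_columns(
    (pl.col("timestamp").dt.dst_offset() != pl.duration(seconds=0))
    .cast(pl.Int32)
    .alias("dst")
)
```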
3
3
76,582,545
2023-6-29
https://stackoverflow.com/questions/76582545/how-to-export-to-dict-some-properties-of-an-object
I have a Python class which has several properties. I want to implement a method which will return some properties as a dict. I want to mark the properties with a decorator. Here's an example: class Foo: @export_to_dict # I want to add this property to dict @property def bar1(self): return 1 @property # I don't want to add this property to dict def bar2(self): return {"smth": 2} @export_to_dict # I want to add this property to dict @property def bar3(self): return "a" @property def bar4(self): return [2, 3, 4] def to_dict(self): return ... # expected result: {"bar1": 1, "bar3": "a"} One way to implement it is to set an additional attribute on the properties with the export_to_dict decorator, like this: def export_to_dict(func): setattr(func, '_export_to_dict', True) return func and to search for properties with the _export_to_dict attribute when to_dict is called. Is there another way to accomplish the task?
Marking each property forces to_dict to scan all methods/attributes on each invocation, which is slow and inelegant. Here's a self-contained alternative that keeps your example usage identical. Keep a list of exported properties in a class attribute By making export_to_dict a class, we can use __set_name__ (Python 3.6+) to get a reference to the Foo class and add a new attribute to it. Now to_dict knows exactly which properties to extract, and because the list is per-class, you can annotate different classes without conflict. And while we're at it, we can make export_to_dict automatically generate the to_dict function for any class that has exported properties. The decorator also restores the original property after the class is created, so the properties work as normal without any performance impact. class export_to_dict: def __init__(self, property): self.property = property def __set_name__(self, owner, name): if not hasattr(owner, '_exported_properties'): owner._exported_properties = [] assert not hasattr(owner, 'to_dict'), 'Class already has a to_dict method' owner.to_dict = lambda self: {prop.__name__: prop(self) for prop in owner._exported_properties} owner._exported_properties.append(self.property.fget) # We don't need the decorator object anymore, restore the property. setattr(owner, name, self.property) class Foo: @export_to_dict # I want to add this property to dict @property def bar1(self): return 1 @property # I don't want to add this property to dict def bar2(self): return {"smth": 2} @export_to_dict # I want to add this property to dict @property def bar3(self): return "a" @property def bar4(self): return [2, 3, 4] # to_dict is not needed anymore here! print(Foo().to_dict()) {'bar1': 1, 'bar3': 'a'} If you don't want your Foo class to have an extra attribute, you can store the mapping in a static dict export_to_dict.properties_by_class = {class: [properties]}. Properties with setters If you need to support property setters, the situation is a bit more complicated but still doable. Passing property.setter through is not sufficient, because the setter replaces the getter and __set_name__ is not called (they have the same name, after all). This can be fixed by splitting the annotation process and creating a wrapper class for property.setter. class export_to_dict: # Used to create setter for properties. class setter_helper: def __init__(self, setter, export): self.setter = setter self.export = export def __set_name__(self, owner, name): self.export.annotate_class(owner) setattr(owner, name, self.setter) def __init__(self, property): self.property = property @property def setter(self): return lambda fn: export_to_dict.setter_helper(self.property.setter(fn), self) def annotate_class(self, owner): if not hasattr(owner, '_exported_properties'): owner._exported_properties = [] assert not hasattr(owner, 'to_dict'), 'Class already has a to_dict method' owner.to_dict = lambda self: {prop.__name__: prop(self) for prop in owner._exported_properties} owner._exported_properties.append(self.property.fget) def __set_name__(self, owner, name): self.annotate_class(owner) # We don't need the decorator object anymore, restore the property. setattr(owner, name, self.property) class Foo: @export_to_dict # I want to add this property to dict @property def writeable_property(self): return self._writeable_property @writeable_property.setter def writeable_property(self, value): self._writeable_property = value foo = Foo() foo.writeable_property = 5 print(foo.to_dict()) {'writeable_property': 5}
6
2
76,612,163
2023-7-4
https://stackoverflow.com/questions/76612163/what-are-the-advantages-of-a-polars-lazyframe-over-a-dataframe
Python Polars is pretty similar to Python pandas. I know in pandas we do not have LazyFrames. We can create LazyFrames just like DataFrames in polars. import polars as pl data = {"a": [1, 2, 3], "b": [5, 4, 8]} lf = pl.LazyFrame(data) I want to know what the advantages of a LazyFrame over a DataFrame are. If someone could explain with examples, thanks.
I think this is very well explained in the Polars docs: With the lazy API Polars doesn't run each query line-by-line but instead processes the full query end-to-end. To get the most out of Polars it is important that you use the lazy API because: the lazy API allows Polars to apply automatic query optimization with the query optimizer the lazy API allows you to work with larger than memory datasets using streaming the lazy API can catch schema errors before processing the data Here we see how to use the lazy API starting from either a file or an existing DataFrame. So in short, in both cases you code your transformations. With a normal data frame these transformations are executed one by one; in the lazy case a "query optimizer" looks for shortcuts in the algorithm that reach the same result. Notice that such shortcuts are not guaranteed and not always present, so in the worst case the lazy operation will just perform like the traditional one. Example As an example, imagine that you need to read a CSV, transform it and filter it. In pandas we would: # Read everything even if we don't need everything df = pd.read_csv("example.csv") # Potentially relocate memory or duplicate memory usage df = df[['name','col2']] # We are transforming everything even if we will filter later df['name'] = df['name'].str.upper() # Same as before, and could have been avoided by not reading in the first place, or putting the transformation later df = df[df['col2']>0] In this process we may have allocated much more memory than required for the original DF, and in general we could have read the CSV file just once, looking only for 'col2'>0 and the columns 'name' and 'col2'. There are optimizations we can do in the code, such as filtering first and transforming later, but let's check what a lazy operation can do. Again from the Polars docs, we can make the "query" for the whole process: q1 = ( pl.scan_csv(f"docs/src/data/reddit.csv") .with_columns(pl.col("name").str.to_uppercase()) .filter(pl.col("comment_karma") > 0) ) # that reads in polars as : FILTER [(col("comment_karma")) > (0)] FROM WITH_COLUMNS: [col("name").str.uppercase()] CSV SCAN data/reddit.csv PROJECT */6 COLUMNS Then the Polars optimizer will transform it (unless you tell it not to), so the actual operation looks like: WITH_COLUMNS: [col("name").str.uppercase()] CSV SCAN data/reddit.csv PROJECT */6 COLUMNS SELECTION: [(col("comment_karma")) > (0)] This is what Polars executes: basically it filters and transforms the df at read time, all at once, saving computation and time. tldr: The LazyFrame is meant to let you use the lazy API, based on query optimization that cuts down the amount of computation needed by intelligently reordering operations instead of blindly executing them in order, as is usual in pandas. Of course, to use the lazy API we need to defer the actual execution of the code until it is needed, so the optimizer can rearrange all operations.
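As a small runnable sketch of the lazy workflow described above (the file and column names follow the pandas example and are illustrative; it assumes a polars version where LazyFrame.explain() is available):

```python
import polars as pl

lf = (
    pl.scan_csv("example.csv")                        # nothing is read yet
    .select(["name", "col2"])
    .with_columns(pl.col("name").str.to_uppercase())
    .filter(pl.col("col2") > 0)
)

print(lf.explain())  # shows the optimized plan (projection/predicate pushdown)
df = lf.collect()    # only now is the query actually executed
```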
4
9
76,613,427
2023-7-4
https://stackoverflow.com/questions/76613427/fastapi-sqlalchemy-return-data-like-dictionary
Learning FastAPI and SQLAlchemy with a video tutorial. In the video the code works correctly. Code from the tutorial: @router.get("/") async def get_specific_operations(operation_type: str, session: AsyncSession = Depends(get_async_session)): query = select(operation).where(operation.c.type == operation_type) result = await session.execute(query) return result.all() When I use this code, I get the error "Cannot convert dictionary update sequence element #0 to a sequence" I tried: return result.scalars().all() But I get only id numbers, without the rest of the data from the rows. With chain: @router.get("/") async def get_specific_operations(operation_type: str, session: AsyncSession = Depends(get_async_session)): query = select(operation).where(operation.c.type == operation_type) result = await session.execute(query) result = list(chain(*result)) return result I get the correct values, but without the column names, which is hard for the user to understand. I get: [4, "name", 25, 7, "name2", 31] but I need (like in the video): [{"id": 4, "username": "name", "age": 25}, {"id": 7, "username": "name2", "age": 31}] How can I get the full information from the DB rows, including the column names? I need a list of dictionaries, I think.
I found a solution that helped me. Just changed "return": @router.get("/") async def get_specific_operations(operation_type: str, session: AsyncSession = Depends(get_async_session)): query = select(operation).where(operation.c.type == operation_type) result = await session.execute(query) return [dict(r._mapping) for r in result]
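An equivalent sketch, assuming SQLAlchemy 1.4+, is to go through Result.mappings(), which yields dict-like rows keyed by column name (this reuses the route setup from the question):

```python
@router.get("/")
async def get_specific_operations(operation_type: str,
                                  session: AsyncSession = Depends(get_async_session)):
    query = select(operation).where(operation.c.type == operation_type)
    result = await session.execute(query)
    # RowMapping objects are dict-like; wrap in dict() for plain dicts.
    return [dict(row) for row in result.mappings().all()]
```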
2
2
76,594,818
2023-7-1
https://stackoverflow.com/questions/76594818/indexing-a-searchvector-vs-having-a-searchvectorfield-in-django-when-should-i-u
Clearly I have some misunderstandings about the topic. I would appreciate it if you corrected my mistakes. As explained in the PostgreSQL documentation, we need to do full-text searching instead of using simple textual search operators. Suppose I have a blog application in Django. Entry.objects.filter(body_text__search="Cheese") The bottom line is we have "document"s, which are our individual records in the body_text field, and a term "Cheese". Individual documents are going to be translated to something called a "tsvector" (a vector of simplified words), and a "tsquery" is created out of our term. If I have no SearchVectorField field and no SearchVector index: for every single record in the body_text field, a tsvector is created and checked against our tsquery; on failure, we continue to the next record. If I have a SearchVectorField field but not a SearchVector index: that tsvector is stored in the SearchVectorField field. So the searching process is faster because we only check for a match, not creating the tsvector anymore, but we are still checking every single record one by one. If I have both a SearchVectorField field and a SearchVector index: a GIN index is created in the database; it's somewhat like a dictionary: "cat": [3, 7, 18], .... It stores the occurrences of the "lexemes" (words) so that we don't have to iterate through all the records in the database. I think this is the fastest option. Now if I have only the SearchVector index: we have all the benefits of number 3. Then why should I have the SearchVectorField field in my table? IOW, why do I need to store the tsvector if I already have it indexed? Django documentation says: If this approach becomes too slow, you can add a SearchVectorField to your model. Thanks in advance.
I confirm that your statements are correct. Others have already brought performance-related reasons for choosing a scenario, but there are even more practical reasons. The fourth scenario, where you just have a functional index calculating the SearchVector, can almost always be used but has some limitations: the functional index can only refer to the fields of the model you are searching on. ("An index column need not be just a column of the underlying table ...") https://www.postgresql.org/docs/current/indexes-expressional.html "PostgreSQL requires functions and operators referenced in an index to be marked as IMMUTABLE. Django doesn't validate this but PostgreSQL will error. This means that functions such as Concat() aren't accepted." https://docs.djangoproject.com/en/4.2/ref/models/indexes/#s-expressions So to answer your question, you are forced to use a SearchVectorField, instead of a simple functional index, when in your full-text search on a model you also want to include fields from other models (eg: name of the author of an article) and also when the functions or operators you want to use in your functional index are not immutable (ex: Date functions) You can see an example in these two slides from my talk "A Pythonic full-text search" I presented at PyCon US 2023 (https://www.paulox.net/2023/04/23/pycon-us-2023/): "SearchVector Field" https://speakerdeck.com/pauloxnet/a-pythonic-full-text-search-pycon-us-2022?slide=31 "SearchVector field update" https://speakerdeck.com/pauloxnet/a-pythonic-full-text-search-pycon-us-2022?slide=32
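To make the trade-off concrete, here is a minimal, hypothetical model sketch using a stored SearchVectorField with a GIN index, updated explicitly (the field and model names are assumptions for illustration):

```python
from django.contrib.postgres.indexes import GinIndex
from django.contrib.postgres.search import SearchVector, SearchVectorField
from django.db import models


class Entry(models.Model):
    body_text = models.TextField()
    search_vector = SearchVectorField(null=True)

    class Meta:
        indexes = [GinIndex(fields=["search_vector"])]


# Recompute the stored vector after writes, e.g. from a signal,
# a background task, or a database trigger:
Entry.objects.update(search_vector=SearchVector("body_text"))
```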
3
4
76,605,223
2023-7-3
https://stackoverflow.com/questions/76605223/fill-gaps-in-time-intervals-with-other-time-intervals
We have two tables with time intervals. I want to fill gaps in df1 with df2 as in the graph to get df3. df1 is moved to df3 as it is, and only the parts of df2 that lie in the gaps of df1 (difference) are moved to df3. df1 = pd.DataFrame({'Start': ['2023-01-01', '2023-02-01', '2023-03-15', '2023-04-18', '2023-05-15', '2023-05-25'], 'End': ['2023-01-15', '2023-02-20', '2023-04-01', '2023-05-03', '2023-05-20', '2023-05-30']}) df2 = pd.DataFrame({'Start': ['2023-01-02', '2023-01-05', '2023-01-20', '2023-02-25', '2023-03-05', '2023-04-18', '2023-05-12'], 'End': ['2023-01-03', '2023-01-10', '2023-02-10', '2023-03-01', '2023-04-15', '2023-05-10', '2023-06-05']}) df3 = pd.DataFrame({'Start': ['2023-01-01', '2023-01-20', '2023-02-01', '2023-02-25', '2023-03-05', '2023-03-15', '2023-04-02', '2023-04-18', '2023-05-04', '2023-05-12', '2023-05-15', '2023-05-21', '2023-05-25', '2023-05-31'], 'End': ['2023-01-15', '2023-01-31', '2023-02-20', '2023-03-01', '2023-03-14', '2023-04-01', '2023-04-15', '2023-05-03', '2023-05-10', '2023-05-14', '2023-05-20', '2023-05-24', '2023-05-30', '2023-06-05']}) # df1 Start End 0 2023-01-01 2023-01-15 1 2023-02-01 2023-02-20 2 2023-03-15 2023-04-01 3 2023-04-18 2023-05-03 4 2023-05-15 2023-05-20 5 2023-05-25 2023-05-30 # df2 Start End 0 2023-01-02 2023-01-03 1 2023-01-05 2023-01-10 2 2023-01-20 2023-02-10 3 2023-02-25 2023-03-01 4 2023-03-05 2023-04-15 5 2023-04-18 2023-05-10 6 2023-05-12 2023-06-05 # df3 (desired result) Start End 0 2023-01-01 2023-01-15 1 2023-01-20 2023-01-31 2 2023-02-01 2023-02-20 3 2023-02-25 2023-03-01 4 2023-03-05 2023-03-14 5 2023-03-15 2023-04-01 6 2023-04-02 2023-04-15 7 2023-04-18 2023-05-03 8 2023-05-04 2023-05-10 9 2023-05-12 2023-05-14 10 2023-05-15 2023-05-20 11 2023-05-21 2023-05-24 12 2023-05-25 2023-05-30 13 2023-05-31 2023-06-05 Code to generate plot: import plotly.express as px df_plot = pd.concat( [ df1.assign(color='df1', df='df1'), df2.assign(color='df2', df='df2'), df3.assign(color=['df1', 'df2', 'df1', 'df2', 'df2', 'df1', 'df2', 'df1', 'df2', 'df2', 'df1', 'df2', 'df1', 'df2'], df='df3') ], ) fig = px.timeline(df_plot, x_start="Start", x_end="End", y="df", color="color") fig.update_yaxes(categoryorder='category descending') fig.show()
I think I can get you close: df1 = pd.DataFrame({'Start': ['2023-01-01', '2023-02-01', '2023-03-15'], 'End': ['2023-01-15', '2023-02-20', '2023-04-01']}) df2 = pd.DataFrame({'Start': ['2023-01-02', '2023-01-05', '2023-01-20', '2023-02-25', '2023-03-05'], 'End': ['2023-01-03', '2023-01-10', '2023-02-10', '2023-03-01', '2023-04-15']}) df3 = pd.DataFrame({'Start': ['2023-01-01', '2023-01-20', '2023-02-01', '2023-02-25', '2023-03-05', '2023-03-15', '2023-04-02'], 'End': ['2023-01-15', '2023-01-31', '2023-02-20', '2023-03-01', '2023-03-14', '2023-04-01', '2023-04-15']}) df1['dates'] = [pd.date_range(s,e) for s, e in zip(df1['Start'], df1['End'])] df2['dates'] = [pd.date_range(s,e) for s, e in zip(df2['Start'], df2['End'])] df1e = df1.explode('dates').assign(source='df1') df2e = df2.explode('dates').assign(source='df2') df3e = df1e.set_index(df1e['dates']).combine_first(df2e.set_index(df2e['dates'])) df3e['dates'] = pd.to_datetime(df3e['dates']) df3e['group'] = ((df3e['source'] != df3e['source'].shift()) | (df3e['dates'] - df3e['dates'].shift() > pd.Timedelta(days=1))).cumsum() df_out = df3e.groupby(['group', 'source'])['dates'].agg([min, max]) Output: min max group source 1 df1 2023-01-01 2023-01-15 2 df2 2023-01-20 2023-01-31 3 df1 2023-02-01 2023-02-20 4 df2 2023-02-25 2023-03-01 5 df2 2023-03-05 2023-03-14 6 df1 2023-03-15 2023-04-01 7 df2 2023-04-02 2023-04-15 Graphical Output: import plotly.express as px df_out = df_out.reset_index().rename({'source':'color', 'min':'Start', 'max':'End'}, axis=1) df_plot = pd.concat( [ df1.assign(color='df1'), df2.assign(color='df2'), df_out ], keys=['df1' , 'df2', 'df3'] ).reset_index(level=0, names='df') fig = px.timeline(df_plot, x_start="Start", x_end="End", y="df", color="color") fig.update_yaxes(categoryorder='category descending') fig.show() Graph: with updated dataset: min max group source 1 df1 2023-01-01 2023-01-15 2 df2 2023-01-20 2023-01-31 3 df1 2023-02-01 2023-02-20 4 df2 2023-02-25 2023-03-01 5 df2 2023-03-05 2023-03-14 6 df1 2023-03-15 2023-04-01 7 df2 2023-04-02 2023-04-15 8 df1 2023-04-18 2023-05-03 9 df2 2023-05-04 2023-05-10 10 df2 2023-05-12 2023-05-14 11 df1 2023-05-15 2023-05-20 12 df2 2023-05-21 2023-05-24 13 df1 2023-05-25 2023-05-30 14 df2 2023-05-31 2023-06-05 Graph output:
2
3
76,613,672
2023-7-4
https://stackoverflow.com/questions/76613672/python-fastest-way-of-checking-if-there-are-more-than-x-files-in-a-folder
I am looking for a very rapid way to check whether a folder contains more than 2 files. I worry that len(os.listdir('/path/')) > 2 may become very slow if there are a lot of files in /path/, especially since this function will be called frequently by multiple processes at a time.
To get the fastest it's probably something hacky. My guess was: def iterdir_approach(path): iter_of_files = (x for x in Path(path).iterdir() if x.is_file()) try: next(iter_of_files) next(iter_of_files) next(iter_of_files) return True except: return False We create a generator and try to exhaust it, catching the thrown exception if necessary. To profile the approaches we create a bunch of directories with a bunch of files in them: import itertools import os import random import shutil import tempfile import timeit import matplotlib.pyplot as plt from pathlib import Path def create_temp_directory(num_directories): temp_dir = tempfile.mkdtemp() for i in range(num_directories): dir_path = os.path.join(temp_dir, f"subdir_{i}") os.makedirs(dir_path) for j in range(random.randint(0,i)): file_path = os.path.join(dir_path, f"file_{j}.txt") with open(file_path, 'w') as file: file.write("Sample content") return temp_dir We define the various approaches (the others are copied from the answers to the question): def iterdir_approach(path): #@swozny iter_of_files = (x for x in Path(path).iterdir() if x.is_file()) try: next(iter_of_files) next(iter_of_files) next(iter_of_files) return True except: return False def len_os_dir_approach(path): #@bluppfisk return len(os.listdir(path)) > 2 def check_files_os_scandir_approach(path): #@PoneyUHC MINIMUM_SIZE = 3 file_count = 0 for entry in os.scandir(path): if entry.is_file(): file_count += 1 if file_count == MINIMUM_SIZE: return True return False def path_resolve_approach(path): #@matleg directory_path = Path(path).resolve() nb_files = 0 enough_files = False for file_path in directory_path.glob("*"): if file_path.is_file(): nb_files += 1 if nb_files > 2: return True return False def dilettant_approach(path): #@dilettant gen = os.scandir(path) # OP states only files in folder /path/ enough = 3 # More than 2 files means at least 3 has_enough = len(list(itertools.islice(gen, enough))) >= enough return has_enough def adrian_ang_approach(path): #@adrian_ang count = 0 with os.scandir(path) as entries: for entry in entries: if entry.is_file(): count += 1 if count > 2: return True return False Then we profile the code using timeit.timeit and plot the execution times for various amounts of directories: num_directories_list = [10, 50, 100, 200, 500, 1000] approach1_times = [] approach2_times = [] approach3_times = [] approach4_times = [] approach5_times = [] approach6_times = [] for num_directories in num_directories_list: temp_dir = create_temp_directory(num_directories) subdir_paths = [str(p) for p in Path(temp_dir).iterdir()] approach1_time = timeit.timeit(lambda: [iterdir_approach(path) for path in subdir_paths], number=5) approach2_time = timeit.timeit(lambda: [check_files_os_scandir_approach(path) for path in subdir_paths], number=5) approach3_time = timeit.timeit(lambda: [path_resolve_approach(path) for path in subdir_paths], number=5) approach4_time = timeit.timeit(lambda: [len_os_dir_approach(path) for path in subdir_paths], number=5) approach5_time = timeit.timeit(lambda: [dilettant_approach(path) for path in subdir_paths], number=5) approach6_time = timeit.timeit(lambda: [adrian_ang_approach(path) for path in subdir_paths], number=5) approach1_times.append(approach1_time) approach2_times.append(approach2_time) approach3_times.append(approach3_time) approach4_times.append(approach4_time) approach5_times.append(approach5_time) approach6_times.append(approach6_time) shutil.rmtree(temp_dir) Visualization of the results plt.plot(num_directories_list, approach1_times, label='iterdir_approach')
plt.plot(num_directories_list, approach2_times, label='check_files_os_scandir_approach') plt.plot(num_directories_list, approach3_times, label='path_resolve_approach') plt.plot(num_directories_list, approach4_times, label='os_dir_approach') plt.plot(num_directories_list, approach5_times, label='dilettant_approach') plt.plot(num_directories_list, approach6_times, label='adrian_ang_approach') plt.xlabel('Number of Directories') plt.ylabel('Execution Time (seconds)') plt.title('Performance Comparison') plt.legend() plt.show() Closeup of best 3 solutions:
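For completeness, the same early-exit idea can also be written without relying on a caught exception, e.g. with itertools.islice over os.scandir. A minimal sketch (it counts all directory entries; add an is_file() check if subdirectories should be excluded):

```python
import os
from itertools import islice

def has_more_than(path, n=2):
    # Stop scanning the directory as soon as n + 1 entries have been seen.
    with os.scandir(path) as entries:
        return sum(1 for _ in islice(entries, n + 1)) > n

print(has_more_than("/tmp", 2))
```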
9
6
76,608,166
2023-7-3
https://stackoverflow.com/questions/76608166/how-do-i-parse-a-list-of-datetimes-with-a-s-resolution-in-pandas-2
I have a csv with a datetime column. However, dates go all the way from 1 AD to 3000 AD. They are out of range for a normal datetime64[ns] dtype. In pandas < 2, my workaround was to have this column with a "period[H]" dtype. I had a custom function doing this, which I passed as a date_parser in pd.read_csv. In pandas >= 2, an option to have timestamps with other resolutions has been introduced. The date_parser argument has been deprecated, so the previous solution is also not doable. I thought I could simply cast the whole column of strings with pd.to_datetime. But that doesn't work. Full example: example.csv: index,date A,"0004-04-04T12:30" B,"2004-04-04T12:30" C,"3004-04-04T12:30" 1. Clueless optimism - Fails Dates are ISO8601, easiest of the formats. df = pd.read_csv('example.csv', parse_dates=['date']) print(df.date.dtype, '|', type(df.date.iloc[0])) Got a warning: <ipython-input-103-059359fbaeab>:1: UserWarning: Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format And output: object | <class 'str'> If example.csv only contains datetime64[ns]-valid dates, this works perfectly. 2. As documented - Fails Doc says: "read in as object and then apply to_datetime() as-needed." df = pd.read_csv('example.csv') pd.to_datetime(df.date) Fails with OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 0004-04-04T12:30, at position 0 pd.to_datetime(df.date, unit='s') gives: ValueError: non convertible value 0004-04-04T12:30 with the unit 's', at position 0 so I guess the unit argument here doesn't have the meaning I expected. 3. Per-element conversion - Partially works def to_timestamp(val): if pd.isna(val): return pd.Timestamp('NaT', unit='s') return pd.Timestamp(val, unit='s') df = pd.read_csv('example.csv') df['date'] = df.date.apply(to_timestamp) print(df.date.dtype, '|', type(df.date.iloc[0])) Prints: object | <class 'pandas._libs.tslibs.timestamps.Timestamp'> So indeed, the individual elements have been converted to timestamps, yay! But the column itself still has an "object" dtype, not a "datetime" one. df.date.astype('datetime64[s]') fails with OutOfBoundsDatetime: Cannot cast 0004-04-04 12:30:00 to unit='ns' without overflow., at position 0 even though I specified an "s" resolution? Same with pd.to_datetime(df.date). 4. Step outside pandas - Works import numpy as np def to_d64(val): if pd.isna(val): return np.datetime64('') return np.datetime64(val) df = pd.read_csv('example.csv') df['date'] = np.array(df.date.apply(to_d64), dtype='M8[s]') print(df.date.dtype, '|', type(df.date.iloc[0])) Success! It prints: datetime64[s] | <class 'pandas._libs.tslibs.timestamps.Timestamp'> But at what price? In the real code, my csv is thousands of lines long. The pandas < 2 code made sure to avoid converting element-per-element, as this had performance issues. I was hoping that the support for "s" resolution would be accessible directly from the string-parsing step. I also note that my final example works because my strings are in an ISO8601 format, which is supported by numpy. That is not a general solution at all. Did I miss something? Does anybody have a cleaner solution? EDIT: I am using pandas 2.0.3 and numpy 1.24.4, Python 3.11.4 on Linux (Fedora 37).
I think unit in to_datetime only takes effect if your input is integers - but I think there is work to make it take effect even if the input is strings. Non-nanosecond support is quite new, so not yet too mature. Anyway, here's a solution which you can use until to_datetime can parse non-nanosecond range strings, which involves going via numpy datetime64[s]: In [22]: data = """\ ...: index,date ...: A,"0004-04-04T12:30" ...: B,"2004-04-04T12:30" ...: C,"3004-04-04T12:30" ...: """ In [23]: df = pd.read_csv(io.StringIO(data)) In [24]: df['date'] = df['date'].to_numpy().astype('datetime64[s]') In [25]: df Out[25]: index date 0 A 4-04-04 12:30:00 1 B 2004-04-04 12:30:00 2 C 3004-04-04 12:30:00
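Wrapped up as a small usage example of the suggested workaround (reading the example file from the question and converting the column in one step):

```python
import pandas as pd

df = pd.read_csv("example.csv")
# Go through numpy to parse the ISO 8601 strings at second resolution,
# then hand the non-nanosecond array back to pandas.
df["date"] = df["date"].to_numpy().astype("datetime64[s]")
print(df["date"].dtype)  # datetime64[s]
```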
2
3
76,602,474
2023-7-3
https://stackoverflow.com/questions/76602474/efficient-random-subsample-of-a-pandas-dataframe-based-on-multiple-columns-targe
I am currently working on a project where I have a large DataFrame (500'000 rows) containing polygons as rows, with each polygon representing a geographical area. The columns of the DataFrame represent different landcover classes (34 classes), and the values in the cells represent the area covered by each landcover class in square kilometers. My objective is to subsample this DataFrame based on target requirements for landcover classes. Specifically, I want to select a subset of polygons that collectively meet certain target coverage requirements for each landcover class. The target requirements are specified as the desired area coverage for each landcover class. A colleague hinted that this could be interpreted as an optimisation problem with an objective function. However, I have not found a solution to it yet and tried a different, slow, iterative approach (see below). To give you a better understanding, here is a minimum reproducible example of my DataFrame structure with only 4 polygons and 3 classes: import pandas as pd # Create a sample DataFrame data = { 'Polygon': ['Polygon A', 'Polygon B', 'Polygon C', 'Polygon D'], 'Landcover 1': [10, 5, 7, 3], 'Landcover 2': [15, 8, 4, 6], 'Landcover 3': [20, 12, 9, 14] } df = pd.DataFrame(data) For instance, let's say I have the following target requirements for landcover classes: target_requirements = { 'Landcover 1': 15, 'Landcover 2': 20, 'Landcover 3': 25 } Based on these target requirements, I would like to subsample the DataFrame by selecting a subset of polygons that collectively meet or closely approximate the target area coverage for each landcover class. In this example, the polygons A and C are good subsamples as their landcover coverages summed together come close to the requirements I set. My [extended] code so far Here is what I coded so far. You will see some extra steps which are implemented here: Weights: to guide the selection of polygons using deficits and surplus Random sampling of top 0.5%: based on weights, I select the top 0.5% polygons and randomly pick 1 from this selection. Tolerance: I set a tolerance for discrepancies between cumulated areas found with the current subsample and the requirements needed. Progress bar: aesthetic.
import numpy as np import pandas as pd from tqdm import tqdm def select_polygons(row, cumulative_coverages, landcover_columns, target_coverages): selected_polygon = row[landcover_columns] # Add the selected polygon to the subsample subsample = selected_polygon.to_frame().T cumulative_coverages += selected_polygon.values return cumulative_coverages, subsample df_data = # Your DataFrame with polygons and landcover classes landcover_columns = # List of landcover columns in the DataFrame target_coverages = # Dictionary of target coverages for each landcover class total_coverages = df_data[landcover_columns].sum() target_coverages = pd.Series(target_coverages, landcover_columns) df_data = df_data.sample(frac=1).dropna().reset_index(drop=True) # Set parameters for convergence max_iterations = 30000 convergence_threshold = 0.1 top_percentage = 0.005 # Initialize variables subsample = pd.DataFrame(columns=landcover_columns) cumulative_coverages = pd.Series(0, index=landcover_columns) # Initialize tqdm progress bar progress_bar = tqdm(total=max_iterations) # Iterate until the cumulative coverage matches or is close to the target coverage for iteration in range(max_iterations): remaining_diff = target_coverages - cumulative_coverages deficit = remaining_diff.clip(lower=0) surplus = remaining_diff.clip(upper=0) * 0.1 deficit_sum = deficit.sum() normalized_weights = deficit / deficit_sum # Calculate the combined weights for deficit and surplus for the entire dataset weights = df_data[landcover_columns].mul(normalized_weights) + surplus # Calculate the weight sum for each polygon weight_sum = weights.sum(axis=1) # Select the top 1% polygons based on weight sum top_percentile = int(len(df_data) * top_percentage) top_indices = weight_sum.nlargest(top_percentile).index selected_polygon_index = np.random.choice(top_indices) selected_polygon = df_data.loc[selected_polygon_index] cumulative_coverages, subsample_iteration = select_polygons( selected_polygon, cumulative_coverages, landcover_columns, target_coverages ) # Add the selected polygon to the subsample subsample = subsample.append(subsample_iteration) df_data = df_data.drop(selected_polygon_index) # Check if all polygons have been selected or the cumulative coverage matches or is close to the target coverage if df_data.empty or np.allclose(cumulative_coverages, target_coverages, rtol=convergence_threshold): break # Calculate the percentage of coverage achieved coverage_percentage = (cumulative_coverages.sum() / target_coverages.sum()) * 100 # Update tqdm progress bar progress_bar.set_description(f"Iteration {iteration+1}: Coverage Percentage: {coverage_percentage:.2f}%") progress_bar.update(1) progress_bar.close() subsample.reset_index(drop=True, inplace=True) The problem Code is slow (10 iterations/s) and doesn't manage well tolerance, i.e I can get cumulative_coverages way above 100% while tolerance is not met yet ( my "guidance for selection" is not good enough). Plus, there must be a much better OPTIMISATION to get what I want. Any help/idea would be appreciated.
The problem description lends itself to solving with a Mixed-Integer-Linear Program (MIP). A convenient library for solving mixed integer problems is PuLP that ships with the built-in Coin-OR suite and in particular the integer solver CBC. Note that I changed the data structure from the DataFrame in the example because the code gets really messy with your original DataFrame as lookups without index use a lot of .loc which I personally dislike :D data = { 'Polygon A': { 'Landcover 1': 10, 'Landcover 2': 15, 'Landcover 3': 20 }, 'Polygon B': { 'Landcover 1': 5, 'Landcover 2': 8, 'Landcover 3': 12 }, 'Polygon C': { 'Landcover 1': 7, 'Landcover 2': 4, 'Landcover 3': 9 }, 'Polygon D': { 'Landcover 1': 3, 'Landcover 2': 6, 'Landcover 3': 14 } } target_requirements = { 'Landcover 1': 15, 'Landcover 2': 20, 'Landcover 3': 25 } Then we formulate the linear problem: from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value, PULP_CBC_CMD model = LpProblem("LandcoverOptimization", LpMinimize) # Create a binary variable for each polygon to indicate whether it was selected in the solution polygons = list(data.keys()) selected = LpVariable.dicts("Selected", polygons, cat='Binary') # Minimize the amount of selected polygons model += lpSum(selected[polygon] for polygon in polygons) # We need to approximately cover the target required landcover +- tolerance # Also: We have to select a polygon to add it to the solution, this will be minimized by the objective tolerance = 0.1 # We create the sums only once for performance sums = {} for landcover in target_requirements: sums[landcover] = lpSum(data[polygon][landcover] * selected[polygon] for polygon in polygons) for landcover in target_requirements: model += sums[landcover] >= target_requirements[landcover] * (1 - tolerance) model += sums[landcover] <= target_requirements[landcover] * (1 + tolerance) model.solve(PULP_CBC_CMD(fracGap=0.9))  # Set mip gap very permissively for performance To visualize the output we extract the solution: selected_polygons = [polygon for polygon in polygons if value(selected[polygon]) == 1.0] print("Selected Polygons:") for polygon in selected_polygons: print(polygon) achieved_landcovers = {} for landcover in target_requirements: total_landcover = sum(data[polygon][landcover] for polygon in selected_polygons) achieved_landcovers[landcover] = total_landcover print("Required Landcovers:") for landcover, requirement in target_requirements.items(): print(f"{landcover}: {requirement}") print("Achieved Landcovers:") for landcover, coverage in achieved_landcovers.items(): print(f"{landcover}: {coverage}") Output: Selected Polygons: Polygon A Polygon C Required Landcovers: Landcover 1: 15 Landcover 2: 20 Landcover 3: 25 Achieved Landcovers: Landcover 1: 17 Landcover 2: 19 Landcover 3: 29 With regard to tractability: I've solved some MIP models with ~3 million binary variables. This question says 5 million variables won't solve for them. Playing with tolerances, as well as the optimality gap will enable you to dial into a useful and practical solution for your problem. The output will also contain logs of measures of optimality and/or prove infeasibility of the problem (e.g.
if you set your tolerances too tightly) : Result - Optimal solution found Objective value: 2.00000000 Enumerated nodes: 0 Total iterations: 1 Time (CPU seconds): 0.00 Time (Wallclock seconds): 0.00 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.00 (Wallclock seconds): 0.01 If the amount of data is truly huge and the tolerances need to be very tight, CBC might no longer be able to handle the problem. Commercially available solvers such as Gurobi will handle much larger problem instances, but aren't available for free. In my experience, if a solver can't find a satisfying solution, there is no way you can write some python/pandas code that will. To get the mindset of describing problems, rather than thinking of solutions I recommend Raymond Hettinger's excellent talk on PyCon 2019 - Modern solvers: Problems well-defined are problems solved to install PuLP pip install pulp EDIT: For OP's 88MB problem instance on my MacBook Pro: Result - Optimal solution found (within gap tolerance) Objective value: 14178.00000000 Lower bound: 14080.357 Gap: 0.01 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 218.06 Time (Wallclock seconds): 219.21 Option for printingOptions changed from normal to all Total time (CPU seconds): 293.59 (Wallclock seconds): 296.26 Tweaking the tolerance and MIP gap will improve runtime or solution quality. It can prove at least 14081 polygons need to be chosen and it found a solution that chooses 14178 polygons(1% optimality gap) with a coverage tolerance of +- 3%: Landcover Comparison: n20: Required=3303.7, Achieved=3204.7, Difference=99.0 n25: Required=20000.0, Achieved=19401.1, Difference=598.9 n5: Required=20000.0, Achieved=19400.0, Difference=600.0 n16: Required=8021.1, Achieved=7781.3, Difference=239.8 ....
3
3
76,603,915
2023-7-3
https://stackoverflow.com/questions/76603915/get-polynomial-x-at-y-python-3-10-numpy
I'm attempting to calculate all possible real X-values at a certain Y-value from a polynomial given in descending coefficient order, in Python 3.10. I want the resulting X-values to be provided to me in a list. I've tried using the roots() function of the numpy library, as shown in one of the answers to this post, however it does not appear to work: import numpy as np import matplotlib.pyplot as plt def main(): coeffs = np.array([1, 2, 2]) y = 1.5 polyDataX = np.linspace(-2, 0) polyDataY = np.empty(shape = len(polyDataX), dtype = float) for i in range(len(polyDataX)): polyDataY[i] = coeffs[0] * pow(polyDataX[i], 2) + coeffs[1] * polyDataX[i] + coeffs[2] coeffs[-1] -= y x = np.roots(coeffs).tolist() plt.axhline(y, color = "orange") plt.plot(polyDataX, polyDataY, color = "blue") plt.title("X = " + str(x)) plt.show() plt.close() plt.clf() if (__name__ == "__main__"): main() In my example above, I have the coefficients of my polynomial stored in the local variable coeffs, in descending order. I then attempt to gather all the X-values at the Y-value of 1.5, stored within the x and y local variables respectively. I then display the gathered X-values as the title of the shown plot. The script above results in the following plot: With the X-values being shown as [-2.0, 0.0], instead of the correct: What is the proper way to get all real X-values of a polynomial at a certain Y-value in Python? Thanks for reading my post, any guidance is appreciated.
You should be making use of the numpy.polynomial.Polynomial class that was added in numpy v1.4 (more information here). With that class, you can create a polynomial object. To find your solution, you can subtract y from the Polynomial object and then call the roots method. Another nice feature is that you can directly call the object, which you can use to compute polyDataY. Just note that the Polynomial class expects the coefficients to be given backward from np.roots, i.e. a quadratic of x^2 + 2x + 2 should have the coefficients (2, 2, 1). To keep things consistent with what you gave, I just pass the reversed coeffs. import numpy as np from numpy.polynomial import Polynomial import matplotlib.pyplot as plt plt.close("all") coeffs = np.array([1, 2, 2]) poly = Polynomial(coeffs[::-1]) polyDataX = np.linspace(-2, 0) polyDataY = poly(polyDataX) y = 1.5 x = (poly - y).roots() plt.axhline(y, color = "orange") plt.plot(polyDataX, polyDataY, color = "blue") plt.title("X = " + str(x)) plt.show()
3
3
76,607,902
2023-7-3
https://stackoverflow.com/questions/76607902/how-to-replicate-rows-of-a-dataframe-a-fixed-number-of-times
I want to replicate rows of a dataframe as to prepare for the adding of a column. The dataframe contains years column and I want to add a fixed column of months. The idea is to replicate each same year rows exactly 12 times then add a fixed value column (1-12). my code is the following: all_years = dataframe["Year"].unique().tolist() new_dataset = pd.DataFrame() for idx, year in enumerate(all_years): rows_dataframe = pd.concat( [dataframe.where(dataframe["Year"] == year).dropna()] * 12, ignore_index=True) new_dataset = pd.concat([rows_dataframe, new_dataset], ignore_index=True) The results are correct, but can I avoid the for loop here, and implement this in a more "pandas-ic" way? EDIT: expected results for one value of years (here 2012) is: (to note that months column is not added through my code, but added it to show the final output) +-------+--------+---------+ | Years | Months | SomeCol | +-------+--------+---------+ | 2011 | 12 | val1 | +-------+--------+---------+ | 2012 | 1 | val1 | +-------+--------+---------+ | 2012 | 2 | val1 | +-------+--------+---------+ | 2012 | 3 | val1 | +-------+--------+---------+ | 2012 | 4 | val1 | +-------+--------+---------+ | 2012 | 5 | ... | +-------+--------+---------+ | 2012 | 6 | ... | +-------+--------+---------+ | 2012 | 7 | val1 | +-------+--------+---------+ | 2012 | 8 | val1 | +-------+--------+---------+ | 2012 | 9 | val1 | +-------+--------+---------+ | 2012 | 10 | | +-------+--------+---------+ | 2012 | 11 | | +-------+--------+---------+ | 2012 | 12 | | +-------+--------+---------+ | 2013 | 1 | ... | +-------+--------+---------+
How about using groupby. I am assuming each year occurs only once in your data frame: new_dataset = dataframe.groupby("Year").apply(lambda x: pd.concat([x] * 12)).reset_index(drop=True) Or just repeat and sort the values: new_dataset = pd.concat([dataframe] * 12).sort_values("Year").reset_index(drop=True)
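If the goal is also to add the Months column from the question's expected output, a hedged sketch (assuming a default unique index, each row repeated 12 consecutive times, and months simply cycling 1-12 inside each repeated block) could be:
import numpy as np
new_dataset = dataframe.loc[dataframe.index.repeat(12)].reset_index(drop=True)  # repeat each row 12 times, keeping the original order
new_dataset["Months"] = np.tile(np.arange(1, 13), len(dataframe))  # 1..12 for every original row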
3
0
76,593,563
2023-7-1
https://stackoverflow.com/questions/76593563/redis-queue-task-is-queued-but-never-executed
I am having issues getting RQ-python to run. Like in the example of the documentation (https://python-rq.org) I have a function in an external file def createAndSaveIndex(url_list, index_path): print("--------------------------------------started task------------------------------------") index = indexFromURLList(url_list=url_list) index.save_local(index_path) return "Im done!" which I import into my main file and use in the queue: from redis import Redis from rq import Queue conn = Redis() q = Queue(connection=conn) job = q.enqueue(f=createAndSaveIndex, args=(["amazon.com"], dirname+"/myIndex/")) # how long to hold onto the result print(job.id) The job is created and I get the job Id, however not even the print statement is executed and job.is_finished always returns false. I am on macOS and have redis installed through homebrew. I called redis-server and using it through the terminal works too. Does anyone have an idea what I could have done wrong? Tried the different examples I found online, none of which worked in my case. Tried using the function inside and outside the file. Edit I forgot to check and mention the worker, turns out it is actually throwing an error: File "/Users/-/micromamba/envs/flask/lib/python3.11/site-packages/rq/utils.py", line 107, in import_attribute return __builtins__[name] ~~~~~~~~~~~~^^^^^^ KeyError: 'backend_index.createAndSaveIndex' as well as: File "/Users/-/micromamba/envs/flask/lib/python3.11/site-packages/rq/utils.py", line 109, in import_attribute raise ValueError('Invalid attribute name: %s' % name) ValueError: Invalid attribute name: backend_index.createAndSaveIndex backend_index is the file where my function is located and createAndSaveIndex the name of the function. Edit I first had both of my python files in the root directory. I was told that the function must come from a module and not just a file, so now this is my project structure; however, nothing changed for me: project/ ├── main.py └── indexCreation/ ├── __init__.py └── createIndex.py (was named backend_index) This is the content of my __init__.py from .createIndex import createAndSaveIndex Solution After trying to recreate the error on another computer I finally got it to work: turns out the issue was the kind of terminal I used. When creating the redis-server and initializing the rq worker I used my macOS Terminal and to run the code I used the VS Code built-in terminal (which I thought was the same since I was using the same venv). Now if you do everything in the integrated VS Code terminal it works with no issue. Thank you very much @รยקคгรђשค for helping me find the issue. Actual solution When working with rq again I encountered the error once again, but for another reason. Also make sure when calling rq worker, that your working directory is also the one where your imported module is located.
Solution: After trying to recreate the error on another computer I finally got it to work: turns out the issue was the kind of terminal I used. When creating the redis-server and initializing the rq worker I used my macOS Terminal, and to run the code I used the VS Code built-in terminal (which I thought was the same since I was using the same venv). If you do everything in the integrated VS Code terminal it works with no issue. Thank you very much @รยקคгรђשค for helping me find the issue.
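A sketch of the working setup implied by this, with names taken from the question (all commands run from the same venv and terminal type, with the working directory set to the project root that contains the indexCreation package):
redis-server
rq worker        # second terminal tab, same venv, started from the project root
python main.py   # enqueues the job; the worker should now print "started task"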
3
0
76,604,223
2023-7-3
https://stackoverflow.com/questions/76604223/combination-optimization-in-python
I need to gather products from warehouse centers for product bundling like this {product:quantity} center_a_products = {1: 8, 2: 4, 3: 12} Every centers has different kinds and quantities of products like this {center:{product:quantity}} product_quantities = {1: {1: 10, 2: 3, 3: 15}, 2: {1: 5, 2: 8, 3: 10}, 3: {1: 12, 2: 6}} I can only visit one center at a time but I can collect multiple product from one center. If total quantity of some product is less than center_a_products, I must visit every center containing that product and collect total quantity of that product. There are many combination meets these conditions but I need to know most efficient combination that visits the smallest number of centers I tried this but it failed... from itertools import combinations # Define the centers centers = [1, 2, 3] # Define the products and quantities needed in Center A center_a_products = { 1: 8, 2: 4, 3: 12 } # Define the products and quantities for each center product_quantities = { 1: {1: 10, 2: 3, 3: 15}, 2: {1: 5, 2: 8, 3: 10}, 3: {1: 12, 2: 6} } # Function to check if a product combination meets the requirements def meets_requirements(combination): product_counts = {product: 0 for product in center_a_products.keys()} for center, products in combination: for product, quantity in products.items(): product_counts[product] += quantity for product, required_quantity in center_a_products.items(): if product_counts[product] < required_quantity: return False return True # Find all combinations of products combinations_to_move = [] for r in range(1, len(centers) + 1): for combination in combinations(zip(centers, [product_quantities[c] for c in centers]), r): if meets_requirements(combination): combinations_to_move.append(combination) # Print the combinations for idx, combination in enumerate(combinations_to_move, start=1): print(f"Combination {idx}:") for center, products in combination: for product, quantity in products.items(): print(f"Move {quantity} units of product {product} from Center {center}") I expected something like this Move 8 units of product 1 from Center 1 Move 3 units of product 2 from Center 1 Move 12 units of product 3 from Center 1 Move 1 units of product 3 from Center 2
The problem description lends itself to solving with a Mixed-Integer-Linear Program (MIP). A convenient library for solving mixed integer problems is PuLP that ships with the built-in Coin-OR suite and in particular the integer solver CBC. We formulate a model that describes your problem: import pulp center_a_products = { 1: 8, 2: 4, 3: 12 } product_quantities = { 1: {1: 10, 2: 5, 3: 15}, 2: {1: 5, 2: 8, 3: 10}, 3: {1: 12, 2: 6, 3: 20} } problem = pulp.LpProblem("CenterSatisfactionProblem", pulp.LpMinimize) # Center vars are binary variables that we minimize in the objective. They indicate whether a center was visited or not center_vars = {center: pulp.LpVariable(f"Center_{center}", cat="Binary") for center in product_quantities} product_vars = {(center, product): pulp.LpVariable(f"Center_{center}_Product_{product}", lowBound=0, cat="Integer") for center in product_quantities for product in product_quantities[center]} # Satisfy all demand for product in center_a_products: problem += pulp.lpSum(product_vars[center, product] for center in product_quantities) == center_a_products[product] # We can only take stuff from a center if we visit it # Also: can't take more stuff from a center than is in stock for center in product_quantities: for product in product_quantities[center]: problem += product_vars[center, product] <= center_vars[center] * product_quantities[center][product] # Minimize amount of centers visited problem += pulp.lpSum(center_vars.values()) problem.solve() Print solution: for center in product_quantities: if center_vars[center].varValue == 1: for product in product_quantities[center]: quantity = product_vars[center, product].varValue if quantity > 0: print(f"Move {quantity} units of product {product} from center {center}") Output: Move 8.0 units of product 1 from center 3 Move 4.0 units of product 2 from center 3 Move 12.0 units of product 3 from center 3 Currently the example you provided is pretty trivial. It's easily possible to add additional constraints, but you need to provide better description of the problem and the expected output. Translating a problem description into a linear program can take some practice. The trickiest line in this program is problem += product_vars[center, product] <= center_vars[center] * product_quantities[center][product] The objective minimizes center_vars, but the satisfaction constraint forces some of them to be 1. To get the mindset of describing problems, rather than thinking of solutions I recommend Raymond Hettinger's excellent talk on PyCon 2019 - Modern solvers: Problems well-defined are problems solved to install PuLP pip install pulp
2
4
76,603,818
2023-7-3
https://stackoverflow.com/questions/76603818/zipfile-badzipfile-even-when-im-not-reading-zip-file-using-pandas
In the current path there are multiple folders, each folder has multiple folders or xlsx files inside,I want to iterate through each folder and read the xlsx files until there are no more folders or until all xlsx files are read. There are 50+ folders and 2000+ excel files. Below is my code: import os import pandas as pd current_path=os.getcwd() dfs = [] def process_folder(path): for item in os.listdir(path): item_path=os.path.join(path, item) if os.path.isdir(item_path): process_folder(item_path) elif item.endswith('.xlsx'): df = pd.read_excel(item_path) dfs.append(df) process_folder(current_path) result_df = pd.concat(dfs, ignore_index=True) result_df.to_excel('result.xlsx') when I run the code, it shows error:"Excel file cannot be determined, you must specify an engine manually". So I modified read_excel: df = pd.read_excel(item_path, engine='openpyxl'). Then there is the error: "zipfile.BadZipFile: File is not a zip file." However, I didn't read any zipfile. Not sure why this error shows up.
You probably have files with the extension .xlsx but which are not real Excel files. To find them, you can use: import pathlib for filename in pathlib.Path.cwd().glob('*.xlsx'): with open(filename, 'rb') as xlsx: sig = xlsx.read(2) if sig != b'PK': print(f'"{filename}" does not appear to be a valid Excel file') Testing the signature of a zip file with .xlsx extension should be sufficient for the moment.
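Since the question walks sub-folders recursively, the same signature check can be applied recursively with rglob (an illustrative variant of the snippet above):
import pathlib
for filename in pathlib.Path.cwd().rglob('*.xlsx'):
    with open(filename, 'rb') as xlsx:
        # real .xlsx files are ZIP archives and start with the 'PK' signature
        if xlsx.read(2) != b'PK':
            print(f'"{filename}" does not appear to be a valid Excel file')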
2
2
76,599,671
2023-7-2
https://stackoverflow.com/questions/76599671/python-pandas-how-to-find-nearest-point-by-latitude-and-longitude
New at pandas, having an issue relating two DataFrames together based on the function of two columns in each DataFrame. I have two DataFrames - one representing road segments, and another with points somewhere near that road. Road Shape DataFrame shape_pt_lat shape_pt_lon shape_pt_sequence 2583910 53.402329 -6.150988 1 2583911 53.402334 -6.151043 2 2583912 53.402345 -6.151175 3 2583913 53.402359 -6.151328 4 2583914 53.402518 -6.152953 5 ... ... ... ... Points DataFrame latitude longitude timestamp 0 53.376873 -6.216212 1.686826e+09 1 53.370968 -6.223517 1.686827e+09 2 53.363358 -6.234719 1.686827e+09 3 53.360840 -6.238742 1.686827e+09 4 53.355160 -6.246171 1.686827e+09 .. ... ... ... I want to add a column to the Points DataFrame that contains the index of the Road DataFrame row containing the closest point. I can do this by iterating through the Points DataFrame, and for each row then iterating through the Road DataFrame, calculating each distance, and finding a minimum, but this seems very inefficient. Is there a better way to achieve this? EDIT I tried to adapt the approach suggested, it works but it ends up taking about 4x as long as simply doing all the iterations. Is there any way to make this faster? shape_tup = [tuple(r) for r in shape_df[['shape_pt_lat', 'shape_pt_lon']].to_numpy()] pos_points["shape_pt_ind"] = np.nan for index, pos in pos_points.iterrows(): min_pair = min(shape_tup, key=lambda t: (abs(t[0] - pos["latitude"]) + abs(t[1] - pos["longitude"]))) min_index = shape_df.index[(shape_df[['shape_pt_lat', 'shape_pt_lon']].values[:, None] == min_pair).all(2).any(1)] pos_points.loc[index, "shape_pt_ind"] = min_index[0]
The simplest approach is to use the cdist function from scipy. It calculates the distance between every point in points and every point in road. You have the choice of 20 or so distance functions. After that it's trivial to find the closest point: from scipy.spatial.distance import cdist distance = cdist( points[["latitude", "longitude"]], road[["shape_pt_lat", "shape_pt_lon"]], metric="euclidean" ) points["shape_pt_sequence"] = road["shape_pt_sequence"].to_numpy()[distance.argmin(axis=1)] If your coordinates are far apart, the geodesic distance is more appropriate: from scipy.spatial.distance import cdist from geopy.distance import geodesic distance = cdist( points[["latitude", "longitude"]], road[["shape_pt_lat", "shape_pt_lon"]], metric=lambda a, b: geodesic(a, b).kilometers, ) points["shape_pt_sequence"] = road["shape_pt_sequence"].to_numpy()[distance.argmin(axis=1)]
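Not part of the answer above, but if both frames are large, building the full cdist matrix can get expensive; a hedged sketch of a spatial-index alternative using scikit-learn's BallTree with the haversine metric (which expects latitude/longitude in radians):
import numpy as np
from sklearn.neighbors import BallTree
tree = BallTree(np.radians(road[["shape_pt_lat", "shape_pt_lon"]].to_numpy()), metric="haversine")
dist, idx = tree.query(np.radians(points[["latitude", "longitude"]].to_numpy()), k=1)  # nearest road point per GPS point
points["shape_pt_sequence"] = road["shape_pt_sequence"].to_numpy()[idx[:, 0]]
# dist is in radians; multiply by the Earth's radius (~6371 km) if the distance itself is needed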
2
3
76,590,705
2023-6-30
https://stackoverflow.com/questions/76590705/format-string-output-to-json
I'm playing around with FastAPI and Structlog and wanted to test and convert log format from plain text/string to JSON format for better readability and processing by the log aggregator platforms. Facing a case where certain log output are available in JSON but rest in plain string. Current Output INFO: 127.0.0.1:62154 - "GET /api/preface HTTP/1.1" 200 OK INFO: 127.0.0.1:62154 - "GET /loader.json HTTP/1.1" 200 OK INFO: 127.0.0.1:62155 - "GET /hello_world HTTP/1.1" 200 OK {"key":"test_key","message":"Push to NFS Success","event":"Testing Fast API..","logger":"test_my_api","filename":"main.py","func_name":"Hello_World","process":23760,"module":"docker","thread":23140,"pathname":"D:\\my_work\\fast_api\\main.py","process_name":"SpawnProcess-1","level":"info","time-iso":"2023-06-30T15:25:03.113400Z"} Expected Output: { "level": "INFO", "IP": "127.0 .0 .1: 62154", "method": "GET", "endpoint": "/loader.json", "protocol": "HTTP / 1.1", "status_code": 200, "status": "OK" } { "level": "INFO", "IP": "127.0 .0 .1: 62155", "method": "GET", "endpoint": "/api/preface", "protocol": "HTTP / 1.1", "status_code": 200, "status": "OK" } { "level": "INFO", "IP": "127.0 .0 .1: 62155", "method": "GET", "endpoint": "/hello_world", "protocol": "HTTP / 1.1", "status_code": 200, "status": "OK" } {"key":"test_key","message":"Push to NFS Success","event":"Testing Fast API..","logger":"test_my_api","filename":"main.py","func_name":"Hello_World","process":23760,"module":"docker","thread":23140,"pathname":"D:\\my_work\\fast_api\\main.py","process_name":"SpawnProcess-1","level":"info","time-iso":"2023-06-30T15:25:03.113400Z"} What am I missing here ? thanks ! struct.py import orjson import structlog import logging ## Added only the necessary context. class StructLogTest: def __init__(self, logging_level=logging.DEBUG, logger_name="test"): self.logging_level = logging_level self.logger_name = logger_name StructLogTest.logger_name_var = self.logger_name self.configure_structlog(self.logging_level, self.logger_name) def logger_name(_, __, event_dict): event_dict["test_log"] = StructLogTest.logger_name_var return event_dict @staticmethod def configure_structlog(logging_level, logger_name): structlog.configure( processors=[ StructLogTest.logger_name, structlog.threadlocal.merge_threadlocal, structlog.processors.CallsiteParameterAdder(), structlog.processors.add_log_level, structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.StackInfoRenderer(), structlog.processors.format_exc_info, structlog.processors.TimeStamper(fmt="iso", utc=True, key="time-iso"), structlog.processors.JSONRenderer(serializer=orjson.dumps), ], wrapper_class=structlog.make_filtering_bound_logger(logging_level), context_class=dict, logger_factory=structlog.BytesLoggerFactory(), ) return structlog def define_Logger(self, *args, **kwargs): return structlog.get_logger(*args, **kwargs) def info(self, message, *args, **kwargs): return structlog.get_logger().info(message, *args, **kwargs) and other methods so on.. main.py from struct import StructLogTest from fastapi import APIRouter import requests from requests.auth import HTTPBasicAuth from requests import Response log = StructLogTest(logger_name="test_my_api") log = log.get_Logger() @router.get("/hello_world") def Hello_World(): logg = log.bind(key=test_key) logg.info( "Testing Fast API..", message=some_other_meaningful_function.dump(), ) return {" Hello World !! "}
Structlog is an entirely separate logging framework from stdlib logging. It will not configure the stdlib logging framework for you. Uvicorn uses stdlib logging, and you can't change that short of forking and editing the source code to use some other logging framework. You do have some options here, though. The simplest is just to configure stdlib logging with a json formatter, for example create uvicorn-logconfig.ini: [loggers] keys=root, uvicorn, gunicorn [handlers] keys=access_handler [formatters] keys=json [logger_root] level=INFO handlers=access_handler propagate=1 [logger_gunicorn] level=INFO handlers=access_handler propagate=0 qualname=gunicorn [logger_uvicorn] level=INFO handlers=access_handler propagate=0 qualname=uvicorn [handler_access_handler] class=logging.StreamHandler formatter=json args=() [formatter_json] class=pythonjsonlogger.jsonlogger.JsonFormatter That uses a stdlib json formatter from python-json-logger. Now, run your app with something like: uvicorn --log-config=uvicorn-logconfig.ini main:router The logs from uvicorn and from your own application code will render as JSON, the former through stdlib formatters and the latter through structlog processors. Here's what I saw when starting up your app, making a request to http://127.0.0.1:8000/hello_world to get the "Testing Fast API..." event from structlog, and then shutting down uvicorn with Ctrl+C: $ uvicorn --log-config=uvicorn-logconfig.ini main:router {"message": "Started server process [54894]", "color_message": "Started server process [\u001b[36m%d\u001b[0m]"} {"message": "Waiting for application startup."} {"message": "Application startup complete."} {"message": "Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)", "color_message": "Uvicorn running on \u001b[1m%s://%s:%d\u001b[0m (Press CTRL+C to quit)"} {"test_key":"test_val","message":"some_message","event":"Testing Fast API..","pathname":"/private/tmp/s/main.py","lineno":42,"process_name":"MainProcess","func_name":"Hello_World","filename":"main.py","thread_name":"AnyIO worker thread","thread":12930912256,"module":"main","process":54894,"level":"info","time-iso":"2023-07-02T19:22:58.085382Z"} {"message": "127.0.0.1:58519 - \"GET /hello_world HTTP/1.1\" 200"} ^C{"message": "Shutting down"} {"message": "Waiting for application shutdown."} {"message": "Application shutdown complete."} {"message": "Finished server process [54894]", "color_message": "Finished server process [\u001b[36m%d\u001b[0m]"} This is recommended as the simplest approach in structlog's documentation (see the subsection "Don't Integrate" in the Standard Library section): The most straight-forward option is to configure standard library logging close enough to what structlog is logging and leaving it at that. Since these are usually log entries from third parties that don’t take advantage of structlog’s features, this is surprisingly often a perfectly adequate approach. For instance, if you log JSON in production, configure logging to use python-json-logger to make it print JSON too, and then tweak the configuration to match their outputs. Since stdlib logging is highly configurable, and structlog is also highly flexible, there are other approaches possible to integrate structlog and stdlib more tightly. See Rendering Using structlog-based Formatters Within logging if interested.
7
2
76,593,906
2023-7-1
https://stackoverflow.com/questions/76593906/how-to-resolve-cannot-import-name-missingvalues-from-sklearn-utils-param-v
I am trying to import imblearn into my python notebook after installing the required modules. However, I am getting the following error. Additional info: I am using a virtual environment in Visual Studio Code. I've made sure that venv was selected as interpreter and as the notebook kernel. I've reloaded the window and restarted the kernel several times. I have also uninstalled and installed imbalanced-learn and scikit-learn several times, with and without "--upgrade". I'm still getting the same error. Edit: Full traceback of the error (cleaned up from the notebook's escaped JSON output):
ImportError                               Traceback (most recent call last)
Cell In[1], line 1
----> 1 import imblearn
File c:\Users\wen\OneDrive\Desktop\Colab_Notebooks\.venv\Lib\site-packages\imblearn\__init__.py:52
---> 52 from . import (combine, ensemble, exceptions, metrics, over_sampling, pipeline, tensorflow, under_sampling, utils)
File c:\Users\wen\OneDrive\Desktop\Colab_Notebooks\.venv\Lib\site-packages\imblearn\combine\__init__.py:5
----> 5 from ._smote_enn import SMOTEENN
File c:\Users\wen\OneDrive\Desktop\Colab_Notebooks\.venv\Lib\site-packages\imblearn\combine\_smote_enn.py:12
---> 12 from ..base import BaseSampler
File c:\Users\wen\OneDrive\Desktop\Colab_Notebooks\.venv\Lib\site-packages\imblearn\base.py:21
---> 21 from .utils._param_validation import validate_parameter_constraints
File c:\Users\wen\OneDrive\Desktop\Colab_Notebooks\.venv\Lib\site-packages\imblearn\utils\_param_validation.py:908
--> 908 from sklearn.utils._param_validation import (HasMethods, Hidden, Interval, Options, StrOptions, _ArrayLikes, _Booleans, _Callables, _CVObjects, _InstancesOf, _IterablesNotString, _MissingValues, _NoneConstraint, _PandasNAConstraint, _RandomStates, _SparseMatrices, _VerboseHelper, make_constraint, validate_params)
ImportError: cannot import name '_MissingValues' from 'sklearn.utils._param_validation' (c:\Users\wen\OneDrive\Desktop\Colab_Notebooks\.venv\Lib\site-packages\sklearn\utils\_param_validation.py)
The versions of the modules are as follows: scikit-learn 1.3.0, imblearn 0.0, imbalanced-learn 0.10.1
I was having the same issue, downgrading to scikit-learn 1.2.2 fixed it for me
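For example (the exact pins are taken from the versions listed in the question; upgrading imbalanced-learn once a release compatible with scikit-learn 1.3 is available would be the longer-term fix):
pip install scikit-learn==1.2.2 imbalanced-learn==0.10.1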
9
11
76,593,825
2023-7-1
https://stackoverflow.com/questions/76593825/how-to-fix-none-result-whilst-using-bs4-for-web-scraping
Here I have a simple code that is not seeming to do what I am trying to do. The goal of this code is to retrieve the value of the stock that the user inputs. ticker = input("What stock would you like to track?: (enter the ticker) ") r = requests.get(f"https://www.marketwatch.com/investing/stock/{ticker}?mod=search_symbol").text webpage = BeautifulSoup(r, 'html.parser') locate = webpage.find('bq-quote', {'class': 'value'}) print(locate) Although the value of the stock is inside the class value it does not seem to be returning anything. web.find_all() just returns '[]' so I am not sure what I am doing wrong here.
This works. from bs4 import BeautifulSoup import requests ticker = input("What stock would you like to track?: (enter the ticker) ") r = requests.get(f"https://www.marketwatch.com/investing/stock/{ticker}?mod=search_symbol",).text webpage = BeautifulSoup(r, "html.parser") locate = webpage.find(class_="value") stock_price = locate.text print(stock_price)
3
1
76,572,824
2023-6-28
https://stackoverflow.com/questions/76572824/why-do-i-get-a-no-matching-distribution-found-error-when-installing-a-package
I am trying to upload my package to PyPi. It uploads successfully every time but when I try to install it, I get the following error: ERROR: Could not find a version that satisfies the requirement Google-Ads-Transparency-Scraper==1.4 (from versions: none) ERROR: No matching distribution found for Google-Ads-Transparency-Scraper==1.4 The following is the directory structure I have GoogleAds GoogleAdsTransparency __init __.py main.py regions.py setup.py setup.cfg license.txt README.md The setup.py has the following content """Install packages as defined in this file into the Python environment.""" from setuptools import setup, find_packages setup( name="Google Ads Transparency Scraper", author="Farhan Ahmed", author_email="[email protected]", url="https://github.com/faniAhmed/GoogleAdsTransparencyScraper", description="A scraper for getting Ads from Google Transparency", version="1.4", packages=find_packages(), download_url= 'https://github.com/faniAhmed/GoogleAdsTransparencyScraper/archive/refs/tags/v1.2.tar.gz', keywords= ['Google', 'Transparency', 'Scraper', 'API', 'Google Ads', 'Ads', 'Google Transparency', 'Google Transparency Scraper', 'Google Ads Scrapre'], license='Securely Incorporation', install_requires=[ "setuptools>=45.0", "beautifulsoup4>=4.12.2", "Requests>=2.31.0", "lxml>=4.6.3", ], classifiers=[ "Development Status :: 5 - Production/Stable", "Environment :: Console", "Intended Audience :: Developers", "License :: Free for non-commercial use", "Natural Language :: Urdu", "Operating System :: OS Independent", "Programming Language :: Python", ], platforms=["any"], ) The setup.cfg [metadata] description-file = README.md The init.py has the following content from GoogleAdsTransparency.main import GoogleAds from GoogleAdsTransparency.main import show_regions_list I run the following commands to upload the package python setup.py sdist twine upload dist/* The link to the package is https://pypi.org/project/Google-Ads-Transparency-Scraper/
This might be your problem: the uploaded file name contains spaces, whereas it should not contain spaces. That points to what is wrong: do you have spaces inside your setup.py? If so, remove them. Also, the GitHub page for your project is incomplete, it does not contain setup.py, so I can't diagnose any further. Please, if possible, provide the complete source code for such highly specific problems. So, I dug a little deeper and found out this: your name contains spaces and your download_url is outdated. After looking at verbose logs of pip, I find that pip is able to find the download link for the package, but it is unable to find the project version from the link.
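A minimal sketch of the relevant setup() fields, assuming everything else in the original setup.py stays unchanged (the tag in download_url is illustrative and should point at an existing release):
from setuptools import setup, find_packages
setup(
    name="Google-Ads-Transparency-Scraper",  # no spaces; pip normalizes this to google-ads-transparency-scraper
    version="1.4",
    packages=find_packages(),
    download_url="https://github.com/faniAhmed/GoogleAdsTransparencyScraper/archive/refs/tags/v1.4.tar.gz",
)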
4
1
76,592,108
2023-6-30
https://stackoverflow.com/questions/76592108/how-can-a-pandas-datetimearray-be-sorted
I have a DataFrame which contains a date column. I need to get the unique values of that column and then sort them out. I started with: unique_dates = data['date'].unique() Which works well. Next, I need to sort this unique list of dates. However, I am having issues because unique_values is now of DatetimeArray type, and this does not have the sort() or sort_values() methods I can find in a DataFrame. If I could convert it to just a NumPy array, I could sort it, but I am not finding a way to do that either... How can I get a DatetimeArray sorted? Is there another way of doing this that I am not considering? Thanks! Eduardo
The DatetimeArray is a flavour of ExtensionArray. ExtensionArrays don't have a sort or sort_values method, but they do have argsort, so we can achieve sorting using: unique_dates_sorted = unique_dates[unique_dates.argsort()] i.e. we get the indexes which result in the sorted array, then use those to produce a reordered array. A complete mini-example: import datetime import pandas as pd dates = [datetime.date(2021, 1, 1)] * 50 + [datetime.date(2022, 1, 1)] * 50 + [datetime.date(2020, 1, 1)] * 50 df = pd.DataFrame(dates, columns=['date']) unique_dates = df['date'].unique() unique_dates_sorted = unique_dates[unique_dates.argsort()] print(unique_dates_sorted) giving: [datetime.date(2020, 1, 1) datetime.date(2021, 1, 1) datetime.date(2022, 1, 1)]
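Not in the accepted answer, but an alternative that stays entirely inside pandas (and avoids handling the ExtensionArray at all) is to de-duplicate and sort the column directly:
unique_dates_sorted = df['date'].drop_duplicates().sort_values().to_numpy()  # unique dates in ascending order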
2
2
76,591,838
2023-6-30
https://stackoverflow.com/questions/76591838/space-of-using-rangen
Could you tell me please, if I use for i in range(n), does Python create a range of 0 .. n-1 and iterate over the elements in this container (O(n) extra space), or does it use only 1 variable (O(1) space)? On one side, I thought that since we can convert range to a list, using the range function creates a container (O(n)). But on the other side, instead of using for i in range(n), we can use while i < n ..., which is O(1).
In Python 3, range does not create all the elements, but only returns the current element when you request it. So yes, it's using O(1) space.
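A quick way to see this on CPython (the exact byte count is implementation-dependent):
import sys
r = range(10**12)
print(sys.getsizeof(r))        # a small constant, e.g. 48 bytes, regardless of n
print(r[999_999_999_999])      # elements are computed on demand, so indexing is O(1)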
2
3
76,587,201
2023-6-30
https://stackoverflow.com/questions/76587201/getting-solution-as-nan-for-mixed-integer-non-linear-programming-problem-with-ob
I wanted to maximize gross profit margin (total profit/total revenue) with binary variables, say whether products will be in the mix or not, i.e. the variables will be 1 or 0 (binary), trying to solve with gekko mixed integer non linear programming. Here is the example for 3 products, where we want to keep any 2 products. Variables for 3 products: x1, x2 and x3 total profit = 150*x1 + 120*x2 + 100*x3 total revenue = 200*x1 + 150*x2 + 250*x3 gross profit margin = total profit / total revenue solution tried from gekko import GEKKO m = GEKKO() x1 = m.Var(integer=True, lb=0, ub=1) x2 = m.Var(integer=True, lb=0, ub=1) x3 = m.Var(integer=True, lb=0, ub=1) m.Maximize((150*x1 + 120*x2 + 100*x3)/(200*x1 + 150*x2 + 250*x3)) m.Equation(x1 + x2 + x3 == 2) m.options.SOLVER = 1 m.solve() result x1: 0 x2: 0 x3: 0 objective function: nan things tried i) tried with adding one more constraint with the denominator > 0, getting same solution ii) tried with changing lb=0 to any other integer value and it is working (say lb=1, ub=2), not sure if anything particularly needs to be added for using lb=0 iii) tried absolute profit maximization (removing the denominator) and it is working fine any help will be appreciated, thanks in advance
Try using a different initial guess than 0 for at least one of the variables: x1 = m.Var(value=1,integer=True, lb=0, ub=1) Gekko uses a default starting value of 0. Gradient-based optimizers need to calculate a search direction and the initial guesses of zero lead to an Inf evaluation of the objective function. Here is the complete script with a condensed way to define the variables: from gekko import GEKKO m = GEKKO() x1,x2,x3 = m.Array(m.Var,3,value=1,integer=True,lb=0,ub=1) m.Maximize((150*x1 + 120*x2 + 100*x3)/(200*x1 + 150*x2 + 250*x3)) m.Equation(x1 + x2 + x3 == 2) m.options.SOLVER = 1 m.solve() This gives a successful solution: --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 3.579999999965366E-002 sec Objective : -0.771428571428571 Successful solution --------------------------------------------------- Gekko converts maximization problems to minimization problems that are standard for MINLP and NLP solvers so the objective function value is the negative of the true value. Use this to get the Maximization objective of 0.7714: print(-m.options.OBJFCNVAL)
3
1
76,588,693
2023-6-30
https://stackoverflow.com/questions/76588693/plotly-not-displaying-the-y-zero-line
when plotting data using plotly the y-zero line is not shown in the grid (it is blank), creating a weird blank space. I've tried to modify the grid parameters so y-zero and x-zero lines are always displayed, but it hasn't work. I attach the function that plots the data. See the image and how for y=0 there is no grey line: example of the behaviour def plot_dictionary_lines(data_dict, title, colors, dtick_y, y_title): tilt_values = list(data_dict.keys()) orientation_values = list(data_dict[tilt_values[0]].keys()) fig = go.Figure() for i,tilt in enumerate(tilt_values): tilt_data = data_dict[tilt] y_values = [] for orientation in orientation_values: y_values.append(tilt_data[orientation]) fig.add_trace(go.Scatter(x=orientation_values, y=y_values, mode='lines', name=tilt, line=dict(color=colors[i]))) fig.update_layout(title=title, xaxis_title='Orientation') fig.update_layout(yaxis=dict(title=y_title, dtick=dtick_y, minor=dict(ticks="inside", showgrid=True))) fig.update_layout(title=title, title_font_size=12, title_x=0.5, margin=dict(l=5, r=5, t=30, b=5)) fig.update_layout(showlegend=True, plot_bgcolor= 'white' ) fig.update_xaxes(showline=True, mirror=True, linewidth=2, linecolor='black', gridcolor='grey', gridwidth=1) fig.update_yaxes(showline=True, mirror=True, linewidth=2, linecolor='black', gridcolor='grey', gridwidth=1) fig.update_layout(legend_title_text='Tilt (deg)') fig.show() Data to reproduce the plot: data_dict = {'20': {'0': -29.993976613550426, '90': -9.982751500722712, '180': 0.0, '270': -16.409959148158066}, '40': {'0': -51.454747907503716, '90': -13.263266344556737, '180': 0.0, '270': -23.372002876678337}, '60': {'0': -60.76495332385845, '90': -13.362989512932888, '180': 0.0, '270': -25.197913842540597}} title = "Solar production sold entirely, respect south" color_codes = { "10colors": [ '#cce542', '#98d956', '#66cb69', '#2fba79', '#00a884', '#009589', '#008286', '#006e7d', '#185b6d', '#2a4858'] } colors = color_codes["10colors"] d_ticky = 5 y_title = "%"
That took me forever. Apparently the zero line does NOT follow the grid color. It has separate properties, zerolinecolor and zerolinewidth. Yours is currently white on a white background. Change it like this (or another color): fig.update_layout(yaxis=dict(zerolinecolor='black'))
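To make the zero line blend in with the rest of the grid used in the question (grey, width 1) rather than stand out, something along these lines should also work, since the zero line has its own color and width properties separate from gridcolor/gridwidth:
fig.update_yaxes(zeroline=True, zerolinecolor='grey', zerolinewidth=1)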
2
5
76,575,878
2023-6-28
https://stackoverflow.com/questions/76575878/python-loguru-output-to-stderr-and-a-file
I have the following line that configures my logger instance logger.add(sys.stderr, level=log_level, format="<green>{time:YYYY-MM-DD HH:mm:ss.SSS zz}</green> | <level>{level: <8}</level> | <yellow>Line {line: >4} ({file}):</yellow> <b>{message}</b>", colorize=True, backtrace=True, diagnose=True) I want to configure my logger to use the above, and also output to a file, so that when I call logger.whatever() it outputs to the terminal and to a log file This way, if I'm doing dev work and running the file directly, I can see the output in my terminal, and when the code is being run on a server by a cronjob, it can log to a file I don't understand the whole concept of the "sinks" so sorry if this is an easy question
In the context of logging, a "sink" refers to an output destination for log records. It can be a stream (such as sys.stderr or sys.stdout), a file path, or any custom function that accepts the logged message as input. See documentation for more details. Using Loguru, you can add as many sinks as you like. This means that in your case, you can have two sinks: one writing to sys.stderr and another one writing to file.log. log_level = "DEBUG" log_format = "<green>{time:YYYY-MM-DD HH:mm:ss.SSS zz}</green> | <level>{level: <8}</level> | <yellow>Line {line: >4} ({file}):</yellow> <b>{message}</b>" logger.add(sys.stderr, level=log_level, format=log_format, colorize=True, backtrace=True, diagnose=True) logger.add("file.log", level=log_level, format=log_format, colorize=False, backtrace=True, diagnose=True) Each time a message is logged with logger.info() for example, it will be sent to each of the added sinks and processed based on the corresponding configuration.
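Since the file sink here is meant for a long-running process driven by cron, it may also be worth knowing that Loguru's file sinks accept rotation and retention options; the values below are only examples:
logger.add("file.log", level=log_level, format=log_format, rotation="10 MB", retention="10 days", backtrace=True, diagnose=True)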
6
8
76,579,783
2023-6-29
https://stackoverflow.com/questions/76579783/rapids-pip-installation-issue
I've been trying to install RAPIDS in my Docker environment, which initially went smoothly. However, over the past one or two weeks, I've been encountering an error. The issue seems to be that pip is attempting to fetch from the default PyPi registry, where it encounters a placeholder project. I'm unsure who placed it there or why, as it appears to serve no practical purpose. => ERROR [12/19] RUN pip3 install cudf-cu11 cuml-cu11 cugraph-cu11 cucim --extra-index-url=https://pypi.nvidia.com 2.1s ------ > [12/19] RUN pip3 install cudf-cu11 cuml-cu11 cugraph-cu11 cucim --extra-index-url=https://pypi.nvidia.com: #0 1.038 Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com #0 1.466 Collecting cudf-cu11 #0 1.542 Downloading cudf-cu11-23.6.0.tar.gz (6.8 kB) #0 1.567 Preparing metadata (setup.py): started #0 1.972 Preparing metadata (setup.py): finished with status 'error' #0 1.980 error: subprocess-exited-with-error #0 1.980 #0 1.980 × python setup.py egg_info did not run successfully. #0 1.980 │ exit code: 1 #0 1.980 ╰─> [16 lines of output] #0 1.980 Traceback (most recent call last): #0 1.980 File "<string>", line 2, in <module> #0 1.980 File "<pip-setuptools-caller>", line 34, in <module> #0 1.980 File "/tmp/pip-install-8463q674/cudf-cu11_9d3e1a792dae4026962cdff29926ce8d/setup.py", line 137, in <module> #0 1.980 raise RuntimeError(open("ERROR.txt", "r").read()) #0 1.980 RuntimeError: #0 1.980 ########################################################################################### #0 1.980 The package you are trying to install is only a placeholder project on PyPI.org repository. #0 1.980 This package is hosted on NVIDIA Python Package Index. #0 1.980 #0 1.980 This package can be installed as: #0 1.980 ``` #0 1.980 $ pip install --extra-index-url https://pypi.nvidia.com cudf-cu11 #0 1.980 ``` #0 1.980 ########################################################################################### #0 1.980 #0 1.980 [end of output] #0 1.980 #0 1.980 note: This error originates from a subprocess, and is likely not a problem with pip. #0 1.983 error: metadata-generation-failed #0 1.983 #0 1.983 × Encountered error while generating package metadata. #0 1.983 ╰─> See above for output. #0 1.983 #0 1.983 note: This is an issue with the package mentioned above, not pip. #0 1.983 hint: See above for details. I attempted to explicitly set the --index-url to pypi.nvidia.com, but this approach wasn't feasible either, as the dependencies for the RAPIDS packages appear to be hosted on the default PyPi.
The issue has already been reported in the cudf repository and it seems to have existed since version 23.06. I will try the suggested workaround of installing version 23.04 and will report back if this temporarily resolves the problem. Original Issue: https://github.com/rapidsai/cudf/issues/13642 Feedback: Using strict versioning to 23.04 worked perfectly. I simply replaced the unversioned pip install with a versioned one: # Before pip3 install cudf-cu11 cuml-cu11 cugraph-cu11 cucim --extra-index-url=https://pypi.nvidia.com # After pip3 install cudf-cu11==23.04 cuml-cu11==23.04 cugraph-cu11==23.04 cucim --extra-index-url=https://pypi.nvidia.com One thing to note for anyone else encountering this issue while building in an Ubuntu 20.04 container: you need to add RUN apt remove python3-psutil before installing rapids, to allow pip to install psutil in the correct version.
2
3
76,581,535
2023-6-29
https://stackoverflow.com/questions/76581535/python-equivalent-for-java-hashcode-function
I have an A/B test split based on the result of Java hashCode() function applied to user's id (a string). I want to emulate that split in my dataframe to analyse the results. Is there a python equivalent for that function? Or maybe a documentation on the specific hashing algorithm inside hashCode() so I can produce that function myself? Thanks I searched for the documentation but couldn't find the specific details
According to java String source code, the hash implementation is: public int hashCode() { if (cachedHashCode != 0) return cachedHashCode; // Compute the hash code using a local variable to be reentrant. int hashCode = 0; int limit = count + offset; for (int i = offset; i < limit; i++) hashCode = hashCode * 31 + value[i]; return cachedHashCode = hashCode; } You can transfer this to Python (w/o caching): class JavaHashStr(str): def __hash__(self): hashCode = 0 for char in self: hashCode = hashCode * 31 + ord(char) return hashCode >>> j = JavaHashStr("abcd") >>> hash(j) 2987074 # same as java >>> j = JavaHashStr("abcdef") >>> hash(j) 2870581347 # java: -1424385949 Note, Python ints do not overflow like java, so this is wrong for many cases. You would have to add a simulation for the overflow (Update: thx to @PresidentJamesK.Polk for the improved version, SO thread on the topic): class JavaHashStr(str): def __hash__(self): hashCode = 0 for char in self: hashCode = (hashCode * 31 + ord(char)) & (2**32 - 1) # unsigned if hashCode & 2**31: hashCode -= 2**32 # make it signed return hashCode Now, even overflowing hashes behave the same: >>> j = JavaHashStr("abc") >>> hash(j) 96354 >>> j = JavaHashStr("abcdef") >>> hash(j) -1424385949 # Java hash for "abcdef" This might still be off for characters from the latter unicode panes like emojis or the like. But for the most common punctuation and latin-based characters, this should work.
2
5
76,582,964
2023-6-29
https://stackoverflow.com/questions/76582964/pandas-from-dict-returns-typeerror
I have a dictionary like the following: testProps = {"Key1":[], "Key2":False, "Key3":True, "Key4":[], "Key5":False} I want to convert it to a Pandas DataFrame with newTestProps = pd.DataFrame.from_dict(testProps,orient="index") but it is throwing me the following error. TypeError: object of type 'bool' has no len() When it should be creating a DataFrame like the following: Which is bizarre because I've had these two lines of code in my program for a long time and it hasn't thrown this error until now. Is it possible that there's a bug with Pandas?
To create the dataframe you can use next example: testProps = {"Key1": [], "Key2": False, "Key3": True, "Key4": [], "Key5": False} df = pd.DataFrame({'Index': testProps.keys(), 0: testProps.values()}) print(df) Prints: Index 0 0 Key1 [] 1 Key2 False 2 Key3 True 3 Key4 [] 4 Key5 False
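If keeping the dictionary keys as the index (as from_dict(orient="index") would have produced) is preferred, a hedged alternative is to go through a Series, which stores the mixed list/bool values as plain objects:
df = pd.Series(testProps).to_frame()  # keys become the index, values go into column 0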
2
3
76,575,991
2023-6-28
https://stackoverflow.com/questions/76575991/safe-way-of-converting-seconds-to-nanoseconds-without-int-float-overflowing
I currently have a UNIX timestamp as a 64-bit float. It's seconds with a few fractions as a float. Such as 1687976937.597064. I need to convert it to nanoseconds. But there's 1 billion nanoseconds in 1 second. And doing a straight multiplication by 1 billion would overflow the 64-bit float. Let's first consider the limits: 1_687_976_937_597_064_000 is the integer result of the above timestamp multiplied by 1 billion. The goal is figuring out a way to safely reach this number. 9_223_372_036_854_775_807 is the maximum number storable in a 64-bit signed integer. 9_007_199_254_740_992.0 is the maximum number storable in a 64-bit float. And at that scale, there aren't enough bits to store any decimals at all (it's permanently .0). Edit: This claim is not correct. See the end of this post... So 64-bit signed integer can hold the result. But a 64-bit float cannot hold the result and would overflow. So I was thinking: Since an integer is able to easily represent the result, I thought I could first convert the integer portion to an integer, and multiply by 1 billion. And then extract just the decimals so that I get a new 0.XXXXXX float, and then multiply that by 1 billion. By leading with a zero, I ensure that the integer portion of the float will never overflow. But perhaps the decimals could still overflow somehow? Hopefully floats will just safely truncate the trailing decimals instead of overflowing. By multiplying a 0.X number by 1 billion, the resulting value should never be able to be higher than 1_999_999_999.XXXXX so it seems like this multiplication should be safe... After that, I truncate the "decimals float" into an integer to ensure that the result will be an integer. Lastly, I add together the two integers. It seems to work, but this technique looks so hacky. Is it safe? Here's a Python repl showing the process: >>> num = 1687976937.597064 >>> whole = int(num) >>> whole 1687976937 >>> decimals = num - whole >>> decimals 0.5970640182495117 >>> (whole * 1_000_000_000) 1687976937000000000 >>> (decimals * 1_000_000_000) 597064018.2495117 >>> int(decimals * 1_000_000_000) 597064018 >>> (whole * 1_000_000_000) + int(decimals * 1_000_000_000) 1687976937597064018 >>> type((whole * 1_000_000_000) + int(decimals * 1_000_000_000)) <class 'int'> So here's the comparison: 1_687_976_937_597_064_018 was the result of the above algorithm. And yes, there's a slight, insignificant float rounding error but I don't mind. 1_687_976_937_597_064_000 is the scientifically correct answer given by Wolfram Alpha's calculator. It certainly looks like a success, but is there any risk that my algorithm would be dangerous and break? I am not brave enough to put it into production without confirmation that it's safe. Concerning the 64-bit float limits: Here are the results in Python 3's repl (pay attention to the 993 input and the 992 in the output): >>> 9_007_199_254_740_993.0 9007199254740992.0 But perhaps I am reading that "limit" incorrectly... Perhaps this is just a float rounding error.
Floats can have a really large exponent without losing their significant precision. Turns out that floats allow really large multiplication without any issues, as such: >>> 9_000_000_000_000_000_000_000_000_000.0 * 1_000_000_000 9e+36 >>> 9_123_456_789_012_345_678_901_234_567.0 * 1_000_000_000 9.123456789012346e+36 >>> int(9_123_456_789_012_345_678_901_234_567.0 * 1_000_000_000) 9123456789012346228226434616254267392 So basically, the float is keeping as many "significant digits" as it can fit internally, truncating the rest (in the left hand operator in the examples above), and then just scaling the exponent. It's able to roughly represent Unix nanosecond timestamps that are far larger than the age of the universe. When it's time to convert it to an integer, you can also see that the float keeps as much precision as it could and does a good job with the conversion. All of the significant digits are there. There's a lot of "random float rounding errors/noise" at the end of the output number, but those digits don't matter. In other words, I've had a fundamental misunderstanding about the size of numbers that a float can store. It's not limited per se. It just stores a fixed amount of significant digits and then it uses an exponent to reach the desired scale. So a float would suffice here! The answer is that I can just do the multiplication directly, and it will be totally safe. Since my multiplier is a straight 1 billion without any fractions, it will just scale up the exponent by 1 billion, without changing any of the digits at all. Fantastic. :) Just like this! >>> int(1687976937.597064 * 1_000_000_000) 1687976937597063936 Although when we use an integer like above, Python actually internally converts it into a float (1_000_000_000 (int) -> 1e9 (float)), since the other operand is a float. So it's actually 6% faster to do that multiplication with a float directly (avoiding the need for int -> float conversion of the multiplier): >>> int(1687976937.597064 * 1e9) 1687976937597063936 As you can see, the result is identical, since both cases are doing float * float math. The integer just required an extra conversion step first, which the latter method avoids. Let's recap: 1_687_976_937_597_064_018 was the result of my "split" algorithm earlier (in my original question). 1_687_976_937_597_063_936 is the result given by the suggestion to "just trust the float and do the multiply directly". 1_687_976_937_597_064_000 is the mathematically correct answer given by Wolfram Alpha's calculator. So my "split" technique had a smaller rounding error. The reason why my method was more accurate is because I had "split" my number into "whole" (int) and "decimals/fractions" (float). Which means that my method has full devotion of all significant digits to the decimals, since I had removed "the whole number" before the decimals/fractions. This means that my "decimals" float was able to devote all significant digits to properly representing the decimals with much greater precision. But these are UNIX timestamps represented as nanoseconds, and nobody really cares about the "fractions of a second" precision that much. What matters are the first few, important digits of the fraction, and those are all correct. That's all that matters in the end. I'll be using this result to set timestamps on disk via the utimensat API, and all that really matters is that I get roughly the correct fractions of a second. 
:) I use the Python os.utime() wrapper for that API, which takes the nanoseconds as a signed integer: "If ns is specified, it must be a 2-tuple of the form (atime_ns, mtime_ns) where each member is an int expressing nanoseconds." I'm going to do the straight multiplication and then convert the result to an int. That does the math in one simple step, gets sufficient precision for the decimals (fractions of a second), and solves the issue in a satisfactory way! Here's the Python code I'll be using. It preserves the current "access time" as nanoseconds by fetching that value from disk, and takes the self.unix_mtime float (a UNIX timestamp with fractions of a second as decimals) and converts that to a signed 64-bit integer nanosecond representation, and then applies the change to the target file/directory: # Good enough precision for practically anybody. Fast. file_meta = target_path.lstat() st_mtime_ns = int(self.unix_mtime * 1e9) os.utime( target_path, ns=(file_meta.st_atime_ns, st_mtime_ns), follow_symlinks=False ) If anyone else wants to do this, beware that I am using lstat() to get the status of symlinks rather than their target, and using follow_symlinks=False to ensure that if the final target_path component is a symlink then I affect the link itself rather than the target. Other people may want to change these calls to stat() and follow_symlinks=True if you prefer affecting the target rather than the symlink itself. But I would guess that most people prefer my method of affecting the symlink itself if the target_path points at a symlink. If you care about doing this "seconds-float to nanoseconds int" conversion with the highest achievable precision (by devoting maximum float precision to all the decimal digits to minimize rounding errors), then you can do my "split" variant as follows instead (I added type hints for clarity): # Great conversion precision. Slower. file_meta = target_path.lstat() whole: int = int(self.unix_mtime) frac: float = self.unix_mtime - whole st_mtime_ns: int = whole * 1_000_000_000 + int(frac * 1e9) os.utime( target_path, ns=(file_meta.st_atime_ns, st_mtime_ns), follow_symlinks=False ) As you can see, it uses int * int math for the "whole seconds" and uses float * float math for the "fractions of a second". And then combines the result into an integer. This gives the best of both worlds in terms of accuracy and speed. I did some benchmarks: 50 million iterations on a Ryzen 3900x CPU. The "simplified, less accurate" version took 11.728529000014532 seconds. The more accurate version took 26.941824199981056 seconds. That's 2.3x the time. Considering that I did 50 million iterations, you can be sure that you can safely use the more accurate version without having to worry about the performance. So if you want more accurate timestamps, feel free to use the last method. :) As a bonus, I benchmarked @dawg's answer, which is the exact same idea as "the more accurate method", but is done via two calls to math.modf() instead of directly calculating the whole/fraction manually. Their answer is the slowest at 33.54755139999557 seconds. I wouldn't recommend it. 
Besides, the primary idea behind their technique was just to discard everything after the first three float decimals, which doesn't even matter for any practical purposes, and if their removal is truly desired then it can be achieved without slow math.modf() calls by simply changing my "more accurate" variant's final line to say whole * 1_000_000_000 + (int(frac * 1e3) * 1_000_000) instead, which achieves that decimal truncation technique in 27.95227960000746 seconds instead. There's also a third method via the discussed decimal library which would have perfect mathematical precision (it doesn't use floats), but it's very slow, so I didn't include it.
4
0
76,582,307
2023-6-29
https://stackoverflow.com/questions/76582307/i-am-using-undetected-chromedriver
I am using undetected_chromedriver, but I am getting the error This version of ChromeDriver only supports Chrome version 114 Current browser version is 103.0.5060.53 My code: import undetected_chromedriver as uc uc.TARGET_VERSION = 103 options = uc.ChromeOptions() options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') driver = uc.Chrome(options=options, version_main=103, patcher_force_close=True) driver.get('https://nowsecure.nl') driver.quit()
You can use SeleniumBase's UC Mode to use undetected-chromedriver with any browser version (it automatically downloads the correct driver if missing). First pip install seleniumbase, and then run the following script with python: from seleniumbase import Driver import time driver = Driver(uc=True, incognito=True) driver.get("https://nowsecure.nl/#relax") time.sleep(8) driver.quit()
2
3
76,582,211
2023-6-29
https://stackoverflow.com/questions/76582211/what-method-is-called-when-using-dot-notation-in-myclass-my-attribute
I want to print "getting x attr" whenever I use the dot notation to get an attribute. For example class MyClass: my_attribute = "abc" print(MyClass.my_attribute) I'd expect it to output like this: >>> getting my_attribute attr >>> abc I'm not instantiating the class. I tried to add a print statement to MyClass.__getattr__() and MyClass.__getattribute__() but none of these will display anything. I tried the following: class MyClass: def __getattr__(self, key): print("running __getattr__") return super().__getattr__(key) def __getattribute__(self, __name: str) -> Any: print("running __getattribute__") return super().__getattribute__(__name) my_attribute = "abc"
MyClass (the class object) is itself an object. You want to define your custom __getattribute__ method on the class that MyClass is an instance of. To do that, use a metaclass: class MyMeta(type): def __getattribute__(self, item): print(f"getting {item}") return super().__getattribute__(item) class MyClass(metaclass=MyMeta): my_class_attribute = "abc" then print(type(MyClass)) print(MyClass.my_class_attribute) outputs <class '__main__.MyMeta'> getting my_class_attribute abc
2
4
76,578,411
2023-6-29
https://stackoverflow.com/questions/76578411/how-to-correct-coordinate-shifting-in-ax-annotate
I tried to annotate a line plot with ax.annotate as follows. import numpy as np import matplotlib.pyplot as plt x_start = 0 x_end = 200 y_start = 20 y_end = 20 fig, ax = plt.subplots(figsize=(5,5),dpi=600) ax.plot(np.asarray([i for i in range(0,1000)])) ax.annotate('', xy=(x_start, y_start), xytext=(x_end, y_end), xycoords='data', textcoords='data', arrowprops={'arrowstyle': '|-|'}) plt.show() which gave a plot (zoomed in) Although I have specified x_start to be 0 and x_end to be 200, the actual start is greater than 0 and actual end is smaller than 200 on the x-axis. How do I correctly line up this annotation with the set coordinates?
By default, the arrow is shrunk by 2 points on both ends (see the doc). You can set shrinkA and shrinkB to 0 to align with your x-axis: import numpy as np import matplotlib.pyplot as plt x_start = 0 x_end = 200 y_start = 20 y_end = 20 fig, ax = plt.subplots(figsize=(5,5),dpi=600) ax.plot(np.asarray([i for i in range(0,1000)])) ax.annotate('', xy=(x_start, y_start), xytext=(x_end, y_end), xycoords='data', textcoords='data', arrowprops={'arrowstyle': '|-|', 'shrinkA': 0, 'shrinkB': 0}) plt.show() Output:
2
4
76,572,666
2023-6-28
https://stackoverflow.com/questions/76572666/creating-a-custom-pydantic-field-to-accept-str-or-none-values
I want to create a Pydantic custom field. The main goal of this validator is to be able to accept two data types: "str" and "None". If the value is "None", it should return an empty string. I tried to do it as follows: from pydantic import BaseModel class EmptyStringField: @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v): if v is None: return "" return str(v) class Model(BaseModel): url: EmptyStringField model = Model(url=None) print(model.url) However, I'm getting the following error: url none is not an allowed value (type=type_error.none.not_allowed)
If all you want is for the url field to accept None as a special case, but save an empty string instead, you should still declare it as a regular str type field. You can handle the special case in a custom pre=True validator. No need for a custom data type there. from pydantic import BaseModel, validator class Model(BaseModel): url: str @validator("url", pre=True) def none_to_empty(cls, v: object) -> object: if v is None: return "" return v model = Model(url=None) print(model.json()) # {"url": ""} Update If you don't want to repeat the validator in different models for different fields, you can define a catch-all pre=True validator on a custom base model and implement some sort of logic to discern, which fields on the model to process and how. One option is to utilize typing.Annotated to "package" any given type with some custom conversion function that should be called. The catch-all validator would then check every field for Annotated metadata and if it finds a function, it would call that on the value. Here is a working example: from typing import Annotated, get_origin from pydantic import BaseModel as PydanticBaseModel, validator from pydantic.fields import ModelField class BaseModel(PydanticBaseModel): @validator("*", pre=True) def process_annotated(cls, v: object, field: ModelField) -> object: if get_origin(field.annotation) is not Annotated: return v func = field.annotation.__metadata__[0] if not callable(func): return v return func(v) StrOrNone = Annotated[str, lambda v: "" if v is None else v] class Model1(BaseModel): url: StrOrNone print(Model1(url=None).json()) # {"url": ""}
4
5
76,580,973
2023-6-29
https://stackoverflow.com/questions/76580973/extracting-first-3-elements-from-list-of-strings-in-pandas-df
I want to extract the first 3 elements from a list of strings from the '1/1' column. My df_unique looks like that: 1/1 0/0 count 0 ['P1-12', 'P1-22', 'P1-25', 'P1-26', 'P1-28', 'P1-6', 'P1-88', 'P1-93'] ['P1-89', 'P1-90', 'P1-92', 'P1-95'] 1 1 ['P1-12', 'P1-22', 'P1-25', 'P1-26', 'P1-6', 'P1-89', 'P1-92', 'P1-95'] ['P1-28', 'P1-90', 'P1-93'] 1 2 ['P1-12', 'P1-22', 'P1-25', 'P1-26', 'P1-88', 'P1-89', 'P1-92', 'P1-93', 'P1-95'] ['P1-28', 'P1-6', 'P1-90'] 1 3 ['P1-12', 'P1-22', 'P1-25', 'P1-26', 'P1-88', 'P1-89', 'P1-92', 'P1-93'] ['P1-28', 'P1-6', 'P1-90'] 1 I've tried to use different solutions: df_extract_3 = df_unique['1/1'].str.split().map(lambda lst: [string[0:3] for string in lst]) but the result looks like that: 0 [['P, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1] 1 [['P, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1] 2 [['P, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1] 3 [['P, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1, 'P1] And the second solution: df_extract_3 = df_unique['1/1'].str[0:3] gives: 0 ['P 1 ['P 2 ['P 3 ['P When I try to add split : df_extract_3 = df_unique['1/1'].str.split().str[0:3] the final result is: 0 [['P1-12',, 'P1-22',, 'P1-25',] 1 [['P1-12',, 'P1-22',, 'P1-25',] 2 [['P1-12',, 'P1-22',, 'P1-25',] 3 [['P1-12',, 'P1-22',, 'P1-25',] What should I change to receive 'normal' output like: 0 ['P1-12', 'P1-22', 'P1-25'] 1 ['P1-12', 'P1-22', 'P1-25'] 2 ['P1-12', 'P1-22', 'P1-25'] 3 ['P1-12', 'P1-22', 'P1-25'] ? I know it can be easy modification but I've stuck and messed with that... Thanks a lot!
First convert your strings to real lists, then slice with str: import ast df_unique['1/1'] = df_unique['1/1'].apply(ast.literal_eval) df_unique['0/0'] = df_unique['0/0'].apply(ast.literal_eval) df_extract_3 = df_unique['1/1'].str[:3] print(df_extract_3) Or in one shot: df_extract_3 = df_unique['1/1'].apply(lambda x: ast.literal_eval(x)[:3]) Output: 0 [P1-12, P1-22, P1-25] 1 [P1-12, P1-22, P1-25] 2 [P1-12, P1-22, P1-25] 3 [P1-12, P1-22, P1-25] Name: 1/1, dtype: object
4
4
76,579,241
2023-6-29
https://stackoverflow.com/questions/76579241/reduce-by-multiple-columns-in-pandas-groupby
Having the dataframe import pandas as pd df = pd.DataFrame( { "group0": [1, 1, 2, 2, 3, 3], "group1": ["1", "1", "1", "2", "2", "2"], "relevant": [True, False, False, True, True, True], "value": [0, 1, 2, 3, 4, 5], } ) I wish to produce a target target = pd.DataFrame( { "group0": [1, 2, 2, 3], "group1": ["1","1", "2", "2",], "value": [0, 2, 3, 5], } ) where "value" has been chosen by: the maximum of all positive "relevant" indices in the "value" column, otherwise the maximum of "value" if no positive "relevant" indices exist. This would be produced by def fun(x): tmp = x["value"][x["relevant"]] if len(tmp): return tmp.max() return x["value"].max() were x a groupby dataframe. Is it possible to achieve the desired groupby reduction efficiently? EDIT: with payload import numpy as np from time import perf_counter df = pd.DataFrame( { "group0": np.random.randint(0, 30,size=10_000_000), "group1": np.random.randint(0, 30,size=10_000_000), "relevant": np.random.randint(0, 1, size=10_000_000).astype(bool), "value": np.random.random_sample(size=10_000_000) * 1000, } ) start = perf_counter() out = (df .sort_values(by=['relevant', 'value']) .groupby(['group0', 'group1'], as_index=False) ['value'].last() ) end = perf_counter() print("Sort values", end - start) def fun(x): tmp = x["value"][x["relevant"]] if len(tmp): return tmp.max() return x["value"].max() start = perf_counter() out = df.groupby(["group0", "group1"]).apply(fun) end = perf_counter() print("Apply", end - start) #Sort values 14.823943354000221 #Apply 1.5050544870009617 The .apply solution took 1.5s, while the sort_values solution took 14.82s. However, reducing the sizes of the test groups with ... "group0": np.random.randint(0, 500_000,size=10_000_000), "group1": np.random.randint(0, 100_000,size=10_000_000), ... led to vastly superior performance by the sort_values solution (15.29s versus 1423.84s). The sort_values solution by @mozway is preferred, unless the user specifically knows that the data contains small group counts.
Sort the values to put True, then highest number last and use a groupby.last: out = (df .sort_values(by=['relevant', 'value']) .groupby(['group0', 'group1'], as_index=False) ['value'].last() ) Output: group0 group1 value 0 1 1 0 1 2 1 2 2 2 2 3 3 3 2 5 Intermediate before aggregation: * selected rows group0 group1 relevant value 1 1 1 False 1 2 2 1 False 2 * 0 1 1 True 0 * 3 2 2 True 3 * 4 3 2 True 4 5 3 2 True 5 *
2
2
76,576,074
2023-6-28
https://stackoverflow.com/questions/76576074/how-to-import-a-postscript-file-into-pil-with-a-transparent-background
Using Pillow 9.5.0, Python 3.11.4, Tk 8.6, and Windows 11 (version 22H2) On the program i'm working on, I have a tkinter canvas that saves a postscript file, which is then imported into a PIL Image and shown. The problem is that it always has a white background, even when I changed the canvas background color Here's an excerpt of my example self.canvas.update() self.canvas.postscript(file='temp_{insert_garbage_here}.eps') img = Image.open('temp_{insert_garbage_here}.eps') img.show() Is there any way to fix this, so that the background is transparent? Btw, I tried using mask = Image.new('L', img.size, color=255) img.putalpha(mask) but that didn't change anything And this is the file the postscript exported: https://www.dropbox.com/scl/fi/fz0xt6is5aicgnqvkcskd/temp_-insert_garbage_here.eps?dl=0&rlkey=yd2pjb61t1rmydm9ky704zcfl
After opening the image, you can call the load() method with transparency=True to tell Ghostscript to load the EPS (Encapsulated PostScript) image with a transparent background: from PIL import Image im = Image.open('a.eps') im.load(transparency=True) # Check we now have an RGBA image print(im.mode) # prints "RGBA"
2
3
76,577,224
2023-6-28
https://stackoverflow.com/questions/76577224/python-does-importing-sys-module-also-import-os-module
I am following this tutorial to see why Python doesn't recognize an environment variable that was set at the Conda prompt (using CMD syntax, as this is an Anaconda installation on Windows 10). It requires the os module, and as part of my getting familiar with Python, I decided to test whether os was already imported. Testing the presence of a module requires the sys module (as described here). Strangely, right after importing sys, I found that os was imported without me having to do so. I find this odd, as most of my googling shows that you have to import them individually, e.g., here. Does importing sys also impart os, as it seems to? If so, why is it common to import both individually? I can't test for the presence of os before importing sys, as I need sys to test for the presence of modules. Here is the code that shows the apparent presence of os from importing sys, formatted for readability. It starts from the Conda prompt in Windows. The Conda environment is "py39", which is for Python 3.9: (py39) C:\Users\User.Name > python Python 3.9.16 (main, Mar 8 2023, 10:39:24) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> "os" in sys.modules True Afternote: Thanks to Zero's answer, I found this code to be more what I'm looking for. After loading sys, the appropriate test is ( 'os' in sys.modules ) and ( 'os' in dir() ): (py39) C:\Users\User.Name > python 'os' in dir() # False import sys 'os' in sys.modules , 'os' in dir() # (True, False) ( 'os' in sys.modules ) and ( 'os' in dir() ) # False import os 'os' in sys.modules , 'os' in dir() # (True, True) ( 'os' in sys.modules ) and ( 'os' in dir() ) # True sys.modules shows whether the module has been imported anywhere (presumably in the code that the Python interpreter has executed) while dir() indicates whether the module name is in the current namespace. Thanks to Carcigenicate for clarifying this point, and I hope that I understood it properly.
sys uses the os module, but doesn't import it to your code. You can confirm it through the code. In [1]: import sys In [2]: os.getcwd() --------------------------------------------------------------------------- NameError Traceback (most recent call last) Cell In [2], line 1 ----> 1 os.getcwd() NameError: name 'os' is not defined
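To make the loaded-vs-bound distinction concrete, here is a small sketch (same spirit as the question's afternote); note that a later import os is cheap, because it just binds the name to the module object already cached in sys.modules:

import sys

print('os' in sys.modules)      # True: the module object is already loaded
print('os' in dir())            # False: the name os is not bound in this namespace

import os                       # cheap: reuses the cached module, only binds the name
print(os is sys.modules['os'])  # True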
3
-1
76,576,150
2023-6-28
https://stackoverflow.com/questions/76576150/filtering-dataframe-based-on-specific-conditions-in-python
I have a DataFrame with the following columns: INVOICE_DATE, COUNTRY, CUSTOMER_ID, INVOICE_ID, DESCRIPTION, USIM, and DEMANDQTY. I want to filter the DataFrame based on specific conditions. The condition is that if the DESCRIPTION column contains the words "kids" or "baby", I want to include all the values from that INVOICE_ID in the filtered DataFrame. In other words, at least one item in the transaction should belong to the kids or baby category for the entire transaction to be included. I tried using the str.contains() method in combination with a regular expression pattern, but I'm having trouble getting the desired results. Here's my code: import pandas as pd # Assuming the DataFrame is named 'df' # Filter the DataFrame based on the condition filtered_df = df[df['DESCRIPTION'].str.contains('kids|baby', case=False, regex=True)] # Print the filtered DataFrame filtered_df However, this code does not provide the expected results. It filters the data frame based on individual rows rather than considering the entire transaction. Please find below the test data: - import pandas as pd import random import string import numpy as np random.seed(42) np.random.seed(42) num_transactions = 100 max_items_per_transaction = 6 # Generate a list of possible items possible_items = [ "Kids T-shirt", "Baby Onesie", "Kids Socks", "Men's Shirt", "Women's Dress", "Kids Pants", "Baby Hat", "Women's Shoes", "Men's Pants", "Kids Jacket", "Baby Bib", "Men's Hat", "Women's Skirt", "Kids Shoes", "Baby Romper", "Men's Sweater", "Kids Gloves", "Baby Blanket" ] # Create the DataFrame rows = [] for i in range(num_transactions): num_items = random.randint(1, max_items_per_transaction) items = random.sample(possible_items, num_items) invoice_dates = pd.date_range(start='2022-01-01', periods=num_items, freq='D') countries = random.choices(['USA', 'Canada', 'UK'], k=num_items) customer_id = i + 1 invoice_id = 1001 + i for j in range(num_items): item = items[j] usim = ''.join(random.choices(string.ascii_uppercase + string.digits, k=6)) # Generate a random 6-character USIM value demand_qty = random.randint(1, 10) row = { 'INVOICE_DATE': invoice_dates[j], 'COUNTRY': countries[j], 'CUSTOMER_ID': customer_id, 'INVOICE_ID': invoice_id, 'DESCRIPTION': item, 'USIM': usim, 'DEMANDQTY': demand_qty } rows.append(row) df = pd.DataFrame(rows) # Print the DataFrame df Can anyone please guide me on how to properly filter the DataFrame based on the described condition? I would greatly appreciate any help or suggestions. Thank you!
Suppose the following dataframe: >>> df DESCRIPTION INVOICE_ID 0 kids 123 1 hello 123 2 world 123 3 another 456 4 one 456 You want to keep INVOICE_ID=123 because 'kids' is in the description of row 0: m = df['DESCRIPTION'].str.contains('kids|baby', case=False, regex=True) filtered_df = df[m.groupby(df['INVOICE_ID']).transform('max')] Output: >>> filtered_df DESCRIPTION INVOICE_ID 0 kids 123 1 hello 123 2 world 123
3
1
76,575,992
2023-6-28
https://stackoverflow.com/questions/76575992/rounding-down-numbers-to-the-nearest-even-number
I have a dataframe with a column of numbers that look like this: data = [291.79, 499.31, 810.93, 1164.25] df = pd.DataFrame(data, columns=['Onset']) Is there an elegant way to round down the odd numbers to the nearest even number? The even numbers should be rounded to the nearest even integer. This should be the output: data = [290, 498, 810, 1164] df = pd.DataFrame(data, columns=['Onset']) Thank you!
I'd use modulo operator: df['Onset_new'] = (i:=df['Onset'].astype(int)) - i % 2 print(df) Prints: Onset Onset_new 0 291.79 290 1 499.31 498 2 810.93 810 3 1164.25 1164
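If you prefer to avoid the walrus operator (Python 3.8+), the same rounding can be written with floor division; a sketch assuming the values are non-negative as in the sample data:

df['Onset_new'] = df['Onset'].astype(int) // 2 * 2

Truncating to int and then flooring to the nearest multiple of 2 gives 290, 498, 810, 1164 for the sample column.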
3
2
76,575,762
2023-6-28
https://stackoverflow.com/questions/76575762/how-to-add-multiple-or-queries-in-django
I have 2 models, Item and Category, the model Item has category field as a foreign key In my views.py I get a list of queries from a POST request queries = ['category1', 'category2', 'category3', ...] I don't know the number of the queries I will get from the request, and I want to filter the Item model based on category field I tried this from django.db.models import Q from .models import Item, Category from django import views class myView(views.View): def post(self, request): queries = request.POST.get('queries', '') if queries: queriesList = [] queries = queries.split(',') # queries = ['category1', 'category2', ....] for query in queries: queriesList.append(Q(category__icontains=query)) queryset = Item.objects.filter(*queriesList) # this will do AND but won't do OR # I tried: queryset = Item.objects.filter([q | Q() for q in queriesList]) but it didn't work Also I tried queryset = Item.objects.filter(category__in=queries) but it's case sensitive
You can use Q.OR as _connector: from django.db.models import Q from django import views from .models import Category, Item class myView(views.View): def post(self, request): queries = request.POST.get('queries', '') if queries: queryset = Item.objects.filter( *[Q(category__icontains=query) for query in queries.split(',')], _connector=Q.OR ) searching is however normally done through a GET request, such that the search query appears in the URL, and thus the URL can be copied, bookmarked, etc. with the search query included.
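The q | Q() idea the question was reaching for can also be expressed by reducing the Q objects with the OR operator, which gives the same result as the _connector=Q.OR form; a sketch assuming queriesList is the non-empty list of Q objects built in the question's loop:

from functools import reduce
from operator import or_

# OR-combine all Q objects (reduce needs at least one element, which the
# `if queries:` guard in the view already ensures)
queryset = Item.objects.filter(reduce(or_, queriesList))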
2
3
76,575,561
2023-6-28
https://stackoverflow.com/questions/76575561/get-values-of-dataframe-index-column
I have a DataFrame with an index column that uses pandas TimeStamps. How can I get the values of the index as if it were a normal column (df["index"])?
You can get the index values from your dataframe by the following: df.index.values The above will return an array of the index values of your dataframe. You can simply store that in a variable. You can also get the values as a list by the following: list(df.index.values)
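A small self-contained sketch (hypothetical frame with a DatetimeIndex, matching the question's setup):

import pandas as pd

df = pd.DataFrame(
    {"value": [1, 2, 3]},
    index=pd.to_datetime(["2023-06-26", "2023-06-27", "2023-06-28"]),
)

idx_array = df.index.values       # numpy array of datetime64 timestamps
idx_list = list(df.index.values)  # same values as a plain Python list
print(idx_array)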
2
5
76,574,991
2023-6-28
https://stackoverflow.com/questions/76574991/pandas-compare-values-of-multiple-columns
I want to find out if any of the values in columns mark1, mark2, mark3, mark4 and mark5 are the same (a column-wise comparison) in the dataframe below, and list the result as True or False import pandas as pd df = pd.DataFrame(data=[[7, 2, 3, 7, 7], [3, 4, 3, 2, 7], [1, 6, 5, 2, 7], [5, 5, 6, 3, 1]], columns=["mark1", "mark2", 'mark3', 'mark4', 'mark5']) Ideal output: mark1 mark2 mark3 mark4 mark5 result 0 7 2 3 7 7 True 1 3 4 3 2 7 True 2 1 6 5 2 7 False 3 5 5 6 3 1 True So I came up with a function using a nested for loop to compare each value in a column, which does not work: AttributeError: 'Series' object has no attribute 'columns' What's the correct way? Avoid a nested for loop by all means. def compare_col(df): check = 0 for i in range(len(df.columns.tolist())+1): for j in range(1, len(df.columns.tolist())+1): if df.iloc[i, i] == df.iloc[j, i]: check += 1 if check >= 1: return True else: return False df['result'] = df.apply(lambda x: compare_col(x[['mark1', 'mark2', 'mark3', 'mark4', 'mark5']]), axis=1)
No need to use apply or a loop, compare the output of nunique to the number of columns: df['result'] = df.nunique(axis=1).ne(df.shape[1]) Output: mark1 mark2 mark3 mark4 mark5 result 0 7 2 3 7 7 True 1 3 4 3 2 7 True 2 1 6 5 2 7 False 3 5 5 6 3 1 True If you want a more efficient method and assuming a reasonable number of columns (less than a thousand) and numbers, you could use numpy to sort the values, compute the diff and check whether any value is 0: import numpy as np df['result'] = (np.diff(np.sort(df), axis=1)==0).any(axis=1) Output: mark1 mark2 mark3 mark4 mark5 result 0 7 2 3 7 7 True 1 3 4 3 2 7 True 2 1 6 5 2 7 False 3 5 5 6 3 1 True
3
0
76,573,883
2023-6-28
https://stackoverflow.com/questions/76573883/how-to-hide-cmd-window-when-run-perl-in-python
I write the main application with python and build it to windows *.exe file, and now need to call a 3rd party perl script. I use, cmd = "perl fileconverter.pl" subprocess.call(cmd) Everything works well except the cmd window, which will pop up when subprocess.call(cmd) is called. Question is, how to hide the cmd window in this situation?
You can use wperl instead of perl. If your program is running in a console, perl should reuse the same console. >perl -e"system 'perl -Esay+123'" 123 (I don't have Python installed, but the same applies no matter what launches perl.) But maybe your Python program is running without a console. Then perl will create a console for itself. Another answer shows you how to hide this console when using Python, and it's a great solution. But there's a simple Perl-specific option you can also use. Strawberry Perl (and I think ActivePerl, and probably all builds of Perl for Windows) provide not just perl.exe, but wperl.exe.[1] The latter is identical to the former, except it uses a hidden console.[2] So all you would need to do is change perl to wperl. [1] I don't count builds of Perl for unix emulation environments (Cygwin, MSYS, MSYS2) as Windows builds of Perl. [2] The binaries differ by only one bit.
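Applied to the question's Python snippet, the swap might look like this (a sketch; it assumes wperl.exe is on PATH, e.g. from a Strawberry Perl install):

import subprocess

# wperl runs the same interpreter but with a hidden console window
subprocess.call(["wperl", "fileconverter.pl"])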
3
4
76,573,550
2023-6-28
https://stackoverflow.com/questions/76573550/dictionary-hashmap-implementation-using-double-hashing-is-stuck-in-an-infinite-l
I'm following this formula from wikipedia: H(i, k) = (H1(k) + i*H2(k)) % size and my H1 is Python's built-in hash() function. H2 is: PRIME - (H1(k) % PRIME) Unfortunately it randomly sticks in an infinite loop after a couple of execution. It cannot traverse all the slots in my table. Here is my code but you have to set PYTHONHASHSEED=12 in order to reproduce this bug. (I deliberately removed many details so that the implementation would be minimal) EMPTY = object() class DoubleHashingHashMap: def __init__(self): self.prime = 7 self.size = 15 self.slots = [EMPTY] * self.size def __setitem__(self, key, value): for idx in self.probing_squence(key): slot = self.slots[idx] if slot is EMPTY: self.slots[idx] = (key, value) break elif isinstance(slot, tuple): k, v = slot if k == key: self.slots[idx] = (key, value) break def probing_squence(self, key): h1 = self.hash_func1(key) % self.size h2 = self.hash_func2(key) % self.size i = 1 while True: yield (h1 + i*h2) % self.size i += 1 def hash_func1(self, item): return hash(item) def hash_func2(self, item): return self.prime - (self.hash_func1(item) % self.prime) hashmap = DoubleHashingHashMap() for i in range(8): hashmap[str(i)] = i print("8 items added.") print("Going into the infinite loop when adding 9th item(which is 8)...") hashmap["8"] = 8 print("This line can't be reached.") I would appreciate if you tell me what's wrong with my math.
The logic calculating the sequence is flawed. For the configuration you mentioned it will just output 0, 5, 10 forever, and since the 0, 5, 10 slots are already occupied this goes on forever. You only multiply h2 with i and do the modulo with the size. This will loop quite often through a few specific values and won't cover all possible indexes. This is what happens in your case: # h1 = 10, h2 = 5, calculating the first 10 outputs you would get import numpy as np print((10 + np.arange(10) * 5) % 15) array([10, 0, 5, 10, 0, 5, 10, 0, 5, 10]) So this actually loops through only 3 values, quite bad with 15 possible ones. That is probably the reason why this bug happens so fast. With how you implement it you can just increase the index by one and do this until a slot is empty, and in the __getitem__ you need to check if the requested key matches the key in the slot and, if not, do the same logic by increasing it by one until you find it. EMPTY = object() class DoubleHashingHashMap: def __init__(self): self.prime = 7 self.size = 15 self.slots = [EMPTY] * self.size def __setitem__(self, key, value): for idx in self.probing_squence(key): slot = self.slots[idx] if slot is EMPTY: self.slots[idx] = (key, value) break elif isinstance(slot, tuple): k, v = slot if k == key: self.slots[idx] = (key, value) break def __getitem__(self, key): for idx in self.probing_squence(key): slot = self.slots[idx] if slot is not EMPTY and slot[0] == key: return slot[1] def probing_squence(self, key): h1 = self.hash_func1(key) % self.size h2 = self.hash_func2(key) % self.size i = 0 while True: yield (h1 + h2 + i) % self.size i += 1 def hash_func1(self, item): return hash(item) def hash_func2(self, item): return self.prime - (self.hash_func1(item) % self.prime) hashmap = DoubleHashingHashMap() for i in range(8): hashmap[str(i)] = i print("8 items added.") print("Going into the infinite loop when adding 9th item(which is 8)...") hashmap["8"] = 8 print("This line can't be reached.") print(hashmap["1"], hashmap["8"]) So this fixes it, but probably not in the way you want since you reference the Wikipedia formula. So why does the formula from Wikipedia not work in your case? This is probably because your h2 does not have all the needed characteristics. The Wikipedia article you linked says: The secondary hash function h2(k) should have several characteristics: it should never yield an index of zero it should be pair-wise independent of h1(k) it should cycle through the whole table all h2(k) should be relatively prime to the table size Your h2 actually has only the first characteristic. It can't be 0. It is definitely dependent on h1 since you use h1 to calculate h2. It won't cycle through the whole table since your self.prime < self.size. It can definitely output e.g. 5, which is not relatively prime to a total size of 15; they both share the factor 5. As said in the article, to e.g. get the relatively-prime characteristic you can make the total size a power of 2 and only ever return odd numbers from h2. This will automatically make it relatively prime. You should not use h1 to calculate h2, to keep them independent, and make sure the outputs of h2 are in the interval [1, size - 1]. So if you want to apply the hashing rule you need to make sure your h2 actually has the characteristics needed. Otherwise the closed loop over a few numbers will happen, as you observed.
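For illustration, a minimal sketch of an h2 that satisfies those characteristics (the class name and the tuple-salted second hash are my own stand-ins, not the question's code): the table size is a power of two and h2 always returns an odd number in [1, size - 1], so every probe sequence visits all slots.

class DoubleHashingSketch:
    def __init__(self):
        self.size = 16  # power of two

    def hash_func1(self, key):
        return hash(key)

    def hash_func2(self, key):
        # Derived from a different input than h1, forced odd and within [1, size - 1]
        h = hash((key, "second"))
        return (h % (self.size // 2)) * 2 + 1

    def probing_sequence(self, key):
        h1 = self.hash_func1(key) % self.size
        h2 = self.hash_func2(key)
        for i in range(self.size):
            yield (h1 + i * h2) % self.size

Because gcd(h2, size) == 1, the yielded indices are a permutation of 0..size-1, so an insert always finds a free slot while the table is not full.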
4
2
76,570,896
2023-6-28
https://stackoverflow.com/questions/76570896/importerror-cannot-import-name-jsonencoder-from-flask-json
I'm following a course on full-stack with Flask. My init.py looks like: from flask import Flask from config import Config from flask_mongoengine import MongoEngine app = Flask(__name__) app.config.from_object(Config) db = MongoEngine() db.init_app(app) from application import routes However, when importing from flask_mongoengine import MongoEngine, I'm getting an ImportError: ImportError: cannot import name 'JSONEncoder' from 'flask.json' My venv looks like: blinker==1.6.2 click==8.1.3 colorama==0.4.6 dnspython==2.3.0 email-validator==2.0.0.post2 Flask==2.3.2 flask-mongoengine==1.0.0 Flask-WTF==1.1.1 idna==3.4 itsdangerous==2.1.2 Jinja2==3.1.2 MarkupSafe==2.1.3 mongoengine==0.27.0 pymongo==4.4.0 python-dotenv==1.0.0 Werkzeug==2.3.6 WTForms==3.0.1 Is there anything I can do here to avoid this conflict? Thanks!
flask_mongoengine seems to be not currently maintained and does not work with current Flask versions. If you absolutely must use it, you need to downgrade your Flask version, which may (and likely will) get you into other trouble. There is an issue on github regarding your error message: https://github.com/MongoEngine/flask-mongoengine/issues/522 The deprecation warning came with Flask 2.2.0 in 08/2022: Flask Changes After a brief look at the repo, it seems the maintainer was already on it: https://github.com/MongoEngine/flask-mongoengine/blob/master/flask_mongoengine/json.py
8
3
76,570,857
2023-6-28
https://stackoverflow.com/questions/76570857/how-to-count-number-of-true-and-false-blocks-in-a-column-in-a-pandas-dataframe
I have the following pandas dataframe: col 2023-05-04 10:34:51.002100665 True 2023-05-04 10:34:51.007100513 True 2023-05-04 10:34:51.012100235 True 2023-05-04 10:34:51.017100083 False 2023-05-04 10:34:51.022099789 False 2023-05-04 10:35:23.610740595 False 2023-05-04 10:35:23.615740466 True 2023-05-04 10:35:23.620740227 True 2023-05-04 10:35:23.625740082 False 2023-05-04 10:35:23.630739797 True How do I count the number of True and False blocks present in it? The result should be: Number of True blocks: 3 Number of False blocks: 2
Assuming 'col' your column, you can remove the identical successive values with boolean indexing and use value_counts: out = df.loc[df['col'].ne(df['col'].shift()), 'col'].value_counts() Output: col True 3 False 2 dtype: int64 Intermediates: * only those rows will be considered col shift ≠ shift 2023-05-04 10:34:51.002100665 True NaN True * 2023-05-04 10:34:51.007100513 True True False 2023-05-04 10:34:51.012100235 True True False 2023-05-04 10:34:51.017100083 False True True * 2023-05-04 10:34:51.022099789 False False False 2023-05-04 10:35:23.610740595 False False False 2023-05-04 10:35:23.615740466 True False True * 2023-05-04 10:35:23.620740227 True True False 2023-05-04 10:35:23.625740082 False True True * 2023-05-04 10:35:23.630739797 True False True *
2
3
76,522,582
2023-6-21
https://stackoverflow.com/questions/76522582/how-to-pass-parameters-to-an-endpoint-using-add-route-in-fastapi
I'm developing a simple application with FastAPI. I need a function to be called as endpoint for a certain route. Everything works just fine with the function's default parameters, but wheels come off the bus as soon as I try to override one of them. Example. This works just fine: async def my_function(request=Request, clientname='my_client'): print(request.method) print(clientname) ## DO OTHER STUFF... return SOMETHING private_router.add_route('/api/my/test/route', my_function, ['GET']) This returns an error instead: async def my_function(request=Request, clientname='my_client'): print(request.method) print(clientname) ## DO OTHER STUFF... return SOMETHING private_router.add_route('/api/my/test/route', my_function(clientname='my_other_client'), ['GET']) The Error: INFO: 127.0.0.1:60005 - "GET /api/my/test/route HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): ... ... TypeError: 'coroutine' object is not callable The only difference is I'm trying to override the clientname value in my_function. It is apparent that this isn't the right syntax but I looked everywhere and I'm just appalled that the documentation about the add_route method is nowhere to be found. Is anyone able to point me to the right way to do this supposedly simple thing? Thanks!
Option 1 One way is to make a partial application of the function using functools.partial. As per functools.partial's documentation: functools.partial(func, /, *args, **keywords) Return a new partial object which when called will behave like func called with the positional arguments args and keyword arguments keywords. If more arguments are supplied to the call, they are appended to args. If additional keyword arguments are supplied, they extend and override keywords. Roughly equivalent to: def partial(func, /, *args, **keywords): def newfunc(*fargs, **fkeywords): newkeywords = {**keywords, **fkeywords} return func(*args, *fargs, **newkeywords) newfunc.func = func newfunc.args = args newfunc.keywords = keywords return newfunc The partial() is used for partial function application which "freezes" some portion of a function's arguments and/or keywords resulting in a new object with a simplified signature. Working Example Here is the source for the add_route() method, as well as the part in Route class where Starlette checks if the endpoint_handler that is passed to add_route() is an instance of functools.partial. Note that the endpoint has to return an instance of Response/JSONResponse/etc., as returning a str or dict object (e.g., return client_name), for instance, would throw TypeError: 'str' object is not callable or TypeError: 'dict' object is not callable, respectively. Please have a look at this answer for more details and examples on how to return JSON data using a custom Response. from fastapi import FastAPI, Request, APIRouter, Response from functools import partial async def my_endpoint(request: Request, client_name: str ='my_client'): print(request.method) return Response(client_name) app = FastAPI() router = APIRouter() router.add_route('/', partial(my_endpoint, client_name='my_other_client'), ['GET']) app.include_router(router) Option 2 As noted by @MatsLindh in the comments section, you could use a wrapper/helper function that returns an inner function, which is essentially the same as using functools.partial in Option 1, as that is exactly how that function works under the hood (as shown in the quote block earlier). Hence, throught the wrapper function you could pass the parameters of your choice to the nested function. Working Example from fastapi import FastAPI, Request, APIRouter, Response def my_endpoint(client_name: str ='my_client'): async def newfunc(request: Request): print(request.method) return Response(client_name) return newfunc app = FastAPI() router = APIRouter() router.add_route('/', my_endpoint(client_name='my_other_client'), ['GET']) app.include_router(router) I would also suggest having a look at this answer and this answer, which demonstrate how to use add_api_route() instead of add_route(), which might be a better alternative, if you faced issues when using FastAPI dependencies.
3
3
76,557,773
2023-6-26
https://stackoverflow.com/questions/76557773/interpolate-based-on-datetimes
In pandas, I can interpolate based on a datetimes like this: df1 = pd.DataFrame( { "ts": [ datetime(2020, 1, 1), datetime(2020, 1, 3, 0, 0, 12), datetime(2020, 1, 3, 0, 1, 35), datetime(2020, 1, 4), ], "value": [1, np.nan, np.nan, 3], } ) df1.set_index('ts').interpolate(method='index') Outputs: value ts 2020-01-01 00:00:00 1.000000 2020-01-03 00:00:12 2.333426 2020-01-03 00:01:35 2.334066 2020-01-04 00:00:00 3.000000 Is there a similar method in polars? Say, starting with df1 = pl.DataFrame( { "ts": [ datetime(2020, 1, 1), datetime(2020, 1, 3, 0, 0, 12), datetime(2020, 1, 3, 0, 1, 35), datetime(2020, 1, 4), ], "value": [1, None, None, 3], } ) shape: (4, 2) ┌─────────────────────┬───────┐ │ ts ┆ value │ │ --- ┆ --- │ │ datetime[μs] ┆ i64 │ ╞═════════════════════╪═══════╡ │ 2020-01-01 00:00:00 ┆ 1 │ │ 2020-01-03 00:00:12 ┆ null │ │ 2020-01-03 00:01:35 ┆ null │ │ 2020-01-04 00:00:00 ┆ 3 │ └─────────────────────┴───────┘ EDIT: I've updated the example to make it a bit more "irregular", so that upsample can't be used as a solution and to make it clear that we need something more generic
Update: Expr.interpolate_by was added in Polars 0.20.28 df1.with_columns(pl.col("value").interpolate_by("ts")) shape: (4, 2) ┌─────────────────────┬──────────┐ │ ts ┆ value │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞═════════════════════╪══════════╡ │ 2020-01-01 00:00:00 ┆ 1.0 │ │ 2020-01-03 00:00:12 ┆ 2.333426 │ │ 2020-01-03 00:01:35 ┆ 2.334066 │ │ 2020-01-04 00:00:00 ┆ 3.0 │ └─────────────────────┴──────────┘
2
2
76,548,222
2023-6-24
https://stackoverflow.com/questions/76548222/how-to-get-maps-to-geopandas-after-datasets-are-removed
Like probably many others, I have found it very easy and useful to load a world map from the geopandas datasets, for example: import geopandas as gpd world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) However, this gives a FutureWarning that the dataset module is deprecated and will be removed in the future. There are maps available for download, for example from https://www.naturalearthdata.com/downloads/110m-cultural-vectors/ but the files are zipped and it does not seem like a convenient workflow to either get and process files from there or to include processed files with the source. Is there an alternative? What is the best way to do this, especially if I want my code to work with future versions of Geopandas?
You can read it from Nacis : import geopandas as gpd url = "https://naciscdn.org/naturalearth/110m/cultural/ne_110m_admin_0_countries.zip" gdf = gpd.read_file(url) old answer: The simplest solution would be to download/store the shapefile somewhere. That being said, if (for some reason), you need to read it from the source, you can do it this way : import fsspec url = "https://www.naturalearthdata.com/http//www.naturalearthdata.com/" \ "download/110m/cultural/ne_110m_admin_0_countries.zip" with fsspec.open(f"simplecache::{url}") as file: gdf = gpd.read_file(file) Output : featurecla scalerank ... FCLASS_UA geometry 0 Admin-0 country 1 ... None MULTIPOLYGON (((180.00000 -16.0... 1 Admin-0 country 1 ... None POLYGON ((33.90371 -0.95000, 34... 2 Admin-0 country 1 ... None POLYGON ((-8.66559 27.65643, -8... .. ... ... ... ... ... 174 Admin-0 country 1 ... Unrecognized POLYGON ((20.59025 41.85541, 20... 175 Admin-0 country 1 ... None POLYGON ((-61.68000 10.76000, -... 176 Admin-0 country 1 ... None POLYGON ((30.83385 3.50917, 29.... [177 rows x 169 columns]
7
7
76,564,697
2023-6-27
https://stackoverflow.com/questions/76564697/polars-shuffle-and-split-dataframe-with-grouping
I am using polars for all preprocessing and feature engineering. I want to shuffle the data before performing a train/valid/test split. A training 'example' consists of multiple rows. The number of rows per example varies. Here is a simple contrived example (Note I am actually using a LazyFrame in my code): pl.DataFrame({ "example_id": [1, 1, 2, 2, 2, 3, 3, 3, 4, 4], "other_col": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] }) ┌────────────┬───────────┐ │ example_id ┆ other_col │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════════╪═══════════╡ │ 1 ┆ 1 │ │ 1 ┆ 2 │ │ 2 ┆ 3 │ │ 2 ┆ 4 │ │ 2 ┆ 5 │ │ 3 ┆ 6 │ │ 3 ┆ 7 │ │ 3 ┆ 8 │ │ 4 ┆ 9 │ │ 4 ┆ 10 │ └────────────┴───────────┘ I want to shuffle 'over' the example_id column, while keeping the examples grouped together. Producing a result something like this: ┌────────────┬───────────┐ │ example_id ┆ other_col │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════════╪═══════════╡ │ 2 ┆ 3 │ │ 2 ┆ 4 │ │ 2 ┆ 5 │ │ 1 ┆ 1 │ │ 1 ┆ 2 │ │ 4 ┆ 9 │ │ 4 ┆ 10 │ │ 3 ┆ 6 │ │ 3 ┆ 7 │ │ 3 ┆ 8 │ └────────────┴───────────┘ I then want to split the data fractionally, for example 0.6, 0.2, 0.2 for training, validation and testing respectively, but do this based on 'whole examples' rather than just row wise. Is there a clean way to do this in polars without having to convert the example_id to an array, shuffling it, splitting into sublists, then reselecting from the original dataframe?
There must be a far cleaner way of achieving this; hopefully someone can improve on it. Also, it requires collecting the dataframe, which is not ideal. Either way, it seems to work for now. Thanks @jqurious for the pointer. Grab the unique example_ids, shuffle them and add a row index. example_ids = ( example_df .select("example_id") .unique() .sample(fraction=1, shuffle=True) .with_row_index() ) Split the unique ids into subsets using the row index. # assume we'll test on remaining data train_frac = 0.6 valid_frac = 0.2 train_ids = example_ids.filter( pl.col("index") < pl.col("index").max() * train_frac ) valid_ids = example_ids.filter( pl.col("index").is_between( pl.col("index").max() * train_frac, pl.col("index").max() * (train_frac + valid_frac), ) ) test_ids = example_ids.filter( pl.col("index") > pl.col("index").max() * (train_frac + valid_frac) ) Join each subset back to the example_df and drop the index column. train_df = example_df.join(train_ids, on="example_id").drop("index") valid_df = example_df.join(valid_ids, on="example_id").drop("index") test_df = example_df.join(test_ids, on="example_id").drop("index") This will produce 3 dataframes, something like this ┌────────────┬───────────┐ │ example_id ┆ other_col │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════════╪═══════════╡ │ 1 ┆ 1 │ │ 1 ┆ 2 │ │ 3 ┆ 6 │ │ 3 ┆ 7 │ │ 3 ┆ 8 │ └────────────┴───────────┘ ┌────────────┬───────────┐ │ example_id ┆ other_col │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════════╪═══════════╡ │ 2 ┆ 3 │ │ 2 ┆ 4 │ │ 2 ┆ 5 │ └────────────┴───────────┘ ┌────────────┬───────────┐ │ example_id ┆ other_col │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞════════════╪═══════════╡ │ 4 ┆ 9 │ │ 4 ┆ 10 │ └────────────┴───────────┘
3
1
76,523,245
2023-6-21
https://stackoverflow.com/questions/76523245/how-to-get-current-index-of-element-in-polars-list
When evaluating list elements I would like to know and use the current index. Is there already a way of doing it? Something like pl.element().idx() ? import polars as pl data = {"a": [[1,2,3],[4,5,6]]} schema = {"a": pl.List(pl.Int8)} df = pl.DataFrame(data, schema=schema) df = df.with_columns( pl.col("a").list.eval(pl.element() * pl.element().idx()) ) # AttributeError: 'Expr' object has no attribute 'idx' Expected result: +-------------+ ¦ a ¦ ¦ --- ¦ ¦ list[u8] ¦ ¦-------------¦ ¦ [0, 2, 6] ¦ ¦ [0, 5, 12] ¦ +-------------+
The best way (that I know of) is to make a row index, explode, use int_range with a window function to create the idx (I'm calling it j), and then put it back together with group_by/agg ( df .with_row_index('i') .explode('a') .with_columns(j=pl.int_range(pl.len()).over('i')) .with_columns(new=pl.col('a')*pl.col('j')) .group_by('i', maintain_order=True) .agg(pl.col('new')) .drop('i') )
3
2
76,569,092
2023-6-27
https://stackoverflow.com/questions/76569092/polars-arr-geti-replacement
I have a code df.select([ pl.all().exclude("elapsed_time_linreg"), pl.col("elapsed_time_linreg").arr.get(0).suffix("_slope"), pl.col("elapsed_time_linreg").arr.get(1).suffix("_intercept"), pl.col("elapsed_time_linreg").arr.get(2).suffix("_resid_std"), ]) which unpacked the result of a function @jit def linear_regression(session_np: np.ndarray) -> np.ndarray: w = len(session_np) x = np.arange(w) sx = w ** 2 / 2 sy = np.sum(session_np) sx2 = (w * (w + 1) * (2 * w + 1)) / 6 sxy = np.sum(x * session_np) slope = (w * sxy - sx * sy) / (w * sx2 - sx**2) intercept = (sy - slope * sx) / w resids = session_np - (x * slope + intercept) return slope, intercept, resids.std() def get_linreg_aggs(session) -> np.ndarray: return linear_regression(np.array(session)) df.select( pl.col("elapsed_time") .apply(utils.get_linreg_aggs) .cast(pl.Float32) .alias("elapsed_time_linreg") ) which worked perfectly on polars 17.15, but since I upgraded it to the latest (0.18.4), it stopped working. Now I get the following error and have no idea how to fix this: AttributeError: 'ExprArrayNameSpace' object has no attribute 'get' Is there a way of fixing it without downgrading the version? Or even a more efficient way of creating a bunch of columns from this function? I thought of the caching the computational result with lru-cache, but since polars uses multiprocessing, idk if it helps.
The .arr namespace was renamed to .list in v0.18.0 The function is now .list.get()
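Applied to the select from the question, only the namespace changes (a sketch; everything else is kept as-is, and .suffix() may have moved to .name.suffix() on much newer releases):

df.select([
    pl.all().exclude("elapsed_time_linreg"),
    pl.col("elapsed_time_linreg").list.get(0).suffix("_slope"),
    pl.col("elapsed_time_linreg").list.get(1).suffix("_intercept"),
    pl.col("elapsed_time_linreg").list.get(2).suffix("_resid_std"),
])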
3
13
76,528,317
2023-6-22
https://stackoverflow.com/questions/76528317/quote-string-value-in-f-string-in-python
I'm trying to quote one of the values I send to an f-string in Python: f'This is the value I want quoted: \'{value}\'' This works, but I wonder if there's a formatting option that does this for me, similar to how %q works in Go. Basically, I'm looking for something like this: f'This is the value I want quoted: {value:q}' >>> This is the value I want quoted: 'value' I would also be okay with double-quotes. Is this possible?
Use the explicit conversion flag !r: >>> value = 'foo' >>> f'This is the value I want quoted: {value!r}' "This is the value I want quoted: 'foo'" The r stands for repr; the result of f'{value!r}' should be equivalent to using f'{repr(value)}'. Conversion flags are a feature carried over from str.format, which predates f-strings. For some reason undocumented in PEP 3101, but mentioned in the docs, there's also an !a flag which converts with ascii: >>> f'quote {"🔥"!a}' "quote '\\U0001f525'" And there's an !s for str, which seems useless... unless you know that objects can override their formatter to do something different than object.__format__ does. It provides a way to opt-out of those shenanigans and use __str__ anyway. >>> class What: ... def __format__(self, spec): ... if spec == "fancy": ... return "𝓅𝑜𝓉𝒶𝓉𝑜" ... return "potato" ... def __str__(self): ... return "spam" ... def __repr__(self): ... return "<wacky object at 0xcafef00d>" ... >>> obj = What() >>> f'{obj}' 'potato' >>> f'{obj:fancy}' '𝓅𝑜𝓉𝒶𝓉𝑜' >>> f'{obj!s}' 'spam' >>> f'{obj!r}' '<wacky object at 0xcafef00d>' One example of where that !s might be useful in practice is with alignment/padding of types that don't otherwise support it: >>> f"{[1,2]:.>10}" TypeError: unsupported format string passed to list.__format__ >>> f"{[1,2]!s:.>10}" '....[1, 2]' Or types which "support" it, but not how you may have intended: >>> from datetime import date >>> f"{date.today():_^14}" '_^14' >>> f"{date.today()!s:_^14}" '__2024-03-25__'
4
9
76,551,067
2023-6-25
https://stackoverflow.com/questions/76551067/how-to-create-a-langchain-doc-from-an-str
I've searched all over langchain documentation on their official website but I didn't find how to create a langchain doc from a str variable in python so I searched in their GitHub code and I found this : doc=Document( page_content="text", metadata={"source": "local"} ) PS: I added the metadata attribute then I tried using that doc with my chain: Memory and Chain: memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input") chain = load_qa_chain( llm, chain_type="stuff", memory=memory, prompt=prompt ) the call method: chain({"input_documents": doc, "human_input": query}) prompt template: template = """You are a senior financial analyst analyzing the below document and having a conversation with a human. {context} {chat_history} Human: {human_input} senior financial analyst:""" prompt = PromptTemplate( input_variables=["chat_history", "human_input", "context"], template=template ) but I am getting the following error: AttributeError: 'tuple' object has no attribute 'page_content' when I tried to check the type and the page content of the Document object before using it with the chain I got this print(type(doc)) <class 'langchain.schema.Document'> print(doc.page_content) "text"
I had a similar issue and I noticed the API calls for a list, so try doc = Document(page_content=input, metadata={ "source": "userinput" } ) #db.add_documents(doc) db.add_documents([doc])
27
3
76,531,474
2023-6-22
https://stackoverflow.com/questions/76531474/django-ninja-testing
I am trying to create a test for an API I wrote using Django-Ninja. Here is my Model: class Country(models.Model): created_at = models.DateTimeField(auto_created=True, auto_now_add=True) name = models.CharField(max_length=128, null=False, blank=False) code = models.CharField(max_length=128, null=False, blank=False, unique=True) timezone = models.CharField(max_length=128, null=False, blank=False) Here is my schema: class CountryAddSchema(Schema): name: str code: str timezone: str Here is the post endpoint: router.post("/add", description="Add a Country", summary="Add a Country", tags=["Address"], response={201: DefaultSchema, 401: DefaultSchema, 422: DefaultSchema, 500: DefaultSchema}, url_name="address_country_add") def country_add(request, country: CountryAddSchema): try: if not request.auth.belongs_to.is_staff: return 401, {"detail": "None Staff cannot add Country"} the_country = Country.objects.create(**country.dict()) the_country.save() return 201, {"detail": "New Country created"} except Exception as e: return 500, {"detail": str(e)} Finally, here the test function: def test_add_correct(self): """ Add a country """ data = { "name": "".join(choices(ascii_letters, k=32)), "code": "".join(choices(ascii_letters, k=32)), "timezone": "".join(choices(ascii_letters, k=32)) } respond = self.client.post(reverse("api-1.0.0:address_country_add"), data, **self.AUTHORIZED_HEADER) self.assertEquals(respond.status_code, 201) self.assertDictEqual(json.loads(respond.content), {"detail": "New Country created"}) the_country = Country.objects.last() self.assertDictEqual( data, { "name": the_country.name, "code": the_country.code, "timezone": the_country.timezone } ) Please notice I have self.AUTHORIZED_HEADER set in setUp. And here the error: FAIL: test_add_correct (address.tests_country.CountryTest) Add a country ---------------------------------------------------------------------- Traceback (most recent call last): File "SOME_PATH/tests_country.py", line 80, in test_add_correct self.assertEquals(respond.status_code, 201) AssertionError: 400 != 201 I can add a country using swagger provided with django-ninja. I mean the endpoint works. But I can not test it using djano.test.Client. Any Idea? Update: Here the curl code generated by swagger: curl -X 'POST' \ 'http://127.0.0.1:8000/api/address/country/add' \ -H 'accept: application/json' \ -H 'X-API-Key: API-KEY' \ -H 'Content-Type: application/json' \ -d '{ "name": "string", "code": "string", "timezone": "string" }'
I had {"detail": "Cannot parse request body"} as an error. Turns out, Django-ninja expects your data to be passed as a json, but by default, the content_type is set to multipart/form-data; boundary=BoUnDaRyStRiNg for the test client. When you explicitely mention the content type should be json, it will work. Client().post(url, {"your": "dict"}, content_type="application/json") Note: Setting the content-type in the headers will not work as it will get overwritten.
4
4
76,560,788
2023-6-26
https://stackoverflow.com/questions/76560788/how-do-i-annotate-that-a-python-class-should-implement-a-protocol
If I define a Protocol and a class that implements that protocol, I would like to document that the class does/should implement the protocol so that other developers can easily see that the class is supposed to implement the protocol, and so that static analysis tools can verify that the class implements the protocol. What is the proper way to document that in the class? Is it okay to subclass the protocol? Consider the protocol: from typing import Protocol class MyProtocol(Protocol): def foo() -> None: ... Is it correct to define a class like this?: from my_protocol import MyProtocol class MyClass(MyProtocol): def __init__(self): pass def foo() -> None: pass I could use an abstract base class, but then I wouldn't be able to support classes that are structural sub types, but not actual derived classes. I saw it looked like a version of this question was asked here, but it didn't seem to have an actual answer: Correct way to hint that a class is implementing a Protocol?
I think the answer is in PEP 544: "To explicitly declare that a certain class implements a given protocol, it can be used as a regular base class." https://peps.python.org/pep-0544/#explicitly-declaring-implementation Personally I do subclass from the Protocol often because: my IDE is able to automatically complete the method signatures when I type them (the ones defined inside the Protocol) my IDE can rename the methods in all implementing classes if I rename them in the Protocol I can implement deprecated methods (which call another method) in the Protocol class and not in every subclass: class DataGetter(Protocol): def get_data(self): ... def get(self): warn('get method is deprecated use get_data instead', DeprecationWarning) return self.get_data() class Image(DataGetter): def get_data(self): return "some image data"
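A small sketch of the other half of the question (class names are made up): explicit subclassing documents the intent, but a class that never inherits from the Protocol is still accepted wherever the Protocol is expected, because static checkers match Protocols structurally.

from typing import Protocol

class MyProtocol(Protocol):
    def foo(self) -> None: ...

class Explicit(MyProtocol):        # declares and is checked against the protocol
    def foo(self) -> None:
        print("explicit")

class Structural:                  # no inheritance, still a valid MyProtocol
    def foo(self) -> None:
        print("structural")

def use(p: MyProtocol) -> None:
    p.foo()

use(Explicit())     # fine for mypy/pyright
use(Structural())   # also fine: structural subtyping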
3
5
76,554,411
2023-6-26
https://stackoverflow.com/questions/76554411/unable-to-pass-prompt-template-to-retrievalqa-in-langchain
I am new to Langchain and followed this Retrival QA - Langchain. I have a custom prompt but when I try to pass Prompt with chain_type_kwargs its throws error in pydantic StufDocumentsChain. and on removing chain_type_kwargs itt just works. how can pass to the prompt? error File /usr/local/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__() ValidationError: 1 validation error for StuffDocumentsChain __root__ document_variable_name context was not found in llm_chain input_variables: ['question'] (type=value_error) Code import json, os from langchain.chains import RetrievalQA from langchain.llms import OpenAI from langchain.document_loaders import JSONLoader from langchain.text_splitter import CharacterTextSplitter from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate from pathlib import Path from pprint import pprint os.environ["OPENAI_API_KEY"] = "my-key" def metadata_func(record: dict, metadata: dict) -> dict: metadata["drug_name"] = record["drug_name"] return metadata loader = JSONLoader( file_path='./drugs_data_v2.json', jq_schema='.drugs[]', content_key="data", metadata_func=metadata_func) docs = loader.load() text_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=200) texts = text_splitter.split_documents(docs) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_documents(texts, embeddings) template = """/ example custom prommpt Question: {question} Answer: """ PROMPT = PromptTemplate(template=template, input_variables=['question']) qa = RetrievalQA.from_chain_type( llm=ChatOpenAI( model_name='gpt-3.5-turbo-16k' ), chain_type="stuff", chain_type_kwargs={"prompt": PROMPT}, retriever=docsearch.as_retriever(), ) query = "What did the president say about Ketanji Brown Jackson" qa.run(query)
The {context} placeholder is missing from the template. The "stuff" chain passes the retrieved documents into the prompt through a context input variable, which is exactly what the validation error is pointing at: document_variable_name context was not found in llm_chain input_variables.
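A minimal sketch of the corrected prompt, reusing the template style from the question with the missing variable added:

from langchain import PromptTemplate

template = """Use the following context to answer the question.

{context}

Question: {question}
Answer: """

PROMPT = PromptTemplate(template=template, input_variables=["context", "question"])

# then pass it exactly as before:
# qa = RetrievalQA.from_chain_type(..., chain_type="stuff",
#                                  chain_type_kwargs={"prompt": PROMPT}, ...)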
3
2
76,557,066
2023-6-26
https://stackoverflow.com/questions/76557066/cannot-run-tensorflow-gpu-on-docker-although-it-seems-to-be-installed-outside-o
Here is my situation I downloaded the tensorflow/tensorflow:latest-gpu image. In order to run it, I run the following command to start the docker image: docker run -it --rm \ --ipc=host \ --gpus all \ --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \ --volume="$(pwd)/://mydir:rw" \ --workdir="/mydir/" \ tensorflow/tensorflow:latest-gpu bash -c 'bash' However, whenever I run the following Python commands: >> import tensorflow as tf >> print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) Here is the output I have (inside the Docker): 2023-06-26 13:10:46.768093: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:266] failed call to cuInit: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE: forward compatibility was attempted on non supported HW 2023-06-26 13:10:46.768177: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:168] retrieving CUDA diagnostic information for host: 466cc7912253 2023-06-26 13:10:46.768189: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:175] hostname: 466cc7912253 2023-06-26 13:10:46.768314: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:199] libcuda reported version is: NOT_FOUND: was unable to find libcuda.so DSO loaded into this program 2023-06-26 13:10:46.768355: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:203] kernel reported version is: 515.65.1 Num GPUs Available: 0 Here is what I have outside and INSIDE docker: ## nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... Off | 00000000:04:00.0 Off | N/A | | 0% 37C P0 58W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 NVIDIA GeForce ... Off | 00000000:05:00.0 Off | N/A | | 0% 35C P0 59W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 2 NVIDIA GeForce ... Off | 00000000:84:00.0 Off | N/A | | 0% 38C P0 60W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 3 NVIDIA GeForce ... Off | 00000000:85:00.0 Off | N/A | | 0% 32C P0 59W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 4 NVIDIA GeForce ... Off | 00000000:88:00.0 Off | N/A | | 0% 26C P0 58W / 250W | 0MiB / 11264MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 5 NVIDIA GeForce ... 
Off | 00000000:89:00.0 Off | N/A | | 0% 28C P0 57W / 250W | 0MiB / 11264MiB | 1% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ I also can see that there is a CUDA version installed: ## nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2020 NVIDIA Corporation Built on Wed_Jul_22_19:09:09_PDT_2020 Cuda compilation tools, release 11.0, V11.0.221 Build cuda_11.0_bu.TC445_37.28845127_0 So what can I do to make my docker image see where is my CUDA and make my code run in the GPU?
It still seems like the Docker container is not able to access the GPUs on the host machine even though you are passing --gpus all as an argument. A few things to try: Make sure the Nvidia container toolkit is installed on the host - this is required for Docker to access Nvidia GPUs. You can install it with: distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \ && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \ && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list sudo apt-get update sudo apt-get install -y nvidia-docker2 sudo systemctl restart docker Pass additional runtime arguments to expose the GPUs: --runtime=nvidia \ --gpus all \ -e NVIDIA_VISIBLE_DEVICES=all Make sure the TensorFlow Docker image has GPU support Double check GPU drivers are up-to-date on the host machine. Some combination of these steps should allow the container to access the GPUs properly...
5
1
76,533,384
2023-6-22
https://stackoverflow.com/questions/76533384/docker-alpine-build-fails-on-mysqlclient-installation-with-error-exception-can
I'm encountering a problem when building a Docker image using a Python-based Dockerfile. I'm trying to use the mysqlclient library (version 2.2.0) and Django (version 4.2.2). Here is my Dockerfile: FROM python:3.11-alpine WORKDIR /usr/src/app COPY requirements.txt . RUN apk add --no-cache gcc musl-dev mariadb-connector-c-dev && \ pip install --no-cache-dir -r requirements.txt COPY . . CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] The problem arises when the Docker build process reaches the point of installing the mysqlclient package. I get the following error: Exception: Can not find valid pkg-config name To address this issue, I tried adding pkgconfig to the apk add command, Unfortunately, this didn't help and the same error persists. I would appreciate any guidance on how to resolve this issue. Thank you in advance.
I've managed to solve the issue and here's how I did it: Here is the new Dockerfile: FROM python:3.11-alpine WORKDIR /usr/src/app COPY requirements.txt . RUN apk add --no-cache --virtual build-deps gcc musl-dev libffi-dev pkgconf mariadb-dev && \ apk add --no-cache mariadb-connector-c-dev && \ pip install --no-cache-dir -r requirements.txt && \ apk del build-deps COPY . . CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"] requirements.txt: mysqlclient==2.2.0 Django~=4.2.0 I hope this will help someone who visits this post in the future.
30
1
76,556,703
2023-6-26
https://stackoverflow.com/questions/76556703/how-to-authenticate-using-windows-authentication-in-playwright
I need to automate a test that uses Windows Authentication. I know that the prompt that opens up is not part of the HTML page, but I don't understand why my code is not working: login_page.click_iwa() sleep(5) self.page.keyboard.type('UserName') sleep(5) self.page.keyboard.press('Tab') self.page.keyboard.type('Password')
I solved the issue by using a virtual keyboard: from pyautogui import press, typewrite, hotkey def press_key(key): press(key) def type_text(text): typewrite(text) def special_keys(special, normal): hotkey(special, normal) Then, after I implemented the virtual keyboard, I did this: def login_iwa(self, user_name='User321', password='123456', need_to_fail=False): pyautogui.FAILSAFE = False self.click_iwa() sleep(7) type_text(user_name) sleep(1) press_key('Tab') sleep(1) type_text(password) sleep(1) press_key('Enter') sleep(2) if need_to_fail: press_key('Tab') sleep(1) press_key('Tab') sleep(1) press_key('Tab') sleep(1) press_key('Enter') sleep(1) I used pyautogui.FAILSAFE = False because sometimes the popup was hiding behind the screen or was opening up on the 2nd screen.
4
2
76,562,382
2023-6-27
https://stackoverflow.com/questions/76562382/inserting-data-as-vectors-from-sql-database-to-pinecone
I have a profiles table in SQL with around 50 columns, and only 244 rows. I have created a view with only 2 columns, ID and content and in content I concatenated all data from other columns in a format like this: FirstName: John. LastName: Smith. Age: 70, Likes: Gardening, Painting. Dislikes: Soccer. Then I created the following code to index all contents from the view into pinecone, and it works so far. However I noticed something strange. There are over 2000 vectors and still not finished, the first iterations were really fast, but now each iteration is taking over 18 seconds to finish and it says it will take over 40 minutes to finish upserting. (but for 244 rows only?) What am I doing wrong? or is it normal? pinecone.init( api_key=PINECONE_API_KEY, # find at app.pinecone.io environment=PINECONE_ENV # next to api key in console ) import streamlit as st st.title('Work in progress') embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1) cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+DATABASE_SERVER+'.database.windows.net;DATABASE='+DATABASE_DB+';UID='+DATABASE_USERNAME+';PWD='+ DATABASE_PASSWORD) query = "SELECT * from views.vwprofiles2;" df = pd.read_sql(query, cnxn) index = pinecone.Index("default") batch_limit = 100 texts = [] metadatas = [] text_splitter = RecursiveCharacterTextSplitter( chunk_size=400, chunk_overlap=20, length_function=tiktoken_len, separators=["\n\n", "\n", " ", ""] ) for _, record in stqdm(df.iterrows(), total=len(df)): # First get metadata fields for this record metadata = { 'IdentityId': str(record['IdentityId']) } # Now we create chunks from the record text record_texts = text_splitter.split_text(record['content']) # Create individual metadata dicts for each chunk record_metadatas = [{ "chunk": j, "text": text, **metadata } for j, text in enumerate(record_texts)] # Append these to the current batches texts.extend(record_texts) metadatas.extend(record_metadatas) # If we have reached the batch_limit, we can add texts if len(texts) >= batch_limit: ids = [str(uuid4()) for _ in range(len(texts))] embeds = embed.embed_documents(texts) index.upsert(vectors=zip(ids, embeds, metadatas)) texts = [] metadatas = [] if len(texts) > 0: ids = [str(uuid4()) for _ in range(len(texts))] embeds = embed.embed_documents(texts) index.upsert(vectors=zip(ids, embeds, metadatas))
I have done some good research on the topic and have some recommendations Consider the following when optimizing code: The specific hardware and software environment in which the code will be run. The specific tasks that the code will be used for. The level of performance that is required. With these factors in mind, it is possible to make significant improvements to the time and complexity of code. also: Use a variety of data structures and algorithms to find the best fit for your task. Optimize your code for the specific hardware and software environment in which it will be run. Use a profiler to identify and fix performance bottlenecks. Test your code thoroughly to ensure that it is correct and efficient. do this and you should be able to improve the time and complexity of your code. Example: from faker import Faker import pandas as pd import time # Initialize Faker for random data generation fake = Faker() # Create a DataFrame with 244 rows of random data data = { 'IdentityId': [fake.uuid4() for _ in range(244)], 'content': [fake.text(max_nb_chars=1000) for _ in range(244)] } df = pd.DataFrame(data) # Initialize lists for texts and metadata texts = [] metadatas = [] # Set the batch limit batch_limit = 500 # Initialize the text splitter text_splitter = RecursiveCharacterTextSplitter( chunk_size=800, chunk_overlap=20, length_function=tiktoken_len, separators=["\n\n", "\n", " ", ""] ) # Iterate through DataFrame rows # Time Complexity: O(n), where n is the number of rows in the DataFrame for _, record in df.iterrows(): start_time = time.time() # Get metadata for this record # Time Complexity: O(1) metadata = { 'IdentityId': str(record['IdentityId']) } print(f'Time taken for metadata extraction: {time.time() - start_time} seconds') start_time = time.time() # Split record text into chunks # Time Complexity: O(m), where m is the size of the text record_texts = text_splitter.split_text(record['content']) print(f'Time taken for text splitting: {time.time() - start_time} seconds') start_time = time.time() # Create metadata for each chunk # Time Complexity: O(k), where k is the number of chunks in the text record_metadatas = [{ "chunk": j, "text": text, **metadata } for j, text in enumerate(record_texts)] print(f'Time taken for metadata dictionary creation: {time.time() - start_time} seconds') start_time = time.time() # Append chunks and metadata to current batches # Time Complexity: O(1) texts.extend(record_texts) metadatas.extend(record_metadatas) print(f'Time taken for data appending: {time.time() - start_time} seconds') # If batch_limit is reached, upsert vectors # Time Complexity: Depends on the upsert implementation if len(texts) >= batch_limit: start_time = time.time() ids = [str(uuid4()) for _ in range(len(texts))] # Simulating embedding and upserting here texts = [] metadatas = [] print(f'Time taken for vector upsertion (simulated): {time.time() - start_time} seconds') # Upsert any remaining vectors after the loop # Time Complexity: Depends on the upsert implementation if len(texts) > 0: start_time = time.time() ids = [str(uuid4()) for _ in range(len(texts))] # Simulating embedding and upserting here print(f'Time taken for remaining vector upsertion (simulated): {time.time() - start_time} seconds')
3
2
76,553,171
2023-6-26
https://stackoverflow.com/questions/76553171/avoiding-extra-next-call-after-yield-from-in-python-generator
Please see the below snippet, run with Python 3.10: from collections.abc import Generator DUMP_DATA = 5, 6, 7 class DumpData(Exception): """Exception used to indicate to yield from DUMP_DATA.""" def sample_gen() -> Generator[int | None, int, None]: out_value: int | None = None while True: try: in_value = yield out_value except DumpData: yield len(DUMP_DATA) yield from DUMP_DATA out_value = None continue out_value = in_value My question pertains to the DumpData path where there is a yield from. After that yield from, there needs to be a next(g) call, to bring the generator back to the main yield statement so we can send: def main() -> None: g = sample_gen() next(g) # Initialize assert g.send(1) == 1 assert g.send(2) == 2 # Okay let's dump the data num_data = g.throw(DumpData) data = tuple(next(g) for _ in range(num_data)) assert data == DUMP_DATA # How can one avoid this `next` call, before it works again? next(g) assert g.send(3) == 3 How can this extra next call be avoided?
When you yield from a tuple directly, the built-in tuple_iterator (which sample_gen delegates to) handles an additional "final value" yield before it terminates. It does not have a send method (unlike generators in general) and returns a final value None to sample_gen. The behavior: yield from DUMP_DATA # is equivalent to: yield from tuple_iterator(DUMP_DATA) def tuple_iterator(t): for item in t: yield item return None You can implement tuple_iterator_generator, with usage: try: in_value = yield out_value except DumpData: yield len(DUMP_DATA) in_value = yield from tuple_iterator_generator(DUMP_DATA) out_value = in_value def tuple_iterator_generator(t): in_value = None for item in t: in_value = yield item return in_value Or just not use yield from if you don't want that behavior: try: in_value = yield out_value except DumpData: yield len(DUMP_DATA) for out_value in DUMP_DATA: in_value = yield out_value out_value = in_value See https://docs.python.org/3/whatsnew/3.3.html#pep-380-syntax-for-delegating-to-a-subgenerator for a use case of that behavior.
3
3
76,567,790
2023-6-27
https://stackoverflow.com/questions/76567790/numpy-matmul-and-einsum-6-to-7-times-slower-than-matlab
I am trying to port some code from MATLAB to Python and I am getting much slower performance from Python. I am not very good at Python coding, so any advise to speed these up will be much appreciated. I tried an einsum one-liner (takes 7.5 seconds on my machine): import numpy as np n = 4 N = 200 M = 100 X = 0.1*np.random.rand(M, n, N) w = 0.1*np.random.rand(M, N, 1) G = np.einsum('ijk,iljm,lmn->il', w, np.exp(np.einsum('ijk,ljn->ilkn',X,X)), w) I also tried a matmult implementation (takes 6 seconds on my machine) G = np.zeros((M, M)) for i in range(M): G[:, i] = np.squeeze(w[i,...].T @ (np.exp(X[i, :, :].T @ X) @ w)) But my original MATLAB code is way faster (takes 1 second on my machine) n = 4; N = 200; M = 100; X = 0.1*rand(n, N, M); w = 0.1*rand(N, 1, M); G=zeros(M); for i=1:M G(:,i) = squeeze(pagemtimes(pagemtimes(w(:,1,i).', exp(pagemtimes(X(:,:,i),'transpose',X,'none'))) ,w)); end I was expecting both Python implementations to be comparable in speed, but they are not. Any ideas why the Python implementations are this slow, or any suggestions to speed those up?
First of all np.einsum has a parameter optimize which is set to False by default (mainly because the optimization can be more expensive than the computation in some cases and it is better in general to pre-compute the optimal path in a separate call first). You can use optimal=True to significantly speed-up np.einsum (it provides the optimal path in this case though the internal implementation is not be optimal). Note that pagemtimes in Matlab is more specific than np.einsum so there is not need for such a parameter (i.e. it is fast by default in this case). Moreover, Numpy function like np.exp create a new array by default. The thing is computing arrays in-place is generally faster (and it also consumes less memory). This can be done thanks to the out parameter. The np.exp is pretty expensive on most machines because it runs serially (like most Numpy functions) and it is often not very optimized internally either. Using a fast math library like the one of Intel helps. I suspect Matlab uses such kind of fast math library internally. Alternatively, one can use multiple threads to compute this faster. This is easy to do with the numexpr package. Here is the resulting more optimized Numpy code: import numpy as np import numexpr as ne # [...] Same initialization as in the question tmp = np.einsum('ijk,ljn->ilkn',X,X, optimize=True) ne.evaluate('exp(tmp)', out=tmp) G = np.einsum('ijk,iljm,lmn->il', w, tmp, w, optimize=True) Performance results Here are results on my machine (with a i5-9600KF CPU, 32 GiB of RAM, on Windows): Naive einsums: 6.62 s CPython loops: 3.37 s This answer: 1.27 s <---- max9111 solution: 0.47 s (using an unmodified Numba v0.57) max9111 solution: 0.54 s (using a modified Numba v0.57) The optimized code is about 5.2 times faster than the initial code and 2.7 times faster than the initial fastest one! Note about performances and possible optimizations The first einsum takes a significant fraction of the runtime in the faster implementation on my machine. This is mainly because einsum perform many small matrix multiplications internally in a way that is not very efficient. Indeed, each matrix multiplication is done in parallel by a BLAS library (like OpenBLAS library which is the default one on most machines like mine). The thing is OpenBLAS is not efficient to compute small matrices in parallel. In fact, computing each small matrix in parallel is not efficient. A more efficient solution is to compute all the matrix multiplication in parallel (each thread should perform several serial matrix multiplication). This is certainly what Matlab does and why it can be a bit faster. This can be done using a parallel Numba code (or with Cython) and by disabling the parallel execution of BLAS routines (note this can have performance side effects on a larger script if it is done globally). Another possible optimization is to do all the operation at once in Numba using multiple threads. This solution can certainly reduce even more the memory footprint and further improve performance. However, this is far from being easy to write an optimized implementation and the resulting code will be significantly harder to maintain. This is what the max9111's code does.
6
9
76,559,814
2023-6-26
https://stackoverflow.com/questions/76559814/python-init-of-derived-singleton-not-called
I was toying around with the Singleton pattern and derivation. Specifically, I had this code: class Singleton: _instance = None init_attempts = 0 def __init__(self): self.init_attempts += 1 def __new__(cls, *args, **kwargs): if not cls._instance: cls._instance = super().__new__(cls, *args, **kwargs) return cls._instance class Derived(Singleton): def __init__(self): super().__init__() self.attribute = "this is derived" def main(): instance_1 = Singleton() instance_2 = Singleton() print("instance_1 and instance_2 are ", end="") if id(instance_1) == id(instance_2): print("the ame") else: print("different") derived_instance = Derived() print("derived_instance and instance_2 are the same:", derived_instance is instance_2) try: print(derived_instance.attribute) except AttributeError: print("Initialization of Derived has been prevented") print("Number of initializations:", derived_instance.init_attempts) if __name__ == '__main__': main() To my surprise, the __init__ of Derived is never called, i.e. derived_instance.attribute does not exist and derived_instance.init_attempts remains 2. As I understand it, SomeClass() is resolved to SomeClass.__call__(), which in turn should call the __new__ and __init__ methods. Yet, in the above example, Derived.__init__ is never called. In detail, my output is: instance_1 and instance_2 are the ame derived_instance and instance_2 are the same: True Initialization of Derived has been prevented Number of initializations: 2 Can anybody explain to me, why that is? In particular, the similar example here: How to initialize Singleton-derived object once works as expected, i.e. Tracer triggers printing init twice. I am aware that deriving from Singletons cannot be done in a meaningful way, and that both, a Singleton decorator and metaclass exist and how to use them, so I'm specifically interested in how the inheritance process goes awry. I presume it has something to do with the MRO, but I cannot think of anything that would not also compromise the Tracer example from the link. I tried finding similar links, but could not find any additional information. This here asks a very similar question: does __init__ get called multiple times with this implementation of Singleton? (Python) but doesn't answer it, or rather only confirms that a derived class' ´__init__´ should get called as well. Thanks in advance! Hoping to pay back the favour soon!
The Python documentation has the answer: If __new__() is invoked during object construction and it returns an instance of cls, then the new instance’s __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to the object constructor. If __new__() does not return an instance of cls, then the new instance’s __init__() method will not be invoked. In the case at hand, when Derived() (which has no __new__ of its own) is called to create a class instance, Singleton.__new__ is invoked, and because that does not return an instance of Derived (but of Singleton), the __init__() method will not be invoked.
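A minimal sketch of one way to get the behaviour the question seems to expect, keeping one instance per concrete class so that __new__ returns an instance of cls and Derived.__init__ does run (this is just one possible fix, shown for illustration):

class Singleton:
    _instances = {}          # one instance per concrete class
    init_attempts = 0

    def __new__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]

    def __init__(self):
        self.init_attempts += 1


class Derived(Singleton):
    def __init__(self):
        super().__init__()
        self.attribute = "this is derived"


d = Derived()
print(d.attribute)        # "this is derived" -- __init__ was invoked
print(Derived() is d)     # True: still one instance per class
print(Singleton() is d)   # False: Singleton keeps its own instance

Note that __init__ still runs on every construction call, which is the usual caveat with __new__-based singletons.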
4
3
76,536,988
2023-6-23
https://stackoverflow.com/questions/76536988/why-flask-python-streaming-data-not-work
Server side: from flask import Flask, request, Response, stream_with_context import time app = Flask(__name__) # Define a route /stream that handles POST requests @app.route('/stream', methods=['POST']) # GET also been tried, no difference def stream(): @stream_with_context def generate(): print('1') yield "Hello\n" time.sleep(1) # Simulate some delay print('2') yield "World\n" time.sleep(1) print('3') yield "This is\n" time.sleep(1) print('4') yield "Streaming data\n" time.sleep(1) print('5') return Response(generate(), content_type='text/event-stream') if __name__ == '__main__': app.run(debug=True) Client side: import requests import sseclient reqUrl = 'https://api.my-domain-name.com/stream' headers={'Accept': 'text/event-stream'} # response = requests.get(url=reqUrl, headers=headers, stream=True) response = requests.post(url=reqUrl, headers=headers, stream=True) if response.status_code == 200: for chunk in response.iter_content(chunk_size=3): if chunk: print(chunk) else: print('fail:', response.status_code) # sse tried, no difference # client = sseclient.SSEClient(response) # for event in client.events(): # print(event.data) When client connected, there was 1 2 3 4 5 printed one by one at the server side. But client printed nothing, until server finished, client printed all data. Help! Have tried the code above, and GET / POST tried, with or without sse. When I tried GET method, I also tried curl -v command, the same result, data comes together after about 5 seconds, not one by one. I expect stream data, that is, each yield data can be handled in time separately, not together.
It turned out to be a server configuration problem; this post from 10 years ago did help. The nginx settings are important. Now the server side is OK, tested with Postman on my local machine; the strange thing is that the client-side code still does not stream, it prints all the data at the same time. I tried copying the Postman test headers but it did not help. headers={ 'Accept': 'text/event-stream', # "Content-Type": "application/json",  # this is what Postman uses; tested, no difference "Content-Type": "text/event-stream", "Transfer-Encoding": "chunked", # tried, no help # below are copied from Postman "Cache-Control":"no-cache", "Connection":"keep-alive", 'Accept':'*/*', 'User-Agent':'PostmanRuntime/7.32.3' } BTW: the client code can call some third-party streaming data APIs correctly, so I am confused about where the problem is. Update: Finally solved the problem. sseclient and sseclient-py are different packages; I uninstalled them and installed only one, and then it was OK. For me, I use sseclient and modified some code to make both GET and POST work well. sseclient should also work but I have not tried it yet (simply trying it still has problems and needs investigation).
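For the nginx part, a common server-side sketch (an assumption about a typical nginx reverse-proxy setup, not something stated in the original post) is to disable response buffering per response with the X-Accel-Buffering header, so each yield reaches the client immediately:

from flask import Flask, Response, stream_with_context
import time

app = Flask(__name__)

@app.route('/stream', methods=['POST'])
def stream():
    @stream_with_context
    def generate():
        for chunk in ("Hello\n", "World\n", "This is\n", "Streaming data\n"):
            yield chunk
            time.sleep(1)

    return Response(
        generate(),
        content_type='text/event-stream',
        headers={
            'X-Accel-Buffering': 'no',   # ask nginx not to buffer this response
            'Cache-Control': 'no-cache',
        },
    )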
3
1
76,569,147
2023-6-27
https://stackoverflow.com/questions/76569147/how-can-i-make-vs-code-format-my-python-code
I have the following Python code: check_files(original_file = original_file_name, used_file = used_file_name, unused_file = unused_file_name) Instead, I want it to look like: check_files(original_file = original_file_name, used_file = used_file_name, unused_file = unused_file_name) Also I want to correct the formatting not only for function calls but also for dictionary key/value pairs, etc. For example, in RStudio, if I select the code and press CTRL + I, RStudio will correct the formatting as I have described above. Is there any similar way to correct formatting in VSCode?
Based on the comments by @starball, @jarmod and additional googling I found that you need to follow these steps: Step 1. Install the Python extension from the marketplace: https://marketplace.visualstudio.com/items?itemName=ms-python.python Step 2. Install one of the formatter packages for Python. The Python extension supports source code formatting using either autopep8 (the default), black, or yapf. More about it here: https://code.visualstudio.com/docs/python/editing#_formatting Step 3. Select which code formatter you want to use in python.formatting.provider, which is in settings>Extensions>Python (this may be set automatically after step 1 and step 2). Also, in settings>Extensions>Python there are more options to select. How to use formatting: Code formatting is available in Visual Studio Code (VSCode) through the following shortcuts or key combinations: On Windows Shift + Alt + F On macOS Shift + Option + F On Linux Ctrl + Shift + I Format Selection (Ctrl+K Ctrl+F) - Format the selected text. Or you can use the right-click menu: from: https://mkyong.com/vscode/how-to-format-source-code-in-visual-studio-code-vscode/ and from: https://code.visualstudio.com/docs/editor/codebasics#_formatting
8
5
76,568,476
2023-6-27
https://stackoverflow.com/questions/76568476/how-to-load-a-pandas-dataframe-from-orm-sqlalchemy-from-an-existing-database
I want to load an entire database table into a Pandas DataFrame using SqlAlchemy ORM. I have successfully queried the number of rows in the table like this: from local_modules import RemoteConnector from sqlalchemy import Integer, Column from sqlalchemy.orm import sessionmaker from sqlalchemy.ext.automap import automap_base import pandas as pd Base = automap_base() class Calculations(Base): __tablename__ = "calculations" id = Column("ID", Integer, primary_key=True) Base.prepare() connection = RemoteConnector('server', 'calculations_database') connection.connect() Session = sessionmaker(bind=connection.engine) session = Session() result = session.query(Calculations).count() print('Record count:', result) Output: Record count: 13915 Process finished with exit code 0 If possible and, it seems it can be done, I want to define the table using automap_base from sqlalchemy.ext.automap and not have to manually state each column. I did so with 'id' because I had an error that asked me to set a primary key (is there a better way to do this?). In order to get any results I've been able to do the following: results = session.query(Calculations).all() Output: [<__main__.Calculations object at 0x000001AF2324F510>, <__main__.Calculations object at 0x000001AF2324F6D0>, <__main__.Calculations object at 0x000001AF2324F810>, <__main__.Calculations object at 0x000001AF2324F910>, <__main__.Calculations object at 0x000001AF2324FA50>, <__main__.Calculations object at 0x000001AF2324FB90>, <__main__.Calculations object at 0x000001AF2324FCD0>, <__main__.Calculations object at 0x000001AF2324FE10>, <__main__.Calculations object at 0x000001AF2324FF50>, <__main__.Calculations object at 0x000001AF22CD40D0>, <__main__.Calculations object at 0x000001AF22CD4210>, <__main__.Calculations object at 0x000001AF22CD4350>, <__main__.Calculations object at 0x000001AF22CD4490>, <__main__.Calculations object at 0x000001AF22CD45D0>, <__main__.Calculations object at 0x000001AF22CD4710>, <__main__.Calculations object at 0x000001AF22CD4850>, <__main__.Calculations object at 0x000001AF22CD4990>, <__main__.Calculations object at 0x000001AF22CD4AD0>, <__main__.Calculations object at 0x000001AF22CD4C10>, <__main__.Calculations object at 0x000001AF22CD4D50>, <__main__.Calculations object at 0x000001AF22CD4E90>, <__main__.Calculations object at 0x000001AF22CD4FD0>, <__main__.Calculations object at 0x000001AF22CD5110>, <__main__.Calculations object at 0x000001AF22CD5250>, <__main__.Calculations object at 0x000001AF22CD53D0>, <__main__.Calculations object at 0x000001AF22CD5510>, <__main__.Calculations object at 0x000001AF22CD5650>, <__main__.Calculations object at 0x000001AF22CD5790>, <__main__.Calculations object at 0x000001AF22CD58D0>, <__main__.Calculations object at 0x000001AF22CD5A10>, <__main__.Calculations object at 0x000001AF22CD5B50>, <__main__.Calculations object at 0x000001AF22CD5C90>, <__main__.Calculations object at 0x000001AF22CD5DD0>, <__main__.Calculations object at 0x000001AF22CD5F10>, <__main__.Calculations object at 0x000001AF22CD6050>, <__main__.Calculations object at 0x000001AF22CD6190>, <__main__.Calculations object at 0x000001AF22CD62D0>, <__main__.Calculations object at 0x000001AF22CD6410>, <__main__.Calculations object at 0x000001AF22CD6550>, <__main__.Calculations object at 0x000001AF22CD6690>, <__main__.Calculations object at 0x000001AF22CD67D0>, <__main__.Calculations object at 0x000001AF22CD6910>, <__main__.Calculations object at 0x000001AF22CD6A50>, <__main__.Calculations object at 0x000001AF22CD6B90>, 
<__main__.Calculations object at 0x000001AF22CD6CD0>, <__main__.Calculations object at 0x000001AF22CD6E10>, <__main__.Calculations object at 0x000001AF22CD6F50>, <__main__.Calculations object at 0x000001AF22CD7090>] This shows all the columns in the table as an object. My best attempt to extract the values has been: for result in results: print(result.__dict__) Output: {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x00000232E0A91730>, 'id': 1.0} {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x00000232E0A90E90>, 'id': 2.0} ... and so on Not only I do not get the values but it does not print the columns, only the ID I defined in the class. I thought that when I did the automap_base it would transfer automatically. When I do define them they do appear, like this: class Calculations(Base): __tablename__ = "Calculations" id = Column("Trade ID", Integer, primary_key=True) Amount = Column("Amount", Integer) Yield = Column("Yield", Integer) Output: {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x000001BFD2092090>, 'Amount': 34303.0, 'Yield': 0.01141, 'id': 1.0} {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x000001BFD2091010>, 'Amount': 10000.0, 'Yield': 0.01214, 'id': 2.0} {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x000001BFD2090FB0>, 'Amount': 43515.0, 'Yield': 0.01206, 'id': 3.0} ... and so on What I would like to ultimately do is something like this as suggested in SQLAlchemy ORM conversion to pandas DataFrame: df = pd.read_sql_query(sql=session.query(Calculation).all(), con=connection.engine) But I get the following error: raise exc.ObjectNotExecutableError(statement) from err sqlalchemy.exc.ObjectNotExecutableError: Not an executable object: [<__main__.CALC_TFSB_INVESTMENTS object at 0x000001FF42966E50>, ... an so on I have also tried: df = pd.read_sql_query(sql=select(Calculations), con=connection.engine) print(df.head()) How can I load the DataFrame? How can I automate the schema detection, I suppose using automap_base? How can I improve my code, are there other things I can add, perhaps dunder fields to make things better?
The answer is df = pd.read_sql_query(sql=select(Calculations), con=connection.engine) print(df.head()) This does the trick. Corralien's answer is much more detailed.
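A slightly fuller sketch of the same approach with automatic schema detection via automap, so no columns need to be declared by hand (this assumes SQLAlchemy 1.4+/2.0 and that the reflected table exposes a primary key automap can detect; connection.engine is the engine from the question and the attribute name is taken from the question's __tablename__):

import pandas as pd
from sqlalchemy import select
from sqlalchemy.ext.automap import automap_base

Base = automap_base()
# Reflect the existing schema instead of declaring each column manually
Base.prepare(autoload_with=connection.engine)

# Mapped classes are exposed under Base.classes keyed by table name
Calculations = Base.classes.calculations

df = pd.read_sql_query(sql=select(Calculations), con=connection.engine)
print(df.head())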
4
2
76,568,086
2023-6-27
https://stackoverflow.com/questions/76568086/non-commutative-expansion-of-brackets-python
I wanted to ask whether there is a method to expand brackets in Python non-commutatively. For example, INPUT (x+y)**2, OUTPUT x**2 + x*y + y*x + y**2, instead of the usual output x**2 + 2*x*y + y**2. SymPy gives this commutative output, but I have seen that a non-commutative output is possible in Mathematica (NonCommutativeMultiply). Could anyone suggest some Python code which will expand brackets non-commutatively? It would be a big help.
You have to create non-commutative symbols: from sympy import * x, y = symbols("x, y", commutative=False) expr = (x+y)**2 expr = expr.expand() print(expr) # out: x*y + x**2 + y*x + y**2
2
4
76,567,692
2023-6-27
https://stackoverflow.com/questions/76567692/hydra-how-to-express-none-in-config-files
I have a very simple Python script: import hydra from omegaconf import DictConfig, OmegaConf @hydra.main(version_base="1.3", config_path=".", config_name="config") def main(cfg: DictConfig) -> None: if cfg.benchmarking.seed_number is None: raise ValueError() if __name__ == "__main__": main() And here the config file: benchmarking: seed_number: None Unfortunately, the Python script does not raise an error. Instead, when I print the type of cfg.benchmarking.seed_number, it is str. How can I pass None instead?
Try null: benchmarking: seed_number: null
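A small sketch that shows the difference directly with OmegaConf (which Hydra uses under the hood): YAML null arrives in Python as None, while the bare word None is parsed as the string "None" (the tilde ~ also works as YAML null):

from omegaconf import OmegaConf

cfg = OmegaConf.create("benchmarking:\n  seed_number: null")
print(cfg.benchmarking.seed_number is None)      # True

cfg_str = OmegaConf.create("benchmarking:\n  seed_number: None")
print(type(cfg_str.benchmarking.seed_number))    # <class 'str'>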
10
17
76,533,397
2023-6-22
https://stackoverflow.com/questions/76533397/python-pass-contexvars-from-parent-thread-to-child-thread-spawn-using-threading
I am setting some context variables using contextvars module that can be accessed across the modules running on the same thread. Initially I was creating contextvars.ContextVars() object in each python file hoping that there is only single context shared amongts all the python files of the module running on same thread. But for each file it did create new context variables. I took inspiration from flask library how it sets context of the web request in request object so that only thread on which web request came will be able to access it. Resources: (1) Request Contex working in flask (2) Flask Contexts advance Basically, the Local class below is copy pasted from werkzeug library (werkzeug.local module : https://werkzeug.palletsprojects.com/en/2.3.x/local/#werkzeug.local.Local) customContextObject.py from contextvars import ContextVar import typing as t import warnings class Local: __slots__ = ("_storage",) def __init__(self) -> None: object.__setattr__(self, "_storage", ContextVar("local_storage")) @property def __storage__(self) -> t.Dict[str, t.Any]: warnings.warn( "'__storage__' is deprecated and will be removed in Werkzeug 2.1.", DeprecationWarning, stacklevel=2, ) return self._storage.get({}) # type: ignore def __iter__(self) -> t.Iterator[t.Tuple[int, t.Any]]: return iter(self._storage.get({}).items()) def __getattr__(self, name: str) -> t.Any: values = self._storage.get({}) try: print(f"_storage : {self._storage} | values : {values}") return values[name] except KeyError: raise AttributeError(name) from None def __setattr__(self, name: str, value: t.Any) -> None: values = self._storage.get({}).copy() values[name] = value self._storage.set(values) def __delattr__(self, name: str) -> None: values = self._storage.get({}).copy() try: del values[name] self._storage.set(values) except KeyError: raise AttributeError(name) from None localContextObject = Local() The localContextObject know can be imported in any python file in the project and they will have access to same ContextVar object. Example: I am setting email property in localContextObject variable in contextVARSDifferentModulesCUSTOM.py file contextVARSexperiments module. We import and call check_true_false() function from utils.py from contextVARSexperiments.utils import check_true_false, check_true_false from contextVARSexperiments.customContextObject import localContextObject import threading localContextObject.email = "[email protected]" print(f"localContextObject : {localContextObject} | email : {localContextObject.email}") def callingUtils(a): print(f"{threading.current_thread()} | {threading.main_thread()}") check_true_false(a) callingUtils('MAIN CALL') Now the other file utils.py in the same module will have access to the same contextVars through localContextObject. It will print the same email as set in above file. 
utils.py import threading import contextvars from contextVARSexperiments.customContextObject import localContextObject def decorator(func): def wrapper(*args, **kwargs): print("\n~~~ENTERING check_true_false~~~~~~ ") func(*args, **kwargs) print("~~~EXITED check_true_false~~~~~~\n") return wrapper @decorator def check_true_false(a): print(f"check_true_false2 {threading.current_thread()} | {threading.main_thread()}") print(f" a : {a}") print(f"localContextObject : {localContextObject}") print(f"email : {localContextObject.email}") Below is the output when we run contextVARSDifferentModulesCUSTOM.py /Users/<user>/PycharmProjects/Temp/contextVARSexperiments/contextVARSDifferentModulesCUSTOM.py localContextObject : <_thread._local object at 0x7fcfb85fdd58> | email : [email protected] <_MainThread(MainThread, started 8671015616)> | <_MainThread(MainThread, started 8671015616)> ~~~ENTERING check_true_false~~~~~~ check_true_false <_MainThread(MainThread, started 8671015616)> | <_MainThread(MainThread, started 8671015616)> a : MAIN CALL localContextObject : <_thread._local object at 0x7fcfb85fdd58> email : [email protected] ~~~EXITED check_true_false~~~~~~ Now, I updated contextVARSDifferentModulesCUSTOM.py to call callingUtils() function on a new thread. from contextVARSexperiments.utils import check_true_false from contextVARSexperiments.customContextObject import localContextObject import threading localContextObject.email = "[email protected]" print(f"localContextObject : {localContextObject} | email : {localContextObject.email}") def callingUtils(a): print(f"{threading.current_thread()} | {threading.main_thread()}") check_true_false(a) t1 = threading.Thread(target=callingUtils, args=('THREAD"S CALL',)) t1.start() t1.join() But this threw error because child thread didn't have access to parent thread's ContextVars. 
Output: /Users/<user>/PycharmProjects/Temp/contextVARSexperiments/contextVARSDifferentModulesCUSTOM.py _storage : <ContextVar name='local_storage' at 7ff1d0435668> | values : {'email': '[email protected]'} localContextObject : <contextVARSexperiments.customContextObject.Local object at 0x7ff1c02162e8> | email : [email protected] <Thread(Thread-1, started 12937875456)> | <_MainThread(MainThread, started 8609043136)> ~~~ENTERING check_true_false~~~~~~ check_true_false <Thread(Thread-1, started 12937875456)> | <_MainThread(MainThread, started 8609043136)> a : THREAD"S CALL localContextObject : <contextVARSexperiments.customContextObject.Local object at 0x7ff1c02162e8> _storage : <ContextVar name='local_storage' at 7ff1d0435668> | values : {} Exception in thread Thread-1: Traceback (most recent call last): File "/Users/<user>/miniconda3/envs/test_env/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/Users/<user>/miniconda3/envs/test_env/lib/python3.6/threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/contextVARSDifferentModulesCUSTOM.py", line 13, in callingUtils check_true_false(a) File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/utils.py", line 26, in wrapper func(*args, **kwargs) File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/utils.py", line 43, in check_true_false print(f"email : {localContextObject.email}") File "/Users/<user>/PycharmProjects/Temp/contextVARSexperiments/customContextObject.py", line 31, in __getattr__ raise AttributeError(name) from None AttributeError: email Now, I am trying to inherit Thread class and create my own custom implementation which will pass the context from parent thread to child thread. I tried to replace threading.Thread class with a CustomThread class. Following are the implementations of CustomThread class inside customThreading.py : More about Context object returned by copy_context() method of contextvars library : https://docs.python.org/3/library/contextvars.html#contextvars.Context Using Context object returned by copy_context() to run initialiser of Threading class: import threading import contextvars class CustomThread(threading.Thread): def __init__(self, *args, **kwargs): self.current_context = contextvars.copy_context() self.current_context.run(super().__init__, *args, **kwargs) def start(self) -> None: super().start() Using Context object returned by copy_context() while calling start() of Threading class: import threading import contextvars class CustomThread(threading.Thread): def __init__(self, *args, **kwargs): self.current_context = contextvars.copy_context() super().__init__(*args, **kwargs) def start(self) -> None: self.current_context.run(super().start) Using contextmanager decorator from contextlib on start() of my class: import threading import contextvars from contextlib import contextmanager class CustomThread(threading.Thread): def __init__(self, *args, **kwargs): self.current_context = contextvars.copy_context() super().__init__(*args, **kwargs) @contextmanager def start(self) -> None: super().start() But none of this worked. Also, I am looking for custom implementation of ThreadPoolExecutor from concurrent.futures module.
Contextvars work similarly to threading.local variables, in that, in each thread, a context var is initially empty. It can take further independent values in the same thread by using the context.run method from a contextvars.Context object, and that is extensively used by the asyncio code, so that each call-stack in an asyncio task can have an independent context in a transparent way. The code you picked from werkzeug automatically creates an empty dictionary when the context var used as storage is read - so you get the errors you listed, instead of a LookupError. Anyway, I digress - the only thing incorrect in your code is that start is not the function to override in order to change the running context: it is called in the parent thread. The run method in the Thread class is the one that is executed in the child thread - if you just override that one so that it executes the code in the original run method inside your passed context, you will get things working: class CTXThread(threading.Thread): def __init__(self, *args, **kwargs): self.ctx = contextvars.copy_context() super().__init__(*args, **kwargs) def run(self): # This code runs in the target, child thread: self.ctx.run(super().run) Also, as a side note, see that the contextlib module, and the contextmanager decorator, are not related to contextvars at all. Python re-uses the term "context" for more than one thing - in this case, "contextlib" refers to context managers as used by the with statement.
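For the ThreadPoolExecutor part of the question, the same principle applies: copy the context in the parent thread at submit time and run the callable inside that copy in the worker. A minimal sketch (an illustration, not the only possible design):

import contextvars
from concurrent.futures import ThreadPoolExecutor

class ContextThreadPoolExecutor(ThreadPoolExecutor):
    def submit(self, fn, *args, **kwargs):
        # Capture the caller's context here, in the parent thread,
        # and have the worker run fn inside that copy.
        ctx = contextvars.copy_context()
        return super().submit(ctx.run, fn, *args, **kwargs)

# usage sketch
email = contextvars.ContextVar("email")
email.set("[email protected]")

with ContextThreadPoolExecutor(max_workers=2) as pool:
    print(pool.submit(email.get).result())   # -> [email protected]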
3
3
76,559,257
2023-6-26
https://stackoverflow.com/questions/76559257/theoretically-can-the-ackermann-function-be-optimized
I am wondering if there can be a version of Ackermann function with better time complexity than the standard variation. This is not a homework and I am just curious. I know the Ackermann function doesn't have any practical use besides as a performance benchmark, because of the deep recursion. I know the numbers grow very large very quickly, and I am not interested in computing it. Even though I use Python 3 and the integers won't overflow, I do have finite time, but I have implemented a version of it myself according to the definition found on Wikipedia, and computed the output for extremely small values, just to make sure the output is correct. def A(m, n): if not m: return n + 1 return A(m - 1, A(m, n - 1)) if n else A(m - 1, 1) The above code is a direct translation of the image, and is extremely slow, I don't know how it can be optimized, is it impossible to optimize it? One thing I can think of is to memoize it, but the recursion runs backwards, each time the function is recursively called the arguments were not encountered before, each successive function call the arguments decrease rather than increase, therefore each return value of the function needs to be calculated, memoization doesn't help when you call the function with different arguments the first time. Memoization can only help if you call it with the same arguments again, it won't compute the results and will retrieve cached result instead, but if you call the function with any input with (m, n) >= (4, 2) it will crash the interpreter regardless. I also implemented another version according to this answer: def ack(x, y): for i in range(x, 0, -1): y = ack(i, y - 1) if y else 1 return y + 1 But it is actually slower: In [2]: %timeit A(3, 4) 1.3 ms ± 9.75 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [3]: %timeit ack(3, 4) 2 ms ± 59.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Theoretically can Ackermann function be optimized? If not, can it be definitely proven that its time complexity cannot decrease? I have just tested A(3, 9) and A(4, 1) will crash the interpreter, and the performance of the two functions for A(3, 8): In [2]: %timeit A(3, 8) 432 ms ± 4.63 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [3]: %timeit ack(3, 8) 588 ms ± 10.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) I did some more experiments: from collections import Counter from functools import cache c = Counter() def A1(m, n): c[(m, n)] += 1 if not m: return n + 1 return A(m - 1, A(m, n - 1)) if n else A(m - 1, 1) def test(m, n): c.clear() A1(m, n) return c The arguments indeed repeat. But surprisingly caching doesn't help at all: In [9]: %timeit Ackermann = cache(A); Ackermann(3, 4) 1.3 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Caching only helps when the function is called with the same arguments again, as explained: In [14]: %timeit Ackermann(3, 2) 101 ns ± 0.47 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each) I have tested it with different arguments numerous times, and it always gives the same efficiency boost (which is none).
Solution I recently wrote a bunch of solutions based on the same paper that templatetypedef mentioned. Many use generators, one for each m-value, yielding the values for n=0, n=1, n=2, etc. This one might be my favorite: def A_Stefan_generator_stack3(m, n): def a(m): if not m: yield from count(1) x = 1 for i, ai in enumerate(a(m-1)): if i == x: x = ai yield x return next(islice(a(m), n, None)) Explanation Consider the generator a(m). It yields A(m,0), A(m,1), A(m,2), etc. The definition of A(m,n) uses A(m-1, A(m, n-1)). So a(m) at its index n yields A(m,n), computed like this: A(m,n-1) gets yielded by the a(m) generator itself at index n-1. Which is just the previous value (x) yielded by this generator. A(m-1, A(m, n-1)) = A(m-1, x) gets yielded by the a(m-1) generator at index x. So the a(m) generator iterates over the a(m-1) generator and grabs the value at index i == x. Benchmark Here are times for computing all A(m,n) for m≤3 and n≤17, also including templatetypedef's solution: 1325 ms A_Stefan_row_class 1228 ms A_Stefan_row_lists 544 ms A_Stefan_generators 1363 ms A_Stefan_paper 459 ms A_Stefan_generators_2 866 ms A_Stefan_m_recursion 704 ms A_Stefan_function_stack 468 ms A_Stefan_generator_stack 945 ms A_Stefan_generator_stack2 582 ms A_Stefan_generator_stack3 467 ms A_Stefan_generator_stack4 1652 ms A_templatetypedef Note: Even faster (much faster) solutions using math insights/formulas are possible, see my comment and pts's answer. I intentionally didn't do that, as I was interested in coding techniques, for avoiding deep recursion and avoiding re-calculation. I got the impression that that's also what the question/OP wanted, and they confirmed that now (under a deleted answer, visible if you have enough reputation). Code def A_Stefan_row_class(m, n): class A0: def __getitem__(self, n): return n + 1 class A: def __init__(self, a): self.a = a self.n = 0 self.value = a[1] def __getitem__(self, n): while self.n < n: self.value = self.a[self.value] self.n += 1 return self.value a = A0() for _ in range(m): a = A(a) return a[n] from collections import defaultdict def A_Stefan_row_lists(m, n): memo = defaultdict(list) def a(m, n): if not m: return n + 1 if m not in memo: memo[m] = [a(m-1, 1)] Am = memo[m] while len(Am) <= n: Am.append(a(m-1, Am[-1])) return Am[n] return a(m, n) from itertools import count def A_Stefan_generators(m, n): a = count(1) def up(a, x=1): for i, ai in enumerate(a): if i == x: x = ai yield x for _ in range(m): a = up(a) return next(up(a, n)) def A_Stefan_paper(m, n): next = [0] * (m + 1) goal = [1] * m + [-1] while True: value = next[0] + 1 transferring = True i = 0 while transferring: if next[i] == goal[i]: goal[i] = value else: transferring = False next[i] += 1 i += 1 if next[m] == n + 1: return value def A_Stefan_generators_2(m, n): def a0(): n = yield while True: n = yield n + 1 def up(a): next(a) a = a.send i, x = -1, 1 n = yield while True: while i < n: x = a(x) i += 1 n = yield x a = a0() for _ in range(m): a = up(a) next(a) return a.send(n) def A_Stefan_m_recursion(m, n): ix = [None] + [(-1, 1)] * m def a(m, n): if not m: return n + 1 i, x = ix[m] while i < n: x = a(m-1, x) i += 1 ix[m] = i, x return x return a(m, n) def A_Stefan_function_stack(m, n): def a(n): return n + 1 for _ in range(m): def a(n, a=a, ix=[-1, 1]): i, x = ix while i < n: x = a(x) i += 1 ix[:] = i, x return x return a(n) from itertools import count, islice def A_Stefan_generator_stack(m, n): a = count(1) for _ in range(m): a = ( x for a, x in [(a, 1)] for i, ai in enumerate(a) if i == x 
for x in [ai] ) return next(islice(a, n, None)) from itertools import count, islice def A_Stefan_generator_stack2(m, n): a = count(1) def up(a): i, x = 0, 1 while True: i, x = x+1, next(islice(a, x-i, None)) yield x for _ in range(m): a = up(a) return next(islice(a, n, None)) def A_Stefan_generator_stack3(m, n): def a(m): if not m: yield from count(1) x = 1 for i, ai in enumerate(a(m-1)): if i == x: x = ai yield x return next(islice(a(m), n, None)) def A_Stefan_generator_stack4(m, n): def a(m): if not m: return count(1) return ( x for x in [1] for i, ai in enumerate(a(m-1)) if i == x for x in [ai] ) return next(islice(a(m), n, None)) def A_templatetypedef(i, n): positions = [-1] * (i + 1) values = [0] + [1] * i while positions[i] != n: values[0] += 1 positions[0] += 1 j = 1 while j <= i and positions[j - 1] == values[j]: values[j] = values[j - 1] positions[j] += 1 j += 1 return values[i] funcs = [ A_Stefan_row_class, A_Stefan_row_lists, A_Stefan_generators, A_Stefan_paper, A_Stefan_generators_2, A_Stefan_m_recursion, A_Stefan_function_stack, A_Stefan_generator_stack, A_Stefan_generator_stack2, A_Stefan_generator_stack3, A_Stefan_generator_stack4, A_templatetypedef, ] N = 18 args = ( [(0, n) for n in range(N)] + [(1, n) for n in range(N)] + [(2, n) for n in range(N)] + [(3, n) for n in range(N)] ) from time import time def print(*args, print=print, file=open('out.txt', 'w')): print(*args) print(*args, file=file, flush=True) expect = none = object() for _ in range(3): for f in funcs: t = time() result = [f(m, n) for m, n in args] # print(f'{(time()-t) * 1e3 :5.1f} ms ', f.__name__) print(f'{(time()-t) * 1e3 :5.0f} ms ', f.__name__) if expect is none: expect = result elif result != expect: raise Exception(f'{f.__name__} failed') del result print()
43
27
76,565,954
2023-6-27
https://stackoverflow.com/questions/76565954/regex-pattern-did-not-match-even-though-they-should-pytest
I am trying to match the string in this error: for warning in args: if not isinstance(warning, str): with StaticvarExceptionHandler(): raise TypeError(f"Configure.suppress() only takes string arguments. Current type: {type(warning)}") with the one in this pytest test: with pytest.raises( TypeError, match = "Configure.suppress() only takes string arguments. Current type: .*" ): Configure.suppress('ComplicatedTypeWarning', 1) It should match, but I am getting this error: E AssertionError: Regex pattern did not match. E Regex: 'Configure.suppress() only takes string arguments. Current type: .*' E Input: "Configure.suppress() only takes string arguments. Current type: <class 'int'>" Keep in mind that for all other tests that I use .*, everything works fine. I am very new to Regex, so I apologise if this is actually a really stupid question. Also, I've seen many many questions like these, but they were all in Java and not the same case as mine, so I couldn't find a solution. Feel free to flag this as a duplicate and link a matching question, though.
Parentheses have a special meaning in Regex, for marking capture groups and a few other things, see https://regex101.com (select "Python" on the left menu to get Python-specific regex explanations, since they vary from language to language). You need to escape them match = "Configure.suppress\\(\\) only takes string arguments. Current type: .*" or (note the r before the ", for a "raw string") match = r"Configure.suppress\(\) only takes string arguments. Current type: .*" Technically you don't need to escape those periods, since a period will match any non-newline character, and it's unlikely that there will be another error with a similar name that has a different character in place of those periods, but if you want to fix that use either of these: match = "Configure\\.suppress\\(\\) only takes string arguments\\. Current type: .*" or (note the r before the ", for a "raw string") match = r"Configure\.suppress\(\) only takes string arguments\. Current type: .*"
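An alternative to escaping by hand, in case it is useful: re.escape() can build the escaped prefix for you, and pytest.raises applies re.search to the message, so the trailing .* is optional. A minimal sketch, assuming Configure is importable from your project (the module name below is made up):

import re
import pytest
from mymodule import Configure  # hypothetical import path; replace with your actual module

def test_suppress_rejects_non_string():
    # re.escape() takes care of the parentheses (and periods) for us
    prefix = re.escape("Configure.suppress() only takes string arguments. Current type: ")
    with pytest.raises(TypeError, match=prefix + ".*"):
        Configure.suppress('ComplicatedTypeWarning', 1)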
4
8
76,559,939
2023-6-26
https://stackoverflow.com/questions/76559939/getting-error-when-using-clientside-callback-via-java-script-in-dash-python
I've recently asked a question about how to use clientside_callback (see this) and am practicing it on my own dashboard application. In my dashboard, I have a map, a drop down menu, and a button. The user selects states on the map with a click, and can also select them from the drop down menu. However, there is ALL option in the drop down menu as well. As for the button, it clears the user's selection. My application works with the regular Dash callbacks, but my goal is to use clientside_callback to speed up the process. However, I receive multiple errors with my code due to the Java Script part about which I have no experience. That's why I'd appreciate if someone could assist me. import random, json import dash from dash import dcc, html, Dash, callback, Output, Input, State import dash_leaflet as dl import geopandas as gpd from dash import dash_table #https://gist.github.com/incubated-geek-cc/5da3adbb2a1602abd8cf18d91016d451?short_path=2de7e44 us_states_gdf = gpd.read_file("us_states.geojson") us_states_geojson = json.loads(us_states_gdf.to_json()) options = [{'label': 'Select all', 'value': 'ALL'}, {'label': 'AK', 'value': 'AK'}, {'label': 'AL', 'value': 'AL'}, {'label': 'AR', 'value': 'AR'}, {'label': 'AZ', 'value': 'AZ'}, {'label': 'CA', 'value': 'CA'}, {'label': 'CO', 'value': 'CO'}, {'label': 'CT', 'value': 'CT'}, {'label': 'DE', 'value': 'DE'}, {'label': 'FL', 'value': 'FL'}, {'label': 'GA', 'value': 'GA'}, {'label': 'HI', 'value': 'HI'}, {'label': 'IA', 'value': 'IA'}, {'label': 'ID', 'value': 'ID'}, {'label': 'IL', 'value': 'IL'}, {'label': 'IN', 'value': 'IN'}, {'label': 'KS', 'value': 'KS'}, {'label': 'KY', 'value': 'KY'}, {'label': 'LA', 'value': 'LA'}, {'label': 'MA', 'value': 'MA'}, {'label': 'MD', 'value': 'MD'}, {'label': 'ME', 'value': 'ME'}, {'label': 'MI', 'value': 'MI'}, {'label': 'MN', 'value': 'MN'}, {'label': 'MO', 'value': 'MO'}, {'label': 'MS', 'value': 'MS'}, {'label': 'MT', 'value': 'MT'}, {'label': 'NC', 'value': 'NC'}, {'label': 'ND', 'value': 'ND'}, {'label': 'NE', 'value': 'NE'}, {'label': 'NH', 'value': 'NH'}, {'label': 'NJ', 'value': 'NJ'}, {'label': 'NM', 'value': 'NM'}, {'label': 'NV', 'value': 'NV'}, {'label': 'NY', 'value': 'NY'}, {'label': 'OH', 'value': 'OH'}, {'label': 'OK', 'value': 'OK'}, {'label': 'OR', 'value': 'OR'}, {'label': 'PA', 'value': 'PA'}, {'label': 'RI', 'value': 'RI'}, {'label': 'SC', 'value': 'SC'}, {'label': 'SD', 'value': 'SD'}, {'label': 'TN', 'value': 'TN'}, {'label': 'TX', 'value': 'TX'}, {'label': 'UT', 'value': 'UT'}, {'label': 'VA', 'value': 'VA'}, {'label': 'VT', 'value': 'VT'}, {'label': 'WA', 'value': 'WA'}, {'label': 'WI', 'value': 'WI'}, {'label': 'WV', 'value': 'WV'}, {'label': 'WY', 'value': 'WY'}] state_abbreviations = {'Alabama': 'AL', 'Alaska': 'AK', 'Arizona': 'AZ', 'Arkansas': 'AR', 'California': 'CA', 'Colorado': 'CO', 'Connecticut': 'CT', 'Delaware': 'DE', 'Florida': 'FL', 'Georgia': 'GA', 'Hawaii': 'HI', 'Idaho': 'ID', 'Illinois': 'IL', 'Indiana': 'IN', 'Iowa': 'IA', 'Kansas': 'KS', 'Kentucky': 'KY', 'Louisiana': 'LA', 'Maine': 'ME', 'Maryland': 'MD', 'Massachusetts': 'MA', 'Michigan': 'MI', 'Minnesota': 'MN', 'Mississippi': 'MS', 'Missouri': 'MO', 'Montana': 'MT', 'Nebraska': 'NE', 'Nevada': 'NV', 'New Hampshire': 'NH', 'New Jersey': 'NJ', 'New Mexico': 'NM', 'New York': 'NY', 'North Carolina': 'NC', 'North Dakota': 'ND', 'Ohio': 'OH', 'Oklahoma': 'OK', 'Oregon': 'OR', 'Pennsylvania': 'PA', 'Rhode Island': 'RI', 'South Carolina': 'SC', 'South Dakota': 'SD', 'Tennessee': 'TN', 'Texas': 'TX', 'Utah': 'UT', 
'Vermont': 'VT', 'Virginia': 'VA', 'Washington': 'WA', 'West Virginia': 'WV', 'Wisconsin': 'WI', 'Wyoming': 'WY'} states = list(state_abbreviations.values()) app = Dash(__name__) app.layout = html.Div([ #Store lists to use in the callback dcc.Store(id='options-store', data=json.dumps(options)), dcc.Store(id='states-store', data=json.dumps(states)), dcc.Store(id='states-abbrevations', data=json.dumps(state_abbreviations)), #US Map here dl.Map([ dl.TileLayer(url="http://tile.stamen.com/toner-lite/{z}/{x}/{y}.png"), dl.GeoJSON(data=us_states_geojson, id="state-layer")], style={'width': '100%', 'height': '250px'}, id="map", center=[39.8283, -98.5795], ), #Drop down menu here html.Div(className='row', children=[ dcc.Dropdown( id='state-dropdown', options=[{'label': 'Select all', 'value': 'ALL'}] + [{'label': state, 'value': state} for state in states], value=[], multi=True, placeholder='States' )]), html.Div(className='one columns', children=[ html.Button( 'Clear', id='clear-button', n_clicks=0, className='my-button' ), ]), ]) @app.callback( Output('state-dropdown', 'value', allow_duplicate=True), [Input('clear-button', 'n_clicks')], prevent_initial_call=True ) def clear_tab(user_click): if user_click: return [] else: raise dash.exceptions.PreventUpdate app.clientside_callback( """ function(click_feature, selected_states, defaults_options, states, states_abbreviations) { let options = defaults_options let select_all_selected = selected_states.includes('ALL'); let list_states; if (select_all_selected) { options = [{'label': 'Select All', 'value': 'ALL'}]; selected_states = states; list_states = 'ALL'; } else { list_states = selected_states; if (click_feature && dash.callback_context.triggered[0]['prop_id'].split('.')[0] == 'state-layer') { let state_name = state_abbreviations[click_feature["properties"]["NAME"]]; if (!selected_states.includes(state_name)) { selected_states.push(state_name); list_states = selected_states; } } } return [options, list_states]; } """, Output('state-dropdown', 'options'), Output('state-dropdown', 'value'), Input('state-layer', 'click_feature'), Input('state-dropdown', 'value'), State('options-store', 'data'), State('states-store', 'data'), State('states-abbrevations', 'data'), prevent_initial_call=True ) if __name__ == '__main__': app.run_server(debug=True)
You don't need to serialize the store data into a JSON string (or by doing so you would have to use JSON.parse() clientside to unserialize them back, but Dash already does it internally) so the first thing is to fix that in the app layout in order to receive proper JS objects in your clientside callback : #Store lists to use in the callback dcc.Store(id='options-store', data=options), dcc.Store(id='states-store', data=states), dcc.Store(id='states-abbrevations', data=state_abbreviations), # ... The second thing is to use dash_clientside instead of dash in the clientside callback so you can get the callback context etc., and also fix a typo for states_abbreviations : function(click_feature, selected_states, defaults_options, states, states_abbreviations) { let options = defaults_options; let select_all_selected = selected_states.includes('ALL'); let list_states; if (select_all_selected) { options = [{'label': 'Select All', 'value': 'ALL'}]; selected_states = states; list_states = 'ALL'; } else { list_states = selected_states; if (click_feature && dash_clientside.callback_context.triggered[0]['prop_id'].split('.')[0] == 'state-layer') { let state_name = states_abbreviations[click_feature["properties"]["NAME"]]; if (!selected_states.includes(state_name)) { selected_states.push(state_name); list_states = selected_states; } } } return [options, list_states]; }
3
1
76,555,620
2023-6-26
https://stackoverflow.com/questions/76555620/submission-and-custom-input-on-geeksforgeeks-gives-different-judge-result-on-sam
I was practicing with a GeeksForGeeks problem Fractional Knapsack: Given weights and values of N items, we need to put these items in a knapsack of capacity W to get the maximum total value in the knapsack. Note: Unlike 0/1 knapsack, you are allowed to break the item. Example 1: Input: N = 3, W = 50 values[] = {60,100,120} weight[] = {10,20,30} Output: 240.00 Explanation Total maximum value of item we can have is 240.00 from the given capacity of sack. When I submit my code, the online judge tells me my code returns a wrong answer, however when I run the same test case on custom input, I get the expected answer by the judge. I searched as to why this happened, and the potential reasons I came across were because of Global/Static variables, however I am not using any global/static variables. My Approach for the problem was : Make a sorted hashmap based on value/weight ratio, with the (key,value) pair being (ratio,item). For each item, if the weight of the item is less than available weight, then we will directly add the value to the final answer. Else we will multiply the available-weight with the ratio, add it to the final answer and break. Finally we will return the answer. The code I am using : ### Item class for reference class Item: def __init__(self,val,w): self.value = val self.weight = w class Solution: def fractionalknapsack(self, W,arr,n): hashmap = {(item.value / item.weight) : item for i,item in enumerate(arr) } keys = list(hashmap.keys()) keys.sort() sorted_hashmap = {} keys.reverse() for ele in keys : sorted_hashmap[ele] = hashmap[ele] final_ans = 0 available_weight = W for ratio,item in sorted_hashmap.items() : if available_weight > 0 : if item.weight <= available_weight : final_ans += item.value available_weight -= item.weight else : final_ans += available_weight * ratio break else : break return final_ans The input for which the test succeeds, but submission fails: Input : 84 87 78 16 94 36 87 43 50 22 63 28 91 10 64 27 41 27 73 37 12 19 68 30 83 31 63 24 68 36 30 3 23 9 70 18 94 7 12 43 30 24 22 20 85 38 99 25 16 21 14 27 92 31 57 24 63 21 97 32 6 26 85 28 37 6 47 30 14 8 25 46 83 46 15 18 35 15 44 1 88 9 77 29 89 35 4 2 55 50 33 11 77 19 40 13 27 37 95 40 96 21 35 29 68 2 98 3 18 43 53 7 2 31 87 42 66 40 45 20 41 30 32 18 98 22 82 26 10 28 68 7 98 4 87 16 7 34 20 25 29 22 33 30 4 20 71 19 9 16 41 50 97 24 19 46 47 2 22 6 80 39 65 29 42 1 94 1 35 15 Output : Online Judges Expected Output : 1078.00 My Submission Output : 235.58 My custom input output : 1078.00
This is indeed very confusing! I looked at the Python version that GeeksForGeeks is running your code on, and at the time of writing these versions are different depending on whether you test or submit! When testing the code: Python 3.7.13 When submitting the code: Python 3.5.2 This explains why you get different results when testing versus submitting. Since Python 3.6 dictionaries are iterated in insertion order, but in older versions the iteration order is undefined -- there is no guarantee whatsoever that the order in which you inserted the items will also be the order in which the items are iterated. Taking a step back, you should not need a dictionary here. In fact, it will be problematic when two items happen to have the same ratio, because then you will lose out on one. Solve this by just sorting the input items by the ratio. Here is your code with the hasmap removed, and the sort called on the input list instead: class Solution: def fractionalknapsack(self, W, arr, n): arr.sort(key=lambda item: item.value / item.weight, reverse=True) final_ans = 0 available_weight = W for item in arr: if available_weight > 0: if item.weight <= available_weight: final_ans += item.value available_weight -= item.weight else : final_ans += available_weight * item.value / item.weight break else: break return final_ans
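For reference, a quick local sanity check against Example 1 from the problem statement, reusing the Item class from the question and the Solution class above (GeeksForGeeks normally drives this through its own harness, so this is just for testing on your machine):

items = [Item(60, 10), Item(100, 20), Item(120, 30)]
print(Solution().fractionalknapsack(50, items, 3))  # expected: 240.0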
3
2
76,551,956
2023-6-25
https://stackoverflow.com/questions/76551956/kill-a-future-if-program-stops
I have a ThreadPoolExecutor in my programs which submit()s a task. However, when I end my program, the script "freezes". It seems like the thread is not ended correctly. Is there a solution for this? example: from concurrent.futures import ThreadPoolExecutor from time import sleep def task(): for i in range(3): print(i) sleep(1) with ThreadPoolExecutor() as executor: future = executor.submit(task) future.cancel() # Waits for loop of blocking task to complete executor.shutdown(wait=False) # Still waits for loop in blocking task to complete sys.exit() does not work either, it will still wait for the future to complete
This program does not hang for me and I do not see anything in your code that is violating any published restrictions placed on the use of the concurrent.futures package. So reading the rest of this may be a waste of your time. But that's not to say that you don't have some statements that are not accomplishing anything in your code that you may not be aware of and, if that is the case, I thought I should point these out to you. And perhaps the issue you are having may be related to one of these statements combined with the version of Python you are using or the platform on which you are executing (although I am not very confident that your problem doesn't lie elsewhere). Anyway, I have modified your code below to point out a few things. First, I have modified the pool size to be 1 and I now call method submit twice. Consequently the first submitted task executes immediately but the second task will not start executing until the first submitted task completes. Second, when you use a ThreadPoolExecutor instance as a context manager like you are doing, then when the block terminates there is an implicit call to ThreadPoolExecutor.shutown(wait=True). Consequently, I have rewritten your code to make this implicit call explicit. The resulting modified code is: from concurrent.futures import ThreadPoolExecutor from time import sleep def task(task_no): for i in range(3): print(f'task no. = {task_no}, i = {i}') sleep(1) executor = ThreadPoolExecutor(1) future1 = executor.submit(task, 1) future2 = executor.submit(task, 2) print('future 1 canceled = ', future1.cancel()) print('future 2 canceled = ', future2.cancel()) executor.shutdown(wait=False) executor.shutdown(wait=True) print('I am done!') And its output in my environment is: task no. = 1, i = 0 future 1 canceled = False future 2 canceled = True task no. = 1, i = 1 task no. = 1, i = 2 I am done! Discussion The first thing to observe is that when the call future1.cancel() is executed, the first task is already running and therefore calling cancel has no effect. But since the second task has not started execution when future2.cancel() is called, we can see that the task can be canceled. The point is that in your original code, the call to future.cancel() will have no effect. The second point is that because you are using variable executor as a context manager, in addition to the call you explicitly make to executor.shutdown(wait=False), it is immediately followed by an implicit call to executor.shutdown(wait=True), so you end up waiting for all pending tasks to complete before continuing thus rendering the first call to executor.shutdown(wait=False) rather useless. Question Is it possible that for some reason in your environment these two consecutive calls to shutdown is the cause of your hang? Try the following code to see if it makes any difference: from concurrent.futures import ThreadPoolExecutor from time import sleep def task(): for i in range(3): print(i) sleep(1) executor = ThreadPoolExecutor() future = executor.submit(task) executor.shutdown(wait=False) If this runs to completion, then you can try first adding the call future.cancel() and if the program still completes, then add back the additional call to executor.shutdown(wait=True). Update Based on your comment, if you are looking to terminate a thread that never ends on its own, you cannot do this if you are using the concurrent.futures package or a threading.Thread instance. 
However, you can do this with the multiprocessing.pool.ThreadPool multithreading pool: from multiprocessing.pool import ThreadPool from time import sleep def task(): i = 0; while True: print(i) i += 1 sleep(.5) pool = ThreadPool(1) async_result = pool.apply_async(task) sleep(2) # Let the task run for a while pool.terminate() # Now terminate the pool Prints: 0 1 2 3 Or you can use the pool as a context manager, which implicitly calls terminate() on the pool when the context manager block exits: from multiprocessing.pool import ThreadPool from time import sleep def task(): i = 0; while True: print(i) i += 1 sleep(.5) with ThreadPool(1) as pool: async_result = pool.apply_async(task) sleep(2) # Let the task run for a while # Now we exit the block: # There is an explicit call to pool.terminate() when # the context manager block exits. Finally, since the pool threads are daemon threads, when the process that created the pool terminates, then the pool's threads automatically terminate and you do not even have to call terminate: from multiprocessing.pool import ThreadPool from time import sleep def task(): i = 0; while True: print(i) i += 1 sleep(.5) pool = ThreadPool(1) async_result = pool.apply_async(task) sleep(2) # Let the task run for a while # Now the main process implicitly terminates since no more statements # are being executed and the pool's threads are destroyed. But if you do not want the running task to terminate prematurely just because the program has no more statements to execute: from multiprocessing.pool import ThreadPool from time import sleep def task(): for i in range(10): print(i) sleep(.5) pool = ThreadPool(1) async_result = pool.apply_async(task) # Explicitly wait for the task to complete by calling `get` on the # AsyncResult instance or call pool.close() followed by pool.join() # to wait for all submitted tasks to complete: async_result.get() # Program terminates here: You may also wish to look at this for a comparison of the two pool packages.
5
4
76,555,779
2023-6-26
https://stackoverflow.com/questions/76555779/cant-install-or-upgrade-python-3-10-8-in-wsl
I need to specifically install Python 3.10.8 for school, but I can't seem to get a version higher than 3.10.6 sudo apt-get install python3.10 just result in telling me that python3.10 is already the newest version (3.10.6-1~22.04.2ubuntu1.1) Reading package lists... Done Building dependency tree... Done Reading state information... Done python3.10 is already the newest version (3.10.6-1~22.04.2ubuntu1.1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. I already did sudo apt update && sudo apt upgrade and this is what I got: Failed to start apt-news.service: Unit apt-news.service not found. Failed to start esm-cache.service: Unit esm-cache.service not found. Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease Hit:2 http://security.ubuntu.com/ubuntu jammy-security InRelease Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease Get:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease \[108 kB\] Fetched 108 kB in 1s (110 kB/s) Reading package lists... Done Building dependency tree... Done Reading state information... Done All packages are up to date. Reading package lists... Done Building dependency tree... Done Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Step 1: sudo apt update Step 2: sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget Step 3: wget https://www.python.org/ftp/python/3.10.8/Python-3.10.8.tgz Step 4: tar -xf Python-3.10.8.tgz Step 5: cd Python-3.10.8 Step 6: ./configure --enable-optimizations Step 7: make -j $(nproc) Step 8: sudo make altinstall Verify that Python 3.10.8 was installed: python3.10 --version

2
8
76,554,603
2023-6-26
https://stackoverflow.com/questions/76554603/python-regex-issue-with-optional-substring-in-between
Been bashing my head on this since 2 days. I'm trying to match a packet content with regex API: packet_re = (r'.*RADIUS.*\s*Accounting(\s|-)Request.*(Framed(\s|-)IP(\s|-)Address.*Attribute.*Value: (?P<client_ip>\d+\.\d+\.\d+\.\d+))?.*(Username|User-Name)(\s|-)Attribute.*Value:\s*(?P<username>\S+).*') packet1 = """ IP (tos 0x0, ttl 64, id 35592, offset 0, flags [DF], proto UDP (17), length 213) 10.10.10.1.41860 > 10.10.10.3.1813: [udp sum ok] RADIUS, length: 185 Accounting-Request (4), id: 0x0a, Authenticator: 41b3b548c4b7f65fe810544995620308 Framed-IP-Address Attribute (8), length: 6, Value: 10.10.10.11 0x0000: 0a0a 0a0b User-Name Attribute (1), length: 14, Value: 005056969256 0x0000: 3030 3530 3536 3936 3932 3536 """ result = search(packet_re, packet1, DOTALL) The regex matches, but it fails to capture Framed-IP-Address Attribute, client_ip=10.10.10.11. The thing is Framed-IP-Address Attribute can or cannot come in the packet. Hence the pattern is enclosed in another capture group ending with ? meaning 0 or 1 occurrence. I should be able to ignore it when it doesn't come. Hence packet content can also be: packet2 = """ IP (tos 0x0, ttl 64, id 60162, offset 0, flags [DF], proto UDP (17), length 163) 20.20.20.1.54035 > 20.20.20.2.1813: [udp sum ok] RADIUS, length: 135 Accounting-Request (4), id: 0x01, Authenticator: 219b694bcff639221fa29940e8d2a4b2 User-Name Attribute (1), length: 14, Value: 005056962f54 0x0000: 3030 3530 3536 3936 3266 3534 """ The regex should ignore Framed-IP-Address in this case. It does ignore but it doesn't capture when it does come.
I suggest using RADIUS.*?Accounting[\s-]Request(?:.*?(Framed[\s-]IP[\s-]Address.*?Attribute(?:.*?Value: (?P<client_ip>\d+\.\d+\.\d+\.\d+))?))?.*User-?[nN]ame[\s-]Attribute.*?Value:\s*(?P<username>\S+) See the regex demo. Note I removed .* on both ends of the pattern as you are using re.search that does not require matching at the start of string like re.match, and the MatchData object contains .string property that you can access to obtain the whole input string. Details RADIUS - a word .*? - any zero or more chars, as few as possible Accounting - a word [\s-] - a whitespace or hyphen Request - a word (?:.*? - start of an optional non-capturing group: any zero or more chars as few as possible, then... (Framed[\s-]IP[\s-]Address.*?Attribute - Group 1: Framed + a whitespace or a hyphen + IP + whitespace/hyphen + Address + any zero or more chars as few as possible + Attribute (?:.*?Value: (?P<client_ip>\d+\.\d+\.\d+\.\d+))? - an optional non-capturing group matching any zero or more chars as few as possible + Value: + Group "client_ip": four one or more digit matching patterns separated with a literal dot ) - end of the Group 1 )? - end of the outer non-capturing group .* - any zero or more chars as many as possible User-?[nN]ame - Username, UserName or User-name/User-Name [\s-] - whitespace or hyphen Attribute - a word .*? - any zero or more chars as few as possible Value: - a literal string \s* - zero or more whitespaces (?P<username>\S+) - Group "username": one or more non-whitespace chars
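For completeness, a small sketch of how this pattern behaves on the two sample packets from the question (packet1 and packet2 are the multi-line strings defined there):

import re

packet_re = (
    r'RADIUS.*?Accounting[\s-]Request'
    r'(?:.*?(Framed[\s-]IP[\s-]Address.*?Attribute(?:.*?Value: (?P<client_ip>\d+\.\d+\.\d+\.\d+))?))?'
    r'.*User-?[nN]ame[\s-]Attribute.*?Value:\s*(?P<username>\S+)'
)

for packet in (packet1, packet2):
    m = re.search(packet_re, packet, re.DOTALL)
    print(m.group('client_ip'), m.group('username'))
# packet1 -> 10.10.10.11 005056969256
# packet2 -> None 005056962f54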
4
2
76,548,894
2023-6-25
https://stackoverflow.com/questions/76548894/vs-code-python-debugger-opens-a-new-integrated-terminal-instead-of-reusing-an-ex
When I try to debug a Python file in VS Code, the debugger opens in a new terminal instead of using the existing integrated terminal. I have tried the following: Making sure that I have saved my launch.json configuration file. Restarting VS Code. Trying debugging a different Python file. If I am using a virtual environment, making sure that the virtual environment is activated. launch.json { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal", "justMyCode": true } Environment: VS code Version: 1.79.2 (user setup) OS: WSL Ubuntu As you can see in the image it launches a new terminal rather than using the existing terminal.
From googling "github vscode python issues debug reuse terminal", I found this issue ticket: Run Python Debug Console in Existing Terminal #13040, where a maintainer of the VS Code Python extension, @int19h stated this: This would be a great feature, but, unfortunately, we're constrained by VSCode itself here. The debugger can only request to run something in a terminal with a given title (and you can specify that title in launch.json by setting "consoleTitle", if you don't like the default "Python Debug Console"). However, whether and how to reuse an existing terminal is entirely down to VSCode. I was hoping that it'd reuse an existing terminal with matching title - but this doesn't seem to be the case. microsoft/vscode#68123 might be relevant here, although it's not quite the same, since it's about grouping newly spawned terminals. Note for historical purposes, there was also an older bug where debugging with a Python-type launch config would cause a new terminal to be created every time instead of reusing an existing debug terminal, but that one seems to have been fixed (see also Debug mode spawns a new "Python Debug Console" every time with console set to integratedTerminal #132 and In VSCode, Python debugger launches a new Terminal every time I debug).
3
4
76,539,324
2023-6-23
https://stackoverflow.com/questions/76539324/pytube-exceptions-regexmatcherror-get-throttling-function-name-could-not-find
def video_downloader(video_url_list: List[str], download_folder: str) -> None: """ Download videos from a list of YouTube video URLs. Args: video_urls (List[str]): The list of YouTube video URLs to download. download_folder (str): The folder to save the downloaded videos. """ successful_downloads = 0 valid_urls: List[str] = [] for url in video_url_list: if not url: continue try: with tempfile.TemporaryDirectory() as temp_dir: cleaned_url = clean_youtube_url(url=url) if _get_streams(url=cleaned_url, temp_dir=temp_dir): create_folder(download_folder) _merge_streams( temp_dir=temp_dir, url=url, download_folder=download_folder ) logger.info(f"The video from {url} was downloaded successfully") successful_downloads += 1 valid_urls.append(url) else: logger.warning(f"No valid video found at the URL: {url}") except Exception as e: logger.error( f"Error has occurred while downloading the video from {url}: {e}" ) print(e) if successful_downloads != 0 and (successful_downloads == len(valid_urls)): messagebox.showinfo( title="Finished download", message=f"Your download is complete!.\n\n\t{successful_downloads}/{len(video_url_list[1:])}", ) This is the main function I was using to download YouTube videos using Pytube. It was working fine last night, but this morning it's not. I'm getting the error "pytube.exceptions.RegexMatchError: get_throttling_function_name: could not find match for multiple", which prevents me from downloading any video or audio. Any possible solutions?
The work around in the official github page issue: https://github.com/pytube/pytube/issues/1684 function_patterns = [ # https://github.com/ytdl-org/youtube-dl/issues/29326#issuecomment-865985377 # https://github.com/yt-dlp/yt-dlp/commit/48416bc4a8f1d5ff07d5977659cb8ece7640dcd8 # var Bpa = [iha]; # ... # a.C && (b = a.get("n")) && (b = Bpa[0](b), a.set("n", b), # Bpa.length || iha("")) }}; # In the above case, `iha` is the relevant function name r'a\.[a-zA-Z]\s*&&\s*\([a-z]\s*=\s*a\.get\("n"\)\)\s*&&.*?\|\|\s*([a-z]+)', r'\([a-z]\s*=\s*([a-zA-Z0-9$]+)(\[\d+\])?\([a-z]\)',]
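In case it helps to locate where these go: per that issue, the patterns replace the function_patterns list inside get_throttling_function_name in pytube's cipher.py, so you edit the file in your installed copy of pytube (or wait for a patched release). A quick convenience sketch to find that file:

import pytube.cipher
print(pytube.cipher.__file__)  # path of the cipher.py that defines get_throttling_function_name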
6
8
76,546,278
2023-6-24
https://stackoverflow.com/questions/76546278/how-we-can-run-python-in-sublime-text3
My code doesn't run correctly with Ctrl + B. I searched on Google and YouTube and tried several ways; there were some suggested solutions, but they didn't work. There is no error, but unfortunately my code doesn't run.
Download the portable Sublime Text 3 and go to: Tools | Build System | Python. With "Build" you can run your code. Ctrl + B shows your Python version, i.e. whichever one is defined in your system environment variables. I would suggest Thonny anyway.
3
3
76,541,795
2023-6-23
https://stackoverflow.com/questions/76541795/google-drive-ucid-no-longer-works
I am trying to write code to get the direct urls of images in a public Google Drive folder and embed them. Until a few minutes ago everything has been working and using the uc? trick gives the raw image so I can embed it. However now it is sending a download, even when I use &export=view. Here is my code: key = "supersecretgooglecloudconsolekey" import re import requests def get_raw_image_links(folder_link): # Extract folder id folder_id = re.findall(r"/folders/([^\s/]+)", folder_link)[0] # Google cloud console request to get links api_url = f"https://www.googleapis.com/drive/v3/files?q='{folder_id}'+in+parents+and+mimeType+contains+'image/'&key={key}" response = requests.get(api_url).json() raw_image_links = [] # Iterate over the files and retrieve raw image links for file in response['files']: raw_link = f"https://drive.google.com/uc?id={file['id']}" print(raw_link) raw_image_links.append(f'<a href="{raw_link}"> <img src="{raw_link}" /> </a>') return ''.join(raw_image_links) folder_link = "https://drive.google.com/drive/u/0/folders/**********" image_links = get_raw_image_links(folder_link) For example, this works: https://drive.google.com/uc?id=15bwLlBuPc4O8LXwxjsBJe_ctyPl8Tll2 But this doesn't for some reason and instead sends a download: https://drive.google.com/uc?id=1Gz4YrH2tefSVxKRvv6gnqnY3tk0Obm0T How do I fix this? Also, it would be helpful if anyone knows how to scrape the raw image link from the viewing page of the image itself https://drive.google.com/file/d/15bwLlBuPc4O8LXwxjsBJe_ctyPl8Tll2/view Thank you!
In your situation, how about changing the endpoint? In the script you showed, modify raw_link as follows. From: raw_link = f"https://drive.google.com/uc?id={file['id']}" To: raw_link = f"https://drive.google.com/thumbnail?id={file['id']}&sz=w1000" # Please adjust w1000 to your actual situation. With this change, the image files are served converted to PNG format. I think this might be one of several possible solutions. From your question and the script you showed, I guessed that the folder in folder_link = "https://drive.google.com/drive/u/0/folders/**********" is publicly shared. In that case, the above modification should work. Reference: Permanent links to thumbnails in Google Drive API
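Applied to the loop in the question, the change would look roughly like this (sz=w1000 is only an example width):

raw_image_links = []
for file in response['files']:
    # the thumbnail endpoint serves the image bytes instead of triggering a download
    raw_link = f"https://drive.google.com/thumbnail?id={file['id']}&sz=w1000"
    raw_image_links.append(f'<a href="{raw_link}"> <img src="{raw_link}" /> </a>')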
6
4
76,540,270
2023-6-23
https://stackoverflow.com/questions/76540270/uvicorn-reload-kill-all-created-sub-process-when-auto-reloading
I currently have a FastAPI app deployed with uvicorn that starts a thread on initialisation (among other things) using threading. This thread is infinite (it's a routine that updates every x seconds). Before I updated to Python 3.10, everything was working fine: every time I changed the code, the server would detect the change and reload, killing the thread and creating a new one at init. But now, when I modify my code, the server detects the change and tries to reload, but the created thread isn't killed (prints still continue to flow in the console), preventing the server from fully reloading. my print from my thread WARNING: StatReload detected changes in 'app\api.py'. Reloading... INFO: Shutting down INFO: Waiting for application shutdown. INFO: Application shutdown complete. INFO: Finished server process [3736] my print from my thread my print from my thread ... It works the same way if I Ctrl+C in the console: the thread stays alive. My solution for the moment is to kill the PID every time I want to refresh, but that's a bit annoying. I tried to go back to Python 3.7.9 but the problem remains. I also tried to implement atexit and manually kill the process but it didn't work. Any lead on how to properly handle this?
Fixed by downgrading uvicorn to an older version (0.19.0 works).
5
2
76,532,906
2023-6-22
https://stackoverflow.com/questions/76532906/unrecognized-configuration-parameter-standard-conforming-strings-while-queryin
I am trying to connect to my Redshift cluster using SQLAlchemy in a Linux environment, but I am facing the following issue. from sqlalchemy import create_engine import pandas as pd conn = create_engine('postgresql://connection string') data_frame = pd.read_sql_query("SELECT * FROM schema.table", conn) sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedObject) unrecognized configuration parameter "standard_conforming_strings" I really don't understand what the issue is. It works perfectly fine in Windows. PS: Not sure if this makes any difference, but I have installed psycopg2-binary on the Linux machine in contrast to psycopg2 on Windows. EDIT 1: The version of psycopg2 in Windows is 2.9.3 while the version of psycopg2-binary in Linux is 2.9.6. The version of SQLAlchemy in Windows is 1.4.39 while in Linux it is 2.0.16.
I figured out the answer based on the comment by @Adrian. I just changed the version of SQLAlchemy in the Linux env to match what I have on Windows (1.4.39) and it works now.
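For anyone comparing the two environments, a quick sketch to print the versions actually in use on each machine:

import sqlalchemy
import psycopg2

print("SQLAlchemy:", sqlalchemy.__version__)
print("psycopg2:", psycopg2.__version__)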
12
0
76,536,650
2023-6-23
https://stackoverflow.com/questions/76536650/read-json-to-pandas-dataframe
I have a data file in json format downloaded from another system. It can be observed that it contains one row with all the column names as shown as keys in the json string with its values. Metadata for this one row contains another json string. { "createdTime": 1625433352096, "lastUpdatedTime": 1664815053126, "rootId": 5710030000009051, "parentId": 0000010904715472, "parentExtId": "J802-RFPS-2F-1004", "externalId": "J802-RFPS-003FT1004A", "name": "J802-RFPS-003FT2224A", "description": "113FT1004A, RFP5 CRDE CHG PMP TO TN 1", "dataSetId": 4833410000089553, "metadata": { "Catalog Profile": "16", "Cost Center": "TNK7222414", "Installation of equipment allowed": "Yes", "Location": "112-SCU", "Location and Account Assignment": "11223355", "PP Work Center": "10000420", "Planner Group": "CRD", "Plant Section": "CE", "SAP Object No": "01649", "Single Equipment Installation": "Yes", "Type": "I_CVL", "Work Center": "9876542" }, "source": "SAP PM", "id": 8972234567977 } I am using the following code in python to read it as dataframe. import json from pathlib import Path import pandas as pd json_file_path = r'D:\asset-897428209692977.json' # path to file p = Path(json_file_path) # read json with p.open('r', encoding='utf-8') as f: data = json.loads(f.read()) # create dataframe df2 = pd.json_normalize(data) The output has one row but the dataframe splits the metadata into multiple columns such as metadata.planner, metadata.PM, metadata.ID...... Is there another way to keep it as one row so that when some keys are missing in certain records, it doesn't find itself missing or removed in this json parsing. Please advise the best approach
I don't think there is any direct way to do this. However, you can pop metadata out of the parsed data before normalizing, and then add it back manually as a single column. metadata = data.pop('metadata') df2 = pd.json_normalize(data) df2['metadata'] = [metadata] However, I'm not able to test the code on the data you've provided, so I am not sure it'll work without any issues. Both the original and modified versions return the same error - "json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 108 (char 107)" - when I try to run them, and I'm not sure why that error happens.
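Another option that may help, if the goal is simply to stop json_normalize from splitting the nested dict: it accepts a max_level argument, and max_level=0 keeps nested dicts as single-column values, so missing keys inside metadata never turn into missing columns. Untested against your exact file (the JSON in the question didn't parse for me either):

df2 = pd.json_normalize(data, max_level=0)
# 'metadata' stays as one column holding the whole dict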
3
1
76,534,046
2023-6-22
https://stackoverflow.com/questions/76534046/why-is-my-manual-convolution-different-to-scipy-ndimage-convolve
I apologise in advance, I may just not understand convolution. I'm struggling to reconcile the results using scipy.ndimage.convolve with what I get attempting to do it by hand. For the example in the documentation: import numpy as np a = np.array([[1, 2, 0, 0], [5, 3, 0, 4], [0, 0, 0, 7], [9, 3, 0, 0]]) k = np.array([[1,1,1],[1,1,0],[1,0,0]]) from scipy import ndimage ndimage.convolve(a, k, mode='constant', cval=0.0) array([[11, 10, 7, 4], [10, 3, 11, 11], [15, 12, 14, 7], [12, 3, 7, 0]]) However I would expect the result to be: ([[1, 8, 5, 0], [8, 11, 5, 4], [8, 17, 10, 11], [9, 12, 10, 7]]) For example for the top left value: 1×0 (extended beyond the input) 1×0 (extended beyond the input) 1×0 (extended beyond the input) 1×0 (extended beyond the input) 1×1 0×2 1×0 (extended beyond the input) 0×5 0×3 ___ =1 I don't see how it can be 11 What am I misunderstanding about convolution, arrays, or what scipy is doing here?
The operation you're performing by hand is cross correlation, not convolution. The process of convolution is similar but "flips" the kernel. You can show that your expected result can be obtained by first flipping your kernel which then gets unflipped during covolution: > ndimage.convolve(a, np.flip(k), mode='constant', cval=0.0) array([[ 1, 8, 5, 0], [ 8, 11, 5, 4], [ 8, 17, 10, 11], [ 9, 12, 10, 7]]) A bit more information here: https://cs.stackexchange.com/questions/11591/2d-convolution-flipping-the-kernel
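Equivalently, since the hand calculation is cross correlation, scipy's correlate (which does not flip the kernel) reproduces the expected array directly on the arrays a and k from the question:

from scipy import ndimage
ndimage.correlate(a, k, mode='constant', cval=0.0)
# array([[ 1,  8,  5,  0],
#        [ 8, 11,  5,  4],
#        [ 8, 17, 10, 11],
#        [ 9, 12, 10,  7]])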
4
4
76,532,429
2023-6-22
https://stackoverflow.com/questions/76532429/typehint-googleapiclient-discovery-build-returning-value
I create the Google API Resource class for a specific type (in this case blogger). from googleapiclient.discovery import Resource, build def get_google_service(api_type) -> Resource: credentials = ... return build(api_type, 'v3', credentials=credentials) def blog_service(): return get_google_service('blogger') def list_blogs(): return blog_service().blogs() The problem arises when using the list_blogs function. Since I am providing a specific service name, I know that the return value of blog_service has a blogs method, but my IDE doesn't recognize it. Is there a way to annotate the blog_service function (or any other part of the code) to help my IDE recognize the available methods like blogs?
The problem here is that the Resource class dynamically sets arbitrary attributes/methods based on what collections the underlying API provides. In your case apparently there is a blogs collection, which means the blogs method is dynamically constructed on that Resource object. Expecting static type annotations, when your types are created dynamically is a tall order. (I assume this is one of the reasons the maintainers of google-api-python-client do not even bother with type-hinting in that package.) But depending on your goals with those functions, you might improve your IDE experience by using protocols. If you know that your blog_service function returns an object that has a blogs method, you can define a corresponding protocol. The problem is of course kicked down the road because whatever that blogs method returns may also have arbitrary methods. But depending on if this is important to you, you can apply the same principle for that again. The upside is that Resource itself actually seems to have very few public methods, essentially just close and the context manager protocol. So you could emulate that in some sort of base protocol and use inheritance to construct different resource-protocols. Here is an example to illustrate: from typing import Any, Protocol, Self from googleapiclient.discovery import build # type: ignore[import] class ResourceProtocol(Protocol): def close(self) -> None: ... def __enter__(self) -> Self: ... def __exit__(self, *args: Any) -> None: ... class BloggerProtocol(ResourceProtocol): def blogs(self) -> Any: ... def get_google_service(api_type: str) -> Any: credentials = ... return build(api_type, 'v3', credentials=credentials) def blog_service() -> BloggerProtocol: return get_google_service('blogger') # type: ignore[no-any-return] def list_blogs() -> Any: return blog_service().blogs() (Note that those are all literal ellipses .... Also, if you are on Python <3.11, you should be able to import Self from typing_extensions instead.) Now your IDE should be able to detect that the object returned by blog_service has a blogs method and can be used as a context manager (using with). Notice that I annotated the return type of blogs as Any because I don't know what type it is supposed to return.
3
2
76,533,178
2023-6-22
https://stackoverflow.com/questions/76533178/corr-results-in-valueerror-could-not-convert-string-to-float
I'm getting this very strange error when trying to follow the following exercise on using the corr() method in Python https://www.geeksforgeeks.org/python-pandas-dataframe-corr/ Specifically, when I try to run the following code: df.corr(method ='pearson') The error message offers no clue. I thought the corr() method was supposed to automatically ignore strings and empty values etc. Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> df.corr(method='pearson') File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\frame.py", line 10059, in corr mat = data.to_numpy(dtype=float, na_value=np.nan, copy=False) File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\frame.py", line 1838, in to_numpy result = self._mgr.as_array(dtype=dtype, copy=copy, na_value=na_value) File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\internals\managers.py", line 1732, in as_array arr = self._interleave(dtype=dtype, na_value=na_value) File "C:\Users\d.o\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\internals\managers.py", line 1794, in _interleave result[rl.indexer] = arr ValueError: could not convert string to float: 'Avery Bradley'
When I try to replicate this behavior, the corr() method works OK but spits out a warning (shown below) that warns that the ignoring of non-numeric columns will be removed in the future. Perhaps the future has arrived? I've got pandas version 1.5.3. You may need to just specify which columns to use--which is actually a better way to do it rather than rely on pd to do this for you. You can do that by just supplying a list of the columns of interest as an index (shown below.) In [1]: import pandas as pd In [2]: data = {'name': ['bob', 'cindy', 'tom'], ...: 'x' : [ 1, 2, 3 ], ...: 'y' : [ 6.5, 8.9, 12.0]} In [3]: df = pd.DataFrame(data) In [4]: df Out[4]: name x y 0 bob 1 6.5 1 cindy 2 8.9 2 tom 3 12.0 In [5]: df.describe() Out[5]: x y count 3.0 3.000000 mean 2.0 9.133333 std 1.0 2.757414 min 1.0 6.500000 25% 1.5 7.700000 50% 2.0 8.900000 75% 2.5 10.450000 max 3.0 12.000000 In [6]: df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 3 entries, 0 to 2 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 name 3 non-null object 1 x 3 non-null int64 2 y 3 non-null float64 dtypes: float64(1), int64(1), object(1) memory usage: 200.0+ bytes In [7]: df.corr(method='pearson') <ipython-input-7-432dd9d4238b>:1: FutureWarning: The default value of numeric_only in DataFrame.corr is deprecated. In a future version, it will default to False. Select only valid columns or specify the value of numeric_only to silence this warning. df.corr(method='pearson') Out[7]: x y x 1.000000 0.997311 y 0.997311 1.000000 In [8]: df[['x', 'y']].corr(method='pearson') Out[8]: x y x 1.000000 0.997311 y 0.997311 1.000000
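In newer pandas versions, where that deprecation has landed and the default became numeric_only=False, you can either select the numeric columns explicitly as suggested above or pass numeric_only=True; a minimal sketch on the toy frame from this answer:

df.corr(method='pearson', numeric_only=True)
# or, selecting numeric columns explicitly:
df.select_dtypes('number').corr(method='pearson')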
25
3
76,527,086
2023-6-21
https://stackoverflow.com/questions/76527086/find-an-empty-space-in-a-binary-image-that-can-fit-a-shape
I have this image. I need to find an empty area that can fit this shape, so that the end result is something like this:
Here is a simple yet naive solution to this problem. It uses 2D convolution as mentioned by UnquoteQuote. import random import numpy as np import scipy.signal as sig import matplotlib.pyplot as plt import cv2 # load images image = (cv2.imread('image.jpg').mean(axis=2) > 127).astype(np.float32) shape = (cv2.imread('shape.jpg').mean(axis=2) > 127).astype(np.float32) # perform 2D convolution conv = sig.convolve2d(image, shape, mode='valid') solutions = np.where(conv == 0) # draw some solutions plt.figure(figsize=(16, 4)) for i in range(4): r = random.randint(0, solutions[0].shape[0] - 1) x, y = solutions[0][r], solutions[1][r] solution_plot = np.zeros((*image.shape, 3)) solution_plot[:, :, 0] = image solution_plot[x:x + shape.shape[0], y:y + shape.shape[1], 1] = shape plt.subplot(1, 4, i + 1) plt.imshow(solution_plot) plt.show() Example results: This algorithm finds all possible solutions. If you only need one, you can optimize it so that it gets a random (x, y) point and perform dot product of the shape and a cropped image area [x:x+shape_width, y:y+shape_height] to check if there is a space until you find the right point. This can be done for example like this: while True: x = random.randint(0, image.shape[0] - shape.shape[0]) y = random.randint(0, image.shape[1] - shape.shape[1]) if np.sum(shape*image[x:x + shape.shape[0], y:y + shape.shape[1]]) == 0: break # x, y is the solution Compared to convolution this one is much faster (but it depends on the number of the solutions): convolution: 6.65 s ± 21.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) random search: 1.31 ms ± 31.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
3
2
76,523,859
2023-6-21
https://stackoverflow.com/questions/76523859/improving-performance-of-resolving-a-solitaire-math-game
A bit of history: Since I was a kid, I have been playing a very easy little solitaire game to which I have never found a solution and honestly, I don't know if there is one, but I would like to find it out with the help of my computer. First, let me explain the rules: First you have to take a grid sheet and draw an area of 10 x 10 squares. Next you have to place the first number at a square of your chioce (I usually use the square at 0,0) Now you have to start counting up numbers from 1 to 100 according to certain jump rules (I'll get into that right now) and annotating the corresponding number inside the square you are jumping to. Jump rules are: leave 2 squares blank horizontally or vertically and leave only one blank diagonally. The problem: I have written the following (awfully slow) code in Python. Considering that my interpretation is that the possiblities the computer has to explore are 100!, I think the "bruteforcing" method will take a pretty long time. I'm running an 11th gen I7 but my Python code gets only executed on a single core. How could I speed my code up and/or how I could improve the algorithm? Here is the code: class Gametable: def __init__( self ): #This value contains the maximum number reched by the algorithm self.max_reached = 1 def start_at( self, coordX, coordY ): tmpTable = [ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] ] #We do not need to check the first jump, since the table should be empty. if self.is_valid_move( coordX, coordY, 1, tmpTable ): print("Found a solution!") else: print("Found no solution :(") def print_table( self, table ): print(52*"-") for X in range(10): for Y in range(10): if table[ X ][ Y ] == 0: print("| ", end="") else: print("| {:02d} ".format( table[ X ][ Y ]), end='') print(" |") print(52*"-") print() def is_valid_move( self, X, Y, counter, table): # Check bounds if X < 0: return False elif X > 9: return False elif Y < 0: return False elif Y > 9: return False # Now check next steps if table[ X ][ Y ]==0: table[ X ][ Y ] = counter if self.is_valid_move( X + 3, Y, counter+1, table ) or self.is_valid_move( X - 3, Y, counter+1, table ) or self.is_valid_move( X , Y + 3, counter+1, table ) or self.is_valid_move( X, Y - 3, counter+1, table ) or self.is_valid_move( X + 2, Y + 2, counter+1, table ) or self.is_valid_move( X + 2, Y - 2, counter+1, table ) or self.is_valid_move( X - 2, Y + 2, counter+1, table ) or self.is_valid_move( X - 2, Y - 2, counter+1, table ): return True else: if counter > self.max_reached: print("Max reached "+str(self.max_reached)) self.print_table(table) self.max_reached = counter table[X][Y] = 0 # We'll have to delete the last step if there is no further possibility return False mytable = Gametable() mytable.start_at( 0, 0 ) This is an example of the game:
I had a go and implemented Warnsdorff's rule for prioritising moves, and the program immediately found a solution: def print_table(table): print(51*"-") for row in table: for cell in row: print(f"|{cell:>3} " if cell else "| ", end='') print("|\n" + 51*"-") print() def solve(table, x, y): sizey = len(table) sizex = len(table[0]) size = sizey * sizex # Generate move list (nothing special) def moves(x, y): return [(x1, y1) for x1, y1 in ( (x + dx, y + dy) for dx, dy in ( (-3, 0), (-2, 2), (0, 3), ( 2, 2), ( 3, 0), ( 2, -2), (0, -3), (-2, -2) ) ) if 0 <= x1 < sizex and 0 <= y1 < sizey and table[y1][x1] == 0 ] # Heuristic for evaluating a move by the number of followup moves def freedom(x, y): return len(moves(x, y)) # Get move list sorted by heuristic (Warnsdorff's rule) def sortedmoves(x, y): return sorted((freedom(x1, y1), x1, y1) for x1, y1 in moves(x, y)) def dfs(x, y, i): table[y][x] = i # Table completed? if i == size or any(dfs(x1, y1, i + 1) for _, x1, y1 in sortedmoves(x, y)): return True # BINGO! table[y][x] = 0 # backtrack return False return dfs(x, y, 1) table = [ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] ] if solve(table, 0, 0): print("SOLVED!") print_table(table) Output: SOLVED! --------------------------------------------------- | 1 | 43 | 67 | 16 | 42 | 75 | 31 | 39 | 74 | 32 | --------------------------------------------------- | 69 | 18 | 3 | 83 | 35 | 4 | 82 | 36 | 5 | 81 | --------------------------------------------------- | 66 | 15 | 41 | 76 | 94 | 40 | 73 | 93 | 30 | 38 | --------------------------------------------------- | 2 | 44 | 68 | 17 | 79 | 86 | 34 | 80 | 87 | 33 | --------------------------------------------------- | 70 | 19 | 95 | 84 | 72 | 96 | 89 | 37 | 6 | 92 | --------------------------------------------------- | 65 | 14 | 57 | 77 | 99 | 56 | 78 |100 | 29 | 60 | --------------------------------------------------- | 23 | 45 | 71 | 26 | 90 | 85 | 27 | 91 | 88 | 10 | --------------------------------------------------- | 54 | 20 | 98 | 55 | 58 | 97 | 50 | 59 | 7 | 49 | --------------------------------------------------- | 64 | 13 | 24 | 63 | 12 | 25 | 62 | 11 | 28 | 61 | --------------------------------------------------- | 22 | 46 | 53 | 21 | 47 | 52 | 8 | 48 | 51 | 9 | --------------------------------------------------- Other starting squares When I tried other starting squares, it turned out that when starting at (4, 2) the search took too long. So I added a tie-breaker to the heuristic (in case the minimum freedom was shared by multiple moves): I went with the taxicab distance to the closest corner. 
This turned out to work out well for all starting positions: def print_table(table): print(51*"-") for row in table: for cell in row: print(f"|{cell:>3} " if cell else "| ", end='') print("|\n" + 51*"-") print() def solve(table, x, y): sizey = len(table) sizex = len(table[0]) size = sizey * sizex # Generate move list (nothing special) def moves(x, y): return [(x1, y1) for x1, y1 in ( (x + dx, y + dy) for dx, dy in ( (-3, 0), (-2, 2), (0, 3), ( 2, 2), ( 3, 0), ( 2, -2), (0, -3), (-2, -2) ) ) if 0 <= x1 < sizex and 0 <= y1 < sizey and table[y1][x1] == 0 ] # Heuristic for evaluating a move by the number of followup moves def freedom(x, y): return len(moves(x, y)) # Heuristic for breaking ties: taxicab distance to closest corner def cornerdistance(x, y): return min(x, sizex - 1 - x) + min(y, sizey - 1 - y), # Get move list sorted by heuristic (Warnsdorff's rule) def sortedmoves(x, y): return sorted((freedom(x1, y1), cornerdistance(x1, y1), x1, y1) for x1, y1 in moves(x, y)) def dfs(x, y, i): table[y][x] = i # Table completed? if i == size or any(dfs(x1, y1, i + 1) for _,_, x1, y1 in sortedmoves(x, y)): return True # BINGO! table[y][x] = 0 # backtrack return False return dfs(x, y, 1) # Try any starting square for i in range(10): for j in range(10): table = [ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ] ] print("=====",i,j,"=================") if solve(table, i, j): print("SOLVED!") print_table(table) All 100 solutions were spit out to the screen in no time.
3
2
76,513,289
2023-6-20
https://stackoverflow.com/questions/76513289/how-to-properly-use-gcps-artifact-repository-with-python
Adding Private GCP Repo Breaks normal pip behaviour When using Google Cloud Platform's Artifact Repository, you have to alter your .pypirc file for any uploads (twine) and your pip.conf for any downloads (pip). For the downloads specifically, you have to add something like: [global] extra-index-url = https://<YOUR-LOCATION>-python.pkg.dev/<YOUR-PROJECT>/<YOUR-REPO-NAME>/simple/ However, by doing this, now anything that will call pip will also check this extra repository, and when doing so, it will ask for a user name and password. This means that anything, like calls behind the scenes that poetry, pdm, pip, or pipx do will all ask for this username and password. Often these requests are being made as part of a non-interactive action, so that everything just stalls. Non-ideal, but working, solution: I ran across this "solution", which does indeed work, but which the author himself says is not the right way to do things because it compromises security, bringing us back to the "infinitely live keys stored on a laptop" days. (I'm sorry, that link is now behind Medium's paywall. In short, the link said that you should use a JSON key and provide that key in your pip.conf and .pypirc files. You can create a JSON key following something like this Google doc showing how to authenticate with a key file.) More secure solution?? But what is the right solution? I want the following: To be able to run things like pip, pdm, etc. on my local machine and not have them stall, waiting for a username and password that I cannot fill out. This is both for things that are in fact in my private repository, but also things living in normal PYPI or wherever I look. To keep the security in place, so that I am being recognized as "ok to do this" because I have authorized myself and my computer via gcloud auth login or something similar (gcloud auth login does nothing to assist with this repo issue, at least not with any flags I tried). And still be able to perform twine actions (upload to registry) without problems. I use newer solutions, specifically pdm, for package build. I need something that uses pyproject.toml, not setup.py, etc. If I perform something like pdm install (or poetry install), I need for credentials to be evaluated without human input.
I am still not 100% satisfied with my solution, so the question is still relevant and important, IMHO. However, I have resolved it "good enough", and moved on to other tasks. In a thread above, @insidehustle asked if I could describe how I'm doing it. So even though I don't find this a "solution", I wanted to add it to possibly help him (and others) out with a solution that is good enough. CI/CD We use Google for our cloud provider and GitHub Actions for (most of) our builder. In the GitHub Actions workflow YAML file, I do the following for creating Python packages (I have left my own comments/notes in, because it may help you): steps: - uses: actions/checkout@v4 - uses: pdm-project/setup-pdm@v4 # With this, set up Python is no longer needed name: setup PDM using python version ${{ inputs.python-version }} with: python-version: ${{ inputs.python-version }} # Range or exact version, same as actions/setup-python update-python: true # Update env with requested python version cache: false # Use cache support. Default path is ./pdm.lock # Authentication does not seem to be providing sufficient rights to PDM, but it # should. We can revisit this. - id: 'auth' name: 'Authenticate to Google Cloud' uses: 'google-github-actions/auth@v2' with: token_format: 'access_token' workload_identity_provider: ${{ secrets.workload_identity_provider }} service_account: ${{ secrets.service_account }} - name: install '${{ inputs.artifact }}' dependencies env: PYTHON_REPO_USERNAME: ${{ inputs.repo-username }} PYTHON_REPO_PASSWORD: ${{ secrets.python_repo_password }} run: | if [[ ${{ inputs.verbose }} ]] then pdm install --no-default --no-self --verbose else pdm install --no-default --no-self fi - name: run tests on '${{ inputs.artifact }}' (using nox) if: ${{ inputs.run-tests }} env: PYTHON_REPO_USERNAME: ${{ inputs.repo-username }} PYTHON_REPO_PASSWORD: ${{ secrets.python_repo_password }} run: | if [[ ${{ inputs.verbose }} ]] then pdm run nox -f ${{ inputs.noxfile }} --error-on-missing-interpreters --non-interactive --add-timestamp --verbose else pdm run nox -f ${{ inputs.noxfile }} --error-on-missing-interpreters --non-interactive --add-timestamp fi - name: build '${{ inputs.artifact }}' package env: PYTHON_REPO_USERNAME: ${{ inputs.repo-username }} PYTHON_REPO_PASSWORD: ${{ secrets.python_repo_password }} run: | if [[ ${{ inputs.verbose }} ]] then pdm build --verbose else pdm build fi - name: persist package (for later upload to artifact registry) # This is persisting to GH, not to Google Cloud uses: actions/upload-artifact@v4 with: name: ${{ inputs.artifact }} path: ${{ inputs.working-directory }}/dist/ if-no-files-found: ${{ inputs.action-on-upload-fail }} and then I use Github's Secret Manager to store the things that you see above like secrets.workload_identity_provider. That allows the flexibility of inputs, no secrets being stored in the repo, and an easy way to rotate secrets (I change the actual IAM objects in Google, then update the GHA secrets to reflect the new information). I then use a similar concept for publishing the Python packages. It's much simpler, because I don't have to worry about "fighting" against my package manager. (Actually, I think that PDM is doing the right thing, it's just that Google and other providers target "broken" old systems and not the newer standards that PDM has adopted.) There is one more bit of magic to make this work, though: I need to provide my pyproject.toml file with a pointer to my repo, using the secrets mentioned above. 
To avoid giving away too much company information, I'm editing down the pyproject.toml to just show the core part.

[[tool.pdm.source]]
name = "insights_python_packages"
url = "https://${PYTHON_REPO_USERNAME}:${PYTHON_REPO_PASSWORD}@<our-region>-python.pkg.dev/<our-project>/insights-python-packages/simple/"
verify_ssl = true

NOTE: The ${PYTHON_REPO_PASSWORD} is pointing back to the same secret environment variable that is referenced in the GHA YAML file. That is, pdm build works because it has

env:
  PYTHON_REPO_USERNAME: ${{ inputs.repo-username }}
  PYTHON_REPO_PASSWORD: ${{ secrets.python_repo_password }}

providing the secrets that are needed to identify the repository.

Local

Locally, I do the same thing, except that I create a small shell script that I source, which sets the same environment variables. In other words, I have a file with the following in it:

# GCP Artifact Registry Python repo access (Mgmt Infra)
export PYTHON_REPO_USERNAME=_json_key_base64
export PYTHON_REPO_PASSWORD='<my_password>'

and then I source this file. The downside is that the password ends up sitting on my local computer (and my colleagues'). I am still not happy with this part of the solution, but the rest is good. The positive side is that the workflow is exactly the same: I am able to use the same pyproject.toml file, etc. So all I need to do is pdm run nox or pdm build, and everything runs just as it should and does in the CI/CD pipeline.
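A possible refinement for the local side, to avoid keeping a long-lived key on disk at all, is to derive the password from the gcloud login instead. This is only a sketch, not something I can vouch for in every setup — it assumes Artifact Registry accepts a short-lived OAuth access token over basic auth with the special oauth2accesstoken username (Google documents this pattern for several of its repository formats), and the token expires after roughly an hour, so the file has to be re-sourced periodically:

# Requires an active `gcloud auth login`; nothing secret is stored in this file
export PYTHON_REPO_USERNAME=oauth2accesstoken
export PYTHON_REPO_PASSWORD="$(gcloud auth print-access-token)"

The appeal is that the only credential left on the laptop is the gcloud session itself, which is closer to what point 2 of the question asks for.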
4
3
76,475,419
2023-6-14
https://stackoverflow.com/questions/76475419/how-can-i-select-the-proper-openai-api-version
I read on https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions:

openai.api_version = "2023-05-15"

and on https://learn.microsoft.com/en-us/answers/questions/1193969/how-to-integrate-tiktoken-library-with-azure-opena:

openai.api_version = "2023-03-15-preview"

This makes me wonder: How can I select the proper openai.api_version? Does that depend on my Azure OpenAI instance, on my deployed models, or on which features I use in my Python code? Or something else? I couldn't find the info in my deployed models.
The API Version property depends on the method you are calling in the API: all methods are not supported in all API versions. Details are listed here: https://learn.microsoft.com/en-US/azure/cognitive-services/openai/reference And preview API lifecycle is described here: https://learn.microsoft.com/en-us/azure/ai-services/openai/api-version-deprecation => check those pages for up-to-date references As of March 7th, 2024: Example: "completions" endpoint is available in the following versions (ordered by date): 2024-02-15-preview 2023-12-01-preview (retiring April 2, 2024) 2023-09-01-preview (retiring April 2, 2024) 2023-08-01-preview (retiring April 2, 2024) 2023-07-01-preview (retiring April 2, 2024) 2023-06-01-preview (still supported, due to DALL-E 2) 2023-05-15 2023-03-15-preview (retiring April 2, 2024) 2022-12-01 But "chat completions" endpoint is available only in the following versions (ordered by date): 2024-02-15-preview 2023-12-01-preview (retiring April 2, 2024) 2023-09-01-preview (retiring April 2, 2024) 2023-08-01-preview (retiring April 2, 2024) 2023-07-01-preview (retiring April 2, 2024) 2023-06-01-preview (still supported, due to DALL-E 2) 2023-05-15 2023-03-15-preview (retiring April 2, 2024) Because basically it was not offered in the initial API. Generally, use the latest "not preview" version for production when possible, as the preview versions might be retired on a more frequent basis.
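For example, with the pre-1.0 openai Python package (the module-level style used in the question), the version is just another client setting — a sketch; the resource name, key, and deployment name below are placeholders, not values from the question:

import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"  # your Azure OpenAI endpoint
openai.api_key = "<your-key>"
openai.api_version = "2023-05-15"  # a stable version that supports Chat Completions

# Chat Completions is only listed from 2023-03-15-preview onwards,
# so an older version such as 2022-12-01 would fail on this call.
response = openai.ChatCompletion.create(
    engine="<your-deployment-name>",  # the deployment name, not the model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])

If a call fails with a "resource not found" style error, checking whether the chosen api_version actually lists the endpoint you are calling is a good first step.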
14
18
76,498,857
2023-6-18
https://stackoverflow.com/questions/76498857/what-is-the-difference-between-mapped-column-and-column-in-sqlalchemy
I am new to SQLAlchemy and I see that in the documentation the older version (Column) can be swapped directly with the newer "mapped_column". Is there any advantage to using mapped_column over Column? Could you stick to the older 'Column'?
I think originally Column was used in the lower "core"/sqlalchemy.sql layer AND the higher ORM layer. This created a conflict of purpose. So mapped_column now supersedes Column when using the ORM layer to add more functionality that can't be used by the core layer. The core layer will keep using Column. So I think it is just meant to help you do more faster or more succinctly with the ORM. There is a blurb about them titled "mapped_column() supersedes the use of Column()" below declarative-table-with-mapped-column. Here are some basic examples, using postgresql. See SQL output at end. class Base(DeclarativeBase): pass class Controller(Base): __tablename__ = "controllers" id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column() # Example 1 index: Mapped[int] # Example 2 configured: Mapped[Optional[bool]] # Example 3 setup_mode: Mapped[bool] # Example 4 created_at = Column(DateTime(timezone=True)) # Example 5 Example 1 The column type is derived from the type hint, VARCHAR is derived from str in this case. name: Mapped[str] = mapped_column() Example 2 When mapped_column would be empty it can be left out entirely and this still works, ie. INTEGER is derived from int. index: Mapped[int] Example 3 When Optional from typing is used then a column will allow NULL, ie. nullable=True. configured: Mapped[Optional[bool]] Example 4 When Optional is NOT used then a column will not allow NULL, ie. nullable=False. setup_mode: Mapped[bool] Example 5 Column can still be used alongside mapped_column without using type hints at all. created_at = Column(DateTime(timezone=True)) Example of type checking Running this code through mypy will produce an error similar to error: "Controller" has no attribute "unknown_attribute" [attr-defined] with Session(engine) as session: controller = session.scalars(select(Controller).limit(1)).first() if controller is not None: assert controller.created_at assert controller.unknown_attribute Final CREATE TABLE output CREATE TABLE controllers ( id SERIAL NOT NULL, name VARCHAR NOT NULL, index INTEGER NOT NULL, configured BOOLEAN, setup_mode BOOLEAN NOT NULL, created_at TIMESTAMP WITH TIME ZONE, PRIMARY KEY (id) ) Some "Why"s allow more orm specific functionality that does not make sense to be in Column() reduce boilerplate code int, str, datetime, Optional, etc. are available from python without needing sqlalchemy imports In cases where the type can be derived from the typehint and no special configuration is necessary the entire column/mapped_column definition can be left out, ie. index: Mapped[int] allow type checkers to better check types the checks can be expanded by using something like the data-class integration but is beyond this question
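To make the first "why" concrete — ORM-specific functionality that does not belong on Column() — here is a small sketch; the Report model and its columns are made up for illustration, and deferred=True is one of the ORM-level options mapped_column() accepts directly:

from datetime import datetime

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Report(Base):
    __tablename__ = "reports"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    # ORM-level loading option set directly on the column definition:
    # the (potentially large) body column is not loaded until first accessed
    body: Mapped[str] = mapped_column(deferred=True)
    created_at: Mapped[datetime]

With plain Column, the same behavior needs the separate sqlalchemy.orm.deferred() wrapper around the Column object — exactly the kind of ORM-only concern that mapped_column() was introduced to absorb.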
41
50
76,500,503
2023-6-18
https://stackoverflow.com/questions/76500503/how-i-can-unpack-typing-typevartuple-with-pattern
How do I get the type tuple[A[int], A[int]] for b.b?

import typing

T = typing.TypeVar('T')
Ts = typing.TypeVarTuple('Ts')


class A(typing.Generic[T]):
    a: T


class B(typing.Generic[*Ts]):
    b: tuple[*A[Ts]]


b: B[int, int] = B()
typing.reveal_type(b.b)  # what I got:  tuple[A[*tuple[int, int]]]
                         # what I need: tuple[A[int], A[int]]

I tried something like tuple[*(A[T1] for T1 in Ts)], but that doesn't work either.

UPD: I want it to behave like this C++ code:

#include <iostream>
#include <tuple>
#include <typeinfo>

template<typename T>
struct A {
    T a;
};

template<typename... Ts>
struct B {
    std::tuple<A<Ts>...> b;
};

int main() {
    B<int, int> b;
    std::cout << typeid(b.b).name() << std::endl;  // std::__1::tuple<A<int>, A<int>>
    return 0;
}
As of now (Python 3.12) this is not possible. An issue has been opened on the python/typing GitHub tracker, and I hope support for it gets added. My suggestion for what it could look like — note that the reveal_type comment shows the desired result, not what type checkers report today:

import typing

T = typing.TypeVar('T')
Ts = typing.TypeVarTuple('Ts')


class A[T]:
    a: T


class B[*Ts]:
    b: tuple[*A[Ts]]


b: B[int, int] = B()
typing.reveal_type(b.b)  # desired: tuple[A[int], A[int]]

A use case where this kind of mapping over a TypeVarTuple would matter, for example a wrapper parametrized by the types it serializes:

class SerializationWrapper[*Ts]:

    def __init__(
        self,
        types: *type[Ts],
        additional_encoding_hooks: Callable | None = None,
        additional_decoding_hooks: Callable | None = None,
    ):
        self._types: *type[Ts] = types
        # create stateful en/decoders, register these types, and raise if the types aren't understood or
        # if the types have ambiguous overlap from the perspective of encoding or decoding
        # after the registration of optional hooks

    def encode(obj: *type[Ts]) -> bytes:
        ...

    def decode(bytes) -> Union[*Ts]:
        ...
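Until something like that is supported, a sketch of a workaround that type checkers do accept today, at the cost of per-element precision — it reuses the classes from the question and only loosens the annotation on b:

import typing
from typing import Any

T = typing.TypeVar('T')
Ts = typing.TypeVarTuple('Ts')


class A(typing.Generic[T]):
    a: T


class B(typing.Generic[*Ts]):
    # Accepted by type checkers today, but only as a homogeneous tuple of A[Any],
    # not the precise tuple[A[int], A[int]]
    b: tuple[A[Any], ...]


b: B[int, int] = B()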
3
0
76,491,448
2023-6-16
https://stackoverflow.com/questions/76491448/runtimeerror-tk-h-version-8-5-doesnt-match-libtk-a-version-8-6
I'm getting an error while using tkinter. I have installed Python using pyenv.

>>> import tkinter
>>> tkinter._test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/hrj/.pyenv/versions/3.8.16/lib/python3.8/tkinter/__init__.py", line 4557, in _test
    root = Tk()
  File "/Users/hrj/.pyenv/versions/3.8.16/lib/python3.8/tkinter/__init__.py", line 2272, in __init__
    self._loadtk()
  File "/Users/hrj/.pyenv/versions/3.8.16/lib/python3.8/tkinter/__init__.py", line 2288, in _loadtk
    raise RuntimeError("tk.h version (%s) doesn't match libtk.a version (%s)"
RuntimeError: tk.h version (8.5) doesn't match libtk.a version (8.6)

The error is similar to this question, but my tk.h version is lower. Is there any way to upgrade it? tkinter._test() should display a test window, but instead I get this error. On Stack Overflow and GitHub, people usually have a lower version of libtk.a, but mine is the reverse, so those solutions won't work.
This worked in Python versions 3.7 to 3.11. brew update brew install tcl-tk echo 'export PATH="/usr/local/opt/tcl-tk/bin:$PATH"' >> ~/.zshrc export LDFLAGS="-L/usr/local/opt/tcl-tk/lib" export CPPFLAGS="-I/usr/local/opt/tcl-tk/include" export PKG_CONFIG_PATH="/usr/local/opt/tcl-tk/lib/pkgconfig" brew install pyenv --head python --version >> py_ver sed -i '' "s/Python //g" py_ver export PY_VER="$( cat py_ver )" env \ PATH="$(brew --prefix tcl-tk)/bin:$PATH" \ LDFLAGS="-L$(brew --prefix tcl-tk)/lib" \ CPPFLAGS="-I$(brew --prefix tcl-tk)/include" \ PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig" \ CFLAGS="-I$(brew --prefix tcl-tk)/include" \ PYTHON_CONFIGURE_OPTS="--with-tcltk-includes='-I$(brew --prefix tcl-tk)/include' --with-tcltk-libs='-L$(brew --prefix tcl-tk)/lib -ltcl8.6 -ltk8.6'" \ pyenv install $PY_VER pyenv global $PY_VER I have tried it on Github Actions.
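One caveat if you are on an Apple Silicon Mac: Homebrew installs under /opt/homebrew rather than /usr/local, so the hard-coded /usr/local/opt/tcl-tk paths in the first few lines won't exist there. A small variation that should work on either architecture is to let brew report its own prefix, as the later lines already do:

echo 'export PATH="$(brew --prefix tcl-tk)/bin:$PATH"' >> ~/.zshrc
export LDFLAGS="-L$(brew --prefix tcl-tk)/lib"
export CPPFLAGS="-I$(brew --prefix tcl-tk)/include"
export PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig"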
3
2
76,480,902
2023-6-15
https://stackoverflow.com/questions/76480902/playwright-install-deps-fails-in-dockerfile
I have a small application that uses playwright to scrape data from various websites. The application is Dockerized well and everything worked perfectly until I tried to re-build the Docker image (nothing really changed in the code) and it failed to install the playwright deps (like it used to before). This is the Dockerfile: FROM python:3.9-slim COPY ../../requirements/dev.txt ./ RUN python3 -m ensurepip RUN pip install -r dev.txt RUN playwright install RUN playwright install-deps ENV PYTHONPATH "${PYTHONPATH}:/app/" WORKDIR /code/src EXPOSE 8000 COPY ./src /app CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"] This is the requirements: fastapi>=0.85.0 uvicorn>=0.18.3 bs4==0.0.1 playwright This is the error message: => ERROR [6/8] RUN playwright install-deps 4.1s ------ > [6/8] RUN playwright install-deps: #10 0.762 BEWARE: your OS is not officially supported by Playwright; installing dependencies for Ubuntu as a fallback. #10 0.762 Installing dependencies... #10 1.084 Get:1 http://deb.debian.org/debian bookworm InRelease [147 kB] #10 1.269 Get:2 http://deb.debian.org/debian bookworm-updates InRelease [52.1 kB] #10 1.338 Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB] #10 1.407 Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8904 kB] #10 2.278 Get:5 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [24.2 kB] #10 3.063 Fetched 9176 kB in 2s (4021 kB/s) #10 3.063 Reading package lists... #10 3.474 Reading package lists... #10 3.868 Building dependency tree... #10 3.969 Reading state information... #10 3.972 Package ttf-ubuntu-font-family is not available, but is referred to by another package. #10 3.972 This may mean that the package is missing, has been obsoleted, or #10 3.972 is only available from another source #10 3.972 #10 3.972 Package libjpeg-turbo8 is not available, but is referred to by another package. #10 3.972 This may mean that the package is missing, has been obsoleted, or #10 3.972 is only available from another source #10 3.972 #10 3.972 Package ttf-unifont is not available, but is referred to by another package. #10 3.972 This may mean that the package is missing, has been obsoleted, or #10 3.972 is only available from another source #10 3.972 However the following packages replace it: #10 3.972 fonts-unifont #10 3.972 #10 3.972 Package xfonts-cyrillic is not available, but is referred to by another package. #10 3.972 This may mean that the package is missing, has been obsoleted, or #10 3.972 is only available from another source #10 3.972 #10 3.974 E: Package 'ttf-unifont' has no installation candidate #10 3.974 E: Package 'xfonts-cyrillic' has no installation candidate #10 3.974 E: Package 'ttf-ubuntu-font-family' has no installation candidate #10 3.974 E: Unable to locate package libx264-155 #10 3.974 E: Unable to locate package libenchant1c2a #10 3.974 E: Unable to locate package libicu66 #10 3.974 E: Package 'libjpeg-turbo8' has no installation candidate #10 3.974 E: Unable to locate package libvpx6 #10 3.974 E: Unable to locate package libwebp6 #10 3.975 Failed to install browser dependencies #10 3.975 Error: Installation process exited with code: 100 ------ executor failed running [/bin/sh -c playwright install-deps]: exit code: 1 The command I'm running is 'docker-compose build'. Hope someone could help, Thanks.
Solution: I changed the base image to a newer version and now it is working properly. FROM python:3.10.7 instead of: FROM python:3.9-slim
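For reference, this is the Dockerfile from the question with only the base image swapped (everything else left exactly as posted):

FROM python:3.10.7

COPY ../../requirements/dev.txt ./
RUN python3 -m ensurepip
RUN pip install -r dev.txt
RUN playwright install
RUN playwright install-deps

ENV PYTHONPATH "${PYTHONPATH}:/app/"
WORKDIR /code/src
EXPOSE 8000
COPY ./src /app

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]

The likely underlying cause, visible in the error log, is that the python:3.9-slim tag had moved to Debian bookworm, which that Playwright release did not recognize, so it fell back to an Ubuntu package list whose names (ttf-unifont, libjpeg-turbo8, and so on) don't exist under bookworm; pinning a base image that Playwright's dependency installer understands avoids that.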
4
2
76,517,720
2023-6-20
https://stackoverflow.com/questions/76517720/langchain-pandas-agent-unable-to-run-pandas-commands
I'm trying to use langchain's pandas agent on python for some development work but it goes into a recursive loop due to it being unable to take action on a thought, the thought being, having to run some pandas code to continue the thought process for the asked prompt on some sales dataset (sales.csv). here is the below code import os os.environ['OPENAI_API_KEY'] = 'sk-xxx' from langchain.agents import create_pandas_dataframe_agent from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI import pandas as pd df = pd.read_csv('sales.csv') llm = ChatOpenAI(temperature=0.0,model_name='gpt-3.5-turbo') pd_agent = create_pandas_dataframe_agent(llm, df, verbose=True) pd_agent.run("what is the mean of the profit?") and well the response it gives is as below (i replaced ``` with ----) > Entering new chain... Thought: We need to calculate the profit first by subtracting the cogs from the total, and then find the mean of the profit. Action: Calculate the profit and find the mean using pandas. Action Input: ---- df['Profit'] = df['Total'] - df['cogs'] df['Profit'].mean() ---- Observation: Calculate the profit and find the mean using pandas. is not a valid tool, try another one. Thought:I need to use python_repl_ast to execute the code. Action: Calculate the profit and find the mean using pandas. Action Input: `python_repl_ast` ---- df['Profit'] = df['Total'] - df['cogs'] df['Profit'].mean() ---- Observation: Calculate the profit and find the mean using pandas. is not a valid tool, try another one. Thought:I need to use `python` instead of `python_repl_ast`. Action: Calculate the profit and find the mean using pandas. Action Input: `python` ---- import pandas as pd df = pd.read_csv('filename.csv') df['Profit'] = df['Total'] - df['cogs'] df['Profit'].mean() ---- . . . . . . Observation: Calculate the profit and find the mean using pandas. is not a valid tool, try another one. Thought: > Finished chain. 'Agent stopped due to iteration limit or time limit.' Now my question is why is it not using the python_repl_ast tool to do the calculation? I even changed this agent's tool's description (python_repl_ast ) which was A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer. into A Python shell. Use this to execute python commands and profit, mean calculation using pandas. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer. But it did not help. Also i noticed when the python_repl_ast is initialized into my agent the dataframe is loaded into it's local variables tools = [PythonAstREPLTool(locals={"df": df})] so I'm guessing I'm doing something wrong. Any help will be greatly appreciated. Thank you.
Most of the information you provide to LangChain agents ends up as prompt context for the model. Appending a hint that names the tool steers the agent toward actually using it:

query = "what is the mean of the profit?"
query = query + " using tool python_repl_ast"

pd_agent.run(query)

It worked for me.
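Putting that together with the setup from the question (this is just the question's script with the query tweak applied — same CSV, model, and agent):

import os
os.environ['OPENAI_API_KEY'] = 'sk-xxx'

import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.chat_models import ChatOpenAI

df = pd.read_csv('sales.csv')
llm = ChatOpenAI(temperature=0.0, model_name='gpt-3.5-turbo')
pd_agent = create_pandas_dataframe_agent(llm, df, verbose=True)

# Nudge the agent toward the python_repl_ast tool it already has
query = "what is the mean of the profit?" + " using tool python_repl_ast"
pd_agent.run(query)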
4
4