question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
78,511,875 | 2024-5-21 | https://stackoverflow.com/questions/78511875/stripe-promocode-how-to-verify-that-user-have-redeemed-the-promocode-before-or | I have a Django project configured with Stripe for payments. I have created promocodes for discounts, which can only be redeemed once per customer. The problem is that I want to validate whether a given customer has already redeemed a specific promocode. If they haven't redeemed it, I want to proceed to the next step; otherwise, I want to stop the customer at that point. I have retrieve the promocode but it returns the following values: { "active": true, "code": "test10", "coupon": { "amount_off": null, "created": 1716289929, "currency": "usd", "duration": "once", "duration_in_months": null, "id": "test10", "livemode": false, "max_redemptions": null, "metadata": {}, "name": null, "object": "coupon", "percent_off": 10.0, "redeem_by": null, "times_redeemed": 1, "valid": true }, "created": 1716289930, "customer": null, "expires_at": 1716595200, "id": "promo_1PIqc2B67hjAZnF4umv6pZCP", "livemode": false, "max_redemptions": null, "metadata": {}, "object": "promotion_code", "restrictions": { "first_time_transaction": true, "minimum_amount": null, "minimum_amount_currency": null }, "times_redeemed": 1 } I am not able to find an API that validates whether a customer has already redeemed a given promocode. I'm expecting a method or API endpoint in Stripe that allows me to check if a specific customer (customer ID) has redeemed a specific promocode. I have created The promocode which is only be redeemed once per customer. | You would need to either: create individual one-time Promotion Code objects for each customer, and then list them by customer and code (in case you want to use the same code) to check if it was used, or update Customer's metadata field, to record if a given promo code was used. | 3 | 2 |
78,502,897 | 2024-5-19 | https://stackoverflow.com/questions/78502897/reduce-the-sum-of-differences-between-adjacent-array-elements | I came across a coding challenge on the internet the question is listed below: Have the function FoodDistribution(arr) read the array of numbers stored in arr which will represent the hunger level of different people ranging from 0 to 5 (0 meaning not hungry at all, 5 meaning very hungry). You will also have N sandwiches to give out which will range from 1 to 20. The format of the array will be [N, h1, h2, h3, ...] where N represents the number of sandwiches you have and the rest of the array will represent the hunger levels of different people. Your goal is to minimize the hunger difference between each pair of people in the array using the sandwiches you have available. For example: if arr is [5, 3, 1, 2, 1], this means you have 5 sandwiches to give out. You can distribute them in the following order to the people: 2, 0, 1, 0. Giving these sandwiches to the people their hunger levels now become: [1, 1, 1, 1]. The difference between each pair of people is now 0, the total is also 0, so your program should return 0. Note: You may not have to give out all, or even any, of your sandwiches to produce a minimized difference. Another example: if arr is [4, 5, 2, 3, 1, 0] then you can distribute the sandwiches in the following order: [3, 0, 1, 0, 0] which makes all the hunger levels the following: [2, 2, 2, 1, 0]. The differences between each pair of people is now: 0, 0, 1, 1 and so your program should return the final minimized difference of 2. My first approach was to try to solve it greedily as the following: Loop until the sandwiches are zero For each element in the array copy the array and remove one hunger at location i Get the best combination that will give you the smallest hunger difference Reduce the sandwiches by one and consider the local min as the new hunger array Repeat until sandwiches are zero or the hunger difference is zero I thought when taking the local minimum it led to the global minimum which was wrong based on the following use case [7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5] def FoodDistribution(arr): sandwiches = arr[0] hunger_levels = arr[1:] # Function to calculate the total difference def total_difference(hunger_levels): return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1)) def reduce_combs(combs): local_min = float('inf') local_min_comb = None for comb in combs: current_difference = total_difference(comb) if current_difference < local_min: local_min = current_difference local_min_comb = comb return local_min_comb # Function to distribute sandwiches def distribute_sandwiches(sandwiches, hunger_levels): global_min = total_difference(hunger_levels) print(global_min) while sandwiches > 0 and global_min > 0: combs = [] for i in range(len(hunger_levels)): comb = hunger_levels[:] comb[i] -= 1 combs.append(comb) local_min_comb = reduce_combs(combs) x = total_difference(local_min_comb) print( sandwiches, x, local_min_comb) global_min = min(global_min, x) hunger_levels = local_min_comb sandwiches -= 1 return global_min # Distribute sandwiches and calculate the minimized difference global_min = distribute_sandwiches(sandwiches, hunger_levels) return global_min if __name__ == "__main__": print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5])) I changed my approach to try to brute force and then use memorization to optimize the time complexity Recurse until out of bounds or sandwiches are zero For each location 
there are two options either to use a sandwich or ignore When the option is to use a sandwich decrement sandwiches by one and stay at the same index. When the option is to ignore increment the index by one. Take the minimum between the two options and return it. The issue here is that I didn't know what to store in the memo and storing the index and sandwiches is not enough. I am not sure if this problem has a better complexity than 2^(n+s). Is there a way to know if dynamic programming or memorization is not the way to solve the problem and in this case can I improve the complexity by memorization or does this problem need to be solved with a different approach? def FoodDistribution(arr): sandwiches = arr[0] hunger_levels = arr[1:] # Distribute sandwiches and calculate the minimized difference global_min = solve(0, sandwiches, hunger_levels) return global_min def solve(index, sandwiches, hunger_levels): if index >= len(hunger_levels) or sandwiches == 0: return total_difference(hunger_levels) # take a sandwich hunger_levels[index] += -1 sandwiches += -1 minTake = solve(index, sandwiches, hunger_levels) hunger_levels[index] += 1 sandwiches += 1 # dont take sandwich dontTake = solve(index + 1, sandwiches, hunger_levels) return min(minTake, dontTake) def total_difference(hunger_levels): return sum(abs(hunger_levels[i] - hunger_levels[i + 1]) for i in range(len(hunger_levels) - 1)) if __name__ == "__main__": print(FoodDistribution([7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5])) Edit: Multiple states will give you the optimal answer for the use case above sandwiches = 7 hunger = [5, 4, 3, 4, 5, 2, 3, 1, 4, 5] optimal is 6 states as follow [3, 3, 3, 3, 3, 2, 2, 1, 4, 5] [4, 3, 3, 3, 3, 2, 2, 1, 4, 4] [4, 4, 3, 3, 2, 2, 2, 1, 4, 4] [4, 4, 3, 3, 3, 2, 1, 1, 4, 4] [4, 4, 3, 3, 3, 2, 2, 1, 3, 4] [4, 4, 3, 3, 3, 2, 2, 1, 4, 4] [5, 4, 3, 3, 3, 2, 2, 1, 3, 3] Note: I accepted @Matt Timmermans answer as it provides the best time complexity n and nlogn. But the two other answer are amazing and good to understand and be able to implement the solution using dynamic programming or memorization. Personally I prefer the memorization version expected time complexity is snh where h is the max hunger level in the array. | The sum of the absolute differences only goes down when you reduce a local maximum. If you reduce a maximum on either end, the sum of differences goes down by one, like [3,2,1] -> [2,2,1]. If you reduce a maximum in the middle, the sum of differences goes down by two, like [1,3,2] -> [1,2,2]. If a maximum gets reduced, it may merge into another maximum that you can reduce, but the new maximum will never be cheaper or more cost effective. It can only get wider, like [1,3,2] -> [1,2,2]. The optimal strategy is, therefore, just to repeatedly reduce the most cost-effective maximum, in terms of benefit/width, that you have enough sandwiches to reduce. benefit is 1 for maximums on the ends or 2 for maximums in the middle. Stop when you no longer have enough sandwiches to reduce the narrowest maximum. You can do this in O(n) time by finding all the maximums and keeping them in a priority queue to process them in the proper order as they are reduced. O(n log n) is easy. In order to make that O(n) bound, you'll need to use a counting-sort-type priority queue instead of a heap. You also need to be a little clever about keeping track of the regions of the array that are known to be at the same height so you can merge them in constant time. 
Here's an O(n) implementation in python def distribute(arr): foodLeft = arr[0] hungers = arr[1:] # For each element in hungers, calculate number of adjacent elements at same height spans = [1] * len(hungers) for i in range(1, len(hungers)): if hungers[i-1]==hungers[i]: spans[i] = spans[i-1]+1 for i in range(len(hungers)-2, -1, -1): if hungers[i+1]==hungers[i]: spans[i] = spans[i+1] # spans are identified by their first element. Only the counts and hungers on the edges need to be correct # if a span is a maximum, it's height. Otherwise 0 def maxHeight(left): ret = len(spans) if left > 0: ret = min(ret, hungers[left] - hungers[left-1]) right = left + spans[left]-1 if right < len(spans)-1: ret = min(ret, hungers[right] - hungers[right+1]) return max(ret,0) # change the height of a span and return the maybe new span that it is a part of def reduce(left, h): right = left + spans[left] - 1 hungers[left] -= h hungers[right] = hungers[left] if right < len(spans)-1 and hungers[right+1] == hungers[right]: # merge on the right w = spans[right+1] spans[right] = spans[right+1] = 0 # for debuggability right += w if left > 0 and hungers[left-1] == hungers[left]: # merge on left w = spans[left-1] spans[left] = spans[left-1] = 0 # for debuggability left -= w spans[left] = spans[right] = right - left + 1 return left def isEdge(left): return left < 1 or left + spans[left] >= len(spans) # constant-time priority queue for non-edge spans # it's just a list of spans per width pq = [[] for _i in range(len(spans)+1)] # populate priority queue curspan = 0 while curspan < len(spans): width = spans[curspan] if maxHeight(curspan) > 0 and not isEdge(curspan): pq[width].append(curspan) curspan += width # this will be True at the end if we can sacrifice one edge max selection to get one # mid max selection, which would produce one more point canBacktrack = False # process mid spans in order curpri = 1 # while not all hungers are the same while spans[0] < len(spans): # find the best middle maximum bestmid = None midwidth = None if curpri < len(pq) and curpri <= foodLeft: if len(pq[curpri]) == 0: curpri += 1 continue bestmid = pq[curpri][-1] midwidth = spans[bestmid] # find the best edge maximum bestedge = None edgewidth = None if maxHeight(0) > 0 and foodLeft >= spans[0]: bestedge = 0 edgewidth = spans[0] r = len(spans)-spans[-1] if maxHeight(r) > 0 and foodLeft >= spans[r] and (bestedge == None or spans[r] < edgewidth): bestedge = r edgewidth = spans[r] # choose bestspan = None h = 0 if bestedge == None: if bestmid == None: break bestspan = bestmid bestwidth = midwidth canBacktrack = False elif bestmid == None: bestspan = bestedge bestwidth = edgewidth canBacktrack = False elif midwidth <= edgewidth*2: # mid maximum is more cost effective # OR choo bestspan = bestmid bestwidth = midwidth canBacktrack = False else: bestspan = bestedge bestwidth = edgewidth # tentative canBacktrack = True if bestspan == bestmid: # chose the middle span -- remove from pq pq[curpri].pop() # how much we can reduce this maxium by h = min(foodLeft//bestwidth, maxHeight(bestspan)) foodLeft -= bestwidth*h canBacktrack = canBacktrack and foodLeft < midwidth and foodLeft + edgewidth >= midwidth bestspan = reduce(bestspan, h) if maxHeight(bestspan) > 0 and not isEdge(bestspan): pq[spans[bestspan]].append(bestspan) # finally, calculate the new total diffs totaldiff = 0 curspan = spans[0] while curspan < len(spans): totaldiff += abs(hungers[curspan] - hungers[curspan-1]) curspan += spans[curspan] if canBacktrack: totaldiff -= 1 return totaldiff # test 
cases = [ [8, 11, 14, 15, 16, 13, 2, 3], [7, 5, 4, 3, 4, 5, 2, 3, 1, 4, 5], [2, 4, 4, 3, 4, 5], [3, 3, 4, 4, 4, 3, 4], [4, 3, 4, 4, 4, 3, 5], [5, 3, 4, 4, 4, 3, 6], [3, 3, 4, 4, 3, 4, 5] ] for case in cases: print("{0}: {1}".format(case, distribute(case))) | 5 | 2 |
78,508,381 | 2024-5-20 | https://stackoverflow.com/questions/78508381/pivot-dataframe-values-and-its-unit | I have the input dataframe in the following format: id city state date param value unit 1 Phoenix AZ 4-21-2024 temp 100 F 2 Phoenix AZ 4-21-2024 prec 0 mm 3 Phoenix AZ 4-21-2024 wind 2 mph 4 Phoenix AZ 4-20-2024 temp 101 F 5 Phoenix AZ 4-20-2024 prec 0 NaN 6 Phoenix AZ 4-20-2024 wind 4 mph 7 Seattle WA 4-20-2024 temp 82 F 8 Seattle WA 4-20-2024 prec 3 mm 9 Seattle WA 4-20-2024 wind 5 mph I want this data pivoted into this format: id city state date temp prec wind temp_unit prec_unit wind_unit 1 Phoenix AZ 4-21-2024 100 0 2 F mm mph 2 Phoenix AZ 4-20-2024 101 0 4 F mph 3 Seattle WA 4-20-2024 82 3 5 F mm mph How do I pivot the dataframe on param and get the value and its corresponding unit? | You can pivot, flatten the MultiIndex, then reset_index: out = df.pivot(index=['city', 'state', 'date'], columns='param', values=['value', 'unit']) out.columns = out.columns.map(lambda x: f'{x[0]}_{x[1]}' if x[0]!='value' else x[1]) out.reset_index(inplace=True) Output: city state date prec temp wind unit_prec unit_temp unit_wind 0 Phoenix AZ 4-20-2024 0 101 4 NaN F mph 1 Phoenix AZ 4-21-2024 0 100 2 mm F mph 2 Seattle WA 4-20-2024 3 82 5 mm F mph | 2 | 2 |
78,505,563 | 2024-5-20 | https://stackoverflow.com/questions/78505563/is-it-possible-to-use-docker-compose-watch-with-an-image-created-by-earthfile | So I have this simple code to run. import streamlit as st st.title("hellobored") I created a docker image of this code with a simple earthfile with the command earthly +all VERSION 0.8 streamlit: FROM python:3.11-slim WORKDIR /src/app RUN apt-get update && apt-get install -y curl RUN python -m pip install --upgrade pip RUN pip install streamlit COPY ./hellobored.py /src/app/hellobored.py RUN ls EXPOSE 8501 ENTRYPOINT ["streamlit", "run", "/src/app/hellobored.py", "--server.port=8501"] SAVE IMAGE hellobored all: BUILD +streamlit I have a Docker Compose file that looks like this: version: '3.8' services: streamlit: image: hellobored ports: - "8501:8501" develop: watch: - path: ./ action: rebuild - path: ./ target: /src/app/ action: sync I already tried using rebuild or sync separately, but both don't seem to work. I used docker compose up -d and docker compose watch to see the watch-related problem and received this error for the rebuild: [+] Running 1/0 ✔ Container hellobored-streamlit-1 Running 0.0s can't watch service "streamlit" with action rebuild without a build context And this error for the sync: [+] Running 1/0 ✔ Container hellobored-streamlit-1 Running 0.0s none of the selected services is configured for watch, consider setting an 'develop' section I can't find a way to give Docker Compose a build context for Earthly on the net, so I'm asking here. For note, I need the build to be done by Earthly because the images I create are 10GB, and Earthly can compile them in seconds. I need to watch because it would be really long to rebuild and reload manually after every change. | In your initial setup, the docker-compose-watch command failed because there was no build context specified. Docker Compose requires a build context to track changes and trigger rebuilds. Without a build context, the watch command cannot determine which files to monitor for changes. build: context: . | 2 | 1 |
78,507,768 | 2024-5-20 | https://stackoverflow.com/questions/78507768/create-new-dataframe-from-aggregate-of-original-df-and-calculations | I want to create a new dataframe from my original dataframe that aggregates by two columns and has a calculated column dependent on the sum of selected rows from two other columns. Here is a sample df: df = pd.DataFrame([['A','X',2000,5,3],['A','X',2001,6,2],['B','X',2000,6,3],['B','X',2001,7,2],['C','Y',2000,10,4],['C','Y',2001,12,4],['D','Y',2000,11,2],['D','Y',2001,15,1]], columns=['ctry','rgn','year','val1','val2']) and what it looks like: ctry rgn year val1 val2 0 A X 2000 5 3 1 A X 2001 6 2 2 B X 2000 6 3 3 B X 2001 7 2 4 C Y 2000 10 4 5 C Y 2001 12 4 6 D Y 2000 11 2 7 D Y 2001 15 1 I ultimately want a new dataframe that gets rid of the ctry column and groups by the rgn and year, and has a calculated column value dependent on val1 and val2 such that the sum of the product of val1 and val2 is divided by the sum of val2 for each rgn and year: df['value'] = Σ(val1*val2)/Σval2 for each rgn and year rgn year value 0 X 2000 5.5 1 X 2001 6.5 2 Y 2000 10.333333 3 Y 2001 12.6 I ended up successfully doing so: df['calc'] = df['val1'] * df['val2'] new_df = df.groupby(['rgn', 'year']).sum() new_df['value'] = new_df['calc']/new_df['val2'] new_df = new_df.reset_index().rename_axis(None, axis=1) new_df = new_df.drop(columns=['ctry', 'val1', 'val2', 'calc']) However, I'd like to know if there is a more succinct way that doesn't require all these steps, perhaps using a lambda function. Appreciate any help I get. Thanks! | Pre-compute value = val1*val2, perform a groupby.sum, then compute value/val2: out = (df.eval('value = val1*val2') .groupby(['rgn', 'year'], as_index=False) [['value', 'val2']].sum() .assign(value=lambda x: x['value']/x.pop('val2')) ) Less efficient, but more flexible, you can also use groupby.apply: out = (df.groupby(['rgn', 'year']) .apply(lambda x: (x['val1']*x['val2']).sum()/x['val2'].sum(), include_groups=False) .reset_index(name='value') ) Output: rgn year value 0 X 2000 5.500000 1 X 2001 6.500000 2 Y 2000 10.333333 3 Y 2001 12.600000 | 4 | 3 |
78,506,798 | 2024-5-20 | https://stackoverflow.com/questions/78506798/keep-build-a-map-when-doing-multiple-pandas-groupby-operations | Imagine a process, where we do several pandas groupbys. We start with a df like so: import numpy as np import pandas as pd np.random.seed(1) df = pd.DataFrame({ 'id': np.arange(10), 'a': np.random.randint(1, 10, 10), 'b': np.random.randint(1, 10, 10), 'c': np.random.randint(1, 10, 10) }) df Out[23]: id a b c 0 0 2 8 2 1 1 8 8 9 2 2 7 2 9 3 3 3 8 4 4 4 5 1 9 5 5 6 7 8 6 6 3 8 4 7 7 5 7 7 8 8 3 2 6 9 9 5 1 2 We perform a groupby 'a' (The 'max' agg is just an example) new_df = df.groupby('a').agg('max').reset_index() Out[25]: a id b c 0 2 0 8 2 1 3 8 8 6 2 5 9 7 9 3 6 5 7 8 4 7 2 2 9 5 8 1 8 9 And I want to keep track of the original id to which group it belongs. For example, id 0 belongs to a = 2, 1 to 8, 2 to 7, (3, 6, 8) belongs to 3 etc.. Afterword we perform another groupby: new_df.groupby('b').agg('max').reset_index() Out[28]: b a id c 0 2 7 2 9 1 7 6 9 9 2 8 8 8 9 Now we have a continued mapping, Where group a (2, 3, 8) belongs to group 8 (of b) (5, 6) = 7 7 = 2 And this result in a long map where the original id: 0 => a = 2 => b = 8 (where b = 8 is the final group that interests me) 1 => a = 8 => b = 8 2 => a = 7 => b = 2 And so on.. Now I do this in order to reduce a lot of entities in my data so that I can group them in the same bucket somehow. And I need to map them from their original id to a new id, that is being represented after many iterations of groupby. In the end I want to see something like so Out[32]: id grp 0 0 8 1 1 8 2 2 2 3 3 8 4 4 7 5 5 7 6 6 8 7 7 7 8 8 8 9 9 7 again, because id=0 went to a=2 and a=2 went to b=8.. Any solution or suggestion will be most welcome. Even carrying the values in a column dedicated to this, with each group by. And when we aggregate, we can do a set addition... | This looks to me like a graph problem. You could build a directed graph from successive groupby using networkx, then pair the roots/leaves of each subgraph: import networkx as nx G = nx.DiGraph() ref = 'id' cols = ['a', 'b'] for col in cols: tmp = df.groupby(ref, as_index=False)[[ref, col]].max() G.add_edges_from((((ref, r), (col, x)) for r, x in zip(tmp[ref], tmp[col]))) ref = col out = [] for c in nx.weakly_connected_components(G): H = G.subgraph(c) roots = [n for n, d in H.in_degree() if d==0] leaves = [n for n, d in H.out_degree() if d==0] out.extend([dict((r, l)) for r in roots for l in leaves]) out = (pd.DataFrame(out) .sort_values(by='id', ignore_index=True) ) Output: id b 0 0 8 1 1 8 2 2 2 3 3 8 4 4 7 5 5 7 6 6 8 7 7 7 8 8 8 9 9 7 Graph: | 2 | 1 |
78,505,815 | 2024-5-20 | https://stackoverflow.com/questions/78505815/polars-apply-by-several-columns | I have a Polars dataframe with a lot of columns. I need to create a new column with the result of my own function df = pl.DataFrame({ 'Stock_name': ['reserve', 'fullfilment', 'ntl', 'ntl'], 'doc_type': ['sales', 'moving', 'sales', 'corr'], }) def doc_check(row): if (row['Stock_name'] == 'reserve') or (row['Stock_name'] == 'ntl'): if row['doc_type'] == 'sales': doc = 'sales' else: doc = row['doc_type'] else: doc = row['doc_type'] return doc df = df.apply(doc_check).alias('doc_type_new') df['doc_type_new'] = df.apply(doc_check) But I received an error "TypeError: tuple indices must be integers or slices, not str". Where is the error in my code? | You can use pl.struct to extract the two columns and then use map_elements to map over it: df = df.with_columns( doc_type_new=pl.struct("Stock_name", "doc_type").map_elements( doc_check, return_dtype=pl.String ) ) print(df) Output: shape: (4, 3) ┌─────────────┬──────────┬──────────────┐ │ Stock_name │ doc_type │ doc_type_new │ │ --- │ --- │ --- │ │ str │ str │ str │ ╞═════════════╪══════════╪══════════════╡ │ reserve │ sales │ sales │ │ fullfilment │ moving │ moving │ │ ntl │ sales │ sales │ │ ntl │ corr │ corr │ └─────────────┴──────────┴──────────────┘ Note that in this case, your function can be implemented using the native polars API which will run much faster: df = df.with_columns( doc_type_new=pl.when( pl.col("Stock_name").is_in(["reserve", "ntl"]) & (pl.col("doc_type") == "sales") ) .then(pl.lit("sales")) .otherwise(pl.col("doc_type")) ) print(df) (Same output.) | 2 | 1 |
78,505,196 | 2024-5-20 | https://stackoverflow.com/questions/78505196/syntaxerror-cannot-assign-to-function-call-here-maybe-you-meant-when-usin | I am creating an AI agent in my Jupyter Notebook on Anaconda. When I enter the following settings I get an error. import os os.environ("OPENAI_API_KEY")=openai_api_key os.environ("OPENAI_MODEL_NAME")="gpt-3.5-turbo" Cell In[47], line 1 os.environ("OPENAI_API_KEY")=openai_api_key ^ SyntaxError: cannot assign to function call here. Maybe you meant '==' instead of '='? This seems to be a standard statement that works on lightning.ai or even the VS Code editor, but why doesn't it work on my Jupyter notebook running on localhost? I am running this on my MacBook. Next I was planning to create a few agents to create a bot to read and respond to a blog. | Use square brackets instead of parentheses. os.environ["OPENAI_API_KEY"]=openai_api_key os.environ["OPENAI_MODEL_NAME"]="gpt-3.5-turbo" It'll solve the issue. | 2 | 2 |
78,503,985 | 2024-5-19 | https://stackoverflow.com/questions/78503985/creating-a-categorical-data-from-two-columns-in-pandas | I'm trying to practice coding again and currently having problems with figuring out why my function doesn't work. Here's my sample Table for reference: diastolic systolic 80.0 130.0 77.0 126.0 92.0 152.0 76.0 147.0 70.0 127.0 64.0 119.0 72.0 135.0 84.0 137.0 85.0 165.0 81.0 156.0 and here's the function I created to categorize the data: def blood_pressure_cat(diastolic, systolic): """ diastolic: float systolic: float """ if systolic <= 90 and diastolic <= 60: return "Hypotension" elif systolic < 120 and diastolic < 80: return "Normal" elif systolic < 130 and diastolic < 80: return "Elevated" elif systolic < 140 and diastolic < 90: return "Stage 1" elif systolic < 180 and diastolic < 90: return "Stage 2" elif systolic >= 180 and diastolic >= 90: return "Hypertensive Crisis" else: return np.nan This is how I execute my function: blood_pressure_cat(DF["diastolic"], DF["systolic"]) And getting this error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() If I change the "and"s in that function I get this one instead: Cannot perform 'rand_' with a dtyped [float64] array and scalar of type [bool] Can somebody help me understand why it is not working or if there's any alternative way to run this code? | A very simple approach would be to vectorize your function, for example by decorating the function: import numpy as np @np.vectorize def blood_pressure_cat(diastolic, systolic): """ diastolic: float systolic: float """ if systolic <= 90 and diastolic <= 60: return "Hypotension" elif systolic < 120 and diastolic < 80: return "Normal" elif systolic < 130 and diastolic < 80: return "Elevated" elif systolic < 140 and diastolic < 90: return "Stage 1" elif systolic < 180 and diastolic < 90: return "Stage 2" elif systolic >= 180 and diastolic >= 90: return "Hypertensive Crisis" else: return np.nan blood_pressure_cat(DF["diastolic"], DF["systolic"]) Output: array(['Stage 1', 'Elevated', 'nan', 'Stage 2', 'Elevated', 'Normal', 'Stage 1', 'Stage 1', 'Stage 2', 'Stage 2'], dtype='<U8') Note that the returned NaNs will be strings. To get real NaNs, instead of a decorator, manually wrap the function and pass the otypes=[object] parameter: blood_pressure_cat = np.vectorize(blood_pressure_cat, otypes=[object]) blood_pressure_cat(DF["diastolic"], DF["systolic"]) Output: array(['Stage 1', 'Elevated', nan, 'Stage 2', 'Elevated', 'Normal', 'Stage 1', 'Stage 1', 'Stage 2', 'Stage 2'], dtype=object) As a new column: DF['cat'] = blood_pressure_cat(DF["diastolic"], DF["systolic"]) Also be aware that this vectorization is a wrapper around a python loop. Therefore you could also use pure python with zip: DF['cat'] = [blood_pressure_cat(d, s) for d,s in zip(DF['diastolic'], DF['systolic'])] This gives a similar result than using apply with axis=1 but avoids creating a Series for each row (which is expensive). Output: diastolic systolic cat 0 80.0 130.0 Stage 1 1 77.0 126.0 Elevated 2 92.0 152.0 NaN 3 76.0 147.0 Stage 2 4 70.0 127.0 Elevated 5 64.0 119.0 Normal 6 72.0 135.0 Stage 1 7 84.0 137.0 Stage 1 8 85.0 165.0 Stage 2 9 81.0 156.0 Stage 2 | 2 | 1 |
78,504,305 | 2024-5-20 | https://stackoverflow.com/questions/78504305/filtering-row-with-same-column-date-value | I have a datetime field, and I want to filter the rows that have the same date value. models.py class EntryMonitoring(models.Model): student = models.ForeignKey('Student', models.DO_NOTHING) clockin = models.DateTimeField() clockout = models.DateTimeField(null=True) views.py def check_attendance(request, nid): day = EntryMonitoring.objects.filter( clockout__isnull=False, '# same value', student=nid ).annotate(...) # do something I wanted to add inside that filter query that clockin__date has the same value as clockout__date. Is that possible? If it is, what would be the right query to filter it? | Yes, it is possible. You should use the TruncDate function: from django.db.models import F from django.db.models.functions import TruncDate entries = EntryMonitoring.objects.filter( clockout__isnull=False, student=nid, clockin__date=TruncDate(F("clockout")) ).annotate(...) | 2 | 3 |
78,503,255 | 2024-5-19 | https://stackoverflow.com/questions/78503255/how-can-i-get-the-left-edge-as-the-label-of-pandas-cut | This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'high': [110, 110, 101, 101, 115, 300], } ) And this is the expected output. Column bin should be created: high bin 0 110 105.0 1 110 105.0 2 101 100.0 3 101 100.0 4 115 111.0 5 300 220.0 Basically bin is created by using pd.cut: import numpy as np evaluation_bins = [100, 105, 111, 120, 220, np.inf] df['bin'] = pd.cut(df['high'], bins=evaluation_bins, include_lowest=True, right=False) This gives me the category itself but I want the left edge as the output. Honestly there was not much I could try to do. I could get the dtypes of df by df.dtypes but I don't know how to continue. | Pass evaluation_bins to labels inside pd.cut, without the final edge (np.inf): df['bin'] = pd.cut(df['high'], bins=evaluation_bins, labels=evaluation_bins[:-1], include_lowest=True, right=False) Output: high bin 0 110 105 1 110 105 2 101 100 3 101 100 4 115 111 5 300 220 | 4 | 3 |
78,502,175 | 2024-5-19 | https://stackoverflow.com/questions/78502175/search-for-text-in-a-very-large-txt-file-50gb | I have a hashes.txt file that stores strings and their compressed SHA-256 hash values. Each line in the file is formatted as follows: <compressed_hash>:<original_string> The compressed_hash is created by taking the 6th, 13th, 20th, and 27th characters of the full SHA-256 hash. For example, the string alon when hashed: 5a24f03a01d5b10cab6124f3c0e7086994ac9c869fc8e76e1463458f829fc864 would be stored as: 0db3:alon I have a search.py script that works like this For example, if the user inputs 5a24f03a01d5b10cab6124f3c0e7086994ac9c869fc8e76e1463458f829fc864 in search.py the script searches for its shortened form, 0db3 in hashes.txt. If multiple matches are found, like: 0db3:alon 0db3:apple The script rehashes the matches (alon, apple) to get their full SHA-256 hash, and if there is a match (eg. alon when fully hashed matches the user input (5a24f03a01d5b10cab6124f3c0e7086994ac9c869fc8e76e1463458f829fc864), the script prints the string (alon) The problem with this script is that it, the search usually takes around 1 hour, and my hashes.txt is 54GB. Here is the search.py: import hashlib import mmap def compress_hash(hash_value): return hash_value[6] + hash_value[13] + hash_value[20] + hash_value[27] def search_compressed_hash(hash_input, compressed_file): compressed_input = compress_hash(hash_input) potential_matches = [] with open(compressed_file, "r+b") as file: # Memory-map the file, size 0 means the whole file mmapped_file = mmap.mmap(file.fileno(), 0) # Read through the memory-mapped file line by line for line in iter(mmapped_file.readline, b""): line = line.decode().strip() parts = line.split(":", 1) # Split only on the first colon if len(parts) == 2: # Ensure there are exactly two parts compressed_hash, string = parts if compressed_hash == compressed_input: potential_matches.append(string) mmapped_file.close() return potential_matches def verify_full_hash(potential_matches, hash_input): for string in potential_matches: if hashlib.sha256(string.encode()).hexdigest() == hash_input: return string return None if __name__ == "__main__": while True: hash_input = input("Enter the hash (or type 'exit' to quit): ") if hash_input.lower() == 'exit': break potential_matches = search_compressed_hash(hash_input, "hashes.txt") found_string = verify_full_hash(potential_matches, hash_input) if found_string: print(f"Corresponding string: {found_string}") else: print("String not found for the given hash.") And, if it helps, here's the hash.py script that actually generates the strings and hashes and puts them in hashes.txt import hashlib import sys import time `# Set the interval for saving progress (in seconds) SAVE_INTERVAL = 60 # Save progress every minute BUFFER_SIZE = 1000000 # Number of hashes to buffer before writing to file def generate_hash(string): return hashlib.sha256(string.encode()).hexdigest() def compress_hash(hash_value): return hash_value[6] + hash_value[13] + hash_value[20] + hash_value[27] def write_hashes_to_file(start_length): buffer = [] # Buffer to store generated hashes last_save_time = time.time() # Store the last save time for generated_string in generate_strings_and_hashes(start_length): full_hash = generate_hash(generated_string) compressed_hash = compress_hash(full_hash) buffer.append((compressed_hash, generated_string)) if len(buffer) >= BUFFER_SIZE: save_buffer_to_file(buffer) buffer = [] # Clear the buffer after writing to file # Check if it's time to save 
progress if time.time() - last_save_time >= SAVE_INTERVAL: print("Saving progress...") save_buffer_to_file(buffer) # Save any remaining hashes in buffer buffer = [] # Clear buffer after saving last_save_time = time.time() # Save any remaining hashes in buffer if buffer: save_buffer_to_file(buffer) def save_buffer_to_file(buffer): with open("hashes.txt", "a") as file_hashes: file_hashes.writelines(f"{compressed_hash}:{generated_string}\n" for compressed_hash, generated_string in buffer) def generate_strings_and_hashes(start_length): for length in range(start_length, sys.maxsize): # Use sys.maxsize to simulate infinity current_string = [' '] * length # Initialize with spaces while True: yield ''.join(current_string) if current_string == ['z'] * length: # Stop when all characters reach 'z' break current_string = increment_string(current_string) def increment_string(string_list): index = len(string_list) - 1 while index >= 0: if string_list[index] == 'z': string_list[index] = ' ' index -= 1 else: string_list[index] = chr(ord(string_list[index]) + 1) break if index < 0: string_list.insert(0, ' ') return string_list def load_progress(): # You may not need this function anymore return 1 # Just return a default value if __name__ == "__main__": write_hashes_to_file(load_progress()) My OS is Windows 10. | You only have 65536 unique search keys. Why not just make 65536 files? You could use 256 directories with 256 files each to keep it manageable. Then you can entirely eliminate all of the lookup process. Bonus: your files are smaller because you don't need to store the key at all. You'll have to do some one-time processing to split your big hash file into smaller files, of course, but your lookups should be effectively instant. | 2 | 4 |
78,498,583 | 2024-5-18 | https://stackoverflow.com/questions/78498583/web-scraping-a-dataframe-but-only-500-rows | My aim is to web-scrape the table on https://data.eastmoney.com/executive/list.html and save it to an excel. Please note that it has 2945 pages and I want to put all of them into one excel sheet. The easiest way of doing it is to see where the data is pulled. So I F12 the webpage to get the source code and saw a data center <datacenter-web.eastmoney.com> So I use that API within the red box above to get the data. Here's my code import pandas as pd import requests df = pd.DataFrame( requests.get('https://datacenter-web.eastmoney.com/api/data/v1/get? reportName=RPT_EXECUTIVE_HOLD_DETAILS&columns=ALL&sortColumns=CHANGE_DATE')\ .json().get('result').get('data')) df.to_excel('G:/ExecutiveHoldings/all_by_date.xlsx', index = True) I want to get all the columns and the row to be sorted by CHANGE_DATE, and here's the data I get: Note that the rows only have a limited number of 500 lines, index from 0 ~ 499. Yet I want to get all dataframe from the web, with line numbers way beyond 500. Is there any easy method to get all 2945 pages of table and put them all into one excel sheet? | You can retrieve the next page using the pageNumber={page_number} parameter. The pageSize parameter determines how many data items are retrieved per request. The default and maximum value is 500. This is the access URL https://datacenter-web.eastmoney.com/api/data/v1/get?reportName=RPT_EXECUTIVE_HOLD_DETAILS&columns=ALL&sortColumns=CHANGE_DATE&pageNumber={page_number}&pageSize=500 This code can get all of rows data. The script fetches paginated data from a specified URL until no more data is available. Data is collected in the all_data list, with each page containing up to 500 items. Progress, including page number and data count, is logged during each fetch. After fetching all data, it is converted into a pandas DataFrame. The DataFrame is saved to an Excel file named all_by_date.xlsx. Key Logic While Loop Condition: The loop will continue indefinitely until a break statement is encountered. This is because the condition for the while loop is always True fetch-data.py import requests import pandas as pd def fetch_data(): all_data = [] page_number = 1 while True: url = f'https://datacenter-web.eastmoney.com/api/data/v1/get?reportName=RPT_EXECUTIVE_HOLD_DETAILS&columns=ALL&sortColumns=CHANGE_DATE&pageNumber={page_number}&pageSize=500' try: response = requests.get(url) response.raise_for_status() result = response.json().get('result', {}) if not result or not result.get('data') or len(result['data']) == 0: break # Exit loop if data array is empty all_data.extend(result['data']) print(f'Page: {page_number}, Data Count: {len(result["data"])}, Total Data: {len(all_data)}') page_number += 1 except requests.exceptions.RequestException as e: print(f'Error fetching data from page {page_number}:', e) break # Exit loop on error return all_data def save_data_to_excel(data): df = pd.DataFrame(data) df.to_excel('./all_by_date.xlsx', index=False) print('Data successfully saved to all_by_date.xlsx') if __name__ == '__main__': data = fetch_data() if data: save_data_to_excel(data) else: print('No data fetched.') Install dependencies & Run it pip install requests pandas openpyxl python fetch-data.py Total Data 50 * 2944 + 39 = 147,239 data *Note last on page 2945 has 39 rows Result 147,240 rows - header row = 147,239 row | 2 | 1 |
78,499,163 | 2024-5-18 | https://stackoverflow.com/questions/78499163/why-the-difference-in-checking-for-value-of-pd-dataframe-vs-pd-series-if-value-i | I'm working with a pandas DataFrame and I noticed a difference in behavior when using the in operator. Here's an example to illustrate this: import pandas as pd df = pd.DataFrame({'a': [4, 5, 6], 'b': [7, 8, 9]}) print(1 in df) print(type(df)) print(1 in df["a"]) print(type(df["a"])) Output: False <class 'pandas.core.frame.DataFrame'> True <class 'pandas.core.series.Series'> The most straightforward difference is that of course one object is a DataFrame, the other is a Series; nonetheless, I was not really expecting 1 to be found in the Index of the Series and this expression evaluating to True. Especially as it is False for the DataFrame. Is there an explanation why I should have been expecting this? | In both cases the in operator calls __contains__ to test membership. Both pd.DataFrame and pd.Series are subclasses of NDFrame, which has this method defined as follows: def __contains__(self, key) -> bool: """True if the key is in the info axis""" return key in self._info_axis So, under the hood the following happens: print(df._info_axis) # Index(['a', 'b'], dtype='object') print(df.__contains__(1)) # False # approx.: 1 in ['a', 'b'] print(df['a']._info_axis) # RangeIndex(start=0, stop=3, step=1) print(df['a'].__contains__(1)) # True # approx.: 1 in [0, 1, 2] I.e., the difference lies in the fact that a df uses df.columns as the 'info axis', while a series, such as df['a'], naturally must use series.index. | 2 | 4 |
78,499,028 | 2024-5-18 | https://stackoverflow.com/questions/78499028/what-does-the-tempfile-mkstemptext-parameter-actually-do | Is the text=True|False parameter in mkstemp something Windows specific? I'm sorry that I have to ask, but I'm a UNIX/Linux person. At the low level of file descriptors - where the mkstemp operates - are all files just bytes. I was surprised to see the text= parameter. The only hint I found is a comment in os.open docs: In particular, on Windows adding O_BINARY is needed to open files in binary mode. For completeness the tempfile.mkstemp docs: If text is specified and true, the file is opened in text mode. Otherwise, (the default) the file is opened in binary mode. mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. And an example. It indeed returns a file descriptor and a filename: >>> import tempfile >>> tempfile.mkstemp(text=False) (3, '/tmp/tmp9z8rp2_2') >>> tempfile.mkstemp(text=True) (4, '/tmp/tmpc6z9j2yu') | It's a feature of the default C runtime library on Windows. When a file is opened in text mode - not through the Win32 CreateFile() API, but specifically through the POSIX open() API - writes to it will transparently convert LF line-endings to CR+LF (i.e. writing \n will actually write \r\n). https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/setmode?view=msvc-170 https://learn.microsoft.com/en-us/cpp/c-runtime-library/file-handling?view=msvc-170 At the low level of file descriptors - where the mkstemp operates - are all files just bytes File descriptors are not "low level" when it comes to Windows, though. One important thing about Windows is that it is not a POSIX-like system in the slightest - even though it comes with a C runtime library that provides POSIX-like functionality, a lot of it is emulated in userspace. For example, the "file descriptors" you get from open() are not real and the kernel doesn't know them; it's only the C runtime that maps them to actual OS "file handles" for easier porting of POSIX-like code (see e.g. _get_osfhandle() and _open_osfhandle()). Compare MSVC open() with Win32 CreateFile(). The latter returns you a HANDLE, which indeed works purely with bytes - and is not too different from POSIX file descriptors - but is not quite the same as an ordinary int. (And even that isn't the real system call - the entirety of Win32 is another layer over the "NT native API", e.g. NtCreateFile(), which works with the same file handles but has different semantics when it comes to paths and such.) So with that in mind, you can actually consider POSIX file descriptors on Windows almost as high level as C stdio 'FILE' objects that fopen() would return - or even Python's io.TextIOWrapper objects that open("foo", "w", newline="\r\n") gives you. Python automatically specifies O_BINARY when necessary, e.g. when opening a file as "rb" it will correctly be in binary mode, but sometimes it's necessary to do that manually (e.g. stdin/out cannot be re-opened and defaults to text mode), which can be done using the msvcrt module: if sys.platform == "win32": import msvcrt msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) You can also use it to see the underlying file handles of the "file descriptors": >>> a = open("foo.txt", "w") >>> a.fileno() 3 >>> msvcrt.get_osfhandle(3) 1044 | 3 | 1 |
78,498,832 | 2024-5-18 | https://stackoverflow.com/questions/78498832/how-to-use-sqlalchemy-hybrid-property-in-where-or-filter-method | I'm using SQLAlchemy v2 and have two models: User and Transaction. Here, Transaction refers to money transactions between users, not database transactions. I have defined a hybrid property in the User model that calculates the balance for each user based on their incoming and outgoing money transactions. However, I am encountering issues when trying to use this hybrid property in where or filter for querying users based on there balances. Here is how I defined my models and hybrid property: from __future__ import annotations from decimal import Decimal from typing import List from sqlalchemy import ForeignKey, SQLColumnExpression, create_engine, func, select from sqlalchemy.ext.hybrid import hybrid_property from sqlalchemy.orm import ( DeclarativeBase, Mapped, mapped_column, relationship, sessionmaker, ) class Base(DeclarativeBase): pass class Transaction(Base): __tablename__ = "transactions" id: Mapped[int] = mapped_column(primary_key=True) amount: Mapped[float] sender_id: Mapped[int] = mapped_column(ForeignKey("users.id")) recipient_id: Mapped[int] = mapped_column(ForeignKey("users.id")) sender: Mapped[User] = relationship(foreign_keys=[sender_id]) recipient: Mapped[User] = relationship(foreign_keys=[recipient_id]) class User(Base): __tablename__ = "users" id: Mapped[int] = mapped_column(primary_key=True) sent_transactions: Mapped[List[Transaction]] = relationship( foreign_keys=[Transaction.sender_id], overlaps="sender" ) received_transactions: Mapped[List[Transaction]] = relationship( foreign_keys=[Transaction.recipient_id], overlaps="recipient" ) @hybrid_property def balance(self) -> Decimal: incoming = sum(txn.amount for txn in self.received_transactions) outgoing = sum(txn.amount for txn in self.sent_transactions) balance = incoming - outgoing return balance @balance.inplace.expression @classmethod def _balance_expression(cls) -> SQLColumnExpression[Decimal]: return select( ( func.coalesce( select(func.sum(Transaction.amount)) .where(Transaction.recipient_id == 1) .scalar_subquery(), 0, ) - func.coalesce( select(func.sum(Transaction.amount)) .where(Transaction.sender_id == 1) .scalar_subquery(), 0, ).label("balance") ) ) And here is the code I ran to query users with balance greater than 1 (not working as expected): engine = create_engine("sqlite:///db.db") Session = sessionmaker(engine) with Session() as session: stmt = select(User).where(User.balance > 1) session.execute(stmt) After running the above cd,encountered the following errors: Traceback (most recent call last): File "c:\Users\Nima\Desktop\userbalance\main.py", line 111, in <module> stmt = select(User).where(User.balance > 1) ^^^^^^^^^^^^^^^^ File "C:\Users\Nima\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\sql\operators.py", line 629, in __gt__ return self.operate(gt, other) ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nima\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\orm\attributes.py", line 453, in operate return op(self.comparator, *other, **kwargs) # type: ignore[no-any-return] # noqa: E501 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nima\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\sql\operators.py", line 629, in __gt__ return self.operate(gt, other) ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nima\AppData\Local\Programs\Python\Python311\Lib\site-packages\sqlalchemy\ext\hybrid.py", line 1509, in operate return 
op(self.expression, *other, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: '>' not supported between instances of 'Select' and 'int' | The User.balance hybrid property returns a Select object as defined in your _balance_expression function. Select is a subquery, and SQLAlchemy wouldn't know how to compare that subquery with an integer value (1 in your case). To solve this issue, you need to modify the _balance_expression method to return a scalar value instead of a subquery. You can do it this way: from sqlalchemy import case @balance.inplace.expression @classmethod def _balance_expression(cls) -> SQLColumnExpression[Decimal]: incoming = ( select(func.coalesce(func.sum(Transaction.amount), 0)) .where(Transaction.recipient_id == cls.id) .scalar_subquery() ) outgoing = ( select(func.coalesce(func.sum(Transaction.amount), 0)) .where(Transaction.sender_id == cls.id) .scalar_subquery() ) return case( (incoming - outgoing, incoming - outgoing), else_=0).label("balance") | 2 | 1 |
78,496,873 | 2024-5-17 | https://stackoverflow.com/questions/78496873/how-to-write-csv-data-directly-from-string-or-bytes-to-a-duckdb-database-file | I would like to write CSV data directly from a bytes (or string) object in memory to a duckdb database file (i.e. I want to avoid having to write and read temporary .csv files). This is what I've got so far: import io import duckdb data = b'a,b,c\n0,1,2\n3,4,5' rawtbl = duckdb.read_csv( io.BytesIO(data), header=True, sep="," ) con = duckdb.connect('some.db') con.sql('CREATE TABLE foo AS SELECT * FROM rawtbl') which throws the following exception: --------------------------------------------------------------------------- IOException Traceback (most recent call last) Cell In[1], line 10 5 rawtbl = duckdb.read_csv( 6 io.BytesIO(data), header=True, sep="," 7 ) 9 con = duckdb.connect('some.db') ---> 10 con.sql('CREATE TABLE foo AS SELECT * FROM rawtbl') IOException: IO Error: No files found that match the pattern "DUCKDB_INTERNAL_OBJECTSTORE://2843be5a66472f9c" However, it is possible to do: >>> duckdb.sql('CREATE TABLE foo AS SELECT * FROM rawtbl') >>> duckdb.sql('show tables') ┌─────────┐ │ name │ │ varchar │ ├─────────┤ │ foo │ └─────────┘ >>> duckdb.sql('SELECT * from foo') ┌───────┬───────┬───────┐ │ a │ b │ c │ │ int64 │ int64 │ int64 │ ├───────┼───────┼───────┤ │ 0 │ 1 │ 2 │ │ 3 │ 4 │ 5 │ └───────┴───────┴───────┘ since rawtbl is a duckdb.duckdb.DuckDBPyRelation object. But that is the in-memory duckdb database, not the 'some.db' file. Question How to read csv data directly from bytes (or a string) to a duckdb database file, without using intermediate CSV files? Versions duckdb 0.10.2 on Python 3.12.2 on Ubuntu | You can create the connection before you use read_csv and pass the connection into it. import io import duckdb from pathlib import Path data = b'a,b,c\n0,1,2\n3,4,5' db_path = 'some.db' Path(db_path).unlink(missing_ok=True) with duckdb.connect(db_path) as con: rawtbl = duckdb.read_csv( io.BytesIO(data), header=True, sep=",", connection=con, ) con.execute(''' CREATE TABLE foo as select * from rawtbl ''') with duckdb.connect(db_path) as con: res = con.sql('select * from foo') print(res) # ┌───────┬───────┬───────┐ # │ a │ b │ c │ # │ int64 │ int64 │ int64 │ # ├───────┼───────┼───────┤ # │ 0 │ 1 │ 2 │ # │ 3 │ 4 │ 5 │ # └───────┴───────┴───────┘ | 3 | 5 |
78,491,778 | 2024-5-16 | https://stackoverflow.com/questions/78491778/pytest-dependency-doesnt-work-when-both-across-files-and-parametrized | I'm running into a problem wherein pytest_dependency works as expected when EITHER Doing parametrization, and dependent tests are in the same file OR Not doing parametrization, and dependent tests are in a separate file But, I can't get the dependency to work properly when doing BOTH - parametrized dependent tests in a different file. It skips all the tests even when the dependencies have succeeded. I have a directory structure like so: tests/ - common.py - test_0.py - test_1.py common.py: import numpy as np ints = [1, 2, 3] strs = ['a', 'b'] pars = list(zip(np.repeat(ints, 2), np.tile(strs, 3))) test_0.py: import numpy as np import pytest from pytest_dependency import depends from common import pars def idfr0(val): if isinstance(val, (int, np.int32, np.int64)): return f"n{val}" def idfr1(val): return "n{}-{}".format(*val) # I use a marker here because I have a lot of code parametrized this way perm_mk = pytest.mark.parametrize('num, lbl', pars, ids=idfr0) # 2 of these parametrized tests should fail @perm_mk @pytest.mark.dependency(scope="session") def test_a(num, lbl): if num == 2: assert False else: assert True # I set up a dependent parametrized fixture just like the in the documentation @pytest.fixture(params=pars, ids=idfr1) def perm_fixt(request): return request.param @pytest.fixture() def dep_perms(request, perm_fixt): depends(request, ["test_a[n{}-{}]".format(*perm_fixt)]) return perm_fixt # This one works @pytest.mark.dependency(scope="session") def test_b(dep_perms): pass # These are non-parametrized independent tests @pytest.mark.dependency(scope="session") def test_1(): pass @pytest.mark.xfail() @pytest.mark.dependency(scope="session") def test_2(): assert False test_1.py: import pytest from pytest_dependency import depends from common import pars def idfr2(val): return "n{}-{}".format(*val) @pytest.fixture(params=pars, ids=idfr2) def perm_fixt(request): return request.param @pytest.fixture() def dep_perms(request, perm_fixt): depends(request, ["test_0.py::test_a[n{}-{}]".format(*perm_fixt)]) return perm_fixt # Same use of a parametrized fixture, but this one doesn't work @pytest.mark.dependency(scope="session") def test_c(dep_perms): num, lbl = dep_perms assert True # These are non-parametrized dependent tests that work as expected @pytest.mark.dependency(scope="session", depends=["test_0.py::test_1"]) def test_3(): pass @pytest.mark.dependency(scope="session", depends=["test_0.py::test_2"]) def test_4(): pass I expect to see test_a pass for 4 of its 6 parametrized runs and fail 2, test_b pass 4 and skip 2, and test_c likewise pass 4 and skip 2. I expect test_1 to pass, test_2 to xfail, test_3 to pass, and test_4 to be skipped. All of the above happens perfectly except for test_c - all of it gets skipped. I've confirmed that the test names look like they are right. 
I run pytest from the tests directory like so: pytest --tb=no -rpfxs ./test_0.py ./test_1.py The output is: collected 22 items test_0.py ..FF....ss...x [ 63%] test_1.py ssssss.s [100%] =================================================================================== short test summary info =================================================================================== PASSED test_0.py::test_a[n1-a] PASSED test_0.py::test_a[n1-b] PASSED test_0.py::test_a[n3-a] PASSED test_0.py::test_a[n3-b] PASSED test_0.py::test_b[n1-a] PASSED test_0.py::test_b[n1-b] PASSED test_0.py::test_b[n3-a] PASSED test_0.py::test_b[n3-b] PASSED test_0.py::test_1 PASSED test_1.py::test_3 FAILED test_0.py::test_a[n2-a] - assert False FAILED test_0.py::test_a[n2-b] - assert False XFAIL test_0.py::test_2 SKIPPED [1] test_0.py:36: test_b[n2-a] depends on test_a[n2-a] SKIPPED [1] test_0.py:36: test_b[n2-b] depends on test_a[n2-b] SKIPPED [1] test_1.py:20: test_c[n1-a] depends on test_0.py::test_a[n1-a] SKIPPED [1] test_1.py:20: test_c[n1-b] depends on test_0.py::test_a[n1-b] SKIPPED [1] test_1.py:20: test_c[n2-a] depends on test_0.py::test_a[n2-a] SKIPPED [1] test_1.py:20: test_c[n2-b] depends on test_0.py::test_a[n2-b] SKIPPED [1] test_1.py:20: test_c[n3-a] depends on test_0.py::test_a[n3-a] SKIPPED [1] test_1.py:20: test_c[n3-b] depends on test_0.py::test_a[n3-b] SKIPPED [1] ..\..\Miniconda3\envs\python_utils\Lib\site-packages\pytest_dependency.py:101: test_4 depends on test_0.py::test_2 ===================================================================== 2 failed, 10 passed, 9 skipped, 1 xfailed in 0.43s ====================================================================== Notice that it explicitly states that (for example) test_0.py::test_a[n1-a] has passed, but later it skips test_c[n1-a] because it depends on test_0.py::test_a[n1-a]. Yet test_3 passes because test_1 passed, and test_4 is skipped because test_2 xfailed, so I know my root node name is right. I've scoured the other issues here but the vast majority of them are from naming or scope issues, both of which don't appear to be a problem here. Can anybody tell me why test_c doesn't work? | I believe it's failing to find the dependency in the other file, because the depends() function uses scope='module' by default. Change that to depends(request, ["test_0.py::test_a[n{}-{}]".format(*perm_fixt)], scope='session') And the dependent tests work as expected. What helped me in finding this issue was displaying the debug logs pytest-dependency emits, by passing --log-cli-level=debug when running pytest. | 3 | 4 |
78,496,560 | 2024-5-17 | https://stackoverflow.com/questions/78496560/is-there-a-way-to-shut-down-the-python-multiprocessing-resource-tracker-process | I submitted a question a week ago about persistent processes after terminating the ProcessPoolExecutor, but there have been no replies. I think this might be because not enough people are familiar with how ProcessPoolExecutor is coded, so I thought it would be helpful to ask a more general question to those who use the multiprocessing module. In the Python documentation, it states that On POSIX using the spawn or forkserver start methods will also start a resource tracker process which tracks the unlinked named system resources (such as named semaphores or SharedMemory objects) created by processes of the program. When all processes have exited the resource tracker unlinks any remaining tracked object. However, there is nothing in the documentation stating how to shut down this resource tracker when it is no longer needed. As far as I can tell, the tracker PID is not available to the ProcessPoolExecutor, but I did read somewhere that it might be accessible using a Pool instead. Can anyone confirm if this is true before I refactor my code? | You may use the internal method _stop to achieve this, but it should be done with caution due to the potential risks involved in using internal and/or undocumented features. Below is an example of code demonstrating what is said above: from concurrent.futures import ProcessPoolExecutor from multiprocessing import resource_tracker def example_function(x): return x * x if __name__ == '__main__': with ProcessPoolExecutor() as executor: results = list(executor.map(example_function, range(10))) # Manually stop the resource tracker resource_tracker._resource_tracker._stop() print("Resource tracker stopped.") The code exits on my Linux Mint 21.2 Xfce machine with exit code 0, so I assume it does what it is intended to do. | 3 | 3 |
78,494,178 | 2024-5-17 | https://stackoverflow.com/questions/78494178/constructing-pointer-chains-with-ctypes | These are simple variable declarations in cpp, how do I do it in Python ctypes? B *C = (B *) A; D *E = (D *)(A + sizeof(B)); Assume that B and D are structs and A is uint8_t A[42];. Where do I start from here? I tried using cast functions but maybe I'm wrong, can you help me? from ctypes import POINTER, byref, addressof, sizeof, cast C = cast(byref(A), POINTER(B)).contents E = cast(addressof(A) + sizeof(B), POINTER(D)).contents | Listing [Python.Docs]: ctypes - A foreign function library for Python. In short, to: Dereference a pointer - cts_ptr_var.contents Reference a variable (to a pointer) - ctypes.pointer(cts_var) (to later pass it as an argument to a function - ctypes.byref(cts_var)) Get its (C) address - ctypes.addressof(cts_var) Now, regarding your question, a nicer way would be to use (each CTypes type's) from_address (and avoid those pointers). code00.py: #!/usr/bin/env python import ctypes as cts import sys I8Arr42 = cts.c_uint8 * 42 class S0(cts.Structure): if len(sys.argv) > 1: # @TODO - cfati: Lame (for demo purposes only) _pack_ = 1 _fields_ = ( ("ui80", cts.c_uint8), ("ui81", cts.c_uint8), ("ui320", cts.c_uint32), ("ui82", cts.c_uint8), ) PS0 = cts.POINTER(S0) class S1(cts.Structure): if len(sys.argv) > 1: # @TODO _pack_ = 1 _fields_ = ( ("ui80", cts.c_uint8), ("ui320", cts.c_uint32), ("ui81", cts.c_uint8), ) PS1 = cts.POINTER(S1) def main(*argv): size0 = cts.sizeof(S0) arr = I8Arr42(*range(42)) #arr = I8Arr42() print(arr, hex(cts.addressof(arr)), hex(id(arr))) print(f"Initial values: {arr[0]}, {arr[size0]}, {arr[-1]}\n") print("FromAddress") s00 = S0.from_address(cts.addressof(arr)) # Initialize structure from memory contents at address print("First structure:") for name, _ in s00. _fields_: print(f" {name}: 0x{getattr(s00, name):02X}") s10 = S1.from_address(cts.addressof(arr) + size0) # Initialize structure from memory contents at address print("\nSecond structure:") for name, _ in s10. _fields_: print(f" {name}: 0x{getattr(s10, name):02X}") s00.ui80 = 100 s10.ui80 = 101 print(f"\nModified values: {arr[0]}, {arr[size0]}, {arr[-1]}\n") print("\nCast") s01 = cts.cast(cts.addressof(arr), PS0).contents print("First structure:") for name, _ in s01. _fields_: print(f" {name}: 0x{getattr(s01, name):02X}") s11 = cts.cast(cts.addressof(arr) + size0, PS1).contents print("\nSecond structure:") for name, _ in s11. 
_fields_: print(f" {name}: 0x{getattr(s11, name):02X}") s01.ui80 = 200 s11.ui80 = 201 print(f"\nModified values: {arr[0]}, {arr[size0]}, {arr[-1]}\n") if __name__ == "__main__": print( "Python {:s} {:03d}bit on {:s}\n".format( " ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform, ) ) rc = main(*sys.argv[1:]) print("\nDone.\n") sys.exit(rc) Output: [cfati@CFATI-5510-0:e:\Work\Dev\StackExchange\StackOverflow\q078494178]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> :: No argument - default (structure) packing [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" ./code00.py Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] 064bit on win32 <__main__.c_ubyte_Array_42 object at 0x000001E9E8C04140> 0x1e9e8fffdb0 0x1e9e8c04140 Initial values: 0, 12, 41 FromAddress First structure: ui80: 0x00 ui81: 0x01 ui320: 0x7060504 ui82: 0x08 Second structure: ui80: 0x0C ui320: 0x13121110 ui81: 0x14 Modified values: 100, 101, 41 Cast First structure: ui80: 0x64 ui81: 0x01 ui320: 0x7060504 ui82: 0x08 Second structure: ui80: 0x65 ui320: 0x13121110 ui81: 0x14 Modified values: 200, 201, 41 Done. [prompt]> :: Dummy argument - #pragma pack(1) [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" ./code00.py dummy Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] 064bit on win32 <__main__.c_ubyte_Array_42 object at 0x000001A85B484140> 0x1a85b87ff30 0x1a85b484140 Initial values: 0, 7, 41 FromAddress First structure: ui80: 0x00 ui81: 0x01 ui320: 0x5040302 ui82: 0x06 Second structure: ui80: 0x07 ui320: 0xB0A0908 ui81: 0x0C Modified values: 100, 101, 41 Cast First structure: ui80: 0x64 ui81: 0x01 ui320: 0x5040302 ui82: 0x06 Second structure: ui80: 0x65 ui320: 0xB0A0908 ui81: 0x0C Modified values: 200, 201, 41 Done. For future tasks, you might want to check [SO]: C function called from Python via ctypes returns incorrect value (@CristiFati's answer) for a common pitfall when working with CTypes (calling functions). | 4 | 4 |
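Condensing the answer above, a minimal sketch of the two idioms applied to the original C declarations; `B` and `D` are placeholder structures whose layouts are assumed for illustration:

```python
import ctypes as cts

class B(cts.Structure):
    _fields_ = (("x", cts.c_uint32),)   # placeholder layout

class D(cts.Structure):
    _fields_ = (("y", cts.c_uint16),)   # placeholder layout

A = (cts.c_uint8 * 42)()

# B *C = (B *)A;
C = cts.cast(A, cts.POINTER(B)).contents             # or: B.from_buffer(A)

# D *E = (D *)(A + sizeof(B));
E = D.from_address(cts.addressof(A) + cts.sizeof(B))
# or: cts.cast(cts.addressof(A) + cts.sizeof(B), cts.POINTER(D)).contents

C.x = 0x11223344                                     # writes through to A's buffer
print(hex(E.y), [hex(v) for v in A[:6]])
```

Both views share memory with `A`, so writing through `C` or `E` modifies the underlying byte array, just as the pointer casts do in C.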
78,494,262 | 2024-5-17 | https://stackoverflow.com/questions/78494262/skipping-dictionaries-that-contain-certain-keys | I'm looking for a good way to skip dictionaries that contain certain keys among multiple dictionaries that are different. Right now I'm chaining .get() methods on a dictionary object and continuing on whatever matches, which is working but its kinda messy. I'm doing something like: ... if my_dict.get('some_key', my_dict.get('other_key', my_dict.get('some_other_key'))): continue | You can use any with a generator expression that iterates over the possible keys and tests if the dict has any of the keys: if any(key in my_dict for key in ('some_key', 'other_key', 'some_other_key')): continue Alternatively, you can use set.isdisjoint to test if the set of keys is disjoint with the dict keys: if not {'some_key', 'other_key', 'some_other_key'}.isdisjoint(my_dict): continue And as @deceze points out, you can also simply test if the set of keys intersects with the set of dict keys, which looks more readable: if {'some_key', 'other_key', 'some_other_key'}.intersection(my_dict): continue But be aware that unlike set.isdisjoint, set.intersection creates a new set and does not short-circuit on the first match, so is more expensive both time and space-wise. | 2 | 4 |
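A small self-contained demo comparing the three checks from the answer (the sample dict and key names are made up):

```python
my_dict = {"other_key": 1, "unrelated": 2}
keys = ("some_key", "other_key", "some_other_key")

print(any(k in my_dict for k in keys))        # True - short-circuits on the first hit
print(not set(keys).isdisjoint(my_dict))      # True - also short-circuits
print(bool(set(keys).intersection(my_dict)))  # True - builds the full intersection set
```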
78,488,359 | 2024-5-16 | https://stackoverflow.com/questions/78488359/nicegui-tree-with-toggle-buttons | I'm trying to create a tree in NiceGui where every element has a toggle button. This is what I got so far: from nicegui import events, ui def toggleUpdate(e: events.GenericEventArguments) -> None: print(tree._props['nodes']) tree = ui.tree([ {'id': 'numbers', 'description': 'Just some numbers', 'update': True, 'children': [ {'id': '1', 'description': 'The first number', 'update': False}, {'id': '2', 'description': 'The second number', 'update': True}, ]}, {'id': 'letters', 'description': 'Some latin letters', 'update': True, 'children': [ {'id': 'A', 'description': 'The first letter', 'update': True}, {'id': 'B', 'description': 'The second letter', 'update': True}, ]}, ], label_key='id', on_select=lambda e: ui.notify(e.value)) tree.add_slot('default-header', ''' <span :props="props">Node <strong>{{ props.node.id }}</strong></span> ''') tree.add_slot('default-body', r''' <span :props="props">Description: "{{ props.node.description }}" <q-toggle v-model="props.node.update" label="Elemente updaten" left-label @update:model-value="() => $parent.$emit('toggleUpdate', props.node)" /></span> ''') ui.button("Test", on_click=lambda: toggleUpdate("TEST")) tree.on('toggleUpdate', toggleUpdate) tree.on('update:model-value', lambda e: ui.notify(e.args)) ui.run() The tree works fine, but the data is not changed when I toggle the button and also the toggleUpdate event is not triggered. What am I doing wrong? | The Author of NiceGui found the solution. You have to use $parent.$parent.$parent.$emit() for the toggle. Full answer: https://github.com/zauberzeug/nicegui/discussions/3085#discussioncomment-9462868 | 2 | 2 |
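For completeness, this is roughly how the body slot from the question would look with the fix from the linked discussion applied; the exact number of `$parent` hops may depend on the NiceGUI/Quasar version, so treat it as a sketch:

```python
tree.add_slot('default-body', r'''
    <span :props="props">Description: "{{ props.node.description }}"
        <q-toggle v-model="props.node.update" label="Elemente updaten" left-label
            @update:model-value="() => $parent.$parent.$parent.$emit('toggleUpdate', props.node)" />
    </span>
''')
tree.on('toggleUpdate', toggleUpdate)
```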
78,492,616 | 2024-5-16 | https://stackoverflow.com/questions/78492616/why-is-the-image-result-flipped-by-90-degrees | I am trying to turn all of the white pixels in this image red, but when I run the program the shape is fine but the red pixels are rotated 90 degrees: This is the current code that I am using to do this: import cv2 as cv import numpy as np import os from matplotlib import pyplot as plt import cv2 as cv def get_white_pixels(image_as_array): threshold = 40 indices = np.where(image_as_array >= threshold) width=image.shape[1] height=image.shape[0] cartesian_y=height-indices[0]-1 np_data_points=np.column_stack((indices[1],cartesian_y)) return cartesian_y, np_data_points, width,height image = cv.imread("framenumber0.jpg") ind, pixels, width, height = get_white_pixels(image) #Goes through every pixel and changes its values for i in range(0, len(pixels)): loc_x = int(pixels[i][0]) loc_y = int(pixels[i][1]) image[loc_x,loc_y] = (0,0,255) cv.imshow('Modified Image', image) cv.waitKey(0) cv.destroyAllWindows() I do need the location of the white points as I will use them later for the second part of the project. I suspect the problem has something to do with the np.column_stack(). I have been reading the info page of the function but I still do not understand why this happens. If you want to replicate here is also the image that I am using: | Change this line to: # image[loc_x,loc_y] = (0,0,255) image[loc_y,loc_x] = (0,0,255) along with # cartesian_y=height-indices[0]-1 cartesian_y=indices[0] You need to know that numpy and OpenCV arrays are y,x ones, where other image processing packages work the x,y way. Here the result of running the fixed code: As mentioned in a comment by aRTy You are effectively flipping the y-coordinate top-down in this line: cartesian_y=height-indices[0]-1. If you are converting from OpenCV/numpy orientation (where y=0 is top) to matplotlib orientation (where y=0 is bottom) then you need to do that when you reach the matplotlib section, not before. The required effect of getting the thresholded image which is apparently a grayscale one can also be achieved using simplified code separating the channels and running OpenCV threshold on one of them: import cv2 as cv, numpy as np threshold = 40 image = cv.imread("srcImage.jpg") _, G, _ = cv.split(image) _, thresh = cv.threshold(G, threshold, 255, cv.THRESH_BINARY) black=np.zeros_like(thresh) red_image = cv.merge([black, black, thresh]) cv.imshow('White areas in source Image marked red', red_image) cv.waitKey(0) cv.destroyAllWindows() The result appears to be the same even if two channels are skipped from consideration. | 2 | 2 |
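As a side note, the per-pixel loop (and the row/column swap it invites) can be avoided entirely with boolean indexing; a sketch, assuming the same threshold-on-any-channel logic as the question:

```python
import cv2 as cv
import numpy as np

image = cv.imread("framenumber0.jpg")
threshold = 40

mask = (image >= threshold).any(axis=2)   # True wherever any channel is "white enough"
image[mask] = (0, 0, 255)                 # numpy indexing is already row (y), column (x)

cv.imshow('Modified Image', image)
cv.waitKey(0)
cv.destroyAllWindows()
```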
78,491,904 | 2024-5-16 | https://stackoverflow.com/questions/78491904/transpose-a-rolling-set-of-values-in-a-column-into-a-single-value-in-a-cell | I am having troubles with a data transformation I am trying to do. I have a column of data (Ex. 1,2,3,4,5,6,7,8,9)I want to create a new column that looks back n rows and concatenates the values into a new value, preferably an integer. For example, if the lookback window is 3 in my example, the new column would be Nan, Nan, 123, 234,345,456,567,678,789. So far, here is a bit of code that I tried where n is the lookback window and Streak is the dataframe with the values I am looking to combine into a new value (streakHistory): def getStreakHistory(Streak, n=20): streakHistory= "" for x in range(1, n + 1): streakHistory=str(streakHistory) + str(Streak["Streak"].shift(x)) return streakHistory df["Streak History"] = getStreakHistory(Streak) This seems to run an error because streakHistory is a string. I have seen other options where you transpose into other cells, but I want all of the values to be combined and entered into 1 cell. Any help would be greatly appreciated. I also looked into a join, but that seems to be similar to a standard table join and not really the same as what I was looking at unless I am overlooking a particular functionality of it. | One option would be to use numpy's sliding_window_view combined with agg: from numpy.lib.stride_tricks import sliding_window_view as svw df = pd.DataFrame({'Streak': [1,2,3,4,5,6,7,8,9]}) N = 3 df['Streak History'] = (pd.DataFrame(svw(df['Streak'].astype(str), N), index=df.index[N-1:]) .agg(''.join, axis=1) ) Output: Streak Streak History 0 1 NaN 1 2 NaN 2 3 123 3 4 234 4 5 345 5 6 456 6 7 567 7 8 678 8 9 789 Numeric variant: df['Streak History'] = pd.Series((svw(df['Streak'], N) *(10**np.arange(N-1, -1, -1))).sum(1), index=df.index[N-1:]) Output: Streak Streak History 0 1 NaN 1 2 NaN 2 3 123.0 3 4 234.0 4 5 345.0 5 6 456.0 6 7 567.0 7 8 678.0 8 9 789.0 | 2 | 1 |
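If pulling in numpy's stride tricks feels heavy, a plain-Python sketch of the same lookback concatenation works too (same toy data; `N` is the lookback window):

```python
import pandas as pd

df = pd.DataFrame({'Streak': [1, 2, 3, 4, 5, 6, 7, 8, 9]})
N = 3

s = df['Streak'].astype(str)
df['Streak History'] = [''.join(s.iloc[i - N + 1:i + 1]) if i >= N - 1 else None
                        for i in range(len(s))]
print(df)
```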
78,484,610 | 2024-5-15 | https://stackoverflow.com/questions/78484610/fluent-pattern-with-async-methods | This class has async and sync methods (i.e. isHuman): class Character: def isHuman(self) -> Self: if self.human: return self raise Exception(f'{self.name} is not human') async def hasJob(self) -> Self: await asyncio.sleep(1) return self async def isKnight(self) -> Self: await asyncio.sleep(1) return self If all methods were sync, I'd have done: # Fluent pattern jhon = ( Character(...) .isHuman() .hasJob() .isKnight() ) I know I could do something like: jhon = Character(...).isHuman() await jhon.hasJob() await jhon.isKnight() But I'm looking for something like this: jhon = await ( Character(...) .isHuman() .hasJob() .isKnight() ) | Tricky - but can be done using some advanced properties in Python. Usually, the big problem, even with all Python capabilities, implementing an automated way to curry methods like you are doing is to know when the chain stops (i.e. when the result of the last method will be actually used, and no longer used with a . to call the next method). But if the whole expression is to be used with await that is resolved, we use the __await__ method to finish up and execute the chain. I needed some back-and forth to get it all working, and your Character class will have to inherit from asyncio.Future (so, beware of clashing names for methods and attributes) - other than that, the code bellow did work. (May need some clean-up - sorry, there where some approaches, I can clean-up later) from inspect import isawaitable from types import MethodType from inspect import iscoroutinefunction from typing import Self from collections import deque import asyncio class LazyCallProxy: def __init__(self, parent, callable): self.__callable = callable self.__parent = parent def __call__(self, *args, **kw): result = self.__callable(*args, **kw) self.__result = result return self.__parent def __getattribute__(self, attr): if attr == "_result": return self.__result if attr.startswith(f"_{__class__.__name__}"): return super().__getattribute__(attr) return getattr(self.__parent, attr) class AwaitableCurryMixin(asyncio.Future): def __init__(self, *args, **kw): self._to_be_awaited = deque() super().__init__(*args, **kw) def __await__(self): tasks = [] for proxy in self._to_be_awaited: result = proxy._result if isawaitable(result): tasks.append(asyncio.create_task(result)) if not tasks: return super().__await__() def mark_done(task): self.set_result(task.result()) tasks[-1].add_done_callback(mark_done) return super().__await__() def __getattribute__(self, attr): obj = super().__getattribute__(attr) _to_be_awaited = super().__getattribute__("_to_be_awaited") if not iscoroutinefunction(obj) and not (_to_be_awaited := super().__getattribute__("_to_be_awaited")): # if not in the midle of a chaincall, and this is not # an async method, just return the attribute: return obj if isinstance(obj, MethodType) and obj.__annotations__["return"] in (Self, type(self)): _to_be_awaited.append(proxy:=LazyCallProxy(self, obj)) return proxy return obj class Character(AwaitableCurryMixin): def __init__(self, name): super().__init__() self.name = name self.human = True def isHuman(self) -> Self: if self.human: return self raise Exception(f'{self.name} is not human') async def hasJob(self) -> Self: await asyncio.sleep(1) return self async def isKnight(self) -> Self: await asyncio.sleep(1) return self def __repr__(self): return self.name async def main(): john = await ( Character("john") .isHuman() .hasJob() 
        .isKnight()
    )
    print(john)

if __name__ == "__main__":
    asyncio.run(main())
| 2 | 2 |
78,490,421 | 2024-5-16 | https://stackoverflow.com/questions/78490421/rename-a-variable-and-select-this-variable-using-a-string-store-in-a-vector | I am a R user trying to learn python. For some analysis, I used to rename dataframe variable like this library(dplyr) variable = "c" df = data.frame(a=c(8,5,7,8), b=c(9,6,6,8), c=c(0,7,8,9)) > df a b c 1 8 9 0 2 5 6 7 3 7 6 8 4 8 8 9 out = df %>% rename(variable := !!variable) > out a b variable 1 8 9 0 2 5 6 7 3 7 6 8 4 8 8 9 I don't find way to do the same in python. Any idea to help me ? import pandas as pd df = pd.DataFrame( { "a": [8,5,7,8], "b": [9,6,6,8], "c": [0,7,8,9], } ) | You can do the same with rename and a dictionary, the order of the "parameters" is just reversed compared to R: variable = 'c' out = df.rename(columns={variable: 'variable'}) Or, without the intermediate: out = df.rename(columns={'c': 'variable'}) # df %>% rename(variable := c) Output: a b variable 0 8 9 0 1 5 6 7 2 7 6 8 3 8 8 9 | 3 | 2 |
78,489,776 | 2024-5-16 | https://stackoverflow.com/questions/78489776/gekko-solver-error-results-json-not-found-and-being-unable-to-pinpoint-the | I'm getting the following error message while executing a constrained regression by using Gekko: > ---------------------------------------------------------------- APMonitor, Version 1.0.1 APMonitor Optimization Suite ---------------------------------------------------------------- --------- APM Model Size ------------ Each time step contains Objects : 2 Constants : 0 Variables : 119 Intermediates: 0 Connections : 58 Equations : 59 Residuals : 59 Number of state variables: 3535109 Number of total equations: - 3535080 Number of slack variables: - 0 --------------------------------------- Degrees of freedom : 29 ---------------------------------------------- Model Parameter Estimation with APOPT Solver ---------------------------------------------- Error: forrtl: severe (174): SIGSEGV, segmentation fault occurred Stack trace terminated abnormally. Error: 'results.json' not found. Check above for additional error details To find the reason I looked into the gekko path and found the following files: APOPT.out dbs_read.rpt gk_model0.apm gk_model0.csv gk_model0.info measurements.dbs So I'm not being able to understand what is causing the issue as there is no infeasibilty file being generated. Following is the code I'm using: def gk_enet(y, x, min_l2, indep_vars, sign_dict, l1_penalty=True): # setting the Gekko solver m = gk.GEKKO(remote=False) m.options.MAX_ITER = 100000 m.options.IMODE = 2 # Regression mode opt_path = m.path print(m.path) model_iter = opt_path.rsplit("_", 1)[1] # assigning gekko solver to minimize SSE instead of default l1-norm minimizer if l1_penalty != True: m.options.EV_TYPE = 2 # setting up the model & parameters # number of variables n = x.shape[1] # assigning array of parameters: c stands for coefficients # assigning sign-restrictions based on sign_dict c = m.Array(m.FV, n) for i, ci in enumerate(c): ci.STATUS = 1 if sign_dict[indep_vars[i]] > 0: ci.LOWER = 0 elif sign_dict[indep_vars[i]] < 0: ci.UPPER = 0 # parameter for l2-regularization R = m.FV() R.STATUS = 1 R.LOWER = min_l2 # array of gekko variable to predict the coefficients' effects x_pred = [None]*n # load data xd = m.Array(m.Param, n) #xd_intcpt = m.Array(m.Param, 1) yd = m.Param(value=y) for i in range(n): xd[i].value = x[:, i] x_pred[i] = m.Var() # y = sum(c[i]*x[i]) m.Equation(x_pred[i] == c[i]*xd[i]) y_pred = m.Var() m.Equation(y_pred == m.sum([x_pred[i] for i in range(n)])) l2_penalty = m.Var() m.Equation(l2_penalty == R*m.sum([c[i]**2 for i in range(n)])) # Minimize difference between actual and predicted y + l2-regularization factor m.Minimize((yd-y_pred)**2 + l2_penalty) # APOPT solver m.options.SOLVER = 1 # Solve try: m.solve(disp=True) a = [np.round(i.value[0], 4) for i in c] except: a = [0 for i in c] return a, min_l2 Any suggestion to tackle this issue would be highly helpful. Also, please let me know if any further info is required for getting a solution. Edit: Thanks John for the suggestions. I've reduced the data and re-ran the code with first BPOPT, followed by IPOPT solver. While running BPOPT solver I got the following error: MATRIX IS SINGULAR. RANK= 0 Problem with linear solver, INFO: 3 Error in initialization of lagrange multipliers Setting lam(1:m) = 0 While running the IPOPT, got the following message: solver 3 not supported using default solver: APOPT But this time APOPT solver provided a solution fairly quickly. I am yet to check the results. 
But looking at the BPOPT error message I think the issue lies in the data, even for the previous case with untrimmed data. Is there any debugging option in Gekko which I can try to figure that out? | The solver crashed with a segmentation fault because it couldn't access the memory it needed. The size of the optimization problem: Number of state variables: 3535109 Number of total equations: - 3535080 Number of slack variables: - 0 --------------------------------------- Degrees of freedom : 29 shows that it is likely near the upper limit of what the APOPT solver can successfully handle. Suggestion 1: Model Reduction Try using m.Intermediate() instead of variables if there is no constraint on ypred or xpred. def gk_enet(y, x, min_l2, indep_vars, sign_dict, l1_penalty=True): # setting the Gekko solver m = gk.GEKKO(remote=False) m.options.MAX_ITER = 100000 m.options.IMODE = 2 # Regression mode opt_path = m.path print(m.path) model_iter = opt_path.rsplit("_", 1)[1] # assigning gekko solver to minimize SSE instead of default l1-norm minimizer if l1_penalty != True: m.options.EV_TYPE = 2 # setting up the model & parameters # number of variables n = x.shape[1] # assigning array of parameters: c stands for coefficients # assigning sign-restrictions based on sign_dict c = m.Array(m.FV, n) for i, ci in enumerate(c): ci.STATUS = 1 if sign_dict[indep_vars[i]] > 0: ci.LOWER = 0 elif sign_dict[indep_vars[i]] < 0: ci.UPPER = 0 # parameter for l2-regularization R = m.FV() R.STATUS = 1 R.LOWER = min_l2 # array of gecko variable to predict the coefficients' effects x_pred = [None]*n # load data xd = m.Array(m.Param, n) #xd_intcpt = m.Array(m.Param, 1) yd = m.Param(value=y) for i in range(n): xd[i].value = x[:, i] x_pred[i] = m.Intermediate(c[i]*xd[i]) y_pred = m.Intermediate(m.sum([x_pred[i] for i in range(n)])) l2_penalty = m.Intermediate(R*m.sum([c[i]**2 for i in range(n)])) # Minimize difference between actual and predicted y + l2-regularization factor m.Minimize((yd-y_pred)**2 + l2_penalty) # APOPT solver m.options.SOLVER = 1 # Solve try: m.solve(disp=True) a = [np.round(i.value[0], 4) for i in c] except: a = [0 for i in c] return a, min_l2 Suggestion 2: Switch Solver Try switching to another solver such as IPOPT or BPOPT: m.options.SOLVER = 'IPOPT' There is no indication that a Mixed Integer solution is needed so switching from a Mixed Integer Nonlinear Programming (MINLP) solver with APOPT to a Nonlinear Programming (NLP) solver with IPOPT may help. If the problem is Quadratically Constrained Quadratic Program (QCQP), there are specialized solvers that may work better than NLP solvers. If you can submit a complete problem with randomly generated data, the developers of the APOPT solver can use it to find the error. You can submit the problem as a new Issue on the Gekko GitHub or APOPT GitHub repositories. | 2 | 1 |
78,482,220 | 2024-5-15 | https://stackoverflow.com/questions/78482220/fixing-boundary-values-on-a-spline | I have data x and y which are noisy evaluations of a function f:[0,alpha] -> [0,1]. I know very little about my function except that f(0) = 0 and f(alpha) = 1. Is there any way to enforce these boundary conditions when fitting a spline? Here is a picture, where one sees that the spline fits nicely, but the approximation is bad around 0, where it takes a value around 0.08: I am aware of bc_type for various splines, but as far as I can tell this only allows one to specify 1st and 2nd derivative, and not fix boundary values. (Probably this question betrays my ignorance of how splines are fitted, and that what I am asking for is not possible. I just want to make sure I'm not missing something obvious.) Here is a toy example: import numpy as np from scipy.interpolate import UnivariateSpline import matplotlib.pyplot as plt ## Generate noisy evaluations of the square root function. ## (In my exmaple, I don't have a closed form expression for my function) x = np.linspace(0,1,1000) y = np.sqrt(x) + (np.random.random(1000)-0.5)*0.05 ## Fit spline spline = UnivariateSpline(x,y) plt.figure(figsize=(7,7)) plt.scatter(x,y,s=1,label="Monte Carlo samples") plt.plot(x,spline(x),color='red',label="spline") plt.legend() plt.title("Noisy evaluations of sqrt") plt.grid() plt.show() And the resulting plot, where one sees rather nicely that the spline provides a poor approximation around zero: | What you're looking for is a function that constructs a clamped B-Spline that interpolates the given endpoints. Unfortunately, to the best of my knowledge, no scipy spline interface implements this exactly. Still, Unit 9 of this online course describes an algorithm to solve exactly this problem denoted "Curve Global Approximation". Below I implement a python version of this algorithm, which follows the description in the website. The function bspline_least_square_with_clamped_endpoints(x, y, n_ctrl, p) returns a scipy.interpolate.BSpline object with n_ctrl coefficients, that interpolates the first and last input points as you requested. The x,y inputs are as in your example, and the p parameter designates the spline degree, which is 3 by default. The parameter n_ctrl is the number of coefficients in the resulting B-Spline, which you can adjust to your needs. Note that as it increases the result may overfit. Calling the function like below on your input, with the additional points (0,0) and (1,1) at the beginning and end, and with n_ctrl=6, results in the following figure. spline = bspline_least_square_with_clamped_endpoints(x, y, n_ctrl=6, p=3) The implementation follows the explanation in the website. from scipy.interpolate import BSpline from scipy.linalg import solve def bspline_least_square_with_clamped_endpoints(x, y, n_ctrl, p=3): """ Build a clamped BSpline least square approximation of the points (x, y) with number of coefficients=n_ctrl and spline_degree=p. The resulting BSpline will satisfy BSpline(x[0]) == y[0] and BSpline(x[-1]) == y[-1]. 
Based on the algorithm presented in: https://pages.mtu.edu/~shene/COURSES/cs3621/NOTES/INT-APP/CURVE-APP-global.html """ # Build clamped knot vector between x[0] and x[-1], (simple) implementation here builds uniform inner knots num_inner_knots_inc_end_pts = n_ctrl - p + 1 # including knots[p] and knots[-p-1] inner_knots = np.linspace(x[0], x[-1], num_inner_knots_inc_end_pts) knots = np.concatenate([np.array([x[0]] * p), inner_knots, np.array([x[-1]] * p)]) N = build_basis_functions_eval_matrix(knots, x, p) Q = y - N[:, 0] * y[0] - N[:, -1] * y[-1] # now remove first and last rows and columns since we only need rows 1 to n-1, columns 1 to h-1 N = N[1:-1, 1:-1] Q = Q[1:-1] c = solve((N.T).dot(N), (N.T).dot(Q)) c = np.concatenate([[y[0]], c, [y[-1]]]) # append the first and last values (as the interpolating end coefficients) bs = BSpline(knots, c, p) return bs The current implementation constructs a clamped knot vector with inner knots uniformly spaced, there are other options (which can result in different basis functions and therefore not exactly the same resulting curve). The main algorithmic work is done in the function build_basis_functions_eval_matrix(knots, x, p) below, which constructs the matrix N of B-Spline basis function evaluations at the inputs. The matrix shape is (len(x), n_ctrl) and the entry N[k, i] contains the evaluation of the basis function N_i(x_k). In my implementation I use the fact that the i'th B-Spline basis functions can be reconstructed from scipy.interpolate.BSpline by setting the coefficients to [0,..,0,1,0,...,0] - see my answer here for a discussion of how this is done. def build_basis_functions_eval_matrix(knots, x, p=3): """ Build N matrix, where N[k, i] contains the evaluation of basis function N_i(x_k).""" n = len(knots) - p - 1 # number of ctrls == number of columns m = len(x) # number of samples == number of rows # Using the hack that scipy.interpolate.BSpline with coefficients [0,...,0, 1, 0,...,0] # reconstructs the i'th basis function (see https://stackoverflow.com/a/77962609/9702190). cols = [] for i in range(n): c = np.zeros((n,)) c[i] = 1 N_i = BSpline(knots, c, p) col = N_i(x) cols.append(col.reshape((m, 1))) N = np.hstack(cols) return N | 2 | 4 |
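A short usage sketch for the helper above on the question's noisy square-root data — note that the endpoint samples are pinned to the known boundary values first, since the spline interpolates whatever `y[0]` and `y[-1]` happen to be:

```python
import numpy as np

x = np.linspace(0, 1, 1000)
y = np.sqrt(x) + (np.random.random(1000) - 0.5) * 0.05
y[0], y[-1] = 0.0, 1.0                     # enforce f(0)=0 and f(1)=1 exactly

spline = bspline_least_square_with_clamped_endpoints(x, y, n_ctrl=6, p=3)
print(spline(0.0), spline(1.0))            # ~0.0 and ~1.0 up to floating-point error
```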
78,489,029 | 2024-5-16 | https://stackoverflow.com/questions/78489029/pandas-dense-rank-with-same-values-in-order-by | I have the following DataFrame in Pandas: ID snapshot_date row_hash qwe Jan 01 2024 123 qwe Jan 03 2024 456 qwe Jan 05 2024 456 qwe Jan 07 2024 123 Note: that row_hash changed back on Jan 07 2024 I want to create 3 groups (like window function in SQL), but I can't get the desired rseult: ID snapshot_date row_hash dense_rank I get dense_rank needed qwe Jan 01 2024 123 1 1 qwe Jan 03 2024 456 1 2 qwe Jan 05 2024 456 2 2 qwe Jan 07 2024 123 2 3 I tried to create it with the code: snapshot_df['dense_rank'] = snapshot_df.groupby(['ID', 'row_hash'])['snapshot_date'].rank( method='dense').astype(int) Can anybody help me? | If need counter for change by columns ID and row_hash compare by shifted values by DataFrame.shift with DataFrame.any and Series.cumsum: cols = ['ID', 'row_hash'] df['dense_rank'] = df[cols].ne(df[cols].shift()).any(axis=1).cumsum() print (df) ID snapshot_date row_hash dense_rank 0 qwe Jan 01 2024 123 1 1 qwe Jan 03 2024 456 2 2 qwe Jan 05 2024 456 2 3 qwe Jan 07 2024 123 3 How it working: print (df[cols].ne(df[cols].shift())) ID row_hash 0 True True 1 False True 2 False False 3 False True print (df[cols].ne(df[cols].shift()).any(axis=1)) 0 True 1 True 2 False 3 True dtype: bool print (df[cols].ne(df[cols].shift()).any(axis=1).cumsum()) 0 1 1 2 2 2 3 3 dtype: int32 EDIT: Solution for groups: cols = ['ID', 'row_hash'] df['dense_rank'] = df[cols].ne(df[cols].shift()).any(axis=1).groupby(df['ID']).cumsum() Or: cols = ['ID', 'row_hash'] df['dense_rank'] = (df.assign(g = df[cols].ne(df[cols].shift()).any(axis=1)) .groupby(['ID'])['g'] .cumsum()) print (df) ID snapshot_date row_hash dense_rank 0 qwe Jan 01 2024 123 1 1 qwe Jan 03 2024 456 2 2 qwe Jan 05 2024 456 2 3 qwe Jan 07 2024 123 3 4 qwe1 Jan 01 2024 123 1 5 qwe1 Jan 03 2024 456 2 6 qwe1 Jan 05 2024 456 2 7 qwe1 Jan 07 2024 123 3 | 2 | 1 |
78,489,949 | 2024-5-16 | https://stackoverflow.com/questions/78489949/python-merge-two-dataframes-based-on-created-time | I have two dfs, two df's has be to be merged by class and joining dates. Please check the below df's df1 class teacher age instructor_joining_date A mark 50 2024-01-20 07:18:29.599 A john 45 2024-05-08 05:31:21.379 df2 class count student_joining_date A 1 2024-05-17 01:05:58.072 A 50 2024-04-10 10:39:06.608 A 75 2024-04-05 09:49:07.246 Final output df class count student_joining_date teacher age A 1 2024-05-17 01:05:58.072 john 45 A 50 2024-04-10 10:39:06.608 mark 50 A 75 2024-04-05 09:49:07.246 mark 50 For df2 we have merge df1 by class and joining dates Edit: Yes if student_joining_date and instructor_joining_date is different. If student_joining_date is greater than instructor_joining_date then that teacher will be mapped here | You have to use a merge_asof, then restore the original order with reindex: df1['instructor_joining_date'] = pd.to_datetime(df1['instructor_joining_date']) df2['student_joining_date'] = pd.to_datetime(df2['student_joining_date']) out = (pd.merge_asof(df2.sort_values(by='student_joining_date').reset_index(), df1.sort_values(by='instructor_joining_date'), left_on='student_joining_date', right_on='instructor_joining_date', by='class') .set_index('index').reindex(df2.index) .drop(columns='instructor_joining_date') ) Output: class count student_joining_date teacher age 0 A 1 2024-05-17 01:05:58.072 john 45 1 A 50 2024-04-10 10:39:06.608 mark 50 2 A 75 2024-04-05 09:49:07.246 mark 50 Intermediate before dropping the instructor_joining_date column: class count student_joining_date teacher age instructor_joining_date 0 A 1 2024-05-17 01:05:58.072 john 45 2024-05-08 05:31:21.379 1 A 50 2024-04-10 10:39:06.608 mark 50 2024-01-20 07:18:29.599 2 A 75 2024-04-05 09:49:07.246 mark 50 2024-01-20 07:18:29.599 | 2 | 2 |
78,488,599 | 2024-5-16 | https://stackoverflow.com/questions/78488599/unexpected-reversed-secondary-y-axis-on-dataframe-plot | I'm trying to plot a electrical consumption, first in mA with a date, and with secondary axis in W with julian day. I refered to this matplotlib article, and even if the example works perfectly, I can't figrure where mine differ of it. Because my secondary y axis is inverted as it's supposed to be. Here my prog : import pandas as pd import matplotlib.pyplot as plt import glob import os import matplotlib.dates as mdates import datetime path = '[...]TEMPORARY/CR1000_test_intergration/' all_files = glob.glob(os.path.join(path , "*.dat")) li = [] for filename in all_files: df = pd.read_csv(filename, skiprows=[0,2,3], header=0, index_col=0 ) li.append(df) frame = pd.concat(li, axis=0) frame=frame.sort_values('TIMESTAMP') frame.fillna(0) frame.index = pd.to_datetime(frame.index,format="%Y-%m-%d %H:%M:%S") st_date = pd.to_datetime("2024-05-12 23:30:00", format='%Y-%m-%d %H:%M:%S') en_date = frame.index[-1] mask = frame.loc[st_date:en_date].index window1 = frame.loc[(frame.index >= st_date) & (frame.index <= en_date)] #PLOT fig, ax = plt.subplots(1,1, figsize=(20,6), dpi=150, sharex=True) fig.suptitle('CUBE CONSO',fontsize=14, fontweight='bold') fig.subplots_adjust(hspace=0) plt.xticks(rotation=30) ax.grid(True) ax.xaxis.set_major_locator(mdates.HourLocator(interval=6)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d | %H:%M')) ax.set_ylabel('A') ax.plot(window1.index,window1['R2_voltage_Avg'], color='r', linewidth=2) def date2yday(x): y = x - mdates.date2num(datetime.datetime(2024, 1, 1)) return y def yday2date(x): y = x + mdates.date2num(datetime.datetime(2024, 1, 1)) return y secax_x = ax.secondary_xaxis( 'top', functions=(date2yday, yday2date)) secax_x.set_xlabel('julian day [2024]') def ma_to_w(x): return (x * 12.5) def w_to_ma(x): return (12.5 / (x+0.0001)) #avoid divide by 0 secax_y = ax.secondary_yaxis( 'right', functions=(ma_to_w,w_to_ma)) secax_y.set_ylabel('W') And here a sample of data (the concatened dataframe) : TIMESTAMP RECORD R1_voltage_Avg R2_voltage_Avg out1_voltage_Avg out2_voltage_Avg 2024-05-13 00:00:00 34155 0.286 0.099 78.56 3.949 2024-05-13 00:01:00 34156 0.797 0.104 20.91 0.057 2024-05-13 00:02:00 34157 0.599 0.091 41.6 0.966 2024-05-13 00:03:00 34158 0.519 0.097 27.76 0.824 2024-05-13 00:04:00 34159 0.814 0.096 27.39 0.455 2024-05-13 00:05:00 34160 0.828 0.101 19.75 0.398 2024-05-13 00:06:00 34161 0.664 0.098 58.36 1.193 2024-05-13 00:07:00 34162 0.081 0.1 49.98 1.023 2024-05-13 00:08:00 34163 0.414 0.098 50.26 0.739 2024-05-13 00:09:00 34164 0.708 0.101 45.97 0.568 2024-05-13 00:10:00 34165 0.698 0.099 82.2 3.552 2024-05-13 00:11:00 34166 0.524 0.101 40.6 -0.54 2024-05-13 00:12:00 34167 0.793 0.093 63.76 3.864 2024-05-13 00:13:00 34168 0.72 0.086 12.76 -0.256 2024-05-13 00:14:00 34169 0.564 0.096 23.44 0.881 2024-05-13 00:15:00 34170 0.67 0.094 30.17 2.33 And finally a plot : | It is unclear what you are expecting, but here is what I would have done: import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates import datetime # Sample data data = { 'TIMESTAMP': [ '2024-05-13 00:00:00', '2024-05-13 00:01:00', '2024-05-13 00:02:00', '2024-05-13 00:03:00', '2024-05-13 00:04:00', '2024-05-13 00:05:00', '2024-05-13 00:06:00', '2024-05-13 00:07:00', '2024-05-13 00:08:00', '2024-05-13 00:09:00', '2024-05-13 00:10:00', '2024-05-13 00:11:00', '2024-05-13 00:12:00', '2024-05-13 00:13:00', '2024-05-13 00:14:00', 
'2024-05-13 00:15:00' ], 'R2_voltage_Avg': [0.099, 0.104, 0.091, 0.097, 0.096, 0.101, 0.098, 0.1, 0.098, 0.101, 0.099, 0.101, 0.093, 0.086, 0.096, 0.094] } df = pd.DataFrame(data) df['TIMESTAMP'] = pd.to_datetime(df['TIMESTAMP']) df.set_index('TIMESTAMP', inplace=True) fig, ax = plt.subplots(1, 1, figsize=(20, 6), dpi=150, sharex=True) fig.suptitle('CUBE CONSO', fontsize=14, fontweight='bold') fig.subplots_adjust(hspace=0) plt.xticks(rotation=30) ax.grid(True) ax.xaxis.set_major_locator(mdates.HourLocator(interval=1)) ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d | %H:%M')) ax.set_ylabel('A') ax.plot(df.index, df['R2_voltage_Avg'], color='r', linewidth=2) def date2yday(x): y = x - mdates.date2num(datetime.datetime(2024, 1, 1)) return y def yday2date(x): y = x + mdates.date2num(datetime.datetime(2024, 1, 1)) return y secax_x = ax.secondary_xaxis('top', functions=(date2yday, yday2date)) secax_x.set_xlabel('Julian day [2024]') def ma_to_w(x): voltage = 12.5 return (x / 1000) * voltage def w_to_ma(x): voltage = 12.5 return (x / voltage) * 1000 secax_y = ax.secondary_yaxis('right', functions=(ma_to_w, w_to_ma)) secax_y.set_ylabel('W') plt.show() which gives | 2 | 1 |
78,486,620 | 2024-5-15 | https://stackoverflow.com/questions/78486620/can-t-get-exponential-curve-fit-to-work-with-dates | Iβm trying to plot a curve_fit for the S&P 500. Iβm successful (I think) at performing a linear fit/plot. When I try to get an exponential curve_fit to work, I get this error: Optimal parameters not found: Number of calls to function has reached maxfev = 800. import numpy as np import matplotlib.pyplot as plt import yfinance as yf from scipy.optimize import curve_fit # get data df = yf.download("SPY", interval = '1mo') df = df.reset_index() def func(x, a, b): return a * x + b # ?? return a * np.exp(-b * x) + c # ?? return a*(x**b)+c # ?? return a*np.exp(b*x) # create data arrays # convert Date to numeric for curve_fit ?? xdata = df['Date'].to_numpy().astype(np.int64)//10**9 ydata = df['Close'].to_numpy() # p0 = (?, ?, ?) use guesses? popt, pcov = curve_fit(func, xdata, ydata) print(popt) y_pred = func(xdata, *popt) plt.plot(xdata, ydata) plt.plot(xdata, y_pred, '-') plt.show() Am I dealing with dates correctly? Should I be doing a p0 initial guess? This question/solution may provide some clues. It would be nice to have the x-axis labeled in a date format (but not important right now). | In addition to normalizing the data, it is important to actually choose a good function. In your example you had: def func(x, a, b): return a * x + b # ?? return a * np.exp(-b * x) + c # ?? return a*(x**b)+c # ?? return a*np.exp(b*x) The correct one, when you say you want to fit an exponential, should be this, IMO: # define the exponential growth function, before you had exponential decay because of -b def ExponentialGrowth(x, a, b, c): return a * np.exp(b * x) + c # + c due to account for offset The power function might work as well, I did not check. Anyways, here's the code: # define the exponential growth function, before you had exponential decay because of -b def ExponentialGrowth(x, a, b, c): return a * np.exp(b * x) + c # + c due to account for offset # get data x = df['Date'].to_numpy().astype(np.int64)//10**9 y = df['Close'].to_numpy() # apply z normalization xNorm = (x - x.mean()) / x.std() yNorm = (y - y.mean()) / y.std() # get the optimal parameters popt, pcov = curve_fit(ExponentialGrowth, xNorm, yNorm) # get the predicted but in the normalized range yPredNorm = ExponentialGrowth(xNorm, *popt) # reverse normalize the predicted values yPred = yPredNorm * (y.std()) + y.mean() plt.figure() plt.scatter(df['Date'], y, 1) plt.plot(df['Date'], yPred, 'r-') plt.grid() plt.legend(["Raw", "Fitted"]) plt.xlabel("Year") plt.ylabel("Close") And the results: If you will eventually need to get initial guesses, you can search online how to get the initial guesses for any function. For example, if I am fitting an exponential growth function and I know that the data has an offset of 100, I can set the initial guess of c to a 100... Hope this helps you. | 2 | 2 |
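On the `p0` question: initial guesses (and a higher `maxfev`) can be passed straight to `curve_fit`; a drop-in variant of the fitting call above, where the guess values are only illustrative and `ExponentialGrowth`, `xNorm`, `yNorm` come from the answer's code:

```python
p0 = (1.0, 1.0, 0.0)   # rough starting guesses for a, b, c on the normalized data
popt, pcov = curve_fit(ExponentialGrowth, xNorm, yNorm, p0=p0, maxfev=10000)
```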
78,485,825 | 2024-5-15 | https://stackoverflow.com/questions/78485825/width-parameter-not-affecting-dash-column | I'm trying to build a simple dashboard to display temperature and humidity on different days. I would like to have a dropdown to select the date and then have a row showing a graph of temperature throughout the day next to a graph showing humidity (two graphs in a single row). I can't get it to work even though I've tried using the embedded dbc.column within dbc.row. I believe the width parameter isn't doing anything and maybe the app is wrapping around. Here is the code with which you can reproduce what I am doing (keep in mind I am using Jupyter Notebook): import pandas as pd import dash from dash import dcc from dash import html from dash.dependencies import Input, Output import dash_bootstrap_components as dbc import plotly.express as px # Sample DataFrame data_dict = { 'date': ['2024-05-14', '2024-05-14', '2024-05-14', '2024-05-15','2024-05-15','2024-05-15'], 'time': ['12:00', '12:10', '12:20', '12:00', '12:10', '12:20'], 'temperature': [25, 26, 27, 20, 22, 28], 'humidity': [50, 55, 50, 60, 78, 77] } data = pd.DataFrame(data_dict) app = dash.Dash(__name__) app.layout = dbc.Container([ html.H1('Weather Dashboard'), dbc.Row([ dbc.Col([ dcc.Dropdown( id='date-dropdown', options=[{'label': date, 'value': date} for date in data['date'].unique()], value=data['date'].unique()[0] # Default value ) ], width=4) ]), dbc.Row([ dbc.Col([ dcc.Graph(id='temp-graph-plotly', figure={}) ], width=12), dbc.Col([ dcc.Graph(id='hum-graph-plotly', figure={}) ], width=12) ]) ]) # Define callback to update graph @app.callback( Output('temp-graph-plotly', 'figure'), Output('hum-graph-plotly', 'figure'), Input('date-dropdown', 'value') ) def update_graph(selected_date): # Filter DataFrame based on selected date filtered_data = data[data['date'] == selected_date] ### TEMPERATURE ### fig_temp_plotly = px.line(filtered_data, x='time', y='temperature').update_xaxes(tickangle=300) ### HUMIDITY ### fig_hum_plotly = px.line(filtered_data, x='time', y='humidity').update_xaxes(tickangle=300) return fig_temp_plotly, fig_hum_plotly # Run the app if __name__ == '__main__': app.run_server(debug=True, port=8010) I thought my layout was ok but I'm getting something like the following image, where the graphs are in two rows instead of simply one next to each other: | Dash Bootstrap Components: Layout: Layout in Bootstrap is controlled using the grid system. The Bootstrap grid has twelve columns [...] The width of your columns can be specified in terms of how many of the twelve grid columns it should span [...] Each of your graph column spans across the whole 12-column sized row. That is why the second dbc.Col is placed below the first one - no more space in that dbc.Row. Set the width to 6 or less to fit both columns in one row. You're also missing external_stylesheets in your application initialization, should be: app = dash.Dash( external_stylesheets=[dbc.themes.BOOTSTRAP] ) | 2 | 4 |
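Putting both fixes together, the layout portion would look roughly like this (callbacks omitted, IDs kept from the question):

```python
import dash
import dash_bootstrap_components as dbc
from dash import dcc, html

app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

app.layout = dbc.Container([
    html.H1('Weather Dashboard'),
    dbc.Row([
        dbc.Col(dcc.Dropdown(id='date-dropdown'), width=4),
    ]),
    dbc.Row([
        dbc.Col(dcc.Graph(id='temp-graph-plotly'), width=6),   # 6 + 6 = 12 -> same row
        dbc.Col(dcc.Graph(id='hum-graph-plotly'), width=6),
    ]),
])
```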
78,486,428 | 2024-5-15 | https://stackoverflow.com/questions/78486428/is-it-possible-in-python-to-type-a-function-that-uses-the-first-elements-of-an-a | I have a Python function that retrieves the first element of an arbitrary number of *args: def get_first(*args): return tuple(a[0] for a in args) Lets say that I call this function as follows: b = (1, 2, 3) c = ("a", "b", "c", "d") x = get_first(b, c) I expect the type Tuple[int, str]. To me it seems impossible to achieve the correct typing to accurately reveal this type. I have had no luck with the TypeVarTuple PEP 646 or with the Paramspec PEP 612. | The signature you're looking for is related to the signature of zip; in fact, you could implement your function as next(zip(*args)). Unfortunately, there is, at the moment, no good way to type such a function. It would require variadic generics which are not currently implemented. For example, you can see the type signature for zip here: @overload def __new__(cls, *, strict: bool = ...) -> zip[Any]: ... @overload def __new__(cls, iter1: Iterable[_T1], /, *, strict: bool = ...) -> zip[tuple[_T1]]: ... @overload def __new__(cls, iter1: Iterable[_T1], iter2: Iterable[_T2], /, *, strict: bool = ...) -> zip[tuple[_T1, _T2]]: ... @overload def __new__( cls, iter1: Iterable[_T1], iter2: Iterable[_T2], iter3: Iterable[_T3], /, *, strict: bool = ... ) -> zip[tuple[_T1, _T2, _T3]]: ... @overload def __new__( cls, iter1: Iterable[_T1], iter2: Iterable[_T2], iter3: Iterable[_T3], iter4: Iterable[_T4], /, *, strict: bool = ... ) -> zip[tuple[_T1, _T2, _T3, _T4]]: ... @overload def __new__( cls, iter1: Iterable[_T1], iter2: Iterable[_T2], iter3: Iterable[_T3], iter4: Iterable[_T4], iter5: Iterable[_T5], /, *, strict: bool = ..., ) -> zip[tuple[_T1, _T2, _T3, _T4, _T5]]: ... @overload def __new__( cls, iter1: Iterable[Any], iter2: Iterable[Any], iter3: Iterable[Any], iter4: Iterable[Any], iter5: Iterable[Any], iter6: Iterable[Any], /, *iterables: Iterable[Any], strict: bool = ..., ) -> zip[tuple[Any, ...]]: ... This typechecks precisely only up to six arguments, which is good enough in practice, but is quite repetitive and rather clumsy. Unfortunately, this is probably the best you can do for now. | 3 | 6 |
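Applied to `get_first` itself, the same overload pattern (spelled out here only up to two arguments, with a catch-all fallback) would look something like the sketch below; extend it with more overloads as needed, mirroring how `zip` is typed:

```python
from typing import Any, Sequence, TypeVar, overload

_T1 = TypeVar("_T1")
_T2 = TypeVar("_T2")

@overload
def get_first(a: Sequence[_T1], /) -> tuple[_T1]: ...
@overload
def get_first(a: Sequence[_T1], b: Sequence[_T2], /) -> tuple[_T1, _T2]: ...
@overload
def get_first(*args: Sequence[Any]) -> tuple[Any, ...]: ...
def get_first(*args):
    return tuple(a[0] for a in args)

x = get_first((1, 2, 3), ("a", "b", "c", "d"))   # a checker now reveals tuple[int, str]
```

Type checkers may warn that the final catch-all overload overlaps the specific ones; the stdlib's `zip` stub accepts the same trade-off.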
78,484,792 | 2024-5-15 | https://stackoverflow.com/questions/78484792/python-polars-vectorized-operation-of-determining-current-solution-with-the-u | Let's say we have 3 variables a, b & c. There are n instances of each, and all but the first instance of c are null. We are to calculate each next c based on a given formula comprising of only present variables on the right hand side: c = [(1 + a) * (current_c) * (b)] + [(1 + b) * (current_c) * (a)] How do we go about this calculation without using native python looping? I've tried: pl.int_range(my_index_column_value, pl.len() + 1) (my index starts form 1) pl.rolling(...) (this seems to be quite an expensive operation) pl.when(...).then(...).otherwise(...) with the above two along with .over(...) & pl.select(...).item() to no avail. It's always the case that _the shift has already been fully made at once. I thought perhaps the most plausible way to do this would be either rolling by 1 with grouping by 2, or via pl.int_range(...) and using the current index column number as the shift value. However, these keep failing as I am unable to properly come up with the correct syntax - I'm unable to pass the index column value and have polars accept it as a number. Even casting throws the same errors. Right now I am thinking we could manage another row for shifting and passing values back to row c, but then again, I'm not sure if this would even be an efficient way to go about it... What would be the most optimal way to go about this without offloading to Rust? Code for reference: import polars as pl if __name__ == "__main__": initial_c_value = 3 df = pl.DataFrame(((2, 3, 4, 5, 8), (3, 7, 4, 9, 2)), schema=('a', 'b')) df = df.with_row_index('i', 1).with_columns(pl.lit(None).alias('c')) df = df.with_columns(pl.when(pl.col('i') == 1) .then( (((1 + pl.col('a')) * (initial_c_value) * (pl.col('b'))) + ((1 + pl.col('b')) * (initial_c_value) * (pl.col('a')))).alias('c')) .otherwise( ((1 + pl.col('a')) * (pl.col('c').shift(1)) * (pl.col('b'))) + ((1 + pl.col('b')) * (pl.col('c').shift(1)) * (pl.col('a')))).shift(1).alias('c')) print(df) | Using numba you can make ufuncs which polars can use seamlessly. from numba import guvectorize, int64 import polars as pl @guvectorize([(int64[:], int64[:], int64, int64[:])], '(n),(n),()->(n)', nopython=True) def make_c(a,b,init_c, res): res[0]=(1+a[0]) * init_c * b[0] + (1+b[0]) * init_c * a[0] for i in range(1,a.shape[0]): res[i] = (1+a[i]) * res[i-1] * b[i] + (1+b[i]) * res[i-1] * a[i] df = pl.DataFrame(((2, 3, 4, 5, 8), (3, 7, 4, 9, 2)), schema=('a', 'b')) df.with_columns( c=make_c(pl.col('a'), pl.col('b'), 3) ) shape: (5, 3) βββββββ¬ββββββ¬ββββββββββββ β a β b β c β β --- β --- β --- β β i64 β i64 β i64 β βββββββͺββββββͺββββββββββββ‘ β 2 β 3 β 51 β β 3 β 7 β 2652 β β 4 β 4 β 106080 β β 5 β 9 β 11032320 β β 8 β 2 β 463357440 β βββββββ΄ββββββ΄ββββββββββββ The way it works is that the ufunc detects that its input is a polars Expr (ie pl.col() is an Expr) and then it hands control to polars. Because of that you can NOT just do make_c('a','b',3) as then its input is just a str and it won't know what to do with that. | 3 | 2 |
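Worth noting as an alternative: for this particular formula the recurrence factors as c_i = c_{i-1} * ((1 + a_i) * b_i + (1 + b_i) * a_i), so it collapses to a cumulative product and can be expressed in pure Polars without numba — a sketch, assuming a Polars version that exposes `Expr.cum_prod` (older releases call it `cumprod`):

```python
import polars as pl

df = pl.DataFrame(((2, 3, 4, 5, 8), (3, 7, 4, 9, 2)), schema=('a', 'b'))
initial_c = 3

# per-row multiplier k_i = (1 + a_i) * b_i + (1 + b_i) * a_i
k = (1 + pl.col('a')) * pl.col('b') + (1 + pl.col('b')) * pl.col('a')
out = df.with_columns(c=k.cum_prod() * initial_c)
print(out)   # c: 51, 2652, 106080, 11032320, 463357440 - matches the ufunc result
```

A recurrence that does not factor this way would still need the ufunc (or another sequential) approach.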
78,483,440 | 2024-5-15 | https://stackoverflow.com/questions/78483440/polars-rolling-groups-with-starting-indices-set-by-a-different-column | Iβm working on a dataset with Polars (Python), and Iβm stumped on a βrollingβ grouping operation. The data looks like this: updated_at last_trade_ts ask_price ask_qty bid_price bid_qty 2023-12-20 15:30:54 CET 2023-12-20 15:30:42 CET 114.2 1.2 109.49 0.1 2023-12-20 15:31:38 CET 2023-12-20 15:30:42 CET 112.0 15.2 109.49 0.1 2023-12-20 15:31:44 CET 2023-12-20 15:30:42 CET 112.0 13.2 109.49 0.1 2023-12-20 15:31:58 CET 2023-12-20 15:31:56 CET 112.0 1.2 109.49 0.1 2023-12-20 15:33:14 CET 2023-12-20 15:31:56 CET 112.0 1.2 109.49 0.1 2023-12-20 15:33:27 CET 2023-12-20 15:31:56 CET 112.0 1.2 109.49 0.1 2023-12-20 15:33:36 CET 2023-12-20 15:31:56 CET 112.0 1.2 109.49 0.1 2023-12-20 15:33:47 CET 2023-12-20 15:31:56 CET 112.0 1.2 109.49 0.1 What I want to do is aggregate data in 5-minute windows with starting times set by the updated_at column, but only when there are new values in the last_trade_ts column. For example, in the fragment above, I want to have updated_at BETWEEN '2023-12-20 15:30:54' AND '2023-12-20 15:35:54' for the first group, and updated_at BETWEEN '2023-12-20 15:31:58' AND '2023-12-20 15:36:58' for the second. Of course, one possible solution would be to apply the .rolling() method on updated_at and then select only the first row from a subsequent grouping by last_trade_ts. However, the dataset is quite large, and this approach involves a lot of unnecessary calculations. Is there a way to perform something similar to what .rolling() does, but with a subset of starting indices? P.S. Iβm open to solutions using other tools, if needed. If someone has a solution in R, I can also consider migrating to that. | I cannot come up with solution which would use .group_by_dynamic() cause you can only use single column for both every and period parameters. One possible way of doing it would be more classical pre-window functions sql-like style, with DataFrame.join(). So, first we create a separate DataFrame which gives us list of start moments of desired groups. 
To do that, we would use Expr.rle_id(), the index column which would increase by 1 every time last_trade_ts changes: ( df .with_columns(pl.col("last_trade_ts").rle_id().alias('i')) ) βββββββββββββββββββββββ¬ββββββββββββββββββββββ¬ββββββββββββ¬ββββββ β updated_at β last_trade_ts β ask_price β i β β --- β --- β --- β --- β β datetime[ΞΌs] β datetime[ΞΌs] β f64 β u32 β βββββββββββββββββββββββͺββββββββββββββββββββββͺββββββββββββͺββββββ‘ β 2023-12-20 15:30:54 β 2023-12-20 15:30:42 β 114.2 β 0 β β 2023-12-20 15:31:38 β 2023-12-20 15:30:42 β 112.0 β 0 β β 2023-12-20 15:31:44 β 2023-12-20 15:30:42 β 112.0 β 0 β β 2023-12-20 15:31:58 β 2023-12-20 15:31:56 β 112.0 β 1 β β 2023-12-20 15:33:14 β 2023-12-20 15:31:56 β 112.0 β 1 β β 2023-12-20 15:33:27 β 2023-12-20 15:31:56 β 112.0 β 1 β β 2023-12-20 15:33:36 β 2023-12-20 15:31:56 β 112.0 β 1 β β 2023-12-20 15:33:47 β 2023-12-20 15:31:56 β 112.0 β 1 β βββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββ΄ββββββ now, we don't need this column, we only want to use it as a grouping index: df_groups = ( df .group_by(pl.col("last_trade_ts").rle_id()) .agg(pl.col("updated_at").first()) .drop("last_trade_ts") ) βββββββββββββββββββββββ β updated_at β β --- β β datetime[ΞΌs] β βββββββββββββββββββββββ‘ β 2023-12-20 15:31:58 β β 2023-12-20 15:30:54 β βββββββββββββββββββββββ Alternatively, you can use DataFrame.unique(): df_groups = ( df .unique("last_trade_ts",keep="first") .select("updated_at") ) βββββββββββββββββββββββ β updated_at β β --- β β datetime[ΞΌs] β βββββββββββββββββββββββ‘ β 2023-12-20 15:30:54 β β 2023-12-20 15:31:58 β βββββββββββββββββββββββ Now what we want to do is to join df_groups to df on condition df.updated_at between df_groups.updated_at and df_groups.updated_at + 5 minutes. Unfortunately, polars is not great on inequality joins, so you can either use cross join and .filter() afterwards: ( df_groups .join(df, how="cross") .filter( pl.col("updated_at_right") >= pl.col("updated_at"), pl.col("updated_at_right") <= pl.col("updated_at") + pl.duration(minutes=5) ) .group_by("updated_at") .agg(pl.col("updated_at_right").max()) ) βββββββββββββββββββββββ¬ββββββββββββββββββββββ β updated_at β updated_at_right β β --- β --- β β datetime[ΞΌs] β datetime[ΞΌs] β βββββββββββββββββββββββͺββββββββββββββββββββββ‘ β 2023-12-20 15:31:58 β 2023-12-20 15:33:47 β β 2023-12-20 15:30:54 β 2023-12-20 15:33:47 β βββββββββββββββββββββββ΄ββββββββββββββββββββββ Alternatively, you can also use duckdb and use sql to get the results: import duckdb duckdb.sql(""" select df_groups.updated_at as start, max(df.updated_at) as end from df_groups inner join df on df.updated_at >= df_groups.updated_at and df.updated_at <= df_groups.updated_at + interval 5 minutes group by df_groups.updated_at """).pl() βββββββββββββββββββββββ¬ββββββββββββββββββββββ β start β end β β --- β --- β β datetime[ΞΌs] β datetime[ΞΌs] β βββββββββββββββββββββββͺββββββββββββββββββββββ‘ β 2023-12-20 15:31:58 β 2023-12-20 15:33:47 β β 2023-12-20 15:30:54 β 2023-12-20 15:33:47 β βββββββββββββββββββββββ΄ββββββββββββββββββββββ | 3 | 3 |
78,484,160 | 2024-5-15 | https://stackoverflow.com/questions/78484160/trying-get-a-table-from-a-website-valueerror-if-using-all-scalar-values-you-m | I'm trying to make a function that automatically takes a table from a website(Wikipedia) cleans it a bit and than displays it, everything worked well with my first 2 tables but the third one is giving me some troubles. This is the code to define the function: def createTable(url, match): data= pd.read_html(url, match= match) name= data[0]["Name"] origin= data[0]["Origin"] type_= data[0]["Type"] number= data[0]["Number"] df= pd.DataFrame({"Name": name, "Origin": origin, "Type": type_, "Number": number}) df.replace("?", np.nan, inplace=True) df['Number']= df['Number'].replace(to_replace={r"\(.*\)": "", r"\[.*\]": ""}, regex=True) return df and this is the function at work: df_avIT= pd.DataFrame() df_avIT= createTable("https://en.wikipedia.org/wiki/List_of_equipment_of_the_Italian_Army", "125 To be upgraded and remain in service until 2035") df_avUK= pd.DataFrame() df_avUK= createTable("https://en.wikipedia.org/wiki/List_of_equipment_of_the_British_Army", "Challenger 2") df_avFR= pd.DataFrame() df_avFR= createTable("https://en.wikipedia.org/wiki/List_of_equipment_of_the_French_Army", "AMX Leclerc") As I said the first 2 give me no problem at all but when I tried on the third it returns, ValueError: If using all scalar values, you must pass an index. I know well the code isn't great I'm trying to improve it but this problem is stopping me and I can't find a valid solution, even though I scearched for similar problem to mine in various forums. (I'm sorry if my English is bad, if you didn't understand something tell me I'm gonna try to explain more). | Your script does not consistently yield Series for name/origin/type_/number, you sometimes have DataFrames, you can try to squeeze: name= data[0]["Name"].squeeze() origin= data[0]["Origin"].squeeze() type_= data[0]["Type"].squeeze() number= data[0]["Number"].squeeze() Side note: df_avIT = pd.DataFrame() is useless, you don't need to initialize empty DataFrames since the variable will be overwritten by df_avIT = createTable(...) | 2 | 1 |
78,483,843 | 2024-5-15 | https://stackoverflow.com/questions/78483843/how-to-scrape-links-from-summary-section-link-list-of-wikipedia | update: many thanks for the replies - the help and all the efforts! some additional notes i have added. below (at the end) howdy i am trying to scrape all the Links of a large wikpedia page from the "List of Towns and Gemeinden in Bayern" on Wikipedia using python. The trouble is that I cannot figure out how to export all of the links containing the words "/wiki/" to my CSV file. I am used to Python a bit but some things are still kinda of foreign to me. Any ideas? Here is what I have so far... the page: https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A from bs4 import BeautifulSoup as bs import requests res = requests.get("https://en.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A") soup = bs(res.text, "html.parser") gemeinden_in_bayern = {} for link in soup.find_all("a"): url = link.get("href", "") if "/wiki/" in url: gemeinden_in_bayern[link.text.strip()] = url print(gemeinden_in_bayern) the results do not look very specific: nt': 'https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Cookie_statement'} Kostenpflichtige Colab-Produkte - Hier kΓΆnnen Sie VertrΓ€ge kΓΌndigen what is really aimed - is to geth the list like so: https://de.wikipedia.org/wiki/Abenberg https://de.wikipedia.org/wiki/Abensberg https://de.wikipedia.org/wiki/Absberg https://de.wikipedia.org/wiki/Abtswind btw: on a sidenote: on the above mentioned subpages i have information in the infobox - which i am able to gather. See an example: import pandas urlpage = 'https://de.wikipedia.org/wiki/Abenberg' data = pandas.read_html(urlpage)[0] null = data.isnull() for x in range(len(data)): first = data.iloc[x][0] second = data.iloc[x][1] if not null.iloc[x][1] else "" print(first,second,"\n") which runs perfectly see the output: Basisdaten Basisdaten Koordinaten: 49Β° 15β² N, 10Β° 58β² OKoordinaten: 49Β° 15β² N, 10Β° 58β² O Bundesland: Bayern Regierungsbezirk: Mittelfranken Landkreis: Roth HΓΆhe: 414 m ΓΌ. NHN FlΓ€che: 48,41 km2 Einwohner: 5607 (31. Dez. 2022)[1] BevΓΆlkerungsdichte: 116 Einwohner je km2 Postleitzahl: 91183 Vorwahl: 09178 Kfz-Kennzeichen: RH, HIP GemeindeschlΓΌssel: 09 5 76 111 LOCODE: ABR Stadtgliederung: 14 Gemeindeteile Adresse der Stadtverwaltung: Stillaplatz 1 91183 Abenberg Website: www.abenberg.de Erste BΓΌrgermeisterin: Susanne KΓΆnig (parteilos) Lage der Stadt Abenberg im Landkreis Roth Lage der Stadt Abenberg im Landkreis Roth And that said i found out that the infobox is a typical wiki-part. so if i get familiar on this part - then i have learned alot - for future tasks - not only for me but for many others more that are diving into the Topos of scraping-wiki pages. So this might be a general task - helpful and packed with lots of information for many others too. so far so good: i have a list with pages that lead to quite a many infoboxes: https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern#A i think its worth to traverse over them - and fetch the infobox. the information you are looking for could be found with a python code that traverses over all the findindgs https://de.wikipedia.org/wiki/Abenberg https://de.wikipedia.org/wiki/Abensberg https://de.wikipedia.org/wiki/Absberg https://de.wikipedia.org/wiki/Abtswind ....and so on and so forth - note: with that i would be able to traverse my above mentioned scraper that is able to fetch the data of one info-box. 
update again hello dear HedgeHog , hello dear Salman Khan , first of all - many many thanks for the quick help and your awesome support. Glad that you set me stragiht. i am very very glad. btw. now that we have all the Links of a large wikpedia page from the "List of Towns and Gemeinden in Bayern". i would love to go ahead and work with the extraction of the infobox - which btw. would be a general task that might be interesting for many user on stackoverflow: conclusio: see the main page: https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern and the subpage with the infobox: https://de.wikipedia.org/wiki/Abenberg and how i gather data: import pandas urlpage = 'https://de.wikipedia.org/wiki/Abenberg' data = pandas.read_html(urlpage)[0] null = data.isnull() for x in range(len(data)): first = data.iloc[x][0] second = data.iloc[x][1] if not null.iloc[x][1] else "" print(first,second,"\n") which runs perfectly see the output: Basisdaten Basisdaten Koordinaten: 49Β° 15β² N, 10Β° 58β² OKoordinaten: 49Β° 15β² N, 10Β° 58β² O Bundesland: Bayern Regierungsbezirk: Mittelfranken Landkreis: Roth HΓΆhe: 414 m ΓΌ. NHN FlΓ€che: 48,41 km2 Einwohner: 5607 (31. Dez. 2022)[1] BevΓΆlkerungsdichte: 116 Einwohner je km2 Postleitzahl: 91183 Vorwahl: 09178 Kfz-Kennzeichen: RH, HIP GemeindeschlΓΌssel: 09 5 76 111 LOCODE: ABR Stadtgliederung: 14 Gemeindeteile Adresse der Stadtverwaltung: Stillaplatz 1 91183 Abenberg Website: www.abenberg.de Erste BΓΌrgermeisterin: Susanne KΓΆnig (parteilos) Lage der Stadt Abenberg im Landkreis Roth Lage der Stadt Abenberg im Landkreis Roth what is aimed is to gather all the data of the infobox(es) from all the pages. import requests from bs4 import BeautifulSoup import pandas as pd def fetch_city_links(list_url): response = requests.get(list_url) if response.status_code != 200: print(f"Failed to retrieve the page: {list_url}") return [] soup = BeautifulSoup(response.content, 'html.parser') divs = soup.find_all('div', class_='column-multiple') href_list = [] for div in divs: li_items = div.find_all('li') for li in li_items: a_tags = li.find_all('a', href=True) href_list.extend(['https://de.wikipedia.org' + a['href'] for a in a_tags]) return href_list def scrape_infobox(url): response = requests.get(url) soup = BeautifulSoup(response.content, 'html.parser') infobox = soup.find('table', {'class': 'infobox'}) if not infobox: print(f"No infobox found on this page: {url}") return None data = {} for row in infobox.find_all('tr'): header = row.find('th') value = row.find('td') if header and value: data[header.get_text(" ", strip=True)] = value.get_text(" ", strip=True) return data def main(): list_url = 'https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern' city_links = fetch_city_links(list_url) all_data = [] for link in city_links: print(f"Scraping {link}") infobox_data = scrape_infobox(link) if infobox_data: infobox_data['URL'] = link all_data.append(infobox_data) df = pd.DataFrame(all_data) df.to_csv('wikipedia_infoboxes.csv', index=False) if __name__ == "__main__": main() the Main Function: def main(): list_url = 'https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern' city_links = fetch_city_links(list_url) all_data = [] for link in city_links: print(f"Scraping {link}") infobox_data = scrape_infobox(link) if infobox_data: infobox_data['URL'] = link all_data.append(infobox_data) df = pd.DataFrame(all_data) df.to_csv('wikipedia_infoboxes.csv', index=False) Well i thoght that this function orchestrates the process: it 
fetches the city links, scrapes the infobox data for each city, and stores the collected data in a pandas DataFrame. Finally, it saves the DataFrame to a CSV file. BTW: i hope that this will not nukes the thread. i hope that this is okay here - this extended question - but if not - i can open a new thread! Thanks for all | Your selector is wrong. The names of towns are in a tag which is in li tag which in turn is under a div with class column-multiple. First, get all divs with class column-multiple and then get all the li items from the gathered divs and then get the href attribute of all the a tags inside. url = "https://de.wikipedia.org/wiki/Liste_der_St%C3%A4dte_und_Gemeinden_in_Bayern" response = requests.get(url) if response.status_code == 200: soup = BeautifulSoup(response.content, 'html.parser') #find all the div elemnts with class column-multiple divs = soup.find_all('div', class_='column-multiple') href_list = [] for div in divs: # Find all li elements within the div.column-multiple li_items = div.find_all('li') for li in li_items: #now get the href of all <a> tags in li items a_tags = li.find_all('a', href=True) href_list.extend([a['href'] for a in a_tags]) for href in href_list: print(f"https://de.wikipedia.org{href}") It will print what you want: https://de.wikipedia.org/wiki/Amberg https://de.wikipedia.org/wiki/Ansbach https://de.wikipedia.org/wiki/Aschaffenburg https://de.wikipedia.org/wiki/Augsburg https://de.wikipedia.org/wiki/Bamberg . . . | 2 | 3 |
78,483,941 | 2024-5-15 | https://stackoverflow.com/questions/78483941/pandas-dataframe-with-counted-values | I have the following data: data = [{'Shape': 'Circle', 'Color': 'Green'}, {'Shape': 'Circle', 'Color': 'Green'}, {'Shape': 'Circle', 'Color': 'Green'}] Which I create a DataFrame from: df = pd.DataFrame(data) Giving: >>> df Shape Color 0 Circle Green 1 Circle Green 2 Circle Green The data is always received in this form, and I cannot change it. Now I need to count the Color column for each Shape, with the Color as the index, like this: Circle Square Green 3 0 Red 0 0 However, while I know that Shape can be either Circle or Square, they may or may not be present in the data. Likewise, Color can be either Green or Red, but also may or may not be present in the data. So at the moment my solution is to use: df2 = pd.DataFrame( [ { "Color": "Green", "Circle": len(np.where((df["Shape"] == "Circle") & (df["Color"] == "Green"))[0]), "Square": len(np.where((df["Shape"] == "Square") & (df["Color"] == "Green"))[0]), }, { "Color": "Red", "Circle": len(np.where((df["Shape"] == "Circle") & (df["Color"] == "Red"))[0]), "Square": len(np.where((df["Shape"] == "Square") & (df["Color"] == "Red"))[0]), }, ] ) df2 = df2.set_index("Color") df2.index.name = None Which gives the desired result: >>> df2 Circle Square Green 3 0 Red 0 0 But I suspect this is inefficient. Is there a better way of doing this in Pandas directly? I tried a pivot_table, but couldn't get it to account for possible missing values in the data. | Using pivot_table and reindex: data = [{'Shape': 'Circle', 'Color': 'Green'}, {'Shape': 'Circle', 'Color': 'Green'}, {'Shape': 'Circle', 'Color': 'Green'}] df = pd.DataFrame(data) shapes = ['Circle', 'Square'] colors = ['Green', 'Red'] pivot_table = df.pivot_table(index='Color', columns='Shape', aggfunc='size', fill_value=0) pivot_table = pivot_table.reindex(index=colors, columns=shapes, fill_value=0) | 2 | 4 |
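A hedged alternative for the same counting task (not part of the accepted answer): pd.crosstab builds the Color-by-Shape counts directly, and the same reindex call fills in the categories that never appear in the data.

import pandas as pd

data = [{'Shape': 'Circle', 'Color': 'Green'},
        {'Shape': 'Circle', 'Color': 'Green'},
        {'Shape': 'Circle', 'Color': 'Green'}]
df = pd.DataFrame(data)

shapes = ['Circle', 'Square']
colors = ['Green', 'Red']

# cross-tabulate Color against Shape, then force the full category grid
out = (pd.crosstab(df['Color'], df['Shape'])
         .reindex(index=colors, columns=shapes, fill_value=0))
out.index.name = None
print(out)
#        Circle  Square
# Green       3       0
# Red         0       0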
78,482,270 | 2024-5-15 | https://stackoverflow.com/questions/78482270/saving-and-dropping-a-dataframe-in-a-for-loop | I have a number of dataframes from which I have to take a sample. The samples taken from that dataframe, have to be excluded from the next dataframe in order to not have any 'double' samples as there is some overlap. my code is as follows df_list = [df1, df2, df3, df4, df5] samplesizes = [8, 2, 4, 4, 2] sample = [] for df, samplesize in zip(df_list, samplesizes): if sample: #can't drop in the first loop df = df.drop(sample) #I want to drop the taken samples from the current df if max_pop_size < len(df): samplesize = max_pop_size #can't take a sample larger than population sample.append(df.sample(samplesize, random_state=1000)) I get stuck on the dropping after the first loop. I've tried several things and none seem to work. EDIT: the df's are a subset of 1 big population, so the columns are identical. the subsets (in df_list) are sliced based on earlier criteria which may contain duplicates. If a sample is taken, we don't want that specific row to be in the population of the next sample. To make things easier, they all have the same index! Any help would be much appreciated! | This is an interesting and not straightforward problem. I think you cannot/shouldn't: pre-filter the duplicates, since it might remove too many rows (it might remove a row in group 2 that was in group 1 even if it was not sampled from group 1) sample then post-filter, since removing rows after sampling would decrease the sample size in an unpredictable way concatenate the sampled DataFrames in the loop(*), this has quadratic complexity and would make the code slow for a large number of inputs I think a reasonable approach could be to first compute an ID to identify duplicated rows, then to loop over the groups and keep track of the already sampled rows: df1 = pd.DataFrame({'A': [1,2,3], 'B': [10,20,30]}) df2 = pd.DataFrame({'A': [2,3,4], 'B': [20,30,40]}) df3 = pd.DataFrame({'A': [1,4,5], 'B': [10,40,50]}) np.random.seed(1) df_list = [df1, df2, df3] samplesizes = [2, 1, 1] # combine all datasets to be able to identify the duplicates tmp = (pd.concat(df_list, keys=range(len(df_list))) .assign(ID=lambda d: d.groupby(list(tmp)).ngroup()) .set_index('ID', append=True) ) # set to keep track of already sampled unique IDs seen = set() # for each input (group) # first remove the already sampled rows # then sample the expected size sample = [] for k, g in tmp.groupby(level=0, sort=False): selected = g.drop(index=seen, level='ID').sample(samplesizes[k]) seen.update(selected.index.get_level_values('ID')) sample.append(selected.droplevel([0, -1])) print(sample) Of course, it could be that in some cases there are not enough rows left to sample from, but this is not really avoidable if you want to maintain an unbiased sampling. Example output: [ A B 0 1 10 2 3 30, A B 2 4 40, A B 2 5 50] (*) concatenating in a loop If your have a small number of DataFrames and rows, concatenating in a loop the total sampled rows (concat+merge) might be acceptable. 
This gives a shorter code: df1 = pd.DataFrame({'A': [1,2,3], 'B': [10,20,30]}) df2 = pd.DataFrame({'A': [2,3,4], 'B': [20,30,40]}) df3 = pd.DataFrame({'A': [1,4,5], 'B': [10,40,50]}) np.random.seed(1) df_list = [df1, df2, df3] samplesizes = [2, 1, 1] sample = [] total = pd.DataFrame(columns=df_list[0].columns) for df, samplesize in zip(df_list, samplesizes): selected = (df.merge(total, how='left', indicator=True) .loc[lambda x: x.pop('_merge').eq('left_only')] .sample(samplesize) ) total = pd.concat([total, selected]) sample.append(selected) print(sample) Example output: [ A B 0 1 10 2 3 30, A B 2 4 40, A B 2 5 50] | 2 | 2 |
78,482,381 | 2024-5-15 | https://stackoverflow.com/questions/78482381/how-to-restrict-pydantic-url-validation-to-specific-hosts-or-websites | I'm currently working with Pydantic's URL type for URL validation in my Python project. However, it seems that Pydantic does not currently provide a built-in mechanism for this. So what is the best approach to restrict a URL to a specific list of hosts in Pydantic? I have a pydantic model called Media which has an url attribute. I want the url to be restricted to certain websites or hosts." from pydantic import BaseModel, AnyUrl class Media(BaseModel): url: AnyUrl # the url host should only be from x.com or y.com | As you already mentioned there is no built-in support for doing so. Here an AfterValidator can do the job: from typing import Annotated, TypeAlias from pydantic import AfterValidator, AnyUrl, BaseModel valid_hosts = {"www.google.com", "www.yahoo.com"} def check_specific_hosts(url: AnyUrl) -> AnyUrl: if url.host in valid_hosts: return url raise ValueError("It's not in the list of accepted hosts") AcceptedUrl: TypeAlias = Annotated[AnyUrl, AfterValidator(check_specific_hosts)] class Media(BaseModel): url: AcceptedUrl print(Media(url="http://www.google.com")) try: print(Media(url="http://www.facebook.com")) except ValueError as e: print(e) output: url=Url('http://www.google.com/') 1 validation error for Media url Value error, It's not in the list of accepted hosts [type=value_error, input_value='http://www.facebook.com', input_type=str] For further information visit https://errors.pydantic.dev/2.6/v/value_error | 2 | 3 |
78,482,107 | 2024-5-15 | https://stackoverflow.com/questions/78482107/how-to-loop-through-each-element-of-a-loop-and-filter-out-conditions-in-a-python | I have a list of subcategories and a dataframe. I want to filter out the dataframe on the basis of each subcategory of the list. lst = [7774, 29409, 36611, 77553] import pandas as pd data = {'aucctlg_id': [143424, 143424, 143424, 143388, 143388, 143430], 'catalogversion_id': [1, 1, 1, 1, 1, 1.2], 'Key': [1434241, 1434241, 1434241, 1433881, 1433881, 14343012], 'item_id': [4501118, 4501130, 4501129, 4501128, 4501127, 4501126], 'catlog_description': ['M&BP PIG IRON FA', 'M&BP PIG IRON FA', 'M&BP PIG IRON FA', 'PIG IRON MIXED OG','PIG IRON MIXED OG', 'P.S JAM & PIG IRON FINES'], 'catlog_start_date': ['17-05-2024 11:00:00', '17-05-2024 11:00:00', '17-05-2024 11:00:00', '17-05-2024 11:00:00','17-05-2024 11:00:00', '17-05-2024 11:00:00'], 'subcategoryid': [29409, 29409, 29409, 7774, 7774, 36611], 'quantity': [200, 200, 200, 180, 180, 100], 'auctionable': ['Y', 'Y', 'Y', 'Y' ,'Y' ,'Y'] } df = pd.DataFrame(data) print(df) I have tried using the following code but I want output as a dataframe here it generates a list and of a single subcategory: new=[] for i in range(0, len(lst)): mask1 = df['subcategoryid']==(lst[i]) df2 = df.loc[mask1] new.append(df2) Required Output files, with the filtered data: df_7774, df_29409, df_36611 | You could pre-filter with isin, then use groupby: lst = [7774, 29409, 36611, 77553] out = dict(list(df[df['subcategoryid'].isin(lst)].groupby('subcategoryid'))) Which will create a dictionary of the desired dataframes: {7774: aucctlg_id catalogversion_id Key item_id catlog_description catlog_start_date subcategoryid quantity auctionable 3 143388 1.0 1433881 4501128 PIG IRON MIXED OG 17-05-2024 11:00:00 7774 180 Y 4 143388 1.0 1433881 4501127 PIG IRON MIXED OG 17-05-2024 11:00:00 7774 180 Y, 29409: aucctlg_id catalogversion_id Key item_id catlog_description catlog_start_date subcategoryid quantity auctionable 0 143424 1.0 1434241 4501118 M&BP PIG IRON FA 17-05-2024 11:00:00 29409 200 Y 1 143424 1.0 1434241 4501130 M&BP PIG IRON FA 17-05-2024 11:00:00 29409 200 Y 2 143424 1.0 1434241 4501129 M&BP PIG IRON FA 17-05-2024 11:00:00 29409 200 Y, 36611: aucctlg_id catalogversion_id Key item_id catlog_description catlog_start_date subcategoryid quantity auctionable 5 143430 1.2 14343012 4501126 P.S JAM & PIG IRON FINES 17-05-2024 11:00:00 36611 100 Y} If you want to create files without intermediate: lst = [7774, 29409, 36611, 77553] for k, g in df[df['subcategoryid'].isin(lst)].groupby('subcategoryid'): g.to_excel(f'df_{k}.xlsx') | 2 | 2 |
78,481,829 | 2024-5-15 | https://stackoverflow.com/questions/78481829/schedule-optimization-problem-dont-know-a-better-way-to-present-the-solution | So my music school runs a festival for all student bands to play together. Since there are some family members across diferent bands, I'm trying to create a solution to optimize the schedule of the bands - trying to minimize sum of time between bands that have relatives. Now I think I've found a solution. Don't know if it's the best but it looks like it's working. Now I'm trying to find a better way to present the solution. This "x" matrix is huge and maybe it would be better to extract only the "1"s and make a schedule table with the slots and what band is in a cronological order. But I don't know where to start. Hoping someone can point the direction. Relevant infos: English is not my first language; I'm not a programmer, and everything I did at the code bellow was what I was able to learn about python last 2 days. I'm using the code bellow. Feel free to criticize and make any suggestions. Thanks! from gekko import GEKKO import numpy as np m = GEKKO() #variables and constrains n = 20 #n of bands s = 37 #n of slots to play t = 25 #time in minutes one band spend on the stage x = m.Array(m.Var,(n,s),value=0,lb=0,ub=1,integer=True) #matrix of all bands(rows) x slots (columns) for j in range(s): m.Equation(m.sum([x[i,j] for i in range(n)])<=1) #since this is the decision i made it binary and sum=1 to 1 band only ocuppie one slot for i in range(n): m.Equation(m.sum([x[i,j] for j in range(s)])==1) #since this is the decision i made it binary and sum=1 to 1 band only ocuppie one slot z = [k for k in range(1,s+1)] #array with slot index to use to calc time w = z*x*t #time the band will perform ;; used in objetive function #objective #in this exemple the band (1,4) and the band (1,5) have a relationship, and so do (3,2) and (1,15), so the objetive function bellow will try to minimize the sum of the time between bands that have relationship y = m.abs2(w.item(1, 4)-w.item(1, 5))+ m.abs2(w.item(3, 2)-w.item(1, 15)) m.Minimize(y) #solver m.options.SOLVER = 1 m.solve() #print(w.item((1, 4))) print('y') print(y) print('x') print(x) | You need to extract the band and slot numbers for each 1 in the matrix and sort the schedule by slot number: schedule = [(i+1, j+1) for i in range(n) for j in range(s) if x[i,j][0] == 1] schedule.sort(key=lambda x: x[1]) for band, slot in schedule: print(f"Band {band} is scheduled to play in slot {slot}") | 3 | 1 |
78,481,203 | 2024-5-15 | https://stackoverflow.com/questions/78481203/plotly-dash-function-to-toggle-graph-parameters-python | I'm trying to install a function that provides a switch or toggle to alter a plotly graph. Using below, I have a scatter plot and two buttons that intend to change the color and size of the points. The buttons are applied to the same scatter plot. I can use daq.BooleanSwitch individually on either color or size but I can't use them together on the same two as two calls for id raises an error. Is it possible two use a button instead? Where it can toggle each parameter on or off. I don't want dropdown bars or distinct buttons as the graph should be able to hold combinations (on/off) for both buttons. At the moment, the buttons are in place but they don;t alter the color or size of the graph. import dash import dash_daq as daq from dash import dcc, html import dash_mantine_components as dmc from dash.dependencies import Input, Output import plotly.express as px import plotly.graph_objs as go import pandas as pd df = px.data.iris() color_dict = {'setosa':'green', 'versicolor':'yellow', 'virginica':'red'} colormap = df["species"].map(color_dict) app = dash.Dash(__name__) app.layout = html.Div( [ html.P("Color"), #daq.BooleanSwitch(id="color_toggle", on=False, color="red"), dmc.Button('Color', id="color_toggle", variant="light"), html.P("Size"), #daq.BooleanSwitch(id="size_toggle", on=False, color="red"), dmc.Button('Size', id="size_toggle", variant="light"), html.Div( dcc.Graph(id="chart"), #id="power-button-result-1", #id="power-button-result-2" ), ] ) @app.callback( Output("chart", "figure"), #Output("power-button-result-1", "children"), #Output("power-button-result-2", "children"), [ Input("color_toggle", "value"), Input("size_toggle", "value"), ] ) def update_output(color_on, size_on, ): if color_on: fig = px.scatter(df, x="sepal_width", y="sepal_length", color = colormap) #dcc.Graph(figure=fig) #return [dcc.Graph(figure=fig)] else: fig = px.scatter(df, x="sepal_width", y="sepal_length") #return [dcc.Graph(figure=fig)] if size_on: fig = px.scatter(df, x="sepal_width", y="sepal_length", size = 'sepal_length') #dcc.Graph(figure=fig) #return [dcc.Graph(figure=fig)] else: fig = px.scatter(df, x="sepal_width", y="sepal_length") #return [dcc.Graph(figure=fig)] return fig if __name__ == "__main__": app.run_server(debug=True) | It appears that you need the toggles for the Size and the Color. In that case, you can take advantage of dcc.checklist() in your html and style it. This code would most likely work, run it and see: app.layout = html.Div( [ html.P("Toggle for the Color"), dcc.Checklist( id="color_toggle", options=[{"label": "", "value": True}], value=[], inline=True ), html.P("Toggle for the Size"), dcc.Checklist( id="size_toggle", options=[{"label": "", "value": True}], value=[], inline=True ), html.Div( dcc.Graph(id="chart"), ), ] ) Then you don't need the if and else statements in the update_output(). For that, you can just call the update_traces() method. 
if color_on: fig.update_traces(marker=dict(color=colormap)) if size_on: fig.update_traces(marker=dict(size=10)) Code import dash from dash import dcc, html from dash.dependencies import Input, Output import plotly.express as px import pandas as pd df = px.data.iris() color_dict = {'setosa': 'green', 'versicolor': 'yellow', 'virginica': 'red'} colormap = df["species"].map(color_dict) app = dash.Dash(__name__) app.layout = html.Div( [ html.P("Toggle for the Color"), dcc.Checklist( id="color_toggle", options=[{"label": "", "value": True}], value=[], inline=True ), html.P("Toggle for the Size"), dcc.Checklist( id="size_toggle", options=[{"label": "", "value": True}], value=[], inline=True ), html.Div( dcc.Graph(id="chart"), ), ] ) @app.callback( Output("chart", "figure"), [Input("color_toggle", "value"), Input("size_toggle", "value")] ) def update_output(color_on, size_on): fig = px.scatter(df, x="sepal_width", y="sepal_length") if color_on: fig.update_traces(marker=dict(color=colormap)) if size_on: fig.update_traces(marker=dict(size=10)) return fig if __name__ == "__main__": app.run_server(debug=False) Notes: dcc.checklist() is a component for rendering a set of checkboxes "(See also RadioItems for selecting a single option at a time or Dropdown for a more compact view.)" See Interactive Visualizations | 3 | 3 |
78,461,651 | 2024-5-10 | https://stackoverflow.com/questions/78461651/why-arent-exceptions-caught-with-one-liner-while-statement | I have a code with a one-liner while and a try-except statement which behaves weirdly. This prints 'a' on Ctrl+C: try: while True: pass except KeyboardInterrupt: print("a") and this too: try: i = 0 while True: pass except KeyboardInterrupt: print("a") but this doesn't, and it throws a traceback: try: while True: pass except KeyboardInterrupt: print("a") and neither does this code: try: while True: pass i = 0 except KeyboardInterrupt: print("a") Addition some additional details. In 3.11, the instruction JUMP_BACKWARD was added and seems invloved with this issue see: Disassembler for Python bytecode In 3.12 when the code in the first and the 3rd blocks are disassembled the results are: Cannot be caught: 0 0 RESUME 0 2 2 NOP 3 >> 4 JUMP_BACKWARD 1 (to 4) >> 6 PUSH_EXC_INFO 4 8 LOAD_NAME 0 (KeyboardInterrupt) 10 CHECK_EXC_MATCH 12 POP_JUMP_IF_FALSE 11 (to 36) 14 POP_TOP 5 16 PUSH_NULL 18 LOAD_NAME 1 (print) 20 LOAD_CONST 1 ('a') 22 CALL 1 30 POP_TOP 32 POP_EXCEPT 34 RETURN_CONST 2 (None) 4 >> 36 RERAISE 0 >> 38 COPY 3 40 POP_EXCEPT 42 RERAISE 1 ExceptionTable: 4 to 4 -> 6 [0] 6 to 30 -> 38 [1] lasti 36 to 36 -> 38 [1] lasti None Can be caught: 0 0 RESUME 0 2 2 NOP 3 4 NOP 4 >> 6 NOP 3 8 JUMP_BACKWARD 2 (to 6) >> 10 PUSH_EXC_INFO 5 12 LOAD_NAME 0 (KeyboardInterrupt) 14 CHECK_EXC_MATCH 16 POP_JUMP_IF_FALSE 11 (to 40) 18 POP_TOP 6 20 PUSH_NULL 22 LOAD_NAME 1 (print) 24 LOAD_CONST 1 ('a') 26 CALL 1 34 POP_TOP 36 POP_EXCEPT 38 RETURN_CONST 2 (None) 5 >> 40 RERAISE 0 >> 42 COPY 3 44 POP_EXCEPT 46 RERAISE 1 ExceptionTable: 4 to 8 -> 10 [0] 10 to 34 -> 42 [1] lasti 40 to 40 -> 42 [1] lasti None The main differences that jump out are the two additional NOP and the different targets for JUMP_BACKWARD. Note: the exception really cannot be caught as this will also throw the exception in 3.12 try: try: while True: pass except KeyboardInterrupt: print("a") except Exception: print("b") | Its a known CPython bug introduced in 3.11 and exists in 3.12. One of comments of the bug, mentioned that partial backport of this pull request looks to be the right direction to fix the bug. I built and tested following CPython versions from source using pyenv on Arch Linux with GCC 14.1.1 compiler: 3.11-dev: Python 3.11.9+ (heads/3.11:ba43157, May 20 2024, 04:40:02) 3.12-dev: Python 3.12.3+ (heads/3.12:30c687c, May 20 2024, 04:38:13) 3.13.0b1: Python 3.13.0b1 (main, May 20 2024, 04:14:35) 3.13-dev: Python 3.13.0b1+ (heads/3.13:27b61c1, May 20 2024, 04:24:49) 3.14-dev: Python 3.14.0a0 (heads/main:0abf997, May 20 2024, 08:25:05) In 3.13.0b1, 3.13-dev and 3.14-dev the bug is fixed ππ and exception handling works as expected. But 3.11-dev and 3.12-dev still have the bug. I hope it will be backport to existing stable 3.11 and 3.12 versions (in-time for inclusion in the next 3.11.10 and 3.12.4 bug-fix releases respectively). EDIT 1: 3.12.4 released in 2024-06-06: the bug didn't fixed. EDIT 2: 3.12.5 released in 2024-08-06: the bug didn't fixed. | 14 | 7 |
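For code that has to keep running on the affected 3.11/3.12 builds, a minimal workaround sketch based on the question's own second example (an observation about the generated bytecode, not an official fix): placing any statement before the bare loop inside the try changes code generation enough that the handler is reachable again.

try:
    i = 0          # any statement before the bare busy-loop restores the expected behaviour here
    while True:
        pass
except KeyboardInterrupt:
    print("a")

The cleaner option, as the answer notes, is simply to move to 3.13.0b1 or later, where the fix has landed.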
78,470,913 | 2024-5-13 | https://stackoverflow.com/questions/78470913/generate-new-column-in-dataframe-with-modulo-of-other-column | I would like to create a new column "Day2" which takes the second digit of column named "Days", so if we have Days equal to 35, we would take the number 5 to be in "Day2", I tried this but it's not working: DF["Day2"] = DF["Days"].where( DF["Days"] < 10, (DF["Days"] / 10 % 10).astype(int), ) It seems it's taking the first digit but never the second one, can someone help? | This is a very simple and pythonic solution: import pandas as pd data = [10,11,35,45,65] df = pd.DataFrame(data, columns=['Day1']) df['Day2'] = df['Day1'].mod(10) df Result Day1 Day2 0 10 0 1 11 1 2 35 5 3 45 5 4 65 5 | 2 | 2 |
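For reference, a small generalisation of the same trick (my own sketch, not part of the answer): the k-th digit from the right of a non-negative integer column is (col // 10**k) % 10, so the units digit is k = 0 and the tens digit is k = 1.

import pandas as pd

df = pd.DataFrame({'Days': [10, 11, 35, 45, 65]})

df['Day2'] = df['Days'] % 10          # units digit: 0, 1, 5, 5, 5
df['Tens'] = df['Days'] // 10 % 10    # tens digit:  1, 1, 3, 4, 6
print(df)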
78,472,764 | 2024-5-13 | https://stackoverflow.com/questions/78472764/langchain-workaround-for-with-structured-output-using-chatbedrock | I'm working with the langchain library to implement a document analysis application. Especifically I want to use the routing technique described in this documentation. i wanted to follow along the example, but my environment is restricted to AWS, and I am using ChatBedrock instead of ChatOpenAI due to limitations with my deployment. According to this overview the with_structured_output method, which I need, is not (yet) implemented for models on AWS Bedrock, which is why I am looking for a workaround or any method to replicate this functionality. The key functionality I am looking for is shown in this example: from typing import List from typing import Literal from langchain_core.prompts import ChatPromptTemplate from langchain_core.pydantic_v1 import BaseModel, Field from langchain_openai import ChatOpenAI class RouteQuery(BaseModel): """Route a user query to the most relevant datasource.""" datasources: List[Literal["python_docs", "js_docs", "golang_docs"]] = Field( ..., description="Given a user question choose which datasources would be most relevant for answering their question", ) system = """You are an expert at routing a user question to the appropriate data source. Based on the programming language the question is referring to, route it to the relevant data source.""" prompt = ChatPromptTemplate.from_messages( [ ("system", system), ("human", "{question}"), ] ) llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0) structured_llm = llm.with_structured_output(RouteQuery) router = prompt | structured_llm router.invoke( { "question": "is there feature parity between the Python and JS implementations of OpenAI chat models" } ) The output would be: RouteQuery(datasources=['python_docs', 'js_docs']) The most important fact for me is that it just selects items from the list without any additional overhead, which makes it possible to setup the right follow up questions. Did anyone find a workaround how to resolve this issue? | I found a solution in these two blog posts: here and here. The key is to use the instructor package, which is a wrapper around pydantic. This means langchain is not necessary. Here is an example based on the blog posts: from typing import List import instructor from anthropic import AnthropicBedrock from loguru import logger from pydantic import BaseModel import enum class User(BaseModel): name: str age: int class MultiLabels(str, enum.Enum): TECH_ISSUE = "tech_issue" BILLING = "billing" GENERAL_QUERY = "general_query" class MultiClassPrediction(BaseModel): """ Class for a multi-class label prediction. """ class_labels: List[MultiLabels] if __name__ == "__main__": # Initialize the instructor client with AnthropicBedrock configuration client = instructor.from_anthropic( AnthropicBedrock( aws_region="eu-central-1", ) ) logger.info("Hello World Example") # Create a message and extract user data resp = client.messages.create( model="anthropic.claude-instant-v1", max_tokens=1024, messages=[ { "role": "user", "content": "Extract Jason is 25 years old.", } ], response_model=User, ) print(resp) logger.info("Classification Example") # Classify a support ticket text = "My account is locked and I can't access my billing info." 
_class = client.chat.completions.create( model="anthropic.claude-instant-v1", max_tokens=1024, response_model=MultiClassPrediction, messages=[ { "role": "user", "content": f"Classify the following support ticket: {text}", }, ], ) print(_class) | 3 | 2 |
78,476,603 | 2024-5-14 | https://stackoverflow.com/questions/78476603/calculate-cmyk-spot-coverage-on-pdf-with-python | I can't find any free or open-source libraries to calculate CMYK and spot color coverage on a PDF. I would be grateful if someone could guide me in the right direction as to how to access the color channels and calculate the percentage of color used (C, M, Y, K and spot, each exported separately) with Python. Point: I don't actually have a problem extracting C, M, Y, K, because I can easily extract them from an image; the problem is that when I add spot colors, they get converted back to CMYK. That's why I'm looking to do it on the PDF. Thanks | I hope this will be useful for those who have a similar problem in the future. Dependencies: Ghostscript and Pillow. How does it work? Ghostscript separates the colors (C, M, Y, K, spots) and saves each one as a .tiff file, and Pillow calculates the percentage of color used in each file (the files saved by Ghostscript are in grayscale mode and have only one channel, 0 to 255). Point: Before that, make sure you have installed Ghostscript. from PIL import Image import os , fnmatch def pdf_color_splitter(): # Where the photos of separated colors are placed => path = 'image_inputs/' if not os.path.exists(path): os.makedirs(path) # now we run ghostscript command for separated colors and save them as tiff files => os.system(f'gs -sDEVICE=tiffsep -o {path}c.tiff cmyk_calculate/2021.pdf') # get all .tiff FILES = fnmatch.filter(os.listdir(path), '*.tiff') # calculate colors coverage each separately splited_colors = [] for f in FILES: O_FILE = Image.open(path+f) image_sizew,image_sizeh = O_FILE.size # get width,height count=image_sizeh*image_sizew val=0 # Collects colored pixels for i in range(0, image_sizew): for j in range(0, image_sizeh): pixVal = O_FILE.getpixel((i, j)) if pixVal != 255 and type(pixVal) != tuple: # no white pixels val+= 100 - (pixVal//2.55) # Pay attention to the point below this code resp = {'name':f.split('.')[0].replace('c(','').replace(')',''),'coverage':val/count} splited_colors.append(resp) os.remove(path+f) # remove .tiff file in the end return splited_colors Look at this code: val+= 100 - (pixVal//2.55) So what was this for? We want a number in the range 0 to 100 because we are working in CMYK terms, and we subtract from 100 because the photo is in grayscale mode (255 means no ink), which gives the correct color density. | 2 | 1 |
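A possible follow-up on the Pillow half of the answer (it assumes each .tiff written by tiffsep is a single-channel 8-bit image where 255 means no ink, as described above): the per-pixel Python loop can be replaced with a NumPy reduction that computes the same average coverage, up to the floor division in the original, and runs far faster on large pages.

import numpy as np
from PIL import Image

def coverage_percent(tiff_path):
    # 0 = full ink, 255 = no ink in Ghostscript's separation output
    arr = np.asarray(Image.open(tiff_path), dtype=np.float64)
    return float((1.0 - arr / 255.0).mean() * 100.0)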
78,481,022 | 2024-5-14 | https://stackoverflow.com/questions/78481022/error-no-matching-distribution-found-for-tensorflow-cygwin | I'm trying to install TensorFlow on Cygwin but I'm getting these errors: $ /usr/bin/python3 -m pip install tensorflow==2.12.0 ERROR: Could not find a version that satisfies the requirement tensorflow==2.12.0 (from versions: none) ERROR: No matching distribution found for tensorflow==2.12.0 $ /usr/bin/python3 -m pip install tensorflow ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none) ERROR: No matching distribution found for tensorflow What I'm using: Windows 10, Cygwin (latest version), Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] on win32, Python 3.7.12 (default, Nov 23 2021, 18:58:07) [GCC 11.2.0] on Cygwin. How can I fix it? I tried upgrading pip, but nothing changed and I got this: (note: This error originates from a subprocess, and is likely not a problem with pip.) | It seems you're trying to install TensorFlow 2.12.0 on Python 3.7.0. According to the release notes of TensorFlow 2.12.0: "Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7." I would suggest upgrading Python or downgrading TensorFlow. | 2 | 1 |
78,471,617 | 2024-5-13 | https://stackoverflow.com/questions/78471617/python-keyring-does-not-overwrite-entry-but-creates-new-entry | I'm having trouble with the Python keyring library. I want to extract passwords from an entry in the Windows Credential Manager and then overwrite the password of that specific entry. Here is my code: import keyring # The service name in the credentials manager is "User@Service", the username is "User" password = keyring.get_password("User@Service", "User") # Overwrite the password keyring.set_password("User@Service", "User", "NewPassword") I would expect this to overwrite my old password, but it creates a new entry in the Credential Manager with the service name "User@User@Service". I also tried to simply do keyring.set_password("Service", "User", "NewPassword") but that creates a new entry with the service name "Service". I simply can't get it to overwrite an existing entry. Any help is appreciated. Thanks. | I ran into this problem today as well. It seems to be the same thing mentioned here: https://github.com/jaraco/keyring/issues/545 I don't know if it was ever fixed. I can't get it to work either, as it just creates a new credential unless both versions already exist (service and user@service). I am working around this by deleting the credential and creating a new one. import keyring from keyring.errors import PasswordDeleteError try: keyring.delete_password("testapp", "username") except PasswordDeleteError: # In case the password doesn't exist yet pass keyring.set_password("testapp", "username", "pw") | 2 | 1 |
78,471,354 | 2024-5-13 | https://stackoverflow.com/questions/78471354/slack-files-completeuploadexternal-no-uploading-files-to-the-slack | Since the slack file.upload is going to be depreciated, I was going to make new the methods work. The first one works as: curl -s -F [email protected] -F filename=test.txt -F token=xoxp-token -F length=50123 https://slack.com/api/files.getUploadURLExternal and returns {"ok":true,"upload_url":"https:\/\/files.slack.com\/upload\/v1\/CwABAAAAXAoAAVnX....","file_id":"F07XXQ9XXXX"} The second request is: curl -X POST \ -H "Authorization: Bearer xoxp-token" \ -H "Content-Type: application/json" \ -d '{ "files": [{"id":"F07XXXXXX", "title":"Slack API updates Testing"}], "channel_id": "C06EXXXXX" }' \ https://slack.com/api/files.completeUploadExternal With the second request, I get 200 OK responses. {"ok":true,"files":[{"id":"XXXXXXX","created":XXXXXXXX,"timestamp":XXXXXXXX,"name":"test.txt","title":"Slack API updates Testing","mimetype":"","filetype":"","pretty_type":"","user":"XXXXXXXXX","user_team":"XXXXXXXXX","editable":false,"size":50123,"mode":"hosted","is_external":false,"external_type":"","is_public":false,"public_url_shared":false,"display_as_bot":false,"username":"","url_private":"https:\/\/files.slack.com\/files-pri\/XXXXXXXXX-XXXXXXX\/test.txt","url_private_download":"https:\/\/files.slack.com\/files-pri\/XXXXXXXXX-XXXXXXX\/download\/test.txt","media_display_type":"unknown","permalink":"https:\/\/XXXXXXXX.slack.com\/files\/XXXXXXXXX\/XXXXXXX\/test.txt","permalink_public":"https:\/\/slack-files.com\/XXXXXXXXX-XXXXXXX-XXXXXXXX","comments_count":0,"is_starred":false,"shares":{},"channels":[],"groups":[],"ims":[],"has_more_shares":false,"has_rich_preview":false,"file_access":"visible"}],"warning":"missing_charset","response_metadata":{"warnings":["missing_charset"]}} Questions and problems: 1- How do I determine the length= in the first request (files.getUploadURLExternal ) 2- Even though I have provided the channel_id the file is still not getting uploaded on the Slack channel. Documentation: https://api.slack.com/methods/files.completeUploadExternal https://api.slack.com/methods/files.getUploadURLExternal | You skipped a step. You have to send a POST to the URL that is returned to you on the first step. In that post you are also doing the file upload. 2nd step should be: curl -F filename="@$filename" -H "Authorization: Bearer $SLACK_KEY" -v POST $upload_url then your 2nd step should be the 3rd step. As far as getting the byte size stat --printf="%s" file if you are using bash | 5 | 0 |
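To make the three-step flow from the answer easier to reuse, here is a hedged Python sketch assembled only from the endpoints and fields already shown above (files.getUploadURLExternal, the returned upload_url, then files.completeUploadExternal); the token, channel ID, title and file path are placeholders, and the answer to question 1 is simply that length is the file size in bytes.

import os
import requests

TOKEN = "xoxp-token"        # placeholder
CHANNEL_ID = "C06EXXXXX"    # placeholder
path = "test.txt"

headers = {"Authorization": f"Bearer {TOKEN}"}

# step 1: request an upload URL; length is the size of the file in bytes
resp = requests.post(
    "https://slack.com/api/files.getUploadURLExternal",
    headers=headers,
    data={"filename": os.path.basename(path), "length": os.path.getsize(path)},
).json()
upload_url, file_id = resp["upload_url"], resp["file_id"]

# step 2: POST the file itself to the returned upload_url (mirrors the curl -F call above)
with open(path, "rb") as f:
    requests.post(upload_url, headers=headers, files={"filename": f})

# step 3: finalise the upload and share it into the channel
requests.post(
    "https://slack.com/api/files.completeUploadExternal",
    headers=headers,
    json={"files": [{"id": file_id, "title": "Slack API updates Testing"}],
          "channel_id": CHANNEL_ID},
)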
78,476,300 | 2024-5-14 | https://stackoverflow.com/questions/78476300/pip-install-fails-for-pandas | I am trying to install pandas on a Windows machine but get the following output: python -m pip install pandas Collecting pandas Using cached pandas-2.2.2.tar.gz (4.4 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error Γ Preparing metadata (pyproject.toml) did not run successfully. β exit code: 1 β°β> [13 lines of output] + meson setup C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build\meson-python-native-file.ini The Meson build system Version: 1.2.1 Source dir: C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db Build dir: C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build Build type: native build Project name: pandas Project version: 2.2.2 Activating VS 15.9.60 ..\..\meson.build:2:0: ERROR: Value "c11" (of type "string") for combo option "C language standard to use" is not one of the choices. Possible choices are (as string): "none", "c89", "c99", "gnu89", "gnu90", "gnu9x", "gnu99". A full log can be found at C:\Users\serba\AppData\Local\Temp\pip-install-6cc_jg_i\pandas_90906587555b4b9d852626b7159094db\.mesonpy-6bapjes7\build\meson-logs\meson-log.txt [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ Encountered error while generating package metadata. β°β> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. I tried the following: running the terminal as administrator updating pip installing older versions of pandas because I read there is some issue with the latest one looking inside the log file from the output, but it doesn't exist anymore after the process finishes The output is still the same. I see it complaining about a C standard. Do I need a specific compiler to be installed or any other prerequisites? Thanks. Regards, Serban | So I got past this, it was a stupid reason this was happening. I had another pip binary somewhere in a MinGW installation folder and it was always calling that one. I made sure change the environment variables to point to a proper, standalone installation of pip (also python). | 2 | 1 |
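Because the root cause turned out to be a stray pip from another toolchain, here is a quick diagnostic sketch (independent of the pandas/meson error itself) for checking which interpreter and pip a shell actually resolves to:

import shutil
import sys

print("python executable:", sys.executable)
print("pip resolved on PATH:", shutil.which("pip"))
# installing through the interpreter's own pip sidesteps the PATH ambiguity:
#   python -m pip install pandas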
78,477,442 | 2024-5-14 | https://stackoverflow.com/questions/78477442/dynamic-update-of-add-qty-parameter-in-odoo-ecommerce-product-details-page | I'm customizing an Odoo eCommerce website and I need to modify the add_qty parameter on the product details page dynamically via a URL parameter. I'm extending the WebsiteSale class to achieve this. Here's my code snippet: class WebsiteSaleCustom(WebsiteSale): def _prepare_product_values(self, product, category, search, **kwargs): values = super()._prepare_product_values(product, category, search, **kwargs) values['add_qty'] = int(kwargs.get('add_qty', 1)) return values This solution works as expected on the first load of the product details page. However, the issue arises when attempting to update the add_qty parameter afterwards. Suppose I hit this link: http://localhost:8069/shop/black-shoe?add_qty=5. The quantity is set to 5 on the first load, but if I then change add_qty in the URL, the quantity field does not update; it stays at 5. Any insights or suggestions on how to achieve this would be greatly appreciated. Thank you for your help! | To address issues related to outdated or incorrect data being fetched by Odoo due to caching, you can call request.env.registry._clear_cache() before applying your solution. This method clears the cache, ensuring that Odoo fetches fresh data. from odoo.http import request from odoo.addons.website_sale.controllers.main import WebsiteSale class WebsiteSaleCustom(WebsiteSale): def _prepare_product_values(self, product, category, search, **kwargs): request.env.registry._clear_cache() values = super()._prepare_product_values(product, category, search, **kwargs) values['add_qty'] = int(kwargs.get('add_qty', 1)) return values | 3 | 2 |
78,480,187 | 2024-5-14 | https://stackoverflow.com/questions/78480187/how-do-i-delete-all-documents-from-a-marqo-index | I have a ton of documents that have been vectorized poorly / are using an old multimodal_field_combination. mappings={ 'combo_text_image': { "type": "multimodal_combination", "weights": { "name": 0.05, "description": 0.15, "image_url": 0.8 } } }, But I need to update to mappings={ 'combo_text_image': { "type": "multimodal_combination", "weights": { "name": 0.2, "image_url": 0.8 } } }, I've realized that to do this in the same index, I should delete the documents and then reindex but I haven't been able to find a mq.delete_all_documents() call. What's the best way to do this? | Here is a code snippet that would delete all documents from the index: def empty_index(index_name): index = mq.index(index_name) res = index.search(q = '', limit=400) while len(res['hits']) > 0: id_set = [] for hit in res['hits']: id_set.append(hit['_id']) index.delete_documents(id_set) res = index.search(q = '', limit=400) | 2 | 3 |
78,478,825 | 2024-5-14 | https://stackoverflow.com/questions/78478825/numpy-slicing-on-a-zero-padded-2d-array | Given a 2D Numpy array, I'd like to be able to pad it on the left, right, top, bottom side, like the following pseudo-code. Is there anything like this already built into Numpy? import numpy as np a = np.arange(16).reshape((4, 4)) # [[ 0 1 2 3] # [ 4 5 6 7] # [ 8 9 10 11] # [12 13 14 15]] pad(a)[-4:2, -1:3] # or any syntax giving the same result #[[0 0 0 0] # [0 0 0 0] # [0 0 0 0] # [0 0 0 0] # [0 0 1 2] # [0 4 5 6]] pad(a)[-4:2, -1:6] #[[0 0 0 0 0 0 0] # [0 0 0 0 0 0 0] # [0 0 0 0 0 0 0] # [0 0 0 0 0 0 0] # [0 0 1 2 3 0 0] # [0 4 5 6 7 0 0]] | Some trickery: import numpy as np class PaddedArray: def __init__(self, arr): self.arr = arr def __getitem__(self, idx): idx = tuple( slice(s.start or 0, s.stop or size) for size, s in zip(self.arr.shape, idx) ) slices = tuple( slice(max(0, s.start), min(size, s.stop)) for size, s in zip(self.arr.shape, idx) ) paddings = tuple( (max(0, -s.start), max(0, s.stop - size)) for size, s in zip(self.arr.shape, idx) ) return np.pad(self.arr[slices], paddings) Testing: >>> PaddedArray(np.arange(16).reshape((4, 4)))[-4:2, -1:3] array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 2], [0, 4, 5, 6]]) >>> PaddedArray(np.arange(16).reshape((4, 4)))[-4:2, -1:6] array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 2, 3, 0, 0], [0, 4, 5, 6, 7, 0, 0]]) >>> PaddedArray(np.arange(32).reshape((4, 4, 2)))[-4:2, -1:3, :] array([[[ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0]], [[ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0]], [[ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0]], [[ 0, 0], [ 0, 0], [ 0, 0], [ 0, 0]], [[ 0, 0], [ 0, 1], [ 2, 3], [ 4, 5]], [[ 0, 0], [ 8, 9], [10, 11], [12, 13]]]) | 2 | 6 |
78,474,769 | 2024-5-13 | https://stackoverflow.com/questions/78474769/how-to-terminate-async-process-in-python-in-case-of-failure | I am using asyncio to asynchronously launch a process, and then read data from the stdout, my code looks like the following: process = await asyncio.subprocess.create_subprocess_exe(subprocess_args, stderr=asyncio.subprocess.PIPE, stdout=asyncio.subprocess.PIPE) I am then asynchronously reading from the stdout of this process, and for every line read, I am calling a user-provided callback function, my code looks like: async for line in process.stdout: user_provided_callback(line) I then wait for the process to terminate: _, stderr = await process.communicate() if process.returncode != 0: raise Exception(stderr.decode()) However, I noticed that is user_provided_callback raises an error, the process is not properly disposed of, so I see warnings such as "Resource warning: unclosed transport", because I am running this code through pytest unit tests, warnings are treated as errors so this is causing a failure. I thought I would have to manually shut down the process in the case of failure, so I changed my code to: async for line in process.stdout: try: user_provided_callback(line) except Exception as err: process.terminate() raise err This explicitly terminates the process in case the callback fails, but I am still seeing this warning. I then modified the code above to: async for line in process.stdout: try: user_provided_callback(line) except Exception as err: process.terminate() process._transport.close() ## <- NEWLY ADDED raise err after explicitly closing the transport, now the warning goes away, however this accesses a private variable so it does not seem like the right fix. What is the proper way to handle an exception in this scenario? I am looking for a cross platform solution as I want to support windows and linux. I am also using python 3.11. | process.terminate() is a fast, synchronous call that sends a signal that the target process asynchronously handles; thus, the target doesn't shut down immediately when terminate is called, but somewhat later, after the OS next schedules it and invokes its signal handler. This means you still want to await process.wait() or await process.communicate() (which may be better for reasons I'll describe below) after calling process.terminate() to allow cleanup to take place. In the simplest case, just adding await process.wait() may be sufficient, but there are potential complications: If the process tries to flush stdout or stderr after getting a SIGTERM, but is unable to do so because nothing in the Python code is still reading from the other side of the pipeline, this may cause a deadlock. To avoid this you'll want to continue to read output even after the terminate() call, either in your own code or by using await process.communicate(). On non-Windows platforms, if the process defines a handler for SIGTERM, it can choose to ignore or delay handling of that signal. If you're willing to accept some risk of data loss (perhaps because files the process modifies aren't fully flushed to disk or stdout isn't fully written), you can wait for a reasonable amount of time and then call process.kill() if the process still hasn't exited; unlike SIGTERM, SIGKILL cannot be ignored. (This should never be necessary on Windows, where terminate() has behavior closer to the UNIX SIGKILL). | 4 | 2 |
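Putting that advice together, a minimal sketch of the error path for the question's reading loop (the 5-second timeout and the fall-back to kill() are assumptions, not requirements; asyncio, process and user_provided_callback come from the question's own code):

async for line in process.stdout:
    try:
        user_provided_callback(line)
    except Exception:
        process.terminate()                  # ask the child to exit
        try:
            # keep draining output so the child cannot block on a full pipe
            await asyncio.wait_for(process.communicate(), timeout=5)
        except asyncio.TimeoutError:
            process.kill()                   # SIGTERM was ignored; force it
            await process.wait()
        raise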
78,477,934 | 2024-5-14 | https://stackoverflow.com/questions/78477934/how-to-pip-install-from-a-text-file-skipping-unreachable-libraries | I'm currently in a scenario where I need to install the following: pip install -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt What happens is that the library scipy is not available for some reason, resulting in the following error: (test_islp) PS C:\Users\project> pip install -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt Collecting numpy==1.24.2 (from -r https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt (line 1)) ... ... ERROR: No matching distribution found for scipy==1.11.1 Is there a way to simply ignore libraries that return such error, and still go on with the installation of the remaining libraries? I tried using --ignore-installed but it still leads to the same error The quick and dirty solution would be to simply copy and paste the requirements into a text file, delete scipy and perform pip install again. That is not ideal for me because in a scenario with hundreds of libraries and dozens of potential missing distributions it could lead to tedious work. | You can use this simple python script to loop through all the packages in the requirements.txt and install them. As far as I can tell, this works on Linux and Windows, not sure about Mac. import subprocess def install_packages(requirements_file): with open(requirements_file, 'r') as file: packages = file.readlines() for package in packages: package = package.strip() try: subprocess.check_call(['pip', 'install', package]) except subprocess.CalledProcessError as e: print(f"Error installing package: {package}") if __name__ == '__main__': requirements_file = 'requirements.txt' install_packages(requirements_file) | 3 | 1 |
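Since the requirements file in the question lives at a URL rather than on disk, here is a hedged variant of the same per-package loop that downloads it first, skips blank and comment lines, and uses the current interpreter's pip; the URL is the one from the question.

import subprocess
import sys
import urllib.request

URL = "https://raw.githubusercontent.com/intro-stat-learning/ISLP_labs/v2.1.3/requirements.txt"

def install_from_url(url):
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode().splitlines()
    failed = []
    for package in lines:
        package = package.strip()
        if not package or package.startswith("#"):
            continue
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        except subprocess.CalledProcessError:
            failed.append(package)
    if failed:
        print("Could not install:", ", ".join(failed))

if __name__ == "__main__":
    install_from_url(URL)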
78,475,551 | 2024-5-14 | https://stackoverflow.com/questions/78475551/discontinuous-selections-with-pandas-multiindex | I have the following DataFrame with MultiIndex columns (the same applies to MultiIndex rows): import pandas as pd df = pd.DataFrame(columns=pd.MultiIndex.from_product([['A','B'],[1,2,3,4]]), data=[[0,1,2,3,4,5,6,7],[10,11,12,13,14,15,16,17], [20,21,22,23,24,25,26,27]]) Now I want to select the following columns into a new DataFrame: from the A group, elements at index [1,2,4], AND from the B group, elements at index [1,3]. So my new DataFrame would have 5 columns. I can easily make any of the two selections separately using .loc: df_grA = df.loc[:,('A',(1,2,4))] df_grB = df.loc[:,('B',(1,3))] But I cannot find a way to achieve what I want. The only way that I can think of is to concat the two pieces together like that: df_selection = pd.concat([df_grA,df_grB],axis=1) This works, but it's clunky. I can't believe there's not a more convenient way. | Another idea with product: from itertools import product print (df.loc[:, list(product('A',(1,2,4))) + list(product('B',(1,2,3)))]) A B 1 2 4 1 2 3 0 0 1 3 4 5 6 1 10 11 13 14 15 16 2 20 21 23 24 25 26 | 2 | 1 |
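One more option worth noting (my own addition, not from the accepted answer): .loc also accepts an explicit list of (group, index) tuples for MultiIndex columns, which avoids building the products by hand when the selection is small.

import pandas as pd

df = pd.DataFrame(columns=pd.MultiIndex.from_product([['A', 'B'], [1, 2, 3, 4]]),
                  data=[[0, 1, 2, 3, 4, 5, 6, 7],
                        [10, 11, 12, 13, 14, 15, 16, 17],
                        [20, 21, 22, 23, 24, 25, 26, 27]])

# pass the wanted (group, index) pairs directly
cols = [('A', 1), ('A', 2), ('A', 4), ('B', 1), ('B', 3)]
print(df.loc[:, cols])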
78,474,573 | 2024-5-13 | https://stackoverflow.com/questions/78474573/extract-event-pairs-from-multiline-text | I would like to extract event pair (start and end marked by + and -). but the pairs maybe not match which means start happen two times then followed the end event. In below example, event B start happed 2 times, so I wish it output a mismatched pair with nil in the end event not found. import re import pandas as pd data = """ 00:00:00 +running A dummy data 00:00:01 -running 00:00:02 +running B dummy data 00:00:03 +running B 00:00:04 -running 00:00:05 +running C dummy data 00:00:06 -running 00:00:07 +running D 10:00:08 -running """ m = re.findall(r"(\d+:\d+:\d+) \+running (\w+).*?(\d+:\d+:\d+) \-running",data,re.DOTALL) print(len(m)) df = pd.DataFrame(m,columns=['ts1','name','ts2']) print(df) Current output: ts1 name ts2 0 00:00:00 A 00:00:01 1 00:00:02 B 00:00:04 2 00:00:05 C 00:00:06 3 00:00:07 D 10:00:08 Expected: ts1 name ts2 0 00:00:00 A 00:00:01 1 00:00:02 B NA 2 00:00:03 B 00:00:04 3 00:00:05 C 00:00:06 4 00:00:07 D 10:00:08 What's proper way to get such results in python? I do not care about if use findall or not. | You can just modify your regex to have an optional trailing part: data = """ 00:00:00 +running A dummy data 00:00:01 -running 00:00:02 +running B dummy data 00:00:03 +running B 00:00:04 -running 00:00:05 +running C dummy data 00:00:06 -running 00:00:07 +running D 10:00:08 -running """ m = re.findall(r"(\d+:\d+:\d+) \+running (\w+)(?:\n(\d+:\d+:\d+) \-running)?", data, re.DOTALL) df = pd.DataFrame(m, columns=['ts1','name','ts2']).replace('', None) NB. replacing .*? by \n. Output: ts1 name ts2 0 00:00:00 A 00:00:01 1 00:00:02 B None 2 00:00:03 B 00:00:04 3 00:00:05 C 00:00:06 4 00:00:07 D 10:00:08 regex demo Handling dummy data If you assume there could be arbitrary rows of data you could filter them out if they don't match the expected pattern: data = """ 00:00:00 +running A dummy data 00:00:01 -running 00:00:02 +running B dummy data 00:00:03 +running B 00:00:04 -running 00:00:05 +running C dummy data 00:00:06 -running 00:00:07 +running D 10:00:08 -running """ m = re.findall(r"(\d+:\d+:\d+) \+running (\w+)(?:\s*(\d+:\d+:\d+) \-running)?", '\n'.join(x for x in data.splitlines() if re.match(r'\d+:\d+:\d+ [-+]running', x)), re.DOTALL) df = pd.DataFrame(m,columns=['ts1','name','ts2']).replace('', None) Or using a negative lookahead to handle the dummy part: m = re.findall(r"(\d+:\d+:\d+) \+running (\w+)(?:(?:.(?!\d+:\d+:\d+))*\n(\d+:\d+:\d+) \-running)?", data, re.DOTALL) df = pd.DataFrame(m,columns=['ts1','name','ts2']).replace('', None) regex demo Output: ts1 name ts2 0 00:00:00 A 00:00:01 1 00:00:02 B None 2 00:00:03 B 00:00:04 3 00:00:05 C 00:00:06 4 00:00:07 D 10:00:08 | 3 | 1 |
78,473,728 | 2024-5-13 | https://stackoverflow.com/questions/78473728/why-is-using-the-distance-cosine-function-from-scipy-faster-than-directly-exec | I am executing the below two code snippets to calculate the cosine similarity of two vectors where the vectors are the same for both executions and the code for the second one is mainly the code SciPy is running (see scipy cosine implementation). The thing is that when calling SciPy it is running slightly faster (~0.55ms vs ~0.69ms) and I don't understand why, as my implementation is like the one from SciPy removing some checks, which if something I would expect to make it faster. Why is SciPy's function faster? import time import math import numpy as np from scipy.spatial import distance SIZE = 6400000 EXECUTIONS = 10000 path = "" # From https://github.com/joseprupi/cosine-similarity-comparison/blob/master/tools/vectors.csv file_data = np.genfromtxt(path, delimiter=',') A,B = np.moveaxis(file_data, 1, 0).astype('f') accum = 0 start_time = time.time() for _ in range(EXECUTIONS): cos_sim = distance.cosine(A,B) print(" %s ms" % (((time.time() - start_time) * 1000)/EXECUTIONS)) cos_sim_scipy = cos_sim def cosine(u, v, w=None): uv = np.dot(u, v) uu = np.dot(u, u) vv = np.dot(v, v) dist = 1.0 - uv / math.sqrt(uu * vv) # Clip the result to avoid rounding error return np.clip(dist, 0.0, 2.0) accum = 0 start_time = time.time() for _ in range(EXECUTIONS): cos_sim = cosine(A,B) print(" %s ms" % (((time.time() - start_time) * 1000)/EXECUTIONS)) cos_sim_manual = cos_sim print(np.isclose(cos_sim_scipy, cos_sim_manual)) EDIT: The code to generate A and B is below and the exact files I am using can be found at: https://github.com/joseprupi/cosine-similarity-comparison/blob/master/tools/vectors.csv def generate_random_vector(size): """ Generate 2 random vectors with the provided size and save them in a text file """ A = np.random.normal(loc=1.5, size=(size,)) B = np.random.normal(loc=-1.5, scale=2.0, size=(size,)) vectors = np.stack([A, B], axis=1) np.savetxt('vectors.csv', vectors, fmt='%f,%f') generate_random_vector(640000) Setup: AMD Ryzen 9 3900X 12-Core Processor 64GB RAM Debian 12 Python 3.11.2 scipy 1.13.0 numpy 1.26.4 | It seems, scipy does at the beginning of correlation() function this which practically means: u = np.asarray(u, dtype=None, order="c") v = np.asarray(v, dtype=None, order="c") This ensures that the arrays are C_CONTIGUOUS (you can check this by printing u.flags and/or v.flags) I presume numpy uses different implementations of np.dot for contiguos/non-contiguos arrays. If you change your function to: def cosine(u, v, w=None): u = np.asarray(u, dtype=None, order="c") # <-- Ensure C_CONTIGUOUS True v = np.asarray(v, dtype=None, order="c") # <-- detto. uv = np.dot(u, v) uu = np.dot(u, u) vv = np.dot(v, v) dist = 1.0 - uv / math.sqrt(uu * vv) # Clip the result to avoid rounding error return np.clip(dist, 0.0, 2.0) I get the same results 0.45ms vs 0.45ms on my AMD 5700x. | 3 | 2 |
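A small sketch of the mechanism described above (exact timings will vary; the point is only the contiguity check and the one-time copy): after moveaxis/astype each row is strided in memory, and copying it into contiguous storage up front lets the hand-written cosine benefit from the faster np.dot path the answer describes, without relying on SciPy's internal asarray call. vectors.csv is the file from the question's repository.

import numpy as np

file_data = np.genfromtxt("vectors.csv", delimiter=",")
A, B = np.moveaxis(file_data, 1, 0).astype("f")

print(A.flags["C_CONTIGUOUS"])   # False: the rows are strided after moveaxis/astype

A = np.ascontiguousarray(A)      # one-time copy into contiguous memory
B = np.ascontiguousarray(B)
print(A.flags["C_CONTIGUOUS"])   # True: np.dot can now use contiguous data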
78,472,054 | 2024-5-13 | https://stackoverflow.com/questions/78472054/manually-color-columns-in-pandas | I have a list of colors that created manually; colmap = ['green', 'red', 'red', 'red', 'green'] and also have a pandas dataframe df with same length of my colmap Percent 0 -0.5 1 0.2 2 0.8 3 -0.3 4 0.6 I want to apply the colors to only Percent column, df.style.applymap doesn't work because it requires a function to apply styles. I couldn't figure out how to apply without a lambda function. Thanks in advance | You could pass a DataFrame to style. The callable doesn't need to use the input data: df.style.apply( lambda x: pd.DataFrame( {'Percent': [f'background-color:{x}' for x in colmap]}, index=x.index ), axis=None, ) NB. since the styling DataFrame should be aligned, you still should pass the index. Output: Example with another column: mapping to several columns If you want to use different mappers for different colors: other_colors = ['background-color:white', 'background-color:pink', 'background-color:teal', 'background-color:pink', 'background-color:white'] df.style.apply( lambda x: pd.DataFrame( {'Percent': [f'background-color:{x}' for x in colmap], 'other': other_colors, }, index=x.index ).reindex(columns=x.columns), axis=None, ) Or even: colmap = ['green', 'red', 'red', 'red', 'green'] other_colors = ['white', 'pink', 'teal', 'pink', 'white'] df.style.apply( lambda x: pd.DataFrame( {'Percent': colmap, 'other': other_colors }, index=x.index ).radd('background-color:') .reindex(columns=x.columns), axis=None, ) Or using a variant of @antoine's approach, with axis=0 and a dictionary: colors = {'Percent': ['green', 'red', 'red', 'red', 'green'], 'other': ['white', 'pink', 'teal', 'pink', 'white'] } def lst_to_property(lst): if lst is not None: return [f'background-color:{x}' for x in lst] df_.style.apply(lambda c: lst_to_property(colors.get(c.name, None)), axis=0) | 2 | 2 |
78,471,100 | 2024-5-13 | https://stackoverflow.com/questions/78471100/check-for-equality-across-n-n2-columns-horizontally-in-polars | Assume I have the following Polars DataFrame: all_items = pl.DataFrame( { "ISO_codes": ["fin", "nor", "eng", "eng", "swe"], "ISO_codes1": ["fin", "nor", "eng", "eng", "eng"], "ISO_codes2": ["fin", "ice", "eng", "eng", "eng"], "OtherColumn": ["1", "2", "3", "4", "5"], }) How can I implement a method like check_for_equality along the lines of def check_for_equality(all_items, columns_to_check_for_equality): return all_items.with_columns( pl.col_equals(columns_to_check_for_equality).alias("ISO_EQUALS") ) So that when I call it: columns_to_check_for_equality = ["ISO_codes", "ISO_codes1", "ISO_codes2"] resulting_df = check_for_equality(all_items, columns_to_check_for_equality) I achieve the following: resulting_df == pl.DataFrame( { "ISO_codes": ["fin", "nor", "eng", "eng", "swe"], "ISO_codes1": ["fin", "nor", "eng", "eng", "eng"], "ISO_codes2": ["fin", "ice", "eng", "eng", "eng"], "OtherColumn": ["1", "2", "3", "4", "5"], "ISO_EQUALS": [True, False, True, True, False], }) Note that I do not "know" the column names when doing the actual check, and the number of columns can vary between calls. Is there something like "col_equals" in the Polars API? | pl.n_unique_horizontal would probably be the answer, but it has not yet been added. https://github.com/pola-rs/polars/issues/9966 You can use the List API: df.with_columns(ISO_EQUALS = pl.concat_list(columns_to_check_for_equality).list.n_unique() == 1 ) shape: (5, 5)
┌───────────┬────────────┬────────────┬─────────────┬────────────┐
│ ISO_codes │ ISO_codes1 │ ISO_codes2 │ OtherColumn │ ISO_EQUALS │
│ ---       │ ---        │ ---        │ ---         │ ---        │
│ str       │ str        │ str        │ str         │ bool       │
╞═══════════╪════════════╪════════════╪═════════════╪════════════╡
│ fin       │ fin        │ fin        │ 1           │ true       │
│ nor       │ nor        │ ice        │ 2           │ false      │
│ eng       │ eng        │ eng        │ 3           │ true       │
│ eng       │ eng        │ eng        │ 4           │ true       │
│ swe       │ eng        │ eng        │ 5           │ false      │
└───────────┴────────────┴────────────┴─────────────┴────────────┘ | 2 | 3 |
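The question asks for a helper with this exact signature, and the answer's expression slots straight into it. A minimal sketch (the parameter names are the question's own; nothing else is assumed):

```python
import polars as pl

def check_for_equality(all_items: pl.DataFrame, columns_to_check_for_equality: list[str]) -> pl.DataFrame:
    # One list per row built from the selected columns; a single unique value means all are equal.
    return all_items.with_columns(
        ISO_EQUALS=pl.concat_list(columns_to_check_for_equality).list.n_unique() == 1
    )
```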
78,458,284 | 2024-5-10 | https://stackoverflow.com/questions/78458284/how-to-fix-line-bleeding-in-fpdf | I have these functions def investment_summary(pdf, text): pdf.set_font("Arial", size=8) pdf.write(5, text) for point in text.splitlines(): if point.startswith("-"): pdf.set_x(15) pdf.write(5, point) def create_report(county_and_state, llm_text, analytics_location_path=None): pdf = FPDF() pdf.add_page() pdf.set_text_color(r=50,g=108,b=175) pdf.set_font('Arial', 'B', 18) pdf.cell(w=0, h=10, txt="Verus-AI: " + county_and_state, ln=1,align='C') pdf.set_font('Arial', 'B', 16) pdf.cell(w=0, h=10, txt="Investment Summary", ln=1,align='L') investment_summary(pdf, llm_text) # pdf.set_text_color(r=50,g=108,b=175) # pdf.set_font('Arial', 'B', 16) # pdf.cell(w=0, h=10, txt="Analytics", ln=1,align='L') pdf.output(f'./example1.pdf', 'F') When I use this text: text = """ Branch County is a rural county in southwestern Michigan, with a population of about 45,000. The county seat is Coldwater, which is also the largest city in the county. The county has a diverse economy, with agriculture, manufacturing, health care, and education as the main sectors. The county is also home to several state parks, lakes, and recreational trails. Pros of investing in Branch County, MI: - Low property taxes and affordable housing prices, compared to the state and national averages. - High rental demand and low vacancy rates, due to the lack of new construction and the presence of several colleges and universities in the county. - Potential for appreciation and cash flow, as the county has a stable and growing population, a low unemployment rate, and a strong school system. - Opportunity to diversify your portfolio and hedge against inflation, by investing in different types of properties, such as single-family homes, multi-family buildings, mobile homes, and land. Cons of investing in Branch County, MI: - Limited access to public transportation and major highways, which may affect the mobility and convenience of tenants and owners. - High crime rates and low quality of education in some areas, which may deter some potential renters and buyers. - Weather-related risks, such as harsh winters, floods, and tornadoes, which may require additional maintenance and insurance costs. - Lack of economic and demographic diversity, which may limit the future growth and demand for real estate in the county. Overall investment thesis: Branch County, MI is a viable option for real estate investors who are looking for a long-term, passive, and income-generating strategy. The county offers a combination of low-cost, high-demand, and appreciating properties, with a relatively low risk of market fluctuations and downturns. However, investors should also be aware of the potential drawbacks and challenges of investing in a rural and homogeneous market, and be prepared to deal with the environmental and social issues that may arise. Investors should also conduct thorough due diligence and research on the specific locations, neighborhoods, and properties that they are interested in, and seek professional advice and guidance as needed. """ and call the function create_report("Branch County, MI", text) I get this I am not sure why text on the bottom bleeds in with other text, any suggestions on how to fix this would be greatly appreciated. | This is connected to Generating line break in FPDF and the write method of fpdf. 
As described in the answer to the above-mentioned question, try replacing pdf.write with pdf.multi_cell in the investment_summary function: def investment_summary(pdf, text, bullet_indent=15): pdf.set_font("Arial", size=8) for point in text.splitlines(): if point.startswith("-"): pdf.set_x(bullet_indent) pdf.multi_cell(0, 5, point) | 2 | 1 |
78,449,471 | 2024-5-8 | https://stackoverflow.com/questions/78449471/how-to-most-efficiently-delete-a-tuple-from-a-list-of-tuples-based-on-the-first | I have a data structure as follows, data = { '0_0': [('0_0', 0), ('0_1', 1), ('0_2', 2)], '0_1': [('0_0', 1), ('0_1', 0), ('0_2', 1)], '0_2': [('0_0', 2), ('0_1', 1), ('0_2', 0)], } Each key of the dictionary is unique. The values corresponding to the key value in the dictionary structure are list of tuples, and each tuple contains the coordinate name and the distance to that coordinate. How to delete tuple from lists by given key? For example I wrote a function as follows, def remove_all_acs( dictionary, delete_ac ): for key in dictionary: acs = dictionary[key] for ac in acs: if ac[0] == delete_ac: acs.remove(ac) break return dictionary It's my expected output and returns a result as follows, print(remove_all_acs(data, '0_0')) { '0_0': [('0_1', 1), ('0_2', 2)], '0_1': [('0_1', 0), ('0_2', 1)], '0_2': [('0_1', 1), ('0_2', 0)] } It works but is not effective on large lists. Do you have any idea? Thanks in advance. In addition you can create large dataset using this code, import math def generate(width, height): coordinates = [(x, y) for x in range(width) for y in range(height)] dataset = {} for x1, y1 in coordinates: key = f"{x1}_{y1}" distances = [] for x2, y2 in coordinates: if (x1, y1) != (x2, y2): distance = math.sqrt((x2 - x1)**2 + (y2 - y1)**2) distances.append((f"{x2}_{y2}", distance)) dataset[key] = distances return dataset | def remove_all_acs_now(dictionary, delete_ac): for key in dictionary: acs = dictionary[key] for ac in acs: if ac[0] == delete_ac: acs.remove(ac) break return dictionary def remove_all_acs(dictionary, delete_ac): for i in dictionary.values(): # .items() / .values() are the fastest ways for dicts, always! for q in (i): if q[0] == delete_ac: i.remove(q) break data=generate(200, 10) from copy import deepcopy remove_all_acs(dictionary=deepcopy(data), delete_ac="0_0") remove_all_acs_now(dictionary=deepcopy(data), delete_ac="0_0") %timeit deepcopy(data) # needed, because we are modifying the original value 502 ms Β± 1.83 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) %timeit remove_all_acs(dictionary=deepcopy(data), delete_ac="0_0") 539 ms Β± 12.2 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 539 - 502 = 37 ms %timeit remove_all_acs_now(dictionary=deepcopy(data), delete_ac="0_0") 559 ms Β± 44.8 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) 559 - 502 = 57 ms Further improvements: The function doesn't need to return anything, since you are modifying the original dict anyway. You wouldn't even need to pass the dict to the function if it is available in the function's scope (probably a tiny bit faster). However, like I stated in my comments: try to use Cython or Numpy (or both) for real (10x or faster) improvements. Edit - Test with cython - 6 times faster Cython is at least 6 times faster. The less there is to delete, the faster it is. The speed of Cython's loops is insane. If there is no matching value (which means nothing to delete), it is about 50 times faster. 
from cythondicttest import deletefomdict import random from copy import deepcopy def generate(size, max_connections): data = {} keys = [f"{i}_{j}" for i in range(size) for j in range(size)] for key in keys: connections = random.randint(1, min(max_connections, 10)) data[key] = [ (random.choice(keys), random.randint(1, 10)) for _ in range(connections) ] return data def remove_all_acs_now(dictionary, delete_ac): for key in dictionary: acs = dictionary[key] for ac in acs: if ac[0] == delete_ac: acs.remove(ac) break return dictionary data = generate(500, 100) # %timeit deepcopy(data) # 3.16 s Β± 27.8 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) # %timeit deletefomdict(dictionary=deepcopy(data), delete_ac="0_0") # 3.2 s Β± 26.8 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) # CYTHON: # 3200 ms - 3160 ms = 40 ms # %timeit remove_all_acs_now(dictionary=deepcopy(data), delete_ac="0_0") # 3.41 s Β± 122 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) # 3410 ms - 3160 ms = 250 ms And it is not much work to do: Create a PYX file cythondicttest.pyx, compile it with Cython, and you are ready to go! # important: use optimized compiler directives # https://cython.readthedocs.io/en/latest/src/userguide/source_files_and_compilation.html#compiler-directives cimport cython import cython cpdef int deletefomdict(dict[str,list[tuple[str,int]]] dictionary, str delete_ac): cdef: Py_ssize_t loop for i in dictionary.values(): loop=len(i) for qq in range(loop): if i[qq][0] == delete_ac: del i[qq] break return 0 If you have never used Cython before, you might use these directives (my default settings) as an orientation, but you might get even better results tweaking the settings here and there. optionsdict = { "Options.docstrings": False, "Options.embed_pos_in_docstring": False, "Options.generate_cleanup_code": False, "Options.clear_to_none": True, "Options.annotate": True, "Options.fast_fail": False, "Options.warning_errors": False, "Options.error_on_unknown_names": True, "Options.error_on_uninitialized": True, "Options.convert_range": True, "Options.cache_builtins": True, "Options.gcc_branch_hints": True, "Options.lookup_module_cpdef": False, "Options.embed": False, "Options.cimport_from_pyx": False, "Options.buffer_max_dims": 8, "Options.closure_freelist_size": 8, } configdict = { "define_macros": [ ("NPY_NO_DEPRECATED_API", 1), ("NPY_1_7_API_VERSION", 1), ("CYTHON_USE_DICT_VERSIONS", 1), ("CYTHON_FAST_GIL", 1), ("CYTHON_USE_PYLIST_INTERNALS", 1), ("CYTHON_USE_UNICODE_INTERNALS", 1), ("CYTHON_ASSUME_SAFE_MACROS", 1), ("CYTHON_USE_TYPE_SLOTS", 1), ("CYTHON_USE_PYTYPE_LOOKUP", 1), ("CYTHON_USE_ASYNC_SLOTS", 1), ("CYTHON_USE_PYLONG_INTERNALS", 1), ("CYTHON_USE_UNICODE_WRITER", 1), ("CYTHON_UNPACK_METHODS", 1), ("CYTHON_USE_EXC_INFO_STACK", 1), ("CYTHON_ATOMICS", 1), ], "undef_macros": [], "library_dirs": [], "libraries": [], "runtime_library_dirs": [], "extra_objects": [], "extra_compile_args": ["/O2", "/Oy"], "extra_link_args": [], "export_symbols": [], "swig_opts": [], "depends": [], "language": "c", "optional": None, } compiler_directives = { "binding": True, "boundscheck": False, "wraparound": False, "initializedcheck": False, "nonecheck": False, "overflowcheck": False, "overflowcheck.fold": True, "embedsignature": False, "embedsignature.format": "c", # (c / python / clinic) "cdivision": True, "cdivision_warnings": False, "cpow": True, "always_allow_keywords": False, "c_api_binop_methods": False, "profile": False, "linetrace": False, "infer_types": True, "language_level": 3, # (2/3/3str) 
"c_string_type": "bytes", # (bytes / str / unicode) "c_string_encoding": "default", # (ascii, default, utf-8, etc.) "type_version_tag": False, "unraisable_tracebacks": True, "iterable_coroutine": True, "annotation_typing": True, "emit_code_comments": True, "cpp_locals": False, "legacy_implicit_noexcept": False, "optimize.use_switch": True, "optimize.unpack_method_calls": True, "warn.undeclared": False, # (default False) "warn.unreachable": True, # (default True) "warn.maybe_uninitialized": False, # (default False) "warn.unused": False, # (default False) "warn.unused_arg": False, # (default False) "warn.unused_result": False, # (default False) "warn.multiple_declarators": True, # (default True) "show_performance_hints": True, # (default True) } | 2 | 1 |
78,469,600 | 2024-5-12 | https://stackoverflow.com/questions/78469600/pandasdata-frame-handeling-of-nested-list-within-json-in-pandas-table-for-subseq | So I've been working with a json.loads() of blockdevices for a small project that i'm working on. Presently, I'm trying to feed json data into a Pandas dataframe, then convert that dataframe into a numpy array to be processed by a third party project called CurseXcel, but I digress. What i'm presently having trouble with is correctly handling children within a list of dictionaries of json data. What code i've been able to work out is essentially def return_pandas(): process = subprocess.run("lsblk --json -o NAME,SIZE,UUID,MOUNTPOINT,PATH,FSTYPE ".split(), capture_output=True, text=True) data = json.loads(process.stdout) return pd.json_normalize(data['blockdevices']) then json_pandas = return_pandas() which returns name size uuid mountpoint path fstype children 0 sda 25G None None /dev/sda None NaN 1 sdb 25G None None /dev/sdb None [{'name': 'sdb1', 'size': '512M', 'uuid': '008... 2 sr0 1024M None None /dev/sr0 None NaN as a table. Again, what i'm having trouble with is getting the 'children' list within said json and pandas dataframe table to be potentially listed alongside regular devices or listed in a more normalized fashion within the children column comprising name size uuid etc. This would involve potentially using recursion to flatten all data or something else. Numpy data looks like this [['sda' '25G' None None '/dev/sda' None nan] ['sdb' '25G' None None '/dev/sdb' None list([{'name': 'sdb1', 'size': '512M', 'uuid': '0087-DF28', 'mountpoint': '/boot/efi', 'path': '/dev/sdb1', 'fstype': 'vfat'}, {'name': 'sdb2', 'size': '488M', 'uuid': 'bd51147c-8f8c-4d3a-afde-4ebf67ae4558', 'mountpoint': '/boot', 'path': '/dev/sdb2', 'fstype': 'ext2'}, {'name': 'sdb3', 'size': '24G', 'uuid': 'd07d17bf-ebbd-491a-b57c-f9d43b7e6be5', 'mountpoint': None, 'path': '/dev/sdb3', 'fstype': 'crypto_LUKS', 'children': [{'name': 'sda3_crypt', 'size': '24G', 'uuid': '3NbbDi-BtQ4-PXzK-NVBO-cR2d-aLzZ-pzh0A5', 'mountpoint': None, 'path': '/dev/mapper/sda3_crypt', 'fstype': 'LVM2_member', 'children': [{'name': 'kali--vg-root', 'size': '23G', 'uuid': '6bc26692-9b22-4d38-b1c0-53e99326e9d5', 'mountpoint': '/', 'path': '/dev/mapper/kali--vg-root', 'fstype': 'ext4'}, {'name': 'kali--vg-swap_1', 'size': '980M', 'uuid': '3eb8e4cc-56bb-4b5b-b2a8-bc7ac016df67', 'mountpoint': '[SWAP]', 'path': '/dev/mapper/kali--vg-swap_1', 'fstype': 'swap'}]}]}])] ['sr0' '1024M' None None '/dev/sr0' None nan]] If i could get that 'list' after rendering the table to a numpy array as it's own unnested somehow that'd be great. Again i'm at a loss, yet i'm sure it's something simple i'm missing. this could involve flattening into a 1 dimensional array however i still want to maintain the ability to use numpyarrays. i've thought about using recursion however dont know specifically how to grab a nested list without recreating the entire structure, nor a conditional statement that applies to nested list. 
data output per request {'blockdevices': [{'name': 'sda', 'size': '25G', 'uuid': None, 'mountpoint': None, 'path': '/dev/sda', 'fstype': None}, {'name': 'sdb', 'size': '25G', 'uuid': None, 'mountpoint': None, 'path': '/dev/sdb', 'fstype': None, 'children': [{'name': 'sdb1', 'size': '512M', 'uuid': '0087-DF28', 'mountpoint': '/boot/efi', 'path': '/dev/sdb1', 'fstype': 'vfat'}, {'name': 'sdb2', 'size': '488M', 'uuid': 'bd51147c-8f8c-4d3a-afde-4ebf67ae4558', 'mountpoint': '/boot', 'path': '/dev/sdb2', 'fstype': 'ext2'}, {'name': 'sdb3', 'size': '24G', 'uuid': 'd07d17bf-ebbd-491a-b57c-f9d43b7e6be5', 'mountpoint': None, 'path': '/dev/sdb3', 'fstype': 'crypto_LUKS', 'children': [{'name': 'sda3_crypt', 'size': '24G', 'uuid': '3NbbDi-BtQ4-PXzK-NVBO-cR2d-aLzZ-pzh0A5', 'mountpoint': None, 'path': '/dev/mapper/sda3_crypt', 'fstype': 'LVM2_member', 'children': [{'name': 'kali--vg-root', 'size': '23G', 'uuid': '6bc26692-9b22-4d38-b1c0-53e99326e9d5', 'mountpoint': '/', 'path': '/dev/mapper/kali--vg-root', 'fstype': 'ext4'}, {'name': 'kali--vg-swap_1', 'size': '980M', 'uuid': '3eb8e4cc-56bb-4b5b-b2a8-bc7ac016df67', 'mountpoint': '[SWAP]', 'path': '/dev/mapper/kali--vg-swap_1', 'fstype': 'swap'}]}]}]}, {'name': 'sr0', 'size': '1024M', 'uuid': None, 'mountpoint': None, 'path': '/dev/sr0', 'fstype': None}]} | Is this your expected output? code: df = pd.json_normalize(data=data.get("blockdevices")).explode(column="children") df = (pd .concat(objs=[df, df.children.apply(func=pd.Series)], axis=1) .drop(columns=[0, "children"]) .fillna("") .reset_index(drop=True) ) print(df) result: name size uuid mountpoint path fstype name size uuid mountpoint path fstype 0 sda 25G /dev/sda 1 sdb 25G /dev/sdb sdb1 512M 0087-DF28 /boot/efi /dev/sdb1 vfat 2 sdb 25G /dev/sdb sdb2 488M bd51147c-8f8c-4d3a-afde-4ebf67ae4558 /boot /dev/sdb2 ext2 3 sdb 25G /dev/sdb sdb3 24G d07d17bf-ebbd-491a-b57c-f9d43b7e6be5 /dev/sdb3 crypto_LUKS 4 sr0 1024M /dev/sr0 | 2 | 3 |
78,469,541 | 2024-5-12 | https://stackoverflow.com/questions/78469541/get-periodic-updates-from-a-process | I have a function which will increment a number with time. This is given to a process; but it doesn't increment while the process runs. I want the incremented value of my global variable at regular intervals. But, I am not seeing the increment happening. Here is what I tried: from multiprocessing import Process from time import sleep x = 0; def increment(): global x; while(1): x +=1 sleep(1) process_1 = Process(target=increment) process_1.start() for i in range(0,10): print(x); sleep(2) I am expecting that it will print 1 3 5 7 ... However, it remains at 0. What am I missing here? | Note that each process runs in its own memory space, so the child process increments its own copy of x while the parent keeps printing its unchanged copy. Pass a shared multiprocessing.Value into the process instead, so both sides read and write the same memory: from multiprocessing import Process, Value from time import sleep def increment(x): while True: x.value += 1 sleep(1) if __name__ == '__main__': x = Value('i', 1) process_1 = Process(target=increment, args=(x,)) process_1.start() for i in range(10): print(f'{i}: {x.value}') sleep(2) Prints 0: 1 1: 3 2: 5 3: 7 4: 9 5: 11 6: 13 7: 15 8: 17 9: 19 | 2 | 2 |
78,457,389 | 2024-5-9 | https://stackoverflow.com/questions/78457389/auto-inherite-a-parent-class-with-decorator | I have a class decorator p() that can be used only on classes that inherite from some abstract class P. So anytime someone uses p it should be written as follows: @p() class Child(P): ... Is there any way to automatically inherite P using the decorator p() so it would simply be written as follows: @p() class Child: ... | As I commented, it's tricky to apply a base class using a decorator the way you asked. However, it is a lot easier if you try an alternative approach, where the decorator's behavior is incorporated into the base class, rather than the other way around. Here's an example, that prints out a message before calling the child class's __init__ method: from functools import wraps class Base: def __init_subclass__(cls): original_init = cls.__init__ @wraps(cls.__init__) def wrapper(self, *args): print("foo") original_init(self, *args) cls.__init__ = wrapper class Derived(Base): def __init__(self): print("bar") You can customize the wrapper function's behavior to do whatever your decorator currently does (rather than just printing a message). And of course, the base class can have whatever other behavior you want the derived class to inherit, rather than just the __init_subclass__ machinery that enables changing the subclass's __init__ method. | 2 | 1 |
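A self-contained sketch of the accepted answer's __init_subclass__ pattern, renamed to the question's P and Child; the print call is only a placeholder for whatever the real p() decorator does:

```python
from functools import wraps

class P:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        original_init = cls.__init__

        @wraps(original_init)
        def wrapper(self, *args, **kw):
            print(f"decorating {cls.__name__}")  # placeholder for p()'s real behaviour
            original_init(self, *args, **kw)

        cls.__init__ = wrapper

class Child(P):  # no @p() needed any more
    def __init__(self):
        print("Child.__init__")

Child()  # prints "decorating Child", then "Child.__init__"
```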
78,467,899 | 2024-5-12 | https://stackoverflow.com/questions/78467899/python-giving-different-response-when-called-via-async | Getting different output for the same logic? Was trying to learn async/await in python. Output for first Snippet : <class 'dict'> Output for second Snippet : <class 'list'> Snippet 1 : without async import urllib.request import json def get_response(user_url): resp = urllib.request.urlopen(user_url).read() resp_json = json.loads(resp) return resp_json['results'][0] def make_request(user_url): json_response = get_response(user_url) print(type(json_response)) USER_URL = 'https://randomuser.me/api' make_request(USER_URL) Snippet 2 : with async import urllib.request import json import asyncio async def get_response(user_url): resp = urllib.request.urlopen(user_url).read() resp_json = json.loads(resp) return resp_json['results'][0] async def make_request(user_url): json_response = await asyncio.gather(get_response(user_url)) print(type(json_response)) USER_URL = 'https://randomuser.me/api' asyncio.run(make_request(USER_URL)) Why are we getting different output for the same logic in code? Expectation is to get dict data type for both. Using Python 3.10 | That <class 'list'> is the return value of the asyncio.gather(). If all awaitables are completed successfully, the result is an aggregate list of returned values. You'll then need to iterate through the list to access the result of the awaitables. If you only have that single URL, await get_response(user_url) is enough. asyncio.gather() is for executing multiple awaitables concurrently and wait for them to finished. | 2 | 2 |
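A minimal sketch of the answer's point: asyncio.gather() aggregates results into a list, while awaiting the coroutine directly returns the dict. The get_response here is a stub standing in for the question's real HTTP call.

```python
import asyncio

async def get_response(url):
    return {"name": "example", "url": url}  # stub for the urllib/json work

async def make_request(url):
    gathered = await asyncio.gather(get_response(url))  # list with one element
    single = await get_response(url)                    # the dict itself
    print(type(gathered), type(gathered[0]), type(single))

asyncio.run(make_request("https://randomuser.me/api"))
# <class 'list'> <class 'dict'> <class 'dict'>
```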
78,467,626 | 2024-5-12 | https://stackoverflow.com/questions/78467626/is-there-a-way-to-remove-zombies-in-airflow | I'm making an airflow program that needs to convert .tsv to .parquet But I have an error: ERROR - Detected zombie job: {'full_filepath': '/opt/airflow/dags/formatting_data.py', 'processor_subdir': '/opt/airflow/dags', 'msg': "{'DAG Id': 'formatting_data', 'Task Id': 'format_imdb_data', 'Run Id': 'scheduled__2024-05-11T00:00:00+00:00', 'Hostname': 'dd0bbc46b6fa'}", 'simple_task_instance': <airflow.models.taskinstance.SimpleTaskInstance object at 0xffff937ba850>, 'is_failure_callback': True} and my DAG cuts itself in the middle I made a class that manages my python files: class FileHandler: def convert_csv_to_parquet(self, csv_file): df = pd.read_csv(csv_file) parquet_file = csv_file.replace(".csv", ".parquet") df.to_parquet(parquet_file) return df def convert_tsv_to_parquet(self, tsv_file): df = pd.read_csv(tsv_file, sep='\t') parquet_file = tsv_file.replace(".tsv", ".parquet") df.to_parquet(parquet_file) os.remove(tsv_file) return df def list_files_in_directory(self, directory, extension='.tsv'): tsv_files = [] for root, dirs, files in os.walk(directory): for file in files: if file.endswith(extension): tsv_files.append(os.path.join(root, file)) return tsv_files def remove_empty_directory(self, path): for root, dirs, _ in os.walk(path, topdown=False): for directory in dirs: directory_path = os.path.join(root, directory) if not os.listdir(directory_path): os.rmdir(directory_path) def move_files(self, files, destination, remove_dir_flags=True): for file in files: shutil.move(file, destination) if remove_dir_flags: self.remove_empty_directory(destination) my DAG is : from datetime import datetime, timedelta import pandas as pd from airflow import DAG from airflow.operators.python import PythonOperator import os import sys LIBS_PATH = os.path.join('/opt/airflow', 'libs') if LIBS_PATH not in sys.path: sys.path.insert(0, LIBS_PATH) from preprocessing.formatting.file_handler import FileHandler from utils.logs_manager import LogManager from utils.path_manager import PathManager file_handler = FileHandler() lm = LogManager() default_args = { 'owner': 'airflow', 'depends_on_past': False, 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5), } dag = DAG( 'formatting_data', default_args=default_args, description='DAG used to format data in datalake', schedule_interval='@daily', start_date=datetime(2024, 5, 7), catchup=False, tags=["formatting", "preprocessing"] ) def format_imdb_files(): pm = PathManager('/opt/airflow/datalake') imdb_path = pm.get_local_path('raw', 'imdb') for file in os.listdir(imdb_path): if file.endswith('.tsv'): file_path = os.path.join(imdb_path, file) df = pd.read_csv(file_path, sep='\t') parquet_file = file_path.replace(".tsv", ".parquet") df.to_parquet(parquet_file) os.remove(file_path) del df format_imdb_data = PythonOperator( task_id='format_imdb_data', python_callable=format_imdb_files, dag=dag, ) format_imdb_data I don't understand why I get this error and how to solve it Thanks a lot for your help !! | in airflow when the worker awaits for the operator to finish. If thereβs a connection failure between worker and operator or worker kills off unexpectedly, while the operator is still running, it becomes a zombie process. 
I think in your case format_imdb_files() is taking too much time to complete; you must increase the scheduler configuration scheduler_zombie_task_threshold from the default 5 minutes to somewhere around ~20–30 minutes (depending on how long the processing takes). | 2 | 1 |
78,453,450 | 2024-5-9 | https://stackoverflow.com/questions/78453450/identifying-and-retrieving-particular-sequences-of-characters-from-within-text-f | I have a list named MAT_DESC that contains material descriptions in a free-text format. Here are some sample values from the MAT_DESC column: QWERTYUI PN-DR, Coarse, TR, 1-1/2 in, 50/Carton, 200 ea/Case, Dispenser Pack 2841 PC GREY AS/AF (20/CASE) CI-1A, up to 35 kV, Compact/Solid, Stranded, 10/Case MT53H7A4410WS5 WS WEREDSS PMR45678 ERTYUI HEERTYUIND 10/case TYPE.2 86421-K40-F000, 1 Set/Pack, 100 Packs/Case Clear, 1 in x 36 yd, 4.8 mil, 24 rolls per case 3Mβ’ Victory Seriesβ’ Bracket MBTβ’ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack 3Mβ’ BXβ’ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case 4220VDS-QCSHC/900-000/A CABINET EMPTY 3Mβ’ Bumponβ’ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case 3Mβ’ Bumponβ’ Protective Products SJ61A2 Black, 10,000/Case Material Desc String to be Extracted QWERTYUI PN-DR, Coarse, TR, 1-1/2 in, 50/Carton, 200 ea/Case, Dispenser Pack 50/Carton, 200 ea/Case 2841 PC GREY AS/AF (20/CASE) 20/CASE TYPE.2 86421-K40-F000, 1 Set/Pack, 100 Packs/Case 1 Set/Pack, 100 Packs/Case RTYU 31655, 240+, 6 in, 50 Discs/Roll, 6 Rolls/Case 50 Discs/Roll, 6 Rolls/Case Clear, 1 in x 36 yd, 4.8 mil, 24 rolls per case 24 rolls per case 3Mβ’ Victory Seriesβ’ Bracket MBTβ’ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack 5/Pack 3Mβ’ BXβ’ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case 20 ea/Case 4220VDS-QCSHC/900-000/A CABINET EMPTY No units 3Mβ’ Bumponβ’ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case 3.000/Case 3Mβ’ Bumponβ’ Protective Products SJ61A2 Black, 10,000/Case 10,000/Case I'm trying to extract specific patterns of substrings from the MAT_DESC column, such as the quantity and unit information (e.g., "50 Discs/Roll", "200 ea/Case", "10/Case",50/Carton, 200 ea/Case etc.). I'm currently using the following PYTHON to attempt this: pattern = r"(\d+)\s*(\w+)/(\w+)" results = [] for desc in material_descriptions: matches = re.findall(pattern, desc) unit_strings = [] if matches: for match in matches: quantity, unit1, unit2 = match unit_string = f"{quantity} {unit1}/{unit2}" unit_strings.append(unit_string) if unit_strings: unit_info = ", ".join(unit_strings) results.append((desc, unit_info)) for material_desc, unit_info in results: print(f"Material Description: {material_desc}") print(f"Unit Information: {unit_info}") print() Python script fails in the below listed scenarios Material Desc String to be Extracted 3Mβ’ Victory Seriesβ’ Bracket MBTβ’ 017-873, .022, UL3, 0T/8A, Hk, 5/Pack 5/Pack 3Mβ’ BXβ’ Dual Reader Protective Eyewear 11458-00000-20, Clear Anti-Fog Lens, Silver/Black Frame, +2.0 Top/Bottom Diopter, 20 ea/Case 20 ea/Case 4220VDS-QCSHC/900-000/A CABINET EMPTY No units 3Mβ’ Bumponβ’ Protective Product SJ5476 Fluorescent Yellow, 3.000/Case 3.000/Case 3Mβ’ Bumponβ’ Protective Products SJ61A2 Black, 10,000/Case 10,000/Case Is there a way to achieve this ? | For the data you shared, I suggest using \b(?<!\d[,.])(?<!\d-)((?:\d+/)?\d+(?:[,.]\d+)*)\b(\s*(\w*)/)?\s*((?(2)\w+|\w+\s+per\s+\w+))(?=\s*(?:[,)]|$)) See the regex demo. In your code, since the regex contains 4 groups now, you need to use quantity, _, unit1, unit2 = match Just to skip Group 2 value that is technical in this regex. 
Regex details: \b - a word boundary (?<!\d[,.-]) - immediately before the current location, there should be no digit followed by a ,, . or - ((?:\d+/)?\d+(?:[,.]\d+)*) - Group 1: an optional group of one or more digits and then a /, and then one or more digits and then zero or more repetitions of , or . and one or more digits \b - a word boundary (\s*(\w*)/)? - Group 2 (optional): zero or more whitespaces and then zero or more word chars (captured into Group 3) and a / char \s* - zero or more whitespaces ((?(2)\w+|\w+\s+per\s+\w+)) - Group 4: if Group 2 matched, match one or more word chars, else, one or more word chars + one or more whitespaces, per + one or more whitespaces, one or more word chars (?=\s*(?:[,)]|$)) - a positive lookahead requiring (immediately to the right of the current location) zero or more whitespaces, then a , or ) or end of string. | 2 | 1 |
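A short sketch applying the suggested pattern exactly as the answer describes (four capture groups per match, with the second one skipped); the sample description is adapted from the question:

```python
import re

pattern = re.compile(
    r"\b(?<!\d[,.])(?<!\d-)((?:\d+/)?\d+(?:[,.]\d+)*)\b"
    r"(\s*(\w*)/)?\s*((?(2)\w+|\w+\s+per\s+\w+))(?=\s*(?:[,)]|$))"
)

desc = "3M Victory Series Bracket MBT 017-873, .022, UL3, 0T/8A, Hk, 5/Pack"
for quantity, _, unit1, unit2 in pattern.findall(desc):
    unit = f"{quantity} {unit1}/{unit2}" if unit1 else f"{quantity}/{unit2}"
    print(unit)  # -> 5/Pack
```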
78,467,544 | 2024-5-12 | https://stackoverflow.com/questions/78467544/how-to-add-two-pandas-dataframe-in-a-way-that-achieve-the-desired-behaviour-desc | the resulting dataframe will sum the value of matching index, but if an index is missing in one dataframe but present in the other, it will just carry them over instead of removing them. for example df1 = data index1 4.2 index2 5.0 index3 3.0 df2 = data index2 1.0 index3 7.0 index4 2.0 result = data index1 4.2 ( carry from df1) index2 6.0 (5.0 + 1.0) index3 10.0 (3.0 + 7.0) index4 2.0 ( carry from df2) I tried using + and add(), also concatenate, they either remove the index or fill it with na if the index is missing from one of the dataframe | You can try pd.concat and sum along rows: out = pd.concat([df1, df2], axis=1).sum(axis=1).to_frame(name="data") print(out) Prints: data index1 4.2 index2 6.0 index3 10.0 index4 2.0 OR: using .merge: out = ( df1.merge(df2, left_index=True, right_index=True, how="outer") .sum(axis=1) .to_frame(name="data") ) print(out) | 2 | 2 |
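Not from the accepted answer, but a related built-in worth a sketch: pandas' add with fill_value=0 gives the same carry-over behaviour for labels that exist in only one frame.

```python
import pandas as pd

df1 = pd.DataFrame({"data": [4.2, 5.0, 3.0]}, index=["index1", "index2", "index3"])
df2 = pd.DataFrame({"data": [1.0, 7.0, 2.0]}, index=["index2", "index3", "index4"])

out = df1.add(df2, fill_value=0)  # missing labels are treated as 0, so they carry over
print(out)
#         data
# index1   4.2
# index2   6.0
# index3  10.0
# index4   2.0
```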
78,466,491 | 2024-5-12 | https://stackoverflow.com/questions/78466491/incremental-computation-of-total-value-over-dates-and-groups | Here is my dataframe. There is a date index and there are 4 symbols for each date. I want to loop over each date for each symbol. The 'quantity' column is calculated based on the 'tot_value' of the previous date. The 'tot_value' is computed for a specific date and is common for all symbols. The 'value' column varies for each symbol for each date. It is an issue with the way I am using shift here. It does not reference the previous date value. Instead it uses the default value of tot_value that I use while populating the dataframe. However, in the final result the tot_value is getting computed correctly. I am new to python and would appreciate any help with this loop. dataframe Here is my code. import pandas as pd # create the dataframe data = {'symbol': ['A', 'B', 'C', 'D','A', 'B', 'C', 'D','A', 'B', 'C', 'D','A', 'B', 'C', 'D'], 'date':['05/06/2024','05/06/2024','05/06/2024','05/06/2024', '05/07/2024','05/07/2024','05/07/2024','05/07/2024', '05/08/2024','05/08/2024','05/08/2024','05/08/2024', '05/09/2024','05/09/2024','05/09/2024','05/09/2024'], 'tot_value': [1000, 1000, 1000, 1000,1000, 1000, 1000, 1000,1000, 1000, 1000, 1000,1000, 1000, 1000, 1000], 'mult': [1, 1.1, 1.2, 1.3,1.4, 1.5, 1.6, 1.7,1.8, 1.9, 2, 2.1,2.2, 2.3, 2.4, 2.5], 'quantity': [0, 0, 0, 0,0, 0, 0, 0,0, 0, 0, 0,0, 0, 0, 0], 'value': [0, 0, 0, 0,0, 0, 0, 0,0, 0, 0, 0,0, 0, 0, 0], } df = pd.DataFrame(data) df.set_index(['date'], inplace = True) symbols = df['symbol'].unique() # loop by date index and symbol for ind in df.index.unique(): for symbol in symbols: df['quantity'][ind] = df['tot_value'][ind].shift(1) * df['mult'][ind] df['value'][ind] = df['quantity'][ind] * 5 g = df.groupby('date')['value'].sum() df['tot_value'][ind] = g.sum() df Here is the expected output. Calculations: date = 5/6 There is no tot_value for a previous date so the quantity column is NaN. Hence the value is also NaN. And tot_value = default value of 1,000. date = 5/7 tot_value for previous date = 1,000. The quantity for this date is based on the tot_value of 1,000. Once the quantity is calculated the value calc is straightforward. tot_value for 5/7 = tot_value for 5/6 + sum of value for the 4 symbols on 5/7. tot_value for 5/7 = 1,000 + sum(7,000 + 7,500 + 8,000 + 8,500) = 32,000 date = 5/8 tot_value for previous date = 32,000. The quantity for this date is based on the tot_value of 32,000. Once the quantity is calculated the value calc is straightforward. tot_value for 5/8 = tot_value for 5/7 + sum of value for the 4 symbols on 5/8. tot_value for 5/8 = 32,000 + sum(288,000 + 304,000 + 320,000 + 336,000) = 1,280,000. symbol tot_value (expected) mult quantity (expected) value (expected) date 5/6/2024 A 1,000 1 NaN NaN 5/6/2024 B 1,000 1.1 NaN NaN 5/6/2024 C 1,000 1.2 NaN NaN 5/6/2024 D 1,000 1.3 NaN NaN 5/7/2024 A 32,000 1.4 1,400 7,000 5/7/2024 B 32,000 1.5 1,500 7,500 5/7/2024 C 32,000 1.6 1,600 8,000 5/7/2024 D 32,000 1.7 1,700 8,500 5/8/2024 A 1,280,000 1.8 57,600 288,000 5/8/2024 B 1,280,000 1.9 60,800 304,000 5/8/2024 C 1,280,000 2 64,000 320,000 5/8/2024 D 1,280,000 2.1 67,200 336,000 5/9/2024 A 61,440,000 2.2 2,816,000 14,080,000 5/9/2024 B 61,440,000 2.3 2,944,000 14,720,000 5/9/2024 C 61,440,000 2.4 3,072,000 15,360,000 5/9/2024 D 61,440,000 2.5 3,200,000 16,000,000 expected output | Your computation is inherently iterative, therefore a loop is a valid approach. 
This is however not the classical type of pandas operation (usually vectorized). One option, assuming the data is sorted by date, then symbol, would be to loop over a groupby (ignoring the first date): # initial total tot = df['tot_value'].iloc[0] dates = df.index.unique() for d, g in df.loc[dates[1]:].groupby('date', sort=False): qty = tot*g['mult'] val = qty*5 tot += val.sum() df.loc[d, 'quantity'] = qty df.loc[d, 'value'] = val df.loc[d, 'tot_value'] = tot Output: symbol tot_value mult quantity value date 05/06/2024 A 1000 1.0 0 0 05/06/2024 B 1000 1.1 0 0 05/06/2024 C 1000 1.2 0 0 05/06/2024 D 1000 1.3 0 0 05/07/2024 A 32000 1.4 1400 7000 05/07/2024 B 32000 1.5 1500 7500 05/07/2024 C 32000 1.6 1600 8000 05/07/2024 D 32000 1.7 1700 8500 05/08/2024 A 1280000 1.8 57600 288000 05/08/2024 B 1280000 1.9 60800 304000 05/08/2024 C 1280000 2.0 64000 320000 05/08/2024 D 1280000 2.1 67200 336000 05/09/2024 A 61440000 2.2 2816000 14080000 05/09/2024 B 61440000 2.3 2944000 14720000 05/09/2024 C 61440000 2.4 3072000 15360000 05/09/2024 D 61440000 2.5 3200000 16000000 | 2 | 1 |
78,464,072 | 2024-5-11 | https://stackoverflow.com/questions/78464072/what-is-the-best-way-to-merge-two-dataframes-that-one-of-them-has-overlapping-ra | My DataFrames are: import pandas as pd df_1 = pd.DataFrame( { 'a': [10, 12, 14, 20, 25, 30, 42, 50, 80] } ) df_2 = pd.DataFrame( { 'start': [9, 19], 'end': [26, 50], 'label': ['a', 'b'] } ) Expected output: Adding column label to df_1: a label 10 a 12 a 14 a 20 a 25 a 20 b 25 b 30 b 42 b 50 b df_2 defines the ranges of labels. So for example, the first row of df_2 start of the range is 9 and the end is 22. Now I want to slice df_1 based on start and end and give this label to the slice. Note that start is exlusive and end is inclusive. And my labels ranges are overlapping. These are my attempts. The first one works but I am not sure if it is the best. # attempt_1 dfc = pd.DataFrame([]) for idx, row in df_2.iterrows(): start = row['start'] end = row['end'] label = row['label'] df_slice = df_1.loc[df_1.a.between(start, end, inclusive='right')] df_slice['label'] = label dfc = pd.concat([df_slice, dfc], ignore_index=True) ## attempt 2 idx = pd.IntervalIndex.from_arrays(df_2['start'], df_2['end'], closed='both') label = df_2.iloc[idx.get_indexer(df_1.a), 'label'] df_1['label'] = label.to_numpy() | I'd try to concatenate generated labeled ranges using pandas.concat this way: template = df_1.set_index('a') ranges = df_2.values output = pd.concat( template.loc[start:end].assign(label=label) for start, end, label in ranges ).reset_index() It's close to your solution with two major differences: All iterations are collapsed into an inner generator. We use df_1['a'] as an index, which is implied by its nature. | 4 | 2 |
78,464,719 | 2024-5-11 | https://stackoverflow.com/questions/78464719/recursive-backtracking-word-search-in-a-2d-array | I was attempting the word search problem here on leetcode. Leetcode seems to give me an error for the two test cases: board = [["a","a"]], word = "aa" and for : board = [["a"]], word = "a" But the following code seems to work fine on Google colab for both of these test cases. Could someone be able to please pinpoint the exact cause? Likely, it is the result of the difference in Python versions. But what exactly is a loss for me! Thanks in advance! class Solution: def exist2(self, board: List[List[str]], word: str, current_row, current_col,\ match_index=0,seen_cells=[]) -> bool: if match_index==len(word): return True else: for i in range(-1,2): for j in range(-1,2): if current_row+i>=0 and current_col+j>=0 and current_row+i<len(board)\ and current_col+j<len(board[0]) and\ board[current_row+i][current_col+j]==word[match_index] and\ (current_row+i,current_col+j) not in seen_cells\ and i+j!=-2 and i+j!=2: match_index+=1 seen_cells.append((current_row+i,current_col+j)) if self.exist2(board, word, current_row+i, current_col+j,\ match_index,seen_cells): return True else: seen_cells.remove((current_row+i,current_col+j)) current_row,current_col=seen_cells[-1] return False def exist(self, board: List[List[str]], word: str, current_row=0, current_col=0,\ match_index=0,seen_cells=[]) -> bool: start_index=[] for i in range(len(board)): for j in range(len(board[0])): if board[i][j]==word[0]: start_index.append((i,j)) for ele in start_index: if self.exist2(board, word, ele[0],ele[1]): return True return False def main() print(exist([["a","a"]],'a')) | There are multiple issues with your code. First, you are relying on the default argument of seen_cells, but there is only one instance of that list, which is used across all calls to exist2. Multiple test cases will share the same state, which causes problems when there are tuples in seen_cells from previous calls of exist2. You are also incrementing match_index in every iteration of the inner loop in exist2 even though each iteration should be independent. You should instead pass match_index + 1 to the recursive call of exist2. Furthermore, you have allowed diagonal moves by only checking i+j!=-2 and i+j!=2. For instance, the move (-1, 1) is allowed since -1 + 1 = 0 is neither equal to -2 nor 2. It would be correct to check that exactly one of i and j is non-zero. This can be done concisely with ((not i) ^ (not j)), though you can also write it out more explicitly. In addition, you do not add the first cell in the path to seen_paths. You can fix this off-by-one error by checking for the match right at the start of exist2 and remove from seen_cells right before the return. Note that the line current_row,current_col=seen_cells[-1] does not make much sense and can cause IndexErrors; remove it. Finally, this solution is too slow because checking for existence in a list is a linear time operation. Instead, use a set or a boolean matrix to mark which cells have already been passed through. | 2 | 3 |
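A sketch that applies the accepted answer's fixes in one place (a per-call set of visited cells, strictly orthogonal moves, and match_index + 1 passed down instead of mutated); this is an illustrative rewrite, not the asker's original structure.

```python
from typing import List

class Solution:
    def exist(self, board: List[List[str]], word: str) -> bool:
        rows, cols = len(board), len(board[0])

        def dfs(r: int, c: int, i: int, seen: set) -> bool:
            if i == len(word):                      # every character already matched
                return True
            if (r < 0 or c < 0 or r >= rows or c >= cols
                    or (r, c) in seen or board[r][c] != word[i]):
                return False
            seen.add((r, c))
            found = any(dfs(r + dr, c + dc, i + 1, seen)              # exactly one offset is
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))  # non-zero: no diagonals
            seen.remove((r, c))                     # backtrack
            return found

        return any(dfs(r, c, 0, set()) for r in range(rows) for c in range(cols))
```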
78,465,168 | 2024-5-11 | https://stackoverflow.com/questions/78465168/how-do-i-draw-text-thats-right-aligned-in-pygame | I am making a Math-thingy that requires numbers to be drawn to the screen right aligned. I'm using this function to draw text: def drawText(text, font, text_col, x, y): img = font.render(text, True, text_col) screen.blit(img, (x, y)) How do I draw the text right aligned instead of left aligned? I tried to use Python's .rjust() function for this, but I'm not using a monospaced font. Is there anyway to possibly either use .rjust() to right align it or possibly something else? | The pygame.Rect object has a set of virtual attributes that can be used to define the position of surfaces and text. Align the text to its right by setting topright: def drawText(text, font, text_col, x, y): img = font.render(text, True, text_col) rect = img.get_rect(topright = (x, y)) screen.blit(img, rect) Minimal example import pygame from pygame.locals import * pygame.init() screen = pygame.display.set_mode((500, 300)) clock = pygame.time.Clock() font = pygame.font.SysFont(None, 32) count = 0 run = True while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False count += 1 text_surf = font.render(str(count), True, "black") rect = screen.get_rect().inflate(-110, -110) screen.fill("white") pygame.draw.rect(screen, "red", screen.get_rect().inflate(-100, -100), 5) screen.blit(text_surf, text_surf.get_rect(topleft = rect.topleft)) screen.blit(text_surf, text_surf.get_rect(midtop = rect.midtop)) screen.blit(text_surf, text_surf.get_rect(topright = rect.topright)) screen.blit(text_surf, text_surf.get_rect(midleft = rect.midleft)) screen.blit(text_surf, text_surf.get_rect(center = rect.center)) screen.blit(text_surf, text_surf.get_rect(midright = rect.midright)) screen.blit(text_surf, text_surf.get_rect(bottomleft = rect.bottomleft)) screen.blit(text_surf, text_surf.get_rect(midbottom = rect.midbottom)) screen.blit(text_surf, text_surf.get_rect(bottomright = rect.bottomright)) pygame.display.flip() clock.tick(100) pygame.quit() exit() | 2 | 3 |
78,464,426 | 2024-5-11 | https://stackoverflow.com/questions/78464426/how-to-correctly-index-a-dataframe-with-a-function | Given a dataframe: a b c u 5 0 3 v 3 7 9 w 3 5 2 I'd like to select rows/columns in a dataframe using a function. This function gets the dataframe and returns a tuple of lists of labels, e.g. it returns ['v', 'u'], ['c'] using this lambda: get_labels = lambda df: (['v', 'u'], ['c']) If I use .loc with the label literals, I get the desired result: df.loc[['v', 'u'], ['c']] c v 9 u 3 The documentation says about using a function (a callable), that a function returning two lists of labels is valid ("one of the above"): A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above) But if I use .loc with the function, which I believe returns the same list tuple, I get an error: df.loc[get_labels] unhashable type: 'list' What's wrong? (as a possible clue: Pandas seems trying to create an Index object with the result of the function -- don't know why, no explanation in the documentation.) Full code: import numpy.random as npr import pandas as pd npr.seed(0) df = pd.DataFrame(npr.randint(10, size=(3,3)), index=['u','v','w'], columns=['a','b','c']) get_labels = lambda df: (['v', 'u'], ['c']) print(df.loc[['v', 'u'], ['c']]) print(df.loc[get_labels]) | As noted in the loc documentation, you need to pass: A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index, and never as an integer position along the index). A list or array of labels, e.g. ['a', 'b', 'c']. A slice object with labels, e.g. 'a':'f'. A boolean array of the same length as the axis being sliced, e.g. [True, False, True]. An alignable boolean Series. The index of the key will be aligned before masking. An alignable Index. The Index of the returned selection will be the input. A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above) Although maybe a bit ambiguous in the documentation, all of these are applicable to one axis. df.loc['a'] # single label df.loc[['a', 'b', 'c']] # list of labels df.loc['a':'f'] # slice df.loc[[True, False, True]] # list of booleans df.loc[pd.Series([True], index=['a']) # series df.loc[pd.Index(['a'])] # index df.loc[lambda x: ['a', 'b']] # callable In your case, the callable returns two indexers, making it as if you were passing them to the index locator: df.loc[(['v', 'u'], ['c']), :] You would need one callable per axis: get_index = lambda x: ['u', 'v'] get_col = lambda x: ['c'] df.loc[get_index, get_col] You need to call your function: df.loc[get_labels(df)] To use in a chain: df.pipe(lambda x: x.loc[get_labels(x)]) In my opinion, it might be better to have the function perform the selection directly: def get_labels(df): return df.loc[['v', 'u'], ['c']] df.pipe(get_labels) Output: c v 9 u 3 | 2 | 1 |
78,463,386 | 2024-5-11 | https://stackoverflow.com/questions/78463386/python-multiindex-dataframe-to-json-like-list-of-dictionaries | I want to store this data frame df = pd.DataFrame({ 'id':[1,1,2,2], 'gender':["m","m","f","f"], 'val1':[1,2,5,6], 'val2':[3,4,7,8] }).set_index(['id','gender']) as a json file that contains lists of dictionaries like this: d = [{'id':1, 'gender':"m", 'val1':[1,2], 'val2':[3,4]}, {'id':2, 'gender':"f", 'val1':[5,6], 'val2':[7,8]}] All my attempts to use variations of df.to_dict(orient = '...') or df.to_json(orient = '...') did not produce the desired output. | You first need to aggregate as list with groupby.agg and reset_index: (df.groupby(['id', 'gender']).agg(list) .reset_index().to_json(orient='records') ) Output: [{"id":1,"gender":"m","val1":[1,2],"val2":[3,4]},{"id":2,"gender":"f","val1":[5,6],"val2":[7,8]}] | 4 | 3 |
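Since the question asks for a JSON file rather than a string, the same chain can write straight to disk by passing a path (the filename is just an example):

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 1, 2, 2],
    'gender': ["m", "m", "f", "f"],
    'val1': [1, 2, 5, 6],
    'val2': [3, 4, 7, 8],
}).set_index(['id', 'gender'])

(df.groupby(['id', 'gender']).agg(list)
   .reset_index()
   .to_json('records.json', orient='records'))  # writes the file; returns None
```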
78,460,161 | 2024-5-10 | https://stackoverflow.com/questions/78460161/find-length-of-sequences-of-identical-values-along-one-axis-in-a-3d-array-relat | I'm interested in finding the length of sequences of 1's along a single axis in a multi-dimension array. For a 1D array I have worked my way to a solution using the answers at this older question. E.g. [0,1,0,0,1,1,1,0,1,1] --> [nan,1,nan,nan,3,nan,nan,nan,2,nan] For a 3D array I can of course create a loop, but I'd rather not. (Background climate science, looping over all latitude/longitude grid cells is going to make this very slow.) I'm trying to find a solution in line with the 1D solution. Help would be much appreciated, in line with the 1D code, but complete different solutions welcome too of course. For reference, this is my working 1D solution: import xarray as xr import numpy as np def run_lengths(da): n = len(da) y = da.values[1:] != da.values[:-1] i = np.append(np.where(y), n - 1) z = np.diff(np.append(-1,i)) p = np.cumsum(np.append(0,z))[:-1] runs = np.where(da[i]==1)[0] runs_len = z[runs] # length of sequence time_val = da.time[p[runs]] # date of first day in sequence da_runs = xr.DataArray(runs_len,coords={'time':time_val}) _,da_runs = xr.align(da,da_runs,join='outer') # make sure we have full time axis return da_runs da = xr.DataArray(np.array([[[0,1,1,0,0,0],[1,0,1,1,0,1],[1,1,1,1,0,1]],[[0,1,1,0,0,0],[1,0,1,1,0,1],[1,1,1,1,0,1]]]),coords={'lat':[0,1],'lon':[0,1,2],'time':[0,1,2,3,4,5]}) da_runs = run_lengths(da[0,1]) print(da_runs) <xarray.DataArray (time: 6)> array([ 1., nan, 2., nan, nan, 1.]) Coordinates: * time (time) int64 0 1 2 3 4 5 And this is the attempt in 3D. I'm stuck on how to shift the valid entries in i to the front/remove NaNs from i. (And maybe beyond that as well?) def run_lengths_3D(da): n = len(da.time) y = da.values[:,:,1:] != da.values[:,:,:-1] y = xr.DataArray(y,coords={'lat':da.lat,'lon':da.lon,'time':da.time[0:-1]}) i = y.where(y)*xr.DataArray(np.arange(0,len(da.time[0:-1])),coords={'time':y.time}) -1 | For this task you can try to use numba, e.g.: import numba import numpy as np @numba.njit def calculate(arr): out = np.empty_like(arr, dtype="uint16") lat, lon, time = arr.shape for i in range(lat): for j in range(lon): curr = 0 for k in range(time): out[i, j, k] = 0 if arr[i, j, k] == 0: if curr > 0: out[i, j, k - curr] = curr curr = 0 else: curr += 1 if curr > 0: out[i, j, -curr] = curr return out arr = np.array( [ [[0, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 1], [1, 1, 1, 1, 0, 1]], [[0, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 1], [1, 1, 1, 1, 0, 1]], ] ) print(calculate(arr)) Prints: [[[0 2 0 0 0 0] [1 0 2 0 0 1] [4 0 0 0 0 1]] [[0 2 0 0 0 0] [1 0 2 0 0 1] [4 0 0 0 0 1]]] Benchmark using timeit + parallel version: from timeit import timeit import numba import numpy as np @numba.njit def calculate(arr): out = np.empty_like(arr, dtype="uint16") lat, lon, time = arr.shape for i in range(lat): for j in range(lon): curr = 0 for k in range(time): out[i, j, k] = 0 if arr[i, j, k] == 0: if curr > 0: out[i, j, k - curr] = curr curr = 0 else: curr += 1 if curr > 0: out[i, j, -curr] = curr return out @numba.njit(parallel=True) def calculate_parallel(arr): out = np.empty_like(arr, dtype="uint16") lat, lon, time = arr.shape for i in range(lat): for j in numba.prange(lon): curr = 0 for k in range(time): out[i, j, k] = 0 if arr[i, j, k] == 0: if curr > 0: out[i, j, k - curr] = curr curr = 0 else: curr += 1 if curr > 0: out[i, j, -curr] = curr return out arr = np.array( [ [[0, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 1], [1, 1, 1, 1, 0, 
1]], [[0, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 1], [1, 1, 1, 1, 0, 1]], ], dtype="uint8", ) # compile calculate()/calculate_parallel() assert np.allclose(calculate(arr), calculate_parallel(arr)) np.random.seed(42) arr = np.random.randint(low=0, high=2, size=(256, 512, 3650), dtype="uint8") t_serial = timeit("calculate(arr)", number=1, globals=globals()) t_parallel = timeit("calculate_parallel(arr)", number=1, globals=globals()) print(f"{t_serial * 1_000_000:.2f} usec") print(f"{t_parallel * 1_000_000:.2f} usec") Prints on my machine (AMD 5700x): 1575227.47 usec 320453.57 usec | 2 | 1 |
78,457,374 | 2024-5-9 | https://stackoverflow.com/questions/78457374/circular-imports-from-classes-whose-attributes-reference-one-another | I know there are a lot of questions that talk about circular imports. I've looked at a lot of them, but I can't seem to be able to figure out how to apply them to this scenario. I have a pair of data loading classes for importing data from an excel document with multiple sheets. The sheets are each configurable, so the sheet name and the individual column names can change (but they have defaults defined in the class attributes). There are other inter-class references between the 2 classes in question, but I figured this example was the most straight-forward: One of the features of a separate export script is to use the loaders' metadata to populate an excel template (with multiple sheets). That template has comments on the column headers in each sheet that reference the other sheets, because the contents of some sheets are used to populate dropdowns in other sheets. So a comment in a header in one sheet may say "This column's dropdown data is populated by the contents of column X in sheet 2". And sheet 2 column X's header would have a comment that says "This column's contents is used to populate dropdowns for column Y in sheet 1." I went ahead and added the respective imports, knowing I would end up with a circular import issue, but I figured I would get everything conceptually established as to what I wanted to do and then try and solve the import issue. Here's some toy code to try and boil it down: infusates_loader.py: from DataRepo.loaders.tracers_loader import TracersLoader class InfusatesLoader(TableLoader): DataColumnMetadata = DataTableHeaders( TRACERNAME=TableColumn.init_flat( ... source_sheet=TracersLoader.DataSheetName, source_column=TracersLoader.DataHeaders.NAME, ), ) tracers_loader.py: from DataRepo.loaders.infusates_loader import InfusatesLoader class TracersLoader(TableLoader): DataColumnMetadata = DataTableHeaders( NAME=TableColumn.init_flat( ... # Cannot reference the InfusatesLoader here (to include the name of its # sheet and its tracer name column) due to circular import target_sheet=InfusatesLoader.DataSheetName, source_column=InfusatesLoader.DataHeaders.TRACERNAME, ), ) I was able to avoid the issue for now just by setting static string values in tracers_loader.py, but ideally, those values would only live in one place (each in its respective class). A lot of the circular import questions out there have to do with methods, so I don't think they apply to class attributes? I tried using importlib and I tried doing the import inside a function, but as soon as it tried to set up the class, I get hit with the import error. | Often times, answers to circular import questions fall into a couple of categories: Tricks to avoid the error and keep the design and... You need to rethink your redesign Usually however, the "rethink your design" is not accompanied with any suggested design patterns. Clearly, in my case, it was a design issue. I knew that was likely the case, but I couldn't see the forest for the trees. I was holding firm to the concept of (as @user2357112's comment pointed out) a "single source of truth". However, I was missing the fact that I had a conceptual option that was true to what I was modeling. I had a class for each sheet in my excel document, but I was missing a class for the document itself, where I could put the inter-sheet relationships (and the definition of the list of sheets in the document). 
@user2357112 had put it in terms of a "config", but I realized that could easily be a sort of "superclass" or "coordinating class". I'm just about to endeavor into creating that class, but I'm certain a class that defines the inter-sheet relationships is what's called for here. I don't know what to call such a design pattern or what specific form it will take, but conceptually, that was what I was missing. So, to give an example, what I need is something like: study_doc.py: class StudyDoc(): infusates_tracer_reference: { "sheet": "Tracers", "column": "Name", } tracers_infusate_reference: { "sheet": "Infusates", "column": "Tracer Name", } Then I can import that in both of the other classes: infusates_loader.py: from DataRepo.loaders.study_doc import StudyDoc class InfusatesLoader(TableLoader): DataColumnMetadata = DataTableHeaders( TRACERNAME=TableColumn.init_flat( ... source_sheet=StudyDoc.infusates_tracer_reference["sheet"], source_column=StudyDoc.infusates_tracer_reference["column"], ), ) tracers_loader.py: from DataRepo.loaders.study_doc import StudyDoc class TracersLoader(TableLoader): DataColumnMetadata = DataTableHeaders( NAME=TableColumn.init_flat( ... target_sheet=StudyDoc.tracers_infusate_reference["sheet"], source_column=StudyDoc.tracers_infusate_reference["column"], ), ) I probably will make it a bit more sophisticated, but this is the basic idea: All these relationships are based on the fact that they all belong to the same excel document. It's that document that is the basis for the relationships, so it should coordinate their connections. | 2 | 1 |
78,461,650 | 2024-5-10 | https://stackoverflow.com/questions/78461650/save-and-restore-terminal-window-conent-using-curses-for-python | I'm making some console app on Python with Curses (window-curses) library. At some point I need to save window state (or maybe whole terminal state) to some object/variable and restore it in future. What's the proper way to do this? I found methods in module documentaion which do this via saving state to the file. But maybe there exists some other way to do this in memory. | As you found, you can use things like the putwin function to save a pad to a file and getwin to restore it from a file. If you want to keep this in memory, rather than a file on disk, you can use a BytesIO object in place of a real file handle. import io import curses curses.initscr() pad = curses.newpad(100, 100) # ... do things with the pad # save the pad in memory f = io.BytesIO() pad.putwin(f) # later recall the data f.seek(0) # reset the cursor to the beginning of the "file" pad.getwin(f) You could also write some functions to describe this another way: def save_win(win) -> bytes: f = io.BytesIO() win.putwin(f) bytes_data = f.getvalue() return bytes_data def load_win(bytes_data: bytes) -> curses.window: f = io.BytesIO(bytes_data) f.seek(0) return curses.getwin(f) | 2 | 2 |
78,458,753 | 2024-5-10 | https://stackoverflow.com/questions/78458753/how-to-receive-file-send-from-flask-with-curl | Here the server side code with python & flask: from flask import Flask, request, send_file import io import zipfile import bitstring app = Flask(__name__) @app.route('/s/', methods=['POST']) def return_files_tut(): try: f = io.BytesIO() f.write("abcd".encode()) return send_file(f, attachment_filename='a.ret.zip') except Exception as e: return str(e) if __name__ == '__main__': app.run(debug=True) Bellowing is curl command: Ξ» curl -X POST http://localhost:5000/s/ -o ret.tmp % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 4 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 curl: (18) transfer closed with 4 bytes remaining to read How should I use curl to receive the file? | When data is written, the pointer moves to the next position in the file. This means that before the data is read to make it available as a download, the pointer points to the end and no further data is read. So the download fails. In order to offer the file as a download, the pointer within the file must be reset to the beginning using seek(0). The attachment_filename attribute of send_file() is now deprecated and has been replaced by download_name. @app.post('/s/') def return_files_tut(): f = io.BytesIO() f.write("abcd".encode()) f.seek(0) return send_file(f, download_name='a.ret.zip') | 2 | 4 |
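A compact runnable sketch of the corrected server from the answer above; it assumes Flask ≥ 2.0 (for `app.post` and `download_name`) and an arbitrary route and filename.

```python
import io
from flask import Flask, send_file

app = Flask(__name__)

@app.post("/s/")
def return_file():
    f = io.BytesIO()
    f.write(b"abcd")
    f.seek(0)  # rewind, otherwise send_file reads from the end and sends 0 bytes
    # Flask >= 2.0 uses download_name instead of the deprecated attachment_filename
    return send_file(f, download_name="a.ret.zip", as_attachment=True)

if __name__ == "__main__":
    app.run(debug=True)

# fetch it with: curl -X POST http://localhost:5000/s/ -o ret.tmp
```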
78,459,786 | 2024-5-10 | https://stackoverflow.com/questions/78459786/pandas-every-nth-row-from-each-group | Assume groups will have more than n memebers, I want to take every nth row from each group. I looked at https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.GroupBy.nth.html but that only takes one row from each group. For example: import pandas as pd x = pd.DataFrame.from_dict({'a': [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3], 'b': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]}) a b 0 1 1 1 1 2 2 2 3 3 2 4 4 2 5 5 3 6 6 3 7 7 3 8 8 3 9 9 3 10 10 3 11 11 3 12 And if we are keeping every 2nd row, this would be the desired outcome: a b 1 1 2 3 2 4 6 3 7 8 3 9 10 3 11 | Use GroupBy.cumcount with modulo and compare by 1: out = x[x.groupby('a').cumcount() % 2 == 1] print (out) a b 1 1 2 3 2 4 6 3 7 8 3 9 10 3 11 How it working: print (x.assign(counter=x.groupby('a').cumcount(), mod2 = x.groupby('a').cumcount() % 2, mask = x.groupby('a').cumcount() % 2 == 1)) a b counter mod2 mask 0 1 1 0 0 False 1 1 2 1 1 True 2 2 3 0 0 False 3 2 4 1 1 True 4 2 5 2 0 False 5 3 6 0 0 False 6 3 7 1 1 True 7 3 8 2 0 False 8 3 9 3 1 True 9 3 10 4 0 False 10 3 11 5 1 True 11 3 12 6 0 False Another idea with iloc in GroupBy.apply: out = x.groupby('a', group_keys=False)[x.columns].apply(lambda x: x.iloc[1::2]) | 2 | 4 |
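The cumcount trick generalised to an arbitrary step n, as a small sketch (sample data copied from the question; the helper name is made up):

```python
import pandas as pd

x = pd.DataFrame({"a": [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3],
                  "b": range(1, 13)})

def every_nth_per_group(df, by, n):
    # cumcount() numbers rows 0,1,2,... inside each group, so positions
    # n-1, 2n-1, ... are exactly every n-th member of the group
    return df[df.groupby(by).cumcount() % n == n - 1]

print(every_nth_per_group(x, "a", 2))  # same result as the accepted answer
```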
78,459,504 | 2024-5-10 | https://stackoverflow.com/questions/78459504/write-dictionary-to-excel | I'm trying to convert a python dictionary into an excel file. But I didn't manage to format the dataframe on a way that is how I want the excel file to look like. I have a dict in the shape as below: data = { 'Key 1': { 'PP': {'A': 105.08, 'B': 9.03, 'C': 0.12, 'D': 3.18, 'E': 0.5}, 'RP': {'A': 43.35, 'B': 6.92, 'C': -0.13, 'D': 3.03, 'E': -0.1}, 'SC': {'A': 36.15, 'B': 7.3, 'C': -0.01, 'D': 2.32, 'E': 0.34} }, 'Key2': { 'PP': {'A': 616.68, 'B': 11.09, 'C': 15.47, 'D': 3.82, 'E': 12.45}, 'RP': {'A': 416.92, 'B': 7.77, 'C': 6.48, 'D': 2.78, 'E': 5.25}, 'SC': {'A': 298.54, 'B': 9.57, 'C': 13.67, 'D': 3.51, 'E': 10.67} } } And I got almost what I wanted in the output using this code: df = pd.DataFrame.from_dict( {(i,j):data[i][j] for i in data.keys() for j in data[i]}, orient = 'index' ) The output looks like: And I want this to look like: | From your current approach, you can chain unstack and post-process the columns with swaplevel and sort_index: df = (pd.DataFrame.from_dict( {(i,j):data[i][j] for i in data.keys() for j in data[i]}, orient = 'index' ) .unstack().swaplevel(axis=1).sort_index(axis=1) ) Or, change the dictionary comprehension to: out = pd.DataFrame.from_dict({k: {(k1, k2): v for k1, d2 in d.items() for k2, v in d2.items()} for k, d in data.items()}, orient='index') Output: PP RP SC A B C D E A B C D E A B C D E Key 1 105.08 9.03 0.12 3.18 0.50 43.35 6.92 -0.13 3.03 -0.10 36.15 7.30 -0.01 2.32 0.34 Key2 616.68 11.09 15.47 3.82 12.45 416.92 7.77 6.48 2.78 5.25 298.54 9.57 13.67 3.51 10.67 | 2 | 3 |
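If the goal is an actual workbook rather than just the reshaped frame, the result can be handed to `to_excel`; a sketch assuming openpyxl is installed and an arbitrary output filename:

```python
import pandas as pd

data = {
    "Key 1": {"PP": {"A": 105.08, "B": 9.03}, "RP": {"A": 43.35, "B": 6.92}},
    "Key2":  {"PP": {"A": 616.68, "B": 11.09}, "RP": {"A": 416.92, "B": 7.77}},
}

# flatten the two inner levels into MultiIndex columns, one row per outer key
df = pd.DataFrame.from_dict(
    {k: {(k1, k2): v for k1, d2 in d.items() for k2, v in d2.items()}
     for k, d in data.items()},
    orient="index",
)

# MultiIndex columns are written as two stacked header rows in the sheet
df.to_excel("report.xlsx", sheet_name="summary")
```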
78,459,217 | 2024-5-10 | https://stackoverflow.com/questions/78459217/keep-columns-and-rows-that-contains-fail-in-pandas-dataframe | I would like to keep columns that contains word "FAIL". Input data: Values1 Values2 Values3 Status1 Status2 Status3 1 1 1 PASS PASS FAIL 2 2 2 PASS PASS PASS 3 3 3 PASS PASS PASS 4 4 4 PASS FAIL PASS Expected output: Status2 Status3 PASS FAIL FAIL PASS Current Output: Status1 Status2 Status3 PASS PASS FAIL PASS FAIL PASS My code: import pandas as pd values = range(1,5) status_pass = ["PASS"]*len(values) status1 = status_pass[1:]+["FAIL"] status2 = status1[::-1] df = pd.DataFrame({"Values1":values,"Values2":values,"Values3":values,"Status1":status_pass,"Status2":status1,"Status3":status2}) # drop unwanted rows words_to_keep = ["FAIL"] df = df[df.stack().groupby(level=0).apply( lambda x: all(x.str.contains(w, case=False).any() for w in words_to_keep))] # Filter by column name df = df.filter(like='Status', axis=1) | Use boolean indexing with any: df.loc[:, df.eq('FAIL').any()] # or for multiple words # the mask doesn't matter as long # as you have True/False df.loc[:, df.isin(words_to_keep).any()] Output: Status2 Status3 0 PASS FAIL 1 PASS PASS 2 PASS PASS 3 FAIL PASS How it works: df.eq('FAIL').any() Values1 False Values2 False Values3 False Status1 False Status2 True Status3 True dtype: bool # which is equivalent to df.loc[:, [False, False, False, False, True, True]] | 3 | 1 |
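The question's expected output also drops rows without a FAIL; the same equality mask can be applied on both axes, as a sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "Values1": [1, 2, 3, 4],
    "Status1": ["PASS", "PASS", "PASS", "PASS"],
    "Status2": ["PASS", "PASS", "PASS", "FAIL"],
    "Status3": ["FAIL", "PASS", "PASS", "PASS"],
})

mask = df.eq("FAIL")
# rows that contain a FAIL anywhere, and columns that contain a FAIL anywhere
out = df.loc[mask.any(axis=1), mask.any(axis=0)]
print(out)
#   Status2 Status3
# 0    PASS    FAIL
# 3    FAIL    PASS
```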
78,459,085 | 2024-5-10 | https://stackoverflow.com/questions/78459085/django-serializer-is-not-validating-or-saving-data | I am trying to save data in the database, it works for most of my models but for few models I am not sure why but django is not serializing data. I hope someone can point out where I might be doing wrong. Here is my code Django Models class UserModel(AbstractUser): uuid = models.UUIDField(default=uuid.uuid4, editable=False) class UserWorkPresenceModel(models.Model): user = models.ForeignKey("UserModel", on_delete=models.CASCADE, related_name="workpresence") is_active = models.BooleanField(default=False) check_in = models.DateTimeField(blank=False) check_out = models.DateTimeField(blank=True, null=True) work_break = [] class UserWorkBreakModel(models.Model): work_presence = models.ForeignKey("UserWorkPresenceModel", on_delete=models.CASCADE, related_name="work_break") start_break = models.DateTimeField(blank=True) end_break = models.DateTimeField(blank=True, null=True) is_current_break = models.BooleanField(default=False) Serializer: class UserTakeBreakSerializer(serializers.Serializer): class Meta: model = UserWorkBreakModel # fields = ("start_break", "end_break") fields = '__all__' API View class UserStartWorkBreakView(APIView): def post(self, request, id): try: user = UserModel.objects.get(uuid=id) except: return Response({'message': 'User not found'}, status=HTTP_400_BAD_REQUEST) try: work_presence = UserWorkPresenceModel.objects.get(user=user, is_active=True) except UserWorkPresenceModel.DoesNotExist: return Response({'message': 'User work presence not found'}, status=HTTP_400_BAD_REQUEST) currently_onbreak = UserWorkBreakModel.objects.filter(work_presence=work_presence, is_current_break=True) if currently_onbreak.exists(): return Response({'message': 'User already working'}, status=HTTP_400_BAD_REQUEST) serializer = UserTakeBreakSerializer(data=request.data) print(serializer) if serializer.is_valid(): print(f'validated_data: {serializer.validated_data}') user.workpresence_status = UserWorkPresenceStatus.ON_BREAK serializer.validated_data['is_current_break'] = True serializer.save(work_presence=work_presence) user.save() print(f'serializer.data: {serializer.data}') return Response(serializer.data, status=HTTP_201_CREATED) return Response(serializer.errors, status=HTTP_400_BAD_REQUEST) I'm not getting any error just empty dictionary, here is result of my print commands: UserTakeBreakSerializer(data={'start_break': '2024-05-10 15:05:52.829867'}): validated_data: {} serializer.data: {} You might find it funny but I have implemented same logic for UserWorkPresenceModel and it is working fine, I'm able to do both create and update for UserWorkPresenceModel. And it is not just this, I'm having same trouble with another model, one is working while the same logic is not working in other model which quite similar. Thank you for you help | Use a ModelSerializer [drf-doc]. For a simple serializer, you will have to write the boilerplate code yourself: class UserTakeBreakSerializer(serializers.ModelSerializer): class Meta: model = UserWorkBreakModel fields = '__all__' essentially what happened was that you defined an empty serializer: a serializer with no fields, and no save logic. A ModelSerializer on the other hand looks at the Meta and then builds the fields and the corresponding logic for that model. | 2 | 1 |
78,458,478 | 2024-5-10 | https://stackoverflow.com/questions/78458478/customtkinter-adding-additional-input-fields-to-form | I am creating a program in Customtkinter where I want to the user to input their ingredients one ingredient at a time- ideally I want to have the text field and then they have the option to add additional text fields below the original while retaining the original. I didn't want to set a fixed number of fields as I won't know in advance how many they will be inputting- if this isn't possible I have alternate fixes but I was looking for where I should be looking in the documentation to resolve my issue. The goal is for the form to add on additional elements until they are finished and they can save the whole form. I want the user to be able to edit the different elements of the form and then submit all at once. import customtkinter as ctk ctk.set_appearance_mode("Dark") ctk.set_default_color_theme("dark-blue") class root(ctk.CTk): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.title("GenSoft") self.geometry("800x480") self.resizable() self.state("normal") self.iconbitmap() self.grid_columnconfigure(1, weight=1) self.grid_columnconfigure((2, 3), weight=0) self.grid_rowconfigure((0, 1, 2), weight=1) self.tabview = ctk.CTkTabview(self, width=800, height=480, corner_radius=20, fg_color="#ffffff", segmented_button_selected_color="red", segmented_button_fg_color="black", segmented_button_selected_hover_color="green", segmented_button_unselected_hover_color="green") self.tabview.pack(padx=20, pady=20) self.tabview.add("Home") self.tabview.add("New Recipe") self.tabview.add("Saved Recipe") self.tabview.tab("Home").grid_columnconfigure(0, weight=1) # configure grid of individual tabs self.tabview.tab("New Recipe").grid_columnconfigure(0, weight=1) self.tabview.tab("Saved Recipe").grid_columnconfigure(0, weight=1) self.ingredientEntry = ctk.CTkEntry(self.tabview.tab("New Recipe"), placeholder_text="ingredient") self.ingredientEntry.pack(padx=20, pady=10) if __name__ == "__main__": app = root() app.mainloop() main() It is very basic at the moment- I haven't managed to find any examples of what I am looking for but I feel like it is a fairly common feature that is implemented. | Add a button that creates a new entry field each time it's clicked: class root(ctk.CTk): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # Your already existing code self.ingredientEntries = [] # The list that stores all the ingredient entries self.addButton = ctk.CTkButton(self.tabview.tab("New Recipe"), text="Add Ingredient", command=self.add_ingredient) self.addButton.pack(padx=20, pady=10) def add_ingredient(self): newEntry = ctk.CTkEntry(self.tabview.tab("New Recipe"), placeholder_text="ingredient") newEntry.pack(padx=20, pady=10) self.ingredientEntries.append(newEntry) # Add the new entry to your list Hopefully it works out for you as intended π. | 2 | 1 |
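A stripped-down standalone sketch showing how the dynamically created entries can be read back on a single submit action; it drops the question's tab layout and uses made-up widget names, but the `.get()` loop is the relevant part.

```python
import customtkinter as ctk

app = ctk.CTk()
entries = []

def add_ingredient():
    # create one more entry field and remember it so it can be read later
    e = ctk.CTkEntry(app, placeholder_text="ingredient")
    e.pack(padx=20, pady=5)
    entries.append(e)

def save_recipe():
    # .get() returns the current text of each dynamically created entry
    print([e.get() for e in entries if e.get().strip()])

ctk.CTkButton(app, text="Add Ingredient", command=add_ingredient).pack(padx=20, pady=5)
ctk.CTkButton(app, text="Save Recipe", command=save_recipe).pack(padx=20, pady=5)
app.mainloop()
```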
78,457,093 | 2024-5-9 | https://stackoverflow.com/questions/78457093/pandas-map-multiple-columns-with-a-filter | I have a dataframe like so (simplified for this example) Site LocationName Resource# 01 Test Name 5 01 Testing 6 02 California 10 02 Texas 11 ... Each site has their own mapping for LocationName and Resource# For example: Site 01 has a mapping of {'Test Name': 'Another Test', 'Testing': 'RandomLocation'} for LocationName and a mapping of {5: 5000} for Resource# Site 02 has a mapping of {'California': 'CA-123'} for LocationName and a mapping of {10: '10A', 11: '11B'} for Resource# I am trying to map each respective site with their mappings for the different columns. If the mapping does not exist, I want the field to be None/blank. My ideal output is: Site# LocationName Resource# 01 Another Test 5000 01 RandomLocation 02 CA-123 10A 02 11B My idea was to filter for each site and run map on the series df01 = df[df.Site == '01'] df01 = df['LocationName'].map({'Test Name': 'Another Test', 'Testing': 'RandomLocation'}) But this returns SettingWithCopyWarning since I am performing these operations on a copy. Is there a simple way to achieve this? | Yes there is. You can use the map function within the apply method on your dataframe. import pandas as pd data = { 'Site': ['01', '01', '02', '02'], 'LocationName': ['Test Name', 'Testing', 'California', 'Texas'], 'Resource#': [5, 6, 10, 11] } df = pd.DataFrame(data) mappings = { '01': { 'LocationName': {'Test Name': 'Another Test', 'Testing': 'RandomLocation'}, 'Resource#': {5: '5000', 6: None} }, '02': { 'LocationName': {'California': 'CA-123', 'Texas': None}, 'Resource#': {10: '10A', 11: '11B'} } } def apply_mappings(row): site = row['Site'] if site in mappings: location_map = mappings[site]['LocationName'] row['LocationName'] = location_map.get(row['LocationName']) resource_map = mappings[site]['Resource#'] row['Resource#'] = resource_map.get(row['Resource#'], None) return row df = df.apply(apply_mappings, axis=1) print(df) which gives you your expected output: Site LocationName Resource# 0 01 Another Test 5000 1 01 RandomLocation None 2 02 CA-123 10A 3 02 None 11B | 2 | 1 |
78,457,898 | 2024-5-10 | https://stackoverflow.com/questions/78457898/im-getting-no-matching-distribution-error-when-pip-installing-sourcedefender-fo | I have to pip install dependencies after setting up a virtual environment for an intro python course and I'm getting an error message: ERROR: Could not find a version that satisfies the requirement sourcedefender==10.0.13 (from versions: 12.0.2, 12.0.3) ERROR: No matching distribution found for sourcedefender==10.0.13 It has a requirements.txt file that contains all dependencies. I looked on the sourcedefender release page on pypi and it doesn't have any releases listed besides 12.0.2 and 12.0.3. I installed the same dependencies for a similar assignment last week and had no issues. I've tried Python 3.10.11 (which is the version I've been using all semester) and 3.8.10. When I put sourcedefender== 12.0.2 in the requirements.txt file, it says it needs updated versions of other dependencies, and when I edit those into the requirements.txt file, it gives an error about a NoneType and seems to just break. I'm not sure why this is happening, can someone help me? Is it possible that they removed the sourcedefender release that I need? | I can't seem to comment as I dont have enough reputation so I will leave my answer here. Sometimes the right answer is the most obvious one. Someone probably made a mistake or typo. or this version used to be hosted on pypi and was recently taken down. Do you know what version you installed for your previous assignment? When you installed it previously, the files might have been downloaded locally so you wouldnt have to get them from pypi. I also looked into it and tried to install it and got this error when running pip3 install sourcedefender==10.0.13 ERROR: Could not find a version that satisfies the requirement sourcedefender==10.0.13 (from versions: 12.0.2, 12.0.3) ERROR: No matching distribution found for sourcedefender==10.0.13 I also checked the pypi and you are right there are only 2 versions as written in the error above. You should contact your lecturer and ask them whats up. Don't waste too much time on this. If you are expected to solve this yourself. Then use version 12.0.2 and try to see which packages need to be upgraded and slowly change their versions until it works. | 2 | 1 |
78,457,216 | 2024-5-9 | https://stackoverflow.com/questions/78457216/how-to-add-a-new-column-with-different-length-into-a-existing-dataframe | I am trying to go over a loop to add columns into a empty dataframe. each column might have a different length. Look like the final number of rows are defined by lenght of first column added. The columns with a Longer length will be cut values. How to always keep all the values of each column when column length are different? Thanks Here is case 1 with first column has lower length, then 2nd column's value will be cut import pandas as pd df_profile=pd.DataFrame() df_profile['A']=pd.Series([1,2,3,4]) df_profile['B']=pd.Series([10,20,30,40,50,60]) print(df_profile) A B 0 1 10 1 2 20 2 3 30 3 4 40 Here are case 2 with first column has highest length, then it is find for other columns import pandas as pd df_profile=pd.DataFrame() df_profile['A']=pd.Series([1,2,3,4,5,6,7,8]) df_profile['B']=pd.Series([10,20,30,40,50,60]) df_profile['C']=pd.Series([100,200,300,400,500,600]) df_profile['D']=pd.Series([100,200]) print(df_profile) A B C D 0 1 10.0 100.0 100.0 1 2 20.0 200.0 200.0 2 3 30.0 300.0 NaN 3 4 40.0 400.0 NaN 4 5 50.0 500.0 NaN 5 6 60.0 600.0 NaN 6 7 NaN NaN NaN 7 8 NaN NaN NaN | You can use pd.concat to add another Series, e.g.: # you have already this dataframe: df_profile = pd.DataFrame() df_profile["A"] = pd.Series([1, 2, 3, 4]) # you can use pd.concat to add another Series: out = pd.concat([df_profile, pd.Series([10, 20, 30, 40, 50, 60], name="B")], axis=1) print(out) Prints: A B 0 1.0 10 1 2.0 20 2 3.0 30 3 4.0 40 4 NaN 50 5 NaN 60 | 4 | 5 |
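When many unequal-length columns are added in a loop, they can also be collected first and concatenated in one call; a sketch using the question's sample data:

```python
import pandas as pd

columns = {
    "A": pd.Series([1, 2, 3, 4, 5, 6, 7, 8]),
    "B": pd.Series([10, 20, 30, 40, 50, 60]),
    "C": pd.Series([100, 200, 300, 400, 500, 600]),
    "D": pd.Series([100, 200]),
}

# axis=1 aligns on the index, so shorter Series are padded with NaN
df_profile = pd.concat(columns, axis=1)
print(df_profile)
```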
78,456,849 | 2024-5-9 | https://stackoverflow.com/questions/78456849/recursive-r-function-and-its-python-translation-behave-differently | Here is a recursive R function involving a matrix S from the parent environment: f <- function(m, k, n) { if(n == 0) { return(100) } if(m == 1) { return(0) } if(!is.na(S[m, n])) { return(S[m, n]) } s <- f(m-1, 1, n) i <- k while(i <= 5) { if(n > 2) { s <- s + f(m, i, n-1) } else { s <- s + f(m-1, 1, n-1) } i <- i + 1 } if(k == 1) { S[m, n] <- s } return(s) } Here is the result of a call to this function: > n <- 4 > S <- matrix(NA_real_, nrow = n, ncol = n) > f(n, 1, n) [1] 127500 Now, here is the Python version: import numpy as np def f(m, k, n): if n == 0: return 100 if m == 1: return 0 if S[m-1, n-1] is not None: return S[m-1, n-1] s = f(m-1, 1, n) i = k while i <= 5: if n > 2: s = s + f(m, i, n-1) else: s = s + f(m-1, 1, n-1) i = i + 1 if k == 1: S[m-1, n-1] = s return s The call: >>> n = 4 >>> S = np.full((n, n), None) >>> f(n, 1, n) 312500 The Python result is different from the R result. Why? One gets identical results if one replaces if S[m-1, n-1] is not None: return S[m-1, n-1] with if k == 1 and S[m-1, n-1] is not None: return S[m-1, n-1] I also have a C++ version of this function, and it has the same behavior as the R function. | Side-effects such as a function modifying external state are a code smell. Your R code doesn't do this but your Python code does. Assignment to outer scope variables within a function scope can behave differently in R to Python (and actually within R and Python depending on whether the object is mutable). Instinctively I feel uncomfortable that without passing S as an argument you're modifying it within the function, with S[m, n] <- s in R and S[m-1, n-1] = s in Python. Here's a simpler example where we define S as a matrix containing one value, 0, which we increment every time we call a recursive function. The function, f(), has one parameter, k. It calls itself k times, incrementing the only value in S every time. First in R: S <- matrix(0) f <- function(k) { if (k == 1) { return(S) } S[1,1] <- S[1,1] + 1 f(k - 1) } f(5) # [,1] # [1,] 0 The output is 0 and will always be zero no matter how many times you call the function, as S is only modified in the function scope. Conversely, in Python: S = np.zeros((1,1)) def f(k): if k == 1: print(S) return S[0,0] += 1 f(k - 1) f(5) # [[4.]] As you can see, even though we modify S[0,0] within the function and do not pass it as a parameter, this actually modifies it in the outer scope (or parent environment in R terms). This may or may not be the reason for the difference in your example - your code has a lot of moving parts. But I would not expect R and Python to behave the same way when modifying variables in this way. More generally, I think you'd need a pretty compelling reason to write code which references (let alone modifies) variables from within a function without passing them as arguments. | 2 | 2 |
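A small sketch of one way to get R-like behaviour in the Python translation: pass the table explicitly and copy it before mutating, so nothing leaks into the enclosing scope. It is based on the answer's minimal increment example, not the question's full recursion.

```python
import numpy as np

def f(k, S):
    if k == 1:
        return S
    S = S.copy()      # private copy: mutations no longer leak to the caller
    S[0, 0] += 1
    return f(k - 1, S)

S = np.zeros((1, 1))
print(f(5, S))  # [[4.]] -- the accumulated result returned by the recursion
print(S)        # [[0.]] -- the caller's array is untouched, matching R's copy semantics
```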
78,455,501 | 2024-5-9 | https://stackoverflow.com/questions/78455501/checking-for-false-when-variable-can-also-be-none-or-true | Update: please see my discussion if you want to delve further into this topic! Thank you everyone for your feedback on this! I have a boolean(ish) flag that can be True or False, with None as an additional valid value. Each value has a different meaning. (Edit for clarification: the variable in question, bool_flag is a custom dict class attribute, which has a value of None when uninitialized, as .get('bool_flag') returns None, and can be set to True or False, although True and None is generally sufficient for value checking for my needs.) I understand the Pythonic way of checking for True and None and not None are: if bool_flag: print("This will print if bool_flag is True") # PEP8 approved method to check for True if not bool_flag: print("This will print if bool_flag is False or None") # Also will print if bool_flag is an empty dict, sequence, or numeric 0 if bool_flag is None: print("This will print if bool_flag is None") if bool_flag is not None: print("This will print if bool_flag is True or False") # Also will print if bool_flag is initialized as anything except None And if you need to check for all three in an if statement block, you would use a laddered approach, for example: if bool_flag: print("This will print if bool_flag is True") elif bool_flag is None: print("This will print if bool_flag is None") else: print("This will print if bool_flag is False") # Note this will also print in any case where flag_bool is neither True nor None But what is the Pythonic way of just checking for a value of False (when only checking for False) when the flag can also be None or True as valid values? I have seen several questions but there doesn't seem to be a consensus. Is it "more pythonic" to write: # Option A: if isinstance(bool_flag, bool) and not bool_flag: print("This will print if bool_flag is False") # Option B: if bool_flag is not None and not bool_flag: print("This will print if bool_flag is False") ## These two appear to be strictly prohibited by PEP8: # Option C: if bool_flag is False: print("This will print if bool_flag is False") # Option D: if bool_flag == False: print("This will print if bool_flag is False") # Option E (per @CharlesDuffy): match flag_bool: case False: print("This will print if bool_flag is False") This topic has been discussed before: What is the correct way to check for False? In Python how should I test if a variable is None, True or False Is there a difference between "== False" and "is not" when checking for an empty string? Why does comparing strings using either '==' or 'is' sometimes produce a different result? Is there a difference between "==" and "is"? This appears to be the closest answer to my question out of what is available (providing Option A above), but even this Answer is ambiguous (suggesting Option C as well): https://stackoverflow.com/a/37104262/22396214 However, this answer points out that if not flag_bool is equivalent to if bool(flag_value) == False which would imply that checking for False equivalence using the == operator is the official Pythonic method of checking for False (Option D): https://stackoverflow.com/a/36936790/22396214 But that directly contradicts this answer that == False (Option D) should never be used: https://stackoverflow.com/a/2021257/22396214 | If you really want to do it : if bool_flag is False: pass PEP8 is a guideline. 
There may be style checkers which whine about it, and you may need to litter your code with #noqa to quiet them, but at the end of the day you need to decide what best represents what you are actually trying to do. In this case specifically, the checking of a value against True and False literal is discouraged by PEP8 largely because there are a number of other conditions which make a value Truthy or Falsey. When dealing with parameters or return values to/from elsewhere in your code or external libraries, there are going to be a number of instances when you get back what are actually different (and occasionally, surprising) types. In your case, you're not restricting your variable to true or false, or even to truthy or falsey. In fact, since you've got a trinary state possibility, one could argue that it isn't a boolean at all. At closest approach to a boolean, its an Optional[Boolean]. You could instead argue it's similar to an enum type with 3 possible values. From that perspective, you need to be checking for the actual value and not its truthiness. The latter is what PEP8 is talking about when it discourages testing against literals. Here's the thing to remember, though. All the arguments for not testing against boolean literals will apply to you as well. Six months from now, will you remember that the value needs to be tested against a literal instead of just being checked for truthiness? If someone else were looking at your code or using the return value from a function you've written, will they realize this? For maintainability, you may wish to use a different type entirely. An enum type, perhaps. That makes things more explicit, and it'll be a lot easier to explain, reason about, and deal with if you ever want to try type annotating your code. | 4 | 3 |
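A sketch of the enum-style alternative the answer hints at, which makes the three states explicit instead of overloading a boolean-ish flag; the names are illustrative:

```python
from enum import Enum

class FlagState(Enum):
    UNSET = None      # the dict key was never initialised
    ON = True
    OFF = False

def describe(state: FlagState) -> str:
    if state is FlagState.OFF:       # identity check on an enum member is explicit
        return "explicitly off"
    if state is FlagState.UNSET:
        return "never set"
    return "on"

print(describe(FlagState.OFF))    # explicitly off
print(describe(FlagState.UNSET))  # never set
```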
78,455,345 | 2024-5-9 | https://stackoverflow.com/questions/78455345/segmentation-faults-and-memory-leaks-while-calling-gmp-c-functions-from-python | Work was quiet today, and so the team was directed to do some "self-development". I decided to have some fun calling C functions from Python. I had already had a good time using Rust to speed up Python, but I kept hitting a brick wall whenever I wanted to work with integers greater than the u128 type could hold. I thought that, by using C's famous GMP library, I could overcome this. So far, I've managed to build a minimal C program which runs, which seems to do what I want, and which - to my eyes - doesn't have anything obviously wrong with it. This is my code: #include <stdio.h> #include <gmp.h> #define BASE 10 void _factorial(int n, mpz_t result) { int factor; mpz_t factor_mpz; mpz_init(result); mpz_init_set_ui(result, 1); for (factor = 1; factor <= n; factor++) { mpz_init(factor_mpz); mpz_init_set_ui(factor_mpz, factor); mpz_mul(result, result, factor_mpz); mpz_clear(factor_mpz); } } char *factorial(int n) { char *result; mpz_t result_mpz; _factorial(n, result_mpz); mpz_get_str(result, BASE, result_mpz); mpz_clear(result_mpz); return result; } int main(void) { // This runs without any apparent issues. char *result = factorial(100); printf("%s\n", result); return 0; } I then try to call this from Python like so: from ctypes import CDLL, c_void_p, c_char_p, c_int32, cast CLIB = CDLL("./shared.so") cfunc = CLIB.factorial cfunc.argtypes = [c_int32] cfunc.restype = c_char_p raw_pointer = cfunc(100) result = raw_pointer.decode() print(result) I compiled the C code to an .so file using the following command: gcc main.c -lgmp -fpic -shared -o shared.so I then ran the above Python script, but unfortunately ran into two issues: Although it reaches the print() statements and prints the correct result, it then hits a segmentation fault. I'm worried that, in passing an arbitrary-length string from C to Python, there may be some memory leaks. Does anyone know how I can overcome the segmentation fault, and, if there is indeed a memory leak, how I can plug it? | Although it reaches the print() statements and prints the correct result, it then hits a segmentation fault. Your factorial function uses mpz_get_str() incorrectly. Consider: char *factorial(int n) { char *result; mpz_t result_mpz; _factorial(n, result_mpz); mpz_get_str(result, BASE, result_mpz); mpz_clear(result_mpz); return result; } mpz_get_str() offers you two alternatives for how the result is to be stored: It can write the result into large-enough space that the caller specifies. This is triggered by passing a non-null pointer (to the destination space) as the first argument. It can dynamically allocate sufficient space for the result. This is triggered by passing a null pointer as the first argument. Either way, it returns a pointer to the first byte of the result. You are passing a wild pointer as the first argument, with undefined behavior resulting. From your description of the behavior, you are getting some arbitrary program data overwritten by the digit-string output, such that the program successfully prints that, but soon fails because some essential data has been corrupted. 
Your best bet would probably be to let GMP allocate the space for you: char *factorial(int n) { char *result; mpz_t result_mpz; _factorial(n, result_mpz); result = mpz_get_str(NULL, BASE, result_mpz); // <=== notice this mpz_clear(result_mpz); return result; } I'm worried that, in passing an arbitrary-length string from C to Python, there may be some memory leaks. C does not have arbitrary-length strings. It does have dynamically allocated objects, which can be character arrays whose contents are C strings. Avoiding memory leaks is an exercise in ensuring that dynamically allocated objects are also freed. If you proceed as I describe above, then you do have to arrange for freeing the memory to which the return value points. I'm uncertain whether Python's ctypes has a specific provision for that, but at minimum, you should be able to use ctypes to pass the pointer obtained from factorial() to the standard library's free() function. | 4 | 3 |
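A hedged sketch of the Python side the answer describes: keep the returned pointer, copy the digit string out with ctypes, then hand the pointer back to free(). The libc name is Linux-specific and this assumes GMP is using its default malloc-based allocator; otherwise GMP's own deallocation hooks would be needed.

```python
from ctypes import CDLL, c_void_p, c_int32, string_at

CLIB = CDLL("./shared.so")
CLIB.factorial.argtypes = [c_int32]
CLIB.factorial.restype = c_void_p        # keep the raw pointer so it can be freed later

libc = CDLL("libc.so.6")                 # platform specific; e.g. msvcrt on Windows
libc.free.argtypes = [c_void_p]

ptr = CLIB.factorial(100)
result = string_at(ptr).decode()         # copy the NUL-terminated digits into a str
libc.free(ptr)                           # release the buffer mpz_get_str() allocated
print(result)
```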
78,452,284 | 2024-5-9 | https://stackoverflow.com/questions/78452284/keyboardinterrupt-in-asyncio-taskgroup | The docs on Task Groups say: Two base exceptions are treated specially: If any task fails with KeyboardInterrupt or SystemExit, the task group still cancels the remaining tasks and waits for them, but then the initial KeyboardInterrupt or SystemExit is re-raised instead of ExceptionGroup or BaseExceptionGroup. This makes me believe, given the following code: import asyncio async def task(): await asyncio.sleep(10) async def run() -> None: try: async with asyncio.TaskGroup() as tg: t1 = tg.create_task(task()) t2 = tg.create_task(task()) print("Done") except KeyboardInterrupt: print("Stopped") asyncio.run(run()) running and hitting Ctrl-C should result in printing Stopped; but in fact, the exception is not caught: ^CTraceback (most recent call last): File "<python>/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<python>/asyncio/base_events.py", line 685, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "<module>/__init__.py", line 8, in run async with asyncio.TaskGroup() as tg: File "<python>/asyncio/taskgroups.py", line 134, in __aexit__ raise propagate_cancellation_error File "<python>/asyncio/taskgroups.py", line 110, in __aexit__ await self._on_completed_fut asyncio.exceptions.CancelledError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<frozen runpy>", line 189, in _run_module_as_main File "<frozen runpy>", line 148, in _get_module_details File "<frozen runpy>", line 112, in _get_module_details File "<module>/__init__.py", line 15, in <module> asyncio.run(run()) File "<python>/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "<python>/asyncio/runners.py", line 123, in run raise KeyboardInterrupt() KeyboardInterrupt What am I missing? What is the correct way of detecting KeyboardInterrupt? | TL;DR This acutally isn't TaskGroup's fault. Try running this: >>> async def other_task(): ... try: ... await asyncio.sleep(10) ... except KeyboardInterrupt: ... print("Stopped") >>> >>> asyncio.run(other_task()) KeyboardInterrupt >>> This also doesn't print. Nor this: >>> async def other_task(): ... try: ... await asyncio.sleep(10) ... except Exception as err: ... print("Stopped by", err) >>> >>> asyncio.run(other_task()) KeyboardInterrupt >>> You can't catch KeyboardInterrupt here. I can't say I know exactly why, but can say this is why asyncio.shield is useless protecting tasks from canceling, that it requires one to write our own signal handlers for asyncio because task cancellation internally triggers KeyboardInterrupt. Alternative way trio - made by Nathaniel J. Smith who first came up with modern Structured concurrency - does clearly what you intended. # NOTE: somehow ctrl+c doesn't work in ptpython. Following ran on default python shell # Python 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)] on win32 >>> import trio >>> async def task(): ... await trio.sleep(5) ... >>> async def run(): ... try: ... async with trio.open_nursery() as nursery: ... for _ in range(10): ... nursery.start_soon(task) ... except* KeyboardInterrupt: ... print("Nursery caught KeyboardInterrupt!") ... >>> trio.run(run) Nursery caught KeyboardInterrupt! 
>>> # Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux >>> import trio >>> from exceptiongroup import catch >>> async def task(): ... await trio.sleep(5) ... >>> async def run(): ... def handle_keyboard_interrupt(excgroup): ... nonlocal nursery ... print("Nursery caught KeyboardInterrupt!") ... nursery.cancel_scope.cancel() ... ... with catch({KeyboardInterrupt: handle_keyboard_interrupt}): ... async with trio.open_nursery() as nursery: ... for _ in range(10): ... nursery.start_soon(task) ... >>> trio.run(run) ^CNursery caught KeyboardInterrupt! >>> which is not callback soup unlike asyncio, making it much more stable and intuitive, predictive. (Not saying asyncio is trash; it was the right option when it was created.) So definitely check out trio when what you're trying to make requires such. asyncio.TaskGroup is just asyncio's attempt of Structured Concurrency achieved by trio.Nursery and original does it's job better than asyncio which inevitably suffer from internals being callback soup. Edit: Also you can use trio-asyncio or such wrappers to run asyncio loop on the trio, or mix thread-based libraries with trio More details To why this happens, this is the hint: >>> async def sync_task(): ... try: ... time.sleep(10) ... except KeyboardInterrupt: ... print("Stopping") >>> asyncio.run(sync_task()) Stopping KeyboardInterrupt If we intentionally block the thread by time.sleep() call, it do catches the KeyboardInterrupt. (Despite asyncio also re-raise again) Which is just same as this. >>> def sync_task(): ... try: ... time.sleep(10) ... except KeyboardInterrupt: ... print("Stopping") >>> sync_task() Stopping Now we established that we do can catch KeyboardInterrupt in synchronous code section regardless of the surroundings being async context or not. Then we can draw the attention back to: Is await asyncio.sleep(10) synchronous? And this might sound obvious, it's not! ...But then, What is being executed if it's not synchronous? That's the core reason why asyncio can't catch a thing on KeyboardInterrupt. Because it's not our task, but the main thread that's consistently checking if certain event should be triggered or not. In other words, we just killed the main loop by KeyboardInterrupt while our tasks are suspended, waiting for main thread's loop to call it's callback for event "Call me AFTER 10 seconds are passed". Hence there was no chance for our poor task to ever catch that exception. This is because unlike other exceptions that is Returned as Result of task (then probably re-thrown at await keyword), KeyboardInterrupt is an urgent exception signaling to stop whatever we're doing, so event loop decide to stop and start doing cleanup if possible. Way more detail about Callback-based vs async/await native To quote from Control-C handling in Python and Trio by Nathaniel J. Smith : By default, the Python interpreter sets things up so that control-C will cause a KeyboardInterrupt exception to materialize at some point in your code. This is pretty nice! If your code was accidentally caught in an infinite loop, then it breaks out of that. If you have cleanup code in finally blocks, it gets run. It shows a traceback so you can find that infinite loop. That's the advantage of the KeyboardInterrupt approach: even if you didn't think about control-C at all while you were writing the program, then it still does something that's pretty darn reasonable β say, 99% of the time. 
lock.acquire() try: do_stuff() # <- do_something_else() # <- control-C anywhere here is safe and_some_more() # <- finally: lock.release() But what if we're unlucky? lock.acquire() # <- control-C could happen here try: ... finally: # <- or here lock.release() If a KeyboardInterrupt happens at one of the two points marked above, then sucks to be us: the exception will propagate but our lock will never be released. ... KeyboardInterrupt is such powerful and dangerous message to Python - it's not mere simple Exception but is almost like a SIGINT for python. (On windows task kill triggers KeyboardInterrupt because it doesn't have SIGINT) This is the major disadvantage of Callback-based Concurrency we had for years on Javascript and many others, including python's future and asyncio. From python's history of 1991~2015: Python 1.0 release (Python 1, 1991) Asyncore library (Python 1, 1996) Callback based Generator support added (Python 2.2, 2001) Twisted (Python 2.X, 2002) Easier concurrency programming via Generator Generator.send() support added (Python 2.5, 2005) Now generator can 'talk' with caller Future library (Python 3.2, 2011) asyncio library (Python 3.4, 2014) generator based, wraps Future Best of the bests from previous libraries async / await added (Python 3.5, 2015) asyncio was never designed with async / await in mind - but is based on callbacks like recent Javascript does. (Initial javascript release at 1995 didn't have concurrency iirc) So how we use asyncio with async / await nowdays are under-the-hood a complete hybrid mess trying to make callback soup work on something fundamentally different. When we call asyncio.run(async_func) - it runs in this order: # NOTE: all comment with ^^^^^^ prefix are my comments explaining relevant parts # ----------------------------------------------------------------------- # asyncio/runners.py def run(main, *, debug=None, loop_factory=None): ... with Runner(debug=debug, loop_factory=loop_factory) as runner: return runner.run(main) # ^^^^^ entry point # ----------------------------------------------------------------------- # asyncio/runners.py class Runner: ... def run(self, coro, *, context=None): """Run a coroutine inside the embedded event loop.""" ... task = self._loop.create_task(coro, context=context) # ^^^^^^ All async funcs are wrapped in task and is immediately pending for execution if (threading.current_thread() is threading.main_thread() and signal.getsignal(signal.SIGINT) is signal.default_int_handler ): # ^^^^^^ First hint of asyncio's internals being thread based sigint_handler = functools.partial(self._on_sigint, main_task=task) else: sigint_handler = None self._interrupt_count = 0 try: return self._loop.run_until_complete(task) # ^^^^^ entry point except exceptions.CancelledError: if self._interrupt_count > 0: uncancel = getattr(task, "uncancel", None) if uncancel is not None and uncancel() == 0: raise KeyboardInterrupt() # ^^^^^^ This is why KeyboardInterrupt raises from task cancelation raise # CancelledError finally: ... # ----------------------------------------------------------------------- # asyncio/base_events.py # This file and class is abstract; Actual Loops depends on OS # due to difference event handling method per OS. class BaseEventLoop(events.AbstractEventLoop): def run_until_complete(self, future): ... new_task = not futures.isfuture(future) future = tasks.ensure_future(future, loop=self) ... 
future.add_done_callback(_run_until_complete_cb) # ^^^^^^ Another hint of callback soup try: self.run_forever() # ^^^^^ entry point ... ... def run_forever(self): """Run until stop() is called.""" ... try: self._thread_id = threading.get_ident() ... while True: self._run_once() # ^^^^^^ entry point (complex callback & event checks, skipping) if self._stopping: break ... Quite complex, it revolves around the callback and we can't find direct point where task actually get executed - even in self._run_once() it's just Runner class triggering enqueued event listeners. No wonder this has so many bugs and stability issues, making us awe on how much effort could've gone into django, flask, fastAPI and etc to make it work with great stability. To quote Some thoughts on asynchronous API design in a post-async/await world again from NJS: Review and summing up: what is "async/await-native" anyway? In previous asynchronous APIs for Python, the use of callback-oriented programming led to the invention of a whole set of conventions that effectively make up an entire ad hoc programming language... The result is somewhat analogous to the bad old days before structured programming, where basic constructs like function calls and loops had to be constructed on the fly out of primitive tools like goto. In practice, it's extraordinarily difficult to write correct code in this style, especially when one starts to think about edge conditions. Now that Python has async/await, it's possible to start using Python's native mechanisms to solve these problems. Such limitation of asyncio, and seeing how effective curio was - seemed to lead NJS finalizing concept of Structured Concurrency and created trio, opening path for python a bright future without future! (Actual image attached in *Some thoughts on asynchronous API design in a post-async/await world) I can't say I'm expert, but from my testing: A simple remote-python-execution-shell server attached to discord (Yes I actually sacrificed my Raspberry Pi here) virtually same code in asyncio died every 1 week, while trio rewrite never died over 3 months. Is asyncio trash then No, I think that was the best library we had when it came out, and still until we got trio. I believe trio was possible because there was asyncio! To quote Some thoughts on asynchronous API design in a post-async/await world again & again from NJS: Should asyncio be "fixed" to have a curio-style async/await-native API? I can't see how this could be done without substantially throwing out and rewriting most of asyncio. ... but the callback chaining parts are pretty deeply baked into asyncio as it currently exists. Seems like it's now too late to rewrite it, also considering it being in standard near decade - but I do wish asyncio stop trying to mimic Structured Concurrency like recent addition of TaskGroup and rather do rewrite or deprecate in favor of other libraries - but since it's not feasible they're adding it to at least improve the current situation I suppose. Still serves just fine on many simple usecases anyway! | 4 | -2 |
78,454,457 | 2024-5-9 | https://stackoverflow.com/questions/78454457/pandas-read-json-future-warning-the-behavior-of-to-datetime-with-unit-when | I am updating pandas version from 1.3.5 to 2.2.2 on an old project. I am not very familiar with pandas, and I am stuck with a Future Warning: FutureWarning: The behavior of 'to_datetime' with 'unit' when parsing strings is deprecated. In a future version, strings will be parsed as datetime strings, matching the behavior without a 'unit'. To retain the old behavior, explicitly cast ints or floats to numeric type before calling to_datetime. Here is the causes the error: >>> from io import StringIO >>> import json >>> import pandas >>> >>> str = '{"Keyword":{"0":"TELECOM","1":"OPERATOR","Total":"Total"},"Type":{"0":"job_field","1":"title_class","Total":null},"Seniority Score":{"0":0.0,"1":-1.0,"Total":-1.0}}' >>> df = pandas.read_json(StringIO(str), typ='frame', orient='records') From what I get, it has to do something with integers and floats, probably being represented as strings in the json, but I tried various combinations and can not get arround the warning. I am confused becase I am not calling the to_datetime function at all. | This error is due to having a mix of number-like and strings in the index. A minimal reproducible example that triggers the error would be: s = '{"A":{"0":"X","Y":"Y"}}' pandas.read_json(StringIO(s), typ='frame', orient='records') Pandas uses to_datetime to infer the dtype of the index, which triggers the warning. This should most likely be considered a minor bug since this only triggers a warning. Maybe still worth reporting but it might be that no action will be taken. | 3 | 1 |
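If the warning is just noise until pandas settles the behaviour, it can be silenced narrowly around the call; a sketch reusing the answer's minimal example (the message filter regex is an assumption about the warning text):

```python
import warnings
from io import StringIO
import pandas as pd

s = '{"A":{"0":"X","Y":"Y"}}'   # mixed numeric-like and string index labels

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=FutureWarning,
                            message=".*'to_datetime' with 'unit'.*")
    df = pd.read_json(StringIO(s), typ="frame", orient="records")

print(df)
```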
78,453,990 | 2024-5-9 | https://stackoverflow.com/questions/78453990/how-to-present-uint16-as-float16 | I have a buffer of float16 elements in the memory I read this buffer to a list but since I use np.ctypeslib I can't read it as float16 so I read it as uint16 and now I want to represent it as float16 simple convert by list.astype(float16) won't help I found this table https://gist.github.com/gregcotten/9911bda086c5850cb513b2980cca7648 and according to that table it works. I want to write a python function for convention unint16_as_float16 but don't know what are the rules. appreciate your help thanks! | I found a solution by numpy uint16_list.view(np.float16) | 2 | 1 |
78,454,058 | 2024-5-9 | https://stackoverflow.com/questions/78454058/getting-503-service-unavailable-while-running-haproxy-in-docker | I'm new to HAProxy. I'm trying to run my Python Service's Docker Image in multiple instances and use HAProxy to Balance the Load. (python service is running at 8090 via fastapi) Firstly, I've created docker image of my python service. Then I've created docker-compose.yml: version : '3' services: elb: image: haproxy ports: - "8100:8100" volumes: - ./haproxy:/usr/local/etc/haproxy pythonapp1: image: my-python-service:0.0.1 ports: - "8090:8090" Then I've created HAProxy folder and inside that folder I've created haproxy.cfg: # haproxy.cfg frontend http bind *:8100 mode http timeout client 10s use_backend all backend all mode http server s1 pythonapp1:8090 Then I run docker compose up and sent request to http://localhost:8100 - it worked perfectly fine. But when I added more containers it gave me error. version : '3' services: elb: image: haproxy ports: - "8100:8100" volumes: - ./haproxy:/usr/local/etc/haproxy pythonapp1: image: craft-text-detection-python-service:0.0.1 ports: - "8090:8090" pythonapp2: image: craft-text-detection-python-service:0.0.1 ports: - "8091:8090" # haproxy.cfg frontend http bind *:8100 mode http timeout client 10s use_backend all backend all mode http server s1 pythonapp1:8090 server s2 pythonapp2:8091 Now when I run docker compose up and hit request for the first time at http://localhost:8100 it works fine. But when hit again at http://localhost:8100 it gives me 503 Service Unavailable. Guide me is it problem of port setting or something else? | There is no need to specify ports for pythonapp1 and pythonapp2 in docker compose Your docker compose should look like: version : '3' services: elb: image: haproxy ports: - "8100:8100" volumes: - ./haproxy:/usr/local/etc/haproxy pythonapp1: image: craft-text-detection-python-service:0.0.1 pythonapp2: image: craft-text-detection-python-service:0.0.1 And update your haproxy.cfg as well, which should look like: # haproxy.cfg frontend http bind *:8100 mode http timeout client 10s use_backend all backend all mode http server s1 pythonapp1:8090 server s2 pythonapp2:8090 | 3 | 1 |
78,452,601 | 2024-5-9 | https://stackoverflow.com/questions/78452601/add-sha1-to-signxml-python | I am using the library signxml to sign XML signatures for SAML authentication. One of our implementer partners requires that we send the signature in SHA1. The base configuration of XMLSigner does not support SHA1 because it has been deprecated because SHA1 is not secure. Unfortunately I still have to send it as SHA1 because the other implementer won't change their code base. I have read the library documentation and unsure how to force SHA1 support. If you call this code below, it errors out at this point in the code: https://github.com/XML-Security/signxml/blob/9f06f4314f1a0480e22992bbb8209a71bc581e05/signxml/signer.py#L120 signed_saml_root = XMLSigner(method=signxml.methods.enveloped, signature_algorithm="rsa-sha1", digest_algorithm="sha1", c14n_algorithm="http://www.w3.org/2001/10/xml-exc-c14n#")\ .sign(saml_root, key=self.key, cert=self.cert, always_add_key_value=True) verified_data = XMLVerifier().verify(signed_saml_root, x509_cert=self.cert).signed_xml The documentation mentions doing the following for SHA1 deprecation: SHA1 based algorithms are not secure for use in digital signatures. They are included for legacy compatibility only and disabled by default. To verify SHA1 based signatures, use: XMLVerifier().verify( expect_config=SignatureConfiguration( signature_methods=..., digest_algorithms=... ) ) But that looks for verification only, unsure how to make it work on signature. Can someone provide some advice on how to get SHA1 working with the signxml library. | You can overwrite function check_deprecated_methods in source to pass the error. from signxml import XMLSigner class XMLSignerWithSHA1(XMLSigner): def check_deprecated_methods(self): pass Now, you can use class XMLSignerWithSHA1 to sign: signer = XMLSignerWithSHA1(signature_algorithm=SignatureMethod.RSA_SHA1, digest_algorithm=DigestAlgorithm.SHA1) signed = signer.sign(data, cert=cert, key=key) | 2 | 2 |
78,451,352 | 2024-5-8 | https://stackoverflow.com/questions/78451352/passing-parameters-to-a-pytest-fixture-that-also-needs-cleanup | I've defined a fixture that accepts arguments to be used in an integration style test that requires some teardown upon completion. It looks something like this: @pytest.fixture def create_user_in_third_party(): from clients import third_party_client def _create_user_in_third_party(user: User): third_party_client.create_user(user) return _create_user_in_third_party However, we then would like to clean up the created user once the test's lifecycle is complete. In a standard fixture, that may look like: @pytest.fixture def create_user(): user = User(name="Jim") user.save() yield user user.delete() When applying that same paradigm to the fixture that takes args above, I am not seeing the fixture being called at all (returns a generator that isn't exercising the inner method code). That code (not working) looks like this: def create_user_in_third_party(): from clients import third_party_client def _create_user_in_third_party(user: User): third_party_client.create_user(user) yield # give back control to caller third_party_client.delete_user(user) return _create_user_in_third_party is it possible to achieve the above without having to break up creation and deletion into two different fixtures ? | Using the docs that Andrej linked above: https://docs.pytest.org/en/7.1.x/how-to/fixtures.html#factories-as-fixtures , we can use the factory pattern approach to accrue created entities and delete them later. So it ends up looking like this: @pytest.fixture def create_user_in_third_party(): from clients import third_party_client created_users = [] def _create_user_in_third_party(user: User): third_party_client.create_user(user) created_users.append(user) yield _create_user_in_third_party for user in created_users: third_party_client.delete(user) | 2 | 2 |
78,450,478 | 2024-5-8 | https://stackoverflow.com/questions/78450478/pandas-rolling-sum-within-a-group | I am trying to calculate a rolling sum or any other statistic (e.g. mean), within each group. Below I am giving an example where the window is 2 and the statistic is sum. df = pd.DataFrame.from_dict({'class': ['a', 'b', 'b', 'c', 'c', 'c', 'b', 'a', 'b'], 'val': [1, 2, 3, 4, 5, 6, 7, 8, 9]}) df['sum2_per_class'] = [1, 2, 5, 4, 9, 11, 10, 9, 16] # I want to compute this column # df['sum2_per_class'] = df[['class', 'val']].groupby('class').rolling(2).sum() # what I tried class val sum2_per_class 0 a 1 1 1 b 2 2 2 b 3 5 3 c 4 4 4 c 5 9 5 c 6 11 6 b 7 10 7 a 8 9 8 b 9 16 Here's what I tried and the corresponding error: df['sum2_per_class'] = df[['class', 'val']].groupby('class').rolling(2).sum() TypeError: incompatible index of inserted column with frame index | As the error message conveys, the rolling sum operation returns a pandas Series with a MultiIndex, which can't be directly assigned to a single column in a dataframe. A possible fix is to use reset_index() to convert the MultiIndex to a normal index like the following: df['sum2_per_class'] = df[['class', 'val']].groupby('class').rolling(2).sum().reset_index(level=0, drop=True) However, after running the above code a few of the I was getting unexpected NaN values in the 'sum2_per_class' column as follows: [NaN, NaN, 5, NaN, 9, 11, 10, 9, 16] while other values are as expected. After investigating the NaN issues I came to the following conclusion: The Rolling Sum operation requires at least two CONSECUTIVE rows within each group to calculate the sum. for example for the first group 'a' we have: 1) Row 0 with val1=1 and Row 7 with val=8 you expect the rolling sum to be 1 + 8 = 9 while these rows are not consecutive and will result in NaN. For other groups where we got the expected rolling sum the grouped rows are consecutive. For example for group 'c' we have: Row 3, Row 4, and Row 5. Update: To solve the NaN issues you can specify min_periods=1 in the Rolling function like below: df['sum2_per_class'] = df[['class', 'val']].groupby('class').rolling(2, min_periods=1).sum().reset_index(level=0, drop=True) | 2 | 3 |
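An equivalent spelling with transform, which returns a result already aligned to the original index and so avoids the reset_index step; a sketch using the question's data:

```python
import pandas as pd

df = pd.DataFrame({"class": ["a", "b", "b", "c", "c", "c", "b", "a", "b"],
                   "val":   [1, 2, 3, 4, 5, 6, 7, 8, 9]})

# transform returns a Series aligned to df's index, so it can be assigned directly;
# min_periods=1 makes the first row of each group its own value instead of NaN
df["sum2_per_class"] = (df.groupby("class")["val"]
                          .transform(lambda s: s.rolling(2, min_periods=1).sum()))
print(df)
```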
78,450,557 | 2024-5-8 | https://stackoverflow.com/questions/78450557/how-can-i-pass-a-namedtuple-attribute-to-a-method-without-using-a-string | I'm trying to create a class to represent a list of named tuples and I'm having trouble with accessing elements by name. Here's an example: from typing import NamedTuple class Record(NamedTuple): id: int name: str age: int class NamedTupleList: def __init__(self, data): self._data = data def attempt_access(self, row, column): print(f'{self._data[row].age=}') r = self._data[row] print(f'{type(column)=}') print(f'{column=}') print(f'{r[column]=}') data = [Record(1, 'Bob', 30), Record(2, 'Carol', 25), Record(3, 'Ted', 29), Record(4, 'Alice', 28), ] class_data = NamedTupleList(data) print('show data') print(f'{data[0]=}') print(f'{type(data[0])=}') print(f'{data[0].age=}') print('\nshow class_data') print(f'{type(class_data)=}') print('\nattempt_access by index') class_data.attempt_access(0, 2) print('\nattempt_access by name') class_data.attempt_access(0, Record.age) # why can't I do this? Produces: data[0]=Record(id=1, name='Bob', age=30) type(data[0])=<class '__main__.Record'> data[0].age=30 show class_data type(class_data)=<class '__main__.NamedTupleList'> attempt_access by index self._data[row].age=30 type(column)=<class 'int'> column=2 r[column]=30 attempt_access by name self._data[row].age=30 type(column)=<class '_collections._tuplegetter'> column=_tuplegetter(2, 'Alias for field number 2') print(f'{r[column]=}') ~^^^^^^^^ TypeError: tuple indices must be integers or slices, not _collections._tuplegetter So I can successfully access 'rows' and 'columns' by index, but if I want to access a column (i.e. namedtuple attribute) through a method call I get an error. What's interesting is that the column value is _tuplegetter(2, 'Alias for field number 2') so the index is known in my method but I can't get to it. Does anyone know how I can access this value so that I can pass a name to the method? I'm trying to avoid passing the name as a string - I'd really like to take advantage of the namespace since that's one of the advantages of a namedtuple after all. | Interesting question. The field is a descriptor, so you can invoke it: class NamedTupleList: def __init__(self, data): self._data = data def attempt_access(self, row, column): r = self._data[row] try: val = r[column] except TypeError: val = column.__get__(r) return val Demo: >>> class_data.attempt_access(0, 2) 30 >>> class_data.attempt_access(0, Record.age) 30 | 4 | 6 |