Dataset columns: question_id (int64, ~59.5M–79.4M), creation_date (string, 8–10 chars), link (string, 60–163 chars), question (string, 53–28.9k chars), accepted_answer (string, 26–29.3k chars), question_vote (int64, 1–410), answer_vote (int64, -9–482). Each record below lists these fields in that order.
78,017,670
2024-2-18
https://stackoverflow.com/questions/78017670/is-it-possible-to-use-the-depthsort-argument-in-breadthfirst-layout-in-dash-cyto
I'm plotting a network using Dash-Cytoscape, using the breadthfirst layout, as documented here. I would like to control the order of appearance of elements on the same level. The JS API has the depthSort argument to achieve this, but I couldn't figure out how to pass a callback in Python that the frontend can live with. Things I've tried: "depthSort": lambda a,b: a - b "depthSort": "(a,b) => a - b" "depthSort": "function (a,b) { return a - b}" Minimal example: I would like to get this: 1 3 2 but what I'm getting is this: 1 2 3 from dash import Dash import dash_cytoscape as cyto app = Dash(__name__) app.layout = cyto.Cytoscape( elements=[ {"data": {"id": "1", "label": "1"}}, {"data": {"id": "2", "label": "2"}}, {"data": {"id": "3", "label": "3"}}, {"data": {"source": "1", "target": "2"}}, {"data": {"source": "1", "target": "3"}}, ], layout={ "name": "breadthfirst", "roots": ["1"], # "depthSort": ? }, ) app.run_server(debug=True)
Options that expect a JS function are not supported in Python, because it implies passing the function as a string, and thus it would require Cytoscape.js to evaluate arbitrary strings, which is probably something the maintainers don't want for security reasons. That said, Dash supports clientside callbacks (JS) so we can still assign the function within a callback : from dash import Dash, Output, Input, State import dash_cytoscape as cyto app = Dash(__name__) app.layout = cyto.Cytoscape( id="cyto", elements=[ {"data": {"id": "1", "label": "1"}}, {"data": {"id": "2", "label": "2"}}, {"data": {"id": "3", "label": "3"}}, {"data": {"source": "1", "target": "2"}}, {"data": {"source": "1", "target": "3"}}, ], layout={ "name": "breadthfirst", "roots": ["1"] } ) app.clientside_callback( """ function (id, layout) { layout.depthSort = (a, b) => b.data('id') - a.data('id'); cy.layout(layout).run(); return layout; } """, Output('cyto', 'layout'), # update the (dash) cytoscape component's layout Input('cyto', 'id'), # trigger the function when the Cytoscape component loads (1) State('cyto', 'layout'), # grab the layout so we can update it in the function prevent_initial_call=False # ensure (1) (needed if True at the app level) ) app.run_server(debug=True) NB. We need to execute cy.layout(layout).run() because no element are added/removed so it won't run automatically.
3
4
78,017,053
2024-2-18
https://stackoverflow.com/questions/78017053/langchain-agents-agent-toolkits-is-not-in-the-subpath-of-site-packages-la
I just upgrade LangChain and OpenAi using below conda install. Then I got below error, any idea how to solve it? Thanks https://anaconda.org/conda-forge/langchain conda install conda-forge::langchain https://anaconda.org/conda-forge/openai conda install conda-forge::openai from langchain.agents.agent_toolkits import create_python_agent --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[2], line 1 ----> 1 from langchain.agents.agent_toolkits import create_python_agent 2 from langchain.tools.python.tool import PythonREPLTool 3 from langchain.llms.openai import OpenAI File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive) File c:\Users\yongn\miniconda3\envs\langchain_ai\lib\site-packages\langchain\agents\agent_toolkits\__init__.py:50, in __getattr__(name) 48 """Get attr name.""" 49 if name in DEPRECATED_AGENTS: ---> 50 relative_path = as_import_path(Path(__file__).parent, suffix=name) 51 old_path = "langchain." + relative_path 52 new_path = "langchain_experimental." + relative_path File c:\Users\test\miniconda3\envs\langchain_ai\lib\site-packages\langchain_core\_api\path.py:30, in as_import_path(file, suffix, relative_to) 28 if isinstance(file, str): 29 file = Path(file) ---> 30 path = get_relative_path(file, relative_to=relative_to) 31 if file.is_file(): 32 path = path[: -len(file.suffix)] File c:\Users\test\miniconda3\envs\langchain_ai\lib\site-packages\langchain_core\_api\path.py:18, in get_relative_path(file, relative_to) 16 if isinstance(file, str): 17 file = Path(file) ---> 18 return str(file.relative_to(relative_to)) File c:\Users\test\miniconda3\envs\langchain_ai\lib\pathlib.py:818, in PurePath.relative_to(self, *other) 816 if (root or drv) if n == 0 else cf(abs_parts[:n]) != cf(to_abs_parts): 817 formatted = self._format_parsed_parts(to_drv, to_root, to_parts) --> 818 raise ValueError("{!r} is not in the subpath of {!r}" 819 " OR one path is relative and the other is absolute." 820 .format(str(self), str(formatted))) 821 return self._from_parsed_parts('', root if n == 1 else '', 822 abs_parts[n:]) ValueError: 'c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain\\agents\\agent_toolkits' is not in the subpath of 'c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain_core' OR one path is relative and the other is absolute.
It seems that in October 2023 some logic related to agents was moved to a module called "experimental". You first need to install this new library: pip install langchain_experimental and then change the module from which you import certain classes. PythonREPLTool and create_python_agent need to be imported from the new langchain_experimental module, while the other imports stay the same. Classes that need to be imported from the new module: from langchain_experimental.agents.agent_toolkits import create_python_agent from langchain_experimental.tools.python.tool import PythonREPLTool Classes that still use the old notation: from langchain.python import PythonREPL from langchain.agents import load_tools, initialize_agent from langchain.agents import AgentType For more information you can read this thread: https://github.com/langchain-ai/langchain/discussions/11680 Hope that helps
2
6
78,020,832
2024-2-19
https://stackoverflow.com/questions/78020832/pandas-replace-with-value-from-another-row
I have a table: Object Col1 Col2 Col3 Col4 reference 10 14 7 29 Obj1 0 9 1 30 Obj2 1 16 0 17 Obj3 9 21 3 0 Obj4 11 0 4 22 I want to transform it by condition: if any cell (except the cells of the 1st row) is =0, then it must be replaced with an incremented (X+1) value from the 1st row of this column. The resulting table is: Object Col1 Col2 Col3 Col4 reference 10 14 7 29 Obj1 11 9 1 30 Obj2 1 16 8 17 Obj3 9 21 3 30 Obj4 11 15 4 22 I've tried this variant: df = np.where(df[df == 0] == 0, df.iloc[0] + 1, df) but the result is ndarray, not DataFrame and the performance is not well enough. Is there a way to do this using only pandas' utils?
Use DataFrame.mask: out = df.mask(df == 0, df.iloc[0] + 1, axis=1) print (out) Col1 Col2 Col3 Col4 Object reference 10 14 7 29 Obj1 11 9 1 30 Obj2 1 16 8 17 Obj3 9 21 3 30 Obj4 11 15 4 22
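For comparison with the np.where attempt from the question, the same result can also be kept as a DataFrame by rebuilding it with the original index and columns, though DataFrame.mask avoids that round trip entirely. A small self-contained sketch (the sample data is reconstructed from the question):

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"Col1": [10, 0, 1, 9, 11], "Col2": [14, 9, 16, 21, 0],
     "Col3": [7, 1, 0, 3, 4], "Col4": [29, 30, 17, 0, 22]},
    index=pd.Index(["reference", "Obj1", "Obj2", "Obj3", "Obj4"], name="Object"),
)

# DataFrame.mask keeps the DataFrame (index, columns, dtypes) intact
out_mask = df.mask(df == 0, df.iloc[0] + 1, axis=1)

# np.where returns a bare ndarray, so index and columns have to be reattached by hand
out_np = pd.DataFrame(np.where(df == 0, df.iloc[0] + 1, df),
                      index=df.index, columns=df.columns)

print((out_mask == out_np).all(axis=None))  # True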
2
5
78,020,461
2024-2-19
https://stackoverflow.com/questions/78020461/comparing-a-numeric-pythons-list-against-a-pandas-dataframe
I have a pandas DataFrame with 9 columns (called just dataframe), 5 of them named [['DEG1','DEG2','DEG3','DEG4','DEG5']]; it has almost 2000 rows. I have a list from which I build another dataframe with 5 columns named [['deg1','deg2','deg3','deg4','deg5']]; this dataframe only has 1 row with the data series that needs to be checked against the other dataframe. I need to check whether the entire series (the 5 columns) from the little dataframe (n_serie_df) is repeated in the big dataframe. This is my current code, which gives me "ValueError: Item wrong length 1 instead of 1709.": import pandas as pd import numpy as np def check_repeated_deg(n_serie_list, dataframe): n_serie_dict = { "DEG1": [n_serie_list[0]], "DEG2": [n_serie_list[1]], "DEG3": [n_serie_list[2]], "DEG4": [n_serie_list[3]], "DEG5": [n_serie_list[4]], } n_serie_df = pd.DataFrame(n_serie_dict) repeated = dataframe[np.all(n_serie_df.values == n_serie_df.values, 1)].any().any() if repeated: return f"This deg serie is already measured" else: return None dataframe = pd.read_csv(r"data_deg.csv") n_serie_list = [2, 11, 21, 27, 41] result = check_repeated_deg(n_serie_list, dataframe) print(result)
Use: #create sample data - for test rewrite last row by n_serie_list values np.random.seed(56) dataframe = pd.DataFrame(np.random.randint(10, size=(5,9)), columns=['DEG1','DEG2','DEG3','DEG4','DEG5','a','b','c','d']) n_serie_list = [2, 11, 21, 27, 41] dataframe.iloc[-1, :5] = n_serie_list print (dataframe) DEG1 DEG2 DEG3 DEG4 DEG5 a b c d 0 5 4 0 2 9 7 6 4 9 1 7 1 8 2 0 5 6 1 9 2 5 5 2 9 3 5 9 2 1 3 0 4 6 2 0 8 6 4 0 4 2 11 21 27 41 2 1 3 8 Compare filtered rows of DataFrame by list n_serie_list with DataFrame.all and test at least one match by Series.any: cols = ['DEG1','DEG2','DEG3','DEG4','DEG5'] repeated = (dataframe[cols].values == n_serie_list).all(axis=1).any() print (repeated) True How it working: print ((dataframe[cols].values == n_serie_list)) [[False False False False False] [False False False False False] [False False False False False] [False False False False False] [ True True True True True]]
2
1
78,019,970
2024-2-19
https://stackoverflow.com/questions/78019970/pythonic-way-to-get-polars-data-frame-absolute-max-values-of-all-relevant-column
I want to create absolute maximum of all Polars data frame columns. Here is a way, but surely could be improved. import numpy as np import polars as pl df = pl.DataFrame({ "name": ["one", "one", "one", "two", "two", "two"], "val1": [1.2, -2.3, 3, -3.3, 2.2, -1.3], "val2": [1,2,3,-4,-3,-2] }) absVals = [] for col in df.columns: try: absVals.append((lambda arr: max(arr.min(), arr.max(), key=abs)) (df[col])) except: absVals.append(np.NaN) df_out= pl.DataFrame(data=absVals).transpose() df_out.columns=df.columns print(df_out) Outputs -
You can use polars.DataFrame.schema: the schema has the column names as well as the column datatypes, and pl.NUMERIC_DTYPES contains all the numeric datatypes. import numpy as np import polars as pl df = pl.DataFrame({ "name": ["one", "one", "one", "two", "two", "two"], "val1": [1.2, -2.3, 3, -3.3, 2.2, -1.3], "val2": [1,2,3,-4,-3,-2] }) for col,typ in df.schema.items(): if typ in pl.NUMERIC_DTYPES: print(max(df[col],key=abs)) else: print(np.NaN) Using a list comprehension in one line: [max(df[col],key=abs) if typ in pl.NUMERIC_DTYPES else np.NaN for col,typ in df.schema.items()] output #[nan, -3.3, -4] Bonus: your code without the lambda: lst=[] for col in df.columns: try: lst.append(max(df[col],key=abs)) except: lst.append(np.NaN) print(lst) [nan, -3.3, -4]
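If you would rather stay inside Polars expressions than fall back to Python's built-in max, here is a hedged sketch of the same per-column "absolute max" using the expression API (it assumes a Polars version where Expr.sort_by, Expr.abs and pl.NUMERIC_DTYPES are available, as in the answer above):

import polars as pl

df = pl.DataFrame({
    "name": ["one", "one", "one", "two", "two", "two"],
    "val1": [1.2, -2.3, 3, -3.3, 2.2, -1.3],
    "val2": [1, 2, 3, -4, -3, -2],
})

# For every numeric column: sort the column by its absolute value and take the
# last element, i.e. the entry with the largest magnitude (sign preserved).
abs_max = df.select(
    [
        pl.col(name).sort_by(pl.col(name).abs()).last()
        for name, dtype in df.schema.items()
        if dtype in pl.NUMERIC_DTYPES
    ]
)
print(abs_max)  # expected one row: val1 = -3.3, val2 = -4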
3
2
78,019,900
2024-2-19
https://stackoverflow.com/questions/78019900/how-can-i-use-list-comprehension-to-count-high-or-low-pair
I am trying to make my code more elegant/less verbose with list comprehension. I have the following so far: a = [47, 67, 22] b = [26, 47, 12] at = 0 bt = 0 for a, b in [c for c in zip(a, b)]: if a>b: at+=1 elif a<b: bt+=1 I've been trying to do something like this to remove the if elif statements: [at+=1 if (a>b) else bt+=1 if (a<b) for a, b in [c for c in zip(a, b)]] Which gives me a invalid syntax error highlighting the at+=1 (so I've gone down a rabbit hole: [at+=1 if (a>b) else bt+=1 if (a<b) else hello=None for a, b in [c for c in zip(a, b)]] ^^ SyntaxError: invalid syntax
I'm not sure that list comprehension is more elegant (because it can be harder to read for people new to the language). However, you can write something like: a = [47, 67, 22] b = [26, 47, 12] at = sum(1 for x,y in zip(a,b) if x > y) bt = sum(1 for x,y in zip(a,b) if x < y) This uses a generator expression to count all the pairs which are either strictly greater or strictly smaller. You are running into syntax errors because the ternary conditional expression (bar if foo else baz) needs all three parts, and each of those parts must be an expression — at+=1 is a statement, so it cannot appear inside one. It also must produce a value, so you can't use it to filter during a comprehension. If you wish to filter during a comprehension you need to use the (<expr> for <iterator> if <filter_expr>) variant of a comprehension, which will skip elements for which the filter_expr evaluates to false. Edit: Deceze pointed out that this will iterate over the list twice. This is true and could be fixed by sacrificing readability: a = [47, 67, 22] b = [26, 47, 62] # changed the array for demo purposes result = sum(1 if x > y else 1j if x < y else 0 for x,y in zip(a,b)) at, bt = int(result.real), int(result.imag) This solution is, however, specific to this particular example.
2
2
78,005,112
2024-2-16
https://stackoverflow.com/questions/78005112/create-dataframe-with-all-unique-combinations-given-a-set-of-constraints
I need to create a dataframe with all the possible unique combinations of n number of original arrays given a couple constraints. I want to be able to do this without filtering down the initial dataframe due to memory constraints. There will be two types of original input arrays. Either they will have boolean values of just True and False, or they will have a variable amount of float values. The additional tricky layer is that any row should only have one non zero float value, the other float values must be a 0. Example input: inputs = { "a": [True, False], "b": [True, False], "c": [0.0, 0.1, 0.2], "d": [0.0, 0.1, 0.2, 0.3], } bool_inputs = {"a", "b"} float_inputs = {"c", "d"} Example Output: a b c d 0 True True 0.0 0.0 1 True False 0.0 0.0 2 False True 0.0 0.0 3 False False 0.0 0.0 4 True True 0.1 0.0 5 True False 0.1 0.0 6 False True 0.1 0.0 7 False False 0.1 0.0 8 True True 0.2 0.0 9 True False 0.2 0.0 10 False True 0.2 0.0 11 False False 0.2 0.0 12 True True 0.0 0.1 13 True False 0.0 0.1 14 False True 0.0 0.1 15 False False 0.0 0.1 16 True True 0.0 0.2 17 True False 0.0 0.2 18 False True 0.0 0.2 19 False False 0.0 0.2 20 True True 0.0 0.3 21 True False 0.0 0.3 22 False True 0.0 0.3 23 False False 0.0 0.3 I have been able to do this by filtering the data afterwards with the below solution, but I am wanting to not have any filtering. A bonus would also be not having to fix the column types import numpy as np import pandas as pd input_arrays = list(inputs.values()) results = np.array(np.meshgrid(*input_arrays)).T.reshape(-1, len(inputs)) df = pd.DataFrame(results, columns=list(inputs.keys())) df[list(bool_inputs)] = df[list(bool_inputs)].astype(bool) df = df[~(df[list(float_inputs)] > 0).all(axis=1)] df = df.reset_index(drop=True)
The key to not having to filter the float columns is making a block diagonal matrix. Everything else here is just .join(..., how = 'cross') from scipy.linalg import block_diag import pandas as pd import numpy as np inputs = { "a": [True, False], "b": [True, False], "c": [0.0, 0.1, 0.2], "d": [0.0, 0.1, 0.2, 0.3], } bool_inputs = {"a", "b"} float_inputs = {"c", "d"} num = block_diag(*[np.array(inputs[k])[None, np.flatnonzero(inputs[k])] for k in float_inputs]).T df = pd.DataFrame(columns = float_inputs, data = num) for k in bool_inputs: df = df.join(pd.DataFrame(columns = [k], data = inputs[k]), how = 'cross') df = df.reindex(sorted(df.columns), axis=1) Output: a b c d 0 True True 0.1 0.0 1 False True 0.1 0.0 2 True False 0.1 0.0 3 False False 0.1 0.0 4 True True 0.2 0.0 5 False True 0.2 0.0 6 True False 0.2 0.0 7 False False 0.2 0.0 8 True True 0.0 0.1 9 False True 0.0 0.1 10 True False 0.0 0.1 11 False False 0.0 0.1 12 True True 0.0 0.2 13 False True 0.0 0.2 14 True False 0.0 0.2 15 False False 0.0 0.2 16 True True 0.0 0.3 17 False True 0.0 0.3 18 True False 0.0 0.3 19 False False 0.0 0.3 EDIT: I assumed that "Only one non-zero float column" was strict, otherwise replace: num = block_diag(*[np.array(inputs[k])[None, np.flatnonzero(inputs[k])] for k in float_inputs]).T df = pd.DataFrame(columns = float_inputs, data = num) with: num = block_diag(*[np.array(inputs[k])[None, np.flatnonzero(inputs[k])] for k in float_inputs]).T num = np.r_[np.zeros_like(num)[[0], :], num] df = pd.DataFrame(columns = float_inputs, data = num)
3
2
78,018,799
2024-2-19
https://stackoverflow.com/questions/78018799/how-to-transform-multi-columns-to-rows
I have a excel file with multi-columns as below (Sorry but I don't know how to recreate it with pandas): Below is my expected Output: import pandas as pd import numpy as np df = pd.DataFrame({'Code': ['11000000000', '11200100000', '11710000000', '11000000000', '11200100000', '11710000000', '11000000000', '11200100000', '11710000000'], 'Code Name': ['Car', 'Motorbike', 'Bike', 'Car', 'Motorbike', 'Bike', 'Car', 'Motorbike', 'Bike'], 'Date': ['19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024', '19-02-2024'], 'Customer': ['Customer A', 'Customer A', 'Customer A', 'Customer B', 'Customer B', 'Customer B', 'Customer ...', 'Customer ...', 'Customer ...'], 'Point_1': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], 'Point_2': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]}) df Code Code Name Date Customer Point_1 Point_2 0 11000000000 Car 19-02-2024 Customer A NaN NaN 1 11200100000 Motorbike 19-02-2024 Customer A NaN NaN 2 11710000000 Bike 19-02-2024 Customer A NaN NaN 3 11000000000 Car 19-02-2024 Customer B NaN NaN 4 11200100000 Motorbike 19-02-2024 Customer B NaN NaN 5 11710000000 Bike 19-02-2024 Customer B NaN NaN 6 11000000000 Car 19-02-2024 Customer ... NaN NaN 7 11200100000 Motorbike 19-02-2024 Customer ... NaN NaN 8 11710000000 Bike 19-02-2024 Customer ... NaN NaN What should I do to get this result. Thank you
Create DataFrame with MultiIndex first in index and columns by parameters index_col and header and then use DataFrame.stack with first level and dropna parameter for avoid remove rows with missing values: df = pd.read_excel(file, index_col=[0,1,2], header=[0,1]) #test MultiIndex in columns print (df.columns) MultiIndex([('Customer A', 'Point_1'), ('Customer A', 'Point_2'), ('Customer B', 'Point_1'), ('Customer B', 'Point_2'), ('Customer ...', 'Point_1'), ('Customer ...', 'Point_2')], names=['Customer', None]) #test MultiIndex in index print (df.index) MultiIndex([('11000000000', 'Car', '19-02-2024'), ('11200100000', 'Motorbike', '19-02-2024'), ('11710000000', 'Bike', '19-02-2024')], names=['Code', 'Code Name', 'Date']) EDIT: There is problem with missing values in headers, so is possible use alternative solution - create MultiIndex first in columns and use DataFrame.set_index with DataFrame.rename_axis: df = pd.read_excel('file.xls', header=[0,1]) df = df.set_index(df.columns[:3].tolist()).rename_axis(df.columns[:3].get_level_values(0)) print (df) Customer A Customer B \ Point_1 Point_2 Point_1 Point_2 Code Code Name Date 11000000000 Car 19-02-2024 NaN NaN NaN NaN 11200100000 Motorbike 19-02-2024 NaN NaN NaN NaN 11710000000 Bike 19-02-2024 NaN NaN NaN NaN Customer … Point_1 Point_2 Code Code Name Date 11000000000 Car 19-02-2024 NaN NaN 11200100000 Motorbike 19-02-2024 NaN NaN 11710000000 Bike 19-02-2024 NaN NaN out = df.stack(0, dropna=False).reset_index() print (out) Code Code Name Date Customer Point_1 Point_2 0 11000000000 Car 19-02-2024 Customer ... NaN NaN 1 11000000000 Car 19-02-2024 Customer A NaN NaN 2 11000000000 Car 19-02-2024 Customer B NaN NaN 3 11200100000 Motorbike 19-02-2024 Customer ... NaN NaN 4 11200100000 Motorbike 19-02-2024 Customer A NaN NaN 5 11200100000 Motorbike 19-02-2024 Customer B NaN NaN 6 11710000000 Bike 19-02-2024 Customer ... NaN NaN 7 11710000000 Bike 19-02-2024 Customer A NaN NaN 8 11710000000 Bike 19-02-2024 Customer B NaN NaN
2
2
78,017,555
2024-2-18
https://stackoverflow.com/questions/78017555/numpy-create-a-3d-array-using-other-2-3d-arrays-and-a-1d-array-to-discriminate
I have 2 3D numpy arrays: a = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]]) b = a + 100 and 1 1D array: c = np.array([0, 1, 0]) I want to create another 3D array which elements are from a and b based on c, i.e. if c is 0 take from a, if 1 take from b. The result should be: array([[[ 1, 2, 3], [ 4, 5, 6]], [[107, 108, 109], [110, 111, 112]], [[ 13, 14, 15], [ 16, 17, 18]]]) As I want to use just numpy and nothing else, I was trying with np.where, but the result is not what I want: >>> np.where(c==0, a, b) array([[[ 1, 102, 3], [ 4, 105, 6]], [[ 7, 108, 9], [ 10, 111, 12]], [[ 13, 114, 15], [ 16, 117, 18]]]) Any suggestion?
You just need to up c to the desired dimension: np.where(c[:, None, None]==0, a, b) [[[ 1 2 3] [ 4 5 6]] [[107 108 109] [110 111 112]] [[ 13 14 15] [ 16 17 18]]]
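For reference, the same selection can also be written without np.where by stacking the two source arrays and picking one block per index with fancy indexing. A small self-contained sketch using the arrays from the question:

import numpy as np

a = np.array([[[1, 2, 3], [4, 5, 6]],
              [[7, 8, 9], [10, 11, 12]],
              [[13, 14, 15], [16, 17, 18]]])
b = a + 100
c = np.array([0, 1, 0])

# np.where with c lifted to shape (3, 1, 1) so it broadcasts over each 2x3 block
out_where = np.where(c[:, None, None] == 0, a, b)

# Same result: stack the two sources and pick per-block with fancy indexing
out_take = np.stack([a, b])[c, np.arange(len(c))]

print(np.array_equal(out_where, out_take))  # True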
2
3
78,016,706
2024-2-18
https://stackoverflow.com/questions/78016706/integration-from-a-set-of-acceleration-data-to-position
I'm trying for a project to integrate acceleration data in order to have an approximation of the position. I used a real simple set of data to start with, with a constant acceleration. from scipy.integrate import cumtrapz t = [0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7] a = [-9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8] v = cumtrapz(a, t, initial=0) z = cumtrapz(v, t, initial=5) The result is quite satisfying, apart from the fact that the initial condition for the position is only respected for the first value, and I don't understand how I can change this ?
First of all, scipy.integrate.cumtrapz is left for backward compatibility, you should use the newer scipy.integrate.cumulative_trapezoid function. Secondly, if you read the documentation you'll see that initial is not an initial condition but simply a value prepended onto the array, which would normally be one element shorter than the original data. initial : scalar, optional If given, insert this value at the beginning of the returned result. 0 or None are the only values accepted. Default is None, which means res has one element less than y along the axis of integration. There is also a deprecation warning that, starting version 1.12.0, providing anything but 0 or None will result in a warning This shows the sizes of the result: from scipy.integrate import cumulative_trapezoid t = [0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7] a = [-9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8, -9.8] print(len(t), len(cumulative_trapezoid(a, t)), len(cumulative_trapezoid(a, t, initial=0))) # 15, 14, 15 To enforce your initial condition, you should set initial=0 and then add the initial condition to the result. v0 = 0. z0 = 5. v = cumulative_trapezoid(a, t, initial=0) + v0 z = cumulative_trapezoid(v, t, initial=0) + z0
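A quick self-contained check (a sketch built from the question's constant-acceleration data) that the corrected call honours both initial conditions and matches the analytic solution z(t) = z0 + v0*t - 4.9*t**2:

import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.arange(0, 0.71, 0.05)          # 0.0, 0.05, ..., 0.70
a = np.full_like(t, -9.8)             # constant acceleration
v0, z0 = 0.0, 5.0

v = cumulative_trapezoid(a, t, initial=0) + v0
z = cumulative_trapezoid(v, t, initial=0) + z0

print(z[0])                                        # 5.0
print(np.allclose(z, z0 + v0 * t - 4.9 * t**2))    # True: the trapezoid rule is exact for linear v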
2
3
78,013,162
2024-2-17
https://stackoverflow.com/questions/78013162/typehint-function-args-tupleargs-with-constraint-on-the-args
We want to type hint a function def f(*args:float)->tuple[float,...]: ... return tuple(args) such that it is specified that the number of elements in the tuple matches the number of args. Of course, the return here is a placeholder for more complicated logic. We would like to use mypy or pylance to check if we always return a) the correct number of elements and b) the correct tyoe of all elements. Using TypeVarTuple would allow to specify that we return the same number of elements, but not the type. Is there in current python (3.12) way to do it besides writing many overloads for 1-parameter, 2-parameter, 3-parameters etc?
Yes, you can write a no-op decorator to make the signature of f reject attempts at passing a type that you don't want. The following example makes f reject any attempts at passing values which aren't compatible with float. Demo: mypy Playground, Pyright Playground import typing_extensions as t if t.TYPE_CHECKING: import collections.abc as cx F = t.TypeVar("F", bound=cx.Callable[..., t.Any]) Ts = t.TypeVarTuple("Ts") class _FloatOnlyCallable(t.Protocol): def __call__(self, /, *args: float) -> t.Any: ... def asFloatOnlyCallable(f: F, /) -> F | _FloatOnlyCallable: """Decorate a function to make it only accept variadic positional float arguments""" return f @asFloatOnlyCallable def f(*args: *Ts) -> tuple[*Ts]: return args >>> a, b = f(1.0, 2.0) # OK >>> c, d = f("3.0", 4.0) # Error: Incompatible type "str", expected "float" >>> e, g, h = f(5.0, 6.0) # Error: Need more values to unpack
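For comparison, this is what the overload-based alternative mentioned in the question looks like; a sketch that only covers one and two parameters, since each extra arity needs another overload, which is exactly the boilerplate the decorator trick avoids:

from typing import overload


@overload
def f(a: float, /) -> tuple[float]: ...
@overload
def f(a: float, b: float, /) -> tuple[float, float]: ...
def f(*args: float) -> tuple[float, ...]:
    return tuple(args)


x, y = f(1.0, 2.0)   # OK: the checker knows this is tuple[float, float]
z = f("3.0")         # type-checker error: "str" is not compatible with "float"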
2
2
77,992,101
2024-2-14
https://stackoverflow.com/questions/77992101/how-to-get-all-elements-as-rows-for-each-href-in-html-and-add-it-to-a-pandas-dat
I am trying to fetch as rows the different values inside each href element from the following website: https://www.bmv.com.mx/es/mercados/capitales There should be 1 row that matches each field on the provided headers for each different href element on the HTML file. This is one of the portions of the HTML that I am trying to scrape: <tbody> <tr role="row" class="odd"> <td class="sorting_1"><a href="/es/mercados/cotizacion/1959">AC </a></td><td><span class="series">*</span> </td><td>03:20</td><td><span class="color-2">191.04 </span></td><td>191.32</td> <td>194.51</td> <td>193.92</td> <td>191.01</td> <td>380,544</td> <td>73,122,008.42</td> <td>2,793</td> <td>-3.19</td><td>-1.64</td></tr><tr role="row" class="even"> <td class="sorting_1"><a href="/es/mercados/cotizacion/203">ACCELSA</a> </td> <td><span class="series">B</span> </td><td>03:20</td><td> <span class="">22.5</span></td><td>0</td> <td>22.5</td><td>0</td><td>0 </td><td>3</td><td>67.20</td> <td>1</td><td>0</td><td>0</td></tr> <tr role="row" class="odd"> <td class="sorting_1"> <a href="/es/mercados/cotizacion/6096">ACTINVR</a></td> <td><span class="series">B</span></td><td>03:20</td><td> <span class="">15.13</span></td><td>0</td><td>15.13</td><td>0</td> <td>0</td><td>13</td><td>196.69</td><td>4</td><td>0</td> <td>0</td></tr><tr role="row" class="even"><td class="sorting_1"> <a href="/es/mercados/cotizacion/339083">AGUA</a></td> <td><span class="series">*</span> </td><td>03:20</td><td> <span class="color-1">29</span> </td><td>28.98</td><td>28.09</td> <td>29</td><td>28</td><td>296,871</td> <td>8,491,144.74</td><td>2,104</td><td>0.89</td> <td>3.17</td></tr><tr role="row" class="odd"><td class="sorting_1"> <a href="/es/mercados/cotizacion/30">ALFA</a></td><td><span class="series">A</span></td> <td>03:20</td> <td><span class="color-2">13.48</span> </td><td>13.46</td> <td>13.53</td><td>13.62</td><td>13.32</td> <td>2,706,398</td> td>36,494,913.42</td><td>7,206</td><td>-0.07</td> <td>-0.52</td> </tr><tr role="row" class="even"><td class="sorting_1"> <a href="/es/mercados/cotizacion/7684">ALPEK</a></td><td><span class="series">A</span> </td><td>03:20</td><td><span class="color-2">10.65</span> </td><td>10.64</td><td>10.98</td><td>10.88</td><td>10.53</td> <td>1,284,847</td><td>13,729,368.46</td><td>6,025</td><td>-0.34</td> <td>-3.10</td></tr><tr role="row" class="odd"><td class="sorting_1"> <a href="/es/mercados/cotizacion/1729">ALSEA</a></td><td><span class="series">*</span> </td><td>03:20</td><td><span class="color-2">65.08</span></td><td>64.94</td><td>65.44</td><td>66.78</td><td>64.66</td><td>588,826</td><td>38,519,244.51</td><td>4,442</td><td>-0.5</td><td>-0.76</td></tr> <tr role="row" class="even"><td class="sorting_1"> <a href="/es/mercados/cotizacion/424518">ALTERNA</a></td><td><span class="series">B</span></td><td>03:20</td><td><span class="">1.5</span></td><td>0</td><td>1.5</td> <td>0</td><td>0</td><td>2</td><td>3</td><td>1</td><td>0</td><td>0</td></tr><tr role="row" class="odd"><td class="sorting_1"> <a href="/es/mercados/cotizacion/1862">AMX</a></td> <td><span class="series">B</span></td><td>03:20</td> <td><span class="color-2">14.56</span></td><td>14.58</td> <td>14.69</td><td>14.68</td><td>14.5</td><td>86,023,759</td> <td>1,254,412,623.59</td><td>41,913</td><td>-0.11</td> <td>-0.75</td></tr><tr role="row" class="even"> <td class="sorting_1"><a href="/es/mercados/cotizacion/6507">ANGELD</a> </td><td><span class="series">10</span></td><td>03:20</td><td> <span class="color-2">21.09</span> 
</td><td>21.1</td><td>21.44</td><td>21.23</td><td>21.09</td> <td>51,005</td><td>1,076,281.67</td> <td>22</td><td>-0.34</td><td>-1.59</td></tr> </tbody> And my current code results into an empty dataframe: # create empty pandas dataframe import pandas as pd import requests from bs4 import BeautifulSoup # get response code from webhost page = requests.get('https://www.bmv.com.mx/es/mercados/capitales') soup = BeautifulSoup(page.text, 'lxml') #print(soup.p.text) # yet it doesn't bring the expected rows! print('Read html!') # get headers tbody = soup.find("thead") tr = tbody.find_all("tr") headers= [t.get_text().strip().replace('\n', ',').split(',') for t in tr][0] #print(headers) df = pd.DataFrame(columns=headers) # fetch rows into pandas dataframe# You can find children with multiple tags by passing a list of strings rows = soup.find_all('tr', {"role":"row"}) #rows for row in rows: cells = row.findChildren('td') for cell in cells: value = cell.string #print("The value in this cell is %s" % value) # append row in dataframe I would like to know if it's possible to get a pandas dataframe whose fields are the ones portrayed in the headers list and the rows are each element from href. For better perspective, the expected output should be equal to the table at the bottom of the provided website. Whose first row has the next schema: EMISORA SERIE HORA ÚLTIMO PPP ANTERIOR MÁXIMO MÍNIMO VOLUMEN IMPORTE OPS. VAR PUNTOS VAR % AC * 3:20 191.04 191.32 194.51 193.92 191.01 380,544 73,122,008.42 2,793 -3.19 -1.64 Is this possible to create such dataset?
As mentioned before, the table is loaded and rendered dynamically via JavaScript, something you could not handle with requests because it just get the static response and does not behave like a browser. A solution to mimic a browsers behaviour is given by @thetaco using selenium but you could get your goal also with requests while using the source the data comes from. Get the request url use your browsers dev tools to inspect the network traffic in this example it is: https://www.bmv.com.mx/es/Grupo_BMV/BmvJsonGeneric?idSitioPagina=4 Extract the string from the response (it is not valid JSON) requests.get('https://www.bmv.com.mx/es/Grupo_BMV/BmvJsonGeneric?idSitioPagina=4').text.split(';(', 1)[-1].split(')')[0] Convert the string into JSON (json.loads()) and tranform it with pandas.json_normalize() into a dataframe. Your data is under the path ['response']['resultado']['A'] The column names may differ a bit because they are build on the keyes from the JSON but they could be easily mapped. The response contains all content, including that of the other groups (ACCIONES, CKD'S, FIBRAS, TÍTULOS OPCIONALES) which can also be extracted (A, CKDS, F, TO) would be the abbreviations that can be used analogously for the selection. Example (all available information for ACCIONES from XHR Request) import json, requests import pandas as pd df = pd.json_normalize( json.loads( requests.get('https://www.bmv.com.mx/es/Grupo_BMV/BmvJsonGeneric?idSitioPagina=4')\ .text\ .split(';(', 1)[-1]\ .split(')')[0] )['response']['resultado']['A'] )\ .dropna(axis=1, how='all') idEmision idTpvalor cveSerie cveCorta idEmisora datosEstadistica.hora datosEstadistica.maximo datosEstadistica.minimo datosEstadistica.importeAcomulado datosEstadistica.noOperaciones datosEstadistica.variacionPuntos datosEstadistica.variacionPorcentual datosEstadistica.precioUltimoHecho datosEstadistica.ppp datosEstadistica.precioAnterior datosEstadistica.volumenOperado datosEstadistica.anioEjercicio datosEstadistica.insumosPu 0 1959 1 * AC 6081 03:20 192.98 189.01 9.54831e+07 3333 -2.59 -1.35 189.3 189.32 191.91 502297 0 0 1 203 1 B ACCELSA 5015 03:20 0 0 22.4 1 0 0 22.5 0 22.5 1 0 0 ... 103 404833 1B 19 VMEX 34347 03:20 45.29 45.29 11007.9 8 0.14 0.31 45.29 0 45.15 243 0 0 104 327336 1 A VOLAR 30023 03:20 12.76 12.42 1.5744e+07 5006 0.24 1.93 12.67 12.68 12.44 1246397 0 0 105 5 1 * WALMEX 5214 03:20 70.37 67.83 1.21326e+09 19593 -2.02 -2.86 68.7 68.72 70.74 17639588 0 0 Coming closer to your result, you could post process the dataframe to your needs: import re # exclude all columns referencing an id information df = df.loc[:, ~df.columns.str.startswith('id')] # adjust the column names df.columns = [re.sub(r"(?<=\w)([A-Z])", r" \1", c).split('.')[-1].lstrip('cve').upper() for c in df.columns] df SERIE CORTA HORA MAXIMO MINIMO IMPORTE ACOMULADO NO OPERACIONES ARIACION PUNTOS ARIACION PORCENTUAL PRECIO ULTIMO HECHO PPP PRECIO ANTERIOR OLUMEN OPERADO ANIO EJERCICIO INSUMOS PU * AC 03:20 191.17 187.8 1.14863e+08 4175 0.64 0.34 189.65 189.96 189.32 604632 0 0 B ACTINVR 03:20 15.03 15.03 36614.4 14 0 0 15.03 0 15.03 2436 0 0 ... 
A VOLAR 03:20 12.97 12.51 1.48613e+07 2832 0.07 0.55 12.83 12.75 12.68 1162684 0 0 * WALMEX 03:20 69.03 67.66 7.2698e+08 22462 -0.71 -1.03 68 68.01 68.72 10672270 0 0 or simply map against the columns, to get exact column names: map_dict = {'cveSerie':'SERIE', 'cveCorta':'EMISORA', 'datosEstadistica.hora':'HORA', 'datosEstadistica.maximo':'MÁXIMO', 'datosEstadistica.minimo':'MÍNIMO', 'datosEstadistica.importeAcomulado':'IMPORTE', 'datosEstadistica.noOperaciones':'OPS.', 'datosEstadistica.variacionPuntos':'VAR PUNTOS', 'datosEstadistica.variacionPorcentual':'VAR %', 'datosEstadistica.precioUltimoHecho':'ÚLTIMO', 'datosEstadistica.ppp':'PPP', 'datosEstadistica.precioAnterior':'ANTERIOR', 'datosEstadistica.volumenOperado':'VOLUMEN'} df.loc[:,[c for c in df.columns if c in map_dict.keys()]].rename(columns=map_dict) SERIE EMISORA HORA MÁXIMO MÍNIMO IMPORTE OPS. VAR PUNTOS VAR % ÚLTIMO PPP ANTERIOR VOLUMEN * AC 03:20 191.17 187.8 1.14863e+08 4175 0.64 0.34 189.65 189.96 189.32 604632 ...
5
4
78,015,392
2024-2-18
https://stackoverflow.com/questions/78015392/vectorize-a-folding-process-in-a-dataframe
Suppose we have a sample dataframe like the one below: df = pd.DataFrame({'A': [np.nan, 0.5, 0.5, 0.5, 0.5], 'B': [np.nan, 3, 4, 1, 2], 'C': [10, np.nan, np.nan, np.nan, np.nan]}) >>> df A B C 0 NaN NaN 10.0 1 0.5 3.0 NaN 2 0.5 4.0 NaN 3 0.5 1.0 NaN 4 0.5 2.0 NaN Col 'D' is calculated with the following operation: >>> df A B C D 0 NaN NaN 10.0 10.0 1 0.5 3.0 NaN 8.0 = (10 x 0.5) + 3 2 0.5 4.0 NaN 8.0 = (8 x 0.5) + 4 3 0.5 1.0 NaN 5.0 = (8 x 0.5) + 1 4 0.5 2.0 NaN 4.5 = (5 x 0.5) + 2 Calculating col 'D' reflects a folding process that recalls the previous row of col 'C' and current row of col 'A' and 'B' in each row operation. I've tried using for loops, functools.reduce() and iterators to do this, but I want to know if there's another method that uses vectorization as much as possible in order to make this operation more efficient in a larger dataset.
I'm not aware of pure vectorized pandas/numpy solution, but you can try to use numba to speed up the computation: from numba import njit @njit def calculate(A, B, starting_value=10): out = np.empty_like(A, dtype=np.float64) out[0] = starting_value for i, (a, b) in enumerate(zip(A[1:], B[1:]), 1): out[i] = (out[i - 1] * a) + b return out df["D"] = calculate(df["A"].values, df["B"].values, 10) print(df) Prints: A B C D 0 NaN NaN 10.0 10.0 1 0.5 3.0 NaN 8.0 2 0.5 4.0 NaN 8.0 3 0.5 1.0 NaN 5.0 4 0.5 2.0 NaN 4.5
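If numba is not available, this particular recurrence D[i] = D[i-1]*A[i] + B[i] also has a closed form that can be evaluated with cumprod/cumsum only. A hedged sketch (note that it divides by the running product of A, so it can become numerically unstable for long series or very small/large A values):

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [np.nan, 0.5, 0.5, 0.5, 0.5],
                   'B': [np.nan, 3, 4, 1, 2],
                   'C': [10, np.nan, np.nan, np.nan, np.nan]})

start = df['C'].iloc[0]
a = df['A'].to_numpy()[1:]
b = df['B'].to_numpy()[1:]

# With P[i] = a[1]*...*a[i], the recurrence unrolls to
#   D[i] = P[i] * (D[0] + sum_{k<=i} b[k] / P[k])
P = np.cumprod(a)
d = P * (start + np.cumsum(b / P))

df['D'] = np.concatenate(([start], d))
print(df)   # D column: 10.0, 8.0, 8.0, 5.0, 4.5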
5
3
78,014,923
2024-2-18
https://stackoverflow.com/questions/78014923/why-is-this-multi-threaded-code-faster-than-the-sequential-one-if-the-task-is-1
I have two simple code examples of the same calculation, which creates load on the CPU. The sequential one is significantly slower then the multi-threaded one. If I understand the concept of the GIL correctly, that should not be the case, as just one thread can run at the same time. So the overall time should be the same, as there is not wait time in I/O. What could be an explanation for this? I have another example with the multiprocessing library, which runs even faster then the multi-threaded one, which is expected, as it creates load on 3 cores in parallel. Sequential code - ~31 seconds: import time start_time = time.time() my_threads = [100000000, 200000000, 300000000] for x in my_threads: print(f"start {x}") for y in range(x): y*y print(f"finished input {x}: {time.time() - start_time:.2f} seconds") print(f"{time.time() - start_time:.2f} seconds") Multi-threaded code - ~15 seconds execution time: EDIT: removed all unnecessary code, to make it simpler (does not change the outcome) import threading import time def my_threaded_function(s, x): with s: start_time = time.time() for y in range(x): y*y print(f"finished input {x}: {time.time() - start_time:.2f} seconds") s = threading.Semaphore(25) my_threads = [100000000, 200000000, 300000000] threads = [] start_time = time.time() for x in my_threads: t = threading.Thread( target=my_threaded_function, name=f"Thread-{x}", args=(s, x,) ) t.daemon = True threads.append(t) t.start() [thread.join() for thread in threads] print(f"{time.time() - start_time:.2f} seconds") I expect the two examples to have the same execution time. EDIT: full output: ❯ python3 code/sequential.py finished input 100000000: 4.57 seconds finished input 200000000: 13.61 seconds finished input 300000000: 27.09 seconds 27.09 seconds ❯ python3 code/threading_example.py finished input 100000000: 7.84 seconds finished input 200000000: 12.44 seconds finished input 300000000: 14.65 seconds 14.68 seconds
The reason is that in the sequential code you read and write global variables, but in the threaded code the variables are local. Local variables are stored (STORE_FAST) and loaded (LOAD_FAST) much faster because a function's local namespace is actually an array rather than a dictionary, and Python accesses it by index. Put all of the sequential code in a function and run it again; you'll see that the timings are almost the same (a sketch of that experiment follows below). Having: for i in range(n): x = i * i In the first line of the for loop, you store the returned number in i. In the second line, you look up i twice (i * i) and store x. These are all slower in the global namespace.
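A sketch of that experiment, wrapping the sequential loop in a function so its loop variable becomes a fast local (absolute timings will of course differ between machines):

import time

def run(n):
    start = time.time()
    for y in range(n):
        y * y                      # y is now a local variable (LOAD_FAST)
    print(f"finished input {n}: {time.time() - start:.2f} seconds")

start_time = time.time()
for x in [100000000, 200000000, 300000000]:
    run(x)
print(f"{time.time() - start_time:.2f} seconds")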
2
2
78,014,412
2024-2-18
https://stackoverflow.com/questions/78014412/big-o-notation-of-string-permutation-in-python
def permutation(str): #str = string input if len(str) == 0: return [""] result = [] for i, char in enumerate(str): for p in permutation(str[:i] + str[i + 1 :]): result.append(char + p) return result I am confused in asymptotic analysis (time complexity) in this code does it have an O(n!) or O(n*n!) because the recursive call is in the loop. I am still confused does the big O notation only based on regular notation like logarithmic, linear, quadratic, exponential, factorial, etc. Or could beyond that like a combination not only based on the regular notation I already tried to calculate it and see some explanation in youtube but still couldn't understand well.
Let the time complexity of the function be T(n), where n is the length of the input string. For the case where n is not 0, we need to consider the following parts: the outer loop is repeated n times; in the outer loop body, each recursive call costs T(n - 1) (I omitted the O(n) cost of constructing the recursive input, since T(n - 1) is at least (n - 1)!, so it does not affect the result); the inner loop is repeated (n - 1)! times, because the permutation of an input of length n - 1 has (n - 1)! elements; and in the inner loop body, char + p costs O(n) because p has length n - 1. In summary, we can conclude that:
T(n) = n * (T(n - 1) + O(n) * (n - 1)!)
     = n * T(n - 1) + n * O(n!)
     = n * ((n - 1) * T(n - 2) + (n - 1) * O((n - 1)!)) + n * O(n!)
     = n * (n - 1) * T(n - 2) + (n - 1) * O(n!) + n * O(n!)
     ...
     = n! * T(0) + 1 * O(n!) + 2 * O(n!) + ... + n * O(n!)
     = O(n!) + (1 + 2 + ... + n) * O(n!)
     = O((1/2 * n**2 + 1/2 * n + 1) * n!)
     = O(n**2 * n!)
4
4
78,007,405
2024-2-16
https://stackoverflow.com/questions/78007405/aggregating-over-subsets-of-columns-in-numpy-array
I'm trying to compute the aggregate metrics (e.g. mean, median, sum) of column subsets in a numpy array. Take this array for example: 1 6 3 4 2 3 4 5 1 4 5 6 3 5 6 7 I have a set of clusters, which are lists of column indices like this: clusters = [[0, 1], [2], [3]] Both the array and the list of cluster indices can be large and I can guarantee that each column in the array belongs to exactly one cluster, i.e. there are no duplicates in the clusters list. The indexes in the list aren't necessarily ordered, i.e. [[0, 3], [2, 1]] would also be a valid cluster. What I'm looking for is for example summing up the values per cluster - the result for the example above would look like this: [25, 18, 22] A simple implementation in python could look like this: import numpy as np arr = np.array([ [1, 6, 3, 4], [2, 3, 4, 5], [1, 4, 5, 6], [3, 5, 6, 7], ]) clusters = [[0, 1], [2], [3]] result = np.array([arr[:,c_indices].sum() for c_indices in clusters]) # array([25, 18, 22]) My problem is that the matrix and number of clusters can grow quite large and I'd like to avoid looping in Python and instead keeping as much of this as possible in numpy's C-implementation for performance reasons. Are there more efficient ways of doing this? (Ideally compatible with min, max, median, mean, and sum)
A start would be to sum the columns in numpy and then only sum up the results: n = 10**4 arr = np.random.rand(n,n) clusters = [[random.randint(0,n-1) for _ in range(random.randint(0,n))] for _ in range(n//100)] %%timeit arr_sumed = arr.sum(0) [arr_sumed[c_indices].sum() for c_indices in clusters] 38.6 ms Β± 1.08 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) %timeit [arr[:,c_indices].sum() for c_indices in clusters] 27.3 s Β± 382 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) What if I want a median instead of a sum? Ok I hope somebody else will come up with a better solution BUT one thing you can do is sort the columns in advance. Then you can find the index of a number with adding up the indices in the sorted lists. Finally you do binary search for the median: n = 10**4 arr = np.random.rand(n,n) clusters = [np.array([random.randint(0,n-1) for i in range(random.randint(0,n))]) for j in range(n//100)] %%time def median_from_sorted_arrays(arr_sorted, arr_min, arr_max, c_indices): n = len(arr_min)*len(c_indices) a = min(arr_min[c_indices]) b = max(arr_max[c_indices]) while True: k = sum(np.searchsorted(arr_sorted[i], (a+b)/2) for i in c_indices) if k/n < 0.499: a = (a+b)/2 elif k/n > 0.501: b = (a+b)/2 else: return (a+b)/2 arr_sorted = np.sort(arr, 0) arr_min = np.min(arr, 0) arr_max = np.max(arr, 0) [median_from_sorted_arrays(arr_sorted, arr_min, arr_max,c_indices) for c_indices in clusters] This took 13.7 seconds compared to 1min 43s for the naive approach. Note that though the binary search should always work and give a perfect solution I didn't take the time to work out the correct stopping criterion making this median an approximation.
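Another option (a hedged sketch, not taken from the answer above): label every column with its cluster id and let pandas group the columns. The sum/mean/min/max aggregations then run in compiled code, and median is available through a short apply over the (few) clusters:

import numpy as np
import pandas as pd

arr = np.array([[1, 6, 3, 4],
                [2, 3, 4, 5],
                [1, 4, 5, 6],
                [3, 5, 6, 7]])
clusters = [[0, 1], [2], [3]]

# cluster id per column (every column belongs to exactly one cluster)
labels = np.empty(arr.shape[1], dtype=int)
for cid, cols in enumerate(clusters):
    labels[cols] = cid

grouped = pd.DataFrame(arr.T).groupby(labels)            # one group per cluster of columns
sums = grouped.sum().to_numpy().sum(axis=1)              # array([25, 18, 22])
medians = grouped.apply(lambda g: np.median(g.to_numpy()))

print(sums)
print(medians.to_numpy())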
2
3
78,011,941
2024-2-17
https://stackoverflow.com/questions/78011941/django-application-to-manage-a-small-inventory
I have created a very small Django application to manage a very small Inventory. My models.py code is: class Inventory(models.Model): account = models.ForeignKey( "accounts.Account", on_delete=models.DO_NOTHING, null=False, blank=False ) class InventoryProduct(models.Model): inventory = models.ForeignKey("Inventory", on_delete=models.CASCADE) sku = models.ForeignKey( "products.SKU", on_delete=models.DO_NOTHING, null=False, blank=False ) quantity = models.PositiveIntegerField( default=0 ) class Transaction(models.Model): IN = 1 OUT = 0 TYPE_CHOICES = ( (IN, "Incoming"), # Carico / Entrata (OUT, "Outgoing"), # Scarico / Uscita ) inventory = models.ForeignKey("Inventory", on_delete=models.CASCADE) transferred_to = models.ForeignKey( "Inventory", on_delete=models.CASCADE, blank=True, null=True ) code = models.UUIDField(default=uuid.uuid4) transaction_type = models.PositiveSmallIntegerField( choices=TYPE_CHOICES, default=IN ) transaction_date = models.DateTimeField(auto_now_add=True) notes = models.TextField(null=True, blank=True) class TransactionItem(models.Model): transaction = models.ForeignKey("Transaction", on_delete=models.CASCADE) item = models.ForeignKey("InventoryProduct", on_delete=models.CASCADE) quantity = models.IntegerField() def save(self, *args, **kwargs): super().save(*args, **kwargs) self.item.quantity += self.quantity self.item.save() The code is quite explanatory, I basically have an Inventory per account and the Inventory has products that I add in the related model InventoryItem. Each product in InventoryItem has a quantity that will be updated during a Transaction. A Transaction has a type IN/OUT to understand if i have to add or remove Items from the Inventory. Lastly, as you can surelly understand the TransactionItem has all the Items with the quantity (to add or remove) inside a Transaction. It is quite simply, I really appreciate your opinion if i can improve the modelling somehow. My questions are: Add a way to transfer products from an Inventory to another (i think i need an OUT Transaction and then an IN Transaction), but how can I keep track of this "movement" from an Inventory to another? I would like to save from what Inventory the products are coming from. This question is also related to the first one. Products can be added to an Inventory from an Order or by a transfer from another Inventory. How can I also include the Order "concept"?
1. Tracking Transfers Between Inventories To keep track of where the products are coming from, you might consider adding a transferred_from field as well. This will allow you to create a direct link between the source and destination inventories for each transfer. class Transaction(models.Model): # Existing fields... transferred_from = models.ForeignKey( "Inventory", related_name="transferred_from_transactions", on_delete=models.CASCADE, blank=True, null=True ) transferred_to = models.ForeignKey( "Inventory", related_name="transferred_to_transactions", on_delete=models.CASCADE, blank=True, null=True ) Example Query: # Assuming 'last_transaction' is a Transaction instance last_transaction = Transaction.objects.last() # To find out where the product came from if last_transaction.transferred_from: source_inventory = last_transaction.transferred_from print(f"Product came from Inventory ID: {source_inventory.id}") else: print("This product did not come from another inventory.") 2. Including the Order Concept Add new model: Order and OrderItem Models class Order(models.Model): order_code = models.UUIDField(default=uuid.uuid4, editable=False) order_date = models.DateTimeField(auto_now_add=True) # Additional fields as needed class OrderItem(models.Model): order = models.ForeignKey(Order, on_delete=models.CASCADE) sku = models.ForeignKey("products.SKU", on_delete=models.CASCADE) quantity = models.PositiveIntegerField(default=0) Next, add a reference to an Order in your Transaction model: class Transaction(models.Model): # Existing fields... order = models.ForeignKey(Order, on_delete=models.SET_NULL, null=True, blank=True, related_name="transactions") Example Usage: # Assuming 'some_transaction' is a Transaction instance some_transaction = Transaction.objects.select_related('order', 'transferred_from').last() # Check if the transaction is linked to an order if some_transaction.order: print(f"Transaction linked to Order ID: {some_transaction.order.id}") # Check where the products came from if some_transaction.transferred_from: print(f"Products transferred from Inventory ID: {some_transaction.transferred_from.id}") else: print("Products were not transferred from another inventory.") Hope this helps.
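To tie this back to the first question (keeping track of a movement between inventories), here is a hedged sketch of how a transfer could be recorded with the fields suggested above, using only the models from the question and answer; the helper name and the sign convention for OUT quantities are my own assumptions, not part of the answer:

from django.db import transaction as db_transaction

def transfer_products(source_inv, target_inv, items):
    """items: iterable of (InventoryProduct from source_inv, quantity) pairs."""
    with db_transaction.atomic():
        out_tx = Transaction.objects.create(
            inventory=source_inv,
            transferred_to=target_inv,
            transaction_type=Transaction.OUT,
        )
        in_tx = Transaction.objects.create(
            inventory=target_inv,
            transferred_from=source_inv,
            transaction_type=Transaction.IN,
        )
        for product, qty in items:
            # negative quantity so TransactionItem.save() decrements the source stock
            TransactionItem.objects.create(transaction=out_tx, item=product, quantity=-qty)
            # matching row in the target inventory (created on first transfer)
            target_item, _ = InventoryProduct.objects.get_or_create(
                inventory=target_inv, sku=product.sku, defaults={"quantity": 0},
            )
            TransactionItem.objects.create(transaction=in_tx, item=target_item, quantity=qty)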
2
2
78,010,688
2024-2-17
https://stackoverflow.com/questions/78010688/return-all-rows-above-last-row-until-condition-is-met-in-python-dataframe
I have a data frame that oscillates between negative and positive values. What I'm looking to do is find the value of the last row, and return a dataframe of all rows above it until the sign changes from negative to positive or vice versa. An example of the problem would be something like this import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(-10,10,size=(100, 1)), columns=list('v')) creating a dataframe that contains numbers from -10 to 10. What I need to do is find the very last row of "v" and return all rows above it until it switches signs. Running this code gave me the following output: 93, 3 94, 2 95, 0 96, -1 97, -7 98, -2 What I'm trying to get is a new dataframe containing rows 96 - 98, and everything above 95 is not needed or sliced off. Since the sign changes so many times inside the dataframe, I'm having trouble singling out the last change before the end of the dataframe (row 95). I've tried various types of slicing with iloc and .tail() but haven't had any success. df[(df['v'].tail() < 0).idxmin(): ] is as close as I've got, however it only returns the previous 5 values and sometimes it can take up to 20 indexes to change signs. I've tried various forms of data[: (df['v'].iloc[-1] < 0)] but I just can't seem to get exactly what I'm after. A big hangup is that the data set is hundreds of rows long, so there are too many zeros or sign changes for the other methods I can think of, when I need data only after the last sign change. Any help would be greatly appreciated.
Example Code If you generate a DataFrame randomly, you should provide a seed so the example is reproducible. I will generate a new DataFrame as an example because you did not provide a seed. import pandas as pd df = pd.DataFrame([1, 2, 3, -4, 0, 1, 2], columns=list('v')) df: v 0 1 1 2 2 3 3 -4 4 0 5 1 6 2 Code
s = df['v'].mask(df['v'].eq(0)).ffill()   # replace zeros with NaN and forward-fill, so zeros inherit the previous sign
grp = s.mul(s.shift()).le(0).cumsum()     # a consecutive product <= 0 marks a sign change; cumsum turns that into run labels
out = df[grp.eq(grp.max())]               # keep the last run, i.e. all rows after the last sign change
out: v 5 1 6 2
2
1
78,009,548
2024-2-16
https://stackoverflow.com/questions/78009548/selenium-urllib-error-httperror-http-error-404-not-found
Traceback (most recent call last): File "C:\Users\nenuk\OneDrive\Desktop\Creator\32.py", line 556, in <module> driver = uc.Chrome(options=opts) File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\site-packages\undetected_chromedriver\__init__.py", line 258, in __init__ self.patcher.auto() File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\site-packages\undetected_chromedriver\patcher.py", line 178, in auto self.unzip_package(self.fetch_package()) File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\site-packages\undetected_chromedriver\patcher.py", line 287, in fetch_package return urlretrieve(download_url)[0] File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 240, in urlretrieve with contextlib.closing(urlopen(url, data)) as fp: File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 215, in urlopen return opener.open(url, data, timeout) File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 521, in open response = meth(req, response) File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 630, in http_response response = self.parent.error( File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 559, in error return self._call_chain(*args) File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 492, in _call_chain result = func(*args) File "C:\Users\nenuk\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 639, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found How i can solve this problem, last time i re-downloaded everything it worked but right now it doesnt.
def fetch_package(self): """ Downloads ChromeDriver from source :return: path to downloaded file """ zip_name = f"chromedriver_{self.platform_name}.zip" if self.is_old_chromedriver: download_url = "%s/%s/%s" % (self.url_repo, self.version_full.vstring, zip_name) else: zip_name = zip_name.replace("_", "-", 1) #download_url = "https://edgedl.me.gvt1.com/edgedl/chrome/chrome-for-testing/%s/%s/%s" download_url = "https://storage.googleapis.com/chrome-for-testing-public/%s/%s/%s" download_url %= (self.version_full.vstring, self.platform_name, zip_name) logger.debug("downloading from %s" % download_url) return urlretrieve(download_url)[0] You need to change fetch_package(self) in undetected_chromedriver\patcher.py. Since the old download_url no longer works for fetching the Chrome driver, use 'https://storage.googleapis.com/chrome-for-testing-public/' in place of the old 'https://edgedl.me.gvt1.com/edgedl/chrome/chrome-for-testing/' base URL, as shown in the patched function above.
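If you prefer not to edit the installed package, a hedged runtime alternative is to monkey-patch the method before creating the driver. The sketch below reuses only attribute names that appear in the snippet above (platform_name, version_full.vstring) and assumes the newer Chrome-for-Testing archive naming, so it may need adjusting for other undetected_chromedriver versions:

import undetected_chromedriver as uc
from undetected_chromedriver.patcher import Patcher
from urllib.request import urlretrieve

def fetch_package(self):
    # Chrome-for-Testing archives use a dash, e.g. chromedriver-win32.zip
    zip_name = f"chromedriver-{self.platform_name}.zip"
    url = ("https://storage.googleapis.com/chrome-for-testing-public/"
           f"{self.version_full.vstring}/{self.platform_name}/{zip_name}")
    return urlretrieve(url)[0]

Patcher.fetch_package = fetch_package   # applied before uc.Chrome() builds its patcher
driver = uc.Chrome()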
2
10
78,000,777
2024-2-15
https://stackoverflow.com/questions/78000777/improve-scipy-integrate-quad-vec-performance-for-fitting-integral-equation-work
I am using quad_vec in a function that contains an integral not solvable with analytic methods. I need to fit this equation to data points. However, even the evaluation with fixed values takes multiple seconds, the total fit time grows to 7 min. Is there a way to accelerate the computation? I know quad_vec has the workers keyword, but when I try to use that, the computation does not finish at all. I am currently working in a Jupyter notebook if that has any significance. This is the function definition, I already tried to use numba here with little success. from scipy.special import j0, j1 from scipy.integrate import quad_vec import numpy as np import numba as nb h = 0.00005 G = 1000 E = 3*G R= 0.0002 upsilon = 0.06 gamma = 0.072 @nb.jit(nb.float64(nb.float64, nb.float64, nb.float64, nb.float64, nb.float64),cache=True) def QSzz_1(s,h,z, upsilon, E): # exponential form of the QSzz^-1 function to prevent float overflow, # also assumes nu==0.5 numerator = (1+2*s**2*h**2)*np.exp(-2*s*z) + 0.5*(1+np.exp(-4*s*z)) denominator = 0.5*(1-np.exp(-4*s*z)) - 2*s*h*np.exp(-2*s*z) return (3/(2*s*E)) / (numerator/denominator + (3*upsilon/(2*E))*s) def integrand(s, r, R, upsilon, E, h): return s*(R*j0(s*R) - 2*j1(s*R)/s) * QSzz_1(s,h,h, upsilon, E) * j0(s*r) def style_exact(r, gamma, R, upsilon, E, h): int_out = quad_vec(integrand, 0, np.inf, args=(r,R,upsilon,E,h)) return gamma * int_out[0] # calculate fixed ~10s x_ax = np.linspace(0, 0.0004,101, endpoint=True, dtype=np.float64) zeta = style_exact(x_ax, gamma, 0.0002, upsilon, E, h) # fit to dataset (wetting ridge) ~7 min popt_hemi, pcov_hemi = curve_fit(lambda x, upsilon, E, h: style_exact(x, gamma, R, upsilon, E, h) ,points_x, points_y, p0=(upsilon, E, h), bounds=([0,0,0],[np.inf,np.inf,np.inf])) Edit: here are some exemplar values: points_x =[0.00040030286, 0.00040155788, 0.00040281289, 0.00040406791000000003, 0.00040532292, 0.00040657794, 0.00040783296, 0.00040908797, 0.00041034299, 0.00041159801, 0.00041285302, 0.00041410804, 0.00041536305, 0.00041661807, 0.00041787309, 0.0004191281, 0.00042038312, 0.00042163814, 0.00042289315000000003, 0.00042414817000000003, 0.00042540318, 0.0004266582, 0.00042791322, 0.00042916823, 0.00043042325, 0.00043167827, 0.00043293328, 0.0004341883, 0.00043544332, 0.00043669833, 0.00043795335, 0.00043920836, 0.00044046338, 0.0004417184, 0.00044297341000000003, 0.00044422843000000003, 0.00044548345, 0.00044673846, 0.00044799348, 0.00044924849, 0.00045050351, 0.00045175852999999996, 0.00045301354000000006, 0.00045426856, 0.00045552357999999995, 0.00045677859000000005, 0.00045803361, 0.00045928863000000006, 0.00046054364000000004, 0.00046179866, 0.00046305367, 0.00046430869000000004, 0.00046556371, 0.00046681871999999997, 0.00046807374000000003, 0.00046932876, 0.00047058376999999996, 0.00047183879, 0.0004730938, 0.00047434881999999995, 0.00047560384, 0.00047685885, 0.00047811387000000006, 0.00047936889, 0.0004806239, 0.00048187892000000005, 0.00048313393000000004, 0.00048438895, 0.00048564397000000004, 0.00048689898000000003, 0.000488154, 0.00048940902, 0.00049066403, 0.00049191905, 0.00049317407, 0.00049442908, 0.0004956841, 0.0004969391100000001, 0.00049819413, 0.0004994491500000001, 0.00050070416, 0.00050195918, 0.0005032142, 0.00050446921, 0.00050572423, 0.00050697924, 0.00050823426, 0.00050948928, 0.00051074429, 0.00051199931, 0.00051325433, 0.00051450934, 0.00051576436, 0.00051701937, 0.0005182743900000001, 0.00051952941, 0.00052078442, 0.00052203944, 0.00052329446, 0.00052454947, 0.00052580449, 0.00052705951, 
0.00052831452, 0.00052956954, 0.00053082455, 0.00053207957, 0.00053333459, 0.0005345896, 0.00053584462, 0.00053709964, 0.00053835465, 0.00053960967, 0.00054086468, 0.0005421197, 0.0005433747200000001, 0.00054462973, 0.00054588475, 0.00054713977, 0.00054839478, 0.0005496498, 0.00055090482, 0.00055215983, 0.00055341485, 0.00055466986, 0.00055592488, 0.0005571799, 0.00055843491, 0.00055968993, 0.00056094495, 0.0005621999600000001, 0.00056345498, 0.00056470999, 0.00056596501, 0.00056722003, 0.00056847504, 0.00056973006, 0.00057098508, 0.00057224009, 0.00057349511, 0.00057475012, 0.00057600514, 0.00057726016, 0.00057851517, 0.00057977019, 0.00058102521, 0.00058228022, 0.0005835352400000001, 0.00058479026, 0.00058604527, 0.00058730029, 0.0005885553, 0.00058981032, 0.00059106534, 0.00059232035, 0.00059357537, 0.00059483039, 0.0005960854, 0.00059734042, 0.00059859543, 0.00059985045, 0.00060110547, 0.0006023604800000001, 0.0006036155, 0.00060487052, 0.00060612553, 0.00060738055, 0.00060863556, 0.00060989058, 0.0006111456, 0.00061240061, 0.00061365563, 0.00061491065, 0.00061616566, 0.00061742068, 0.0006186757, 0.00061993071, 0.00062118573, 0.00062244074, 0.00062369576, 0.00062495078, 0.00062620579, 0.0006274608100000001, 0.00062871583, 0.00062997084, 0.00063122586, 0.00063248087, 0.00063373589, 0.00063499091, 0.00063624592, 0.00063750094, 0.00063875596, 0.00064001097, 0.00064126599, 0.000642521, 0.00064377602, 0.00064503104, 0.0006462860500000001, 0.00064754107, 0.0006487960900000001, 0.0006500511, 0.00065130612, 0.00065256114, 0.00065381615, 0.00065507117, 0.00065632618, 0.0006575812, 0.00065883622, 0.00066009123, 0.00066134625, 0.00066260127, 0.00066385628, 0.0006651113, 0.00066636631, 0.0006676213300000001, 0.00066887635, 0.00067013136, 0.00067138638, 0.0006726414, 0.00067389641, 0.00067515143, 0.00067640645, 0.00067766146, 0.00067891648, 0.00068017149, 0.00068142651, 0.00068268153, 0.00068393654, 0.00068519156, 0.00068644658, 0.00068770159, 0.00068895661, 0.00069021162, 0.00069146664, 0.0006927216600000001, 0.00069397667, 0.00069523169, 0.00069648671, 0.00069774172, 0.00069899674, 0.00070025175, 0.00070150677, 0.00070276179, 0.0007040168, 0.00070527182, 0.00070652684, 0.00070778185, 0.00070903687, 0.00071029189, 0.0007115469000000001, 0.00071280192, 0.00071405693, 0.00071531195, 0.00071656697, 0.00071782198, 0.000719077, 0.00072033202, 0.00072158703, 0.00072284205, 0.00072409706, 0.00072535208, 0.0007266071, 0.00072786211, 0.00072911713, 0.00073037215, 0.00073162716, 0.0007328821800000001, 0.00073413719, 0.00073539221, 0.00073664723, 0.00073790224, 0.00073915726, 0.00074041228, 0.00074166729, 0.00074292231, 0.00074417733, 0.00074543234, 0.00074668736, 0.00074794237, 0.00074919739, 0.00075045241, 0.0007517074200000001, 0.00075296244, 0.00075421746, 0.00075547247, 0.00075672749, 0.0007579825, 0.00075923752, 0.00076049254, 0.00076174755, 0.00076300257, 0.00076425759, 0.0007655126, 0.00076676762, 0.00076802264, 0.00076927765, 0.00077053267, 0.00077178768, 0.0007730427, 0.00077429772, 0.00077555273, 0.0007768077500000001, 0.00077806277, 0.00077931778, 0.0007805728, 0.00078182781, 0.00078308283, 0.00078433785, 0.00078559286, 0.00078684788, 0.0007881029, 0.00078935791, 0.00079061293, 0.00079186794, 0.00079312296, 0.00079437798, 0.0007956329900000001, 0.00079688801, 0.0007981430300000001, 0.00079939804, 0.00080065306, 0.00080190808, 0.00080316309, 0.00080441811, 0.00080567312, 0.00080692814, 0.00080818316, 0.00080943817, 0.00081069319, 0.00081194821, 0.00081320322, 0.00081445824, 0.00081571325, 
0.0008169682700000001, 0.00081822329, 0.0008194783, 0.00082073332, 0.00082198834, 0.00082324335, 0.00082449837, 0.00082575338, 0.0008270084, 0.00082826342, 0.00082951843, 0.00083077345, 0.00083202847, 0.00083328348, 0.0008345385, 0.00083579352, 0.00083704853, 0.00083830355, 0.00083955856, 0.00084081358, 0.0008420686000000001, 0.00084332361, 0.00084457863, 0.00084583365, 0.00084708866, 0.00084834368, 0.00084959869, 0.00085085371, 0.00085210873, 0.00085336374, 0.00085461876, 0.00085587378, 0.00085712879, 0.00085838381, 0.00085963882, 0.0008608938400000001, 0.00086214886, 0.00086340387, 0.00086465889, 0.00086591391, 0.00086716892, 0.00086842394, 0.00086967896, 0.00087093397, 0.00087218899, 0.000873444, 0.00087469902, 0.00087595404, 0.00087720905, 0.00087846407, 0.00087971909, 0.0008809741, 0.0008822291200000001, 0.00088348413, 0.00088473915, 0.00088599417, 0.00088724918, 0.0008885042, 0.00088975922, 0.00089101423, 0.00089226925, 0.00089352427, 0.00089477928, 0.0008960343, 0.00089728931, 0.00089854433, 0.00089979935, 0.0009010543600000001, 0.00090230938, 0.0009035644, 0.00090481941, 0.00090607443, 0.00090732944, 0.00090858446, 0.00090983948, 0.00091109449, 0.00091234951, 0.00091360453, 0.00091485954, 0.00091611456, 0.00091736957, 0.00091862459, 0.00091987961, 0.00092113462, 0.00092238964, 0.00092364466, 0.00092489967, 0.0009261546900000001, 0.00092740971, 0.00092866472, 0.00092991974, 0.00093117475, 0.00093242977, 0.00093368479, 0.0009349398, 0.00093619482, 0.00093744984, 0.00093870485, 0.0009399598700000001, 0.00094121488, 0.0009424699, 0.0009437249200000001, 0.00094497993, 0.00094623495, 0.0009474899700000001, 0.0009487449799999999, 0.00095, 0.0009512550100000001, 0.0009525100299999999, 0.00095376505, 0.0009550200600000001, 0.0009562750799999999, 0.0009575301, 0.0009587851100000001, 0.0009600401299999999, 0.00096129515, 0.00096255017, 0.00096380517, 0.0009650601700000001, 0.00096631517, 0.00096757027, 0.0009688252700000001, 0.00097008027, 0.0009713352699999999, 0.0009725902700000001, 0.00097384527, 0.0009751003699999999, 0.0009763553700000001, 0.00097761037, 0.00097886537, 0.00098012037, 0.0009813753699999999, 0.0009826304699999998, 0.0009838854700000002, 0.00098514047, 0.00098639547, 0.00098765047, 0.0009889054699999998, 0.0009901605699999998, 0.0009914155700000002, 0.00099267057, 0.00099392557, 0.00099518057, 0.0009964355699999998, 0.0009976905700000002, 0.0009989456700000001, 0.00100020067, 0.00100145567, 0.00100271067, 0.0010039656699999998, 0.0010052206700000002, 0.0010064757700000001, 0.00100773077, 0.00100898577, 0.0010102407699999999, 0.0010114957699999998, 0.0010127507700000002, 0.00101400587, 0.00101526087, 0.00101651587, 0.0010177708699999999, 0.0010190258700000002, 0.0010202808700000001, 0.00102153597, 0.00102279097, 0.00102404597, 0.0010253009699999998, 0.0010265559700000002, 0.0010278109700000001, 0.00102906607, 0.00103032107, 0.00103157607, 0.0010328310699999998, 0.0010340860700000002, 0.00103534107, 0.00103659617, 0.00103785117, 0.00103910617, 0.0010403611699999998, 0.0010416161700000002, 0.00104287117, 0.00104412627, 0.00104538127, 0.0010466362699999999, 0.0010478912699999998, 0.0010491462700000002, 0.00105040127, 0.00105165627, 0.00105291137, 0.0010541663699999999, 0.0010554213700000002, 0.0010566763700000001, 0.00105793137, 0.00105918637, 0.00106044147, 0.0010616964699999999, 0.0010629514700000002, 0.0010642064700000001, 0.00106546147, 0.00106671647, 0.00106797157, 0.0010692265699999998, 0.0010704815700000002, 0.00107173657, 0.00107299157, 0.00107424657, 0.00107550167, 
0.0010767566699999998, 0.0010780116700000002, 0.00107926667, 0.00108052167, 0.00108177667, 0.0010830317699999999, 0.0010842867699999998, 0.0010855417700000002, 0.00108679677, 0.00108805177, 0.00108930677, 0.0010905618699999999, 0.0010918168700000002, 0.0010930718700000001, 0.00109432687, 0.00109558187, 0.00109683687, 0.0010980919699999999, 0.0010993469700000002, 0.0011006019700000001, 0.00110185697, 0.00110311197, 0.0011043669699999999, 0.0011056219699999998, 0.0011068770700000002, 0.00110813207, 0.00110938707, 0.00111064207, 0.0011118970699999999, 0.0011131520700000002, 0.0011144071700000002, 0.00111566217, 0.00111691717, 0.00111817217, 0.0011194271699999998, 0.0011206821700000002, 0.0011219372700000002, 0.00112319227, 0.00112444727, 0.00112570227, 0.0011269572699999998, 0.0011282122700000002, 0.0011294673700000001, 0.00113072237, 0.00113197737, 0.00113323237, 0.0011344873699999998, 0.0011357423700000002, 0.0011369974700000001, 0.00113825247, 0.00113950747, 0.0011407624699999999, 0.0011420174699999998, 0.0011432724700000002, 0.0011445275700000001, 0.00114578257, 0.00114703757, 0.0011482925699999999, 0.0011495475700000002, 0.0011508025700000001, 0.00115205767, 0.00115331267, 0.00115456767, 0.0011558226699999999, 0.0011570776700000002, 0.0011583326700000001, 0.00115958777, 0.00116084277, 0.00116209777, 0.0011633527699999998, 0.0011646077700000002, 0.00116586277, 0.00116711777, 0.00116837287, 0.00116962787, 0.0011708828699999998, 0.0011721378700000002, 0.00117339287, 0.00117464787, 0.00117590297, 0.0011771579699999999, 0.0011784129699999998, 0.0011796679700000002, 0.00118092297, 0.00118217797, 0.00118343307, 0.0011846880699999999, 0.0011859430700000002, 0.0011871980700000001, 0.00118845307, 0.00118970807, 0.00119096317, 0.0011922181699999999, 0.0011934731700000002, 0.0011947281700000001, 0.00119598317, 0.00119723817, 0.00119849327, 0.0011997482699999998, 0.0012010032700000002, 0.00120225827, 0.00120351327, 0.00120476827, 0.00120602337, 0.0012072783699999998, 0.0012085333700000002, 0.00120978837, 0.00121104337, 0.00121229837, 0.0012135534699999999, 0.0012148084699999998, 0.0012160634700000002, 0.00121731847, 0.00121857347, 0.00121982847, 0.0012210834699999998, 0.0012223385700000002, 0.0012235935700000001, 0.00122484857, 0.00122610357, 0.00122735857, 0.0012286135699999998, 0.0012298686700000002, 0.0012311236700000001, 0.00123237867, 0.00123363367, 0.0012348886699999999, 0.0012361436699999998] points_y = [-2.4929826e-07, -2.3248189e-07, -4.4305314e-07, -1.0689171e-06, -7.0144722e-07, -1.3773717e-06, -9.3672285e-07, -1.6876499e-06, -9.8346007e-07, -1.7992562e-06, -1.0198111e-06, -1.721233e-06, -8.9082583e-07, -1.1925362e-06, -8.3776501e-07, -6.9998957e-07, -7.1134901e-07, -4.5476849e-07, -6.4449894e-07, -3.8765887e-07, -6.3044764e-07, -7.4224008e-07, -7.6114851e-07, -1.0377502e-06, -1.3589471e-06, -1.3342596e-06, -1.3712255e-06, -1.3510569e-06, -1.2278933e-06, -8.2319036e-07, -1.4040568e-06, -6.4183121e-07, -1.1649824e-06, -7.3197454e-07, -1.0537769e-06, -8.3223932e-07, -1.1644648e-06, -1.2177416e-06, -1.4045247e-06, -1.6934001e-06, -1.6157397e-06, -1.8595331e-06, -1.7097882e-06, -1.8031869e-06, -1.5406345e-06, -1.5851084e-06, -1.5695719e-06, -1.4990693e-06, -1.8087735e-06, -1.7151045e-06, -1.8353234e-06, -1.71844e-06, -1.7904118e-06, -1.5297879e-06, -1.6064767e-06, -1.4520618e-06, -1.1090131e-06, -1.2475477e-06, -7.4591269e-07, -1.0619496e-06, -7.5699762e-07, -1.3883064e-06, -1.3300594e-06, -1.9713711e-06, -2.0613271e-06, -2.5116161e-06, -2.4466345e-06, -2.5386926e-06, -2.2368298e-06, 
-2.2934508e-06, -1.8951084e-06, -1.8117756e-06, -1.6680112e-06, -1.8274169e-06, -1.7569355e-06, -2.1081536e-06, -2.1241154e-06, -2.2742958e-06, -2.4032149e-06, -2.2596226e-06, -2.1889918e-06, -1.9359605e-06, -1.8878718e-06, -1.6144539e-06, -1.6485844e-06, -1.2316506e-06, -1.6932815e-06, -8.1348768e-07, -1.310099e-06, -4.3574574e-07, -1.0726973e-06, -6.6005902e-07, -1.2151841e-06, -9.1100721e-07, -1.4911344e-06, -1.3152027e-06, -1.3695714e-06, -1.3930563e-06, -1.3452594e-06, -1.3228626e-06, -1.3714694e-06, -1.2480971e-06, -1.4622823e-06, -1.5687181e-06, -1.7872703e-06, -1.7135845e-06, -2.0209804e-06, -1.3665688e-06, -1.7074398e-06, -1.1511678e-06, -1.1604734e-06, -1.0173458e-06, -1.0660268e-06, -1.0424449e-06, -1.1101976e-06, -1.0030326e-06, -1.0879421e-06, -8.2978143e-07, -9.3823628e-07, -7.2342249e-07, -9.8929055e-07, -1.0764783e-06, -1.3105722e-06, -1.3954326e-06, -1.5047949e-06, -1.4339143e-06, -1.3061363e-06, -1.3200332e-06, -1.381963e-06, -1.3490984e-06, -1.3526509e-06, -1.463083e-06, -1.2588114e-06, -1.4445926e-06, -1.1240129e-06, -1.3659935e-06, -1.3323392e-06, -1.3695779e-06, -1.7108472e-06, -1.7111548e-06, -2.0250494e-06, -2.1803196e-06, -2.2433208e-06, -2.4435685e-06, -1.9341618e-06, -2.3866277e-06, -1.8497934e-06, -1.8903583e-06, -1.4422203e-06, -1.7661343e-06, -1.5059728e-06, -1.5770287e-06, -1.8108199e-06, -2.0170832e-06, -1.8260586e-06, -2.1429269e-06, -2.0532939e-06, -2.1373399e-06, -2.342127e-06, -2.3871293e-06, -2.5980083e-06, -2.4293864e-06, -2.3568741e-06, -2.0801477e-06, -1.8587702e-06, -1.7074138e-06, -1.5791169e-06, -1.6891695e-06, -1.7635139e-06, -1.9566623e-06, -1.8455385e-06, -2.1080438e-06, -2.0320153e-06, -2.1665641e-06, -2.1571212e-06, -2.3643005e-06, -2.074037e-06, -2.0893195e-06, -1.9232214e-06, -1.7025658e-06, -1.6232691e-06, -1.6510243e-06, -1.7197265e-06, -1.8580166e-06, -1.9258182e-06, -2.0062691e-06, -2.0157544e-06, -2.0394525e-06, -2.0826713e-06, -1.9067459e-06, -2.0218438e-06, -1.9964327e-06, -2.1734356e-06, -2.1242189e-06, -2.4424379e-06, -2.4437198e-06, -2.6022861e-06, -2.4502697e-06, -2.6343237e-06, -2.2225432e-06, -2.3110892e-06, -2.1664638e-06, -2.1287713e-06, -2.011825e-06, -2.2808875e-06, -2.158988e-06, -2.5522458e-06, -2.556647e-06, -2.8299596e-06, -2.9620166e-06, -2.6908558e-06, -3.0163631e-06, -2.6530144e-06, -2.5642676e-06, -2.2324086e-06, -2.0825715e-06, -1.7085644e-06, -1.4025919e-06, -1.4042667e-06, -1.397307e-06, -1.4471031e-06, -1.4352464e-06, -1.6847902e-06, -1.4372545e-06, -1.6405646e-06, -1.5025385e-06, -1.58785e-06, -1.5018164e-06, -1.546755e-06, -1.5307927e-06, -1.5450872e-06, -1.762507e-06, -1.9245396e-06, -2.1342847e-06, -2.083201e-06, -2.1824533e-06, -2.2264199e-06, -1.9521925e-06, -2.1104425e-06, -2.35205e-06, -2.1372429e-06, -2.3874246e-06, -2.3111549e-06, -2.3476044e-06, -1.9828263e-06, -2.1105666e-06, -1.77767e-06, -1.8420129e-06, -1.90373e-06, -1.930438e-06, -2.0727705e-06, -2.1793671e-06, -2.4205829e-06, -2.1275047e-06, -2.4740434e-06, -2.0603233e-06, -2.2409819e-06, -1.7541814e-06, -2.0279909e-06, -1.730486e-06, -1.9476207e-06, -1.7534857e-06, -1.8505329e-06, -2.0095086e-06, -1.618978e-06, -1.8867553e-06, -1.9088163e-06, -1.886491e-06, -1.7468138e-06, -1.8476389e-06, -1.7557932e-06, -1.4058452e-06, -1.6067978e-06, -1.3156005e-06, -1.3659535e-06, -1.0961384e-06, -1.0153987e-06, -9.4432646e-07, -6.6454642e-07, -9.2586387e-07, -1.0025458e-06, -1.0698426e-06, -1.2805659e-06, -1.3957816e-06, -1.504749e-06, -1.3274602e-06, -1.4140738e-06, -1.3504825e-06, -1.3899331e-06, -1.3970904e-06, -1.4744283e-06, -1.4185692e-06, 
-1.7050143e-06, -1.5382651e-06, -1.5599202e-06, -1.5529446e-06, -1.506719e-06, -1.4330019e-06, -1.240627e-06, -1.2835575e-06, -1.1023492e-06, -1.1632735e-06, -1.1683113e-06, -1.2732747e-06, -1.219676e-06, -1.2890147e-06, -1.1440703e-06, -9.1523203e-07, -8.2035542e-07, -8.7226368e-07, -7.3633722e-07, -9.884313e-07, -8.5961273e-07, -1.2392311e-06, -1.0843573e-06, -1.0707268e-06, -9.571558e-07, -1.0067944e-06, -6.4553431e-07, -4.0506156e-07, -3.3043043e-07, -1.7361598e-07, -1.3118263e-07, -2.9468891e-07, -4.7080768e-07, -6.4225818e-07, -7.5475209e-07, -8.5102358e-07, -6.0803728e-07, -8.1677753e-07, -5.9744241e-07, -7.8274568e-07, -5.3968306e-07, -8.350585e-07, -5.4845851e-07, -5.8427222e-07, -5.1520419e-07, -4.6822083e-07, -6.0910398e-07, -4.4298342e-07, -5.6257054e-07, -2.7562129e-07, -2.5181401e-07, 3.8053095e-08, 2.4159147e-07, 3.6882074e-07, 5.1241897e-07, 4.8644598e-07, 7.2692073e-07, 4.7022181e-07, 7.0384493e-07, 6.8289479e-07, 6.4066943e-07, 8.5657662e-07, 5.8406311e-07, 6.7344028e-07, 4.1435118e-07, 2.7649325e-07, 2.3123522e-07, -1.9399705e-08, 2.6291987e-07, -3.6143527e-08, 4.1732021e-07, 3.3391364e-07, 6.4314122e-07, 7.7139665e-07, 1.1209136e-06, 1.4367421e-06, 1.6319081e-06, 1.7711259e-06, 1.8566403e-06, 1.7371454e-06, 1.4824876e-06, 1.6134811e-06, 1.0707754e-06, 1.3415844e-06, 1.1356512e-06, 1.4106389e-06, 1.4104486e-06, 1.7408528e-06, 2.0744193e-06, 2.079919e-06, 2.1838213e-06, 2.3656145e-06, 2.1909773e-06, 2.3504607e-06, 2.2917643e-06, 2.4505978e-06, 2.0934847e-06, 2.583584e-06, 2.2871518e-06, 2.5116042e-06, 2.6234818e-06, 2.8420594e-06, 3.0011699e-06, 3.3721137e-06, 3.3177881e-06, 3.7014297e-06, 3.4988464e-06, 3.6346743e-06, 3.6031151e-06, 3.6434367e-06, 3.3825082e-06, 3.6445565e-06, 3.2970635e-06, 3.6138927e-06, 3.3753039e-06, 3.7447733e-06, 3.5673385e-06, 3.6078831e-06, 3.5609168e-06, 3.6213054e-06, 3.5038571e-06, 3.7264648e-06, 3.9751613e-06, 3.8206903e-06, 4.1254495e-06, 3.9272576e-06, 4.2386514e-06, 3.815278e-06, 4.2691643e-06, 4.0643683e-06, 4.330484e-06, 4.29042e-06, 4.6035887e-06, 4.4565016e-06, 4.583597e-06, 4.7192276e-06, 4.7442267e-06, 4.734727e-06, 5.0407053e-06, 5.3132589e-06, 5.3419609e-06, 5.7940368e-06, 6.014359e-06, 6.0453411e-06, 6.0996584e-06, 6.064599e-06, 6.1232403e-06, 5.8926808e-06, 6.0748121e-06, 5.9732831e-06, 6.0281785e-06, 5.9558067e-06, 5.8235522e-06, 5.6378228e-06, 5.4438118e-06, 5.3658419e-06, 5.3454619e-06, 5.205238e-06, 5.4038907e-06, 5.0070169e-06, 5.0996156e-06, 4.5688268e-06, 4.5768204e-06, 4.3706204e-06, 4.378131e-06, 4.3035565e-06, 4.4136234e-06, 4.4586055e-06, 4.2999055e-06, 4.2367521e-06, 4.1092479e-06, 3.6691199e-06, 3.7132548e-06, 3.3891334e-06, 3.4132172e-06, 3.3112791e-06, 3.4194779e-06, 3.3548478e-06, 3.4746562e-06, 3.0714297e-06, 3.631046e-06, 2.9155762e-06, 3.3648723e-06, 3.0564361e-06, 3.1977623e-06, 2.9422311e-06, 2.8664619e-06, 2.9553471e-06, 2.6331467e-06, 2.7458985e-06, 2.4857213e-06, 2.5358048e-06, 2.0853043e-06, 2.2717608e-06, 1.7708539e-06, 2.1185441e-06, 1.8057521e-06, 2.1431184e-06, 1.8050008e-06, 2.2162456e-06, 1.8085417e-06, 2.0822527e-06, 1.6735792e-06, 1.9627324e-06, 1.5854033e-06, 1.7829235e-06, 1.7266717e-06, 1.9015957e-06, 2.1904481e-06, 2.0235789e-06, 2.319506e-06, 2.0939101e-06, 1.9725124e-06, 1.8089637e-06, 1.6690528e-06, 1.5539039e-06, 1.5197157e-06, 1.6846562e-06, 1.5117772e-06, 1.6974785e-06, 1.6371901e-06, 1.62875e-06, 1.3414928e-06, 1.5716781e-06, 1.1625945e-06, 1.4214429e-06, 9.0279233e-07, 1.1886867e-06, 1.0201856e-06, 1.1328523e-06, 1.1236831e-06, 1.2940038e-06, 1.5421363e-06, 1.4063536e-06, 
1.8374101e-06, 1.4969428e-06, 1.8342753e-06, 1.3619477e-06, 1.6680712e-06, 1.1305091e-06, 1.3914424e-06, 1.1421031e-06, 1.0667439e-06, 9.2969523e-07, 1.0559697e-06, 9.1449135e-07, 1.0647219e-06, 1.0087653e-06, 1.4041236e-06, 1.0653469e-06, 1.5728387e-06, 1.1900372e-06, 1.396327e-06, 9.5831428e-07, 9.8273849e-07, 6.9228567e-07, 8.1312667e-07, 7.1831973e-07, 8.7380434e-07, 1.1589902e-06, 1.1559309e-06, 1.3702947e-06, 1.2662056e-06, 1.5425231e-06, 1.3134162e-06, 1.3669259e-06, 1.3616054e-06, 1.1954299e-06, 1.2969087e-06, 1.2044857e-06, 1.2129433e-06, 1.066829e-06, 1.2888265e-06, 9.1080301e-07, 9.4850302e-07, 5.628118e-07, 7.3976011e-07, 2.4812591e-07, 4.8428843e-07, 2.8183855e-07, 5.7697958e-07, 4.5050618e-07, 8.9398675e-07, 6.1835844e-07, 1.0104357e-06, 8.0297383e-07, 1.048894e-06, 8.5358493e-07, 1.0796489e-06, 8.1287327e-07, 9.8257456e-07, 5.680762e-07, 8.0814245e-07, 5.9584009e-07, 6.3797485e-07, 6.944886e-07, 9.4480563e-07, 9.518683e-07, 1.2123149e-06, 1.5256706e-06, 1.5178348e-06, 1.5515784e-06, 1.7937264e-06, 1.240253e-06, 1.3527793e-06, 1.0316758e-06, 9.3243026e-07, 9.332284e-07, 7.1470557e-07, 8.428729e-07, 6.2857809e-07, 5.4179424e-07, 4.6470621e-07, 3.6276441e-07, 2.814264e-07, 3.295977e-07, 4.0521404e-07, 3.5343026e-07, 3.6958044e-07, 5.0506593e-07, 3.1834025e-07, 3.2582213e-07, 4.3522668e-07, 2.6025184e-07, 3.8900153e-07, 1.0961338e-07, 2.5467694e-07, 1.166893e-07, 5.6772224e-08, 9.1470554e-08, 2.0167496e-07, 3.7911797e-08, 2.6796461e-07, 2.0933361e-07, 2.1677593e-07, 1.5076751e-07, 2.154547e-07, 9.4922825e-08, -1.5619153e-08, 5.6953286e-08, 1.492038e-08, -1.2234541e-07, -7.3945498e-08, -1.4066427e-07, -1.5021338e-07, -8.5765791e-08, -2.5937592e-07, -8.5784093e-08, -3.7865655e-07, -3.3939569e-07, -5.3969692e-07, -6.6329776e-07, -6.7695552e-07, -7.9978318e-07, -6.5715392e-07, -5.8634763e-07, -3.4631253e-07, -2.6236251e-07, -9.1816048e-09, -6.8072671e-08, 4.6884891e-08, -2.1581414e-07, -1.6978596e-07, -3.1446355e-07, -3.5427569e-07, -2.7410849e-07, -2.8441695e-07, -2.4303658e-07, -6.8944944e-08, -1.8188616e-07, 5.0232102e-08, -5.0489499e-08, -3.4827404e-08, -2.0914572e-07, -2.141703e-07]
There are certainly some code optimizations in the comments that can speed things up, so I'll complement that by focusing on the integration. If you can use a GPU, you can get two to three orders of magnitude speedup there. One of the simplest things that you can do to improve the speed without sacrificing too much accuracy is to provide limits of integration that capture the important part of the integral. I plotted the integrand to choose 1e7 as the upper limit, but you can probably develop a heuristic and write something to automatically choose a reasonable value. args = (x_ax, R, upsilon, E, h) # infinite upper limit of integration %timeit -r 1 -n 1 quad_vec(integrand, 0, np.inf, args=args) # 9.41 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each) res = quad_vec(integrand, 0, np.inf, args=args) # large, finite upper limit of integration %timeit quad_vec(integrand, 0, 1e7, args=args) # 595 ms ± 169 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) res2 = quad_vec(integrand, 0, 1e7, args=args) # Check error between the two # (note that `quad_vec` may not be super accurate, anyway; # consider checking against `mpmath.quad`) from scipy import stats res = quad_vec(integrand, 0, 1e7, args=args) res0 = quad_vec(integrand, 0, np.inf, args=args) stats.gmean(abs((res[0] - res0[0])/res0[0])) # gmean of relative error # 3.643331971383965e-05 You can get much bigger gains without too much trouble by rolling your own integration scheme in CuPy (or find an integration library that makes use of the GPU). For instance, there is an implementation of the tanh-sinh quadrature rule in SciPy that is relatively simple to adapt to something that uses the GPU, especially if you're willing to go with a fixed step size. tanh-sinh is probably not the best rule for this oscillatory integrand; I'm just using it because I'm familiar with it.
import cupy as np # still np to avoid changing the existing code from cupyx.scipy.special import j0, j1 # copy-pasted from # https://github.com/scipy/scipy/blob/v1.12.0/scipy/integrate/_tanhsinh.py # then removed comments to reduce length; other optimizations possible # There are plans to make this Array-API compatible and provide a public # interface _N_BASE_STEPS = 8 def _get_base_step(dtype=np.float64): fmin = 4*np.finfo(dtype).tiny tmax = np.arcsinh(np.log(2/fmin - 1) / np.pi) h0 = tmax / _N_BASE_STEPS return h0.astype(dtype) def _compute_pair(k, h0): h = h0 / 2**k max = _N_BASE_STEPS * 2**k j = np.arange(max+1) if k == 0 else np.arange(1, max+1) jh = j * h pi_2 = np.pi / 2 u1 = pi_2*np.cosh(jh) u2 = pi_2*np.sinh(jh) wj = u1 / np.cosh(u2)**2 xjc = 1 / (np.exp(u2) * np.cosh(u2)) wj[0] = wj[0] / 2 if k == 0 else wj[0] return xjc, wj def _transform_to_limits(xjc, wj, a, b): alpha = (b - a) / 2 xj = np.concatenate((-alpha * xjc + b, alpha * xjc + a), axis=-1) wj = wj*alpha wj = np.concatenate((wj, wj), axis=-1) invalid = (xj <= a) | (xj >= b) wj[invalid] = 0 return xj, wj # simple fixed-step integration function def integrate(func, a, b, args): k = 10 # increase this to improve accuracy step0 = _get_base_step() step = step0 / 2**k xjc, wj = _compute_pair(k, step0) xj, wj = _transform_to_limits(xjc, wj, a, b) fj = integrand(xj, *args) return fj @ wj * step # your code def QSzz_1(s,h,z, upsilon, E): # exponential form of the QSzz^-1 function to prevent float overflow, # also assumes nu==0.5 numerator = (1+2*s**2*h**2)*np.exp(-2*s*z) + 0.5*(1+np.exp(-4*s*z)) denominator = 0.5*(1-np.exp(-4*s*z)) - 2*s*h*np.exp(-2*s*z) return (3/(2*s*E)) / (numerator/denominator + (3*upsilon/(2*E))*s) def integrand(s, r, R, upsilon, E, h): return s*(R*j0(s*R) - 2*j1(s*R)/s) * QSzz_1(s, h, h, upsilon, E) * j0(s*r) h = 0.00005 G = 1000 E = 3*G R= 0.0002 upsilon = 0.06 gamma = 0.072 x_ax = np.linspace(0, 0.0004, 101, endpoint=True, dtype=np.float64) r, gamma, R, upsilon, E, h = x_ax, gamma, 0.0002, upsilon, E, h a = 0 b = 1e7 args = (r[:, np.newaxis], R, upsilon, E, h) %timeit integrate(integrand, a, b, args) # 2.14 ms Β± 82.8 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) res = integrate(integrand, a, b, args) stats.gmean(abs((res - res0[0])/res0[0])) # 0.00011028421945971745 You can trade time for accuracy by increasing k. If you don't want to make the upper limit of integration finite, you can also transform the improper integral into one with finite limits of integration with a variable substitution (but you'll need to increase k to compensate). Update: style_exact, fit_func, and the call to curve_fit might look like: def style_exact(r, gamma, R, upsilon, E, h): int_out = integrate(integrand, 0, 1e7, args=(r,R,upsilon,E,h)) return gamma * int_out def fit_func(x, upsilon, E,h): # Ensure that `r` is a CuPy array align led along new axis; the # last axis will be used to evaluate integrand at many abscissae. x = cp.asarray(x)[:, np.newaxis] res = style_exact(x, gamma, 0.0002, upsilon, E, h) return cp.asnumpy(res) popt_hemi, pcov_hemi = curve_fit(fit_func, points_x, points_y, p0=(upsilon, E, h), bounds=([0,0,0],[np.inf, np.inf, np.inf]))
4
3
78,008,970
2024-2-16
https://stackoverflow.com/questions/78008970/flyte-wont-run-in-parallel
I'm new to flyte, trying to do quite simple things at the moment. So I'm trying to run multiple independent tasks in parallel, which, if I understand the documentation, is exactly the purpose of map_task. However, so far I've been unable to make flyte actually run them in parallel. Any help would be greatly appreciated. Thanks in advance. import time from flytekit import map_task, task, workflow @task def do_something(value: str) -> str: print(f"launched: {value}", flush=True) time.sleep(60) # fakes long process time return f"{value}-processed" @workflow def do_multiple_things() -> list[str]: values = ["foo", "bar", "baz"] return map_task(do_something)(value=values)
Could you give more context please? How are you trying to run it? Is this running in remote (like on a live Flyte backend) or is this a local run? Local runs will not actually do parallel just yet. (Making flytekit execute local runs in parallel is part of a broader project that we have plans for some day, but no definite timeline). A side note however, as of this writing (circa early 2024) there is a newer version of map task that we are steadily moving towards. If you could, I would suggest using this instead. from flytekit.experimental import map_task At some point the main map task will become this one (though we will keep the experimental import in place for compatibility).
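If it helps, here is a minimal sketch of the workflow above using the newer map task. The only change to the original code is the import; this is an untested adaptation and the experimental API may shift between flytekit versions, so check the docs for your release:

import time
from flytekit import task, workflow
from flytekit.experimental import map_task

@task
def do_something(value: str) -> str:
    print(f"launched: {value}", flush=True)
    time.sleep(60)  # fakes long process time
    return f"{value}-processed"

@workflow
def do_multiple_things() -> list[str]:
    # each element of `values` becomes its own task instance; on a live
    # Flyte backend these instances can run concurrently
    values = ["foo", "bar", "baz"]
    return map_task(do_something)(value=values)

Keep in mind that, as noted above, the parallelism only shows up when the workflow is registered and executed on a remote Flyte backend, not in a local run.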
2
2
78,008,394
2024-2-16
https://stackoverflow.com/questions/78008394/how-do-i-get-the-line-numbers-of-a-saxonc-xpath-match
I'm building a report that will show the line numbers of XML elements that match a set of XPaths. I need to support XPath 2.0. Sending the XML to a separate web based processor written in Java or C# is a valid solution, but one I'm avoiding because my entire team works in Python, I want my tool to still work offline, and maintaining another web service is a lot of work. Saxonche supports XPath 2.0. The documentation describes multiple options for enabling line numbers, but never explains how to get the line numbers out once you have enabled them. Here's my code: input_file_path = 'test.xml' # Contents below input_xpath = './/foo' with PySaxonProcessor(license=False) as saxon_proc: # Attempt #1 to enable line numbers saxon_proc.set_configuration_property('l', 'on') doc_builder = saxon_proc.new_document_builder() # Attempt #2 to enable line numbers doc_builder.set_line_numbering(True) xml_tree = doc_builder.parse_xml(xml_file_name=input_file_path) xpath_processor = saxon_proc.new_xpath_processor() xpath_processor.set_context(xdm_item=xml_tree) foo_elements = xpath_processor.evaluate(input_xpath) # Do not see any line numbers on foo_elements in the debugger I inspected the result of evaluate() in the debugger, but I don't see anything that looks like a line number. Both PySaxonProcessor and PyDocumentBuilder have a parse_xml() method. In my code I am using PyDocumentBuilder, but I tried both and didn't notice any differences. test.XML <root> <foo>fah</foo> </root> Apparently there are wrong ways to feed your XML to Saxon that can result in no line numbers, but all of the information I found about that is in other languages. Any ideas about what I am doing wrong?
I am afraid I can't currently tell there is a way for SaxonC HE, for PE/EE you should be able to use the Saxon XPath extension function saxon:line-number e.g. from saxoncee import PySaxonProcessor with PySaxonProcessor(license=True) as saxon_proc: print(saxon_proc.version) doc_builder = saxon_proc.new_document_builder() doc_builder.set_line_numbering(True) xdm_doc = doc_builder.parse_xml(xml_file_name='sample1.xml') xpath_processor = saxon_proc.new_xpath_processor() xpath_processor.set_context(xdm_item=xdm_doc) xpath_processor.declare_namespace('saxon', 'http://saxon.sf.net/') items = xpath_processor.evaluate('//item') for item in items: xpath_processor.set_context(xdm_item=item) print(item, xpath_processor.evaluate_single('saxon:line-number(.)')) As I said, I am currently not sure whether there is a way for SaxonC HE, will try to investigate, https://www.saxonica.com/saxon-c/doc12/html/saxonc.html#PyXdmNode, however, doesn't seem to expose any properties similar to the Java API's XdmNode's https://www.saxonica.com/html/documentation12/javadoc/net/sf/saxon/s9api/XdmNode.html#getLineNumber().
2
3
77,994,344
2024-2-14
https://stackoverflow.com/questions/77994344/how-to-validate-object-attribute-added-like-that-myobj-newattribute-123
In my code (python) I have a class defined as follows: class Properties: def __init__(self,**kwargs): self.otherproperty = kwargs.get("otherproperty") self.channel = kwargs.get("channel") if self.channel not in "A": print("bad channel input - it should be A") pass In the rest of my code, in various places I calculate and add various attributes to the instance of that class: prop = Properties() prop.channel="B" #lots of calculations here prop.otherproperty= "not a channel" But when I do it this way, I get an error: TypeError: 'in <string>' requires string as left operand, not NoneType I already figured out that the following way I have no issues, and the new attribute gets nicely validated, and the user is informed that prop.channel should be different : prop = Properties(otherproperty = "not a channel",channel="B") But because of several reasons it would be inconvenient to define prop object this way. So how do I validate the object's attributes when I'm adding them step-by-step? And how is such step-by-step object building professionally named?
You can define a custom setter for the channel property with Python descriptors: class Properties: def __init__(self, **kwargs): self.otherproperty = kwargs.get("otherproperty") self._channel = None # Private attribute for channel @property def channel(self): return self._channel @channel.setter def channel(self, value): if value not in "A": raise ValueError("Bad channel input, it must be a substring of 'A'") self._channel = value The example usage would be: prop = Properties() prop.channel = "B" # Raises ValueError This way of gradually adding attributes to an object can be called "lazy object initialization" and is useful if you want to avoid long computations before creating an object. Side note: the above code doesn't raise an error if you set prop.channel = "". So you might want to change the validation to if value != "A": if this was an unintended behavior.
2
3
78,000,334
2024-2-15
https://stackoverflow.com/questions/78000334/my-tkinter-app-wont-change-the-file-it-is-loading
I am writing a tkinter app in python that is a flash card quiz. It needs to be able to read from a txt file one folder ahead from the python file. In the cide you can change which file is the file loaded but this is not working even if you choose the same file. When i tried to change the file in filehandling() it broke but when it launched it worked I tried it with keywords.txt which it was already using and test.txt. This is the file structure of the code: from customtkinter import * from CTkMessagebox import CTkMessagebox import glob import os information = [] file_name = 'keywords.txt' app = None def hide(tk): # hides window try: tk.withdraw() except: pass def show(tk): # shows window tk.deiconify() def FileHandling(filename): # file handling information = [] with open("Text/"+filename, 'r') as f: temp = f.read().split("\n") for i in range(0, len(temp), 2): temp_list = [] # makes 2d array temp_list.append(temp[i].capitalize()) temp_list.append(temp[i+1].capitalize()) information.append(temp_list) return information def changeHandler(value): global file_name file_name = value print(value) return file_name def changeFile(): # add change file code here files = getFiles() filePicker(files) def getFiles(): temp = [] os.chdir("../Code/Text") for file in glob.glob("*.txt"): temp.append(file) return temp def filePicker(files): hide(app) FilePicker = CTkToplevel() FilePicker.geometry("500x400") FilePicker.resizable(False, False) FilePicker.title("FIle Picker") Title = CTkLabel( master=FilePicker, text="Which file would you like to load?", font=("Arial", 10)) Title.place(relx=0.5, rely=0.05, anchor="center") Options = CTkComboBox(FilePicker, values=files, variable="Unselected", command=changeHandler) # type: ignore Options.place(relx=0.5, rely=0.5, anchor="center") ExitBtn = CTkButton(master=FilePicker, text="Menu", fg_color="#550000", text_color="#000000", command=lambda: Menu(FilePicker)).place(relx=0.5, rely=0.7, anchor="center") def Menu(tk=None): global information information = FileHandling(file_name) global app # makes app unless app is made then it reveals it. 
if app == None: app = CTk() app.geometry("500x400+750+300") app.resizable(False, False) app.title("Menu") set_appearance_mode("dark") Title = CTkLabel(master=app, text="Flash Card Project", font=("Arial", 40)) PlayBtn = CTkButton(master=app, text="Start", fg_color="#005500", text_color="#000000", command=lambda: Run(app)) Changebtn = CTkButton(master=app, text="Change File", fg_color="#AA4203", text_color="#000000", command=changeFile) ExitBtn = CTkButton(master=app, text="Exit", fg_color="#550000", text_color="#000000", command=app.quit) Title.place(relx=0.5, rely=0.05, anchor="center") PlayBtn.place(relx=0.5, rely=0.4, anchor="center") Changebtn.place(relx=0.5, rely=0.5, anchor="center") ExitBtn.place(relx=0.5, rely=0.6, anchor="center") app.mainloop() exit() else: show(app) try: tk.destroy() except: pass Menu() keywords.txt Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "C:\Users\jacen\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\customtkinter\windows\widgets\ctk_button.py", line 554, in _clicked self._command() File "E:\Documents\Python Code\Flash Cards Project\Code\test.py", line 75, in <lambda> fg_color="#550000", text_color="#000000", command=lambda: Menu(FilePicker)).place(relx=0.5, rely=0.7, anchor="center") File "E:\Documents\Python Code\Flash Cards Project\Code\test.py", line 80, in Menu information = FileHandling(file_name) File "E:\Documents\Python Code\Flash Cards Project\Code\test.py", line 27, in FileHandling with open("Text/"+filename, 'r') as f: FileNotFoundError: [Errno 2] No such file or directory: 'Text/keywords.txt' test.txt Exception in Tkinter callback Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "C:\Users\jacen\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\customtkinter\windows\widgets\ctk_button.py", line 554, in _clicked self._command() File "E:\Documents\Python Code\Flash Cards Project\Code\test.py", line 75, in <lambda> fg_color="#550000", text_color="#000000", command=lambda: Menu(FilePicker)).place(relx=0.5, rely=0.7, anchor="center") File "E:\Documents\Python Code\Flash Cards Project\Code\test.py", line 80, in Menu information = FileHandling(file_name) File "E:\Documents\Python Code\Flash Cards Project\Code\test.py", line 27, in FileHandling with open("Text/"+filename, 'r') as f: FileNotFoundError: [Errno 2] No such file or directory: 'Text/test.txt' keywords.txt cellulose Tough substance that makes up the cell walls of green plants. respiration A chemical reaction that causes energy to be released from glucose. haemoglobin A substance which joins to oxygen and carries it round the body in blood. ventilation Breathing. cartilage Tough and smooth substance covering the ends of bones to protect them. cytoplasm Jelly-like part of a cell where chemical reactions happen. nucleus Controls what happens inside a cell. alveoli Tiny air sacs in the lungs. amino acids Produced when proteins are digested. virus The smallest type of microbe. white blood cells Can engulf bacteria or make antibodies. 
photosynthesis The process of turning carbon dioxide water and light into glucose and oxygen. stomata Small holes in the underside of a leaf. vaccine Dead or inactive forms of a microorganism. fibre A nutrient that cannot be digested. test.txt kTest1 Test1 kTest2 Test2 kTest3 Test3 kTest4 Test4 kTest5 Test5 kTest6 Test6
Code: This code is working fine for me, although I commented out temp_list.append(temp[i+1].capitalize()) as its causing IndexOutOfRange error: from customtkinter import * from CTkMessagebox import CTkMessagebox import glob import os information = [] file_name_old = 'keywords.txt' file_name = './Text/'+file_name_old app = None def hide(tk): # hides window try: tk.withdraw() except: pass def show(tk): # shows window tk.deiconify() def FileHandling(filename): # file handling information = [] with open(filename, 'r') as f: temp = f.read().split("\n") for i in range(0, len(temp), 2): temp_list = [] # makes 2d array temp_list.append(temp[i].capitalize()) print(temp[i]) # temp_list.append(temp[i+1].capitalize()) information.append(temp_list) return information def changeHandler(value): global file_name file_name = value print(value) return file_name def changeFile(): # add change file code here files = getFiles() filePicker(files) def getFiles(): temp = [] current_dir = os.path.dirname(os.path.realpath(__file__)) print(current_dir) text_dir = os.path.join(current_dir, "Text") if os.path.exists(text_dir): os.chdir(text_dir) else: os.chdir(current_dir) for file in glob.glob("*.txt"): temp.append(file) return temp def filePicker(files): hide(app) FilePicker = CTkToplevel() FilePicker.geometry("500x400") FilePicker.resizable(False, False) FilePicker.title("File Picker") Title = CTkLabel( master=FilePicker, text="Which file would you like to load?", font=("Arial", 10)) Title.place(relx=0.5, rely=0.05, anchor="center") Options = CTkComboBox(FilePicker, values=files, variable="Unselected", command=changeHandler) # type: ignore Options.place(relx=0.5, rely=0.5, anchor="center") ExitBtn = CTkButton(master=FilePicker, text="Menu", fg_color="#550000", text_color="#000000", command=lambda: Menu(FilePicker)).place(relx=0.5, rely=0.7, anchor="center") def Menu(tk=None): global app # makes app unless app is made then it reveals it. if app == None: app = CTk() app.geometry("500x400+750+300") app.resizable(False, False) app.title("Menu") set_appearance_mode("dark") Title = CTkLabel(master=app, text="Flash Card Project", font=("Arial", 40)) PlayBtn = CTkButton(master=app, text="Start", fg_color="#005500", text_color="#000000", command=lambda: Run(app)) Changebtn = CTkButton(master=app, text="Change File", fg_color="#AA4203", text_color="#000000", command=changeFile) ExitBtn = CTkButton(master=app, text="Exit", fg_color="#550000", text_color="#000000", command=app.quit) Title.place(relx=0.5, rely=0.05, anchor="center") PlayBtn.place(relx=0.5, rely=0.4, anchor="center") Changebtn.place(relx=0.5, rely=0.5, anchor="center") ExitBtn.place(relx=0.5, rely=0.6, anchor="center") app.mainloop() exit() else: show(app) try: tk.destroy() except: pass Menu() File Tree: Proof:
2
2
78,003,276
2024-2-15
https://stackoverflow.com/questions/78003276/how-to-generate-uniformly-distributed-subintervals-of-an-interval
I have a non-empty integer interval [a; b). I want to generate a random non-empty integer subinterval [c; d) (where a <= c and d <= b). The [c; d) interval must be uniformly distributed in the sense that every point in [a; b) must be equally likely to end up in [c; d). I tried generating uniformly distributed c from [a; b - 1), and then uniformly distributed d from [c + 1; b), like this: a = -100 b = 100 N = 10000 cs = np.random.randint(a, b - 1, N) ds = np.random.randint(cs + 1, b) But when measuring how often each point ends up being sampled, the distribution is clearly non-uniform: import numpy as np import matplotlib.pyplot as plt hist = np.zeros(b - a, int) for c, d in zip(cs, ds): hist[c - a:d - a] += 1 plt.plot(np.arange(a, b), hist) plt.show() How do I do this correctly?
If you divide the range [a,b) into N sections, and then select one of those sections at random, then the chance of selecting each point is uniform -- 1/N. It doesn't matter how you divide the range. Here's a solution along those lines that uses a pretty uniform selection of division points. from random import randint a, b, N = -100, 100, 1000000 intervals = [] while len(intervals) < N: # divide the range into 3 intervals [a,x), [x,y), and [y,b) # the distributions of the division points don't change the histogram # we're careful here to make sure none of them are empty p1 = randint(a+1,b-2) p2 = randint(a+1,b-2) x = min(p1,p2) y = max(p1,p2)+1 # select one at random intervals.append([(a,x),(x,y),(y,b)][randint(0,2)])
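To convince yourself that this is uniform, you can reuse the histogram check from the question on the generated intervals. A quick sketch: since each draw partitions [a,b) into three sections and picks one with probability 1/3, every point should be covered in roughly N/3 of the samples:

import numpy as np
import matplotlib.pyplot as plt

hist = np.zeros(b - a, int)
for c, d in intervals:
    hist[c - a:d - a] += 1

plt.plot(np.arange(a, b), hist)
plt.axhline(N / 3, color="k", linestyle="--")  # expected coverage per point
plt.show()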
2
2
78,003,100
2024-2-15
https://stackoverflow.com/questions/78003100/running-pip-install-in-virtual-environment-tries-to-install-packages-in-default
I am trying to install modules for python on my raspberry pi 5 in a virtual environment but it just says that the environment is externally managed. I started with activating the virtual environment and trying to install the package I needed but it just told me the environment was externally managed. clock@system-time:/clock $ source .venv/bin/activate (.venv) clock@system-time:/clock $ sudo python3 -m pip install inputimeout error: externally-managed-environment Γ— This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. For more information visit http://rptl.io/venv note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. I checked which pip and python it was using and it was the one from the virtual environment. (.venv) clock@system-time:/clock $ which pip /clock/.venv/bin/pip (.venv) clock@system-time:/clock $ which python /clock/.venv/bin/python So I tried using --break-system-packages because I though maybe there was just a something wrong. It downloaded and installed the package. (.venv) clock@system-time:/clock $ sudo python3 -m pip install inputimeout --break-system-packages Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple Collecting inputimeout Downloading inputimeout-1.0.4-py3-none-any.whl (4.6 kB) Installing collected packages: inputimeout Successfully installed inputimeout-1.0.4 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv I then tried checking my list of installed packages and it was not there. (.venv) clock@system-time:/clock $ pip list Package Version ---------- ------- pip 23.0.1 setuptools 66.1.1 So I exited and checked the list for the default installation of python and it was there. (.venv) clock@system-time:/clock $ deactivate clock@system-time:/clock $ pip list Package Version ---------------------------------- ---------- arandr 0.1.11 asgiref 3.6.0 astroid 2.14.2 asttokens 2.2.1 av 10.0.0 Babel 2.10.3 beautifulsoup4 4.11.2 blinker 1.5 certifi 2022.9.24 chardet 5.1.0 charset-normalizer 3.0.1 click 8.1.3 colorama 0.4.6 colorzero 2.0 cryptography 38.0.4 cupshelpers 1.0 dbus-python 1.3.2 dill 0.3.6 distro 1.8.0 docutils 0.19 executing 2.0.1 Flask 2.2.2 gpiozero 2.0 html5lib 1.1 icecream 2.1.3 idna 3.3 importlib-metadata 4.12.0 inputimeout 1.0.4 ... I also tried following this but it just didn't work either.
sudo runs a new root shell which has no idea about your current shell's settings (including which virtual environment is active). Absolutely don't use sudo if you want to install things into the currently active user-owned virtual environment. Anyway, the entire purpose of having a virtual environment is that it's completely controlled by yourself; so you do not need root privileges to modify it (and if you somehow managed to, creating root-owned files in there would wreck it, because then you can no longer change those files without becoming root again).
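If you are ever unsure which interpreter and environment a given python or pip command is actually going to touch, a quick check from Python itself (no root needed) looks like this:

import sys
print(sys.executable)  # should point into /clock/.venv/bin/ while the venv is active
print(sys.prefix)      # the environment that a plain `pip install` will modify

With the venv activated, running python -m pip install inputimeout as your normal user (no sudo) installs the package into the environment shown above, and it will then appear in pip list.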
2
3
77,996,886
2024-2-14
https://stackoverflow.com/questions/77996886/voxelization-issue-with-open3d-incomplete-filling-along-some-triangle-faces
I'm currently using Open3D in Python to voxelize meshes, and I've come across a peculiar behavior during the process. When voxelizing a box mesh, it appears that the resulting voxels only align along the lines of the triangles instead of filling the entire surface area of the triangle faces, particularly noticeable on the box's walls. Is this behavior expected in Open3D's voxelization process, or is there a misunderstanding in my approach? I attempted voxelization on a box mesh, expecting the voxel grid to uniformly fill all the triangles of the mesh, that comprises the faces of the box. However, upon visualization, I noticed that while the top and bottom faces are adequately filled, the walls of the box exhibit voxelization primarily along the triangle edges. import open3d as o3d import numpy as np new_room = o3d.geometry.TriangleMesh.create_box(10,10,10) lines = [] for triangle in np.array(new_room.triangles): lines.extend([(triangle[0], triangle[1]), (triangle[1], triangle[2]), (triangle[2], triangle[0])]) line_set = o3d.geometry.LineSet() vertices = np.array(new_room.vertices) line_set.points = o3d.utility.Vector3dVector(vertices) line_set.lines = o3d.utility.Vector2iVector(lines) voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(new_room, voxel_size=0.5) o3d.visualization.draw_geometries([new_room, line_set, voxel_grid]) Upon visualization, only the top and bottom faces are adequately filled, while the walls show voxelization predominantly along the diagonals. I'm using Open3D version v0.17.0.
After encountering an issue with Open3D's voxelization process, I discovered that the problem had been raised and addressed on GitHub in this thread. The issue was resolved in Open3D version v0.18.0. However, when attempting to update the package using pip, I encountered difficulties as pip couldn't find the correct compatible tag on PyPI with the desired version. This issue arose due to using an Apple Mac M1, which has some compatibility issues with package tags. Thankfully, I found a solution documented here: SYSTEM_VERSION_COMPAT=0 pip install --upgrade open3d Running this command resolved the compatibility issue and allowed me to successfully update Open3D to the required version. The voxelization is working perfectly now.
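For anyone hitting the same thing, a small sanity check after upgrading might look like this (untested sketch; the exact voxel counts depend on the mesh and voxel size):

import open3d as o3d

print(o3d.__version__)  # should report 0.18.0 or newer

mesh = o3d.geometry.TriangleMesh.create_box(10, 10, 10)
voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh, voxel_size=0.5)
# with the fix in place the side walls are filled too, so the voxel count
# should be noticeably higher than under v0.17.0
print(len(voxel_grid.get_voxels()))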
2
2
77,996,739
2024-2-14
https://stackoverflow.com/questions/77996739/how-to-build-a-3d-matrix-from-1d-time-series-in-numpy
Suppose I have four (N,) vectors A, B, C, and D: import numpy as np N = 100 A = 1*np.ones(N) # => array([1,1,1,...]) B = 2*np.ones(N) # => array([2,2,2,...]) C = 3*np.ones(N) # => array([3,3,3,...]) D = 4*np.ones(N) # => array([4,4,4,...]) In my application, these are each an element of a matrix and the matrix varies over time (although in this example each is constant in time). I want a matrix with shape (N,2,2) such that I have a 2x2 matrix for each time step, like [[a,b],[c,d]], extended along axis 0. Is there a nicer way to stack these into that kind of shape than what I have below? My solution: A_ = np.reshape(A, (N,1,1)) B_ = np.reshape(B, (N,1,1)) C_ = np.reshape(C, (N,1,1)) D_ = np.reshape(D, (N,1,1)) AB = np.concatenate((A_, B_),axis=2) CD = np.concatenate((C_, D_),axis=2) ABCD = np.concatenate((AB,CD),axis=1) This gives: >>> ABCD array([[[1., 2.], [3., 4.]], [[1., 2.], [3., 4.]], [[1., 2.], [3., 4.]], ... ... [[1., 2.], [3., 4.]], [[1., 2.], [3., 4.]]]) As desired. It's just difficult to understand what it's doing, so I'm wondering if there's a better way.
Concatenate as a 3D intermediate with dstack, then you'll be in the ideal order to perform a simple reshape: np.dstack([A, B, C, D]).reshape(N, 2, 2) Output: array([[[1., 2.], [3., 4.]], [[1., 2.], [3., 4.]], [[1., 2.], [3., 4.]], ..., [[1., 2.], [3., 4.]]]) Intermediate: np.dstack([A,B,C,D]).shape # (1, 100, 4)
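An equivalent spelling, if you prefer to avoid the singleton leading axis that dstack introduces, is to stack along a new last axis and then reshape; it produces the same (N, 2, 2) result:

import numpy as np

ABCD = np.stack([A, B, C, D], axis=-1).reshape(N, 2, 2)  # shape (100, 2, 2)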
3
3
77,996,844
2024-2-14
https://stackoverflow.com/questions/77996844/how-to-explode-a-pandas-dataframe-that-has-nulls-in-some-rows-but-populated-in
So I have many dataframes coming in that need to be exploded. they look something like this: df = pd.DataFrame({'A': [1, [11,22], [111,222]], 'B': [2, [33,44], float('nan')], 'C': [3, [55,66], [333,444]], 'D': [4, [77,88], float('nan')] }) +-----------+---------+-----------+---------+ | A | B | C | D | +-----------+---------+-----------+---------+ | 1 | 2 | 3 | 4 | +-----------+---------+-----------+---------+ | [11,22] | [33,44] | [55,66] | [77,88] | +-----------+---------+-----------+---------+ | [111,222] | NaN | [333,444] | NaN | +-----------+---------+-----------+---------+ Typically if a column couldn't be exploded I'd just remove it from the column list like so: colList = df.columns.values.tolist() colList.remove("B") colList.remove("D") df = df.explode(colList) But that would leave me with a dataframe that looks like: +-----+---------+-----+---------+ | A | B | C | D | +-----+---------+-----+---------+ | 1 | 2 | 3 | 4 | +-----+---------+-----+---------+ | 11 | [33,44] | 55 | [77,88] | +-----+---------+-----+---------+ | 22 | [33,44] | 66 | [77,88] | +-----+---------+-----+---------+ | 111 | NaN | 333 | NaN | +-----+---------+-----+---------+ | 222 | NaN | 444 | NaN | +-----+---------+-----+---------+ I still need to explode those columns (B and D in example), but if I do, it'll throw an error due to the nulls. How can I successfully explode dataframes with this sort of problem?
One option could be to explode each column separately and deduplicate the index before concat: def explode_dedup(s): s = s.explode() return s.set_axis( pd.MultiIndex.from_arrays([s.index, s.groupby(level=0).cumcount()]) ) out = pd.concat({c: explode_dedup(df[c]) for c in df}, axis=1) Output: A B C D 0 0 1 2 3 4 1 0 11 33 55 77 1 22 44 66 88 2 0 111 NaN 333 NaN 1 222 NaN 444 NaN To explode a subset of the columns: cols = ['C', 'D'] others = df.columns.difference(cols) out = (df[others] .join(pd.concat({c: explode_dedup(df[c]) for c in cols}, axis=1) .droplevel(-1) )[df.columns] ) Output: A B C D 0 1 2 3 4 1 [11, 22] [33, 44] 55 77 1 [11, 22] [33, 44] 66 88 2 [111, 222] NaN 333 NaN 2 [111, 222] NaN 444 NaN Reproducible input: df = pd.DataFrame({'A': [1, [11,22], [111,222]], 'B': [2, [33,44], float('nan')], 'C': [3, [55,66], [333,444]], 'D': [4, [77,88], float('nan')] })
2
1
77,996,196
2024-2-14
https://stackoverflow.com/questions/77996196/how-to-duplicate-rows-based-on-the-number-of-weeks-between-two-dates
My input is this dataframe : df = pd.DataFrame( { 'ID': ['ID001', 'ID002', 'ID003'], 'DATE': ['24/12/2023', '01/02/2024', '12/02/2024'], } ) df['DATE'] = pd.to_datetime(df['DATE'], dayfirst=True) print(df) ID DATE 0 ID001 2023-12-24 1 ID002 2024-02-01 2 ID003 2024-02-12 I'm trying to duplicate the rows for each id N times with N being the number of weeks between the column DATE and the current date. At the end, each last row for a given id will have the week of the column DATE. For that I made the code below but it gives me a wrong output : number_of_weeks = (pd.Timestamp('now') - df['DATE']).dt.days // 7 final = df.copy() final['YEAR'] = final['DATE'].dt.isocalendar().year final['WEEK'] = final['DATE'].dt.isocalendar().week final['WEEKS'] = (pd.Timestamp('now') - df['DATE']).dt.days // 7 for index, row in final.iterrows(): for i in range(1, row['WEEKS'] + 1): final.loc[i, 'WEEK'] = i final = final.ffill().drop(columns='WEEKS') print(final) ID DATE YEAR WEEK 0 ID001 2023-12-24 2023 51 1 ID002 2024-02-01 2024 1 2 ID003 2024-02-12 2024 2 3 ID003 2024-02-12 2024 3 4 ID003 2024-02-12 2024 4 5 ID003 2024-02-12 2024 5 6 ID003 2024-02-12 2024 6 7 ID003 2024-02-12 2024 7 Have you guys encountered a similar problem ? I'm open to any suggestion. My expected output is this : ID DATE YEAR WEEK 0 ID001 24/12/2023 2023 51 1 ID001 24/12/2023 2023 52 2 ID001 24/12/2023 2024 1 3 ID001 24/12/2023 2024 2 4 ID001 24/12/2023 2024 3 5 ID001 24/12/2023 2024 4 6 ID001 24/12/2023 2024 5 7 ID001 24/12/2023 2024 6 8 ID001 24/12/2023 2024 7 ################################## 9 ID002 01/02/2024 2024 5 10 ID002 01/02/2024 2024 6 11 ID002 01/02/2024 2024 7 ################################## 12 ID003 12/02/2024 2024 7
I would use periods for that and Index.repeat: # compute exact difference in weeks n_weeks = (df['DATE'].dt.to_period('W') .rsub(pd.Timestamp('now').to_period('W')) .apply(lambda x: x.n) ) # repeat rows, add incrementing days, convert to week number out = (df.loc[df.index.repeat(n_weeks+1)] .assign(WEEK=lambda d: df['DATE'] .add(pd.to_timedelta(d.groupby(level=0).cumcount()*7, unit='D')) .dt.isocalendar().week ) ) Output: ID DATE WEEK 0 ID001 2023-12-24 51 0 ID001 2023-12-24 52 0 ID001 2023-12-24 1 0 ID001 2023-12-24 2 0 ID001 2023-12-24 3 0 ID001 2023-12-24 4 0 ID001 2023-12-24 5 0 ID001 2023-12-24 6 0 ID001 2023-12-24 7 1 ID002 2024-02-01 5 1 ID002 2024-02-01 6 1 ID002 2024-02-01 7 2 ID003 2024-02-12 7
3
2
77,996,089
2024-2-14
https://stackoverflow.com/questions/77996089/polars-to-dicts-is-the-order-of-the-list-of-dictionaries-guaranteed
I am leveraging the function .to_dicts() in my testing the resulting list of dictionaries has the same order as the dataframe. df = pl.DataFrame(data={ "name": ["Alice", "Bob", "Charlie"], "age": [30, 25, 35], "city": ["New York", "London", "Paris"], }).sort(by=['name'], descending=True) data = df.to_dicts() for record in data: print(record) Results in: {'name': 'Charlie', 'age': 35, 'city': 'Paris'} {'name': 'Bob', 'age': 25, 'city': 'London'} {'name': 'Alice', 'age': 30, 'city': 'New York'} Does anyone know if that is guaranteed to happen every time?
TLDR. Yes. This can be seen by inspecting this underlying rust function, which maps over the indices of the DataFrame in order.
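If you want to double-check this in your own code, a quick assertion along these lines should hold (untested sketch):

data = df.to_dicts()

# the row order of the dicts matches the row order of the DataFrame
assert [d["name"] for d in data] == df["name"].to_list()
assert data == list(df.iter_rows(named=True))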
4
2
77,995,796
2024-2-14
https://stackoverflow.com/questions/77995796/mocking-directly-imported-function
I am trying to write unit tests for my python project using pytest. I have a module from another repo called sql_services. Within sql_services there is a function called read_sql that I am trying to mock. So far, I have only been able to mock the function if I import the module like import sql_services and invoke sql_services.read_sql but I have been unable to mock it if I import the function like from sql_services import read_sql This is a problem because my codebase uses the latter method for importing the functions Here is the function I am trying to write a unit test for: from sql_services import read_sql def foo(): df = read_sql("SELECT * FROM schema.table") return df Here is the unit test file I have so far: import pytest import unittest.mock as mock import pandas as pd import sql_services from entry import foo @mock.patch("sql_services.read_sql") def test_read_sql(mock_read_sql): mock_read_sql.return_value = pd.DataFrame() df = sql_services.read_sql("SELECT * FROM schema.table") assert df.empty @mock.patch("sql_services.read_sql") def test_do_a_read(mock_read_sql): mock_read_sql.return_value = pd.DataFrame() df = foo() assert df.empty The first test passes and the second fails because it actually reads the data frame from the database. Is there any way I can mock the function from within foo without refactoring my entire codebase?
You just patch the correct name: @mock.patch("entry.read_sql") def test_do_a_read(mock_read_sql): mock_read_sql.return_value = pd.DataFrame() df = foo() assert df.empty entry.foo is using the global variable entry.read_sql to access the function you want to mock, not the global variable sql_services.read_sql.
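An equivalent option, if you prefer not to spell the target as a string, is mock.patch.object (sketch, assuming the test module also does import entry):

import entry

@mock.patch.object(entry, "read_sql")
def test_do_a_read(mock_read_sql):
    mock_read_sql.return_value = pd.DataFrame()
    assert foo().empty

Either way, the rule of thumb is: patch the name where it is looked up (the importing module), not where it is defined.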
2
2
77,994,983
2024-2-14
https://stackoverflow.com/questions/77994983/difference-between-and-expression-api
What is the difference between using square brackets [ ] and using Expression APIs like select, filter, etc. when trying to access data from a polars Dataframe? Which one to use when? a polars dataframe df = pl.DataFrame( { "a": ["a", "b", "a", "b", "b", "c"], "b": [2, 1, 1, 3, 2, 1], } )
Usually, it is advised to use the polars expression API as most of the methods are part of polars lazy API. The API defers the evaluation of many operations, such as selection, filtering, and mutation, until the result is actually needed. This allows for the query optimizations that make polars as efficient as it is. In contrast, accessing columns using square bracket notation only works in eager mode. Example. As a concrete example, consider reading the first element of a rather large .csv file (~700mb). We start by creating the .csv file and writing it to disk. import polars as pl pl.DataFrame({"col": [0] * 100_000_000}).write_csv("df.csv") Using square bracket notation ([). As square bracket notation does not work on LazyFrames, we'll need to use pl.read_csv to read the file into a DataFrame object. pl.read_csv("df.csv")["col"][0] On my machine, this takes roughly 200ms. Using polars lazy API. The same can be achieved using polars lazy API (pl.scan_csv, select, first, item). pl.scan_csv("df.csv").select("col").first().collect().item() The delay of the evaluation allows polars to avoid reading in the entire file and the execution only takes 300µs on my machine, i.e. we obtained a ~650x speed-up by using polars' lazy API. Of course, one could argue that much of the time here is saved by avoiding reading the entire .csv file instead of by using select over [. However, note that this is still due to the benefits of the lazy API.
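Applied to the small frame from the question, the two styles look like this (sketch; both return the first value of column "b", but only the expression form composes with lazy execution and query optimization):

# eager, square-bracket access
first_b = df["b"][0]

# expression API
first_b = df.select(pl.col("b").first()).item()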
5
2
77,995,211
2024-2-14
https://stackoverflow.com/questions/77995211/polars-manipulation-on-columns-by-dtype-creating-multiple-new-columns
data = {"col1": ['2020/01/01', '2020/02/01'], "col2": ['2020/01/01', '2020/02/01']} df = pl.DataFrame(data, schema={"col1": pl.String, "col2": pl.String}) df = df.with_columns( pl.col('col1').str.to_datetime(), pl.col('col2').str.to_datetime() ) df.with_columns( pl.col(pl.DATETIME_DTYPES).dt.year() ) With the given code I want to create new columns for every column selected by pl.DATETIME_DTYPES with the extracted year. For a single column I would apply .alias() but what to do for potentially n new columns? Any generic way?
For this, you can use the methods in the pl.Expr.name namespace. The most flexible method would be pl.Expr.name.map, but there is also pl.Expr.name.prefix, pl.Expr.name.suffix, etc. df.with_columns( pl.col(pl.DATETIME_DTYPES).dt.year().name.map(lambda s: s + "_year") ) Output.
shape: (2, 4)
┌─────────────────────┬─────────────────────┬───────────┬───────────┐
│ col1                ┆ col2                ┆ col1_year ┆ col2_year │
│ ---                 ┆ ---                 ┆ ---       ┆ ---       │
│ datetime[μs]        ┆ datetime[μs]        ┆ i32       ┆ i32       │
╞═════════════════════╪═════════════════════╪═══════════╪═══════════╡
│ 2020-01-01 00:00:00 ┆ 2020-01-01 00:00:00 ┆ 2020      ┆ 2020      │
│ 2020-02-01 00:00:00 ┆ 2020-02-01 00:00:00 ┆ 2020      ┆ 2020      │
└─────────────────────┴─────────────────────┴───────────┴───────────┘
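Since the new names here are just the old names plus a fixed suffix, the same result can be written a bit more compactly with name.suffix:

df.with_columns(
    pl.col(pl.DATETIME_DTYPES).dt.year().name.suffix("_year")
)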
3
2
77,994,817
2024-2-14
https://stackoverflow.com/questions/77994817/how-to-reshape-a-polars-dataframe
I am new to polars, but have worked quite a bit with pandas and numpy. Let's say I have a dataframe df like: Column 1 Column 2 Column 3 Column 4 Column 5 Column 6 A 1 W B 2 X C 3 Y D 4 Z I would like to do something like df.reshape((-1,3)) (in numpy) in order to obtain: Column 1 Column 2 Column 3 A 1 W B 2 X C 3 Y D 4 Z however I haven't been able to find a similar function in polars. df.melt() doesn't work in this case of course and doing df = pl.DataFrame(df.to_numpy().reshape((-1,3))) doesn't seem like an optimal solution. Would appreciate any help! Looked through the documentation to no avail.
I don't think polars has a specialised reshape function for the scenario outlined above. However, you could combine pl.concat to concatenate dataframe fragments created using pl.DataFrame.select. pl.concat([ df.select("Column 1", "Column 2", "Column 3"), df.select( pl.col("Column 4").alias("Column 1"), pl.col("Column 5").alias("Column 2"), pl.col("Column 6").alias("Column 3"), ), ]) Output. shape: (4, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Column 1 ┆ Column 2 ┆ Column 3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════║ β”‚ A ┆ 1 ┆ W β”‚ β”‚ C ┆ 3 ┆ Y β”‚ β”‚ B ┆ 2 ┆ X β”‚ β”‚ D ┆ 4 ┆ Z β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
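If the number of column groups is not fixed at three, the same select/concat idea can be generalised; a rough sketch (not from the original answer), assuming the column count is an exact multiple of width and reusing the names of the first width columns:
def reshape(df: pl.DataFrame, width: int) -> pl.DataFrame:
    names = df.columns[:width]
    parts = [
        df.select(
            pl.col(old).alias(new)
            for old, new in zip(df.columns[i:i + width], names)
        )
        for i in range(0, df.width, width)
    ]
    return pl.concat(parts)

reshape(df, 3)  # same result as the explicit version above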
4
3
77,992,562
2024-2-14
https://stackoverflow.com/questions/77992562/how-to-group-by-and-find-new-or-disappearing-items
I am trying to assess in a sales database whether the # of advertisements has changed. The example dataframe I am using is as such: df = pd.DataFrame({"offer-id": [1,1,2,2,3,4,5], "date": ["2024-02-10","2024-02-11","2024-02-10","2024-02-11","2024-02-11","2024-02-11","2024-02-10"], "price": [30,10,30,30,20,25,20]}) And looks like the below: I am now trying to get the # of items that were sold or newly added (I don't care which one, since once I have one the other should be fairly easily computable). E.g. in a perfect case a piece of code tells me that on 10th of February 3 offers were online (ID 1, 2, and 5) and one was sold (ID 5). Or alternatively, it tells me that on 11th of February 4 offers are online, and 2 of them are new (from that, since I know the day before 3 were online, I can also calculate that one must have sold). Is there a simple way of doing this? I have tried things like df.groupby(['date'])["offer-id"].agg({'nunique'}) but they are missing the "comparison to previous" timestep component.
You could aggregate as a set: offers = df.groupby('date', sort=True)['offer-id'].agg(set) date 2024-02-10 {1, 2, 5} 2024-02-11 {1, 2, 3, 4} Name: offer-id, dtype: object Then getting the diff will give you the new items: offers.diff() date 2024-02-10 NaN 2024-02-11 {3, 4} Name: offer-id, dtype: object Or the sold items: offers.diff(-1) date 2024-02-10 {5} 2024-02-11 NaN Name: offer-id, dtype: object If you want the number of items, chain str.len: offers.diff().str.len().fillna(0).convert_dtypes() date 2024-02-10 0 2024-02-11 2 Name: offer-id, dtype: Int64 And to get those as new columns, map: df['new'] = df['date'].map(offers.diff().str.len().fillna(0).convert_dtypes()) df['sold'] = df['date'].map(offers.diff(-1).str.len().fillna(0).convert_dtypes()) print(df) Output: offer-id date price new sold 0 1 2024-02-10 30 0 1 1 1 2024-02-11 10 2 0 2 2 2024-02-10 30 0 1 3 2 2024-02-11 30 2 0 4 3 2024-02-11 20 2 0 5 4 2024-02-11 25 2 0 6 5 2024-02-10 20 0 1
2
3
77,992,176
2024-2-14
https://stackoverflow.com/questions/77992176/pandas-idxmax-top-n-values
I have this code: import pandas as pd df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48], 'co2_emissions': [37.2, 19.66, 1712]}, index=['Pork', 'Wheat Products', 'Beef']) df['Max'] = df.idxmax(axis=1, skipna=True, numeric_only=True) df I need to find the n largest values. Here is a technique using apply/lambda, but it returns an error. df.apply(lambda s: s.abs().nlargest(2).index.tolist(), axis=1, skipna=True, numeric_only=True) TypeError: <lambda>() got an unexpected keyword argument 'numeric_only' Is there any way to obtain the top N results using idxmax? Is there any way to overcome the error I get when using the apply/lambda method?
Your error is due to passing the skipna and numeric_only parameters to apply. You can fix it with: (df.select_dtypes('number') .apply(lambda s: s.dropna().abs().nlargest(2) .index.tolist(), axis=1) ) Output: Pork [co2_emissions, consumption] Wheat Products [consumption, co2_emissions] Beef [co2_emissions, consumption] dtype: object A more efficient approach using numpy N = 2 tmp = df.select_dtypes('number') out = pd.Series( np.take_along_axis( tmp.columns.to_numpy()[:, None], np.argpartition(tmp, -N)[:, -N:], axis=0 )[:, ::-1].tolist(), index=df.index, )
2
1
77,976,508
2024-2-11
https://stackoverflow.com/questions/77976508/how-to-send-parallel-request-to-google-gemini
I have 107 images and I want to extract text from them, and I am using Gemini API, and this is my code till now: # Gemini Model model = genai.GenerativeModel('gemini-pro-vision', safety_settings=safety_settings) # Code images_to_process = [os.path.join(image_dir, image_name) for image_name in os.listdir(image_dir)] # list of 107 images prompt = """Carefully scan this images: if it has text, extract all the text and return the text from it. If the image does not have text return '<000>'.""" for image_path in tqdm(images_to_process): img = Image.open(image_path) output = model.generate_content([prompt, img]) text = output.text print(text) In this code, I am just taking one image at a time and extracting text from it using Gemini. Problem - I have 107 images and this code is taking ~10 minutes to run. I know that Gemini API can handle 60 requests per minute. How to send 60 images at the same time? How to do it in batch?
2024-10 update: I've added a Cookbook Quickstart on asynchronous requests to show how this works. The advice below is still correct. In synchronous Python you can use something like a ThreadPoolExecutor to make your requests in separate threads. The Gemini Python SDK has an async API though, which can be a bit more natural: $ python -m asyncio >>> import asyncio >>> import google.generativeai as genai >>> import PIL >>> model = genai.GenerativeModel('gemini-pro-vision') >>> imgs = ['/path/img.jpg', ...] >>> prompt = "..." >>> async def process_image(img: str) -> str: ... r = await model.generate_content_async([prompt, PIL.Image.open(img)]) ... # TODO: error handling ... return r.text >>> jobs = asyncio.gather(*[process_image(img) for img in imgs]) >>> results = await jobs # or run_until_complete(jobs) >>> results ['text is here', ...] This uses the implicit asyncio REPL event loop, in a real app you'll need to set up and use your own event loop. See also TaskGroups.
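If the 60-requests-per-minute quota becomes a problem when all 107 requests are fired at once, the concurrency can be capped with a semaphore; a rough sketch in the same spirit as the code above (the limit of 10 is just an assumed value, tune it to your quota and add error handling):
import asyncio
import google.generativeai as genai
import PIL.Image

model = genai.GenerativeModel('gemini-pro-vision')
limit = asyncio.Semaphore(10)  # at most 10 requests in flight at a time

async def process_image(path: str, prompt: str) -> str:
    async with limit:
        r = await model.generate_content_async([prompt, PIL.Image.open(path)])
        return r.text

async def main(paths: list[str], prompt: str) -> list[str]:
    return await asyncio.gather(*(process_image(p, prompt) for p in paths))

# results = asyncio.run(main(images_to_process, prompt))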
9
7
77,983,609
2024-2-12
https://stackoverflow.com/questions/77983609/merge-some-columns-in-a-polars-dataframe-and-duplicate-the-others
I have a similar problem to how to select all columns from a list in a polars dataframe, but slightly different: import polars as pl import numpy as np import string rng = np.random.default_rng(42) nr = 3 letters = list(string.ascii_letters) uppercase = list(string.ascii_uppercase) words, groups = [], [] for i in range(nr): word = ''.join([rng.choice(letters) for _ in range(rng.integers(3, 20))]) words.append(word) group = rng.choice(uppercase) groups.append(group) df = pl.DataFrame( { "a_0": np.linspace(0, 1, nr), "a_1": np.linspace(1, 2, nr), "a_2": np.linspace(2, 3, nr), "b_0": np.random.rand(nr), "b_1": 2 * np.random.rand(nr), "b_2": 3 * np.random.rand(nr), "words": words, "groups": groups, } ) print(df) shape: (3, 8) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a_0 ┆ a_1 ┆ a_2 ┆ b_0 ┆ b_1 ┆ b_2 ┆ words ┆ groups β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ══════════β•ͺ══════════β•ͺ══════════β•ͺ═════════════════β•ͺ════════║ β”‚ 0.0 ┆ 1.0 ┆ 2.0 ┆ 0.653892 ┆ 0.234362 ┆ 0.880558 ┆ OIww ┆ W β”‚ β”‚ 0.5 ┆ 1.5 ┆ 2.5 ┆ 0.408888 ┆ 0.213767 ┆ 1.833025 ┆ KkeB ┆ Z β”‚ β”‚ 1.0 ┆ 2.0 ┆ 3.0 ┆ 0.423949 ┆ 0.646378 ┆ 0.116173 ┆ NLOAgRxAtjWOHuQ ┆ O β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ I want again to concatenate the columns a_0, a_1,... into a column a, columns b_0, b_1,... into a column b. However, unlike the preceding question, this time a = [a_0; a_1; ...]. I.e., all the elements of a_0 go first, followed by all the elements of a_1, etc. All the columns whose name doesn't end with a _ followed by a digit (in this example, words and groups) must be duplicated enough times to match the length of a. Let nr and nc be the number of rows/columns in df. Then the output dataframe must have m*nr rows (m=3 in this case) and nc-2*(m-1) columns, i.e. shape: (9, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ words ┆ groups ┆ a ┆ b β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════β•ͺ═════β•ͺ══════════║ β”‚ OIww ┆ W ┆ 0.0 ┆ 0.653892 β”‚ β”‚ KkeB ┆ Z ┆ 0.5 ┆ 0.408888 β”‚ β”‚ NLOAgRxAtjWOHuQ ┆ O ┆ 1.0 ┆ 0.423949 β”‚ β”‚ OIww ┆ W ┆ 1.0 ┆ 0.234362 β”‚ β”‚ KkeB ┆ Z ┆ 1.5 ┆ 0.213767 β”‚ β”‚ NLOAgRxAtjWOHuQ ┆ O ┆ 2.0 ┆ 0.646378 β”‚ β”‚ OIww ┆ W ┆ 2.0 ┆ 0.880558 β”‚ β”‚ KkeB ┆ Z ┆ 2.5 ┆ 1.833025 β”‚ β”‚ NLOAgRxAtjWOHuQ ┆ O ┆ 3.0 ┆ 0.116173 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ How can I do that?
You can extend this answer to your previous question by @jqurious to include index, such as words and groups, as follows: ( df .unpivot(index=["words", "groups"]) .with_columns(pl.col("variable").str.replace("_.*", "")) .with_columns(index = pl.int_range(pl.len()).over("variable")) .pivot(on="variable", index=["index", "words", "groups"], values="value") .drop("index") ) Explanation Consider a simplified dataset with n = 2 and just three a_* / b_* columns. shape: (2, 8) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a_0 ┆ a_1 ┆ a_2 ┆ b_0 ┆ b_1 ┆ b_2 ┆ words ┆ groups β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ══════════β•ͺ══════════β•ͺ══════════β•ͺ══════════════════β•ͺ════════║ β”‚ 0.0 ┆ 1.0 ┆ 2.0 ┆ 0.285304 ┆ 1.261851 ┆ 0.295949 ┆ VUvcCgzrycGaKSve ┆ I β”‚ β”‚ 1.0 ┆ 2.0 ┆ 3.0 ┆ 0.460023 ┆ 1.89468 ┆ 1.042234 ┆ GXFVckCws ┆ O β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Using index in the initial pl.DataFrame.unpivot adds the content of the words / groups columns to each row after the unpivot. We also ensure that the index column is created within each group defined by words, groups, and variable. ( df .unpivot(index=["words", "groups"]) .with_columns(pl.col("variable").str.replace("_.*", "")) .with_columns(index = pl.int_range(pl.len()).over("variable")) .sort("words", "groups", "variable") ) shape: (12, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ words ┆ groups ┆ variable ┆ value ┆ index β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ f64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ════════β•ͺ══════════β•ͺ══════════β•ͺ═══════║ β”‚ GXFVckCws ┆ O ┆ a ┆ 1.0 ┆ 0 β”‚ β”‚ GXFVckCws ┆ O ┆ a ┆ 2.0 ┆ 1 β”‚ β”‚ GXFVckCws ┆ O ┆ a ┆ 3.0 ┆ 2 β”‚ β”‚ GXFVckCws ┆ O ┆ b ┆ 0.460023 ┆ 0 β”‚ β”‚ GXFVckCws ┆ O ┆ b ┆ 1.89468 ┆ 1 β”‚ β”‚ GXFVckCws ┆ O ┆ b ┆ 1.042234 ┆ 2 β”‚ β”‚ VUvcCgzrycGaKSve ┆ I ┆ a ┆ 0.0 ┆ 0 β”‚ β”‚ VUvcCgzrycGaKSve ┆ I ┆ a ┆ 1.0 ┆ 1 β”‚ β”‚ VUvcCgzrycGaKSve ┆ I ┆ a ┆ 2.0 ┆ 2 β”‚ β”‚ VUvcCgzrycGaKSve ┆ I ┆ b ┆ 0.285304 ┆ 0 β”‚ β”‚ VUvcCgzrycGaKSve ┆ I ┆ b ┆ 1.261851 ┆ 1 β”‚ β”‚ VUvcCgzrycGaKSve ┆ I ┆ b ┆ 0.295949 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Then, the final pivot does no longer group by just index but also the words and groups columns.
3
3
77,969,964
2024-2-9
https://stackoverflow.com/questions/77969964/deprecation-warning-with-groupby-apply
I have a python script that reads in data from a csv file. The code runs fine, but every time it runs I get this Deprecation message: DeprecationWarning: DataFrameGroupBy.apply operated on the grouping columns. This behavior is deprecated, and in a future version of pandas the grouping columns will be excluded from the operation. Either pass `include_groups=False` to exclude the groupings or explicitly select the grouping columns after groupby to silence this warning. The warning stems from this piece of code: fprice = df.groupby(['StartDate', 'Commodity', 'DealType']).apply(lambda group: -(group['MTMValue'].sum() - (group['FixedPriceStrike'] * group['Quantity']).sum()) / group['Quantity'].sum()).reset_index(name='FloatPrice') To my understanding, I am performing the apply function on my groupings, but then I am disregarding the groupings and not using them anymore as part of my dataframe. I am confused about the directions to silence the warning. Here is some sample data that this code uses: TradeID TradeDate Commodity StartDate ExpiryDate FixedPrice Quantity MTMValue -------- ---------- --------- --------- ---------- ---------- -------- --------- aaa 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 10 10 100.00 bbb 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 10 10 100.00 ccc 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 10 10 100.00 and here is the expected output from this data: TradeID TradeDate Commodity StartDate ExpiryDate FixedPrice Quantity MTMValue FloatPrice -------- ---------- --------- --------- ---------- ---------- -------- --------- ---------- aaa 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 10 10 100.00 0 bbb 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 10 10 100.00 0 ccc 01/01/2024 (com1,com2) 01/01/2024 01/01/2024 10 10 100.00 0
About include_groups parameter The include_groups parameter of DataFrameGroupBy.apply is new in pandas version 2.2.0. It is basically a transition period (2.2.0 -> 3.0) parameter added to help communicating a changing behavior (with warnings) and to tackle pandas Issue 7155. In most cases you should be able to just set it to False to silent the warning (see below). Setup Let's say you have a pandas DataFrame df and a dummy function myfunc for apply, and you want to Group by column 'c' Apply myfunc on each group >>> df a value c 0 foo 10 cat1 1 bar 20 cat2 2 baz 30 cat1 3 quux 40 cat2 >>> def myfunc(x): print(x, '\n') include_groups = True (Old behavior) This is the default behavior in pandas <2.2.0 (there is no include_groups parameter) pandas 2.2.0 and above (likely until 3.0) will still default to this but issue a DeprecationWarning. The grouping column(s), here 'c' is included in the DataFrameGroupBy >>> df.groupby('c').apply(myfunc) a value c 0 foo 10 cat1 2 baz 30 cat1 a value c 1 bar 20 cat2 3 quux 40 cat2 Now as mentioned in Issue 7155, keeping the grouping column c in the dataframe passed to apply is unwanted behavior. Most people will not expect c to be present here. The answer of bue has actually an example how this could lead to bugs; apply on np.mean and expect there be less columns (causes a bug if your grouping column is numerical). include_groups = False (new behavior) This will remove the warning in the pandas > 2.2.0 (<3.0) This will be the default in future version of pandas (likely 3.0) This is what you likely would want to have; drop the grouping column 'c': >>> df.groupby('c').apply(myfunc, include_groups=False) a value 0 foo 10 2 baz 30 a value 1 bar 20 3 quux 40 Circumventing need to use include_groups at all Option 1: Explicitly giving column names You may also skip the need for using the include_groups parameter at all by explicitly giving the list of the columns (as pointed out by the warning itself; "..or explicitly select the grouping columns after groupby to silence this warning..", and Cahit in their answer), like this: >>> df.groupby('c')[['a', 'value', 'c']].apply(myfunc) a value c 0 foo 10 cat1 2 baz 30 cat1 a value c 1 bar 20 cat2 3 quux 40 cat2 Empty DataFrame Columns: [] Index: [] Option 2: Setting the index before groupby You may also set the groupby column to the index, as pointed out by Stefan in the comments. >>> df.set_index('c').groupby(level='c').apply(myfunc) a value c cat1 foo 10 cat1 baz 30 a value c cat2 bar 20 cat2 quux 40 Empty DataFrame Columns: [] Index: [] Details just for this use case Your grouping columns are ['StartDate', 'Commodity', 'DealType'] In the apply function you use the following columns: ['MTMValue', 'FixedPriceStrike', 'Quantity'] i.e., you do not need any of the grouping columns in your apply, and therefore you can use include_groups=False which also removes the warning. fprice = df.groupby(['StartDate', 'Commodity', 'DealType']).apply(lambda group: -(group['MTMValue'].sum() - (group['FixedPriceStrike'] * group['Quantity']).sum()) / group['Quantity'].sum(), include_groups=False).reset_index(name='FloatPrice')
43
68
77,974,525
2024-2-10
https://stackoverflow.com/questions/77974525/what-is-the-right-way-to-await-cancelling-an-asyncio-task
The docs for cancel make it sound like you should usually propagate CancelledError exceptions: Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. Should the coroutine nevertheless decide to suppress the cancellation, it needs to call Task.uncancel() in addition to catching the exception. However, neither of the methods for detecting cancellation are awaitable: cancelling which tells you if cancelling is in progress, and cancelled tells you if cancellation is done. So the obvious way to wait for cancellation is this: foo_task.cancel() try: await foo_task except asyncio.CancelledError: pass There are lots of examples of this online even on SO. But the docs warn you asyncio machinery will "misbehave" if you do this: The asyncio components that enable structured concurrency, like asyncio.TaskGroup and asyncio.timeout(), are implemented using cancellation internally and might misbehave if a coroutine swallows asyncio.CancelledError Now you might be wondering why you would wait to block until a task is fully cancelled. The problem is the asyncio event loop only creates weak references to tasks, so if as your class is shutting down (e.g. due to a cleanup method or __aexit__) and you don't await every task you spawn, you might tear down the only strong reference while the task is still running, and then python will yell at you: ERROR base_events.py:1771: Task was destroyed but it is pending! So it seems to avoid the error I am specifically being forced into doing the thing I'm not supposed to do :P The only alternative seems to be weird unpythonic hackery like stuffing every task I make in a global set and awaiting them all at the end of the run.
You have a cleanup problem. The with statement is generally used to solve cleanup. Use asyncio.TaskGroup: async with asyncio.TaskGroup() as tg: tg.create_task(some_coro(...)).cancel()
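A slightly fuller sketch of that idea (not part of the original answer): the task group awaits every task it created, including the cancelled one, so nothing is garbage-collected while still pending and no CancelledError has to be swallowed by hand.
import asyncio

async def worker():
    try:
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        # do any cleanup here, then let the cancellation propagate
        raise

async def main():
    async with asyncio.TaskGroup() as tg:  # Python 3.11+
        task = tg.create_task(worker())
        task.cancel()
    # the block is only exited once every task, cancelled or not, has finished
    print(task.cancelled())  # True

asyncio.run(main())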
6
2
77,970,277
2024-2-9
https://stackoverflow.com/questions/77970277/setting-resources-dynamically-on-snakemake
Context I am running a snakemake (v.7.32.4) pipeline using the slurm task manager. I have set resources (time and memory for each rule) dynamically based on file size and # of tries, like this: rule index: resources: mem_mb = lambda wildcards, input, attempt: ( 200 * attempt ), runtime = lambda wildcards, input, attempt: ( "{minutes}min".format( minutes=max( int((input.size_mb / 5000) * attempt), 1) ) ) I have 2 related questions (should I split the post?): 1) Is it possible to set resources dynamically outside of the Snakefile? I tried to set that in the profile config file but didn't succeed (some time ago, so I cannot say exactly what I tried). 2) Having set resources dynamically inside the Snakefile, how do I do a dry run or a rulegraph? If I run a dry-run I get the following error: WorkflowError: Cannot parse runtime value into minutes for setting runtime resource: <TBD> This seems logical to me since the file doesn't exist yet. Nevertheless, I would like to know the specifics of all steps (except resources of course) to be run before actually running them. Is this possible? Something similar happens if I try to do a rulegraph: snakemake --profile Config/Profiles/slurm -np --rulegraph | dot -Tsvg > rulegraph.svg In this case, I get an empty file (probably because of the error in the dry run?).
1) Is it possible to set resources dynamically outside of the Snakefile? Depends on the snakemake version. For v8.14.0, yes. For version 7.32.4, sort of. See both scenarios below. Snakemake version 7.32.4 You can call a function in the Snakefile and declare the function elsewhere, as proposed by @SultanOrazbayev: Snakefile: from resources import get_runtime rule index: resources: runtime = get_runtime resources.py: def get_runtime(wildcards: dict, input: str|list[str], attempt: int) -> str: try: minutes = max(int((input.size_mb / 5_000) * attempt), 1) except FileNotFoundError: minutes = 10 # this is some test value return f"{minutes}min" Snakemake version 8.14.0 Install the slurm plugin: pip install snakemake-executor-plugin-slurm Then you can specify resources dynamically (entirely) in your workflow profile config.yaml file: executor: slurm set-resources: index: runtime: f"{max(int((input.size_mb / 5_000) * attempt), 1)}min" NOTE: I only tried snakemake v8 using the slurm plugin. And I found this solution in the slurm plugin documentation. Hence, I don't know whether using the workflow profile as described above would work without the slurm plugin or not. 2) Having set resources dynamically inside the Snakefile, how do I do a dry run or a rulegraph? This question only applies to snakemake version 7 or lower. On snakemake v8 (I specifically tried v8.14.0) dry runs and graphs work fine, even if the file does not exist yet. As for snakemake v7.32.4, one way of solving it is by handling the error as described above (proposed by @SultanOrazbayev).
3
1
77,989,391
2024-2-13
https://stackoverflow.com/questions/77989391/how-do-i-extract-data-from-a-document-using-the-openai-api
I want to extract key terms from rental agreements. To do this, I want to send the PDF of the contract to an AI service that must return some key terms in JSON format. What are some of the different libraries and companies that can do this? So far, I've explored the OpenAI API, but it isn't as straightforward as I would have imagined. When using the ChatGPT interface, it works very well, so I thought using the API should be equally simple. It seems like I need to read the PDF text first and then send the text to OpenAI API. Any other ideas to achieve this will be appreciated.
Note: The code below works with the OpenAI Assistants API v1. In April 2024, the OpenAI Assistants API v2 was released. See the migration guide. What you want to use is the Assistants API. As of today, there are 3 tools available: Code Interpreter Knowledge Retrieval Function calling You need to use the Knowledge Retrieval tool. As stated in the official OpenAI documentation: Retrieval augments the Assistant with knowledge from outside its model, such as proprietary product information or documents provided by your users. Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries. I've built a customer support chatbot in the past. Take this as an example. In your case, you want the assistant to use your PDF file (I used the knowledge.txt file). Take a look at my GitHub and YouTube. customer_support_chatbot.py import os from openai import OpenAI client = OpenAI() OpenAI.api_key = os.getenv('OPENAI_API_KEY') # Step 1: Upload a File with an "assistants" purpose my_file = client.files.create( file=open("knowledge.txt", "rb"), purpose='assistants' ) print(f"This is the file object: {my_file} \n") # Step 2: Create an Assistant my_assistant = client.beta.assistants.create( model="gpt-3.5-turbo-1106", instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", name="Customer Support Chatbot", tools=[{"type": "retrieval"}] ) print(f"This is the assistant object: {my_assistant} \n") # Step 3: Create a Thread my_thread = client.beta.threads.create() print(f"This is the thread object: {my_thread} \n") # Step 4: Add a Message to a Thread my_thread_message = client.beta.threads.messages.create( thread_id=my_thread.id, role="user", content="What can I buy in your online store?", file_ids=[my_file.id] ) print(f"This is the message object: {my_thread_message} \n") # Step 5: Run the Assistant my_run = client.beta.threads.runs.create( thread_id=my_thread.id, assistant_id=my_assistant.id, instructions="Please address the user as Rok Benko." ) print(f"This is the run object: {my_run} \n") # Step 6: Periodically retrieve the Run to check on its status to see if it has moved to completed while my_run.status in ["queued", "in_progress"]: keep_retrieving_run = client.beta.threads.runs.retrieve( thread_id=my_thread.id, run_id=my_run.id ) print(f"Run status: {keep_retrieving_run.status}") if keep_retrieving_run.status == "completed": print("\n") # Step 7: Retrieve the Messages added by the Assistant to the Thread all_messages = client.beta.threads.messages.list( thread_id=my_thread.id ) print("------------------------------------------------------------ \n") print(f"User: {my_thread_message.content[0].text.value}") print(f"Assistant: {all_messages.data[0].content[0].text.value}") break elif keep_retrieving_run.status == "queued" or keep_retrieving_run.status == "in_progress": pass else: print(f"Run status: {keep_retrieving_run.status}") break
2
2
77,973,380
2024-2-10
https://stackoverflow.com/questions/77973380/how-to-validate-email-uniqueness-with-pydantic-and-fastapi
I'm trying to validate an input email from the request body, and I have built a custom field validator on the model. Here is the code of RegisterModel: from pydantic import BaseModel, EmailStr, Field, field_validator from repository.repository_user import UserRepository from pprint import pprint class RegisterModel(BaseModel): name:str email:EmailStr password:str = Field(..., min_length=7) @field_validator('email') @classmethod def email_must_unique(cls, v): repo = UserRepository() result = repo.find({'email': v}) pprint(result) return result The email_must_unique method creates a new instance of the UserRepository class and calls the find method from UserRepository to find a specific user based on the email. Actually, the email_must_unique method does not fully validate the email yet; it just fetches the matching user, but I am already facing an error. Here is the code of UserRepository: from pymongo.database import Database from fastapi import Depends from config.db import db_conn class UserRepository: def __init__(self, db: Database = Depends(db_conn)): self.repository = db.users # users is mongo collection def find(self, filter: dict): result = self.repository.find_one(filter) return result With this code I'm facing an error like this: self.repository = db.users ^^^^^^^^ AttributeError: 'Depends' object has no attribute 'users' I have no idea what caused it. Can you give me a solution to this error, or maybe an alternative way to validate email uniqueness?
As mentioned in the comments by @Chiheb Nexus, Do not call the Depends function directly. Instead, use the Annotated dependency which is more graceful. For example, the config/db.py file should look like this: # config/db.py from typing import Annotated from fastapi import Depends from pymongo import MongoClient from pymongo.database import Database def get_db() -> Database: client = MongoClient("mongodb://localhost:27017/") return client.some_database DbConn = Annotated[Database, Depends(get_db)] # <-- this is the annotated depend You can then use it like this: from config.db import DbConn class UserRepository: def __init__(self, db: DbConn): # <- FastAPI will call `DbConn` self.repository = db.users def find(self, filter: dict): result = self.repository.find_one(filter) return result In addition, db in the __init__ parameter is annotated via DbConn so IDE knows its type. Validating the uniqueness of data is a task best left to the database rather than Pydantic. In other words, you should insert data and handle unique errors, DuplicateKeyError in pymongo.
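To follow the last suggestion (let MongoDB enforce uniqueness), a rough sketch of a register route, assuming RegisterModel without the failing validator; the path, status code and collection name are illustrative, and the unique index should really be created once at startup rather than per request:
from fastapi import FastAPI, HTTPException
from pymongo.errors import DuplicateKeyError

app = FastAPI()

@app.post("/register")
def register(payload: RegisterModel, db: DbConn):
    db.users.create_index("email", unique=True)  # idempotent, but do it at startup in practice
    try:
        db.users.insert_one(payload.model_dump())
    except DuplicateKeyError:
        raise HTTPException(status_code=409, detail="email already registered")
    return {"status": "ok"}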
2
1
77,989,609
2024-2-13
https://stackoverflow.com/questions/77989609/how-to-enable-cors-in-fastapi-for-local-html-file-loaded-via-a-file-url
I am trying to enable CORS in FastAPI on my localhost with credentials enabled. According to the docs we must explicitly set allow_origins in this case: from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware, Response app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=['http://localhost:8000'], allow_credentials=True, allow_methods=['*'], allow_headers=['*']) @app.get('/') def main(): return Response('OK', status_code=200) However, when making a request it does fail with CORS Missing Allow Origin [Cross-source (cross-origin) request blocked: The same origin policy disallows reading the remote resource at http://[::1]:8000/create_session/none. (Reason: CORS header 'Access-Control-Allow-Origin' is missing). Status code: 200.] const response = await fetch('http://localhost:8000/', {credentials: "include"}); The client is Firefox with a local file (file://.../main.html) opened. I already tried all solutions from How can I enable CORS in FastAPI?. What's wrong? Edit: My question is not a duplicate of How to access FastAPI backend from a different machine/IP on the same local network?, because the server and the client are on the same (local)host. Nevertheless I tried setting the suggested --host 0.0.0.0 and the error remains the same. Also it's not a duplicate of FastAPI is not returning cookies to React frontend, because it suggests setting the CORS origin in the same way as the official docs, which does not work as I described above.
Since you haven't provided the actual CORS error in your question, it should be specified here for future readers visiting this post. When sending a JavaScript HTTP request to a server, in this case a FastAPI backend, through a local HTML file that has been loaded into the browser via a file:/// URLβ€”e.g., by dragging-dropping the file into the browser or just double-clicking the fileβ€”with the URL in the browser's address bar looking like this: file:///C:/...index.html browsers that apply CORS would output the following errors. If you are using Chrome (or a chromium-based) browser, you would see a CORS error (Cross-Origin Resource Sharing error: MissingAllowOriginHeader) under the Status column in the Network tab/section, indicating that the response to the CORS request is missing the required Access-Control-Allow-Origin header, in order to determine whether or not the resource can be accessed. The error is made more clear by looking at the console, after submitting the request: Access to fetch at 'http://localhost:8000/' from origin 'null' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. GET http://localhost:8000/ net::ERR_FAILED 200 (OK) If you are using FireFox instead, you would see a similar error in the console: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8000/. (Reason: CORS header 'Access-Control-Allow-Origin' missing). Status code: 200 The reason for the error(s) raised above is that you are trying to perform a cross-origin request through a script that is running via a file:/// URL. In that case, the client's origin is null, but you only added http://localhost:8000 to the allowed origins. Thus, a quick fix would be to add null to the origins list, e.g., Access-Control-Allow-Origin: null. In FastAPI, you could do that as shown in the example below (however, it is not recommended to do soβ€”see more details below the example). Working Example app.py from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware app = FastAPI() origins = ['null'] # NOT recommended - see details below app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) @app.get('/') def main(): return 'ok' index.html <!DOCTYPE html> <html> <body> <h1>Send JS request</h1> <button onclick="getData()">Click me</button> <script> function getData() { fetch('http://localhost:8000/') .then(resp => resp.json()) // or resp.text(), etc. .then(data => { console.log(data); }) .catch(error => { console.error(error); }); } </script> </body> </html> Note However, it is not recommended doing that, as it is insecure. As described in MDN's documentation: Note: null should not be used: "It may seem safe to return Access-Control-Allow-Origin: "null", but the serialization of the Origin of any resource that uses a non-hierarchical scheme (such as data: or file:) and sandboxed documents is defined to be "null". Many User Agents will grant such documents access to a response with an Access-Control-Allow-Origin: "null" header, and any origin can create a hostile document with a "null" Origin. The "null" value for the ACAO header should therefore be avoided." Another great article on the risks of using the null origin, as well as misconfiguring CORS in general, can be found here. 
As explained in several related questions, as shown here and here, one should rather serve the HTML file(s), e.g., index.html above, using a local web server. FastAPI can do that for you as well, using the HTMLResponse or even Jinja2Templates, as demonstrated in this answer, this answer, as well as this, this and this. Thus, I would strongly recommend having a look at those posts and serve your HTML file(s) through FastAPI, which is pretty straightforward, instead of adding null to the origins. Related answers on CORS can also be found here and here.
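For completeness, a minimal sketch of serving the HTML page through FastAPI itself, so that both the page and the API share the http://localhost:8000 origin and no CORS configuration is needed at all (the /page path and the index.html location are assumptions):
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get('/')
def main():
    return 'ok'

@app.get('/page')
def page():
    # open http://localhost:8000/page in the browser; the fetch('http://localhost:8000/')
    # call in index.html is then a same-origin request
    return FileResponse('index.html')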
3
3
77,982,766
2024-2-12
https://stackoverflow.com/questions/77982766/delete-word-that-follows-a-specific-word
I have e-mails and transcripts in German from conversations with customers. They include personal identifiable information that I need to remove. So it is a data anonymisation task. The text would be "Hello Mr. Smith", "Dear Mr Smith", "Hello Lisa" etc. followed by the conversation. I need to keep the conversation for further analysis. Three solutions come to my mind: A) compiling a list of names: At this stage, I do not know all the names that will be mentioned. I have no access to the CRM database. So compiling a list and adding it to the stopwords corpus will be time-consuming and/or error prone. B) Part-of-Speech Tagging (PoS) / Named Entity Recognition (NER): This would also delete product names and places. I need to keep this information. So NER is unfortunately not an option. C) Regular expression (regex): Use RegEx to match the salutation, e.g. "Dear", and delete the subsequent word. This answer gave me a good starting point, but it assumes that I know the word that follows the name I need to delete which I don't. import re print re.sub(r'(?<=copy )(.*)(?=from)', '', "copy table values from 'a.dat';") How could I modify the code to delete the word that follows the salutation? I read up on lookaround and played around a bit on regex101 but couldn't figure it out. Also, would I need to tokenize the string first? A solution with pandas str.replace is welcome too.
Version 2a You could approach this by setting up lists of salutations and of honorifics (Mr, Mrs, Dr etc), and if any word in your data matched one of these working through the first few words you could remove it. This approach makes it easy to quickly add new salutations and honorifics as you spot them in the transcripts. This first piece of code is an improvement on my earlier answer (which is listed at the end). @Simone added the requirement that the salutation could be a pair of words (Good morning rather than just Hello - for example). The new code copes with this, and also works for double names (such as John Doe as well as just John). It also works with a fairly wide range of less conventional ways of addressing someone (and is not fooled by capitalisation or the lack of it) which might appear in an email or a transcript, as hopefully the example text in the code shows. However, although it is more bullet-proof than the first version, it is not perfect - as you might expect from code that is trying to second-guess human responses. import re text = ["Dear mrs chan Blah blah blah", "Dear fred smith, How are you?", "greetings ms duncan-jones, what Gives?", " My Dear Lisa , this is cool.", "Hello Dr Foster It's good, to hear, from you.", "Meera Whatcha thinkin?", "Good morning Ms Daisy Martin, Hope you are keeping well.", "Hi there seema hows things?", "Jaz Yes ok", "OK raffi you may be right"] salutations_1 = ["dear", "dearest", "hello", "hi", "hiya", "greetings", "salutations", "ok", "good", "my"] salutations_2 = ["morning", "day", "afternoon", "evening", "there", "dear", "dearest"] honorifics = ["mr", "mrs", "dr", 'ms', "sir", "master"] for line in text: line = line.replace(',', ' #', 1) # Mark the position of the first comma line = line.strip() words = re.split(r"[\s]+", line) try: # First salutation word if words[0].lower() in salutations_1: words = words[1:] # Second salutation word if words[0].lower() in salutations_2: words = words[1:] # Honorifics if words[0].lower() in honorifics: words = words[1:] # Remove 1st name (eg John) words = words[1:] # Remove comma marker if present if words[0] == '#': words = words[1:] # Remove possible second name (eg Doe) and comma marker if words[1] == '#': words = words[2:] joined = ' '.join(words) finished = joined.replace(' #', ',') # Restore any unused comma print(finished) except: print("**Line too short**") This gives: Blah blah blah How are you? what Gives? this is cool. It's good, to hear, from you. Whatcha thinkin? Hope you are keeping well. hows things? Yes ok you may be right It has removed all the salutations, honorifics and names without losing any of the accompanying text. This is the earlier version of the code, left here for completeness. import re text = ["Dear mrs smith Blah", "Dear fred, how are you. It is a while, since etc", " Dear Lisa, this is great", "Hello Dr Foster, It's good to hear from you."] salutations = ["dear", "hello", "hi"] honorifics = ["mr", "mrs", "dr"] for line in text: line = line.strip() words = re.split(r"[\s]+", line) #print(words) for word in words: if word.lower() in salutations: words.remove(word) for word in words: if word.lower() in honorifics: words.remove(word) del words[0] # Remove the name print(' '.join(words)) From the data above, this prints out: Blah how are you. It is a while, since etc this is great It's good to hear from you.
2
2
77,990,385
2024-2-13
https://stackoverflow.com/questions/77990385/complex-c-lifetime-issue-in-python-bindings-between-c-and-numpy
I'm looking for advice on how to handle a complex lifetime issue between C++ and numpy / Python. Sorry for the wall of text, but I wanted to provide as much context as possible. I developed cvnp, a library that offers casts between bindings between cv::Mat and py::array objects, so that the memory is shared between the two, when using pybind11. It is originally based on a SO answer by Dan MaΕ‘ek . All is going well and the library is used in several projects, including robotpy, which is a Python library for the FIRST Robotics Competition. However, an issue was raised by a user, that deals with the lifetime of linked cv::Mat and py::array objects. In the direction cv::Mat -> py::array, all is well, as mat_to_nparray will create a py::array that keeps a reference to the linked cv::Mat via a "capsule" (a python handle). However, in the direction py::array -> cv::Mat, nparray_to_mat the cv::Mat will access the data of the py::array, without any reference to the array (so that the lifetime of the py::array is not guaranteed to be the same as the cv::Mat) See mat_to_nparray: py::capsule make_capsule_mat(const cv::Mat& m) { return py::capsule(new cv::Mat(m) , [](void *v) { delete reinterpret_cast<cv::Mat*>(v); } ); } pybind11::array mat_to_nparray(const cv::Mat& m) { return pybind11::array(detail::determine_np_dtype(m.depth()) , detail::determine_shape(m) , detail::determine_strides(m) , m.data , detail::make_capsule_mat(m) ); } and nparray_to_mat: cv::Mat nparray_to_mat(pybind11::array& a) { ... cv::Mat m(size, type, is_not_empty ? a.mutable_data(0) : nullptr); return m; } This worked well so far, until a user wrote this: a bound c++ function that returns the same cv::Mat that was passed as an argument m.def("test", [](cv::Mat mat) { return mat; }); some python code that uses this function img = np.zeros(shape=(480, 640, 3), dtype=np.uint8) img = test(img) In that case, a segmentation fault may occur, because the py::array object is destroyed before the cv::Mat object, and the cv::Mat object tries to access the data of the py::array object. However, the segmentation fault is not systematic, and depends on the OS + python version. I was able to reproduce it in CI via this commit using ASAN. 
The reproducing code is fairly simple: void test_lifetime() { // We need to create a big array to trigger a segfault auto create_example_array = []() -> pybind11::array { constexpr int rows = 1000, cols = 1000; std::vector<pybind11::ssize_t> a_shape{rows, cols}; std::vector<pybind11::ssize_t> a_strides{}; pybind11::dtype a_dtype = pybind11::dtype(pybind11::format_descriptor<int32_t>::format()); pybind11::array a(a_dtype, a_shape, a_strides); // Set initial values for(int i=0; i<rows; ++i) for(int j=0; j<cols; ++j) *((int32_t *)a.mutable_data(j, i)) = j * rows + i; printf("Created array data address =%p\n%s\n", a.data(), py::str(a).cast<std::string>().c_str()); return a; }; // Let's reimplement the bound version of the test function via pybind11: auto test_bound = [](pybind11::array& a) { cv::Mat m = cvnp::nparray_to_mat(a); return cvnp::mat_to_nparray(m); }; // Now let's reimplement the failing python code in C++ // img = np.zeros(shape=(480, 640, 3), dtype=np.uint8) // img = test(img) auto img = create_example_array(); img = test_bound(img); // Let's try to change the content of the img array *((int32_t *)img.mutable_data(0, 0)) = 14; // This triggers an error that ASAN catches printf("img data address =%p\n%s\n", img.data(), py::str(img).cast<std::string>().c_str()); } I'm looking for advices on how to handle this issue. I see several options: An ideal solution would be to call pybind11::array.inc_ref() when constructing the cv::Mat inside nparray_to_mat make sure that pybind11::array.dec_ref() is called when this particular instance will be destroyed. However, I do not see how to do it. Note: I know that cv::Mat can use a custom allocator, but it is useless here, as the cv::Mat will not allocate the memory itself, but will use the memory of the py::array object. Thanks for reading this far, and thanks in advance for any advice!
Well, the solution was inspired by cv_numpy.cpp in OpenCV source code, and was implemented thanks to the help of Dustin Spicuzza. It uses a custom MatAllocator that uses a numpy array as the data pointer, and will refer to this data instead of allocating. // Translated from cv2_numpy.cpp in OpenCV source code class CvnpAllocator : public cv::MatAllocator { public: CvnpAllocator() = default; ~CvnpAllocator() = default; // Attaches a numpy array object to a cv::Mat static void attach_nparray(cv::Mat &m, pybind11::array& a) { static CvnpAllocator instance; cv::UMatData* u = new cv::UMatData(&instance); u->data = u->origdata = (uchar*)a.mutable_data(0); u->size = a.size(); // This is the secret sauce: we inc the number of ref of the array u->userdata = a.inc_ref().ptr(); u->refcount = 1; m.u = u; m.allocator = &instance; } cv::UMatData* allocate(int dims0, const int* sizes, int type, void* data, size_t* step, cv::AccessFlag flags, cv::UMatUsageFlags usageFlags) const override { throw py::value_error("CvnpAllocator::allocate \"standard\" should never happen"); // return stdAllocator->allocate(dims0, sizes, type, data, step, flags, usageFlags); } bool allocate(cv::UMatData* u, cv::AccessFlag accessFlags, cv::UMatUsageFlags usageFlags) const override { throw py::value_error("CvnpAllocator::allocate \"copy\" should never happen"); // return stdAllocator->allocate(u, accessFlags, usageFlags); } void deallocate(cv::UMatData* u) const override { if(!u) return; // This function can be called from anywhere, so need the GIL py::gil_scoped_acquire gil; assert(u->urefcount >= 0); assert(u->refcount >= 0); if(u->refcount == 0) { PyObject* o = (PyObject*)u->userdata; Py_XDECREF(o); delete u; } } }; cv::Mat nparray_to_mat(pybind11::array& a) { bool is_contiguous = is_array_contiguous(a); bool is_not_empty = a.size() != 0; if (! is_contiguous && is_not_empty) { throw std::invalid_argument("cvnp::nparray_to_mat / Only contiguous numpy arrays are supported. / Please use np.ascontiguousarray() to convert your matrix"); } int depth = detail::determine_cv_depth(a.dtype()); int type = detail::determine_cv_type(a, depth); cv::Size size = detail::determine_cv_size(a); cv::Mat m(size, type, is_not_empty ? a.mutable_data(0) : nullptr); if (is_not_empty) { detail::CvnpAllocator::attach_nparray(m, a); //, ndims, size, type, step); } return m; } See code in the repository here and here @dan-maΕ‘ek: your input would be welcome!
2
0
77,989,093
2024-2-13
https://stackoverflow.com/questions/77989093/isinstance-and-not-vs-for-checking-x
I want an if-statement to check if x is an empty tuple. Is there any significant advantage of writing the if-statement as if isinstance(x, tuple) and not x: vs if x == (): given the latter is shorter to write and simpler to read?
More often than not, you rely on object interfaces rather than specific types, so just if not x is sufficient and this is the usual Pythonic expression (see comments by @Π‘Π΅Ρ€Π³Π΅ΠΉΠšΠΎΡ… and @MarkRansom). The advantage of the first case is that tuple subclasses will be accepted, which is usually what is expected in a duck-typed language. The second one checks whether x compares equal to an empty tuple, which could be true for any class that implements a suitable __eq__ method. If neither of these is acceptable, then you need to first compare the strict type with type and then check if it's empty. Here is an example of the differences: class Foo(tuple): pass class Bar: def __eq__(self, other): return len(other) == 0 things = {'tuple': tuple(), 'Foo': Foo(), 'Bar': Bar() } for name, thing in things.items(): if thing == (): print(f"{name} is an empty object with __eq__ implementation against empty tuple") if isinstance(thing, tuple) and not thing: print(f"{name} is an empty tuple, or tuple subclass") if type(thing) == type(()) and thing == (): print(f"{name} is an empty tuple") Output: tuple is an empty object with __eq__ implementation against empty tuple tuple is an empty tuple, or tuple subclass tuple is an empty tuple Foo is an empty object with __eq__ implementation against empty tuple Foo is an empty tuple, or tuple subclass Bar is an empty object with __eq__ implementation against empty tuple So it all depends on how strict or loose you want to be with your comparison.
2
3
77,953,389
2024-2-7
https://stackoverflow.com/questions/77953389/problem-importing-dsplot-library-in-pycharm
I am exploring tree plots with the following library and code: from dsplot.graph import Graph graph = Graph( {0: [1, 4, 5], 1: [3, 4], 2: [1], 3: [2, 4], 4: [], 5: []}, directed=True ) graph.plot() When I run the code, I get the following error: from dsplot.graph import Graph ModuleNotFoundError: No module named 'dsplot' I have tried pip installing the dsplot library with this command on the PyCharm terminal: pip install dsplot But this then causes another error: Collecting dsplot Using cached dsplot-0.9.0-py3-none-any.whl (8.8 kB) Collecting pygraphviz<2.0,>=1.7 Using cached pygraphviz-1.12.tar.gz (104 kB) ERROR: Error [WinError 3] The system cannot find the path specified while executing command pip subprocess to install build dependencies Installing build dependencies ... error ERROR: Could not install packages due to an OSError: [WinError 3] The system cannot find the path specified What is the reason for this problem installing the library and how can I resolve it?
It looks like you have not installed Graphviz, which pygraphviz needs in order to build. Install Graphviz Try downloading it from the official Graphviz website and make sure it's added to your PATH by checking "Add Graphviz to the System PATH for the current user" during installation. In your terminal, run dot -V and if it outputs your Graphviz version, you should be all set to run pip install dsplot. If dot -V does not return the version: Make sure you checked this box during installation: If you checked the box and it is still not in your PATH: Find where Graphviz is installed; it is likely in "C:\Program Files\Graphviz" but is dependent on installation and OS. Click on this directory, and right click on "bin" to copy the path. Open Start Search, type "env", and select "Edit the system environment variables". Click "Environment Variables" in System Properties. Find the "Path" variable under "System Variables" and select "Edit". Click "New". Paste in the path of Graphviz, which is likely something like C:\Program Files\Graphviz\bin. Click "Ok" on all open windows to apply the changes. After adding Graphviz to your path, open a terminal and once again try dot -V. If that returns the version, then you should be able to run pip install dsplot.
6
4
77,987,278
2024-2-13
https://stackoverflow.com/questions/77987278/feature-selection-using-backward-feature-selection-in-scikit-learn-and-pca
i have calculated the scores of all the columns in my DF,which has 312 columns and 650 rows, using PCA with following code: all_pca=PCA(random_state=4) all_pca.fit(tt) all_pca2=all_pca.transform(tt) plt.plot(np.cumsum(all_pca.explained_variance_ratio_) * 100) plt.xlabel('Number of components') plt.grid(which='both', linestyle='--', linewidth=0.5) plt.xticks(np.arange(0, 330, step=25)) plt.yticks(np.arange(0, 110, step=10)) plt.ylabel('Explained variance (%)') plt.savefig('elbow_plot.png', dpi=1000) and the result is the following image. My main goal is to use only important features for Random forest regression, Gradient boosting, OLS regression and LASSO. As you can see, 100 columns describe 95.2% of the variance in my Dataframe. Can I use this threshold (100 Columns) for backward feature selection?
As you can see, 100 columns describe 95.2% of the variance in my Dataframe. The graph tells you that 100 PCA components capture 95% of the variance. These 100 components do not correspond to 100 individual features. Each PCA component is made by combining all features together, which gives you that one component. When 100 PCA components capture 95% of the variance, it means that your original 312 columns can be linearly combined into fewer (100) new columns, and you only lose 5% of the information in the process. It's a measure of the intrinsic dimensionality of the feature set. Can I use this threshold (100 Columns) for backward feature selection? The 100 PCA components that explain 95% don't really tell you which individual features (or how many of them) are important, as each PCA component is a mix of all features. Also, the 95% refers to variability of the features - it doesn't mean the 100 PCA components will be useful for the target. Perhaps you could use the 100 components to guide your choice between using forward vs. backward feature selection. In this case, the intrinsic dimensionality of the dataset is closer to 100 than it is to 312, so I'd opt for forward selection as it seems like the number of useful features might be less than the original size. If you were to run PCA before feature selection, it would create new features (PCA components) out of the original ones, and in the process you could lose interpretability, as the new features can be messy linear combinations of the original ones. One method of identifying useful features is to run forward (or backward) selection on a random forest using the original features, and stop when you hit a score threshold like 95% validation accuracy. Then you can use those selected features for other models. Feature selection is relatively time-consuming as it requires lots of repeated model fitting. Permutation importance is another way of identifying key features.
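A rough sketch of the forward-selection and permutation-importance ideas mentioned above, using scikit-learn (X, y and the chosen numbers are placeholders, not values from the question):
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.inspection import permutation_importance

rf = RandomForestRegressor(random_state=4)

# forward selection: greedily add features based on cross-validated score
sfs = SequentialFeatureSelector(rf, n_features_to_select=50, direction="forward")
sfs.fit(X, y)
selected_columns = X.columns[sfs.get_support()]

# alternative: rank features of a fitted model by permutation importance
rf.fit(X, y)
importances = permutation_importance(rf, X, y, n_repeats=5, random_state=4)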
2
1
77,991,049
2024-2-13
https://stackoverflow.com/questions/77991049/is-there-a-way-to-print-a-formatted-dictionary-to-a-python-log-file
I've got a logging handler setup that prints to stream and file. I'm working with a few dictionaries and modifying dictionaries. Is there a way to format a dictionary for the log file so that it shows up as one block rather than a line? I've gone through a bunch of simpler formatting attempts and read through How to insert newline in python logging?. Before I try writing a custom formatter just for dictionaries, wanted to find out if there's a known way to do this. Thanks! The desired output would be something like this: 2024-02-13 13:27:03,685 [DEBUG] root: shelf_name = 'some shelf', url = 'http://a_url', creation_date = '02/12/2024' 2024-02-13 13:34:55,889 [DEBUG] root: So is there any way to do this for a dictionary block? UPDATED: Removed the old extraneous iterations. I'm posting the closest acceptable result, but still would like to make the dictionary print as a single block class Downloader: def __init__(self, shelf_data) -> None: shelf = ShelfData(shelf_data) [logging.debug(f"shelf['{key}']: {val}") for key, val in shelf.__dict__.items()] Log file: 2024-02-13 16:29:18,024 [DEBUG] root: shelf['shelf_name']: test_shelf 2024-02-13 16:29:18,024 [DEBUG] root: shelf['url']: https://site/group/show/1865-scifi-and-fantasy-book-club 2024-02-13 16:29:18,024 [DEBUG] root: shelf['base_path']: C:\MyProjects\downloads 2024-02-13 16:29:18,024 [DEBUG] root: shelf['creation_date']: 02/12/2024 2024-02-13 16:29:18,039 [DEBUG] root: shelf['sort_order']: descend 2024-02-13 16:29:18,039 [DEBUG] root: shelf['books_per_page']: 100 2024-02-13 16:29:18,039 [DEBUG] root: shelf['download_dir']: None 2024-02-13 16:29:18,039 [DEBUG] root: shelf['book_count']: 37095 2024-02-13 16:29:18,039 [DEBUG] root: shelf['page_count']: 371 Also, might as well add my logger: logging_config = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'standard': { 'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s' }, }, 'handlers': { 'default_handler': { 'class': 'logging.FileHandler', 'level': 'DEBUG', 'formatter': 'standard', 'filename': os.path.join('logs', f'{log_name}.log'), 'encoding': 'utf-8' }, 'stream_handler': { 'class': 'logging.StreamHandler', 'level': 'DEBUG', 'formatter': 'standard',} }, 'loggers': { '': { 'handlers': ['default_handler', 'stream_handler'], 'level': 'DEBUG', 'propagate': False } } } logging.config.dictConfig(logging_config)
Not sure if it's exactly what you're looking for, but I leveraged the pprint module to pretty-print dictionaries storing configurations for various things in an app I'm working on. The helper function and usage is below: import pprint import logging # set up pretty printer pp = pprint.PrettyPrinter(indent=2, sort_dicts=False) def log_pretty(obj): pretty_out = f"{pp.pformat(obj)}" return f'{pretty_out}\n' # set up and log as usual logger = logging.getLogger("myapp") logger.info(f'You could print your logging_config dict like so:\n{log_pretty(logging_config)}') You can also tweak the pprint options to your liking... see here.
2
2
77,990,896
2024-2-13
https://stackoverflow.com/questions/77990896/importerror-dependencies-for-instructorembedding-not-found-while-it-is-install
I already installed InstructorEmbedding, but it keeps giving me the error, in jupyter notebook environment using Python 3.12 (I also tried in 3.11). Kernel restarting didn't help. import torch from langchain.embeddings import HuggingFaceInstructEmbeddings DEVICE = "cuda:0" if torch.cuda.is_available() else "cpu" embedding = HuggingFaceInstructEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2", model_kwargs={"device": DEVICE}) error: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) File /opt/conda/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py:151, in HuggingFaceInstructEmbeddings.__init__(self, **kwargs) 150 try: --> 151 from InstructorEmbedding import INSTRUCTOR 153 self.client = INSTRUCTOR( 154 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs 155 ) File /opt/conda/lib/python3.11/site-packages/InstructorEmbedding/__init__.py:1 ----> 1 from .instructor import * File /opt/conda/lib/python3.11/site-packages/InstructorEmbedding/instructor.py:9 8 from torch import Tensor, device ----> 9 from sentence_transformers import SentenceTransformer 10 from sentence_transformers.models import Transformer ModuleNotFoundError: No module named 'sentence_transformers' The above exception was the direct cause of the following exception: ImportError Traceback (most recent call last) Cell In[2], line 10 4 DEVICE = "cuda:0" if torch.cuda.is_available() else "cpu" 6 #loader = PyPDFDirectoryLoader("aircraft_pdfs") 7 #docs = loader.load() 8 #print(len(docs)) # length of all pages together ---> 10 embedding = HuggingFaceInstructEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2", model_kwargs={"device": DEVICE}) File /opt/conda/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py:157, in HuggingFaceInstructEmbeddings.__init__(self, **kwargs) 153 self.client = INSTRUCTOR( 154 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs 155 ) 156 except ImportError as e: --> 157 raise ImportError("Dependencies for InstructorEmbedding not found.") from e ImportError: Dependencies for InstructorEmbedding not found. here is the output of pip freeze transformers==4.37.2 torch==2.2.0 langchain==0.1.6 InstructorEmbedding==1.0.1 ...
I think you also need to install sentence-transformers. Try installing it via: pip install -U sentence-transformers==2.2.2 and then run your code. Make sure you install version 2.2.2, otherwise you'll end up with this error: TypeError: INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token' It seems the latest version of sentence-transformers has some compatibility issues with InstructorEmbedding.
2
9
77,988,322
2024-2-13
https://stackoverflow.com/questions/77988322/setting-part-of-title-as-bold-and-normal-in-matplotlib
I want to create an axis title that has part of it in bold font. This is an example of my current code: # Features to plot features = ['Overall Rating', 'Shooting Rating', 'Creativity Rating', 'Passing Rating', 'Dribbling Rating', 'Defending Rating', 'Pressing Rating', 'Aerial Rating'] # Loop through each feature and create density plots for i, feature in enumerate(features): ax = axes[i] # Draw the density plot using the current feature column from the DataFrame sns.kdeplot(df[feature], shade=True, color='white', ax=ax) # Plot marker for df player_value = df[feature].values[0] ax.set_title(f'{feature}\n{int(player_value)}', color='#100097', loc='right', fontweight ="bold") I want the text for feature to be a normal font weight, and for player_value to be a bold font weight. I've tried various methods to no avail: creating a separate title for each section of text, trying to set the fontweight separately within the set_title function, etc...
You can use LaTeX** to achieve this quite easily, using this kind of expression: $\\bf{bold text}$. In this case, the "bold text" will be displayed as bold. So in your case, add this: ax.set_title(f'{feature}\n$\\bf{{{int(player_value)}}}$', color='#100097', loc='right') **: about LaTeX, it is actually MathText that is being invoked here rather than full LaTeX.
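As a self-contained illustration of the mathtext approach (with a made-up feature name and value, since the full DataFrame isn't shown), this draws a title whose second line is bold:

import matplotlib.pyplot as plt

feature, player_value = "Overall Rating", 87  # placeholder values

fig, ax = plt.subplots()
ax.set_title(f"{feature}\n$\\bf{{{int(player_value)}}}$", color="#100097", loc="right")
plt.show()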
4
3
77,986,775
2024-2-13
https://stackoverflow.com/questions/77986775/pydantic-fastapi-how-to-set-up-a-case-insensitive-model-with-suggested-values
Using FastAPI I have set up a POST endpoint that takes a command. I want this command to be case insensitive, while still having suggested values (i.e. within the SwaggerUI docs). For this, I have set up an endpoint with a Command class as a schema for the POST body parameters: @router.post("/command", status_code=HTTPStatus.ACCEPTED) # @router is a fully set up APIRouter() async def control_battery(command: Command): result = do_work(command.action) return result For Command I currently have 2 possible versions, neither of which has the full functionality I desire. from fastapi import HTTPException from pydantic import BaseModel, field_validator from typing import Literal ## VERSION 1 class Command(BaseModel): action: Literal["jump", "walk", "sleep"] ## VERSION 2 class Command(BaseModel): action: str @field_validator('action') @classmethod def validate_command(cls, v: str) -> str: """ Checks if command is valid and converts it to lower. """ if v.lower() not in {'jump', 'walk', 'sleep'}: raise HTTPException(status_code=422, detail="Action must be either 'jump', 'walk', or 'sleep'") return v.lower() Version 1 is obviously not case insensitive, but it has the correct 'suggested value' behaviour, as below. Version 2 handles case insensitivity correctly and allows for greater control over the validation, but it no longer shares suggested values with users of the schema, e.g. in the image above "jump" would be replaced with "string". How do I combine the functionality of both of these approaches?
Specify the type as either a literal or some string and keep the validation: class Command(BaseModel): action: Literal["jump", "walk", "sleep"] | str @field_validator('action') @classmethod def validate_command(cls, v: str) -> str: """ Checks if command is valid and converts it to lower. """ if v.lower() not in {'jump', 'walk', 'sleep'}: raise HTTPException(status_code=422, detail="Action must be either 'jump', 'walk', or 'sleep'") return v.lower() A purist might say that Literal["jump", "walk", "sleep"] | str is the same as str, but in this case both a human and a computer can infer that "jump" and the other two values are special inputs. It works for providing example values. You may not want to repeat the literal values - a mypy-passing solution would be to make them an enum: from enum import Enum class Action(Enum): jump = "jump" walk = "walk" sleep = "sleep" class Command(BaseModel): action: Action | str @field_validator('action') @classmethod def validate_command(cls, v: str) -> str: """ Checks if command is valid and converts it to lower. """ if v.lower() not in Action.__members__: raise HTTPException(status_code=422, detail="Action must be either 'jump', 'walk', or 'sleep'") return v.lower() Another solution would be to keep using the Literal instead of the enum and use get_args to extract the values of the literal type, as explained in this answer
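For illustration, a quick (hypothetical) check of how the combined model behaves, assuming the imports from the question:

cmd = Command(action="JUMP")
print(cmd.action)  # accepted and normalised to "jump"

try:
    Command(action="fly")      # outside the allowed set, so the validator raises
except Exception as exc:
    print(type(exc).__name__)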
2
1
77,957,072
2024-2-7
https://stackoverflow.com/questions/77957072/how-to-determine-final-state-of-sequential-entries-in-a-pandas-dataframe-using-v
I have a pandas DataFrame with columns representing various attributes of data entries, including a timestamp "dh_processamento_rubrica", a unique identifier "inivalid_iderubrica", and an operation type "operacao". The operations include "inclusao" (insertion), "alteracao" (modification), and "exclusao" (deletion). For each unique identifier, there can be multiple operations performed, where "alteracao" modifies existing entries created by "inclusao", and "exclusao" removes entries. The modifications can include changing certain attributes (like "codinccp_dadosrubrica") or updating the identifier itself using "inivalid_nova_validade" (in the example below updating "inivalid_iderubrica" from 2019-11-01 to 2019-01-01) in the second row. "dh_processamento_rubrica" is when the operation was performed). There can only be one entry for each distinct "inivalid_iderubrica", but if it is changed, a new one can later be created with the old value. Here's a simplified version of my DataFrame: codinccp_dadosrubrica inivalid_iderubrica inivalid_nova_validade operacao dh_processamento_rubrica 11 2019-11-01 NaT inclusao 2020-03-18 23:58:14 11 2019-11-01 2019-01-01 alteracao 2020-05-14 17:27:06 00 2019-11-01 NaT inclusao 2020-06-07 23:46:07 00 2019-01-01 NaT alteracao 2021-07-15 19:57:42 NaN 2019-11-01 NaT exclusao 2021-08-13 15:31:56 Code to generate DataFrame: import pandas as pd data = {'codinccp_dadosrubrica': ['11', '11', '00', '00', None], 'inivalid_iderubrica': [pd.Timestamp('2019-11-01 00:00:00'), pd.Timestamp('2019-11-01 00:00:00'), pd.Timestamp('2019-11-01 00:00:00'), pd.Timestamp('2019-01-01 00:00:00'), pd.Timestamp('2019-11-01 00:00:00')], 'inivalid_nova_validade': [None, pd.Timestamp('2019-01-01 00:00:00'), None, None, None], 'operacao': ['inclusao', 'alteracao', 'inclusao', 'alteracao', 'exclusao'], 'dh_processamento_rubrica': [pd.Timestamp('2020-03-18 23:58:14'), pd.Timestamp('2020-05-14 17:27:06'), pd.Timestamp('2020-06-07 23:46:07'), pd.Timestamp('2021-07-15 19:57:42'), pd.Timestamp('2021-08-13 15:31:56')]} df = pd.DataFrame(data) As per the data, "inclusao" adds a new entry, "alteracao" modifies existing entries, and "exclusao" removes entries. Each "alteracao" references the original entry it modifies by the inivalid_iderubrica attribute. The modifications can either update existing attributes or change the inivalid_iderubrica. In the example: A new entry (i) is added with inivalid_iderubrica == 2019-11-01 Entry (i) is updated to now have inivalid_iderubrica == 2019-01-01 A new entry (ii) is added with inivalid_iderubrica == 2019-11-01 Entry (i) is updated to change codinccp_dadosrubrica from 11 to 00 Entry (ii) is deleted I need to determine the final state of each entry after all the operations have been applied, considering that the modifications are sequential (according to the dates on "dh_processamento_rubrica") and may affect subsequent operations. 
Final state should look something like: codinccp_dadosrubrica inivalid_iderubrica inivalid_nova_validade operacao dh_processamento_rubrica 00 2019-01-01 NaT alteracao 2021-07-15 19:57:42 Edit: I have the following solution using iterrows, which works but is unfortunately too slow, as I need to process DataFrames which sometimes have millions of rows: df = df.sort_values(by=['dh_processamento_rubrica']).reset_index(drop=True) df_alt_exc = df[df['operacao'] != 'inclusao'] df = df[df['operacao'] == 'inclusao'] for _, row in df_alt_exc.iterrows(): to_update = df[(df['inivalid_iderubrica'] == row['inivalid_iderubrica'])] if row['operacao'] == 'alteracao': if to_update.size > 0: if not pd.isna(row['inivalid_nova_validade']): row['inivalid_iderubrica'] = row['inivalid_nova_validade'] index_to_update = to_update.index[0] df.loc[index_to_update] = row if row['operacao'] == 'exclusao': if to_update.size > 0: index_to_update = to_update.index[0] df = df.drop(index_to_update) I need to know if there's a way of solving this using vectorized operations (or any other approach that would be similarly efficient).
This is indeed challenging, since the series of operations is important, which makes it hard to use vectorised operations. An extra challenge is that unique IDs are mutable. Luckily I think it is still possible to write a vectorised solution. The below solution relies on the fact that an alteracao operation that changes the unique ID contains all the up-to-date information about the record, such that we can consider it an inclusao and use the inivalid_nova_validade as the new inivalid_iderubrica. We then group by unique IDs and take the last row which is not exclusao, relying on the fact that the dataframe is ordered. Finally we search for IDs which are deleted (exclusao) or removed implicitly by having inivalid_iderubrica altered, and remove them from the final result. Additional optimisation - converting the text column operacao to a Categorical to save memory. Hopefully this works for your purposes. # sorting, from OP df = df.sort_values(by=['dh_processamento_rubrica']).reset_index(drop=True) # conversion to Categorical dtype - not strictly necessary, but saves memory df["operacao"] = pd.Categorical(df["operacao"]) # check for deleted IDs. deletion means either `exclusao`, or having their `inivalid_iderubrica` replaced deleted = df["operacao"].eq("exclusao") deleted |= (df["operacao"].eq("alteracao")) & (~df["inivalid_nova_validade"].isna()) # if their final status is deleted, we will drop them at the end final_status = deleted.groupby(df["inivalid_iderubrica"]).last() excluded = final_status[final_status].index # Wherever the `alteracao` changes the unique ID, consider it a new `inclusao` operation instead alter_inivalid_mask = (df["operacao"].eq("alteracao")) & (~df["inivalid_nova_validade"].isna()) df.loc[alter_inivalid_mask, "inivalid_iderubrica"] = df.loc[alter_inivalid_mask, "inivalid_nova_validade"] df.loc[alter_inivalid_mask, "inivalid_nova_validade"] = "NaT" df.loc[alter_inivalid_mask, "operacao"] = "inclusao" # get the final state for each unique ID, except when that state is `exclusao` res = df[df["operacao"].ne("exclusao")].groupby("inivalid_iderubrica", dropna=False).last().reset_index() # now to remove those that have final status of excluded res = res[~res["inivalid_iderubrica"].isin(excluded)] Yielding: inivalid_iderubrica codinccp_dadosrubrica inivalid_nova_validade operacao dh_processamento_rubrica 0 2019-01-01 00 NaT alteracao 2021-07-15 19:57:42
2
2
77,985,422
2024-2-13
https://stackoverflow.com/questions/77985422/is-regex-in-python-different-with-regex-in-other-system-like-postgresql
I have this code for parsing street and house number in Python import re def parse_street(address): pattern = r'^(.*?)(?i)\b no\b' match = re.match(pattern, address) if match: return match.group(1).strip() else: return None def parse_housenumber(address): pattern = r'(?i)\bno\.?\s*([\dA-Z]+)' match = re.search(pattern, address) print(match) if match: return match.group(1) else: return None df['street'] = df['address'].apply(lambda x: pd.Series(parse_street(x))) df['house_number'] = df['address'].apply(lambda x: pd.Series(parse_housenumber(x))) Why am I getting an error when running the query in PostgreSQL with the same regex syntax, like: SELECT (regexp_matches('Jl. ABC No.01', '^(.*?)(?i)\b no\b'))[1] The error message is: ERROR: invalid regular expression: quantifier operand invalid Is regex in Python different from regex in other systems? How can I convert my Python function to PostgreSQL syntax? I want to add "street" and "house_number" columns derived from the "address" column within the same table
Yes, most regex engines' syntax is different, to a lesser or greater degree. This is why the regex tag requires another tag specifying the engine, tool or a programming language to which the regex question pertains. From PostgreSQL docs (emphasis mine): An ARE can begin with embedded options: a sequence (?xyz) (where xyz is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options β€” in particular, they can override the case-sensitivity behavior implied by a regex operator, or the flags parameter to a regex function. The available option letters are shown in Table 9.24. Note that these same option letters are used in the flags parameters of regex functions. Thus, this regex is not legal in PostgreSQL: ^(.*?)(?i)\b no\b But, this one is: (?i)^(.*?)\b no\b However, while legal, it still doesn't mean the same as the Python one, because the meaning of \b is different: for Python, it is a word boundary assertion; for PostgreSQL, it is a backspace character. The word boundary assertion is \y, and you can also be more precise with \m and \M (word start and end assertions, respectively). Thus, the equivalent PostgreSQL regular expression is: (?i)^(.*?)\y no\y
2
4
77,984,267
2024-2-12
https://stackoverflow.com/questions/77984267/expanding-rows-in-pandas-dataframe-based-on-time-intervals-accounting-for-optio
I have a Pandas DataFrame representing timesheets with start and end times, as well as optional rest and meal break intervals. My goal is to expand a single row into multiple rows with correct intervals. It's worth noting that: A timesheet might not have any breaks. A timesheet might have only a rest break, only a meal break, or both. The order of meal and rest breaks can vary (meal break can be both before and after rest break). Here's an example DataFrame: Id Start Time End Time Rest Break Start Time Rest Break End Time Meal Break Start Time Meal Break End Time 1 2024-01-26 07:59 2024-01-26 12:33 2024-01-26 10:43 2024-01-26 10:53 2024-01-26 12:03 2024-01-26 12:33 2 2024-01-26 14:29 2024-01-26 17:35 2024-01-26 16:33 2024-01-26 16:44 NaN NaN 3 2024-01-26 08:02 2024-01-26 12:45 NaN NaN NaN NaN 4 2024-01-26 09:15 2024-01-26 16:15 NaN NaN 2024-01-26 12:15 2024-01-26 12:45 5 2024-01-26 09:10 2024-01-26 16:37 2024-01-26 15:43 2024-01-26 15:55 2024-01-26 13:06 2024-01-26 13:37 The output I need: Id Category Start Time End Time 1 Session 2024-01-26 07:59 2024-01-26 10:43 1 Rest Break 2024-01-26 10:43 2024-01-26 10:53 1 Session 2024-01-26 10:53 2024-01-26 12:03 1 Meal Break 2024-01-26 12:03 2024-01-26 12:33 2 Session 2024-01-26 14:29 2024-01-26 16:33 2 Rest Break 2024-01-26 16:33 2024-01-26 16:44 2 Session 2024-01-26 16:44 2024-01-26 17:35 3 Session 2024-01-26 08:02 2024-01-26 12:45 4 Session 2024-01-26 09:15 2024-01-26 12:15 4 Meal Break 2024-01-26 12:15 2024-01-26 12:45 4 Session 2024-01-26 12:45 2024-01-26 16:15 5 Session 2024-01-26 09:10 2024-01-26 13:06 5 Meal Break 2024-01-26 13:06 2024-01-26 13:37 5 Session 2024-01-26 13:37 2024-01-26 15:43 5 Rest Break 2024-01-26 15:43 2024-01-26 15:55 5 Session 2024-01-26 15:55 2024-01-26 16:37 My current logic involves extracting all unique datetimes from each row, sorting them, and creating intervals. While this approach somewhat works, I'm struggling with assigning a 'Category' column to each interval. 
import pandas as pd # Your original DataFrame data = {'Id': [1, 2, 3, 4, 5], 'Start Time': ['2024-01-26 07:59', '2024-01-26 14:29', '2024-01-26 08:02', '2024-01-26 09:15', '2024-01-26 09:10'], 'End Time': ['2024-01-26 12:33', '2024-01-26 17:35', '2024-01-26 12:45', '2024-01-26 16:15', '2024-01-26 16:37'], 'Rest Break Start Time': ['2024-01-26 10:43', '2024-01-26 16:33', None, None, '2024-01-26 15:43'], 'Rest Break End Time': ['2024-01-26 10:53', '2024-01-26 16:44', None, None, '2024-01-26 15:55'], 'Meal Break Start Time': ['2024-01-26 12:03', None, None, '2024-01-26 12:15', '2024-01-26 13:06'], 'Meal Break End Time': ['2024-01-26 12:33', None, None, '2024-01-26 12:45', '2024-01-26 13:37']} df = pd.DataFrame(data) # Create an empty DataFrame to store the expanded rows expanded_df = pd.DataFrame(columns=['Id', 'Start Time', 'End Time']) # Iterate through each row of the original DataFrame for index, row in df.iterrows(): id_value = row['Id'] start_time = pd.to_datetime(row['Start Time']) end_time = pd.to_datetime(row['End Time']) # Collect all times times = {start_time, end_time} for column in ['Rest Break Start Time', 'Rest Break End Time', 'Meal Break Start Time', 'Meal Break End Time']: if not pd.isna(row[column]): times.add(pd.to_datetime(row[column])) # Sort the times sorted_times = sorted(times) # Create intervals for i in range(len(sorted_times) - 1): if sorted_times[i] != sorted_times[i + 1]: expanded_df = expanded_df.append({'Id': id_value, 'Start Time': sorted_times[i], 'End Time': sorted_times[i + 1]}, ignore_index=True) # Sort the expanded DataFrame by 'Id' and 'Start Time' expanded_df = expanded_df.sort_values(by=['Id', 'Start Time']).reset_index(drop=True) # Show the result expanded_df Here is the output:
Here is updated version that computes the category by checking the column names for "Start Time"/"End Time": rows = [] for index, row in df.iterrows(): id_value = row["Id"] start_time = pd.to_datetime(row["Start Time"]) end_time = pd.to_datetime(row["End Time"]) # Collect all times times = {(start_time, "Start Time"), (end_time, "End Time")} for column in [ "Rest Break Start Time", "Rest Break End Time", "Meal Break Start Time", "Meal Break End Time", ]: if not pd.isna(row[column]): times.add((pd.to_datetime(row[column]), column)) # Sort the times sorted_times = sorted(times) # Create intervals for i in range(len(sorted_times) - 1): c1 = sorted_times[i][1] if "Start Time" in c1: cat = c1.replace("Start Time", "").strip() or "Session" elif "End Time" in c1: cat = "Session" if sorted_times[i][0] != sorted_times[i + 1][0]: rows.append( { "Id": id_value, "Category": cat, "Start Time": sorted_times[i][0], "End Time": sorted_times[i + 1][0], } ) expanded_df = pd.DataFrame(rows) # Sort the expanded DataFrame by 'Id' and 'Start Time' expanded_df = expanded_df.sort_values(by=["Id", "Start Time"]).reset_index(drop=True) # Show the result print(expanded_df) Prints: Id Category Start Time End Time 0 1 Session 2024-01-26 07:59:00 2024-01-26 10:43:00 1 1 Rest Break 2024-01-26 10:43:00 2024-01-26 10:53:00 2 1 Session 2024-01-26 10:53:00 2024-01-26 12:03:00 3 1 Meal Break 2024-01-26 12:03:00 2024-01-26 12:33:00 4 2 Session 2024-01-26 14:29:00 2024-01-26 16:33:00 5 2 Rest Break 2024-01-26 16:33:00 2024-01-26 16:44:00 6 2 Session 2024-01-26 16:44:00 2024-01-26 17:35:00 7 3 Session 2024-01-26 08:02:00 2024-01-26 12:45:00 8 4 Session 2024-01-26 09:15:00 2024-01-26 12:15:00 9 4 Meal Break 2024-01-26 12:15:00 2024-01-26 12:45:00 10 4 Session 2024-01-26 12:45:00 2024-01-26 16:15:00 11 5 Session 2024-01-26 09:10:00 2024-01-26 13:06:00 12 5 Meal Break 2024-01-26 13:06:00 2024-01-26 13:37:00 13 5 Session 2024-01-26 13:37:00 2024-01-26 15:43:00 14 5 Rest Break 2024-01-26 15:43:00 2024-01-26 15:55:00 15 5 Session 2024-01-26 15:55:00 2024-01-26 16:37:00
3
1
77,974,285
2024-2-10
https://stackoverflow.com/questions/77974285/allowing-a-specific-set-of-undefined-values-in-enums
Given an enum like this: class Result(IntEnum): A = 0 B = 1 C = 3 I'd like to be able to create new enum members dynamically, but only if the value is within a given range. For example, given: class Result(IntEnum, valid_range=(4, 10)): A = 0 B = 1 C = 3 I'd expect the following behaviour: Result(10) would be ok, returning an instance of the enum where .value is set to 10. Result(0) would return Result.A Result(20) or Result("hello") would raise a ValueError Why? This is for the BACnet standard - a building management/HVAC protocol. Grossly simplified it defines a standard set of message types (for example: READ, WRITE, LIST) which map to integer values. I've modelled these as an enum: class MessageType(IntEnum): READ = 0 WRITE = 1 LIST = 2 This is then used as follows: @dataclass class Message: message_type: MessageType data: bytes def decode(raw: bytes) -> Message: message_type = MessageType(unpack("!H", bytes[:2])) data = bytes[2:] return Message(message_type=message_type, data=data) The protocol also allows for vendors to define their own message types but limits them to a specific range of values (i.e. between 10 and 20). Options considered: Define each value as an additional enum attribute (i.e. MessageType.VENDOR_10) explicitly or dynamically but the number of valid values can be in the millions, most of which will not be used. Don't use an enum, just an int, an int TypeAlias, or just a custom class. Change the type annotation on Message to int | MessageType and when decoding, attempt to create a MessageType, catching the ValueError and assigning the int.
It is possible, but requires Python 3.9 or better, and does use one currently undocumented data structure (_value2member_map_): from enum import IntEnum class RangedEnum(IntEnum): def __repr__(self): if self._name_: return f"<{self.__class__.__name__}.{self._name_}: {self._value_}>" else: return f"<{self.__class__.__name__}: {self._value_}>" def __init_subclass__(cls, valid_range): cls.valid_range = valid_range @classmethod def _missing_(cls, value): start, stop = cls.valid_range if start <= value <= stop: member = int.__new__(cls, value) member._name_ = None member._value_ = value cls._value2member_map_[value] = member return member class Result(RangedEnum, valid_range=(4, 10)): A = 0 B = 1 C = 3 and some light test code: >>> for v in (0, 1, 2, 3, 4, 5, 9, 10, 11): ... try: ... Result(v) ... except ValueError: ... print('nope on', v) ... <Result.A: 0> <Result.B: 1> nope on 2 <Result.C: 3> <Result: 4> <Result: 5> <Result: 9> <Result: 10> nope on 11 1 Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library.
2
2
77,982,938
2024-2-12
https://stackoverflow.com/questions/77982938/polars-idiomatic-way-of-aggregating-n-consecutive-rows-of-a-data-frame
I'm new to Polars, and I ended up writing this code to compute some aggregating expression over segments of n rows: import polars as pl df = pl.DataFrame({"a": [1, 1, 3, 8, 62, 535, 4213]}) ( df.with_columns(index=pl.int_range(pl.len(), dtype=pl.Int32)) .group_by_dynamic(index_column="index", every="3i") .agg(pl.col("a").mean()) ) For the example I set n==3 for 7 rows, but think of a smallish n of about 100, for a multicolumn data frame of about 10**6 rows. I was wondering if this is the idiomatic way of doing this type of operation. Somehow group_by_dynamic over an Int32 range seems overkill to me: I was wondering if there is a more direct way of doing the same aggregation.
IMO your solution using group_by_dynamic already follows best practices when it comes to the aggregation. However, you can simplify the creation of the index column quite a bit using pl.DataFrame.with_row_index. As the result is unsigned (and group_by_dynamic only allows for a signed integer index column), you'll need to pass an expression doing the casting, i.e. ( df .with_row_index() .group_by_dynamic(index_column=pl.col("index").cast(pl.Int32), every="3i") .agg(pl.col("a").mean()) )
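If pulling in group_by_dynamic feels like overkill, an alternative sketch is to group on the integer division of a row index by n, which avoids the dynamic-window machinery entirely (note the index column then holds the group number rather than the starting row index):

import polars as pl

df = pl.DataFrame({"a": [1, 1, 3, 8, 62, 535, 4213]})
n = 3

out = (
    df.group_by((pl.int_range(pl.len()) // n).alias("index"), maintain_order=True)
      .agg(pl.col("a").mean())
)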
3
4
77,973,107
2024-2-10
https://stackoverflow.com/questions/77973107/how-can-i-read-more-than-4096-bytes-from-stdin-copy-pasted-to-a-terminal-on-lin
I have this code: import sys binfile = "data.hex" print("Paste ascii encoded data.") line = sys.stdin.readline() b = bytes.fromhex(line) with open(binfile, "wb") as fp: fp.write(b) Problem is that never more than 4096 bytes are read in the sys.stdin.readline() call. How can I make that buffer larger? I tried to supply a larger number to the call, but that had no effect. Update I changed my stdin reading code to this: line = '' while True: b = sys.stdin.read(1) sys.stdin.flush() line += b if b == "\n": break print(f"Read {len(line)} bytes") Still run into that limit.
This truncate-long-lines-to-4096-bytes behavior is caused by the terminal (TTY) code in the Linux kernel. (Actually, as part of the truncation, the last byte of the 4096 bytes is also replaced with a newline byte.) By the time the (Python) process reads from the TTY as its stdin, the line has already been truncated. There is no easy fix for your use case, i.e. to prevent truncation when copy-pasting long lines to the terminal window. As a workaround, copy-paste to a file (e.g. infile.dat) instead, and then run python script.py <infile.dat. It's easy to reproduce the truncation behavior even without Python, by running dd bs=65536 of=/dev/null, copy-pasting a line longer than 4096 bytes, and then pressing Ctrl-D to indicate EOF. The last line of the output will start with 4096 bytes (4.1 kB, 4.0 KiB) copied,, indicating that only 4096 bytes were read. If you copy-paste multiple long lines, you'll see that each of them will get truncated to 4096 bytes (including the newline byte) separately. More analysis of this Linux kernel behavior: Linux terminal input: reading user input from terminal truncating lines at 4095 character limit Is there any limit on line length when pasting to a terminal in Linux? Both answers above explain how the Linux TTY line editor can be disabled with stty -icanon, and this will prevent the truncation. See more details on how to do it in the answers. However, please don't do it in random programs, because it changes other terminal behavior as well (such as detecting Ctrl-C and disabling input echo), and this will confuse your users. The byte count limit 4096 is hardcoded to the Linux kernel as N_TTY_BUF_SIZE. The rest of my answer demonstrates how Python and the shell (e.g. Bash) work without truncation, thus they are not causing the issue. This is to demonstrate that Python sys.stderr.readline() doesn't truncate, so there is no need to change your Python code. Python sys.stdin.readline() has an unlimited buffer (given that there is enough free memory). I've tried it with Python 2.7, 3.6 and newer Python on Linux. Here is what I've tried: Reading short lines from a pipe immediately (without additional buffering delay in Python): $ (echo -n A; sleep .3; echo a; sleep .3; echo B; sleep .3) | python -c "if 1: for line in iter(__import__('sys').stdin.readline, ''): print([line])" ['Aa\n'] ['B\n'] To try it, run the command without the leading $. It works on Linux for me, I think it will work on macOS, Windows and other systems. On Windows, you may want to drop the if 1: and the line breaks. Reading short lines as bytes (rather than Unicode characters) from a pipe immediately, in Python 3.x, using sys.stdin.buffer.readline(): $ (echo -n A; sleep .3; echo a; sleep .3; echo B; sleep .3) | python -c "if 1: for line in iter(__import__('sys').stdin.buffer.readline, b''): print([line])" [b'Aa\n'] [b'B\n'] To try it, run the command without the leading $. It works on Linux for me, I think it will work on macOS, Windows and other systems. On Windows, you may want to drop the if 1: and the line breaks. Reading a long (longer than 10 MiB) line immediately, without truncation from a pipe: $ python -c "if 1: import sys, time; f = sys.stdout f.write('A' * 10987654); f.flush(); time.sleep(.3) f.write('aaa\n'); f.flush(); time.sleep(.3) f.write('B\n'); f.flush(); time.sleep(.3)" | python -c "if 1: for line in iter(__import__('sys').stdin.readline, ''): print(len(line))" 10987658 2 To try it, run the command without the leading $. 
It works on Linux for me, I think it will work on macOS, Windows and other systems. On Windows, put the Python code to files a.py and b.py, and then run python a.py | python b.py. In some cases it's useful to pass input as fast as possible (i.e. as soon as the process receives it) to the Python program, i.e. to prevent delay cause by buffering in sys.stdin. Good news: sys.stdin.readline() returns the next line as soon as it is available to the process, it doesn't wait for subsequent lines. In a loop, use for line in iter(sys.stdin.readline, ''):, and don't use for line in sys.stdin:, because the latter waits for more input even if a line is available. See https://stackoverflow.com/a/28919832/97248 and other answers for details. sys.stdin.read(n) typically has buffering delay: even if the process has already read n bytes, sys.stdin.read(n) will be waiting for more bytes until its buffer (of typically 8192 bytes) is filled. To avoid this delay in Python 3, use sys.stdin.buffer.raw.read(n) instead. This will read at most n bytes (not Unicode characters), and it returns as soon as at least 1 byte is available. Don't mix it with sys.stdin.readline(). In Python 2 and 3, use os.read(sys.stdin.fileno(), n) for this. Test the buffering delay using a pipe (e.g. cat | python ...), because without a pipe the system may use a terminal (TTY) device, which has line buffering by default, returning data earlier, at the end-of-line. This is to demonstrate that it's not the shell that causes truncation. Here is how: On your regular Linux GUI, run any of these commands (without the leading $): $ xterm -e dd bs=1 status=progress $ konsole -e dd bs=1 status=progress $ gnome-terminal -- dd bs=1 status=progress A new, empty terminal window will appear. In anther program, copy a line longer than 4096 bytes to the clipboard (alternatively, you may copy a text containing multiple lines, some longer, some shorter). Paste it to the new, empty terminal window. If unsure, press Shift-Insert to paste. If that doesn't work, use Ctrl-Shift-V to paste. If that doesn't work either, use Edit / Paste in the menu. Wait a second, press Enter in the window. dd will display something like ... bytes (...) copied, .... The byte count will be smaller than expected, indicating line truncation at 4096 bytes. You can close the window now. Or just press Ctrl-C or Ctrl-D to exit from dd, causing the window to close. This demo has proved that truncation happens even if there is no shell (or Python process).
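If you do decide to switch off the TTY line editor just for the duration of one read, a hedged sketch using the standard termios module (Linux/macOS only, and only when stdin is actually a terminal) could look like the following; treat it as a starting point, since dropping canonical mode also disables the terminal's own line editing such as backspace handling:

import sys
import termios

def read_long_line():
    if not sys.stdin.isatty():
        return sys.stdin.readline()  # piped/redirected input is not subject to the 4096-byte limit
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] &= ~termios.ICANON          # lflag: drop canonical (line-editing) mode, keep echo
    new[6][termios.VMIN] = 1           # return as soon as one byte is available
    new[6][termios.VTIME] = 0
    termios.tcsetattr(fd, termios.TCSADRAIN, new)
    try:
        chunks = []
        while True:
            ch = sys.stdin.buffer.raw.read(1)
            if not ch or ch == b"\n":
                break
            chunks.append(ch)
        return b"".join(chunks).decode()
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

line = read_long_line()
print(f"Read {len(line)} characters")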
2
4
77,982,303
2024-2-12
https://stackoverflow.com/questions/77982303/select-all-integer-columns-except-a-few-in-python-polars
Consider I have a dataframe: import polars as pl import polars.selectors as cs df = pl.DataFrame( { 'p': [1, 2, 1, 3, 1, 2], 'x': list(range(6, 0, -1)), 'y': list(range(2, 8)), 'z': [3, 4, 5, 6, 7, None], "q" : list('abcdef') } ) df shape: (6, 5) p x y z q i64 i64 i64 i64 str 1 6 2 3 "a" 2 5 3 4 "b" 1 4 4 5 "c" 3 3 5 6 "d" 1 2 6 7 "e" 2 1 7 null "f" I need to select all the integer columns except p and z. One way is to manually select each column but, it is not feasible if there are hundreds of columns. What could a better and efficient way?
You can use polars' column selectors to create a selector for all integers columns. Then, you can use pl.Expr.exclude to exclude column p and z from the selection. import polars.selectors as cs df.select(cs.integer().exclude("p", "z"))
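Selectors also support set arithmetic, so the same selection can be written (as a small alternative sketch) by subtracting the unwanted names from the integer selector:

import polars.selectors as cs

df.select(cs.integer() - cs.by_name("p", "z"))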
3
4
77,982,046
2024-2-12
https://stackoverflow.com/questions/77982046/how-to-select-all-columns-from-a-list-in-a-polars-dataframe
I have a dataframe import polars as pl import numpy as np df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names": ["foo", "ham", "spam", "egg", None], "random": np.random.rand(5), "groups": ["A", "A", "B", "C", "B"], } ) I want to select only the columns in list: mylist = ['nrs', 'random'] This seems to work: import polars.selectors as cs df.select(cs.by_name(mylist))) Is this the idiomatic way to do it? Or are there better ways?
It's actually simpler than that: df.select(['nrs', 'random']) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ nrs ┆ random β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•ͺ══════════║ β”‚ 1 ┆ 0.662732 β”‚ β”‚ 2 ┆ 0.437345 β”‚ β”‚ 3 ┆ 0.43857 β”‚ β”‚ null ┆ 0.701177 β”‚ β”‚ 5 ┆ 0.390494 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ selectors are in general for more complicated selections - like all string columns, column starting with certain phrase and so on. As written in select() documentation you can either path list of columns or expression there. so any of these work - .select('nrs', 'random'), .select(pl.col('nrs', 'random')), .select(pl.col('nrs'), pl.col('random')).
5
4
77,980,972
2024-2-12
https://stackoverflow.com/questions/77980972/merge-groups-of-columns-in-a-polars-dataframe-to-single-columns
I have a polars dataframe with columns a_0, a_1, a_2, b_0, b_1, b_2. I want to convert it to a longer and thinner dataframe (3 x rows, but just 2 columns a and b), so that a contains a_0[0], a_1[0], a_2[0], a_0[1], a_1[1], a_2[1],... and the same for b. How can I do that?
You can use concat_list() to join the columns you want together and then use explode() to convert them into rows. Let's take a simple data frame as an example: df = pl.DataFrame( data=[[x for x in range(6)]], schema=[f"a_{i}" for i in range(3)] + [f"b_{i}" for i in range(3)] ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a_0 ┆ a_1 ┆ a_2 ┆ b_0 ┆ b_1 ┆ b_2 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════β•ͺ═════║ β”‚ 0 ┆ 1 ┆ 2 ┆ 3 ┆ 4 ┆ 5 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ Now, you can reshape it. First, concat the columns into lists and rename the columns for the final result: import polars.selectors as cs df.select( pl.concat_list(cs.starts_with(x)).alias(x) for x in ['a','b'] ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ [0, 1, 2] ┆ [3, 4, 5] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Now, explode the lists into rows: df.select( pl.concat_list(cs.starts_with(x)).alias(x) for x in ['a','b'] ).explode(pl.all()) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 0 ┆ 3 β”‚ β”‚ 1 ┆ 4 β”‚ β”‚ 2 ┆ 5 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
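If the prefixes aren't known in advance, one possible extension (a sketch reusing the pl and cs imports above, and assuming every column follows the prefix_suffix naming used here) is to derive them from the column names first:

prefixes = sorted({name.split("_")[0] for name in df.columns})

df.select(
    pl.concat_list(cs.starts_with(f"{p}_")).alias(p) for p in prefixes
).explode(pl.all())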
6
2
77,978,595
2024-2-11
https://stackoverflow.com/questions/77978595/tkinter-and-animation-funcanimation-accumulating-delay-and-freeze-gui
I am working on a Tkinter project where I am plotting live data from different sensors. I am using Tkinter to create the GUI and matplotlib animation.FuncAnimation to create live plots that update every given time (i.e. 1 second). The code is written in python 3. This work fine as long as the total number of point is small. As the number of points exceed 300-400 the system starts accumulating delay, slows down and eventually freezes. I.e. if I aim to have a reading every 1 second, in the beginning the system returns a reading every 1 second and few ms. However as the time goes by, it starts increasing the interval with a linear trend (please see image below) I can reproduce the problem by creating a single plot having on the x-axis the number of iteration (i.e. readings) and on the y-axis the delta time between each iteration and updating the plot every second (even if I use a longer time interval the result is the same). I have tried to put the animation function in an independent thread, as it was suggested in other posts, but it did not help at all. If I do not create the plot but I use the generated data (i.e. delta time) to update labels in tkinter I do not have the issue, so it must be related with the creation/update of the plot. Switching on/off blit does not help, and I would prefer to keep it off anyway. Please see below a short version of the code to reproduce the error and the output image after 600 iterations. import tkinter as tk import matplotlib.pyplot as plt from matplotlib.backends import backend_tkagg as bk import matplotlib.animation as animation import numpy as np import time import threading class Application(tk.Frame): def __init__(self, master=None, **kwargs): tk.Frame.__init__(self, master, **kwargs) # ============================================================================= # # Test 1 flags initialization # ============================================================================= self.ani = None # ============================================================================= # Canvas frame # ============================================================================= self.fig = plt.figure(figsize=(15,5)) frm = tk.Frame(self) frm.pack() self.canvas = bk.FigureCanvasTkAgg(self.fig, master=frm) self.canvas.get_tk_widget().pack() # ============================================================================= # # Figure initialization # ============================================================================= self.ax = self.fig.add_subplot(1,1,1) self.ax.plot([],[],'-k', label='delta time') self.ax.legend(loc='upper right') self.ax.set_xlabel('n of readings [-]') self.ax.set_ylabel('time difference between readings [s]') # ============================================================================= # # Start/Quick button # ============================================================================= frm4 = tk.Frame(self) frm4.pack(side='top', fill='x') frm_acquisition = tk.Frame(frm4) frm_acquisition.pack() self.button= tk.Button(frm_acquisition, text="Start acquisition", command=lambda: self.on_click(), bg='green') self.button.grid(column = 0, row=0) # ============================================================================= # # Methods # ============================================================================= def on_click(self): '''the button is a start/stop button ''' if self.ani is None: self.button.config(text='Quit acquisition', bg='red') print('acquisition started') return self.start() else: self.ani.event_source.stop() 
self.button.config(text='Start acquisition', bg='green') print('acquisition stopped') self.ani = None return def start(self): self.starting_time = time.time() self.time = np.array([]) self.ani = animation.FuncAnimation(self.fig, self.animate_thread, interval =1000, blit=False, cache_frame_data=False) #interval in ms self.ani._start() return # Some post suggested to put animate() in an indipendent thread, but did not solve the problem def animate_thread(self, *args): self.w = threading.Thread(target=self.animate) self.w.daemon = True # Daemonize the thread so it exits when the main program finishes self.w.start() return self.w def animate(self, *args): self.time = np.append(self.time, time.time()-self.starting_time) if len(self.time) > 1: self.ax.scatter(len(self.time),self.time[-1]-self.time[-2], c='k') # root.update() # another post suggested root.update() but did not help either return self.ax, if __name__ == "__main__": root = tk.Tk() app=Application(root) app.pack() root.mainloop() Time delay plot:
Most matplotlib animation examples suggest creating the plot artists during initialization of the animation instead of creating them inside the animate function, as the above part of your code shows. So a possible fix is to create the scatter object once and hold it in a variable that is initialized a single time; ax.scatter actually returns a matplotlib collection. Something like this, for example: self.ax = self.fig.add_subplot(1,1,1) self.ax.plot([],[],'-k', label='delta time') self.ax.legend(loc='upper right') self.ax.set_xlabel('n of readings [-]') self.ax.set_ylabel('time difference between readings [s]') self.scatplot = self.ax.scatter(x, y, c='k') # here x and y are arrays initialized empty. And in the animate function, you can use 'self.scatplot' like below, with the proper data format and customization. Refer here: How to animate a scatter plot and FuncAnimation def animate(self, *args): self.time = np.append(self.time, time.time()-self.starting_time) data = np.stack([len(self.time), self.time[-1]-self.time[-2]]) if len(self.time) > 1: self.scatplot.set_offsets(data) return self.scatplot,
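Building on that idea, here is a small self-contained sketch (independent of the Tkinter GUI in the question, with made-up axis limits) that keeps a single scatter artist and hands it the whole accumulated point history via set_offsets each frame, so no new artist is created per reading:

import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
ax.set_xlim(0, 100)
ax.set_ylim(0, 2)
scat = ax.scatter([], [], c='k')  # created once, reused every frame

start = time.time()
times = []

def animate(frame):
    times.append(time.time() - start)
    if len(times) > 1:
        deltas = np.diff(times)                                   # time difference between readings
        data = np.column_stack([np.arange(1, len(times)), deltas])
        scat.set_offsets(data)                                    # full history, not just the newest point
    return scat,

ani = animation.FuncAnimation(fig, animate, interval=1000, cache_frame_data=False)
plt.show()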
2
1
77,965,487
2024-2-9
https://stackoverflow.com/questions/77965487/error-when-using-lid-usage-from-the-swmm-api-package-in-python
I am trying to extract individual LID settings for each subcatchment in SWMM using LID_USAGE from the swmm_api package. I have followed the example at https://markuspichler.gitlab.io/swmm_api/examples/how_to_add_LIDs.html, however, an error occurs. I wonder how to fix the code? I have attached the swmm model for your reference, thank you. The example SWMM model (input file) is available at https://1drv.ms/u/s!AnVl_zW00EHemH5cu620m6vxqXdN?e=S5VSCA. I am using swmm-api 0.4.37. from swmm_api import SwmmInput from swmm_api.input_file.section_labels import SUBAREA,SUBCATCHMENTS,LID_CONTROLS,LID_USAGE import numpy as np import pandas as pd inp = SwmmInput.read_file('Example2.inp') inp.LID_CONTROLS # LID_CONTROLS works smoothly! inp.LID_USAGE # LID_USAGE is not working... inp.LID_USAGE Traceback (most recent call last): File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:780 in from_inp_line return cls(*line_args) TypeError: __init__() takes from 8 to 12 positional arguments but 17 were given During handling of the above exception, another exception occurred: Traceback (most recent call last): Cell In[6], line 1 inp.LID_USAGE File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\inp.py:948 in LID_USAGE return self[LID_USAGE] File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\inp.py:193 in __getitem__ self._convert_section(key) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\inp.py:214 in _convert_section self._data[key] = convert_section(key, self._data[key], self._converter) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:939 in convert_section return section_.from_inp_lines(lines) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:849 in from_inp_lines return cls.create_section(lines) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:831 in create_section sec.add_inp_lines(lines_iter) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:360 in add_inp_lines self.add_multiple(*self._section_object._convert_lines(multi_line_args)) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:866 in _convert_lines yield cls.from_inp_line(*line_args) File C:\ProgramData\miniconda3\envs\swmm\lib\site-packages\swmm_api\input_file\helpers.py:782 in from_inp_line raise TypeError(f'{e} | {cls.__name__}{line_args}\n\n__init__{signature(cls.__init__)}\n\n{getdoc(cls.__init__)}') TypeError: __init__() takes from 8 to 12 positional arguments but 17 were given | LIDUsage('2', 'Bioretention', '1', '5000', '0', '10', '70', '1', '"E:\\Python', 'Examples\\SWMM', 'Calibration\\Example-Yang\\For', 'Calibration\\LID', 'Reports\\S2', 'Bioretention.txt"', '*', '0') __init__(self, subcatchment, lid, n_replicate, area, width, saturation_init, impervious_portion, route_to_pervious=0, fn_lid_report=nan, drain_to=nan, from_pervious=nan) Assignment of LID controls to subcatchments. Args: subcatchment (str): Name of the subcatchment using the LID process. lid (str): Name of an LID process defined in the [LID_CONTROLS] section. n_replicate (int): Number of replicate LID units deployed. area (float): Area of each replicate unit (ft^2 or m^2 ). width (float): Width of the outflow face of each identical LID unit (in ft or m). 
This parameter applies to roofs, pavement, trenches, and swales that use overland flow to convey surface runoff off of the unit. It can be set to 0 for other LID processes, such as bio-retention cells, rain gardens, and rain barrels that simply spill any excess captured runoff over their berms. saturation_init (float): For bio-retention cells, rain gardens, and green roofs this is the degree to which the unit's soil is initially filled with water (0 % saturation corresponds to the wilting point moisture content, 100 % saturation has the moisture content equal to the porosity). The storage zone beneath the soil zone of the cell is assumed to be completely dry. For other types of LIDs it corresponds to the degree to which their storage zone is initially filled with water impervious_portion (float): Percent of the impervious portion of the subcatchment’s non-LID area whose runoff is treated by the LID practice. (E.g., if rain barrels are used to capture roof runoff and roofs represent 60% of the impervious area, then the impervious area treated is 60%). If the LID unit treats only direct rainfall, such as with a green roof, then this value should be 0. If the LID takes up the entire subcatchment then this field is ignored. route_to_pervious (int): A value of 1 indicates that the surface and drain flow from the LID unit should be routed back onto the pervious area of the subcatchment that contains it. This would be a common choice to make for rain barrels, rooftop disconnection, and possibly green roofs. The default value is 0. fn_lid_report (str): Optional name of a file to which detailed time series results for the LID will be written. Enclose the name in double quotes if it contains spaces and include the full path if it is different than the SWMM input file path. Use β€˜*’ if not applicable and an entry for DrainTo follows drain_to (str): Optional name of subcatchment or node that receives flow from the unit’s drain line, if different from the outlet of the subcatchment that the LID is placed in. from_pervious (float): optional percent of the pervious portion of the subcatchment’s non-LID area whose runoff is treated by the LID practice. The default value is 0 .
This message from the error says what the issue is: TypeError: __init__() takes from 8 to 12 positional arguments but 17 were given You are providing information that the program does not know how to parse, because it is only designed for 8-12 parameters. Why are you inputting 5 extra parameters- what are they for? You are providing too many inputs for the input file. You can find details of the input file from the documentation on page 296. These are the recommended parameters: Subcat, LID, Number, Area, Width, InitSat, FromImp, ToPerv, RptFile, DrainTo And here is an example of how the input could be structured for the input file: [LID_USAGE] ;; assigning lid controls to subcatchments ;;Subcatchment LID Process Number Area Width InitSat FromImp ToPerv RptFile DrainTo FromPerv Sub1 BioCell 5 500 100 0 0 0 "example.txt" * 0 EDIT: The above comment was accurate, but the solution did not properly diagnose your issue. After looking at your files, you have filepaths like this for your RptFile: "E:\Python Examples\SWMM Calibration\Example-Yang\For Calibration\LID Reports\S2 Bioretention.txt" Shown here: Python cannot properly interpret this kind of path and often incorrectly splits it into multiple lines; this is why you are getting an error regarding too many arguments. If Python interpreted this as a single path, you would not be getting the error. So that's the solution: adjust the file paths to have no spaces, then copy-paste into your input file accordingly. I used this file path: /Users/USER/Desktop/swmm_example/example.txt and ran your code with no issues.
2
1
77,978,208
2024-2-11
https://stackoverflow.com/questions/77978208/how-to-count-rows-above-the-current-one-based-on-conditions-in-polars
Let's have a polars df: df = pl.DataFrame( { 'date': ['2022-01-01', '2022-01-02', '2022-01-07', '2022-01-17', '2022-03-02', '2022-06-05', '2022-06-07', '2022-07-02'], 'col1': [4, 4, 2, 2, 2, 3, 2, 1], 'col2': [1, 2, 3, 4, 1, 3, 3, 4], 'col3': [2, 3, 4, 4, 3, 2, 2, 1] } ) date col1 col2 col3 2022-01-01 1 1 2 2022-01-02 1 2 3 2022-01-07 2 3 4 2022-01-17 2 4 1 2022-03-02 3 1 3 2022-06-05 3 2 2 2022-06-07 4 3 4 2022-07-02 4 4 1 The df is sorted by date. I would like to create a column, that would give me a count of all the earlier rows (lower date) with all 3 columns having a value that is greater or equal to the value in the current row. Or in other words: Count rows where row_index < current_row_index & col1[row_index] >= col1[current_row_index] & col2[row_index] >= col2[current_row_index] & col3[row_index] >= col3[current_row_index] ) The result should look like this: date col1 col2 col3 ge 2022-01-01 4 1 2 0 2022-01-02 4 2 3 0 2022-01-07 2 3 4 0 2022-01-17 2 4 4 0 2022-03-02 2 1 3 3 2022-06-05 3 3 2 0 2022-06-07 2 3 2 3 2022-07-02 1 4 1 1 I have tried various combinations of shift, qe, over, when, cum_count etc. but I have not been able to figure it out. I couldn't find a question similar enough to adopt its answer successfully either. Is there a way to achieve this using Polars?
You can also use a struct to put all the conditional logic inside .cumulative_eval() cols = "col1", "col2", "col3" df.with_columns(ge = pl.struct(cols).cumulative_eval( pl.all_horizontal( pl.element().struct[col] >= pl.element().struct[col].last() for col in cols ) .sum() - 1 # subtract 1 as we compare each row against itself ) ) shape: (8, 5) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ date ┆ col1 ┆ col2 ┆ col3 ┆ ge β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ i64 ┆ i64 ┆ i64 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ══════β•ͺ═════║ β”‚ 2022-01-01 ┆ 4 ┆ 1 ┆ 2 ┆ 0 β”‚ β”‚ 2022-01-02 ┆ 4 ┆ 2 ┆ 3 ┆ 0 β”‚ β”‚ 2022-01-07 ┆ 2 ┆ 3 ┆ 4 ┆ 0 β”‚ β”‚ 2022-01-17 ┆ 2 ┆ 4 ┆ 4 ┆ 0 β”‚ β”‚ 2022-03-02 ┆ 2 ┆ 1 ┆ 3 ┆ 3 β”‚ β”‚ 2022-06-05 ┆ 3 ┆ 3 ┆ 2 ┆ 0 β”‚ β”‚ 2022-06-07 ┆ 2 ┆ 3 ┆ 2 ┆ 3 β”‚ β”‚ 2022-07-02 ┆ 1 ┆ 4 ┆ 1 ┆ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
6
3
77,978,201
2024-2-11
https://stackoverflow.com/questions/77978201/how-to-let-a-discord-bot-run-a-slash-command-created-by-itself
I am currently searching for a solution to my problem: I want to know if it is possible to let a Discord bot run a slash command which was created by the same Discord bot. This is my current code of the bot: import discord from discord import app_commands class aclient(discord.Client): def __init__(self): super().__init__(intents = discord.Intents.all()) self.synced = False #we use this so the bot doesn't sync commands more than once async def on_ready(self): await self.wait_until_ready() if not self.synced: #check if slash commands have been synced await tree.sync(guild = discord.Object(id=1204091669943816213)) #guild specific: leave blank if global (global registration can take 1-24 hours) self.synced = True channel_id = 1204102053064871946 # Channel ID to clear messages channel = self.get_channel(channel_id) if channel: await channel.purge(limit=1) # Delete all messages in the channel user = client.get_user(int(716235295032344596)) print(f"We have logged in as {self.user}.") class button_view(discord.ui.View): def __init__(self) -> None: super().__init__(timeout=None) @discord.ui.button(label = "Verify", style = discord.ButtonStyle.green, custom_id = "role_button") async def verify(self, interaction: discord.Interaction, button: discord.ui.Button): client.role = interaction.guild.get_role(1204110443258318928) if client.role not in interaction.user.roles: await interaction.user.add_roles(client.role) await interaction.response.send_message(f"I have given you {client.role.mention}!", ephemeral = True) with open("verified.txt", "w") as file: file.write(str(interaction.user.id) + "\n") else: await interaction.response.send_message(f"You already have {client.role.mention}!", ephemeral = True) client = aclient() tree = app_commands.CommandTree(client) @tree.command(guild = discord.Object(id=1204091669943816213), name='rules', description='Rules') async def rules(interaction: discord.Interaction): role_id = 1205862534280642640 role = discord.utils.get(interaction.guild.roles, id=role_id) if role in interaction.user.roles: embed2=discord.Embed(title="πŸ“œ Rules πŸ“œ", description=''' **Β§1 -** __Be respectful__: Treat others with kindness and respect. Harassment, hate speech, or any form of discrimination will not be tolerated.\n **Β§2 -** __Keep discussions civil__: Debates and discussions are encouraged, but avoid personal attacks or insults. Disagreements should be handled respectfully.\n **Β§3 -** __No spam or self-promotion__: Avoid flooding the chat with unnecessary messages or advertisements.\n **Β§4 -** __Use appropriate content__: Keep conversations and content appropriate for all ages. NSFW (Not Safe For Work) content is strictly prohibited.\n **Β§5 -** __No trolling__: Do not engage in trolling, flaming, or intentionally disrupting the server environment. This includes excessive use of emojis or CAPS LOCK.\n **Β§6 -** __Follow Discord's Terms of Service and Community Guidelines__: Make sure all activities within the server comply with Discord's terms and guidelines.\n **Β§7 -** __Respect server staff__: Follow the instructions of moderators and administrators. Disrespect towards server staff will not be tolerated.\n **Β§8 -** __Use channels appropriately__: Post content in relevant channels and avoid off-topic discussions. If unsure, ask a staff member for guidance.\n **Β§9 -** __Report violations__: If you encounter any violations of the rules or Discord's guidelines, report them to the server staff. 
''', color=discord.Colour.blue()) embed2.set_thumbnail(url="https://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Userbox_creeper.svg/800px-Userbox_creeper.svg.png") await interaction.response.send_message(embed=embed2, view = button_view()) else: await interaction.response.send_message("You don't have permission to execute this command.", ephemeral=True) client.run('Token') In the on_ready Function I want the bot to run the rules slash command after theif channel:.
I want to know if it is possible to let a Discord Bot run a Slash Command which was created by the same Discord Bot. No. Regardless of whose command it is, bots cannot execute them. Application commands (slash commands) are user initiated, which is why you won't find any documentation on it. If you're looking to send the rules embed again after your if channel: check, you need to extract the code creating your rules embed into its own function and call that function inside your if channel: block. For example: def getRulesEmbed(): embed2=discord.Embed(title="📜 Rules 📜", description=''' **§1 -** __Be respectful__: Treat others with kindness and respect. Harassment, hate speech, or any form of discrimination will not be tolerated.\n **§2 -** __Keep discussions civil__: Debates and discussions are encouraged, but avoid personal attacks or insults. Disagreements should be handled respectfully.\n **§3 -** __No spam or self-promotion__: Avoid flooding the chat with unnecessary messages or advertisements.\n **§4 -** __Use appropriate content__: Keep conversations and content appropriate for all ages. NSFW (Not Safe For Work) content is strictly prohibited.\n **§5 -** __No trolling__: Do not engage in trolling, flaming, or intentionally disrupting the server environment. This includes excessive use of emojis or CAPS LOCK.\n **§6 -** __Follow Discord's Terms of Service and Community Guidelines__: Make sure all activities within the server comply with Discord's terms and guidelines.\n **§7 -** __Respect server staff__: Follow the instructions of moderators and administrators. Disrespect towards server staff will not be tolerated.\n **§8 -** __Use channels appropriately__: Post content in relevant channels and avoid off-topic discussions. If unsure, ask a staff member for guidance.\n **§9 -** __Report violations__: If you encounter any violations of the rules or Discord's guidelines, report them to the server staff. ''', color=discord.Colour.blue()) embed2.set_thumbnail(url="https://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Userbox_creeper.svg/800px-Userbox_creeper.svg.png") return embed2 And then you can build the rules embed and send it inside the if statement: rules = getRulesEmbed()
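For completeness, a rough, untested sketch (based only on the code already in the question and the getRulesEmbed helper above) of what the on_ready method of the aclient class could look like — sending the embed directly instead of trying to "run" the slash command:

async def on_ready(self):
    await self.wait_until_ready()
    if not self.synced:
        await tree.sync(guild=discord.Object(id=1204091669943816213))
        self.synced = True
    channel = self.get_channel(1204102053064871946)
    if channel:
        await channel.purge(limit=1)
        # send the same embed the /rules command builds, rather than invoking the command
        await channel.send(embed=getRulesEmbed(), view=button_view())
    print(f"We have logged in as {self.user}.")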
2
2
77,978,204
2024-2-11
https://stackoverflow.com/questions/77978204/speeding-up-a-rolling-sum-calculation
I'm doing some work with a fairly large amount of (horse racing!) data for a project, calculating rolling sums of values for various different combinations of data - thus I need to streamline it as much as possible. Essentially I am: calculating the rolling calculation of a points field over time calculating this for various grouped combinations of data [in this case the combination of horse and trainer] looking at the average of the value by group for the last 180 days of data through time The rolling window calculation below works fine - but takes 8.2s [this is about 1/8 of the total dataset - hence would take 1m 5s]. I am looking for ideas of how to streamline this calculation as I'm looking to do it for a number of different combinations of data, and thus speed is of the essence. Thanks. import pandas as pd import time url = 'https://raw.githubusercontent.com/richsdixon/testdata/main/testdata.csv' df = pd.read_csv(url, parse_dates=True) df['RaceDate'] = pd.to_datetime(df['RaceDate'], format='mixed') df.sort_values(by='RaceDate', inplace=True) df['HorseRaceCount90d'] = (df.groupby(['Horse','Trainer'], group_keys=False) .apply(lambda x: x.rolling(window='180D', on='RaceDate', min_periods=1)['Points'].mean()))
Here is a little faster way: df.merge(df.set_index('RaceDate') .groupby(['Horse', 'Trainer'])['Points'] .rolling('180D') .mean() .rename('HorseRaceCount90d_1'), right_index=True, left_on=['Horse', 'Trainer', 'RaceDate']) Your way: 18.5 s Β± 727 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) This way: 1.18 s Β± 36.3 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each) Quick Comparison: RaceID RaceDate Horse Jockey Trainer Points HorseRaceCount90d HorseRaceCount90d_1 0 678365 2017-01-08 STRADIVARIUS Andrea Atzeni John Gosden 100.00 100.000000 100.000000 289 680610 2017-01-08 CLASSIC PALACE Brian Hughes Dianne Sayer 1.76 1.760000 1.760000 288 680610 2017-01-08 ROMAN NUMERAL Joe Colliver David Thompson 0.00 0.000000 0.000000 287 680610 2017-01-08 COOPER'S FRIEND Conor O'Farrell R Mike Smith 0.00 0.000000 0.000000 286 680610 2017-01-08 GLEANN NA NDOCHAIS Craig Nichol Alistair Whillans 0.00 0.000000 0.000000 ... ... ... ... ... ... ... ... ... 96817 702712 2018-12-06 URBAN ICON Tom Marquand Richard Hannon 12.50 12.500000 12.500000 96816 702712 2018-12-06 EVEN KEEL Rob Hornby Jonathan Portman 11.07 11.070000 11.070000 96815 702712 2018-12-06 MOLLY BLAKE Hector Crouch Clive Cox 9.73 9.730000 9.730000 96885 702719 2018-12-06 CELTIC ARTISAN Cam Hardie Rebecca Menzies 2.37 1.046667 1.046667 97076 704968 2018-12-06 REVENGE David Allan Tim Easterby 2.47 1.346667 1.346667 [100008 rows x 8 columns]
3
4
77,976,619
2024-2-11
https://stackoverflow.com/questions/77976619/window-must-be-an-integer-0-or-greater-issue-with-30d-style-rolling-calculat
I've had a look and can't seem to find a solution to this issue. I'm wanting to calculate the rolling sum of the previous 30 days' worth of data at each date in the dataframe - by subgroup - for a set of data that isn't daily - it's spaced fairly irregularly. I've been attempting to use ChatGPT which is getting in a twist over it. Initially the suggestion was that I'd not converted the Date column to datetime format to allow for the rolling calculation, but now from the code below: import pandas as pd from datetime import datetime, timedelta import numpy as np # Create a dataset with irregularly spaced dates spanning two years np.random.seed(42) date_rng = pd.date_range(start='2022-01-01', end='2023-12-31', freq='10D') # Every 10 days data = {'Date': np.random.choice(date_rng, size=30), 'Group': np.random.choice(['A', 'B'], size=30), 'Value': np.random.randint(1, 30, size=30)} df = pd.DataFrame(data) # Sort DataFrame by date df.sort_values(by='Date', inplace=True) df['Date'] = pd.to_datetime(df['Date']) # Calculate cumulative sum by group within the previous 30 days from each day df['RollingSum_Last30Days'] = df.groupby('Group')['Value'].transform(lambda x: x.rolling(window='30D', min_periods=1).sum()) I'm getting an error of: ValueError: window must be an integer 0 or greater I've found conflicting comments online as to whether the format '30D' works in rolling windows but I'm none the wiser as to a solution to this. Any help appreciated. Running in VSCode in Python 3.11.8.
The issue if that you need to specify which column to use as Date but don't have access to the Date with groupby.transform. You could use groupby.apply: # Calculate cumulative sum by group within the previous 30 days from each day df['RollingSum_Last30Days'] = (df.groupby('Group', group_keys=False) .apply(lambda x: x.rolling(window='30D', on='Date', min_periods=1)['Value'].sum()) ) Output: Date Group Value RollingSum_Last30Days 9 2022-01-11 A 22 22.0 12 2022-01-11 A 22 44.0 6 2022-01-21 A 4 48.0 1 2022-05-21 B 14 14.0 23 2022-05-21 A 8 8.0 15 2022-07-20 B 26 26.0 4 2022-07-20 A 18 18.0 18 2022-07-30 B 10 36.0 7 2022-07-30 A 2 20.0 5 2022-08-19 A 8 10.0 10 2022-10-18 B 10 10.0 16 2022-11-17 B 12 12.0 11 2023-01-06 B 4 4.0 21 2023-02-15 B 16 16.0 26 2023-04-06 B 28 28.0 19 2023-04-26 A 4 4.0 28 2023-05-16 B 8 8.0 0 2023-05-26 B 3 11.0 8 2023-06-05 A 6 6.0 29 2023-06-25 A 21 27.0 17 2023-07-25 A 2 2.0 20 2023-08-04 B 14 14.0 22 2023-08-14 B 15 29.0 14 2023-08-14 B 18 47.0 3 2023-08-24 A 4 4.0 24 2023-09-03 B 14 47.0 25 2023-09-03 A 23 27.0 27 2023-09-03 A 25 52.0 13 2023-09-23 B 29 43.0 2 2023-12-12 A 17 17.0
2
1
77,975,111
2024-2-10
https://stackoverflow.com/questions/77975111/retrieve-aws-secrets-using-boto3
I want to retrieve AWS secrets using Python boto3 and I came across this sample code: aws-doc-sdk-examples/python/example_code/secretsmanager/get_secret_value.py at main · awsdocs/aws-doc-sdk-examples · GitHub Secrets Manager examples using SDK for Python (Boto3) - AWS SDK Code Examples But it is confusing. I don't see the boto3 library import in the Python file. Not an expert in Python, so any help in understanding this is much appreciated. I was expecting the AWS secret name and the boto3 library to be part of the Python function.
Per the documentation, each of the example folders has one or more main runner scripts. For the Secrets Manager examples, you would run either: python scenario_get_secret.py, or python scenario_get_batch_secrets.py Each of these 'runner' scripts imports the relevant Python code e.g. get_secret_value.py. The code is structured this way so that you can easily test the code while separating the test code from the reusable utility code (that you would potentially use).
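If the goal is simply to fetch a secret without the example scaffolding, a minimal sketch looks like this (the secret name and region below are placeholders to replace with your own):

import boto3
from botocore.exceptions import ClientError

def get_secret(secret_name: str, region_name: str = "us-east-1") -> str:
    # Create a Secrets Manager client and fetch the secret's current value
    client = boto3.client("secretsmanager", region_name=region_name)
    try:
        response = client.get_secret_value(SecretId=secret_name)
    except ClientError:
        raise  # handle/log as appropriate for your application
    return response["SecretString"]

# Hypothetical usage:
# print(get_secret("my/app/secret"))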
5
3
77,973,200
2024-2-10
https://stackoverflow.com/questions/77973200/fixing-the-way-how-to-sort-element-in-a-list-using-python-sorted-method
I have this list containing the following images with the name structure "number.image". The list stores the elements of a local folder based on a path. [1.image, 2.image, 3.image, 4.image, 5.image, 6.image, 7.image, 8.image, 9.image, 10.image, 11.image, 12.image, 13.image] Applying the Python built-in sorted() method to make sure the elements in the list are sorted in the proper manner, I got this result. As you can see, the order is not correct. 1.image 10.image 11.image 12.image 13.image 2.image 3.image 4.image 5.image 6.image 7.image 8.image 9.image
If you want to use the inbuilt sorted function and not install a third-party library such as natsort, you can use a lambda for the key argument that interprets the stem of the file as an integer: >>> from pathlib import Path >>> filenames = [ ... '1.image', ... '10.image', ... '11.image', ... '12.image', ... '13.image', ... '2.image', ... '3.image', ... '4.image', ... '5.image', ... '6.image', ... '7.image', ... '8.image', ... '9.image', ... ] >>> sorted_filenames = sorted(filenames, key=lambda f: int(Path(f).stem)) >>> # Arguably less readable alternative using list indexing splitting on the period manually: >>> # sorted_filenames = sorted(filenames, key=lambda f: int(f.split('.')[0])) >>> print('\n'.join(sorted_filenames)) 1.image 2.image 3.image 4.image 5.image 6.image 7.image 8.image 9.image 10.image 11.image 12.image 13.image
2
1
77,974,379
2024-2-10
https://stackoverflow.com/questions/77974379/how-to-find-max-value-in-list-datatype-column-in-polars-dataframe
I am using a Polar data frame for the first time. I am trying to match input values with data in the Postgres table. Sharing some sample code which is part of the actual code. I have a column called "Score" of type list[i32]. As a next step, I am trying to find the maximum value in that list. I am getting errors. import polars as pl import jaro def test_polars(): fname='sarah' lname = 'vatssss' data = {"first_name": ['sarah', 'purnima'], "last_name": ['vats', 'malik']} df = pl.DataFrame(data) print(df) df = (df.with_columns( [ (pl.when(pl.col("first_name") == fname).then(1).otherwise(0)).alias("E_FN"), (pl.when(pl.col("last_name") == lname).then(1).otherwise(0)).alias("E_LN"), (pl.when(pl.col("first_name").str.slice(0, 3) == fname[0:3]).then(1).otherwise(0)).alias("F3_FN"), (pl.when(pl.col("first_name").map_elements( lambda first_name: jaro.jaro_winkler_metric(first_name, fname)) >= 0.8).then(1).otherwise(0)).alias( "CMP80_FN"), (pl.when(pl.col("last_name").map_elements( lambda first_name: jaro.jaro_winkler_metric(first_name, lname)) >= 0.9).then(1).otherwise(0)).alias( "CMP90_LN"), ] ) .with_columns( [ pl.concat_list(980 * pl.col("E_FN") , 970 * pl.col("E_LN") ).alias("score") ] ) .with_columns( [pl.max(pl.col("score")).alias("max_score") ] ) ) print(df) if __name__ == '__main__': test_polars() C:\PythonProject\pythonProject\venv\Graph_POC\Scripts\python.exe "C:\PythonProject\pythonProject\polars data.py" shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ first_name ┆ last_name β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ sarah ┆ vats β”‚ β”‚ purnima ┆ malik β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Traceback (most recent call last): File "C:\PythonProject\pythonProject\polars data.py", line 45, in <module> test_polars() File "C:\PythonProject\pythonProject\polars data.py", line 34, in test_polars [pl.max(pl.col("score")).alias("max_score") ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\polars\functions\aggregation\vertical.py", line 175, in max return F.col(*names).max() ^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\polars\functions\col.py", line 288, in __new__ return _create_col(name, *more_names) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\polars\functions\col.py", line 67, in _create_col raise TypeError(msg) TypeError: invalid input for `col` Expected `str` or `DataType`, got 'Expr'. Process finished with exit code 1
you can use list.max() function. Simple example: df = pl.DataFrame( { "a": [[1, 8, 3],[6,2,7],[8,10,11]], "b": [4, 5, None], } ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ list[i64] ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════║ β”‚ [1, 8, 3] ┆ 4 β”‚ β”‚ [6, 2, 7] ┆ 5 β”‚ β”‚ [8, 10, 11] ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ df.with_columns( pl.col('a').list.max().alias('max_a') ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ max_a β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ list[i64] ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════║ β”‚ [1, 8, 3] ┆ 4 ┆ 8 β”‚ β”‚ [6, 2, 7] ┆ 5 ┆ 7 β”‚ β”‚ [8, 10, 11] ┆ null ┆ 11 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ so in your case just use pl.col("score").list.max instead of pl.max(pl.col("score")). max() is just a syntactic sugar for col(names).max() and supposed to return max value of the column, not max value of the list type cell of the column.
2
2
77,973,237
2024-2-10
https://stackoverflow.com/questions/77973237/deleting-disabled-when-saving-changes-in-formset
I'm creating a table with a formset from a queryset of existing objects. Few fields in the table should change, and other fields should be displayed, but not changed. For displaying non-editable fields, I use the widget with 'disabled' in the form. GET request does everything right. But the POST request does not save changes to the resolved fields, because unresolved fields are required and have disabled and blocks the sending of existing values. I tried to use jQuery from the example, but it doesn't work. Moreover, I do not understand how I can refer to different ids in the formset. Models: class Order(models.Model): car = models.ForeignKey( Car, blank=True, null=True, on_delete=models.PROTECT, related_name='car_orders', ) department = models.ForeignKey( Department, blank=False, null=False, on_delete=models.PROTECT, related_name='dep_orders', ) ... View def orders_list(request, year, month, day): orders = Order.objects.filter( order_date__year=year, order_date__month=month, order_date__day=day ) if request.method == 'POST': formset = OrderCloseFormSet( request.POST or None, queryset=orders, prefix='order' ) if formset.is_valid(): formset.save(commit=False) for form in formset: form.save() formset = OrderCloseFormSet(queryset=orders, prefix='order') context = {'orders': orders, 'formset': formset} return render(request, 'orders/orders_list.html', context) Form class OrderCloseForm(forms.ModelForm): class Meta: model = Order fields = ( 'type_car', 'department', ... ) widgets = { 'car': forms.Select(attrs={'style': 'width: 100%'}), 'department': forms.Select(attrs={'disabled': 'True', 'style': 'width: 100%'}), ... } Template <form method="post" action="{% url 'orders:orders_list' year month day %}"> {% csrf_token %} {{ formset.management_form }} {{ formset.non_form_errors.as_ul }} <table class="table table-order" id="order-formset"> {% for form in formset %} {% if forloop.first %} <thead> <tr> {% for field in form.visible_fields %} <th>{{ field.label|capfirst }}</th> {% endfor %} </tr> </thead> {% endif %} <tbody> <tr> {% for field in form %} <td> {% if field == car %} {% for hidden in form.hidden_fields %} {{ hidden }} {% endfor %} {% endif %} {% if field == form.route_movement %} <a href="{% url 'orders:order_detail' form.id.value %}"> {{ field|addclass:'input-box input-select' }} </a> {% else %} {{ field|addclass:'input-box input-select' }} {% endif %} </td> {% endfor %} </tr> </tbody> {% endfor %} </table> ... </form> <script> $('id_order-0-department').submit(function(e) { $(':disabled').each(function(e) { $(this).removeAttr('disabled'); }) }); </script> In response, get the form id in the formset with next id: <select name="order-0-department" disabled="True" id="id_order-0-department"></select> So how do I make a table with editable fields, but keeps the data in unchanged fields.
Setting disabled will only render the field non-editable, so people can still forge a POST request that would alter the field. You should set the field to .disabled = True, this will not only skip the fields, but also prevent people from forging a POST request that somehow would change these fields, like: class OrderCloseForm(forms.ModelForm): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.fields['department'].disabled = True class Meta: model = Order fields = ( 'type_car', 'department', # … ) widgets = { 'car': forms.Select(attrs={'style': 'width: 100%'}), 'department': forms.Select(attrs={'style': 'width: 100%'}), # … } The view is actually a lot simpler than currently done: you can save the FormSet directly: def orders_list(request, year, month, day): orders = Order.objects.filter( order_date__year=year, order_date__month=month, order_date__day=day ) if request.method == 'POST': formset = OrderCloseFormSet( request.POST, request.FILES, queryset=orders, prefix='order' ) if formset.is_valid(): formset.save() # commit=True return redirect('name-of-some-view') else: formset = OrderCloseFormSet(queryset=orders, prefix='order') context = {'orders': orders, 'formset': formset} return render(request, 'orders/orders_list.html', context) Note: In case of a successful POST request, you should make a redirect [Django-doc] to implement the Post/Redirect/Get pattern [wiki]. This avoids that you make the same POST request when the user refreshes the browser.
2
2
77,972,662
2024-2-10
https://stackoverflow.com/questions/77972662/different-results-in-chi-square-test
I am trying to calculate the Chi-Square result for a simply data table I have two groups, let's call them PG and KG. PG has in category 1 19 values and in category 2 11 values. KG has in category 1 0 values and in category 2 30 values. The python code would be from scipy.stats import chi2_contingency observed = [[0, 30], [19, 11]] chi2, p, dof, expected = chi2_contingency(observed) print("Chi Square value:", chi2) This yields 24.95. But when using Stata or the online calculator https://mathcracker.com/de/chi-quadrat-test-unabhangigkeit, I get 27.80. I don't understand where the difference comes from.
There's a parameter correction=, which is set to True by default: correction bool, optional If True, and the degrees of freedom is 1, apply Yates’ correction for continuity. The effect of the correction is to adjust each observed value by 0.5 towards the corresponding expected value. from scipy.stats import chi2_contingency observed = [[0, 30], [19, 11]] chi2, p, dof, expected = chi2_contingency(observed, correction=False) print("Chi Square value:", chi2) Prints: Chi Square value: 27.804878048780488
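To see where both numbers come from, here is a small sketch (my own, just to illustrate) computing the plain Pearson statistic and the Yates-corrected one by hand from the expected frequencies:

import numpy as np
from scipy.stats import contingency

observed = np.array([[0, 30], [19, 11]])
expected = contingency.expected_freq(observed)

# Plain Pearson chi-square -- matches Stata / the online calculator (~27.80)
chi2_plain = ((observed - expected) ** 2 / expected).sum()

# Yates continuity correction -- what chi2_contingency applies by default for 2x2 tables (~24.95)
chi2_yates = ((np.abs(observed - expected) - 0.5) ** 2 / expected).sum()

print(chi2_plain, chi2_yates)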
3
2
77,968,626
2024-2-9
https://stackoverflow.com/questions/77968626/dynamic-optimization-in-gekko
I need to optimize this function in GEKKO and something is wrong. The black function (x2) is how it should theoretically look. m = GEKKO() m.options.IMODE = 6 m.time = np.linspace(0, 1, 100) x = m.Var(lb=1, ub=3) x2 = m.Var(lb=1, ub=3) J = m.Var(0) t = m.Param(value=m.time) m.Equation(J.dt() == 24*x*t + 2*x.dt()**2 -4*t) m.Equation(x2==t**3 + t + 1) Jf = m.FV() Jf.STATUS = 1 m.Connection(Jf, J, pos2 = 'end') m.Obj(Jf) m.solve() plt.plot(m.time, x.value) plt.plot(m.time, x2.value, color='black') plt.show()
Here is a solution to the problem that aligns with the known solution: from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO(remote=True) n = 101 m.time = np.linspace(0,1,n) x = m.Var(1,lb=1,ub=3) t = m.Param(value=m.time) p = np.zeros(n); p[-1]=1 final = m.Param(p) m.Equation(final*(x-3)==0) m.Minimize(final*m.integral(24*x*t + 2*x.dt()**2 -4*t)) m.options.IMODE = 6 m.options.SOLVER = 3 m.options.NODES = 2 m.solve() plt.plot(m.time, x.value,'b-',label='x gekko') tm = m.time plt.plot(tm,tm**3+t+1,'r--',label='x solution') plt.legend(); plt.xlabel('time'); plt.ylabel('x') plt.grid(); plt.show() The variable x has an initial value 1 and bounds between 1 and 3. The parameter final is a numpy array with all zeros except for the last element set to one. This setup is used to define a constraint in the model that only applies at the final time point, ensuring x equals 3 at the final time. Don't use m.fix_final(x,3) because it also sets the derivative of x to zero at the final point. The objective function m.Minimize() contains an m.integral() expression that depends on x, t, and the derivative of x with respect to time. It is also multiplied by final so that only the final integral value is used to define the objective function.
2
1
77,971,552
2024-2-10
https://stackoverflow.com/questions/77971552/how-to-write-a-python-function-that-splits-and-selects-first-element-in-each-pan
I have a dataframe df with double entries separated by a , in some columns. I want to write a function to extract only the entry before the , for columns with double entries. Example: 20.15,20.15 split to 20.15 See the dataframe import pandas as pd # initialize data of lists. data = {'Name': ['Tom', 'nick', 'krish', 'jack','Phil','Shaq','Frank','Jerome','Arpan','Sean'], 'Age': ['20.15,20.15', '21.02,21.02', '19.04,19.04','18.17,18.17','65.77,65.77','34.19,34.19','76.12,76.12','65.55,65.55','55.03,55.03','41.11,41.11'], 'Score_1':['10,10', '21,21', '19,19','18,18','65,65','34,34','76,76','65,65','55,55','41,41'], 'Score_2':['11,11', '31,31', '79,79','38,38','75,75','94,94','26,26','15,15','96,96','23,23'], 'Score_3':['101,101', '212,212', '119,119','218,218','765,765','342,342','706,706','615,615','565,565','491,491'], 'Type':[ 'A','C','D','F','B','E','H','G','J','K'], 'bonus':['3.13,3.13','5.02,5.02','4.98,4.98','6.66,6.66','0.13,0.13','4.13,4.13','5.12,5.12','4.28,4.28','6.16,6.16','5.13,5.13'], 'delta':[0.1,0.3,2.3,8.2,7.1,5.7,8.8,9.1,4.3,2.9]} # Create DataFrame df = pd.DataFrame(data) # Print the output. print(df) Desired output (You can copy & paste) # initialize data of lists. df1 = {'Name': ['Tom', 'nick', 'krish', 'jack','Phil','Shaq','Frank','Jerome','Arpan','Sean'], 'Age': ['20.15', '21.02', '19.04','18.17','65.77','34.19','76.12','65.55','55.03','41.11'], 'Score_1':['10', '21', '19','18','65','34','76','65','55','41'], 'Score_2':['11', '31', '79','38','75','94','26','15','96','23'], 'Score_3':['101', '212', '119','218','765','342','706','615','565','491'], 'Type':[ 'A','C','D','F','B','E','H','G','J','K'], 'bonus':['3.13','5.02','4.98','6.66','0.13','4.13','5.12','4.28','6.16','5.13'], 'delta':[0.1,0.3,2.3,8.2,7.1,5.7,8.8,9.1,4.3,2.9]} # Create DataFrame df2 = pd.DataFrame(df1) # Print the output. print(df2) I need help with a more robust function, see my attempt below def stringsplitter(data,column): # select columns with object datatype data1 = data.select_dtypes(include=['object']) cols= data1[column].str.split(',', n=1).str print(cols[0]) # applying stringsplitter to the dataframe final_df = df.apply(stringsplitter) Thanks for your help
You can create the DataFrame and then edit the columns that have a comma. Note that this will only work if you're sure only the columns with duplicated data have commas in their values. # Create DataFrame df = pd.DataFrame(data) for col in df.columns: if df[col].dtype == "object": df[col] = df[col].astype(str).str.split(",").str[0] # Print the output. print(df) The result will be: Name Age Score_1 Score_2 Score_3 Type bonus delta 0 Tom 20.15 10 11 101 A 3.13 0.1 1 nick 21.02 21 31 212 C 5.02 0.3 2 krish 19.04 19 79 119 D 4.98 2.3 3 jack 18.17 18 38 218 F 6.66 8.2 4 Phil 65.77 65 75 765 B 0.13 7.1 5 Shaq 34.19 34 94 342 E 4.13 5.7 6 Frank 76.12 76 26 706 H 5.12 8.8 7 Jerome 65.55 65 15 615 G 4.28 9.1 8 Arpan 55.03 55 96 565 J 6.16 4.3 9 Sean 41.11 41 23 491 K 5.13 2.9
2
3
77,970,330
2024-2-9
https://stackoverflow.com/questions/77970330/in-multilevel-columns-pandas-dataframe-reorder-the-specific-lower-level-of-cate
I got a multilevel dataframe with multilevel columns; df.columns is MultiIndex([( 'county', ''), ( 'month', ''), ( 'Gender', 'Declined to self-identify'), ( 'Gender', 'Female'), ( 'Gender', 'Male'), ( 'Gender', 'Non-Binary / Gender expansive'), ( 'age', '0-4'), ( 'age', '11-13'), ( 'age', '15-18'), ( 'age', '19-24'), ( 'age', '25-34'), ( 'age', '35-44'), ( 'age', '45-54'), ( 'age', '5-10'), ( 'age', '55-64'), ( 'age', '65_vove'), ( 'age', 'Unkown'), ( 'race', 'American Indian or Alaska Native'), ( 'race', 'Asian'), ( 'race', 'Black or African American'), ( 'race', 'Native Hawaiian or other Pacific Islander'), ( 'race', 'UNKnown'), ( 'race', 'White'), ('Ethnicity', 'Hispanic or Latinx'), ('Ethnicity', 'Non Hispanic or Latinx'), ('Ethnicity', 'Unknown')], names=['variable', 'value']) I would like to reorder the race categories as race_order = ['Black or African American', 'American Indian or Alaska Native','Asian', 'Native Hawaiian or other Pacific Islander','White', 'UNKnown']
Example Code let's make sample input import pandas as pd import numpy as np a = [('county', ''), ('month', ''), ('Gender', 'Declined to self-identify'), ('Gender', 'Female'), ('Gender', 'Male'), ('Gender', 'Non-Binary / Gender expansive'), ('age', '0-4'), ('age', '11-13'), ('age', '15-18'), ('age', '19-24'), ('age', '25-34'), ('age', '35-44'), ('age', '45-54'), ('age', '5-10'), ('age', '55-64'), ('age', '65_vove'), ('age', 'Unkown'), ('race', 'American Indian or Alaska Native'), ('race', 'Asian'), ('race', 'Black or African American'), ('race', 'Native Hawaiian or other Pacific Islander'), ('race', 'UNKnown'), ('race', 'White'), ('Ethnicity', 'Hispanic or Latinx'), ('Ethnicity', 'Non Hispanic or Latinx'), ('Ethnicity', 'Unknown')] np.random.seed(0) df = pd.DataFrame(np.random.randint(0, 10, (3, 26)), columns=pd.MultiIndex.from_tuples(a, names=['variable', 'value'])) df: df1 = df.columns.to_frame(index=False) race_order = ['Black or African American', 'American Indian or Alaska Native','Asian', 'Native Hawaiian or other Pacific Islander','White', 'UNKnown'] df1.loc[df1['variable'].eq('race'), 'value'] = race_order out = df[pd.MultiIndex.from_frame(df1, names=['variable', 'value'])] out:
2
2
77,969,654
2024-2-9
https://stackoverflow.com/questions/77969654/error-while-finding-similarity-between-polars-dataframe-column-and-string-variab
I need to find a similarity between the column in the polar data frame and the input value.I am using jaro_winkler_metric . I am getting errors while doing it. We don't want to use UDF functions as it slows the process. import polars as pl import jaro def test_polars(): name='savah' data = {"first_name": ['sarah', 'purnima'], "last_name": ['vats', 'malik']} df = pl.DataFrame(data) print(df) df = df.with_columns( [ (pl.when( jaro.jaro_winkler_metric(pl.col("first_name"), name) >= 0.8 ).then(1).otherwise(0)).alias("COMP80_FN"), ] ) print(df) if __name__ == '__main__': test_polars() C:\PythonProject\pythonProject\venv\Graph_POC\Scripts\python.exe "C:\PythonProject\pythonProject\polars data.py" shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ first_name ┆ last_name β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ sarah ┆ vats β”‚ β”‚ purnima ┆ malik β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Traceback (most recent call last): File "C:\PythonProject\pythonProject\polars data.py", line 22, in <module> test_polars() File "C:\PythonProject\pythonProject\polars data.py", line 12, in test_polars (pl.when( jaro.jaro_winkler_metric(pl.col("first_name"), name) >= 0.8 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\jaro\__init__.py", line 43, in jaro_winkler_metric return jaro.metric_jaro_winkler(string1, string2) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\jaro\jaro.py", line 235, in metric_jaro_winkler ans = string_metrics(string1, string2, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\PythonProject\pythonProject\venv\Graph_POC\Lib\site-packages\jaro\jaro.py", line 159, in string_metrics assert isinstance(s1, str) AssertionError Process finished with exit code 1
The error lies within this snippet: jaro.jaro_winkler_metric(pl.col("first_name"), name) You pass a Polars expression to jaro_winkler_metric, and I doubt the function accepts such inputs; it expects a string instead. Instead you could try using: df = ( df .with_columns( pl.col("first_name").map_elements(lambda first_name: jaro.jaro_winkler_metric(first_name, name)).alias("jwm") ) .with_columns( pl.when(pl.col("jwm") >= 0.8).then(1).otherwise(0).alias("COMP80_FN") ) )
2
2
77,969,653
2024-2-9
https://stackoverflow.com/questions/77969653/python-docx-twips-object-is-not-callable
I have a simple script to generate a document using python-docx. I can generate the document fine, but when I try to change anything related to the document sections I get the error: ERROR - Error while processing: 'Twips' object is not callable from docx import Document from docx.shared import Inches from docx.section import Section doc = Document() section = doc.sections[0] section.page_width(Inches(5)) section.page_height(Inches(5)) I've tried different units and ways of calling the sections array. I'm assuming the created document already has a section by default, but I've also tried to call doc.add_section() before it and the same issue happens.
It seems this is not the correct usage: section.page_width(Inches(5)) section.page_height(Inches(5)) Instead, try: section.page_width = Inches(5) section.page_height = Inches(5)
2
2
77,969,526
2024-2-9
https://stackoverflow.com/questions/77969526/optional-parameters-fastapi
Please help me. In the function I set the optional parameter param as a boolean value, but the docs (Swagger) do not display the data type for param. How can I fix it? I'm using Python 3.12.1. from fastapi import FastAPI from datetime import date app=FastAPI() @app.get ("/sales") def sales( date_from: date, date_to: date, param: bool | None = None, ): return date_from, date_to, param On Python 3.9 everything is fine.
set the optional parameter "param" as a boolean value, but the docs do not display the data type Well, it's not a bool, is it? You decided to make it a bool | None instead. (Sometimes pronounced Optional[bool], same thing.) Let's fix that: param: bool = False,
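Putting that together with the endpoint from the question (a minimal sketch, everything else unchanged):

from datetime import date
from fastapi import FastAPI

app = FastAPI()

@app.get("/sales")
def sales(
    date_from: date,
    date_to: date,
    param: bool = False,  # still optional for the caller, but Swagger now shows it as a boolean
):
    return date_from, date_to, param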
2
2
77,968,612
2024-2-9
https://stackoverflow.com/questions/77968612/polars-smart-way-to-avoid-window-expression-not-allowed-in-aggregation
I have the following code which works. import numpy as np import polars as pl data = { "date": ["2021-01-01", "2021-01-02", "2021-01-03", "2021-01-04", "2021-01-05", "2021-01-06", "2021-01-07", "2021-01-08", "2021-01-09", "2021-01-10", "2021-01-11", "2021-01-12", "2021-01-13", "2021-01-14", "2021-01-15", "2021-01-16", "2021-01-17", "2021-01-18", "2021-01-19", "2021-01-20"], "close": np.random.randint(100, 110, 10).tolist() + np.random.randint(200, 210, 10).tolist(), "company": ["A", "A", "A", "A", "A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B", "B", "B", "B", "B"] } df = pl.DataFrame(data).with_columns(date = pl.col("date").cast(pl.Date)) # Calculate Returns R = pl.col("close").pct_change() # Calculate Gains and Losses G = pl.when(R > 0).then(R).otherwise(0).alias("gain") L = pl.when(R < 0).then(R).otherwise(0).alias("loss") # Calculate Moving Averages for Gains and Losses window = 3 MA_G = G.rolling_mean(window).alias("MA_gain") MA_L = L.rolling_mean(window).alias("MA_loss") # Calculate Relative Strength Index based on Moving Averages RSI = (100 - (100 / (1 + MA_G / MA_L))).alias("RSI") df = df.with_columns(R, G, L, MA_G, MA_L, RSI) df.head() I like the ability to compose different steps using polars, because it keeps my code readable and easy to maintain (as opposed to method chaining). Note that ultimately calculations are more complex. However, now I want to calculate the above column but grouped by "company". I tried adding .over("company") where relevant. However, this doesn't work. # Calculate Returns R = pl.col("close").pct_change().over("company") # Calculate Gains and Losses G = pl.when(R > 0).then(R).otherwise(0).alias("gain") L = pl.when(R < 0).then(R).otherwise(0).alias("loss") # Calculate Moving Averages for Gains and Losses window = 3 MA_G = G.rolling_mean(window).alias("MA_gain").over("company") MA_L = L.rolling_mean(window).alias("MA_loss").over("company") # Calculate Relative Strength Index based on Moving Averages RSI = (100 - (100 / (1 + MA_G / MA_L))).over("company").alias("RSI") df = df.with_columns(R, G, L, MA_G, MA_L, RSI) df.head() Questions 1.) What is the best way to fix this "window expression not allowed in aggregation" error while keeping the above code approach? 2.) Related question: why is window expression not allowed in aggregation in the first place. What is the problem with this from a technical perspective? Can someone explain to me in laymans terms? Thanks!
over() applies to the whole expression, so your code actually works if you remove over on MA_G / MA_L columns: # Calculate Returns R = pl.col("close").pct_change().over("company") # Calculate Gains and Losses G = pl.when(R > 0).then(R).otherwise(0).alias("gain") L = pl.when(R < 0).then(R).otherwise(0).alias("loss") # Calculate Moving Averages for Gains and Losses window = 3 MA_G = G.rolling_mean(window).alias("MA_gain") MA_L = L.rolling_mean(window).alias("MA_loss") # Calculate Relative Strength Index based on Moving Averages RSI = (100 - (100 / (1 + MA_G / MA_L))).over("company").alias("RSI") df = df.with_columns(R, G, L, MA_G, MA_L, RSI) df.head() ────────────┬───────────┬─────────┬──────────┬───────────┬──────────┬───────────┬────────────┐ β”‚ date ┆ close ┆ company ┆ gain ┆ loss ┆ MA_gain ┆ MA_loss ┆ RSI β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ f64 ┆ str ┆ f64 ┆ f64 ┆ f64 ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════════β•ͺ═════════β•ͺ══════════β•ͺ═══════════β•ͺ══════════β•ͺ═══════════β•ͺ════════════║ β”‚ 2021-01-01 ┆ null ┆ A ┆ 0.0 ┆ 0.0 ┆ null ┆ null ┆ null β”‚ β”‚ 2021-01-02 ┆ -0.055046 ┆ A ┆ 0.0 ┆ -0.055046 ┆ null ┆ null ┆ null β”‚ β”‚ 2021-01-03 ┆ 0.019417 ┆ A ┆ 0.019417 ┆ 0.0 ┆ 0.006472 ┆ -0.018349 ┆ -54.5 β”‚ β”‚ 2021-01-04 ┆ -0.038095 ┆ A ┆ 0.0 ┆ -0.038095 ┆ 0.006472 ┆ -0.031047 ┆ -26.338197 β”‚ β”‚ 2021-01-05 ┆ 0.059406 ┆ A ┆ 0.059406 ┆ 0.0 ┆ 0.026274 ┆ -0.012698 ┆ 193.535335 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
1
77,968,009
2024-2-9
https://stackoverflow.com/questions/77968009/how-to-add-spine-arrows-and-offset-the-spine
I can do either one of these separately, but not together. I think the question comes down to: after offsetting spines in a Matplotlib figure, how does one find the spine bounds in a coordinate system that can be used to plot arrowheads on the ends of the spines? The arrows are (obviously) not aligned with the spines. The code for the arrowheads is from here https://matplotlib.org/stable/gallery/spines/centered_spines_with_arrows.html My simplified code is: import matplotlib.pyplot as plt import numpy as np spine_offset = 5 fig, ax = plt.subplots() a = np.linspace(0, 6, 100) ax.plot(a, np.sin(a) + 1) ax.spines.top.set_visible(False) ax.spines.right.set_visible(False) ax.spines.left.set_position(('outward', spine_offset)) ax.spines.bottom.set_position(('outward', spine_offset)) ax.plot(1, 0, ">k", transform=ax.transAxes, clip_on=False) ax.plot(0, 1, "^k", transform=ax.transAxes, clip_on=False)
In set_position(('outward', spine_offset)) the offset is measured in points (1/72 of an inch, similar to how fonts are measured, e.g. a 12 point font). using a distance in "axes coordinates" Instead of 'outward', you can also use 'axes' coordinates: 0 at the bottom (left), 1 at the top (right). As usually the plot isn't square, to get a visual equal distance, you can convert the x-offset to the y-offset taking the aspect ratio of the bounding box into account. import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() x_offset = 0.02 y_offset = x_offset / ax.get_window_extent().height * ax.get_window_extent().width a = np.linspace(0, 2 * np.pi, 1000) ax.plot(np.sin(5 * a), np.sin(6 * a)) ax.set_facecolor('lightsalmon') # set a color to visualize the offset ax.spines.top.set_visible(False) ax.spines.right.set_visible(False) ax.spines.left.set_position(('axes', -x_offset)) ax.spines.bottom.set_position(('axes', -y_offset)) ax.plot(1, -y_offset, ">k", transform=ax.transAxes, clip_on=False) ax.plot(-x_offset, 1, "^k", transform=ax.transAxes, clip_on=False) plt.show() Converting "points" to "axes coordinates" Alternatively, you can convert from points to pixels, and from pixels back to axes coordinates. There are 72 points in an inch, and the figure's dpi tells how many pixels there are in an inchh, import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() spine_offset_points = 5 spine_offset_pixels = fig.dpi * spine_offset_points / 72.0 x_offset, y_offset = ax.transAxes.inverted().transform((spine_offset_pixels, spine_offset_pixels)) \ - ax.transAxes.inverted().transform((0, 0)) a = np.linspace(0, 2 * np.pi, 1000) ax.plot(np.sin(3 * a), np.sin(8 * a)) ax.set_facecolor('violet') ax.spines.top.set_visible(False) ax.spines.right.set_visible(False) ax.spines.left.set_position(('outward', spine_offset_points)) ax.spines.bottom.set_position(('outward', spine_offset_points)) ax.plot(1, -y_offset, ">k", transform=ax.transAxes, clip_on=False) ax.plot(-x_offset, 1, "^k", transform=ax.transAxes, clip_on=False) plt.show()
2
3
77,967,334
2024-2-9
https://stackoverflow.com/questions/77967334/getting-min-max-column-name-in-polars
In polars I can get the horizontal max (maximum value of a set of columns for reach row) like this: df = pl.DataFrame( { "a": [1, 8, 3], "b": [4, 5, None], } ) df.with_columns(max = pl.max_horizontal("a", "b")) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ max β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════║ β”‚ 1 ┆ 4 ┆ 4 β”‚ β”‚ 8 ┆ 5 ┆ 8 β”‚ β”‚ 3 ┆ null ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ This corresponds to Pandas df[["a", "b"]].max(axis=1). Now, how do I get the column names instead of the actual max value? In other words, what is the Polars version of Pandas' df[CHANGE_COLS].idxmax(axis=1)? The expected output would be: β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ max β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════║ β”‚ 1 ┆ 4 ┆ b β”‚ β”‚ 8 ┆ 5 ┆ a β”‚ β”‚ 3 ┆ null ┆ a β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
You can concatenate the elements into a list using pl.concat_list, get the index of the largest element using pl.Expr.list.arg_max, and replace the index with the column name using pl.Expr.replace. mapping = {0: "a", 1: "b"} ( df .with_columns( pl.concat_list(["a", "b"]).list.arg_max().replace(mapping).alias("max_col") ) ) This can all be wrapped into a function to also handle the creation of the mapping dict. def max_col(cols) -> str: mapping = dict(enumerate(cols)) return pl.concat_list(cols).list.arg_max().replace(mapping) df.with_columns(max_col(["a", "b"]).alias("max_col")) Output. shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b ┆ max_col β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ═════════║ β”‚ 1 ┆ 4 ┆ b β”‚ β”‚ 8 ┆ 5 ┆ a β”‚ β”‚ 3 ┆ null ┆ a β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
6
3
77,967,238
2024-2-9
https://stackoverflow.com/questions/77967238/syntax-warning-for-escaped-period-in-triple-quoted-shell-rule
I get the following warning: SyntaxWarning: invalid escape sequence '\.' when the following shell block is present in a rule: """ curl {params.ua} -L {params.url} \ | gzip -dc \ | sd '^.*gene_id "([A-Z0-9\.]+).*"; transcript_id "([A-Z0-9\.]+)".*;.*' '$2\t$1' \ | uniq | gzip > {output} """ The warning gives me a line number in another rule, commenting out the other rule doesn't resolve the warning (and the other rule doesn't have a '.'). Commenting out the rule with the above shell block does resolve the warning. The shell code seems to work as intended, so maybe I just need to format it differently such that Snakemake is happy and it still does what I want it to, but I can't figure this out so far.
It's possible to specify that this is a raw string: r""" ... \. ... """
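Applied to the shell block from the question, that would look roughly like this (the rule name and the elided input/output/params are placeholders). One thing to double-check: with a raw string, escape sequences such as \t are passed to the shell literally instead of being expanded by Python, so make sure the downstream tool (sd here) still receives what it expects:

rule make_tx2gene:
    # input/output/params omitted
    shell:
        r"""
        curl {params.ua} -L {params.url} \
            | gzip -dc \
            | sd '^.*gene_id "([A-Z0-9\.]+).*"; transcript_id "([A-Z0-9\.]+)".*;.*' '$2\t$1' \
            | uniq | gzip > {output}
        """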
2
1
77,965,251
2024-2-8
https://stackoverflow.com/questions/77965251/passing-a-numpy-array-to-c-function-through-ctypes-gives-wrong-results-if-the-nu
Consider this simple C source code which computes the mean of an array of int, stores it in a structure and returns an error code: #include <stdio.h> enum errors { NO_ERRORS, EMPTY_ARRAY }; struct Result { double mean; }; enum errors calculateMean(struct Result *result, int *array, int length) { if (length == 0) { return EMPTY_ARRAY; // Return EMPTY_ARRAY if the array is empty } int sum = 0; for (int i = 0; i < length; i++) { sum += array[i]; } result->mean = (double)sum / length; return NO_ERRORS; // Return NO_ERRORS if calculation is successful } The code is compiled to a shared library named libtest_ctypes.so. I have the following ctypes interface: import ctypes import enum import numpy as np import enum lib_path = "./libtest_ctypes.so" lib = ctypes.CDLL(lib_path) # The Result structure class Result(ctypes.Structure): _fields_ = [("mean", ctypes.c_double)] # The Errors enum class Errors(enum.IntEnum): NO_ERRORS = 0 EMPTY_ARRAY = 1 # Defining a signature for `calculateMean` calculate_mean = lib.calculateMean calculate_mean.argtypes = [ctypes.POINTER(Result), ctypes.POINTER(ctypes.c_int), ctypes.c_int] calculate_mean.restype = Errors # Runs `calculateMean` def calculate_mean_interface(array): result = Result() length = len(array) c_array = array.ctypes.data_as(ctypes.POINTER(ctypes.c_int)) error = calculate_mean(ctypes.byref(result), c_array, length) return error, result if __name__ == "__main__": array = np.array([1, 2, 3, 4, 5]) error, result = calculate_mean_interface(array) if error == Errors.EMPTY_ARRAY: print("Error: Empty array!") elif error == Errors.NO_ERRORS: print("Mean:", result.mean) Running the python interface gives a wrong result 1.2. To my undestanding this is due to a difference in types between the numpy array (64 bit integeres) and the C's int on my machine. I can get the right result, 3.0, casting the array to ctype.c_int through numpy's .astype(): def calculate_mean_interface(array): result = Result() length = len(array) #** CAST ARRAY TO CTYPES.C_INT** array = array.astype(ctypes.c_int) c_array = array.ctypes.data_as(ctypes.POINTER(ctypes.c_int)) error = calculate_mean(ctypes.byref(result), c_array, length) return error, result However numpy's casting requires extra memory and time. What is the best way to achieve a correct result without casting? I'd like this to be portable and, if possible, I don't want to specify a dtype when initializing the numpy array.
According to [NumPy]: Scalars - class numpy.int_ (emphasis is mine): Signed integer type, compatible with Python int and C long. So, you'll have to use that in all the places (and be consistent), otherwise you'll get Undefined Behavior. In your case (Little Endian Nix OS (and Python build)), the memory layout is (representation in HEX (2 digits per Byte)): NumPy array - 5 064bit (long) items: (1 + 2 + 3 + 4 + 5) / 5 = 3.0 ╔══════L0══════╗╔══════L1══════╗╔══════L2══════╗╔══════L3══════╗╔══════L4══════╗ 10000000000000002000000000000000300000000000000040000000000000005000000000000000 β•šβ•β•I0β•β•β•β•šβ•β•I1β•β•β•β•šβ•β•I2β•β•β•β•šβ•β•I3β•β•β•β•šβ•β•I4══╝ ... C array (same memory) - 5 032bit (int) items: (1 + 0 + 2 + 0 + 3) / 5 = 1.2 Check [SO]: Maximum and minimum value of C types integers from Python (@CristiFati's answer) for more details on integer types. Here's a working version. dll00.c: #include <stdio.h> #if defined(_WIN32) # define DLL00_EXPORT_API __declspec(dllexport) #else # define DLL00_EXPORT_API #endif typedef enum { SUCCESS = 0, EMPTY_ARRAY, NULL_POINTER, UNSPECIFIED, } Errors; typedef struct { double mean; } Result; #if defined(__cplusplus) extern "C" { #endif DLL00_EXPORT_API Errors calculateMean(Result *result, long *array, int length); #if defined(__cplusplus) } #endif Errors calculateMean(Result *result, long *array, int length) { printf("From C - element size: %zu\n", sizeof(long)); if (length == 0) { return EMPTY_ARRAY; } if ((result == NULL) || (array == NULL)) { return NULL_POINTER; } long sum = 0; for (int i = 0; i < length; ++i) { sum += array[i]; } result->mean = (double)sum / length; return SUCCESS; } code00.py: #!/usr/bin/env python import ctypes as cts import enum import sys import numpy as np class Result(cts.Structure): _fields_ = ( ("mean", cts.c_double), ) class Errors(enum.IntEnum): SUCCESS = 0, EMPTY_ARRAY = 1, NULL_POINTER = 2, UNSPECIFIED = 3, DLL_NAME = "./dll00.{:s}".format("dll" if sys.platform[:3].lower() == "win" else "so") calculate_mean = None # Initialize it in main def calculate_mean_interface(array): result = Result() length = len(array) c_array = array.ctypes.data_as(cts.POINTER(cts.c_long)) error = calculate_mean(cts.byref(result), c_array, length) return error, result def main(*argv): dll = cts.CDLL(DLL_NAME) global calculate_mean calculate_mean = dll.calculateMean calculate_mean.argtypes = (cts.POINTER(Result), cts.POINTER(cts.c_long), cts.c_int) calculate_mean.restype = Errors array = np.array([1, 2, 3, 4, 5]) print(f"NP array type: {array.dtype}") error, result = calculate_mean_interface(array) if error != Errors.SUCCESS: print(f"Error: {error}") else: print(f"Mean: {result.mean}") if __name__ == "__main__": print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform)) rc = main(*sys.argv[1:]) print("\nDone.\n") sys.exit(rc) Output: Nix (Linux): [cfati@cfati-5510-0:/mnt/e/Work/Dev/StackExchange/StackOverflow/q077965251]> . 
~/sopr.sh ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [064bit prompt]> [064bit prompt]> ls code00.py dll00.c [064bit prompt]> [064bit prompt]> gcc -fPIC -shared -o dll00.so dll00.c [064bit prompt]> ls code00.py dll00.c dll00.so [064bit prompt]> [064bit prompt]> /home/cfati/Work/Dev/VEnvs/py_pc064_03.10_test0/bin/python ./code00.py Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] 064bit on linux NP array type: int64 From C - element size: 8 Mean: 3.0 Done. Win: [cfati@CFATI-W10PC064:e:\Work\Dev\StackExchange\StackOverflow\q077965251]> sopr.bat ### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ### [prompt]> [prompt]> "c:\Install\pc032\Microsoft\VisualStudioCommunity\2019\VC\Auxiliary\Build\vcvarsall.bat" x64 > nul [prompt]> dir /b code00.py dll00.c dll00.so [prompt]> cl /nologo /MD /DDLL dll00.c /link /NOLOGO /DLL /OUT:dll00.dll dll00.c Creating library dll00.lib and object dll00.exp [prompt]> dir /b code00.py dll00.c dll00.dll dll00.exp dll00.lib dll00.obj dll00.so [prompt]> [prompt]> "e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts\python.exe" ./code00.py Python 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] 064bit on win32 NP array type: int32 From C - element size: 4 Mean: 3.0 Done. Might also want to check: [SO]: C function called from Python via ctypes returns incorrect value (@CristiFati's answer) [SO]: Sending numpy array (image) to c++ library with ctypes (@CristiFati's answer) [SO]: Boundary problems using a numpy kernel and padding on a matrix within a C functions called from python (@CristiFati's answer) [SO]: How to run a Fortran script with ctypes? (@CristiFati's answer) [SO]: _csv.Error: field larger than field limit (131072) (@CristiFati's answer)
2
1
77,965,769
2024-2-9
https://stackoverflow.com/questions/77965769/remove-top-and-right-spine-in-geoaxessubplot
I'm trying to remove the top and right spine of a plot, and initially tried # Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree()}) ax.coastlines() # Reintroduce spines ax.spines['top'].set_visible(True) ax.spines['right'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show() which gives me this figure, i.e. it clearly didn't work I then tried to remove the frame and add the two spines I want # Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree(), 'frameon': False}) ax.coastlines() # Reintroduce spines ax.spines['left'].set_visible(True) ax.spines['bottom'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show() and this also doesn't quite work - I successfully remove the frame but can't reintroduce the left and bottom spine back I did see this post but when I try to apply this to my code # Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree()}) ax.coastlines() # Reintroduce spines ax.outline_patch.set_visible(False) ax.spines['left'].set_visible(True) ax.spines['bottom'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show() I get the error AttributeError: 'GeoAxes' object has no attribute 'outline_patch' Surely there must be a way to achieve this? Does anyone know how to do this? I'm using python 3.10.
I think the modern equivalent of outline_patch is spines['geo']: # Create a figure and axis with PlateCarree projection fig, ax = plt.subplots(figsize=(11, 6), subplot_kw={'projection': ccrs.PlateCarree()}) ax.coastlines() ax.spines['geo'].set_visible(False) ax.spines['left'].set_visible(True) ax.spines['bottom'].set_visible(True) ax.set_xticks(range(-180, 181, 30), crs=ccrs.PlateCarree()) ax.set_yticks(range(-90, 91, 30), crs=ccrs.PlateCarree()) # Show the plot plt.show()
2
2
77,966,444
2024-2-9
https://stackoverflow.com/questions/77966444/how-to-use-python-pattern-matching-to-match-class-types
How can we use Python's structural pattern matching (introduced in 3.10) to match the type of a variable without invoking the constructor / in a case where a new instantiation of the class is not easily possible? The following code fails: from pydantic import BaseModel # instantiation requires 'name' to be defined class Tree(BaseModel): name: str my_plant = Tree(name='oak') match type(my_plant): case Tree: print('My plant is a tree.') case _: print('error') with error message SyntaxError: name capture 'Tree' makes remaining patterns unreachable An alternative attempt was to re-create an instance during matching (dangerous because of instantiation during matching, but worth a shot...) - it also fails: match type(my_plant): case type(Tree()): print('My plant is a tree.') case _: print('error') TypeError: type() accepts 0 positional sub-patterns (1 given) Checking against an instance of Tree() resolves the SyntaxError, but does not lead to working output, because it always produces "error". I do not want to use the workaround to match against a derived bool (e.g., type(my_plant) == Tree)) because it would limit me to only compare 2 outcomes (True/False) not match against multiple class types.
To expand on what I said in comments: match introduces a value, but case introduces a pattern to match against. It is not an expression that is evaluated. In case the pattern represents a class, the stuff in the parentheses is not passed to a constructor, but is matched against attributes of the match value. Here is an illustrative example: class Tree: def __init__(self, name): self.kind = name def what_is(t): match t: case Tree(kind="oak"): return "oak" case Tree(): return "tree" case _: return "shrug" print(what_is(Tree(name="oak"))) # oak print(what_is(Tree(name="birch"))) # tree print(what_is(17)) # shrug Note here that outside case, Tree(kind="oak") would be an error: TypeError: Tree.__init__() got an unexpected keyword argument 'kind' And, conversely, case Tree(name="oak") would never match, since Tree instances in my example would not normally have an attribute named name. This proves that case does not invoke the constructor, even if it looks like an instantiation. EDIT: About your second error: you wrote case type(Tree()):, and got TypeError: type() accepts 0 positional sub-patterns (1 given). What happened here is this: case type(...) is, again, specifying a pattern. It is not evaluated as an expression. It says the match value needs to be of type type (i.e. be a class), and it has to have attributes in the parentheses. For example, match Tree: case type(__init__=initializer): print("Tree is a type, and this is its initializer:") print(initializer) This would match, and print something like # => Tree is a type, and this is its initializer: # <function Tree.__init__ at 0x1098593a0> The same case would not match an object of type Tree, only the type itself! However, you can only use keyword arguments in this example. Positional arguments are only available if you define __match_args__, like the documentation says. The type type does not define __match_args__, and since it is an immutable built-in type, case type(subpattern, ...) (subpattern! not subexpression!), unlike case type(attribute=subpattern, ...), will always produce an error.
3
4
77,965,457
2024-2-9
https://stackoverflow.com/questions/77965457/taking-specific-value-from-the-csv-file-using-pandas-dataframe
Let's say I want the last value of a specific column from the CSV file. This is my code. When I run this code, it gives me the value "2 Ana". What can I do here to just get the value 'Ana'? import pandas as pd import csv CSV_file = pd.read_csv('appl.csv') name = CSV_file['Name'] v_val = name.iloc[-1:] print(v_val) The CSV file is this: Name,Age,Location Juan,18,Lisbon James,19,London Ana,20,Madrid I want to get the value 'Ana'.
Remove the : from the .iloc: name = CSV_file["Name"].iloc[-1] print(name) Prints: Ana Or use .iat: name = CSV_file["Name"].iat[-1]
2
3
77,965,378
2024-2-8
https://stackoverflow.com/questions/77965378/is-there-a-way-to-delete-several-items-at-once-in-a-dictionary
I'm working with Python 3.11. I have a dictionary whose keys are datetime objects and whose values are strings. dates = {datetime.datetime(2022, 1, 1): "a", datetime.datetime(2022, 5, 4): "b", datetime.datetime(2022, 9, 25): "c", datetime.datetime(2023, 5, 17): "d", datetime.datetime(2023, 12, 15): "e", datetime.datetime(2024, 3, 17): "f", datetime.datetime(2024, 4, 3): "g"} I would like to know if there is a way to remove all the elements that have already passed (other than iterating through all the elements). For example, in this case datetime.datetime.now() = datetime.datetime(2024, 2, 9), so the result dictionary will be: dates = {datetime.datetime(2024, 3, 17): "f", datetime.datetime(2024, 4, 3): "g"} This code works in case the dictionary is ordered by date (which may not be the case): new_dates = {} now = datetime.datetime.now() for k, v in reversed(dates.items()): if k < now: break new_dates[k] = v But I would like to know if there is another way, without a loop, that might work on an unordered dictionary.
You could form a new dictionary using a comprehension; order does not matter:

dates2 = {k: v for k, v in dates.items() if k > now}

which gives

{datetime.datetime(2024, 3, 17, 0, 0): 'f', datetime.datetime(2024, 4, 3, 0, 0): 'g'}
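If you would rather prune the original dict in place instead of building a new one, a possible sketch (not part of the accepted answer, and it still loops internally just as the comprehension does) is to collect the expired keys first and then delete them:

import datetime

now = datetime.datetime.now()
expired = [k for k in dates if k < now]   # materialize first; you can't delete while iterating
for k in expired:
    del dates[k]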
2
3
77,958,391
2024-2-7
https://stackoverflow.com/questions/77958391/max-element-in-specific-structure
I have an array of length n, from which I build a sequence b such that:

b[0] = 0
b[1] = b[0] + a[0]
b[2] = b[1] + a[0]
b[3] = b[2] + a[1]
b[4] = b[3] + a[0]
b[5] = b[4] + a[1]
b[6] = b[5] + a[2]
b[7] = b[6] + a[0]
b[8] = b[7] + a[1]
b[9] = b[8] + a[2]
b[10] = b[9] + a[3]
# etc.

a can contain non-positive values. I need to find the max element of b. I only came up with an O(n^2) solution. Is there a faster approach?

def find_max(a):
    b = [0]
    i = 0
    count = 0
    while i < len(a):
        j = 0
        while j <= i:
            b.append(b[count] + a[j])
            count += 1
            j += 1
        i += 1
    return max(b)
O(n) time and O(1) space.

Consider this (outer loop) round:

b[4] = b[3] + a[0] = b[3] + a[0]
b[5] = b[4] + a[1] = b[3] + a[0] + a[1]
b[6] = b[5] + a[2] = b[3] + a[0] + a[1] + a[2]

You don't need all of these. It's enough to know:

The maximum of them. Which is b[3] + max(prefix sums of a[:3]).
The last of them, b[6] = b[3] + sum(a[:3]). Because you need that for the next round.

In general, to find the maximum of each round, it's enough to know:

The b value which the round starts with.
The max prefix sum of a[:...].

Add them together to know the max b-value in the round. And return the maximum of these rounds' maximums. We can update these values in O(1) time for each round:

def find_max_linear(a):
    b = max_b = 0
    sum_a = max_sum_a = 0
    for x in a:
        sum_a += x
        max_sum_a = max(max_sum_a, sum_a)
        max_b = max(max_b, b + max_sum_a)
        b += sum_a
    return max_b

Testing:

import random

for _ in range(10):
    a = random.choices(range(-100, 101), k=100)
    expect = find_max(a)
    result = find_max_linear(a)
    print(result == expect, expect, result)

Output (Attempt This Online!):

True 8277 8277
True 2285 2285
True 5061 5061
True 19261 19261
True 0 0
True 0 0
True 47045 47045
True 531 531
True 703 703
True 24073 24073

Fun oneliner (also O(n) time, but O(n) space due to the unpacking):

from itertools import accumulate as acc
from operator import add

def find_max_linear(a):
    return max(0, *map(add, acc(acc(a), initial=0), acc(acc(a), max)))

Or broken into a few lines with comments:

def find_max_linear(a):
    return max(0, *map(
        add,
        acc(acc(a), initial=0),   # each round's initial b-value
        acc(acc(a), max)          # each round's max prefix sum of a[:...]
    ))

Attempt This Online!
4
8
77,964,570
2024-2-8
https://stackoverflow.com/questions/77964570/random-random-vs-numpy-random
I read here the following: "The Python stdlib module random contains pseudo-random number generator with a number of methods that are similar to the ones available in Generator." However, the first link for a Python module "random" has a URL that points to NumPy's random.random documentation, not to some general Python library. Is this link wrong, or am I just not getting what the documentation is trying to say here?

I'm quite confused by all the different options to generate random numbers in Python. I count at least four now:

1. NumPy singleton RandomState object
2. NumPy RandomState object
3. NumPy Generator object
4. Python's general random functionality, apparently.

Any insight is much welcome.
Is this link wrong?

Yes. In context, I think they mean this random. There are three pieces of evidence for this.

1. It says it's in the stdlib, and NumPy is not in the stdlib. It is an optional library for Python.
2. It's talking about a module, and np.random.random() is not a module - it's a function inside a module.
3. Later, in the same paragraph, it says:

The Python stdlib module random contains pseudo-random number generator with a number of methods that are similar to the ones available in Generator. It uses Mersenne Twister, and this bit generator can be accessed using MT19937. Generator, besides being NumPy-aware, has the advantage that it provides a much larger number of probability distributions to choose from.

It makes no sense to say that np.random.random() is not NumPy-aware - it's part of NumPy. On the other hand, it does make sense to describe the stdlib random module as not being NumPy-aware. The stdlib random module cannot create a NumPy array of random numbers. It's intended to work even if you don't have NumPy installed.
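To make the practical difference concrete, here is a rough side-by-side sketch (the seed value and sample size are arbitrary choices, not from the docs):

import random
import numpy as np

# stdlib: works without NumPy, produces one Python float at a time
r = random.Random(42)
print(r.random())

# NumPy: the recommended Generator API, vectorized output
rng = np.random.default_rng(42)
print(rng.random(5))   # NumPy array of 5 floats in [0, 1)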
2
5
77,962,295
2024-2-8
https://stackoverflow.com/questions/77962295/issue-with-memoization-in-recursive-function-for-finding-combinations-summing-to
I need to write the following function:

Write a function that takes in a target (int) and a list of ints. The function should return a list of any combination of elements that add up to the target; if there is no combination that adds up to target, return None.

This is my initial solution using recursion:

def how_sum(target: int, nums: list[int]) -> list[int] | None:
    if target == 0:
        return []
    if target < 0:
        return None
    for num in nums:
        remainder = target - num
        rv = how_sum(remainder, nums)
        if rv is not None:
            return rv + [num]
    return None

Then I tried to reduce the time complexity and make my code efficient even for large numbers:

def how_sum(target: int, nums: list[int], memo: dict[int, list[int]] = {}) -> list[int] | None:
    if target in memo:
        return memo[target]
    if target == 0:
        return []
    if target < 0:
        return None
    for num in nums:
        remainder = target - num
        rv = how_sum(remainder, nums, memo)
        if rv is not None:
            memo[target] = rv + [num]  # Note: if I comment this line everything works fine!
            return rv + [num]
    memo[target] = None
    return None

def main():
    print(how_sum(7, [2, 3]))        # [3, 2, 2]
    print(how_sum(7, [5, 3, 4, 7]))  # [3, 2, 2]
    print(how_sum(7, [2, 4]))        # [3, 2, 2]
    print(how_sum(8, [2, 3, 5]))     # [2, 2, 2, 2]
    print(how_sum(500, [7, 14]))     # [3, 7, 7, 7, 7, 7, ..., 7]

main()

As you can see in the comments, it returns the wrong output. These are the correct outputs:

def main():
    print(how_sum(7, [2, 3]))        # [3, 2, 2]
    print(how_sum(7, [5, 3, 4, 7]))  # [4, 3]
    print(how_sum(7, [2, 4]))        # None
    print(how_sum(8, [2, 3, 5]))     # None
    print(how_sum(500, [7, 14]))     # None

When I comment out the line memo[target] = rv + [num], everything works fine, but I can't figure out why it doesn't work if I leave it as it is.
There are two issues I see with this code. First your claim about the correct solution to this input:

print(how_sum(8, [2, 3, 5]))  # None

seems incorrect. Given the explanation, either [3, 5] or [2, 2, 2, 2] are valid answers. Similarly for:

print(how_sum(7, [5, 3, 4, 7]))  # [4, 3]

where [7] is also a valid result.

As far as your code issue, the problem is a common one in that you're using a dangerous default value in a situation where it isn't warranted:

def how_sum(target, nums, memo={})

Since nums is different between top level calls, the memo cache has to be reinitialized at the start of each recursion stack. Otherwise, you have results in it from a previous run where nums was different.

One possible approach:

def how_sum(target: int, nums: list[int], memo: dict[int, list[int]] = None) -> list[int] | None:
    if memo is None:
        memo = {0: []}
    if target not in memo:
        memo[target] = None
        if target > 0:
            for num in nums:
                remainder = target - num
                rv = how_sum(remainder, nums, memo)
                if rv is not None:
                    memo[target] = [*rv, num]
                    break
    return memo[target]
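If you would rather avoid threading a memo dict through the recursion at all, a possible alternative (a sketch, not from the accepted answer) is to build a fresh cached helper on every top-level call, so the cache can never leak between calls with different nums:

from functools import cache

def how_sum(target: int, nums: list[int]) -> list[int] | None:
    nums = tuple(nums)          # keep the captured sequence immutable

    @cache                      # cache is scoped to this call, so no stale entries
    def go(t: int) -> list[int] | None:
        if t == 0:
            return []
        if t < 0:
            return None
        for num in nums:
            rv = go(t - num)
            if rv is not None:
                return [*rv, num]
        return None

    return go(target)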
2
2
77,962,687
2024-2-8
https://stackoverflow.com/questions/77962687/integration-in-python-midpoint-calculation
I came across this integral approximation function in a book. It seems efficient and provides accurate results with fewer subintervals (n).

def approximate_integral(a, b, n, f):
    delta_x = (b - a) / n
    total_sum = 0
    for i in range(1, n + 1):
        midpoint = 0.5 * (2 * a + delta_x * (2 * i - 1))
        total_sum += f(midpoint)
    return total_sum * delta_x

Since the above midpoint calculation isn't obvious, I searched for the mathematical formula and implemented it. However, it approaches the correct answer only when I use far more intervals than the implementation found in the book.

midpoint = a + 2 * dx + i * dx

def approximate_integral(a, b, n, f):
    dx = (b - a) / n
    tot_sum = 0
    for i in range(1, n + 1):
        midpoint = a + (dx / 2) + (i * dx)
        tot_sum += f(midpoint)
    return tot_sum * dx

Please explain the efficiency and the differences between this formula and the one I used:

midpoint = 0.5 * (2 * a + delta_x * (2 * i - 1))
I found that the midpoint in my function accounted for an extra interval. After I reduced the ith iteration by 1, it worked as the formula in the book.

midpoint = a + (dx/2) + ((i-1) * dx)
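For reference, dropping that corrected midpoint back into the function from the question (a sketch; the check value below is approximate):

def approximate_integral(a, b, n, f):
    dx = (b - a) / n
    tot_sum = 0
    for i in range(1, n + 1):
        midpoint = a + (dx / 2) + ((i - 1) * dx)   # i-1 keeps the midpoints inside [a, b]
        tot_sum += f(midpoint)
    return tot_sum * dx

# quick sanity check: the integral of x**2 over [0, 1] is 1/3
print(approximate_integral(0, 1, 100, lambda x: x**2))   # roughly 0.333325

With the shift, the two formulas are algebraically identical: a + dx/2 + (i-1)*dx equals 0.5 * (2*a + dx*(2*i - 1)).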
2
1
77,961,619
2024-2-8
https://stackoverflow.com/questions/77961619/why-does-resnet101-have-less-accuracy-than-resnet50-in-classification-of-sport-c
I trained two different types of ResNet model from torchvision.models: ResNet50 with DEFAULT weights and ResNet101 with DEFAULT weights too, but the results of training are really weird. The train accuracy and test accuracy of the ResNet50 model are 89 and 85 respectively, and for ResNet101 they are 34 and 28! What is wrong?

I froze the entire models and just trained the FC layer, which is modified to have 4 outputs (equal to the number of classes). I used 5 epochs for both. Why is ResNet101 worse than ResNet50? Shouldn't it be better, because it has more layers (depth)?
shouldn't this be better? because has more layers(depth)

More layers means the model is more complex and has more capacity, but that doesn't mean it will perform better. Deeper models can struggle to converge, resulting in both a lower train and validation score compared to a simpler one. I think this is what's happening in your case.

Try training the larger model for more epochs and with a different learning rate. More epochs give the model more time to adapt. I'd start with a smaller learning rate, and see how the model responds to larger rates. Change one thing at a time and observe its effect.

If it converges fine, it can still go too far and overfit, which means it'll score highly on the training set but less well on the validation set compared to a simpler model. This will be exacerbated if the dataset is relatively small and non-diverse.
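As a rough starting point for those experiments (the hyperparameter values below are placeholders to tune, not recommendations from this answer), the fine-tuning setup might look like:

import torch
from torchvision import models

model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                               # freeze the backbone
model.fc = torch.nn.Linear(model.fc.in_features, 4)       # only this layer will train

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # try a few learning rates
# ...then train for more than 5 epochs, tracking train and validation accuracy per epoch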
2
6
77,955,224
2024-2-7
https://stackoverflow.com/questions/77955224/winerror-193-1-is-not-a-valid-win32-application-az-bicep
When I try to run az bicep version I'm getting this error:

The command failed with an unexpected error. Here is the traceback:
[WinError 193] %1 is not a valid Win32 application
Traceback (most recent call last):
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 233, in invoke
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 664, in execute
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 729, in _run_jobs_serially
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 698, in _run_job
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 334, in __call__
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/resource/custom.py", line 4601, in show_bicep_cli_version
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/resource/_bicep.py", line 94, in run_bicep_command
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/resource/_bicep.py", line 267, in _get_bicep_installed_version
  File "D:\a\_work\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/resource/_bicep.py", line 305, in _run_command
  File "subprocess.py", line 548, in run
  File "subprocess.py", line 1026, in __init__
  File "subprocess.py", line 1538, in _execute_child
OSError: [WinError 193] %1 is not a valid Win32 application

I get the same error running all az bicep *** commands. I tried to reinstall the Azure CLI; that did not help. I also deleted Python-related paths from the PATH environment variable, which was a suggestion from some GitHub issue page.
The most likely cause of your error is that the bicep executable installed by the Azure CLI is corrupted. See Issue #2364 on their repository. As described in this comment, you can clean up the %USERPROFILE%\.azure\bin directory and run az bicep install to get it working.
2
5
77,959,301
2024-2-8
https://stackoverflow.com/questions/77959301/how-can-i-move-a-button-before-a-box-that-the-button-uses-or-changes-in-gradio
Example: I have the following Gradio UI:

import gradio as gr

def dummy(a):
    return 'hello', {'hell': 'o'}

with gr.Blocks() as demo:
    txt = gr.Textbox(value="test", label="Query", lines=1)
    answer = gr.Textbox(value="", label="Answer")
    answerjson = gr.JSON()
    btn = gr.Button(value="Submit")
    btn.click(dummy, inputs=[txt], outputs=[answer, answerjson])
    gr.ClearButton([answer, answerjson])

demo.launch()

How can I change the code so that the "Submit" and "Clear" buttons are shown between the answer and JSON boxes, i.e.:

I can't just move the line gr.ClearButton([answer, answerjson]) before answerjson = gr.JSON(), since answerjson needs to be defined in gr.ClearButton([answer, answerjson]).
You can add the components of the clear button after initialization. This way, you are able to decouple the component creation order:

import gradio as gr

def dummy(a):
    return "hello", {"hell": "o"}

with gr.Blocks() as demo:
    txt = gr.Textbox(value="test", label="Query", lines=1)
    answer = gr.Textbox(value="", label="Answer")
    btn = gr.Button(value="Submit")
    clear_btn = gr.ClearButton()
    answerjson = gr.JSON()
    btn.click(dummy, inputs=[txt], outputs=[answer, answerjson])
    clear_btn.add([answer, answerjson])

demo.launch(share=True)
2
1
77,958,666
2024-2-8
https://stackoverflow.com/questions/77958666/how-to-achieve-polars-previous-pivot-functionality-pre-0-20-7
Previous to Polars version 0.20.7, the pivot() method, if given multiple values for the columns argument, would apply the aggregation logic against each column in columns individually based on the index column, rather than against a collective set of columns.

Before:

df = pl.DataFrame(
    {
        "foo": ["one", "one", "two", "two", "one", "two"],
        "bar": ["y", "y", "y", "x", "x", "x"],
        "biz": ['m', 'f', 'm', 'f', 'm', 'f'],
        "baz": [1, 2, 3, 4, 5, 6],
    }
)
df.pivot(index='foo', values='baz', columns=('bar', 'biz'), aggregate_function='sum')

returns:

shape: (2, 5)
┌─────┬─────┬─────┬─────┬─────┐
│ foo ┆ y   ┆ x   ┆ m   ┆ f   │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╪═════╡
│ one ┆ 3   ┆ 5   ┆ 6   ┆ 2   │
│ two ┆ 3   ┆ 10  ┆ 3   ┆ 10  │
└─────┴─────┴─────┴─────┴─────┘

After (in 0.20.7):

shape: (2, 5)
┌─────┬───────────┬───────────┬───────────┬───────────┐
│ foo ┆ {"y","m"} ┆ {"y","f"} ┆ {"x","f"} ┆ {"x","m"} │
│ --- ┆ ---       ┆ ---       ┆ ---       ┆ ---       │
│ str ┆ i64       ┆ i64       ┆ i64       ┆ i64       │
╞═════╪═══════════╪═══════════╪═══════════╪═══════════╡
│ one ┆ 1         ┆ 2         ┆ null      ┆ 5         │
│ two ┆ 3         ┆ null      ┆ 10        ┆ null      │
└─────┴───────────┴───────────┴───────────┴───────────┘

I like the previous functionality much better; it's very awkward to deal with the new pivoted table, especially given its column names. Polars devs put this change under "Bug fixes" but it actually broke my code.
I see a couple of ways you could do it.

First, you can use melt() for your DataFrame first and then pivot:

df.melt(["foo", "baz"])

┌─────┬─────┬──────────┬───────┐
│ foo ┆ baz ┆ variable ┆ value │
│ --- ┆ --- ┆ ---      ┆ ---   │
│ str ┆ i64 ┆ str      ┆ str   │
╞═════╪═════╪══════════╪═══════╡
│ one ┆ 1   ┆ bar      ┆ y     │
│ one ┆ 2   ┆ bar      ┆ y     │
│ two ┆ 3   ┆ bar      ┆ y     │
│ two ┆ 4   ┆ bar      ┆ x     │
│ one ┆ 5   ┆ bar      ┆ x     │
│ …   ┆ …   ┆ …        ┆ …     │
│ one ┆ 2   ┆ biz      ┆ f     │
│ two ┆ 3   ┆ biz      ┆ m     │
│ two ┆ 4   ┆ biz      ┆ f     │
│ one ┆ 5   ┆ biz      ┆ m     │
│ two ┆ 6   ┆ biz      ┆ f     │
└─────┴─────┴──────────┴───────┘

and then pivot the same way:

(
    df
    .melt(["foo", "baz"])
    .pivot(index='foo', values='baz', columns='value', aggregate_function='sum')
)

┌─────┬─────┬─────┬─────┬─────┐
│ foo ┆ y   ┆ x   ┆ m   ┆ f   │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╪═════╡
│ one ┆ 3   ┆ 5   ┆ 6   ┆ 2   │
│ two ┆ 3   ┆ 10  ┆ 3   ┆ 10  │
└─────┴─────┴─────┴─────┴─────┘

Alternatively, if lists of possible values are not intersecting between columns you need to aggregate, you could use concat() with how='align':

pl.concat(
    [df.pivot(index='foo', values='baz', columns=col, aggregate_function="sum") for col in ['bar', 'biz']],
    how='align'
)

┌─────┬─────┬─────┬─────┬─────┐
│ foo ┆ y   ┆ x   ┆ m   ┆ f   │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╪═════╪═════╡
│ one ┆ 3   ┆ 5   ┆ 6   ┆ 2   │
│ two ┆ 3   ┆ 10  ┆ 3   ┆ 10  │
└─────┴─────┴─────┴─────┴─────┘
3
4
77,958,924
2024-2-8
https://stackoverflow.com/questions/77958924/ckeditor-update-notification
Want to remove "This CKEditor 4.22.1 version is not secure. Consider upgrading to the latest one, 4.24.0-lts." from appearing in my Django admin's RichTextUploadingField. Currently using Django CKEditor 6.7.0m all settings are in settings.py only. Configs: CKEDITOR_CONFIGS = { "default": { "skin": "moono", "toolbar": "Custom", "allowedContent": True, "extraAllowedContent": "object\[id,name,width,height\];", "extraPlugins": "iframe", "iframe_attributes": { "sandbox": "allow-scripts allow-same-origin allow-popups allow-presentation allow-forms", "allowfullscreen": "", "loading": "lazy", "referrerpolicy": "no-referrer-when-downgrade", }, "toolbar_Custom": \[ { "name": "document", "items": \[ "Source", "-", "Save", "NewPage", "Preview", "Print", "-", "Templates", \], }, { "name": "clipboard", "items": \[ "Cut", "Copy", "Paste", "PasteText", "PasteFromWord", "-", "Undo", "Redo", \], }, {"name": "editing", "items": \["Find", "Replace", "-", "SelectAll"\]}, { "name": "forms", "items": \[ "Form", "Checkbox", "Radio", "TextField", "Textarea", "Select", "Button", "ImageButton", "HiddenField", \], }, "/", { "name": "basicstyles", "items": \[ "Bold", "Italic", "Underline", "Strike", "Subscript", "Superscript", "-", "RemoveFormat", \], }, { "name": "paragraph", "items": \[ "NumberedList", "BulletedList", "-", "Outdent", "Indent", "-", "Blockquote", "CreateDiv", "-", "JustifyLeft", "JustifyCenter", "JustifyRight", "JustifyBlock", "-", "BidiLtr", "BidiRtl", "Language", \], }, {"name": "links", "items": \["Link", "Unlink", "Anchor"\]}, { "name": "insert", "items": \[ "Image", "Flash", "Table", "HorizontalRule", "Smiley", "SpecialChar", "PageBreak", "Iframe", "Embed", \], }, "/", { "name": "styles", "items": \["Styles", "Format", "Font", "FontSize"\], }, {"name": "colors", "items": \["TextColor", "BGColor"\]}, {"name": "tools", "items": \["Maximize", "ShowBlocks"\]}, { "name": "about", "items": \[ "About", \], }, \], "language": "en", } } Tried to add "ignoreUpdates": True,"updateCheck": False, to config, but no effect. "CKEDITOR_UPDATE_NOTIFICATION = False" gives nothing too.
Apparently the check of the version of the editor is a new configuration option, described here: Class Config (CKEDITOR.config) | CKEditor 4 API docs. It was added in CKEditor version 4.22.0; XWiki upgraded to CKEditor 4.22.1 in this ticket.

From what I tested and from what I understand from the CKEditor configuration, the check itself can be disabled by adding:

config.versionCheck = false;

in the CKEditor configuration in XWiki, in the administration, in the advanced section of the editor configuration: https://extensions.xwiki.org/xwiki/bin/view/Extension/CKEditor%20Integration/#HAdministrationSection. This will disable the warning because it will prevent the editor from checking its own version, but it will not fix the version of the editor in any way.

Hope this helps,
Anca

Source: https://forum.xwiki.org/t/cke-editor-warning-4-22-1-version-not-secure/14020/4
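For the Django setup in the question, the same CKEditor option can be passed through CKEDITOR_CONFIGS (a sketch; verify that the CKEditor build bundled with your django-ckeditor version honours versionCheck):

# settings.py
CKEDITOR_CONFIGS = {
    "default": {
        # ... keep the existing options ...
        "versionCheck": False,   # CKEditor 4.22+ option that disables the upgrade notification
    }
}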
2
4
77,959,276
2024-2-8
https://stackoverflow.com/questions/77959276/python-requests-get-gives-response-in-different-encoding
The following code gives different results each time: sometimes correct human-readable ASCII, but other times some other, non-ASCII encoding format.

HEADERS = ({'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36',
            'Accept-Language': 'en-US,en;q=0.9',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7'})

page = requests.get('https://www.powerball.com/', headers=HEADERS)
print(page.text)
print(page.encoding)

The encoding of the page is always utf-8. What could be the reason for the difference? I tried copying the HTTP headers from the request sent by the browser, but I get the same result.
The page.headers dictionary contains 'Content-Encoding': 'br'. This indicates Brotli compression and is not supported by default from requests (as of version 2.31.0 anyway). Per the requests documentation:

When either the brotli or brotlicffi package is installed, requests also decodes Brotli-encoded responses.

# Note: pip install brotli
import requests
import brotli

HEADERS = ({'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36',
            'Accept-Language': 'en-US,en;q=0.9',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng;q=0.8,application/signed-exchange;v=b3;q=0.7',
            'Accept-Encoding': 'gzip'})

page = requests.get('https://www.powerball.com/', headers=HEADERS)
print(page.text)
2
1
77,957,242
2024-2-7
https://stackoverflow.com/questions/77957242/how-to-arbitrarily-sort-the-radial-plot-values-in-altair
I'm building a radial chart in Python but can't order the values of the plot based on the 'categoria' values. I already tried to sort the df and to force the order through domain and sort in the Altair code, but I can't get the desired result. What am I doing wrong?

Here's a sample of my current dataframe:

index  da_tipo_servicio_salud       categoria                porcentaje          rubro
57     En ambos, público y privado  Menos de $200 pesos      44.44444444444444   Transporte
36     En ambos, público y privado  Menos de $200 pesos      7.142857142857142   Medicamentos
15     En ambos, público y privado  Menos de $200 pesos      3.571428571428571   Citas médicas
48     En ambos, público y privado  Entre $200 y $500 pesos  29.629629629629626  Transporte

and the Altair code I'm using:

orden_monto = ['Menos de $200 pesos', 'Entre $200 y $500 pesos', 'Entre $500 y $800 pesos',
               'Entre $800 y $1,000 pesos', 'Entre $1,000 y $1,500 pesos',
               'Entre $1,500 y $2,000 pesos', 'Más de $2,000 pesos']

base = alt.Chart(df_combinado_orden_ambos).transform_filter(
    alt.datum.rubro == 'Medicamentos'
).encode(
    theta=alt.Theta(field='porcentaje', type='quantitative', stack=True),
    radius=alt.Radius(field='porcentaje', type='quantitative', scale=alt.Scale(type='sqrt', zero=True, rangeMin=100)),
    color=alt.Color('categoria:N', scale=alt.Scale(scheme='blues', domain=orden_monto), sort=orden_monto)
).properties(
    title={
        'text': ['Distribución porcentual por categoría'],
        'subtitle': ['Medicamentos'],
        'anchor': 'start',
        'offset': 20
    }
)

c1 = base.mark_arc(innerRadius=20, cornerRadius=5, stroke="#fff")
c2 = base.mark_text(radiusOffset=25).encode(text=alt.Text('porcentaje:Q', format='.0f'))

final_chart = c1 + c2
final_chart.display()
Try using the order encoding and set order='categoria'.
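Sketching that suggestion onto the base chart from the question (untested against the real data, and the sort direction is a guess):

base = alt.Chart(df_combinado_orden_ambos).transform_filter(
    alt.datum.rubro == 'Medicamentos'
).encode(
    theta=alt.Theta('porcentaje:Q', stack=True),
    radius=alt.Radius('porcentaje:Q', scale=alt.Scale(type='sqrt', zero=True, rangeMin=100)),
    color=alt.Color('categoria:N', scale=alt.Scale(scheme='blues', domain=orden_monto)),
    order=alt.Order('categoria:N', sort='ascending')   # controls the drawing order of the arcs
)

If the arcs need to follow the custom orden_monto sequence rather than alphabetical order, one option is to add a helper column that maps each 'categoria' to its position in orden_monto and encode order on that column instead.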
2
1