question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
78,675,968 | 2024-6-27 | https://stackoverflow.com/questions/78675968/pandas-compact-rows-when-data-is-missing | I have a list of dicts where each dict can have different keys. I want to create a dataframe with one row where each key is a column and the row is its value: import pandas as pd data = [{"A":1}, {"B":2}, {"C":3}] df = pd.DataFrame(data) print(df.to_string(index=False)) # A B C # 1.0 NaN NaN # NaN 2.0 NaN # NaN NaN 3.0 What I want: # A B C # 1.0 2.0 3.0 How can I drop/compact the rows with NaN values? | One option would be to stack: df.stack().droplevel(0).to_frame().T Or using a dummy groupby: import numpy as np df.groupby(np.repeat(0, len(df))).first() Output: A B C 0 1.0 2.0 3.0 | 3 | 2 |
78,674,348 | 2024-6-26 | https://stackoverflow.com/questions/78674348/how-to-merge-2-pandas-dataframes-based-on-criteria | I'm having some issues joining 2 dataframes using pandas.merge(), based on certain conditions. I would appreciate some advice In the example below, I wish to join the 2 dataframes on customerId. However, I'm interested in only the EARLIEST record in loans_df which matches the condition: Customers.EndDate < Loans.Date Loans.amount > 100 This means it's a one-to-one join (or one to zero if no loans exists for a customer). Note: I'm assuming the date convention is yyyymmdd. import pandas as pd Customer_df = pd.DataFrame({ 'CustomerId': [1,2,3], 'End date ': ['20240101', '20220101', '20250101'] }) Loans_df = pd.DataFrame({ 'LoanId': [1,2,3], 'CustomerId': [1,2,2], 'Date': ['20240112', '20230101', '20240101'], 'Amount': [1000,2000,4000]}) Tried pandas.merge() without success | It looks like a merge_asof after pre-filtering Loans_df to only keep the values above 100: # prerequisite: ensure correct datetime # Customer_df['End date'] = pd.to_datetime(Customer_df['End date'], format='%Y%m%d') # Loans_df['Date'] = pd.to_datetime(Loans_df['Date'], format='%Y%m%d') out = pd.merge_asof(Customer_df.sort_values(by='End date'), Loans_df.query('Amount > 100').sort_values(by='Date'), left_on='End date', right_on='Date', by='CustomerId', direction='forward' ) Output: CustomerId End date LoanId Date Amount 0 2 2022-01-01 2.0 2023-01-01 2000.0 1 1 2024-01-01 1.0 2024-01-12 1000.0 2 3 2025-01-01 NaN NaT NaN If you only want the matches, add a dropna: out = (pd.merge_asof(Customer_df.sort_values(by='End date'), Loans_df.query('Amount > 100').sort_values(by='Date'), left_on='End date', right_on='Date', by='CustomerId', direction='forward') .dropna(subset=['Date']) ) Output: CustomerId End date LoanId Date Amount 0 2 2022-01-01 2.0 2023-01-01 2000.0 1 1 2024-01-01 1.0 2024-01-12 1000.0 | 2 | 3 |
78,672,815 | 2024-6-26 | https://stackoverflow.com/questions/78672815/combine-multiple-conditions-for-finding-elements-in-nested-tuple | Python novice here. I have the following nested tuple containing values, sides and context: my_tuple = [(121, 131, 174, 188, 228, 242, 282), ('Left', 'Right', 'Right', 'Left', 'Left', 'Right', 'Right'), ('Foot Strike', 'Foot Off', 'Foot Strike', 'Foot Off', 'Foot Strike', 'Foot Off', 'Foot Strike')] I would like to extract the values which agree with 'Right' AND 'Foot Strike' (i.e. val = 174 & 282). I know I could extract the first indices of all subtuples by using first = [lis[0] for lis in my_tuple] and subsequently select the first vaule of that tuple, but I am failing to correctly set conditional values to go with this selection. Thank your for helping out the newbie! | out = (el[0] for el in zip(*my_tuple) if el[1]=='Right' and el[2]=='Foot Strike') Explanation: First, we use zip to obtain iterable of (value, side, context) tuples. Then we can go through that iterable, filter on our condition and return first element of each tuple. Output: print(list(out)) [174, 282] | 2 | 4 |
78,671,071 | 2024-6-26 | https://stackoverflow.com/questions/78671071/numpy-array-slicing-with-a-comma | There are multiple questions on StackOverflow, asking how the comma syntax works, but most of them refer to m[:,n] which refers to the nth column. Similarly, m[n,:] refers to the nth row. I find this method of slicing used in the labs of Machine Learning Specialization by Andrew Ng. But does this slicing have any advantage over m[n]? | For an array with 2 or more dimensions, m[n] and m[n, :] are identical. The first can be considered shorthand for the second. For an array with 1 dimension, m[n] will return element n, and m[n, :] will result in an error. I personally would choose m[n, :] in some cases to make the code more human-readable: for example, when you know that m is two-dimensional, then m[n, :] immediately implies this to the reader, whereas m[n] might leave them having to guess at whether m is 1D or 2D. | 3 | 4 |
78,671,711 | 2024-6-26 | https://stackoverflow.com/questions/78671711/how-can-i-filter-groups-by-comparing-the-first-value-of-each-group-and-the-last | My DataFrame: import pandas as pd df = pd.DataFrame( { 'group': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'd', 'e', 'e', 'e'], 'num': [1, 2, 3, 1, 12, 12, 13, 2, 4, 2, 5, 6, 10, 20, 30] } ) Expected output is getting three groups from above df group num 0 a 1 1 a 2 2 a 3 group num 6 c 13 7 c 2 8 c 4 group num 12 e 10 13 e 20 14 e 30 Logic: I want to compare the first value of each group to the last cummax of num column. I can explain better by this code: df['last_num'] = df.groupby('group')['num'].tail(1) df['last_num'] = df.last_num.ffill().cummax() But I think what I really need is this desired_cummax: group num last_num desired_cummax 0 a 1 NaN 3 1 a 2 NaN 3 2 a 3 3.0 3 3 b 1 3.0 3 4 b 12 3.0 3 5 b 12 12.0 3 6 c 13 12.0 3 7 c 2 12.0 3 8 c 4 12.0 4 9 d 2 12.0 4 10 d 5 12.0 4 11 d 6 12.0 4 12 e 10 12.0 4 13 e 20 12.0 4 14 e 30 30.0 30 I don't want a new cummax if the first value of num for each group is less than last_num. For example for group b, the first value of num is 1. Since it is less that its last_num, when it reaches the end of the group b it should not put 12. It should still be 3. Now for group c, since its first value is more than last_num, when it reaches at the end of group c, a new cummax will be set. After that I want to filter the groups. If df.num.iloc[0] > df.desired_cummax.iloc[0] Note that the first group should be in the expected output no matter what. Maybe there is a better approach to solve this. But this is what I have thought might work. My attempt was creating last_num but I don't know how to continue. | IIUC, you can aggregate as first/last per group, mask the unwanted values and map back to the group. Finally shift one row up: tmp = df.groupby('group')['num'].agg(['first', 'last']) s = tmp['last'].where(tmp['last'].shift(fill_value=0).le(tmp['first'])).ffill().cummax() df['desired_cummax'] = df['group'].map(s.shift().bfill()).shift(-1).fillna(df['num']) Output: group num desired_cummax 0 a 1 3.0 1 a 2 3.0 2 a 3 3.0 3 b 1 3.0 4 b 12 3.0 5 b 12 3.0 6 c 13 3.0 7 c 2 3.0 8 c 4 4.0 9 d 2 4.0 10 d 5 4.0 11 d 6 4.0 12 e 10 4.0 13 e 20 4.0 14 e 30 30.0 Intermediates: # computation of the mapping Series "s" first last last.shift(fill_value=0) .le(tmp['first']) where .ffill() group a 1 3 0 True 3.0 3.0 b 1 12 3 False NaN 3.0 c 13 4 12 True 4.0 4.0 d 2 6 4 False NaN 4.0 e 10 30 6 True 30.0 30.0 # shifting before mapping s s.shift() .bfill() group a 3.0 NaN 3.0 b 3.0 3.0 3.0 c 4.0 3.0 3.0 d 4.0 4.0 4.0 e 30.0 4.0 4.0 # mapping group map .shift(-1) .fillna(df['num']) 0 a 3.0 3.0 3.0 1 a 3.0 3.0 3.0 2 a 3.0 3.0 3.0 3 b 3.0 3.0 3.0 4 b 3.0 3.0 3.0 5 b 3.0 3.0 3.0 6 c 3.0 3.0 3.0 7 c 3.0 3.0 3.0 8 c 3.0 4.0 4.0 9 d 4.0 4.0 4.0 10 d 4.0 4.0 4.0 11 d 4.0 4.0 4.0 12 e 4.0 4.0 4.0 13 e 4.0 4.0 4.0 14 e 4.0 NaN 30.0 | 2 | 3 |
78,653,631 | 2024-6-21 | https://stackoverflow.com/questions/78653631/polars-selectors-alias-with-when-then-otherwise | Say I have this: import polars as pl import polars.selectors as cs df = pl.select( j = pl.int_range(10, 99).sample(10, with_replacement=True), k = pl.int_range(10, 99).sample(10, with_replacement=True), l = pl.int_range(10, 99).sample(10, with_replacement=True), ) shape: (10, 3) ┌─────┬─────┬─────┐ │ j │ k │ l │ │ --- │ --- │ --- │ │ i64 │ i64 │ i64 │ ╞═════╪═════╪═════╡ │ 71 │ 79 │ 67 │ │ 26 │ 42 │ 55 │ │ 12 │ 43 │ 85 │ │ 92 │ 96 │ 14 │ │ 95 │ 26 │ 62 │ │ 75 │ 14 │ 56 │ │ 61 │ 41 │ 75 │ │ 74 │ 97 │ 70 │ │ 73 │ 32 │ 10 │ │ 66 │ 98 │ 40 │ └─────┴─────┴─────┘ and I want to apply the same when/then/otherwise condition on multiple columns: df.select( pl.when(cs.numeric() < 50) .then(1) .otherwise(2) ) This fails with: DuplicateError: the name 'literal' is duplicate How do I make this use the currently selected column as the alias? I.e. I want the equivalent of this: df.select( pl.when(pl.col(c) < 50) .then(1) .otherwise(2) .alias(c) for c in df.columns ) shape: (10, 3) ┌─────┬─────┬─────┐ │ j │ k │ l │ │ --- │ --- │ --- │ │ i32 │ i32 │ i32 │ ╞═════╪═════╪═════╡ │ 2 │ 2 │ 2 │ │ 1 │ 1 │ 2 │ │ 1 │ 1 │ 2 │ │ 2 │ 2 │ 1 │ │ 2 │ 1 │ 2 │ │ 2 │ 1 │ 2 │ │ 2 │ 1 │ 2 │ │ 2 │ 2 │ 2 │ │ 2 │ 1 │ 1 │ │ 2 │ 2 │ 1 │ └─────┴─────┴─────┘ | You can use .name.keep() df.select( pl.when(cs.numeric() < 50) .then(1) .otherwise(2) .name.keep() ) shape: (10, 3) ┌─────┬─────┬─────┐ │ j │ k │ l │ │ --- │ --- │ --- │ │ i32 │ i32 │ i32 │ ╞═════╪═════╪═════╡ │ 1 │ 1 │ 2 │ │ 2 │ 1 │ 1 │ │ 1 │ 2 │ 1 │ │ 2 │ 2 │ 1 │ │ 2 │ 1 │ 2 │ │ 2 │ 2 │ 2 │ │ 2 │ 2 │ 1 │ │ 2 │ 2 │ 2 │ │ 1 │ 1 │ 2 │ │ 2 │ 2 │ 1 │ └─────┴─────┴─────┘ | 2 | 5 |
78,668,351 | 2024-6-25 | https://stackoverflow.com/questions/78668351/changing-taipy-selector-lov-at-runtime | I am trying to change the List of Values of a Taipy selector at runtime, but I keep failing. My lov is defined as the keys of a dictionary in my application, called shapes. I define my selector like this: tgb.selector(value="{sel_shape}", lov=list(shapes.keys()), dropdown=True, label="Shapes") At startup, the default values are loaded correctly. However, I need to be able to change the selector lov when I manipulate the shapes dictionary at runtime. How can I achieve that? Here's what I tried so far. Update the shapes dictionary Simply updating the dictionary (e.g. adding a new key shapes['new_shape'] = [1, 2, 3]) does not work, as the selector lov does not seem to be a dynamic property. Define the lov in a state variable I redefined the selector as tgb.selector(value="{sel_shape}", lov="{selector_lov}", dropdown=True, label="Shapes") and defined a module global variable selector_lov = shapes.keys(). Then, manipulating the shapes dictionary has no effect on the selector lov. Reload the page Every time I manipulate the dictionary I make sure to reload the local page (navigate(state, to="emitter", force=True)). In my top module I have: def on_navigate(state, page_name, params): if page_name == "emitter": state.selector_lov = state.shapes.keys() Still no change in the selector lov after editing shapes. | Try this syntax: tgb.selector(value="{sel_shape}", lov="{list(shapes.keys())}", dropdown=True, label="Shapes") Here is a full example: from taipy.gui import Gui import taipy.gui.builder as tgb shapes = {"Mail":["example@example", "example2@example"], "Person":["John Doe", "Jane Doe"]} selected_shape = None def add_shape(state): state.shapes["Skill"] = ["Python", "R"] with tgb.Page() as page: tgb.selector("{selected_shape}", lov="{list(shapes.keys())}", dropdown=True) tgb.button("Add a new shape", on_action=add_shape) Gui(page).run() You normally don't have to navigate to the page and reload it. You could have used a LOV variable as you mentioned: from taipy.gui import Gui import taipy.gui.builder as tgb shapes = {"Mail":["example@example", "example2@example"], "Person":["John Doe", "Jane Doe"]} shapes_lov = list(shapes.keys()) selected_shape = None def add_shape(state): state.shapes["Skill"] = ["Python", "R"] state.shapes_lov = list(state.shapes.keys()) with tgb.Page() as page: tgb.selector("{selected_shape}", lov="{shapes_lov}", dropdown=True) tgb.button("Add a new shape", on_action=add_shape) Gui(page).run() Also, note that usually, changes are propagated to the state when having a direct assignment. state.shapes = new_value and not by doing state.shapes["Skill"] = ["Python", "R"]. Here it is working because this kind of assignment are supported by Taipy for dictionary. | 2 | 1 |
78,650,222 | 2024-6-21 | https://stackoverflow.com/questions/78650222/valueerror-numpy-dtype-size-changed-may-indicate-binary-incompatibility-expec | MRE pip install pandas==2.1.1 numpy==2.0.0 Python 3.10 on Google Colab Output Collecting pandas==2.1.1 Downloading pandas-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.3 MB) ββββββββββββββββββββββββββββββββββββββββ 12.3/12.3 MB 44.7 MB/s eta 0:00:00 Collecting numpy==2.0.0 Using cached numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.3 MB) Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas==2.1.1) (2.8.2) Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas==2.1.1) (2023.4) Requirement already satisfied: tzdata>=2022.1 in /usr/local/lib/python3.10/dist-packages (from pandas==2.1.1) (2024.1) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.2->pandas==2.1.1) (1.16.0) Installing collected packages: numpy, pandas Attempting uninstall: numpy Found existing installation: numpy 1.26.4 Uninstalling numpy-1.26.4: Successfully uninstalled numpy-1.26.4 Attempting uninstall: pandas Found existing installation: pandas 2.0.3 Uninstalling pandas-2.0.3: Successfully uninstalled pandas-2.0.3 Successfully installed numpy-2.0.0 pandas-2.1.1 When I do : import pandas I get this error: Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module> ColabKernelApp.launch_instance() File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance app.start() File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start self.io_loop.start() File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever self._run_once() File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once handle._run() File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda> lambda f: self._run_callback(functools.partial(callback, future)) File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback ret = callback() File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner self.ctx_run(self.run) File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run yielded = self.gen.send(value) File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one yield gen.maybe_future(dispatch(*args)) File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in 
execute_request self.do_execute( File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell result = self._run_cell( File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell return runner(coro) File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner coro.send(None) File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes if (await self.run_code(code, result, async_=asy)): File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-1-38d4b0363d82>", line 1, in <cell line: 1> import pandas File "/usr/local/lib/python3.10/dist-packages/pandas/__init__.py", line 23, in <module> from pandas.compat import ( File "/usr/local/lib/python3.10/dist-packages/pandas/compat/__init__.py", line 27, in <module> from pandas.compat.pyarrow import ( File "/usr/local/lib/python3.10/dist-packages/pandas/compat/pyarrow.py", line 8, in <module> import pyarrow as pa File "/usr/local/lib/python3.10/dist-packages/pyarrow/__init__.py", line 65, in <module> import pyarrow.lib as _lib --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) AttributeError: _ARRAY_API not found --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-1-38d4b0363d82> in <cell line: 1>() ----> 1 import pandas 2 frames /usr/local/lib/python3.10/dist-packages/pandas/_libs/__init__.py in <module> 16 import pandas._libs.pandas_parser # noqa: E501 # isort: skip # type: ignore[reportUnusedImport] 17 import pandas._libs.pandas_datetime # noqa: F401,E501 # isort: skip # type: ignore[reportUnusedImport] ---> 18 from pandas._libs.interval import Interval 19 from pandas._libs.tslibs import ( 20 NaT, interval.pyx in init pandas._libs.interval() ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject | I found a solution for now: I need to downgrade numpy to version 1.26.4 pip install numpy==1.26.4 or pip install "numpy<2" Restart session after downgrading numpy Was able to successfully import pandas. Related git : https://github.com/numpy/numpy/issues/26710 | 20 | 48 |
78,646,329 | 2024-6-20 | https://stackoverflow.com/questions/78646329/provide-multiple-languages-via-odoo-rpc-call | I would like to write multiple languages to a ir.ui.view Object's arch_db field. However, if I provide a dict/json value with languages as keys and HTML as values ({"de_DE"=><german-html>, "en_US"=><american-html>}), validation will fail. If I write html with a de_DE context first, and then with en_US context, the latter will overwrite the former for both languages. How can I write different HTML for different languages? Is there e.g. some way to call update_raw via RPC somehow? Example // This example demonstrates how (not) to create a view with translated HTML content. package main import ( "errors" "fmt" "log" "github.com/kolo/xmlrpc" ) func main() { viewArch := TranslatedHTML{ LangDE: `<p>Deutscher Text</p>`, LangEN: `<p>English text</p>`, } cl, err := NewClient( "http://localhost:3017", "odoo_17", "admin", "admin", LangDE, ) panicOnErr(err) reply, err := cl.CreateView(viewArch) panicOnErr(err) fmt.Println(reply) } func panicOnErr(err error) { if err != nil { panic(err) } } func wrapErr(err error, msg string) error { if err != nil { return fmt.Errorf("%s: %w", msg, err) } return nil } type Lang string const LangDE = Lang("de_DE") const LangEN = Lang("en_US") type Client struct { *xmlrpc.Client ContextLang Lang uid int // stores user id after login // Needed per call: OdooDB string Username string Password string } func NewClient(url, odooDB, username, password string, contextLang Lang) (*Client, error) { loginClient, err := xmlrpc.NewClient(fmt.Sprintf("%s/xmlrpc/2/common", url), nil) if err != nil { return nil, wrapErr(err, "failed to create login client") } var uid int err = loginClient.Call("authenticate", []any{ odooDB, username, password, map[string]any{}, }, &uid) if err != nil { return nil, wrapErr(err, "failed to authenticate") } client, err := xmlrpc.NewClient(fmt.Sprintf("%s/xmlrpc/2/object", url), nil) if err != nil { return nil, wrapErr(err, "failed to create object client") } return &Client{client, contextLang, uid, odooDB, username, password}, nil } func (c *Client) WithContextLang(contextLang Lang) *Client { return &Client{c.Client, contextLang, c.uid, c.OdooDB, c.Username, c.Password} } type TranslatedHTML map[Lang]string func (th TranslatedHTML) Langs() []Lang { langs := make([]Lang, 0, len(th)) for lang := range th { langs = append(langs, lang) } return langs } func (cl *Client) ExecuteKW(model, method string, args, reply any) error { return cl.Call( "execute_kw", []any{cl.OdooDB, cl.uid, cl.Password, model, method, args, map[string]any{"context": map[string]string{"lang": string(cl.ContextLang)}}}, reply, ) } func (cl *Client) CreateView(arch TranslatedHTML) (any, error) { langs := arch.Langs() if (len(langs)) == 0 { return nil, errors.New("no translations provided") } firstLang := langs[0] restLangs := langs[1:] var reply any err := cl.WithContextLang(firstLang).ExecuteKW("ir.ui.view", "create", []any{map[string]string{"arch_db": arch[firstLang], "type": "qweb"}}, &reply) if err != nil { return reply, err } log.Printf("created view with ID %d, Lang %s, %s", reply.(int64), firstLang, arch[firstLang]) viewID := reply.(int64) for _, lang := range restLangs { var reply any err := cl.WithContextLang(lang).ExecuteKW("ir.ui.view", "write", []any{viewID, map[string]any{"arch_db": arch[lang]}}, &reply) if err != nil { return reply, err } log.Printf("updated view with Lang %s, %v, %s", lang, reply, arch[lang]) } return nil, nil } | Found solution: One has to use xml_translate to extract translatable terms from arch_db[context lang] and then use it multiple times to extract translations from all arch_db[other lang]. (I built a Flask JSON API to interact with Odoo code not reachable via RPC from Go.) Next, one can create view via RPC (with same context lang) and then provide extracted translations to update_field_translations_sha. | 3 | 0 |
78,640,035 | 2024-6-19 | https://stackoverflow.com/questions/78640035/are-none-and-typenone-really-equivalent-for-type-analysis | According to the PEP 484's "Using None" part: When used in a type hint, the expression None is considered equivalent to type(None). However, I encountered a case where both don't seem equivalent : from typing import Callable, NamedTuple, Type, Union # I define a set of available return types: ReturnType = Union[ int, None, ] # I use this Union type to define other types, like this callable type. SomeCallableType = Callable[..., ReturnType] # But I also want to store some functions metadata (including the function's return type) in a `NamedTuple`: class FuncInfos(NamedTuple): return_type: Type[ReturnType] # This works fine: fi_1 = FuncInfos(return_type=int) # But this issues an error: # main.py:21: error: Argument "return_type" to "FuncInfos" has incompatible type "None"; expected "type[int] | type[None]" [arg-type] # Found 1 error in 1 file (checked 1 source file) fi_2 = FuncInfos(return_type=None) # But this works fine: fi_3 = FuncInfos(return_type=type(None)) It doesn't pose me much problem to write type(None) rather than simply None, but I would've liked to understand the above error issued that seems to contradict the quote from PEP 484. Snippet available for execution here. EDIT: It actually seems to boil down to the following: from typing import Type a: Type[None] # This seems to cause an issue: # main.py:4: error: Incompatible types in assignment (expression has type "None", variable has type "type[None]") [assignment] # Found 1 error in 1 file (checked 1 source file) a = None # This seems to work: a = type(None) Snippet available for execution here. | As showed in the post, the PEP 484 stated something ambiguous: When used in a type hint, the expression None is considered equivalent to type(None). But I found out about the PEP 483 that uses a much clearer wording in its Pragmatics: Where a type is expected, None can be substituted for type(None); e.g. Union[t1, None] == Union[t1, type(None)]. With that in mind, different examples now start to make sense. For instance, my second MRE: from typing import Type a: Type[None] # This seems to cause an issue: # main.py:4: error: Incompatible types in assignment (expression has type "None", variable has type "type[None]") [assignment] # Found 1 error in 1 file (checked 1 source file) a = None # This seems to work: a = type(None) now can be interpreted as: from typing import Type a: Type[type(None)] # Type[...] expects a type. a = None a = type(None) Now, a's type is explicitly "subtype of type(None)", which precisely doesn't match the type of the expression None, hence the error message. However, it's indeed the type of the second expression: type(None) ("Every type is a subtype of itself." according to the PEP 483 itself). So, as suggested by some comments, I basically misinterpreted the PEP 484 and forgot that the "equivalence" took effect only "in a type hint" (or "where a type is expected"). Which wasn't the case when passing parameters or making assignments in my snippets. | 4 | 0 |
78,640,132 | 2024-6-19 | https://stackoverflow.com/questions/78640132/how-to-make-scipy-newton-krylov-use-a-different-derivative-approximation-method | After reading the documentation seems like it uses forward difference as its approximation method, but I can't see any direct way to make it use other method or a custom one. Using the tools in the documentation I tried this to make it use a custom method and did this implementation to test if the results were the same: import numpy as np from scipy.optimize import newton_krylov from scipy.sparse.linalg import LinearOperator # Function def uniform_problem(x, A, b): return b - A@x size = 12 A = np.random.uniform(-1, 1, size=(size, size)) b = np.random.uniform(-1, 1, size=(size, )) xr = np.random.uniform(-1, 1, size=(size, ))# root x0 = np.random.uniform(-1, 1, size=(size, ))# initial guess F = lambda x: uniform_problem(x, A, b) - uniform_problem(xr, A, b) #Arbitrary parameters max_iter = 10 tol = 1e-3 h = 1e-4 repeats = 5000 # Using own implementation of Forward Difference def get_jacobian_vector_product_fdf(F, x, v, h=1e-5): step = h * v return (F(x + step) - F(x)) / h error1 = 0 for i in range(repeats): x = x0.copy() lambdaJv = lambda v: get_jacobian_vector_product_fdf(F, x, v, h) linear_operator = LinearOperator((size, size), matvec=lambdaJv) solution1 = newton_krylov(F, x, method="gmres", inner_maxiter=max_iter, iter=max_iter, callback=None, f_tol=tol, rdiff=h, inner_M=linear_operator) error1 += np.linalg.norm(F(solution1)) error1 /= repeats print(error1) # aprox 1.659173186802721 # Using no custom method error2 = 0 for i in range(repeats): x = x0.copy() solution2 = newton_krylov(F, x, method="gmres", inner_maxiter=max_iter, iter=max_iter, callback=None, f_tol=tol, rdiff=h) error2 += np.linalg.norm(F(solution2)) error2 /= repeats print(error2) # aprox 0.024629534404425796 print(error1/error2) # Orders of magnitude of difference I expected to get the same results, but they are clearly different. I think I'm having trouble understanding what the tools of the documentation do. | I misunderstood, the 'inner_M' parameter does not do what I thought it did, but I found a solution by creating a custom scipy Jacobian class following the same implementation scipy does. import numpy as np import scipy.sparse.linalg from scipy.linalg import norm import scipy.optimize._nonlin from scipy.optimize._nonlin import Jacobian, _nonlin_wrapper # Create your jacobian class using the scipy Jacobian interface. class KrylovJacobianCustom(Jacobian): #... code of custom jacobian ... # (In my case, a variant of the Krylov one) # Make your class visible to the scipy scope. scipy.optimize._nonlin.KrylovJacobianCustom = KrylovJacobianCustom # Define your function using the scipy non linear wrapper. newton_krylov_custom = _nonlin_wrapper('newton_krylov_custom', scipy.optimize._nonlin.KrylovJacobianCustom) This way, this function behaves the same as newton_krylov() or any other non linear solver, but will instead use your implementation of the Jacobian class in the computation. Example: def fun(x): return [x[0] + 0.5 * x[1] - 1.0, 0.5 * (x[1] - x[0]) ** 2] sol = newton_krylov_custom(fun, [0, 0]) >> array([0.66731771, 0.66536458]) | 4 | 0 |
78,650,025 | 2024-6-21 | https://stackoverflow.com/questions/78650025/partial-chained-callbacks-using-plotly-dash | I'm trying to filter data using multiple dropdown bars within a plotly dashboard. There are 5 dropdown options in total. I want the first 3 to operate indepently, while the last two should be chained, both ways, to the first 3. Specifically, the features that I'm aiming to implement are: A default of all values should always be the initial starting point The first 3 options (Year, Season and Month) should act independently. As in, any combination of these 3 can be added to the output. If one item is selected, the output should be updated with those values. However, if an item is selected from another dropdown, those values should be added to the output. Example below in i). Option 4-5 (temp and prec) should be chained, both ways, to the first three dropdown options (Year, Season and Month). This should be reversible or both ways too. If one of the first 3 dropdown options is selected, the table output should be updated with those values and the dropdown lists should be reduced to only allow the user to pick from those values. Example below in ii). To provide concrete examples; i) 2012 is selected from Year in the first dropdown option. The table output displays the relevant values. The user should be able to select any subsequent values in the Year dropdown list (functional). However, if the user wants to also see Spr values from the second dropdown option, that data should be added to the output. ii) For the 4-5 dropdown options which should be chained to first 3, if Hot and Mild are selected in temp and Wet is selected in prec, then the dropdown lists in the first three options should be reduced to: Year = 2013, 2015; Season = Spr, Fall; Month = Apr, Jun, Oct, Dec. 
import pandas as pd from dash import Dash, dcc, html, Input, Output, dash_table import dash_bootstrap_components as dbc from itertools import cycle import random Year = cycle(['2012','2013','2014','2015']) Season = cycle(['Win','Spr','Sum','Fall']) Month = cycle(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']) temp_group = cycle(['Hot','Cold','Mild']) prec_group = cycle(['Dry','Wet']) df = pd.DataFrame(index = range(20)) df['option1'] = [next(Year) for count in range(df.shape[0])] df['option2'] = [next(Season) for count in range(df.shape[0])] df['option3'] = [next(Month) for count in range(df.shape[0])] df['option4'] = [next(temp_group) for count in range(df.shape[0])] df['option5'] = [next(prec_group) for count in range(df.shape[0])] option1_list = sorted(df['option1'].unique().tolist()) option2_list = df['option2'].unique().tolist() option3_list = df['option3'].unique().tolist() option4_list = sorted(df['option4'].unique().tolist()) option5_list = sorted(df['option5'].unique().tolist()) app = Dash(__name__) app.layout = html.Div([ dbc.Card( dbc.CardBody([ dbc.Row([ dbc.Col([ html.P("Option 1"), html.Div([ dcc.Dropdown(id='option1_dropdown', options=option1_list, value=[], placeholder='All', multi=True, clearable=True), ], style={'width': '100%', 'display': 'inline-block'}) ]), dbc.Col([ html.P("Option 2"), html.Div([ dcc.Dropdown(id='option2_dropdown', options=option2_list, value=[], placeholder='All', multi=True, clearable=True), ], style={'width': '100%', 'display': 'inline-block'}) ]), dbc.Col([ html.P("Option 3"), html.Div([ dcc.Dropdown(id='option3_dropdown', options=option3_list, value=[], placeholder='All', multi=True, clearable=True), ], style={'width': '100%', 'display': 'inline-block'}) ]), dbc.Col([ html.P("Option 4"), html.Div([ dcc.Dropdown(id='option4_dropdown', options=option4_list, value=[], placeholder='All', multi=True, clearable=True), ], style={'width': '100%', 'display': 'inline-block'}) ]), dbc.Col([ html.P("Option 5"), html.Div([ dcc.Dropdown(id='option5_dropdown', options=option5_list, value=[], placeholder='All', multi=True, clearable=True), ], style={'width': '100%', 'display': 'inline-block'}) ]), ], align='center'), ]), color='dark' ), dbc.Card( dbc.CardBody([ dbc.Row([ html.Div([ html.Div(id='dd-output-container') ]) ], align='center'), ]), color='dark' ), dbc.Card( dbc.CardBody([ dbc.Row([ html.Div([ dash_table.DataTable( id='table_container', data=df.to_dict('records') ) ]) ], align='center'), ]), color='dark' ) ]) @app.callback( Output('table_container', 'data'), [Input('option1_dropdown', 'value'), Input('option2_dropdown', 'value'), Input('option3_dropdown', 'value'), Input('option4_dropdown', 'value'), Input('option5_dropdown', 'value') ]) def set_dropdown_options(value1, value2, value3, value4, value5): if not value1 or value1 == 'All': value1 = option1_list if not value2 or value2 == 'All': value2 = option2_list if not value3 or value3 == 'All': value3 = option3_list if not value4 or value4 == 'All': value4 = option4_list if not value5 or value5 == 'All': value5 = option5_list ddf = df.query('option1 == @value1 and ' 'option2 == @value2 and ' 'option3 == @value3 and ' 'option4 == @value4 and ' 'option5 == @value5', engine='python') return ddf.to_dict('records') # ====== Using this as a way to view the selections @app.callback( Output('dd-output-container', 'children'), [Input('option1_dropdown', 'value'), Input('option2_dropdown', 'value'), Input('option3_dropdown', 'value'), Input('option4_dropdown', 'value'), 
Input('option5_dropdown', 'value') ]) def selection(value1, value2, value3, value4, value5): # If value lists are empty or equal to the default of 'All', use the initial df values if not value1 or value1 == 'All': value1 = option1_list if not value2 or value2 == 'All': value2 = option2_list if not value3 or value3 == 'All': value3 = option3_list if not value4 or value4 == 'All': value4 = option4_list if not value5 or value5 == 'All': value5 = option5_list ddf = df.query('option1 == @value1 and ' 'option2 == @value2 and ' 'option3 == @value3 and ' 'option4 == @value4 and ' 'option5 == @value5', engine='python') return if __name__ == '__main__': app.run_server(debug=True, dev_tools_hot_reload = False) Edit 2: Is there a way to include the original column names without converting to using an integer suffix? Year = cycle(["2012", "2013", "2014", "2015"]) Season = cycle(["Win", "Spr", "Sum", "Fall"]) Month = cycle( ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] ) temp_group = cycle(["Hot", "Cold", "Mild"]) prec_group = cycle(["Dry", "Wet"]) df = pd.DataFrame(index=range(20)) df["Year"] = [next(Year) for count in range(df.shape[0])] df["Season"] = [next(Season) for count in range(df.shape[0])] df["Month"] = [next(Month) for count in range(df.shape[0])] df["Temp"] = [next(temp_group) for count in range(df.shape[0])] df["Prec"] = [next(prec_group) for count in range(df.shape[0])] Year_list = sorted(df["Year"].unique().tolist()) Season_list = df["Season"].unique().tolist() Month_list = df["Month"].unique().tolist() Temp_list = sorted(df["Temp"].unique().tolist()) Prec_list = sorted(df["Prec"].unique().tolist()) df = df.rename(columns = {'Year':'option1', 'Season':'option2', 'Month':'option3', 'Temp':'option4', 'Prec':'option5'}) app = Dash(__name__) app.layout = html.Div( [ dbc.Card( dbc.CardBody( [ dbc.Row( [ dbc.Col( [ html.P("Year"), html.Div( [ dcc.Dropdown( id="Year_dropdown", options=Year_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Season"), html.Div( [ dcc.Dropdown( id="Season_dropdown", options=Season_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Month"), html.Div( [ dcc.Dropdown( id="Month_dropdown", options=Month_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Temp"), html.Div( [ dcc.Dropdown( id="Temp_dropdown", options=Temp_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Prec"), html.Div( [ dcc.Dropdown( id="Prec_dropdown", options=Prec_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), ], align="center", ), ] ), color="dark", ), dbc.Card( dbc.CardBody( [ dbc.Row( [html.Div([html.Div(id="dd-output-container")])], align="center" ), ] ), color="dark", ), dbc.Card( dbc.CardBody( [ dbc.Row( [ html.Div( [ dash_table.DataTable( id="table_container", data=df.to_dict("records") ) ] ) ], align="center", ), ] ), color="dark", ), ] ) df = df.rename(columns = {'Year':'option1', 'Season':'option2', 'Month':'option3', 'Temp':'option4', 'Prec':'option5'}) def construct_query(filter_values): additive_clauses = list() subtractive_clauses = list() for i, filter_value in 
enumerate(filter_values): if filter_value and filter_value != "All": clause = f"option{i + 1} == @value{i + 1}" if i <= 3: additive_clauses.append(clause) else: subtractive_clauses.append(clause) if len(additive_clauses) > 0 or len(subtractive_clauses) > 0: additive_section = " or ".join(additive_clauses) subtractive_clauses = " and ".join(subtractive_clauses) if additive_section and subtractive_clauses: query = f"({additive_section}) and {subtractive_clauses}" else: query = additive_section or subtractive_clauses return query @app.callback( [ Output("Year_dropdown", "options"), Output("Season_dropdown", "options"), Output("Month_dropdown", "options"), ], [ Input("Temp_dropdown", "value"), Input("Prec_dropdown", "value"), ], ) def update_additive_options(value4, value5): query = None option4_query = "option4 == @value4" option5_query = "option5 == @value5" if value4 and value4 != "All" and value5 and value5 != "All": query = f"{option4_query} and {option5_query}" elif value4 and value4 != "All": query = option4_query elif value5 and value5 != "All": query = option5_query if query: df_filtered = df.query( query, engine="python", ) else: df_filtered = df return ( sorted(df_filtered["option1"].unique().tolist()), df_filtered["option2"].unique().tolist(), df_filtered["option3"].unique().tolist(), ) @app.callback( [Output("Temp_dropdown", "options"), Output("Prec_dropdown", "options")], [ Input("Year_dropdown", "options"), Input("Season_dropdown", "options"), Input("Month_dropdown", "options"), ], ) def update_subtractive_options(value1, value2, value3): query = None additive_clauses = [] for i, filter_value in enumerate([value1, value2, value3]): if filter_value and filter_value != "All": clause = f"option{i + 1} == @value{i + 1}" additive_clauses.append(clause) if len(additive_clauses) > 0: query = " or ".join(additive_clauses) if query: df_filtered = df.query( query, engine="python", ) else: df_filtered = df return ( sorted(df_filtered["option4"].unique().tolist()), sorted(df_filtered["option5"].unique().tolist()), ) @app.callback( Output("table_container", "data"), [ Input("Year_dropdown", "value"), Input("Season_dropdown", "value"), Input("Month_dropdown", "value"), Input("Temp_dropdown", "value"), Input("Prec_dropdown", "value"), ], ) def update_table(value1, value2, value3, value4, value5): query = construct_query(filter_values=[value1, value2, value3, value4, value5]) if query: df_filtered = df.query( query, engine="python", ) else: df_filtered = df return df_filtered.to_dict("records") # ====== Using this as a way to view the selections @app.callback( Output("dd-output-container", "children"), [ Input("Year_dropdown", "value"), Input("Season_dropdown", "value"), Input("Month_dropdown", "value"), Input("Temp_dropdown", "value"), Input("Prec_dropdown", "value"), ], ) def selection(value1, value2, value3, value4, value5): # If value lists are empty or equal to the default of 'All', use the initial df values if not value1 or value1 == "All": value1 = Year_list if not value2 or value2 == "All": value2 = Season_list if not value3 or value3 == "All": value3 = Month_list if not value4 or value4 == "All": value4 = Temp_list if not value5 or value5 == "All": value5 = Prec_list ddf = df.query( "option1 == @value1 and " "option2 == @value2 and " "option3 == @value3 and " "option4 == @value4 and " "option5 == @value5", engine="python", ) return if __name__ == "__main__": app.run_server(debug=True, dev_tools_hot_reload=False) | That can be accomplished with plotly dash. 
The trick is to use or operators between the first three filters to make them additive and and operators between the last two to make them subtractive. The first three options need to be resolved as one block before involving the last two options. Example query structure: (option1 == @value1 or option2 == @value2 or option3 == @value3) and option4 == @value4 and option5 == @value5 I chose to build the query programmatically based on which filters had values to make the or logic work correctly. See the construct_query() function in the example below. def construct_query(filter_values): additive_clauses = list() subtractive_clauses = list() for i, filter_value in enumerate(filter_values): if filter_value and filter_value != "All": clause = f"option{i + 1} == @value{i + 1}" if i <= 3: additive_clauses.append(clause) else: subtractive_clauses.append(clause) if len(additive_clauses) > 0 or len(subtractive_clauses) > 0: additive_section = " or ".join(additive_clauses) subtractive_clauses = " and ".join(subtractive_clauses) if additive_section and subtractive_clauses: query = f"({additive_section}) and {subtractive_clauses}" else: query = additive_section or subtractive_clauses return query Another challenge is to avoid creating circular callbacks that have the same input and output components. One way to accomplish this is to break large callbacks into multiple separate callbacks so that the inputs and outputs aren't circular. In the example below, I separated the updating of the first three dropdowns into update_additive_options() and the last two dropdowns into update_subtractive_options(). Plotly also describes another way to manage circular callbacks in their advanced-callbacks docs with the context functionality. Example i: Example ii: Here is the full version of my code: import pandas as pd from dash import Dash, dcc, html, Input, Output, dash_table import dash_bootstrap_components as dbc from itertools import cycle import random Year = cycle(["2012", "2013", "2014", "2015"]) Season = cycle(["Win", "Spr", "Sum", "Fall"]) Month = cycle( ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"] ) temp_group = cycle(["Hot", "Cold", "Mild"]) prec_group = cycle(["Dry", "Wet"]) df = pd.DataFrame(index=range(20)) df["option1"] = [next(Year) for count in range(df.shape[0])] df["option2"] = [next(Season) for count in range(df.shape[0])] df["option3"] = [next(Month) for count in range(df.shape[0])] df["option4"] = [next(temp_group) for count in range(df.shape[0])] df["option5"] = [next(prec_group) for count in range(df.shape[0])] option1_list = sorted(df["option1"].unique().tolist()) option2_list = df["option2"].unique().tolist() option3_list = df["option3"].unique().tolist() option4_list = sorted(df["option4"].unique().tolist()) option5_list = sorted(df["option5"].unique().tolist()) app = Dash(__name__) app.layout = html.Div( [ dbc.Card( dbc.CardBody( [ dbc.Row( [ dbc.Col( [ html.P("Option 1"), html.Div( [ dcc.Dropdown( id="option1_dropdown", options=option1_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Option 2"), html.Div( [ dcc.Dropdown( id="option2_dropdown", options=option2_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Option 3"), html.Div( [ dcc.Dropdown( id="option3_dropdown", options=option3_list, value=[], placeholder="All", multi=True, clearable=True, ), 
], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Option 4"), html.Div( [ dcc.Dropdown( id="option4_dropdown", options=option4_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), dbc.Col( [ html.P("Option 5"), html.Div( [ dcc.Dropdown( id="option5_dropdown", options=option5_list, value=[], placeholder="All", multi=True, clearable=True, ), ], style={ "width": "100%", "display": "inline-block", }, ), ] ), ], align="center", ), ] ), color="dark", ), dbc.Card( dbc.CardBody( [ dbc.Row( [html.Div([html.Div(id="dd-output-container")])], align="center" ), ] ), color="dark", ), dbc.Card( dbc.CardBody( [ dbc.Row( [ html.Div( [ dash_table.DataTable( id="table_container", data=df.to_dict("records") ) ] ) ], align="center", ), ] ), color="dark", ), ] ) def construct_query(filter_values): additive_clauses = list() subtractive_clauses = list() for i, filter_value in enumerate(filter_values): if filter_value and filter_value != "All": clause = f"option{i + 1} == @value{i + 1}" if i <= 3: additive_clauses.append(clause) else: subtractive_clauses.append(clause) if len(additive_clauses) > 0 or len(subtractive_clauses) > 0: additive_section = " or ".join(additive_clauses) subtractive_clauses = " and ".join(subtractive_clauses) if additive_section and subtractive_clauses: query = f"({additive_section}) and {subtractive_clauses}" else: query = additive_section or subtractive_clauses return query @app.callback( [ Output("option1_dropdown", "options"), Output("option2_dropdown", "options"), Output("option3_dropdown", "options"), ], [ Input("option4_dropdown", "value"), Input("option5_dropdown", "value"), ], ) def update_additive_options(value4, value5): query = None option4_query = "option4 == @value4" option5_query = "option5 == @value5" if value4 and value4 != "All" and value5 and value5 != "All": query = f"{option4_query} and {option5_query}" elif value4 and value4 != "All": query = option4_query elif value5 and value5 != "All": query = option5_query if query: df_filtered = df.query( query, engine="python", ) else: df_filtered = df return ( sorted(df_filtered["option1"].unique().tolist()), df_filtered["option2"].unique().tolist(), df_filtered["option3"].unique().tolist(), ) @app.callback( [Output("option4_dropdown", "options"), Output("option5_dropdown", "options")], [ Input("option1_dropdown", "value"), Input("option2_dropdown", "value"), Input("option3_dropdown", "value"), ], ) def update_subtractive_options(value1, value2, value3): query = None additive_clauses = [] for i, filter_value in enumerate([value1, value2, value3]): if filter_value and filter_value != "All": clause = f"option{i + 1} == @value{i + 1}" additive_clauses.append(clause) if len(additive_clauses) > 0: query = " or ".join(additive_clauses) if query: df_filtered = df.query( query, engine="python", ) else: df_filtered = df return ( sorted(df_filtered["option4"].unique().tolist()), sorted(df_filtered["option5"].unique().tolist()), ) @app.callback( Output("table_container", "data"), [ Input("option1_dropdown", "value"), Input("option2_dropdown", "value"), Input("option3_dropdown", "value"), Input("option4_dropdown", "value"), Input("option5_dropdown", "value"), ], ) def update_table(value1, value2, value3, value4, value5): query = construct_query(filter_values=[value1, value2, value3, value4, value5]) if query: df_filtered = df.query( query, engine="python", ) else: df_filtered = df return df_filtered.to_dict("records") # ====== 
Using this as a way to view the selections @app.callback( Output("dd-output-container", "children"), [ Input("option1_dropdown", "value"), Input("option2_dropdown", "value"), Input("option3_dropdown", "value"), Input("option4_dropdown", "value"), Input("option5_dropdown", "value"), ], ) def selection(value1, value2, value3, value4, value5): # If value lists are empty or equal to the default of 'All', use the initial df values if not value1 or value1 == "All": value1 = option1_list if not value2 or value2 == "All": value2 = option2_list if not value3 or value3 == "All": value3 = option3_list if not value4 or value4 == "All": value4 = option4_list if not value5 or value5 == "All": value5 = option5_list ddf = df.query( "option1 == @value1 and " "option2 == @value2 and " "option3 == @value3 and " "option4 == @value4 and " "option5 == @value5", engine="python", ) return if __name__ == "__main__": app.run_server(debug=True, dev_tools_hot_reload=False) | 2 | 2 |
78,669,632 | 2024-6-25 | https://stackoverflow.com/questions/78669632/one-liner-split-and-map-within-list-comprehension | I have this bit for parsing some output from stdout: out_lines = res.stdout.split("\n") out_lines = [e.split() for e in out_lines] out_vals = [{"date":e[0], "time":e[1], "size":e[2], "name":e[3]} for e in out_lines if e] Is there an idiomatic way to merge the second and third lines here so that the splitting and mapping happen within the same line, without redundant calls to e.split()? | @trincot's answer works but can avoid a post-processing filter by mapping the lines to str.split so that the filtering can be done with the if clause of the comprehension instead: out_vals = [ dict(zip(("date", "time", "size", "name"), e)) for e in map(str.split, res.stdout.split("\n")) if e ] | 3 | 2 |
78,652,843 | 2024-6-21 | https://stackoverflow.com/questions/78652843/why-does-csv-reader-with-textiowrapper-include-new-line-characters | I have two functions, one downloads individual csv files and the other downloads a zip with multiple csv files. The download_and_process_csv function works correctly with response.iter_lines() which seems to delete new line characters. 'Chicken, water, cornmeal, salt, dextrose, sugar, sodium phosphate, sodium erythorbate, sodium nitrite. Produced in a facility where allergens are present such as eggs, milk, soy, wheat, mustard, gluten, oats, dairy.' The download_and_process_zip function seems to include new line characters for some reason (\n\n). I've tried newline='' in io.TextIOWrapper however it just replaces it with \r\n. 'Chicken, water, cornmeal, salt, dextrose, sugar, sodium phosphate, sodium erythorbate, sodium nitrite. \n\nProduced in a facility where allergens are present such as eggs, milk, soy, wheat, mustard, gluten, oats, dairy.' Is there a way to modify download_and_process_zip so that new line characters are excluded/replaced or do I have to iterate over all the rows and manually replace the characters? @request_exceptions def download_and_process_csv(client, url, model_class): with closing(client.get(url, stream=True)) as response: response.raise_for_status() response.encoding = 'utf-8' reader = csv.reader(response.iter_lines(decode_unicode=True)) process_copy_from_csv(model_class, reader) @request_exceptions def download_and_process_zip(client, url): with closing(client.get(url, stream=True)) as response: response.raise_for_status() with io.BytesIO(response.content) as buffer: with zipfile.ZipFile(buffer, 'r') as z: for filename in z.namelist(): base_filename, file_extension = os.path.splitext(filename) model_class = apps.get_model(base_filename) if file_extension == '.csv': with z.open(filename) as csv_file: reader = csv.reader(io.TextIOWrapper( csv_file, encoding='utf-8', # newline='', )) process_copy_from_csv(model_class, reader) | I've played around with a mock server which serves this CSV file: "foo bar" The CSV has a single field, "foo\nbar", in a single row. I call a newline in the data an embedded newline. When I use the iter_content method on the Response object: print("Getting CSV") resp = requests.get("http://localhost:8999/csv") x = resp.iter_content(decode_unicode=True) reader = csv.reader(x) for row in reader: print(row) I get the correct output, a single row prints out with a single field of data: Getting CSV ['foo\nbar'] If I change iter_content to iter_lines, I get the wrong output: Getting CSV ['foobar'] I suspect, based on the name, that iter_lines looks for any newline-like character sequence and stops there, before handing the line to the csv reader (without the newline), and so the embedded newline is effectively removed. I cannot speak for your result where the newline appeared to be replaced with a space... there's no replacement going on, just effectively deleting. This popular SO, Use python requests to download CSV, asks the general question about downloading a CSV with the requests module, but every answer seems tailored to the fact that the CSV in question doesn't contain embedded newlines, and so there are a lot of answers with iter_lines. I don't know when iter_content() was added to requests, but no answer makes mention of it. | 2 | 2 |
78,669,613 | 2024-6-25 | https://stackoverflow.com/questions/78669613/how-to-check-if-some-number-can-be-retrieved-as-the-result-of-the-summation-or-d | I have an arbitrary list of positive integers and some number X. I want to check if it is possible to retrieve X using basic operations such as summation and difference. Any number from the list can be used only once. There might be duplicates. We can use any amount of numbers from the list to get X, i.e. we can use just one element, two elements also can be used and it is possible to use all elements from the list. Only True/False answer is enough. E. g.: input_list=[1, 7, 3] X = 4 result: TRUE (e.g.: 7-3) input_list=[1, 7, 3] X = 50 result: FALSE I have attempted to utilize the approach from this question: Find all combinations of a list of numbers with a given sum, specifically this part: [seq for i in range(len(numbers), 0, -1) for seq in itertools.combinations(numbers, i) if sum(seq) == target] My idea was to concat the initial list with the list of the opposite integers: new_list = input_list+list(map(lambda x: -x, input_list)) Then I'm checking if the list comprehension operation described above returns non empty list. But this takes too much time, what I do not like is that there is some sort of duplication in this approach, itertools.combinations may take 1 and it's opposite -1 twice, but I have no idea how to fix that. What is the most effecient way of solving such problem? | For each number in the list, you have to make a decision whether: you add the number you subtract the number you drop the number Here is a solution with breadth-first search of the decision tree: def isRepresentable(input_list, num): reachable = { 0 } for n in input_list: reachable = { y for x in reachable for y in [x, x + n, x - n] } if num in reachable: return True return False print(isRepresentable([1, 7, 3], 4)) # True print(isRepresentable([1, 7, 3], 50)) # False print(isRepresentable([1, 7, 3], 5)) # True The BFS finds the shorter solutions, but DFS should be fine as well for true / false answers. If one also wants to see how the number can be constructed, one has to save the path that lead to that number: def findRepresentation(input_list, num): reachable = { 0: [] } for n in input_list: next_reachable = dict(reachable) for x in reachable: for y in [x, x + n, x - n]: if y not in next_reachable: next_reachable[y] = [*(reachable[x]), y - x] if num == y: return next_reachable[y] reachable = next_reachable return None def explain(input_list, num): print(f'Using {input_list}') print(f'{num} = {findRepresentation(input_list, num)}') explain([1, 7, 3], 4) # 4 = [1, 3] explain([1, 7, 3], 50) # None explain([1, 7, 3], 5) # 5 = [1, 7, -3] hundred_primes = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541 ] explain(hundred_primes, 2024) Not that anybody asked, but 2024 can be represented as the sum of the first 33 prime numbers if one skips 103: 2024 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 107, 109, 113, 127, 131, 137, 139] | 2 | 3 |
78,667,408 | 2024-6-25 | https://stackoverflow.com/questions/78667408/how-to-get-as-fast-as-possible-a-specific-sequence-of-numbers-all-numbers-twice | Knowing a final number (for example 5), I would like to create an array containing the following sequence: [0,1,1,2,2,3,3,4,4,5] Meaning that the list should contain all numbers repeated twice, except for the first and last. Here is the code I use to achieve this : import numpy as np # final number last = 35 # first sequence sa = np.arange(0,last,1) # second sequence (shifted by 1 unit) sb = np.arange (1,last+1,1) # concatenation and flattening sequence = np.stack((sa, sb), axis=1).ravel() # view the result print(sequence) Do you think there would be a more direct and/or effective way to achieve the same result? | What about using arange on 2*N, add 1 and take the floor division by 2? N = 5 out = (np.arange(2*N)+1)//2 # or variant suggested by @TlsChris # out = (np.arange(2*N)+1)>>1 Alternatively, with repeat and excluding the first/last: out = np.repeat(np.arange(N+1), 2)[1:-1] Or with broadcasting: out = (np.arange(N)[:, None]+[0, 1]).ravel() Output: array([0, 1, 1, 2, 2, 3, 3, 4, 4, 5]) timings Comparison of the different answers Relative timings, around 10,000 items, the original answer seems to be the most efficient, otherwise np.repeat(np.arange(N+1), 2)[1:-1] is the fastest: | 3 | 5 |
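For completeness, a small benchmark sketch (a hypothetical harness, not taken from the answer above) to reproduce the comparison between the proposed variants and the original stack/ravel approach on your own machine:

import numpy as np
from timeit import timeit

N = 10_000
candidates = {
    "(arange+1)//2": lambda: (np.arange(2 * N) + 1) // 2,
    "repeat[1:-1]":  lambda: np.repeat(np.arange(N + 1), 2)[1:-1],
    "broadcast":     lambda: (np.arange(N)[:, None] + [0, 1]).ravel(),
    "stack/ravel":   lambda: np.stack((np.arange(0, N, 1), np.arange(1, N + 1, 1)), axis=1).ravel(),
}
for name, fn in candidates.items():
    print(f"{name:15s} {timeit(fn, number=2000):.3f} s")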
78,668,507 | 2024-6-25 | https://stackoverflow.com/questions/78668507/python-regex-split-string-with-multiple-delimeters | I know this question has been answered but my use case is slightly different. I am trying to setup a regex pattern to split a few strings into a list. Input Strings: 1. "ABC-QWERT01" 2. "ABC-QWERT01DV" 3. "ABCQWER01" Criteria of the string ABC - QWERT 01 DV 1 2 3 4 5 The string will always start with three chars The dash is optional there will then be 3-10 chars Left padded 0-99 digits the suffix is 2 chars and is optional Expected Output 1. ['ABC','-','QWERT','01'] 1. ['ABC','-','QWERT','01', 'DV'] 1. ['ABC','QWER','01','DV'] I have tried the following patterns a bunch of different ways but I am missing something. My thought was start at the beginning of the string, split after the first three chars or the dash, then split on the occurrence of two decimals. Pattern 1: r"([ -?, \d{2}])+" This works but doesn't break up the string by the first three chars if the dash is missing Pattern 2: r"([^[a-z]{3}, -?, \d{2}])+" This fails as a non-pattern match, nothing gets split Pattern 3: r"([^[a-z]{3}|-?, \d{2}])+" This fails as a non-pattern match, nothing gets split Any tips or suggestions? | You can use a pattern similar to : (?i)([A-Z]{3})(-?)([A-Z]*)([0-9]{2})([A-Z]*) Code: import re def _parts(s): p = r'(?i)([A-Z]{3})(-?)([A-Z]*)([0-9]{2})([A-Z]*)' return re.findall(p, s) print(_parts('ABC-QWERT01DV')) print(_parts('ABCQWER01')) print(_parts('ABC-QWERT01')) Prints [('ABC', '-', 'QWERT', '01', 'DV')] [('ABC', '', 'QWER', '01', '')] [('ABC', '-', 'QWERT', '01', '')] Notes: (?i): insensitive flag. ([A-Z]{3}): capture group 1 with any 3 letters. (-?): capture group 2 with an optional dash. ([A-Z]*): capture group 3 with 0 or more letters. ([0-9]{2}): capture group 4 with 2 digits. ([A-Z]*): capture group 5 with 0 or more letters. | 3 | 4 |
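If the empty strings produced by the optional groups are unwanted (to match the expected output lists more closely), one possible variation, assuming each input string is a single code, is to use re.fullmatch and drop the empty groups:

import re

def _parts(s):
    p = r'(?i)([A-Z]{3})(-?)([A-Z]*)([0-9]{2})([A-Z]*)'
    m = re.fullmatch(p, s)
    # keep only the non-empty pieces so the optional dash/suffix vanish when absent
    return [g for g in m.groups() if g] if m else None

print(_parts('ABC-QWERT01'))    # ['ABC', '-', 'QWERT', '01']
print(_parts('ABC-QWERT01DV'))  # ['ABC', '-', 'QWERT', '01', 'DV']
print(_parts('ABCQWER01'))      # ['ABC', 'QWER', '01']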
78,666,883 | 2024-6-25 | https://stackoverflow.com/questions/78666883/pip-install-quickfix-failed-in-windows | I'm trying to install the quikcfix library on my windows machine. The python version is 3.12.2. However I get the below error. python setup.py bdist_wheel did not run successfully exit code: 1 [7 lines of output] Testing for std::tr1::shared_ptr... ...not found Testing for std::shared_ptr... ...not found Testing for std::unique_ptr... ...not found error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools I installed the Microsoft C++ build tools as well. But it still gives the same error. Can anyone suggest a solution for this error | Download the quickfix from here: https://github.com/kazcfz/QuickFIX-prebuilt-wheel What I see from the link, the latest version of python which supports quickfix is 3.9 It will not work for python 3.12 | 5 | 3 |
78,666,961 | 2024-6-25 | https://stackoverflow.com/questions/78666961/meta-feature-analysis-split-data-for-computation-on-available-memory | I am working with the meta-feature extractor package: pymfe for complexity analysis. On a small dataset, this is not a problem, for example. pip install -U pymfe from sklearn.datasets import make_classification from sklearn.datasets import load_iris from pymfe.mfe import MFE data = load_iris() X= data.data y = data.target extractor = MFE(features=[ "t1"], groups=["complexity"], summary=["min", "max", "mean", "sd"]) extractor.fit(X,y) extractor.extract() (['t1'], [0.12]) My dataset is large (32690, 80) and this computation gets killed for exessive memory usage. I work on Ubuntu 24.04 having 32GB RAM. To reproduce scenario: # Generate the dataset X, y = make_classification(n_samples=20_000,n_features=80, n_informative=60, n_classes=5, random_state=42) extractor = MFE(features=[ "t1"], groups=["complexity"], summary=["min", "max", "mean", "sd"]) extractor.fit(X,y) extractor.extract() Killed Question: How do I split this task to compute on small partitions of the dataset, and combine final results (averaging)? | Managed to find a workaround. # helper functions def split_dataset(X, y, n_splits): # data splits split_X = np.array_split(X, n_splits) split_y = np.array_split(y, n_splits) return split_X, split_y def compute_meta_features(X, y): # meta-features for a partition extractor = MFE(features=["t1"], groups=["complexity"], summary=["min", "max", "mean", "sd"]) extractor.fit(X, y) return extractor.extract() def average_results(results): # summary of results features = results[0][0] summary_values = np.mean([result[1] for result in results], axis=0) return features, summary_values # Split dataset n_splits = 10 # ten splits split_X, split_y = split_dataset(X, y, n_splits) # meta-features results = [compute_meta_features(X_part, y_part) for X_part, y_part in zip(split_X, split_y)] # Combined results final_features, final_summary = average_results(results) | 2 | 0 |
78,664,686 | 2024-6-24 | https://stackoverflow.com/questions/78664686/itertools-islice-iterate-over-input-even-when-stop-is-smaller-than-start | For a custom container wrapping an iterator, I intended to delegate some of my logic to itertools.islice. One consideration was to avoid unnecessary iterations over the wrapped iterator. When calling itertools.islice(iterable, start, stop, step) with stop<=start, the result is an empty generator, as expected. But even though it's not absolutely needed, itertools.islice(iterable, start, stop, step) will always iterate at least start number of times over the iterable. Test case to repro: from unittest.mock import Mock import itertools iterableMock = Mock() iterableMock.__iter__ = Mock(return_value=iterableMock) iterableMock.__next__ = Mock(side_effect=range(10)) iterable = iterableMock start = 5 stop = 0 step = None isliceResult = list(itertools.islice(iterable, start, stop, step)) assert isliceResult == [] assert iterableMock.__next__.call_count == 0 # <= FAILS since call_count == 5 Is this behavior: Expected / by design. We want to skip the elements until start is reached no matter what. Just a side effect of the current implementation that only impacts the performance of some corner cases. A potential improvement that can be addressed without concerns. Documentation can be a bit misleading/misinterpreted on the expected behavior. elements from the iterable are skipped until start is reached it stops at the specified position The suggested "equivalent implementation" and the source code clearly iterate until start-1. stop parameter is only considered subsequently. | Consuming elements from the underlying iterator is part of the islice contract, and cannot be optimized out. There's even a recipe in the recipes section of the official itertools docs that relies on empty islices consuming elements: def consume(iterator, n=None): "Advance the iterator n-steps ahead. If n is None, consume entirely." # Use functions that consume iterators at C speed. if n is None: collections.deque(iterator, maxlen=0) else: next(islice(iterator, n, n), None) However, the contract arose more by historical accident than actual design. Raymond Hettinger, the person who designed the function and wrote the original docs, did not intend islice to make any promises about the final state of the underlying iterator - not even the promise that the consume recipe relies on. Quoting one of Raymond's posts in a conversation about a similar edge case with step: Currently, there are no promises or guarantees about the final state of the iterator. And their next post in that conversation: I wrote the tools, the docs, and the tests. If you interpret a "promise" in text, I can assure you it was not intended. The behavior is undefined because I never defined it. I'm happy to clarify the docs to make that explicit. As a result of that conversation, the implementation was adjusted to change the behavior in that edge case, and element consumption details were considered defined from then on. Years later, someone else brought up the start>stop case. Raymond's response includes the following line: Having islice() always consume at least "start" number of values seems reasonable enough and it is the documented behavior: "If start is non-zero, then elements from the iterable are skipped until start is reached. ... If stop is None, then iteration continues until the iterator is exhausted, if at all; otherwise, it stops at the specified position." 
indicating that they considered consuming start items in the start>stop case to be part of the documented contract. | 3 | 4 |
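Given that contract, a wrapper container that wants to avoid touching the underlying iterator for provably empty slices can simply short-circuit before delegating to islice; a minimal sketch (the helper name is made up for illustration):

from itertools import islice

def non_consuming_islice(iterable, start, stop, step=None):
    # A provably empty slice (stop <= start) is returned without consuming
    # anything from the underlying iterator, unlike plain islice.
    if start is not None and stop is not None and stop <= start:
        return iter(())
    return islice(iterable, start, stop, step)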
78,641,844 | 2024-6-19 | https://stackoverflow.com/questions/78641844/thread-safeness-and-slow-pyplot-hist | The hist function of matplotlib.pyplot runs very slow which seemingly has to do with the structure I have chosen. I built a front panel in Tkinter which starts a control loop for a camera. To keep the control loop responsive I created an ImageProcessor class which collects, processes and plots the images in cv2. The ImageProcessor object is running in its own thread. This works up to the point where I try to plot the histogram of the image. Since Tkinter is not thread safe I use Agg as a backend and plot the drawn canvas of the pyplot.figure with cv2. Calculating the histogram of the image using pyplot.hist takes more than 20 seconds. Calculating the histogram on its own it takes only 0.5 seconds. How does this manifest? Does Matplotlib have to be run from the main thread or is it sufficient if there is only a single thread interacting with it (as in my case)? Or is there another misunderstanding in my code? import threading import time import numpy as np import matplotlib import matplotlib.pyplot as plt from timeit import default_timer as timer from datetime import timedelta import queue class ImageProcessor(threading.Thread): def __init__(self): matplotlib.use('Agg') threading.Thread.__init__(self) # initialize plot for histograms self.hist_fig = plt.figure() self.loop = True self.continuous_acquisition_var = False self.a = None def run(self): while self.loop: self.a = np.random.uniform(low=0, high=16384, size=12320768).reshape((4096, 3008)) self.hist_fig.clf() # clear histogram plot start = timer() plt.hist(self.a.flatten(), bins=256, range=(0.0, 16384), fc='r', ec='r') end = timer() print(timedelta(seconds=end - start)) def stop(self): self.loop = False def ctl_loop(command): ctl_loop_var = True img_proc = ImageProcessor() img_proc.daemon = True img_proc.start() while ctl_loop_var: # main loop while not command.empty(): q_element = command.get() task = q_element[0] data = q_element[1] func = getattr(img_proc, task) func(data) if task == "stop": ctl_loop_var = False if __name__ == '__main__': cmd_queue = queue.Queue() ctl = threading.Thread(target=ctl_loop, args=(cmd_queue, )) ctl.daemon = True ctl.start() time.sleep(40) cmd_queue.put(('stop', '')) | The solution is straightforward and has nothing to do with plt.hist. Simply add the line time.sleep(0.01) in your main loop. The reason is that threading is not the same as multiprocessing. All threads share the same process (CPU), meaning only one thread can run at a time. In your case, the main thread (the while ctl_loop_var loop) checks as quickly as possible if ctl_loop_var is still True, preventing the other thread from doing anything. Therefore, ensure you are not creating unnecessary CPU load. This applies to multiprocessing as well, though the impact may be less noticeable. def ctl_loop(command): ctl_loop_var = True img_proc = ImageProcessor() img_proc.daemon = True img_proc.start() while ctl_loop_var: # main loop while not command.empty(): q_element = command.get() task = q_element[0] data = q_element[1] if task == "stop": img_proc.stop() ctl_loop_var = False else: func = getattr(img_proc, task) func(data) time.sleep(.01) # give the other thread time to process Furthermore, the code also fixes two bugs in the original code: ImageProcessor.stop takes only one argument ImageProcessor.stop wasn't stopping the thread properly when the main thread is overloaded. 
I also observed that your implementation with plt.hist(..., bins=256, range=(0.0, 16384)) is about 7 times faster than plt.hist(..., bins=list_of_bins)! Don't change it ;). | 3 | 1 |
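If the per-frame histogram cost itself ever becomes the bottleneck, a further option (a sketch, not part of the accepted fix, and assuming Matplotlib 3.4+ for plt.stairs) is to bin the data once with np.histogram, which the question already measured at roughly 0.5 seconds, and then draw only the 256 precomputed counts:

import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

a = np.random.uniform(low=0, high=16384, size=12320768)

counts, edges = np.histogram(a, bins=256, range=(0.0, 16384))  # fast binning in NumPy
fig = plt.figure()
plt.stairs(counts, edges, fill=True, color='r')  # draw the precomputed bins, not 12M samples
fig.canvas.draw()                                # render on the Agg canvas, as in ImageProcessor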
78,665,686 | 2024-6-25 | https://stackoverflow.com/questions/78665686/calculate-distance-to-the-nearest-object | I need to make a map of distances to the nearest object. I have a solution where i am looping over every point of a map, and every object, calculating the distance to all of them, and then leaving only minimum distance. The problem here is that if I am woking with real data, the map can easily contain 10s of millions of points, and there can be more than 100 objects. Is there any better code implementation for solving this problem? Loading packages import pandas as pd import numpy as np import matplotlib.pyplot as plt Generate synthetic map coord_dict = {"X": [], "Y": []} for x_value in range(0, 10000, 50): for y_value in range(0, 5000, 50): coord_dict["X"].append(x_value) coord_dict["Y"].append(y_value) map_df = pd.DataFrame(coord_dict) Generate points to calculate distance from well_points_dict = {"X": [500, 1500, 4000, 5500, 6250, 7500, 8000, 9000], "Y": [500, 4000, 2000, 1500, 500, 5000, 100, 2500]} wells_df = pd.DataFrame(well_points_dict) Calculate distances calculations_count = 0 distance_map = np.zeros(map_df.shape) for i in range(map_df.shape[0]): d = [] for j in range(wells_df.shape[0]): d.append(((map_df["X"].iloc[i]-wells_df["X"][j])**2 + (map_df["Y"].iloc[i]- wells_df["Y"][j])**2)**0.5) calculations_count += 1 dd = min(d) distance_map[i,1] = dd # print(calculations_count) Print resulting map plt.figure(figsize=(10,10)) plt.scatter(x=map_df["X"],y=map_df["Y"],c=distance_map[:,1],s=1,cmap='terrain') for i in range(len(wells_df)): plt.plot(wells_df["X"][i],wells_df["Y"][i], color='black', marker='o',markersize=3) plt.title('Calculated map') plt.xlabel('X') plt.ylabel('Y') plt.axis('scaled') plt.tight_layout() plt.colorbar(shrink=0.25) Result map example: | KDTree is what you are looking for, there is an implementation of it in scipy.spatial. import numpy as np import matplotlib.pyplot as plt from scipy import spatial Given your trial points: x_value = np.arange(0, 10000, 50) y_value = np.arange(0, 5000, 50) X, Y = np.meshgrid(x_value, y_value) points = np.stack([X.ravel(), Y.ravel()]).T And well points: x_well = np.array([500, 1500, 4000, 5500, 6250, 7500, 8000, 9000]) y_well = np.array([500, 4000, 2000, 1500, 500, 5000, 100, 2500]) wells = np.stack([x_well, y_well]).T We can create a KDTree: interpolator = spatial.KDTree(wells) And query efficiently the tree to get distances and also indices of which point it is closer: distances, indices = interpolator.query(points) # 7.12 ms Β± 711 Β΅s per loop (mean Β± std. dev. of 30 runs, 100 loops each) Plotting the result leads to: fig, axe = plt.subplots() axe.scatter(*points.T, marker=".", c=distances) axe.scatter(*wells.T, color="black") axe.grid() We see the Voronoi diagram appearing on the color map which is a good confirmation distances are correctly interpreted wrt reference points (wells): voronoi = spatial.Voronoi(wells) # ... spatial.voronoi_plot_2d(voronoi, ax=axe) Where the object Voronoi is the Voronoi diagram based on your reference points (wells) and voronoi_plot_2d an helper to draw it on axes. | 2 | 4 |
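As a side note, with a reasonably recent SciPy (1.6+, an assumption to check against your installed version) the query itself can also be spread over several cores via the workers argument:

# use all available CPU cores for the nearest-neighbour queries
distances, indices = interpolator.query(points, workers=-1)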
78,665,006 | 2024-6-25 | https://stackoverflow.com/questions/78665006/time-library-returns-incorrect-execution-time-for-decorator-functions-in-flask | Say I have the following python files in my src directory for my Flask application (Flask-smortest to be specific): src/ ham.py eggs.py endpoint.py ham.py has 1 decorator function, while eggs.py has 3 decorator functions #ham.py script from functools import wraps import time def ham1(func): @wraps(func) def wrapper(*args, **kwargs): start_time = time.monotonic() i = func(*args, **kwargs) time.sleep(2) print(f'Execution time | ham1 -- {(time.monotonic() - start_time)} secs') return True return wrapper #eggs.py script from functools import wraps import time def egg1(func): @wraps(func) def wrapper(*args, **kwargs): i = func(*args, **kwargs) start_time = time.monotonic() time.sleep(20) print(f'Execution time | egg1 -- {(time.monotonic() - start_time)} secs') return True return wrapper def egg2(func): @wraps(func) def wrapper(*args, **kwargs): start_time = time.monotonic() i = func(*args, **kwargs) time.sleep(1) return True print(f'Execution time | egg2 -- {(time.monotonic() - start_time)} secs') return wrapper def egg3(func): @wraps(func) def wrapper(*args, **kwargs): start_time = time.monotonic() i = func(*args, **kwargs) time.sleep(1) print(f'Execution time | egg3 -- {(time.monotonic() - start_time)} secs') return True return wrapper #endpoint.py script from ham import ham1 from eggs import egg1, egg2, egg3 @egg3 @egg2 @egg1 @ham1 def foo(): return True Upon executing foo() in my flask-smortest application it gives me the following output: Execution time | ham1 -- 2 secs Execution time | egg1 -- 20 secs Execution time | egg2 -- 21 secs Execution time | egg3 -- 22 secs The output displays the wrong execution time for func egg2 and func egg3. Its supposed to be 1 second each but for some reason its accumulating the execution time for func egg1. Another thing to note is that func egg1 does not accumulate the execution time for func ham1 which is sitting in a different python file - ham.py . This behavior only happens in functions existing in the same python file - eggs.py. I tried time.perf_counter() as well but the still the same issue. I don't understand why the app is behaving this way. Is there some asynchronous running happening in the background in flask? | If you're trying to measure total time then what you did on egg is correct, but wrong for ham. But if time measuring is only intended for decorator-specific operations excluding decoration target's execution time - you must not put target function execution between time measurements. So either: # assuming return value of func() is needed def egg(func): def wrapper(*args, **kwargs): result = func() # <--- start_time = time.monotonic() time.sleep(2) print(f"{(time.monotonic() - start_time)} secs") return result return wrapper or this: def egg(func): def wrapper(*args, **kwargs): start_time = time.monotonic() time.sleep(2) print(f"{(time.monotonic() - start_time)} secs") return func() # <--- return wrapper should be done instead, not inbetween. Such issue can be quickly identified with tons more prints. 
import time def ham(func): def wrapper(): print("[Deco:ham] measuring start time") start_time = time.monotonic() print("[Deco:ham] running func") func() print("[Deco:ham] sleeping for 5 secs") time.sleep(5) print(f"[Deco:ham] sleep done in {(time.monotonic() - start_time)} secs") return wrapper def egg(func): def wrapper(): print("[Deco:egg] running func") func() print("[Deco:egg] measuring start time") start_time = time.monotonic() print("[Deco:egg] sleeping for 2 secs") time.sleep(2) print(f"[Deco:egg] sleep done in {(time.monotonic() - start_time)} secs") return wrapper @egg @ham def foo(): print("[foo] In foo") return if __name__ == '__main__': foo() [Deco:egg] measuring start time [Deco:egg] running func [Deco:ham] measuring start time [Deco:ham] running func [foo] In foo [Deco:ham] sleeping for 5 secs [Deco:ham] sleep done in 5.0 secs [Deco:egg] sleeping for 2 secs [Deco:egg] sleep done in 7.0 secs Bonus: contextlib.contextmanager You can use contextlib.contextmanager (or asynccontextmanager for async variant) for simplifying time measuring and such: from contextlib import contextmanager @contextmanager def measure_time(name: str): print(f"[{name}] Start measuring time") start_time = time.monotonic() yield print(f"[{name}] Time taken: {time.monotonic() - start_time} secs") ... with measure_time("name"): ... full code: import time from contextlib import contextmanager @contextmanager def measure_time(name: str): print(f"[{name}] Start measuring time") start_time = time.monotonic() yield print(f"[{name}] Time taken: {time.monotonic() - start_time} secs") def ham(func): def wrapper(): func() with measure_time("Deco:ham"): time.sleep(5) return wrapper def egg(func): def wrapper(): func() with measure_time("Deco:egg"): time.sleep(3) return wrapper @egg @ham def foo(): print("[foo] In foo") return if __name__ == '__main__': foo() [foo] In foo [Deco:ham] Start measuring time [Deco:ham] Time taken: 5.0 secs [Deco:egg] Start measuring time [Deco:egg] Time taken: 3.0 secs | 2 | 1 |
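The async variant mentioned above works the same way; a minimal sketch using contextlib.asynccontextmanager:

import asyncio
import time
from contextlib import asynccontextmanager

@asynccontextmanager
async def measure_time(name: str):
    print(f"[{name}] Start measuring time")
    start_time = time.monotonic()
    try:
        yield
    finally:
        print(f"[{name}] Time taken: {time.monotonic() - start_time} secs")

async def main():
    async with measure_time("async-block"):
        await asyncio.sleep(1)   # stands in for real asynchronous work

if __name__ == '__main__':
    asyncio.run(main())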
78,664,366 | 2024-6-24 | https://stackoverflow.com/questions/78664366/how-to-add-a-foreignkey-field-to-be-saved-in-modelform-django | While creating my website, I face the following problem: a user can add an article, and so the article model has a ForeignKey field - author. I created a form for adding it, inheriting from ModelForm, since I think it is more logical that way (as it interacts directly with db). The problem is that I cannot add this field to the Form, as it cannot be filled; it has to be taken from the request itself. author = request.user.pk So how can I make a form save the author id as well? Without it, there's an error: NOT NULL CONSTRAINT FAILED, which is logical as the form doesn't save pk and so it is null; but a FOREIGN KEY cannot be null. The only way is to inherit it from class Form? I don't really want to do it... I was thinking about 'overriding' method save() so that it has one more argument - author id: form.save(author=request.user.pk) This way it would work. But I changed my mind, because if something goes wrong, this will mess up the whole database... The save() method is too global. What's more, there might well be another way to solve my problem, which is more effective and clear. Here's the code for my form: class ArticleForm(ModelForm): class Meta: model = Article fields = ['title', 'content'] widgets = {'title' : TextInput(attrs={'class' : 'create_article_title', 'placeholder' : 'Enter the heading...' }), 'content' : Textarea(attrs={'class' : 'create_article_content', 'placeholder' : 'Your article...' })} and my model: class Article(Model): title = CharField(max_length=30) content = CharField(max_length=5000) author = ForeignKey(User, on_delete=CASCADE) rating = IntegerField(default=0) created = DateField(auto_now_add=True) last_updated = DateField(auto_now_add=True) articles = ArticleManager() objects = Manager() and my view: class AddArticle(View): template_name = 'myapp/addarticle.html' def get(self, request): context = {'add_article_form' : ArticleForm()} return render(request, self.template_name, context=context) def post(self, request): form = ArticleForm() form.save() return redirect('index') It seems that I completely misunderstand the basics of ModelForm creation... | There are other ways. Such as adding a hidden field to your form and set its value as the request.user.id. Although, I must say that your initial thought related to the save() method is the correct one. However, not overriding it but using an optional keyword provided by the method itself. Also, do not forget to set form data and validate it, which is one of the main features: class AddArticle(View): ... def post(self, request): form = ArticleForm(request.POST) if form.is_valid(): instance = form.save(commit=False) instance.author = request.user instance.save() return redirect('index') | 2 | 1 |
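Building on that answer, the post handler can also re-render the page with the bound form and its validation errors instead of always redirecting; a sketch reusing the names from the question (the forms import path is an assumption):

from django.shortcuts import render, redirect
from django.views import View

from .forms import ArticleForm  # assumed location of the ModelForm from the question


class AddArticle(View):
    template_name = 'myapp/addarticle.html'

    def get(self, request):
        return render(request, self.template_name, {'add_article_form': ArticleForm()})

    def post(self, request):
        form = ArticleForm(request.POST)
        if form.is_valid():
            article = form.save(commit=False)
            article.author = request.user   # taken from the request, never from user input
            article.save()
            return redirect('index')
        # invalid data: show the same page again with the errors bound to the form
        return render(request, self.template_name, {'add_article_form': form})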
78,653,824 | 2024-6-21 | https://stackoverflow.com/questions/78653824/efficiently-marking-holidays-in-a-data-column | I'm trying to add a column that indicates whether a date is a holiday or not. I found some code online, but I believe there's a more efficient way to do it, possibly using a polar method instead of map elements and lambda. Example code: import polars as pl import holidays # Initialize the holidays for Chile cl_holidays = holidays.CL() # Sample data data = { "Date": ["2024-06-20 00:00:00", "2024-06-21 00:00:00", "2024-06-22 00:00:00", "2024-06-23 00:00:00", "2024-06-24 00:00:00"], "Amount": [100, 200, 300, 400, 500], "User_Count" : [1, 2, 3, 4, 5] } # Create DataFrame df = pl.DataFrame(data) # Add a new column 'Is_Holiday' based on the Date column df = df.with_columns( (pl.col("Date").map_elements(lambda x: x.split(" ")[0] in cl_holidays, return_dtype=pl.Boolean)).alias("Is_Holiday") ).with_columns(pl.col("Date").str.strptime(pl.Datetime)) df Expected output: shape: (5, 4) βββββββββββββββββββββββ¬βββββββββ¬βββββββββββββ¬βββββββββββββ β Date β Amount β User_Count β Is_Holiday β β --- β --- β --- β --- β β datetime[ΞΌs] β i64 β i64 β bool β βββββββββββββββββββββββͺβββββββββͺβββββββββββββͺβββββββββββββ‘ β 2024-06-20 00:00:00 β 100 β 1 β true β β 2024-06-21 00:00:00 β 200 β 2 β false β β 2024-06-22 00:00:00 β 300 β 3 β false β β 2024-06-23 00:00:00 β 400 β 4 β false β β 2024-06-24 00:00:00 β 500 β 5 β false β βββββββββββββββββββββββ΄βββββββββ΄βββββββββββββ΄βββββββββββββ UPDATE: i tried with @ignoring_gravity aproach, and also tried changing the date format but i keep getting false instead of true UPDATE2: If i try @Hericks aproach i keep getting false. (I'm using polars 0.20.31 ) import polars as pl import holidays # Initialize the holidays for Chile cl_holidays = holidays.CL() # Sample data data = { "Date": ["2024-06-20 00:00:00", "2024-06-21 00:00:00", "2024-06-22 00:00:00", "2024-06-23 00:00:00", "2024-06-24 00:00:00"], "Amount": [100, 200, 300, 400, 500], "User_Count" : [1, 2, 3, 4, 5] } # Create DataFrame df = pl.DataFrame(data) # Add a new column 'Is_Holiday' based on the Date column df.with_columns( Is_Holiday=pl.col('Date').str.to_datetime().dt.date().is_in(cl_holidays.keys()) ) Output: shape: (5, 4) βββββββββββββββββββββββ¬βββββββββ¬βββββββββββββ¬βββββββββββββ β Date β Amount β User_Count β Is_Holiday β β --- β --- β --- β --- β β str β i64 β i64 β bool β βββββββββββββββββββββββͺβββββββββͺβββββββββββββͺβββββββββββββ‘ β 2024-06-20 00:00:00 β 100 β 1 β false β β 2024-06-21 00:00:00 β 200 β 2 β false β β 2024-06-22 00:00:00 β 300 β 3 β false β β 2024-06-23 00:00:00 β 400 β 4 β false β β 2024-06-24 00:00:00 β 500 β 5 β false β βββββββββββββββββββββββ΄βββββββββ΄βββββββββββββ΄βββββββββββββ | After doing some research, I found that I need to pass the years into my 'holidays.CL' constructor import polars as pl import holidays # Initialize the holidays for Chile cl_holidays = holidays.CL() # Sample data data = { "Date": ["2024-06-20 00:00:00", "2024-06-21 00:00:00", "2024-06-22 00:00:00", "2024-06-23 00:00:00", "2024-06-24 00:00:00"], "Amount": [100, 200, 300, 400, 500], "User_Count" : [1, 2, 3, 4, 5] } # Create DataFrame df = pl.DataFrame(data) # Add a new column 'Is_Holiday' based on the Date column df.with_columns( pl.col("Date").str.to_date(format="%Y-%m-%d %H:%M:%S") ).with_columns( pl.col("Date").is_in(list(holidays.CL(years=[2024]).keys())).alias("holiday") ) Output : shape: (5, 4) ββββββββββββββ¬βββββββββ¬βββββββββββββ¬ββββββββββ β Date β Amount β User_Count β holiday β 
β --- β --- β --- β --- β β date β i64 β i64 β bool β ββββββββββββββͺβββββββββͺβββββββββββββͺββββββββββ‘ β 2024-06-20 β 100 β 1 β true β β 2024-06-21 β 200 β 2 β false β β 2024-06-22 β 300 β 3 β false β β 2024-06-23 β 400 β 4 β false β β 2024-06-24 β 500 β 5 β false β ββββββββββββββ΄βββββββββ΄βββββββββββββ΄ββββββββββ | 5 | 1 |
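To avoid hardcoding the year, the holiday calendar can also be built from the years that actually occur in the data; a sketch assuming the same df as above, with Date still stored as strings:

import polars as pl
import holidays

df = df.with_columns(pl.col("Date").str.to_datetime(format="%Y-%m-%d %H:%M:%S"))

# derive the holiday years from the data instead of hardcoding 2024
years = df.get_column("Date").dt.year().unique().to_list()
cl_holidays = holidays.CL(years=years)

df = df.with_columns(
    pl.col("Date").dt.date().is_in(list(cl_holidays.keys())).alias("Is_Holiday")
)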
78,663,783 | 2024-6-24 | https://stackoverflow.com/questions/78663783/polars-datime-range-with-multiple-values-per-date | I'm trying to assign multiple entries from a list per date to my dateframe, so for each day, I would have all the values from the list. I tried passing the list as an argument; however, I'm having problems with the length of the dataframe. Conceptual code: from datetime import datetime import polars as pl Value_list = ["xd","xd1"] dates_df= pl.DataFrame( { "dates": pl.datetime_range( start=datetime(2023, 1, 1), end=datetime(2023, 12, 10), interval="1d", eager=True, closed="both", ), "Producto": Value_list } ) dates_df My actual code: from datetime import datetime import polars as pl dates_df= pl.DataFrame( { "dates": pl.datetime_range( start=datetime(2023, 1, 1), end=datetime(2023, 12, 10), interval="1d", eager=True, closed="both", ), "Producto": "xd" } ) dates_df Expected Output: shape: (344, 2) βββββββββββββββββββββββ¬βββββββββββ β dates β Producto β β --- β --- β β datetime[ΞΌs] β str β βββββββββββββββββββββββͺβββββββββββ‘ β 2023-01-01 00:00:00 β xd β β 2023-01-01 00:00:00 β xd1 β β 2023-01-02 00:00:00 β xd β β 2023-01-02 00:00:00 β xd1 β β 2023-01-03 00:00:00 β xd β β β¦ β β¦ β β 2023-12-08 00:00:00 β xd1 β β 2023-12-09 00:00:00 β xd β β 2023-12-09 00:00:00 β xd1 β β 2023-12-10 00:00:00 β xd β β 2023-12-10 00:00:00 β xd1 β βββββββββββββββββββββββ΄βββββββββββ | Are you looking for a cross-join? from datetime import datetime import polars as pl dates = pl.datetime_range( start=datetime(2023, 1, 1), end=datetime(2023, 12, 10), interval="1d", eager=True, closed="both", ).to_frame('date') values = pl.Series('producto', ['xd', 'xd1']).to_frame() print(dates.join(values, how='cross')) output: shape: (688, 2) βββββββββββββββββββββββ¬βββββββββββ β date β producto β β --- β --- β β datetime[ΞΌs] β str β βββββββββββββββββββββββͺβββββββββββ‘ β 2023-01-01 00:00:00 β xd β β 2023-01-01 00:00:00 β xd1 β β 2023-01-02 00:00:00 β xd β β 2023-01-02 00:00:00 β xd1 β β 2023-01-03 00:00:00 β xd β β β¦ β β¦ β β 2023-12-08 00:00:00 β xd1 β β 2023-12-09 00:00:00 β xd β β 2023-12-09 00:00:00 β xd1 β β 2023-12-10 00:00:00 β xd β β 2023-12-10 00:00:00 β xd1 β βββββββββββββββββββββββ΄βββββββββββ | 2 | 4 |
78,653,708 | 2024-6-21 | https://stackoverflow.com/questions/78653708/parallelizing-numpy-sort | I need to sort uint64 arrays of length 1e8-1e9, which is one of the performance bottlenecks in my current project. I have just recently updated numpy v2.0 version, in which the sorting algorithm is significantly optimized. Testing it on my hardware, its about 5x faster than numpy v1.26 version. But currently numpy's sorting algorithm cannot utilize multi-core CPUs even though it uses SIMD. I tried to parallelize it and sort multiple np.array at the same time. One possible approach is to use numba prange, but numba has always had poor support for numpy sorting. numba.jit even has a slowdown effect on np.sort, and numba v0.60.0 fails to follow up on numpy v2.0's optimizations for sorting (https://github.com/numba/numba/issues/9611). The alternative is cython prange, but cython does not allow the creation of Python objects at nogil. Is there a way to sort numpy.array in parallel using cython or otherwise? If using cpp's parallel sorting libraries, are they faster than numpy's own sorting, taking into account the overhead of data type conversions? arr=np.random.randint(0,2**64,int(3e8),dtype='uint64') sorted_arr=np.sort(arr) # single thread np.sort takes 4 seconds (numpy v2.0.0) | This answer show why a pure-Python, Numba or Cython implementation certainly cannot be used to write a (reasonably-simple) efficient implementation (this summaries the comments). It provides a C++ version which can be called from CPython. The provided version is fast independently of the Numpy version used (so Numpy 2.0 is not required). Why it is certainly not possible directly with Numba/Cython/pure-Python I do no think it is possible to call sort of Numpy in parallel with Cython/Numba because of the GIL and many additional issues. Regarding Numba, parallel loops need the GIL to be release and no object can be manipulated inside it. The Numba sorting function does not actually call Numpy functions, but its own implementation which does not use the GIL nor create any Python object (which require the GIL to be enabled). The Numba sequential implementation is inefficient anyway. While one can try to re-implement a parallel sort from scratch, the parallel features are too limited for the resulting implementation to be really fast or reasonable simple (or both). Indeed, it is limited to a simple parallel for loop called prange (no atomics, critical sections, barriers, TLS storage, etc.). Regarding Cython, prange of Cython requires the GIL to be disabled so creating Python object is not possible in the parallel loop preventing np.sort to be called... Cython provides more parallel features than Numba so re-implementing a parallel sort from scratch seems possible at first glance. However, in practice, it is really complicated (if even possible) to write a fast implementation because of many issues and opened/unknown bugs. 
Here are the issues I found out while trying to write such a code: OpenMP barriers are not yet available and there is no sane (portable) replacement; critical sections are also not yet available so one need to use manual locks (instead of just #pragma omp critical; arrays must be allocated and freed manually using malloc and free in parallel sections (bug prone and resulting in a more complex code); It is not possible to create views in parallel sections (only outside); Cython does not seems to support well Numpy 2.0 yet causing many compilation errors and also runtime ones (see this post which seems related to this); the documentation of OpenMP functions is rather limited (parts are simply missing); variables of a prange-based loop cannot be reused in a range-based loop outside the prange-loop I also tried to use a ThreadPoolExecutor so to call some optimized Cython/Numba functions and circumvent the aforementioned limitations of the two but it resulted in a very slow implementation (slower than just calling np.sort) mainly because of the GIL (nearly no speed up) and Numpy overhead (mainly temporary arrays and more precisely page-faults). Efficient parallel C++ solution We can write an efficient parallel C++ code performing the following steps: split the input array in N slices perform a bucket sort on each part in parallel so we get M buckets for each slice merge the resulting buckets so to get M buckets from the M x N buckets sort the M buckets in parallel using a SIMD-optimized sort -- this can be done with the x86simdsort C++ library (used internally by Numpy) though it only works on x86-64 CPUs merge the M buckets so to get the final array We need to write a BucketList data structure so to add numbers in a variable-size container. This is basically a linked list of chunks. Note a growing std::vector is not efficient because each resize put too much pressure on memory (and std::deque operations are so slow that is is even slower). Here is the resulting C++ code: // File: wrapper.cpp // Assume x86-simd-sort has been cloned in the same directory and built #include "x86-simd-sort/lib/x86simdsort.h" #include <cstdlib> #include <cstring> #include <forward_list> #include <mutex> #include <omp.h> template <typename T, size_t bucketMaxSize> struct BucketList { using Bucket = std::array<T, bucketMaxSize>; std::forward_list<Bucket> buckets; uint32_t bucketCount; uint32_t lastBucketSize; BucketList() : bucketCount(1), lastBucketSize(0) { buckets.emplace_front(); } void push_back(const T& value) { if (lastBucketSize == bucketMaxSize) { buckets.emplace_front(); lastBucketSize = 0; bucketCount++; } Bucket* lastBucket = &*buckets.begin(); (*lastBucket)[lastBucketSize] = value; lastBucketSize++; } size_t size() const { return (size_t(bucketCount) - 1lu) * bucketMaxSize + lastBucketSize; } size_t bucketSize(size_t idx) const { return idx == 0 ? 
lastBucketSize : bucketMaxSize; } }; extern "C" void parallel_sort(int64_t* arr, size_t size) { static const size_t bucketSize = 2048; static const size_t radixBits = 11; static const size_t bucketCount = 1 << radixBits; struct alignas(64) Slice { int64_t* data = nullptr; size_t size = 0; size_t global_offset = 0; size_t local_offset = 0; std::mutex mutex; }; std::array<Slice, bucketCount> slices; #pragma omp parallel { std::array<BucketList<int64_t, bucketSize>, bucketCount> tlsBuckets; #pragma omp for nowait for (size_t i = 0; i < size; ++i) { constexpr uint64_t signBit = uint64_t(1) << uint64_t(63); const uint64_t idx = (uint64_t(arr[i]) ^ signBit) >> (64 - radixBits); tlsBuckets[idx].push_back(arr[i]); } #pragma omp critical for (size_t i = 0; i < bucketCount; ++i) slices[i].size += tlsBuckets[i].size(); #pragma omp barrier #pragma omp single { size_t offset = 0; for (size_t i = 0; i < bucketCount; ++i) { Slice& slice = slices[i]; slice.data = &arr[offset]; slice.global_offset = offset; offset += slice.size; } } for (size_t i = 0; i < bucketCount; ++i) { Slice& slice = slices[i]; size_t local_offset; size_t local_offset_end; { std::scoped_lock lock(slice.mutex); local_offset = slice.local_offset; slice.local_offset += tlsBuckets[i].size(); local_offset_end = slice.local_offset; } uint32_t bucketListId = 0; for(const auto& kv : tlsBuckets[i].buckets) { const size_t actualBucketSize = tlsBuckets[i].bucketSize(bucketListId); memcpy(&slice.data[local_offset], &kv[0], sizeof(int64_t) * actualBucketSize); local_offset += actualBucketSize; bucketListId++; } } #pragma omp barrier #pragma omp for schedule(dynamic) for (size_t i = 0; i < bucketCount; ++i) x86simdsort::qsort(&slices[i].data[0], slices[i].size); } } A simple header can be written if you want to call this implementation from Cython (though it can be complicated due to the aforementioned Cython/Numpy-2.0 compatibility issue). Here is an example: // File: wrapper.h #include <stdlib.h> #include <stdint.h> void parallel_sort(int64_t* arr, size_t size) You can compile the code with Clang using the following command lines on Linux: clang++ -O3 -fopenmp -c wrapper.cpp -fPIC -g clang wrapper.o -o wrapper.so -fopenmp --shared -Lx86-simd-sort/build -lx86simdsortcpp The following one may also be needed to find the x86-simd-sort library at runtime once cloned and built: export LD_LIBRARY_PATH=x86-simd-sort/build:$LD_LIBRARY_PATH You can finally use the fast sorting function from a Python code. I personally use ctypes because it worked directly with no issues (except when the code is compiled with GCC for unknown strange reasons). Here is an example: import numpy as np import ctypes lib = ctypes.CDLL('./wrapper.so') parallel_sort = lib.parallel_sort parallel_sort.argtypes = [ctypes.c_voidp, ctypes.c_size_t] parallel_sort.restype = None fullCheck = False print('Generating...') a = np.random.randint(0, (1<<63) - 1, 1024*1024**2) if fullCheck: b = a.copy() print('Benchmark...') #%time a.sort() %time parallel_sort(a.ctypes.data, a.size) print('Full check...' if fullCheck else 'Check...') if fullCheck: b.sort() assert np.array_equal(b, a) else: assert np.all(np.diff(a) >= 0) Notes and performance results Note this require a lot of memory to do the test, especially if fullCheck is set to true. Note that the C++ code is optimized for sorting huge arrays (with >1e8 items). The memory consumption will be significant for smaller arrays compared to their size. The current code will even be slow for small arrays (<1e5). 
You can tune constants/parameters regarding your needs. For tiny arrays, you can directly call the x86-simd-sort library. Once tuned properly, it should be faster than np.sort for all arrays (whatever their size). I strongly advise you to tune the parameters regarding your specific input and target CPU, especially radixBits. The current code/parameters are optimized for mainstream Intel CPUs (not recent big-little Intel ones nor AMD ones) and positive numbers. If you know there are only positive numbers in the input, you can skip the most-significantly bit (sign bit). Here is the resulting timings on my 6-core i5-9600KF CPU (with Numpy 2.0): np.sort: 19.3 s Proposed C++ code: 4.3 s The C++ parallel implementation is 4.5 times faster than the sequential optimized Numpy one. Note I did not massively test the code but basic checks like the one proposed in the provided Python script reported no error so far (even on negative numbers apparently). Note that this sort is efficient if the highest bits of the sorted numbers are different set (this is the downside of bucket/radix sorts). Ideally, numbers should be uniformly distributed and use all the highest bits. If this is not the case, then the buckets will be unbalanced resulting in a lower scalability. In the worst case, only 1 bucket is used resulting in a serial implementation. You can track the highest bit set so to mitigate this issue. More complex approaches are required when there are some rare big outliers (eg. remapping preserving the ordering). | 7 | 5 |
78,658,850 | 2024-6-23 | https://stackoverflow.com/questions/78658850/difference-between-time-vs-time-vs-timeit-timeit-in-jupyter-notebook | Help me find the exact difference between the time and timeit magic commands in a Jupyter notebook. %time a="stack overflow" time.sleep(2) print(a) It shows only some microseconds, but I added a sleep time of 2 seconds | Magics that are prefixed with double percentage signs (%%) are considered cell magics. This differs from line magics, which are prefixed with a single percentage sign (%). It follows that time is used to time the execution of code either at a line level (%) or cell level (%%). timeit is similar but runs the code multiple times to provide more accurate and statistically significant timing information. The key distinction is that %time and %%time (docs) measure the execution time of code once, while %timeit and %%timeit (docs) measure the execution time of code multiple times to give an average time and standard deviation. Hence, %%time a="stack overflow" time.sleep(2) print(a) This output shows both the CPU time and the wall time. The wall time includes the 2-second sleep, while the CPU time accounts for the actual processing time. Replacing %%time with %time will report the CPU time and the wall time only for the single statement on that line. | 2 | 3 |
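To see the distinction concretely, here is an illustrative sketch of two separate notebook cells (run each on its own; the second cell must begin with the %%time line for the cell magic to apply):

import time                   # Cell 1: the line magic times only the statement on its own line,
%time a = "stack overflow"    # so this reports microseconds and ignores the sleep below
time.sleep(2)
print(a)

%%time
import time                   # Cell 2: the cell magic times the whole cell,
a = "stack overflow"          # so the reported wall time includes the 2-second sleep
time.sleep(2)
print(a)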
78,658,424 | 2024-6-23 | https://stackoverflow.com/questions/78658424/matplotlib-set-ticks-and-labels-at-regular-intervals-but-starting-at-specific-d | I am trying to set the following dates (year only): lab = [1969, 1973, 1977, 1981, 1985, 1989, 1993, 1997, 2001, 2005, 2009, 2013, 2017, 2021, 2025] I tried with dates.YearLocator(base=4), but didn't find a way to set the starting year. I get the labels starting in 1968 instead of 1969. I also tried with ticker.FixedFormatter(lab), but the ticks and dates were shown in the wrong place. # reproducible example import pandas as pd from pandas import Timestamp import numpy as np # np.nan import matplotlib.pyplot as plt from matplotlib import dates from matplotlib import ticker data = {'Date': {1: Timestamp('1969-01-20 00:00:00'), 2: Timestamp('1969-04-01 00:00:00'), 3: Timestamp('1969-07-01 00:00:00'), 4: Timestamp('1969-10-01 00:00:00'), 5: Timestamp('1970-01-01 00:00:00'), 6: Timestamp('1970-04-01 00:00:00'), 7: Timestamp('1970-07-01 00:00:00'), 8: Timestamp('1970-10-01 00:00:00'), 9: Timestamp('1971-01-01 00:00:00'), 10: Timestamp('1971-04-01 00:00:00'), 11: Timestamp('1971-07-01 00:00:00'), 12: Timestamp('1971-10-01 00:00:00'), 13: Timestamp('1972-01-01 00:00:00'), 14: Timestamp('1972-04-01 00:00:00'), 15: Timestamp('1972-07-01 00:00:00'), 16: Timestamp('1972-10-01 00:00:00'), 17: Timestamp('1973-01-01 00:00:00'), 18: Timestamp('1973-04-01 00:00:00'), 19: Timestamp('1973-07-01 00:00:00'), 20: Timestamp('1973-10-01 00:00:00'), 21: Timestamp('1974-01-01 00:00:00'), 22: Timestamp('1974-04-01 00:00:00'), 23: Timestamp('1974-07-01 00:00:00'), 24: Timestamp('1974-08-09 00:00:00'), 25: Timestamp('1974-10-01 00:00:00'), 26: Timestamp('1975-01-01 00:00:00'), 27: Timestamp('1975-04-01 00:00:00'), 28: Timestamp('1975-07-01 00:00:00'), 29: Timestamp('1975-10-01 00:00:00'), 30: Timestamp('1976-01-01 00:00:00'), 31: Timestamp('1976-04-01 00:00:00'), 32: Timestamp('1976-07-01 00:00:00'), 33: Timestamp('1976-10-01 00:00:00'), 34: Timestamp('1977-01-01 00:00:00'), 35: Timestamp('1977-01-20 00:00:00'), 36: Timestamp('1977-04-01 00:00:00'), 37: Timestamp('1977-07-01 00:00:00'), 38: Timestamp('1977-10-01 00:00:00'), 39: Timestamp('1978-01-01 00:00:00'), 40: Timestamp('1978-04-01 00:00:00'), 41: Timestamp('1978-07-01 00:00:00'), 42: Timestamp('1978-10-01 00:00:00'), 43: Timestamp('1979-01-01 00:00:00'), 44: Timestamp('1979-04-01 00:00:00'), 45: Timestamp('1979-07-01 00:00:00'), 46: Timestamp('1979-10-01 00:00:00'), 47: Timestamp('1980-01-01 00:00:00'), 48: Timestamp('1980-04-01 00:00:00'), 49: Timestamp('1980-07-01 00:00:00'), 50: Timestamp('1980-10-01 00:00:00'), 51: Timestamp('1981-01-01 00:00:00'), 52: Timestamp('1981-01-20 00:00:00'), 53: Timestamp('1981-04-01 00:00:00'), 54: Timestamp('1981-07-01 00:00:00'), 55: Timestamp('1981-10-01 00:00:00'), 56: Timestamp('1982-01-01 00:00:00'), 57: Timestamp('1982-04-01 00:00:00'), 58: Timestamp('1982-07-01 00:00:00'), 59: Timestamp('1982-10-01 00:00:00'), 60: Timestamp('1983-01-01 00:00:00'), 61: Timestamp('1983-04-01 00:00:00'), 62: Timestamp('1983-07-01 00:00:00'), 63: Timestamp('1983-10-01 00:00:00'), 64: Timestamp('1984-01-01 00:00:00'), 65: Timestamp('1984-04-01 00:00:00'), 66: Timestamp('1984-07-01 00:00:00'), 67: Timestamp('1984-10-01 00:00:00'), 68: Timestamp('1985-01-01 00:00:00'), 69: Timestamp('1985-04-01 00:00:00'), 70: Timestamp('1985-07-01 00:00:00'), 71: Timestamp('1985-10-01 00:00:00'), 72: Timestamp('1986-01-01 00:00:00'), 73: Timestamp('1986-04-01 00:00:00'), 74: Timestamp('1986-07-01 00:00:00'), 
75: Timestamp('1986-10-01 00:00:00'), 76: Timestamp('1987-01-01 00:00:00'), 77: Timestamp('1987-04-01 00:00:00'), 78: Timestamp('1987-07-01 00:00:00'), 79: Timestamp('1987-10-01 00:00:00'), 80: Timestamp('1988-01-01 00:00:00'), 81: Timestamp('1988-04-01 00:00:00'), 82: Timestamp('1988-07-01 00:00:00'), 83: Timestamp('1988-10-01 00:00:00'), 84: Timestamp('1989-01-01 00:00:00'), 85: Timestamp('1989-01-20 00:00:00'), 86: Timestamp('1989-04-01 00:00:00'), 87: Timestamp('1989-07-01 00:00:00'), 88: Timestamp('1989-10-01 00:00:00'), 89: Timestamp('1990-01-01 00:00:00'), 90: Timestamp('1990-04-01 00:00:00'), 91: Timestamp('1990-07-01 00:00:00'), 92: Timestamp('1990-10-01 00:00:00'), 93: Timestamp('1991-01-01 00:00:00'), 94: Timestamp('1991-04-01 00:00:00'), 95: Timestamp('1991-07-01 00:00:00'), 96: Timestamp('1991-10-01 00:00:00'), 97: Timestamp('1992-01-01 00:00:00'), 98: Timestamp('1992-04-01 00:00:00'), 99: Timestamp('1992-07-01 00:00:00'), 100: Timestamp('1992-10-01 00:00:00'), 101: Timestamp('1993-01-01 00:00:00'), 102: Timestamp('1993-01-20 00:00:00'), 103: Timestamp('1993-04-01 00:00:00'), 104: Timestamp('1993-07-01 00:00:00'), 105: Timestamp('1993-10-01 00:00:00'), 106: Timestamp('1994-01-01 00:00:00'), 107: Timestamp('1994-04-01 00:00:00'), 108: Timestamp('1994-07-01 00:00:00'), 109: Timestamp('1994-10-01 00:00:00'), 110: Timestamp('1995-01-01 00:00:00'), 111: Timestamp('1995-04-01 00:00:00'), 112: Timestamp('1995-07-01 00:00:00'), 113: Timestamp('1995-10-01 00:00:00'), 114: Timestamp('1996-01-01 00:00:00'), 115: Timestamp('1996-04-01 00:00:00'), 116: Timestamp('1996-07-01 00:00:00'), 117: Timestamp('1996-10-01 00:00:00'), 118: Timestamp('1997-01-01 00:00:00'), 119: Timestamp('1997-04-01 00:00:00'), 120: Timestamp('1997-07-01 00:00:00'), 121: Timestamp('1997-10-01 00:00:00'), 122: Timestamp('1998-01-01 00:00:00'), 123: Timestamp('1998-04-01 00:00:00'), 124: Timestamp('1998-07-01 00:00:00'), 125: Timestamp('1998-10-01 00:00:00'), 126: Timestamp('1999-01-01 00:00:00'), 127: Timestamp('1999-04-01 00:00:00'), 128: Timestamp('1999-07-01 00:00:00'), 129: Timestamp('1999-10-01 00:00:00'), 130: Timestamp('2000-01-01 00:00:00'), 131: Timestamp('2000-04-01 00:00:00'), 132: Timestamp('2000-07-01 00:00:00'), 133: Timestamp('2000-10-01 00:00:00'), 134: Timestamp('2001-01-01 00:00:00'), 135: Timestamp('2001-01-20 00:00:00'), 136: Timestamp('2001-04-01 00:00:00'), 137: Timestamp('2001-07-01 00:00:00'), 138: Timestamp('2001-10-01 00:00:00'), 139: Timestamp('2002-01-01 00:00:00'), 140: Timestamp('2002-04-01 00:00:00'), 141: Timestamp('2002-07-01 00:00:00'), 142: Timestamp('2002-10-01 00:00:00'), 143: Timestamp('2003-01-01 00:00:00'), 144: Timestamp('2003-04-01 00:00:00'), 145: Timestamp('2003-07-01 00:00:00'), 146: Timestamp('2003-10-01 00:00:00'), 147: Timestamp('2004-01-01 00:00:00'), 148: Timestamp('2004-04-01 00:00:00'), 149: Timestamp('2004-07-01 00:00:00'), 150: Timestamp('2004-10-01 00:00:00'), 151: Timestamp('2005-01-01 00:00:00'), 152: Timestamp('2005-04-01 00:00:00'), 153: Timestamp('2005-07-01 00:00:00'), 154: Timestamp('2005-10-01 00:00:00'), 155: Timestamp('2006-01-01 00:00:00'), 156: Timestamp('2006-04-01 00:00:00'), 157: Timestamp('2006-07-01 00:00:00'), 158: Timestamp('2006-10-01 00:00:00'), 159: Timestamp('2007-01-01 00:00:00'), 160: Timestamp('2007-04-01 00:00:00'), 161: Timestamp('2007-07-01 00:00:00'), 162: Timestamp('2007-10-01 00:00:00'), 163: Timestamp('2008-01-01 00:00:00'), 164: Timestamp('2008-04-01 00:00:00'), 165: Timestamp('2008-07-01 00:00:00'), 166: Timestamp('2008-10-01 
00:00:00'), 167: Timestamp('2009-01-01 00:00:00'), 168: Timestamp('2009-01-20 00:00:00'), 169: Timestamp('2009-04-01 00:00:00'), 170: Timestamp('2009-07-01 00:00:00'), 171: Timestamp('2009-10-01 00:00:00'), 172: Timestamp('2010-01-01 00:00:00'), 173: Timestamp('2010-04-01 00:00:00'), 174: Timestamp('2010-07-01 00:00:00'), 175: Timestamp('2010-10-01 00:00:00'), 176: Timestamp('2011-01-01 00:00:00'), 177: Timestamp('2011-04-01 00:00:00'), 178: Timestamp('2011-07-01 00:00:00'), 179: Timestamp('2011-10-01 00:00:00'), 180: Timestamp('2012-01-01 00:00:00'), 181: Timestamp('2012-04-01 00:00:00'), 182: Timestamp('2012-07-01 00:00:00'), 183: Timestamp('2012-10-01 00:00:00'), 184: Timestamp('2013-01-01 00:00:00'), 185: Timestamp('2013-04-01 00:00:00'), 186: Timestamp('2013-07-01 00:00:00'), 187: Timestamp('2013-10-01 00:00:00'), 188: Timestamp('2014-01-01 00:00:00'), 189: Timestamp('2014-04-01 00:00:00'), 190: Timestamp('2014-07-01 00:00:00'), 191: Timestamp('2014-10-01 00:00:00'), 192: Timestamp('2015-01-01 00:00:00'), 193: Timestamp('2015-04-01 00:00:00'), 194: Timestamp('2015-07-01 00:00:00'), 195: Timestamp('2015-10-01 00:00:00'), 196: Timestamp('2016-01-01 00:00:00'), 197: Timestamp('2016-04-01 00:00:00'), 198: Timestamp('2016-07-01 00:00:00'), 199: Timestamp('2016-10-01 00:00:00'), 200: Timestamp('2017-01-01 00:00:00'), 201: Timestamp('2017-01-20 00:00:00'), 202: Timestamp('2017-04-01 00:00:00'), 203: Timestamp('2017-07-01 00:00:00'), 204: Timestamp('2017-10-01 00:00:00'), 205: Timestamp('2018-01-01 00:00:00'), 206: Timestamp('2018-04-01 00:00:00'), 207: Timestamp('2018-07-01 00:00:00'), 208: Timestamp('2018-10-01 00:00:00'), 209: Timestamp('2019-01-01 00:00:00'), 210: Timestamp('2019-04-01 00:00:00'), 211: Timestamp('2019-07-01 00:00:00'), 212: Timestamp('2019-10-01 00:00:00'), 213: Timestamp('2020-01-01 00:00:00'), 214: Timestamp('2020-04-01 00:00:00'), 215: Timestamp('2020-07-01 00:00:00'), 216: Timestamp('2020-10-01 00:00:00'), 217: Timestamp('2021-01-01 00:00:00'), 218: Timestamp('2021-01-20 00:00:00'), 219: Timestamp('2021-04-01 00:00:00'), 220: Timestamp('2021-07-01 00:00:00'), 221: Timestamp('2021-10-01 00:00:00'), 222: Timestamp('2022-01-01 00:00:00'), 223: Timestamp('2022-04-01 00:00:00'), 224: Timestamp('2022-07-01 00:00:00'), 225: Timestamp('2022-10-01 00:00:00'), 226: Timestamp('2023-01-01 00:00:00'), 227: Timestamp('2023-04-01 00:00:00'), 228: Timestamp('2023-07-01 00:00:00'), 229: Timestamp('2023-10-01 00:00:00'), 230: Timestamp('2024-01-01 00:00:00')}, 'Unemployment Rate': {1: 3.4, 2: 3.4, 3: 3.5, 4: 3.7, 5: 3.9, 6: 4.6, 7: 5.0, 8: 5.5, 9: 5.9, 10: 5.9, 11: 6.0, 12: 5.8, 13: 5.8, 14: 5.7, 15: 5.6, 16: 5.6, 17: 4.9, 18: 5.0, 19: 4.8, 20: 4.6, 21: 5.1, 22: 5.1, 23: 5.5, 24: 5.711956521739131, 25: 6.0, 26: 8.1, 27: 8.8, 28: 8.6, 29: 8.4, 30: 7.9, 31: 7.7, 32: 7.8, 33: 7.7, 34: 7.5, 35: 7.4366666666666665, 36: 7.2, 37: 6.9, 38: 6.8, 39: 6.4, 40: 6.1, 41: 6.2, 42: 5.8, 43: 5.9, 44: 5.8, 45: 5.7, 46: 6.0, 47: 6.3, 48: 6.9, 49: 7.8, 50: 7.5, 51: 7.5, 52: 7.4366666666666665, 53: 7.2, 54: 7.2, 55: 7.9, 56: 8.6, 57: 9.3, 58: 9.8, 59: 10.4, 60: 10.4, 61: 10.2, 62: 9.4, 63: 8.8, 64: 8.0, 65: 7.7, 66: 7.5, 67: 7.4, 68: 7.3, 69: 7.3, 70: 7.4, 71: 7.1, 72: 6.7, 73: 7.1, 74: 7.0, 75: 7.0, 76: 6.6, 77: 6.3, 78: 6.1, 79: 6.0, 80: 5.7, 81: 5.4, 82: 5.4, 83: 5.4, 84: 5.4, 85: 5.357777777777778, 86: 5.2, 87: 5.2, 88: 5.3, 89: 5.4, 90: 5.4, 91: 5.5, 92: 5.9, 93: 6.4, 94: 6.7, 95: 6.8, 96: 7.0, 97: 7.3, 98: 7.4, 99: 7.7, 100: 7.3, 101: 7.3, 102: 7.257777777777777, 103: 7.1, 104: 6.9, 105: 6.8, 106: 
6.6, 107: 6.4, 108: 6.1, 109: 5.8, 110: 5.6, 111: 5.8, 112: 5.7, 113: 5.5, 114: 5.6, 115: 5.6, 116: 5.5, 117: 5.2, 118: 5.3, 119: 5.1, 120: 4.9, 121: 4.7, 122: 4.6, 123: 4.3, 124: 4.5, 125: 4.5, 126: 4.3, 127: 4.3, 128: 4.3, 129: 4.1, 130: 4.0, 131: 3.8, 132: 4.0, 133: 3.9, 134: 4.2, 135: 4.242222222222223, 136: 4.4, 137: 4.6, 138: 5.3, 139: 5.7, 140: 5.9, 141: 5.8, 142: 5.7, 143: 5.8, 144: 6.0, 145: 6.2, 146: 6.0, 147: 5.7, 148: 5.6, 149: 5.5, 150: 5.5, 151: 5.3, 152: 5.2, 153: 5.0, 154: 5.0, 155: 4.7, 156: 4.7, 157: 4.7, 158: 4.4, 159: 4.6, 160: 4.5, 161: 4.7, 162: 4.7, 163: 5.0, 164: 5.0, 165: 5.8, 166: 6.5, 167: 7.8, 168: 8.053333333333333, 169: 9.0, 170: 9.5, 171: 10.0, 172: 9.8, 173: 9.9, 174: 9.4, 175: 9.4, 176: 9.1, 177: 9.1, 178: 9.0, 179: 8.8, 180: 8.3, 181: 8.2, 182: 8.2, 183: 7.8, 184: 8.0, 185: 7.6, 186: 7.3, 187: 7.2, 188: 6.6, 189: 6.2, 190: 6.2, 191: 5.7, 192: 5.7, 193: 5.4, 194: 5.2, 195: 5.0, 196: 4.8, 197: 5.1, 198: 4.8, 199: 4.9, 200: 4.7, 201: 4.636666666666667, 202: 4.4, 203: 4.3, 204: 4.2, 205: 4.0, 206: 4.0, 207: 3.8, 208: 3.8, 209: 4.0, 210: 3.7, 211: 3.7, 212: 3.6, 213: 3.6, 214: 14.8, 215: 10.2, 216: 6.8, 217: 6.4, 218: 6.336666666666667, 219: 6.1, 220: 5.4, 221: 4.5, 222: 4.0, 223: 3.7, 224: 3.5, 225: 3.6, 226: 3.4, 227: 3.4, 228: 3.5, 229: 3.8, 230: 3.7}, 'Republican': {1: True, 2: True, 3: True, 4: True, 5: True, 6: True, 7: True, 8: True, 9: True, 10: True, 11: True, 12: True, 13: True, 14: True, 15: True, 16: True, 17: True, 18: True, 19: True, 20: True, 21: True, 22: True, 23: True, 24: True, 25: True, 26: True, 27: True, 28: True, 29: True, 30: True, 31: True, 32: True, 33: True, 34: True, 35: False, 36: False, 37: False, 38: False, 39: False, 40: False, 41: False, 42: False, 43: False, 44: False, 45: False, 46: False, 47: False, 48: False, 49: False, 50: False, 51: False, 52: True, 53: True, 54: True, 55: True, 56: True, 57: True, 58: True, 59: True, 60: True, 61: True, 62: True, 63: True, 64: True, 65: True, 66: True, 67: True, 68: True, 69: True, 70: True, 71: True, 72: True, 73: True, 74: True, 75: True, 76: True, 77: True, 78: True, 79: True, 80: True, 81: True, 82: True, 83: True, 84: True, 85: True, 86: True, 87: True, 88: True, 89: True, 90: True, 91: True, 92: True, 93: True, 94: True, 95: True, 96: True, 97: True, 98: True, 99: True, 100: True, 101: True, 102: False, 103: False, 104: False, 105: False, 106: False, 107: False, 108: False, 109: False, 110: False, 111: False, 112: False, 113: False, 114: False, 115: False, 116: False, 117: False, 118: False, 119: False, 120: False, 121: False, 122: False, 123: False, 124: False, 125: False, 126: False, 127: False, 128: False, 129: False, 130: False, 131: False, 132: False, 133: False, 134: False, 135: True, 136: True, 137: True, 138: True, 139: True, 140: True, 141: True, 142: True, 143: True, 144: True, 145: True, 146: True, 147: True, 148: True, 149: True, 150: True, 151: True, 152: True, 153: True, 154: True, 155: True, 156: True, 157: True, 158: True, 159: True, 160: True, 161: True, 162: True, 163: True, 164: True, 165: True, 166: True, 167: True, 168: False, 169: False, 170: False, 171: False, 172: False, 173: False, 174: False, 175: False, 176: False, 177: False, 178: False, 179: False, 180: False, 181: False, 182: False, 183: False, 184: False, 185: False, 186: False, 187: False, 188: False, 189: False, 190: False, 191: False, 192: False, 193: False, 194: False, 195: False, 196: False, 197: False, 198: False, 199: False, 200: False, 201: True, 202: True, 203: True, 204: True, 205: True, 206: 
True, 207: True, 208: True, 209: True, 210: True, 211: True, 212: True, 213: True, 214: True, 215: True, 216: True, 217: True, 218: False, 219: False, 220: False, 221: False, 222: False, 223: False, 224: False, 225: False, 226: False, 227: False, 228: False, 229: False, 230: False}, 'Democrat': {1: False, 2: False, 3: False, 4: False, 5: False, 6: False, 7: False, 8: False, 9: False, 10: False, 11: False, 12: False, 13: False, 14: False, 15: False, 16: False, 17: False, 18: False, 19: False, 20: False, 21: False, 22: False, 23: False, 24: False, 25: False, 26: False, 27: False, 28: False, 29: False, 30: False, 31: False, 32: False, 33: False, 34: False, 35: True, 36: True, 37: True, 38: True, 39: True, 40: True, 41: True, 42: True, 43: True, 44: True, 45: True, 46: True, 47: True, 48: True, 49: True, 50: True, 51: True, 52: False, 53: False, 54: False, 55: False, 56: False, 57: False, 58: False, 59: False, 60: False, 61: False, 62: False, 63: False, 64: False, 65: False, 66: False, 67: False, 68: False, 69: False, 70: False, 71: False, 72: False, 73: False, 74: False, 75: False, 76: False, 77: False, 78: False, 79: False, 80: False, 81: False, 82: False, 83: False, 84: False, 85: False, 86: False, 87: False, 88: False, 89: False, 90: False, 91: False, 92: False, 93: False, 94: False, 95: False, 96: False, 97: False, 98: False, 99: False, 100: False, 101: False, 102: True, 103: True, 104: True, 105: True, 106: True, 107: True, 108: True, 109: True, 110: True, 111: True, 112: True, 113: True, 114: True, 115: True, 116: True, 117: True, 118: True, 119: True, 120: True, 121: True, 122: True, 123: True, 124: True, 125: True, 126: True, 127: True, 128: True, 129: True, 130: True, 131: True, 132: True, 133: True, 134: True, 135: False, 136: False, 137: False, 138: False, 139: False, 140: False, 141: False, 142: False, 143: False, 144: False, 145: False, 146: False, 147: False, 148: False, 149: False, 150: False, 151: False, 152: False, 153: False, 154: False, 155: False, 156: False, 157: False, 158: False, 159: False, 160: False, 161: False, 162: False, 163: False, 164: False, 165: False, 166: False, 167: False, 168: True, 169: True, 170: True, 171: True, 172: True, 173: True, 174: True, 175: True, 176: True, 177: True, 178: True, 179: True, 180: True, 181: True, 182: True, 183: True, 184: True, 185: True, 186: True, 187: True, 188: True, 189: True, 190: True, 191: True, 192: True, 193: True, 194: True, 195: True, 196: True, 197: True, 198: True, 199: True, 200: True, 201: False, 202: False, 203: False, 204: False, 205: False, 206: False, 207: False, 208: False, 209: False, 210: False, 211: False, 212: False, 213: False, 214: False, 215: False, 216: False, 217: False, 218: True, 219: True, 220: True, 221: True, 222: True, 223: True, 224: True, 225: True, 226: True, 227: True, 228: True, 229: True, 230: True}, 'President': {1: 'Nixon', 2: 'Nixon', 3: 'Nixon', 4: 'Nixon', 5: 'Nixon', 6: 'Nixon', 7: 'Nixon', 8: 'Nixon', 9: 'Nixon', 10: 'Nixon', 11: 'Nixon', 12: 'Nixon', 13: 'Nixon', 14: 'Nixon', 15: 'Nixon', 16: 'Nixon', 17: 'Nixon', 18: 'Nixon', 19: 'Nixon', 20: 'Nixon', 21: 'Nixon', 22: 'Nixon', 23: 'Nixon', 24: 'Ford', 25: 'Ford', 26: 'Ford', 27: 'Ford', 28: 'Ford', 29: 'Ford', 30: 'Ford', 31: 'Ford', 32: 'Ford', 33: 'Ford', 34: 'Ford', 35: 'Carter', 36: 'Carter', 37: 'Carter', 38: 'Carter', 39: 'Carter', 40: 'Carter', 41: 'Carter', 42: 'Carter', 43: 'Carter', 44: 'Carter', 45: 'Carter', 46: 'Carter', 47: 'Carter', 48: 'Carter', 49: 'Carter', 50: 'Carter', 51: 'Carter', 52: 'Reagan', 53: 
'Reagan', 54: 'Reagan', 55: 'Reagan', 56: 'Reagan', 57: 'Reagan', 58: 'Reagan', 59: 'Reagan', 60: 'Reagan', 61: 'Reagan', 62: 'Reagan', 63: 'Reagan', 64: 'Reagan', 65: 'Reagan', 66: 'Reagan', 67: 'Reagan', 68: 'Reagan', 69: 'Reagan', 70: 'Reagan', 71: 'Reagan', 72: 'Reagan', 73: 'Reagan', 74: 'Reagan', 75: 'Reagan', 76: 'Reagan', 77: 'Reagan', 78: 'Reagan', 79: 'Reagan', 80: 'Reagan', 81: 'Reagan', 82: 'Reagan', 83: 'Reagan', 84: 'Reagan', 85: 'Bush', 86: 'Bush', 87: 'Bush', 88: 'Bush', 89: 'Bush', 90: 'Bush', 91: 'Bush', 92: 'Bush', 93: 'Bush', 94: 'Bush', 95: 'Bush', 96: 'Bush', 97: 'Bush', 98: 'Bush', 99: 'Bush', 100: 'Bush', 101: 'Bush', 102: 'Clinton', 103: 'Clinton', 104: 'Clinton', 105: 'Clinton', 106: 'Clinton', 107: 'Clinton', 108: 'Clinton', 109: 'Clinton', 110: 'Clinton', 111: 'Clinton', 112: 'Clinton', 113: 'Clinton', 114: 'Clinton', 115: 'Clinton', 116: 'Clinton', 117: 'Clinton', 118: 'Clinton', 119: 'Clinton', 120: 'Clinton', 121: 'Clinton', 122: 'Clinton', 123: 'Clinton', 124: 'Clinton', 125: 'Clinton', 126: 'Clinton', 127: 'Clinton', 128: 'Clinton', 129: 'Clinton', 130: 'Clinton', 131: 'Clinton', 132: 'Clinton', 133: 'Clinton', 134: 'Clinton', 135: 'Bush', 136: 'Bush', 137: 'Bush', 138: 'Bush', 139: 'Bush', 140: 'Bush', 141: 'Bush', 142: 'Bush', 143: 'Bush', 144: 'Bush', 145: 'Bush', 146: 'Bush', 147: 'Bush', 148: 'Bush', 149: 'Bush', 150: 'Bush', 151: 'Bush', 152: 'Bush', 153: 'Bush', 154: 'Bush', 155: 'Bush', 156: 'Bush', 157: 'Bush', 158: 'Bush', 159: 'Bush', 160: 'Bush', 161: 'Bush', 162: 'Bush', 163: 'Bush', 164: 'Bush', 165: 'Bush', 166: 'Bush', 167: 'Bush', 168: 'Obama', 169: 'Obama', 170: 'Obama', 171: 'Obama', 172: 'Obama', 173: 'Obama', 174: 'Obama', 175: 'Obama', 176: 'Obama', 177: 'Obama', 178: 'Obama', 179: 'Obama', 180: 'Obama', 181: 'Obama', 182: 'Obama', 183: 'Obama', 184: 'Obama', 185: 'Obama', 186: 'Obama', 187: 'Obama', 188: 'Obama', 189: 'Obama', 190: 'Obama', 191: 'Obama', 192: 'Obama', 193: 'Obama', 194: 'Obama', 195: 'Obama', 196: 'Obama', 197: 'Obama', 198: 'Obama', 199: 'Obama', 200: 'Obama', 201: 'Trump', 202: 'Trump', 203: 'Trump', 204: 'Trump', 205: 'Trump', 206: 'Trump', 207: 'Trump', 208: 'Trump', 209: 'Trump', 210: 'Trump', 211: 'Trump', 212: 'Trump', 213: 'Trump', 214: 'Trump', 215: 'Trump', 216: 'Trump', 217: 'Trump', 218: 'Biden', 219: 'Biden', 220: 'Biden', 221: 'Biden', 222: 'Biden', 223: 'Biden', 224: 'Biden', 225: 'Biden', 226: 'Biden', 227: 'Biden', 228: 'Biden', 229: 'Biden', 230: 'Biden'}} df = pd.DataFrame.from_dict(data) # set up plot f, ax = plt.subplots(figsize=(12, 8)) df.plot(ax=ax, x="Date", y="Unemployment Rate",color="darkgreen", zorder=2, legend=False)# legend=False is ignored y1, y2 = ax.get_ylim() ax.fill_between(df["Date"], y1=y1, y2=y2, where=df["Republican"], color="#E81B23", alpha=0.5, zorder=1, label="Rebublican") ax.fill_between(df["Date"], y1=y1, y2=y2, where=df["Democrat"], color="#00AEF3", alpha=0.5, zorder=1, label="Democrat") # set labels on every 4th years at selected dates | attempt 1 ax.xaxis.set_major_locator(dates.YearLocator(base=4)) # set labels on every 4th years at selected dates | attempt 2 #lab = list(range(1969,2026,4)) #ax.xaxis.set_major_formatter(ticker.FixedFormatter(lab)) ax.figure.autofmt_xdate(rotation=0, ha="center") ax.set_xlabel('') ax.set_ylabel('Unemployment Rate (%)') ax.set_title("U.S. 
Unemployment Rate") # set legend ax.legend()# I do want to keep the filled area legends # position the legend ax.legend(bbox_to_anchor=(0.8, 1.08), loc="center") # set grid ax.grid(True) f.tight_layout()# not as tight as I would like plt.show() P.S. Unrelated things I'd like to do: remove the white space outside of the blue/red filled areas; remove the "Unemployment Rate" from the legend (while keeping Republican/Democrat). | You can directly set the x-ticks you want with a list of datetimes: lab = [1969, 1973, 1977, 1981, 1985, 1989, 1993, 1997, 2001, 2005, 2009, 2013, 2017, 2021, 2025] lab_dates = [datetime.date(year, 1, 1) for year in lab] ax.xaxis.set_ticks(lab_dates) that overrides the formatting so that the labels show up like "1973-1-1", but we can reinstate the simple year formatting with ax.xaxis.set_major_formatter(dates.DateFormatter('%Y')) The white space around the blue/red filled areas is the margins. I found that if I set that to zero, the "1969" tick didn't show up, but it's fine if I set it to something very small: ax.margins(0.001) | 2 | 1 |
78,641,150 | 2024-6-19 | https://stackoverflow.com/questions/78641150/a-module-that-was-compiled-using-numpy-1-x-cannot-be-run-in-numpy-2-0-0 | I installed numpy 2.0.0 pip install numpy==2.0.0 import numpy as np np.__version__ #2.0.0 then I installed: pip install opencv-python Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (4.8.0.76) Requirement already satisfied: numpy>=1.21.2 in /usr/local/lib/python3.10/dist-packages (from opencv-python) (2.0.0) Then I did: import cv2 I am getting this error: A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.0 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2. Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module> ColabKernelApp.launch_instance() File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance app.start() File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start self.io_loop.start() File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever self._run_once() File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once handle._run() File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda> lambda f: self._run_callback(functools.partial(callback, future)) File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback ret = callback() File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner self.ctx_run(self.run) File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run yielded = self.gen.send(value) File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one yield gen.maybe_future(dispatch(*args)) File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request self.do_execute( File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper yielded = ctx_run(next, result) File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File 
"/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell result = self._run_cell( File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell return runner(coro) File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner coro.send(None) File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes if (await self.run_code(code, result, async_=asy)): File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-4-c8ec22b3e787>", line 1, in <cell line: 1> import cv2 File "/usr/local/lib/python3.10/dist-packages/google/colab/_import_hooks/_cv2.py", line 78, in load_module cv_module = imp.load_module(name, *module_info) File "/usr/lib/python3.10/imp.py", line 245, in load_module return load_package(name, filename) File "/usr/lib/python3.10/imp.py", line 217, in load_package return _load(spec) File "/usr/local/lib/python3.10/dist-packages/cv2/__init__.py", line 181, in <module> bootstrap() File "/usr/local/lib/python3.10/dist-packages/cv2/__init__.py", line 153, in bootstrap native_module = importlib.import_module("cv2") File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/usr/local/lib/python3.10/dist-packages/google/colab/_import_hooks/_cv2.py", line 78, in load_module cv_module = imp.load_module(name, *module_info) File "/usr/lib/python3.10/imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "/usr/lib/python3.10/imp.py", line 343, in load_dynamic return _load(spec) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) AttributeError: _ARRAY_API not found --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-4-c8ec22b3e787> in <cell line: 1>() ----> 1 import cv2 8 frames /usr/lib/python3.10/imp.py in load_dynamic(name, path, file) 341 spec = importlib.machinery.ModuleSpec( 342 name=name, loader=loader, origin=path) --> 343 return _load(spec) 344 345 else: ImportError: numpy.core.multiarray failed to import --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. | There are two solutions for this error: 1. downgrade your numpy to 1.26.4 pip install numpy==1.26.4 or pip install "numpy<2.0" Make sure to restart your kernel after downgrading numpy Another option is: 2. install the latest version of the module which is failing* I had an old version of opencv-python 4.8.0.76 I was able to get this working by installing the latest version of opencv-python by pip install opencv-python==4.10.0.84 *Some modules may still not work with numpy 2.0 'We expect that some modules will need time to support NumPy 2' | 19 | 46 |
78,658,024 | 2024-6-23 | https://stackoverflow.com/questions/78658024/add-the-row-counts-as-a-list-to-column-using-groupby | I am working on an application that needs to provide the count of certain entries in a dataframe. Am I missing something that is not rendering the required outcome? Input: | Release | Mapping | Coding | |-----------|---------|--------| | release_a | A1 | C2 | | release_c | A1 | C2 | | release_a | A1 | C2 | | release_a | A1 | C1 | | release_b | B | C1 | | release_c | B | C2 | | release_c | B | C3 | | release_a | C | C1 | | release_c | A1 | C1 | | release_c | A1 | C3 | | release_a | C | C1 | Outcome expected: | Release | Mapping | |-----------|--------------| | release_a | A1 - 3, C-2 | | release_b | B-1 | | release_c | A1 -3, B - 2 | Code used: df.groupby(['Release', 'Mapping'])['Coding'].agg(count='count') What I am getting: Maybe I don't have a thorough understanding of how use the agg method. If there is any better alternative, please suggest it. | Here's an approach with collections.Counter: from collections import Counter out = (df.groupby('Release')['Mapping'] .agg(lambda x: ", ".join(f"{k} - {v}" for k, v in Counter(x).items())) .reset_index() ) Output: Release Mapping 0 release_a A1 - 3, C - 2 1 release_b B - 1 2 release_c A1 - 3, B - 2 Or via Series.value_counts, but that seems to be slower: out2 = (df.groupby('Release')['Mapping'] .agg(lambda x: ", ".join(f"{k} - {v}" for k, v in x.value_counts() .items())) .reset_index() ) # out.equals(out2) # True Already on a small set like this, both will outperform using df.iterrows (and a double groupby) as suggested in the answer by @VinodBaste. Avoid it where you can, and in general you can (see here). Timings: # Counter 747 Β΅s Β± 25.4 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) # value_counts 1.27 ms Β± 40.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) # iterrows 2.45 ms Β± 48.8 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) Data used import pandas as pd data = { 'Release': ['release_a', 'release_c', 'release_a', 'release_a', 'release_b', 'release_c', 'release_c', 'release_a', 'release_c', 'release_c', 'release_a'], 'Mapping': ['A1', 'A1', 'A1', 'A1', 'B', 'B', 'B', 'C', 'A1', 'A1', 'C'], 'Coding': ['C2', 'C2', 'C2', 'C1', 'C1', 'C2', 'C3', 'C1', 'C1', 'C3', 'C1'] } df = pd.DataFrame(data) | 2 | 4 |
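For reference, the raw counts that get formatted into those strings are just the sizes of the (Release, Mapping) groups, which is a quick way to sanity-check the output above:

# plain per-pair counts, before any string formatting
print(df.groupby(['Release', 'Mapping']).size())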
78,657,980 | 2024-6-23 | https://stackoverflow.com/questions/78657980/save-data-to-a-new-tab-in-the-xlsx-file | My program generates a data_list using a while loop (the code for generating the data_list is not important, and I have attached the main code below). At the end of the loop, after the data_list is generated, I want to write this data_list to the test.xlsx file. However, for each new data_list to be saved in a new tab (that is, if I had 10 iterations in the loop, there would be 10 different tabs in the test.xlsx file). At the moment, my code works in such a way that each new tab overwrites the previous one, that is, at the end of the loop, there is one tab in the test.xlsx file. Tell me how to do it in such a way that each new generated data_list is written in a new tab. start=0 finish=1 datas = response.json() while finish < 11: data = datas[start:finish] data_list = [] # some code for generating the data_list df = pd.DataFrame(data_list) writer = pd.ExcelWriter('test.xlsx') df.to_excel(writer, sheet_name=f'{finish}', index=False) writer._save() finish +=1 start +=1 | Instead of saving each sheet use with to create the ExcelWriter with pd.ExcelWriter('test.xlsx') as writer: while finish < 11: data = datas[start:finish] data_list = [] df = pd.DataFrame(data_list) df.to_excel(writer, sheet_name=f'{finish}', index=False) finish += 1 start += 1 | 4 | 1 |
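As a side note (not part of the original question): if test.xlsx already exists from an earlier run and you want to add tabs to it instead of recreating the file, pandas supports append mode with the openpyxl engine. A minimal sketch, with a hypothetical sheet name:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})  # placeholder for one generated data_list
with pd.ExcelWriter('test.xlsx', mode='a', engine='openpyxl',
                    if_sheet_exists='replace') as writer:
    df.to_excel(writer, sheet_name='extra_tab', index=False)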
78,657,336 | 2024-6-22 | https://stackoverflow.com/questions/78657336/convex-function-in-cvxpy-does-not-appear-to-be-producing-the-optimal-solution-fo | I am working on optimizing a battery system (1 MWh) that, in conjunction with solar power (200 kW nameplate capacity), aims to reduce electricity costs for a commercial building. For those unfamiliar with commercial electricity pricing, here's a brief overview: charges typically include energy charges (based on total energy consumption over a month) and demand charges (based on the peak energy usage within a 15-minute interval, referred to as peak demand). These rates vary throughout the day based on Time of Use (TOU). The objective is to minimize monthly energy consumption to lower energy charges, while also ensuring enough energy is stored in the battery to mitigate sudden spikes in consumption to reduce demand charges. The battery should achieve this by charging when solar generation exceeds building consumption and discharging when consumption exceeds solar production. This should be straightforward with a basic load-following algorithm when sufficient solar and battery storage are available. I tested this approach with data that successfully optimized battery operations (shown in Figure 1). However, using convex optimization resulted in significantly poorer performance (Figure 2). The optimized solution from the convex solver increased energy consumption and worsened demand charges compared to not using a battery at all. Despite optimizing for TOU rates, the solver's output falls short of an ideal solution. I have thoroughly reviewed my code, objectives, and constraints, and they appear correct to me. My hypothesis is that the solver's algorithm might prioritize sending excess power to the grid (resulting in positive peaks), potentially in an attempt to offset negative peaks. Maybe that is why there is a random peak on the last data point. Ideally, I aim to minimize both energy and demand charges, except when it's economical to store excess power for anticipated high-demand periods. Any insights or suggestions on refining this approach would be greatly appreciated. Thank you for your assistance. Figure 1: Near Ideal Battery Operation from the Load Following Algorithm Figure 2: Battery Operation from the Convex Algorithm
Convex Optimization CVXPy Code: # Import libraries needed import numpy as np import cvxpy as cp import matplotlib.pyplot as plt # One day of 15-minute load data load = [ 36, 42, 40, 42, 40, 44, 42, 42, 40, 32, 32, 32, 32, 30, 34, 30, 32, 30, 32, 32, 32, 32, 32, 32, 30, 32, 32, 34, 54, 62, 66, 66, 76, 76, 80, 78, 80, 80, 82, 78, 46, 104, 78, 76, 74, 78, 82, 88, 96, 84, 94, 92, 92, 92, 92, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 86, 86, 82, 66, 72, 56, 56, 54, 48, 48, 42, 50, 42, 46, 46, 46, 42, 42, 42, 44, 44, 36, 34, 32, 34, 32, 34, 32, 32] # One day of 15-minute solar data solar = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 6, 14, 26, 46, 66, 86, 104, 120, 138, 154, 168, 180, 190, 166, 152, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 190, 178, 164, 148, 132, 114, 96, 76, 58, 40, 22, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] # Define alpha matrix which are the TOU energy charges for one day lg = [31, 16, 25, 20, 4] # Length of each TOU period in 15 minute intervals pk = ['off', 'mid', 'on', 'mid', 'off'] # Classifcation of each TOU period alpha = np.array([]) for i in range(len(lg)): if pk[i] == 'on': mult = 0.1079 elif pk[i] == 'mid': mult = 0.0874 elif pk[i] == 'off': mult = 0.0755 alpha = np.append(alpha, (mult * np.ones(lg[i]))) # Define beta matricies which are the TOU demand charges for one day val = [[0.1709, 0, 0], [0, 0.0874, 0], [0, 0, 0.0755]] beta = {} for i in range(len(val)): beta_i = np.array([]) for j in range(len(lg)): if pk[j] == 'on': mult = val[0][i] elif pk[j] == 'mid': mult = val[1][i] elif pk[j] == 'off': mult = val[2][i] beta_i = np.append(beta_i, (mult * np.ones(lg[j]))) beta[i] = beta_i beta_ON = np.zeros((96, 96)) np.fill_diagonal(beta_ON, beta[0]) beta_MID = np.zeros((96, 96)) np.fill_diagonal(beta_MID, beta[1]) beta_OFF = np.zeros((96, 96)) np.fill_diagonal(beta_OFF, beta[2]) # Declare Parameters eta_plus=0.96 # charging efficiency eta_minus=0.96 # discharging efficiency Emax=900 # SOC upper limit Emin=200 # SOC lower limit E_init=500 # initial state of charge P_B_plus_max=200 # charging power limit P_B_minus_max=200 # discharging power limit opt_load=load #declaring optimal load n=96 #declaring number of timestpes for each optimization del_t=1/4 #time delta d = 1 # int(len(load) / n ) # number of days # Declare the arrays for the data outputs pg = np.array([]) psl = np.array([]) eb = np.array([]) pbp = np.array([]) pbn = np.array([]) for i in range(d): # Declare constraints List cons = [] # Pull solar and load data for nth day P_S = solar[int(n*i) : int(n*i + n)] P_L = load[int(n*i) : int(n*i + n)] # Declare variables P_G = cp.Variable(n) # Power drawn from the grid at t E_B = cp.Variable(n) # Energy in the Battery P_B_plus = cp.Variable(n) # Battery charging power at t P_B_minus = cp.Variable(n) # Battery discharging power at t P_SL = cp.Variable(n) # Solar power fed to load at t obj = cp.Minimize(cp.sum(cp.matmul(alpha, P_G) * del_t) + cp.max(cp.matmul(beta_OFF, P_G)) + cp.max(cp.matmul(beta_MID, P_G)) + cp.max(cp.matmul(beta_ON, P_G))) for t in range(n): # First iteration of constraints has an inital amount of energy for the battery. 
if t == 0: cons_temp = [ E_B[t] == E_init, E_B[t] >= Emin, E_B[t] <= Emax, P_B_plus[t] >= 0, P_B_plus[t] <= P_B_plus_max, P_B_minus[t] >= 0, P_B_minus[t] <= P_B_minus_max, P_SL[t] + P_B_plus[t]/eta_plus == P_S[t], P_SL[t] + P_G[t] + P_B_minus[t]*eta_minus == P_L[t], P_SL[t] >= 0 ] # Subsequent iterations use have the amount of energy from the battery calcuated from the previous constraint else: cons_temp = [ E_B[t] == E_B[t - 1] + del_t*(P_B_plus[t - 1] - P_B_minus[t - 1]), E_B[t] >= Emin, E_B[t] <= Emax, P_B_plus[t] >= 0, P_B_plus[t] <= P_B_plus_max, P_B_minus[t] >= 0, P_B_minus[t] <= P_B_minus_max, P_SL[t] + P_B_plus[t]/eta_plus == P_S[t], P_SL[t] + P_G[t] + P_B_minus[t]*eta_minus == P_L[t], P_SL[t] >= 0 ] cons += cons_temp # Solve CVX Problem prob = cp.Problem(obj, cons) prob.solve(solver=cp.CBC, verbose = True, qcp = True) # Store solution pg = np.append(pg, P_G.value) psl = np.append(psl, P_SL.value) eb = np.append(eb, E_B.value) pbp = np.append(pbp, P_B_plus.value) pbn = np.append(pbn, P_B_minus.value) # Update energy stored in battery for next iteration E_init = E_B[n - 1] # Plot Output time = np.arange(0, 24, 0.25) # 24 hours, 15-minute intervals plt.figure(figsize=(10, 6)) plt.plot(time, solar, label='Solar') plt.plot(time, [i * -1 for i in load], label='Load before Optimization') plt.plot(time, [i * -1 for i in pg], label='Load after Optimization') plt.plot(time, pbn - pbp, label='Battery Operation') # Adding labels and title plt.xlabel('Time') plt.ylabel('Demand (kW)') plt.title('Battery Optimization Output') # Adding legend plt.legend() # Display the plot plt.grid(True) plt.show() Convex Optimization CVXPy Output: =============================================================================== CVXPY v1.3.2 =============================================================================== (CVXPY) Jun 22 03:24:36 PM: Your problem has 480 variables, 960 constraints, and 0 parameters. (CVXPY) Jun 22 03:24:36 PM: It is compliant with the following grammars: DCP, DQCP (CVXPY) Jun 22 03:24:36 PM: (If you need to solve this problem multiple times, but with different data, consider using parameters.) (CVXPY) Jun 22 03:24:36 PM: CVXPY will first compile your problem; then, it will invoke a numerical solver to obtain a solution. ------------------------------------------------------------------------------- Compilation ------------------------------------------------------------------------------- (CVXPY) Jun 22 03:24:36 PM: Compiling problem (target solver=CBC). (CVXPY) Jun 22 03:24:36 PM: Reduction chain: Dcp2Cone -> CvxAttr2Constr -> ConeMatrixStuffing -> CBC (CVXPY) Jun 22 03:24:36 PM: Applying reduction Dcp2Cone (CVXPY) Jun 22 03:24:36 PM: Applying reduction CvxAttr2Constr (CVXPY) Jun 22 03:24:36 PM: Applying reduction ConeMatrixStuffing (CVXPY) Jun 22 03:24:36 PM: Applying reduction CBC (CVXPY) Jun 22 03:24:37 PM: Finished problem compilation (took 8.116e-01 seconds). ------------------------------------------------------------------------------- Numerical solver ------------------------------------------------------------------------------- (CVXPY) Jun 22 03:24:37 PM: Invoking solver CBC to obtain a solution. 
------------------------------------------------------------------------------- Summary ------------------------------------------------------------------------------- (CVXPY) Jun 22 03:24:37 PM: Problem status: optimal (CVXPY) Jun 22 03:24:37 PM: Optimal value: -4.894e+01 (CVXPY) Jun 22 03:24:37 PM: Compilation took 8.116e-01 seconds (CVXPY) Jun 22 03:24:37 PM: Solver (including time spent in interface) took 5.628e-03 seconds Load Following Algorithm Code: # Import libraries needed import matplotlib.pyplot as plt # One day of 15-minute load data load = [ 36, 42, 40, 42, 40, 44, 42, 42, 40, 32, 32, 32, 32, 30, 34, 30, 32, 30, 32, 32, 32, 32, 32, 32, 30, 32, 32, 34, 54, 62, 66, 66, 76, 76, 80, 78, 80, 80, 82, 78, 46, 104, 78, 76, 74, 78, 82, 88, 96, 84, 94, 92, 92, 92, 92, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 86, 86, 82, 66, 72, 56, 56, 54, 48, 48, 42, 50, 42, 46, 46, 46, 42, 42, 42, 44, 44, 36, 34, 32, 34, 32, 34, 32, 32] # One day of 15-minute solar data solar = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 6, 14, 26, 46, 66, 86, 104, 120, 138, 154, 168, 180, 190, 166, 152, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 190, 178, 164, 148, 132, 114, 96, 76, 58, 40, 22, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] battery = 500 output = [] # soc = [] # State of Charge of Battery net_load = [] # "Optimized Load" for i in range(96): # With non fully charged battery and excess solar: Pull power from the solar panels to charge the batteries if (battery < 900) and ((solar[i] - load[i]) >= 0): # Battery can only charge up to 100 kW if (solar[i] - load[i]) > 200: output.append(-200) else: output.append(load[i] - solar[i]) # With non depleted charged battery and excessive load: Discharge the batteries and send power to the gtid elif (battery > (200 + (load[i]/4))) and ((solar[i] - load[i]) < 0): # Battery can only discharge up to 100 kW if (solar[i] - load[i]) < -200: output.append(200) else: output.append(load[i] - solar[i]) else: output.append(0) battery += (-0.25 * output[i]) soc.append(battery / 1000) net_load.append(solar[i] - load[i] + output[i]) # Plot Output time = np.arange(0, 24, 0.25) # 24 hours, 15-minute intervals plt.figure(figsize=(10, 6)) plt.plot(time, solar, label='Solar') plt.plot(time, [i * -1 for i in load], label='Load before Optimization') plt.plot(time, net_load, label='Load after Optimization') plt.plot(time, output, label='Battery Operation') # Adding labels and title plt.xlabel('Time') plt.ylabel('Demand (kW)') plt.title('Battery Optimization Output') # Adding legend plt.legend() # Display the plot plt.grid(True) plt.show() | A couple caveats: I didn't look at the load following stuff, but you should compute the costs for that and compare with the optimization I think the way you are constructing the cost vector with all of the numpy/diagonals/ones/zeros/etc. is pretty confusing, but that is just an aside. I plotted the costs (alpha) and it looks fine. First thing: You are asking CVXPY solve to use CBC solver in your solve statement. CBC is NOT a nonlinear solver and you have max() functions in your objective, which is nonlinear, unless CVXPY is doing some enormously complex linearization under the hood (unlikely) you are getting a junk result by using that solver, unless you reformulate and get rid of the max() stuff. (Another caveat: I don't know what you're doing with the max stuff in the obj, but it isn't super relevant). 
So try just letting CVXPY choose by omitting that flag (as I show below) After that, the results look ... well ... more credible. Over to you. I plotted the cost (x1000) so it would show, added the battery state, and flipped your battery usage line such that positive == charging (makes more sense to me.) The battery charges when rates are low and dumps what it has at peak rate, so that looks credible. Your code (w/ minor tweaks mentioned): import sys # Import libraries needed import numpy as np import cvxpy as cp import matplotlib.pyplot as plt # One day of 15-minute load data load = [36, 42, 40, 42, 40, 44, 42, 42, 40, 32, 32, 32, 32, 30, 34, 30, 32, 30, 32, 32, 32, 32, 32, 32, 30, 32, 32, 34, 54, 62, 66, 66, 76, 76, 80, 78, 80, 80, 82, 78, 46, 104, 78, 76, 74, 78, 82, 88, 96, 84, 94, 92, 92, 92, 92, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 86, 86, 82, 66, 72, 56, 56, 54, 48, 48, 42, 50, 42, 46, 46, 46, 42, 42, 42, 44, 44, 36, 34, 32, 34, 32, 34, 32, 32] # One day of 15-minute solar data solar = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 6, 14, 26, 46, 66, 86, 104, 120, 138, 154, 168, 180, 190, 166, 152, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200, 190, 178, 164, 148, 132, 114, 96, 76, 58, 40, 22, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] # Define alpha matrix which are the TOU energy charges for one day lg = [31, 16, 25, 20, 4] # Length of each TOU period in 15 minute intervals pk = ['off', 'mid', 'on', 'mid', 'off'] # Classifcation of each TOU period alpha = np.array([]) for i in range(len(lg)): if pk[i] == 'on': mult = 0.1079 elif pk[i] == 'mid': mult = 0.0874 elif pk[i] == 'off': mult = 0.0755 alpha = np.append(alpha, (mult * np.ones(lg[i]))) # Define beta matricies which are the TOU demand charges for one day val = [[0.1709, 0, 0], [0, 0.0874, 0], [0, 0, 0.0755]] beta = {} for i in range(len(val)): beta_i = np.array([]) for j in range(len(lg)): if pk[j] == 'on': mult = val[0][i] elif pk[j] == 'mid': mult = val[1][i] elif pk[j] == 'off': mult = val[2][i] beta_i = np.append(beta_i, (mult * np.ones(lg[j]))) beta[i] = beta_i beta_ON = np.zeros((96, 96)) np.fill_diagonal(beta_ON, beta[0]) beta_MID = np.zeros((96, 96)) np.fill_diagonal(beta_MID, beta[1]) beta_OFF = np.zeros((96, 96)) np.fill_diagonal(beta_OFF, beta[2]) # Declare Parameters eta_plus = 0.96 # charging efficiency eta_minus = 0.96 # discharging efficiency Emax = 900 # SOC upper limit Emin = 200 # SOC lower limit E_init = 500 # initial state of charge P_B_plus_max = 200 # charging power limit P_B_minus_max = 200 # discharging power limit opt_load = load # declaring optimal load n = 96 # declaring number of timestpes for each optimization del_t = 1 / 4 # time delta d = 1 # int(len(load) / n ) # number of days # Declare the arrays for the data outputs pg = np.array([]) psl = np.array([]) eb = np.array([]) pbp = np.array([]) pbn = np.array([]) print(alpha) for i in range(d): # Declare constraints List cons = [] # Pull solar and load data for nth day P_S = solar[int(n * i): int(n * i + n)] P_L = load[int(n * i): int(n * i + n)] # Declare variables P_G = cp.Variable(n) # Power drawn from the grid at t E_B = cp.Variable(n) # Energy in the Battery P_B_plus = cp.Variable(n) # Battery charging power at t P_B_minus = cp.Variable(n) # Battery discharging power at t P_SL = cp.Variable(n) # Solar power fed to load at t obj = cp.Minimize( cp.sum(cp.matmul(alpha, P_G) * del_t) + cp.max(cp.matmul(beta_OFF, P_G)) + 
cp.max( cp.matmul(beta_MID, P_G)) + cp.max(cp.matmul(beta_ON, P_G))) print(obj) for t in range(n): # First iteration of constraints has an inital amount of energy for the battery. if t == 0: cons_temp = [ E_B[t] == E_init, E_B[t] >= Emin, E_B[t] <= Emax, P_B_plus[t] >= 0, P_B_plus[t] <= P_B_plus_max, P_B_minus[t] >= 0, P_B_minus[t] <= P_B_minus_max, P_SL[t] + P_B_plus[t] / eta_plus == P_S[t], P_SL[t] + P_G[t] + P_B_minus[t] * eta_minus == P_L[t], P_SL[t] >= 0 ] # Subsequent iterations use have the amount of energy from the battery calcuated from the previous constraint else: cons_temp = [ E_B[t] == E_B[t - 1] + del_t * (P_B_plus[t - 1] - P_B_minus[t - 1]), E_B[t] >= Emin, E_B[t] <= Emax, P_B_plus[t] >= 0, P_B_plus[t] <= P_B_plus_max, P_B_minus[t] >= 0, P_B_minus[t] <= P_B_minus_max, P_SL[t] + P_B_plus[t] / eta_plus == P_S[t], P_SL[t] + P_G[t] + P_B_minus[t] * eta_minus == P_L[t], P_SL[t] >= 0 ] cons += cons_temp # Solve CVX Problem prob = cp.Problem(obj, cons) prob.solve(verbose=True, qcp=True) # Store solution pg = np.append(pg, P_G.value) psl = np.append(psl, P_SL.value) eb = np.append(eb, E_B.value) pbp = np.append(pbp, P_B_plus.value) pbn = np.append(pbn, P_B_minus.value) # Update energy stored in battery for next iteration E_init = E_B[n - 1] # Plot Output time = np.arange(0, 24, 0.25) # 24 hours, 15-minute intervals plt.figure(figsize=(10, 6)) plt.plot(time, solar, label='Solar') plt.plot(time, [i * -1 for i in load], label='Load before Optimization') plt.plot(time, [i * -1 for i in pg], label='Load after Optimization') plt.plot(time, pbp - pbn, label='Battery Operation') plt.plot(time, [t*10000 for t in alpha], label='cost[x10000]') plt.plot(time, eb, label='Battery state') # Adding labels and title plt.xlabel('Time') plt.ylabel('Demand (kW)') plt.title('Battery Optimization Output') # Adding legend plt.legend() # Display the plot plt.grid(True) plt.show() | 2 | 1 |
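Following up on the answer's first caveat (compute the cost of the load-following result and compare it with the optimizer), here is a rough sketch that evaluates the same objective numerically for both grid profiles. It assumes the load-following script's load, solar and output lists and the optimizer's alpha, beta_* matrices, del_t and pg arrays are all available in one session:

import numpy as np

def bill(grid_kw):
    # same objective as the optimizer: energy charge plus one demand charge per TOU period
    grid_kw = np.asarray(grid_kw, dtype=float)
    energy = np.sum(alpha * grid_kw) * del_t
    demand = (np.max(beta_OFF @ grid_kw)
              + np.max(beta_MID @ grid_kw)
              + np.max(beta_ON @ grid_kw))
    return energy + demand

# grid draw under the load-following heuristic: load - solar - battery discharge
heuristic_grid = [l - s - o for l, s, o in zip(load, solar, output)]

print('optimizer bill :', bill(pg))
print('heuristic bill :', bill(heuristic_grid))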
78,657,011 | 2024-6-22 | https://stackoverflow.com/questions/78657011/how-to-work-with-a-type-from-a-rust-made-library-in-python | I am making a Rust based library for Python which implements different cryptographic protocols, and I'm having trouble working with EphemeralSecret, as I keep getting the error log: "no method named to_bytes found for struct EphemeralSecret in the current scope method not found in EphemeralSecret" This is being used to create a Rust lib to use it in Python so I can implement different cryptographic protocols and I need to work with the created keys in my Python side of things. I understand I maybe shouldnΒ΄t need to have a privateKey serialized for security reasons, but this is my case, so how do I "pass" it? Or should I maybe convert it in another way to work with it and be able to pass it around? I also have the trouble when converting it "back" from python to rust as there is no conversion back to it from bytes. The trouble is in the following section: // Elliptic Curve Diffie-Hellman #[pyfunction] fn generate_ecdh_key() -> PyResult<(Vec<u8>, Vec<u8>)> { let private_key = EphemeralSecret::random_from_rng(OsRng); let public_key = PublicKey::from(&private_key); Ok((private_key.to_bytes().to_vec(), public_key.as_bytes().to_vec())) } #[pyfunction] fn derive_ecdh_shared_key(private_key_bytes: Vec<u8>, server_public_key_bytes: Vec<u8>) -> PyResult<Vec<u8>> { let private_key = EphemeralSecret::from(private_key_bytes.as_slice().try_into().unwrap()); let server_public_key = PublicKey::from(server_public_key_bytes.as_slice().try_into().unwrap()); let shared_secret = private_key.diffie_hellman(&server_public_key); Ok(shared_secret.as_bytes().to_vec()) } It is part of the complete code: use pyo3::prelude::*; use pyo3::wrap_pyfunction; use x25519_dalek::{EphemeralSecret, PublicKey}; use rand::Rng; use rand::rngs::OsRng; use num_bigint::{BigUint, RandBigInt}; use rsa::{RsaPrivateKey, RsaPublicKey, Pkcs1v15Encrypt}; use rsa::pkcs1::{DecodeRsaPrivateKey, DecodeRsaPublicKey, EncodeRsaPrivateKey, EncodeRsaPublicKey}; use rsa::pkcs8::LineEnding; // Diffie-Hellman #[pyfunction] fn generate_dh_key(p: &str, g: &str) -> PyResult<(String, String)> { let p = BigUint::parse_bytes(p.as_bytes(), 10).ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid p"))?; let g = BigUint::parse_bytes(g.as_bytes(), 10).ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid g"))?; let mut rng = OsRng; let private_key = rng.gen_biguint_below(&p); let public_key = g.modpow(&private_key, &p); Ok((private_key.to_str_radix(10), public_key.to_str_radix(10))) } #[pyfunction] fn derive_dh_shared_key(private_key: &str, server_public_key: &str, p: &str) -> PyResult<String> { let p = BigUint::parse_bytes(p.as_bytes(), 10).ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid p"))?; let private_key = BigUint::parse_bytes(private_key.as_bytes(), 10).ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid private_key"))?; let server_public_key = BigUint::parse_bytes(server_public_key.as_bytes(), 10).ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid server_public_key"))?; let shared_secret = server_public_key.modpow(&private_key, &p); Ok(shared_secret.to_str_radix(10)) } // Elliptic Curve Diffie-Hellman #[pyfunction] fn generate_ecdh_key() -> PyResult<(Vec<u8>, Vec<u8>)> { let private_key = EphemeralSecret::random_from_rng(OsRng); let public_key = PublicKey::from(&private_key); Ok((private_key.to_bytes().to_vec(), public_key.as_bytes().to_vec())) } #[pyfunction] fn 
derive_ecdh_shared_key(private_key_bytes: Vec<u8>, server_public_key_bytes: Vec<u8>) -> PyResult<Vec<u8>> { let private_key = EphemeralSecret::from(private_key_bytes.as_slice().try_into().unwrap()); let server_public_key = PublicKey::from(server_public_key_bytes.as_slice().try_into().unwrap()); let shared_secret = private_key.diffie_hellman(&server_public_key); Ok(shared_secret.as_bytes().to_vec()) } // RSA Functions #[pyfunction] fn generate_rsa_key() -> (String, String) { let mut rng = OsRng; let bits = 2048; let private_key = RsaPrivateKey::new(&mut rng, bits).unwrap(); let public_key = RsaPublicKey::from(&private_key); let private_pem = private_key.to_pkcs1_pem(LineEnding::LF).unwrap(); let public_pem = public_key.to_pkcs1_pem(LineEnding::LF).unwrap(); (private_pem.to_string(), public_pem) } #[pyfunction] fn rsa_encrypt(public_key_pem: &str, message: &str) -> Vec<u8> { let public_key = RsaPublicKey::from_pkcs1_pem(public_key_pem).unwrap(); let mut rng = OsRng; public_key.encrypt(&mut rng, Pkcs1v15Encrypt, message.as_bytes()).unwrap() } #[pyfunction] fn rsa_decrypt(private_key_pem: &str, encrypted_data: Vec<u8>) -> String { let private_key = RsaPrivateKey::from_pkcs1_pem(private_key_pem).unwrap(); let decrypted_data = private_key.decrypt(Pkcs1v15Encrypt, &encrypted_data).unwrap(); String::from_utf8(decrypted_data).unwrap() } // Swoosh NIKE key generation and exchange #[pyfunction] fn swoosh_generate_keys(parameters: (usize, usize, usize)) -> PyResult<(Vec<i8>, Vec<i8>)> { let (q, _d, n) = parameters; let mut rng = rand::thread_rng(); let a: Vec<i8> = (0..n * n).map(|_| rng.gen_range(0..q) as i8).collect(); let s: Vec<i8> = (0..n).map(|_| rng.gen_range(-1..=1)).collect(); let e: Vec<i8> = (0..n).map(|_| rng.gen_range(-1..=1)).collect(); let public_key: Vec<i8> = a.chunks(n).zip(&s).map(|(row, &s)| (row.iter().sum::<i8>() + e[s as usize]) % q as i8).collect(); Ok((s, public_key)) } #[pyfunction] fn swoosh_derive_shared_key(private_key: Vec<i8>, public_key: Vec<i8>, q: usize) -> PyResult<Vec<i8>> { let shared_key: Vec<i8> = private_key.iter().zip(&public_key).map(|(&s, &p)| (s * p) % q as i8).collect(); Ok(shared_key) } #[pymodule] fn shadowCrypt(m: &Bound<'_, PyModule>) -> PyResult<()> { m.add_function(wrap_pyfunction!(generate_dh_key, m)?)?; m.add_function(wrap_pyfunction!(derive_dh_shared_key, m)?)?; m.add_function(wrap_pyfunction!(generate_ecdh_key, m)?)?; m.add_function(wrap_pyfunction!(derive_ecdh_shared_key, m)?)?; m.add_function(wrap_pyfunction!(generate_rsa_key, m)?)?; m.add_function(wrap_pyfunction!(rsa_encrypt, m)?)?; m.add_function(wrap_pyfunction!(rsa_decrypt, m)?)?; m.add_function(wrap_pyfunction!(swoosh_generate_keys, m)?)?; m.add_function(wrap_pyfunction!(swoosh_derive_shared_key, m)?)?; Ok(()) } | EphemeralSecret does not have a to_bytes method as Ephemeral Secrets cannot be serialized to Vec as mentioned in the docs. This type is identical to the StaticSecret type, except that the EphemeralSecret::diffie_hellman method consumes and then wipes the secret key, and there are no serialization methods defined To convert to Vec<u8> you can use StaticSecret which has a StaticSecret::to_bytes method and can be used multiple times as well as serialized. 
So the final code would be something like use num_bigint::{BigUint, RandBigInt}; use pyo3::prelude::*; use pyo3::wrap_pyfunction; use rand::rngs::OsRng; use rand::Rng; use rsa::pkcs1::{ DecodeRsaPrivateKey, DecodeRsaPublicKey, EncodeRsaPrivateKey, EncodeRsaPublicKey, }; use rsa::pkcs8::LineEnding; use rsa::{Pkcs1v15Encrypt, RsaPrivateKey, RsaPublicKey}; use x25519_dalek::{PublicKey, StaticSecret}; // Diffie-Hellman #[pyfunction] fn generate_dh_key(p: &str, g: &str) -> PyResult<(String, String)> { let p = BigUint::parse_bytes(p.as_bytes(), 10) .ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid p"))?; let g = BigUint::parse_bytes(g.as_bytes(), 10) .ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid g"))?; let mut rng = OsRng; let private_key = rng.gen_biguint_below(&p); let public_key = g.modpow(&private_key, &p); Ok((private_key.to_str_radix(10), public_key.to_str_radix(10))) } #[pyfunction] fn derive_dh_shared_key(private_key: &str, server_public_key: &str, p: &str) -> PyResult<String> { let p = BigUint::parse_bytes(p.as_bytes(), 10) .ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid p"))?; let private_key = BigUint::parse_bytes(private_key.as_bytes(), 10) .ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid private_key"))?; let server_public_key = BigUint::parse_bytes(server_public_key.as_bytes(), 10) .ok_or_else(|| pyo3::exceptions::PyValueError::new_err("Invalid server_public_key"))?; let shared_secret = server_public_key.modpow(&private_key, &p); Ok(shared_secret.to_str_radix(10)) } // Elliptic Curve Diffie-Hellman #[pyfunction] fn generate_ecdh_key() -> PyResult<(Vec<u8>, Vec<u8>)> { let private_key = StaticSecret::random_from_rng(OsRng); let public_key = PublicKey::from(&private_key); Ok(( private_key.to_bytes().to_vec(), public_key.as_bytes().to_vec(), )) } #[pyfunction] fn derive_ecdh_shared_key( private_key_bytes: Vec<u8>, server_public_key_bytes: Vec<u8>, ) -> PyResult<Vec<u8>> { let private_key = StaticSecret::from(private_key_bytes.as_slice().try_into().unwrap()); let server_public_key = PublicKey::from(server_public_key_bytes.as_slice().try_into().unwrap()); let shared_secret = private_key.diffie_hellman(&server_public_key); Ok(shared_secret.as_bytes().to_vec()) } // RSA Functions #[pyfunction] fn generate_rsa_key() -> (String, String) { let mut rng = OsRng; let bits = 2048; let private_key = RsaPrivateKey::new(&mut rng, bits).unwrap(); let public_key = RsaPublicKey::from(&private_key); let private_pem = private_key.to_pkcs1_pem(LineEnding::LF).unwrap(); let public_pem = public_key.to_pkcs1_pem(LineEnding::LF).unwrap(); (private_pem.to_string(), public_pem) } #[pyfunction] fn rsa_encrypt(public_key_pem: &str, message: &str) -> Vec<u8> { let public_key = RsaPublicKey::from_pkcs1_pem(public_key_pem).unwrap(); let mut rng = OsRng; public_key .encrypt(&mut rng, Pkcs1v15Encrypt, message.as_bytes()) .unwrap() } #[pyfunction] fn rsa_decrypt(private_key_pem: &str, encrypted_data: Vec<u8>) -> String { let private_key = RsaPrivateKey::from_pkcs1_pem(private_key_pem).unwrap(); let decrypted_data = private_key .decrypt(Pkcs1v15Encrypt, &encrypted_data) .unwrap(); String::from_utf8(decrypted_data).unwrap() } // Swoosh NIKE key generation and exchange #[pyfunction] fn swoosh_generate_keys(parameters: (usize, usize, usize)) -> PyResult<(Vec<i8>, Vec<i8>)> { let (q, _d, n) = parameters; let mut rng = rand::thread_rng(); let a: Vec<i8> = (0..n * n).map(|_| rng.gen_range(0..q) as i8).collect(); let s: Vec<i8> = (0..n).map(|_| 
rng.gen_range(-1..=1)).collect(); let e: Vec<i8> = (0..n).map(|_| rng.gen_range(-1..=1)).collect(); let public_key: Vec<i8> = a .chunks(n) .zip(&s) .map(|(row, &s)| (row.iter().sum::<i8>() + e[s as usize]) % q as i8) .collect(); Ok((s, public_key)) } #[pyfunction] fn swoosh_derive_shared_key( private_key: Vec<i8>, public_key: Vec<i8>, q: usize, ) -> PyResult<Vec<i8>> { let shared_key: Vec<i8> = private_key .iter() .zip(&public_key) .map(|(&s, &p)| (s * p) % q as i8) .collect(); Ok(shared_key) } #[pymodule] fn shadow_crypt(m: &Bound<'_, PyModule>) -> PyResult<()> { m.add_function(wrap_pyfunction!(generate_dh_key, m)?)?; m.add_function(wrap_pyfunction!(derive_dh_shared_key, m)?)?; m.add_function(wrap_pyfunction!(generate_ecdh_key, m)?)?; m.add_function(wrap_pyfunction!(derive_ecdh_shared_key, m)?)?; m.add_function(wrap_pyfunction!(generate_rsa_key, m)?)?; m.add_function(wrap_pyfunction!(rsa_encrypt, m)?)?; m.add_function(wrap_pyfunction!(rsa_decrypt, m)?)?; m.add_function(wrap_pyfunction!(swoosh_generate_keys, m)?)?; m.add_function(wrap_pyfunction!(swoosh_derive_shared_key, m)?)?; Ok(()) } Note: The Cargo.toml file needs to be modified as follows: x25519_dalek = { version = "2.0.1", features = ["static_secrets"] } | 2 | 4 |
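For completeness, a hypothetical Python-side usage sketch of the X25519 functions above. The import name shadow_crypt is an assumption (it must match your crate's lib name), and the key/secret values come back as plain lists of ints because the functions return Vec<u8>:

import shadow_crypt  # assumed module name, see #[pymodule] above

# two parties each generate a keypair
alice_priv, alice_pub = shadow_crypt.generate_ecdh_key()
bob_priv, bob_pub = shadow_crypt.generate_ecdh_key()

# each side combines its own private key with the other's public key
alice_shared = shadow_crypt.derive_ecdh_shared_key(alice_priv, bob_pub)
bob_shared = shadow_crypt.derive_ecdh_shared_key(bob_priv, alice_pub)

assert alice_shared == bob_shared
print(bytes(alice_shared).hex())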
78,652,436 | 2024-6-21 | https://stackoverflow.com/questions/78652436/pandas-extract-sequence-where-prev-value-current-value | Need to extract sequence of negative values where earlier negative value is smaller than current value and next value is smaller than current value import pandas as pd # Create the DataFrame with the given values data = { 'Value': [0.3, 0.2, 0.1, -0.1, -0.2, -0.3, -0.4, -0.35, -0.25, 0.1, -0.15, -0.25, -0.13, -0.1, 1] } df = pd.DataFrame(data) print("Original DataFrame:") print(df) My Code: # Initialize a list to hold the sequences sequences = [] current_sequence = [] # Iterate through the DataFrame to apply the condition for i in range(1, len(df) - 1): prev_value = df.loc[i - 1, 'Value'] curr_value = df.loc[i, 'Value'] next_value = df.loc[i + 1, 'Value'] # Check the condition if curr_value < prev_value and curr_value < next_value: current_sequence.append(curr_value) else: # If the current sequence is not empty and it's a valid sequence, add it to sequences list and reset if current_sequence: sequences.append(current_sequence) current_sequence = [] # Add the last sequence if it's not empty if current_sequence: sequences.append(current_sequence) My Output: Extracted Sequences: [-0.4] [-0.25] Expected Output: [-0.1,-0.2,-0.3,-0.4] [-0.15,-0.25] | You can build masks to identify the negative values and consecutive decreasing values and use groupby to split: # is the value negative? m1 = df['Value'].lt(0) # is the value decreasing? m2 = df['Value'].diff().le(0) m = m1&m2 # aggregate out = df[m].groupby((~m).cumsum())['Value'].agg(list).tolist() Output: [[-0.1, -0.2, -0.3, -0.4], [-0.15, -0.25]] If you just want to filter: out = df[m] Output: Value 3 -0.10 4 -0.20 5 -0.30 6 -0.40 10 -0.15 11 -0.25 Intermediates: Value m1 df['Value'].diff() m2 m1&m2 0 0.30 False NaN False False 1 0.20 False -0.10 True False 2 0.10 False -0.10 True False 3 -0.10 True -0.20 True True 4 -0.20 True -0.10 True True 5 -0.30 True -0.10 True True 6 -0.40 True -0.10 True True 7 -0.35 True 0.05 False False 8 -0.25 True 0.10 False False 9 0.10 False 0.35 False False 10 -0.15 True -0.25 True True 11 -0.25 True -0.10 True True 12 -0.13 True 0.12 False False 13 -0.10 True 0.03 False False 14 1.00 False 1.10 False False | 3 | 1 |
78,646,747 | 2024-6-20 | https://stackoverflow.com/questions/78646747/wrong-shape-at-fully-connected-layer-mat1-and-mat2-shapes-cannot-be-multiplied | I have the following model. It is training well. The shapes of my splits are: X_train (98, 1, 40, 844) X_val (21, 1, 40, 844) X_test (21, 1, 40, 844) However, I am getting the following error at x = F.relu(self.fc1(x)) in forward. When I attempt to interpret the model on the validation set. # Create a DataLoader for the validation set valid_dl = learn.dls.test_dl(X_val, y_val) # Get predictions and interpret them on the validation set interp = ClassificationInterpretation.from_learner(learn, dl=valid_dl) RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x2110 and 67520x128) I have checked dozens of similar questions but I am unable to find a solution. Here is the code. from fastai.vision.all import * import librosa import numpy as np from sklearn.model_selection import train_test_split import torch import torch.nn as nn from torchsummary import summary [...] #labels in y can be [0,1,2,3] # Split the data X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.3, random_state=42) X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42) # Reshape data for CNN input (add channel dimension) X_train = X_train[:, np.newaxis, :, :] X_val = X_val[:, np.newaxis, :, :] X_test = X_test[:, np.newaxis, :, :] #X_train.shape, X_val.shape, X_test.shape #((98, 1, 40, 844), (21, 1, 40, 844), (21, 1, 40, 844)) class DraftCNN(nn.Module): def __init__(self): super(DraftCNN, self).__init__() self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1) self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1) # Calculate flattened size based on input dimensions with torch.no_grad(): dummy_input = torch.zeros(1, 1, 40, 844) # shape of one input sample dummy_output = self.pool(self.conv2(self.pool(F.relu(self.conv1(dummy_input))))) self.flattened_size = dummy_output.view(dummy_output.size(0), -1).size(1) self.fc1 = nn.Linear(self.flattened_size, 128) self.fc2 = nn.Linear(128, 4) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(x.size(0), -1) # Flatten the output of convolutions x = F.relu(self.fc1(x)) x = self.fc2(x) return x # Initialize the model and the Learner model = AudioCNN() learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), metrics=[accuracy, Precision(average='macro'), Recall(average='macro'), F1Score(average='macro')]) # Train the model learn.fit_one_cycle(8) print(summary(model, (1, 40, 844))) # Create a DataLoader for the validation set valid_dl = learn.dls.test_dl(X_val, y_val) # Get predictions and interpret them on the validation set interp = ClassificationInterpretation.from_learner(learn, dl=valid_dl) interp.plot_confusion_matrix() interp.plot_top_losses(5) I tried changing the forward function and the shapes of the layers but I keep getting the same error. Edit. Upon request, I have added more code. | I was able to pass the data to the fastai.Learner to train the model and get results from plot_confusion_matrix. My conclusion that fastai is not designed to work with custom Datasets and DataLoaders and expecting you to use their API for loading the data. I think that in your case it might be worth to switch to TabularDataLoaders and load the data using TabularDataLoaders.from_df. Or alternatively use ImageBlock if you are working with images. 
Basically to give the most optimal solution to the question it is important to know what data are you using. Are you working with images? Are images stored in files? What type of files? Or the input data are simple arrays? Also function plot_top_losses doesn't work well if you have numpy dataset as the input. Function plots worst_k examples, and most probably it works only with data that is loaded using ImageBlocks and etc. Given current inputs there is two options how to fix the code: Assume numpy inputs. Create custom dataloader using fastai API. Assume numpy inputs. Create custom dataloaders using pytorch API. Solution 1: Building custom numpy dataloader for fastai Learner using fastai DataBlock API: from fastai.vision.all import * from fastai.data.all import * def make_dataloaders_from_numpy_data(image, label, loader=False): def pass_index(idx): return idx def get_x(i): val = image[i] return torch.Tensor(val) def get_y(i): # val = [label[i]] # res = torch.Tensor(val).to(torch.int64) return label[i] dblock = DataBlock( blocks=(DataBlock, CategoryBlock), get_items=pass_index, get_x=get_x, get_y=get_y) # pass in a list of index num_images = image.shape[0] source = list(range(num_images)) if not loader: ds = dblock.datasets(source) return ds return dblock.dataloaders(source, batch_size = 1) train_ds = make_dataloaders_from_numpy_data(X_train, y_train) test_ds = make_dataloaders_from_numpy_data(X_test, y_test) train_ld = DataLoader(train_ds, batch_size=64) test_ld = DataLoader(test_ds, batch_size=64) dls = DataLoaders(train_ld, test_ld) dls_val = make_dataloaders_from_numpy_data(X_val, y_val,loader=True) # Initialize the model and the Learner model = DraftCNN() learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), metrics=[accuracy, Precision(average='macro'), Recall(average='macro'), F1Score(average='macro')]) # # # # Train the model learn.fit_one_cycle(1) # Get predictions and interpret them on the validation set interp = ClassificationInterpretation.from_learner(learn, dl=dls_val) interp.plot_confusion_matrix() plt.show() Solution 2 Building custom numpy dataloader for fastai Learner using pytorch API for Dataset and DataLoader from torch.utils.data import Dataset from fastai.data.core import DataLoaders class CustomDataclass(Dataset): def __init__(self, X: np.ndarray, y: np.ndarray): """ Will iterate over the dataset """ self.data = X self.labels = y # Not clear what is self.vocab for # However part of this attribute is used for plotting labels in `ClassificationInterpretation` self.vocab = (None, ['class_0', 'class_1', 'class_2', 'class_3']) def __len__(self): return self.data.shape[0] def __getitem__(self, idx: int): data = self.data[idx,...] 
# labels can't be single values and must be converted to a list labels = [self.labels[idx]] return (torch.Tensor(data), torch.Tensor(labels).to(torch.int64) # labels must be integers ) train_ds = CustomDataclass(X_train, y_train) test_ds = CustomDataclass(X_test, y_test) val_ds = CustomDataclass(X_val, y_val) from torch.utils.data import DataLoader bs = 64 train_loader = DataLoader(train_ds, batch_size = bs) test_loader = DataLoader(test_ds, batch_size = bs) # Val dataset used in interpretation phase where pytorch dataloaders doesn't work from fastai.data.core import DataLoader val_loader = DataLoader(val_ds, batch_size = bs) dls = DataLoaders(train_loader, test_loader) # Initialize the model and the Learner model = DraftCNN() learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), metrics=[accuracy, Precision(average='macro'), Recall(average='macro'), F1Score(average='macro')]) # # # # Train the model learn.fit_one_cycle(4) # Get predictions and interpret them on the validation set interp = ClassificationInterpretation.from_learner(learn, dl=val_loader) interp.plot_confusion_matrix() plt.show() Other errors that are fixed by my code: dataset must return labels in lists: learn.fit_one_cycle(4) ... return torch.stack(batch, 0, out=out) RuntimeError: stack expects each tensor to be equal size, but got [3] at entry 0 and [0] at entry 1 Added vocab attribute to pytorch DataClass: interp = ClassificationInterpretation.from_learner(learn, dl=val_loader) ... File "/Users/ivanpetrov/.pyenv/versions/3.11.6/envs/stack_overflow_env/lib/python3.11/site-packages/fastcore/basics.py", line 507, in __getattr__ if attr is not None: return getattr(attr,k) ^^^^^^^^^^^^^^^ AttributeError: 'CustomDataclass' object has no attribute 'vocab' | 2 | 1 |
78,655,628 | 2024-6-22 | https://stackoverflow.com/questions/78655628/inverse-fourier-transform-x-axis-scaling | I'm implementing a discrete inverse Fourier transform in Python to approximate the inverse Fourier transform of a Gaussian function. The input function is sqrt(pi) * e^(-w^2/4) so the output must be e^(-x^2). While the shape of the resulting function looks correct, the x-axis scaling seems to be off (There might be just some normalization issue). I expect to see a Gaussian function of the form e^(-x^2), but my result is much narrower. This is my implementation: import matplotlib.pyplot as plt import numpy as np from sympy import symbols, exp, pi, lambdify, sqrt # Defining the Fourier transform of a Gaussian function, sqrt(pi) * exp(-omega ** 2 / 4) x, omega = symbols('x omega') f_gaussian_symbolic = exp(-omega ** 2 / 4) * sqrt(pi) f_gaussian_function = lambdify(omega, f_gaussian_symbolic, 'numpy') def fourier_inverse(f, n, omega_max): """ This function computes the inverse Fourier transform of a function f. :param f: The function to be transformed :param n: Number of samples :param omega_max: The max frequency we want to be sampled """ omega_range = np.linspace(-omega_max, omega_max, n) f_values = f(omega_range) inverse_f = np.fft.ifftshift(np.fft.ifft(np.fft.fftshift(f_values))) delta_omega = omega_range[1] - omega_range[0] x_range = np.fft.ifftshift(np.fft.fftfreq(n, d=delta_omega)) inverse_f *= delta_omega * n / (2 * np.pi) return x_range, inverse_f plt.figure(figsize=(10, 5)) x_range, inverse_f = fourier_inverse(f_gaussian_function, 10000, 100) plt.plot(x_range, inverse_f.real) plt.ylim(-2, 2) plt.xlim(-4, 4) plt.show() I expect the plot to be: Buy my output is this: The shape of the function looks correct, but it's much narrower than expected. I suspect there's an issue with how I'm calculating or scaling the x_range in my fourier_inverse function. What am I doing wrong in my implementation, and how can I correct the x-axis scaling to get the expected Gaussian function e^(-x^2)? | It looks like you're using frequency in the x-axis when you expect angular frequency. You should modify your x_range computation like this: x_range = 2 * np.pi * np.fft.ifftshift(np.fft.fftfreq(n, d=delta_omega)) With this change, the resulting plot looks like this: | 3 | 3 |
78,655,732 | 2024-6-22 | https://stackoverflow.com/questions/78655732/how-to-efficiently-compare-2-dataframe-and-produce-a-column-value-based-on-condi | data1 = { 'alias_cd': ['12345', '12345', '12345'], 'country_cd': ['AU', 'AU', 'AU2'], 'pos_name': ['st1', 'Jh', 'Jh'], 'ts_allocated': [100, 100, 100], 'tr_id': ['None', 'None', 'None'], 'ty_name': ['E2E', 'E2E', 'E2E'] } data2 = { 'alias_cd': ['12345', '12345'], 'country_cd': ['AU', 'AU3'], 'pos_name': ['st1', 'st2'], 'ts_allocated': [200, 100], 'tr_id': ['None', 'None'], 'ty_name': ['E2E', 'E2E'] } df1 = pd.DataFrame(data1) df2 = pd.DataFrame(data2) output should be alias_cd country_cd pos_name ts_allocated tr_id ty_name etl_flag 1 12345 AU st1 200 None E2E U 2 12345 AU3 st2 100 None E2E D 3 12345 AU st1 100 None E2E I 4 12345 AU Jh 100 None E2E I 5 12345 AU2 Jh 100 None E2E I Because: The combination of alias_cd and country_cd acts as a primary key. 1.If a combination exists in df2 and df1 (12345 AU), it will be marked for 'Update' in df2, and all corresponding rows in df1 for the same combination will be marked as 'Insert'. for the above example for 12345 AU records in df2 will be etl_flag= 'Update' and add records for the same combination from df1 to df2 with etl_flag as 'Insert' 2.12345 AU3 exists in df2 but not in df1, so it will be tagged as 'DELETE' in the etl_flag column. 3.Any new combination that appears in df1 and not present in df2 will be tagged as 'Insert' in the etl_flag column. How can I achieve this efficiently? This is what I tried but its doesn't give correct output: df2['etl_flag'] = 'U' to_insert = df1[~df1.apply(lambda x: (df2['alias_cd'] == x['alias_cd']) & (df2['country_cd'] == x['country_cd']), axis=1)] to_insert['etl_flag'] = 'I' df2 = pd.concat([df2, to_insert], ignore_index=True) to_delete = df2[~df2.apply(lambda x: (df1['alias_cd'] == x['alias_cd']) & (df1['country_cd'] == x['country_cd']), axis=1)] to_delete['etl_flag'] = 'D' final_df = pd.concat([df2, to_delete], ignore_index=True) final_df.sort_values(by=['alias_cd', 'country_cd'], inplace=True) print(final_df[['alias_cd', 'country_cd', 'pos_name', 'ts_allocated', 'tr_id', 'ty_name', 'etl_flag']]) | IIUC, you can first perform a left-merge with indicator on df2 to identify the U/D status, then concat. df1 always gets an I: # columns used as primary key cols = ['alias_cd', 'country_cd'] out = pd.concat( [df2.assign(etl_flag=df2.merge(df1[cols].drop_duplicates(), on=cols, how='left', indicator=True) ['_merge'].map({'left_only': 'D', 'both': 'U'}) .values), df1.assign(etl_flag='I')] ) Output: alias_cd country_cd pos_name ts_allocated tr_id ty_name etl_flag 0 12345 AU st1 200 None E2E U 1 12345 AU3 st2 100 None E2E D 0 12345 AU st1 100 None E2E I 1 12345 AU Jh 100 None E2E I 2 12345 AU2 Jh 100 None E2E I | 2 | 2 |
78,653,471 | 2024-6-21 | https://stackoverflow.com/questions/78653471/how-can-i-form-groups-by-a-mask-and-n-rows-after-that-mask | My DataFrame is: import pandas as pd df = pd.DataFrame( { 'a': [False, True, False, True, False, True, False, True, True, False, False], } ) Expected output is forming groups like this: a 1 True 2 False 3 True a 5 True 6 False 7 True a 8 True 9 False 10 False The logic is: Basically I want to form groups where df.a == True and two rows after that. For example, in order to create the first group, the first True should be found which is row 1. Then the first group is rows 1, 2 and 3. For the second group the next True must be found which is not in the first group. That row is row 5. So the second group is consisted of rows 5, 6 and 7. This image clarifies the point: And this is my attempt that didn't work: N = 2 mask = ((df.a.eq(True)) .cummax().cumsum() .between(1, N+1) ) out = df[mask] | Since you problem is inherently iterative, you must loop. The most straightforward option IMO is to use a simple python loop: def split(df, N=2): i = 0 a = df['a'].to_numpy() while i < len(df): if a[i]: yield df.iloc[i:i+N+1] i+=N i+=1 out = list(split(df)) Output: [ a 1 True 2 False 3 True, a 5 True 6 False 7 True, a 8 True 9 False 10 False] If you want a simple mask and a unique DataFrame as output, you could improve the speed with numba: from numba import jit @jit(nopython=True) def get_indices(a, N=2): i = 0 out = [] while i < len(a): if a[i]: out.extend([True]*(N+1)) i += N+1 else: out.append(False) i += 1 return out[:len(a)] out = df.loc[get_indices(df['a'].to_numpy())] out['group'] = np.arange(len(out))//3 Output: a group 1 True 0 2 False 0 3 True 0 5 True 1 6 False 1 7 True 1 8 True 2 9 False 2 10 False 2 Timings for N = 2 relative timings: Timings for N = 100 NB. @Triky's solution is not providing correct results here relative timings | 3 | 1 |
78,654,841 | 2024-6-22 | https://stackoverflow.com/questions/78654841/scipy-integrate-quad-gives-incorrect-value | I was trying defining a trajectory using velocity and curvature by time; i.e. v(t) and k(t). To get x and y position at time t, I used scipy.integrate.quad but it gives a wrong value. Below is what I've tried. import numpy as np from scipy.integrate import quad # Define v(t) and k(t) def v(t): return 10.0 def k(t): return 0.0 def dtheta(t): # heading rate. return v(t) * k(t); # Compute theta(t) by integrating dtheta def theta(t): result, _ = quad(dtheta, 0, t, epsabs=1e-4, limit=100) # initial heading 0 print(f"heading {result}") return result # return 0 # Define the integrands for x(t) and y(t) def dx(t): ret = v(t) * np.cos(theta(t)) print(f"dx {ret}") return ret def dy(t): return v(t) * np.sin(theta(t)) # Integrate to find x(t) and y(t) def x(t): result, _ = quad(dx, 0, t, epsabs=1e-4, limit=100) # initial x 0 return result def y(t): result, _ = quad(dy, 0, t, epsabs=1e-4, limit=100) # initial y 0 return result print(x(0.5)) As you can see, to test the code, I just put really simple values; constant 10 to velocity and constant 0 to curvature. Therefore, the heading should be constantly 0 and the expected x(0.5) is 5.0. However, it gives a strange number 4.235577740302726e-06. I checked that theta(t) always outputs 0 and dx(t) always outputs 10.0, heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 heading 0.0 dx 10.0 4.235577740302726e-06 It gives me a correct value (i.e. 5.0) if I manually return 0 for theta(t) but I don't like to do this cause it harms its applicability. What am I doing wrong here? | You can use solve_ivp(), which is designed for solving initial value problems for ODEs: import numpy as np from scipy.integrate import solve_ivp def system(t, y, v, k): x, y_pos, theta = y dxdt = v(t) * np.cos(theta) dydt = v(t) * np.sin(theta) dthetadt = v(t) * k(t) return [dxdt, dydt, dthetadt] def v(t): return 10.0 * np.exp(-t) def k(t): return 0.1 * np.sin(2 * np.pi * t) ts, te = (0, 0.5), np.linspace(0, 0.5, 100) IVP = solve_ivp(lambda t, y: system(t, y, v, k), ts, [0, 0, 0], t_eval=te) x1, y1 = IVP.y[0, -1], IVP.y[1, -1] print(f"x(0.5) = {x1}, y(0.5) = {y1}") Prints x(0.5) = 3.89226601538334, y(0.5) = 0.4613683813905819 | 2 | 1 |
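As a quick sanity check of the solve_ivp approach in the answer above, the constant case from the question (v(t) = 10, k(t) = 0) should give x(0.5) = 5; a minimal sketch (rhs is a new helper name, the constants come from the question):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, y_pos, theta = s          # state: position x, y and heading theta
    v, k = 10.0, 0.0             # constant velocity and curvature from the question
    return [v * np.cos(theta), v * np.sin(theta), v * k]

sol = solve_ivp(rhs, (0, 0.5), [0.0, 0.0, 0.0])
print(sol.y[0, -1])              # ~5.0, the expected x(0.5)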
78,654,580 | 2024-6-21 | https://stackoverflow.com/questions/78654580/how-to-perform-single-synchronous-and-multiple-asynchronous-requests-in-python | I'm working on a Python script where I need to make an initial request to obtain an ID. Once I have the ID, I need to make several additional requests to get data related to that ID. I understand that these subsequent requests can be made asynchronously to improve performance. However, I'm not sure how to implement this effectively. Here's a simplified version of my current synchronous approach: import requests # Initial request to get the ID response = requests.get('https://api.example.com/get_id') id = response.json()['id'] # Subsequent requests to get data related to the ID data1 = requests.get(f'https://api.example.com/data/{id}/info1').json() data2 = requests.get(f'https://api.example.com/data/{id}/info2').json() data3 = requests.get(f'https://api.example.com/data/{id}/info3').json() # Processing the data process_data(data1, data2, data3) I would like to make the requests to info1, info2, and info3 asynchronously. How can I achieve this using asyncio or any other library? I've looked into httpx, but I'm not sure how to structure the code correctly. Any help or example code would be greatly appreciated! | Similar but using a task group (python >= 3.11) import httpx import asyncio async def main(): async with httpx.AsyncClient() as client: response = await client.get('https://api.example.com/get_id') id = response.json()['id'] async with asyncio.TaskGroup() as tg: task1 = tg.create_task(client.get(f'https://api.example.com/data/{id}/info1')) task2 = tg.create_task(client.get(f'https://api.example.com/data/{id}/info2')) task3 = tg.create_task(client.get(f'https://api.example.com/data/{id}/info3')) # Processing the data process_data(task1.result(), task2.result(), task3.result()) | 4 | 3 |
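For Python versions before 3.11, where asyncio.TaskGroup is not available, the same fan-out can be sketched with asyncio.gather; the URLs and process_data below are the placeholders from the question:

import asyncio
import httpx

async def main():
    async with httpx.AsyncClient() as client:
        response = await client.get('https://api.example.com/get_id')
        id = response.json()['id']
        # run the three dependent requests concurrently
        r1, r2, r3 = await asyncio.gather(
            client.get(f'https://api.example.com/data/{id}/info1'),
            client.get(f'https://api.example.com/data/{id}/info2'),
            client.get(f'https://api.example.com/data/{id}/info3'),
        )
        process_data(r1.json(), r2.json(), r3.json())

asyncio.run(main())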
78,654,017 | 2024-6-21 | https://stackoverflow.com/questions/78654017/how-to-type-hint-dictionaries-that-may-have-different-custom-keys-and-or-values | In the previous versions of our app, people would just pass some arguments with plain strings to certain functions, as we did not have specific type hinting or data types for some of them. Something like: # Hidden function signature: def dummy(var: str): pass # Users: dummy("cat") But now we want to implement custom data types for those function signatures, while providing backward compatibility. Say something like this: # Signature: def dummy(var: Union[NewDataType, Literal["cat"]]) # Backward compatibility: dummy("cat") # New feature: dummy(NewDataType.cat) Achieving this for simple function signatures is fine, but the problem comes when the signatures are more complex. How to implement this if the argument of dummy is a dictionary that can take both Literal["cat"] and NewDataType as keys? Furthermore, how to achieve this if the argument is a dictionary with the same previous key type combination, but that could also have str and int as values (and the four possible combinations)? All of this must be compliant with mypy, pylint and use Python 3.9 (no StrEnum or TypeAlias). I have tried many different combinations like the following: from typing import TypedDict, Literal, Dict, Union from enum import Enum # For old support: AnimalsLiteral = Literal[ "cat", "dog", "snake", ] # New datatypes: class Animals(Enum): cat = "cat" dog = "dog" snake = "snake" # Union of Animals Enum and Literal types for full support: DataType = Union[Animals, AnimalsLiteral] # option 1, which fails: def dummy(a: Dict[DataType, str]): pass # option 2, which also fails: # def dummy(a: Union[Dict[DataType, str], Dict[Animals, str], Dict[AnimalsLiteral, str]]): # pass if __name__ == "__main__": # Dictionary with keys as Animals Enum input_data1 = { Animals.dog: "dog", } dummy(input_data1) # Dictionary with keys as Literal["cat", "dog", "snake"] input_data2 = { "dog": "dog", } dummy(input_data2) # Dictionary with mixed keys: Animals Enum and Literal string input_data3 = { Animals.dog: "dog", "dog": "dog", } dummy(input_data3) dummy(input_data1) is fine, but dummy(input_data2) gives the following mypy errors with signature 2 for dummy: Argument 1 to "dummy" has incompatible type "dict[str, str]"; expected "Union[dict[Union[Animals, Literal['cat', 'dog', 'snake']], str], dict[Animals, str], dict[Literal['cat', 'dog', 'snake'], str]]"Mypyarg-type Argument 1 to "dummy" has incompatible type "dict[str, str]"; expected "Union[dict[Union[Animals, Literal['cat', 'dog', 'snake']], str], dict[Animals, str], dict[Literal['cat', 'dog', 'snake'], str]]"Mypyarg-type (variable) input_data2: dict[str, str] Of course doing something like: input_data2: DataTypes = { "dog": "dog", } would solve it, but I can't ask the users to always do that when they create their datatypes. Also, I have tried another alternative using TypedDict, but I still run into the same type of mypy errors. In the end, I want to be able to create mypy and pylint compliant typehints of dictionaries which may take custom key types (as in the example) and even custom value types, or combination of the above. | The core issue is this: Of course doing something like: input_data2: DataTypes = { "dog": "dog", } would solve it, but I can't ask the users to always do that when they create their datatypes. 
If you don't want your users to provide annotations, then they will have to pass the data directly to the function (dummy({"dog": "dog"})) for the function's parameter type inference to kick in. This is because when a type-checker infers the type of an unannotated name in an assignment from a dict, they don't infer the type as literal (see mypy Playground, Pyright Playground): a = {"dog": "dog"} reveal_type(a) # dict[str, str] I suspect that if the type-checkers tried to infer literal keys on unannotated assignments, other users would complain of false positives because they'd want dict[str, str]. dict[str, str] can never fulfil a more tightly-annotated parameter in your functions (def dummy(a: Dict[DataType, str]): ...). In my opinion, you have 2 choices: Fulfil stricter typing by asking your users to annotate (it isn't clear from the question who is providing the DataType definitions - is it you/library-maintainers or the users)? Don't ask your users to annotate, but make a @typing.overload which allows looser annotations: from typing import overload @overload def dummy(a: dict[DataType, str]): ... @overload def dummy(a: dict[str, str]): ... As a bonus, when mypy gains support, you can use PEP 702: @warnings.deprecated to warn your users if their typing is too loose. See an example at Pyright Playground. An additional note: In your question details, you mentioned: All of this must be compliant with mypy, pylint and use Python 3.9 (no StrEnum or TypeAlias). Python versions which aren't end-of-life are capable of utilising most newer features from typing. This is because type checkers are required to understand imports from typing_extensions, regardless of whether this module exists at runtime. So, TypeAlias and the union syntax int | str are available in Python 3.9 via the following, as long as you don't need to introspect annotations at runtime: from __future__ import annotations var1: int | str = 1 from typing import TYPE_CHECKING if TYPE_CHECKING: from typing_extensions import TypeAlias IntStrAlias: TypeAlias = int | str # Or IntStrAlias: TypeAlias = "int | str" Python 3.11's enum.StrEnum is also easily imitated in Python 3.9 (see the note in the docs), and type-checkers are required to understand this: from enum import Enum class StrEnum(str, Enum): dog = "dog" >>> reveal_type(StrEnum.dog) # Literal[StrEnum.dog] >>> print(StrEnum.dog + " barks!") dog barks! | 2 | 2 |
78,642,079 | 2024-6-19 | https://stackoverflow.com/questions/78642079/how-to-properly-calculate-psd-plot-power-spectrum-density-plot-for-images-in-o | I'm trying to remove periodic noise from an image using PSDP, I had some success, but I'm not sure if what I'm doing is 100% correct. This is basically a kind of follow up to this video lecture which discusses this very subject on 1d signals. What I have done so far: Initially I tried flattening the whole image, and then treating it as a 1D signal, this obviously gives me a plot, but the plot doesn't look right honestly and the final result is not that appealing. This is the first try: # img link https://github.com/VladKarpushin/Periodic-noise-removing-filter/blob/master/www/images/period_input.jpg?raw=true img = cv2.imread('./img/periodic_noisy_image2.jpg',0) img_flattened = img.flatten() n = img_flattened.shape[0] # 447561 fft = np.fft.fft(img_flattened, img_flattened.shape[0]) # the values range is just absurdly large, so # we have to use log at some point to get the # values range to become sensible! psd = fft*np.conj(fft)/n freq = 1/n * np.arange(n) L = np.arange(1,np.floor(n/2),dtype='int') # use log so we have a sensible range! psd_log = np.log(psd) print(f'{psd_log.min()=} {psd_log.max()=}') # cut off range to remove noise! indexes = psd_log<15 # use exp to get the original vlaues for plotting comparison psd_cleaned = np.exp(psd_log * indexes) # get the denoised fft fft_cleaned = fft * indexes # in case the initial parts were affected, # lets restore it from fft so the final image looks well span = 10 fft_cleaned[:span] = fft[:span] # get back the image denoised_img = np.fft.ifftn(fft_cleaned).real.clip(0,255).astype(np.uint8).reshape(img.shape) plt.subplot(2,2,1), plt.imshow(img,cmap='gray'), plt.title('original image') plt.subplot(2,2,2), plt.imshow(denoised_img, cmap='gray'), plt.title('denoise image') plt.subplot(2,2,3), plt.plot(freq[L],psd[L]), plt.title('PSD') plt.subplot(2,2,4), plt.plot(freq[L],psd_cleaned[L]), plt.title('PSD clean') plt.show() This is the output, the image is denoised a bit, but overall, it doesn't sit right with me, as I assume I should at least get as good a result as my second attempt, the plots also look weird. in my second attempt, I simply calculated the power spectrum the normal way, and got a much better result imho: # Read the image in grayscale img = cv2.imread('./img/periodic_noisy_image2.jpg', 0) # Perform 2D Fourier transform fft = np.fft.fftn(img) fft_shift = np.fft.fftshift(fft) # Calculate Power Spectrum Density, it's the same as doing fft_shift*np.conj(fft_shift) # note that abs(fft_shitf) calculates square root of powerspectrum, so we **2 it to get the actual power spectrum! # but we still need to divide it by the frequency to get the plot (for visualization only)! # this is what we do next! 
# I use log to make large numbers smaller and small numbers larger so they show up properly in visualization psd = np.log(np.abs(fft_shift)**2) # now I can filter out the bright spots which signal noise # take the indexes belonging to these large values # and then use that to set them in the actual fft to 0 to suppress them # 20-22 image gets too smoothed out, and >24, its still visibly noisy ind = psd<23 psd2 = psd*ind fft_shift2 = ind * fft_shift # since this is not accurate, we may very well endup destroying # the center of the fft which contains low freq important image information # (it has large values there as well) so we grab that area from fft and copy # it back to restore the lost values this way! cx,cy = img.shape[0]//2, img.shape[1]//2 area = 20 # restore the center in case it was overwritten! fft_shift2[cx-area:cx+area,cy-area:cy+area] = fft_shift[cx-area:cx+area,cy-area:cy+area] ifft_shift2 = np.fft.ifftshift(fft_shift2) denoised_img = np.fft.ifftn(ifft_shift2).real.clip(0,255).astype(np.uint8) # Get frequencies for each dimension freq_x = np.fft.fftfreq(img.shape[0]) freq_y = np.fft.fftfreq(img.shape[1]) # Create a meshgrid of frequencies freq_x, freq_y = np.meshgrid(freq_x, freq_y) # Plot the PSD plt.figure(figsize=(10, 7)) plt.subplot(2,2,1), plt.imshow(img, cmap='gray'), plt.title('img') plt.subplot(2,2,2), plt.imshow(denoised_img, cmap='gray'), plt.title('denoised image') #plt.subplot(2,2,3), plt.imshow(((1-ind)*255)), plt.title('mask-inv') plt.subplot(2,2,3), plt.imshow(psd2, extent=(np.min(freq_x), np.max(freq_x), np.min(freq_y), np.max(freq_y))), plt.title('Power Spectrum Density[cleaned]') plt.subplot(2,2,4), plt.imshow(psd, extent=(np.min(freq_x), np.max(freq_x), np.min(freq_y), np.max(freq_y))),plt.title('Power Spectrum Density[default]') plt.xlabel('Frequency (X)') plt.ylabel('Frequency (Y)') plt.colorbar() plt.show() This seems to work, but I'm not getting a good result, I'm not sure if I am doing something wrong here, or this is the best that can be achieved. What I did next was, I tried to completely set a rectangle around all the bright spots and set them all to zeros, this way we I make sure the surrounding values are also taken care of as much as possible and this is what I get as the output: img = cv2.imread('./img/periodic_noisy_image2.jpg') while (True): # calculate the dft ffts = np.fft.fftn(img) # now shift to center for better interpretation ffts_shifted = np.fft.fftshift(ffts) # power spectrum ffts_shifted_mag = (20*np.log(np.abs(ffts_shifted))).astype(np.uint8) # use selectROI to select the spots we want to set to 0! noise_rois = cv2.selectROIs('select periodic noise spots(press Spc to take selection, press esc to end selection)', ffts_shifted_mag,False, False,False) print(f'{noise_rois=}') # now set the area in fft_shifted to zero for y,x,h,w in noise_rois: # we need to provide a complex number! 
ffts_shifted[x:x+w,y:y+h] = 0+0j # shift back iffts_shifted = np.fft.ifftshift(ffts_shifted) iffts = np.fft.ifftn(iffts_shifted) # getback the image img_denoised = iffts.real.clip(0,255).astype(np.uint8) # lets calculate the new image magnitude denoise_ffts = np.fft.fftn(img_denoised) denoise_ffts_shifted = np.fft.fftshift(denoise_ffts) denoise_mag = (20*np.log(np.abs(denoise_ffts_shifted))).astype(np.uint8) cv2.imshow('img-with-periodic-noise', img) cv2.imshow('ffts_shifted_mag', ffts_shifted_mag) cv2.imshow('denoise_mag',denoise_mag) cv2.imshow('img_denoised', img_denoised) # note we are using 0 so it only goes next when we press it, otherwise we can't see the result! key = cv2.waitKey(0)&0xFF cv2.destroyAllWindows() if key == ord('q'): break Again I had the assumption, by removing these periodic noise, the image would look much better, but I still can see patterns which means they are not removed completely! but at the same time, I did remove the bright spots. This gets even harder (so far impossible) to get this image denoised using this method: This is clearly a periodic noise, so what is it that I'm missing or doing wrong here? For the reference this is the other image with periodic noise which I have been experimenting with: Update : After reading the comments and suggestions so far, I came up with the following version, which overall works decently, but I face these issues: I don't get tiny imaginary values, even when the output looks fairly good! I can't seem to rely on this check to see what has gone wrong, it exists when there are very little/barely noticeable noise, and when there are noise everywhere. Still face a considerable amount of noise in some images (example given) I'd be great to know if this is expected and I should move on, or there's something wrong which needs to be addressed. def onchange(x):pass cv2.namedWindow('options') cv2.createTrackbar('threshold', 'options', 130, 255, onchange) cv2.createTrackbar('area', 'options', 40, max(*img.shape[:2]), onchange) cv2.createTrackbar('pad', 'options', 0, max(*img.shape[:2]), onchange) cv2.createTrackbar('normalize_output', 'options', 0, 1, onchange) while(True): threshold = cv2.getTrackbarPos('threshold', 'options') area = cv2.getTrackbarPos('area', 'options') pad = cv2.getTrackbarPos('pad', 'options') normalize_output = cv2.getTrackbarPos('normalize_output', 'options') input_img = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_REFLECT) if pad>0 else img fft = np.fft.fftn(input_img) fft_shift = np.fft.fftshift(fft) # note since we plan on normalizing the magnitude spectrum, # we dont clip and we dont cast here! # +1 so for the images that have 0s we dont get -inf down the road and dont face issues when we want to normalize and create a mask out of it! fft_shift_mag = 20*np.log(np.abs(fft_shift)+1) # now lets normalize and get a mask out of it, # the idea is to identify bright spot and set them to 0 # while retaining the center of the fft as it has a lot # of image information fft_shift_mag_norm = cv2.normalize(fft_shift_mag, None, 0,255, cv2.NORM_MINMAX) # now lets threshold and get our mask if img.ndim>2: mask = np.array([cv2.threshold(fft_shift_mag_norm[...,i], threshold, 255, cv2.THRESH_BINARY)[1] for i in range(3)]) # the mask/img needs to be contiguous, (a simple .copy() would work as well!) 
mask = np.ascontiguousarray(mask.transpose((1,2,0))) else: ret, mask = cv2.threshold(fft_shift_mag_norm, threshold, 255, cv2.THRESH_BINARY) w,h = input_img.shape[:2] cx,cy = w//2, h//2 mask = cv2.circle(mask, (cy,cx), radius=area, color=0, thickness=cv2.FILLED) # now that we have our mask prepared, we can simply use it with the actual fft to # set all these bright places to 0 fft_shift[mask!=0] = 0+0j ifft_shift = np.fft.ifftshift(fft_shift) img_denoised = np.fft.ifftn(ifft_shift).real.clip(0,255).astype(np.uint8) img_denoised = img_denoised[pad:w-pad,pad:h-pad] # check the ifft imaginary parts are close to zero otherwise sth is wrong! almost_zero = np.all(np.isclose(ifft_shift.imag,0,atol=1e-8)) if not almost_zero: print('imaginary components not close to 0, something is wrong!') else: print(f'all is good!') # do a final contrast stretching: if normalize_output: p2, p98 = np.percentile(img_denoised, (2, 98)) img_denoised = img_denoised.clip(p2, p98) img_denoised = cv2.normalize(img_denoised, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U) cv2.imshow('input_img', input_img) cv2.imshow('fft-shift-mag-norm', fft_shift_mag_norm) cv2.imshow('fft_shift_mag', ((fft_shift_mag.real/fft_shift_mag.real.max())*255).clip(0,255).astype(np.uint8)) cv2.imshow('mask', mask) cv2.imshow('denoised', img_denoised) key = cv2.waitKey(30)&0xFF if key == ord('q') or key == 27: cv2.destroyAllWindows() break relatively good output: Not so much! This is the one example I still get lots of noise. I'm not sure if this is the best I can expect, or there is still room for improvements: There are other samples, where I couldn't remove all the noise either, such as this one(I could tweak it a bit but there would still be artifacts): I attributed this to the low quality of the image itself and accepted it, However, I expected the second example to have room for improvements, I thought I should be able to ultimately get something like this or close to it: Are my assumptions incorrect? Are these artifacts/noises we are seeing in the outputs, periodic noise or some other types of noise? Relatively speaking, Is this the best one can achieve/hope for when using this method? I mean by purely removing periodic noise and not resorting to anything advanced? | Here are some things you can do to improve your results: The hard transition from 1 to 0 in your frequency-domain kernel (ind in the 2nd block of code, it is implicit in the 3rd) means that youβll get lots of ringing artifacts back in the spatial domain. This is 99% of the strange stuff in your output. To see this ringing, try contrast-stretching the output instead of clipping (clipping is correct, but the alternative method shows you all the artifacts youβre clipping away). plt.imshow will show you the contrast-stretched image if you leave it as a floating-point array. [I.e. just do plt.imshow(np.fft.ifftn(ifft_shift2).real). You could also inverse-transform the kernel ind. Youβll see it has a very large extent and does a lot of ringing. The better approach to create the frequency-domain kernel is to draw Gaussian-shaped blobs, or in some other way taper the edges of the squares you draw in the 3rd code block. One easy way to draw rectangles with tapered edges is to use the function dip.DrawBandlimitedBox in DIPlib (disclaimer: Iβm an author). Iβm not sure if there are other image processing libraries with an equivalent function. Handle edge effects. These are not very visible yet, but once you take care of #1, theyβll become more apparent. 
This is not easy in this application, because the noise pattern has to be continued at the image edge in a different way from the signal. See this Q&A for an example. Also, do note that the frequency-domain kernel you construct must be perfectly symmetric around the origin. For every box you draw on the left half of the image, you need to draw a box on the right side at exactly the same location (mirror the coordinates both horizontally and vertically). Verify that the imaginary component of the inverse transform is approximately 0, if the boxes are not perfectly symmetric it wonβt be. When the kernel not perfectly symmetric, youβll discard some of the signal when you take the real part of the inverse transform, and this discarded signal has a pattern of its ownβ¦ There are more strong dots at higher frequencies from the ones you are removing. Removing these will further improve the results. Alternatively, use a low-pass filter that removes all of the frequencies at the dots and higher (draw a disk with tapered edges around the origin in the frequency domain). This would match what we see when we look at the image from a bit of a distance. Here's how I would implement this using DIPlib: import diplib as dip img = dip.ImageRead("7LOLyaeK.jpg") # the soldier image # Fourier transform F = dip.FourierTransform(img) F.ResetPixelSize() # so we can see coordinates in pixels dip.viewer.ShowModal(F) # click on "MAG" (in 3rd column) and "LOG" (in 2nd column) # I see peaks at the following locations (one of each pair of peaks): pos = [ (513, 103), (655, 170), (799, 236), (654, 303), ] # Let's ignore all the other peaks for now, though we should take care of them too # Maks out peaks mask = F.Similar("SFLOAT") mask.Fill(1) origin = (mask.Size(0) // 2, mask.Size(1) // 2) sigma = 5 value = 2 * 3.14159 * sigma**2 # we need to undo the normalization in dip.DrawBandlimitedPoint() for p in pos: dip.DrawBandlimitedPoint(mask, origin=p, value=-value, sigmas=sigma) p = (2 * origin[0] - p[0], 2 * origin[1] - p[1]) dip.DrawBandlimitedPoint(mask, origin=p, value=-value, sigmas=sigma) dip.viewer.ShowModal(mask) # Apply the filter and inverse transform out = dip.InverseFourierTransform(F * mask, {"real"}) dip.viewer.Show(img) dip.viewer.Show(out) dip.viewer.Spin() This doesn't look very good because we didn't take care of all the other peaks. The dithering pattern is not just four sine waves, it's quite a bit more complex than that. But we don't actually expect there to be any frequencies in the image above that of the dithering pattern. So, you're actually better off simply applying a low-pass filter in this case: out2 = dip.Gauss(img, 2.1) # Finding the best cutoff is a bit of a trial-and-error dip.viewer.Show(img) dip.viewer.Show(out) dip.viewer.Show(out2) dip.viewer.Spin() The Veritasium image is a big old mess, looking at the Fourier transform, there's just not a whole lot left that we can recover. Again, applying a low-pass filter gives you a lower bound on what you could potentially accomplish with a linear filter. | 2 | 3 |
78,650,040 | 2024-6-21 | https://stackoverflow.com/questions/78650040/optimization-challenge-due-to-l1-cache-with-numba | I've been working on optimizing the calculation of differences between elements in NumPy arrays. I have been using Numba for performance improvements, but I get a 100-microsecond jump when the array size surpasses 1 MB. I assume this is due to my CPU's Ryzen 7950X 1 MB L1 cache size. Here is an example code: @jit(nopython=True) def extract_difference_1(random_array): shape0, shape1 = random_array.shape difference_arr = np.empty((shape0, shape1), dtype=np.float64) for i in range(shape0): difference_arr[i] = random_array[i,0] - random_array[i,1], random_array[i,1] - random_array[i,2], random_array[i,2] - random_array[i,3], random_array[i,3] - random_array[i,4], random_array[i,4] - random_array[i,5], random_array[i,5] - random_array[i,6], random_array[i,6] - random_array[i,0] return difference_arr @jit(nopython=True) def extract_difference_2(random_array): shape0, shape1 = random_array.shape split_index = shape0 // 2 part_1 = extract_difference_1(random_array[:split_index]) part_2 = extract_difference_1(random_array[split_index:]) return part_1 , part_2 x_list = [18500, 18700, 18900] y = 7 for x in x_list: random_array = np.random.rand(x, y) print(f"\nFor (x,y) = ({x}, {y}), random_array size is {array_size_string(random_array)}:\n") for func in [extract_difference_1, extract_difference_2]: func(random_array) # compile the function timing_result = %timeit -q -o func(random_array) print(f"{func.__name__}:\t {timing_result_message(timing_result)}") The timing results are: For (x,y) = (18500, 7), random_array size is 0.988 MB, 1011.72 KB: extract_difference_1: 32.4 Β΅s Β± 832 ns, b: 31.5 Β΅s, w: 34.3 Β΅s, (l: 7, r: 10000), extract_difference_2: 33.8 Β΅s Β± 279 ns, b: 33.5 Β΅s, w: 34.3 Β΅s, (l: 7, r: 10000), For (x,y) = (18700, 7), random_array size is 0.999 MB, 1022.66 KB: extract_difference_1: 184 Β΅s Β± 2.15 Β΅s, b: 181 Β΅s, w: 188 Β΅s, (l: 7, r: 10000), extract_difference_2: 34.4 Β΅s Β± 51.2 ns, b: 34.3 Β΅s, w: 34.5 Β΅s, (l: 7, r: 10000), For (x,y) = (18900, 7), random_array size is 1.009 MB, 1033.59 KB: extract_difference_1: 201 Β΅s Β± 3.3 Β΅s, b: 196 Β΅s, w: 205 Β΅s, (l: 7, r: 10000), extract_difference_2: 34.5 Β΅s Β± 75.2 ns, b: 34.4 Β΅s, w: 34.6 Β΅s, (l: 7, r: 10000), Splitting the resulting difference_arr into two does it, but I prefer if the result is a single array. Especially as later, I will be increasing the y to 10, 50, 100, 1000 and x to 20000. When combining the split arrays part_1 and part_2 into the difference_arr, I found it slower than extract_difference_1. I think the slowdown is due to the extract_difference_1 being larger than 1 MB, resulting in L1 cache not being used. Is there a way to maintain the performance while having the result be a single array with Python, Numba or any other package? Or is there a way that will allow me to recombine these arrays without a performance penalty for the resulting array exceeding the L1 cache size? | TL;DR: The performance issue is not caused by your CPU cache. It comes from the behaviour of the allocator on your target platform which is certainly Windows. Analysis I assume this is due to my CPU's Ryzen 7950X 1 MB L1 cache size. First of all, the AMD Ryzen 7950X CPU is a Zen4 CPU. This architecture have L1D caches of 32 KiB not 1 MiB. That being said, the L2 cache is 1 MiB on this architecture. While the cache-size hypothesis is a tempting idea at first glance. 
There are two major issues with it: First, the same amount of data is read and written by the two functions. The fact that the array is split in two parts does not change this fact. Thus, if cache misses happens in the first function due to the L2 capacity, it should also be the case on the other function. Regarding memory accesses, the only major difference between the two function is the order of the access which should not have a significant performance impact anyway (since the array is sufficiently large so latency issues are mitigated). Moreover, the L2 cache on Zen4 is not so much slower than the L3 one. Indeed, It should not be more than twice slower while experimental results show a >5x times bigger execution time. I can reproduce this on a Cascade Lake CPU (with a L2 cache of also 1 MiB) on Windows. Here is the result: For (x,y) = (18500, 7), random_array size is 0.988006591796875: extract_difference_1: 68.6 Β΅s Β± 3.63 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) extract_difference_2: 70.8 Β΅s Β± 5.2 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) For (x,y) = (18700, 7), random_array size is 0.998687744140625: extract_difference_1: 342 Β΅s Β± 8.31 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each) extract_difference_2: 69.7 Β΅s Β± 2.67 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) For (x,y) = (18900, 7), random_array size is 1.009368896484375: extract_difference_1: 386 Β΅s Β± 7.34 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each) extract_difference_2: 67 Β΅s Β± 4.51 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) New hypothesis: allocation overheads Splitting the resulting difference_arr into two does it The main difference between the two functions is that one performs 2 small allocations rather than 1 big. This rises a new hypothesis: can the allocation timings explain the issue? We can easily answer this question based on this previous post: Why is allocation using np.empty not O(1). We can see that there is a big performance gap between allocations of 0.76 MiB (np.empty(10**5)) and the next bigger one >1 MiB. Here are the provided results of the target answer: np.empty(10**5) # 620 ns Β± 2.83 ns per loop (on 7 runs, 1000000 loops each) np.empty(10**6) # 9.61 Β΅s Β± 34.2 ns per loop (on 7 runs, 100000 loops each) More precisely, here is new benchmarks on my current machine: %timeit -n 10_000 np.empty(1000*1024, np.uint8) 793 ns Β± 18.8 ns per loop (mean Β± std. dev. of 7 runs, 10000 loops each) %timeit -n 10_000 np.empty(1024*1024, np.uint8) 6.6 Β΅s Β± 173 ns per loop (mean Β± std. dev. of 7 runs, 10000 loops each) We can see that the gap is close to 1 MiB. Note that the timings between 1000 KiB and 1024 are not very stable (showing that the result is dependent of hidden low-level parameters -- possibly packing/alignment issues). This Numpy allocation behaviour is AFAIK specific to Windows and AFAIR not visible on Linux (gaps might be seen but not that big and not at the same threshold). An explanation is provided in the linked answer : expensive kernel calls are performed beyond a threshold (huge-pages might also play a role too). Solutions Is there a way to maintain the performance while having the result be a single array with Python Yes. You can preallocate the output array memory so not to pay the expensive allocation overhead. An alternative solution is to use another allocator (e.g. jemalloc, tcmalloc). 
Here is a modified code preallocating memory: @nb.jit(nopython=True) def extract_difference_1(random_array, scratchMem): shape0, shape1 = random_array.shape difference_arr = scratchMem[:shape0*shape1].reshape((shape0, shape1))#np.empty((shape0, shape1), dtype=np.float64) for i in range(shape0): difference_arr[i] = random_array[i,0] - random_array[i,1], random_array[i,1] - random_array[i,2], random_array[i,2] - random_array[i,3], random_array[i,3] - random_array[i,4], random_array[i,4] - random_array[i,5], random_array[i,5] - random_array[i,6], random_array[i,6] - random_array[i,0] return difference_arr @nb.jit(nopython=True) def extract_difference_2(random_array, scratchMem): shape0, shape1 = random_array.shape split_index = shape0 // 2 part_1 = extract_difference_1(random_array[:split_index], np.empty((split_index, shape1))) part_2 = extract_difference_1(random_array[split_index:], np.empty((split_index, shape1))) return part_1 , part_2 x_list = [18500, 18700, 18900] y = 7 scratchMem = np.empty(16*1024*1024) for x in x_list: random_array = np.random.rand(x, y) print(f"\nFor (x,y) = ({x}, {y}), random_array size is {x*y*8/1024/1024}:\n") for func in [extract_difference_1, extract_difference_2]: func(random_array, scratchMem) # compile the function timing_result = %timeit -q -o func(random_array, scratchMem) print(f"{func.__name__}:\t {timing_result}") Here is the result: For (x,y) = (18500, 7), random_array size is 0.988006591796875: extract_difference_1: 65.1 Β΅s Β± 2.48 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) extract_difference_2: 71 Β΅s Β± 2.36 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) For (x,y) = (18700, 7), random_array size is 0.998687744140625: extract_difference_1: 69.3 Β΅s Β± 4.05 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) extract_difference_2: 68.3 Β΅s Β± 3.06 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) For (x,y) = (18900, 7), random_array size is 1.009368896484375: extract_difference_1: 68.5 Β΅s Β± 1.98 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) extract_difference_2: 68.7 Β΅s Β± 3.14 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) We can see that the problem is now gone! Thus, this confirms the hypothesis that allocations were the main source of the performance issue. | 5 | 4 |
78,645,142 | 2024-6-20 | https://stackoverflow.com/questions/78645142/attention-weights-on-top-of-image | h = 16 fig, ax = plt.subplots(ncols=3, nrows=1, figsize=(15, 5)) for i, q_id in enumerate(sorted_indices[0]): logit = itm_logit[:, q_id, :] prob = torch.nn.functional.softmax(logit, dim=1) name = f'{prob[0, 1]:.3f}_query_id_{q_id}' # Attention map attention_map = avg_cross_att[0, q_id, :-1].view(h, h).detach().cpu().numpy() # Image raw_image_resized = raw_image.resize((596, 596)) ax[0].set_title(name) ax[0].imshow(attention_map, cmap='viridis') ax[0].axis('off') ax[1].set_title(caption) ax[1].imshow(raw_image_resized) ax[1].axis('off') ax[2].set_title(f'Overlay: {name}') ax[2].imshow(raw_image_resized) ax[2].imshow(attention_map, cmap='viridis', alpha=0.6) ax[2].axis('off') ax[0].set_aspect('equal') ax[1].set_aspect('equal') ax[2].set_aspect('equal') plt.tight_layout() plt.savefig(f"./att_maps/{name}.jpg") plt.show() break What I am trying to do is overlay the attention weights on top of the image (on thrid axes), so I can see which part of the image attention weight is more focused on. However, the code that I put only overlap the attention weight on top of the image. What might be the problem in this case? | The root cause of this is the different resolution of the image and the attention map. This way, the second imshow call reduced the displayed area to a tiny corner of the original image, with an overlay of the 16x16 attention map. To fix this, the attention map needs to be upscaled (e.g. via np.repeat) to the image resolution. Here's an example: import numpy as np from matplotlib import pyplot as plt from matplotlib import image attention_map = np.random.rand(16, 16) img = image.imread("merlion.jpg") plt.figure("uneven shapes") plt.imshow(img) plt.imshow(attention_map, cmap='viridis', alpha=0.3) # naive upscaling via np.repeat in both dimensions attention_map_upscale = np.repeat(np.repeat(attention_map, img.shape[0] // attention_map.shape[0], axis=0), img.shape[1] // attention_map.shape[1], axis=1) plt.figure("even shapes") plt.imshow(img) plt.imshow(attention_map_upscale, cmap='viridis', alpha=0.3) plt.show() | 2 | 1 |
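An alternative to upscaling the attention map with np.repeat is to let matplotlib stretch the small map over the image area via imshow's extent argument; a short sketch using the same toy data as the answer above:

import numpy as np
from matplotlib import pyplot as plt
from matplotlib import image

attention_map = np.random.rand(16, 16)
img = image.imread("merlion.jpg")

plt.imshow(img)
# extent=(left, right, bottom, top) maps the 16x16 grid onto the full image area
plt.imshow(attention_map, cmap='viridis', alpha=0.3,
           extent=(0, img.shape[1], img.shape[0], 0))
plt.axis('off')
plt.show()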
78,649,817 | 2024-6-20 | https://stackoverflow.com/questions/78649817/pandas-groupby-expanding-mean-does-not-accept-missing-values | I've been looking to retrieve group-based expanding means from the following dataset: df = pd.DataFrame({'id':[1,1,1,2,2,2],'y':[1,2,3,1,2,3]}) and df.groupby('id').expanding().mean().values returns the correct: array([[1. ], [1.5], [2. ], [1. ], [1.5], [2. ]]) However, in my specific case I have to deal with some missing values as well, so that: df2 = pd.DataFrame({'id':[1,1,1,2,2,2],'y':[1,pd.NA,3,1,2,3]}) My expected result applying the same logic would be to ignore the NaN in the computation of the mean, so that from df2.groupby('id').expanding().mean().values I would expect array([[1. ], [1.], [2. ], [1. ], [1.5], [2. ]]) Instead, Pandas returns an error due to applying some type assertion to float in the backend. None of my naive attempts (e.g., .expanding().apply(lambda x: np.nansum(x))) are solving this. Any (possibly equally compact) solution? | You can convert column 'y' with pd.to_numeric, which will coerce pd.NA into NaN. The latter can be interpreted correctly by the following operations: df2["y"] = pd.to_numeric(df2["y"]) df2 = df2.groupby("id").expanding().mean().values [[1. ] [1. ] [2. ] [1. ] [1.5] [2. ]] | 2 | 3 |
78,648,876 | 2024-6-20 | https://stackoverflow.com/questions/78648876/nested-condition-on-simple-data | I have a dataframe having 3 columns, two boolean type and one column as string. from pyspark.sql import SparkSession from pyspark.sql.types import StructType, StructField, BooleanType, StringType # Create a Spark session spark = SparkSession.builder \ .appName("Condition Test") \ .getOrCreate() # Sample data data = [ (True, 'CA', None), (True, 'US', None), (False, 'CA', None) ] # Define schema for the dataframe schema = StructType([ StructField("is_flag", BooleanType(), nullable=False), StructField("country", StringType(), nullable=False), StructField("rule", BooleanType(), nullable=True) ]) # Create DataFrame df = spark.createDataFrame(data, schema=schema) # Show initial dataframe df.show(truncate=False) condition = ( (~col("is_flag")) | ((col("is_flag")) & (trim(col("country")) != 'CA') & nvl(col("rule"),lit(False)) != True) ) df = df.filter(condition) # show filtered dataframe df.show(truncate=False) Above code is returning below data. +-------+-------+----+ |is_flag|country|rule| +-------+-------+----+ |true |CA |NULL| |true |US |NULL| |false |CA |NULL| +-------+-------+----+ However since I'm explicitely mentioning ((col("is_flag")) & (trim(col("country")) != 'CA') & nvl(col("rule"),lit(False)) != True) ie. trim(col("country")) != 'CA' when is_flag is true, I'm not expecting first record, I need results like below. +-------+-------+----+ |is_flag|country|rule| +-------+-------+----+ |true |US |NULL| |false |CA |NULL| +-------+-------+----+ Question: why the above code also returns 1st record |true |CA |NULL|, where as we have explicitly mentioned country != 'CA' when is_flag is true (boolean). However same when confition is applied via sql returns expected result. select * from df where ( not is_flag or (is_flag and trim(country) != 'CA' and nvl(rule,False) != True) ) | The condition is invalid because it doesn't consider operator precedence and hence the wrong result. Operator & has a higher precedence than !=. Here's the updated condition with parentheses: condition = ( (~col("is_flag")) | ((col("is_flag")) & (trim(col("country")) != 'CA') & (nvl(col("rule"),lit(False)) != True)) ) Output: +-------+-------+----+ |is_flag|country|rule| +-------+-------+----+ |true |US |NULL| |false |CA |NULL| +-------+-------+----+ | 2 | 3 |
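The precedence pitfall in the answer above is plain Python behaviour, not something PySpark-specific; a tiny illustration of why the extra parentheses change the result:

# & binds tighter than comparison operators such as !=
print(1 & 2 == 2)    # parsed as (1 & 2) == 2  ->  0 == 2  ->  False
print(1 & (2 == 2))  # 1 & True                ->  1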
78,649,010 | 2024-6-20 | https://stackoverflow.com/questions/78649010/how-to-pass-optimization-options-such-as-read-only-true-to-pandas-read-excel | I want to use pandas.read_excel to read an Excel file with the option engine="openpyxl". However, I also want to pass additional optimization options to openpyxl such as: read_only=True data_only=True keep_links=False How do I do this? | These options are already applied by default. As of pandas 2.2, the openpyxl reader's load_workbook looks like this: def load_workbook( self, filepath_or_buffer: FilePath | ReadBuffer[bytes], engine_kwargs ) -> Workbook: from openpyxl import load_workbook default_kwargs = {"read_only": True, "data_only": True, "keep_links": False} return load_workbook( filepath_or_buffer, **(default_kwargs | engine_kwargs), ) | 2 | 6 |
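Because the reader shown above merges engine_kwargs over those defaults, any of them can still be overridden from read_excel; a small sketch (the filename is made up, and it assumes a pandas version whose read_excel exposes the documented engine_kwargs parameter):

import pandas as pd

# read_only=True, data_only=True, keep_links=False are already the defaults;
# engine_kwargs only needs values you want to change, e.g. keeping external links:
df = pd.read_excel(
    "workbook.xlsx",
    engine="openpyxl",
    engine_kwargs={"keep_links": True},
)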
78,645,965 | 2024-6-20 | https://stackoverflow.com/questions/78645965/why-is-the-bat-board-not-moving-up-or-down-in-the-pong-game-when-the-screen-lis | Using the turtle module to make a pong game. For this part, when I press the up/down keys the board doesn't respond to the onkey listen function. I created the board/bat (or bat specs function from turtle called bat_specs_p2() ) and set the positions of the board/bat (x and y) on the screen. All is fine at this point. I know that the y-coord will have to change once the up/down keys are pressed. Testing with print shows that the y-coords are been updated everytime up/down is pressed, but the board/bat is not moving on the screen. I think it is still holding the starting position of the board/bat, somewhere the y-coord is not updating Bat class from turtle import Turtle P1_x = -200 P1_y = 0 P2_x = 200 P2_y = 0 class Bat: def __init__(self): self.p2_position = (P2_x, P2_y) self.p1_position = (P1_x, P1_y) def bat_specs_p2(self): global P2_y bat = Turtle() bat.penup() bat.shape("square") bat.color("white") bat.shapesize(stretch_wid=4, stretch_len=1) bat.setposition(self.p2_position) def p2move_up(self): global P2_y P2_y += 20 self.bat_specs_p2() print(P2_x, P2_y) def p2move_down(self): global P2_y P2_y -= 20 self.bat_specs_p2() print(P2_x, P2_y) main.py from turtle import Screen import time from bat import Bat screen = Screen() screen.setup(width=600, height=600) screen.bgcolor("black") screen.title("Pong Game") screen.tracer(0) bat = Bat() bat.bat_specs_p2() screen.listen() screen.onkey(bat.p2move_up, "Up") screen.onkey(bat.p2move_down, "Down") screen.update() screen.exitonclick() | You have two correctness problems: If you use tracer(0) to disable turtle's control of the rendering loop, you need to call screen.update() whenever you want to perform a redraw of the canvas. You're changing variables, but those variables aren't associated with any turtle object, so they're meaningless as far as turtle is concerned. Turtles have internal position variables, so you don't need to track them separately. bat.setposition(self.p2_position) is an attempt at connecting them, but self.p2_position = (P2_x, P2_y) copies the values of P2_x and P2_y, so when they change globally, the tuple doesn't update (tuples are immutable and primitives are not pass-by-reference). Beyond that, your design is a little unusual. Avoid global, especially when you're trying to write a class that should have instance-specific state. The Bat class should only be responsible for one player's bat. Make separate instances of the class for each bat, along with self. for all dynamic state. Here's a simple example: from turtle import Screen, Turtle class Bat: def __init__(self, y=0, x=0, speed=20): self.speed = speed self.t = t = Turtle() t.penup() t.shape("square") t.color("white") t.shapesize(stretch_wid=4, stretch_len=1) t.setposition((x, y)) def move_up(self): self.t.sety(self.t.ycor() + self.speed) def move_down(self): self.t.sety(self.t.ycor() - self.speed) screen = Screen() screen.tracer(0) screen.setup(width=600, height=600) screen.bgcolor("black") screen.title("Pong Game") screen.listen() p1 = Bat(x=200) p2 = Bat(x=-200) def with_update(action): def update(): screen.update() action() return update screen.onkey(with_update(p1.move_up), "Up") screen.onkey(with_update(p1.move_down), "Down") screen.onkey(with_update(p2.move_up), "w") screen.onkey(with_update(p2.move_down), "s") screen.update() screen.exitonclick() Now, this isn't great for real-time movement. 
If you need smooth movement and/or support for multiple key presses at once, you'll want to use ontimer as described in How to bind several key presses together in turtle graphics? to run an update loop independent of the key handlers. Applying those techniques to pong, see Using 2 onkeypress-es (with a thread/process) in Python Turtle. | 2 | 2 |
78,646,217 | 2024-6-20 | https://stackoverflow.com/questions/78646217/tkinter-updating-window-while-calculating-otherthings | I am triying to write sudoku solver. This is really complicated code for me. I want to update board while python calculating other things. However, code could not do that. Should I try threading or is there easy way to do that? CURRENT SITUATION: end of the calculation. I am inserting values. Then, I click solve. I am changing text of label (via code), but label waits untill end of calculations and suddenly applies labels' changes. related codes in "def solve() > def set_text(), def check()" my complete code: import tkinter from copy import deepcopy import time import threading window = tkinter.Tk() window.title("Sudoku Solver") window.config(padx=30, pady=30) zerolist = [] # adΔ± posibles olacak entrylist = [] exactvals = [] labellist = [] def create000(): global zerolist global exactvals for i in range(9): zerolistemp = [] exectemp = [] for j in range(9): zerolistemp.append([]) exectemp.append(0) zerolist.append(deepcopy(zerolistemp)) exactvals.append(deepcopy(exectemp)) zerolistemp.clear() exectemp.clear() return zerolist # def solve(): # def check(i,j): # global zerolist # for i in range(9): # for j in range(9): # for element in zerolist[i][j]: # if j == 9: def solve(): def set_text(x, y, text): time.sleep(1) global labellist if str(text) == "": labellist[x][y].config(text=str(text), background="gray") elif len(zerolist[x][y]) == 1: labellist[x][y].config(text=str(text), background="lightgreen") elif text != 0: labellist[x][y].config(text=str(text), background="red") return def check(i, j): print("check") global zerolist global exactvals columnpart = int(i / 3) rowpart = int(j / 3) for element in zerolist[i][j]: print(element) set_text(i, j, element) time.sleep(0.5) # elementi lable a yaz kΔ±rmΔ±zΔ± yap # 3x3 for m in range(3): for n in range(3): if element == exactvals[columnpart + m][rowpart + n]: zerolist[i][j].pop(element) set_text(i, j, "") return False if element in exactvals[i] or element in [row[j] for row in exactvals]: zerolist[i][j].pop(element) set_text(i, j, "") return False exactvals[i][j] = element if j != 8: if check(i, j + 1) == False: exactvals[i][j] = 0 zerolist[i][j].pop(element) set_text(i, j, "") return False elif i != 8: if check(i + 1, 0) == False: exactvals[i][j] = 0 zerolist[i][j].pop(element) set_text(i, j, "") return False else: # write yeΕil set_text(i, j, element) return True check(0, 0) return exactvals def collect_data(): # collec ederken zaten var i j kullanΔ±p uctan ekleyebiliriz global zerolist global exactvals global labellist for i in range(9): for j in range(9): # yatay liste deΔiΕtiriyor if len(entrylist[i][j].get()) != 0: value = [int(entrylist[i][j].get())] exc = int(entrylist[i][j].get()) labellist[i][j].config(text=exc, background="lightgreen") else: value = [x + 1 for x in range(9)] exc = 0 zerolist[i][j] = value exactvals[i][j] = exc solve() return zerolist for row in range(9): tempentry = [] for column in range(9): a = tkinter.Entry(width=3) a.grid(column=column, row=row) tempentry.append(a) entrylist.append(tempentry) emptylabel = tkinter.Label(width=3) emptylabel.grid(column=9,row=0,rowspan=9) for row in range(9): templabel = [] for column in range(10, 19): a = tkinter.Label(width=3,borderwidth=2, relief="groove") a.grid(column=column, row=row) templabel.append(a) labellist.append(templabel) but_solve = tkinter.Button(command=collect_data, text="SOLVE", width=9, pady=5) but_solve.grid(columnspan=3, row=9, column=3) 
create000() window.mainloop() I want to see changes in real-time while Python calculates other things. I tried threading like this (screenshot: https://i.sstatic.net/ffyYu36t.png), but tkinter does not respond, i.e. it stops. | Add a window.update() in your set_text function. This should update the tkinter window and show you the labels in real-time. | 4 | 4 |
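A self-contained toy example of the window.update() fix suggested in the answer, independent of the sudoku code (the widget names here are made up):

import time
import tkinter

window = tkinter.Tk()
label = tkinter.Label(window, text="waiting")
label.pack()

def slow_computation():
    for i in range(10):
        time.sleep(0.3)              # stand-in for the solver's work
        label.config(text=str(i))
        window.update()              # redraw now instead of after the loop finishes

tkinter.Button(window, text="run", command=slow_computation).pack()
window.mainloop()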
78,645,930 | 2024-6-20 | https://stackoverflow.com/questions/78645930/how-can-i-find-the-first-row-after-a-number-of-duplicated-rows | My DataFrame is: import pandas as pd df = pd.DataFrame( { 'x': ['a', 'a', 'a','b', 'b','c', 'c', 'c',], 'y': list(range(8)) } ) And this is the expected output. I want to create column z: x y z 0 a 0 NaN 1 a 1 NaN 2 a 2 NaN 3 b 3 3 4 b 4 NaN 5 c 5 NaN 6 c 6 NaN 7 c 7 NaN The logic is: I want to find the first row after the first group of duplicated rows. For example in column x, the value a is the first duplicated value. I want to find one row after the a values end. And then put the y of that row for z column. This is my attempt that did not give me the output: m = (df.x.duplicated()) out = df[m] | One option, using a custom mask: # flag rows after the first group m = df['x'].ne(df['x'].iat[0]).cummax() # pick the first one out = df[m & ~m.shift(fill_value=False)] If your first value is always a and you want to find the first non-a you could also use: m2 = df['x'].eq('a') out = df[m2.shift(fill_value=False) & ~m2] Or, if you're sure there is at least one row after the leading as: out = df.loc[[df['x'].ne('a').idxmax()]] Output: x y 3 b 3 Some intermediates (all approaches): x y m ~m.shift(fill_value=False) m2 m2.shift(fill_value=False) df['x'].ne('a') 0 a 0 False True True False False 1 a 1 False True True True False 2 a 2 False True True True False 3 b 3 True True False True True 4 b 4 True False False False True 5 c 5 True False False False True 6 c 6 True False False False True 7 c 7 True False False False True | 3 | 2 |
78,645,653 | 2024-6-20 | https://stackoverflow.com/questions/78645653/typing-with-typevar-converts-a-type-to-an-object | I am trying to implement a generator that will return a pair of a sequence element and a boolean value indicating whether the element is the last one. from collections.abc import Generator, Iterable from itertools import chain, tee from typing import TypeVar _T1 = TypeVar('_T1') _MISSING = object() def pairwise(iterable: Iterable[_T1]) -> Iterable[tuple[_T1, _T1]]: # See https://docs.python.org/3.9/library/itertools.html#itertools-recipes a, b = tee(iterable) next(b, None) return zip(a, b) def annotated_last(sequence: Iterable[_T1]) -> Generator[tuple[_T1, bool], Any, None]: for current_item, next_item in pairwise(chain(sequence, [_MISSING])): is_last = next_item is _MISSING yield current_item, is_last # <-- mypy error However, mypy returns this error: Incompatible types in "yield" (actual type "tuple[object, bool]", expected type "tuple[_T1, bool]") Tell me how to correctly annotate types in these functions. I am using Python version 3.9.19 | This is because you created a chain iterator like so: chain(sequence, [_MISSING]), and the type inference has to infer the most generic type from these arguments, but _MISSING is object, so it has to be an iterator of object. Note, you can implement the function you want with the signature you want straightforwardly (albeit, less elegantly) doing something like: from collections.abc import Iterator, Iterable from typing import TypeVar _T1 = TypeVar('_T1') def annotated_last(sequence: Iterable[_T1]) -> Iterator[tuple[_T1, bool]]: it = iter(sequence) try: previous = next(it) except StopIteration: return for current in it: yield previous, False previous = current yield previous, True | 2 | 3 |
78,645,378 | 2024-6-20 | https://stackoverflow.com/questions/78645378/how-to-write-type-hint-for-decorated-implemented-by-a-class | Here's a classical example of a decorator implemented by a class: class Decorator: def __init__(self, func): self.func = func def __call__(self, *args, **kwargs): self.func(*args, **kwargs) How to make __call__ have the same signature and type hints as func has? I've tried the following code: from typing import Callable, TypeVar, ParamSpec, Generic PT = ParamSpec('PT') RT = TypeVar('RT') class Decorator(Generic[PT, RT]): def __init__(self, func: Callable[PT, RT]) -> None: self.func = func def __call__(self, *args: PT.args, **kwargs: PT.kwargs) -> RT: return self.func(*args, **kwargs) @Decorator def add(x: int, y: int) -> int: return x + y But I failed to get the correct argument list of add in PyCharm. Is it PyCharm's fault? | You want to explicitly replace the function with a callable with a specific signature, instead of replacing the function with an instance of a callable class that works the same but does not share the signature. This shows the difference: from typing import Callable, TypeVar, ParamSpec, Generic PT = ParamSpec('PT') RT = TypeVar('RT') class Decorator(Generic[PT, RT]): def __init__(self, func: Callable[PT, RT]) -> None: self.func = func def __call__(self, *args: PT.args, **kwargs: PT.kwargs) -> RT: return self.func(*args, **kwargs) def decorator(func: Callable[PT, RT]) -> Callable[PT, RT]: return Decorator(func) @decorator def add_func_decorated(x: int, y: int) -> int: return x + y @Decorator def add_class_decorated(x: int, y: int) -> int: return x + y print(add_func_decorated(1, 2)) print(add_class_decorated(1, 2)) You will will find that both type hints and type checking work correctly for add_func_decorated, in PyCharm as well. While, for add_class_decorated type checking works, but the type hint displays as you showed in your question. | 2 | 2 |
78,645,337 | 2024-6-20 | https://stackoverflow.com/questions/78645337/python-failed-to-initialize-a-2d-array-as-class-attributes | I am implementing a transition tables as a class attribute with, class Phase: num_states = 15 transition_table = [[False for _ in range(num_states)] for _ in range(num_states)] but it failed with NameError: name 'num_states' is not defined. However, 1d array works as expected, class Phase: num_states = 15 transition_table = [False for _ in range(num_states)] # this works I was wondering why this is the case, as num_states is defined before transition_table and it should have access to the previous one? Edit: this is due to Python's implement of class scopes; Use class attribs in outer scope for loop will not give an error, but inner scope will. There is a link in the comment that explains this in detail. | It looks like num_states is out of scope from the class definition. You can workaround this by passing the class variable into a lambda and run the list comprehension inside the lambda class Phase: num_states = 15 transition_table = (lambda x=num_states: [[False for _ in range(x)] for _ in range(x)])() | 2 | 2 |
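Not part of the accepted answer, but consistent with the question's edit about class scopes: a plain for loop in the class body runs directly in the class namespace, so it can see num_states even though the inner scope of a nested comprehension cannot. A minimal sketch:

```python
class Phase:
    num_states = 15
    transition_table = []
    for _ in range(num_states):      # the loop body executes in the class namespace
        transition_table.append([False] * num_states)
    del _                            # optional: drop the loop variable from the class
```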
78,645,106 | 2024-6-20 | https://stackoverflow.com/questions/78645106/get-distance-from-a-point-to-the-nearest-box | I have a 3D space where positions are stored as tuples, eg: (2, 0.5, -4). If I want to know the distance between two points I just do dist = (abs(x1 -x2), abs(y1 - y2), abs(z1 - z2)) and if I want a radius distf = (dist[0] + dist[1] + dist[2]) / 3. Now I have boxes each defined by two min / max positions (eg: (-4 8 -16) to (4, 12, 6)) and I want to know the distance between my point to the closest one: What is the simplest way to know the distance to the closest face in all 3 directions, or 0 in case the position is inside a box? Just looking for the lightest solution that doesn't require numpy or libraries other than defaults like math since I'm not using those in my project. This is my messy solution which should probably work but I'd like to know if there's anything better. point = (8, 12, 16) box_min = (-4, -4, -4) box_max = (4, 4, 4) box_center = ((box_min[0] + box_max[0]) / 2, (box_min[1] + box_max[1]) / 2, (box_min[2] + box_max[2]) / 2) box_scale = (abs(box_max[0] - box_min[0]), abs(box_max[1] - box_min[1]), abs(box_max[2] - box_min[2])) dist = (abs(box_center[0] - point[0]) + box_scale[0] / 2, abs(box_center[1] - point[1]) + box_scale[1] / 2, abs(box_center[2] - point[2]) + box_scale[2] / 2) | You can use (dx ** 2 + dy ** 2 + dz ** 2) ** 0.5 for calculating the distance: def _dist(A, B, C): dx = max(B[0] - A[0], 0, A[0] - C[0]) dy = max(B[1] - A[1], 0, A[1] - C[1]) dz = max(B[2] - A[2], 0, A[2] - C[2]) return (dx ** 2 + dy ** 2 + dz ** 2) ** 0.5 def _dist_b(A, B, C): dx = max(B[0] - A[0], 0, A[0] - C[0]) dy = max(B[1] - A[1], 0, A[1] - C[1]) dz = max(B[2] - A[2], 0, A[2] - C[2]) return (dx, dy, dz) print(_dist((8, 12, 16), (-4, -4, -4), (4, 4, 4))) print(_dist_b((8, 12, 16), (-4, -4, -4), (4, 4, 4))) Prints 14.966629547095765 (4, 8, 12) | 2 | 1 |
78,645,037 | 2024-6-20 | https://stackoverflow.com/questions/78645037/how-do-i-perform-pandas-cumsum-while-skipping-rows-that-are-duplicated-in-anothe | I am trying to use the pandas.cumsum() function, but in a way that ignores rows with a value in the ID column that is duplicated and specifically only adds the last value to the cumulative sum, ignoring all earlier values. Example code below (I couldn't share the real code, which is for work). import pandas as pd, numpy as np import random as rand id = ['a','b','c','a','b','e','f','a','b','k'] value = [12,14,3,13,16,7,4,6,10,18] df = pd.DataFrame({'id':id, 'value':value}) df["cumsum_of_value"] = df['value'].cumsum() df["desired_output"] = [ 12,26,29,30,32,39,43,36,30,48 ] df["comments"] = [""]*len(df) df.loc[df.index==0, "comments"]="standard cumsum" df.loc[df.index==1, "comments"]="standard cumsum" df.loc[df.index==2, "comments"]="standard cumsum" df.loc[df.index==3, "comments"]="cumsum of rows 1-3, ignore row 0" df.loc[df.index==4, "comments"]="cumsum of rows 2-4, ignore rows 0, 1" df.loc[df.index==5, "comments"]="cumsum of rows 2-5, ignore rows 0, 1" df.loc[df.index==6, "comments"]="cumsum of rows 2-6, ignore rows 0, 1" df.loc[df.index==7, "comments"]="cumsum of rows 2,4-7, ignore rows 0, 1, 3" df.loc[df.index==8, "comments"]="cumsum of rows 2,5-8, ignore rows 0, 1, 3, 4" df.loc[df.index==9, "comments"]="cumsum of rows 2,5-9, ignore rows 0, 1, 3, 4" print(df) In this example, there are seven (7) unique values in the ID column (a, b, c ,d, e, f, g), so the cumsum should only ever sum a max of seven (7) records as its output on any row. Is this possible using combinations of functions such as cumsum(), groupby(), duplicated(), drop_duplicates(), and avoiding the use of an iterative loop? I've tried the below df["duped"] = np.where(df["id"].duplicated(keep='last'),0,1) df["value_duped"] = df["duped"] * df["value"] df["desired_output_attempt"] = df["cumsum_of_value"] - df["value_duped"] But it doesn't come close to the correct answer. I can't think of how to get something like this to result in the desired output without iterating. | Try: df["out"] = ( df.groupby("id")["value"].transform("diff").fillna(df["value"]).cumsum().astype(int) ) print(df) Prints: id value cumsum_of_value desired_output out 0 a 12 12 12 12 1 b 14 26 26 26 2 c 3 29 29 29 3 a 13 42 30 30 4 b 16 58 32 32 5 e 7 65 39 39 6 f 4 69 43 43 7 a 6 75 36 36 8 b 10 85 30 30 9 k 18 103 48 48 | 11 | 8 |
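A short note on why the accepted answer works (my own explanation, not the answerer's): transform("diff") turns every repeated id into the change relative to its previous value, so the running total effectively swaps the old contribution for the latest one. Checking the intermediate against the question's data:

```python
delta = df.groupby("id")["value"].transform("diff").fillna(df["value"])
print(delta.tolist())
# [12.0, 14.0, 3.0, 1.0, 2.0, 7.0, 4.0, -7.0, -6.0, 18.0]
# rows 3, 4, 7, 8 are diffs within their id group: 13-12, 16-14, 6-13, 10-16
print(delta.cumsum().astype(int).tolist())
# [12, 26, 29, 30, 32, 39, 43, 36, 30, 48]   # matches the desired output
```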
78,642,891 | 2024-6-19 | https://stackoverflow.com/questions/78642891/create-array-by-combining-neighbouring-pairs-of-items | I have the following array of four elements: arr = [{"location": 10, "value": 50}, {"location": 21, "value": 70}, {"location": 33, "value": 20}, {"location": 48, "value": 0}] I would like to create a new array of three elements combining adjacent items: [ {index 0, 1}, {index 1, 2}, {index 2, 3} ] The function to run on each pair is simple: def combine(current, next): return (next["location"] - current["location"]) * current["value"] Obviously, this can be done using a loop, but is there a more pythonic way of achieving this? I would like to generate the following output: [550, 840, 300] // (21 - 10) * 50, (33 - 21) * 70, (48 - 33) * 20 | You can use zip(arr, arr[1:]): def get_combine_neighbors(A): _comb = lambda x, y: (y["location"] - x["location"]) * x["value"] return [_comb(x, y) for x, y in zip(A, A[1:])] A = [{"location": 10, "value": 50}, {"location": 21, "value": 70}, {"location": 33, "value": 20}, {"location": 48, "value": 0}] print(get_combine_neighbors(A)) Prints [550, 840, 300] | 2 | 3 |
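On Python 3.10+ the same windowing is available from the standard library as itertools.pairwise, which could replace the zip(A, A[1:]) slice (a sketch, not part of the accepted answer):

```python
from itertools import pairwise  # Python 3.10+

def get_combine_neighbors(A):
    return [(y["location"] - x["location"]) * x["value"] for x, y in pairwise(A)]
```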
78,641,381 | 2024-6-19 | https://stackoverflow.com/questions/78641381/display-pdf-without-pymupdf | Is there a way to display a PDF (only a single page, if that matters) in Python, without using the PyMuPDF library? I want to export my project via PyInstaller and including PyMuPDF increases the file size from ~40 to ~105 MB. Since I only want to display my pdf and don't need any of the advanced functionalities of PyMuPDF (or any editing/manipulation at all), I was wondering if there was a way to do so without so much overhead. The PDF is created during runtime with ReportLab. | I got it to work with the pypdfium2 package, which only adds ~3MB to the exported .exe instead of the ~60MB from mupdf. import pypdfium2 as pdfium image_buffer = io.BytesIO() doc2 = pdfium.PdfDocument(input=pdf_buffer, autoclose=True) doc2[0].render().to_pil().save(image_buffer, format='PNG') image_bytes = image_buffer.getvalue() Here, both pdf_buffer and image_buffer are io.Bytes() for the pdf file input and the image file output respectively. However, setting this up for PyInstaller took bit: In the .spec file for the PyInstaller script, you need to manually add 3 files in order for pypdfium2 to properly work. In the a=Analysis(...) call you need to add these 3 lines to the datas argument: datas=[ (f'{site_packages_location}/pypdfium2_raw/pdfium.dll', 'pypdfium2_raw'), (f'{site_packages_location}/pypdfium2_raw/version.json', 'pypdfium2_raw'), (f'{site_packages_location}/pypdfium2/version.json', 'pypdfium2') ] where site_packages_location = f"{os.getenv('LOCALAPPDATA')}/Programs/Python/Python312/Lib/site-packages/" This answer has more info on that. | 4 | 2 |
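Instead of listing the pypdfium2 files by hand, PyInstaller's hook utilities may be able to collect them automatically; this is a sketch based on the documented collect_data_files / collect_dynamic_libs helpers and has not been verified against this particular package layout:

```python
# in the .spec file
from PyInstaller.utils.hooks import collect_data_files, collect_dynamic_libs

datas = collect_data_files('pypdfium2') + collect_data_files('pypdfium2_raw')
binaries = collect_dynamic_libs('pypdfium2_raw')   # should pick up pdfium.dll
# then pass datas=datas, binaries=binaries to the Analysis(...) call
```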
78,642,298 | 2024-6-19 | https://stackoverflow.com/questions/78642298/check-following-element-in-list-in-pandas-dataframe | I have created the following pandas dataframe import pandas as pd import numpy as np ds = { 'col1' : [ ['U', 'U', 'U', 'U', 'U', 1, 0, 0, 0, 'U','U', None], [6, 5, 4, 3, 2], [0, 0, 0, 'U', 'U'], [0, 1, 'U', 'U', 'U'], [0, 'U', 'U', 'U', None] ] } df = pd.DataFrame(data=ds) The dataframe looks like this: print(df) col1 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 [6, 5, 4, 3, 2] 2 [0, 0, 0, U, U] 3 [0, 1, U, U, U] 4 [0, U, U, U, None] For each row in col1, I need to check if every element equals to U in the list is followed (from left to right) by any value apart from U and None: in that case I'd create a new column (called iCount) with value of 1. Else 0. In the example above, the resulting dataframe would look like this: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 Only in the first row the value U is followed by a value which is neither U nor None (it is 1) I have tried this code: col5 = np.array(df['col1']) for i in range(len(df)): iCount = 0 for j in range(len(col5[i])-1): print(col5[i][j]) if((col5[i][j] == "U") & ((col5[i][j+1] != None) & (col5[i][j+1] != "U"))): iCount += 1 else: iCount = iCount But I get this (wrong) dataframe: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 0 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 Can anyone help me please? | If you only want to test if there is at least one case in which a non-None follow a U, use itertools.pairwise and any: from itertools import pairwise def count_after_U(lst): return int(any(a=='U' and b not in {'U', None} for a, b in pairwise(lst))) df['iCount'] = list(map(count_after_U, df['col1'])) Output: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 5 [U, U, 4, U, U, 1, 0, U, U, None, 1, U, None] 1 6 [U, None, 1, U] 0 If you also want to check the other values until the next U, use a custom function: def any_after_U(lst): flag = False for item in lst: if item == 'U': flag = True else: if flag and item is not None: return 1 return 0 df['iCount'] = list(map(any_after_U, df['col1'])) Example: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 5 [U, U, 4, U, U, 1, 0, U, U, None, 1, U, None] 1 6 [U, None, 1, U] 1 original answer before clarification approach 1: considering only the first item after U IIUC, use a custom python function: from itertools import pairwise def count_after_U(lst): return sum(a=='U' and b not in {'U', None} for a,b in pairwise(lst)) df['iCount'] = list(map(count_after_U, df['col1'])) Or, to be more flexible with the conditions: def count_after_U(lst): flag = False iCount = 0 for item in lst: if item == 'U': flag = True else: if flag and item is not None: iCount += 1 flag = False return iCount df['iCount'] = list(map(count_after_U, df['col1'])) Output: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 More complex example: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 5 [U, U, 4, U, U, 1, 0, U, U, None, 1, U, None] 2 approach 2: considering all values after U: Just indent the flag reset in the previous 
approach to only reset it if a value was not yet found: def count_after_U(lst): flag = False iCount = 0 for item in lst: if item == 'U': flag = True else: if flag and item is not None: iCount += 1 flag = False return iCount df['iCount'] = list(map(count_after_U, df['col1'])) Example: col1 iCount 0 [U, U, U, U, U, 1, 0, 0, 0, U, U, None] 1 1 [6, 5, 4, 3, 2] 0 2 [0, 0, 0, U, U] 0 3 [0, 1, U, U, U] 0 4 [0, U, U, U, None] 0 5 [U, U, 4, U, U, 1, 0, U, U, None, 1, U, None] 3 | 4 | 2 |
78,640,519 | 2024-6-19 | https://stackoverflow.com/questions/78640519/remove-items-from-list-starting-with-a-list-of-prefixes | I have a list of strings and a list of prefixes. I want to remove all elements from the list of strings that start with a prefix from the list of prefixes. I used a for loop, but why doesn't it seem to work? list_of_strings = ['test-1: foo', 'test-2: bar', 'test-3: cat'] list_of_prefixes = ['test1', 'test-2'] final_list = [] for i in list_of_strings: for j in list_of_prefixes: if not i.startswith(j): final_list.append(i) print(list(set(final_list))) Currently the output is ['test-3: cat', 'test-1: foo', 'test-2: bar'] The output I want to get is final_list = ['test-3: cat'] | Your approach doesn't work because you potentially perform an append for each element in list_of_prefixes, but if the string does start with one of the prefixes, it's guaranteed to not start with one of the others, so they all get added. With list comprehensions, generator expressions, and any, this is very straightforward. >>> list_of_strings = ['test-1: foo', 'test-2: bar', 'test-3: cat'] >>> list_of_prefixes = ['test1', 'test-2'] >>> filtered = [ ... s ... for s in list_of_strings ... if not any(s.startswith(p) for p in list_of_prefixes) ... ] >>> filtered ['test-1: foo', 'test-3: cat'] Note that 'test-1: foo' does not start with 'test1' or 'test-2'. If you meant for the list_of_prefixes to include 'test-1' then you would get the output you expect. | 5 | 2 |
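str.startswith also accepts a tuple of prefixes, so the inner any() can be dropped; an equivalent one-liner with the same behaviour as the accepted answer:

```python
prefixes = tuple(list_of_prefixes)
filtered = [s for s in list_of_strings if not s.startswith(prefixes)]
```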
78,640,057 | 2024-6-19 | https://stackoverflow.com/questions/78640057/matplotlib-colormap-not-showing-in-legend | I was working on this introduction to geospatial data analysis with Python. I've replicated each line of code in my own Jupyter notebook and have obtained the same results except for the last graph. The code for the graph is: fig, ax = plt.subplots(1, figsize=(20,20)) base = country[country['NAME'].isin(['Alaska','Hawaii']) == False].plot(ax=ax, color='#3B3C6E') florence.plot(ax=base, column='Wind', marker="<", markersize=10, cmap='cool', label="Wind speed(mph)") _ = ax.axis('off') plt.legend() ax.set_title("Hurricane Florence in US Map", fontsize=25) plt.savefig('Hurricane_footage.png',bbox_inches='tight') and should yield the following graph : However, when I copy that line of code in my notebook the legend is not working : I thought it was my version of matplotlib so I updated it, but it's still not working. I don't see what else could be wrong. | They must be doing some classification by quantiles that wasn't covered in their tutorial : fig, ax = plt.subplots(1, figsize=(20, 20)) base = ( country[country["NAME"].isin(["Alaska", "Hawaii"]) == False].plot( ax=ax, color="#3B3C6E" ) ) florence.plot( ax=base, column="Wind", marker="<", markersize=10, cmap="cool", scheme="Quantiles", legend=True, legend_kwds={"title": "Wind speed(mph)"}, ) _ = ax.axis("off") ax.set_title("Hurricane Florence in US Map", fontsize=25) plt.savefig("Hurricane_footage.png", dpi=300, bbox_inches="tight") | 2 | 1 |
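One caveat worth adding (mine, not the answerer's): the scheme= argument delegates the quantile classification to the optional mapclassify package, so it must be installed, and the number of classes can be tuned through classification_kwds:

```python
# requires: pip install mapclassify
florence.plot(ax=base, column="Wind", marker="<", markersize=10, cmap="cool",
              scheme="Quantiles", classification_kwds={"k": 5},
              legend=True, legend_kwds={"title": "Wind speed(mph)"})
```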
78,639,491 | 2024-6-18 | https://stackoverflow.com/questions/78639491/how-to-shutdown-resources-using-dependency-injector | I'm using dependency_injector to manage DI. I don't understand how to release my resources using this library. I found shutdown_resources method but have no idea how to use it properly. Example: class Resource: """Resource example.""" def __init__(self): """.""" # Initialize session def close(self): """Release resources.""" # Close session class ApplicationContainer(DeclarativeContainer): """Application container.""" resource: Singleton[Resource] = Singleton[Resource](Resource) container = ApplicationContainer() # Do something container.shutdown_resources() # Call close method here | It took some time but I found out. An initialization generator should be used for these purposes instead of a default constructor: from dependency_injector import containers, providers class Resource: """Resource example.""" def __init__(self): """.""" # Initialize session def close(self): """Release resources.""" # Close session def init_resource() -> None: resource = Resource() yield resource resource.close() class ApplicationContainer(containers.DeclarativeContainer): """Application container.""" resource = providers.Resource(init_resource) container = ApplicationContainer() # Do something container.shutdown_resources() # Will call resource.close() here UPD: I found even more clean and concise way to do it! You can inherit your resource from resources.Resource as mentioned here and override methods init and shutdown. This methods will be called on calling init_resources and shutdown_resources from container. from dependency_injector import resources, containers class Resource(resources.Resource): """Resource example.""" def init(self) -> Session: return create_session(...) def shutdown(self, session: Session) -> None: session.close() class ApplicationContainer(containers.DeclarativeContainer): """Application container.""" resource: resources.Resource[Resource] = resources.Resource[Resource](Resource) if __name__ == "__main__": container = ApplicationContainer() container.init_resources() # init will be called here container.shutdown_resources() # shutdown will be called here | 2 | 1 |
78,635,838 | 2024-6-18 | https://stackoverflow.com/questions/78635838/bundling-python-app-compiled-with-cython-with-pyinstaller | Problem I have an application which is bundled with pyinstaller. Now a new feature request is, that parts are compiled with cyphon to c libraries. After the compilation inside the activated virtual environment (poetry) the app runs as expected. BUT, when I bundle it with pyinstaller the executable afterwards can't find packages which are not imported in the main.py file. With my understanding, this is totally fine, because the Analysis stage of the pyinstaller can't read the conntent of the compiled c code ( In the following example modules/test/test.py which is available for the pyinstaller as modules/test/test.cpython-311-x86_64-linux-gnu.so). Folder overview: βββ compile_with_cython.py βββ main.py βββ main.spec βββ main_window.py βββ poetry.lock βββ pyproject.toml main.py import sys from PySide6.QtWidgets import QApplication from main_window import MainWindow if __name__ == '__main__': app = QApplication(sys.argv) mainWin = MainWindow() mainWin.show() sys.exit(app.exec_()) main_window.py MVP PySide6 Application which uses tomllib to load some toml file import sys from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton, QDialog, QVBoxLayout, QTextEdit from PySide6.QtCore import Slot class MainWindow(QMainWindow): def __init__(self): super().__init__() ... Error code ./main Traceback (most recent call last): File "main.py", line 12, in <module> File "modules/test/test.py", line 3, in init modules.test.test ModuleNotFoundError: No module named 'tomllib' [174092] Failed to execute script 'main' due to unhandled exception! | Problem The main problem pyinstaller faces is that it can't follow imports of files/modules compiled by cython. Therefore, it can only resolve and package files & libraries named in main.py, but not in main_window.py. To make it work, we need to specify all imports that are hidden from pyinstaller. I have found two suitable solutions for using pyinstaller with cython compiled binaries. Solution 1: Add any import needed by any script to the main python file, e.g: # imports needed by the main.py file import argparse import logging import sys import time # dummy imports (needed by the main_window.py file) import tomllib import pydantic This will work, but is only suitable for small projects. Moreover the stated imports will be deleted by various linters because the imports are not really used by this file... 
Solution 2 I found the following in the pyinstaller documentation, to get it to work I changed my `.spec' file as follows: a = Analysis( ['main.py'], pathex=[], binaries=[], datas=[], hiddenimports=['tomllib', 'pydantic'], Bonus Since the code above was clearly just an example, and I had a project with hundreds of Python files and libraries, I came up with the following code to automatically generate the contents of the `hiddenimports' variable each time the pipeline builds the package: def find_all_hidden_imports(directory_path: Path) -> set: imports_set = set() for file_path in directory_path.rglob('*.py'): if ".venv" not in str(file_path): imports_set.update(get_imports_of_file(file_path)) return imports_set def get_imports_of_file(file_path: Path) -> set: imports_set = set() with open(file_path, 'r', encoding='utf-8') as file: content = file.read() try: tree = ast.parse(content) for node in ast.walk(tree): if isinstance(node, ast.Import): for name in node.names: imports_set.add(name.name) elif isinstance(node, ast.ImportFrom): if node.module is not None: imports_set.add(node.module) except SyntaxError: print(f"Syntax error in file: {file_path}") return imports_set This set is then converted to the correct list format string and this string is then inserted into the current .spec file... | 5 | 1 |
78,638,290 | 2024-6-18 | https://stackoverflow.com/questions/78638290/client-error-404-not-found-for-url-http-localhost11434-api-chat-while-usi | I am following this tutorial, https://youtu.be/JLmI0GJuGlY?si=eeffNvHjaRHVV6r7&t=1915, and trying to build a simple LLM agent. I am on WSL2, Windows 11, and I am coding in VSC. I use Ollama to download and store my LLMs. My python is 3.9. My script my_main3.py is very simple: from llama_index.llms.ollama import Ollama from llama_parse import LlamaParse from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, PromptTemplate from llama_index.core.embeddings import resolve_embed_model from llama_index.core.tools import QueryEngineTool, ToolMetadata from llama_index.core.agent import ReActAgent from prompts import context from dotenv import load_dotenv load_dotenv() llm = Ollama(model="mistral", request_timeout=30.0) parser = LlamaParse(result_type="markdown") file_extractor = {".pdf": parser} documents = SimpleDirectoryReader("./data", file_extractor=file_extractor).load_data() embed_model = resolve_embed_model("local:BAAI/bge-m3") vector_index = VectorStoreIndex.from_documents(documents, embed_model=embed_model) query_engine = vector_index.as_query_engine(llm=llm) tools = [ QueryEngineTool( query_engine=query_engine, metadata=ToolMetadata( name="api_documentation", description="this gives documentation about code for an API. Use this for reading docs for the API", ), ) ] code_llm = Ollama(model="codellama") agent = ReActAgent.from_tools(tools, llm=code_llm, verbose=True, context=context) # context is from prompts.py while (prompt := input("Enter a prompt (q to quit): ")) != "q": result = agent.query(prompt) print(result) Then I run Python main.py in my Terminal. The script runs well until the while loop. It prompts me to input, then in input: Enter a prompt (q to quit): send a post request to make a new item using the api in Python. It then throws me this error. 
Traceback (most recent call last): File "/home/ubuntu2022/MyUbunDev/210_AI_agent_basic/my_main3.py", line 38, in <module> result = agent.query(prompt) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py", line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py", line 35, in prepare_to_drop_span raise err File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 100, in wrapper result = func(*args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/base/base_query_engine.py", line 51, in query query_result = self._query(str_or_query_bundle) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper return func(self, *args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/types.py", line 40, in _query agent_response = self.chat( File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py", line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py", line 35, in prepare_to_drop_span raise err File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 100, in wrapper result = func(*args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper return func(self, *args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/runner/base.py", line 604, in chat chat_response = self._chat( File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py", line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py", line 35, in 
prepare_to_drop_span raise err File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 100, in wrapper result = func(*args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/runner/base.py", line 539, in _chat cur_step_output = self._run_step( File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 102, in wrapper self.span_drop(*args, id=id, err=e, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 77, in span_drop h.span_drop(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/base.py", line 48, in span_drop span = self.prepare_to_drop_span(*args, id=id, err=err, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/span_handlers/null.py", line 35, in prepare_to_drop_span raise err File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/instrumentation/dispatcher.py", line 100, in wrapper result = func(*args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/runner/base.py", line 382, in _run_step cur_step_output = self.agent_worker.run_step(step, task, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper return func(self, *args, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/react/step.py", line 653, in run_step return self._run_step(step, task) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/agent/react/step.py", line 463, in _run_step chat_response = self._llm.chat(input_chat) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/core/llms/callbacks.py", line 130, in wrapped_llm_chat f_return_val = f(_self, messages, **kwargs) File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/llama_index/llms/ollama/base.py", line 105, in chat response.raise_for_status() File "/home/ubuntu2022/miniconda/envs/llm/lib/python3.9/site-packages/httpx/_models.py", line 761, in raise_for_status raise HTTPStatusError(message, request=request, response=self) httpx.HTTPStatusError: Client error '404 Not Found' for url 'http://localhost:11434/api/chat' For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404 I checked my Edge browser, http://localhost:11434/ is running Ollama. Is this causing the clash? And Noticed I have never set up that http://localhost:11434/api/chat endpoint in my script. | Someone posted further down in the comments on the video. I had this same issue. @HodBuri 1 month ago Error 404 not found - local host - api - chat [FIX] If anyone else gets an error like that when trying to run the llamacode agent, just run the llamacode llm in terminal to download it, as it did not download it automatically for me at least as he said around 29:11 So similar to what he showed at the start with Mistral: ollama run mistral. You can run this in a new terminal to download codellama: ollama run codellama After running the line above in a new terminal I kept it up and reran the main.py in the terminal I was previously working in and it worked | 4 | 2 |
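A small addition to the accepted answer: the model can also be fetched without opening an interactive chat session, which may be more convenient in scripts (standard Ollama CLI commands):

```
ollama pull codellama   # download the model only
ollama list             # confirm which models are installed locally
```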
78,636,327 | 2024-6-18 | https://stackoverflow.com/questions/78636327/pyinstaller-in-virtual-environment-still-yields-very-large-exe-file | I have a Python code of 78 lines using the following packages: import pandas as pd import base64 from bs4 import BeautifulSoup import os import win32com.client as win32 import pathlib I ran the following commands: venv\Scripts\activate python -m pip install pandas python -m pip install pybase64 python -m pip install bs4 python -m pip install pywin32 python -m pip install pyinstaller pyinstaller --onefile to_HTML.py I have created a virtual environment in which I installed only the above packages. Yet the EXE file created is 740Mb!! What am I doing wrong? How can I reduce it? | It doesn't look like you've installed pyinstaller within your virtual environment. I suspect it's attempting to use your global pyinstaller, which may be attempting to wrap any other packages you've installed globally. Try this: venv\Scripts\activate python -m pip install pyinstaller # install other dependencies as needed venv\Scripts\pyinstaller.exe --onefile to_HTML.py Based on my quick test case, I get a very different file size when running with the virtual environment pyinstaller vs. the global one. | 2 | 2 |
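Another way to guarantee the virtual environment's copy is used (equivalent to calling venv\Scripts\pyinstaller.exe) is to run PyInstaller as a module with the venv's interpreter:

```
venv\Scripts\activate
python -m pip install pyinstaller
python -m PyInstaller --onefile to_HTML.py
```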
78,639,613 | 2024-6-18 | https://stackoverflow.com/questions/78639613/from-a-list-containing-letters-and-numbers-how-do-i-create-a-list-of-the-positi | I have a list created from a list(input()), that contains letter and numbers, like ['F', 'G', 'H', '1', '5', 'H'] I don't know the contents of the list before hand. How would I create a new list that shows the positions of strings that fit a predefined parameter so that I could receive an output like number_list = [3, 4] and letter_list = [0, 1, 2, 5]? I think my answer might involve index, enumerate(), and a for loop but I'm not sure how to go about filtering list contents in the way I want. I want the positional values of the strings in the list so I can use min() and max() functions so that the leftmost letter cant appear after the rightmost number. And later prevent the first number being used from being a 0. | You can use a simple for loop with isdigit(): def _collect(L): nums, alphas = [], [] for i, val in enumerate(L): if val.isdigit(): nums.append(i) else: alphas.append(i) return nums, alphas print(_collect(['F', 'G', 'H', '1', '5', 'H'])) Prints ([3, 4], [0, 1, 2, 5]) | 3 | 1 |
78,639,769 | 2024-6-18 | https://stackoverflow.com/questions/78639769/how-to-scrape-data-from-arbitrary-number-of-row-listings-using-python-selenium | So I'm trying to create a bot that identifies nft loan listings on blur that meet certain criteria such as the loans total value being 80% or less than its floor price or APY being greater than 100%. I've figured out the basics of loading up chrome using selenium and navigating to the correct section of the website to view a collections loans. But I'm struggling to actually extract the data from the table of loans. What id like to be able to do is extract the table of loan listings into an array of arrays or array of dictionaries, with each array/dictionary containing data representing each name, status, borrow amount, LTV, and APY. What I have working thus far: import selenium from selenium.webdriver.common.by import By from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys import time path = "/Users/#########/Desktop/chromedriver-mac-arm64/chromedriver" # Create an instance of ChromeOptions options = Options() options.add_experimental_option("detach", True) options.add_argument("disable-infobars"); # Specify the path to the ChromeDriver service = Service(path) # Initialize the WebDriver with the service and options driver = webdriver.Chrome(service=service, options=options) # Open Blur beanz collection and navigating to active loans page driver.maximize_window driver.get("https://blur.io/eth/collection/beanzofficial/loans") time.sleep(3) loan_button = driver.find_element(By.XPATH, "/html/body/div/div/main/div/div[3]/div/div[2]/div[1]/div[1]/nav/button[2]") loan_button.click() I'm honestly new to selenium, so I've just been toying around with my intuition and chatgpt trying to solve this. The best guess I've had so far was the following bit of code that tried to extract the APY of all the loans. This did not work, as im sure there was some faulty intuition. elements = driver.find_elements(By.CSS_SELECTOR, 'Text-sc-m23s7f-0 hAGCAO') # Initialize an empty list to store the percentage values percentages = [] # Iterate through each element and extract its text (which contains the percentage) for element in elements: percentage = element.text percentages.append(percentage) # Print the extracted percentage values print(percentages) time.sleep(10) # Close the WebDriver driver.quit() I also feel like this is a bit complex, having to extract each column in the table rather than each row at a time. Not sure if there is a simpler way to do this, if there was that would be great. If not ok too! | I would recommend searching "how to locate elements with Selenium" and doing some reading. But maybe this get you started... Your XPATH to select the "ALL LOANS" button is /html/body/div/div/main/div/div[3]/div/div[2]/div[1]/div[1]/nav/button[2]--it's clear you got this by clicking "Copy XPATH" in developer console. This is generally not a good approach, because if anything about the page structure changes your code will break (imagine a developer decides to add or remove a div anywhere in that hierarchy). Instead, try to find a unique way to select the element that is unlikely to change. Selecting by ID (eg: driver.find_element(By.ID, "main-menu")) is preferred if possible. On this page, the "ALL LOANS" button doesn't have an ID nor a unique class name--in that case, I prefer using text to locate the element. 
My recommended XPATH would be: //button[.='All Loans']. This XPATH means, select all <button> elements anywhere on the page whose string content is equal to "All Loans". (Note that this text matching IS case sensitive--even though the text "ALL LOANS" displays uppercase on the page, if you examine the HTML code you can see the capitalization of the actual text). To test an XPATH in Chrome, open the HTML viewer (Elements tab of developer console) and hit CTRL+F to open the Find searchbar, then enter your XPATH (without enclosing quotes). You will see how many elements are located using the XPATH. For selecting the loans, you can use this XPATH: //div[@id= 'COLLECTION_MAIN']//div[@role='rowgroup']//div[@role='row']. First it locates the <div> with an id of 'COLLECTION_MAIN'--this is the center area of the screen. Then it locates the "rowgroup" div descendant element which contains the Loans, and finally all the rows themselves. You can try playing around with this XPATH--if you remove either of the first components of the selector it will not work, because it will locate additional divs with the 'row' role. You can then iterate over these rows to get any details you want. Putting it together: from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC ... options = Options() options.add_experimental_option("detach", True) # disable the message "Chrome being controlled by automated test software" options.add_experimental_option("excludeSwitches",["enable-automation"]) driver = webdriver.Chrome(service=service, options=options) # Open Blur beanz collection and navigating to active loans page driver.maximize_window() # fixed typo driver.get("https://blur.io/eth/collection/beanzofficial/loans") # wait until element is clickable then click it loans_button = WebDriverWait(driver, 20).until( EC.element_to_be_clickable((By.XPATH, "//button[.='All Loans']"))) loans_button.click() percentages = [] # wait for table to load WebDriverWait(driver, 10).until( EC.presence_of_element_located((By.XPATH, "//div[@id= 'COLLECTION_MAIN']//div[@role='rowgroup']//div[@role='row']"))) for loan_row in driver.find_elements(By.XPATH, "//div[@id= 'COLLECTION_MAIN']//div[@role='rowgroup']//div[@role='row']"): apy = loan_row.find_element(By.XPATH, "div[5]").text # select the 5th column percentages.append(apy) # could use float(apy[:-1]) to convert to number Edit: Andrej's answer is far superior in terms of ease and speed--using Selenium for data scraping should really be a last resort | 2 | 1 |
78,634,235 | 2024-6-17 | https://stackoverflow.com/questions/78634235/numpy-dtype-size-changed-may-indicate-binary-incompatibility-expected-96-from | I want to call my Python module from the Matlab. I received the error: Error using numpy_ops>init thinc.backends.numpy_ops Python Error: ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject. The Python script is as follows import spacy def text_recognizer(model_path, text): try: # Load the trained model nlp = spacy.load(model_path) print("Model loaded successfully.") # Process the given text doc = nlp(text) ent_labels = [(ent.text, ent.label_) for ent in doc.ents] return ent_labels The Matlab script is as follows % Set up the Python environment pe = pyenv; py.importlib.import_module('final_output'); % Add the directory containing the Python script to the Python path path_add = fileparts(which('final_output.py')); if count(py.sys.path, path_add) == 0 insert(py.sys.path, int64(0), path_add); end % Define model path and text to process model_path = 'D:\trained_model\\output\\model-best'; text = 'Roses are red'; % Call the Python function pyOut = py.final_output.text_recognizer(model_path, text); % Convert the output to a MATLAB cell array entity_labels = cell(pyOut); disp(entity_labels); I found one solution to update Numpy, what I did, but nothing changed. I am using Python 3.9 and Numpy version 2.0.0 The error was received when I tried to call the Python module using a Matlab script. How can I fix the issue? | The reason is that pandas defines its numpy dependency freely as "anything newer than certain version of numpy". The problem occured, when numpy==2.0.0 has been released on June 16th 2024, because it is no longer compatible with your pandas version. The solution is to pin down the numpy version to any before the 2.0.0. Today it could be (this is the most recent numpy 1 release): numpy==1.26.4 To be added in your requirements or to the pip command you use (but together with installing pandas). Nowadays pip is very flexible and can handle the issue flawesly. You just need to ask it to install both pandas and numpy of given versions in the same pip install invocation. | 138 | 212 |
78,633,798 | 2024-6-17 | https://stackoverflow.com/questions/78633798/is-there-a-way-to-mock-strip-for-unit-testing-in-python-2-7s-unittest-module | I am using Python 2.7 for a coding project. I'd love to switch to Python 3, but unfortunately I'm writing scripts for a program that only has a python package in 2.7 and even if it had one in 3 our codebase would be impractical to switch over, so it's not possible. My code involves checking if a path it is given in string form exists, then because I don't know if os.path.exists does this itself, if it does not, it runs .strip() on the file name and tries again. I am trying to unit test this. I've run a test on it not existing at all by patching os.path.exists to return False. But I can't figure out how to unit test the case where it returns False before .strip(), and True after. Here is the relevant portion of the function being tested (the if/elif/else is relevant for the unit tests): import os class Runner: def __init__(self, fname): self.fname = fname def input_check(self): if not os.path.exists(self.fname): self.fname = self.fname.strip() if not os.path.exists(self.fname): raise ValueError('input is not a valid path') if os.path.isfile(self.fname): self.ftype = 'file' elif os.path.isdir(self.fname): self.ftype = 'folder' else: raise ValueError('how is your input neither a file nor a folder??') And two examples of what I have tried for unit testing: Example 1 import unittest from mock import patch class TestRunner(unittest.TestCase): @patch('.strip') @patch('os.path.exists') def test_input_check_exists_after_strip(self, patchexist, patchstrip): runner = Runner('test ') patchstrip.return_value = 'test' patchexist.return_value = False if runner.fname[-1] == ' ' else True with self.assertRaisesRegexp(ValueError, 'how is your input neither a file nor a folder??'): runner.input_check() This one, seems I can't figure out how to get it to actually patch .strip, and the answers I have found through Google seem to say that there is no way to patch certain builtin functions (I also tried builtins.strip, didn't work either.) It says empty module name or no module named builtins. Example 2 import unittest from mock import patch class TestRunner(unittest.TestCase): @patch('os.path.exists') def test_input_check_exists_after_strip(self, patchexist): runner = Runner('test') patchexist.return_value = patchexist.called with self.assertRaisesRegexp(ValueError, 'how is your input neither a file nor a folder??'): runner.input_check() This one returns with the 'input is not a valid path' ValueError. I am guessing that the patch's return value is simply not updated during the running of input_check(), which makes sense even if it's inconvenient for me. Is there a way to test this? Is this even necessary, or does os.path.exists() deal with there being extraneous whitespace already? I am pretty new to unit testing and even newer to the concept of mocking, so I would appreciate any help. | Use side_effect for os.path.exists; not mock strip() I suggest you to use the attribute side_effect (see here for documentation) of Mock object patchexist instead return_value. In this way you can return False at first called of os.path.exists() and True at second called (after strip()). Furthermore: It is not necessary to patch the strip() function. 
Python 3 and Python 2.7 I have tested the code with Python 3 and not with Python 2.7; I think it is easy for you adapt it for Python 2.7; for example: from unittest.mock import patch in Python 2.7 becomes: from mock import patch For other info about the difference between Python 3 and Python 2.7 see this post. Test code Below I'll show you the test code which contains 3 tests (which are 3 test cases): test_input_check_exists_after_strip(): the file test and the file test don't exist (your code raises a ValueError Exception) test_file_exist_after_strip(): the file test doesn't exist, but the file test (after strip()) exists test_directory_exist_after_strip(): the file test doesn't exist, but the directory test (after strip()) exists import unittest from runner import Runner from unittest.mock import patch import os class TestRunner(unittest.TestCase): @patch('os.path.exists') def test_input_check_exists_after_strip(self, patchexist): # the file `test ` doesn't exist; the file `test` doesn't exist patchexist.side_effect = [False, False] runner = Runner('test ') with self.assertRaises(ValueError): runner.input_check() @patch('os.path.isfile') @patch('os.path.exists') def test_file_exist_after_strip(self, patchexist, patchisfile): # the file `test ` doesn't exist; the file `test` exist patchexist.side_effect = [False, True] patchisfile.return_value = True runner = Runner('test ') runner.input_check() self.assertEqual('file', runner.ftype) @patch('os.path.isdir') @patch('os.path.isfile') @patch('os.path.exists') def test_directory_exist_after_strip(self, patchexist, patchisfile, patchisdir): # the file `test ` doesn't exist; the directory `test` exist patchexist.side_effect = [False, True] patchisfile.return_value = False patchisdir.return_value = True runner = Runner('test ') runner.input_check() self.assertEqual('folder', runner.ftype) if __name__ == '__main__': unittest.main() The execution of all tests calls the real method strip() without mocking it. Below the output of the execution of the tests in my system: ... ---------------------------------------------------------------------- Ran 3 tests in 0.002s OK | 2 | 2 |
78,639,645 | 2024-6-18 | https://stackoverflow.com/questions/78639645/python-powershell-combo-make-pwsh-load-even-faster | My main programming is done within Python, and want to invoke custom Powershell cmdlets I wrote. Added my .psm1 file to the $PSModulePath, and my cmdlets are always loaded. And I -NoProfile, and -NoLogo to invoke pwsh cmd a little bit faster. Something like cmd = ['pwsh', '-NoLogo', '-NoProfile', '-Command', cmdToRun] process = Popen(cmd, stderr=PIPE, stdout=PIPE) But this is still taking 5+ secs to return/process. Does anyone know if there are other enhancements to run powershell scripts even faster? TIA | Python cannot host PowerShell in-process, so you cannot avoid the costly creation of a PowerShell child process, via pwsh, the PowerShell (Core) 7 CLI (the same applies analogously to powershell.exe, the Windows PowerShell CLI). -NoProfile would only make a difference if large / slowly executing $PROFILE file(s) were present, and -NoLogo is redundant, as it is implied by -Command. If you need to make PowerShell calls repeatedly in a given invocation of your application, you can mitigate the performance impact by launching a PowerShell CLI process in REPL mode, by passing -Command - and then feeding commands to it on demand via stdin, terminating each with two newlines (to ensure that the end of the command is recognized), similar to the approach in this post. You would then incur the penalty of the PowerShell child-process creation only once per run of your application, before submitting the first PowerShell command. | 4 | 4 |
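A rough sketch of the stdin approach described in the answer; Get-MyThing is a placeholder for whatever cmdlet the .psm1 module actually exports, and separating per-command output would still need some sentinel logic that is omitted here:

```python
from subprocess import Popen, PIPE

proc = Popen(['pwsh', '-NoLogo', '-NoProfile', '-Command', '-'],
             stdin=PIPE, stdout=PIPE, stderr=PIPE, text=True)

for cmd in ['Get-MyThing -Id 1', 'Get-MyThing -Id 2']:   # hypothetical cmdlet calls
    proc.stdin.write(cmd + '\n\n')    # blank line tells pwsh the command is complete
    proc.stdin.flush()

out, err = proc.communicate()         # closes stdin; pwsh exits after the last command
```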
78,639,630 | 2024-6-18 | https://stackoverflow.com/questions/78639630/scalable-approach-instead-of-apply-in-python | I use apply to loop the rows and get the column names of feat1, feat2 or feat3 if they are equal to 1 and scored is equal to 0. The column names are then inserted into a new feature called reason. This solution doesn't scale to larger dataset. I'm looking for faster approach. How can I do that? df = pd.DataFrame({'ID':[1,2,3], 'feat1_tax':[1,0,0], 'feat2_move':[1,0,0], 'feat3_coffee': [0,1,0], 'scored':[0,0,1]}) def get_not_scored_reason(row): exclusions_list = [col for col in df.columns if col.startswith('feat')] reasons = [col for col in exclusions_list if row[col] == 1] return ', '.join(reasons) if reasons else None df['reason'] = df.apply(lambda row: get_not_scored_reason(row) if row['scored'] == 0 else None, axis=1) print(df) ID feat1_tax feat2_move feat3_coffee scored reason 0 1 1 1 0 0 feat1_tax, feat2_move 1 2 0 0 1 0 feat3_coffee 2 3 0 0 0 1 None | Another possible solution, around 8x faster than @Andrej Kesely's, according to rough estimates: feat_columns = df.filter(regex=r"^feat").columns reasons = df.mask(df["scored"] != 0, 0)[feat_columns].to_numpy() df["reason"] = np.array( [", ".join(feat_columns[row == 1]) if np.any(row == 1) else None for row in reasons] ) ID feat1_tax feat2_move feat3_coffee scored reason 0 1 1 1 0 0 feat1_tax, feat2_move 1 2 0 0 1 0 feat3_coffee 2 3 0 0 0 1 None | 3 | 3 |
78,634,781 | 2024-6-17 | https://stackoverflow.com/questions/78634781/how-to-find-which-labeled-rows-from-a-table-are-at-or-above-certain-points-from | I'm new to Python and am struggling to understand how to code this specific situation. I have included an Excel screenshot to better describe the tables and graphs I am working with. From Table 1, column headings 10-13 serve as the x-values. Row # Label provides which row between 1-6 is being affected. Table 2 provides 2 points: A and B. How can we determine which of the 1-6 rows intersect or are above point A? What about for point B? Logically, I know point A should be at or below all 6 rows and B should be at or below rows 4 and 6. Python should print {1, 2, 3, 4, 5, 6} when asked about A and print {4, 6} when asked about B. However, how do we translate this process to be done in Python where there are two tables set up just like these? Table 1 and Table 2 I have tried something like this, but it not working and I think it would only output the total number of rows like 6 for point A and 2 for point B instead of the specific Row # Label that I am looking for as well. # Iterate through the points in Table #1 for i in range(len(table_one)): x = table_one[i][0] y = table_one[i][1] # Iterate through the matrices in Table #2 for j in range(len(table_two)): m = table_two[j] # Calculate the x and y values of the matrix m_x = np.sum(m, axis=0) * [i] m_y = np.sum(m, axis=1) * [i] # Compare the x and y values of the point with those of the matrix if np.any((np.abs(x - m_x) <= 0.5) & (np.abs(y - m_y) <= 0.5)): # Increment the counter variable intersection_count = intersection_count + 1 | When working with tables in Python, I suggest that you use pandas. It is typically imported like this import pandas as pd. Let us consider the following example tables: data1= {'11': [0.2, 0.3, 0.1, 2, 0.6, 1.2], '12': [0.3, 0.33, 0.18, 2.5, 1, 1.4]} data2= {'Point': ["A","B"], 'X': [11,12], 'Y': [0.18, 1.24]} table1 = pd.DataFrame(data=data1) table2 = pd.DataFrame(data=data2) table1 is similar to the left-hand side table in your snapshot, table2 is the one is the right-hand side. If I understand your question correctly, for a given abscissa (X) in table1 you want to check which samples are above the ordinate (Y) in table2. Let's consider: abscissa = 11 Then your threshold is: threshold = table2[table2.X==abscissa]['Y'].to_numpy() The corresponding column name in the first table is X = str(abscissa). To check which values in the corresponding pandas series are greater or equal than the threshold you can do the following: table1[X].ge(threshold[0]) Of course, this returns "True" for all rows. If you repeat the same using abscissa = 12, this will return: 0 False 1 False 2 False 3 True 4 False 5 True Name: 12, dtype: bool EDIT: to answer your additional question in the comment section, pandas cannot handle columns sharing the same name. For instance, if you try importing a csv file containing multiple columns named 11, pandas will rename them to 11, 11.1, 11.2, etc... That being said you could easily extract the abscissa from these column labels: for column in table1.columns: abscissa = int(float(column)) threshold = table2[table2.X==abscissa]['Y'].to_numpy() table1[column].ge(threshold[0]) Then it is just a matter of concatenating the different series to form the table that you are after. | 2 | 2 |
78,637,658 | 2024-6-18 | https://stackoverflow.com/questions/78637658/accessing-attributes-of-a-python-descriptor | Not sure if this is feasible or not. The implementation/example below is dummy, FYI. I have a Python class, Person. Each person has a public first name and a public last name attribute with corresponding private attributes - I'm using a descriptor pattern to manage access to the underlying private attributes. I am using the descriptor to count the number of times the attribute is accessed as well as obtain the underlying result. class AttributeAccessCounter: def __init__(self): self._access_count = 0 def __get__(self, instance, owner): self._access_count += 1 return getattr(instance, self.attrib_name) def __set_name__(self, obj, name): self.attrib_name = f'_{name}' @property def counter(self): return self._access_count class Person: first = AttributeAccessCounter() last = AttributeAccessCounter() def __init__(self, first, last): self._first = first self._last = last From an instance of the class Person, how can I access the _access_count or property counter? john = Person('John','Smith') print(john.first) # 'John' print(john.first.counter) # AttributeError: 'str' object has no attribute 'counter' | Currently you don't differentiate when the descriptor is accessed through the instance or through the class itself. property does this for example. It gives you the descriptor object when you access it through the class. You can do the same: def __get__(self, instance, owner): if instance is None: return self self._access_count += 1 return getattr(instance, self.attrib_name) Now the counter property of the descriptor class can be accessed. However there is another problem pointed out by @user2357112 in the comment. You store this count on descriptor and it's shared between different instances. You can't really tell first attribute of which instance of Person is accessed n times. To solve that, if you still want to store it in the descriptor object, one way is to use a dictionary and call e method for getting the count. Here is the complete code: from collections import defaultdict class AttributeAccessCounter: def __init__(self): self._access_counts = defaultdict(int) def __get__(self, instance, owner): if instance is None: return self self._access_counts[instance] += 1 return getattr(instance, self.attrib_name) def __set_name__(self, obj, name): self.attrib_name = f"_{name}" def count(self, obj): return self._access_counts[obj] class Person: first = AttributeAccessCounter() last = AttributeAccessCounter() def __init__(self, first, last): self._first = first self._last = last john = Person("John", "Smith") foo = Person("foo", "bar") print(Person.first.count(john)) # 0 print(john.first) # 'John' print(Person.first.count(john)) # 1 print(Person.first.count(foo)) # 0 | 3 | 3 |
78,633,756 | 2024-6-17 | https://stackoverflow.com/questions/78633756/how-to-resize-an-image-in-gradio | I'm looking for an approach to resize an image as a header in Gradio generated UI to be smaller. According to a closed issue on their Github, I followed the following manner: import gradio as gr with gr.Blocks() as app: gr.Image("logo.png", label="Top Image").style(width=600, height=400) app.launch(server_name="0.0.0.0", server_port=7860, debug=True) But it raises: AttributeError: 'Markdown' object has no attribute 'style'. Did you mean: 'scale'? I also tried using the Markdown() or HTML() method rather than Image() however, the issue is with this approach it cannot load an image locally. Here's the thing I've done so far: import gradio as gr def greet(name): return f"Hello {name}!" # Load your local image image_path = "/file/logo.png" with gr.Blocks() as demo: html_header = f""" <div style="text-align: center;"> <img src="{image_path}" alt="Header Image" width="200" height="100"> </div> """ gr.HTML(html_header) name_input = gr.Textbox(label="Enter your name:") submit_button = gr.Button("Submit") output = gr.Textbox(label="Greeting:") submit_button.click(fn=greet, inputs=name_input, outputs=output) demo.launch() I also tried image_path = "/file=logo.png", image_path = "/file/logo.png", image_path = "file=logo.png", and image_path = "./logo.png" routes without any results. I should add that the logo and the .py file are next to each other. | Documentation for Image shows that you can use Image(..., width=..., height=...) I test it and it works but it can't be smaller than width=160. And it needs to change also min_width= to smaller value because it has default value 160 If you don't need label and download button then you can uses Image(..., show_label=False, show_download_button=False) If you want to put own <img src="..."> then it can make problem because DevTools in Firefox/Chrome shows that it uses path like this http://0.0.0.0:7860/file=/tmp/user/1000/gradio/f1ca8fcde634bae0360273c73b61af0bac43f7a8/logo.png Maybe there is random value or maybe it is hash value calculated for this file. I checked it created file on disk (on Linux) /tmp/user/1000/gradio/f1ca8fcde634bae0360273c73b61af0bac43f7a8/logo.png I found that you can use local file if you add it to allowed_path in app.launch() src="/file=logo.png" app.launch(..., allowed_paths=["logo.png"]) image_path = "logo.png" gr.HTML(f"""<img src="/file={image_path}" width="100" height="100">""") app.launch(server_name="0.0.0.0", server_port=7860, debug=True, allowed_paths=[image_path]) You may also use folder (absolute or relative) src="/file=/some/folder/logo.png" app.launch(..., allowed_paths=["/some/folder/"]) folder = "/other/folder" image_path_1 = f"{folder}/logo.png" gr.HTML(f"""<img src="/file={image_path_1}" width="100" height="100">""") image_path_2 = f"{folder}/other.png" gr.HTML(f"""<img src="/file={image_path_2}" width="100" height="100">""") app.launch(server_name="0.0.0.0", server_port=7860, debug=True, allowed_paths=[folder]) It can be relative folder src="/file=subfolder/images/logo.png" app.launch(..., allowed_paths=["subfolder/images/"]) or you may create absolute path absolute_folder = os.path.join(os.path.dirname(os.path.abspath(__file__)), "subfolder", "images") and this path you have to use in src="/file=..." and add to allowed_paths. There is also set_static_paths to add folder with static elements - like layout graphics, logo. 
gr.set_static_paths(paths=["static/images/"]) image_path = "static/images/logo.png" gr.HTML(f"""<img src="/file={image_path}" width="100" height="100">""") and this doesn't need to add to allowed_paths | 2 | 3 |
78,637,169 | 2024-6-18 | https://stackoverflow.com/questions/78637169/both-increment-and-decrement-in-while-loop-in-matrix-in-python | I created a 15 x 15 matrix in a pandas dataframe type. I want to make changes in some cells, following this logic: The diagonal of the matrix is set to 0 In each row, if 0 appears in any position, the values in the next/ previous 5 columns should be updated to 1. (df.iloc[i][j] = 0 --> df.iloc[i][j+5] = 1 or df.iloc[i][j-5] = 1 Input (every cell is 0.9): row, col = 15, 16 a = pd.DataFrame.from_records([[0.9]*col]*row) a = a.loc[ : , a.columns != 0] Expected output: Below is my current script (which keeps running without an end): row, col = 15, 16 a = pd.DataFrame.from_records([[0.9]*col]*row) a = a.loc[ : , a.columns != 0] column3 = list(a) for i, j in a.iterrows(): for k in column3: if k == i + 1: a.iloc[i][k] = 0 for i, j in a.iterrows(): for k in column3: step = +5 if k <=5 else -5 while k <= len(a.index): if a.iloc[i][k] == 0: k += step a.iloc[i][k] = 1 pd.set_option('display.max_rows', None) pd.set_option('display.max_columns', None) a Many thanks in advance for your help! | I would use numpy here and build a mask with roll: # distance to 0 N = 5 # replace diagonal with 0 np.fill_diagonal(a.values, 0) # build mask m = (a==0).to_numpy() # apply mask iteratively for i in range(1, a.shape[1]//N): a[np.roll(m, i*N, axis=1)] = 1 Variant using pandas' shift: N = 5 np.fill_diagonal(a.values, 0) m = (a==0) for i in range(1, a.shape[1]//N): a[m.shift(i*N, axis=1, fill_value=False)] = 1 a[m.shift(-i*N, axis=1, fill_value=False)] = 1 Output: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 0 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 2 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 3 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 4 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 5 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 6 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 7 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 8 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 0.9 9 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 1.0 10 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 0.9 11 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 0.9 12 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 0.9 13 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 0.9 14 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 1.0 0.9 0.9 0.9 0.9 0.0 Output as image for clarity: | 2 | 2 |
78,636,238 | 2024-6-18 | https://stackoverflow.com/questions/78636238/wrap-around-2d-coordinates-of-numpy-array | I have a (5, 5) 2D Numpy array: map_height = 5 map_width = 5 # Define a 2D np array- a = np.arange(map_height * map_width).reshape(map_height, map_width) # a ''' array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) ''' I can wrap this array around on both of its axes using 'pad()': a_wrapped = np.pad(array = a, pad_width = 1, mode = 'wrap') a_wrapped ''' array([[24, 20, 21, 22, 23, 24, 20], [ 4, 0, 1, 2, 3, 4, 0], [ 9, 5, 6, 7, 8, 9, 5], [14, 10, 11, 12, 13, 14, 10], [19, 15, 16, 17, 18, 19, 15], [24, 20, 21, 22, 23, 24, 20], [ 4, 0, 1, 2, 3, 4, 0]]) ''' The 2D coordinates of (5, 5) 'a' are computed (inefficiently) as: # 2D coordinates - # 1st channel/axis = row indices & 2 channel/axis = column indices. a_2d_coords = np.zeros((map_height, map_width, 2), dtype = np.int16) for row_idx in range(map_height): for col_idx in range(map_width): a_2d_coords[row_idx, col_idx][0] = row_idx a_2d_coords[row_idx, col_idx][1] = col_idx # a_2d_coords.shape # (5, 5, 2) I want to wrap this 2D coordinates array 'a_2d_coords' as well by doing: a_2d_coords_wrapped = np.pad(array = a_2d_coords, pad_width = 1, mode = 'wrap') # a_2d_coords_wrapped.shape # (7, 7, 4) It also wraps the 3rd axis/dimension which should not be done! The goal is that the coords of a[1, 4] = (1, 4) and its neighbor's coords to right hand side (RHS) should be a[1, 0] = (1, 0). This is wrapping around the x-axis. Similarly, y-axis 2D coordinates should also be wrapped. | First of all, you can simplify your calculation of the 2D coordinates as follows: a_2d_coords = np.moveaxis(np.mgrid[:map_height, :map_width], 0, -1) Let's check if this still produces the same result: import numpy as np # For the check, use different height and width values to detect swaps map_height, map_width = 3, 7 # Original implementation for coords a_2d_coords_given = np.zeros((map_height, map_width, 2), dtype = np.int16) for row_idx in range(map_height): for col_idx in range(map_width): a_2d_coords_given[row_idx, col_idx][0] = row_idx a_2d_coords_given[row_idx, col_idx][1] = col_idx # Proposed implementation for coords a_2d_coords = np.moveaxis(np.mgrid[:map_height, :map_width], 0, -1) # Equality check assert np.all(a_2d_coords == a_2d_coords_given) Second, if you note that np.pad's pad_width argument can also take axis-specific values, then you can achieve the desired coordinate padding as follows: a_2d_coords_wrapped = np.pad(a_2d_coords, mode="wrap", pad_width=((1, 1), (1, 1), (0, 0))) Now we pad both axis 0 and axis 1 with one leading and one trailing value ((1, 1)), but leave axis 2 alone ((0, 0)). 
Altogether, we have: import numpy as np map_height, map_width = 5, 5 a = np.arange(map_height * map_width).reshape(map_height, map_width) a_2d_coords = np.moveaxis(np.mgrid[:map_height, :map_width], 0, -1) a_2d_coords_wrapped = np.pad(a_2d_coords, mode="wrap", pad_width=((1, 1), (1, 1), (0, 0))) print(a[a_2d_coords_wrapped[..., 0], a_2d_coords_wrapped[..., 1]]) # Prints # [[24 20 21 22 23 24 20] # [ 4 0 1 2 3 4 0] # [ 9 5 6 7 8 9 5] # [14 10 11 12 13 14 10] # [19 15 16 17 18 19 15] # [24 20 21 22 23 24 20] # [ 4 0 1 2 3 4 0]] Further simplification We can simplify the code even one step further by providing a_2d_coords/a_2d_coords_wrapped as a 2×H×W array rather than a H×W×2 array: import numpy as np map_height, map_width = 5, 5 a = np.arange(map_height * map_width).reshape(map_height, map_width) a_2d_coords = np.mgrid[:map_height, :map_width] a_2d_coords_wrapped = np.pad(a_2d_coords, mode="wrap", pad_width=((0, 0), (1, 1), (1, 1))) print(a[*a_2d_coords_wrapped]) # Prints # [[24 20 21 22 23 24 20] # [ 4 0 1 2 3 4 0] # [ 9 5 6 7 8 9 5] # [14 10 11 12 13 14 10] # [19 15 16 17 18 19 15] # [24 20 21 22 23 24 20] # [ 4 0 1 2 3 4 0]] Mind the * in a[*a_2d_coords_wrapped], which is necessary for "unpacking" the coordinates, so that two H×W arrays rather than one 2×H×W array are provided for indexing. | 2 | 2 |
78,636,936 | 2024-6-18 | https://stackoverflow.com/questions/78636936/avoiding-merge-in-pandas | I have a data frame that looks like this : I want to group the data frame by #PROD and #CURRENCY and replace TP with the contents of the Offshore data in the Loc column Without creating two data frames and joining them. The final output will look something like: I was able to create the output by splitting the data frame into two (Onshore and Offshore ) and then creating a join on #PROD and #CURRENCY. However, I was wondering if there is a cleaner way to do this ? The Code for the Dataframe is : import pandas as pd data=[['Offshore','NY','A','USD','ABC_USD'],['Onshore','BH','A','USD',''], ['Onshore','AE','A','USD',''],\ ['Offshore','NY','A','GBP','GBP_ABC'],['Onshore','BH','A','GBP',''], ['Onshore','AE','A','GBP',''],\ ['Onshore','BH','A','EUR',''],['Onshore','AE','A','EUR','']] df = pd.DataFrame(data, columns=['Loc', 'Country','#PROD','#CURRENCY','TP']) df | I think a merge is the most straightforward and efficient way to do this: df['TP'] = df[cols].merge(df[df['Loc'].eq('Offshore')], how='left')['TP'].values No need to sort, no need to worry about which values are initially present. Alternatively: cols = ['#PROD', '#CURRENCY'] s = (df[cols].reset_index().merge(df[df['Loc'].eq('Offshore')]) .set_index('index')['TP'] ) df.loc[s.index, 'TP'] = s Output: Loc Country #PROD #CURRENCY TP 0 Offshore NY A USD ABC_USD 1 Onshore BH A USD ABC_USD 2 Onshore AE A USD ABC_USD 3 Offshore NY A GBP GBP_ABC 4 Onshore BH A GBP GBP_ABC 5 Onshore AE A GBP GBP_ABC 6 Onshore BH A EUR NaN 7 Onshore AE A EUR NaN | 2 | 1 |
78,637,002 | 2024-6-18 | https://stackoverflow.com/questions/78637002/child-classs-attribute-did-not-override-which-was-defined-in-parent-class | I have a parent and two child classes defined like this import pygame class Person(): def __init__(self): self.image = pygame.image.load('person.png').convert_alpha() self.image = pygame.transform.scale(self.image, (int(self.image.get_width() * 0.5), int(self.image.get_height() * 0.5))) print('size: ', self.image.get_size()) class Teacher(Person): def __init__(self): super().__init__() self.image = pygame.image.load('teacher.png').convert_alpha() class Doctor(Person): def __init__(self): super().__init__() self.image = pygame.image.load('doctor.png').convert_alpha() self.image = pygame.transform.scale(self.image, (int(self.image.get_width() * 1.2), int(self.image.get_height() * 0.75))) ... The size of the picture of person.png, teacher.png and doctor.png is 98x106, 125x173 and 97x178 respectively. When I run the following code, its output confused me. It seemed the code pygame.image.load() and pygame.transform.scale() in the child classes Teacher and Doctor didn't override the attribute defined in the parent class Person. pygame.display.set_mode((500, 500)) players = {'Teacher': Teacher(), 'Doctor': Doctor()} Output: pygame 2.4.0 (SDL 2.26.4, Python 3.10.9) Hello from the pygame community. https://www.pygame.org/contribute.html size: (49, 53) <---- expected to be (62, 86) size: (49, 53) <---- expected to be (116, 133) What happened? What did I do wrong? | It's a matter of the order how everything is executed: In the subclasses, you call the super-constructor (super().__init__()), which still will load the picture person.png and output the size. Only afterwards the code in the constructors of the subclasses will be executed. So after the construction, the image property should be set correctly, but at the moment of the printout it is not. You could for example solve it, by having a file-name argument in the super constructor to which you hand the file names, fitting for the sub classes. Similarly, you should proceed with the scaling: class Person(): def __init__(self, filename='person.png', x_scale=0.5, y_scale=0.5): self.image = pygame.image.load(filename).convert_alpha() self.image = pygame.transform.scale(self.image, (int(self.image.get_width() * x_scale), int(self.image.get_height() * y_scale))) print('size: ', self.image.get_size()) class Teacher(Person): def __init__(self): super().__init__('teacher.png', 1.0, 1.0) class Doctor(Person): def __init__(self): super().__init__('doctor.png', 1.2, 0.75) | 2 | 4 |
78,636,490 | 2024-6-18 | https://stackoverflow.com/questions/78636490/counting-items-in-an-array-and-making-counts-into-columns | I am working in databricks where I have a dataframe as follows: dummy_df names items Ash [c1,c2,c2,c3] Bob [c1,c2] May [] Amy [c2,c3,c3] Where names column contains strings for values and items is a column of arrays. I would like to count how many times each item appears for each name and make each count into a column. So the desired output will look something like so: names items c1_count c2_count c3_count Ash [c1,c2,c2,c3] 1 2 1 Bob [c1,c2] 1 1 0 May [] 0 0 0 Amy [c2,c3,c3] 0 1 2 So far my approach is to count each item separately using aggregate as so: c1_count = dummy_df.select("*", explode("items").alias("exploded"))\ .where(col("exploded").isin(['c1']))\ .groupBy("names", "items")\ .agg(count("exploded").alias("c1_count")) c2_count = ... c3_count = ... And then I concatenate a new df by taking count columns from my new 3 dataframes and adding them to my dummy_df. But that is very inefficient and if I were to have many items in my arrays (say 50-100), it may even be impractical. I wonder if there is a better way? Can I somehow calculate the count of all items and make such counts into the columns without needing to count them individually and do massive concatenation as the end? | Explode the column items and then use crosstab to get the number of items for each name. exploded_df = dummy_df.explode('items') crosstab_df = (pd.crosstab(exploded_df['names'], exploded_df['items'], dropna=False) .drop(columns=np.nan) .add_prefix('Count_') .reset_index() ) new_df = df.merge(crosstab_df, on='names') Output: names items Count_c1 Count_c2 Count_c3 0 Ash [c1, c2, c2, c3] 1 2 1 1 Bob [c1, c2] 1 1 0 2 May [] 0 0 0 3 Amy [c2, c3, c3] 0 1 2 This is the data I used: data = {'names': ['Ash', 'Bob', 'May', 'Amy'], 'items': [['c1', 'c2', 'c2', 'c3'], ['c1', 'c2'], [], ['c2', 'c3', 'c3']]} dummy_df = pd.DataFrame(data) | 2 | 4 |
78,626,515 | 2024-6-15 | https://stackoverflow.com/questions/78626515/what-exactly-is-slowing-np-sum-down | It is known that np.sum(arr) is quite a lot slower than arr.sum(). For example: import numpy as np np.random.seed(7) A = np.random.random(1000) %timeit np.sum(A) 2.94 µs ± 13.8 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) %timeit A.sum() 1.8 µs ± 40.8 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) Can anyone give a detailed code-based explanation of what np.sum(arr) is doing that arr.sum() is not? The difference is insignificant for much longer arrays. But it is relatively significant for arrays of length 1000 or less, for example. In my code I do millions of array sums so the difference is particularly significant. | When I run np.sum(a) in debug mode on my PC, it steps into the following code. https://github.com/numpy/numpy/blob/v1.26.5/numpy/core/fromnumeric.py#L2178 The following is the part of the code where it is relevant. import numpy as np import types def _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs): passkwargs = {k: v for k, v in kwargs.items() if v is not np._NoValue} if type(obj) is not np.ndarray: raise NotImplementedError return ufunc.reduce(obj, axis, dtype, out, **passkwargs) def copied_np_sum(a, axis=None, dtype=None, out=None, keepdims=np._NoValue, initial=np._NoValue, where=np._NoValue): if isinstance(a, types.GeneratorType): raise NotImplementedError return _wrapreduction( a, np.add, 'sum', axis, dtype, out, keepdims=keepdims, initial=initial, where=where ) Note that this ends up calling np.add.reduce(a). Benchmark: import timeit def benchmark(setup, stmt, repeat, number): print(f"{stmt:16}: {min(timeit.repeat(setup=setup, stmt=stmt, globals=globals(), repeat=repeat, number=number)) / number}") n_item = 10 ** 3 n_loop = 1000 n_set = 1000 data_setup = f"""\ import numpy as np rng = np.random.default_rng(0) a = rng.random({n_item}) """ benchmark(setup=data_setup, stmt="np.sum(a)", repeat=n_set, number=n_loop) benchmark(setup=data_setup, stmt="a.sum()", repeat=n_set, number=n_loop) benchmark(setup=data_setup, stmt="copied_np_sum(a)", repeat=n_set, number=n_loop) benchmark(setup=data_setup, stmt="np.add.reduce(a)", repeat=n_set, number=n_loop) np.sum(a) : 2.6407251134514808e-06 a.sum() : 1.3474803417921066e-06 copied_np_sum(a): 2.50667380169034e-06 np.add.reduce(a): 1.195137854665518e-06 As you can see, copied_np_sum performs similarly to np.sum, and np.add.reduce is similar to a.sum. So the majority of the difference between np.sum and a.sum is likely due to what copied_np_sum does before calling np.add.reduce. In other words, it's the overhead caused by the dict comprehension and the additional function calls. However, although there is a significant difference in the above benchmark that reproduces the OP's one, as pointed out in the comment, this may be overstated. Because timeit repeatedly executes the code and uses the (best of) average, with a small array like in this benchmark, the array may already be in the CPU cache when it is measured. This is not necessarily an unfair condition. The same thing could happen in actual use. Rather, it should be so whenever possible. That being said, for a canonical answer, we should measure it. Based on @user3666197 advice, we can create a large array immediately after creating a to evicts a from the cache. Note that I decided to use np.arange here, which I confirmed has the same effect but runs faster.
import timeit def benchmark(setup, stmt, repeat, number): print(f"{stmt:16}: {min(timeit.repeat(setup=setup, stmt=stmt, globals=globals(), repeat=repeat, number=number)) / number}") n_item = 10 ** 3 n_loop = 1 n_set = 100 data_setup = f"""\ import numpy as np rng = np.random.default_rng(0) a = rng.random({n_item}) _ = np.arange(10 ** 9, dtype=np.uint8) # To evict `a` from the CPU cache. """ benchmark(setup=data_setup, stmt="np.sum(a)", repeat=n_set, number=n_loop) benchmark(setup=data_setup, stmt="a.sum()", repeat=n_set, number=n_loop) benchmark(setup=data_setup, stmt="copied_np_sum(a)", repeat=n_set, number=n_loop) benchmark(setup=data_setup, stmt="np.add.reduce(a)", repeat=n_set, number=n_loop) Without eviction (With cache): np.sum(a) : 2.6407251134514808e-06 a.sum() : 1.3474803417921066e-06 copied_np_sum(a): 2.50667380169034e-06 np.add.reduce(a): 1.195137854665518e-06 With eviction (Without cache): np.sum(a) : 4.916824400424957e-05 a.sum() : 3.245798870921135e-05 copied_np_sum(a): 4.7205016016960144e-05 np.add.reduce(a): 3.0195806175470352e-05 Naturally, the presence or absence of cache makes a huge impact on performance. However, although the difference has become smaller, it can still be said to be a significant difference. Also, since these four relationships remain the same as before, the conclusion also remains the same. There are a few things I should add. Note1 The claim regarding method loading is incorrect. benchmark(setup=f"{data_setup}f = np.sum", stmt="f(a)", repeat=n_set, number=n_loop) benchmark(setup=f"{data_setup}f = a.sum", stmt="f()", repeat=n_set, number=n_loop) np.sum(a) : 4.916824400424957e-05 a.sum() : 3.245798870921135e-05 f(a) : 4.6479981392621994e-05 <-- Same as np.sum. f() : 3.27317975461483e-05 <-- Same as a.sum. np.add.reduce(a): 3.0195806175470352e-05 <-- Also, note that this one is fast. Note2 As all benchmarks show, np.add.reduce is the fastest (least overhead). If your actual application also deals only with 1D arrays, and such a small difference is important to you, you should consider using np.add.reduce. Note3 Actually, numba may be the fastest in this case. from numba import njit import numpy as np import math @njit(cache=True) def nb_numpy_sum(a): # This will be a reduce sum. 
return np.sum(a) @njit(cache=True) def nb_pairwise_sum(a): # https://en.wikipedia.org/wiki/Pairwise_summation N = 2 if len(a) <= N: return np.sum(a) # reduce sum else: m = len(a) // 2 return nb_pairwise_sum(a[:m]) + nb_pairwise_sum(a[m:]) @njit(cache=True) def nb_kahan_sum(a): # https://en.wikipedia.org/wiki/Kahan_summation_algorithm total = a.dtype.type(0.0) c = total for i in range(len(a)): y = a[i] - c t = total + y c = (t - total) - y total = t return total def test(): candidates = [ ("np.sum", np.sum), ("math.fsum", math.fsum), ("nb_numpy_sum", nb_numpy_sum), ("nb_pairwise_sum", nb_pairwise_sum), ("nb_kahan_sum", nb_kahan_sum), ] n = 10 ** 7 + 1 a = np.full(n, 0.1, dtype=np.float64) for name, f in candidates: print(f"{name:16}: {f(a)}") test() Accuracy: np.sum : 1000000.0999999782 math.fsum : 1000000.1000000001 nb_numpy_sum : 1000000.0998389754 nb_pairwise_sum : 1000000.1 nb_kahan_sum : 1000000.1000000001 Timing: np.sum(a) : 4.7777313739061356e-05 a.sum() : 3.219071435928345e-05 np.add.reduce(a) : 2.9000919312238693e-05 nb_numpy_sum(a) : 1.0361894965171814e-05 nb_pairwise_sum(a): 1.4733988791704178e-05 nb_kahan_sum(a) : 1.2937933206558228e-05 Note that although nb_pairwise_sum and nb_kahan_sum have mathematical accuracy comparable to NumPy, neither is intended to be an exact replica of NumPy's implementation. So there is no guarantee that the results will be exactly the same as NumPy's. It should also be clarified that this difference is due to the amount of overhead, and NumPy is significantly faster for large arrays (e.g. >10000). The following section was added after this answer was accepted. Below is an improved version of @JΓ©rΓ΄meRichard's pairwise sum that sacrifices some accuracy for faster performance on larger arrays. See the comments for more details. import numba as nb import numpy as np # Very fast function which should be inlined by LLVM. # The loop should be completely unrolled and designed so the SLP-vectorizer # could emit SIMD instructions, though in practice it does not... @nb.njit(cache=True) def nb_sum_x16(a): v1 = a[0] v2 = a[1] for i in range(2, 16, 2): v1 += a[i] v2 += a[i+1] return v1 + v2 @nb.njit(cache=True) def nb_pairwise_sum(a): n = len(a) m = n // 2 # Trivial case for tiny arrays if n < 16: return sum(a[:m]) + sum(a[m:]) # Computation of a chunk (of 16~256 items) using an iterative # implementation so to reduce the overhead of function calls. if n <= 256: v = nb_sum_x16(a[0:16]) i = 16 # Main loop iterating on blocks (of exactly 16 items) while i + 15 < n: v += nb_sum_x16(a[i:i+16]) i += 16 return v + sum(a[i:]) # OPTIONAL OPTIMIZATION: only for array with 1_000~100_000 items # Same logic than above but with bigger chunks # It is meant to reduce branch prediction issues with small # chunks by splitting them in equal size. if n <= 4096: v = nb_pairwise_sum(a[:256]) i = 256 while i + 255 < n: v += nb_pairwise_sum(a[i:i+256]) i += 256 return v + nb_pairwise_sum(a[i:]) return nb_pairwise_sum(a[:m]) + nb_pairwise_sum(a[m:]) | 6 | 10 |
78,633,947 | 2024-6-17 | https://stackoverflow.com/questions/78633947/filter-dataframe-events-not-in-time-windows-dataframe | I have a DataFrame of events (Event Name - Time) and a DataFrame of time windows (Start Time - End Time). I want to get a DataFrame containing only the events not in any of the time windows. I am looking for a "pythonic" way to filter the DataFrame. Example: Events DataFrame: Event Name Event Time Event1 02/01/2000 00:00:00 Event2 05/01/2000 10:00:00 Event3 07/01/2000 09:00:00 Event4 10/01/2000 02:00:00 Time Windows DataFrame: Time Window Name Start Time End Time Window1 01/01/2000 00:00:00 06/01/2000 00:00:00 Window2 10/01/2000 01:00:00 10/01/2000 04:00:00 Result: Filtered Events DataFrame: Event Name Event Time Event3 07/01/2000 09:00:00 Setup: import pandas as pd events_data = { 'Event Name': ['Event1', 'Event2', 'Event3', 'Event4'], 'Event Time': ['02/01/2000 00:00:00', '05/01/2000 10:00:00', '07/01/2000 09:00:00', '10/01/2000 02:00:00'] } time_windows_data = { 'Time Window Name': ['Window1', 'Window2'], 'Start Time': ['01/01/2000 00:00:00', '10/01/2000 01:00:00'], 'End Time': ['06/01/2000 00:00:00', '10/01/2000 04:00:00'] } events_df = pd.DataFrame(events_data) time_windows_df = pd.DataFrame(time_windows_data) events_df['Event Time'] = pd.to_datetime(events_df['Event Time'], format='%d/%m/%Y %H:%M:%S') time_windows_df['Start Time'] = pd.to_datetime(time_windows_df['Start Time'], format='%d/%m/%Y %H:%M:%S') time_windows_df['End Time'] = pd.to_datetime(time_windows_df['End Time'], format='%d/%m/%Y %H:%M:%S') | You can build an IntervalIndex then create a boolean mask with reindex: # build IntervalIndex idx = pd.IntervalIndex.from_arrays(df_time['Start Time'], df_time['End Time']) # build boolean mask m = (pd.Series(False, index=idx) .reindex(df_events['Event Time'],fill_value=True) .to_numpy() ) # select non-matching rows out = df_events[m] Alternative to build m: m = idx.reindex(df_events['Event Time'])[1] == -1 Output: Event Name Event Time 2 Event3 2000-01-07 09:00:00 Intermediates: # idx IntervalIndex([(2000-01-01 00:00:00, 2000-01-06 00:00:00], (2000-01-10 01:00:00, 2000-01-10 04:00:00]], dtype='interval[datetime64[ns], right]') # m array([False, False, True, False]) Reproducible inputs: import pandas as pd from pandas import Timestamp df_events = pd.DataFrame({'Event Name': ['Event1', 'Event2', 'Event3', 'Event4'], 'Event Time': [Timestamp('2000-01-02 00:00:00'), Timestamp('2000-01-05 10:00:00'), Timestamp('2000-01-07 09:00:00'), Timestamp('2000-01-10 02:00:00')]}) df_time = pd.DataFrame({'Time Window Name': ['Window1', 'Window2'], 'Start Time': [Timestamp('2000-01-01 00:00:00'), Timestamp('2000-01-10 01:00:00')], 'End Time': [Timestamp('2000-01-06 00:00:00'), Timestamp('2000-01-10 04:00:00')]}) | 2 | 3 |
78,630,047 | 2024-6-16 | https://stackoverflow.com/questions/78630047/how-to-stop-numpy-floats-being-displayed-as-np-float64 | I have a large library with many doctests. All doctests pass on my computer. When I push changes to GitHub, GitHub Actions runs the same tests in Python 3.8, 3.9, 3.10 and 3.11. All tests run correctly on on Python 3.8; however, on Python 3.9, 3.10 and 3.11, I get many errors of the following type: Expected: [13.0, 12.0, 7.0] Got: [np.float64(13.0), np.float64(12.0), np.float64(7.0)] I.e., the results are correct, but for some reason, they are displayed inside "np.float64". In my code, I do not use np.float64 at all, so I do not know why this happens. Also, as the tests pass on my computer, I do not know how to debug the error, and it is hard to produce a minimal working example. Is there a way I can make the doctests pass again, without changing each individual test? | This is due to a change in how scalars are printed in numpy 2: numpy 1.x.x: >>> repr(np.array([1.0])[0]) '1.0' numpy 2.x.x: >>> repr(np.array([1.0])[0]) 'np.float64(1.0)' You should restrict the version of numpy to be 1.x.x in your requirements file to make sure you don't end up installing numpy 2.x.x: numpy ~> 1.26 (same as numpy >= 1.26, == 1.*, see this answer) or update your code to work with numpy 2 and change it to numpy ~> 2.0. | 4 | 5 |
78,632,725 | 2024-6-17 | https://stackoverflow.com/questions/78632725/python-django-access-fields-from-inherited-model | Hi I have a question related to model inheritance and accessing the fields in a Django template. My Model: class Security(models.Model): name = models.CharField(max_length=100, blank=False) class Stock(Security): wkn = models.CharField(max_length=15) isin = models.CharField(max_length=25) class Asset(models.Model): security = models.ForeignKey(Security, on_delete=models.CASCADE,blank=False) My views.py context["assets"] = Asset.objects.all() My template: {% for asset in assets %} {{ asset.security.wkn }} ... This gives me an error as wkn is no field of security. Any idea how I can solve this? Thanks in advance! | Django does not support polymorphism out of the box: even if you use inheritance and you retrieve an item from the database that is a Stock, if you query through Security, you retrieve it as a Security object, so without the specific Stock fields. Inheritance and polymorphism are often a pain in relational databases, and therefore it is often something you should not do, unless there are very good reasons to do so. In this case, you can fetch it with: {{ asset.security.stock.wkn }} But this will thus make extra queries, and generate an N+1 problem. You can work with django-polymorphic [readthedocs.io] where you can subclass from PolymorphicModel. Queries will then walk down the class hierarchy and add a lot of extra LEFT OUTER JOINs to the database, and then based on the data it receives generate a Stock or a Security object. But as you probably understand, this querying generates a lot of extra work for the database and for Python. Furthermore it makes behavior less predictable, and constraints less easy to enfroce. | 2 | 1 |
78,632,001 | 2024-6-17 | https://stackoverflow.com/questions/78632001/need-to-provide-addition-steps-to-pydantic-model-initialisation-method | I am trying to add custom steps to the Pydantic model __init__ method. Pseudo Sample Code : class Model(BaseModel): a: int b: Optional[List[int]] = None c: Optional[int] = None def __init__(self, *args, **kwargs): super.__init__(*args, **kwargs) self.method() def method(self): assert self.a is not None, "Value for a is not set" # As a safety precaution let's say self.c = sum(self.b) # Adds all the values in B and sets the sum to C When I initialise the method [In]: x = Model(a=1, b=[1, 2, 3, 4]) [Out]: pydantic.error_wrappers.ValidationError: 1 validation error for A response -> Model -> 0 Value for a is not set (type=value_error) I get this error even thou I have set both a and b in the model. Can someone help me out The same problem works if the method is called outside the __init__ method, but I want the method to be called upon initialisation and not manually Pseudo Sample Code : class Model(BaseModel): a: int b: Optional[List[int]] = None c: Optional[int] = None def method(self): assert self.a is not None, "Value for a is not set" # As a safety precaution let's say self.c = sum(self.b) # Adds all the values in B and sets the sum to C When I initialise the method [In]: x = Model(a=1, b=[1, 2, 3, 4]) [Out]: Model(a=1, b=[1, 2 , 3 , 4], c=None) [In]: x.method() [Out]: Model(a=1, b=[1, 2 , 3 , 4], c=10) Note: I'm restricted with the pydantic version '1.10.14'and cannot upgrade it for several reasons. Update Jun 18th After getting a few answers and help, I went with the validator method. But I went through few hiccups, which I have explained below Updated pseudo code : from pydantic import validator class Model(BaseModel): a: int b: Optional[List[int]] = None c: Optional[int] = None @root_validator(pre=False, allow_reuse=False) def method(cls, values, **kwargs): assert values["b"] is not None, "Value for b is not set" values["c"] = sum(values["b"]) return values def serialise(self): ... // Creates protobuf message @classmethod def deserialize(cls, message) -> "Model": ... // reads the protobuf message and creates the cls return cls( a = message.a c = message.c ) >>> x = Model(a=1, b=[1, 2, 3, 4])) Model(a=1 b=[1, 2, 3, 4] c=10) >>> >>> x.serialise() xxxxxxxxxxxxxxxxxxxxx >>> >>> y = Model.desirealise("xxxxxxxxxxxxxxxxxxxxx") Value for b is not set In this case, I don't want to run the validator when c is manually set during the model is initiated. Hence, I set skip_on_failure flag to True for the validator and I have made the below changes to prevent the check from failing. from pydantic import validator class Model(BaseModel): a: int b: Optional[List[int]] = None c: Optional[int] = None @root_validator(pre=False, allow_reuse=False) def method(cls, values, **kwargs): if values["c"] is not None: return values assert values["b"] is not None, "Value for b is not set" values["c"] sum(values["b"]) return values def serialise(self): ... // Creates protobuf message @classmethod def deserialize(cls, message) -> "Model": ... // reads the protobuf message and creates the cls return cls( a = message.a c = message.c ) >>> x = Model(a=1, b=[1, 2, 3, 4])) Model(a=1 b=[1, 2, 3, 4] c=10) >>> >>> x.serialise() xxxxxxxxxxxxxxxxxxxxx >>> >>> y = Model.desirealise("xxxxxxxxxxxxxxxxxxxxx") Model(a=1, b=None, c=10) This code resolves my issues and works properly with the system, but I want to know if this is a best practice or If any other way is there to do this. 
| Pydantic v2 answer: If you want to both validate and add logic to your property during initialization, you can use model_validator: from pydantic import BaseModel from pydantic import model_validator class Model(BaseModel): a: int b: Optional[List[int]] = None c: Optional[int] = None @model_validator(mode='after') def method(self): assert self.a is not None, "Value for a is not set" # As a safety precausion let say self.c = sum(self.b) # Adds all the value in B and sets the sum to C print(Model(a=1, b=[1, 2, 3, 4])) # a=1 b=[1, 2, 3, 4] c=10 But I think it's a better approach to separate responsabilities with @model_validator and @computed_field: from pydantic import computed_field from pydantic import model_validator class Model(BaseModel): a: int b: Optional[List[int]] = None @model_validator(mode='after') def method(self): assert self.a is not None, "Value for a is not set" @computed_field @property def c(self) -> int: return sum(self.b) print(Model(a=1, b=[1, 2, 3, 4])) # a=1 b=[1, 2, 3, 4] c=10 Pydantic v1 (1.10.14) answer: Since pydantic v1 don't offer computed_field, you can still rely on model/field validations. In this case, the validator decorator: from pydantic import validator class Model(BaseModel): a: int b: Optional[List[int]] = None c: Optional[int] = None @validator("c", always=True) def method(cls, v, values, **kwargs): assert values["a"] is not None, "Value for a is not set" return sum(values["b"]) print(Model(a=1, b=[1, 2, 3, 4])) # a=1 b=[1, 2, 3, 4] c=10 | 2 | 3 |
78,628,027 | 2024-6-16 | https://stackoverflow.com/questions/78628027/broken-pipe-passing-python-output-to-c-input-due-to-size | I'm trying to transform an image into a matrix of it's rbg values in c++, i really like the simplicity of PIL on handling different images extensions, so i currently have two codes from PIL import Image img=Image.open("oi.png") pixel=img.load() width,height=img.size print(height) print(width) def rgb(r, g, b): return ((r & 0xff) << 16) + ((g & 0xff) << 8) + (b & 0xff) for x in range(width): for y in range(height): print(rgb(pixel[x,y][0],pixel[x,y][1],pixel[x,y][2])) and to recieve in C++ #include <bits/stdc++.h> using namespace std; #define __ ios_base::sync_with_stdio(false);cin.tie(NULL); int main(){__ long long height,width;cin>>height>>width; unsigned long img[width][height]; for(long long j=0; j<height;j++) for (long long i=0; i<width;i++){ cin>>img[i][j]; } return 0;} and i am connecting both trough terminal withpython3 code.py | ./code it works for really small images, but for bigger ones it returns BrokenPipeError: [Errno 32] Broken pipe What should i do? is there a better way to achieve what i am trying to achieve? I want to connect python output to c++ input even with big outputs without Broken pipe error | "Broken pipe" means that a program tried to write to a pipe that no longer had any programs reading from it. This means that your C++ program is exiting before it should. For a 1000x1000 image, assuming you're running this on x86_64 Linux, img is 8MB. I suspect that's too big for the stack, which is causing your C++ program to crash. You can fix that by allocating img on the heap instead. | 2 | 1 |
78,628,103 | 2024-6-16 | https://stackoverflow.com/questions/78628103/assign-each-list-element-to-a-row-of-pandas-dataframe-sequentially-and-equally | I have a pandas dataframe with 25 rows, and also a list with 5 elements. How do I: - assign 1st element of the list to first row of the dataframe - 2nd element of the list to second row - ... - 1st element to 6th row of dataframe etc Eg: Need to assign a doctor to each patient sequentially df: | Name | Gender | | -------- | -------------- | | First | Male | | Second | Female | | Third | Male | | Fourth | Male | | Fifth | Male | list_doctor = ['andrea','anup'] Required Output: | Name | Gender | Doctor | | -------- | -------------- |---------| | First | Male |andrea | | Second | Female |anup | | Third | Male |andrea | | Fourth | Male |anup | | Fifth | Male |andrea | Tried with iterrows, but all rows are being assigned to the same name for index, row in df.iterrows(): for doctor in l_doctor: print(index, doctor) df.loc[index,'Assignee'] = doctor | Approach To avoid Python for loop we can use itertools functions cycle and islice as follows: cycle to make an iterator that will indefinitely loop through a list islice to create an iterator that returns selected elements from the iterable list to instantiate the iterator elements Code import pandas as pd from itertools import cycle, islice df = pd.DataFrame({ 'Name': ['First', 'Second', 'Third', 'Fourth', 'Fifth'], 'Gender': ['Male', 'Female', 'Male', 'Male', 'Male'] }) docs = ['andrea', 'anup'] df['Doctor'] = list(islice(cycle(docs), 0, len(df))) print(df) Output Name Gender Doctor 0 First Male andrea 1 Second Female anup 2 Third Male andrea 3 Fourth Male anup 4 Fifth Male andrea | 3 | 4 |
78,628,078 | 2024-6-16 | https://stackoverflow.com/questions/78628078/why-does-this-error-when-converting-a-python-list-of-lists-to-a-numpy-array-only | I have a somewhat peculiar structure of python list of lists that I need to convert to a numpy array, so far I have managed to simply get by using np.array(myarray, dtype = object), however a seemingly insignificant change to the structure of myarray has caused me to get an error. I have managed to reduce my issue down into two lines of code, the following is what I was using previously and works exactly how I want it to: import numpy as np myarray = [np.array([[1,2,3,4],[5,6,7,8]]), np.array([[9,10],[11,12]]), np.array([[13,14],[15,16],[17,18]])] np.array(myarray,dtype = object) However, simply removing the last [17,18] array we have import numpy as np myarray = [np.array([[1,2,3,4],[5,6,7,8]]), np.array([[9,10],[11,12]]), np.array([[13,14],[15,16]])] np.array(myarray,dtype = object) Which gives "ValueError: could not broadcast input array from shape (2,4) into shape (2,)" when it attempts to run the second line. It seems to me that this only happens when the arrays all have the same length but the underlying lists have different lengths, what I don't understand is why setting dtype = object doesnt cover this especially considering it handles the more complicated list of lists shape. | np.array tries, as first priority, to make a n-d numeric array - one where all elements are numeric, and the shape is consistent in all dimensions. i.e. no 'ragged' array. In [36]: alist = [np.array([[1,2,3,4],[5,6,7,8]]), np.array([[9,10],[11,12]]), np.array([[13,14],[15,16],[17,18]])] In [38]: [a.shape for a in alist] Out[38]: [(2, 4), (2, 2), (3, 2)] alist works making a 3 element array of arrays. Your problem case: In [39]: blist = [np.array([[1,2,3,4],[5,6,7,8]]), np.array([[9,10],[11,12]]), np.array([[13,14],[15,16]])] In [40]: [a.shape for a in blist] Out[40]: [(2, 4), (2, 2), (2, 2)] Note that all subarrays have the same first dimension. That's what's giving the problem. The safe way to make such an array is to start with a 'dummy' of the right shape, and fill it: In [41]: res = np.empty(3,object); res[:] = blist; res Out[41]: array([array([[1, 2, 3, 4], [5, 6, 7, 8]]), array([[ 9, 10], [11, 12]]), array([[13, 14], [15, 16]])], dtype=object) In [42]: res = np.empty(3,object); res[:] = alist; res Out[42]: array([array([[1, 2, 3, 4], [5, 6, 7, 8]]), array([[ 9, 10], [11, 12]]), array([[13, 14], [15, 16], [17, 18]])], dtype=object) It also works when all subarrays/lists have the same shape In [43]: clist = [np.array([[1,2],[7,8]]), np.array([[9,10],[11,12]]), np.array([[13,14],[15,16]])] In [44]: res = np.empty(3,object); res[:] = clist; res Out[44]: array([array([[1, 2], [7, 8]]), array([[ 9, 10], [11, 12]]), array([[13, 14], [15, 16]])], dtype=object) Without that clist produces a (3,2,2) array of number objects: In [45]: np.array(clist, object) Out[45]: array([[[1, 2], [7, 8]], [[9, 10], [11, 12]], [[13, 14], [15, 16]]], dtype=object) One way to think of it, np.array does not give you a way of specifying the 'depth' or 'shape' of object array. It has to 'guess', and in some cases guesses wrong. | 4 | 3 |
78,626,866 | 2024-6-15 | https://stackoverflow.com/questions/78626866/keep-button-visible-after-click-to-change-graph | When I click on the buttons to change the graph they become invisible, but are still usable. How do you keep them always displayed on the window? I know it's plt.clf() that clears the content, but how to place buttons that will always stay under the graph? import matplotlib.pyplot as plt from matplotlib.widgets import Button def plot1(event): plt.clf() plt.scatter(liste_x,liste_graphe_1,s=5,c="red") plt.draw() def plot2(event): plt.clf() plt.scatter(liste_x,liste_graphe_2,s=5,c="green") plt.draw() liste_x=[1,2,3] liste_graphe_1=[1,2,3] liste_graphe_2=[3,2,1] fig,ax=plt.subplots() plt.scatter(liste_x,liste_graphe_1,s=5,c="red") axBtn1=plt.axes([0.0,0.0,0.2,0.05]) Btn1=Button(axBtn1,label="graphe_1",color="grey",hovercolor="yellow") Btn1.on_clicked(plot1) axBtn2=plt.axes([0.2,0.0,0.2,0.05]) Btn2=Button(axBtn2,label="graphe_2",color="grey",hovercolor="yellow") Btn2.on_clicked(plot2) plt.show() Buttons are placed, they change the graph as expected. I tried to take off plt.clf(), but the graphs are not replaced without this function. | You have to use ax. instead of plt.. First you have to use it to create plot ax.scatter(...) Next you have to use it to remove plot ax.clear() # PEP8: use space after `,` around `=`, etc. import matplotlib.pyplot as plt from matplotlib.widgets import Button def plot1(event): ax.clear() ax.scatter(liste_x, liste_graphe_1, s=5, c="red") plt.draw() def plot2(event): ax.clear() ax.scatter(liste_x, liste_graphe_2, s=5, c="green") plt.draw() liste_x = [1, 2, 3] liste_graphe_1 = [1, 2, 3] liste_graphe_2 = [3, 2, 1] fig, ax = plt.subplots() ax.scatter(liste_x, liste_graphe_1, s=5, c="red") ax_btn1 = plt.axes([0.0, 0.0, 0.2, 0.05]) # PEP8: `lower_case_names` for variables btn1 = Button(ax_btn1, label="graphe_1", color="grey", hovercolor="yellow") btn1.on_clicked(plot1) ax_btn2 = plt.axes([0.2, 0.0, 0.2, 0.05]) btn2 = Button(ax_btn2, label="graphe_2", color="grey", hovercolor="yellow") btn2.on_clicked(plot2) plt.show() PEP 8 -- Style Guide for Python Code | 2 | 1 |
78,628,020 | 2024-6-16 | https://stackoverflow.com/questions/78628020/how-can-i-rewrite-this-so-that-it-does-not-repeat | I'm looking for suggestions on how I could rewrite this so that the code isn't repeating. It's suppose to separate the float(digits) from the string(alphabetical characters) within a dictionary's value, and subtract or add a user's numerical input to the float. Afterwards, it rejoins the digits and the letters, converts them back into a string and stores the value. An example of what it should do is take "19.0: chicken" from the dictionary, split it, add 3 from the user's input, and return "22.0 chicken" to the dictionary. if modify_options == "1": quantity_value = float(re.findall(r"[0-9]+(?:\.[0-9]*)?", inventory_list[key]) [0]) + quantity quantity_measurement = ''.join(re.findall(r'(?i)[A-Z]', str(inventory_list[key]))) inventory_list[key] = str(quantity_value) + str(quantity_measurement) return inventory_list elif modify_options == "2": quantity_value = float(re.findall(r"[0-9]+(?:\.[0-9]*)?", inventory_list[key])[0]) - quantity quantity_measurement = ''.join(re.findall(r'(?i)[A-Z]', str(inventory_list[key]))) inventory_list[key] = str(quantity_value) + str(quantity_measurement) return inventory_list | The best practice for this would be wrapping up the code within a function. Infact whenever you find some part of code is repeating multiple times, you are supposed to put it into a function. Here's how to do it: def update_inventory(key, option, quantity): quantity_value = float(re.findall(r"[0-9]+(?:\.[0-9]*)?", inventory_list[key]) [0]) quantity_measurement = ''.join(re.findall(r'(?i)[A-Z]', str(inventory_list[key]))) if option=="1": quantity_value += quantity elif option=="2": quantity_value -= quantity inventory_list[key] = str(quantity_value) + str(quantity_measurement) return inventory_list #now you can update the inventory_list by calling the function update_inventory(2, "1", 9) update_inventory(1, "2", 4) | 2 | 0 |
78,627,912 | 2024-6-15 | https://stackoverflow.com/questions/78627912/python-dataframe-info-output-doesnt-reflect-dropped-rows | I have a dataset with 2111 rows in it. When I drop the 27 duplicate rows the DataFrame.info output still shows the rows numbered from 0 to 2110, but reports 2085 rows. Is there a refresh command associated with DataFrame metadata I need to call? Original unpreprocessed DataFrame.info output: !!!!!!!!!!!!!!!!!! Size and shape and info before preprocess 40109 (2111, 19) <bound method DataFrame.info of id Gender Age Height Weight ... TUE CALC MTRANS NObeyesdad BMI 0 1 female 21 1.6200 64 ... 3 to 5 no public_transportation normal_weight 24.3865 1 2 female 21 1.5200 56 ... 0 to 2 sometimes public_transportation normal_weight 24.2382 2 3 male 23 1.8000 77 ... 3 to 5 frequently public_transportation normal_weight 23.7654 3 4 male 27 1.8000 87 ... 0 to 2 frequently walking overweight_level_i 26.8519 4 5 male 22 1.7800 90 ... 0 to 2 sometimes public_transportation overweight_level_ii 28.3424 ... ... ... ... ... ... ... ... ... ... ... ... 2106 2,107 female 21 1.7107 131 ... 3 to 5 sometimes public_transportation obesity_type_iii 44.9015 2107 2,108 female 22 1.7486 134 ... 3 to 5 sometimes public_transportation obesity_type_iii 43.7419 2108 2,109 female 23 1.7522 134 ... 3 to 5 sometimes public_transportation obesity_type_iii 43.5438 2109 2,110 female 24 1.7394 133 ... 3 to 5 sometimes public_transportation obesity_type_iii 44.0715 2110 2,111 female 24 1.7388 133 ... 3 to 5 sometimes public_transportation obesity_type_iii 44.1443 [2111 rows x 19 columns] After preprocessing drops 27 duplicate rows from a dataset (going from 2111 rows to 2085). Post row dropping, DataFrames.info output shows: !!!!!!!!!!!!!!!!!! Size and shape After preprocess 37512 (2084, 18) <bound method DataFrame.info of Gender Age Height Weight FHWO FAVC FCVC NCP CAEC SMOKE CH2O SCC FAF TUE CALC MTRANS NObeyesdad BMI 0 2 21 1.6200 64 2 1 2 3 2 1 2 1 1 2 1 3 2 24.3865 1 2 21 1.5200 56 2 1 3 3 2 2 3 2 4 1 2 3 2 24.2382 2 1 23 1.8000 77 2 1 2 3 2 1 2 1 3 2 3 3 2 23.7654 3 1 27 1.8000 87 1 1 3 3 2 1 2 1 3 1 3 5 3 26.8519 4 1 22 1.7800 90 1 1 2 1 2 1 2 1 1 1 2 3 4 28.3424 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 2106 2 21 1.7107 131 2 2 3 3 2 1 2 1 3 2 2 3 7 44.9015 2107 2 22 1.7486 134 2 2 3 3 2 1 2 1 2 2 2 3 7 43.7419 2108 2 23 1.7522 134 2 2 3 3 2 1 2 1 2 2 2 3 7 43.5438 2109 2 24 1.7394 133 2 2 3 3 2 1 3 1 2 2 2 3 7 44.0715 2110 2 24 1.7388 133 2 2 3 3 2 1 3 1 2 2 2 3 7 44.1443 [2084 rows x 18 columns] NOTE: the info output shows the last row numbered 2110, but rows as 2084. I've tried using DataFrame.dropduplicates with both inplace=True and inplace=False, but the result is the same: #Example of inplace = False and inplace=True return_df = return_df.drop_duplicates(inplace=False) return_df.drop_duplicates(inplace=True) Here's the relevant row dropping code: # removing duplicates count_dup = return_df.duplicated().sum() if (verbose > 0): print (f"Number of Duplicates : {count_dup}") if count_dup > 0: if (verbose > 0): print ("Dropping Duplicates") #return_df.drop_duplicates(inplace=True) return_df = return_df.drop_duplicates(inplace=False) else: if (verbose > 0): print ("No duplicates found.") return return_df | The rows are being dropped correctly. However please note that the index won't automatically reset after dropping the duplicates. You need to reset the index to get your desired output: df.reset_index(drop=True, inplace=True) | 2 | 1 |
78,626,707 | 2024-6-15 | https://stackoverflow.com/questions/78626707/leetcode-417-bfs-time-limit-exceeded | I am working on Leetcode 417 Pacific Atlantic Water Flow: There is an m x n rectangular island that borders both the Pacific Ocean and Atlantic Ocean. The Pacific Ocean touches the island's left and top edges, and the Atlantic Ocean touches the island's right and bottom edges. The island is partitioned into a grid of square cells. You are given an m x n integer matrix heights where heights[r][c] represents the height above sea level of the cell at coordinate (r, c). The island receives a lot of rain, and the rain water can flow to neighboring cells directly north, south, east, and west if the neighboring cell's height is less than or equal to the current cell's height. Water can flow from any cell adjacent to an ocean into the ocean. Return a 2D list of grid coordinates result where result[i] = [ri, ci] denotes that rain water can flow from cell (ri, ci) to both the Pacific and Atlantic oceans. My solution is below. I am having Time Limit Exceeded error for a very large test case like [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19],[72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,20],[71,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,90,21],[70,135,192,193,194,195,196,197,198,199,200,201,202,203,204,205,152,91,22], ...] I cannot figure out why my BFS did not work within a reasonable time. What am I missing? class Solution: def pacificAtlantic(self, heights): rows, cols = len(heights), len(heights[0]) # define directions directions = [[0, 1], [0, -1], [1, 0], [-1, 0]] def bfs(node, visited): Q = [node] while Q: x, y = Q.pop(0) visited[x][y] = True for dx, dy in directions: next_x, next_y = x + dx, y + dy if next_x < 0 or next_x >= rows: continue if next_y < 0 or next_y >= cols: continue if visited[next_x][next_y]: continue if heights[x][y] > heights[next_x][next_y]: continue Q.append((next_x, next_y)) # pacific pacific_start = [[0, i] for i in range(cols)] + [[i, 0] for i in range(1, rows)] pacific_visited = [[False for _ in range(cols)] for _ in range(rows)] for row, col in pacific_start: if not pacific_visited[row][col]: bfs((row, col), pacific_visited, 0) # atlantic atlantic_start = [[rows - 1, i] for i in range(cols)] + [[i, cols - 1] for i in range(0, rows - 1)] atlantic_visited = [[False for _ in range(cols)] for _ in range(rows)] for row, col in atlantic_start: if not atlantic_visited[row][col]: bfs((row, col), atlantic_visited, 0) # find the common land ans = [] for i in range(rows): for j in range(cols): if pacific_visited[i][j] and atlantic_visited[i][j]: ans.append((i, j)) return ans | There are two issues in your bfs function that negatively affect performance: pop(0) is not efficient on a list. Instead use a deque You mark a node as visited, after having taken it from the queue, but that means you can have several copies of the same cell in the queue, increasing the number of iterations the BFS loop will make. Instead mark a node as visited at the time you push it on the queue. 
Here is the code of your bfs function, with minimal changes needed to resolve those two issues: def bfs(node, visited, test): visited[node[0]][node[1]] = True # mark node when it enters the queue Q = deque([node]) # Use a deque, not a list while Q: x, y = Q.popleft() # now it's an efficient operation for dx, dy in directions: next_x, next_y = x + dx, y + dy if next_x < 0 or next_x >= rows: continue if next_y < 0 or next_y >= cols: continue if visited[next_x][next_y]: continue if heights[x][y] > heights[next_x][next_y]: continue visited[next_x][next_y] = True # mark node when it enters the queue Q.append((next_x, next_y)) | 2 | 2 |
78,620,310 | 2024-6-13 | https://stackoverflow.com/questions/78620310/calculate-windowed-event-chains | Given a Polars DataFrame data = pl.DataFrame({"user_id": [1, 1, 1, 1, 1, 2, 2, 2, 2], "event": [False, True, True, False, True, True, True, False, False] I wish to calculate a column event_chain which counts the streak of times where a user has an event, where in any of the previous 4 rows they also had an event. Every time a new event happens, when the user already has a streak active, the streak counter is incremented, it should be then set to zero if they don't have another event for another 4 rows user_id event event_chain reason for value 1 False 0 no events yet 1 True 0 No events in last 4 rows (not inclusive of current row) 1 True 1 event this row, and 1 event in last 4 rows 1 False 1 Does not reset to 0 as there is an event within the next 4 rows 1 True 2 event this row and event last 4 rows, increment the streak 2 True 0 No previous events 2 True 1 Event this row and in last 4 rows for user 2 False 0 No event this row, and no events in next 4 rows for user, resets to 0 2 False 0 I have working code as follows to do this, but I think there should be a cleaner way to do it data.with_columns( rows_since_last_event=pl.int_range(pl.len()).over("user_id") - pl.when("event").then(pl.int_range(pl.len())).forward_fill() .over("user_id"), rows_till_next_event=pl.when("event").then(pl.int_range(pl.len())) .backward_fill().over("user_id") - pl.int_range(pl.len()).over("athlete_id") ) .with_columns( chain_event=pl.when( pl.col("event") .fill_null(0) .rolling_sum(window_size=4, min_periods=1) .over("user_id") - pl.col("event").fill_null(0) > 0 ) .then(1) .otherwise(0) ) .with_columns( chain_event_change=pl.when( pl.col("chain_event").eq(1), pl.col("chain_event").shift().eq(0), pl.col("rows_since_last_event").fill_null(5) > 3, ) .then(1) .when( pl.col("congested_event").eq(0), pl.col("congested_event").shift().eq(1), pl.col("rows_till_next_event").fill_null(5) > 3, ) .then(1) .otherwise(0) ) .with_columns( chain_event_identifier=pl.col("chain_event_change") .cum_sum() .over("user_id") ) .with_columns( event_chain=pl.col("chain_event") .cum_sum() .over("user_id", "chain_event_identifier") ) ) | updated version I looked at @jqurious answer and I think you can make it even more concise .sum_horizontal() to precalculate counter while checking previous N rows. We only need sum for previous rows, for next rows we just need to know if they exist, so max is enough. Also note that we use window of size 5 (including current row) instead so we don't need special case for 'starting' event. 
( data .with_columns( chain_event = pl.sum_horizontal(pl.col.event.shift(i) for i in range(5)) .over('user_id'), next = pl.max_horizontal(pl.col.event.shift(-i) for i in range(1,5)) .over('user_id').fill_null(False) ).with_columns( pl .when(event = False, next = False).then(0) .when(event = False, chain_event = 0).then(0) .otherwise(pl.col.chain_event - 1) .alias('chain_event') # or even shorter but a bit more cryptic # pl # .when(event = False, next = False).then(0) # .otherwise(pl.col.chain_event - pl.col.event) # .alias('chain_event') ) ) ┌─────────┬───────┬──────┬───────┬─────────────┐ │ user_id │ event │ prev │ next │ chain_event │ │ --- │ --- │ --- │ --- │ --- │ │ i64 │ bool │ u32 │ bool │ i64 │ ╞═════════╪═══════╪══════╪═══════╪═════════════╡ │ 1 │ false │ 0 │ true │ 0 │ │ 1 │ true │ 1 │ true │ 0 │ │ 1 │ true │ 2 │ true │ 1 │ │ 1 │ false │ 2 │ true │ 1 │ │ 1 │ true │ 3 │ false │ 2 │ │ 2 │ true │ 1 │ true │ 0 │ │ 2 │ true │ 2 │ false │ 1 │ │ 2 │ false │ 2 │ false │ 0 │ │ 2 │ false │ 2 │ false │ 0 │ └─────────┴───────┴──────┴───────┴─────────────┘ previous version .shift() to get 4 previous and 4 next rows. .max_horizontal() so we know if there's an event within these windows. .rle_id() to create continuous groups of events so we can restart the counter. .cum_sum() to increment counters. .when().then().otherwise() to only take into account groups with events. Basically, the most important part here is that we treat row as being within chain if either one of the conditions is met: There's event within current row. There's event within previous 4 rows (otherwise we've restarted the counter already) AND there's event within next 4 rows (otherwise we're going to reset the counter). ( data .with_columns( pl.max_horizontal(pl.col("event").shift(i + 1).over('user_id') for i in range(4)).alias("max_lag").fill_null(False), pl.max_horizontal(pl.col("event").shift(-i - 1).over('user_id') for i in range(4)).alias("max_lead").fill_null(False) ).with_columns( event_chain = (pl.col("max_lag") & pl.col("max_lead")) | pl.col('event') ).select( pl.col('user_id','event'), pl.when(pl.col('event_chain')) .then( pl.col('event').cum_sum().over('user_id', pl.col('event_chain').rle_id().over('user_id')) - 1 ).otherwise(0) .alias('event_chain') ) ) ┌─────────┬───────┬─────────────┐ │ user_id │ event │ event_chain │ │ --- │ --- │ --- │ │ i64 │ bool │ i64 │ ╞═════════╪═══════╪═════════════╡ │ 1 │ false │ 0 │ │ 1 │ true │ 0 │ │ 1 │ true │ 1 │ │ 1 │ false │ 1 │ │ 1 │ true │ 2 │ │ 2 │ true │ 0 │ │ 2 │ true │ 1 │ │ 2 │ false │ 0 │ │ 2 │ false │ 0 │ └─────────┴───────┴─────────────┘ Alternatively .rolling_max() to calculate if there's event within previous 4 rows same with .reverse() to calculate if there's event within next 4 rows ( data .with_columns( (pl.col('event').cast(pl.Int32).shift(1).rolling_max(4, min_periods=0)).over('user_id').fill_null(0).alias('max_lag'), (pl.col('event').reverse().cast(pl.Int32).shift(1).rolling_max(4, min_periods=0).reverse()).over('user_id').fill_null(0).alias('max_lead') ).with_columns( event_chain = ((pl.col("max_lag") == 1) & (pl.col("max_lead") == 1)) | pl.col('event') ).select( pl.col('user_id','event'), pl.when(pl.col('event_chain')) .then( pl.col('event').cum_sum().over('user_id', pl.col('event_chain').rle_id().over('user_id')) - 1 ).otherwise(0) .alias('event_chain') ) ) | 2 | 2 |
78,624,158 | 2024-6-14 | https://stackoverflow.com/questions/78624158/arma-model-function-for-future-unseen-data-with-start-and-end-dates | I have a dataframe like this lstvals = [30.81,27.16,82.15,31.00,9.13,11.77,25.58,7.57,7.98,7.98] lstdates = ['2021-01-01', '2021-01-05', '2021-01-09', '2021-01-13', '2021-01-17', '2021-01-21', '2021-01-25', '2021-01-29', '2021-02-02', '2021-02-06'] data = { "Dates": lstdates, "Market Value": lstvals } df = pd.DataFrame(data) df.set_index('Dates', inplace = True) df I want to forecast the values which are out of this sample, for example, from '2021-02-10' to '2022-04-23' (in my dataset, I have data from '2021-01-01' to '2023-11-09', and want to forecast for the next year, from '2024-01-01' to '2024-11-09') https://www.statsmodels.org/devel/examples/notebooks/generated/statespace_forecasting.html I have defined and fitted my model as follows, which predicts the test data: train = df['Market Value'].iloc[:1187] test = df['Market Value'].iloc[-200:] ... ARMAmodel = SARIMAX(y, order = (2,1,2)) ARMAResults = ARMAmodel.fit() ... y_pred = ARMAResults.get_forecast(len(test.index)) y_pred_df = y_pred.conf_int(alpha = 0.05) y_pred_df["Predictions"] = ARMAResults.predict(start = y_pred_df.index[0], end = y_pred_df.index[-1]) y_pred_df.index = test.index y_pred_out = y_pred_df["Predictions"] ... plt.plot(train, color = "black") plt.plot(test, color = "red") plt.ylabel('Market Value ($M)') plt.xlabel('Date') plt.xticks(rotation=45) plt.title("Train/Test/Prediction for Market Data") plt.plot(y_pred_out, color='green', label = 'Predictions') plt.legend() plt.show() How can I make predictions for future dates? I have just tried to input future dates with the forecast method, and apparently, it is not working for me ARMAResults.forecast(start = '2024-01-01', end = '2024-11-09') TypeError: statsmodels.tsa.statespace.mlemodel.MLEResults.predict() got multiple values for keyword argument 'start' https://www.statsmodels.org/devel/examples/notebooks/generated/statespace_forecasting.html | Issues: Specify the frequency for the dates index: df = pd.DataFrame(data) df['Dates'] = pd.to_datetime(df['Dates']) df.set_index('Dates', inplace=True) df = df.asfreq('4D') forecast is strictly for out-of-sample forecasts, and has no start or end parameters. (Note that its steps parameter can be passed a string or datetime type.) Either use predict or get_prediction, which support both in-sample and out-of-sample results. ARMAResults.predict(start='2024-01-01', end='2024-11-09') or # mean ARMAResults.get_prediction(start='2024-01-01', end='2024-11-09').predicted_mean # mean, standard error, prediction interval ARMAResults.get_prediction(start='2024-01-01', end='2024-11-09').summary_frame() | 2 | 1 |
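A short, hedged consolidation of the point above, assuming the fitted ARMAResults object and the '4D'-frequency date index from the answer (the step count of 30 is purely illustrative):

# forecast() takes a number of steps (or an end date/label), never start/end keywords
point_forecasts = ARMAResults.forecast(steps=30)           # point forecasts only
fc = ARMAResults.get_forecast(steps=30)                    # adds uncertainty information
fc_summary = fc.summary_frame(alpha=0.05)                  # mean, standard error, 95% interval
# get_prediction() accepts explicit start/end labels, in-sample or out-of-sample
future = ARMAResults.get_prediction(start='2024-01-01', end='2024-11-09').summary_frame()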
78,623,142 | 2024-6-14 | https://stackoverflow.com/questions/78623142/python-numexpr-evaluating-complex-numbers-in-scientific-notation-yields-valuee | I am using the Python numexpr module to evaluate user inputs (numeric or formula). Numbers can be complex, and this works as long as I'm avoiding scientific notation: >>> import numexpr as ne >>> ne.evaluate("1000000000000j") array(0.+1.e+12j) >>> ne.evaluate("0.+1.e+12j") ValueError: Expression 0.+1.e+12j has forbidden control characters. I tried to evaluate complex numbers in scientific notation with some variations and was expecting numexpr to process these numbers. However, I always ended up with the ValueError described above. Is this a bug or am I doing something wrong? | The errors you see with numexpr when trying to evaluate the expression "0.+1.e+12j" occur because numexpr parses complex numbers differently than standard Python. It does not accept "0.+1.e+12j" as valid because it prefers expressions where operations between numerical and logical units are explicitly defined. However, if you rewrite the expression as "1.e+12 * 1j", numexpr handles it correctly because this form explicitly separates the scalar and imaginary units with a multiplication operation, which numexpr can handle effectively: >>> ne.evaluate("1.e+12 * 1j") array(0.+1.e+12j) | 2 | 1 |
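A hedged workaround sketch that avoids hand-editing user input: keep the typed expression free of complex literals and inject the imaginary unit as a variable instead. The variable name j is an illustrative assumption, and this has not been checked against every numexpr version:

import numexpr as ne

expr = "0. + 1.e+12 * j"                      # user input restricted to real-valued syntax
result = ne.evaluate(expr, local_dict={"j": 1j})   # supply the imaginary unit as a variable
print(result)                                  # expected: array(0.+1.e+12j)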
78,617,024 | 2024-6-13 | https://stackoverflow.com/questions/78617024/get-swagger-ui-html-causing-unwanted-server-options-to-display-in-fastapi-applic | I need to change the styling and add extra HTML to the default generated docs in my FastAPI application, so I use get_swagger_ui_html to achieve this. This causes the operation level options for 'Servers' to appear when clicking the 'Try it out' section button - I do not want this visible for users. I am using FastAPI version 0.111.0 for context. Bug also raised in this question, but without the use of get_swagger_ui_html: fastapi swagger interface showing operation level options override server options This minimal example does not have the problem: # imports from fastapi import FastAPI import uvicorn # set up app app = FastAPI( description='some description', title=' a title', version="1.0.0", ) @app.get("/dummy_endpoint") async def endpoint(): return 'woohoo' if __name__ == "__main__": # run API with uvicorn uvicorn.run(app, host="127.0.0.1", port=8000) If we introduce the use of get_swagger_ui_html, then we get problems. See the code and screenshot below: # imports from fastapi import FastAPI from fastapi.responses import HTMLResponse from fastapi.openapi.docs import ( get_swagger_ui_html, get_swagger_ui_oauth2_redirect_html, ) import uvicorn # set up app app = FastAPI( docs_url = None, description='some description', title=' a title', version="1.0.0", ) @app.get("/dummy_endpoint") async def endpoint(): return 'woohoo' @app.get("/docs", include_in_schema=False) async def custom_swagger_ui_html(): swagger_ui_content = get_swagger_ui_html( openapi_url=app.openapi_url, swagger_ui_parameters={ "syntaxHighlight": False, "defaultModelsExpandDepth": -1}, title=app.title, oauth2_redirect_url=app.swagger_ui_oauth2_redirect_url, swagger_js_url="https://unpkg.com/swagger-ui-dist@5/swagger-ui-bundle.js", swagger_css_url="https://unpkg.com/swagger-ui-dist@5/swagger-ui.css", ) #allows us to add header, footer, css etc. html_content = f'{swagger_ui_content.body.decode("utf-8")}' return HTMLResponse(html_content) if __name__ == "__main__": # run API with uvicorn uvicorn.run(app, host="127.0.0.1", port=8000) Screenshot: Can we simply remove the Servers part from the HTML? It seems unobvious from any docs how to do this. At the moment I get the HTML elements and add CSS to hide them, but I'd rather a proper solution. Thanks! | It's most likely this Swagger UI bug that affects OpenAPI 3.1 documents. The bug seems to have been introduced in Swagger UI v. 5.9.2. FastAPI 0.111 uses Swagger UI v. 5.9.0 by default, whereas your 2nd example fetches v. 5.17.14 as of this writing, hence the difference. (By the way, the next update of FastAPI will switch from the pinned Swagger UI v. 5.9.0 to "latest 5.x" so it will also be affected by this bug.) Until Swagger UI fixes this bug, the only workaround is to fix the Swagger UI layout manually. Option 1. Hide operation-level "Servers" by using CSS (You said this is what you're currently doing.) For future readers - you'll need to add something like this to your Swagger UI CSS: .swagger-ui .operation-servers { display: none; } Option 2. Hide operation-level "Servers" by using a Swagger UI plugin Another (more verbose) option is to write a Swagger UI plugin to remove the unwanted element from the DOM entirely. The code of simple "hide this element" plugins can be added directly to the Swagger UI initialization code. 
For example, to hide operation-level servers, change your Swagger UI initialization code as follows; the changes are marked "Part 1" and "Part 2": window.onload = function() { // Part 1. Custom plugin to hide the operation-level "Servers" section const HideOperationServers = () => { return { wrapComponents: { OperationServers: () => () => null } } } // END of custom plugin window.ui = SwaggerUIBundle({ ... dom_id: '#swagger-ui', ... plugins: [ SwaggerUIBundle.plugins.DownloadUrl, HideOperationServers // <------ Part 2. Add the custom plugin to this list ], layout: "StandaloneLayout" }); }; | 2 | 1 |
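A hedged sketch of how Option 1 could be wired into the custom /docs route from the question: the CSS selector is the one given above, while the string replacement on the generated page (looking for a closing head tag) is an assumption about get_swagger_ui_html's output, not an official FastAPI API:

# inside the custom_swagger_ui_html route shown in the question
hide_servers_css = "<style>.swagger-ui .operation-servers { display: none; }</style>"
html_content = swagger_ui_content.body.decode("utf-8").replace(
    "</head>", hide_servers_css + "</head>"   # inject the CSS just before the closing head tag
)
return HTMLResponse(html_content)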
78,621,820 | 2024-6-14 | https://stackoverflow.com/questions/78621820/efficiently-look-up-a-column-value-in-a-column-containing-lists-with-pandas | Assuming the following pandas data frame: data lookup_val 0 [1.3, 4.5, 6.4, 7.3, 8.9, 10.3] 5 1 [2.5, 4.7, 6.4, 6.6, 8.5, 9.3, 17.4] 3 2 [3.3, 4.2, 5.1, 7.8, 9.2, 11.5] 6 I need to look up the value within the list of each 'data' column at the position of the value in the 'lookup_val' column. Expected output would be a new column like this: data lookup_val output 0 [1.3, 4.5, 6.4, 7.3, 8.9, 10.3] 5 8.9 1 [2.5, 4.7, 6.4, 6.6, 8.5, 9.3, 17.4] 3 6.4 2 [3.3, 4.2, 5.1, 7.8, 9.2, 11.5] 6 11.5 What is the most efficient way to do so, assuming the data frame has millions of rows like this with each list having a different length, but no longer than 50 values? Iterating over the data frame or using apply with a simple indexing takes literally hours and a more performant structure is needed. Code to generate the above sample: import pandas as pd df = pd.DataFrame( [ {'data': [1.3, 4.5, 6.4, 7.3, 8.9, 10.3], 'lookup_val': 5}, {'data': [2.5, 4.7, 6.4, 6.6, 8.5, 9.3, 17.4], 'lookup_val': 3}, {'data': [3.3, 4.2, 5.1, 7.8, 9.2, 11.5], 'lookup_val': 6}, ] ) | There is no efficient vectorial way to perform this. You need to loop. The easiest is most likely to use a list comprehension and zip: df['output'] = [l[i-1] for l, i in zip(df['data'], df['lookup_val'])] Since your lookup values use a one-based indexing, you must subtract 1. Output: data lookup_val output 0 [1.3, 4.5, 6.4, 7.3, 8.9, 10.3] 5 8.9 1 [2.5, 4.7, 6.4, 6.6, 8.5, 9.3, 17.4] 3 6.4 2 [3.3, 4.2, 5.1, 7.8, 9.2, 11.5] 6 11.5 If there is a chance that lookup values are incorrect, you should add a manual check. For example checking for the upper bound: df['output'] = [l[i-1] if i<= len(l) else None for l, i in zip(df['data'], df['lookup_val'])] Or with a custom function: def get_val(lst, i): try: return lst[i-1] except IndexError: return None df['output'] = [get_val(*x) for x in zip(df['data'], df['lookup_val'])] Output: data lookup_val output 0 [1.3, 4.5, 6.4, 7.3, 8.9, 10.3] 5 8.9 1 [2.5, 4.7, 6.4, 6.6, 8.5, 9.3, 17.4] 3 6.4 2 [3.3, 4.2, 5.1, 7.8, 9.2, 11.5] 7 NaN timings Using up to 1M rows with a random number of items in each list between 1 and 50: Code to set up the random example: def init(N): ns = np.random.randint(1, 51, size=N) df = pd.DataFrame({'data': [list(range(n)) for n in ns]}) df['lookup_val'] = np.random.randint(0, ns, size=N) return df I also tested the implication of incorrect values for the custom function. For this I generated a second lookup column with 90% chance that the value in out of bounds. This seems to have a limited impact on efficiency (the other two approaches still use a valid lookup value in this graph): | 5 | 7 |
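One more hedged option, complementary to the list comprehension above: if memory allows materializing the ragged lists into a rectangular NaN-padded array (the question caps lists at 50 items), the lookup itself can be fully vectorized. This sketch assumes the 1-based lookup values are always in range:

import numpy as np

arr = pd.DataFrame(df['data'].tolist()).to_numpy()   # ragged lists padded with NaN to a 2-D array
rows = np.arange(len(df))
df['output'] = arr[rows, df['lookup_val'].to_numpy() - 1]   # pick element lookup_val-1 from each row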
78,618,055 | 2024-6-13 | https://stackoverflow.com/questions/78618055/derive-approximate-rotation-transform-matrix-numpy-on-a-unit-sphere-given-a | While I am aware of vaguely similar questions, I seem to be stuck on this. Buckminster Fuller introduced a spherical mapping of the world onto an icosahedron - it's known as the Dymaxion Map. A common way of identifying the cartesian coordinates of an icosahedron is by using the coordinates, where φ is the golden ratio: (1+√5)/2 or 2cos(π/5.0) (0,±1,±1φ),(±1,±1φ,0),(±1φ,0,±1) Expanding this out gives me the location of the 12 vertices of a regular icosahedron with side length 2: φ = 2.0 * math.cos(math.pi / 5.0) ico_vertices = [ (0, -1, -φ), (0, -1, +φ), (0, +1, -φ), (0, +1, +φ), (-1, -φ, 0), (-1, +φ, 0), (+1, -φ, 0), (+1, +φ, 0), (-φ, 0, -1), (-φ, 0, +1), (+φ, 0, -1), (+φ, 0, +1) ] Needless to say these need to be normalised. iv = np.array(ico_vertices) iv_n = ((iv[:, None] ** 2).sum(2) ** 0.5).reshape(-1, 1) ico = iv / iv_n #This is the starting set of vertices. Here is an image of the golden ratio φ coordinates projected onto Google Maps. Fuller's icosahedron, mapped onto a spherical projection of the globe, defined using these 12 vertices: (the xyz values are already normalised to a unit sphere) { "vertices":[ {"name":"CHINA", "ll": [[39.10000000, "N"],[122.30000000,"E"]], "xyz":[-0.41468222, 0.65596241, 0.63067581]}, {"name":"NORWAY", "ll": [[64.70000000, "N"],[10.53619898 ,"E"]], "xyz":[ 0.42015243, 0.07814525, 0.90408255]}, {"name":"ARABIAN SEA", "ll": [[10.44734504, "N"],[58.15770555 ,"E"]], "xyz":[ 0.51883673, 0.83542038, 0.18133184]}, {"name":"LIBERIA", "ll": [[2.30088201 , "N"],[5.24539058 ,"W"]], "xyz":[ 0.99500944, -0.09134780, 0.04014717]}, {"name":"PUERTO RICO", "ll": [[23.71792533, "N"],[67.13232659 ,"W"]], "xyz":[ 0.35578140, -0.84358000, 0.40223423]}, {"name":"ALASKA", "ll": [[50.10320164, "N"],[143.47849033,"W"]], "xyz":[-0.51545596, -0.38171689, 0.76720099]}, {"name":"BUENOS AIRES", "ll": [[39.10000000, "S"],[57.70000000 ,"W"]], "xyz":[ 0.41468222, -0.65596241,-0.63067581]}, {"name":"ANTARCTICA", "ll": [[64.70000000, "S"],[169.46380102,"W"]], "xyz":[-0.42015243, -0.07814525,-0.90408255]}, {"name":"PITCAIRN ISLAND", "ll": [[10.44734504, "S"],[121.84229445,"W"]], "xyz":[-0.51883673, -0.83542038,-0.18133184]}, {"name":"GILBERT ISLAND", "ll": [[2.30088201 , "S"],[174.75460942,"E"]], "xyz":[-0.99500944, 0.09134780, -0.04014717]}, {"name":"AUSTRALIA", "ll": [[23.71792533, "S"],[112.86767341,"E"]], "xyz":[-0.35578140, 0.84358000, -0.40223423]}, {"name":"PRINCE EDWARD ISLAND", "ll": [[50.10320164, "S"],[36.52150967 ,"E"]], "xyz":[ 0.51545596, 0.38171689, -0.76720099]} ] } Here is an image of the dymaxion coordinates projected onto Google Maps. The dymaxion coordinates are (via a json load) loaded into a numpy array 'dym_in'. The order of the two definitions is not the same - so the mapping is (this may be wrong). i2d = [6, 4, 10, 0, 8, 9, 3, 2, 7, 5, 11, 1] # ico[i] is equivalent to dym_in[i2d[i]] dym = np.array([dym_in[m] for m in i2d ]) So now I have 12 normalised vertices in 'ico' and 12 dymaxion map vertices in 'dym', which are ordered such that ico[x] => dym[x]. I want to find the rotation (or approximate rotation) matrix that transforms ico to dym. I say approximate, because the coordinates in the given dym may not exactly mathematically define an icosahedron. I do not know because I do not know how to derive the transform!
What I know for sure is that the geoid is not relevant here - the Dymaxion starts from a spherical earth projection. Likewise, I freely admit there may be bugs in my assumptions above. What I want is to be able to derive the rotational matrix of any set of 12 icosahedral points from the initial golden-ratio starting set - bearing in mind that there are several 12(?) rotations to choose from, of course. | Note: the permutation matrix i2d supplied in the original post is not correct and does not give a correct mapping of points (so NO method would be able to compute the rotation matrix from it). An alternative i2d array was found by searching permutations and is included in the code below. Note that, due to the symmetries of the icosahedron, there are many possible permutation arrays and this is just one. Note also that you do have to check that you have a pure rotation (det(R)=1) and not one including a reflection (det(R)=-1). Having just been caught out by that I've now put a determinant check in at the end. You are asking for a 3x3 rotation matrix R taking position vectors U1 to V1, U2 to V2, U3 to V3 etc. Choose any linearly independent triplet U1, U2, U3. Then, as a matrix equation (Make sure that the matrices are formed from successive column vectors.) Then just post-multiply by the inverse of the U matrix. This gives you R: The code below produces the rotation matrix (for this particular mapping of icosahedral vertices). It also does a determinant check to make sure that you have a pure rotation and not an additional reflection (which could also map the point). import math import numpy as np phi = 2.0 * math.cos(math.pi / 5.0) ico = np.array( [ [ 0, -1, -phi], [ 0, -1, +phi], [ 0, +1, -phi], [ 0, +1, +phi], [ -1, -phi, 0], [ -1, +phi, 0], [ +1, -phi, 0], [ +1, +phi, 0], [-phi, 0, -1], [-phi, 0, +1], [+phi, 0, -1], [+phi, 0, +1] ] ) for i in range( 12 ): ico[i] = ico[i] / np.linalg.norm( ico[i] ) # Normalise dym_in = [ [-0.41468222, 0.65596241, 0.63067581], [ 0.42015243, 0.07814525, 0.90408255], [ 0.51883673, 0.83542038, 0.18133184], [ 0.99500944, -0.09134780, 0.04014717], [ 0.35578140, -0.84358000, 0.40223423], [-0.51545596, -0.38171689, 0.76720099], [ 0.41468222, -0.65596241,-0.63067581], [-0.42015243, -0.07814525,-0.90408255], [-0.51883673, -0.83542038,-0.18133184], [-0.99500944, 0.09134780, -0.04014717], [-0.35578140, 0.84358000, -0.40223423], [ 0.51545596, 0.38171689, -0.76720099] ] # ico[i] corresponds to dym_in[i2d[i]] # i2d = [ 6, 4, 10, 0, 8, 9, 3, 2, 7, 5, 11, 1 ] # Original permutation array: this is INCORRECT i2d = [ 0, 3, 9, 6, 2, 7, 1, 8, 10, 11, 5, 4 ] # Found by searching permutations dym = np.array([dym_in[m] for m in i2d ]) U = np.zeros( ( 3, 3 ) ) V = np.zeros( ( 3, 3 ) ) independent = ( 0, 4, 8 ) for r in range( 3 ): for c in range( 3 ): U[r,c] = ico[independent[c],r] V[r,c] = dym[independent[c],r] R = V @ np.linalg.inv( U ) for i in range( 12 ): print( "Vertex ", i, " dym[i] = ", dym[i], " R.ico = ", R @ (ico[i].T) ) print( "\nRotation matrix:\n", R ) print( "\nCheck determinant = ", np.linalg.det( R ) ) Output: Vertex 0 dym[i] = [-0.41468222 0.65596241 0.63067581] R.ico = [-0.41468222 0.65596241 0.63067581] Vertex 1 dym[i] = [ 0.99500944 -0.0913478 0.04014717] R.ico = [ 0.99500943 -0.0913478 0.04014718] Vertex 2 dym[i] = [-0.99500944 0.0913478 -0.04014717] R.ico = [-0.99500943 0.0913478 -0.04014718] Vertex 3 dym[i] = [ 0.41468222 -0.65596241 -0.63067581] R.ico = [ 0.41468222 -0.65596241 -0.63067581] Vertex 4 dym[i] = [0.51883673 0.83542038 0.18133184] R.ico = 
[0.51883673 0.83542038 0.18133184] Vertex 5 dym[i] = [-0.42015243 -0.07814525 -0.90408255] R.ico = [-0.42015243 -0.07814525 -0.90408256] Vertex 6 dym[i] = [0.42015243 0.07814525 0.90408255] R.ico = [0.42015243 0.07814525 0.90408256] Vertex 7 dym[i] = [-0.51883673 -0.83542038 -0.18133184] R.ico = [-0.51883673 -0.83542038 -0.18133184] Vertex 8 dym[i] = [-0.3557814 0.84358 -0.40223423] R.ico = [-0.3557814 0.84358 -0.40223423] Vertex 9 dym[i] = [ 0.51545596 0.38171689 -0.76720099] R.ico = [ 0.51545596 0.38171689 -0.767201 ] Vertex 10 dym[i] = [-0.51545596 -0.38171689 0.76720099] R.ico = [-0.51545596 -0.38171689 0.767201 ] Vertex 11 dym[i] = [ 0.3557814 -0.84358 0.40223423] R.ico = [ 0.3557814 -0.84358 0.40223423] Rotation matrix: [[-0.09385435 -0.55192398 0.82859596] [-0.72021144 -0.53698041 -0.43925792] [ 0.68737678 -0.63799058 -0.34710402]] Check determinant = 0.999999999723162 | 3 | 1 |
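For the genuinely "approximate" case the question worries about, a hedged alternative is a least-squares best-fit rotation over all 12 matched vertices (a Kabsch-style fit), assuming a reasonably recent SciPy and the corrected i2d ordering from the answer, with ico and dym as the matched, normalised 12x3 arrays:

from scipy.spatial.transform import Rotation

rot, rssd = Rotation.align_vectors(dym, ico)   # best-fit rotation with rot.apply(ico) ~= dym
R_fit = rot.as_matrix()                        # 3x3 proper rotation (determinant +1 by construction)
print(R_fit)
print("residual (root-sum-squared distance):", rssd)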
78,620,337 | 2024-6-13 | https://stackoverflow.com/questions/78620337/issues-with-double-gaussian-fit-using-curve-fit-in-python | I used find_peaks to locate the peaks and estimated the initial parameters for the double Gaussian fit. I expected the curve_fit function to accurately fit the double Gaussian to my data, aligning the peaks and widths correctly. However, the resulting fit does not match the data well, and the Gaussian peaks are misaligned. Is there a way to achieve a more accurate fit? import matplotlib.pyplot as plt import numpy as np from scipy.signal import find_peaks from scipy.optimize import curve_fit # Data arrays y_data = np.array([ 1.500e-04, 1.500e-04, 1.500e-04, 1.500e-04, 1.700e-04, 1.600e-04, 1.800e-04, 1.600e-04, 1.700e-04, 2.300e-04, 2.500e-04, 3.200e-04, 3.200e-04, 3.800e-04, 4.000e-04, 5.000e-04, 5.600e-04, 5.600e-04, 6.500e-04, 7.500e-04, 9.100e-04, 1.180e-03, 1.550e-03, 2.110e-03, 2.880e-03, 3.850e-03, 5.200e-03, 6.780e-03, 8.950e-03, 1.123e-02, 1.403e-02, 1.723e-02, 2.031e-02, 2.330e-02, 2.495e-02, 2.433e-02, 2.171e-02, 1.725e-02, 1.231e-02, 8.080e-03, 4.980e-03, 2.960e-03, 1.970e-03, 1.830e-03, 2.220e-03, 2.880e-03, 3.700e-03, 4.650e-03, 5.820e-03, 7.150e-03, 8.450e-03, 9.510e-03, 1.006e-02, 9.660e-03, 8.560e-03, 6.910e-03, 5.100e-03, 3.380e-03, 2.170e-03, 1.230e-03, 7.000e-04, 3.600e-04, 2.200e-04, 1.700e-04, 1.500e-04, 1.600e-04, 1.100e-04, 1.100e-04, 1.200e-04, 1.000e-04, 1.200e-04, 1.100e-04, 1.300e-04, 1.500e-04, 1.200e-04, 1.600e-04, 1.200e-04, 1.200e-04, 1.200e-04, 1.200e-04, 1.100e-04, 1.500e-04, 1.500e-04, 1.300e-04, 1.100e-04, 8.000e-05, 1.200e-04, 1.200e-04, 1.100e-04, 1.100e-04, 1.500e-04]) x_data = np.array([ 6555.101, 6555.201, 6555.301, 6555.401, 6555.501, 6555.601, 6555.701, 6555.801, 6555.901, 6556.001, 6556.101, 6556.201, 6556.301, 6556.401, 6556.501, 6556.601, 6556.701, 6556.801, 6556.901, 6557.001, 6557.101, 6557.201, 6557.301, 6557.401, 6557.501, 6557.601, 6557.701, 6557.801, 6557.901, 6558.001, 6558.101, 6558.201, 6558.301, 6558.401, 6558.501, 6558.601, 6558.701, 6558.801, 6558.901, 6559.001, 6559.101, 6559.201, 6559.301, 6559.401, 6559.501, 6559.601, 6559.701, 6559.801, 6559.901, 6560.001, 6560.101, 6560.201, 6560.301, 6560.401, 6560.501, 6560.601, 6560.701, 6560.801, 6560.901, 6561.001, 6561.101, 6561.201, 6561.301, 6561.401, 6561.501, 6561.601, 6561.701, 6561.801, 6561.901, 6562.001, 6562.101, 6562.201, 6562.301, 6562.401, 6562.501, 6562.601, 6562.701, 6562.801, 6562.901, 6563.001, 6563.101, 6563.201, 6563.301, 6563.401, 6563.501, 6563.601, 6563.701, 6563.801, 6563.901, 6564.001, 6564.001]) # Find peaks peaks, _ = find_peaks(y_data, height=0.0005) # Define double Gaussian function def double_gaussian(x, a1, b1, c1, a2, b2, c2): g1 = a1 * np.exp(-((x - b1) ** 2) / (2 * c1 ** 2)) g2 = a2 * np.exp(-((x - b2) ** 2) / (2 * c2 ** 2)) return g1 + g2 # Function to find Full Width at Half Maximum (FWHM) def find_fwhm(x, y, peak_index): half_max = y[peak_index] / 2 left_idx = np.where(y[:peak_index] <= half_max)[0][-1] right_idx = np.where(y[peak_index:] <= half_max)[0][0] + peak_index left_x = x[left_idx] + (half_max - y[left_idx]) * (x[left_idx + 1] - x[left_idx]) / (y[left_idx + 1] - y[left_idx]) right_x = x[right_idx] + (half_max - y[right_idx]) * (x[right_idx + 1] - x[right_idx]) / (y[right_idx + 1] - y[right_idx]) return right_x - left_x # Calculate FWHM for each peak fwhm1 = find_fwhm(x_data, y_data, peaks[0]) / 2.355 fwhm2 = find_fwhm(x_data, y_data, peaks[1]) / 2.355 # Initial guess for the parameters initial_guess = 
[y_data[peaks[0]], x_data[peaks[0]], fwhm1, y_data[peaks[1]], x_data[peaks[1]], fwhm2] # Fit the data using curve_fit params, covariance = curve_fit(double_gaussian, x_data, y_data, p0=initial_guess) # Generate fitted data x_fit = np.linspace(min(x_data), max(x_data), 10000) fitted_data = double_gaussian(x_fit, *params) # Separate the two Gaussian components g1 = params[0] * np.exp(-((x_fit - params[1]) ** 2 ) / (2 * params[2] ** 2)) g2 = params[3] * np.exp(-((x_fit - params[4]) ** 2) / (2 * params[5] ** 2)) # Plot the data and the fit plt.scatter(x_data, y_data, label='Data') plt.plot(x_fit, g1, 'g--', label='Gaussian 1') plt.plot(x_fit, g2, 'm--', label='Gaussian 2') plt.plot(x_fit, fitted_data, 'r--', label='Double Gaussian') plt.xlabel('x') plt.ylabel('y') plt.legend() plt.show() And here is the resulting plot: | When data are not exactly Gaussian (peaks have bigger tailing, even to a small extent), it is a common approach to fit a Voigt or a Pseudo-Voigt (which is easier to compute) profile instead of Gaussian. import numpy as np import matplotlib.pyplot as plt from scipy import optimize, stats We define the Pseudo Voigt peak: def pseudo_voigt(x, eta, sigma, gamma, x0, A): G = stats.norm.pdf(x, scale=sigma, loc=x0) L = stats.cauchy.pdf(x, scale=2. * gamma, loc=x0) return A * ((1. - eta) * G + eta * L) And the model for double peaks: def model2(x, eta0, sigma0, gamma0, x00, A0, eta1, sigma1, gamma1, x01, A1): return pseudo_voigt(x, eta0, sigma0, gamma0, x00, A0) + pseudo_voigt(x, eta1, sigma1, gamma1, x01, A1) We fit: popt2, pcov2 = optimize.curve_fit( model2, x_data, y_data, p0=[0.5, 1., 1., 6559, 0.025, 0.5, 1., 1., 6561, 0.01], bounds=[ (0, 0, 0, 0, 0, 0, 0, 0, 0, 0), (1, np.inf, np.inf, np.inf, np.inf, 1, np.inf, np.inf, np.inf, np.inf) ] ) # array([5.04788238e-01, 3.39173299e-01, 2.55686905e-01, 6.55847681e+03, # 2.75922544e-02, 3.34888395e-19, 3.35457589e-01, 4.01394103e+00, # 6.56029885e+03, 7.93603468e-03]) The result is not totally perfect: But at least the first peak reaches its nominal height and its tailing is taken into account. | 2 | 2 |
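A further hedged tweak, reusing the names from the question: the measured points never quite reach zero (the floor sits near 1e-4), so adding a constant background parameter to the model may keep curve_fit from absorbing that offset into the peak widths. A sketch only:

def double_gaussian_bg(x, a1, b1, c1, a2, b2, c2, c0):
    # same two-Gaussian model as in the question, plus a constant background term c0
    return double_gaussian(x, a1, b1, c1, a2, b2, c2) + c0

params_bg, cov_bg = curve_fit(double_gaussian_bg, x_data, y_data,
                              p0=initial_guess + [float(y_data.min())])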
78,620,797 | 2024-6-14 | https://stackoverflow.com/questions/78620797/defaultdict-ignores-its-default-factory-argument-when-assigned-explicitly | I ran into this problem when working with defaultdict. Here's a program that demonstrates it: from collections import defaultdict d1 = defaultdict(default_factory=dict) d2 = defaultdict(dict) print("d1's default_factory:", d1.default_factory) print("d2's default_factory:", d2.default_factory) try: d1['key'].update({'a': 'b'}) except KeyError: print("d1 caused an exception") try: d2['key'].update({'a': 'b'}) except KeyError: print("d2 caused an exception") The above outputs: d1's default_factory: None d2's default_factory: <class 'dict'> d1 caused an exception Should this happen? | default_factory is a positional-only argument. It cannot be passed by keyword. d1 = defaultdict(default_factory=dict) creates a defaultdict with no default factory and a key 'default_factory' with value dict. It's as if you did d1 = defaultdict() d1['default_factory'] = dict | 2 | 4 |
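A quick way to see the behaviour described in the answer for yourself:

from collections import defaultdict

d1 = defaultdict(default_factory=dict)
print(d1.default_factory)   # None -> no factory was set
print(dict(d1))             # {'default_factory': <class 'dict'>} -> stored as an ordinary key

d2 = defaultdict(dict)      # pass the factory positionally
print(d2['missing'])        # {} -> the factory fires as expected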
78,620,709 | 2024-6-14 | https://stackoverflow.com/questions/78620709/sorting-months-inside-a-multi-index-groupby-object | Here's the sample input. I wanted to group by the Year column and use value_counts on the month column, then sort the 'month' column according to calendar month order. Year month 2000 Oct 2002 Jan 2002 Mar 2000 Oct 2002 Mar 2000 Jan I did this: df.groupby(['Year'])['month'].value_counts() I got the following output: year month 2000 Oct 2 Jan 1 2002 Mar 2 Jan 1 Now I need to sort the months into the original month order. What can I do? I want the following output: year month 2000 Jan 1 Oct 2 2002 Jan 1 Mar 2 | You can use groupby() and sort_values(by=['Year', 'month']): import pandas as pd def _sort_month(df): month_order = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] df['month'] = pd.Categorical(df['month'], categories=month_order, ordered=True) GB = df.groupby(['Year'])['month'].value_counts() G = GB.reset_index(name='count') res = G.sort_values(by=['Year', 'month']) res.set_index(['Year', 'month'], inplace=True) return res[res['count'] > 0] df = pd.DataFrame({'Year': [2000, 2002, 2002, 2000, 2002, 2000], 'month': ['Oct', 'Jan', 'Mar', 'Oct', 'Mar', 'Jan']}) print(_sort_month(df)) Prints count Year month 2000 Jan 1 Oct 2 2002 Jan 1 Mar 2 | 4 | 2 |
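A slightly leaner variant of the accepted idea, hedged as a sketch rather than a drop-in replacement: make the column an ordered categorical once, then sort the grouped counts by index and drop the empty categories:

import pandas as pd

month_order = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
               'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
df['month'] = pd.Categorical(df['month'], categories=month_order, ordered=True)
out = df.groupby('Year')['month'].value_counts().sort_index()
out = out[out > 0]   # drop the months that never occur for a given year
print(out)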