question_id
int64 59.5M
79.4M
| creation_date
stringlengths 8
10
| link
stringlengths 60
163
| question
stringlengths 53
28.9k
| accepted_answer
stringlengths 26
29.3k
| question_vote
int64 1
410
| answer_vote
int64 -9
482
|
---|---|---|---|---|---|---|
77,418,144 | 2023-11-3 | https://stackoverflow.com/questions/77418144/for-each-element-of-2d-array-sum-higher-elements-in-the-row-and-column | My input dataframe (or 2D numpy array) shape is 11kx12k For each element, I check how many values are higher in the given row and column. For example: array([[1, 3, 1, 5, 3], [3, 7, 3, 2, 1], [2, 3, 1, 8, 9]]) row-wise: 3 1 3 0 1 1 0 1 3 4 3 2 4 1 0 column-wise: 2 1 1 1 1 0 0 0 2 2 1 1 1 0 0 total higher values for that element: 5 2 4 1 2 1 0 1 5 6 4 3 5 1 0 this code works good but for this shape of matrix It takes ~1 hour. dfT = df.T rows = df.apply(lambda x: np.sum(df>x),axis=1) cols = dfT.apply(lambda x: np.sum(dfT>x),axis=1).T output = rows+cols I wonder is there any way to do this more efficient way? I also tried numpy but for this I have splited my 2D to 12kx100 or 12kx200 shapes and merged all arrays again, in the end runtime was close to eachother so couldn't get any progress. np.sum(matrix > matrix[:,None], axis=1) | I came up with the following: from scipy.stats import rankdata def element_wise_bigger_than(x, axis): return x.shape[axis] - rankdata(x, method='max', axis=axis) What you want is similar to a ranking of elements, which basically tells you how many items are smaller than a given element (maybe with subtracting 1 if your ranking is 1, 2, 3, ...). rankdata does this ranking, and can do it row-wise or column-wise. We can convert the "smaller than" to "bigger than" by inverting the ranks, relative to the number of items being counted (in each row/column). I.e., subtract the ranks from the number of rows/columns. For rows/columns without ties (e.g., the third row in your example), this works as expected. There is a slight complication with ties and how they are ranked. Through trial and error, I found that using method='max' gave the desired behavior. To be frank, I am still wrapping my head on why/how this works, so take the answer with a grain of salt. Still, it gives the desired output on the input data: >>> x = np.array([[1, 3, 1, 5, 3], [3, 7, 3, 2, 1], [2, 3, 1, 8, 9]]) >>> element_wise_bigger_than(x, 1) array([[3, 1, 3, 0, 1], [1, 0, 1, 3, 4], [3, 2, 4, 1, 0]]) >>> element_wise_bigger_than(x, 0) array([[2, 1, 1, 1, 1], [0, 0, 0, 2, 2], [1, 1, 1, 0, 0]]) And it can handle your size of data in far less than an hour: def experiment(): x = np.random.randint(0, 10, size=(11000, 12000)) rows = element_wise_bigger_than(x, 1) cols = element_wise_bigger_than(x, 0) output = rows + cols return output %%time experiment() # CPU times: user 8.68 s, sys: 805 ms, total: 9.49 s # Wall time: 9.47 s Takes about 10 seconds. My understanding is the ranking should be similar to a sorting operation. My answer is relying on a scipy/numpy implementation of the rank/sort, rather than a user-written loop/"apply" (which tend to be much slower). | 3 | 2 |
77,416,929 | 2023-11-3 | https://stackoverflow.com/questions/77416929/can-i-use-the-same-lock-on-multiple-threads-if-they-do-not-access-the-same-varia | Imagine I have two threads, each modifying a different variable. Can I pass the same lock object to them, or shall I use two separate locks? In general, when shall I use multiple locks? Here is a toy example: from threading import Thread, Lock from time import sleep def task(lock, var): with lock: var = 1 sleep(5) lock = Lock() var1 = [] var2 = [] Thread(target=task, args=(lock, var1)).start() Thread(target=task, args=(lock, var2)).start() or is it better lock1 = Lock() lock2 = Lock() var1 = [] var2 = [] Thread(target=task, args=(lock1, var1)).start() Thread(target=task, args=(lock2, var2)).start() | Only one thread can hold a lock at a time, and a lock is used to provide mutually-exclusive access to a piece of data (or variable). Based on your example, where each thread accesses separate variables, you want two separate locks—one per variable. Note that if only one thread accesses a variable, though, you do not need to use a lock for it. If you were to only use one lock, only one thread would be able to perform "work" at a time—even though their work is completely independent. | 2 | 2 |
77,416,725 | 2023-11-3 | https://stackoverflow.com/questions/77416725/unable-to-launch-selenium-encountering-deprecationwarning-and-webdriverexceptio | Suddenly today, Selenium could not be launched in my project. The error messages are as follows: main.py:11: DeprecationWarning: executable_path has been deprecated, please pass in a Service object driver = webdriver.Chrome(options=options,executable_path='drivers/chromedriver-linux64/chromedriver') Traceback (most recent call last): File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module> cli.main() File "/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main run() File "/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code _run_code(code, mod_globals, init_globals, File "/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "/app/main.py", line 11, in <module> driver = webdriver.Chrome(options=options,executable_path='drivers/chromedriver-linux64/chromedriver') File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__ super().__init__(DesiredCapabilities.CHROME['browserName'], "goog", File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 89, in __init__ self.service.start() File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 98, in start self.assert_process_still_running() File "/usr/local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 110, in assert_process_still_running raise WebDriverException( selenium.common.exceptions.WebDriverException: Message: Service drivers/chromedriver-linux64/chromedriver unexpectedly exited. 
Status code was: 255 The settings of my Dockerfile are as follows: FROM python:3.10-buster # Install necessary packages RUN apt-get update && apt-get install -y \ curl unzip gettext python-babel \ ffmpeg \ poppler-utils \ fonts-takao-* fonts-wqy-microhei fonts-unfonts-core # Install Chrome RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \ dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install \ && rm google-chrome-stable_current_amd64.deb # Download and extract the latest version of ChromeDriver RUN CHROME_DRIVER_VERSION=$(curl -sL "https://chromedriver.storage.googleapis.com/LATEST_RELEASE") && \ curl -sL "https://chromedriver.storage.googleapis.com/$CHROME_DRIVER_VERSION/chromedriver_linux64.zip" > chromedriver.zip && \ unzip chromedriver.zip -d /usr/local/bin && \ rm chromedriver.zip # Install Python dependencies COPY requirements.txt requirements.txt RUN python -m pip install --upgrade pip && pip install -r requirements.txt # Set and move to APP_HOME ENV APP_HOME /app WORKDIR $APP_HOME ENV PYTHONPATH $APP_HOME # Copy local code to the container image COPY . . requirements.txt selenium==4.15.1 And, I created a simple main.py to just launch Selenium. from selenium import webdriver from selenium.webdriver.chrome.options import Options options = Options() options.headless = True options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') driver = webdriver.Chrome(options=options) driver.get('https://www.google.com') print(driver.title) driver.quit() What I did I downloaded the drivers of version 119 and 120 from this link, set them in executable_path, and executed it, but the same error was returned. https://googlechromelabs.github.io/chrome-for-testing/#stable Please help me! Environment MacOS 13.6 Apple M2 Docker desktop 4.21.1 | In Selenium 4 executable_path is deprecated you have to use an instance of the Service() class with ChromeDriverManager().install() Deprecate all but Options and Service arguments in driver instantiation. (#9125,#9128) https://github.com/SeleniumHQ/selenium/blob/d6acda7c0254f9681574bf4078ff2001705bf940/py/CHANGES#L101 You can try: from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(service=Service(ChromeDriverManager().install())) driver.get("https://www.your_website_.com") Edit: After looking at your example I think you can try: from selenium import webdriver from selenium.webdriver.chrome.service import Service service = Service(r"C:\chromedriver.exe") # your path to chromedriver executable file options = webdriver.ChromeOptions() driver = webdriver.Chrome(service=service, options=options) driver = webdriver.Chrome(options=options) driver.get('https://www.google.com') print(driver.title) driver.quit() | 2 | 2 |
77,415,734 | 2023-11-3 | https://stackoverflow.com/questions/77415734/cant-write-try-else-without-except-in-python | I want to write something shaped like this this except the invalid_task and work functions are of course written inline. unfinished_tasks = 0 async def worker(task): nonlocal unfinished_tasks unfinished_tasks += 1 try: if is_invalid(task): return None result = work(task) if is_error(result): raise Exception() return result else: unfinished_tasks -= 1 But you can't write just: try: ... else: ... You need an except to write an else (try-else-finally also doesn't work). The use case for an else without an except is to run some code on a non exception exit of the block. There are a few ways to exit a block, like return, break, or reaching the end of one of the code paths inside it. Should python allow try-else without except? Is there a better way to write this? My current solution is: try: ... except None: raise else: ... | For this case specifically you can use: unfinished_tasks = 0 async def worker(task): nonlocal unfinished_tasks try: if is_invalid(task): return None result = work(task) if is_error(result): raise Exception() return result except: unfinished_tasks += 1 For general cases pass can be used: try: ... except: pass else: ... | 4 | 2 |
77,414,942 | 2023-11-3 | https://stackoverflow.com/questions/77414942/pandas-groupby-before-merge | I have dataframe like this. But there are about ten thousand rows. import pandas as pd import numpy as np data = {'gameId': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2], 'eventId': [1, 2, 3, 4, 5, 1, 2, 3, 4, 5], 'player': ['A', 'B', 'C', 'D', 'E', 'A', 'B', 'C', 'D', 'E'], 'related_eventId': [2, 1, 4, 3, np.nan, 2, 1, 4, 3, np.nan]} So I need to create column "related_player" based on the player from row which eventId is equal related_eventId. If I would not have column gameId I can do it by merging result = df.merge(df[['eventId', 'player']], left_on='related_eventId', right_on='eventId', how='left', suffixes=('', '_related')) result.rename(columns={'player_related': 'related_player', 'eventId_related': 'related_eventId'}, inplace=True) result = result[['eventId', 'player', 'related_eventId', 'related_player']] But output is not correct because I need to group by gameId. In R it is pretty simple, but I don't understand how to correctly do it in Python. My expected output should be like this gameId eventId player related_eventId related_player 1 1 A 2 B 1 2 B 1 A 1 3 C 4 D 1 4 D 3 C 1 5 E NaN NaN 2 1 A 2 B 2 2 B 1 A 2 3 C 4 D 2 4 D 3 C 2 5 E NaN NaN | Add column to list ['eventId', 'player','gameId'] and to parameters left_on and right_on: result = df.merge(df[['eventId', 'player','gameId']], left_on=['gameId','related_eventId'], right_on=['gameId','eventId'], how='left', suffixes=('', '_related')) result.rename(columns={'player_related': 'related_player', 'eventId_related': 'related_eventId'}, inplace=True) result = result[['eventId', 'player', 'related_eventId', 'related_player']] print (result) eventId player related_eventId related_eventId related_player 0 1 A 2.0 2.0 B 1 2 B 1.0 1.0 A 2 3 C 4.0 4.0 D 3 4 D 3.0 3.0 C 4 5 E NaN NaN NaN 5 1 A 2.0 2.0 B 6 2 B 1.0 1.0 A 7 3 C 4.0 4.0 D 8 4 D 3.0 3.0 C 9 5 E NaN NaN NaN | 4 | 6 |
77,410,905 | 2023-11-2 | https://stackoverflow.com/questions/77410905/visual-studio-code-terminal-shows-multiple-conda-envs | I have VSCode in Windows 11. I have WSL (Ubuntu 22.04) and launch VSCode from the terminal like code . from the project folder. When I open the built-in terminal it shows two conda (Anaconda) environments in parentheses, so I have no idea which one is active, if any. On subsequent conda deactivate you can see in the attached screenshot that the prompt and the active env changing but there is surely something messed up here. Also, in VSCode, when I set Python Interpreter to a conda env, in a few seconds the built-in terminal prompt picks up the change and the env name in the first parens changes to the new value. Any idea how to fix it? (The prompt should obviously show just one (the active) conda env and that one should change whenever the Python interpreter is updated in the command palette.) I looked into my ~/.bashrc file but there is just the seemingly normal >>> conda initialize block at the bottom that was added when installing Anaconda | It turns out that I had to delete ~/.vscode-server directory and let it being autogenerated again on the next code . run. I also had to click the "Inherit Env" setting of the built-in Terminal (I forgot, it may not be the default). With these steps, a newly launched VSCode behaves almost exactly like I expected: If there was no built-in terminal open in the previous VSCode session, opening a new terminal looks like as in the bottom right tab: prompt starts with (base) env but immediately picks up correct end and returns correct prompt if there was a built-in terminal open when killing the previous session, on the next launch the terminal will open automatically with the incorrect (base) env, see bottom left tab, but any subsequent new terminal added will look like the right window, which is now correct. This is still a bit inconvenient, having to either do a manual conda activate <correct_env> or close the terminal and open a fresh one, now with the correct conda env anyways, I think I can live with this | 20 | 2 |
77,412,437 | 2023-11-2 | https://stackoverflow.com/questions/77412437/pandas-dataframe-from-unique-numpy-elements | I want to construct a pandas dataframe where columns have elements from specified arrays with unique elements only. I want to find the most efficient pythonic way of doing this. For example, I have the following numpy arrays as input: a = np.array(["a1"]) b = np.array(["b1", "b2"]) c = np.array(["c1", "c2", "c3"]) Below is how I want my Pandas dataframe to be like a b c 0 a1 b1 c1 1 a1 b1 c2 2 a1 b1 c3 3 a1 b2 c1 4 a1 b2 c2 5 a1 b2 c3 Below is the code that I am using: import pandas as pd hashmap = {"a":[], "b":[], "c":[]} for a_elem in a: for b_elem in b: for c_elem in c: hashmap["a"] += [a_elem] hashmap["b"] += [b_elem] hashmap["c"] += [c_elem] df = pd.DataFrame.from_dict(hashmap) How can I make this code more efficient? | Use itertools.product: from itertools import product df = pd.DataFrame(product(a, b, c), columns=['a', 'b', 'c']) Output: a b c 0 a1 b1 c1 1 a1 b1 c2 2 a1 b1 c3 3 a1 b2 c1 4 a1 b2 c2 5 a1 b2 c3 Alternative with MultiIndex.from_product: df = pd.MultiIndex.from_product([a, b, c], names=['a', 'b', 'c'] ).to_frame(index=False) | 2 | 1 |
77,412,205 | 2023-11-2 | https://stackoverflow.com/questions/77412205/count-number-of-values-less-than-value-in-another-column-in-dataframe | Given a dataframe like the one below, for each date in Enrol Date, how can I count the number of values in preceding rows in Close Date that are earlier? Ideally I would like to add the results as a new column. Class Enrol Date Close Date A 30/10/2003 05/12/2003 A 22/12/2003 23/09/2005 A 06/09/2005 29/09/2005 A 15/11/2005 07/12/2005 A 27/02/2006 28/03/2006 Desired result: Class Enrol Date Close Date Prior Dates A 30/10/2003 05/12/2003 0 A 22/12/2003 23/09/2005 1 A 06/09/2005 29/09/2005 1 A 15/11/2005 07/12/2005 3 A 27/02/2006 28/03/2006 4 | A possible option using triu : cols = ["Enrol Date", "Close Date"] df[cols] = df[cols].apply(pd.to_datetime, dayfirst=True) enrol = df["Enrol Date"].to_numpy() close = df["Close Date"].to_numpy()[:, None] df["Prior Dates"] = np.triu(enrol>close).sum(0) Output : print(df) Class Enrol Date Close Date Prior Dates 0 A 2003-10-30 2003-12-05 0 1 A 2003-12-22 2005-09-23 1 2 A 2005-09-06 2005-09-29 1 3 A 2005-11-15 2005-12-07 3 4 A 2006-02-27 2006-03-28 4 | 3 | 5 |
77,410,366 | 2023-11-2 | https://stackoverflow.com/questions/77410366/apple-silicon-m2-error-could-not-build-wheels-for-lightgbm-which-is-required-t | I am experiencing an issue with installing lighgbm on Apple Silicon, the full installation process is as follows: python3 -m pip install lightgbm Defaulting to user installation because normal site-packages is not writeable Collecting lightgbm Using cached lightgbm-4.1.0.tar.gz (1.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: numpy in ./Library/Python/3.11/lib/python/site-packages (from lightgbm) (1.23.5) Requirement already satisfied: scipy in ./Library/Python/3.11/lib/python/site-packages (from lightgbm) (1.10.1) Building wheels for collected packages: lightgbm Building wheel for lightgbm (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for lightgbm (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [45 lines of output] 2023-11-02 14:58:09,352 - scikit_build_core - INFO - CMake version: 3.27.7 *** scikit-build-core 0.6.0 using CMake 3.27.7 (wheel) 2023-11-02 14:58:09,356 - scikit_build_core - INFO - Build directory: /private/var/folders/7t/ys9t4mvn1fx74wckbywxd2sw0000gp/T/tmptyogi8su/build *** Configuring CMake... 2023-11-02 14:58:09,372 - scikit_build_core - INFO - Ninja version: 1.11.1 2023-11-02 14:58:09,372 - scikit_build_core - WARNING - libdir/ldlibrary: /Library/Frameworks/Python.framework/Versions/3.11/lib/Python.framework/Versions/3.11/Python is not a real file! loading initial cache file /var/folders/7t/ys9t4mvn1fx74wckbywxd2sw0000gp/T/tmptyogi8su/build/CMakeInit.txt CMake Deprecation Warning at CMakeLists.txt:35 (cmake_minimum_required): Compatibility with CMake < 3.5 will be removed from a future version of CMake. Update the VERSION argument <min> value or use a ...<max> suffix to tell CMake that the project does not need compatibility with older versions. -- The C compiler identification is AppleClang 15.0.0.15000040 -- The CXX compiler identification is AppleClang 15.0.0.15000040 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Could NOT find OpenMP_C (missing: OpenMP_C_FLAGS OpenMP_C_LIB_NAMES) -- Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES) -- Could NOT find OpenMP (missing: OpenMP_C_FOUND OpenMP_CXX_FOUND) -- Found OpenMP_C: -Xpreprocessor -fopenmp -I/include -- Found OpenMP_CXX: -Xpreprocessor -fopenmp -I/include -- Found OpenMP: TRUE -- Performing Test MM_PREFETCH -- Performing Test MM_PREFETCH - Failed -- Performing Test MM_MALLOC -- Performing Test MM_MALLOC - Success -- Using _mm_malloc -- Configuring done (9.3s) -- Generating done (0.0s) -- Build files have been written to: /var/folders/7t/ys9t4mvn1fx74wckbywxd2sw0000gp/T/tmptyogi8su/build *** Building project with Ninja... 
ninja: error: '/lib/libomp.dylib', needed by '/private/var/folders/7t/ys9t4mvn1fx74wckbywxd2sw0000gp/T/pip-install-zb1bf93d/lightgbm_d399d75f04b14379ba9d10c8bffd1542/lib_lightgbm.so', missing and no known rule to make it *** CMake build failed [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for lightgbm Failed to build lightgbm ERROR: Could not build wheels for lightgbm, which is required to install pyproject.toml-based projects as I understand I need to have libomp before installing lightgbm. However, I already have it installed and I still continue to experience the same issue. Is it related to the directories being different than where it is supposed to be? How can I check what the issue is? brew install libomp ==> Downloading https://formulae.brew.sh/api/formula.jws.json #=#=- # # #=O#- # # -#O=- # # # -=O#- # # # -=O=-# # # # -=O=- # # # # -=O=- # # # # ######################################################################################################################################################################################################################################################### 100.0% ==> Downloading https://formulae.brew.sh/api/cask.jws.json ######################################################################################################################################################################################################################################################### 100.0% Warning: libomp 17.0.4 is already installed and up-to-date. To reinstall 17.0.4, run: brew reinstall libomp Thank you for your time. | I found the solution, basically I deleted the python that was installed and installed the python with homebrew then everything worked. | 2 | 0 |
77,409,035 | 2023-11-2 | https://stackoverflow.com/questions/77409035/get-layer-of-shape-in-visio-in-python | I'm trying to select all shapes within an existing Visio-file that are i. e. in layer "help" to delete them. Is there a way to achive that using python package "vsdx"? | You can use CreateSelection method with parameter visSelTypeByLayer = 3. Right now I haven't Python environment there, my simple VBA code Sub SOF_77409035() Dim Sh As Shape Dim sl As Selection ActiveWindow.DeselectAll ' deselect all Set sl = ActivePage.CreateSelection(3, 0, "help") ' create selection in memory For Each Sh In sl ' iterate all shapes in selection ' ActiveWindow.Select Sh, 2 ' select shape visSelect = 2 sh.delete Next End Sub UPDATE This python-code can print a list of shapes belonging to a layer. But it cannot select these shapes. import win32com.client as w32 visio = w32.Dispatch("visio.Application") visio.Visible = 1 win = visio.activewindow doc = visio.activedocument pag = doc.pages(1) sl = pag.createselection(3, 0,"help") for i in range (1, sl.count+1): sh = sl.item(i) sh.delete PS vsdx python package have poor documentation, I suspect that's not possible. | 2 | 3 |
77,410,154 | 2023-11-2 | https://stackoverflow.com/questions/77410154/how-to-format-currency-column-in-streamlit-for-dataframes | I have the following piece of code: import pandas as pd import streamlit as st cashflow = pd.DataFrame(data=[ (2023, 10000), (2022, 95000), (2021, 90000), (2020, 80000), ], columns=['year', 'cashflow']) st.dataframe( data=cashflow, column_config={ 'year': st.column_config.NumberColumn(format='%d'), 'cashflow': st.column_config.NumberColumn(label='cashflow', format='$%.0f'), } ) But, this outputs the cashflow column as $95000 but I want it output $95,000 | Prefer use the Styler instead of the DataFrame: import pandas as pd import streamlit as st cashflow = pd.DataFrame(data=[ (2023, 10000), (2022, 95000), (2021, 90000), (2020, 80000), ], columns=['year', 'cashflow']) st.dataframe( data=cashflow.style.format({'cashflow': '${:,}'}), # <- HERE ) Output: | 3 | 2 |
77,403,707 | 2023-11-1 | https://stackoverflow.com/questions/77403707/how-to-combine-filter-conditions-in-polars | In this example Pokémon data, let's say we want to exclude any Pokémon that is both Type 1 = Dragon and Total = 600. That means we want to exclude just one record (highlighted below): import polars as pl df = pl.read_csv("https://gist.githubusercontent.com/ritchie46/cac6b337ea52281aa23c049250a4ff03/raw/89a957ff3919d90e6ef2d34235e6bf22304f3366/pokemon.csv") df = df.sort("Total", descending=True) df ┌─────┬───────────────────────────┬─────────┬─────────┬───────┬─────┬────────┬─────────┬─────────┬─────────┬───────┬────────────┬───────────┐ │ # ┆ Name ┆ Type 1 ┆ Type 2 ┆ Total ┆ HP ┆ Attack ┆ Defense ┆ Sp. Atk ┆ Sp. Def ┆ Speed ┆ Generation ┆ Legendary │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ str ┆ str ┆ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ i64 ┆ bool │ ╞═════╪═══════════════════════════╪═════════╪═════════╪═══════╪═════╪════════╪═════════╪═════════╪═════════╪═══════╪════════════╪═══════════╡ │ 150 ┆ Mewtwo ┆ Psychic ┆ null ┆ 680 ┆ 106 ┆ 110 ┆ 90 ┆ 154 ┆ 90 ┆ 130 ┆ 1 ┆ true │ │ 130 ┆ GyaradosMega Gyarados ┆ Water ┆ Dark ┆ 640 ┆ 95 ┆ 155 ┆ 109 ┆ 70 ┆ 130 ┆ 81 ┆ 1 ┆ false │ │ 6 ┆ CharizardMega Charizard X ┆ Fire ┆ Dragon ┆ 634 ┆ 78 ┆ 130 ┆ 111 ┆ 130 ┆ 85 ┆ 100 ┆ 1 ┆ false │ │ 6 ┆ CharizardMega Charizard Y ┆ Fire ┆ Flying ┆ 634 ┆ 78 ┆ 104 ┆ 78 ┆ 159 ┆ 115 ┆ 100 ┆ 1 ┆ false │ │ 9 ┆ BlastoiseMega Blastoise ┆ Water ┆ null ┆ 630 ┆ 79 ┆ 103 ┆ 120 ┆ 135 ┆ 115 ┆ 78 ┆ 1 ┆ false │ │ 3 ┆ VenusaurMega Venusaur ┆ Grass ┆ Poison ┆ 625 ┆ 80 ┆ 100 ┆ 123 ┆ 122 ┆ 120 ┆ 80 ┆ 1 ┆ false │ │ 142 ┆ AerodactylMega Aerodactyl ┆ Rock ┆ Flying ┆ 615 ┆ 80 ┆ 135 ┆ 85 ┆ 70 ┆ 95 ┆ 150 ┆ 1 ┆ false │ │ 94 ┆ GengarMega Gengar ┆ Ghost ┆ Poison ┆ 600 ┆ 60 ┆ 65 ┆ 80 ┆ 170 ┆ 95 ┆ 130 ┆ 1 ┆ false │ │ 127 ┆ PinsirMega Pinsir ┆ Bug ┆ Flying ┆ 600 ┆ 65 ┆ 155 ┆ 120 ┆ 65 ┆ 90 ┆ 105 ┆ 1 ┆ false │ │ 149 ┆ ***Dragonite*** ┆*Dragon* ┆ Flying ┆*600* ┆ 91 ┆ 134 ┆ 95 ┆ 100 ┆ 100 ┆ 80 ┆ 1 ┆ false │ │ 65 ┆ AlakazamMega Alakazam ┆ Psychic ┆ null ┆ 590 ┆ 55 ┆ 50 ┆ 65 ┆ 175 ┆ 95 ┆ 150 ┆ 1 ┆ false │ │ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │ │ 14 ┆ Kakuna ┆ Bug ┆ Poison ┆ 205 ┆ 45 ┆ 25 ┆ 50 ┆ 25 ┆ 25 ┆ 35 ┆ 1 ┆ false │ │ 129 ┆ Magikarp ┆ Water ┆ null ┆ 200 ┆ 20 ┆ 10 ┆ 55 ┆ 15 ┆ 20 ┆ 80 ┆ 1 ┆ false │ │ 10 ┆ Caterpie ┆ Bug ┆ null ┆ 195 ┆ 45 ┆ 30 ┆ 35 ┆ 20 ┆ 20 ┆ 45 ┆ 1 ┆ false │ │ 13 ┆ Weedle ┆ Bug ┆ Poison ┆ 195 ┆ 40 ┆ 35 ┆ 30 ┆ 20 ┆ 20 ┆ 50 ┆ 1 ┆ false │ └─────┴───────────────────────────┴─────────┴─────────┴───────┴─────┴────────┴─────────┴─────────┴─────────┴───────┴────────────┴───────────┘ I would try to remove this one row like this: df.filter(pl.col("Type 1") != "Dragon" & pl.col("Total") != 600) but this gives a TypeError: the truth value of an Expr is ambiguous. If we change to .ne() then it runs, but it's removing anything that is a Dragon and removing anything with a Total score of 600 (5 records), when we instead want to remove anything that is BOTH a Dragon and a Total score of 600 (1 record). df.filter(pl.col("Type 1").ne("Dragon") & pl.col("Total").ne(600)) What is the proper way to combine multiple filter conditions like this? | The error is happening because when you use multiple conditions you need to wrap then in parenthesis. 
Nevertheless to get rid of that single line the proper way is: df.filter((pl.col("Type 1") != "Dragon") | (pl.col("Total") != 600)) Using a parenthesis over both conditions, and use instead of & the operator | (OR) operator to retain the records that don't meet either of these conditions, you only exclude the records that are both 'Dragon' and have a Total of 600. | 2 | 4 |
77,405,100 | 2023-11-1 | https://stackoverflow.com/questions/77405100/immortal-objects-in-python-how-do-they-work | Recently I came across PEP 683 – Immortal Objects, Using a Fixed Refcount https://peps.python.org/pep-0683/ What I figured out that objects like None, True, False will be shared across interpreter instead of creating copies. These were already immutable, weren't they? Can someone please explain in easy terms. Thanks. | That PEP is about CPython, which is the reference implementation of Python. As such, it talks about things on a very different level than what most of us are thinking about. CPython keeps track of how many references there are to any given object (the refcount). If you do something like a = some_expression(), the object that is assigned to a gets its refcount increased. Same with something like my_list.append(something). If you clear a list, reassign or delete a variable, or something else like that, CPython decreases the refcount of the objects involved by one. When the refcount reaches 0, it knows it can clean up that object. Now, certain values like None, True, and False are never going to get to 0 as long as the program is running, so they are effectively immortal. But up until 3.12, CPython didn't know that it didn't need to bother changing the refcount for these objects, so it still updated it every time they got a new reference or a reference disappeared. So while the Python-facing part of these objects were immutable, on the C-side, None and objects like it actually have a bit of memory attached to it that was mutable. And that has impacts on performance (outlined in the "Motivation" section of the PEP). This is all very nitty-gritty, and I don't understand the full implications of all this myself, but unless you're writing a C extension, you probably don't need to worry about it. | 2 | 3 |
77,403,801 | 2023-11-1 | https://stackoverflow.com/questions/77403801/make-a-httpx-get-request-with-data-body | When I do a curl request for GET endpoint of a REST API, I would go it like this curl -X 'GET' \ 'http://localhost:8000/user/?limit=10' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d '[ { "item_type": "user", "skip": 0 } ]' Directly translated into requests, I would get import requests headers = { 'accept': 'application/json', 'Content-Type': 'application/json', } params = { 'limit': '10' } json_data = [ { 'item_type': 'user', 'skip': 0, }, ] response = requests.get('http://localhost:8000/user/', params=params, headers=headers, json=json_data) # Note: json_data will not be serialized by requests # exactly as it was in the original request. #data = '[\n {\n "item_type": "document",\n "skip": 0\n }\n]' #response = requests.get('http://localhost:8000/user/', params=params, headers=headers, data=data) Now for my test environment I am going to use httpx's AsyncClient, but I cannot find a way to use the data part, i.e., [{"item_type": "user", "skip": 0}], in a get request. It would be terrific, if you know how to address this. | According to the httpx docs, .get doesn't support request bodies. It suggests you use the more generic .request function instead. The HTTP GET, DELETE, HEAD, and OPTIONS methods are specified as not supporting a request body. To stay in line with this, the .get, .delete, .head and .options functions do not support content, files, data, or json arguments. # possible alternative httpx.request( method="GET", url="https://www.example.com/", content=b'A request body on a GET request.' ) Important note: the HTTP spec states clients should not use body content with GET requests and if they do, there's no guarantee that it will show up at the server (as intermediaries are allowed to strip it off): Although request message framing is independent of the method used, content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack (Section 11.2 of [HTTP/1.1]). A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain | 2 | 3 |
77,403,294 | 2023-11-1 | https://stackoverflow.com/questions/77403294/how-can-i-use-commas-inside-a-csv-column-to-separate-multiple-floating-point-val | I have a question that is closely related to this question here: How to convert .wav files into a Pandas DataFrame in order to feed it to a neural network? I have created a pandas DataFrame with the following code: df = pd.DataFrame(data={"wavsamples": pd.Series(wavsamples), "wavsamplerate": pd.Series(wavsamplerate), "wavname": pd.Series(wavname)}, copy=False, columns = ['wavsamples','wavsamplerate','wavname']) df.index.name = 'filenumber' If I print the second column inside my pandas DataFrame with print(df.wavsamples.to_string(index=False)) it shows me the pandas series 'wavsamples' that looks like this: [0.02709961, 0.06796265, -0.011810303, -0.23361... [0.0068969727, 0.04547119, 0.043029785, -0.1025... [-0.005432129, 0.021057129, 0.078063965, 0.0270... [0.00079345703, 0.064941406, 0.09710693, -0.088... [-0.0067749023, 0.008087158, 0.06536865, 0.0219... [-0.008758545, 0.015106201, 0.08139038, 0.02600... [-0.0034179688, 0.039733887, 0.07711792, 0.1164... [-0.0008087158, -0.000579834, -0.00062561035, -... [0.021026611, 0.029907227, 0.040527344, 0.05448... [0.017288208, 0.026321411, 0.0340271, 0.0403137... [0.019561768, 0.026611328, 0.03668213, 0.047576... [0.022827148, 0.03414917, 0.056289673, 0.078018... Each of these 12 rows represents the raw floating point sample values of a .wav file. Now if I write these arrays inside a column of a CSV-file with: df.to_csv("./test.csv", sep=',', columns = ['wavsamples','wavsamplerate','wavname']) I get the following csv file: filenumber,wavsamples,wavsamplerate,wavname 0,"[ 0.02709961 0.06796265 -0.0118103 ... -0.36627197 -0.36645508 -0.3657837 ]",44100,Audio1.wav 1,"[ 0.00689697 0.04547119 0.04302979 ... -0.03359985 -0.03244019 -0.03167725]",44100,Audio2.wav 2,"[-0.00543213 0.02105713 0.07806396 ... 0.45645142 0.45541382 0.45510864]",44100,Audio3.wav 3,[0.00079346 0.06494141 0.09710693 ... 0.22116089 0.22421265 0.22741699],44100,Audio4.wav 4,"[-0.0067749 0.00808716 0.06536865 ... 0.24209595 0.23977661 0.23754883]",44100,Audio5.wav 5,"[-0.00875854 0.0151062 0.08139038 ... -0.0256958 -0.0184021 -0.01156616]",44100,Audio6.wav 6,"[-0.00341797 0.03973389 0.07711792 ... 0.41384888 0.41375732 0.41348267]",44100,Audio7.wav 7,"[-0.00080872 -0.00057983 -0.00062561 ... 0.0100708 0.0100708 0.01000977]",44100,Audio8.wav 8,[0.02102661 0.02990723 0.04052734 ... 0.00976562 0.00965881 0.00990295],44100,Audio9.wav 9,[0.01728821 0.02632141 0.0340271 ... 0.01344299 0.01341248 0.01325989],44100,Audio10.wav 10,[0.01956177 0.02661133 0.03668213 ... 0.0141449 0.01400757 0.01402283],44100,Audio11.wav 11,[0.02282715 0.03414917 0.05628967 ... 0.01019287 0.01037598 0.01025391],44100,Audio12.wav So the column 'wavsamples' lost all of its commas. If I now read and print the column from the csv file with: with open("./test.csv", "r") as csv_file: reader = csv.reader(csv_file) rows = list(reader) audiofile = rows[12][1] print(audiofile) I just get: [0.02282715 0.03414917 0.05628967 ... 0.01019287 0.01037598 0.01025391] Not only have all the commas been removed, but as the wavsamples column gets treated like a character string the three dots get mistaken as literal dot characters so all the sample values in between get lost when writing them into the csv ... I know that csv is possibly the worst format to store .wav data like pointed out a lot of times here on stack overflow ... 
but I'm just curious - is there any way to store audio arrays with commas between the floating point values inside a csv column? I want to get a result like this when I read something from the csv: [0.022827148, 0.03414917, 0.056289673, 0.078018... Instead of this: [0.02282715 0.03414917 0.05628967 ... 0.01019287 0.01037598 0.01025391] How could I write the csv column so that I could read it correctly afterwards? | The CSV format will not support list types in a column, you need scalar values. What happens here is that pandas will implicitly cast that column containing the list type to a string. It has nothing to do with your chosen delimiter. One possible way to handle this if you have to have CSV format is to parse it back to a list type using ast.literal_eval, to be applied across that column, when you read the data back in. import pandas as pd import numpy as np df = pd.DataFrame({'a': [[1, 2], [2, 3], [3, 4]], 'b': [4, 5, 6]}) print(df.head()) df.to_csv('nested_test.csv', index=False) df = pd.read_csv('nested_test.csv') print(df.head() for _, row in df.iterrows(): # Note that, though it *looked* like a list in df.head() # we just get [ printed, as the first character of the # string it actually is print(row['a'][0]) import ast df['a'] = df['a'].apply(ast.literal_eval) for _, row in df.iterrows(): print(row['a'][0]) # Now we get the first item in the list If you used polars instead of pandas, this implicit cast would not be allowed and it would throw an exception. This is despite the fact that it has a List type as a first-class citizen. For this kind of data, you really should be looking into a format such as parquet, which is not only many times faster to parse in, but will natively handle the nested structure of your column(s). Finally, in your question, you specify using the csv module to read the data back in. You can do this, but I don't suppose it's particularly elegant, given the restrictions on CSV that I mentioned. This works for the example I gave, which assumes that all other non-list columns will be int, otherwise you'll need to handle them one-by-one. import csv with open('nested_test.csv') as infile: reader = csv.reader(infile) headers = next(reader) rebuilt = [] for row in reader: rebuilt.extend([ast.literal_eval(row[0]), *map(int, row[1:])]) print(rebuilt) Just to complicate things further, you don't actually have lists in your column but actually np.ndarray objects. When they get converted to strings, you lose the commas from __repr__ on top of the other complications. arr = np.array([1., 2., 3.]) print(arr) Save yourself an extra headache by using: df['a'] = df['a'].apply(np.ndarray.tolist) before df.to_csv() ... you might now be seeing why CSV is not a great format here ... | 3 | 2 |
77,400,070 | 2023-11-1 | https://stackoverflow.com/questions/77400070/add-missing-value-of-pair | I have a dataframe: [{'Date': Timestamp('2023-01-01 00:00:00'),'Sex':'M', 'Value':11, 'Target':5, 'A':48}, {'Date': Timestamp('2023-01-01 00:00:00'),'Sex':'F', 'Value':25, 'Target':7, 'A':20}, {'Date': Timestamp('2023-01-10 00:00:00'),'Sex':'M', 'Value':45, 'Target':6, 'A':20}, {'Date': Timestamp('2023-01-10 00:00:00'),'Sex':'F', 'Value':5, 'Target':2, 'A':16}, {'Date': Timestamp('2023-01-20 00:00:00'),'Sex':'M', 'Value':10, 'Target':8, 'A':30}] {'Date': Timestamp('2023-01-20 00:00:00'),'Sex':'M', 'Value':1, 'Target':18, 'A':3}] Date Sex Value Target A 0 2023-01-01 M 11 5 48 1 2023-01-01 F 25 7 20 2 2023-01-10 M 45 6 20 3 2023-01-10 F 5 2 16 4 2023-01-20 M 10 8 30 5 2023-01-20 M 1 18 3 And like to fill up the missing Date: 2023-01-20 Sex: F as 0 to Value, Target and A. Result: Date Sex Value Target A 0 2023-01-01 M 11 5 48 1 2023-01-01 F 25 7 20 2 2023-01-10 M 45 6 20 3 2023-01-10 F 5 2 16 4 2023-01-20 M 10 8 30 5 2023-01-20 M 1 18 3 6 2023-01-20 F 0 0 0 | You can perform a double merge, once to combine the dates and M/F, then to add the missing combinations to the original data. out = (df[['Date']] .drop_duplicates() .merge(pd.Series(['M', 'F'], name='Sex'), how='cross') .merge(df, how='left').fillna(0).convert_dtypes() ) Alternatively, using janitor's complete: import janitor out = df.complete('Date', {'Sex': ['M', 'F']}, fill_value=0) Output: Date Sex Value Target A 0 2023-01-01 M 11 5 48 1 2023-01-01 F 25 7 20 2 2023-01-10 M 45 6 20 3 2023-01-10 F 5 2 16 4 2023-01-20 M 10 8 30 5 2023-01-20 M 1 18 3 6 2023-01-20 F 0 0 0 Another option is to build a new dataframe and run an outer/left merge (similar to what complete does internally) : index = pd.MultiIndex.from_product([df.Date.unique(), df.Sex.unique()], names = ['Date', 'Sex']) index = pd.DataFrame([], index = index) index.merge(df, on=['Date','Sex'],how='left').fillna(0) Date Sex Value Target A 0 2023-01-01 M 11.0 5.0 48.0 1 2023-01-01 F 25.0 7.0 20.0 2 2023-01-10 M 45.0 6.0 20.0 3 2023-01-10 F 5.0 2.0 16.0 4 2023-01-20 M 10.0 8.0 30.0 5 2023-01-20 M 1.0 18.0 3.0 6 2023-01-20 F 0.0 0.0 0.0 | 3 | 2 |
77,401,175 | 2023-11-1 | https://stackoverflow.com/questions/77401175/how-to-make-flake8-ignore-syntax-within-strings | I'm suddenly getting flake8 errors for syntax within strings. For example, for the following line of code: tags.append(f'error_type:{error.get("name")}') I'm getting this error: E231 missing whitespace after ':'. I don't want to ignore all E231 errors, because I care about them when they don't refer to text within strings. I also don't want to have to go and add # noqa comments to each of my strings. I've tried to pin my flake8 version to 6.0.0 (which is the version that previously didn't raise these errors before). I'm running flake8 with pre-commit (if that is relevant). Why am I suddenly getting these errors for strings and how can I turn them off? I should also mention that this is happening in Github Actions, specifically. | This issue seems limited to Python 3.12+ and it was fixed by flake8 version 6.1.0. The errors stopped appearing when I upgrade to flake8 6.1.0. | 10 | 23 |
77,401,947 | 2023-11-1 | https://stackoverflow.com/questions/77401947/fill-null-values-with-the-closest-non-null-value-from-other-columns | I have the following polars dataframe import polars as pl data = { "product_id": ["1", "2", "3", "4", "5", "6", "7", "8", "9"], "col1": ["a", "a", "a", "a", "a", "a", "a", "a", "a",], "col2": ["b", None, "b", None, "b", None, "b", None, "b"], "col3": ["c", None, "c", None, None, None, None, None, None], "col4": [None, None, None, None, None, "d", None, None, "d"] } df = pl.DataFrame(data) ┌────────────┬──────┬──────┬──────┬──────┐ │ product_id ┆ col1 ┆ col2 ┆ col3 ┆ col4 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str ┆ str │ ╞════════════╪══════╪══════╪══════╪══════╡ │ 1 ┆ a ┆ b ┆ c ┆ null │ │ 2 ┆ a ┆ null ┆ null ┆ null │ │ 3 ┆ a ┆ b ┆ c ┆ null │ │ 4 ┆ a ┆ null ┆ null ┆ null │ │ 5 ┆ a ┆ b ┆ null ┆ null │ │ 6 ┆ a ┆ null ┆ null ┆ d │ │ 7 ┆ a ┆ b ┆ null ┆ null │ │ 8 ┆ a ┆ null ┆ null ┆ null │ │ 9 ┆ a ┆ b ┆ null ┆ d │ └────────────┴──────┴──────┴──────┴──────┘ I need to fill null values in column col4 with the closest non-null value from the columns to the left (not considering product_id column) and create a new column based on those values. The desired output should be the following data = { "product_id": ["1", "2", "3", "4", "5", "6", "7", "8", "9"], "col1": ["a", "a", "a", "a", "a", "a", "a", "a", "a",], "col2": ["b", None, "b", None, "b", None, "b", None, "b"], "col3": ["c", None, "c", None, None, None, None, None, None], "col4": [None, None, None, None, None, "d", None, None, "d"], "desired_column": ["c", "a", "c", "a", "b", "d", "b", "a", "d"] } df = pl.DataFrame(data) shape: (9, 6) ┌────────────┬──────┬──────┬──────┬──────┬────────────────┐ │ product_id ┆ col1 ┆ col2 ┆ col3 ┆ col4 ┆ desired_column │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str ┆ str ┆ str │ ╞════════════╪══════╪══════╪══════╪══════╪════════════════╡ │ 1 ┆ a ┆ b ┆ c ┆ null ┆ c │ │ 2 ┆ a ┆ null ┆ null ┆ null ┆ a │ │ 3 ┆ a ┆ b ┆ c ┆ null ┆ c │ │ 4 ┆ a ┆ null ┆ null ┆ null ┆ a │ │ 5 ┆ a ┆ b ┆ null ┆ null ┆ b │ │ 6 ┆ a ┆ null ┆ null ┆ d ┆ d │ │ 7 ┆ a ┆ b ┆ null ┆ null ┆ b │ │ 8 ┆ a ┆ null ┆ null ┆ null ┆ a │ │ 9 ┆ a ┆ b ┆ null ┆ d ┆ d │ └────────────┴──────┴──────┴──────┴──────┴────────────────┘ I tried different approaches but didn't have any success. Is there a way to achieve this behaviour within polars? | It looks like "closest" means to favor the higher numbered columns ie 4,3,2,1. You can do that with just a coalesce df.with_columns( desired_column=pl.coalesce( 'col4', 'col3', 'col2', 'col1' ) ) shape: (9, 6) ┌────────────┬──────┬──────┬──────┬──────┬────────────────┐ │ product_id ┆ col1 ┆ col2 ┆ col3 ┆ col4 ┆ desired_column │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str ┆ str ┆ str │ ╞════════════╪══════╪══════╪══════╪══════╪════════════════╡ │ 1 ┆ a ┆ b ┆ c ┆ null ┆ c │ │ 2 ┆ a ┆ null ┆ null ┆ null ┆ a │ │ 3 ┆ a ┆ b ┆ c ┆ null ┆ c │ │ 4 ┆ a ┆ null ┆ null ┆ null ┆ a │ │ 5 ┆ a ┆ b ┆ null ┆ null ┆ b │ │ 6 ┆ a ┆ null ┆ null ┆ d ┆ d │ │ 7 ┆ a ┆ b ┆ null ┆ null ┆ b │ │ 8 ┆ a ┆ null ┆ null ┆ null ┆ a │ │ 9 ┆ a ┆ b ┆ null ┆ d ┆ d │ └────────────┴──────┴──────┴──────┴──────┴────────────────┘ You can do it more dynamically with a generator: pl.coalesce(x for x in df.columns[::-1] if x!='product_id') | 2 | 4 |
77,399,830 | 2023-11-1 | https://stackoverflow.com/questions/77399830/python-script-is-unable-to-see-environment-variable-injected-by-docker | I have instructed Docker to inject an environment variable called "INJECT_THIS" using two different methodologies (env.list file via 'docker run', and Dockerfile). My Dockerfile's RUN command invokes a Python file (run.py), but the environment variable is not seen by the Python code's os.environ object: environ({'OLDPWD': '/', 'PATH': '/command:/lsiopy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'PWD': '/app', 'LC_CTYPE': 'C.UTF-8'}) I've created a reproducible example on Github, but will also paste file contents in this question. Dockerfile FROM linuxserver/blender WORKDIR /app ENV INJECT_THIS = "inject this" COPY run.py ./ RUN apt-get update && apt-get -y install python3-pip CMD ["python3", "-u", "run.py"] run.py import os # Notice that the injected environment variable "INJECT_THIS" is not present. print('os.environ', os.environ) env.list INJECT_THIS="inject this" | When you do docker inspect <your image name> you will see, there's ENTRYPOINT specified. This entrypoint messes with environment variables. You can specify your ENTRYPOINT instead of CMD (note I removed the spaces in ENV): FROM linuxserver/blender WORKDIR /app ENV INJECT_THIS="inject this" COPY run.py ./ RUN apt-get update && apt-get -y install python3-pip ENTRYPOINT ["python3", "-u", "run.py"] Then run the docker image: $ docker run -t <your image name> Prints: os.environ environ({'PATH': '/lsiopy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': 'bb98d241500d', 'TERM': 'xterm', 'HOME': '/config', 'LANGUAGE': 'en_US.UTF-8', 'LANG': 'en_US.UTF-8', 'S6_CMD_WAIT_FOR_SERVICES_MAXTIME': '0', 'S6_VERBOSITY': '1', 'S6_STAGE2_HOOK': '/docker-mods', 'VIRTUAL_ENV': '/lsiopy', 'DISPLAY': ':1', 'PERL5LIB': '/usr/local/bin', 'OMP_WAIT_POLICY': 'PASSIVE', 'GOMP_SPINCOUNT': '0', 'START_DOCKER': 'true', 'PULSE_RUNTIME_PATH': '/defaults', 'NVIDIA_DRIVER_CAPABILITIES': 'all', 'LSIO_FIRST_PARTY': 'true', 'TITLE': 'Blender', 'INJECT_THIS': 'inject this'}) | 3 | 2 |
77,390,826 | 2023-10-30 | https://stackoverflow.com/questions/77390826/sqlalchemy-error-invalidrequesterror-cant-operate-on-closed-transaction-insi | I'm encountering an issue while working with SQLAlchemy in a FastAPI project. I've set up a route that's supposed to add items to a database using a context manager and nested transactions. If a single item is failed to be added (due to constraints or any reason) it should not be included in the commit. However the remaining items, added both before or after, should be included. When using nested transactions, I would expect to be able to keep track of my failed and succesful additions. However, I keep running into the following error: sqlalchemy.exc.InvalidRequestError: Can't operate on a closed transaction inside a context manager. I've provided the relevant code below: router = APIRouter() @lru_cache() def get_session_maker() -> sessionmaker: # create a reusable factory for new AsyncSession instances engine = create_engine(SQLALCHEMY_DATABASE_URI, echo=True) return sessionmaker(engine) def get_session() -> Generator[Session, None, None]: cached_sessionmaker = get_session_maker() with cached_sessionmaker.begin() as session: yield session @router.post("/items") def add_items( session: Session = Depends(get_database.get_session), ) -> Dict[str, Any]: request_inputs = [ RequestInput(name="chair", used_for="sitting"), RequestInput(name="table", used_for="dining"), RequestInput(name="tv", used_for="watching"), ] uploaded_items = [] failed_items = [] for request_input in request_inputs: try: with session.begin_nested(): item= Item( **request_input.dict() ) session.add(item) session.refresh(item) uploaded_items += 1 except IntegrityError as e: # Handle any integrity constraint violations here session.rollback() failed_items += 1 except Exception as e: # Handle other exceptions session.rollback() failed_items += 1 session.commit() return { "uploaded": uploaded_items, "failed": failed_items, } It is obviously caused by my session to be closed prematurely, however I cannot figure out where I am closing the transaction to early, whilst trying to add all non failed items to my db. Can someone please help me understand why I'm encountering this error and how to fix it? Thank you in advance for your assistance. I tried to use session.begin_nested() to keep track of the status of my transaction, however it seems to close somewhere. if not used the begin_nested(), I only commit the items before the failed instance. All items afterwards are excluded. | Summary The error is caused by the rollbacks inside the except blocks. They roll back the outer transaction, rendering the outer context manager unusable. The inner transaction created by begin_nested() will roll back automatically if an exception occurs, so there is no need to roll back the outer transaction. Detail What is happening is that the line session.refresh(item) raises an exception InvalidRequestError: Instance '<Item at 0x7faa69941070>' is not persistent within this Session because only instances that be committed can be refreshed. This exception is trapped by second except clause, which calls the outer session's rollback() method. The rollback closes the outer session's transaction, triggering the reported error InvalidRequestError: Can't operate on a closed transaction inside a context manager. when the session's commit() method is called. In fact the explicit commits and rollbacks are unnecessary, because the begin() methods of the inner and outer transactions commit and rollback automatically. 
The code (leaving out the FastAPI-specific parts) can be reduced to this: import sqlalchemy as sa from sqlalchemy import orm from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column class Base(DeclarativeBase): pass class Item(Base): __tablename__ = "items" id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column(unique=True) used_for: Mapped[str] engine = sa.create_engine("sqlite://", echo=True) Base.metadata.create_all(engine) Session = orm.sessionmaker(engine) with Session.begin() as session: request_inputs = [ dict(name="chair", used_for="sitting"), dict(name="table", used_for="dining"), dict(name="chair", used_for="sitting"), # Force an IntegrityError ] uploaded_items = failed_items = 0 for request_input in request_inputs: try: with session.begin_nested(): item = Item(**request_input) session.add(item) uploaded_items += 1 # If an exception is raised inside the begin_nested() method # the inner transaction will be rolled back and the exception # will be re-raised. Trap it inside the outer session to prevent # the outer session from being rolled back. except sa.exc.IntegrityError as e: # Handle any integrity constraint violations here failed_items += 1 except Exception as e: # Handle other exceptions failed_items += 1 result = { "uploaded": uploaded_items, "failed": failed_items, } print(f"{result = }") | 3 | 0 |
77,363,596 | 2023-10-26 | https://stackoverflow.com/questions/77363596/how-to-get-more-word-suggestions-from-hunspell-with-pyhunspell | I'm using hunspell with the pyhunspell wrapper. I'm calling: hunspell.suggest("Yokk") But this is returning only ["Yolk", "Yoke"]. I saw that "York" is in the dictionary but is not being returned. Is there a way to return more than 2 suggestions, either by increasing the distance threshold or the number of top suggestions? The text I'm trying to correct is "New York" and I have my own ranker that ranks the suggestions downstream. I just need more suggestions. I tried aspell and by default its returning 10 suggestions, one of which is in fact "York". Note: The documentation doesn't mention any other arguments for method suggest. Even using the CLI I only get two suggestions: hunspell -d en_US Hunspell 1.7.2 yokk & yokk 2 0: yolk, yoke I've checked the default dictionaries are properly loaded using: hunspell -D SEARCH PATH: ... AVAILABLE DICTIONARIES (path is not mandatory for -d option): /Library/Spelling/en_US LOADED DICTIONARY: /Library/Spelling/en_US.aff /Library/Spelling/en_US.dic ➜ 2 subl /Library/Spelling/en_US.dic And I've also checked that the expected "York" is in the dictionary: cat /Library/Spelling/en_US.dic | grep York York/M I wonder if there is some other configuration I can set somewhere, I can't see anything evident in either the wrapper or the CLI documentation: https://github.com/pyhunspell/pyhunspell/wiki/Documentation https://github.com/hunspell/hunspell | I have installed 2020 dictionaries from this link: http://wordlist.aspell.net/dicts/ the newest dictionaries available here: https://github.com/LibreOffice/dictionaries Then tested the code: import hunspell #0.5.5 hobj = hunspell.HunSpell('./en_US.dic', './en_US.aff') tt = hobj.suggest("Yokk") print(tt) And got the output that is different from yours: ['York', 'Yoko', 'Yolk', 'Yoke'] | 4 | 2 |
77,364,550 | 2023-10-26 | https://stackoverflow.com/questions/77364550/attributeerror-module-pkgutil-has-no-attribute-impimporter-did-you-mean | Earlier I installed some packages like Matplotlib, NumPy, pip (version 23.3.1), wheel (version 0.41.2), etc., and did some programming with those. I used the command C:\Users\UserName>pip list to find the list of packages that I have installed, and I am using Python 3.12.0 (by employing code C:\Users\UserName>py -V). I need to use pyspedas to analyse some data. I am following the instruction that that I received from site to install the package, with a variation (I am not sure whether it matters or not: I am using py, instead of python). The commands that I use, in the order, are: py -m venv pyspedas .\pyspedas\Scripts\activate pip install pyspedas After the last step, I am getting the following output: Collecting pyspedas Using cached pyspedas-1.4.47-py3-none-any.whl.metadata (14 kB) Collecting numpy>=1.19.5 (from pyspedas) Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl.metadata (61 kB) Collecting requests (from pyspedas) Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB) Collecting geopack>=1.0.10 (from pyspedas) Using cached geopack-1.0.10-py3-none-any.whl (114 kB) Collecting cdflib<1.0.0 (from pyspedas) Using cached cdflib-0.4.9-py3-none-any.whl (72 kB) Collecting cdasws>=1.7.24 (from pyspedas) Using cached cdasws-1.7.43.tar.gz (21 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting netCDF4>=1.6.2 (from pyspedas) Using cached netCDF4-1.6.5-cp312-cp312-win_amd64.whl.metadata (1.8 kB) Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─> [33 lines of output] Traceback (most recent call last): File "C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module> main() File "C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 112, in get_requires_for_build_wheel backend = _build_backend() ^^^^^^^^^^^^^^^^ File "C:\Users\UserName\pyspedas\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend obj = import_module(mod_path) ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\UserName\AppData\Local\Programs\Python\Python312\Lib\importlib\__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1381, in _gcd_import File "<frozen importlib._bootstrap>", line 1354, in _find_and_load File "<frozen importlib._bootstrap>", line 1304, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1381, in _gcd_import File "<frozen importlib._bootstrap>", line 1354, in _find_and_load File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 929, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 994, in exec_module File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "C:\Users\UserName\AppData\Local\Temp\pip-build-env-_lgbq70y\overlay\Lib\site-packages\setuptools\__init__.py", line 16, in <module> import setuptools.version File "C:\Users\UserName\AppData\Local\Temp\pip-build-env-_lgbq70y\overlay\Lib\site-packages\setuptools\version.py", line 1, in <module> import pkg_resources File "C:\Users\UserName\AppData\Local\Temp\pip-build-env-_lgbq70y\overlay\Lib\site-packages\pkg_resources\__init__.py", line 2191, in <module> register_finder(pkgutil.ImpImporter, find_on_path) ^^^^^^^^^^^^^^^^^^^ AttributeError: module 'pkgutil' has no attribute 'ImpImporter'. Did you mean: 'zipimporter'? [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. After little bit of googling, I came to know that this issues was reported at multiple places, but none for this package. I did install wheel in the new environment as mentioned in the answer here, but the problem still persists. Instead of setting up a virtual environment, I simply executed the command py -m pip install pyspedas. But I am still getting the error. What I could gather is that the program has an issue with Collecting pywavelets (from pyspedas) Using cached PyWavelets-1.4.1.tar.gz (4.6 MB) Installing build dependencies ... done I am using IDLE in Windows 11. | Due to the removal of the long-deprecated pkgutil.ImpImporter class, the pip command may not work for Python 3.12. 
You just have to manually install pip for Python 3.12 python -m ensurepip --upgrade python -m pip install --upgrade setuptools python -m pip install <module> In your virtual environment: pip install --upgrade setuptools Python comes with an ensurepip, which can install pip in a Python environment. https://pip.pypa.io/en/stable/installation/ On Linux/macOS terminal: python -m ensurepip --upgrade On Windows: py -m ensurepip --upgrade also, make sure to upgrade pip: py -m pip install --upgrade pip To install numpy on Python 3.12, you must use numpy version 1.26.4 pip install numpy==1.26.4 https://github.com/numpy/numpy/issues/23808#issuecomment-1722440746 for Ubuntu sudo apt install python3.12-dev or python3.12 -m pip install --upgrade setuptools | 206 | 290 |
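For context on why upgrading setuptools helps: pkgutil.ImpImporter was removed in Python 3.12, and the pkg_resources bundled inside old build environments still references it. A small, hedged check of which side of the removal a given interpreter is on:

import pkgutil
import sys

# On Python <= 3.11 the (deprecated) ImpImporter attribute still exists; on 3.12+
# it is gone, which is exactly what old pkg_resources/setuptools code trips over.
print(sys.version_info)
print(hasattr(pkgutil, "ImpImporter"))  # False on 3.12+ -> upgrade setuptools in that environment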
77,385,142 | 2023-10-29 | https://stackoverflow.com/questions/77385142/using-a-pipe-symbol-in-typing-literal-string | I have a function that accepts certain literals for a specific argument: from typing import Literal def fn(x: Literal["foo", "bar", "foo|bar"]) -> None: reveal_type(x) The third contains a pipe symbol (|), "foo|bar". This is interpreted by mypy as an error, as the name foo is not defined. I guess this happens due to how forward references are evaluated? I use Python 3.8 with: from __future__ import annotations Is there a way to make this work? I can not change the string due to breaking backward compatibility, but currently, the whole annotation is revealed as Any, i.e. it holds no value. | This bug is now fixed. The change hasn't been released yet, however. | 5 | 2 |
77,391,156 | 2023-10-30 | https://stackoverflow.com/questions/77391156/custom-labels-in-vertex-ai-pipeline-pipelinejobschedule | I would like to know the steps involved in adding custom labels to a Vertex AI pipeline’s PipelineJobSchedule. Can anyone please provide me with the necessary guidance, as it's not working when I add the labels inside the PipelineJob parameters? # https://cloud.google.com/vertex-ai/docs/pipelines/schedule-pipeline-run#create-a-schedule pipeline_job = aiplatform.PipelineJob( template_path="COMPILED_PIPELINE_PATH", pipeline_root="PIPELINE_ROOT_PATH", display_name="DISPLAY_NAME", labels="{"name":"test_xx"}" ) pipeline_job_schedule = aiplatform.PipelineJobSchedule( pipeline_job=pipeline_job, display_name="SCHEDULE_NAME" ) pipeline_job_schedule.create( cron="TZ=CRON", max_concurrent_run_count=MAX_CONCURRENT_RUN_COUNT, max_run_count=MAX_RUN_COUNT, ) | There was a bug in the Vertex AI platform SDK, which is discussed in the GitHub issue ticket below. It has been fixed in SDK version 1.37.0 (released on December 5th, 2023). GitHub issue ticket: https://github.com/googleapis/python-aiplatform/issues/2929 | 3 | 1 |
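As a sketch on top of the fix above (argument shapes follow the public aiplatform SDK; the placeholder paths are copied from the question, so this is not runnable as-is): after pip install --upgrade "google-cloud-aiplatform>=1.37.0", labels are passed as a plain dict rather than a quoted string, which the snippet in the question is missing:

from google.cloud import aiplatform

# Placeholder values copied from the question; labels must be a Dict[str, str].
pipeline_job = aiplatform.PipelineJob(
    template_path="COMPILED_PIPELINE_PATH",
    pipeline_root="PIPELINE_ROOT_PATH",
    display_name="DISPLAY_NAME",
    labels={"name": "test_xx"},
)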
77,388,920 | 2023-10-30 | https://stackoverflow.com/questions/77388920/error-could-not-build-wheels-for-aiohttp-which-is-required-to-install-pyprojec | Newbie here. I have been trying to installen the openai library into python, but I keep running into problems. I have already installed C++ libraries. It seems to have problems specific with aio http, and I get the error below. I a running a Windows 11 laptop without admin restrictions. Error "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\sande\AppData\Local\Programs\Python\Python312\include -IC:\Users\sande\AppData\Local\Programs\Python\Python312\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" /Tcaiohttp/_websocket.c /Fobuild\temp.win-amd64-cpython-312\Release\aiohttp/_websocket.obj _websocket.c aiohttp/_websocket.c(1475): warning C4996: 'Py_OptimizeFlag': deprecated in 3.12 aiohttp/_websocket.c(3042): error C2039: 'ob_digit': is not a member of '_longobject' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects | Either use python 3.11 or pip install aiohttp==3.9.0b0 installs their current beta release that supports python 3.12.x then try openai installation Link to git :https://github.com/KillianLucas/open-interpreter/issues/581 | 9 | 10 |
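A hedged addendum to the accepted workaround: once a stable aiohttp 3.9.x release with CPython 3.12 wheels is published (availability is an assumption to check on PyPI), the prebuilt wheel avoids the local C build entirely, e.g. pip install --upgrade "aiohttp>=3.9" followed by pip install openai inside the same environment.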
77,362,308 | 2023-10-25 | https://stackoverflow.com/questions/77362308/whats-the-difference-between-pip3-install-e-and-python3-setup-py-develop | When installing a Python package locally in editable mode, 'pip3 install -e .' and 'python3 setup.py develop' will both install the package locally in editable mode. I know that 'pip3 install -e .' is recommended over 'python3 setup.py develop', but I am unclear as to why and what the differences are between the two. This answer for a separate question (https://stackoverflow.com/a/19048754/8903959) seems to indicate that it's because it's more difficult to uninstall when using python3 setup.py develop. Why is that the case? Additionally, the answer (https://stackoverflow.com/a/19048754/8903959) says that dependencies are managed differently and incorrectly. What are the specific differences surrounding dependencies and why do they exist, especially if 'python3 setup.py develop' manages them incorrectly? Apologies if this is a trivial question, but try as I might I could not find the answer or documentation for this reasoning elsewhere. | At this point in time, trying to get an accurate description of the differences between python setup.py install and python -m pip install seems rather pointless. python setup.py develop and python setup.py install originate from a time before pip existed. The behavior of these commands has not been kept up to date with all the latest standards while pip's behavior has. setuptools' place and role in the Python packaging ecosystem has shifted; it is now simply a "build back-end". setup.py should now be considered as a configuration file only, not as an executable script. I recommend reading this article for a lot of historical and technical background on the topic: Why you shouldn't invoke setup.py directly. Also this article on the Python packaging user guide: Is setup.py deprecated? | 2 | 4 |
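A minimal sketch of the modern editable-install setup the answer points toward (the project metadata below is a placeholder, not taken from the question): a pyproject.toml such as

[build-system]
requires = ["setuptools>=64", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "my-package"   # placeholder
version = "0.1.0"     # placeholder

followed by python3 -m pip install -e . in the project directory. setuptools 64+ is the release that added PEP 660 editable-wheel support, so pip can provide the same "develop" behaviour without setup.py ever being invoked directly.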
77,371,281 | 2023-10-27 | https://stackoverflow.com/questions/77371281/why-does-unittests-mock-patch-start-re-run-the-function-in-which-the-patcher | Let's say we have two files: to_patch.py from unittest.mock import patch def patch_a_function(): print("Patching!") patcher = patch("to_be_patched.function") patcher.start() print("Done patching!") to_be_patched.py from to_patch import patch_a_function def function(): pass patch_a_function() function() And we run python -m to_be_patched. This will output: Patching! Patching! Why isn't Done patching! ever printed? Why is Patching! printed twice? I've narrowed the answer to (2) down; the call to patch.start seems to trigger patch_a_function again. I suspect this is because it's imported in to_be_patched.py, but am not sure why the function itself would run for a second time. Similarly, I'm not sure why the Done patching! line is not reached in either of the calls to patch_a_function. patcher.start() can't be blocking, because the program exits nicely instead of hanging there... right? Edit: Huh. It looks like no one can reproduce Done patching! not being printed (which was honestly the main difficulty)—so I guess that's just a my-side problem | Why isn't Done patching! ever printed? Can not reproduce. $ python -m to_be_patched Patching! Patching! Done patching! Done patching! Why is Patching! printed twice? Your module gets imported twice. If you add print(__name__) into the file to_be_patched.py it will be clear: from to_patch import patch_a_function print(f"{__name__=}") def function(): pass patch_a_function() function() # note: this line doesn't actually do anything, and could be commented out Result: $ python -m to_be_patched __name__='__main__' Patching! __name__='to_be_patched' Patching! Done patching! Done patching! When you use python -m to_be_patched your module to_be_patched will be loaded as top-level code, i.e. the module __name__ will be "__main__". When mock.patch is used, mock will first import the patch target. When given a patch target as a string like "to_be_patched.function" mock will use importlib, via pkgutil.resolve_name, to find the correct namespace in which to patch. This method loads the target module with __name__ as "to_be_patched", it's not the top-level code environment. Although it's the same underlying .py file being loaded, there is a cache miss in sys.modules, because of the name mismatch: "__main__" != "to_be_patched". The function patch_a_function now has dual identities and exists in the module __main__ as well as the module to_be_patched, so what you're seeing is each one getting called. The first call triggers the second call, by the double-import mechanism described. | 2 | 2 |
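One hedged way to stop the module-level call from running twice (a sketch of the questioner's to_be_patched.py, not the only possible fix) is to guard it so that only the top-level __main__ execution triggers it:

# to_be_patched.py
from to_patch import patch_a_function

def function():
    pass

if __name__ == "__main__":
    patch_a_function()
    function()

With the guard in place, the second import that mock performs under the name "to_be_patched" no longer re-runs patch_a_function, so "Patching!" is printed once.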
77,358,061 | 2023-10-25 | https://stackoverflow.com/questions/77358061/how-does-ctrl-c-work-with-multiple-processes-in-python | I am trying to distribute work across multiple processes. Exceptions that occur in another process should be propagated back and handled in the main process. This seems to work for exceptions thrown in worker, but not for Ctrl-C. import time from concurrent.futures import ProcessPoolExecutor, Future, wait import traceback def worker(): time.sleep(5) # raise RuntimeError("Some Error") return True def main(): with ProcessPoolExecutor() as executor: stop = False tasks = set() while True: try: # keep submitting new tasks if not stop: time.sleep(0.1) print("Submitting task to worker") future = executor.submit(worker) tasks.add(future) done, not_done = wait(tasks, timeout=0) tasks = not_done # get the result for future in done: try: result = future.result() except Exception as e: print(f"Exception in worker: {type(e).__name__}: {e}") else: print(f"Worker finished with result: {result}") # exit loop if there are no tasks left and loop was stopped if stop: print(f"Waiting for {len(tasks)} to finish.") if not len(tasks): print("Finished all remaining tasks") break time.sleep(1) except KeyboardInterrupt: print("Recieved Ctrl-C") stop = True except Exception as e: print(f"Caught {e}") stop = True if __name__ == "__main__": main() Some of my observations: When you run this script and press Ctrl-C, the KeyboardInterrupt exception is thrown multiple times. If you remove the KeyboardInterrupt exception, Ctrl-C is not caught at all. But my understanding is that the second except should catch all exceptions. All exceptions thrown in the worker are reraised by future.result (expected behaviour) I suspect that if Ctrl-C is pressed while a process is being spawned it could lead to some unexpected behaviour. Edit: These problems occur on both Linux and Windows. Ideally there is a solution for both, but in case of doubt the solution should work on Linux | It wasn't clear to me whether you want your worker function to continue running until completion (normal or abnormal) ignoring any Ctrl-C events. Assuming that to be the case, the following code should work under both Linux and Windows. The idea is to use a "pool initializer", i.e. a function that will run in each pool process prior to executing submitted tasks. Here the initializer executes code to ignore Ctrl-C events (KeyboardInterrupt exceptions). Note that I have made a few other code adjustments (marked with comments). 
import time from concurrent.futures import ProcessPoolExecutor, Future, wait import traceback import signal def init_pool_processes(): # Ignore Ctrl-C signal.signal(signal.SIGINT, signal.SIG_IGN) def worker(): time.sleep(5) # raise RuntimeError("Some Error") return True def main(): # Create pool child processes, which will now # ignore Ctrl-C with ProcessPoolExecutor(initializer=init_pool_processes) as executor: stop = False tasks = set() while True: try: # keep submitting new tasks if not stop: print("Submitting task to worker") future = executor.submit(worker) tasks.add(future) # Move to here: time.sleep(0.1) done, not_done = wait(tasks, timeout=0) tasks = not_done # get the result for future in done: try: result = future.result() except Exception as e: print(f"Exception in worker: {type(e).__name__}: {e}") else: print(f"Worker finished with result: {result}") # exit loop if there are no tasks left and loop was stopped if stop: print(f"Waiting for {len(tasks)} to finish.") if not len(tasks): print("Finished all remaining tasks") break time.sleep(1) except KeyboardInterrupt: # Ignore future Ctrl-C events: signal.signal(signal.SIGINT, signal.SIG_IGN) print("Received Ctrl-C") # spelling stop = True except Exception as e: print(f"Caught {e}") # Ignore future Ctrl-C events: if not stop: signal.signal(signal.SIGINT, signal.SIG_IGN) stop = True if __name__ == "__main__": main() | 3 | 1 |
77,397,920 | 2023-10-31 | https://stackoverflow.com/questions/77397920/python-poetry-cant-publish-to-google-artifact-registry-repository-the-reque | I have a proprietary Python project that I manage using Poetry. I stored the code in the Google Artifact Registry in the past, but now I switched to a different dev environment (Arch Linux in a VM -> Windows + Ubuntu WSL) and I can't upload it the AR, while getting this error (this is on my new WSL machine): $ poetry publish --build --repository all-mpm There are 2 files ready for publishing. Build anyway? (yes/no) [no] yes Building mpm (0.1.54) - Building sdist - Built mpm-0.1.54.tar.gz - Building wheel - Built mpm-0.1.54-py3-none-any.whl Publishing mpm (0.1.54) to all-mpm - Uploading mpm-0.1.54-py3-none-any.whl FAILED HTTP Error 401: Unauthorized | b'The request does not have valid authentication credentials.\n' Here's the catch: This is only happens to this repository. If I create a new repository on the same machine using the same project, I don't get an error and am able to upload my package successfully. I believe this is a keyring issue. Listing all my keyring backends: $ keyring --list-backends keyrings.gauth.GooglePythonAuth (priority: 9) keyring.backends.chainer.ChainerBackend (priority: 10) keyring.backends.SecretService.Keyring (priority: 5) keyring.backends.fail.Keyring (priority: 0) I used gsutil for its creation: gcloud artifacts repositories create my-repo \ --repository-format=python \ --location=europe-west3 After that I set for my Python project which I manage using poetry. poetry config repository.myrepo "https://europe-west3-python.pkg.dev/my-project/my-repo/ My usual build process: poetry publish --build --repository my-repo If I run this command with the verbose option (-vvv), I get the following details: $ poetry publish --build --repository all-mpm -vvv Loading configuration file /home/til/.config/pypoetry/config.toml Loading configuration file /home/til/.config/pypoetry/auth.toml There are 2 files ready for publishing. Build anyway? (yes/no) [no] yes Using virtualenv: /home/til/.cache/pypoetry/virtualenvs/mpm-W0UszHAu-py3.11 Building mpm (0.1.54) - Building sdist Ignoring: tests/__pycache__/test_plytix.cpython-311-pytest-7.4.2.pyc ********redacted******** - Adding: /home/til/code/aplus-automation/src/mpm/__init__.py ********redacted******** - Adding: pyproject.toml - Adding: README.md - Built mpm-0.1.54.tar.gz - Building wheel Ignoring: htmlcov/d_a44f0ac069e85531_test_plytix_py.html ********redacted******** - Adding: /home/til/code/aplus-automation/src/mpm/__init__.py ********redacted******** Skipping: /home/til/code/aplus-automation/LICENSE Skipping: /home/til/code/aplus-automation/COPYING - Built mpm-0.1.54-py3-none-any.whl [keyring.backend] Loading KWallet [keyring.backend] Loading SecretService [keyring.backend] Loading Windows [keyring.backend] Loading chainer [keyring.backend] Loading libsecret [keyring.backend] Loading macOS [keyring.backend] Loading Google Auth Found authentication information for all-mpm. 
Publishing mpm (0.1.54) to all-mpm - Uploading mpm-0.1.54-py3-none-any.whl 0%[urllib3.connectionpool] Starting new HTTPS connection (1): europe-west3-python.pkg.dev:443 - Uploading mpm-0.1.54-py3-none-any.whl 100%[urllib3.connectionpool] https://europe-west3-python.pkg.dev:443 "POST /mpm99-398708/all-mpm HTTP/1.1" 401 60 - Uploading mpm-0.1.54-py3-none-any.whl FAILED Stack trace: 1 ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/publishing/uploader.py:265 in _upload_file 263│ bar.display() 264│ else: → 265│ resp.raise_for_status() 266│ except (requests.ConnectionError, requests.HTTPError) as e: 267│ if self._io.output.is_decorated(): HTTPError 401 Client Error: Unauthorized for url: https://europe-west3-python.pkg.dev/mpm99-398708/all-mpm at ~/.local/lib/python3.11/site-packages/requests/models.py:1021 in raise_for_status 1017│ f"{self.status_code} Server Error: {reason} for url: {self.url}" 1018│ ) 1019│ 1020│ if http_error_msg: → 1021│ raise HTTPError(http_error_msg, response=self) 1022│ 1023│ def close(self): 1024│ """Releases the connection back to the pool. Once this method has been 1025│ called the underlying ``raw`` object must not be accessed again. The following error occurred when trying to handle this error: Stack trace: 11 ~/.local/lib/python3.11/site-packages/cleo/application.py:327 in run 325│ 326│ try: → 327│ exit_code = self._run(io) 328│ except BrokenPipeError: 329│ # If we are piped to another process, it may close early and send a 10 ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/console/application.py:190 in _run 188│ self._load_plugins(io) 189│ → 190│ exit_code: int = super()._run(io) 191│ return exit_code 192│ 9 ~/.local/lib/python3.11/site-packages/cleo/application.py:431 in _run 429│ io.input.interactive(interactive) 430│ → 431│ exit_code = self._run_command(command, io) 432│ self._running_command = None 433│ 8 ~/.local/lib/python3.11/site-packages/cleo/application.py:473 in _run_command 471│ 472│ if error is not None: → 473│ raise error 474│ 475│ return terminate_event.exit_code 7 ~/.local/lib/python3.11/site-packages/cleo/application.py:457 in _run_command 455│ 456│ if command_event.command_should_run(): → 457│ exit_code = command.run(io) 458│ else: 459│ exit_code = ConsoleCommandEvent.RETURN_CODE_DISABLED 6 ~/.local/lib/python3.11/site-packages/cleo/commands/base_command.py:119 in run 117│ io.input.validate() 118│ → 119│ status_code = self.execute(io) 120│ 121│ if status_code is None: 5 ~/.local/lib/python3.11/site-packages/cleo/commands/command.py:62 in execute 60│ 61│ try: → 62│ return self.handle() 63│ except KeyboardInterrupt: 64│ return 1 4 ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/console/commands/publish.py:82 in handle 80│ ) 81│ → 82│ publisher.publish( 83│ self.option("repository"), 84│ self.option("username"), 3 ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/publishing/publisher.py:86 in publish 84│ ) 85│ → 86│ self._uploader.upload( 87│ url, 88│ cert=resolved_cert, 2 ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/publishing/uploader.py:107 in upload 105│ 106│ try: → 107│ self._upload(session, url, dry_run, skip_existing) 108│ finally: 109│ session.close() 1 ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/publishing/uploader.py:191 in _upload 189│ ) -> None: 190│ for file in self.files: → 191│ self._upload_file(session, url, file, dry_run, skip_existing) 192│ 193│ def _upload_file( UploadError HTTP Error 401: Unauthorized | b'The request does not have valid 
authentication credentials.\n' at ~/.pyenv/versions/3.11.6/lib/python3.11/site-packages/poetry/publishing/uploader.py:271 in _upload_file 267│ if self._io.output.is_decorated(): 268│ self._io.overwrite( 269│ f" - Uploading {file.name} FAILED" 270│ ) → 271│ raise UploadError(e) 272│ finally: 273│ self._io.write_line("") 274│ 275│ def _register(self, session: requests.Session, url: str) -> requests.Response: Other measures I took: reinstallation of gsutils consulted the AR docs, setting up using a) ADC, b) a service account, c) .pypirc file added the keyring to poetry More debug info: Poetry plugins: $ poetry self show plugins • poetry-plugin-export (1.5.0) Poetry plugin to export the dependencies to various formats 1 application plugin Dependencies - poetry (>=1.5.0,<2.0.0) - poetry-core (>=1.6.0,<2.0.0) After removing .config/gcloud/ and setting up using gcloud init again (still getting the bad credentials error) $ gcloud init Welcome! This command will take you through the configuration of gcloud. Your current configuration has been set to: [default] You can skip diagnostics next time by using the following flag: gcloud init --skip-diagnostics Network diagnostic detects and fixes local network connection issues. Checking network connection...done. Reachability Check passed. Network diagnostic passed (1/1 checks passed). You must log in to continue. Would you like to log in (Y/n)? y Go to the following link in your browser: ****redacted**** Enter authorization code: ****redacted**** Updates are available for some Google Cloud CLI components. To install them, please run: $ gcloud components update You are logged in as: [***redacted****]. Pick cloud project to use: ****redacted**** Please enter numeric choice or text value (must exactly match list item): 3 Your current project has been set to: [****redacted****]. There is a slight difference when uploading to any other registry: [keyring.backend] Loading KWallet [keyring.backend] Loading SecretService [keyring.backend] Loading Windows [keyring.backend] Loading chainer [keyring.backend] Loading libsecret [keyring.backend] Loading macOS [keyring.backend] Loading Google Auth ### this is different! vvvvv [google.auth._default] Checking None for explicit credentials as part of auth process... [google.auth._default] Checking Cloud SDK credentials as part of auth process... [google.auth.transport.requests] Making request: POST https://oauth2.googleapis.com/token [urllib3.connectionpool] Starting new HTTPS connection (1): oauth2.googleapis.com:443 [urllib3.connectionpool] https://oauth2.googleapis.com:443 "POST /token HTTP/1.1" 200 None Found authentication information for all-mpm2. I'm at a loss at what to try next. | As @robert-g commented, the solution can be found in https://github.com/python-poetry/poetry/issues/7545 For posterity, this fixed my issue: poetry config http-basic.my-repo oauth2accesstoken $(gcloud auth print-access-token) | 3 | 7 |
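A possible alternative that avoids pasting short-lived tokens (hedged, since the exact plugin name should be checked against Google's Artifact Registry documentation) is to give Poetry's keyring the Google backend and rely on Application Default Credentials:

poetry self add keyrings.google-artifactregistry-auth
gcloud auth application-default login
poetry publish --build --repository all-mpm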
77,371,521 | 2023-10-27 | https://stackoverflow.com/questions/77371521/making-function-single-threaded-and-running-in-background | I am trying to implement in-memory queue like Kafka using Python using concepts like re-entrant locks and threads. I am new to Python threading. I have a consumer which will subscribe to the topic and read a message from it. So far it's working fine but I have doubts related to threading. I am trying to make the consumerRunner process single- threaded. When checking the output I am seeing MainThread and Thread-1. If the function is single-threaded, should it not be displaying the message single time by the thread which is executing this function? Message2 and Message4 has been consumed twice. With the single thread they should be consumed only once. I know main Thread is the default Thread but is it correct to have msg printed twice? Sorry if I am asking silly question. Output: (py3_8) ninjakx@Kritis-MacBook-Pro kafka % python queueDemo.py Msg: message1 has been published to topic: topic1 Msg: message2 has been published to topic: topic1 Msg: message3 has been published to topic: topic2 Msg: message4 has been published to topic: topic1 Msg: message1 has been consumed by consumer: consumer1 at offset: 0 with current thread: MainThread Msg: message3 has been consumed by consumer: consumer1 at offset: 0 with current thread: MainThread Msg: message2 has been consumed by consumer: consumer1 at offset: 1 with current thread: Thread-1 Msg: message2 has been consumed by consumer: consumer1 at offset: 1 with current thread: MainThread Msg: message4 has been consumed by consumer: consumer1 at offset: 2 with current thread: Thread-1 Msg: message4 has been consumed by consumer: consumer1 at offset: 2 with current thread: MainThread ConsumerImpl.py import zope.interface from ..interface.iConsumer import iConsumer from collections import OrderedDict from mediator.QueueMediatorImpl import QueueMediatorImpl import threading from threading import Thread import time @zope.interface.implementer(iConsumer) class ConsumerImpl: # will keep all the topic it has subscribed to and their offset def __init__(self, consumerName:str): self.__consumerName = consumerName self.__topicList = [] self.__topicVsOffset = OrderedDict() self.__queueMediator = QueueMediatorImpl() self.threadInit() def threadInit(self): thread = Thread(target = self._consumerRunner) thread.start() # thread.join() # print("thread finished...exiting") def __getConsumerName(self): return self.__consumerName def __getQueueMediator(self): return self.__queueMediator def __getSubscribedTopics(self)->list: return self.__topicList def __setTopicOffset(self, topicName:str, offset:int)->int: self.__topicVsOffset[topicName] = offset def __getTopicOffset(self, topicName:str)->int: return self.__topicVsOffset[topicName] def __addToTopicList(self, topicName:str)->None: self.__topicList.append(topicName) def _subToTopic(self, topicName:str): self.__addToTopicList(topicName) self.__topicVsOffset[topicName] = 0 def __consumeMsg(self, msg:str, offset:int): print(f"Msg: {msg} has been consumed by consumer: {self.__getConsumerName()} at offset: {offset} with current thread: {threading.current_thread().name}\n") # pull based mechanism # running on single thread def _consumerRunner(self): while(True): for topicName in self.__getSubscribedTopics(): curOffset = self.__getTopicOffset(topicName) qmd = self.__getQueueMediator() msg = qmd._readMsgIfPresent(topicName, curOffset) if msg is not None: self.__consumeMsg(msg._getMessage(), curOffset) 
curOffset += 1 # update offset self.__setTopicOffset(topicName, curOffset) try: #sleep for 100 milliseconds #thread sleep # "sleep() makes the calling thread sleep until seconds seconds have elapsed or a signal arrives which is not ignored." time.sleep(0.1) except Exception as e: print(f"Error: {e}") QueueDemo.py from service.QueueServiceImpl import QueueServiceImpl if __name__ == "__main__": queueService = QueueServiceImpl() producer1 = queueService._createProducer("producer1") producer2 = queueService._createProducer("producer2") producer3 = queueService._createProducer("producer3") producer4 = queueService._createProducer("producer4") consumer1 = queueService._createConsumer("consumer1") consumer2 = queueService._createConsumer("consumer2") consumer3 = queueService._createConsumer("consumer3") producer1._publishToTopic("topic1", "message1") producer1._publishToTopic("topic1", "message2") producer2._publishToTopic("topic2", "message3") producer1._publishToTopic("topic1", "message4") consumer1._subToTopic("topic1") consumer1._subToTopic("topic2") consumer1._consumerRunner() | When you create a consumer object you are creating a new thread as part of startup: class ConsumerImpl: ... def __init__(self, consumerName:str): ... self.threadInit() def threadInit(self): thread = Thread(target = self._consumerRunner) thread.start() ... This will execute _consumerRunner on a new thread. This method spins forever with a while True loop. In the main of your application, you are then calling _consumerRunner directly. if __name__ == "__main__": ... consumer1 = queueService._createConsumer("consumer1") ... consumer1._consumerRunner() So now you have called _consumerRunner twice. Once directly in the main thread, and once when creating the consumer object in the constructor. This is why you're seeing the duplicate lines: Msg: message2 has been consumed by consumer: consumer1 at offset: 1 with current thread: Thread-1 Msg: message2 has been consumed by consumer: consumer1 at offset: 1 with current thread: MainThread So you should remove the direct call to consumer1._consumerRunner() from the main. Some style notes: Python uses snake_case predominantly. You should update your methods accordingly (e.g. _consumer_runner vs _consumerRunner). You mark many value as private (in Python, the standard convention is to use a leading _ in the name), however, you are accessing these methods and attributes from outside of the class. If you intend to use these methods external of the class which defines them, leave off the leading underscore, as these methods should be listed as your public contract. I would strongly avoid using dunder __ method and attribute names. Nothing in Python is truly private, and if you follow standard conventions, you shouldn't be using anything marked private normally with a single underscore anyways. The dunder variables also have special logic which mangles the name external to the class, which can cause a lot of not-so-fun bugs. Only use them if you have MASSIVE classes, and need to strongly guarantee that two layers in your class hierarchy do not conflict on naming choices. The zope.interface package seems like overkill to me. There are many other ways to keep contracts in check, such as normal abstract classes (using the abc class), and by using static analysis tools such as mypy or pyright to ensure objects are used appropriately. This keeps heavy boilerplate libraries out of your code keeping things more terse. After you create your threads, you never keep track of them. 
For proper shutdown of the application, you should add logic to collect them and ensure they are stopped and joined. A ThreadPoolExecutor or an ExitStack could help with that. Get into the habit of using the logging module. It is a lot more flexible than calls to print and will provide a uniform look to your logging across the application. | 2 | 4 |
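A short sketch of the shutdown idea above (the _stop_event and close names are invented here, not part of the questioner's code): keep a handle on the thread and give the polling loop a way to exit.

import threading

class ConsumerImpl:
    def __init__(self, consumer_name: str):
        self._consumer_name = consumer_name
        self._stop_event = threading.Event()
        self._thread = threading.Thread(target=self._consumer_runner, daemon=True)
        self._thread.start()

    def _consumer_runner(self):
        while not self._stop_event.is_set():
            # poll the subscribed topics here, then wait instead of time.sleep
            # so that close() can interrupt the pause immediately
            self._stop_event.wait(0.1)

    def close(self):
        self._stop_event.set()
        self._thread.join()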
77,387,669 | 2023-10-30 | https://stackoverflow.com/questions/77387669/how-can-i-concat-strings-that-might-be-empty | Currently I am trying to concat three strings in my Ansible playbook. Two of them can be unset (None/null) and I need them to be separated by underscores. Here is an example: type_mode="A", level_mode=None, and what_to_run="C" combines to A_C. If both where None it would just be C, if all where set it would be A_B_C. My idea was to do this bit in Python: - name: "Set Name" set_fact: name: "{{ ('' if type_mode is None else type_mode + '_') + ('' if level_mode is None else level_mode + '_') + what_to_run }}" But then I get this error: { "msg": "template error while templating string: Could not load \"None\": 'None'. String: {{ ('' if type_mode is None else type_mode + '_') + ('' if level_mode is None else level_mode + '_') + what_to_run }}. Could not load \"None\": 'None'", "_ansible_no_log": false } Do you have a idea how I can concat the (possible missing) values with an underscore separator? | Make a list out of your strings: [type_mode, level_mode, what_to_run] Exclude the blank parts with the help of the select filter: [type_mode, level_mode, what_to_run] | select Join the remaining chunks [type_mode, level_mode, what_to_run] | select | join('_') Your set_fact task ends up being: - set_fact: name: "{{ [type_mode, level_mode, what_to_run] | select | join('_') }}" If you also happen to have undefined variables, then you can add defaults to your strings: - set_fact: name: >- {{ [ type_mode | default(''), level_mode | default(''), what_to_run ] | select | join('_') }} Given a playbook containing. only this task, running it with: Only what_to_run ansible-playbook play.ym --verbose \ --extra-vars "what_to_run=foo" \ --extra-vars "level_mode=" \ --extra-vars "type_mode=" would give TASK [set_fact] ************************************************* ok: [localhost] => changed=false ansible_facts: var: foo Both what_to_run and level_mode ansible-playbook play.ym --verbose \ --extra-vars "what_to_run=foo" \ --extra-vars "level_mode=bar" \ --extra-vars "type_mode=" would give TASK [set_fact] ************************************************* ok: [localhost] => changed=false ansible_facts: var: bar_foo The three variables filled in ansible-playbook play.ym --verbose \ --extra-vars "what_to_run=foo" \ --extra-vars "level_mode=bar" \ --extra-vars "type_mode=baz" would give TASK [set_fact] ************************************************* ok: [localhost] => changed=false ansible_facts: var: baz_bar_foo | 2 | 4 |
77,397,374 | 2023-10-31 | https://stackoverflow.com/questions/77397374/using-index-better-than-sequential-scan-when-every-hundredth-row-is-needed-but | I have a table (under RDS Postgres v. 15.4 instance db.m7g.large): CREATE TABLE MyTable ( content_id integer, part integer, vector "char"[] ); There is a B-Tree index on content_id. My data consists of 100M rows. There are 1M (0 .. 10^6-1) different values of content_id. For each value of content_id there are 100 (0..99) values of part. The column vector contains an array of 384 byte-size numbers if content_id is divisible by 100 without a remainder. It is NULL otherwise. I have constructed this artificial data to test performance of the following query submitted from a Python script (it will become clear in a moment why I left it in Python for the question): query = f""" WITH Vars(key) as ( VALUES (array_fill(1, ARRAY[{384}])::vector) ), Projection as ( SELECT * FROM MyTable P WHERE P.content_id in ({str(list(range(0, 999999, 100)))[1:-1]}) ) SELECT P.content_id, P.part FROM Projection P, Vars ORDER BY P.vector::int[]::vector <#> key LIMIT {10}; """ <#> is the dot product operator of the pgvector extension, and vector is the type defined by that extension, which to my understanding is similar to real[]. Note that the WHERE clause specifies an explicit list of 10K values of content_id (which correspond to 1M rows, i.e. every hundredth row in the table). Because of this large explicit list, I have to leave my query in Python and cannot run EXPLAIN ANALYZE. The above query takes ~6 seconds to execute. However, when I prepend this query with SET enable_seqscan = off; the query takes only ~3 seconds. Question 1: Given that we need every 100-th row and that much of computation is about computing the dot products and ordering by them, how come sequential scans are not better than using the index? (All the more so, I can't understand how using the index could result in an improvement by a factor of 2.) Question 2: How come this improvement disappears if I change the explicit list of values for generate_series as shown below? 
WHERE content_id IN (SELECT generate_series(0, 999999, 100)) Now, for this latter query I have the output for EXPLAIN ANALYZE: Limit (cost=1591694.63..1591695.46 rows=10 width=24) (actual time=6169.118..6169.125 rows=10 loops=1) -> Result (cost=1591694.63..2731827.31 rows=13819790 width=24) (actual time=6169.117..6169.122 rows=10 loops=1) -> Sort (cost=1591694.63..1626244.11 rows=13819790 width=424) (actual time=6169.114..6169.117 rows=10 loops=1) Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector)) Sort Method: top-N heapsort Memory: 34kB -> Nested Loop (cost=194.30..1293053.94 rows=13819790 width=424) (actual time=2.629..6025.693 rows=1000000 loops=1) -> HashAggregate (cost=175.02..177.02 rows=200 width=4) (actual time=2.588..5.321 rows=10000 loops=1) Group Key: generate_series(0, 999999, 100) Batches: 1 Memory Usage: 929kB -> ProjectSet (cost=0.00..50.02 rows=10000 width=4) (actual time=0.002..0.674 rows=10000 loops=1) -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1) -> Bitmap Heap Scan on mytable p (cost=19.28..4204.85 rows=1382 width=416) (actual time=0.007..0.020 rows=100 loops=10000) Recheck Cond: (content_id = (generate_series(0, 999999, 100))) Heap Blocks: exact=64444 -> Bitmap Index Scan on idx_content_on_mytable (cost=0.00..18.93 rows=1382 width=0) (actual time=0.005..0.005 rows=100 loops=10000) Index Cond: (content_id = (generate_series(0, 999999, 100))) Planning Time: 0.213 ms Execution Time: 6169.260 ms (18 rows) UPDATE @jjanes commented regarding my first question: Assuming your data is clustered on content_id, you need 100 consecutive rows out of every set of 10,000 rows. That is very different than needing every 100th row. If I understand correctly, this means that each of the 10K look-ups of the index returns a range rather than 100 individual rows. That range can be then scanned sequentially. 
Following are the outputs of EXPLAIN (ANALYZE, BUFFERS) for all three queries: The original query: Limit (cost=1430170.64..1430171.81 rows=10 width=16) (actual time=6300.232..6300.394 rows=10 loops=1) Buffers: shared hit=55868 read=436879 I/O Timings: shared/local read=1027.617 -> Gather Merge (cost=1430170.64..2773605.03 rows=11514348 width=16) (actual time=6300.230..6300.391 rows=10 loops=1) Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=55868 read=436879 I/O Timings: shared/local read=1027.617 -> Sort (cost=1429170.62..1443563.55 rows=5757174 width=16) (actual time=6291.083..6291.085 rows=8 loops=3) Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector)) Sort Method: top-N heapsort Memory: 25kB Buffers: shared hit=55868 read=436879 I/O Timings: shared/local read=1027.617 Worker 0: Sort Method: top-N heapsort Memory: 25kB Worker 1: Sort Method: top-N heapsort Memory: 25kB -> Parallel Seq Scan on mytable p (cost=25.00..1304760.16 rows=5757174 width=16) (actual time=1913.156..6237.441 rows=333333 loops=3) Filter: (content_id = ANY ('{0,100,...,999900}'::integer[])) Rows Removed by Filter: 33000000 Buffers: shared hit=55754 read=436879 I/O Timings: shared/local read=1027.617 Planning: Buffers: shared hit=149 Planning Time: 8.444 ms Execution Time: 6300.452 ms (24 rows) The query with SET enable_seqscan = off; Limit (cost=1578577.14..1578578.31 rows=10 width=16) (actual time=3121.539..3123.430 rows=10 loops=1) Buffers: shared hit=95578 -> Gather Merge (cost=1578577.14..2922011.54 rows=11514348 width=16) (actual time=3121.537..3123.426 rows=10 loops=1) Workers Planned: 2 Workers Launched: 2 Buffers: shared hit=95578 -> Sort (cost=1577577.12..1591970.05 rows=5757174 width=16) (actual time=3108.995..3108.997 rows=9 loops=3) Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector)) Sort Method: top-N heapsort Memory: 25kB Buffers: shared hit=95578 Worker 0: Sort Method: top-N heapsort Memory: 25kB Worker 1: Sort Method: top-N heapsort Memory: 25kB -> Parallel Bitmap Heap Scan on mytable p (cost=184260.30..1453166.66 rows=5757174 width=16) (actual time=42.277..3057.887 rows=333333 loops=3) Recheck Cond: (content_id = ANY ('{0,100,...,999900}'::integer[])) Buffers: shared hit=40000 Planning: Buffers: shared hit=149 Planning Time: 8.591 ms Execution Time: 3123.638 ms (23 rows) Like 2, but with generate_series: Limit (cost=1591694.63..1591694.66 rows=10 width=16) (actual time=6155.109..6155.114 rows=10 loops=1) Buffers: shared hit=104447 -> Sort (cost=1591694.63..1626244.11 rows=13819790 width=16) (actual time=6155.107..6155.111 rows=10 loops=1) Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector)) Sort Method: top-N heapsort Memory: 25kB Buffers: shared hit=104447 -> Nested Loop (cost=194.30..1293053.94 rows=13819790 width=16) (actual time=2.912..6034.798 rows=1000000 loops=1) Buffers: shared hit=104444 -> HashAggregate (cost=175.02..177.02 rows=200 width=4) (actual time=2.870..5.484 rows=10000 loops=1) Group Key: generate_series(0, 999999, 100) Batches: 1 Memory Usage: 929kB -> ProjectSet (cost=0.00..50.02 rows=10000 width=4) (actual time=0.002..0.736 rows=10000 loops=1) -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1) -> Bitmap Heap Scan on mytable p (cost=19.28..4204.85 rows=1382 width=416) (actual time=0.007..0.020 rows=100 loops=10000) Recheck Cond: (content_id = (generate_series(0, 999999, 100))) Heap Blocks: exact=64444 Buffers: shared hit=104444 -> Bitmap Index Scan on idx_content_on_mytable 
(cost=0.00..18.93 rows=1382 width=0) (actual time=0.005..0.005 rows=100 loops=10000) Index Cond: (content_id = (generate_series(0, 999999, 100))) Buffers: shared hit=40000 Planning: Buffers: shared hit=180 Planning Time: 1.012 ms Execution Time: 6155.251 ms (24 rows) | plans 2 vs 3 The difference between plans 2 (literal IN list with bitmap) and 3 (generate_series with bitmap) is easy to explain. PostgreSQL doesn't know how to parallelize the function scan over generate_series, so it doesn't use parallelization in that case. This is indicated by the lack of a "Gather..." node in that plan. My understanding of AWS graviton instances is that their vCPU count is equal to their actual CPU count, so your ".large" machine has 2 actual processors to spread the work over. And that plan is about twice as fast, so everything there makes sense. (It tried to spread the work over 3 processes (2 workers plus the leader), but those 3 processes had to share 2 processors). But, is this actually a good thing? If the other processor would otherwise go unused, yes. But if your real server will be busy enough that all CPU are almost always in use, then you are just robbing Peter to pay Paul. Your speed of queries run in isolation will be faster, but your total throughput when not running in isolation would not be better. plans 1 vs 2 The reason for the difference in actual execution time between plan 1 and plan 2 is also easy to see. Most of the time is spend in processing the tuples (only 1028 ms out of 6300 ms is spent in IO) and the 1st plan has to process all of the tuples, while the 2nd plan uses an index to rule most of those tuples out without actually processing them. (Another part of the difference might be just that everything is already in memory for plan 2, presumably because it was run with a "hot cache", but that can only explain at most 1/3 of the difference as the IO only took one second to do on the slower plan) So why doesn't the planner choose the 2nd plan? This is more speculative. It looks like the planner overestimates the cost of IO, and therefore over estimates the importance of optimizing IO. The sequential plan is entirely sequential, while the bitmap scan is only kind-of sequential, so the first one should be better. And then it overestimates the importance of that difference. Also, it greatly overestimates the number of tuples actually returned by that part of the plan, expected 5757174 vs actual 333333. For the seq scan itself, this doesn't matter as inspecting and throwing away a tuple is about the same work as inspecting and keeping it. But for the bitmap heap scan, this isn't the case. Tuples can be thrown away though the use of the index without ever inspecting them, so overestimating how many tuples will be returned underestimates the utility of the bitmap. (Now that I type this up, I think this part is more important than the one given in my previous paragraph.) Why is the estimate so bad? First, try to ANALYZE the table and see if that fixes the problem. If not, then inspect (or show us) the contents of pg_stats for attribute "content_id" of relation "MyTable". I also note that plan 2 is incomplete. You show a bitmap heap scan, but no corresponding bitmap index scan. But you must have removed it, as I don't think it would be possible for it not to be there. In this case, I doubt seeing this would change any of my conclusions, but it is disconcerting for it to be missing. Why do I say it over-estimates the importance of IO? 
The seq scan accesses 55754 + 436879 = 492633 buffers. Those are actual counts, but there is no reason for the unshown estimates used by the planner for a simple sequential scan (that doesn't use TOAST, which I am assuming is the case) to differ significantly from the total actual counts. Assuming you haven't changed seq_page_cost from its default of 1.0, that contributes 492633/1304760.16 = 37.8% of the planned cost of that node, while the actual timing is much less at 1027.617/6237.441 = 16.5%. | 2 | 1 |
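A hedged follow-up to the ANALYZE/pg_stats suggestion above (plain psql statements; table and column names follow the question's CREATE TABLE, which folds to lowercase):

ANALYZE mytable;
SELECT n_distinct, correlation, most_common_vals
FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'content_id';

If n_distinct comes back far below the real 1M distinct content_id values, that would explain the 5.7M-row estimate; raising the column's statistics target (ALTER TABLE mytable ALTER COLUMN content_id SET STATISTICS 1000; followed by another ANALYZE) is one knob worth trying.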
77,390,094 | 2023-10-30 | https://stackoverflow.com/questions/77390094/how-to-use-user-profile-with-seleniumbase | Code: from seleniumbase import Driver driver = Driver(uc=True) driver.get("https://example.com") driver.click("a") p_text = driver.find_element("p").text print(p_text) this code works fine but i want to add a user profile but when i try from seleniumbase import Driver ud = r"C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9" driver = Driver(uc=True, user_data_dir=ud) driver.get("https://example.com") driver.click("a") p_text = driver.find_element("p").text print(p_text) this makes a profile called person 1 that works like a normal user and has everything saved but what if i want to access a specific profile? edit: it goes to the path i give it but appends a \Default so it goes to the default profile of that path so: C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9 would be C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9\Default Command Line "C:\Program Files\Google\Chrome\Application\chrome.exe" --window-size=1280,840 --disable-dev-shm-usage --disable-application-cache --disable-browser-side-navigation --disable-save-password-bubble --disable-single-click-autofill --allow-file-access-from-files --disable-prompt-on-repost --dns-prefetch-disable --disable-translate --disable-renderer-backgrounding --disable-backgrounding-occluded-windows --disable-features=OptimizationHintsFetching,OptimizationTargetPrediction --disable-popup-blocking --homepage=chrome://new-tab-page/ --remote-debugging-host=127.0.0.1 --remote-debugging-port=53654 --user-data-dir="C:\Users\fun64\AppData\Local\Google\Chrome\User Data\Profile 9" --lang=en-US --no-default-browser-check --no-first-run --no-service-autorun --password-store=basic --log-level=0 --flag-switches-begin --flag-switches-end --origin-trial-disabled-features=WebGPU Executable Path C:\Program Files\Google\Chrome\Application\chrome.exe Profile Path C:\Users\fun64\AppData\Local\Google\Chrome\User Data\Profile 9\Default | Partially due to reasons outlined in https://stackoverflow.com/a/67960202/7058266, the best way to handle multiple profiles is to have a unique user-data-dir for each one. Also, since you're using SeleniumBase UC Mode, for reasons outlined in https://www.youtube.com/watch?v=5dMFI3e85ig at the 21-minute mark, don't mix a non-UC-Mode user-data-dir with a UC Mode user-data-dir, because you could get detected that way. Use a user-data-dir that gets created by UC Mode: from seleniumbase import Driver driver_1 = Driver(uc=True, user_data_dir="my_user_dir_1") driver_2 = Driver(uc=True, user_data_dir="my_user_dir_2") # ... driver_1.quit() driver_2.quit() Longer example: from seleniumbase import Driver driver = Driver(uc=True, user_data_dir="my_user_dir_1") try: driver.get("https://example.com") driver.click("a") print(driver.get_text("p")) finally: driver.quit() | 5 | 3 |
77,398,359 | 2023-10-31 | https://stackoverflow.com/questions/77398359/iterate-through-dataframe-and-create-new-column-based-if-values-in-columns-are-n | df = pd.DataFrame({ 'subsegment': ['corp', np.nan, 'terr'], 'region': ['japan', np.nan, np.nan], 'subregion': [np.nan, 'se', 'ne'], 'segment': [np.nan,'ent','comm'] }) I am trying to iterate through the above dataframe and if the value is not NaN than adding the column header as the value or part of the value (depending on how many NaNs) in the new column "Mode". Original DF subsegment region subregion segment corp japan NaN NaN NaN NaN se ent terr NaN ne comm Desired Output DF subsegment region subregion segment mode corp japan NaN NaN subsegment-region NaN NaN se ent subregion-segment terr NaN ne comm subsegment-subregion-segment I have tried to create separate smaller dfs with all the combinations of the columns to which are not null and then concatenating those dfs together but this seems extremely inefficient. df1 = df.loc[~(df['subsegment'].isna()) & (~df['region'].isna()) & (~df['region'].isna())] df2 = df.loc[~(df['region'].isna()) & (~df['subregion'].isna()) & (~df['segment'].isna())] df3 = df.loc[~(df['subsegment'].isna()) & (~df['subregion'].isna()) & (~df['segment'].isna())] pd.concat(df1,df2,df3.....) | You can use a dot product: df['mode'] = (df.notna() @ (df.columns+'-')).str[:-1] Output: subsegment region subregion segment mode 0 corp japan NaN NaN subsegment-region 1 NaN NaN se ent subregion-segment 2 terr NaN ne comm subsegment-subregion-segment Alternatively, with a classical groupby.agg: s = df.notna().stack() df['mode'] = s[s].reset_index().groupby('level_0')['level_1'].agg('-'.join) Or a custom aggregation: df['mode'] = df.notna().mul(df.columns).agg(lambda x: '-'.join(x[x.ne('')]), axis=1) | 3 | 7 |
77,392,627 | 2023-10-31 | https://stackoverflow.com/questions/77392627/how-to-bar-plot-the-top-n-categories-for-each-year | I am trying to plot a bar graph which highlights only the top 10 areas in Auckland district by the money spent on gambling. I have written the code to filter for the top 10 areas and also plot a bar plot in Seaborn. The issue is that the x-axis is crowded with labels of every area in Auckland district from the dataframe. I only want the labels for the top 10 areas to show up. Will appreciate any help from the kind folks out here. This is a snapshot of the dataframe I am using: Date,AU2017_code,crime,n,Pop,AU_GMP_PER_CAPITA,Dep_Index,AU2017_name,TA2018_name,TALB 2018-02-01,500100.0,Abduction,0.0,401.0,28.890063,10.0,Awanui,Far North District,Far North District 2018-03-01,500100.0,Abduction,0.0,402.0,28.890063,10.0,Awanui,Far North District,Far North District 2018-04-01,500100.0,Abduction,0.0,408.0,28.890063,10.0,Awanui,Far North District,Far North District 2018-05-01,500100.0,Abduction,0.0,409.0,28.890063,10.0,Awanui,Far North District,Far North District 2018-06-01,500100.0,Abduction,0.0,410.0,28.890063,10.0,Awanui,Far North District,Far North District The complete dataframe is availiable as a .csv file here: https://github.com/yyshastri/NZ-Police-Community-Dataset.git The code for the creation of the bar plot is as follows: import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns # Extract the year from the Date column and create a new 'Year' column merged_data['Year'] = merged_data.index.year # Filter data for areas that come under Auckland in the TA2018_name column auckland_data = merged_data[merged_data['TA2018_name'] == 'Auckland'] # Calculate the average AU_GMP_PER_CAPITA for each area within Auckland avg_gmp_per_area = auckland_data.groupby('AU2017_name')['AU_GMP_PER_CAPITA'].mean() # Select the top 10 areas by AU_GMP_PER_CAPITA within Auckland top_10_areas = avg_gmp_per_area.nlargest(10).index # Further filter the auckland_data to include only the top 10 areas filtered_data = auckland_data[auckland_data['AU2017_name'].isin(top_10_areas)] # Use seaborn to create the barplot sns.barplot(x='AU2017_name', y='AU_GMP_PER_CAPITA', hue='Year', data=filtered_data) plt.title('The top 10 areas for gambling spend in Auckland') plt.xticks(rotation=60) plt.legend(title='Year', loc='upper right') plt.figure(figsize = (20, 10)) plt.show() The chart this code is generating has a garbled x-axis,as every area name in Auckland district being populated in the labels. | seaborn is a high-level API for matplotlib, and pandas uses matplotlib as the default plotting backend. In this case, it's more direct to plot with pandas.DataFrame.plot, and avoid the extra import and dataframe reshaping. .pivot_table is used to reshape the dataframe and aggregate multiple values with 'mean'. The data for each year must be separately sorted, as the top 10 cities may not be the same for each year. Given the long city names, using kind='barh', horizontal bars, looks cleaner than using kind='bar'. 
Tested in python 3.12.0, pandas 2.1.1, matplotlib 3.8.0, seaborn 0.13.0 import pandas as pd # read the data from github df = pd.read_csv('https://raw.githubusercontent.com/yyshastri/NZ-Police-Community-Dataset/main/Merged_Community_Police_Data.xls') # select Auckland data auckland_data = df[df['TA2018_name'] == 'Auckland'].copy() # reshape the data with pivot table and aggregate the mean dfp = auckland_data.pivot_table(index='Year', columns='AU2017_name', values='AU_GMP_PER_CAPITA', aggfunc='mean') # for each year find the top 10 cities, and concat them into a single dataframe top10 = pd.concat([data.sort_values(ascending=False).iloc[:10].to_frame() for _, data in dfp.iterrows()], axis=1) # since the city names are long, use a horizontal bar (barh), otherwise use kind='bar' ax = top10.plot(kind='barh', figsize=(5, 8), width=0.8, xlabel='Mean GMP PER CAPITA', ylabel='City', title='Yearly Top 10 Cities') ax = top10.plot(kind='bar', figsize=(20, 6), width=0.8, rot=0, ylabel='Mean GMP PER CAPITA', xlabel='City', title='Yearly Top 10 Cities') Using seaborn requires converting top10 from wide, to long-form, with pandas.DataFrame.melt. The figure-level function sns.catplot with kind='bar' is used, but the axes-level function sns.barplot will also work. Figure-level vs. axes-level functions import seaborn as sns # reshape the the dataframe to long form top10m = top10.melt(var_name='Year', value_name='Mean GMP PER CAPITA', ignore_index=False).reset_index(names=['City']) # plot g = sns.catplot(data=top10m, kind='bar', x='City', y='Mean GMP PER CAPITA', hue='Year', height=5, aspect=4, palette='tab10', legend='full') Data Views df auckland_data looks the same as df except it's a subset AU2017_code crime n Pop AU_GMP_PER_CAPITA Dep_Index AU2017_name TA2018_name TALB Year 0 500100 Abduction 0 401 28.890063 10.0 Awanui Far North District Far North District 2018 1 500100 Abduction 0 402 28.890063 10.0 Awanui Far North District Far North District 2018 2 500100 Abduction 0 408 28.890063 10.0 Awanui Far North District Far North District 2018 3 500100 Abduction 0 409 28.890063 10.0 Awanui Far North District Far North District 2018 4 500100 Abduction 0 410 28.890063 10.0 Awanui Far North District Far North District 2018 dfp.iloc[:, :10] The first 10 columns, otherwise it's to much data to post AU2017_name Abbotts Park Aiguilles Island Akarana Albany Algies Bay Ambury Aorere Arahanga Arch Hill Ardmore Year 2018 41.995023 0.0 48.619904 34.953781 8.989871 57.940325 111.343778 78.498990 58.685772 40.572675 2019 40.569120 0.0 47.898409 34.046811 9.073010 57.053751 112.236632 78.707498 57.905275 38.060297 2020 27.936208 0.0 35.284514 25.236172 6.720755 42.324155 84.505122 57.954157 41.092557 26.683718 top10 2018 2019 2020 AU2017_name Matheson Bay 214.762992 224.552738 172.133803 Point Wells 181.298995 188.588469 143.436274 Leigh 168.446421 172.428979 129.395604 Papakura North 128.974569 124.977594 90.942141 Fairburn 128.231566 127.925022 91.885721 Otahuhu West 127.002810 125.271241 90.084230 Otahuhu North 123.810519 123.690082 87.164136 Dingwall 118.963782 NaN 83.436386 Papatoetoe North 118.210508 113.328798 NaN Puhinui South 116.787094 113.630079 85.114301 Papakura Central NaN 113.442014 NaN Aorere NaN NaN 84.505122 top10m.head() City Year Mean GMP PER CAPITA 0 Matheson Bay 2018 214.762992 1 Point Wells 2018 181.298995 2 Leigh 2018 168.446421 3 Papakura North 2018 128.974569 4 Fairburn 2018 128.231566 | 2 | 2 |
77,397,466 | 2023-10-31 | https://stackoverflow.com/questions/77397466/polars-python-select-based-on-dtype-pl-list | Hi I want to select those cols of a polars df that are of the dtype list. Selecting by dtypes works ususally fine with df.select(pl.col(pl.Utf8)). However for the type list this does not seem to work... MRE import polars as pl df = pl.DataFrame({"foo": [[c] for c in ["100CT pen", "pencils 250CT", "what 125CT soever", "this is a thing"]]} ) df Output: foo list[str] ["100CT pen"] ["pencils 250CT"] ["what 125CT soever"] ["this is a thing"] df.select(pl.col(pl.List)) Output: shape: (0, 0) | You need to provide the type of the items in the List unlike primitive types (where print(df.select(pl.col(pl.Int64))) would work in the below example). import polars as pl df = pl.DataFrame({ "foo": [[c] for c in ["100CT pen", "pencils 250CT", "what 125CT soever", "this is a thing"]], "bar": [1, 2, 3, 4] } ) print(df.select(pl.col(pl.List(str)))) I can't seem to find anything that's generic across types that the List contains. There is a NESTED_DTYPES here and this answer suggests that you might be able to use it in a more "catch-all" manner, but it doesn't seem to work if you want to grab columns that contain a nested type regardless of the type of data it contains. Thanks to @jqurious for pointing out that this seems to be a requested feature in an open ticket. This has an interesting use-case for me in that, the only reason I've switched dfs back to pandas recently is that polars refuses to write List to CSV so I either filter out all such columns by name or, if this is implemented, I could drop them in one go. I didn't create those columns and I don't want them in the output. | 2 | 3 |
77,385,587 | 2023-10-29 | https://stackoverflow.com/questions/77385587/persist-parentdocumentretriever-of-langchain | I am using ParentDocumentRetriever of langchain. Using mostly the code from their webpage I managed to create an instance of ParentDocumentRetriever using bge_large embeddings, NLTK text splitter and chromadb. I added documents to it, so that I c embedding_function = HuggingFaceEmbeddings(model_name='BAAI/bge-large-en-v1.5', cache_folder=hf_embed_path) # This text splitter is used to create the child documents child_splitter = NLTKTextSplitter(chunk_size=400) # The vectorstore to use to index the child chunks vectorstore = Chroma( collection_name="full_documents", embedding_function=embedding_function, persist_directory="./chroma_db_child" ) # The storage layer for the parent documents store = InMemoryStore() retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, ) retriever.add_documents(docs, ids=None) I added documents to it, so that I can query using the small chunks to match but to return the full document: matching_docs = retriever.get_relevant_documents(query_text) Chromadb collection 'full_documents' was stored in /chroma_db_child. I can read the collection and query it. I get back the chunks, which is what is expected: vector_db = Chroma( collection_name="full_documents", embedding_function=embedding_function, persist_directory="./chroma_db_child" ) matching_doc = vector_db.max_marginal_relevance_search('whatever', 3) len(matching_doc) >>3 One thing I can't figure out is how to persist the whole structure. This code uses store = InMemoryStore(), which means that once I stopped execution, it goes away. Is there a way, perhaps using something else instead of InMemoryStore(), to create ParentDocumentRetriever and persist both full documents and the chunks, so that I can restore them later without having to go through retriever.add_documents(docs, ids=None) step? | I had the same problem and found the solution here: https://github.com/langchain-ai/langchain/issues/9345 You need to use the create_kv_docstore() function like this: from langchain.storage._lc_store import create_kv_docstore fs = LocalFileStore("./store_location") store = create_kv_docstore(fs) parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000) child_splitter = RecursiveCharacterTextSplitter(chunk_size=400) vectorstore = Chroma(collection_name="split_parents", embedding_function=embeddings, persist_directory="./db") retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, parent_splitter=parent_splitter, ) retriever.add_documents(documents, ids=None) You will end up with 2 folders: the chroma db "db" with the child chunks and the "data" folder with the parents documents. I think there is also a possibility of saving the documents in a Redis db or Azure blobstorage (https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container) but I am not sure. | 6 | 6 |
77,396,921 | 2023-10-31 | https://stackoverflow.com/questions/77396921/sum-multiple-columns-in-pandas-dataframe-without-losing-data-types | I have a pandas DataFrame that is typically grouped on a level (or several) of the MultiIndex, then summed. This produces a DataFrame where the rows correspond to the unique values in the grouping level(s), as expected. However, for my application, I am trying to allow for there to be no grouping while maintaining the same output format (with just one row in the DataFrame), but sum() behaves differently with and without grouping. Specifically, if I call sum() on the DataFrame without grouping, it returns a Series, which conflates the data types. Example If I have >>> import pandas as pd >>> df = pd.DataFrame( { 'x': [1, 2, 3, 4], 'y': [0.0, 1.1, 5.2, 4.3] }, index = pd.MultiIndex.from_tuples( [('A', 1), ('A', 2), ('B', 1), ('B', 2)], names = ('ind1', 'ind2') ) ) >>> df x y ind1 ind2 A 1 1 0.0 2 2 1.1 B 1 3 5.2 2 4 4.3 Then grouping on ind1 gives me the expected >>> df.groupby(level='ind1')[['x', 'y']].sum() x y ind1 A 3 1.1 B 7 9.5 However, without the grouping, I get a Series >>> df[['x', 'y']].sum() x 10.0 y 10.6 dtype: float64 The problems are that 1). I need to continue processing the output as a DataFrame, and 2). the sum of 'x' is now a float rather than an integer. I am able to produce the desired results using a dummy grouping variable, >>> df['dummy'] = 1 >>> df.groupby('dummy')[['x', 'y']].sum().reset_index(drop=True) x y 0 10 10.6 But I would like to know if there is a cleaner way that is easier to understand / explain. | sum returns a Series, which is why you have a single dtype. You can use agg passing a list to force a DataFrame output: df.agg(['sum']) Output: x y sum 10 10.6 A simpler version of the dummy grouper would be: df.groupby([0]*len(df)).sum() Output: x y 0 10 10.6 | 3 | 3 |
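A quick check of the dtype point, as a minimal sketch (same column names x and y as in the question): agg with a list keeps the result as a DataFrame, so each column keeps its own dtype, whereas plain sum() collapses everything into one float Series.

import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [0.0, 1.1, 5.2, 4.3]})

out = df[['x', 'y']].agg(['sum'])
print(out.dtypes)                  # x stays int64, y stays float64
print(df[['x', 'y']].sum().dtype)  # float64 -- the Series forces a common dtype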
77,394,812 | 2023-10-31 | https://stackoverflow.com/questions/77394812/awaiting-request-json-in-fastapi-hangs-forever | I added the exception handling as given here (https://github.com/tiangolo/fastapi/discussions/6678) to my code but I want to print the complete request body to see the complete content. However, when I await the request.json() it never terminates. request.json() returns a coroutine, so I need to wait for the coroutine to complete before printing the result. How can I print the content of the request in case an invalid request was sent to the endpoint? Code example from github with 2 changes by me in the error handler and a simple endpoint. import logging from fastapi import FastAPI, Request, status from fastapi.exceptions import RequestValidationError from fastapi.responses import JSONResponse from pydantic import BaseModel app = FastAPI() @app.exception_handler(RequestValidationError) async def validation_exception_handler( request: Request, exc: RequestValidationError ) -> JSONResponse: exc_str = f"{exc}".replace("\n", " ").replace(" ", " ") logging.error(f"{request}: {exc_str}") body = await request.json() # This line was added by me and never completes logging.error(body) # This line was added by me content = {"status_code": 10422, "message": exc_str, "data": None} return JSONResponse( content=content, status_code=status.HTTP_422_UNPROCESSABLE_ENTITY ) class User(BaseModel): name: str @app.post("/") async def test(body: User) -> User: return body | When awaiting request.json, you are trying to read the request body (not the response body, as you may have assumed) out from stream; however, this operation has already taken place in your API endpoint (behind the scenes, in your case). Hence, calling await request.json() is allowed only once in the API's request life-cycle, and attempting reading it again will keep on waiting for it indefinitely (since reading the request body got exhausted), which causes the hanging issue. You may find this answer helpful as well, regarding reading/logging the request body (and/or response body) in a FastAPI/Starlette middleware, before passing the request to the endpoint. As for getting the error body from the response in the RequestValidationError exception handler, you would not need to read the request body, but the response body and you can do that by calling exc.body, instead of request.json(), as demonstrated here and here. Please refer to those answers for more details. Example from fastapi.exceptions import RequestValidationError @app.exception_handler(RequestValidationError) async def validation_exception_handler(request: Request, exc: RequestValidationError): return JSONResponse( status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, content=jsonable_encoder({"detail": exc.errors(), # optionally include the errors "body": exc.body, "custom msg": {"Your error message"}}), ) | 4 | 5 |
77,387,280 | 2023-10-30 | https://stackoverflow.com/questions/77387280/performance-issue-with-low-microservice-utilization-in-k8s-impact-to-developmen | When I designed microservice and made deployment to K8s, I saw that I had problem to get higher utilization for my microservices (max. utilization was only 0.1-0.3 CPU). Do you have best practices, how can we increase microservice CPU utilization? Let me describe the LAB environment: K8s with 5 nodes each node with 14 CPU and 128 GB RAM (nodes are build on virtual machines with VMWare) K8s with nginx with setting full log, etc. Microservice In python language (GIL limitation for processing in one process, it means max. 1 CPU utilization) I used three pods Interface REST request/response (without addition I/O operation) The processing time per one call is ~100ms We made performance tests, and you can see these outputs: Microservice utilization max. 0.1-0.3 CPU in each pod I expect the issue is, that K8s management (routing, log, …) generate higher utilization of sources and cannot provide high throughput for utilization of our microservices. I think, the best practices for higher utilization of microservices can be: 1] Increase amount of pods Pros, we will get higher microservice utilization but amount of pods are limited per one K8s node Cons, the utilization of microservice per pod will be still the same 2] Use micro batch processing Pros, we can support bundling of calls (per e.g. one, two seconds) and in this case, that processing time on microservice side will be higher Cons, we will increase processing time because bundling (not ideal scenario for real-time processing) 3] K8s change log level Pros, we can decrease level of logs in nginx, … to error Cons, possible issue for detail issue tracking 4] Use K8s nodes with physical HW (not VMware) Pros, better performance Cons, this change can generate addition costs (new HW) and maintenance Do you use other best practices, ideas for high microservice utilization in k8s (my aim is to get 0.8-1 CPU per pod for this python code)? | Performance testing is a very complex topic, it requires a lot of precision when building the testing setup, and solid knowledge for all the building parts, since it's very easy to mess things up (I did that many times). Couple of ideas from my side: If you run a single-threaded app on a pod with more than 1 CPU configured, then you'll never see high CPU usage on a pod level. Even if you run a multi-threaded app that has a heavy I/O-bound workload (lots of external HTTP calls for example), you'll still not see high CPU usage, since the threads will be most of the time in a non-runnable state. Kubernetes management workflows do have some overhead that can be observed when looking on cluster-level (or even node-level) metrics but pod-level metrics are fully related to your application (especially CPU usage). So to see high CPU usage on a pod-level, you can do 2 things: Run a single-threaded app (that does CPU-heavy tasks) with a pod configured with 1 CPU If you have a multi-threaded app, the pod CPU cores should be the same with the number of threads in your app (and the app workload should be CPU-bound, of course), to get the max CPU usage. | 2 | 2 |
77,391,095 | 2023-10-30 | https://stackoverflow.com/questions/77391095/how-to-calculate-time-difference-from-previous-value-change-in-pyspark-dataframe | Suppose I have the following dataframe in pyspark: object time has_changed A 1 0 A 2 1 A 4 0 A 7 1 B 2 1 B 5 0 What I want is to add a new column that, for each row, keeps track of the time difference with respect to the last value change for the current object (or first element of the corresponding partition if no value changes exists). For the table I've posted above, the result would be the following: object time has_changed time_alive A 1 0 0 A 2 1 1 A 4 0 2 A 7 1 5 B 2 1 0 B 5 0 3 That is, within each partition by the "object" column, sorted by the "time" column, each value of the corresponding row is calculated as the difference between the time of that row and the previous time at which there is a 1 in the "has_changed" column (if a 1 is not found, the window will scroll to the first element of the partition). What I would like to implement would be something like the following (pseudo-code): from pyspark.sql.window import Window as w from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() # Define the data data = [("A", 1, 0), ("A", 2, 1), ("A", 4, 0), ("A", 7, 1), ("B", 2, 1), ("B", 5, 0)] # Define the schema schema = ["object", "time", "has_changed"] # Create the DataFrame df = spark.createDataFrame(data, schema) # Window function (pseudo-code, this won't work) window = ( w.partitionBy("object") .orderBy("time") .rowsBetween(f.when(f.col("has_changed") == 1), w.currentRow) ) df.withColumn("time_alive", f.col("time") - f.lag("time", 1).over(window)) | Step by step solution Create a window specification W = Window.partitionBy('object').orderBy('time') Mask the values in time column where has_changed is 0 masked = F.when(F.col('has_changed') == 1, F.col('time')) df = df.withColumn('masked', masked) # +------+----+-----------+------+ # |object|time|has_changed|masked| # +------+----+-----------+------+ # | A| 1| 0| NULL| # | A| 2| 1| 2| # | A| 4| 0| NULL| # | A| 7| 1| 7| # | B| 2| 1| 2| # | B| 5| 0| NULL| # +------+----+-----------+------+ Calculate the first value in time per group df = df.withColumn('first', F.first('time').over(W)) # +------+----+-----------+------+-----+ # |object|time|has_changed|masked|first| # +------+----+-----------+------+-----+ # | A| 1| 0| NULL| 1| # | A| 2| 1| 2| 1| # | A| 4| 0| NULL| 1| # | A| 7| 1| 7| 1| # | B| 2| 1| 2| 2| # | B| 5| 0| NULL| 2| # +------+----+-----------+------+-----+ Forward fill and shift the last valid value in masked time column over the window last_changed = F.lag(F.last('masked', ignorenulls=True).over(W)).over(W) df = df.withColumn('last_changed', last_changed) # +------+----+-----------+------+-----+------------+ # |object|time|has_changed|masked|first|last_changed| # +------+----+-----------+------+-----+------------+ # | A| 1| 0| NULL| 1| NULL| # | A| 2| 1| 2| 1| NULL| # | A| 4| 0| NULL| 1| 2| # | A| 7| 1| 7| 1| 2| # | B| 2| 1| 2| 2| NULL| # | B| 5| 0| NULL| 2| 2| # +------+----+-----------+------+-----+------------+ Fill the nulls in last_changed with the first value in group last_changed = F.when(last_changed.isNull(), first).otherwise(last_changed) df = df.withColumn('last_changed', last_changed) # +------+----+-----------+------+-----+------------+ # |object|time|has_changed|masked|first|last_changed| # +------+----+-----------+------+-----+------------+ # | A| 1| 0| NULL| 1| 1| # | A| 2| 1| 2| 1| 1| # | A| 4| 0| NULL| 1| 2| # | A| 7| 1| 7| 1| 2| 
# | B| 2| 1| 2| 2| 2| # | B| 5| 0| NULL| 2| 2| # +------+----+-----------+------+-----+------------+ Subtract time column from last_changed to calculate time_alive df = df.withColumn('time_alive', F.col('time') - last_changed) # +------+----+-----------+------+-----+------------+----------+ # |object|time|has_changed|masked|first|last_changed|time_alive| # +------+----+-----------+------+-----+------------+----------+ # | A| 1| 0| NULL| 1| 1| 0| # | A| 2| 1| 2| 1| 1| 1| # | A| 4| 0| NULL| 1| 2| 2| # | A| 7| 1| 7| 1| 2| 5| # | B| 2| 1| 2| 2| 2| 0| # | B| 5| 0| NULL| 2| 2| 3| # +------+----+-----------+------+-----+------------+----------+ | 2 | 2 |
77,392,718 | 2023-10-31 | https://stackoverflow.com/questions/77392718/is-there-a-way-to-expand-an-array-like-a-struct-in-pyspark-star-does-not-work | I have a data frame with the following nesting in Pyspark: >content: struct importantId: string >data: array >element: struct importantCol0: string importantCol1: string I need the following output: importantId importantCol0 importantCol1 10800005 0397AZ 0397AZ 10800006 0397BZ 0397BZ I tried the following code: df1 = df0.select(F.col('content.*')) I got: importantId data 10800005 {importantCol0: 0397AZ, importantCol1: 0397AZ} 10800006 {importantCol0: 0397BZ, importantCol1: 0397BZ} I followed with: df2 = df1.select(F.col('importantId'), F.col('data.*') but I get the following error: AnalysisException: Can only star expand struct data types. Attribute: ArrayBuffer(data). Does anyone know how to fix this? I was expecting a way to expand an array like a struct | Let us use inline to explode array of structs to columns and rows result = df.select('*', F.inline('data')).drop('data') Example, df.show() +------------+--------------------+ |importantCol| data| +------------+--------------------+ | 1| [{1, 2}, {4, 3}]| | 2|[{10, 20}, {40, 30}]| +------------+--------------------+ result.show() +------------+-------------+-------------+ |importantCol|importantCol0|importantCol1| +------------+-------------+-------------+ | 1| 1| 2| | 1| 4| 3| | 2| 10| 20| | 2| 40| 30| +------------+-------------+-------------+ | 3 | 3 |
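If F.inline is not available in your PySpark version, a hedged alternative sketch with explode plus a struct star-expansion gives the same result (using the same example df as in the answer):

from pyspark.sql import functions as F

result = (
    df.select('importantCol', F.explode('data').alias('d'))  # one row per struct in the array
      .select('importantCol', 'd.*')                         # expand the struct fields into columns
)
result.show()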
77,392,792 | 2023-10-31 | https://stackoverflow.com/questions/77392792/is-there-a-way-to-interpolate-variables-into-a-python-string-without-using-the-p | Every example I have seen of Python string variable interpolation uses the print function. For example: num = 6 # str.format method print("number is {}".format(num)) # % placeholders print("number is %s"%(num)) # named .format method print("number is {num}".format(num=num)) Can you interpolate variables into strings without using print? | Ok...so, it's kind of easy. The old method: num = 6 mystr = 'number is %s' % num print(mystr) # number is 6 The newer .format method: num = 6 mystr = "number is {}".format(num) print(mystr) # number is 6 The .format method using named variable (useful for when sequence can't be depended upon): num = 6 mystr = "number is {num}".format(num=num) print(mystr) # number is 6 The shorter f-string method: (thank you @mayur) num = 6 mystr = f"number is {num}" print(mystr) # number is 6 | 7 | 9 |
77,392,339 | 2023-10-30 | https://stackoverflow.com/questions/77392339/select-top-n-groups-in-pandas-dataframe | I have the following dataframe: Country Crop Harvest Year Area (ha) Afghanistan Maize 2019 94910 Afghanistan Maize 2020 140498 Afghanistan Maize 2021 92144 Afghanistan Winter Wheat 2019 2334000 Afghanistan Winter Wheat 2020 2668000 Afghanistan Winter Wheat 2021 1833357 Argentina Maize 2019 7232761 Argentina Maize 2020 7730506 Argentina Maize 2021 8146596 Argentina Winter Wheat 2019 6050953 Argentina Winter Wheat 2020 6729838 Argentina Winter Wheat 2021 6394102 China Maize 2019 41309740 China Maize 2020 41292000 China Maize 2021 43355859 China Winter Wheat 2019 23732560 China Winter Wheat 2020 23383000 China Winter Wheat 2021 23571400 Ethiopia Maize 2019 2274306 Ethiopia Maize 2020 2363507 Ethiopia Maize 2021 2530000 Ethiopia Winter Wheat 2019 1789372 Ethiopia Winter Wheat 2020 1829051 Ethiopia Winter Wheat 2021 1950000 France Maize 2019 1506100 France Maize 2020 1691130 France Maize 2021 1549520 France Winter Wheat 2019 5244250 France Winter Wheat 2020 4512420 France Winter Wheat 2021 5276730 India Maize 2019 9027130 India Maize 2020 9569060 India Maize 2021 9860000 India Winter Wheat 2019 29318780 India Winter Wheat 2020 31357020 India Winter Wheat 2021 31610000 Namibia Maize 2019 21123 Namibia Maize 2020 35000 Namibia Maize 2021 46070 Namibia Winter Wheat 2019 1079 Namibia Winter Wheat 2020 2000 Namibia Winter Wheat 2021 3026 I want to select the top 2 countries by the average value of Area (ha) column across the `Harvest Year's. I tried this but it does not work: df = df.groupby("Crop", dropna=False).apply( lambda x: x.nlargest(2, "Area (ha)") ) Output should be, here China and india are the countries with the largest average Area (ha) for both maize and Winter Wheat, but in the full datasets different countries would have largest values for different crops: Country Crop Harvest Year Area (ha) China Maize 2019 41309740 China Maize 2020 41292000 China Maize 2021 43355859 China Winter Wheat 2019 23732560 China Winter Wheat 2020 23383000 China Winter Wheat 2021 23571400 India Maize 2019 9027130 India Maize 2020 9569060 India Maize 2021 9860000 India Winter Wheat 2019 29318780 India Winter Wheat 2020 31357020 India Winter Wheat 2021 31610000 | IIUC, you can do double .groupby: x = ( df.groupby("Crop") .apply(lambda x: x.groupby("Country")["Area (ha)"].mean()) .stack() .groupby(level=0, group_keys=False) .nlargest(2) ) print(x) Prints top 2 Crop/Countries by average area: Crop Country Maize China 4.198587e+07 India 9.485397e+06 Winter Wheat India 3.076193e+07 China 2.356232e+07 dtype: float64 Then you can use this index to filter the original dataframe: out = df.set_index(["Crop", "Country"]).loc[x.index].reset_index() print(out) Prints: Crop Country Harvest Year Area (ha) 0 Maize China 2019 41309740 1 Maize China 2020 41292000 2 Maize China 2021 43355859 3 Maize India 2019 9027130 4 Maize India 2020 9569060 5 Maize India 2021 9860000 6 Winter Wheat India 2019 29318780 7 Winter Wheat India 2020 31357020 8 Winter Wheat India 2021 31610000 9 Winter Wheat China 2019 23732560 10 Winter Wheat China 2020 23383000 11 Winter Wheat China 2021 23571400 | 2 | 1 |
77,392,276 | 2023-10-30 | https://stackoverflow.com/questions/77392276/is-it-possible-to-make-django-urlpatterns-from-a-string | I have a list of strings that is used to define navigation pane in a Django layout template. I want to use the same list for view function names and use a loop to define urlpatterns in urls.py accordingly. Example: menu = ["register", "login", "logout"] urlpatterns = [path("", views.index, name="index"),] From the above, I want to arrive at urlpatterns = [ path("", views.index, name="index"), path("/register", views.register, name="register"), path("/login", views.login, name="login"), path("/logout", views.logout, name="logout"), ] I am able to pass the route as str and kwargs as a dict, but struggling with transforming the string into a name of a callable views function. Is this possible? for i in menu: url = f'"/{i}"' #works when passed as parameter to path() rev = {"name": i} #works when passed as parameter to path() v = f"views.{i}" #this does not work v = include(f"views.{i}", namespace="myapp") #this does not work either urlpatterns.append(path(url, v, rev)) | Yes, you can use: menu = ['register', 'login', 'logout'] urlpatterns = [ path('', views.index, name='index'), *[path(f'{name}/', getattr(views, name), name=name) for name in menu], ] | 2 | 2 |
77,391,553 | 2023-10-30 | https://stackoverflow.com/questions/77391553/extract-all-field-names-from-nested-dataclasses | I have a dataclass that contains within it another dataclass: @dataclass class A: var_1: str var_2: int @dataclass class B: var_3: float var_4: A I would like to create a list of all field names for attributes that aren't dataclasses, and if the attribute is a dataclass the to list the attributes of that class, so in this case the output would be ['var_3', 'var_1', 'var_2'] I know it's possible to use dataclasses.fields to get the fields of a simple dataclass, but I can't work out how to recursively do it for nested dataclasses. Ideally I would like to be able to do it by just passing the class type B (in the same way you can pass the type to dataclasses.fields), rather than an instance of B. Is it possible to do this? Thank you! | Use dataclasses.fields() to iterate over all the fields, making a list of their names. Use dataclasses.is_dataclass() to tell if a field is a nested dataclass. If so, recurse into it instead of adding its name to the list. from dataclasses import fields, is_dataclass def all_fields(c: type) -> list[str]: field_list = [] for f in fields(c): if is_dataclass(f.type): field_list.extend(all_fields(f.type)) else: field_list.append(f.name) return field_list | 2 | 4 |
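If you prefer not to use lambdas, functools.partial does the same thing here, since onkey just needs a callable it can invoke with no arguments (a small sketch based on the code above):

from functools import partial

screen.onkey(partial(move_fwd, tur1), "w")  # calls move_fwd(tur1) when "w" is pressed
screen.onkey(partial(move_fwd, tur2), "i")  # calls move_fwd(tur2) when "i" is pressed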
77,391,383 | 2023-10-30 | https://stackoverflow.com/questions/77391383/resamplew-weird-results | I have a pandas DataFrame containing daily dates, and within this DataFrame, some dates are missing. I aim to generate a new time series that includes only the last day of each week from that DataFrame. For instance, if there are only Wednesday and Thursday entries for a specific week, the resulting time series should retain only the Thursday data point for that week. For example I tried the following: import pandas as pd import numpy as np # Create a sample time series with date index #SUNDAY, MONDAY, TUESDAY, WED, THURSDAY date_list = ['2023-10-01', '2023-10-02', '2023-10-03', '2023-10-04', '2023-10-05'] # Convert the date list to a pandas datetime index date_rng = pd.to_datetime(date_list) data = np.random.rand(len(date_rng)) time_series = pd.Series(data, index=date_rng) # Resample the time series to weekly frequency and select the last observation for each week weekly_last = time_series.resample('W').last() weekly_last['Day of the Week'] = weekly_last.index.day_name() # Print the result print(weekly_last) which prints: ['SUNDAY', 'SUNDAY'] whereas it should print ['SUNDAY', 'THURSDAY'] So I don't really know how to achieve what I want? Thank you very much for your help | You can use the isocalendar() method to get the week of the year, and then group on that: df = pd.DataFrame({'date': date_rng, 'value': data}) df['week'] = df['date'].dt.isocalendar().week grouped = df.groupby('week').last().set_index('date') weekly_last = grouped['value'].copy() Then you get the expected result: >>> weekly_last.index.day_name() Index(['Sunday', 'Thursday'], dtype='object', name='date') When you resample using 'W', you are grouping the data to every week, as you want. However, the assigned label on the index will be the corresponding Sunday. You can pick a different day anchor, but this doesn't help you. So you basically need to do the same grouping by keep your original dates as labels. To do this, you can group on the numbered week of the year, take the last observation each week, and use that date for the index. This requires a few extra steps beyond one call to resample/groupby, because. A couple notes: If you directly group on the week of the year (time_series.groupby(time_series.index.isocalendar().week).last(), you lose the dates and will only see the week of the year in the index. That's why I collect things in a dataframe - then the dates are preserved in their own column, and you can use set_index() to make them the index. You currently can also use the weekofyear attribute to get the week of the year. But I get a deprecation warning when using this, and an instruction to use isocalendar(). | 2 | 1 |
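A quick usage check of all_fields with the A and B dataclasses from the question (minimal sketch):

from dataclasses import dataclass

@dataclass
class A:
    var_1: str
    var_2: int

@dataclass
class B:
    var_3: float
    var_4: A

print(all_fields(B))  # ['var_3', 'var_1', 'var_2']

One caveat: if from __future__ import annotations is in effect, each f.type is a string rather than a class, so is_dataclass(f.type) returns False; in that case resolve the annotations first, e.g. with typing.get_type_hints(c), before checking.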
77,390,600 | 2023-10-30 | https://stackoverflow.com/questions/77390600/resamplew-weird-results | I have a pandas DataFrame containing daily dates, and within this DataFrame, some dates are missing. I aim to generate a new time series that includes only the last day of each week from that DataFrame. For instance, if there are only Wednesday and Thursday entries for a specific week, the resulting time series should retain only the Thursday data point for that week. For example I tried the following: import pandas as pd import numpy as np # Create a sample time series with date index #SUNDAY, MONDAY, TUESDAY, WED, THURSDAY date_list = ['2023-10-01', '2023-10-02', '2023-10-03', '2023-10-04', '2023-10-05'] # Convert the date list to a pandas datetime index date_rng = pd.to_datetime(date_list) data = np.random.rand(len(date_rng)) time_series = pd.Series(data, index=date_rng) # Resample the time series to weekly frequency and select the last observation for each week weekly_last = time_series.resample('W').last() weekly_last['Day of the Week'] = weekly_last.index.day_name() # Print the result print(weekly_last) which prints: ['SUNDAY', 'SUNDAY'] whereas it should print ['SUNDAY', 'THURSDAY'] So I don't really know how to achieve what I want. Thank you very much for your help | You can use the isocalendar() method to get the week of the year, and then group on that: df = pd.DataFrame({'date': date_rng, 'value': data}) df['week'] = df['date'].dt.isocalendar().week grouped = df.groupby('week').last().set_index('date') weekly_last = grouped['value'].copy() Then you get the expected result: >>> weekly_last.index.day_name() Index(['Sunday', 'Thursday'], dtype='object', name='date') When you resample using 'W', you are grouping the data to every week, as you want. However, the assigned label on the index will be the corresponding Sunday. You can pick a different day anchor, but this doesn't help you. So you basically need to do the same grouping but keep your original dates as labels. To do this, you can group on the numbered week of the year, take the last observation each week, and use that date for the index. This requires a few extra steps beyond one call to resample/groupby. A couple of notes: If you directly group on the week of the year (time_series.groupby(time_series.index.isocalendar().week).last()), you lose the dates and will only see the week of the year in the index. That's why I collect things in a dataframe - then the dates are preserved in their own column, and you can use set_index() to make them the index. You can currently also use the weekofyear attribute to get the week of the year, but I get a deprecation warning when using this, and an instruction to use isocalendar(). | 3 | 1
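A shorter variant, assuming the same time_series as in the question: group by the weekly period and take the last row of each group, which keeps the original dates as the index.

weekly_last = time_series.groupby(time_series.index.to_period('W')).tail(1)
print(weekly_last.index.day_name())  # Index(['Sunday', 'Thursday'], dtype='object')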
77,387,525 | 2023-10-30 | https://stackoverflow.com/questions/77387525/create-a-objective-function-to-minimize-the-number-of-maximum-task-assigned-and | I've a set of student and task which I wish to assign. How can I add an objective function so that I get a balance number of task assigned to each student? Subsequently, I will also add some constraints to restrict certain student(s) from receiving certain task hence I would like to balance out the task per student as much as I could. For example the codes below, I have 5 student and 13 tasks, some student would have received 2 task or 3 task which is the most optimal. The objective function I could think of is using the max number task - min number of task but I am unable to form the logic using the or-tool syntax. Thanks in advance! students = [0,1,2,3,4] tasks = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] model = cp_model.CpModel() max_task = math.ceil(len(tasks)/len(students)) x = {} for student in students: for task in tasks: x[(student ,task)] = model.NewBoolVar(f'student_{student}_task_{task}') # add constraint: each task must be assigned to exactly one student for task in tasks: model.Add(sum(x[student ,task] for student in students) == 1) for student in students: model.Add(sum(x[(student,task)] for task in tasks) > 0) model.Add(sum(x[(student,task)] for task in tasks) <= max_task) # add a objective function here solver = cp_model.CpSolver() status = solver.Solve(model) print(status) if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE: for worker, task in x: if solver.Value(x[(student,task)])==1: print(f'Student {student} is assigned to Task {task}') else: print("No solution found.") | You can have a look at the balance_group example. It uses this trick to minimize the spread e = model.NewIntVar(0, 550, "epsilon") # Constrain the sum of values in one group around the average sum per group. for g in all_groups: model.Add( sum(item_in_group[(i, g)] * values[i] for i in all_items) <= average_sum_per_group + e ) model.Add( sum(item_in_group[(i, g)] * values[i] for i in all_items) >= average_sum_per_group - e ) | 2 | 2 |
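Adapting that trick to the task-count model in the question, a hedged sketch (the names loads, max_load and min_load are mine) that minimizes the spread between the most-loaded and least-loaded student:

# one integer variable per student holding how many tasks that student receives
loads = []
for student in students:
    load = model.NewIntVar(0, len(tasks), f'load_{student}')
    model.Add(load == sum(x[(student, task)] for task in tasks))
    loads.append(load)

max_load = model.NewIntVar(0, len(tasks), 'max_load')
min_load = model.NewIntVar(0, len(tasks), 'min_load')
model.AddMaxEquality(max_load, loads)
model.AddMinEquality(min_load, loads)

# minimize the gap between the busiest and the least busy student
model.Minimize(max_load - min_load)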
77,387,556 | 2023-10-30 | https://stackoverflow.com/questions/77387556/equivalent-pandas-dataframe-in-javascript | Does anyone know the JavaScript equivalent of a pandas DataFrame? This is my code: def sort_df(datas): df_not_sorted = pd.read_json(StringIO(datas)) df_not_sorted["Date"] = df_not_sorted["Date"].apply(lambda dates: [datetime.fromisoformat(date.rstrip('Z')) for date in dates]) df_not_sorted["time"] = pd.to_datetime(df_not_sorted["time"]) df_not_sorted["time"] = df_not_sorted["time"].dt.tz_localize(None) df = df_not_sorted.set_index(["Bus", "time"]).sort_index() return df | You can use Danfo.js: Danfo.js is heavily inspired by the Pandas library and provides a similar interface and API. This means users familiar with the Pandas API can easily use Danfo.js. https://danfo.jsdata.org/ https://danfo.jsdata.org/getting-started | 4 | 2
77,386,620 | 2023-10-30 | https://stackoverflow.com/questions/77386620/is-it-possible-to-call-pyright-from-code-as-an-api | It seems that Pyright (the Python type checker made by Microsoft) can only be used as a command line tool or from VS Code. But is it possible to call pyright from code (as an API)? For example, mypy supports usage like: import sys from mypy import api result = api.run("your code") | like @Grismar said, this might be an xy problem... if not, here is a general solution: import subprocess command = ['pyright', 'path/to/your/file.py'] result = subprocess.run(command, capture_output=True, text=True) output = result.stdout print(output) | 2 | 3 |
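If you want structured output instead of plain text, pyright's CLI also has a JSON mode; to the best of my knowledge the flag is --outputjson and the payload contains a generalDiagnostics list, but treat both as assumptions and check pyright --help for your version:

import json
import subprocess

result = subprocess.run(
    ['pyright', '--outputjson', 'path/to/your/file.py'],
    capture_output=True, text=True,
)
report = json.loads(result.stdout)
for diag in report.get('generalDiagnostics', []):  # key name assumed, may differ by version
    print(diag['severity'], diag['message'])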
77,384,806 | 2023-10-29 | https://stackoverflow.com/questions/77384806/pandas-conditionally-fill-a-column-using-values-from-another-dataframe | I have a dataframe containing costs for time periods: valid_from valid_to cost 2018-10-09 23:00:00 2019-09-30 23:00:00 28.6700 2019-09-30 23:00:00 2021-03-18 00:00:00 26.2700 2022-10-13 23:00:00 NaT 39.7339 If 'valid_to' is NaT it means the cost is still current. I want to add the correct cost to the time periods in the second dataframe which is broken into 30 minute periods: valid_from valid_to consumption cost 2023-09-16 23:30:00 2023-09-17 00:00:00 0.040 39.7339 2023-09-17 00:00:00 2023-09-17 00:30:00 0.030 39.7339 2019-10-17 00:30:00 2019-10-17 01:00:00 0.030 26.2700 2018-10-16 20:30:00 2018-10-16 21:00:00 0.030 28.6700 How do I achieve this? | The main idea is to get cost for rows where valid_from and valid_to overlap between the two dataframes - this is a form of inequality join which is effectively handled by conditional_join: # pip install pyjanitor import pandas as pd import janitor # fill df1's valid_to with a timestamp in the future df1.valid_to = df1.valid_to.fillna(pd.Timestamp.max) (df1 .conditional_join( df2, ('valid_from', 'valid_from', '<='), ('valid_to', 'valid_to', '>='), df_columns='cost') #.move(source='cost',position='after',axis=1) ) cost valid_from valid_to consumption 0 28.6700 2018-10-16 20:30:00 2018-10-16 21:00:00 0.03 1 26.2700 2019-10-17 00:30:00 2019-10-17 01:00:00 0.03 2 39.7339 2023-09-16 23:30:00 2023-09-17 00:00:00 0.04 3 39.7339 2023-09-17 00:00:00 2023-09-17 00:30:00 0.03 | 3 | 1 |
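If you prefer to stay in plain pandas, a sketch with merge_asof gives the same lookup here, because the cost periods don't overlap: for each half-hour row it picks the latest cost whose valid_from is at or before the row's valid_from (note it ignores valid_to, so it assumes the periods are contiguous). df1 is the cost table and df2 the half-hour table, as above.

import pandas as pd

out = pd.merge_asof(
    df2.sort_values('valid_from'),                          # half-hour consumption rows
    df1[['valid_from', 'cost']].sort_values('valid_from'),  # cost periods
    on='valid_from',
    direction='backward',
)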
77,385,134 | 2023-10-29 | https://stackoverflow.com/questions/77385134/tkinter-fullscreen-error-in-raspberry-pi-64-bits-os | I want to make a tkinter-based GUI in a RaspberryPi. I need it to be on a fullscreen mode. I have the following script: from tkinter import * win = Tk() win.geometry("650x250") label = Label(win, text="Hello World!", font=('Times New Roman bold', 20)) label.pack(padx=10, pady=10) # Utiliza wm_attributes para establecer el modo fullscreen win.wm_attributes('-fullscreen', 'true') win.mainloop() I know it will sound stupid, but it only works sometimes (as seen in images). Is there any way to update the necessary libraries or similar? What options do I have? I've been using tkinter for a year and never had such a problem, but I always worked with the 32-bit Raspi OS. Due to a library I need, I must stay with the 64-bit OS. Thanks! | As a workaround try to make the application fullscreen after some time, e.g.: Instead: win.wm_attributes('-fullscreen', 'true') try: win.after(1000, lambda: win.wm_attributes('-fullscreen', 'true')) | 2 | 3 |
77,362,703 | 2023-10-25 | https://stackoverflow.com/questions/77362703/how-to-plot-an-histogram-correctly-with-numpy-and-match-it-with-the-density-fun | TL;DR: How to plot the result of np.histogram(..., density=True) correctly with Numpy? Using density=True should help to match the histogram of the sample, and the density function of the underlying random variable, but it doesn't: import numpy as np import scipy.stats import matplotlib.pyplot as plt y = np.random.randn(10000) h, bins = np.histogram(y, bins=1000, density=True) plt.bar(bins[:-1], h) x = np.linspace(-10, 10, 100) f = scipy.stats.norm.pdf(x) plt.plot(x, f, color="green") plt.show() Why aren't the histogram and probability density functions scaled accordingly? In this case, an observation shows that a 1.6 scaling would be better: plt.plot(x, 1.6 * f, color="green") Also, this works normally: plt.hist(y, bins=100, density=True) Why? | TL;DR: The default bar width of .bar() is too big (.hist() adjusts width internally). The number of bars is too high for the default figsize (that's why 100 bins is OK but 1000 is not). In Axes.hist, bar widths are computed by np.diff(bins) (source code). Since it allows a multi-dimensional array, there are a whole lot of validation and reshaping done behind the scenes but if we set all that aside, for a 1D array, Axes.hist is just a wrapper around np.histogram and Axes.bar whose (abridged) code looks like the following: height, bins = np.histogram(x, bins) width = np.diff(bins) boffset = 0.5 * width Axes.bar(bins[:-1]+boffset, height, width) On the other hand, Axes.bar iteratively adds matplotlib.patches.Rectangle objects to an Axes using a default width of 0.8, (source implementation), so if a bar is particularly tall and subsequent bars are short, the short ones will be hidden behind the tall ones. The following code (kind of) illustrates the points made above. The histograms are the same; for example the tallest bars are the same. Note that, in the figure below the figsize is 12"x5" and each Axes is roughly 3" wide), so given that the default dpi is 100, it can only show roughly 300 dots horizontally which means it cannot show all 1000 bars correctly. We need an appropriately wide figure to show all bars correctly. 
import numpy as np import matplotlib.pyplot as plt from scipy import stats # sample y = np.random.default_rng(0).standard_normal(10000) N = 1000 h, bins = np.histogram(y, bins=N, density=True) x = np.linspace(-5, 5, 100) f = stats.norm.pdf(x) # figure fig, axs = plt.subplots(1, 3, figsize=(12,5)) a0 = axs[0].bar(bins[:-1], h) a1 = axs[1].bar(bins[:-1]+0.5*np.diff(bins), h, np.diff(bins)) h_hist, bins_hist, a2 = axs[2].hist(y, bins=N, density=True) for a, t in zip(axs, ['Axes.bar with default width', 'Axes.bar with width\nrelative to bins', 'Axes.hist']): a.plot(x, f, color="green") a.set_title(t) # label tallest bar of each Axes axs[0].bar_label(a0, [f'{h.max():.2f}' if b.get_height() == h.max() else '' for b in a0], fontsize=7) axs[1].bar_label(a1, [f'{h.max():.2f}' if b.get_height() == h.max() else '' for b in a1], fontsize=7) axs[2].bar_label(a2, [f'{h_hist.max():.2f}' if b.get_height() == h_hist.max() else '' for b in a2], fontsize=7) # the bin edges and heights from np.histogram and Axes.hist are the same (h == h_hist).all() and (bins == bins_hist).all() # True For example, if we plot the same figure with figsize=(60,5) (everything else the same), we get the following figure where the bars are correctly shown (in particular Axes.hist and Axes.bar with adjusted widths are the same). | 3 | 4 |
77,381,775 | 2023-10-29 | https://stackoverflow.com/questions/77381775/format-y-axis-as-trillions-of-u-s-dollars | Here's a Python program which does the following: Makes an API call to treasury.gov to retrieve data Stores the data in a Pandas dataframe Plots the data import requests import pandas as pd import matplotlib.pyplot as plt page_size = 10000 url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v2/accounting/od/debt_to_penny' url_params = f'?page[size]={page_size}' response = requests.get(url + url_params) result_json = response.json() df = pd.DataFrame(result_json['data']) df['record_date'] = pd.to_datetime(df['record_date']) rows = df[df['debt_held_public_amt'] != 'null'] rows['debt_held_public_amt'] = pd.to_numeric(rows['debt_held_public_amt']) plt.ion() rows.plot(x='record_date', y='debt_held_public_amt', kind='line', rot=90) Here's the resulting chart that I get: Question The debt_held_public_amt is in U.S. dollars: What's a good way to format the y-axis as trillions of U.S. dollars? | I looked around for someone who did this in a nice looking way, and I found this graph on Wikipedia. I decided to do that. First, I converted the amount to a number. I found that your code for doing this conversion didn't work for me - it created a SettingWithCopy warning. rows = df[df['debt_held_public_amt'] != 'null'].copy() rows['debt_held_public_amt'] = pd.to_numeric(rows['debt_held_public_amt']) Next, I did a unit conversion, and set the tick format to put '$ T' around each number. rows['debt_held_public_amt_tr'] = rows['debt_held_public_amt'] / 1e12 ax = rows.plot(x='record_date', y='debt_held_public_amt_tr', kind='line', rot=90) ax.yaxis.set_major_formatter('${x:.0f} T') See also Dollar ticks from the matplotlib documentation. | 2 | 1 |
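An equivalent that skips the extra column and does the unit conversion only inside the tick formatter (a small sketch using matplotlib.ticker; the 1e12 divisor is the same trillions conversion as above):

import matplotlib.ticker as mticker

ax = rows.plot(x='record_date', y='debt_held_public_amt', kind='line', rot=90)
ax.yaxis.set_major_formatter(
    mticker.FuncFormatter(lambda value, pos: f'${value / 1e12:.1f} T')
)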
77,379,001 | 2023-10-28 | https://stackoverflow.com/questions/77379001/efficient-computation-of-the-set-of-surjective-functions | A function f : X -> Y is surjective when every element of Y has at least one preimage in X. When X = {0,...,m-1} and Y = {0,...,n-1} are two finite sets, then f corresponds to an m-tuple of numbers < n, and it is surjective precisely when every number < n appears at least once. (When we require that every number appears exactly once, we have n=m and are talking about permutations.) I would like to know an efficient algorithm for computing the set of all surjective tuples for two given numbers n and m. The number of these tuples can be computed very efficiently with the inclusion-exclusion principle (see for example here), but I don't think that this is useful here (since we would first compute all tuples and then remove the non-surjective ones step by step, and I assume that the computation of all tuples will take longer*.). A different approach goes as follows: Consider for example the tuple (1,6,4,2,1,6,0,2,5,1,3,2,3) in which every number < 7 appears at least once. Look at the largest number and erase it: (1,*,4,2,1,*,0,2,5,1,3,2,3) It appears in the indices 1 and 5, so this corresponds to the set {1,5}, a subset of the indices. The rest corresponds to the tuple (1,4,2,1,0,2,5,1,3,2,3) with the property that every number < 6 appears at least once. We see that the surjective m-tuples of numbers < n correspond to the pairs (T,a), where T is a non-empty subset of {0,...,m-1} and a is a surjective (m-k)-tuple of numbers < n-1, where T has k elements. This leads to the following recursive implementation (written in Python): import itertools def surjective_tuples(m: int, n: int) -> set[tuple]: """Set of all m-tuples of numbers < n where every number < n appears at least once. Arguments: m: length of the tuple n: number of distinct values """ if n == 0: return set() if m > 0 else {()} if n > m: return set() result = set() for k in range(1, m + 1): smaller_tuples = surjective_tuples(m - k, n - 1) subsets = itertools.combinations(range(m), k) for subset in subsets: for smaller_tuple in smaller_tuples: my_tuple = [] count = 0 for i in range(m): if i in subset: my_tuple.append(n - 1) count += 1 else: my_tuple.append(smaller_tuple[i - count]) result.add(tuple(my_tuple)) return result I noticed that this is quite slow, though, when the input numbers are large. For example when (m,n)=(10,6) the computation takes 32 seconds on my (old) PC, the set has 16435440 elements here. I suspect that there is a faster algorithm. *In fact, the following implementation is very slow. def surjective_tuples_stupid(m: int, n: int) -> list[int]: all_tuples = list(itertools.product(*(range(n) for _ in range(m)))) surjective_tuples = filter(lambda t: all(i in t for i in range(n)), all_tuples) return list(surjective_tuples) | Just optimized yours a little, mainly by using insert to build the tuple. About 5x faster than yours for m=9, n=7. def surjective_tuples(m: int, n: int) -> list[tuple]: """List of all m-tuples of numbers < n where every number < n appears at least once. 
Arguments: m: length of the tuple n: number of distinct values """ if not n: return [] if m else [()] if n > m: return [] n -= 1 result = [] for k in range(1, m - n + 1): smaller_tuples = surjective_tuples(m - k, n) subsets = itertools.combinations(range(m), k) for subset in subsets: for smaller_tuple in smaller_tuples: my_tuple = [*smaller_tuple] for i in subset: my_tuple.insert(i, n) result.append(tuple(my_tuple)) return result | 3 | 3 |
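A small sanity check for either implementation: the inclusion-exclusion count mentioned in the question, which should equal the number of tuples returned (minimal sketch):

from math import comb

def count_surjections(m: int, n: int) -> int:
    # number of surjective m-tuples over n values, by inclusion-exclusion
    return sum((-1) ** k * comb(n, k) * (n - k) ** m for k in range(n + 1))

print(count_surjections(10, 6))  # 16435440, matching the size quoted in the question
assert count_surjections(5, 3) == len(surjective_tuples(5, 3))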
77,379,133 | 2023-10-28 | https://stackoverflow.com/questions/77379133/aggregate-string-column-close-in-time-in-pandas | I'm trying to group messages, which have been sent shortly after another. A parameter defines the maximum duration between messages for them to be considered part of a block. If a message is added to the block, the time window is extended for more messages to be considered part of the block. Example Input datetime message 0 2023-01-01 12:00:00 A 1 2023-01-01 12:20:00 B 2 2023-01-01 12:30:00 C 3 2023-01-01 12:30:55 D 4 2023-01-01 12:31:20 E 5 2023-01-01 15:00:00 F 6 2023-01-01 15:30:30 G 7 2023-01-01 15:30:55 H Expected output for the parameter set to 1min datetime message datetime_last n_block 0 2023-01-01 12:00:00 A 2023-01-01 12:00:00 1 1 2023-01-01 12:20:00 B 2023-01-01 12:20:00 1 2 2023-01-01 12:30:00 C\nD\nE 2023-01-01 12:31:20 3 3 2023-01-01 15:00:00 F 2023-01-01 15:00:00 1 4 2023-01-01 15:30:30 G\nH 2023-01-01 15:30:55 2 My failing attempt I was hoping to achieve that with a rolling window, which would continuously append the message rows. def join_messages(x): return '\n'.join(x) df.rolling(window='1min', on='datetime').agg({ 'datetime': ['first', 'last'], 'message': [join_messages, "count"]}) #Somehow overwrite datetime with the aggregated datetime.first. Both aggregations fail on a ValueError: invalid on specified as datetime, must be a column (of DataFrame), an Index or None. I don't see a clean way to get datetime "accessible" in the Window. Besides, rolling does not work well with strings either. I have the impression that this is a dead end and that there is a cleaner approach to this. Snippets for input and expected data df = pd.DataFrame({ 'datetime': [pd.Timestamp('2023-01-01 12:00'), pd.Timestamp('2023-01-01 12:20'), pd.Timestamp('2023-01-01 12:30:00'), pd.Timestamp('2023-01-01 12:30:55'), pd.Timestamp('2023-01-01 12:31:20'), pd.Timestamp('2023-01-01 15:00'), pd.Timestamp('2023-01-01 15:30:30'), pd.Timestamp('2023-01-01 15:30:55'),], 'message': list('ABCDEFGH')}) df_expected = pd.DataFrame({ 'datetime': [pd.Timestamp('2023-01-01 12:00'), pd.Timestamp('2023-01-01 12:20'), pd.Timestamp('2023-01-01 12:30:00'), pd.Timestamp('2023-01-01 15:00'), pd.Timestamp('2023-01-01 15:30:30'),], 'message': ['A', 'B', 'C\nD\nE', 'F', 'G\nH'], 'datetime_last': [pd.Timestamp('2023-01-01 12:00'), pd.Timestamp('2023-01-01 12:20'), pd.Timestamp('2023-01-01 12:31:20'), pd.Timestamp('2023-01-01 15:00'), pd.Timestamp('2023-01-01 15:30:55'),], 'n_block': [1, 1, 3, 1, 2]}) | Compare the current and previous datetime values to flag the rows where difference is greater than 1 min then apply cumulative sum on the flag to distinguish between different blocks of datetimes. Now, group the dataframe by these blocks and aggregate to get the result m = df['datetime'].diff() > pd.Timedelta(minutes=1) df.groupby(m.cumsum(), as_index=False).agg(datetime=('datetime', 'first'), datetime_last=('datetime', 'last'), message=('message', '\n'.join), n_block=('message', 'count')) datetime datetime_last message n_block 0 2023-01-01 12:00:00 2023-01-01 12:00:00 A 1 1 2023-01-01 12:20:00 2023-01-01 12:20:00 B 1 2 2023-01-01 12:30:00 2023-01-01 12:31:20 C\nD\nE 3 3 2023-01-01 15:00:00 2023-01-01 15:00:00 F 1 4 2023-01-01 15:30:30 2023-01-01 15:30:55 G\nH 2 | 3 | 4 |
77,377,439 | 2023-10-27 | https://stackoverflow.com/questions/77377439/how-to-change-font-size-in-streamlit | I want to change the fontsize of the label above an input widget in my streamlit app. What I have so far: import streamlit as st label = "Enter text here" st.text_input(label) This renders the following: I want to make the label "Enter text here" bigger. I know there are various ways to change fontsize in st.write(). So, I tried some of them: the markdown headers syntax: st.write(f"# {label}") # <--- works st.text_input(f"# {label}") # <--- doesn't work some CSS: s = f"<p style='font-size:20px;'>{label}</p>" st.markdown(s, unsafe_allow_html=True) # <--- works st.text_input(s) # <--- doesn't work but as commented above, neither work. How do I make it work? | Option 1: Components API So did a little digging and it turns out that streamlit has a components API which can be used to render an html string. So we can basically use a little javascript to change the font size of a specific label. Since labels are unique for each widget, we can simply search for the paragraph element whose inner text that matches the label. A working example: label = "Enter text here" st.text_input(label) st.components.v1.html( f""" <script> var elems = window.parent.document.querySelectorAll('div[class*="stTextInput"] p'); var elem = Array.from(elems).find(x => x.innerText == '{label}'); elem.style.fontSize = '20px'; // the fontsize you want to set it to </script> """ ) It renders a field that looks like the following: A convenience function that can be used to change font size, color and font family (you can actually add more as you wish): def change_label_style(label, font_size='12px', font_color='black', font_family='sans-serif'): html = f""" <script> var elems = window.parent.document.querySelectorAll('p'); var elem = Array.from(elems).find(x => x.innerText == '{label}'); elem.style.fontSize = '{font_size}'; elem.style.color = '{font_color}'; elem.style.fontFamily = '{font_family}'; </script> """ st.components.v1.html(html) label = "My text here" st.text_input(label) change_label_style(label, '20px') Option 2: Latex text It turns out you can use LaTeX expressions, so I ended up using latex text in math mode because you can change font size using \Huge, \LARGE etc. in Latex. Since the default font in streamlit is sans-serif, I used \textsf{}. A working example: import streamlit as st st.text_input(r"$\textsf{\Large Enter text here}$") which renders The above uses \Large font size. The following is an example using all possible font size options: label = r''' $\textsf{ \Huge Text \huge Text \LARGE Text \Large Text \large Text \normalsize Text \small Text \footnotesize Text \scriptsize Text \tiny Text }$ ''' st.text_input(label) | 3 | 10 |
77,375,881 | 2023-10-27 | https://stackoverflow.com/questions/77375881/how-to-type-a-python-function-the-same-way-as-another-function | For writing a wrapper around an existing function, I want to that wrapper to have the same, or very similar, type. For example: import os def my_open(*args, **kwargs): return os.open(*args, **kwargs) Tye type signature for os.open() is complex and may change over time as its functionality and typings evolve, so I do not want to copy-paste the type signature of os.open() into my code. Instead, I want to infer the type for my_open(), so that it "copies" the type of os.open()'s parameters and return values. my_open() shall have the same type as the wrapped function os.open(). I would like to do the same thing with a decorated function: @contextmanager def scoped_open(*args, **kwargs): """Like `os.open`, but as a `contextmanager` yielding the FD. """ fd = os.open(*args, **kwargs) try: yield fd finally: os.close(fd) Here, the inferred function arguments of scoped_open() shall be the same ones as os.open(), but the return type shall be a Generator of the inferred return type of os.open() (currently int, but again I do not wish to copy-paste that int). I read some things about PEP 612 here: Python Typing: Copy `**kwargs` from one function to another Python 3 type hinting for decorator These seem related, but the examples given there still always copy-paste at least some part of the types. How can this be done in pyright/mypy/general? | You could simply pass [the function you want to copy the signature of] into [a decorator factory] which produces a no-op decorator that affects the typing API of the decorated function. The following example can be checked on pyright-play.net (requires Python >= 3.12, as it uses syntax from PEP 695). from __future__ import annotations import typing_extensions as t if t.TYPE_CHECKING: import collections.abc as cx def withParameterAndReturnTypesOf[F: cx.Callable[..., t.Any]](f: F, /) -> cx.Callable[[F], F]: """ Capture the exact type of `f`, then pretend that the decorated function is of this exact type. """ return lambda _: _ def withParameterTypesOf[**P, R](f: cx.Callable[P, t.Any], /) -> cx.Callable[[cx.Callable[P, R]], cx.Callable[P, R]]: """ Capture the parameters type of `f`, then pretend that the decorated function's parameters are of this type. `f`'s return type is ignored, and the decorated function's return type is preserved. 
""" return lambda _: _ import os from contextlib import contextmanager @withParameterAndReturnTypesOf(os.open) def my_open(*args: t.Any, **kwargs: t.Any) -> t.Any: return os.open(*args, **kwargs) @contextmanager @withParameterTypesOf(os.open) def scoped_open(*args: t.Any, **kwargs: t.Any) -> cx.Generator[int, None, None]: fd = os.open(*args, **kwargs) try: yield fd finally: os.close(fd) Expression of type "int" cannot be assigned to declared type "str" "int" is incompatible with "str" (reportGeneralTypeIssues) vvvvvv vvvvvvv >>> a: str = my_open(1, 2) ^ Argument of type "Literal[1]" cannot be assigned to parameter "path" of type "StrOrBytesPath" in function "open" Type "Literal[1]" cannot be assigned to type "StrOrBytesPath" "Literal[1]" is incompatible with "str" "Literal[1]" is incompatible with "bytes" "Literal[1]" is incompatible with protocol "PathLike[str]" "__fspath__" is not present "Literal[1]" is incompatible with protocol "PathLike[bytes]" "__fspath__" is not present (reportGeneralTypeIssues) v >>> with scoped_open(1, 2) as a: pass ^^^^^^^^^^^ ^^^^ Expression of type "int" cannot be assigned to declared type "str" "int" is incompatible with "str" (reportGeneralTypeIssues) | 3 | 3 |
77,377,886 | 2023-10-28 | https://stackoverflow.com/questions/77377886/does-mypy-check-never-type-at-all | I was playing with the Never type in mypy. If I have a function foo(x: int) I expected that when called with a value of type Never mypy would complain, but it silently typechecks the call: from typing import Never def foo(x: int): pass def bar(x: Never): foo(x) # ok, I expected a type error foo("foo") # err --- edit --- Just for reference, my solution to create an uninhabited type is this: from abc import ABC, abstractmethod, final @final class Never(ABC): @abstractmethod def __init__(self) -> None: ... | This is normal. Never is a subtype of every other type. After all, Never is the type with no values. All values of Never type are values of every other type, because there are no values of Never type. | 2 | 3
77,376,677 | 2023-10-27 | https://stackoverflow.com/questions/77376677/why-does-datetime-module-behave-this-way-with-timezones | I am trying to work with time and timezones. I am in the US/Mountain time zone and my computer (Windows) is configured to that time zone. import datetime import zoneinfo utc = zoneinfo.ZoneInfo('UTC') mt = zoneinfo.ZoneInfo('US/Mountain') print(datetime.datetime.now()) print(datetime.datetime.now().astimezone(mt)) print(datetime.datetime.now().astimezone(utc)) # 2023-10-27 13:17:18.840857 # 2023-10-27 13:17:18.840857-06:00 # 2023-10-27 19:17:18.840857+00:00 The last line is the one that confuses me. I thought the code datetime.datetime.now() creates a timezone naive object, then astimezone(utc) converts it to a timezone aware object, but doesn't change the "value" of the time. But here you can see that astimezone(utc) causes 6 hours to be added to the value of the time, as if the time generated by datetime.datetime.now() was a mountain time object. | See the docs on astimezone: Return a datetime object with new tzinfo attribute tz, adjusting the date and time data so the result is the same UTC time as self, but in tz’s local time. If provided, tz must be an instance of a tzinfo subclass, and its utcoffset() and dst() methods must not return None. If self is naive, it is presumed to represent time in the system timezone. ... If you merely want to attach a time zone object tz to a datetime dt without adjustment of date and time data, use dt.replace(tzinfo=tz). If you merely want to remove the time zone object from an aware datetime dt without conversion of date and time data, use dt.replace(tzinfo=None). I get the expected behavior if I use replace: import datetime import zoneinfo utc = zoneinfo.ZoneInfo('UTC') mt = zoneinfo.ZoneInfo('US/Mountain') print(datetime.datetime.now()) print(datetime.datetime.now().replace(tzinfo=mt)) print(datetime.datetime.now().replace(tzinfo=utc)) # 2023-10-27 13:45:35.897159 # 2023-10-27 13:45:35.897159-06:00 # 2023-10-27 13:45:35.897159+00:00 What was happening in my code is that when I use astimezone() the datetime modules DOES infer a timezone from my system time and then uses that to convert the time to the SAME utc time. replace is the function to use to just "drop in" a timezone without making adjustments. | 2 | 1 |
77,376,535 | 2023-10-27 | https://stackoverflow.com/questions/77376535/how-to-sort-values-in-order-with-a-pandas-dataframe | I have a dataframe as follows: armed signs_of_mental_illness count gun False 628 gun True 155 knife False 142 vehicle False 104 knife True 84 metal pole True 1 metal rake True 1 I want to sort this dataframe as follows: armed signs_of_mental_illness count gun False 628 gun True 155 knife False 142 knife True 84 I tried armed_mental = focus_age_group.groupby(['armed', 'signs_of_mental_illness'])['id'].count().sort_values(ascending=False) which produced the result above, but I have difficulty getting what I want. The category (armed) with the highest total count (True + False) should be on top of the dataframe, then followed by its True and False rows. | If you want to sort by the total per "armed", you first need to combine the counts with groupby.transform: import numpy as np order = np.lexsort([df['signs_of_mental_illness'], -df.groupby('armed')['count'].transform('sum')]) out = df.iloc[order] Alternative: out = (df.assign(total=df.groupby('armed')['count'].transform('sum')) .sort_values(by=['total', 'signs_of_mental_illness'], ascending=[False, True]) .drop(columns='total') ) Output: armed signs_of_mental_illness count 0 gun False 628 1 gun True 155 2 knife False 142 4 knife True 84 3 vehicle False 104 5 metal pole True 1 6 metal rake True 1 | 2 | 2 |
77,375,755 | 2023-10-27 | https://stackoverflow.com/questions/77375755/how-can-i-dump-a-python-dataclass-to-yaml-without-tags | I have a nested dataclasses that I would like to convert and save to the yaml format using the PyYaml library. The resultant YAML output contains YAML tags which I would like to remove. I have the following Python code: from dataclasses import dataclass import yaml @dataclass class Database: host: str username: str password: str @dataclass class Environment: name: str database: Database @dataclass class Config: environment: Environment database = Database(host="localhost", username="admin", password="secret") environment = Environment(name="local", database=database) config = Config(environment=environment) print(yaml.dump(config)) which produces the YAML output: !!python/object:__main__.Config environment: !!python/object:__main__.Environment database: !!python/object:__main__.Database host: localhost password: secret username: admin name: local How can I produce a YAML output of nested dataclasses without the YAML tags included? The desired outcome should look something like: environment: database: host: localhost password: secret username: admin name: local | When dumping instances of Python classes, PyYaml will serialize the contents and add a tag indicating the class in order to allow reading the yaml output back to the designated class. PyYaml does not tag native Python objects like lists and dicts, so by converting the dataclass instances to dictionaries with the asdict method the YAML dump will contain no tags: from dataclasses import dataclass, asdict import yaml @dataclass class Database: host: str username: str password: str @dataclass class Environment: name: str database: Database @dataclass class Config: environment: Environment database = Database(host="localhost", username="admin", password="secret") environment = Environment(name="local", database=database) config = Config(environment=environment) print(yaml.dump(asdict(config))) The updated code above produces the following output: environment: database: host: localhost password: secret username: admin name: local | 5 | 4 |
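If the YAML has to be read back in later, here is a sketch of the reverse direction, reusing the dataclasses from the question; because safe_load returns plain dicts, the nested objects have to be rebuilt by hand:

```python
import yaml

loaded = yaml.safe_load(yaml.dump(asdict(config)))  # plain dicts, no tags
restored = Config(
    environment=Environment(
        name=loaded["environment"]["name"],
        database=Database(**loaded["environment"]["database"]),
    )
)
```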
77,375,606 | 2023-10-27 | https://stackoverflow.com/questions/77375606/separate-input-prompts-from-return-values-when-calling-python-script-inside-shel | I have a simple Python script that asks the user for an ID and returns a serial number: ... id = input("Enter ID (0-indexed):\n") ... print(serial_number) I'm using this .py script inside of a larger bash script via command substitution: serial_number=$(./get_serial_no.py) echo "$serial_number" The problem I'm facing is the following: since command substitution takes the entire output of the command as input, the input prompt inside the Python script is also stored in the serial_number variable and never gets displayed on screen (until the echo). Therefore, if I run the shell script, the console waits for input without giving me a prompt. Is there a way to separate the Python prompt from the value that I want returned via command substitution (i.e. the serial number)? The only things I can think of are either writing the serial number to another file, or asking for input inside of the shell script and passing the ID as an option to the Python script. | I recommend against prompts if you are using a Python script in a shell script. Use command-line options with a library such as argparse. It is so much easier, plus you get input validation as a bonus: import argparse import re parser = argparse.ArgumentParser() parser.add_argument( '-s', '--serial_number', type = int, default = 12345, required = False, help = re.sub( r'\s+', r' ', '''Serial number (default: %(default)s)''')) options = parser.parse_args() print(options.serial_number) | 2 | 2 |
77,370,805 | 2023-10-26 | https://stackoverflow.com/questions/77370805/using-python-subprocess-to-open-powershell-causes-encoding-errors-in-stdout | I'm trying to run a Powershell script from python and print the output, but the output contains special characters "é". process = subprocess.Popen([r'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe', 'echo é'], stdout=subprocess.PIPE) print(process.stdout.read().decode('cp1252')) returns "," process = subprocess.run(r'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe echo é', stdout=subprocess.PIPE) print(process.stdout.decode('cp1252')) returns "," print(subprocess.check_output(r'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe echo é').decode('cp1252')) returns "," Is there an alternate method other than subprocess, or maybe a different encoding I should be using? UTF-8 gives an error for é but returns an "r" for ®. UTF-16-le gives the error "UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0x0a in position 2: truncated data". | The powershell.exe, the Windows PowerShell CLI,[1] uses the active console window's code page to encode its stdout and stderr output, as reflected in the output from chcp, which by default is the legacy system locale's OEM code page, e.g. (expressed in Python terms) cp437. By contrast, the code page you used - cp1252 - is an ANSI code page. Note: Python uses the system's ANSI code page by default for encoding its stdout and stderr output, which, however, is nonstandard behavior: console applications are expected to use the current console's output code page, which is what powershell.exe does and which, as stated, is the system's OEM code page. One option is to simply query the console window for its active (output) code page via the WinAPI and use the encoding returned: import subprocess from ctypes import windll # Get the console's (output) code page, which the PowerShell CLI # uses to encode its output. cp = windll.kernel32.GetConsoleOutputCP() process = subprocess.Popen(r'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe echo é', stdout=subprocess.PIPE) # Decode based on the active code page. print(process.stdout.read().decode('cp' + str(cp))) However, note that the OEM code page limits you to 256 characters; while é can be represented in CP437, for instance, other Unicode characters, such as €, cannot. Therefore the robust option is to (temporarily) set the console output code page to 65001, which is UTF-8: import subprocess from ctypes import windll # Save the current console output code page and switch to 65001 (UTF-8) previousCp = windll.kernel32.GetConsoleOutputCP() windll.kernel32.SetConsoleOutputCP(65001) process = subprocess.Popen(r'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe echo é€', stdout=subprocess.PIPE) # Decode as UTF-8 print(process.stdout.read().decode('utf8')) # Restore the previous output console code page. windll.kernel32.SetConsoleOutputCP(previousCp) Note: The above only ensures that the PowerShell child process emits UTF-8 and that its output is decoded as such inside the Python process, which is unrelated to what character encoding Python itself uses for its output streams. To put Python v3.7+ itself in Python UTF-8 Mode, which makes it decode input as UTF-8 and produce UTF-8 output, pass command-line option -X utf8 or define environment variable PYTHONUTF8 with a value of 1 before invocation. 
To additionally make an interactive shell session use UTF-8 (use the 65001 code page) for the remainder of the session: In a cmd.exe session: chcp 65001 In a PowerShell session: $OutputEncoding = [Console]::InputEncoding = [Console]::OutputEncoding = [System.Text.UTF8Encoding]::new() A simpler alternative via a one-time configuration step is to configure your system to use UTF-8 system-wide, in which case both the OEM and the ANSI code pages are set to 65001. However, this has far-reaching consequences - see this answer. [1] The same applies to pwsh.exe, the CLI of the modern PowerShell (Core) 7+ edition. | 2 | 2 |
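If the code-page switch is needed in several places, the same WinAPI calls used in the answer can be wrapped in a context manager. A sketch, untested here and Windows-only by nature:

```python
from contextlib import contextmanager
from ctypes import windll

@contextmanager
def console_utf8():
    # Temporarily switch the console output code page to 65001 (UTF-8).
    previous = windll.kernel32.GetConsoleOutputCP()
    windll.kernel32.SetConsoleOutputCP(65001)
    try:
        yield
    finally:
        windll.kernel32.SetConsoleOutputCP(previous)

# Usage idea:
# with console_utf8():
#     out = subprocess.run([...], capture_output=True).stdout.decode("utf8")
```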
77,371,542 | 2023-10-27 | https://stackoverflow.com/questions/77371542/i-have-a-problem-with-python-in-vscode-typeerror-is-not-a-function | In VS Code, the latest Pylance extension (v2023.10.50) failed with this error: TypeError: _0x2f33cc[(_0x1efd68(...) + _0x1efd68(...))] is not a function (when I highlight any line of code). I tried deleting some suspect libraries, but nothing changed. | The same error occurred for me with Pylance v2023.10.50; downgrading to v2023.10.40 resolved the problem. | 4 | 13 |
77,370,226 | 2023-10-26 | https://stackoverflow.com/questions/77370226/pyinstaller-generate-exe-file-outside-dist-folder | I'm using the PyInstaller package to build a Python desktop app; however, PyInstaller seems to put the bundled exe file outside the _internal folder. Is there a solution I can try? Thanks in advance. [screenshot: output by PyInstaller] [screenshot: _internal folder structure] My goal is to package the whole app in a folder (dist); the exe file must be inside the _internal folder. | From the update logs in PyInstaller's documentation, https://pyinstaller.org/en/stable/CHANGES.html#id2 , it says: Restructure onedir mode builds so that everything except the executable (and the .pkg file, if you're using external PYZ archive mode) are hidden inside a sub-directory. This sub-directory's name defaults to _internal but may be configured with a new --contents-directory option. Onefile applications and macOS .app bundles are unaffected. (#7713) It means: starting with version 6.0.0, a onedir build puts everything except the executable into a directory named _internal. So a simple way to go back to the old layout is to use a version below 6.0.0, such as 5.13.2. It works well for me. Also, from the GitHub pull request below: https://github.com/pyinstaller/pyinstaller/pull/7713 PyInstaller may give us a way to build in onedir mode without creating a directory named _internal, but I could not find it. Sorry for that; English is not my mother tongue, and I have tried my best to get the grammar correct. | 2 | 3 |
77,370,398 | 2023-10-26 | https://stackoverflow.com/questions/77370398/python-decimal-sum-returning-wrong-value | I'm using decimal module to avoid float rounding errors. In my case the values are money so I want two decimal places. To do that I do decimal.getcontext().prec = 2 but then I get some surprising results which make me think I'm missing something. In this code the first assertion works, but the second fails from decimal import getcontext, Decimal assert Decimal("3000") + Decimal("20") == 3020 getcontext().prec = 2 assert Decimal("3000") + Decimal("20") == 3020 # fails Since 3000 and 20 are integers I was expecting this to hold, but I get 3000 instead. Any ideas on what is happening? | decimal does not implement fixed-point arithmetic directly. It implements base 10 floating-point arithmetic. The precision (prec) is the total number of significant digits retained and has nothing to do with the position of the radix point. Try displaying the computed value in your last example: >>> Decimal("3000") + Decimal("20") Decimal('3.0E+3') The exact result (3020) is rounded back to the 2 (because you set prec to 2) most significant digits, so the trailing "20" is thrown away. If, e.g., you want 2 places after the decimal point, you'll have to arrange for that yourself. Search the docs for the question "Once I have valid two place inputs, how do I maintain that invariant throughout an application?". | 2 | 4 |
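For the money use case, the usual approach (the one the linked docs FAQ describes) is to leave the context precision at its default and quantize each result to two places instead. A small sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

TWOPLACES = Decimal("0.01")

def money(x) -> Decimal:
    # Quantize to two decimal places; the context precision stays at its default.
    return Decimal(str(x)).quantize(TWOPLACES, rounding=ROUND_HALF_UP)

print(money("3000") + money("20"))    # 3020.00
print(money("0.10") + money("0.02"))  # 0.12
```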
77,370,358 | 2023-10-26 | https://stackoverflow.com/questions/77370358/python-check-if-value-in-tuple-fails-with-numpy-array | I've got a function that returns a tuple, and I need to check if a specific element is in the tuple. I don't know what types of elements will be in the tuple, but I know that I want an exact match. For example, I want 1 in (1, [0, 6], 0) --> True 1 in ([1], 0, 6) --> False This should be really straightforward, right? I just check 1 in tuple_output_from_function. This breaks if there is a numpy array as an element in the tuple import numpy as np s = tuple((2, [0, 6], 1)) 4 in s --> False t = tuple((2, np.array([0, 6]), 1)) 4 in t --> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I expect the second case to return False because 4 is not in the tuple. Even if it were in the array, I would still expect False. I can do 0 in t[1] without error. Why does this break, and how can I make my check robust to it without assuming that there will be a numpy array or having to check for it explicitly? | It breaks because with the in operator, Python under the hood uses the equality operator (==). Consider this: t = (2, np.array([0, 6]), 1) v = 1 print(v in t) Python checks each value in the tuple t for equality. For a numpy array, the operation 1 == np.array([0, 6]) yields another boolean array, [False False]. Python then checks if this result is truthy, which throws the exception you see. You can use any() and only compare the elements whose type is int: t = (2, np.array([0, 6]), 1) v = 1 x = any(v == i for i in t if isinstance(i, int)) print(x) Prints: True | 5 | 2 |
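One way to make the check robust without assuming the wanted value is always an int: skip anything that isn't a scalar before comparing. A sketch (the helper name is illustrative, not from the accepted answer):

```python
import numpy as np

def contains_exact(tup, value):
    # Arrays and lists are not scalars, so they never count as an exact match.
    return any(np.isscalar(item) and item == value for item in tup)

t = (2, np.array([0, 6]), 1)
print(contains_exact(t, 4))  # False
print(contains_exact(t, 2))  # True
```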
77,368,894 | 2023-10-26 | https://stackoverflow.com/questions/77368894/python-packagenotfounderror-no-package-metadata-was-found-for-myproject | I have an existing Python project and am trying to debug it via Microsoft Visual Studio Code. Therefore I have the following launch.json: { "version": "0.2.0", "configurations": [ { "name": "Python: myproject", "type": "python", "request": "launch", "program": "E:\\<path>\\__main__.py", "cwd": "${workspaceFolder}", "env": {"PYTHONPATH": "${cwd}" }, "args": [ "publish", "--config=E:\\<path>\\config.yaml" ], } ] } Unfortunately it runs into an exception: Exception has occurred: PackageNotFoundError No package metadata was found for myproject importlib.metadata.PackageNotFoundError: No package metadata was found for myproject I already tried pip install importlib.metadata but it doesn't help. The lines of code that raise the exception end in the return: import atexit, copy, getopt, importlib.metadata, json, logging, os, re, signal, shutil, sys, textwrap, time, urllib . . . return importlib.metadata.version(package) Using the interpreter: [screenshot] Could anyone help me fix this? Thanks in advance. Cheers | After additional research, it turned out that only the following command helped: pip install -e . Posting this in case someone has the same issue (the current project wasn't installed). | 2 | 7 |
77,367,745 | 2023-10-26 | https://stackoverflow.com/questions/77367745/numpy-and-linear-algebra-how-to-code-ax%ea%9e%8fy | I've some difficulty in matching what Numpy expects when performing the dot product and vector representation in linear algebra, in term of shapes. Let's say I've a matrix and two column vectors represented by Numpy arrays: import numpy as np A = np.array([[1,2], [3,1], [-5,2]]) x = np.array([[0], [2]]) y = np.array([[-2], [0], [3]]) and I want to compute Axꞏy, Ax being a matrix multiplication and ꞏ being the dot product. This doesn't work: # a = Axꞏy a = (A @ x).dot(y) shapes (3,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0) Ax: [[4], [2], [4]] is indeed a column vector and the dot product of the two column vectors is the scalar: -8+0+12=4, this is the result I'm expecting. What is the correct operation or reshaping required, using vectors as they are defined? | You can't compute a dot product between two (3, 1) arrays, a dot product A @ B is only valid if the shape of A and B are (n, k) and (k, m). It looks like you want: (A@x).T @ y Output: [[4]] Or as scalar: ((A@x).T @ y).item() Output: 4 NB. in the context of 2D arrays @ and dot are equivalent. Other operations that would have been valid given the shapes: (A@x) @ y.T array([[-8, 0, 12], [-4, 0, 6], [-8, 0, 12]]) (A.T*x) @ y array([[0], [4]]) | 3 | 4 |
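If the column-vector shapes aren't required elsewhere, working with 1-D arrays avoids the transpose entirely. A short sketch with the same numbers:

```python
import numpy as np

A = np.array([[1, 2], [3, 1], [-5, 2]])
x = np.array([0, 2])
y = np.array([-2, 0, 3])

# (A @ x) has shape (3,), so the second @ is an ordinary dot product.
print(A @ x @ y)  # 4
```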
77,366,058 | 2023-10-26 | https://stackoverflow.com/questions/77366058/sort-rows-in-sub-cluster-of-cluster-in-pandas-dataframe | I have a dataframe as below: Part Date Quantity A 2023-10-26 -1 A 2023-10-26 1 A 2023-11-03 1 A 2023-12-15 -1 B 2023-11-09 2 B 2023-11-14 -2 B 2023-11-14 2 B 2023-11-19 2 Each part is a cluster, and each date within a part is a sub-cluster. I want to order the Quantity values for each date for each part based on: positive values first. Result should be: Part Date Quantity A 2023-10-26 1 A 2023-10-26 -1 A 2023-11-03 1 A 2023-12-15 -1 B 2023-11-09 2 B 2023-11-14 2 B 2023-11-14 -2 B 2023-11-19 2 Is it possible to use some sort of double groupby, or should I look for a different solution? | Use the key parameter of DataFrame.sort_values: for the Quantity column only, map positive values to False by comparing with Series.le (or Series.lt), so that they sort first: out = df.sort_values(['Part', 'Date', 'Quantity'], key=lambda x: x.le(0) if x.name=='Quantity' else x) print (out) Part Date Quantity 1 A 2023-10-26 1 0 A 2023-10-26 -1 2 A 2023-11-03 1 3 A 2023-12-15 -1 4 B 2023-11-09 2 6 B 2023-11-14 2 5 B 2023-11-14 -2 7 B 2023-11-19 2 | 3 | 4 |
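An equivalent route that avoids the key callable: add a throwaway boolean column, sort on it, then drop it. A sketch (the _neg name is just a placeholder):

```python
out = (df.assign(_neg=df['Quantity'].le(0))   # False for positive quantities
         .sort_values(['Part', 'Date', '_neg'])
         .drop(columns='_neg'))
```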
77,362,216 | 2023-10-25 | https://stackoverflow.com/questions/77362216/add-startup-shutdown-handlers-to-fastapi-app-with-lifespan-api | Consider a FastAPI using the lifespan parameter like this: def lifespan(app): print('lifespan start') yield print('lifespan end') app = FastAPI(lifespan=lifespan) Now I want to register a sub app with its own lifecycle functions: app.mount(mount_path, sub_app) How can I register startup/shutdown handlers for the sub app? All solutions I could find either require control over the lifespan generator (which I don't have) or involve deprecated methods like add_event_handler (which doesn't work when lifespan is set). Update Minimal reproducible example: from fastapi import FastAPI # --- main app --- def lifespan(_): print("startup") yield print("shutdown") app = FastAPI(lifespan=lifespan) @app.get("/") async def root(): return {"message": "Hello World"} # --- sub app --- sub_app = FastAPI() @sub_app.get("/") async def sub_root(): return {"message": "Hello Sub World"} app.mount("/sub", sub_app) app.on_event("startup")(lambda: print("sub startup")) # doesn't work app.on_event("shutdown")(lambda: print("sub shutdown")) # doesn't work Run with: uvicorn my_app:app --port 8000 | I found a solution, but I'm not sure if I like it... It accesses the existing lifespan generator via app.router.lifespan_context and wraps it with additional startup/shutdown commands: from contextlib import asynccontextmanager ... main_app_lifespan = app.router.lifespan_context @asynccontextmanager async def lifespan_wrapper(app): print("sub startup") async with main_app_lifespan(app) as maybe_state: yield maybe_state print("sub shutdown") app.router.lifespan_context = lifespan_wrapper Output: INFO: Waiting for application startup. sub startup startup INFO: Application startup complete. ... INFO: Shutting down INFO: Waiting for application shutdown. shutdown sub shutdown INFO: Application shutdown complete. | 8 | 7 |
77,363,773 | 2023-10-26 | https://stackoverflow.com/questions/77363773/append-string-to-list-directly-before-it-in-a-list-of-lists-and-strings | I have a list of lists and strings and would like to append the string elements into the list element directly before it. Below is a sample of the data. data = [['way','say','may','lay'], 'wake', ['hay','pay','yay'], 'lake', ['tay'], 'shake'] The desire output should be something like this: out = [['way','say','may','lay','wake'], ['hay','pay','yay','lake'], ['tay', 'shake']] I have tried converting the list into a dataframe and use groupby and cumsum() however this seems to only partly solve my problem. Thanks! | You could zip together the odd and even parts of data and use list addition of those parts to build your desired output: out = [odd + [even] for odd, even in zip(data[0::2], data[1::2])] Output: [ ['way', 'say', 'may', 'lay', 'wake'], ['hay', 'pay', 'yay', 'lake'], ['tay', 'shake'] ] | 3 | 2 |
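On Python 3.12+ the same pairing can be written with itertools.batched, under the same assumption as the zip answer that `data` strictly alternates list, string:

```python
from itertools import batched  # Python 3.12+

out = [lst + [s] for lst, s in batched(data, 2)]
```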
77,363,316 | 2023-10-25 | https://stackoverflow.com/questions/77363316/how-to-resolve-issues-with-a-bar-plot-x-axis-being-overcrowded | Here's a Python program which does the following: Makes an API call to treasury.gov to retrieve data Stores the data in a Pandas dataframe Plots the data as a bar chart import requests import pandas as pd import matplotlib.pyplot as plt date = '1900-01-01' transaction_type = 'Withdrawals' transaction_catg = 'Interest on Treasury Securities' page_size = 10000 url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/dts/deposits_withdrawals_operating_cash' url_params = f'?filter=record_date:gt:{date},transaction_type:eq:{transaction_type},transaction_catg:eq:{transaction_catg}&page[size]={page_size}' response = requests.get(url + url_params) result_json = response.json() df = pd.DataFrame(result_json['data']) # Convert transaction_today_amt to numeric df['transaction_today_amt'] = pd.to_numeric(df['transaction_today_amt']) df.plot.bar(x='record_date', y='transaction_today_amt') plt.show() Here's the resulting chart that I get: As you can see, there are too many x-axis labels. Question What's a good way to setup the chart so that the x-axis labels are legible? | The 'record_date' values are strings, not datetimes. df.record_date = pd.to_datetime(df.record_date) Typically, a line plot should be used for continuous timeseries data. Line Plot ax = df.plot(x='record_date', y='transaction_today_amt', figsize=(12, 7)) Scatter Plot ax = df.plot(kind='scatter', x='record_date', y='transaction_today_amt', marker='.', figsize=(12, 7)) Bar Plot Use matplotlib directly, because the xticks are datetime ordinals, which will label the x-axis with the Year. ax.get_xticks() → array([12418., 13149., 13879., 14610., 15340., 16071., 16801., 17532., 18262., 18993., 19723.]) fig, ax = plt.subplots(figsize=(8, 5), dpi=200) ax.bar(x='record_date', height='transaction_today_amt', data=df) ax.set_yscale('log') pandas treats the xticks as discrete, and plots all of them. ax = df.plot(kind='bar', x='record_date', y='transaction_today_amt', figsize=(12, 7)) | 3 | 1 |
77,361,304 | 2023-10-25 | https://stackoverflow.com/questions/77361304/fast-direct-pixel-access-in-python-go-or-julia | I wrote a small program that creates random noise and displays it full screen (5K resolution). I used pygame for it. However the refresh rate is horribly slow. Both the surfarray.blit_array and random generation take a lot of time. Any way to speed this up? I am also flexible to use julia or golang instead. or also psychopy or octave with psychotoolbox (however those do not seem to be working under linux/wayland). Here is what I wrote: import pygame import numpy as N import pygame.surfarray as surfarray from numpy import int32, uint8, uint def main(): pygame.init() #flags = pygame.OPENGL | pygame.FULLSCREEN # OpenGL does not want to work with surfarray flags = pygame.FULLSCREEN screen = pygame.display.set_mode((0,0), flags=flags, vsync=1) w, h = screen.get_width(), screen.get_height() clock = pygame.time.Clock() font = pygame.font.SysFont("Arial" , 18 , bold = True) # define a variable to control the main loop running = True def fps_counter(): fps = str(int(clock.get_fps())) fps_t = font.render(fps , 1, pygame.Color("RED")) screen.blit(fps_t,(0,0)) # main loop while running: # event handling, gets all event from the event queue for event in pygame.event.get(): # only do something if the event is of type QUIT if event.type == pygame.QUIT: # change the value to False, to exit the main loop running = False elif event.type == pygame.KEYDOWN: if event.key == pygame.K_ESCAPE: pygame.quit() return array_img = N.random.randint(0, high=100, size=(w,h,3), dtype=uint) surfarray.blit_array(screen, array_img) fps_counter() pygame.display.flip() clock.tick() #print(clock.get_fps()) # run the main function only if this module is executed as the main script # (if you import this as a module then nothing is executed) if __name__=="__main__": # call the main function main() I need a refresh rate of at least 30 fps for it to be useful | Faster random number generation Generating random numbers is expensive. This is especially true when the random number generator (RNG) needs to be statistically accurate (i.e. random numbers needs to look very random even after some transformation), and when number are generated sequentially. Indeed, for cryptographic usages or some mathematical (Monte Carlo) simulations, the target RNG needs to be sufficiently advanced so that there is no statistical correlation between several subsequent generated random numbers. In practice, software methods to do that are so expensive that modern mainstream processors provide a way to do that with specific instructions. Not all processors support this though, and AFAIK Numpy does not use that (certainly for sake of portability since a random sequence with the same seed on multiple machines is expected to give the same results). Fortunately, RNGs often do not need to be so accurate in most other use-case. They just need to look quite random. There are many methods to do that (e.g. Mersenne Twister, Xoshiro, Xorshift, PCG/LCG). There is often a trade-off between performance, accuracy and the specialization of the RNG. Since Numpy needs to provide a generic RNG that is relatively accurate (though AFAIK not meant to be used for cryptographic use-cases), it is not surprising its performance is sub-optimal. 
An interesting review of many different methods is available here (though results should be taken with a grain of salt, especially regarding performance since details like being SIMD-friendly is critical for performance in many use-cases). Implementing a very-fast random number generator in pure-Python (using CPython) is not possible but one can use Numba (or Cython) to do that. There is maybe fast existing modules written in native languages to do that though. On top of that we can use multiple threads to speed up the operation. I choose to implement a Xorshift64 RNG for sake of simplicity (and also because it is relatively fast). import numba as nb @nb.njit('uint64(uint64,)') def xorshift64_step(seed): seed ^= seed << np.uint64(13) seed ^= seed >> np.uint64(7) seed ^= seed << np.uint64(17) return seed @nb.njit('uint64()') def init_xorshift64(): seed = np.uint64(np.random.randint(0x10000000, 0x7FFFFFFF)) # Bootstrap return xorshift64_step(seed) @nb.njit('(uint64, int_)') def random_pixel(seed, high): # Must be a constant for sake of performance and in the range [0;256] max_range = np.uint64(high) # Generate 3 group of 16 bits from the RNG bits1 = seed & np.uint64(0xFFFF) bits2 = (seed >> np.uint64(16)) & np.uint64(0xFFFF) bits3 = seed >> np.uint64(48) # Scale the numbers using a trick to avoid a modulo # (since modulo are inefficient and statistically incorrect here) r = np.uint8(np.uint64(bits1 * max_range) >> np.uint64(16)) g = np.uint8(np.uint64(bits2 * max_range) >> np.uint64(16)) b = np.uint8(np.uint64(bits3 * max_range) >> np.uint64(16)) new_seed = xorshift64_step(seed) return (r, g, b, new_seed) @nb.njit('(int_, int_, int_)', parallel=True) def pseudo_random_image(w, h, high): res = np.empty((w, h, 3), dtype=np.uint8) for i in nb.prange(w): # RNG seed initialization seed = init_xorshift64() for j in range(h): r, g, b, seed = random_pixel(seed, high) res[i, j, 0] = r res[i, j, 1] = g res[i, j, 2] = b return res The code is quite big but it is about 22 times faster than Numpy on my i5-9600KF CPU with 6 cores. Note that a similar code can be used in Julia so to get a fast implementation (since Julia use a JIT based on LLVM similar to Numba). On my machine this is sufficient to reach 75 FPS (maximum) while the initial code reached 16 FPS. Faster operations & rendering Generating new random array is limited by the speed of page faults on most platforms. This can significantly slow down the computation. The only way to mitigate this in Python is to create the brame buffer once and perform in-place operation. Moreover, PyGame certainly performs copies internally (and also probably many draw calls) so using a lower-level API can be significantly faster. Still, the operation will likely be memory-bound then and there is nothing to avoid that. It might be fast enough to you though at this point. On top of that, the frames are rendered on the GPU so the CPU needs to send/copy the buffer on the GPU, typically through the PCIe interconnect for discrete GPU. This operation is not very fast for wide screens. Actually, you can generate the random image directly on the GPU using shaders (or tools like OpenCL/CUDA). This avoid the above overhead and GPUs can do that faster than CPUs. | 2 | 3 |
77,362,116 | 2023-10-25 | https://stackoverflow.com/questions/77362116/add-missing-index-dtype-string-to-value-counts-of-df-in-pandas | I can't seem to find this anywhere else, and I could be wording the question incorrectly, but I am getting the value counts for the 'Type' column in adf grouped by account ID and getting results like so: print(df_activities.groupby(['Account ID'])['Type'].value_counts().sort_index()) Account ID Type 0011N000017WOso Call 5 Email 119 Null 2 Outreach 65 Unclassified 1 ... 001o0000017a8o5 Email 46 LinkedIn 11 Null 2 Others 6 Outreach 8 What I would like is to only extract Call, Email, LinkedIn, and Meeting from the value counts, but if an ID is missing one of these values entirely, I need it to show that there are 0 present. For example, this would be the ideal output of the above: Account ID Type 0011N000017WOso Call 5 Email 119 LinkedIn 0 Meeting 0 ... 001o0000017a8o5 Email 46 Call 0 LinkedIn 11 Meeting 0 I want to map these to separate columns afterwards, with something like this: df_acc['Email Activity'] = df_acc['Account ID'].map(df_contacts.groupby(['Account ID'])['Type']='Email'.sum()) | Instead of value_counts, use crosstab and stack: out = pd.crosstab(df['Account ID'], df['Type']).stack() For only specific types: types = ['Call', 'Email', 'LinkedIn', 'Meeting'] out = (pd.crosstab(df['Account ID'], df['Type']) .reindex(columns=types, fill_value=0).stack() ) Output: Account ID Type 0011N000017WOso Call 5 Email 119 LinkedIn 0 Meeting 0 001o0000017a8o5 Call 0 Email 46 LinkedIn 11 Meeting 0 dtype: int64 Variant that might be a bit more efficient if there are many non-target categories: types = ['Call', 'Email', 'LinkedIn', 'Meeting'] m = df['Type'].isin(types) out = (pd.crosstab(df.loc[m, 'Account ID'], df.loc[m, 'Type']) .reindex(columns=types, fill_value=0).stack() ) | 2 | 2 |
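If you would rather stay close to the original value_counts approach, the same result can be had by unstacking and reindexing the counts. A sketch using the same types list:

```python
types = ['Call', 'Email', 'LinkedIn', 'Meeting']

out = (df.groupby('Account ID')['Type'].value_counts()
         .unstack(fill_value=0)                     # one column per Type
         .reindex(columns=types, fill_value=0)      # add missing types as 0
         .stack())
```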
77,357,723 | 2023-10-25 | https://stackoverflow.com/questions/77357723/how-can-i-create-xticks-with-varying-intervals | I am drawing a line chart using matplotlib as shown below. import matplotlib.pyplot as plt import pandas as pd import io temp = u""" tenor,yield 1M,5.381 3M,5.451 6M,5.505 1Y,5.393 5Y,4.255 10Y,4.109 """ data = pd.read_csv(io.StringIO(temp), sep=",") plt.plot(data['tenor'], data['yield']) Output: The tick intervals on the x-axis are all the same. What I want : Set the tick interval of the x-axis differently as shown in the screen below Is there any way to set tick intvel differently? | In the column 'tenor', 'M' represents month and 'Y' represents year. Create a 'Month' column with 'Y' scaled by 12 Months. It's more concise to plot the data directly with pandas.DataFrame.plot, and use .set_xticks to change the xtick-labels. Tested in python 3.12.0, pandas 2.1.1, matplotlib 3.8.0 data = pd.read_csv(io.StringIO(temp), sep=",") # Add a column "Month" from the the column "tenor" data["Month"] = data['tenor'].apply(lambda x : int(x[:-1]) *12 if 'Y' in x else int(x[:-1])) # plot yield vs Month ax = data.plot(x='Month', y='yield', figsize=(17, 5), legend=False) # set the xticklabels _ = ax.set_xticks(data.Month, data.tenor) The output: .apply with a lambda function is fastest Given 6M rows data = pd.DataFrame({'tenor': ['1M', '3M', '6M', '1Y', '5Y', '10Y'] * 1000000}) Compare Implementations with %timeit in JupyterLab .apply & lambda %timeit data['tenor'].apply(lambda x: int(x[:-1]) *12 if 'Y' in x else int(x[:-1])) 2 s ± 50.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) .apply with a function call def scale(x): v = int(x[:-1]) return v * 12 if 'Y' in x else v %timeit data['tenor'].apply(scale) 2.02 s ± 20.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Vectorized np.where with assignment expression %timeit np.where(data.tenor.str.contains('Y'), (v := data.tenor.str[:-1].astype(int)) * 12 , v) 2.44 s ± 26.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Vectorized np.where without assignment expression %timeit np.where(data.tenor.str.contains('Y'), data.tenor.str[:-1].astype(int) * 12 , data.tenor.str[:-1].astype(int)) 3.36 s ± 5.38 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) | 3 | 3 |
77,360,289 | 2023-10-25 | https://stackoverflow.com/questions/77360289/visual-studio-code-missing-select-linter | I just installed the latest version (1.83.1) of VSC on Ubuntu. I plan to develop in Python, and am just getting started using Linters. I have installed the VSC extensions for Python, including the Pylance extension. I also installed the pylint and flake8 extensions as I wanted to compare the two linters. All documentation I've seen, including that on Microsoft's web site, is that I should be able to select which linter to use by entering the command (ctrl-shft-p) "python: select linter". However that command does not does not appear in the command list. Both linter extensions are enabled, and both appear to work, as I get pop up messages with lint corrections. Has there been a change in the linter selection that does not appear in the documentation? | The Python: Select Linter command was removed in the 2023.18.0 release / between that release and the 2023.16.0 release. You can see the command definition and the command palette menu item definition still there in the 2023.16.0 release's extension manifest (search "setLinter"), whereas it's gone in the 2023.18.0 extension manifest. For more info on why this change (probably) happened, see Why does the VS Code Python extension (circa v2018.19) no longer provide support for several Python tools like linters and formatters?. TL;DR the Python extension is deferring linting to separate extensions now. | 2 | 5 |
77,361,799 | 2023-10-25 | https://stackoverflow.com/questions/77361799/attributeerror-dataframe-object-has-no-attribute-group-by | I am trying to group by a polars dataframe following the document: https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by.html#polars.DataFrame.group_by import polars as pl df = pl.DataFrame( { "a": ["a", "b", "a", "b", "c"], "b": [1, 2, 1, 3, 3], "c": [5, 4, 3, 2, 1], } ) df.group_by("a").agg(pl.col("b").sum()) However, I am getting this error: AttributeError: 'DataFrame' object has no attribute 'group_by' | On checking I found my polars version : pl.__version__ 0.17.3 https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.groupby.html I need to do: df.groupby("a").agg(pl.col("b").sum()) # there is no underscore in groupby #output shape: (3, 2) a b str i64 "a" 2 "c" 3 "b" 5 and the document says : Deprecated since version 0.19.0: This method has been renamed to DataFrame.group_by(). This is the new document for polars version 0.19 https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by.html#polars-dataframe-group-by | 5 | 6 |
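If the code has to run on polars versions from both sides of the rename, a small compatibility shim is enough. A sketch, not an official polars API:

```python
import polars as pl

df = pl.DataFrame({"a": ["a", "b", "a", "b", "c"], "b": [1, 2, 1, 3, 3]})

# Use group_by where it exists (>= 0.19), fall back to groupby on older versions.
group_by = df.group_by if hasattr(df, "group_by") else df.groupby
print(group_by("a").agg(pl.col("b").sum()))
```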
77,360,947 | 2023-10-25 | https://stackoverflow.com/questions/77360947/how-to-make-dataframe-like-output-to-dictionary | I'm fetching list format depending on URL link and I want to extract terminal output to pandas Dataframe structure before doing so I must convert it to dictionary. How can I achieve that? Here's the code: import subprocess import pandas as pd url = 'https://www.youtube.com/watch?v=kjYW63CVbsE' command = subprocess.getoutput('yt-dlp --list-formats "{url}"'.format(url=url)) print(command Here's the output I tried to split it into lines but not my expected result import subprocess import pandas as pd url = 'https://www.youtube.com/watch?v=kjYW63CVbsE' command = subprocess.getoutput('yt-dlp --list-formats "{url}"'.format(url=url)) output_lines = command.split('\n') headers = output_lines[6].split() format_list = [] for line in output_lines[::]: values = line.split() format_info = {} for i in range(len(headers)): format_info[headers[i]] = values[i] format_list.append(format_info) df = pd.DataFrame(format_list) print(df) | I would recommend against parsing the raw string and suggest using yt_dlp.YoutubeDL (or use --dump-json or --print "%()j" like the answer @AndrejKesely provided) as said in the yt-dlp doc: Your program should avoid parsing the normal stdout since they may change in future versions. Instead they should use options such as -J, --print, --progress-template, --exec etc to create console output that you can reliably reproduce and parse. From a Python program, you can embed yt-dlp in a more powerful fashion: import pandas as pd from yt_dlp import YoutubeDL url = 'https://www.youtube.com/watch?v=kjYW63CVbsE' ydl_opts = {'listformats':True} with YoutubeDL(ydl_opts) as ydl: info_dict = ydl.extract_info(url, download=False) data = pd.DataFrame() df = pd.DataFrame(ydl.sanitize_info(info_dict).get("formats")) print(df) Prints: format_id format_note ext protocol acodec vcodec ... audio_channels language_preference dynamic_range container downloader_options filesize_approx 0 sb3 storyboard mhtml mhtml none none ... NaN NaN NaN NaN NaN NaN 1 sb2 storyboard mhtml mhtml none none ... NaN NaN NaN NaN NaN NaN 2 sb1 storyboard mhtml mhtml none none ... NaN NaN NaN NaN NaN NaN 3 sb0 storyboard mhtml mhtml none none ... NaN NaN NaN NaN NaN NaN 4 233 Default mp4 m3u8_native NaN none ... NaN NaN NaN NaN NaN NaN 5 234 Default mp4 m3u8_native NaN none ... NaN NaN NaN NaN NaN NaN 6 293 Default mp4 m3u8_native NaN none ... NaN NaN NaN NaN NaN NaN 7 294 Default mp4 m3u8_native NaN none ... NaN NaN NaN NaN NaN NaN 8 599 ultralow m4a https mp4a.40.5 none ... 2.0 -1.0 None m4a_dash {'http_chunk_size': 10485760} NaN 9 600 ultralow webm https opus none ... 2.0 -1.0 None webm_dash {'http_chunk_size': 10485760} NaN 10 139 low m4a https mp4a.40.5 none ... 2.0 -1.0 None m4a_dash {'http_chunk_size': 10485760} NaN 11 249 low webm https opus none ... 2.0 -1.0 None webm_dash {'http_chunk_size': 10485760} NaN 12 250 low webm https opus none ... 2.0 -1.0 None webm_dash {'http_chunk_size': 10485760} NaN 13 140 medium m4a https mp4a.40.2 none ... 2.0 -1.0 None m4a_dash {'http_chunk_size': 10485760} NaN 14 251 medium webm https opus none ... 2.0 -1.0 None webm_dash {'http_chunk_size': 10485760} NaN 15 17 144p 3gp https mp4a.40.2 mp4v.20.3 ... 1.0 -1.0 SDR NaN {'http_chunk_size': 10485760} NaN 16 602 NaN mp4 m3u8_native none vp09.00.10.08 ... NaN NaN SDR NaN NaN NaN 17 597 144p mp4 https none avc1.4d400b ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 18 598 144p webm https none vp9 ... 
NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 19 269 NaN mp4 m3u8_native none avc1.4D400C ... NaN NaN SDR NaN NaN NaN 20 281 NaN mp4 m3u8_native none avc1.4D400C ... NaN NaN SDR NaN NaN NaN 21 603 NaN mp4 m3u8_native none vp09.00.11.08 ... NaN NaN SDR NaN NaN NaN 22 394 144p mp4 https none av01.0.00M.08 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 23 160 144p mp4 https none avc1.4D400C ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 24 278 144p webm https none vp09.00.11.08 ... NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 25 229 NaN mp4 m3u8_native none avc1.4D4015 ... NaN NaN SDR NaN NaN NaN 26 282 NaN mp4 m3u8_native none avc1.4D4015 ... NaN NaN SDR NaN NaN NaN 27 604 NaN mp4 m3u8_native none vp09.00.20.08 ... NaN NaN SDR NaN NaN NaN 28 395 240p mp4 https none av01.0.00M.08 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 29 133 240p mp4 https none avc1.4D4015 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 30 242 240p webm https none vp09.00.20.08 ... NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 31 230 NaN mp4 m3u8_native none avc1.4D401E ... NaN NaN SDR NaN NaN NaN 32 283 NaN mp4 m3u8_native none avc1.4D401E ... NaN NaN SDR NaN NaN NaN 33 605 NaN mp4 m3u8_native none vp09.00.21.08 ... NaN NaN SDR NaN NaN NaN 34 396 360p mp4 https none av01.0.01M.08 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 35 134 360p mp4 https none avc1.4D401E ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 36 18 360p mp4 https mp4a.40.2 avc1.42001E ... 2.0 -1.0 SDR NaN {'http_chunk_size': 10485760} 9195945.0 37 243 360p webm https none vp09.00.21.08 ... NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 38 231 NaN mp4 m3u8_native none avc1.4D401F ... NaN NaN SDR NaN NaN NaN 39 284 NaN mp4 m3u8_native none avc1.4D401F ... NaN NaN SDR NaN NaN NaN 40 606 NaN mp4 m3u8_native none vp09.00.30.08 ... NaN NaN SDR NaN NaN NaN 41 397 480p mp4 https none av01.0.04M.08 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 42 135 480p mp4 https none avc1.4D401F ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 43 244 480p webm https none vp09.00.30.08 ... NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 44 287 NaN mp4 m3u8_native none avc1.4D401F ... NaN NaN SDR NaN NaN NaN 45 232 NaN mp4 m3u8_native none avc1.4D401F ... NaN NaN SDR NaN NaN NaN 46 609 NaN mp4 m3u8_native none vp09.00.31.08 ... NaN NaN SDR NaN NaN NaN 47 22 720p mp4 https mp4a.40.2 avc1.64001F ... 2.0 -1.0 SDR NaN {'http_chunk_size': 10485760} 16860413.0 48 398 720p mp4 https none av01.0.05M.08 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 49 136 720p mp4 https none avc1.4D401F ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 50 247 720p webm https none vp09.00.31.08 ... NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 51 290 NaN mp4 m3u8_native none avc1.640028 ... NaN NaN SDR NaN NaN NaN 52 270 NaN mp4 m3u8_native none avc1.640028 ... NaN NaN SDR NaN NaN NaN 53 614 NaN mp4 m3u8_native none vp09.00.40.08 ... NaN NaN SDR NaN NaN NaN 54 399 1080p mp4 https none av01.0.08M.08 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 55 137 1080p mp4 https none avc1.640028 ... NaN -1.0 SDR mp4_dash {'http_chunk_size': 10485760} NaN 56 248 1080p webm https none vp09.00.40.08 ... NaN -1.0 SDR webm_dash {'http_chunk_size': 10485760} NaN 57 616 Premium mp4 m3u8_native none vp09.00.40.08 ... NaN NaN SDR NaN NaN NaN [58 rows x 37 columns] | 4 | 1 |
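If keeping the subprocess route is preferable, the machine-readable output the answer mentions can be requested with the -J (--dump-single-json) flag and parsed directly. A sketch:

```python
import json
import subprocess
import pandas as pd

url = "https://www.youtube.com/watch?v=kjYW63CVbsE"
proc = subprocess.run(["yt-dlp", "-J", url],
                      capture_output=True, text=True, check=True)
info = json.loads(proc.stdout)          # one JSON object for the video
df = pd.DataFrame(info["formats"])      # same per-format records as above
print(df.head())
```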
77,359,543 | 2023-10-25 | https://stackoverflow.com/questions/77359543/strip-strings-and-date-time-from-mixed-string-with-numbers | I have this kind of dataset: import pandas as pd import numpy as np x = np.array([ '355395.7037', '355369.6383', '355367.881', '355381.419', '357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/202', '357405.7897626596']) y = np.array([ '4521429.292', '4521430.0229', ' 4521430.1191', '4521430.1256', '3 13:36 4521735.552137422', '4521512.725']) df = pd.DataFrame({'X':x, 'Y':y}) So, sometimes, I may have strings mixed with numbers. A Solution I thought. If you note 357394.9D7a82te7o5fm4o9n4t3h7 print: 06/10/202 for example, there are the words Date of month inside number 357394.97827649437. and at y column : '3 13:36 4521735.552137422' there is the 3 that came from 2023 from previous print: 06/10/202 and the time 13:36. I want to get rid of them in order to have only the numbers. I may have different situations like: '357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/2023' 13:36 4521735.552137422 for example. Or, '357394.9 D7a82te7o6fm4o9n4t3h7 print: ', '06/10/2023 13:36 4521735.552137422' If you see the numbers, for X column for example, all numbers have 6 digits before the decimal point, so we could take for example, the first 6 digits from '357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/202', -> 357394 and apply decimal point and fill with the rest numbers until a white space or a word (print) exists. So, the number to take is 357394.97827649437 But the thing is that we have a string and we cannot apply for example float or int to process it. For the second case, for Y column: '3 13:36 4521735.552137422', I think we must search from the end and when we see a decimal point, count 7 digits (Y columns has 7 digits before decimal) and stop there. ** UPD ** If we use: x = np.array([ '355395.7037', '355369.6383', '355367.881', '355381.419', '357394.9D7a82te7o6fm4o9n4t3h7 p', '357405.7897626596']) y = np.array([ '4521429.292', '4521430.0229', '4521430.1191', '4521430.1256', 'rint: 06/10/2023 13:36 4521735.552137422', '4521512.725']) then the solution gives: X Y 0 355395.7037 4521429.292 1 355369.6383 4521430.0229 2 355367.881 4521430.1191 3 355381.419 4521430.1256 4 357394.97827649437 06102023 5 357405.7897626596 4521512.725 where the Y values is: 06102023 instead of 4521735.552137422 | The best would be to try avoiding this format in the first place, especially if numbers get mixed with your value. 
That said, you could try to get rid of letters with replace, and then to use a regex to str.extract the number: out = df.apply(lambda s: s.str.replace('[^\d .]+', '', regex=True) .str.extract(r'(\d{6,}(?:\.\d+)?)', expand=False)) For a dynamic solution to specify the number of digits per column: d = {'X': 6, 'Y': 7} out = df.apply(lambda s: s.str.replace('[^\d .]+', '', regex=True) .str.extract(fr'(\d{{{d[s.name]},}}(?:\.\d+)?)', expand=False) ) Output: X Y 0 355395.7037 4521429.292 1 355369.6383 4521430.0229 2 355367.881 4521430.1191 3 355381.419 4521430.1256 4 357394.97827649437 4521735.552137422 5 357405.7897626596 4521512.725 updated example: Just avoid removing the date separator: out = df.apply(lambda s: s.str.replace('[^\d ./]+', '', regex=True) .str.extract(r'(\d{6,}(?:\.\d+)?)', expand=False)) Or only remove letters: out = df.apply(lambda s: s.str.replace('[a-zA-Z]+', '', regex=True) .str.extract(r'(\d{6,}(?:\.\d+)?)', expand=False)) Output: X Y 0 355395.7037 4521429.292 1 355369.6383 4521430.0229 2 355367.881 4521430.1191 3 355381.419 4521430.1256 4 357394.97827649437 4521735.552137422 5 357405.7897626596 4521512.725 | 3 | 1 |
77,359,803 | 2023-10-25 | https://stackoverflow.com/questions/77359803/pandas-check-if-count-of-occurence-of-each-element-of-a-dataframe-column-is-equ | Giving two dataframes df1: col1 ----- 1 1 1 2 3 1 1 2 and df2: colA | colB ------------ 1 | 2 1 | 4 2 | 1 3 | 1 5 | 1 4 | 5 I want to return True if the count of the occurence of every element e in col1 in df1 is equal to the count of the occurence of e in both colA and colB in df2. In the example above for df1 and df2 it should return True. Explanation: count of occurence of 1 in col1 in df1= 5 = count of occurence of 1 in colA and colB in df2. count of occurence of 2 in col1 = 2 = count of occurence of 2 in colA and colB in df2. count of occurence of 3 in col1 = 1 = count of occurence of 3 in colA and colB in df2 Logic: The idea is to groupby the elements of col1 in df1 and count the occurences of each one of it then go search the count of their occurence in colA and colB in df2. I've tried the following code: def records_check(df1, df2): if df1.groupby(['col1']).size() == df2.groupby(['colA', 'colA']).size(): return True else: return False But I've got this: ValueError: Can only compare identically-labeled Series objects What is the most efficient way to achieve this especially when dealing with huge data please? | You can use value_counts on col1 and the stacked version of df2, then compare the outputs: c1 = df1['col1'].value_counts(sort=False) c2 = df2.stack().value_counts(sort=False) # if more columns # c2 = df2[['colA', 'colB']].stack().value_counts() out = c1.eq(c2.reindex_like(c1)).all() numpy variant: # get values and counts for each dataset n1, c1 = np.unique(df1['col1'], return_counts=True) n2, c2 = np.unique(df2[['colA', 'colB']].to_numpy().ravel(), return_counts=True) # compare the counts for existing values in `df1['col1']` np.array_equal(c1, c2[np.isin(n2, n1)]) Output: True | 3 | 1 |
77,358,638 | 2023-10-25 | https://stackoverflow.com/questions/77358638/dynamic-numpy-conditions-based-on-values-from-array | I am trying to find out how I can use np.where in a dynamic way, where I select some predefined values, pass them to a function and let them create the condition. Ideally I want to create long conditions with several logical operators. In the code below I am stupidly trying to use a string: cond_arr[1]['cond'] as a logical operator, to illustrate what I want, because I can't find out how to proceed with this? Is there an elegant(or just working) way to create these dynamic conditions? import numpy as np import pandas as pd import random from datetime import datetime, timedelta data = { 'open': [random.uniform(50, 100) for _ in range(30)], 'high': [random.uniform(100, 150) for _ in range(30)], 'low': [random.uniform(25, 50) for _ in range(30)], 'close': [random.uniform(50, 100) for _ in range(30)], 'volume': [random.randint(1000, 10000) for _ in range(30)], 'datetime': [datetime(2023, 10, 1, 0, 0) + timedelta(hours=i) for i in range(30)] } df = pd.DataFrame(data) # The meat and potatoes indicators = [{"ind": "open"},{"ind": "rsi"}, {"ind": "macd"}] conds = [{"cond": "<"}, {"cond": ">"}, {"cond": "=="}, {"cond": "and"}, {"cond": "or"}] values = [{"val": 10}, {"val": 100}, {"val": 15}, {"val": 17}, {"val": 18}, {"val": 7}] def create_condition(cond_array): print(f'{cond_array[0]["ind"]}') # Use double curly braces to escape #df["signal"] = np.where(df["open"] > 10, 1, -1) <-- what i want to do below df["signal"] = np.where(df[f'{cond_array[0]["ind"]}'] cond_arr[1]['cond'] df[f'{cond_array[2 ["val"]}'], 1, -1) selected_conds = [indicators[0],conds[0],values[0]] create_condition(selected_conds) | With pd.eval to evaluate a dynamic expression: def make_clause(df, col_d, cond_d, val_d): col, cond, val = col_d['ind'], cond_d['cond'], val_d['val'] df["signal"] = np.where(pd.eval(f'df.{col} {cond} {val}', target=df), 1, -1) selected_conds = [indicators[0], conds[0], values[0]] make_clause(df, *selected_conds) In case if a target column name contains spaces, use f'df["{col}"] {cond} {val}' as an expression. | 2 | 2 |
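An alternative that skips string evaluation altogether: map the condition strings to functions from the operator module. This sketch reuses df, indicators, conds and values from the question; the 'and'/'or' entries would still need separate handling (for example np.logical_and / np.logical_or), which is left out here:

```python
import operator
import numpy as np

OPS = {"<": operator.lt, ">": operator.gt, "==": operator.eq,
       "<=": operator.le, ">=": operator.ge}

def create_condition(df, cond_array):
    col, cond, val = cond_array[0]["ind"], cond_array[1]["cond"], cond_array[2]["val"]
    # Apply the chosen comparison column-wise, then map True/False to 1/-1.
    df["signal"] = np.where(OPS[cond](df[col], val), 1, -1)

create_condition(df, [indicators[0], conds[0], values[0]])
```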
77,356,307 | 2023-10-25 | https://stackoverflow.com/questions/77356307/boolean-mask-if-timestamp-from-a-df-is-with-two-time-points-from-second-df-pyt | I have two distinct data frames. Both contain timestamps and corresponding values. I'm aiming to subset or provide boolean indexing where the time from one df falls within two timepoints from a second df. df contains names and start/end times. I want to use this info for df2. So where the same name is present (John), I want to provide a boolean index if Time falls within Start Time and End Time. df = pd.DataFrame({ 'Start Time' : ['2010-03-20 09:27:00','2010-03-20 10:15:00','2010-03-20 11:10:38','2010-03-20 11:32:15','2010-03-20 11:45:38'], 'End Time' : ['2010-03-20 09:40:00','2010-03-20 10:32:15','2010-03-20 11:35:38','2010-03-20 11:38:15','2010-03-20 11:50:38'], "Name":['John', 'Brian', 'Suni', 'Gary', 'Li'], "Occ":[1, 2, 3, 4, 5], }) df2 = pd.DataFrame({ 'Time' : ['2010-03-20 09:27:28','2010-03-20 09:29:15','2010-03-20 09:30:38','2010-03-20 09:32:15','2010-03-20 09:38:38', '2010-03-20 10:15:08','2010-03-20 10:16:36','2010-03-20 10:30:12','2010-03-20 10:31:08','2010-03-20 10:32:48'], 'Name':['John', 'John', 'John', 'John', 'John', 'John', 'John', 'Li', 'Li', 'Li'], }) df['Start Time'] = pd.to_datetime(df['Start Time']) df['End Time'] = pd.to_datetime(df['End Time']) df2['Time'] = pd.to_datetime(df2['Time']) mask1 = (df2['Time'] > df['Start Time']) & (df2['Time'] < df['End Time']) raise ValueError("Can only compare identically-labeled Series objects") ValueError: Can only compare identically-labeled Series objects I/O: Use mask to find rows that meet criteria and subset df2. True True True True True False False False False False Time Name 0 2010-03-20 09:27:28 John 1 2010-03-20 09:29:15 John 2 2010-03-20 09:30:38 John 3 2010-03-20 09:32:15 John 4 2010-03-20 09:38:38 John Edit 2: Is it possible to assign values from df to df2 where same conditions are met? For ex, df is the same except for an additional col: df = pd.DataFrame({ 'Start Time' : ['2010-03-20 09:27:00','2010-03-20 10:15:00','2010-03-20 11:10:38','2010-03-20 11:32:15','2010-03-20 11:45:38'], 'End Time' : ['2010-03-20 09:40:00','2010-03-20 10:32:15','2010-03-20 11:35:38','2010-03-20 11:38:15','2010-03-20 11:50:38'], "Name":['John', 'Brian', 'Suni', 'John', 'Li'], "Occ":[1, 2, 3, 4, 5], }) df2 = pd.DataFrame({ 'Time' : ['2010-03-20 09:27:28','2010-03-20 09:29:15','2010-03-20 09:30:38','2010-03-20 09:32:15','2010-03-20 09:38:38', '2010-03-20 11:11:08','2010-03-20 11:16:36','2010-03-20 11:30:12','2010-03-20 11:31:08','2010-03-20 11:32:48', ], 'Name':['John', 'John', 'John', 'John', 'John', 'Suni', 'Suni', 'Li', 'John', 'John', ], "desc":[6, 6, 6, np.nan, np.nan, 89, 89, np.nan, 2, 2 ], }) Could we use the same function but pass Occ to df2 for the relevant rows. I/O: Time Name desc msk Occ 0 2010-03-20 09:27:28 John 6.0 True 1.0 1 2010-03-20 09:29:15 John 6.0 True 1.0 2 2010-03-20 09:30:38 John 6.0 True 1.0 3 2010-03-20 09:32:15 John NaN True 1.0 4 2010-03-20 09:38:38 John NaN True 1.0 5 2010-03-20 11:11:08 Suni 89.0 True 3.0 6 2010-03-20 11:16:36 Suni 89.0 True 3.0 7 2010-03-20 11:30:12 Li NaN False NaN 8 2010-03-20 11:31:08 John 2.0 False NaN 9 2010-03-20 11:32:48 John 2.0 True 4.0 | You can merge your dataframes as first approach. 
You need to reset_index to preserve the df2 index: idx = (df2.reset_index().merge(df, on='Name') .loc[lambda x:x['Time'].between(x['Start Time'], x['End Time']), 'index']) msk = df2.index.isin(idx) You have to re Output: >>> msk array([ True, True, True, True, True, False, False, False, False, False]) >>> pd.Series(msk, index=df2.index) 0 True 1 True 2 True 3 True 4 True 5 False 6 False 7 False 8 False 9 False dtype: bool Maybe merge_asof can be better? idx = (pd.merge_asof(df2.reset_index().sort_values('Time'), df.sort_values('Start Time'), left_on='Time', right_on='Start Time', by='Name') .loc[lambda x:x['Time'].between(x['Start Time'], x['End Time']), 'index']) msk = df2.index.isin(idx) Edit 2 You can do: cx = (df2.reset_index().merge(df, on='Name') .loc[lambda x:x['Time'].between(x['Start Time'], x['End Time'])] .set_index('index').rename_axis(None)) df2['mask'] = df2.index.isin(cx.index) df2['Occ'] = cx['Occ'] Output: >>> df2 Time Name desc mask Occ 0 2010-03-20 09:27:28 John 6.0 True 1.0 1 2010-03-20 09:29:15 John 6.0 True 1.0 2 2010-03-20 09:30:38 John 6.0 True 1.0 3 2010-03-20 09:32:15 John NaN True 1.0 4 2010-03-20 09:38:38 John NaN True 1.0 5 2010-03-20 11:11:08 Suni 89.0 True 3.0 6 2010-03-20 11:16:36 Suni 89.0 True 3.0 7 2010-03-20 11:30:12 Li NaN False NaN 8 2010-03-20 11:31:08 John 2.0 False NaN 9 2010-03-20 11:32:48 John 2.0 True 4.0 | 2 | 2 |
77,353,981 | 2023-10-24 | https://stackoverflow.com/questions/77353981/layer-conv2d-11-expected-2-variables-but-received-0-variables-during-loading | I wanted to load a model using tf.keras.models.load_model('ACM1_9035P.keras') as I usually do, but all of a sudden this time it wouldn't load. I got the error message seen in the title. This method has worked many times before, but this time it didn't. What's the problem and how can I fix it? I can provide more information about the model and system if needed. I used Google Colab to train, save and load the model. The model was saved using model.save() The model is saved as a .keras file | I found the answer: The model was created and saved using TensorFlow version 2.13.0 I was trying to load it using TensorFlow version 2.14.0 Downgrading the version in Google Colab to 2.13.0 allowed me to load the model as I have done before. To see how you can change the TensorFlow version in Colab, check out this page: https://gist.github.com/mervess/5efbec9f62a55a4cd47c900999db7927 Edit: After deploying many models over different TensorFlow versions and OSs, I have found it is much less of a headache to use the .h5 file. | 4 | 5 |
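A quick way to catch this mismatch early is to record and compare the TensorFlow version, and, per the answer's edit, save in HDF5 when portability matters. A sketch that assumes an in-memory `model` object and reuses the filename from the question:

```python
import tensorflow as tf

print(tf.__version__)                # compare against the training environment
model.save("ACM1_9035P.h5")          # HDF5 format, as suggested in the edit
model = tf.keras.models.load_model("ACM1_9035P.h5")
```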
77,333,833 | 2023-10-20 | https://stackoverflow.com/questions/77333833/how-can-i-update-a-toga-progress-bar-after-each-download | When the download button is pressed it is supposed to download a set of files from a specified url. After each download I update the progress bar accordingly, but the bar only updates after all the files have been downloaded. The UI is being blocked until all downloads are complete. I am using Beeware's Toga for the UI. I have tried using async and await but the download function is synchronous so it doesn't work. Below is the code: """ Downloader """ import pathlib, os import toga from toga.style import Pack from toga.style.pack import COLUMN, ROW, LEFT, CENTER, TOP from downloader.my_libs.downloader import downloader WHITE = '#ffffff' PRIMARY_COLOR = '#000000' SECONDARY_COLOR = '#fbfbfb' ACCENT_COLOR = '#80ff80' BACKGROUND_COLOR = '#ffffff' class Downloader(toga.App): def __init__(self, formal_name=None, app_id=None, app_name=None, id=None, icon=None, author=None, version=None, home_page=None, description=None, startup=None, windows=None, on_exit=None, factory=None): super().__init__(formal_name, app_id, app_name, id, icon, author, version, home_page, description, startup, windows, on_exit, factory) self.resource_folder = pathlib.Path(__file__).joinpath('../resources').resolve() self.url_filename = self.resource_folder.joinpath('url.txt') self.output_path = pathlib.Path(__file__).joinpath('../../../../files').resolve() self.downloader = Downloader(url='', output_path=self.output_path) self.number_of_items_downloaded = 0 self.number_of_items_to_download = 0 def startup(self): main_box = toga.Box(style=Pack(background_color=BACKGROUND_COLOR)) box = toga.Box(style=Pack(background_color=BACKGROUND_COLOR)) url_box = toga.Box() title_label = toga.Label(text='Downloader', style=Pack(text_align=CENTER, font_size=20, flex=1, background_color=BACKGROUND_COLOR, color=PRIMARY_COLOR)) self.url_input = toga.TextInput(style=Pack(flex=4, font_size=12, background_color=SECONDARY_COLOR, color=PRIMARY_COLOR)) self.download_button = toga.Button('Download', on_press=self.on_download, style=Pack(flex=1, font_size=12, padding_top=5, background_color=ACCENT_COLOR, color=PRIMARY_COLOR)) self.info_box = toga.MultilineTextInput(readonly=True, style=Pack(flex=1, padding_top=10, font_size=12, background_color=SECONDARY_COLOR, color=PRIMARY_COLOR)) self.progress_bar = toga.ProgressBar(max=100, value=0, style=Pack(padding_top=10, background_color=SECONDARY_COLOR, color=ACCENT_COLOR)) box.add(title_label) url_box.add(self.url_input) box.add(url_box) box.add(self.download_button) box.add(self.info_box) box.add(self.progress_bar) main_box.add(box) box.style.update(direction=COLUMN, padding=50, flex=1) url_box.style.update(direction=ROW, flex=1, padding_top=5) self.main_window = toga.MainWindow(title=self.formal_name) self.main_window.content = main_box self.main_window.show() def on_download(self, widget): self.progress_bar.start() self.download_files() self.progress_bar.stop() def download_files(self): self.number_of_items_to_download = len(self.downloader.urls) for url in self.downloader.urls: try: self.download_file(url) self.update_progress() except Exception as exception: print('Could not download from:', url) print(exception, '\n\n') self.info_box.value = self.info_box.value + f'\nCould not download from: {url}' + f'exception\n\n' def update_progress(self): self.progress_bar.value = (self.number_of_items_downloaded / self.number_of_items_to_download) * 100 def download_file(self, url): 
title, filepath = self.downloader.download(url) self.info_box.value = self.info_box.value + f'Downloaded {title} \nTo {filepath}\n\n' self.number_of_items_downloaded += 1 def main(): return Downloader() | The main principles to remember in most UI frameworks are: Don't block the main thread for more than a small fraction of a second. Don't touch UI objects on any thread except the main one. Here's a solution that follows both of those rules. I haven't tested it, but it should be pretty close: def on_download(self, widget): # As of Toga 0.4, self.loop is pre-set by Toga itself, # so the next line should be removed. self.loop = asyncio.get_event_loop() threading.Thread(target=self.download_files).start() def download_files(self): self.number_of_items_to_download = len(self.downloader.urls) for url in self.downloader.urls: try: self.download_file(url) except Exception as exception: print('Could not download from:', url) print(exception, '\n\n') # TODO: use call_soon_threadsafe to update the UI # self.info_box.value = self.info_box.value + f'\nCould not download from: {url}' + f'exception\n\n' def update_progress(self, title, filepath): self.info_box.value = self.info_box.value + f'Downloaded {title} \nTo {filepath}\n\n' self.progress_bar.value = (self.number_of_items_downloaded / self.number_of_items_to_download) * 100 def download_file(self, url): title, filepath = self.downloader.download(url) self.number_of_items_downloaded += 1 self.loop.call_soon_threadsafe(self.update_progress, title, filepath) There's no need to join the thread. If you want to update the UI when the download is complete, just do that with another call to call_soon_threadsafe. Even better, you could use a higher-level API like run_in_executor: # Must add `async`! async def on_download(self, widget): loop = asyncio.get_event_loop() self.number_of_items_to_download = len(self.downloader.urls) for url in self.downloader.urls: try: title, filepath = await loop.run_in_executor( None, self.downloader.download, url ) self.info_box.value = self.info_box.value + f'Downloaded {title} \nTo {filepath}\n\n' self.number_of_items_downloaded += 1 self.progress_bar.value = (self.number_of_items_downloaded / self.number_of_items_to_download) * 100 except Exception as exception: print('Could not download from:', url) print(exception, '\n\n') self.info_box.value = self.info_box.value + f'\nCould not download from: {url}' + f'exception\n\n' | 2 | 2 |
77,324,915 | 2023-10-19 | https://stackoverflow.com/questions/77324915/nested-for-loops-to-do-a-check-of-the-elements-of-a-python-list | I need to insert within two new lists, objects that are not present in list 'PListaServiceID' but are present in list 'SListaServiceID', the same but in reverse. so I started by doing this PListaServiceName = [] PListaServiceID = [] SListaServiceName = [] SListaServiceID = [] CListaServiceID = [] C2ListaServiceID = [] for i in jsonPResponse["result"][0]["data"]: x = i['dimensionMap'] PListaServiceName.append(x['dt.entity.service.name']) PListaServiceID.append(x['dt.entity.service']) for p in jsonSResponse["result"][0]["data"]: z = p['dimensionMap'] SListaServiceName.append(z['dt.entity.service.name']) SListaServiceID.append(z['dt.entity.service']) writer = pd.ExcelWriter("tabella.xlsx", engine="xlsxwriter") #print("debug: ",NomePrimoMese) CreateExcel(PListaServiceName,PListaServiceID, NomePrimoMese, writer) CreateExcel(SListaServiceName,SListaServiceID, NomeSecondoMese, writer) for c in PListaServiceID: for z in SListaServiceID: if c!=z: CListaServiceID.append(c) CreateExcel(CListaServiceID, SListaServiceID, "Confronto", writer) writer.close() since I would have to repeat the last for loop also to fill the other new list, can there be a better way to do all this stuff? | I found a difference, it depends on how you want the final data, in my case I wanted a list with all the repetitions within the list. either way works #Elements present in PListaServiceID but not in SListaServiceID #without repetition CListaServiceID = list(set(PListaServiceName) - set(SListaServiceName)) #with repetition ConfrontoListaService = [ x for x in PListaServiceName if x not in SListaServiceName] | 2 | 2 |
77,333,100 | 2023-10-20 | https://stackoverflow.com/questions/77333100/geoalchemy2-geometry-schema-for-pydantic-fastapi | I want to use PostGIS with FastAPI and therefore use geoalchemy2 with alembic to create the table in the DB. But I'm not able to declare the schema in pydantic v2 correctly. My Code looks as follows: # auto-generated from env.py from alembic import op import sqlalchemy as sa from geoalchemy2 import Geometry from sqlalchemy.dialects import postgresql ... def upgrade() -> None: # ### commands auto generated by Alembic - please adjust! ### op.create_geospatial_table('solarpark', sa.Column('id', sa.Integer(), nullable=False), sa.Column('name_of_model', sa.String(), nullable=True), sa.Column('comment', sa.String(), nullable=True), sa.Column('lat', sa.ARRAY(sa.Float()), nullable=True), sa.Column('lon', sa.ARRAY(sa.Float()), nullable=True), sa.Column('geom', Geometry(geometry_type='POLYGON', srid=4326, spatial_index=False, from_text='ST_GeomFromEWKT', name='geometry'), nullable=True), sa.PrimaryKeyConstraint('id') ) op.create_geospatial_index('idx_solarpark_geom', 'solarpark', ['geom'], unique=False, postgresql_using='gist', postgresql_ops={}) op.create_index(op.f('ix_solarpark_id'), 'solarpark', ['id'], unique=False) # ### end Alembic commands ### # models.py from geoalchemy2 import Geometry from sqlalchemy import ARRAY, Column, Date, Float, Integer, String from app.db.base_class import Base class SolarPark(Base): id = Column(Integer, primary_key=True, index=True) name_of_model = Column(String) comment = Column(String, default="None") lat = Column(ARRAY(item_type=Float)) lon = Column(ARRAY(item_type=Float)) geom = Column(Geometry("POLYGON", srid=4326)) # schemas.py from typing import List from pydantic import ConfigDict, BaseModel, Field class SolarParkBase(BaseModel): model_config = ConfigDict(from_attributes=True, arbitrary_types_allowed=True) name_of_model: str = Field("test-model") comment: str = "None" lat: List[float] = Field([599968.55, 599970.90, 599973.65, 599971.31, 599968.55]) lon: List[float] = Field([5570202.63, 5570205.59, 5570203.42, 5570200.46, 5570202.63]) geom: [WHAT TO INSERT HERE?] = Field('POLYGON ((599968.55 5570202.63, 599970.90 5570205.59, 599973.65 5570203.42, 599971.31 5570200.46, 599968.55 5570202.63))') I want the column geom to be a type of geometry to perform spatial operations on it. But how can I declare that in pydantic v2? Thanks a lot in advance! 
| So I found the answer: # schemas.py from typing import List from pydantic import ConfigDict, BaseModel, Field from geoalchemy2.types import WKBElement from typing_extensions import Annotated class SolarParkBase(BaseModel): model_config = ConfigDict(from_attributes=True, arbitrary_types_allowed=True) name_of_model: str = Field("test-model") comment: str = "None" lat: List[float] = Field([599968.55, 599970.90, 599973.65, 599971.31, 599968.55]) lon: List[float] = Field([5570202.63, 5570205.59, 5570203.42, 5570200.46, 5570202.63]) geom: Annotated[str, WKBElement] = Field('POLYGON ((599968.55 5570202.63, 599970.90 5570205.59, 599973.65 5570203.42, 599971.31 5570200.46, 599968.55 5570202.63))') But you also need to change your CRUD function: # crud_solarpark.py class CRUDSolarPark(CRUDBase[SolarPark, SolarParkCreate, SolarParkUpdate]): def get(self, db: Session, *, id: int) -> SolarPark: db_obj = db.query(SolarPark).filter(SolarPark.id == id).first() if db_obj is None: return None if isinstance(db_obj.geom, str): db_obj.geom = WKTElement(db_obj.geom) db_obj.geom = to_shape(db_obj.geom).wkt return db_obj def get_multi(self, db: Session, *, skip: int = 0, limit: int = 100) -> SolarPark: db_obj = db.query(SolarPark).offset(skip).limit(limit).all() if db_obj is None: return None for obj in db_obj: if isinstance(obj.geom, str): obj.geom = WKTElement(obj.geom) obj.geom = to_shape(obj.geom).wkt return db_obj def create(self, db: Session, *, obj_in: SolarParkCreate) -> SolarPark: obj_in_data = jsonable_encoder(obj_in) db_obj = SolarPark(**obj_in_data) # type: ignore db.add(db_obj) db.commit() db.refresh(db_obj) db_obj.geom = to_shape(db_obj.geom).wkt return db_obj def update( self, db: Session, *, db_obj: SolarPark, obj_in: Union[SolarParkUpdate, Dict[str, Any]], ) -> SolarPark: obj_data = jsonable_encoder(db_obj) if isinstance(obj_in, dict): update_data = obj_in else: update_data = obj_in.dict(exclude_unset=True) for field in obj_data: if field in update_data: setattr(db_obj, field, update_data[field]) db.add(db_obj) db.commit() db.refresh(db_obj) db_obj.geom = to_shape(db_obj.geom).wkt return db_obj Hope that helps anybody who uses FastAPI and PostGIS :) | 3 | 8 |
77,344,749 | 2023-10-23 | https://stackoverflow.com/questions/77344749/pythonvirtualenvoperator-gives-error-no-module-named-unusual-prefix-dag | I'm using airflow 2.5.3 with Kubernetes executor and Python 3.7. I've tried to make a simple DAG with only one PythonVirtualnvOperator and two context variables ({{ ts }} and {{ dag }}) passed into it. from datetime import timedelta from pathlib import Path import airflow from airflow import DAG from airflow.operators.python import PythonOperator, PythonVirtualenvOperator import pendulum dag = DAG( default_args={ 'retries': 2, 'retry_delay': timedelta(minutes=10), }, dag_id='fs_rb_cashflow_test5', schedule_interval='0 5 * * 1', start_date=pendulum.datetime(2020, 1, 1, tz='UTC'), catchup=False, tags=['Feature Store', 'RB', 'u_m1ahn'], render_template_as_native_obj=True, ) context = {"ts": "{{ ts }}", "dag": "{{ dag }}"} op_args = [context, Path(__file__).parent.absolute()] def make_foo(*args, **kwargs): print("---> making foo!") print("make foo(...): args") print(args) print("make foo(...): kwargs") print(kwargs) make_foo_task = PythonVirtualenvOperator( task_id='make_foo', python_callable=make_foo, provide_context=True, use_dill=True, system_site_packages=False, op_args=op_args, op_kwargs={ "execution_date_str": '{{ execution_date }}', }, requirements=["dill", "pytz", f"apache-airflow=={airflow.__version__}", "psycopg2-binary >= 2.9, < 3"], dag=dag) Alas, when I'm trying to trigger this DAG, airflow gives me the following error: [2023-10-23, 13:30:40] {process_utils.py:187} INFO - Traceback (most recent call last): [2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/script.py", line 17, in <module> [2023-10-23, 13:30:40] {process_utils.py:187} INFO - arg_dict = dill.load(file) [2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/lib/python3.7/site-packages/dill/_dill.py", line 287, in load [2023-10-23, 13:30:40] {process_utils.py:187} INFO - return Unpickler(file, ignore=ignore, **kwds).load() [2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/lib/python3.7/site-packages/dill/_dill.py", line 442, in load [2023-10-23, 13:30:40] {process_utils.py:187} INFO - obj = StockUnpickler.load(self) [2023-10-23, 13:30:40] {process_utils.py:187} INFO - File "/tmp/venv5ifve2a5/lib/python3.7/site-packages/dill/_dill.py", line 432, in find_class [2023-10-23, 13:30:40] {process_utils.py:187} INFO - return StockUnpickler.find_class(self, module, name) [2023-10-23, 13:30:40] {process_utils.py:187} INFO - ModuleNotFoundError: No module named 'unusual_prefix_4c3a45107010a4223aa054ffc5f7bffc78cce4e7_dag' Why does it give me this strange error -- and how can it be fixed? | This problem occurs if this function make_foo that is passed to an airflow operator as python_callable argument is defined in the same Python source with the DAG object. And the DAG has finally started working when I moved the make_foo function to another Python module. 
Here's my code now: dags/strange_pickling_error/dag.py: import datetime import pendulum import airflow from airflow import DAG from airflow.operators.python import PythonOperator, PythonVirtualenvOperator import dill from strange_pickling_error.some_moar_code import make_foo dag = DAG( dag_id='strange_pickling_error_dag', schedule_interval='0 5 * * 1', start_date=datetime.datetime(2020, 1, 1), catchup=False, render_template_as_native_obj=True, ) context = {"ts": "{{ ts }}", "dag_run": "{{ dag_run }}"} make_foo_task = PythonVirtualenvOperator( task_id='make_foo', python_callable=make_foo, use_dill=True, system_site_packages=False, op_args=[context], requirements=[f"dill=={dill.__version__}", f"apache-airflow=={airflow.__version__}", "psycopg2-binary >= 2.9, < 3", f"pendulum=={pendulum.__version__}", "lazy-object-proxy"], dag=dag) dags/strange_pickling_error/some_moar_code.py: def make_foo(*args, **kwargs): print("---> making foo!") print("make foo(...): args") print(args) print("make foo(...): kwargs") print(kwargs) | 4 | 0 |
77,333,437 | 2023-10-20 | https://stackoverflow.com/questions/77333437/faster-way-to-pass-a-numpy-array-through-a-protobuf-message | I have a 921000 x 3 numpy array (921k 3D points, one point per row) that I am trying to pack into a protobuf message and I am running into performance issues. I have control over the protocol and can change it as needed. I am using Python 3.10 and numpy 1.26.1. I am using protocol buffers because I'm using gRPC. For the very first unoptimized attempt I was using the following message structure: message Point { float x = 1; float y = 2; float z = 3; } message BodyData { int32 id = 1; repeated Point points = 2; } And packing the points in one at a time (let data be the large numpy array): body = BodyData() for row in data: body.points.append(Point(x=row[0], y=row[1], z=row[2])) This takes approximately 1.6 seconds, which is way too slow. For the next attempt I ditched the Point structure and decided to transmit the points as a flat array of X/Y/Z triplets: message Points { repeated float xyz = 1; } message BodyData { int32 id = 1; Points points = 2; } I did some performance tests to determine the fastest way to append a 2D numpy array to a list, and got the following results: # Time: 80.1ms points = [] points.extend(data.flatten()) # Time: 96.8ms points = [] points.extend(data.reshape((data.shape[0] * data.shape[1],))) # Time: 76.5ms - FASTEST points = [] points.extend(data.flatten().tolist()) From this I determined that .extend(data.flatten().tolist()) was the fastest. However, when I applied this to the protobuf message, it slowed way down: # Time: 436.0ms body = BodyData() body.points.xyz.extend(data.flatten().tolist()) So the fastest I've been able to figure out how to pack the numpy array into any protobuf message is 436ms for 921000 points. This is very far short of my performance target, which is ~12ms per copy. I'm not sure if I can get close to that but, is there any way I can do this more quickly? | If your goal is just to send something over gRPC to another program that you control, then you don't actually have to convert everything into "native" protobuf messages; you can use a protobuf bytes field to store another serialization format, such as numpy tobytes() output, or Arrow. This will be much faster. | 2 | 4 |
77,353,466 | 2023-10-24 | https://stackoverflow.com/questions/77353466/decrypting-a-chacha20-poly1305-string-without-using-tag-mac | I am able to successfully encrypt and decrypt a string using Chacha20-Poly1305 in python without using the tag (or mac) as follows (using pycryptodome library): from Crypto.Cipher import ChaCha20_Poly1305 key = '20d821e770a6d3e4fc171fd3a437c7841d58463cb1bc7f7cce6b4225ae1dd900' #random generated key nonce = '18c02beda4f8b22aa782444a' #random generated nonce def decode(ciphertext): cipher = ChaCha20_Poly1305.new(key=bytes.fromhex(key), nonce=bytes.fromhex(nonce)) plaintext = cipher.decrypt(bytes.fromhex(ciphertext)) return plaintext.decode('utf-8') def encode(plaintext): cipher = ChaCha20_Poly1305.new(key=bytes.fromhex(key), nonce=bytes.fromhex(nonce)) ciphertext = cipher.encrypt(plaintext.encode('utf-8')) return ciphertext.hex() encrypted = encode('abcdefg123') print(encrypted) # will print: ab6cf9f9e0cf73833194 print(decode(encrypted)) # will print: abcdefg123 However, taking this to Javascript, I cannot find a library that will decrypt this without requiring the mac (that I would have gotten if I had used encrypt_and_digest()). I tried virtually every library I could find (Npm, as this would be used in a React application), they all require the mac part for the decryption. for example: libsodium-wrappers, js-chacha20 etc. How can I overcome this? P.S. I know it is less safe to not use the mac part, this is for educational purposes. === EDIT === The string was encrypted with Chacha20-Poly1305 and its encrypted output is already given. It cannot be re-encrypted using a different algorithm. I cannot encrypt using Chacha20-Poly1305 and decrypt with Chacha20 or vice versa because encrypting with the same key and nonce using only Chacha20 (rather than Chacha20-Poly1305) gives me a different encrypted output and this it is not helping. | The issue is caused by different values for the initial counter regarding encryption/decryption in the libraries applied. Background: ChaCha20 is operated in counter mode to derive a key stream that is XORed with the plaintext. Thereby an (increment-by-one) counter counts through a sequence of input blocks for the ChaCha20 block function. The initial counter is the start value of that counter. For more details on ChaCha20, see here and RFC 8439. A similar situation regarding different initial counters can be found in the PyCryptodome library itself: With ChaCha20-Poly1305, the counter 0 is used to determine the authentication tag, so the counter 1 is the first value applied for encryption, s. here. PyCryptodome follows this logic within the ChaCha20_Poly1305 implementation for encrypt() as well, to ensure that encrypt() and digest() give the same result as encrypt_and_digest(). However, within the ChaCha20 implementation, the initial counter for encryption is 0 by default (and not 1). Therefore, to allow decryption of the ciphertext generated with ChaCha20_Poly1305#encrypt() using ChaCha205#decrypt(), the counter must be explicitly set to 1 for the latter. 
This can be achieved with the seek() method (note that seek() requires the position in bytes; since a ChaCha20 block is 64 bytes in size, the counter value 1 corresponds to byte index 64 in the key stream): from Crypto.Cipher import ChaCha20, ChaCha20_Poly1305 key = '20d821e770a6d3e4fc171fd3a437c7841d58463cb1bc7f7cce6b4225ae1dd900' # 32 bytes key nonce = '18c02beda4f8b22aa782444a' # 12 bytes nonce plaintext = 'abcdefg123' # Encrypt with ChaCha20_Poly1305#encrypt() cipher = ChaCha20_Poly1305.new(key=bytes.fromhex(key), nonce=bytes.fromhex(nonce)) ciphertext = cipher.encrypt(plaintext.encode('utf-8')) print(ciphertext.hex()) # ab6cf9f9e0cf73833194 # Decrypt with ChaCha20#decrypt() decipher = ChaCha20.new(key=bytes.fromhex(key), nonce=bytes.fromhex(nonce)) decipher.seek(64) # set counter to 1 (seek() requires the position in bytes, a ChaCha20 block is 64 bytes) decrypted = decipher.decrypt(ciphertext) print(decrypted.decode('utf-8')) # ab6cf9f9e0cf73833194 The same applies to js-chacha20, a ChaCha20 implementation for JavaScript. By default, 0 is applied as the initial counter for encryption, s. here. Thus, to be compatible with ciphertexts created with PyCryptodome's ChaCha20_Poly1305#encrypt(), the initial counter must be explicitly set to 1: const key = hex2ab("20d821e770a6d3e4fc171fd3a437c7841d58463cb1bc7f7cce6b4225ae1dd900"); // 32 bytes key const nonce = hex2ab("18c02beda4f8b22aa782444a"); // 12 bytes nonce const ciphertext = hex2ab("ab6cf9f9e0cf73833194"); // ciphertext from PyCryptodome const decryptor = new JSChaCha20(key, nonce, 1); // Fix: set initial counter = 1 in the 3rd parameter const plaintext = decryptor.decrypt(ciphertext); console.log(new TextDecoder().decode(plaintext)); // abcdefg123 function hex2ab(hex){ return new Uint8Array(hex.match(/[\da-f]{2}/gi).map(function (h) {return parseInt(h, 16)})); } <script src=" https://cdn.jsdelivr.net/npm/[email protected]/src/jschacha20.min.js "></script> With this change, the ciphertext is decrypted correctly. For completeness: The ChaCha20 specification allows any value for the initial counter and explicitly lists the values 1 (e.g. in the context of an AEAD algorithm) and 0 as the usual values. In chapter 2.4. The ChaCha20 Encryption Algorithm of RFC 8439, ChaCha20 and Poly1305 for IETF Protocols it states regarding the counter: A 32-bit initial counter. This can be set to any number, but will usually be zero or one. It makes sense to use one if we use the zero block for something else, such as generating a one-time authenticator key as part of an AEAD algorithm. | 2 | 3 |
77,344,919 | 2023-10-23 | https://stackoverflow.com/questions/77344919/airflow-importing-dag-with-dependencies | I deployed airflow on kubernetes using the official helm chart. I'm using KubernetesExecutor and git-sync. I am using a seperate docker image for my webserver and my workers - each DAG gets its own docker image. I am running into DAG import errors at the airflow home page. E.g. if one of my DAGs is using pandas then I'll get Broken DAG: [/opt/airflow/dags/repo/dags/airflow_demo/ieso.py] Traceback (most recent call last): File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/opt/airflow/dags/repo/dags/project1/dag1.py", line 7, in <module> from pandas import read_parquet ModuleNotFoundError: No module named 'pandas' I dont have pandas installed on the webserver or scheduler docker images, because if I understand it correctly you shouldn't install the individual dependencies on these. I am getting the same error when running airflow dags list-import-errors on the scheduler pod. I do have pandas installed on the worker pod, but it doesn't get run, because the DAG cannot be discovered through these errors. How do I make airflow discover this DAG without installing pandas to either scheduler or webserver? I know installing it on both will fix this, however I am not interested in doing it this way. | It's great crafting an answer laid out by the OP's comments 😜 In the comments, @user430953 provided this link to Airflow's documentation, where it states: One of the important factors impacting DAG loading time, that might be overlooked by Python developers is that top-level imports might take surprisingly a lot of time and they can generate a lot of overhead and this can be easily avoided by converting them to local imports inside Python callables for example. This makes sense: The scheduler will monitor the Dag Folder and it will try to load instances of class DAG by executing the .py files it finds in the folder as if they were regular "importable" Python modules (erm... well... because they are) and then "fishing" entities (objects? instances?) of type DAG out the module's variables. That happens, more specifically, here and then here (very broadly speaking and just if you're curious) As with any regular Python module, the Python interpreter will "run" the code on import, so any top level import will be attempted. If the module imports a package that the scheduler's Python interpreter doesn't have, an ImportError will be thrown. Since the OP stated... I do have pandas installed on the worker pod ... this means the actual work will be doable. We just need to avoid the scheduler from running this import. The fastest way moving the import statement into the function itself, so the pandas module is only imported when the actual work is performed (unsurprisingly, in the worker machines... which do have Pandas installed) So moving from: import pandas as pd with DAG(...) as dag: @task def some_pandas_work(**context): # ... to... with DAG(...) as dag: @task def some_pandas_work(**context): import pandas as pd # ... ... should do the trick | 2 | 1 |
77,354,058 | 2023-10-24 | https://stackoverflow.com/questions/77354058/micromamba-and-dockerfile-error-bin-bash-activate-no-such-file-or-directory | Used to have a Dockerfile with conda and flask that worked locally but after installing pytorch had to switch to micromamba, the old CMD no longer works after switching to image mambaorg/micromamba:1-focal-cuda-11.7.1, this is how my Dockerfile looks like right now: FROM mambaorg/micromamba:1-focal-cuda-11.7.1 WORKDIR /app COPY environment.yml . RUN micromamba env create --file environment.yml EXPOSE 5001 COPY app.py . CMD ["/bin/bash", "-c", "source activate env-app && flask run --host=0.0.0.0 --port=5001"] Now I get error: /bin/bash: activate: No such file or directory but before when I was using conda it was working, I think the CMD command needs to be updated. | Assuming these 2 files: environment.yml name: testenv channels: - conda-forge dependencies: - python >= 3.9 - flask app.py from flask import Flask # Create a Flask application app = Flask(__name__) # Define a route for the root URL @app.route("/") def hello_world(): return "Hello, World!" if __name__ == "__main__": # Run the Flask app on the local development server app.run() Then in your Dockerfile you don't activate the environmnent but use the path to environment, e.g. /opt/conda/envs/testenv/bin/flask (note the testenv in the path and in the environment.yaml): Dockerfile FROM mambaorg/micromamba:1-focal-cuda-11.7.1 WORKDIR /app COPY environment.yml . RUN micromamba env create --file environment.yml EXPOSE 5001 COPY app.py . CMD micromamba run -n testenv flask run --host=0.0.0.0 --port=5001 To build: docker build . -f Dockerfile -t my/image:name To run: docker run --rm -i -t -p 5001:5001 my/image:name Then navigating the browser to localhost:5001 you should see Hello World. | 2 | 3 |
77,350,095 | 2023-10-24 | https://stackoverflow.com/questions/77350095/differences-between-matlab-and-scipy-numerical-integration-results-with-bessel-f | I'm looking to replicate the results of the following MATLAB code in SciPy. MATLAB Version: f = @(x, y) besselh(0, 2, x.^2 + y.^2); integral2(f, -0.1, 0.1, -0.1, 0.1) The MATLAB result is: ans = 0.0400 + 0.1390i SciPy Version: f = lambda x, y: scipy.special.hankel2(0, x**2 + y**2) scipy.integrate.dblquad(f, -0.1, 0.1, lambda x: -0.1, lambda x: 0.1) However, when I run the SciPy code, I receive the following warning: IntegrationWarning: The occurrence of roundoff error is detected ... And the result is different from the MATLAB output: (nan, 2.22e-15) Can someone explain why I'm getting this warning and how to achieve a result similar to MATLAB's? | I believe the issue is that the integral goes through the origin, which produces NaN for H0. This is because H0 = J0 - i*Y0, and Y0 asymptotes the y-axis. For this specific case, you can use the H0 definition above to split the integrals into the real and complex parts. When doing this, you can make sure you don't cross 0 for the complex part of the integral. import scipy j0 = lambda x, y: scipy.special.jv(0, x**2 + y**2) y0 = lambda x, y: scipy.special.yv(0, x**2 + y**2) r1 = scipy.integrate.dblquad(j0, -0.1, 0.1, -0.1, 0.1) r21 = scipy.integrate.dblquad(y0, 1e-10, 0.1, -0.1, 0.1) r22 = scipy.integrate.dblquad(y0, -0.1, -1e-10, -0.1, 0.1) res = r1[0] - 1j*(r21[0]+r22[0]) # 0.039999377783047595+0.13896313686573833j | 4 | 3 |
77,347,140 | 2023-10-23 | https://stackoverflow.com/questions/77347140/how-to-create-a-single-figure-legend-for-geoaxes-subplots | I have looked at the many other questions on here to try and solve this but for whatever reason I cannot. Each solution seems to give me the same error, or returns nothing at all. I have a list of six dataframes I am looping through to create a figure of 6 maps. Each dataframe is formatted similar with the only differenc being their temporal column. Each map has the same classification scheme created through cartopy. The colors are determined with a colormap, the dataframe itself has no colors related to the values. I want a singular legend for all the maps, so that it is more visible to readers, and less redundant. Here is my code: import cartopy.crs as ccrs import cartopy.feature as cfeature import mapclassify from matplotlib.colors import rgb2hex from matplotlib.colors import ListedColormap plt.style.use('seaborn-v0_8-dark') # Define the Robinson projection robinson = ccrs.Robinson() # Create a 3x2 grid of subplots fig, axs = plt.subplots(3, 2, figsize=(12, 12), subplot_kw={'projection': robinson}) # Flatten the subplot array for easy iteration axs = axs.flatten() # Define color map and how many bins needed cmap = plt.cm.get_cmap('YlOrRd', 5) #Blues #Greens #PuRd #YlOrRd # Any countries with NaN values will be colored grey missing_kwds = dict(color='grey', label='No Data') # Loop through the dataframes and create submaps for i, df in enumerate(dataframes): # Create figure and axis with Robinson projection mentionsgdf_robinson = df.to_crs(robinson.proj4_init) # Plot the submap ax = axs[i] # Add land mask and gridlines ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor='lightgrey') gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True, linewidth=1, color='gray', alpha=0.3, linestyle='--') gl.xlabel_style = {'fontsize': 7} gl.ylabel_style = {'fontsize': 7} # Classification scheme options: EqualInterval, Quantiles, NaturalBreaks, UserDefined etc. mentionsgdf_robinson.plot(column='mentions', ax=ax, legend=True, #True cmap=cmap, legend_kwds=({"loc":'center left', 'title': 'Number of Mentions', 'prop': {'size': 7, 'family': 'serif'}}), missing_kwds=missing_kwds, scheme="UserDefined", classification_kwds = {'bins':[20, 50, 150, 300, 510]}) # Set the titles of each submap ax.set_title(f'20{i+15}', size = 15, family = 'Serif') # Define the bounds of the classification scheme upper_bounds = mapclassify.UserDefined(mentionsgdf_robinson.mentions, bins=[20, 50, 150, 300, 510]).bins bounds = [] for index, upper_bound in enumerate(upper_bounds): if index == 0: lower_bound = mentionsgdf_robinson.mentions.min() else: lower_bound = upper_bounds[index-1] bound = f'{lower_bound:.0f} - {upper_bound:.0f}' bounds.append(bound) # replace the legend title and increase font size legend_title = ax.get_legend().get_title() legend_title.set_fontsize(8) legend_title.set_family('serif') # get all the legend labels and increase font size legend_labels = ax.get_legend().get_texts() # replace the legend labels for bound, legend_label in zip(bounds, legend_labels): legend_label.set_text(bound) fig.suptitle(' Yearly Country Mentions in Online News about Species Threatened by Trade ', fontsize=15, family = 'Serif') # Adjust spacing between subplots plt.tight_layout(pad=4.0) # Save the figure #plt.savefig('figures/submaps_5years.png', dpi=300) # Show the submap plt.show() And here is my result as of right now. 
I would like to have just one legend somewhere to the side of center of the maps. I have tried this code as suggested here but only received a UserWarning: Legend does not support handles for PatchCollection instances. Also I didn't know how to possibly incorporate all the other modifications I need for the legend outside of the loop (bounds, font, bins, etc.) handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, loc='upper center') Here's data for three years 2015-2017: https://jmp.sh/s/ohkSJpaMZ4c1GifIX0nu Here's all the files for the global shapefile that I've used: https://jmp.sh/uTP9DZsC Using this data and the following code should allow you to run the full visualization code shared above. Thank you. import geopandas as gpd import pandas as pd # Read in globe shapefile dataframe world = gpd.read_file("TM_WORLD_BORDERS-0.3.shp") # Read in sample dataframe df = pd.read_csv("fifsixseventeen.csv", sep = ";") # Separate according to date column fifteen = df[(df['date'] == 2015)].reset_index(drop=True) sixteen = df[(df['date'] == 2016)].reset_index(drop=True) seventeen = df[(df['date'] == 2017)].reset_index(drop=True) # Function to merge isocodes of the countries with world shapefile def merge_isocodes(df): # Groupby iso3 column in order to merge with shapefile allmentions = df.groupby("iso3")['mentions'].sum().sort_values(ascending = False).reset_index() # Merge on iso3 code mentionsgdf = pd.merge(allmentions, world, left_on=allmentions["iso3"], right_on=world["ISO3"], how="right").drop(columns = "key_0") # Redefine as a geodataframe mentionsgdf = gpd.GeoDataFrame(mentionsgdf, geometry='geometry') return mentionsgdf onefive = merge_isocodes(fifteen) onesix = merge_isocodes(sixteen) oneseven = merge_isocodes(seventeen) # Create a list to store each years' dataframes dataframes = [onefive, onesix, oneseven] | These Axes are cartopy.mpl.geoaxes.GeoAxes axs[0].get_legend_handles_labels() results in UserWarning: Legend does not support handles for PatchCollection instances., and returns ([], []). Use Axes.get_legend() to get the Legend instance. .legend_handles is new in matplotlib 3.7.2 and returns the list of Artist objects. Alternatively, use .legendHandles, but it is deprecated. See How to put the legend outside the plot for details about positioning the legend. Also relevant, Moving matplotlib legend outside of the axis makes it cutoff by the figure box: fig.savefig('fig.png', bbox_inches='tight') Caveat: assumes all legends are the same. Tested in python 3.9.18, geopandas 0.12.2, matplotlib 3.7.2, cartopy 0.22.0. ... for i, df in enumerate(dataframes): ... # after the for loop, use the following code # extract the legend from an axes - used the last axes for the smaller sample data l = axs[2].get_legend() # extract the handles handles = l.legend_handles # get the label text labels = [v.get_text() for v in l.texts] # get the title text title = l.get_title().get_text() # create the figure legend fig.legend(title=title, handles=handles, labels=labels, bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) # iterate through each Axes for ax in axs: # if the legend isn't None (if this condition isn't required, remove it and use only ax.get_legend().remove()) if gt := ax.get_legend(): # remove the legend gt.remove() | 2 | 1 |
77,352,151 | 2023-10-24 | https://stackoverflow.com/questions/77352151/how-can-i-do-the-dot-product-of-a-window-and-a-constant-vector-in-polars | I am trying to do calculate a dot product between a window and a static array for all windows in my polars DataFrame and I am struggling to figure out the right way to do this. Anyone here have any hints for me? Here is a quick working example: import polars as pl import numpy as np dummy_data = { "id_": [1, 2, 3, 4, 5, 6, 7, 8], "value": [1, 1, 2, 2, 3, 3, 4, 4] } const = np.array([.5, .5]) df_ = pl.DataFrame(dummy_data) df_ = df_.set_sorted("id_") # This panics: Cannot apply operation on arrays of different lengths df_.rolling("id_", period="2i").agg(pl.col("value").dot(pl.lit(const))) expected = { "id_": [1, 2, 3, 4, 5, 6, 7, 8], "value": [None, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0] } What I expect is for the following calculations to happen given my example above: np.dot([1.0, 1.0], const) np.dot([1.0, 2.0], const) np.dot([2.0, 2.0], const) etc. Presumably this doesn't work because the first window only has 1 value instead of 2? At least that is what it seems like to me. Anyone have a better way to do this? | After receiving a few answers, and finding one of my own, I wanted to share some benchmarks. Here are 3 potential solutions to the problem (adjusted slightly to provide a more comprehensive look at the questions by adding a group column): Data: groups = [] for i in range(n): for j in range(m): groups.append(i) dummy_data = { "id_": list(range(0, m)) * n, "group": groups, "value": [random.randint(1, 50) / 100 for _ in range(n*m)] } vec = np.array([random.randint(1, 100) / 100 for _ in range(x)]) df_ = pl.DataFrame(dummy_data) df_ = df_.sort("id_") rolling_map courteous of ranc on the polars discord: df_.with_columns( pl.col("value").cast(pl.Float32).rolling_map( lambda x: sum([x[i]*vec[i] for i in range(len(x))]) , window_size=len(vec) ).over("group") ) rolling courteous of @DeanMacGregor: ( df_.lazy() .drop('value') .join( df_.lazy() .rolling("id_", by="group", period=f"{len(vec)}i") .agg(pl.col("value")) .filter(pl.col('value').list.len()==len(vec)) .with_row_count('i') .with_columns(const=pl.Series(vec).implode()) .explode('value', 'const') .group_by('i') .agg( id_=pl.col('id_').first(), group=pl.col("group").first(), value=(pl.col('value').cast(pl.Float64)).dot(pl.col('const')) ) .drop('i'), on=['id_', "group"], how='left' ) ).collect() shift which is the solution I came up with: df_.lazy().with_columns( res = pl.sum_horizontal([pl.col("value").shift(i).over("group") * vec[i] for i in range(len(vec))]) ).select( "id_", pl.when(pl.col("id_") < len(vec)) .then(None) .otherwise(pl.col("res")) .alias("res") ).collect() Each provide the exact same answers. Here are the compute times for each on several different sizes of dataset: ################### n=10 m=10 x=2 rolling_map: 826 µs ± 96.6 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) rolling: 387 µs ± 46.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) shift: 160 µs ± 6.37 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ################### n=10000 m=10 x=10 rolling_map: 119 ms ± 2.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) rolling: 8.22 ms ± 170 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) shift: 24.3 ms ± 1.26 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) ################### n=10 m=10000 x=10 rolling_map: 1.01 s ± 7.18 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) rolling: 25.5 ms ± 318 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) shift: 5.3 ms ± 119 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) ################### n=1000 m=1000 x=100 rolling_map: > 60 seconds rolling: 2.34 s ± 225 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) shift: 731 ms ± 28.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ################### Hopefully these numbers are useful if others need to run some computation similar to this. The correct choice depends on the data being used and the perceived readability of the code. | 3 | 2 |
77,348,561 | 2023-10-23 | https://stackoverflow.com/questions/77348561/how-does-the-aws-cli-open-a-browser-and-wait-for-a-response-before-proceeding | I'm trying to build a golang cli tool for my company and as part of that build login and some other features into the tool. For the life of me I can't figure out how AWS is able to open a browser window and wait for a few button clicks before proceeding from the CLI. https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_StartDeviceAuthorization.html Here's the CLI command I input aws sso login --profile login Attempting to automatically open the SSO authorization page in your default browser. If the browser does not open or you wish to use a different device to authorize this request, open the following URL: https://device.sso.us-east-1.amazonaws.com/ Then enter the code: abcd-efgh Successfully logged into Start URL: https://d-1421421423.awsapps.com/start Here's the Python docs as well for start device auth and create token https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-oidc/client/start_device_authorization.html https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sso-oidc/client/create_token.html | One option that I just threw together that seems to be working is a loop that just checks every second for attempts <= 30 { fmt.Println(attempts) token, err := idc.CreateToken(context.TODO(), &createTokenInput) if err != nil { // if debug is enabled show error log.Debug(err.Error()) attempts++ // wait 1 second time.Sleep(1 * time.Second) } else { response = *token break } } Edit: After running AWS sso login —debug I noticed that the logs are actually looping and running the createToken query over and over, so AWS is doing something similar to the above. | 2 | 1 |
77,333,277 | 2023-10-20 | https://stackoverflow.com/questions/77333277/deprecationwarning-sippytypedict-is-deprecated-pyqt5 | I was writing the most simplest piece of code to run some small app. I got the next warning message: ~\PycharmProjects\LoggerTest\main.py:10: DeprecationWarning: sipPyTypeDict() is deprecated, the extension module should use sipPyTypeDictRef() instead class MainWindow(QMainWindow): My code: # Import libraries import sys # from PyQt5 import QtGui # from PyQt5.QtCore import QEvent from PyQt5.QtWidgets import QApplication, QMainWindow # from PyQt5.QtCore import pyqtSignal #, pyqtSlot from gui_ui import Ui_MainWindow class MainWindow(QMainWindow): def __init__(self, parent=None, **kwargs): super(MainWindow, self).__init__(parent=parent) self.ui = Ui_MainWindow() self.ui.setupUi(self) self.show() if __name__ == '__main__': app = QApplication(sys.argv) g = MainWindow() app.exec_() What does that warning mean? | It is solved in python-3.12.0 After upgrading, the warning should be away. Ref: https://github.com/python/cpython/pull/105747 | 2 | 5 |
77,336,943 | 2023-10-21 | https://stackoverflow.com/questions/77336943/how-to-enforce-in-gekko-writing-a-sudoku-solver | I am writing a Sudoku solver in Gekko (mostly for fun, I'm aware there are better tools for this.) I would like to express the constraints the variables in each row, column, and square most be different from each other. Here is the code: import itertools from gekko import GEKKO SQUARES = [ ( (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2) ), ( (0, 3), (0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5) ), ( (0, 6), (0, 7), (0, 8), (1, 6), (1, 7), (1, 8), (2, 6), (2, 7), (2, 8) ), ( (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4, 2), (5, 0), (5, 1), (5, 2) ), ( (3, 3), (3, 4), (3, 5), (4, 3), (4, 4), (4, 5), (5, 3), (5, 4), (5, 5) ), ( (3, 6), (3, 7), (3, 8), (4, 6), (4, 7), (4, 8), (5, 6), (5, 7), (5, 8) ), ( (6, 0), (6, 1), (6, 2), (7, 0), (7, 1), (7, 2), (8, 0), (8, 1), (8, 2) ), ( (6, 3), (6, 4), (6, 5), (7, 3), (7, 4), (7, 5), (8, 3), (8, 4), (8, 5) ), ( (6, 6), (6, 7), (6, 8), (7, 6), (7, 7), (7, 8), (8, 6), (8, 7), (8, 8) ) ] BOARD_DIMS = (9, 9) PRE_FILLED_SQUARES = [ (0, 1, 2), (0, 3, 5), (0, 5, 1), (0, 7, 9), (1, 0, 8), (1, 3, 2), (1, 3, 2), (1, 5, 3), (1, 8, 6), (2, 1, 3), (2, 4, 6), (2, 7, 7), (3, 2, 1), (3, 6, 6), (4, 0, 5), (4, 1, 4), (4, 7, 1), (4, 8, 9), (5, 2, 2), (5, 6, 7), (6, 1, 9), (6, 4, 3), (6, 7, 8), (7, 0, 2), (7, 3, 8), (7, 5, 4), (7, 8, 7), (8, 1, 1), (8, 3, 9), (8, 5, 7), (8, 7, 6) ] m = GEKKO() m.options.SOLVER = 1 board = m.Array(m.Var, BOARD_DIMS, lb=1, ub=9, integer=True, value=1) for i, j, val in PRE_FILLED_SQUARES: board[i,j].value = val m.Equation(board[i,j]==val) for row in range(BOARD_DIMS[0]): for i, j in itertools.combinations(range(BOARD_DIMS[1]), r=2): diff = m.if3(board[row, i] - board[row, j], -1, 1) diff_prime = m.if3(board[row, j] - board[row, i], -1, 1) m.Equation(diff + diff_prime == 0) for col in range(BOARD_DIMS[1]): for i, j in itertools.combinations(range(BOARD_DIMS[0]), r=2): diff = m.if3(board[i, col] - board[j, col], -1, 1) diff_prime = m.if3(board[j, col] - board[i, col], -1, 1) m.Equation(diff + diff_prime == 0) for square in SQUARES: for (y, x), (y_prime, x_prime) in itertools.combinations(square, r=2): diff = m.if3(board[y, x] - board[y_prime, x_prime], -1, 1) diff_prime = m.if3(board[y_prime, x_prime] - board[y, x], -1, 1) m.Equation(diff + diff_prime == 0) m.solve(disp=True) print(board) Currently, the solvers respects the pre-filled squares I've filled in but fills every other square with 1.0. The current expression of inequality, with the double subtraction, is because trying something along the lines of m.Equation(board[row,j] != board[row,i]) gave an error to the effect that objects of type "int" don't have a length. | There is no != (not equal) constraint in gekko. You can set the sum of each row to be 1 where each value can be 0 or 1. If the value is equal to 1 then that corresponding value is filled in that cell. 
Here is an MILP version that is written in PuLP and solved with the MILP CBC solver (credit to TowardsDataScience) import pulp as plp def add_default_sudoku_constraints(prob, grid_vars, rows, cols, grids, values): # Constraint to ensure only one value is filled for a cell for row in rows: for col in cols: prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[row][col][value] for value in values]), sense=plp.LpConstraintEQ, rhs=1, name=f"constraint_sum_{row}_{col}")) # Constraint to ensure that values from 1 to 9 is filled only once in a row for row in rows: for value in values: prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[row][col][value]*value for col in cols]), sense=plp.LpConstraintEQ, rhs=value, name=f"constraint_uniq_row_{row}_{value}")) # Constraint to ensure that values from 1 to 9 is filled only once in a column for col in cols: for value in values: prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[row][col][value]*value for row in rows]), sense=plp.LpConstraintEQ, rhs=value, name=f"constraint_uniq_col_{col}_{value}")) # Constraint to ensure that values from 1 to 9 is filled only once in the 3x3 grid for grid in grids: grid_row = int(grid/3) grid_col = int(grid%3) for value in values: prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[grid_row*3+row][grid_col*3+col][value]*value for col in range(0,3) for row in range(0,3)]), sense=plp.LpConstraintEQ, rhs=value, name=f"constraint_uniq_grid_{grid}_{value}")) def add_diagonal_sudoku_constraints(prob, grid_vars, rows, cols, values): # Constraint from top-left to bottom-right - numbers 1 - 9 should not repeat for value in values: prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[row][row][value]*value for row in rows]), sense=plp.LpConstraintEQ, rhs=value, name=f"constraint_uniq_diag1_{value}")) # Constraint from top-right to bottom-left - numbers 1 - 9 should not repeat for value in values: prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[row][len(rows)-row-1][value]*value for row in rows]), sense=plp.LpConstraintEQ, rhs=value, name=f"constraint_uniq_diag2_{value}")) def add_prefilled_constraints(prob, input_sudoku, grid_vars, rows, cols, values): for row in rows: for col in cols: if(input_sudoku[row][col] != 0): prob.addConstraint(plp.LpConstraint(e=plp.lpSum([grid_vars[row][col][value]*value for value in values]), sense=plp.LpConstraintEQ, rhs=input_sudoku[row][col], name=f"constraint_prefilled_{row}_{col}")) def extract_solution(grid_vars, rows, cols, values): solution = [[0 for col in cols] for row in rows] grid_list = [] for row in rows: for col in cols: for value in values: if plp.value(grid_vars[row][col][value]): solution[row][col] = value return solution def print_solution(solution, rows,cols): # Print the final result print(f"\nFinal result:") print("\n\n+ ----------- + ----------- + ----------- +",end="") for row in rows: print("\n",end="\n| ") for col in cols: num_end = " | " if ((col+1)%3 == 0) else " " print(solution[row][col],end=num_end) if ((row+1)%3 == 0): print("\n\n+ ----------- + ----------- + ----------- +",end="") def solve_sudoku(input_sudoku, diagonal = False ): # Create the linear programming problem prob = plp.LpProblem("Sudoku_Solver") rows = range(0,9) cols = range(0,9) grids = range(0,9) values = range(1,10) # Decision Variable/Target variable grid_vars = plp.LpVariable.dicts("grid_value", (rows,cols,values), cat='Binary') # Set the objective function # Sudoku works only on the constraints - feasibility problem # There is no objective function that we 
are trying maximize or minimize. # Set a dummy objective objective = plp.lpSum(0) prob.setObjective(objective) # Create the default constraints to solve sudoku add_default_sudoku_constraints(prob, grid_vars, rows, cols, grids, values) # Add the diagonal constraints if flag is set if (diagonal): add_diagonal_sudoku_constraints(prob, grid_vars, rows, cols, values) # Fill the prefilled values from input sudoku as constraints add_prefilled_constraints(prob, input_sudoku, grid_vars, rows, cols, values) # Solve the problem prob.solve() # Print the status of the solution solution_status = plp.LpStatus[prob.status] print(f'Solution Status = {plp.LpStatus[prob.status]}') # Extract the solution if an optimal solution has been identified if solution_status == 'Optimal': solution = extract_solution(grid_vars, rows, cols, values) print_solution(solution, rows,cols) normal_sudoku = [ [3,0,0,8,0,0,0,0,1], [0,0,0,0,0,2,0,0,0], [0,4,1,5,0,0,8,3,0], [0,2,0,0,0,1,0,0,0], [8,5,0,4,0,3,0,1,7], [0,0,0,7,0,0,0,2,0], [0,8,5,0,0,9,7,4,0], [0,0,0,1,0,0,0,0,0], [9,0,0,0,0,7,0,0,6] ] solve_sudoku(input_sudoku=normal_sudoku, diagonal=False) diagonal_sudoku = [ [0,3,0,2,7,0,0,0,0], [0,0,0,0,0,0,0,0,0], [8,0,0,0,0,0,0,0,0], [5,1,0,0,0,0,0,8,4], [4,0,0,5,9,0,0,7,0], [2,9,0,0,0,0,0,1,0], [0,0,0,0,0,0,1,0,5], [0,0,6,3,0,8,0,0,7], [0,0,0,0,0,0,3,0,0] ] solve_sudoku(input_sudoku=diagonal_sudoku, diagonal=True) The solution time is very fast with the CBC solver. + ----------- + ----------- + ----------- + | 6 3 9 | 2 7 4 | 8 5 1 | | 1 4 2 | 6 8 5 | 7 3 9 | | 8 7 5 | 1 3 9 | 6 4 2 | + ----------- + ----------- + ----------- + | 5 1 3 | 7 6 2 | 9 8 4 | | 4 6 8 | 5 9 1 | 2 7 3 | | 2 9 7 | 8 4 3 | 5 1 6 | + ----------- + ----------- + ----------- + | 3 8 4 | 9 2 7 | 1 6 5 | | 9 5 6 | 3 1 8 | 4 2 7 | | 7 2 1 | 4 5 6 | 3 9 8 | + ----------- + ----------- + ----------- + Translating this equivalent problem to gekko is easy with the similar syntax of the two modeling languages: from gekko import GEKKO def add_default_sudoku_constraints(m, grid_vars, rows, cols, grids, values): for row in rows: for col in cols: m.Equation(sum([grid_vars[row][col][value-1] \ for value in values]) == 1) for row in rows: for value in values: m.Equation(sum([grid_vars[row][col][value-1]*value \ for col in cols]) == value) for col in cols: for value in values: m.Equation(sum([grid_vars[row][col][value-1]*value \ for row in rows]) == value) for grid in grids: grid_row = int(grid/3) grid_col = int(grid%3) for value in values: m.Equation(sum([grid_vars[grid_row*3+row][grid_col*3+col][value-1]*value \ for col in range(0,3) for row in range(0,3)]) == value) def add_diagonal_sudoku_constraints(m, grid_vars, rows, cols, values): for value in values: m.Equation(sum([grid_vars[row][row][value-1]*value for row in rows]) == value) m.Equation(sum([grid_vars[row][len(rows)-row-1][value-1]*value \ for row in rows]) == value) def add_prefilled_constraints(m, input_sudoku, grid_vars, rows, cols, values): for row in rows: for col in cols: if(input_sudoku[row][col] != 0): m.Equation(sum([grid_vars[row][col][value-1]*value \ for value in values]) == input_sudoku[row][col]) def extract_solution(grid_vars, rows, cols, values): solution = [[0 for col in cols] for row in rows] for row in rows: for col in cols: for v, value in enumerate(values): if int(grid_vars[row][col][v].value[0]) == 1: solution[row][col] = value return solution def print_solution(solution, rows, cols): print(f"\nFinal result:") print("\n\n+ ----------- + ----------- + ----------- +",end="") for row in rows: 
print("\n",end="\n| ") for col in cols: num_end = " | " if ((col+1)%3 == 0) else " " print(solution[row][col],end=num_end) if ((row+1)%3 == 0): print("\n\n+ ----------- + ----------- + ----------- +",end="") def solve_sudoku(input_sudoku, diagonal=False): m = GEKKO(remote=True) rows = range(0,9) cols = range(0,9) grids = range(0,9) values = range(1,10) grid_vars = [[[m.Var(lb=0, ub=1, integer=True) \ for _ in values] for _ in cols] for _ in rows] add_default_sudoku_constraints(m, grid_vars, rows, cols, grids, values) if diagonal: add_diagonal_sudoku_constraints(m, grid_vars, rows, cols, values) add_prefilled_constraints(m, input_sudoku, grid_vars, rows, cols, values) m.options.SOLVER = 3 m.solve(disp=True) m.solver_options = ['minlp_gap_tol 1.0e-2',\ 'minlp_maximum_iterations 1000',\ 'minlp_max_iter_with_int_sol 500',\ 'minlp_branch_method 1'] m.options.SOLVER = 1 m.solve(disp=True) solution = extract_solution(grid_vars, rows, cols, values) print_solution(solution, rows, cols) normal_sudoku = [ [3,0,0,8,0,0,0,0,1], [0,0,0,0,0,2,0,0,0], [0,4,1,5,0,0,8,3,0], [0,2,0,0,0,1,0,0,0], [8,5,0,4,0,3,0,1,7], [0,0,0,7,0,0,0,2,0], [0,8,5,0,0,9,7,4,0], [0,0,0,1,0,0,0,0,0], [9,0,0,0,0,7,0,0,6] ] solve_sudoku(input_sudoku=normal_sudoku, diagonal=False) diagonal_sudoku = [ [0,3,0,2,7,0,0,0,0], [0,0,0,0,0,0,0,0,0], [8,0,0,0,0,0,0,0,0], [5,1,0,0,0,0,0,8,4], [4,0,0,5,9,0,0,7,0], [2,9,0,0,0,0,0,1,0], [0,0,0,0,0,0,1,0,5], [0,0,6,3,0,8,0,0,7], [0,0,0,0,0,0,3,0,0] ] solve_sudoku(input_sudoku=diagonal_sudoku, diagonal=True) However, the APOPT solver is a Nonlinear Mixed Integer Programming (MINLP) solver and is much slower than dedicated MILP solvers for linear mixed integer problems. Sometimes using these solver options can help: m.solver_options = ['minlp_gap_tol 1.0e-2',\ 'minlp_maximum_iterations 1000',\ 'minlp_max_iter_with_int_sol 500',\ 'minlp_branch_method 1'] However, in most cases it is better to use a dedicated MILP solver. | 2 | 1 |