question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
77,347,986 | 2023-10-23 | https://stackoverflow.com/questions/77347986/how-to-remove-the-top-and-bottom-e-g-10-of-data-from-dataframe | How can you filter a DataFrame in pandas by selecting rows that fall within the 10th and 90th percentiles based on a specific column and then remove these filtered rows from the original DataFrame? Code Example: # Create a sample DataFrame data = { 'A': [0, 0, 0, 0, 0, 172, 183, 92, 105, 120], 'B': [3, 14, 27, 33, 41, 55, 66, 75, 88, 92] } df = pd.DataFrame(data) # Filter the DataFrame based on the 10th and 90th percentiles of column 'A' q10 = df['A'].quantile(0.1) q90 = df['A'].quantile(0.9) filtered_df = df[(df['A'] > q10) & (df['A'] < q90)] Issue: filtered_df doesn't really remove entry below 10th percentile and above 90th percentile. EDIT: Output should have 8 rows i.e. 80 percentile of the data. | Using percentiles won't guarantee having a fixed number of rows. It rather looks like you want to take 80% of the ranked values. Using rank with 'first' and between: m = (df['A'] .rank(method='first').sub(1) .between(0.1*len(df), 0.9*len(df), inclusive='left') ) out = df[m] Output: A B 1 0 14 2 0 27 3 0 33 4 0 41 5 172 55 7 92 75 8 105 88 9 120 92 Intermediates: A B rank m 0 0 3 1.0 False 1 0 14 2.0 True 2 0 27 3.0 True 3 0 33 4.0 True 4 0 41 5.0 True 5 172 55 9.0 True 6 183 66 10.0 False 7 92 75 6.0 True 8 105 88 7.0 True 9 120 92 8.0 True | 2 | 2 |
77,345,121 | 2023-10-23 | https://stackoverflow.com/questions/77345121/azure-openai-langchain-invalidfield-the-vector-field-content-vector-must-h | I'm trying to create an embedding vector database with some .txt documents in my local folder. In particular I'm following this tutorial from the official page of LangChain: LangChain - Azure Cognitive Search and Azure OpenAI. I have followed all the steps of the tutorial and this is my Python script: # From https://python.langchain.com/docs/integrations/vectorstores/azuresearch import openai import os from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores.azuresearch import AzureSearch os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://xxxxxx.openai.azure.com" os.environ["OPENAI_API_KEY"] = "xxxxxxxxx" os.environ["OPENAI_API_VERSION"] = "2023-05-15" model: str = "text-embedding-ada-002" vector_store_address: str = "https://xxxxxxx.search.windows.net" vector_store_password: str = "xxxxxxx" embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1) index_name: str = "cognitive-search-openai-exercise-index" vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embeddings.embed_query, ) from langchain.document_loaders import TextLoader from langchain.text_splitter import CharacterTextSplitter loader = TextLoader("C:/Users/xxxxxxxx/azure_openai_cognitive_search_exercise/data/qna/a.txt", encoding="utf-8") documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) vector_store.add_documents(documents=docs) # Perform a similarity search docs = vector_store.similarity_search( query="Who is Pippo Franco?", k=3, search_type="similarity", ) print(docs[0].page_content) Now, when I run the script I get the following error: vector_search_configuration is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchField'> and will be ignored algorithm_configurations is not a known attribute of class <class 'azure.search.documents.indexes._generated.models._models_py3.VectorSearch'> and will be ignored Traceback (most recent call last): File "C:\Users\xxxxxxxxx\venv\Lib\site-packages\langchain\vectorstores\azuresearch.py", line 105, in _get_search_client index_client.get_index(name=index_name) File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxxx\KYF\venv\Lib\site-packages\azure\search\documents\indexes\_search_index_client.py", line 145, in get_index result = self._client.indexes.get(name, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxx\KYF\venv\Lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py", line 864, in get map_error(status_code=response.status_code, response=response, error_map=error_map) File "C:\Users\xxxxxxxx\venv\Lib\site-packages\azure\core\exceptions.py", line 165, in map_error raise error azure.core.exceptions.ResourceNotFoundError: () No index with the name 'cognitive-search-openai-exercise-index' was found in the service 'cognitive-search-openai-exercise'. 
Code: Message: No index with the name 'cognitive-search-openai-exercise-index' was found in the service 'cognitive-search-openai-exercise'. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Users\xxxxxxx\venv\azure_openai_cognitive_search_exercise\test.py", line 25, in <module> vector_store: AzureSearch = AzureSearch( ^^^^^^^^^^^^ File "C:\Users\xxxxxxx\venv\Lib\site-packages\langchain\vectorstores\azuresearch.py", line 237, in __init__ self.client = _get_search_client( ^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxxxx\venv\Lib\site-packages\langchain\vectorstores\azuresearch.py", line 172, in _get_search_client index_client.create_index(index) File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\search\documents\indexes\_search_index_client.py", line 220, in create_index result = self._client.indexes.create(patched_index, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxxx\venv\Lib\site-packages\azure\core\tracing\decorator.py", line 78, in wrapper_use_tracer return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\xxxxxx\venv\Lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py", line 402, in create raise HttpResponseError(response=response, model=error) azure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Code: InvalidRequestParameter Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition Code: InvalidField Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition I have created an index manually from the Azure Cognitive Search Console, but I don't think this is the correct approach, as the script should automatically create a new index. | Please use pip install azure-search-documents==11.4.0b8 to ensure you are using the azure cognitive search python SDK compatible with LangChain. | 2 | 4 |
77,346,782 | 2023-10-23 | https://stackoverflow.com/questions/77346782/django-create-connected-objects-simultaneously | I'm new to Django and wanted to ask how can I solve a problem that I have class InventoryId(models.Model): inventory_id = models.AutoField(primary_key=True) class Laptop(models.Model): name = models.CharField(max_length=30) inventory_id = models.ForeignKey(InventoryId, on_delete=models.CASCADE) class Desktop(models.Model): name = models.CharField(max_length=30) inventory_id = models.ForeignKey(InventoryId, on_delete=models.CASCADE) In the example above I would like models Laptop and Desktop to use inventory_id's that are stored in the same table and are incremental to avoid collisions. example: laptop1 --> inventory_id = 1 desktop1 --> inventory_id = 2 laptop2 --> inventory_id = 3 what's the correct way of doing that in django? I found something that recommends using signals to do that. but i was discouraged by documentation that it may be not the best idea | Use model inheritance [Django-doc] instead: class InventoryItem(models.Model): inventory_id = models.AutoField(primary_key=True) class Laptop(InventoryItem): name = models.CharField(max_length=30) class Desktop(InventoryItem): name = models.CharField(max_length=30) This will make OneToOneFields from Laptop and Desktop to the table of the InventoryItem. It also guarantees that two Desktops for example can not refer to the same `InventoryItem. Since you here, perhaps as an example, have two name fields with the same specifications, it makes sense to "pull up" this field to the InventoryItem model. | 2 | 1 |
77,346,557 | 2023-10-23 | https://stackoverflow.com/questions/77346557/read-row-for-given-id-from-pickle-file-without-loading-all-data | I have a dataframe with 2 columns (ID, Contents) which I am saving in pickle format. Say for example, I have 10 rows(ie 0 to 9 ID). I am trying to load a single row by passing ID (valid ID between 0-9) from the pickle file without loading the entire file. import pandas as pd import pickle id = 2 path = "contents.pickle" # dummy data df = pd.DataFrame({ "ID": [0,1,2,3,4,5,6,7,8,9], "Contents": [0,1,2,3,4,5,6,7,8,9] }) print(f"{df=}") df.to_pickle(path=path) df = pd.read_pickle(filepath_or_buffer=path) with open(path, "rb") as handle: df = pickle.load(handle) The above both are reading all the data in one go. I am not getting where I should pass the ID to load that row only. If not possible with dataframe, I tried with simple list containing data saved in pickle file using reference here but failed. data = [0,1,2,3,4,5,6,7,8,9] with open(path, "wb") as handle: pickle.dump(data, handle) file = open(file=path) file.seek(id) row = pickle.load(file) print(f"{row=}") | Unfortunately I think pickle is not the right tool to do that (but maybe I'm wrong). Try a column-oriented data file format like parquet and pd.read_parquet: df.to_parquet('results.parquet') row = pd.read_parquet('results.parquet', filters=[('ID', '=', 9)]) Output: >>> row ID Contents 0 9 9 Update With pyarrow and ParquetDataset import pyarrow.parquet as pq ds = pq.ParquetDataset('results.parquet', filters=[('ID', '=', 9)]) Output: >>> ds.read() # read to get the row without load the entire file pyarrow.Table ID: int64 Contents: int64 ---- ID: [[9]] Contents: [[9]] | 2 | 3 |
77,346,244 | 2023-10-23 | https://stackoverflow.com/questions/77346244/how-to-resolve-incompatible-return-value-type-got-fancycat-expected-self | Here's a minimal example I've written: from __future__ import annotations from typing import Protocol from typing_extensions import Self class Cat(Protocol): def add(self, other: Self) -> Self: ... def sub(self, other: Self) -> Self: ... class FancyCat(Cat): def __init__(self, value: int): self._value = value def add(self, other: Self) -> Self: return FancyCat(self._value + other._value) def sub(self, other: Self) -> Self: return FancyCat(self._value - other._value) fc = FancyCat(3) fc2 = FancyCat(4) fc.add(fc2) If I try to type check it, I get $ mypy t.py t.py:19: error: Incompatible return value type (got "FancyCat", expected "Self") [return-value] t.py:22: error: Incompatible return value type (got "FancyCat", expected "Self") [return-value] Found 2 errors in 1 file (checked 1 source file) I'm so confused - isn't Self FancyCat in this case? How can I satisfy mypy here? | Your code does not guarantee that Self will be returned. For example, child classes will still return FancyCat, therefore Self annotation is invalid. To get rid of errors, you can either: a) make sure it's always self.__class__ returned, which IS actual Self def add(self, other: Self) -> Self: return self.__class__(self._value + other._value) b) type with what you are actually returning, FancyCat def sub(self, other: Self) -> FancyCat: return FancyCat(self._value - other._value) | 4 | 4 |
77,338,439 | 2023-10-22 | https://stackoverflow.com/questions/77338439/is-there-a-difference-between-pyenv-activate-and-pyenv-local | I know that to activate a virtual environment with pyenv, we can run pyenv activate [virtualenv], but here, I also saw that we can "select" a virtual environment by running pyenv local [virtualenv]. So what is the difference between the two ways? | The base command is pyenv local <python_version>. It selects what Python version to use in the current directory or its subdirectories. pyenv activate <name> activates a Python virtual environment. pyenv local <venv-name> works after pyenv virtualenv venv-name and eval "$(pyenv virtualenv-init -)" in shell configuration. After the command, the <venv-name> (a virtual environment known to pyenv) is activated when entering (with cd) the directory where pyenv local <venv-name> had been run. | 3 | 3 |
77,343,330 | 2023-10-23 | https://stackoverflow.com/questions/77343330/customize-a-class-of-a-library | I am using a Python library that has one function that I would need to customize. That is not a problem, since I can make my own edited version like this: from coolLibrary import originalclass class myclass(originalclass): whatever... The problem that I am facing now, is that the library uses this originalclass in multiple locations. Is there a way to tell the library to use myclass everywhere instead of originalclass? My goal would be to have a custom code without editing the original code so the library can be updated. | If coolLibrary.originalclass has explicit references to itself: # coolLibrary.py class originalclass: def foo(self): print('original foo') def bar(self): originalclass.foo(self) such that: # main.py import coolLibrary class myclass(coolLibrary.originalclass): def foo(self): print('custom foo') myclass().bar() outputs: original foo You can patch coolLibrary.originalclass with your custom class after the custom class is defined: coolLibrary.originalclass = myclass so that: myclass().bar() would then output: custom foo Demo: https://replit.com/@blhsing1/JollyHumiliatingTabs Note that patching the original class makes the behavior change global. If another module that also uses coolLibrary.originalclass depends on its original behavior in order to work, you may want to apply the patch to only the block of code that needs such a custom behavior. This can be done with the unittest.mock.patch context manager: from unittest.mock import patch with patch('coolLibrary.originalclass', myclass): myclass().bar() | 2 | 2 |
77,342,884 | 2023-10-23 | https://stackoverflow.com/questions/77342884/need-to-reshape-transpose-dataframe-in-python | I have the following dataframe and I want to transpose it so that the elements in column "Trans_Type" become new columns to represent the values from column "Trans_amount". import pandas as pd import numpy as np df = pd.DataFrame([['21/10/2023','1','CR',2323], ['21/10/2023','1','CR',23], ['21/10/2023','1','DR',65], ['21/10/2023','2','CR',3.3], ['21/10/2023','3','CR',56], ['21/10/2023','3','DR',23.66], ['21/10/2023','4','CR',54.34], ['22/10/2023','4','CR',23.34], ['22/10/2023','4','DR',5.5]], columns = ['Date','Account_Number','Trans_Type','Trns_Amount']) df DF Output: Date Account_Number Trans_Type Trns_Amount 0 21/10/2023 1 CR 2323.00 1 21/10/2023 1 CR 23.00 2 21/10/2023 1 DR 65.00 3 21/10/2023 2 CR 3.30 4 21/10/2023 3 CR 56.00 5 21/10/2023 3 DR 23.66 6 21/10/2023 4 CR 54.34 7 22/10/2023 4 CR 23.34 8 22/10/2023 4 DR 5.5 Expected Output: Date Account_Number CR DR 0 21/10/2023 1 2323.00 NaN 1 21/10/2023 1 23.00 NaN 2 21/10/2023 1 NaN 65.00 3 21/10/2023 2 3.30 NaN 4 21/10/2023 3 56.00 NaN 5 21/10/2023 3 NaN 23.66 6 21/10/2023 4 54.34 NaN 7 22/10/2023 4 23.34 NaN 8 22/10/2023 4 NaN 5.50 Any assistance would be great. Thanks Alan | You can use DataFrame.reset_index for helper column with unique values, so possible use DataFrame.pivot for reshape: out = (df.reset_index() .pivot(index=['index','Date','Account_Number'], columns='Trans_Type', values='Trns_Amount') .reset_index(level=[1,2])) print (out) Trans_Type Date Account_Number CR DR index 0 21/10/2023 1 2323.00 NaN 1 21/10/2023 1 23.00 NaN 2 21/10/2023 1 NaN 65.00 3 21/10/2023 2 3.30 NaN 4 21/10/2023 3 56.00 NaN 5 21/10/2023 3 NaN 23.66 6 21/10/2023 4 54.34 NaN 7 22/10/2023 4 23.34 NaN 8 22/10/2023 4 NaN 5.50 | 2 | 2 |
77,342,104 | 2023-10-23 | https://stackoverflow.com/questions/77342104/scraping-addresses-tab-data-from-https-training-gov-au-organisation-details-90 | I am trying to scrape information from the "Addresses" tab on the webpage: https://training.gov.au/Organisation/Details/90003 using Python. However, I'm encountering an issue where, even after targeting the correct css selector, or tag, the code only returns null values. Strangely, it works correctly when I target the "Summary" tab. It seems that the website only returns data for the "Summary" tab. I have zero experienced in coding, so I'm unsure if there are specific considerations I need to keep in mind. I am attempting to scrape data from the "Addresses" tab on this webpage: https://training.gov.au/Organisation/Details/90003. I have inspected the webpage and identified the relevant css selector, or tags to target for scraping. I am using Python for web scraping and have tried libraries like Beautiful Soup and Requests. My code works as expected when I scrape data from the "Summary" tab, but it returns null values when I try to scrape from the "Addresses" tab. I suspect that there might be some specific JavaScript or dynamic content loading that prevents data from being retrieved from the "Addresses" tab. I would appreciate any guidance on how to access and scrape data from the "Addresses" tab successfully. Code Sample: Here's the version of the code I'm currently using to scrape data from the "Addresses" tab: import requests from bs4 import BeautifulSoup # URL of the webpage url = 'https://training.gov.au/Organisation/Details/90003' # Send an HTTP GET request to fetch the webpage response = requests.get(url) # Check if the request was successful (status code 200) if response.status_code == 200: # Parse the HTML content soup = BeautifulSoup(response.text, 'html.parser') # Use the CSS selector to target the element with id "rtoDetails-4" target_element = soup.select_one('#rtoDetails-1') # works for rtoDetails-1 but not other selector # Check if the element was found if target_element: # Extract and print the text content of the element print(target_element.text.strip()) else: print("Target element not found.") else: print("Failed to retrieve the webpage. Status code:", response.status_code) Expected Output: I expect the rtoDetails-4 variable to contain the information from the "Addresses" tab, but it currently returns null. Additional Information: Any insights or recommendations on how to handle dynamic content or JavaScript-based loading on webpages would be greatly appreciated. If there are specific steps I need to follow or if I'm missing something crucial, please provide detailed guidance as I'm relatively new to coding. Thank you in advance for your assistance! | The addresses you see on the page is loaded from external URL. You can use this example how to download the right HTML: import requests from bs4 import BeautifulSoup link = "https://training.gov.au/Organisation/Details/90003" response = requests.get(link) soup = BeautifulSoup(response.content, "html.parser") link = soup.select_one('[href*="AjaxDetailsLoadAddresses"]')["href"] link = "https://training.gov.au" + link soup = BeautifulSoup(requests.get(link).content, "html.parser") print(soup.get_text(strip=True, separator=" ")) Prints: ... Job title: Chief Executive Officer Organisation name: Technical and Further Education Commission Phone: (02) 7920 ... | 2 | 1 |
77,341,139 | 2023-10-22 | https://stackoverflow.com/questions/77341139/python-loop-still-going | I am trying to write a program similar to a blind auction, but I am having issues with the while loop. I don't understand why the loop is not ending. I used Thonny and even after the player enters 'no', it seems like the loop is breaking since it prints what is outside the loop and then goes to the bid function inside the while loop, with the player value set to 'yes'. I don't understand why this is happening. I have just started learning Python, so I would appreciate any help in explaining why this is occurring. Thank you! def bid(player_name,player_bid): player_name = input("What's your name?\n") player_bid = input("What's your bid? PLN") bid_list[player_name] = player_bid while True: player = input("Are they any other users? Insert yes or no?") if player == "no": player = "no" break bid(player_name,player_bid) max_value = max(bid_list.values()) print(f"{player_name} has the highest bid - {max_value} ") print("Thank you for playing") player_name = "" player_bid = 0 bid_list = {} bid(player_name,player_bid) I did try to change the code many times and used ChatGPT but still I don't understand why my code is not working. After I switched some part in while loop the code is working but I want to understand why my code is not working and why the loop is not ending. | Don't use recursion as a substitute for looping. Put your while loop around all the code that asks for player names and bids. Your recursion is outside the while loop, so it occurs every time and you never get out of the loop. There's also no need for player_name and player_bid to be function parameters, since the function assigns those variables itself from user input. You could instead pass in bid_list as a parameter. Your max() call gets the maximum bid, but not the player that made that bid. Get the maximum of bid_list.items() which returns both the keys and values. def bid(bid_list): while True: player_name = input("What's your name?\n") player_bid = input("What's your bid? PLN") bid_list[player_name] = player_bid player = input("Are they any other users? Insert yes or no?") if player == "no": break max_player, max_value = max(bid_list.items(), key=lambda x: x[1]) print(f"{max_player} has the highest bid - {max_value} ") print("Thank you for playing") bid_list = {} bid(bid_list) | 2 | 1 |
77,340,804 | 2023-10-22 | https://stackoverflow.com/questions/77340804/crc-computation-port-from-c-to-python | I need to convert the following CRC computation algorithm to Python: #include <stdio.h> unsigned int Crc32Table[256]; unsigned int crc32jam(const unsigned char *Block, unsigned int uSize) { unsigned int x = -1; //initial value unsigned int c = 0; while (c < uSize) { x = ((x >> 8) ^ Crc32Table[((x ^ Block[c]) & 255)]); c++; } return x; } void crc32tab() { unsigned int x, c, b; c = 0; while (c <= 255) { x = c; b = 0; while (b <= 7) { if ((x & 1) != 0) x = ((x >> 1) ^ 0xEDB88320); //polynomial else x = (x >> 1); b++; } Crc32Table[c] = x; c++; } } int main() { unsigned char buff[] = "whatever buffer content"; unsigned int l = sizeof(buff) -1; unsigned int hash; crc32tab(); hash = crc32jam(buff, l); printf("%d\n", hash); } two (failed) attempts to rewrite this in python follow: def crc32_1(buf): crc = 0xffffffff for b in buf: crc ^= b for _ in range(8): crc = (crc >> 1) ^ 0xedb88320 if crc & 1 else crc >> 1 return crc ^ 0xffffffff def crc32_2(block): table = [0] * 256 for c in range(256): x = c b = 0 for _ in range(8): if x & 1: x = ((x >> 1) ^ 0xEDB88320) else: x >>= 1 table[c] = x x = -1 for c in block: x = ((x >> 8) ^ table[((x ^ c) & 255)]) return x & 0xffffffff data = b'whatever buffer content' print(crc32_1(data), crc32_2(data)) Using the three routines on thee exact same data yield three different results: mcon@cinderella:~/Desktop/3xDAsav/DDDAedit$ ./test5 2022541416 mcon@cinderella:~/Desktop/3xDAsav/DDDAedit$ python3 test5.py 2272425879 2096952735 As said: C code is "Golden Standard", how do I fix this in Python? Note: I know I can call C routines from Python, but I consider that as "last resort". | Instead of porting your own CRC32 implementation, you can use one from the Python standard library. For historic reasons, the standard library includes two identical1 CRC32 implementations: binascii.crc32 zlib.crc32 Both implementations match the behavior of your crc32_1 function: import binascii import zlib >>> print(binascii.crc32(b'whatever buffer content')) 2272425879 >>> print(zlib.crc32(b'whatever buffer content')) 2272425879 To get a result matching the C implementation from the question, you just need to apply a constant offset: >>> 0xffff_ffff - zlib.crc32(b'whatever buffer content') 2022541416 As a bonus, these CRC32 functions are implemented in efficient C code, and will be much faster than any equivalent pure-Python port. 1Note that the zlib module is only available when CPython is compiled with zlib support (which is almost always true). In the off chance that you're using a CPythion build without zlib, you won't be able to use the zlib module. Instead, you can use the binascii implementation, which uses zlib when available and defaults to an "in-house" implementation when its not. | 2 | 3 |
77,339,578 | 2023-10-22 | https://stackoverflow.com/questions/77339578/only-1st-character-of-value-in-json-is-printing | import requests import json url = "https://betkarma.com/api/propsComparison?startDate=2023-10-17&endDate=2023-10-23&league=nfl" response = requests.get(url) data = response.json() print(response) for player_name in data["games"][0]["offers"][0]["player"]: if offers[0] == "player": value = player["value"] break print(player_name) The above code appears to be correct; however, something certainly is missing in the code below. I'm expecting the first and last name of each participant, but I'm just getting the first letter of the first person's first name returned for some reason. Any assistance is highly appreciated. I'm not sure whether I'm not outputting the correct thing, if some code is missing, or what I need to do to get a list of names. I'm I'm a noob in python! | try this: import requests url = "https://betkarma.com/api/propsComparison?startDate=2023-10-17&endDate=2023-10-23&league=nfl" response = requests.get(url) data = response.json() for p in data.get("games")[0].get("offers"): print(p["player"]) output: Foye Oluokun Foye Oluokun Michael (Saints) Thomas Michael (Saints) Thomas Pete Werner Pete Werner Demario Davis Demario Davis Marshon Lattimore Demario Davis Marshon Lattimore Pete Werner Marshon Lattimore Alontae Taylor Alontae Taylor Alontae Taylor Carl Granderson Carl Granderson Carl Granderson Jaguars DST Saints DST Blake Grupe Brandon McManus Blake Grupe Blake Grupe Brandon McManus Brandon McManus Brandon McManus Blake Grupe Devin Lloyd Devin Lloyd Devin Lloyd Foyesade Oluokun Derek Carr Trevor Lawrence Trevor Lawrence Derek Carr Derek Carr Derek Carr Trevor Lawrence Derek Carr Trevor Lawrence Trevor Lawrence Derek Carr Derek Carr Derek Carr Trevor Lawrence Derek Carr Derek Carr Taysom Hill Trevor Lawrence Trevor Lawrence Alvin Kamara Travis Etienne Jr. Travis Etienne Jr. Travis Etienne Jr. Alvin Kamara Tank Bigsby Alvin Kamara Alvin Kamara Travis Etienne Jr. Travis Etienne Jr. Alvin Kamara Travis Etienne Jr. Travis Etienne Jr. Alvin Kamara Alvin Kamara Travis Etienne Jr. Alvin Kamara Travis Etienne Travis Etienne Travis Etienne Travis Etienne Travis Etienne Travis Etienne Travis Etienne Travis Etienne Andre Cisco Andre Cisco Rayshawn Jenkins Marcus Maye Marcus Maye Rayshawn Jenkins Rayshawn Jenkins Andre Cisco Marcus Maye Evan Engram Foster Moreau Foster Moreau Foster Moreau Evan Engram Evan Engram Evan Engram Taysom Hill Rashid Shaheed Michael Thomas Chris Olave Calvin Ridley Christian Kirk Michael Thomas Chris Olave Rashid Shaheed Rashid Shaheed Calvin Ridley Christian Kirk Calvin Ridley | 2 | 2 |
77,339,071 | 2023-10-22 | https://stackoverflow.com/questions/77339071/how-to-make-squiggly-lines-to-represent-unshown-parts-of-the-axis | I want to try to replicate the format of this chart form The Economist (the format, not necessarily the content). I've found a tutorial on how to do so here, which has the below code (dataset here) But it does not have the squiggly lines on the left of the axes that represent that part of the axis being skipped. import pandas as pd import numpy as np import matplotlib.pyplot as plt # This makes out plots higher resolution, which makes them easier to see while building plt.rcParams['figure.dpi'] = 100 # import data gdp = pd.read_csv('gdp_1960_2020.csv') gdp_dumbbell = gdp[(gdp['country'].isin(countries)) & ((gdp['year'] == 1960) | (gdp['year'] == 2020))].sort_values(by='gdp') # Setup plot size. fig, ax = plt.subplots(figsize=(7,4)) # Create grid # Zorder tells it which layer to put it on. We are setting this to 1 and our data to 2 so the grid is behind the data. ax.grid(which="major", axis='both', color='#758D99', alpha=0.6, zorder=1) # Remove splines. Can be done one at a time or can slice with a list. ax.spines[['top','right','bottom']].set_visible(False) # Setup data gdp_dumbbell = (gdp[(gdp['country'].isin(countries)) & ((gdp['year'] == 2000) | (gdp['year'] == 2020))][['year','gdp_trillions','country']] .pivot(index='country',columns='year', values='gdp_trillions') .sort_values(by=2020)) # Plot data # Plot horizontal lines first ax.hlines(y=gdp_dumbbell.index, xmin=gdp_dumbbell[2000], xmax=gdp_dumbbell[2020], color='#758D99', zorder=2, linewidth=2, label='_nolegend_', alpha=.8) # Plot bubbles next ax.scatter(gdp_dumbbell[2000], gdp_dumbbell.index, label='2000', s=60, color='#DB444B', zorder=3) ax.scatter(gdp_dumbbell[2020], gdp_dumbbell.index, label='2020', s=60, color='#006BA2', zorder=3) # Set xlim ax.set_xlim(0, 25.05) # Reformat x-axis tick labels ax.xaxis.set_tick_params(labeltop=True, # Put x-axis labels on top labelbottom=False, # Set no x-axis labels on bottom bottom=False, # Set no ticks on bottom labelsize=11, # Set tick label size pad=-1) # Lower tick labels a bit # Reformat y-axis tick labels ax.set_yticklabels(gdp_dumbbell.index, # Set labels again ha = 'left') # Set horizontal alignment to left ax.yaxis.set_tick_params(pad=100, # Pad tick labels so they don't go over y-axis labelsize=11, # Set label size bottom=False) # Set no ticks on bottom/left # Set Legend ax.legend(['2000', '2020'], loc=(-.29,1.09), ncol=2, frameon=False, handletextpad=-.1, handleheight=1) # Add in line and tag ax.plot([-0.08, .9], # Set width of line [1.17, 1.17], # Set height of line transform=fig.transFigure, # Set location relative to plot clip_on=False, color='#E3120B', linewidth=.6) ax.add_patch(plt.Rectangle((-0.08,1.17), # Set location of rectangle by lower left corder 0.05, # Width of rectangle -0.025, # Height of rectangle. Negative so it goes down. facecolor='#E3120B', transform=fig.transFigure, clip_on=False, linewidth = 0)) # Add in title and subtitle ax.text(x=-0.08, y=1.09, s="Great expectations", transform=fig.transFigure, ha='left', fontsize=13, weight='bold', alpha=.8) ax.text(x=-0.08, y=1.04, s="Top 9 countries by GDP, in trillions of USD", transform=fig.transFigure, ha='left', fontsize=11, alpha=.8) # Set source text ax.text(x=-0.08, y=0.04, s="""Source: "GDP of all countries(1960-2020)" via Kaggle.com""", transform=fig.transFigure, ha='left', fontsize=9, alpha=.7) plt.show() This produces a similar looking chart. 
But it doesn't have the squiggly lines. How can I get those? The chart differs from the image in a few more ways than that (it uses a different dataset and its format is different in a few ways). But the main question is about how to get those squiggly lines. | You can tweak it manually: # I'm showing only the lines updated and/or added ax.set_xlim(-2, 25.05) ax.grid(which='major', axis='x', color='#758D99', alpha=0.6, zorder=1) ax.spines[['left', 'top','right', 'bottom']].set_visible(False) for y in ax.get_yticks(): ax.plot( [-2, -1.6, -1.4, -1.2, -1, ax.get_xticks()[:-1].max()], [y, y, y+.3, y-.3, y, y], color='#758D99', alpha=0.3 ) plt.show(); By the way, your code is not fully reproducible, there are two missing lines: gdp['gdp_trillions'] = gdp['gdp'] / 1_000_000_000_000 countries = (gdp[gdp['year'] == 2020].sort_values( by='gdp_trillions')[-9:]['country'].values) | 2 | 1 |
77,335,813 | 2023-10-21 | https://stackoverflow.com/questions/77335813/iterating-over-rows-in-many-csv-files-and-modifying-them-in-pandas | I am writing a script that is supposed to read a lot of csv files and transform data in a specific column, when the condition is being met. To do this I have a for-loop that looks like this: for index, transformed_row in transformed_data.iterrows(): new_rows = [] if transformed_row['Code'] in ['14/9', '14+9', '149']: transformed_row['Code'] = '18' elif transformed_row['Code'] == "3+4+1": transformed_row['Code'] = '17' However I read in the Pandas documentation that when using .iterrows() one should not modify the DataFrame. Even though this code works and my output DF is exactly the way I want, I fear that over 50+ csv files this will not always work as stated in the documentation. So my idea was to make a new list new_rows = [] and adding the changed rows there with something like this new_rows.append(transformed_row.copy()) however, when I do this, I get duplicates of the rows I want to transform. I cannot find a solution on how to do this properly... | I think the simpler and better performing option is to use Series.replace (link): replace_dict = {"14/9": "18", "14+9": "18", "149": "18", "3+4+1": "17"} transformed_data["Code"] = transformed_data["Code"].replace(replace_dict) | 2 | 1 |
77,337,481 | 2023-10-21 | https://stackoverflow.com/questions/77337481/merge-two-dataframes-with-partial-column-name-match | I want to merge/concatenate two dataframes tcia and clin. In contrast to the tcia dataframe, the clin dataframe has a substring at the end of the column names (i.e., the 3rd "-" followed by subsequent letters). The dataframes should be combined irrespective of the substring but the final dataframe should have this substring. My code does the job but I'm hoping for a more robust/concise way to do it. Code: clin_df = clin.copy() clin_df.columns = clin_df.columns.str.rsplit('-', n=1).str.get(0) df = pd.concat([clin_df, tcia], axis=0) df.columns = clin.columns Input: clin pd.DataFrame({'TCGA-2K-A9WE-01': {'admin.batch_number': '398.45.0', 'age': '53', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': '207.0', 'ethnicity': 'not hispanic or latino'}, 'TCGA-2Z-A9J1-01': {'admin.batch_number': '398.45.0', 'age': '71', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': '2298.0', 'ethnicity': 'not hispanic or latino'}, 'TCGA-2Z-A9J3-01': {'admin.batch_number': '398.45.0', 'age': '67', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': nan, 'ethnicity': 'not hispanic or latino'}, 'TCGA-2Z-A9J6-01': {'admin.batch_number': '398.45.0', 'age': '60', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': '1731.0', 'ethnicity': 'not hispanic or latino'}, 'TCGA-2Z-A9J7-01': {'admin.batch_number': '398.45.0', 'age': '63', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': nan, 'ethnicity': 'not hispanic or latino'}}) tcia pd.DataFrame({'TCGA-2K-A9WE': {'ips_ctla4_neg_pd1_neg': 8.0, 'ips_ctla4_neg_pd1_pos': 7.0, 'ips_ctla4_pos_pd1_neg': 7.0, 'ips_ctla4_pos_pd1_pos': 6.0, 'patient_uuid': '73292c19-d6a8-4bc4-97bc-ccce54f264f8'}, 'TCGA-2Z-A9J1': {'ips_ctla4_neg_pd1_neg': 9.0, 'ips_ctla4_neg_pd1_pos': 8.0, 'ips_ctla4_pos_pd1_neg': 9.0, 'ips_ctla4_pos_pd1_pos': 7.0, 'patient_uuid': '851a1157-e460-4794-8534-2eb6f0ae7468'}, 'TCGA-2Z-A9J3': {'ips_ctla4_neg_pd1_neg': 9.0, 'ips_ctla4_neg_pd1_pos': 7.0, 'ips_ctla4_pos_pd1_neg': 8.0, 'ips_ctla4_pos_pd1_pos': 6.0, 'patient_uuid': '5195c9ac-b649-49f8-8750-f9a4787e8e52'}, 'TCGA-2Z-A9J6': {'ips_ctla4_neg_pd1_neg': 9.0, 'ips_ctla4_neg_pd1_pos': 7.0, 'ips_ctla4_pos_pd1_neg': 8.0, 'ips_ctla4_pos_pd1_pos': 7.0, 'patient_uuid': '4a540448-f106-4b0e-9038-9f7ccefc785b'}, 'TCGA-2Z-A9J7': {'ips_ctla4_neg_pd1_neg': 7.0, 'ips_ctla4_neg_pd1_pos': 5.0, 'ips_ctla4_pos_pd1_neg': 6.0, 'ips_ctla4_pos_pd1_pos': 5.0, 'patient_uuid': 'd66c9261-6c0c-44b0-92fa-a43757f34cb2'}}) Desired output: pd.DataFrame({'TCGA-2K-A9WE': {'admin.batch_number': '398.45.0', 'age': '53', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': '207.0', 'ethnicity': 'not hispanic or latino', 'ips_ctla4_neg_pd1_neg': 8.0, 'ips_ctla4_neg_pd1_pos': 7.0, 'ips_ctla4_pos_pd1_neg': 7.0, 'ips_ctla4_pos_pd1_pos': 6.0, 'patient_uuid': '73292c19-d6a8-4bc4-97bc-ccce54f264f8'}, 'TCGA-2Z-A9J1': {'admin.batch_number': '398.45.0', 'age': '71', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': '2298.0', 'ethnicity': 'not hispanic or latino', 'ips_ctla4_neg_pd1_neg': 9.0, 'ips_ctla4_neg_pd1_pos': 8.0, 'ips_ctla4_pos_pd1_neg': 9.0, 'ips_ctla4_pos_pd1_pos': 7.0, 'patient_uuid': '851a1157-e460-4794-8534-2eb6f0ae7468'}, 'TCGA-2Z-A9J3': {'admin.batch_number': '398.45.0', 'age': '67', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': nan, 'ethnicity': 'not hispanic or 
latino', 'ips_ctla4_neg_pd1_neg': 9.0, 'ips_ctla4_neg_pd1_pos': 7.0, 'ips_ctla4_pos_pd1_neg': 8.0, 'ips_ctla4_pos_pd1_pos': 6.0, 'patient_uuid': '5195c9ac-b649-49f8-8750-f9a4787e8e52'}, 'TCGA-2Z-A9J6': {'admin.batch_number': '398.45.0', 'age': '60', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': '1731.0', 'ethnicity': 'not hispanic or latino', 'ips_ctla4_neg_pd1_neg': 9.0, 'ips_ctla4_neg_pd1_pos': 7.0, 'ips_ctla4_pos_pd1_neg': 8.0, 'ips_ctla4_pos_pd1_pos': 7.0, 'patient_uuid': '4a540448-f106-4b0e-9038-9f7ccefc785b'}, 'TCGA-2Z-A9J7': {'admin.batch_number': '398.45.0', 'age': '63', 'days_to_initial_pathologic_diagnosis': '0', 'days_to_last_follow_up': nan, 'ethnicity': 'not hispanic or latino', 'ips_ctla4_neg_pd1_neg': 7.0, 'ips_ctla4_neg_pd1_pos': 5.0, 'ips_ctla4_pos_pd1_neg': 6.0, 'ips_ctla4_pos_pd1_pos': 5.0, 'patient_uuid': 'd66c9261-6c0c-44b0-92fa-a43757f34cb2'}}) | With df.set_axis method (to assign a new column index on-the-fly): df = pd.concat([clin.set_axis(clin.columns.str.rpartition('-', expand=False).str[0], axis=1), tcia]).set_axis(clin.columns, axis=1) print(df) TCGA-2K-A9WE-01 TCGA-2Z-A9J1-01 TCGA-2Z-A9J3-01 TCGA-2Z-A9J6-01 TCGA-2Z-A9J7-01 admin.batch_number 398.45.0 398.45.0 398.45.0 398.45.0 398.45.0 age 53 71 67 60 63 days_to_initial_pathologic_diagnosis 0 0 0 0 0 days_to_last_follow_up 207.0 2298.0 NaN 1731.0 NaN ethnicity not hispanic or latino not hispanic or latino not hispanic or latino not hispanic or latino not hispanic or latino ips_ctla4_neg_pd1_neg 8.0 9.0 9.0 9.0 7.0 ips_ctla4_neg_pd1_pos 7.0 8.0 7.0 7.0 5.0 ips_ctla4_pos_pd1_neg 7.0 9.0 8.0 8.0 6.0 ips_ctla4_pos_pd1_pos 6.0 7.0 6.0 7.0 5.0 patient_uuid 73292c19-d6a8-4bc4-97bc-ccce54f264f8 851a1157-e460-4794-8534-2eb6f0ae7468 5195c9ac-b649-49f8-8750-f9a4787e8e52 4a540448-f106-4b0e-9038-9f7ccefc785b d66c9261-6c0c-44b0-92fa-a43757f34cb2 | 2 | 2 |
77,337,407 | 2023-10-21 | https://stackoverflow.com/questions/77337407/deploy-gen2-google-cloud-functions-with-github-actions | I was able to easily deploy to GitHub Actions for 1st generation Google Cloud Functions, but now with 2nd generation, I get authentication errors. How can I set up a GitHub workflow to deploy my function when I merge or push to my main branch? Here is the workflow I was using before - id: "deploy" uses: "google-github-actions/deploy-cloud-functions@v0" with: name: "<cloud-function-name>" runtime: "python310" region: "us-east1" entry_point: "<function-in-main-file>" timeout: 540 service_account_email: [email protected] ingress_settings: ALLOW_ALL max_instances: 1 | I got this working from a comment on this GitHub issue. Their comment is for a Node function. The code I offer below is for Python, but it's the same for all of them. You'll need a service account json key and the yaml that uses it - and that's it! Service Account Create a service account in the project via the GCP console Give it Cloud Functions Developer and Service Account User permission (source) Click into the service account -> Keys -> Add Key -> Create JSON key Copy the JSON Now go to your GitHub repo settings -> "Secrets and variables" (Sidebar) -> Actions Click "New repository secret" Name: GCP_SA_DEPLOY_KEY (or whatever you want, it goes in the yaml below) Secret: Paste the JSON service account key GitHub Action Create .github/workflows/deploy-python-src.yml in your repo (name it whatever you want). Update the deploy command in the code below (ask ChatGPT if you need help figuring it out). Make sure it works by running it locally before testing the action. name: Deploy Cloud Functions run-name: π ${{ github.actor }} is deploying all cloud functions on: push: branches: - main jobs: deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - run: ls - id: 'auth' name: 'Authenticate to Google Cloud' uses: 'google-github-actions/auth@v1' with: credentials_json: '${{ secrets.GCP_SA_DEPLOY_KEY }}' - name: 'Set up Cloud SDK' uses: 'google-github-actions/setup-gcloud@v1' with: version: '>= 363.0.0' - name: 'Use gcloud CLI' run: 'gcloud info' - name: 'Deploy to gen2 cloud function' run: | gcloud functions deploy <cloud-function-name> \ --gen2 \ --region=us-east1 \ --runtime=python310 \ --source=<source-folder-in-repo> \ --entry-point=<function-in-main-file> \ --trigger-topic=<topic-if-pub-sub> β οΈNotesβ οΈ If you get errors related to Cloud Run, you can see the specific issue in the Cloud Run GCP console. The package that handles the auth advises against Service Accounts and provides another option (Workload Identity Federation). Read their README for specifics to know if it's secure in your scenario. | 3 | 5 |
77,335,159 | 2023-10-21 | https://stackoverflow.com/questions/77335159/django-objects-filter-received-a-naive-datetime-while-time-zone-support-is-activ | So this question is specifically about querying a timezone aware date range with max min in Django 4.2. Timezone is set as TIME_ZONE = 'UTC' in settings.py and the model in question has two fields: open_to_readers = models.DateTimeField(auto_now=False, verbose_name="Campaign Opens") close_to_readers = models.DateTimeField(auto_now=False, verbose_name="Campaign Closes") The query looks like allcampaigns = Campaigns.objects.filter(open_to_readers__lte=today_min, close_to_readers__gt=today_max) Failed Solution 1 today_min = datetime.combine(timezone.now().date(), datetime.today().time().min) today_max = datetime.combine(timezone.now().date(), datetime.today().time().max) print("Today Min", today_min, " & Today Max", today_max) returns the following which would be a suitable date range except it also gives the error below for both min and max. Today Min 2023-10-21 00:00:00 & Today Max 2023-10-21 23:59:59.999999 DateTimeField ... received a naive datetime (9999-12-31 23:59:59.999999) while time zone support is active. Partial Working Solution today_min = timezone.now() allcampaigns = Campaigns.objects.filter(open_to_readers__lte=today_min, close_to_readers__gt=today_min) Returns results without error but the time given is the current time and not the minimum or maximum for the day. Failed Solution 2 from here: now = datetime.now(timezone.get_default_timezone()) today_min = now.min today_max = now.max print("Today Min", today_min, " & Today Max", today_max) Returns Today Min 0001-01-01 00:00:00 & Today Max 9999-12-31 23:59:59.999999 and the aforementioned timezone error. How can I create two timezone aware datetime for the minumum and maximum parts of the day? | Likely the easiest way is just: from datetime import timedelta from django.utils.timezone import now date_min = now().replace(hour=0, minute=0, second=0, microsecond=0) date_max = date_min + timedelta(days=1) - date_min.resolution Campaigns.objects.filter( open_to_readers__lte=today_min, close_to_readers__gt=today_min ) | 2 | 1 |
77,333,861 | 2023-10-20 | https://stackoverflow.com/questions/77333861/have-read-html-read-cell-content-and-tooltip-text-bubble-separately-instead-o | This site page has tooltips appearing when hovering over values in columns "Score" and "XP LVL". It appears that read_html will concatenate cell content and tooltip. Splitting those in post-processing is not always obvious and I seek a way to have read_html handle them separately, possibly return them as two columns. This is how the first row appears online: (Rank)# Name Score XP LVL Victories / Total Victory Ratio 1 Raininββββ 6129 447 408 / 531 76% where "Score"'s "6129" carries tooltip "Max6129" where, more annoyingly, "XP LVL"'s "447" carries tooltip "21173534 pts" This is how it appears after reading: pd.read_html('https://stats.gladiabots.com/pantheon?', header=0, flavor="html5lib")[0] # Name Score XP LVL Victories / Total \ 0 1 Raininββββ 6129Max 6129 44721173534 pts 408 / 531 See "44721173534 pts" is the concatenation of "447" and "21173534 pts". "XP LVL" values have a variable number of digits, so splitting the string in the post-processing phase would require being pretty smart about it and I woud like to explore the "let read_html do the split", first. (The special flavor="html5lib" was added because the page is dynamically-generated) I have not found any mention of tooltips in the docs | It turns out that this is because pandas uses the .text attribute of the <td> bs4.element.Tag objects and this one concatenate (without any separator) the texts of all the tag's children. In the first row of the table, the score has two children 6129 and Max 6129, thus the concat. <td nowrap="" class="barContainer"> <div class="scoreBar" style="width: 100%;"></div> <div class="maxScoreBar" style="width: 0%;"></div> <span class="barLabel tooltipable"> "6129" <span class="tooltip"> "Max 6129" </span> </span> </td> A quick/hacky solution would be to override the _text_getter method of the parser used by pandas and replace .text with get_text that has a separator parameter : def _text_getter(self, obj): return obj.get_text(separator="_", strip=True) # I choosed "_" pd.io.html._BeautifulSoupHtml5LibFrameParser._text_getter = _text_getter With this modification, read_html gives this df : # Name Score XP LVL Victories / Total Victory_Ratio 0 1 Raininββββ 6129_Max 6129 447_21173534 pts 408 / 531 76% 1 2 ZM_XLβββ 5888_Max 6025 344_15942978 pts 3685 / 6748 54% 2 3 UzuraGamesβ 5555_Max 5586 119_4688941 pts 610 / 1109 55% .. ... ... ... ... ... ... 997 998 Tekuma 3183_Max 3460 27_370585 pts 151 / 304 49% 998 999 hemi 3183_Max 3227 10_49432 pts 29 / 62 46% 999 1000 wanna bet kid? 3183_Max 3304 13_85777 pts 51 / 95 53% [1000 rows x 6 columns] And this way, you can extract / disattach the values of the two concerned columns : scores = df.pop("Score").str.extract(r"(?P<Score>\d+)_Max (?P<Max>\d+)") xplvls = df.pop("XP LVL").str.extract(r"(?P<XPLVL>\d+)_(?P<PTS>\d+)") out = pd.concat([df, scores, xplvls], axis=1) Output : print(out) # with only `scores` and `xplvls` Score Max XPLVL PTS 0 6129 6129 447 21173534 1 5888 6025 344 15942978 2 5555 5586 119 4688941 .. ... ... ... ... 997 3183 3460 27 370585 998 3183 3227 10 49432 999 3183 3304 13 85777 [1000 rows x 4 columns] | 3 | 2 |
77,333,737 | 2023-10-20 | https://stackoverflow.com/questions/77333737/how-do-i-create-enums-in-python-3-11-with-no-arguments-passed-to-init | I have the following code, which works as expected on 3.9 and 3.10: from enum import Enum class Environment(Enum): DEV = () INT = () PROD = () def __init__(self): self._value_ = self.name.lower() def __str__(self): return self.name def get_url(self): if self is Environment.DEV: return 'http://localhost' else: return f'https://{self.value}.my.domain' Here is the result in a REPL: >>> Environment.DEV.get_url() 'http://localhost' >>> Environment.INT.get_url() 'https://int.my.domain' >>> Environment.PROD.get_url() 'https://prod.my.domain' I did this because I read that defining an __init__ method for enums allows passing tuples when assigning values, and I wanted to avoid the redundant/error prone explicit declarations such as: DEV = 'DEV' INT = 'INT' PROD = 'PROD' Now, when I run the same code on Python 3.11 or 3.12, every call to get_url returns the localhost URL. Trying it out in the REPL, we get: >>> Environment.DEV.get_url() 'http://localhost' >>> Environment.INT.get_url() 'http://localhost' >>> Environment.PROD.get_url() 'http://localhost' I even noticed that all of the enum values are equal to the first one: >>> Environment.DEV <Environment.DEV: 'dev'> >>> Environment.INT <Environment.DEV: 'dev'> >>> Environment.PROD <Environment.DEV: 'dev'> Has something changed in python 3.11 which makes using empty tuples at assignation forbidden/broken? How could I get the same result in those versions, i.e. declare the enum values without having to explicitly pass any argument to __init__? | You could use auto() and _generate_next_value_: from enum import Enum, auto class Environment(Enum): def _generate_next_value_(name, start, count, last_values): return name DEV = auto() INT = auto() PROD = auto() This will automatically generate values based on the names. Note that while _generate_next_value_ is currently documented as a staticmethod, applying the staticmethod decorator is optional, and didn't work on previous versions - previous versions required a bare function. | 2 | 2 |
77,333,397 | 2023-10-20 | https://stackoverflow.com/questions/77333397/how-do-you-merge-a-list-to-an-existing-dataframe | I am trying to replace a value in a pandas dataframe based on another column value. I made sample code below to replicate the issue, but essentially I want to add a column to an existing dataframe and then replace placeholder information based on another column's value. The dataframe I am using (not in the example) is based on an excel document that will be used to webscrape information that I want returned in the new column based on the other column's value. Just wanted to mention that before people asked why not just start the ex_list data in the dataframe. Additionally I only want it replace where the condition is met, not replacing the entire column with a set value. Sample Code ## this would be the excel document df sample_df = pd.DataFrame({"a":[1,2,3,4,5]}) sample_df["b"] = "" ## this data would be webscrapped using information above ex_list = [[1, "CHANGE"],[4, "CHANGE"]] for sub in ex_list: location = sample_df.loc[sample_df['a']==sub[0], 'b'].iloc[0] sample_df.replace(location, sub[1]) sample.head() I also tried this just as a quick playaround but it produced the same output sample_df = pd.DataFrame({"a":[1,2,3,4,5]}) sample_df["b"] = "" ex_list = [[1, "CHANGE"],[4, "CHANGE"]] for sub in ex_list: sample_df[sample_df['a']==sub[0], 'b'].iloc[0] += sub[1] sample_df.head() Both outputs are the same and show no change: a b 0 1 1 2 2 3 3 4 4 5 The output I am hoping for is this a b 0 1 CHANGE 1 2 2 3 3 4 CHANGE 4 5 I would appreciate a second pair of eyes on this. Is my method of 'locating the value' logic off? I thought the .loc/.iloc would be best but perhaps another way of indexing is best? I would be open to any solutions! | IMHO, the title makes your question an XY. I think you want to simply merge both objects : # sample_df["b"] = "" # no need for this line anymore out = sample_df.merge(pd.DataFrame(ex_list, columns=["a", "b"]), how="left") Output : print(out) a b 0 1 CHANGE 1 2 NaN 2 3 NaN 3 4 CHANGE 4 5 NaN | 3 | 3 |
77,333,168 | 2023-10-20 | https://stackoverflow.com/questions/77333168/invert-binary-numpy-array-using-a-mask | I have a binary N-by-N (N~=200) NumPy array populated with zeroes and ones. I would like to apply a Boolean mask and 'swap' the values that correspond to True in the mask, so for example if I had: arr = np.array([[0,0,1,1], [0,0,1,1], [0,0,1,1], [0,0,1,1]]) mask = np.array([[True,False,True,False], [True,False,True,False], [True,False,True,False], [True,False,True,False]]) I would like the resulting array to be: arr_new = np.array([[1,0,0,1], [1,0,0,1], [1,0,0,1], [1,0,0,1]]) I started this by initially creating a function swap_cell that swaps between the values, and then followed the approaches in this answer. def swap_cell(x): if x == 1.0: return 0.0 elif x == 0.0: return 1.0 arr_new = np.where(mask,swap_cell(arr),arr) This code returns ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(), which I understand is because of the 'if' statement in my swap_cell() function. I know there must be a much more pythonic way to accomplish this, but I thought this might work. | If you interpret the first (0,1) array as a boolean array, then you are doing the exclusive or operation of each entry in the first array with each entry of the second array. Therefore, you can do this with the numpy function np.logical_xor() as follows: arr = np.array([[0,0,1,1], [0,0,1,1], [0,0,1,1], [0,0,1,1]]) mask = np.array([[True,False,True,False], [True,False,True,False], [True,False,True,False], [True,False,True,False]]) np.array(np.logical_xor(arr, mask), dtype=int) which returns array([[1, 0, 0, 1], [1, 0, 0, 1], [1, 0, 0, 1], [1, 0, 0, 1]]) | 2 | 3 |
77,331,248 | 2023-10-20 | https://stackoverflow.com/questions/77331248/azure-web-app-uses-azure-authentication-to-access-the-web-app-how-do-i-use-this | I am currently developing an Azure Web App with Python, it uses Azure AD Authentication for site access. I want to display the current users email (I plan to use this later for access tiers also). I saw a way to get the email through use of delegated GraphAPI to get the users details. I have attempted the way below: authority = f'https://login.microsoftonline.com/{subid}' scope = ["https://graph.microsoft.com/.default"] app = ConfidentialClientApplication( client_id, authority=authority, client_credential=client_secret ) token_response = app.acquire_token_for_client(scopes=scope) access_token = token_response['access_token'] headers = { 'Authorization': f'Bearer {access_token}' } response = requests.get('https://graph.microsoft.com/v1.0/me', headers=headers) user_data = response.json() print("~~~~~~~~~~~~~~~~~~~~~~") for key, value in user_data.items(): print(f'{key}: {value}') print("~~~~~~~~~~~~~~~~~~~~~~") But this returns the error: error: {'code': 'BadRequest', 'message': '/me request is only valid with delegated authentication flow.', 'innerError': {'date': '2023-10-20T13:05:47', 'request-id': '9b6d8130-5bf6-4711-ba90-14ccaef0b127', 'client-request-id': '9b6d8130-5bf6-4711-ba90-14ccaef0b127'}} I assume that delegated would work as a user needs to sign and be authenticated on azure to access the site, or am I missing something? Any help or alternative methods would be appreciated GraphAPI Permissions: | The error occurred as your code use client credentials flow for generating access token to call /me endpoint, which is not a delegated flow as it does not involve user interaction. When I ran your code in my environment, I too got same error as below: import msal import requests authority = f'https://login.microsoftonline.com/tenantId' scope = ["https://graph.microsoft.com/.default"] client_id = "appId" client_secret = "secret" app = msal.ConfidentialClientApplication( client_id, authority=authority, client_credential=client_secret ) token_response = app.acquire_token_for_client(scopes=scope) access_token = token_response['access_token'] headers = { 'Authorization': f'Bearer {access_token}' } response = requests.get('https://graph.microsoft.com/v1.0/me', headers=headers) user_data = response.json() print("~~~~~~~~~~~~~~~~~~~~~~") for key, value in user_data.items(): print(f'{key}: {value}') print("~~~~~~~~~~~~~~~~~~~~~~") Response: To resolve the error, you need to switch to delegated flows like authorization code flow, interactive flow etc... that requires user to sign in for generating access token. 
In my case, I used interactive flow by enabling below option and added http://localhost as redirect URI in mobile/desktop platform: When I ran the below modified python code, it asked me to pick an account to sign in: import msal import requests authority = f'https://login.microsoftonline.com/tenantId' scope = ["https://graph.microsoft.com/.default"] client_id = "appId" app = msal.PublicClientApplication( client_id, authority=authority, ) token_response = app.acquire_token_interactive(scopes=scope) access_token = token_response['access_token'] headers = { 'Authorization': f'Bearer {access_token}' } response = requests.get('https://graph.microsoft.com/v1.0/me', headers=headers) user_data = response.json() print("~~~~~~~~~~~~~~~~~~~~~~") for key, value in user_data.items(): print(f'{key}: {value}') print("~~~~~~~~~~~~~~~~~~~~~~") While signing in, user will get consent screen to accept permissions as below: After accepting the consent, I got the response with signed-in user details successfully in the output console like this: | 2 | 3 |
77,330,978 | 2023-10-20 | https://stackoverflow.com/questions/77330978/alias-for-an-typable-union | I am looking for a way to create a type alias for a Union that is typable (subscriptable). In my opinion, there is no suitable default type in the typing module. typing.Iterable is an iterable, which could be a dict - but it shouldn't be. typing.Sequence is a sequence, which supports access by integer indices, but this doesn't include sets. Imagine I have these variables (which could of course also be parameters of a method): x : list[str] | tuple[str] | set[str] | frozenset[str] y : list[str] | tuple[str] | set[str] | frozenset[str] z : list[int] | tuple[int] | set[int] | frozenset[int] This is basically always the same. Therefore I would like to create a type alias for it. Something like this: my_type = list | tuple | set | frozenset so that I can annotate them, like: x: my_type[str] = ["foo", "bar"] y: my_type[str] = ["baz", "bar"] z: my_type[int] = [1, 7, 42] But this gives me this error: TypeError: There are no type variables left in list | tuple | set | frozenset When I define my type with Union (my_type = typing.Union[list, tuple, set, frozenset]) instead of the | shorthand, I get: Traceback (most recent call last): File "/opt/pycharm-professional/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 4, in <module> File "/usr/lib/python3.10/typing.py", line 312, in inner return func(*args, **kwds) File "/usr/lib/python3.10/typing.py", line 1058, in __getitem__ _check_generic(self, params, len(self.__parameters__)) File "/usr/lib/python3.10/typing.py", line 228, in _check_generic raise TypeError(f"{cls} is not a generic class") TypeError: typing.Union[list, tuple, set, frozenset] is not a generic class How can I create my type alias to make this work? It should work with python>=3.10. | If I understand your question correctly, you need a type alias which is generic over some type _T. If that's the case, for python>=3.10 you can make use of TypeAlias & TypeVar. from typing import TypeAlias, TypeVar _T = TypeVar("_T") ListOrTupleOrSetOrFrozenSet: TypeAlias = list[_T] | tuple[_T] | set[_T] | frozenset[_T] x: ListOrTupleOrSetOrFrozenSet[str] = ["1"] And in Python 3.12 you could also use the builtin type parameter syntax, which also supports generics. type ListOrTupleOrSetOrFrozenSet[T] = list[T] | tuple[T] | set[T] | frozenset[T] x: ListOrTupleOrSetOrFrozenSet[str] = ["1"] See also: How do you alias a type in Python? | 3 | 4 |
77,328,739 | 2023-10-20 | https://stackoverflow.com/questions/77328739/subtraction-from-a-dictionary-of-dictionaries-in-a-pandas-dataframe | I have a dataframe where I want to find the difference in the unique_users for the latest day (2023-09-07) and the previous day (2023-09-06) for hours 2 and 13 separately, for each key 'bsnl' and 'Other', and for a specific Exception. I need to consider the Exception and Hour and Date to calculate the difference. DateTime Date Hour Exception IMSI_Operator 0 2023-09-06 02:00:00 2023-09-06 00:00:00 2 s2ap {'bsnl': {'total_sessions': 50007, 'unique_users': 38880}, 'Other': {'total_sessions': 50, 'unique_users': 32}} 1 2023-09-06 13:00:00 2023-09-06 00:00:00 13 s2ap {'bsnl': {'total_sessions': 60004, 'unique_users': 49816}, 'Other': {'total_sessions': 34, 'unique_users': 22}} 2 2023-09-07 13:00:00 2023-09-07 00:00:00 13 s2ap {'bsnl': {'total_sessions': 45224, 'unique_users': 37525}, 'Other': {'total_sessions': 32, 'unique_users': 27}} 3 2023-09-07 02:00:00 2023-09-07 00:00:00 2 s2ap {'bsnl': {'total_sessions': 47713, 'unique_users': 37284}, 'Other': {'total_sessions': 43, 'unique_users': 27}} What I have tried: import pandas as pd import json # Sample DataFrame data = { 'Date': ['2023-09-06 00:00:00', '2023-09-06 00:00:00', '2023-09-07 00:00:00', '2023-09-07 00:00:00'], 'Hour': [2, 13, 13, 2], 'Exception': ['s2ap', 's2ap', 's2ap', 's2ap'], 'IMSI_Operator': [ {'bsnl': {'total_sessions': 50007, 'unique_users': 38880}, 'Other': {'total_sessions': 50, 'unique_users': 32}}, {'bsnl': {'total_sessions': 60004, 'unique_users': 49816}, 'Other': {'total_sessions': 34, 'unique_users': 22}}, {'bsnl': {'total_sessions': 45224, 'unique_users': 37525}, 'Other': {'total_sessions': 32, 'unique_users': 27}}, {'bsnl': {'total_sessions': 47713, 'unique_users': 37284}, 'Other': {'total_sessions': 43, 'unique_users': 27}} ] } result_df = pd.DataFrame(data) # Convert 'Date' to datetime result_df['Date'] = pd.to_datetime(result_df['Date']) # Filter for the latest day latest_day = result_df[result_df['Date'] == result_df['Date'].max()] # Filter for the previous day previous_day = result_df[result_df['Date'] == (result_df['Date'].max() - pd.DateOffset(days=1))] result = {} for key in latest_day['IMSI_Operator'].values[0].keys(): latest_unique_users = latest_day['IMSI_Operator'].values[0][key]['unique_users'] previous_unique_users = previous_day['IMSI_Operator'].values[0][key]['unique_users'] result[key] = { 'unique_users_diff': latest_unique_users - previous_unique_users, } print(result) But I would like to get the difference in the original dataframe itself. The previous date 2023-09-06 rows are not needed in the final dataframe. My current code is not considering the specific hour and specific exception for the calculation.
Expected output: Hour Exception Date IMSI_Operator bsnl_unique_users_diff Other_unique_users_diff 2 s2ap 2023-09-07 {'bsnl': {'total_sessions': 60004, 'unique_users': 49816}, 'Other': {'total_sessions': 34, 'unique_users': 22}} -1596 5 13 s2ap 2023-09-07 {'bsnl': {'total_sessions': 50007, 'unique_users': 38880}, 'Other': {'total_sessions': 50, 'unique_users': 32}} -12291 -5 | You can slice and merge with help of json_normalize to rework the dictionaries to columns: date1 = pd.Timestamp('2023-09-07') date2 = pd.Timestamp('2023-09-06') hours = [2, 13] tmp = (df.join(pd.json_normalize(df['IMSI_Operator'])) .query('Hour in @hours') .set_index(['Date', 'Hour']) ) cols = list(tmp.filter(like='unique_users')) out = (tmp.reset_index()[list(df)] .merge(tmp.loc[date1, cols].sub(tmp.loc[date2, cols]) .add_suffix('_diff').assign(Date=date1), on=['Date', 'Hour'], how='right') .reset_index() ) Output: index Date Hour Exception IMSI_Operator bsnl.unique_users_diff Other.unique_users_diff 0 0 2023-09-07 2 s2ap {'bsnl': {'total_sessions': 47713, 'unique_use... -1596 -5 1 1 2023-09-07 13 s2ap {'bsnl': {'total_sessions': 45224, 'unique_use... -12291 5 | 3 | 2 |
77,322,485 | 2023-10-19 | https://stackoverflow.com/questions/77322485/how-to-pass-an-array-containing-functions-as-a-parameter-into-njit | I would like to pass an array which contains a list of numba compiled functions as a parameter into an njit method. In my attempt to do so, I encountered the following error: non-precise type array(pyobject, 1d, C). I'm able to pass a numba compiled function as parameter into an njit method, but i'm unable to do so if the numba compiled functions are stored in an array. Is there a way to pass an array of functions as parameter into njit function? or something along the lines of type casting the array so numba knows the content of the array np.array([function1, function2], dtype=function)? Here's a sample of my code import numpy as np from numba import njit @njit() def function1(x, y): return x > y @njit() def function2(x, y): return x < y @njit() def main(inputArray): print(inputArray[0](1,2)) print(inputArray[1](1,2)) functionArray = np.array([function1, function2]) main(functionArray) Error message numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) non-precise type array(pyobject, 1d, C) | You can try to pass variable number of arguments to function (but still, you get a warning message): import numba T = numba.float64( numba.float64, numba.float64, ) @numba.njit(T) def function1(x, y): return x > y @numba.njit(T) def function2(x, y): return x < y @numba.njit def main(*inputArray): print(inputArray[0](1.0, 2.0)) print(inputArray[1](1.0, 2.0)) l = [function1, function2] main(*l) Prints: 0.0 1.0 EDIT: Using numba.typed.List - this compiles without warning. import numba T = numba.float64( numba.float64, numba.float64, ) @numba.njit(T) def function1(x, y): return x > y @numba.njit(T) def function2(x, y): return x < y @numba.njit def main(inputArray): print(inputArray[0](1.0, 2.0)) print(inputArray[1](1.0, 2.0)) function_sig = numba.types.FunctionType(T) l = numba.typed.List.empty_list(function_sig, 2) l.append(function1) l.append(function2) main(l) Prints: 0.0 1.0 | 2 | 3 |
77,327,162 | 2023-10-19 | https://stackoverflow.com/questions/77327162/list-the-differences-between-two-dataframe-columns-ignoring-case | I am trying to compare columns from 2 dataframes and return the difference, ignoring case. Here is what I have so far: import pandas as pd if __name__ == "__main__": data1={'Name':['Karan','Rohit','Sahil','Aryan']} data2={'Name':['karan','Rohit','Sahil']} df1=pd.DataFrame(data1) df2=pd.DataFrame(data2) print(list(set(df1['Name']).difference(df2['Name']))) This code prints ['Karan', 'Aryan']. How do I modify this to ignore case so that karan and Karan are recognized as a match and only Aryan is returned? I don't want to use the following because it returns aryan and I want to maintain the capitalization of the row. In my real case, they are not first names so it isn't as easy as making the first letter capitalized again after taking the difference. print(list(set(df1['Name'].str.lower()).difference(df2['Name'].str.lower()))) | To perform a case insensitive comparison, use str.casefold: print(list(set(df1['Name'].str.casefold()).difference(df2['Name'].str.casefold()))) If you want to keep the original case use boolean indexing with isin: df1.loc[~df1['Name'].str.casefold().isin(df2['Name'].str.casefold()), 'Name'].unique() Output: array(['Aryan'], dtype=object) | 2 | 6 |
77,325,636 | 2023-10-19 | https://stackoverflow.com/questions/77325636/how-to-load-an-existing-vector-db-into-langchain | I have the following code which loads my PDF file, generates embeddings, and stores them in a vector DB. I can then use it to perform searches on it. The issue is that every time I run it the embeddings are recreated and stored in the DB along with the ones already created. I'm trying to figure out how to load an existing vector DB into LangChain rather than recreating it every time the app runs. load it def load_embeddings(store, file): # delete the dir # shutil.rmtree(store) # I have to delete it or it just loads double data loader = PyPDFLoader(file) text_splitter = CharacterTextSplitter( separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len, is_separator_regex=False, ) pages = loader.load_and_split(text_splitter) return DocArrayHnswSearch.from_documents( pages, GooglePalmEmbeddings(), work_dir=store + "/", n_dim=768 ) use it db = load_embeddings("linda_store", "linda.pdf") embeddings = GooglePalmEmbeddings() query = "Have I worked with Oauth?" embedding_vector = embeddings.embed_query(query) docs = db.similarity_search_by_vector(embedding_vector) for i in range(len(docs)): print(i, docs[i]) issue This works fine but if I run it again it just loads the file again into the vector DB. I want it to just use the DB after I have created it and not create it again. I can't seem to find a method for loading it. I tried db = DocArrayHnswSearch.load("hnswlib_store/", embeddings) But that's a no go. | Your load_embeddings function is recreating the database every time you call it. Here's why: 1. You're loading from PyPDFLoader every time ... # We don't need this when loading from store loader = PyPDFLoader(file) ... 2. from_documents(documents, embedding, **kwargs) ... # We don't need to pass pages when loading from store return DocArrayHnswSearch.from_documents( pages, GooglePalmEmbeddings(), work_dir=store + "/", n_dim=768 ) ... Instead, you can try this: def query_vector_store(query): embeddings = OpenAIEmbeddings(openai_api_key=open_ai_key) vector_store = DocArrayHnswSearch.from_params(embeddings, "store/", 1536) embedding_vector = embeddings.embed_query(query) return vector_store.similarity_search_by_vector(embedding_vector) I am using OpenAIEmbeddings() here but the same code should apply to GooglePalmEmbeddings(); just make sure you update the value of the dimension. 1. DocArrayHnswSearch.from_params We're using DocArrayHnswSearch.from_params instead to load embeddings from the store (see here). This method does not expect the documents. 2. We're using our vector_store to perform similarity search As you can see from the query_vector_store(query: str) function above, we're not re-loading the documents from the PDF loader every time. Instead, we're just passing in our embeddings, work directory, and dimensions. 3. Usage You can use the method as such: query_vector_store('YOUR_QUERY'). Based on your for loop here: for i in range(len(docs)): print(i, docs[i]) You'll see the documents sorted by most similar. I hope this helps! | 3 | 4 |
77,324,237 | 2023-10-19 | https://stackoverflow.com/questions/77324237/aligning-images-of-the-sun-but-theyre-drifting | I've been aligning over 200 images of the sun taken in sequence in order to analyse them at a later point. I thought I had managed, but when playing back a video of all the images in sequence I noticed that the sun drifts downwards. I believe this is because in some images the full disc is not present, and hence my aligning them "by the centroid" is wrong in some way. Here are some examples of what I mean by drift: I believe this drift is due to how I find my centroid for each disk - as the full disk is not present in every FITS image, the centroid slowly moves to a different position and the alignment skews. My code for this is here: def get_centroid(data): #data is extracted from a FITS file #getting data centroid y_index, x_index = np.where(data >= 1e4) #threshold is set to only include bright circle (sun) and not background #calculate the centroid centroid_y = np.mean(y_index) centroid_x = np.mean(x_index) return(centroid_y, centroid_x) I've realised this only works if the full circle is present - in some of my images, parts of the sun disk are cut off. I'm struggling to edit my function so that if one of the x or y axes is shorter (i.e. if part of the circle is cut off) I could adjust the indices to make it as if the full circle was there - so that my centroid is truly the centre of the circle and I could continue alignment that way. Thanks in advance | Hope this helps you. I used the idea presented in OpenCV: Fitting a single circle to an image (in Python). Also, sorry for the huge answer. First, let's create a larger canvas so we can centralize the sun image later. import cv2 as cv import matplotlib.pyplot as plt import numpy as np img1 = cv.imread('6cTxW.png') img2 = cv.imread('CVxDC.png') def image_inside_larger_canvas(img,size): # Define the size of the larger canvas larger_canvas_size = size # Change the dimensions as needed # Create a larger canvas of the specified size larger_canvas = np.zeros((larger_canvas_size[1], larger_canvas_size[0], 3), dtype=np.uint8) # Calculate the position to place the image in the center of the larger canvas x_offset = (larger_canvas_size[0] - img.shape[1]) // 2 y_offset = (larger_canvas_size[1] - img.shape[0]) // 2 # Paste the image onto the larger canvas larger_canvas[y_offset:y_offset + img.shape[0], x_offset:x_offset + img.shape[1]] = img return larger_canvas Now comes the image processing part; the explanation is inside the code.
#create a larger canvas img1_larger = image_inside_larger_canvas(img2,(1200,1200)) #convert to gray img1_gray = cv.cvtColor(img1_larger, cv.COLOR_BGR2GRAY) #binarization so we can fit the countours th_val,binarized1 = cv.threshold(img1_gray,1,255,cv.THRESH_OTSU) # this part will get you the outer shape of the sun, in # other words, the perimeter (also called morph gradient) kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE,(5,5)) binarized1_eroded = cv.erode(binarized1,kernel) gradient = binarized1 - binarized1_eroded #finding countours and finding the biggest area contours,_ = cv.findContours(gradient,cv.RETR_TREE,cv.CHAIN_APPROX_SIMPLE) areas = [cv.contourArea(c) for c in contours] sorted_areas = np.sort(areas) #choosing the one with the biggest area cnt = contours[areas.index(sorted_areas[-1])] # fit circle (x,y),radius = cv.minEnclosingCircle(cnt) center = (int(x),int(y)) radius = int(radius) # fit ellipse (blue) ellipse = cv.fitEllipse(cnt) (ellipse_center, axes, angle) = ellipse # Get ellipse center for later use x_c,y_c = int(ellipse_center[0]), int(ellipse_center[1]) #change to RGB so we can plot the fitted ellipses and circles img1_gray = cv.cvtColor(img1_gray, cv.COLOR_GRAY2RGB) # draw ellipse and circle to see the process # you can remove this later. cv.ellipse(img1_gray,ellipse,(255,0,0),2) cv.circle(img1_gray,center,radius,(0,255,0),2) # Draw a dot also _ = cv.circle(img1_gray, (x_c,y_c), 10, (255, 0, 0), -1) Now you just need to translate the sun to the center of the canvas. Wich is done by using warpAffine with the T matrix that uses the distance of the ellipse center from the middle of the canvas. height, width = img1_gray.shape[:2] #get the middle of the canvas middle_h, middle_w = height // 2, width // 2 # the matrix tarfomation T = np.float32([[1, 0, middle_w-x_c], [0, 1, middle_h-y_c]]) # use warpAffine to transform the image using the matrix, T img_translation = cv.warpAffine(img1_gray, T, (width, height)) # Draw a horizontal line on the canvas to check if it is in the middle. line = img_translation.shape[1] // 2 cv.line(img_translation, (0, line), (img_translation.shape[0], line), (0, 255, 0), 2) cv.line(img_translation, (line, 0), (line, img_translation.shape[0]), (0, 255, 0), 2) # Draw a dot on the canvas cv.circle(img_translation, (np.abs(x_c), np.abs(y_c)), 10, (0, 255, 0), -1) plt.figure(figsize=(20,20)) plt.imshow(img_translation) plt.axis('off') plt.savefig('dale1.png') plt.show() The green dot is the old sun center, the red circle is the ellipse, the green circle is the circle and, the green lines are the image center lines. Results: | 3 | 2 |
77,325,437 | 2023-10-19 | https://stackoverflow.com/questions/77325437/how-do-i-get-an-github-app-installation-token-to-authenticate-cloning-a-reposito | I am writing a Python application that needs to clone private repositories from an organization. I've already registered a GitHub App, and I can see in the documentation that this needs an installation access token, but each of the steps in how to get one of these takes me down a rabbit hole of (circular references of) links. | There's a couple of steps needed for this: firstly you need to get some information from GitHub by hand, and then there is a little dance that your app needs to do to swap its authentication secrets for a temporary code that can be used to authenticate a git clone. Gather information Before you can write a function to do this, you need three pieces of information, all of which are available from the App's settings. To get there Go to the organization you have created the App for Go to Settings > Developer Settings > GitHub Apps Click Edit next to the name of the App you're using, and authenticate with 2FA The three pieces of information you need are: The App ID This is in the General page, in the About section at the top. The Installation ID If you haven't already, you also need to install the App into the Organization. Once this is done, go back to the Install App page in the App settings, and copy the link for the installation settings. Paste it into your editor and get the number from the end. The link should have the form https://github.com/apps/{app_name}/installations/{installation_id}; the part after the last / is the installation ID. (If you have multiple installations of your app, there may be a way to get this programmatically; I haven't looked into this as I didn't need it for my use case.) PEM file This is how you prove to GitHub that you are in control of the App. Go back to the General page in the App settings, and scroll down to the Private keys section. Click the Generate a private key button; this will immediately generate a .pem file and download it to your machine. Do not commit this to your repository unless you want everyone who can see the repository to be able to authenticate to GitHub as you. The code Once you have these three things, the steps you need in code are: Load your PEM Use the PEM to create a JSON Web Token that will authenticate your API call Call the GitHub API to get an installation token (Use the installation token to clone the repository of interest.) Get the installation token Code to do the first three steps could look like this: from datetime import datetime import jwt import requests def get_installation_access_token( pem_filename: str, app_id: str, installation_id: str ) -> str: """ Obtain and return a GitHub installation access token. Arguments: pem_filename: Filename of a PEM file generated by GitHub to authenticate as the installed app. app_id: The application ID installation_id: The ID of the app installation. Returns: The installation access token obtained from GitHub. 
""" # With thanks to https://github.com/orgs/community/discussions/48186 now = int(datetime.now().timestamp()) with open(pem_filename, "rb") as pem_file: signing_key = jwt.jwk_from_pem(pem_file.read()) payload = {"iat": now, "exp": now + 600, "iss": app_id} jwt_instance = jwt.JWT() encoded_jwt = jwt_instance.encode(payload, signing_key, alg="RS256") response = requests.post( "https://api.github.com/app/installations/" f"{installation_id}/access_tokens", headers={ "Authorization": f"Bearer {encoded_jwt}", "Accept": "application/vnd.github+json", "X-GitHub-Api-Version": "2022-11-28", }, ) if not 200 <= response.status_code < 300: raise RuntimeError( "Unable to get token. Status code was " f"{response.status_code}, body was {response.text}." ) return response.json()["token"] Pass in the information collected above as the three parameters to the function. Note that this depends on the jwt and requests packages, both available under those names from pip. This will give an installation token that is valid for an hour. (This is much less time than the PEM file is valid, because it has a lot less security. That's the reason this dance is neededβyou're trading something pretty secure for something that is less secure but easier to use with git clone; because it's less secure, it has to be time limited instead to reduce the chance of it getting stolen.) Clone the repository Assuming that you have a repository URL in the form repo_url = https://github.com/organization/repository_name then you can clone the repository as: import git if not original_url.startswith("https://"): raise ValueError("Need an HTTPS URL") auth_url = f"https://x-access-token:{token}@{original_url[8:]}" git.Repo.clone_from( auth_url, deployment["tempdir_path"] / "repo", branch="deployment", ) Here I've used the GitPython library for Python. Equivalently, you could use the shell command $ git clone https://x-access-token:${TOKEN}@github.com/organization/repository_name where ${TOKEN} contains the result of calling the above Python function. Credits Many thanks to loujr on the GitHub Community for the guide that eventually clued me into how to do this. I've stripped out the need to use command-line arguments and to manually pass the JWT into curl, instead keeping everything in Python. | 2 | 6 |
77,325,233 | 2023-10-19 | https://stackoverflow.com/questions/77325233/how-to-convert-a-dict-to-a-dataclass-reverse-of-asdict | The dataclasses module lets users make a dict from a dataclass really conveniently, like this: from dataclasses import dataclass, asdict @dataclass class MyDataClass: ''' description of the dataclass ''' a: int b: int # create instance c = MyDataClass(100, 200) print(c) # turn into a dict d = asdict(c) print(d) But I am trying to do the reverse process: dict -> dataclass. The best that I can do is unpack a dict back into the predefined dataclass. # is there a way to convert this dict to a dataclass ? my_dict = {'a': 100, 'b': 200} e = MyDataClass(**my_dict) print(e) How can I achieve this without having to pre-define the dataclass (if it is possible)? | You can use make_dataclass: from dataclasses import make_dataclass my_dict = {"a": 100, "b": 200} make_dataclass( "MyDynamicallyCreatedDataclass", ((k, type(v)) for k, v in my_dict.items()) )(**my_dict) | 5 | 10 |
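A brief usage sketch to complement the accepted answer (the class name MyDynamic is illustrative, not from the original post): the dynamically built dataclass behaves like a hand-written one and round-trips back through asdict.

from dataclasses import make_dataclass, asdict

my_dict = {"a": 100, "b": 200}
# Field names and types are inferred from the dict's keys and values
MyDynamic = make_dataclass("MyDynamic", ((k, type(v)) for k, v in my_dict.items()))
instance = MyDynamic(**my_dict)
print(instance)          # MyDynamic(a=100, b=200)
print(asdict(instance))  # {'a': 100, 'b': 200}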
77,324,859 | 2023-10-19 | https://stackoverflow.com/questions/77324859/how-to-convert-list-of-tuple-into-a-string | I have a list of tuples and I'd like to convert the list into a string of tuple pairs separated by commas. Not sure how to do it. For example, if I have a list like this a=[(830.0, 930.0), (940.0, 1040.0)] I'd like to convert it to a string like this b="(830.0, 930.0), (940.0, 1040.0)" a=[(830.0, 930.0), (940.0, 1040.0), (1050.0, 1150.0), (1160.0, 1260.0), (1270.0, 1370.0), (1380.0, 1480.0), (1490.0, 1590.0)] b=','.join(a) b ----> 2 b=','.join(a) 3 b TypeError: sequence item 0: expected str instance, tuple found | Use f-strings or formatted string literals and str.strip: a = [(830.0, 930.0), (940.0, 1040.0)] b = f"{a}".strip("[]") print(b) # (830.0, 930.0), (940.0, 1040.0) | 3 | 3 |
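If you would rather keep the asker's join-based approach, converting each tuple to its string form first also avoids the TypeError; a minimal sketch:

a = [(830.0, 930.0), (940.0, 1040.0)]
# join() only accepts strings, so stringify each tuple before joining
b = ", ".join(str(t) for t in a)
print(b)  # (830.0, 930.0), (940.0, 1040.0)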
77,321,575 | 2023-10-19 | https://stackoverflow.com/questions/77321575/how-to-set-scipy-interpolator-to-preserve-the-data-most-accurately | This is an (x,y) plot I have of a vehicle's position data every 0.1 seconds. The total set is around 500 points. I read other solutions about interpolating with SciPy (here and here), but it seems that SciPy interpolates at even intervals by default. Below is my current code: def reduce_dataset(x_list, y_list, num_interpolation_points): points = np.array([x_list, y_list]).T distance = np.cumsum( np.sqrt(np.sum( np.diff(points, axis=0)**2, axis=1 )) ) distance = np.insert(distance, 0, 0)/distance[-1] interpolator = interp1d(distance, points, kind='quadratic', axis=0) results = interpolator(np.linspace(0, 1, num_interpolation_points)).T.tolist() new_xs = results[0] new_ys = results[1] return new_xs, new_ys xs, ys = reduce_dataset(xs,ys, 50) colors = cm.rainbow(np.linspace(0, 1, len(ys))) i = 0 for y, c in zip(ys, colors): plt.scatter(xs[i], y, color=c) i += 1 It produces this output: This is decent, but I want to set the interpolator to try and place more points in the places that are hardest to linearly interpolate, and place less points in areas that can be easily reconstructed with an interpolated line. Notice how in the second image, the final point appears to suddenly "jump" from the previous one. And the middle section seems a bit redundant, since many of those points fall in a perfectly straight line. This is not the most efficient use of 50 points for something that is to be reconstructed as accurately as possible using linear interpolation. I made this manually, but I am looking for something like this, where the algorithm is smart enough to place points very densely in places where the data changes non-linearly: This way, the data can be interpolated with a higher degree of accuracy. The large gaps between points in this graph can be very accurately interpolated with a simple line, whereas the dense clusters require much more frequent sampling. I have read into the interpolator docs on SciPy, but can't seem to find any generator or setting that can do this. I have tried using "slinear" and "cubic" interpolation as well, but it seems to still sample at even intervals rather than grouping points where they are needed most. Is this something SciPy can do, or should I use something like an SKLearn ML algorithm for a job like this? | It seems to me that you are confused between the interpolator object that is constructed by interp1d, and the actual interpolated coordinates that are the final result you want. it seems that SciPy interpolates at even intervals by default interp1d returns an interpolator object that is built from the x and y coordinates you provide. Those do not have to be evenly spaced at all. Then, you provide to this interpolator xnew values that define where the interpolator will reconstruct your signal. This is where you have to specify if you want evenly spaced or not: results = interpolator(np.linspace(0, 1, num_interpolation_points)).T.tolist(). Notice the call to np.linspace, which literally means "linearly spaced values". 
Replace this by np.logspace() to have logarithmically spaced value, or by something else: import numpy as np from scipy.interpolate import interp1d import matplotlib.pyplot as plt # Generate fake data x = np.linspace(1, 3, 1000) y = (x - 2)**3 # interpolation interpolator = interp1d(x, y) # different xnews N = 20 xnew_linspace = np.linspace(x.min(), x.max(), N) # linearly spaced xnew_logspace = np.logspace(np.log10(x.min()), np.log10(x.max()), N) # log spaced # spacing based on curvature gradient = np.gradient(y, x) second_gradient = np.gradient(gradient, x) curvature = np.abs(second_gradient) / (1 + gradient**2)**(3 / 2) idx = np.round(np.linspace(0, len(curvature) - 1, N)).astype(int) epsilon = 1e-1 a = (0.99 * x.max() - x.min()) / np.sum(1 / (curvature[idx] + epsilon)) xnew_curvature = np.insert(x.min() + np.cumsum(a / (curvature[idx] + epsilon)), 0, x.min()) fig, axarr = plt.subplots(2, 2, layout='constrained', sharex=True, sharey=True) axarr[0, 0].plot(x, y) for ax, xnew in zip(axarr.flatten()[1:], [xnew_linspace, xnew_logspace, xnew_curvature]): ax.plot(xnew, interpolator(xnew), '.--') axarr[0, 0].set_title('base signal') axarr[0, 1].set_title('linearly spaced') axarr[1, 0].set_title('log spaced') axarr[1, 1].set_title('curvature based spaced') plt.savefig('test_interp1d.png', dpi=400) Note that I am not sure that scaling on the curvature as I did is the proper way to do it. But that gives you the idea about interp1d. | 2 | 3 |
77,323,830 | 2023-10-19 | https://stackoverflow.com/questions/77323830/why-is-this-python-priority-queue-failing-to-heapify | Why is this priority queue failing to heapify? Where (150, 200, 200) are the priority values assigned to the dictionaries import heapq priority_q = [ (150, {'intel-labels': {'timestamp': 150}}), (200, {'intel-labels': {'timestamp': 200}}), (200, {'intel-labels': {'timestamp': 200, 'xx': 'xx'}}) ] heapq.heapify(priority_q) print( heapq.nlargest(2, priority_q)) The exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: '<' not supported between instances of 'dict' and 'dict' The below, however, works.. priority_q = [ (150, {'intel-labels': {'timestamp': 150}}), (200, {'intel-labels': {'timestamp': 200}}), (201, {'intel-labels': {'timestamp': 200, 'xx': 'xx'}}) ] heapq.heapify(priority_q) Why is this? | Why is this? heapq heapifies the heap according to comparisons. Tuples are compared lexicographically. If your first values in all tuples are distinct, you will never compare the second values. This is the case for your second example. If you have a duplicate value (200 in your first example), the second elements will be compared. These are dicts (which can't be compared), so this raises an error. As for a proper fix: The heapq docs got you covered. Their suggestion is to use triples such that the second value is an autoincremented tie breaker, or to use a data class that only compares the priority field. Note that these approaches differ slightly: With a tie breaker, there won't be any ties; nlargest will always give you just a single element - the one which was inserted first. If you want it to return all tying elements, you shouldn't use this approach. Applying the second approach to your example, the following works: import heapq from dataclasses import dataclass, field from typing import Any @dataclass(order=True) class PrioritizedItem: priority: int item: Any=field(compare=False) priority_q = [ PrioritizedItem(150, {'intel-labels': {'timestamp': 150}}), PrioritizedItem(200, {'intel-labels': {'timestamp': 200}}), PrioritizedItem(200, {'intel-labels': {'timestamp': 200, 'xx': 'xx'}}) ] heapq.heapify(priority_q) print(heapq.nlargest(2, priority_q)) prints [PrioritizedItem(priority=200, item={'intel-labels': {'timestamp': 200}}), PrioritizedItem(priority=200, item={'intel-labels': {'timestamp': 200, 'xx': 'xx'}})] | 3 | 4 |
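The answer points to the two fixes suggested in the heapq docs but only shows the comparable-dataclass variant; here is a small sketch of the other one, the counter-based tie breaker (entries become (priority, sequence, payload) triples, so the dicts are never compared):

import heapq
from itertools import count

sequence = count()  # strictly increasing, so ties on priority are broken here
priority_q = []
for priority, payload in [
    (150, {'intel-labels': {'timestamp': 150}}),
    (200, {'intel-labels': {'timestamp': 200}}),
    (200, {'intel-labels': {'timestamp': 200, 'xx': 'xx'}}),
]:
    heapq.heappush(priority_q, (priority, next(sequence), payload))

print(heapq.nlargest(2, priority_q))  # tuple comparison stops at the sequence number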
77,321,812 | 2023-10-19 | https://stackoverflow.com/questions/77321812/threadpoolexecutor-exits-before-queue-is-empty | My goal is to concurrently crawl URLs from a queue. Based on the crawling result, the queue may be extended. Here is the MWE: import queue from concurrent.futures import ThreadPoolExecutor import time def get(url): # let's assume that the HTTP magic happens here time.sleep(1) return f'data from {url}' def crawl(url, url_queue: queue.Queue, result_queue: queue.Queue): data = get(url) result_queue.put(data) if 'more' in url: url_queue.put('url_extended') url_queue = queue.Queue() result_queue = queue.Queue() for url in ('some_url', 'another_url', 'url_with_more', 'another_url_with_more', 'last_url'): url_queue.put(url) with ThreadPoolExecutor(max_workers=8) as executor: while not url_queue.empty(): url = url_queue.get() executor.submit(crawl, url, url_queue, result_queue) while not result_queue.empty(): data = result_queue.get() print(data) In this MWE, two URLs require another crawl: 'url_with_more' and 'another_url_with_more'. They are added to the url_queue while crawling. However, this solution ends before those two 'more' URLs are processed; after running, the url_queue remains to have two entries. How can I make sure that the ThreadPoolExecutor does not exit too early? Have I misunderstood ThreadPoolExecutor? | You have a race condition where the check for more work to submit happens before new tasks are added from the other thread, you need to not exit the threadpool until you wait on all submitted jobs, and then check if there is more work to submit. task_queue = queue.Queue() with ThreadPoolExecutor(max_workers=8) as executor: # exit when no task pending or to be scheduled while not (url_queue.empty() and task_queue.empty()): # submit new tasks while not url_queue.empty(): url = url_queue.get() task_queue.put(executor.submit(crawl, url, url_queue, result_queue)) # wait for one task if not task_queue.empty(): task_queue.get().result() # process result here if needed. You might want to use concurrent.futures.wait with FIRST_COMPLETED instead of waiting on the first task in the queue if tasks have varying times. tasks_to_wait = set() with ThreadPoolExecutor(max_workers=8) as executor: # exit when no task pending or to be scheduled while not (url_queue.empty() and len(tasks_to_wait) == 0): # submit new tasks while not url_queue.empty(): url = url_queue.get() tasks_to_wait.add(executor.submit(crawl, url, url_queue, result_queue)) # wait for one task if len(tasks_to_wait) != 0: done, tasks_to_wait = concurrent.futures.wait(tasks_to_wait, None, concurrent.futures.FIRST_COMPLETED) | 4 | 1 |
77,316,817 | 2023-10-18 | https://stackoverflow.com/questions/77316817/python-sibling-relative-import-error-no-known-parent-package | I want to import a module from a subpackage so I went here Relative importing modules from parent folder subfolder and since it was not working I read all the literature here on stack and found a smaller problem that I can reproduce but cannot solve. I want to use relative imports because I don't wanna deal with sys.path etc and I don't wanna install every module of my project to be imported everywhere. I wanna make it work with relative imports. My project structure: project/ __init__.py bar.py foo.py main.py bar.py: from .foo import Foo class Bar(): @staticmethod def get_foo(): return Foo() foo.py: class Foo(): pass main.py: from bar import Bar def main(): f = Bar.get_foo() if __name__ == '__main__': main() I am running the project code from terminal with python main.py and I get the following: Traceback (most recent call last): File "**omitted** project/main.py", line 1, in <module> from bar import Bar File "**omitted** project/bar.py", line 1, in <module> from .foo import Foo ImportError: attempted relative import with no known parent package Why am I getting this error? It seems that bar doesn't recognize project as the parent package but: __init__.py is in place bar.py is being run as a module not a script since it is called from main and not from the command line (so __package__ and __name__ should be in place to help solve the relative import Relative imports for the billionth time) Why am I getting this error? What am I getting wrong? I have worked for a while just adding the parent of cwd to the PYTHONPATH but I wanna fix this once and for all. | You should be running your project from the parent of project dir as: $ python -m project.main # note no .py This tells python that there is a package named project and inside it a module named main - then relative and absolute imports work correctly - once you change the import in main in either of from .bar import Bar # relative from project.bar import Bar # absolute | 7 | 13 |
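A sketch of what the entry point looks like after applying the answer (assuming the same project/ layout; run it from the directory that contains project/ with: python -m project.main):

# project/main.py
from .bar import Bar            # relative import now resolves, because main runs as part of the 'project' package
# from project.bar import Bar  # the absolute form works too

def main():
    f = Bar.get_foo()
    print(f)

if __name__ == "__main__":
    main()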
77,319,901 | 2023-10-18 | https://stackoverflow.com/questions/77319901/show-percentage-of-total-in-pandas-pivot-table-with-multiple-columns-based-on-si | Example dataframe, also available as a fiddle: import pandas as pd d = { "year": [2021, 2021, 2021, 2021, 2022, 2022, 2022, 2023, 2023, 2023, 2023], "type": ["A", "B", "B", "A", "A", "B", "A", "B", pd.NA, "B", "A"], "observation": [22, 11, 67, 44, 2, 16, 78, 9, 10, 11, 45] } df = pd.DataFrame(d) df_pivot = pd.pivot_table( df, values="observation", index="year", columns="type", aggfunc="count" ) The pivot table produces the desired output by count (this is intentional, I do not want the sum of observations, I want a row count): >>> print(df_pivot) type A B year 2021 2 2 2022 2 1 2023 1 2 However, I would like to show the percentage divided into total for each row by types "A" and "B" (the values of the "type" column in the dataframe). Note that not all rows have a type, some are NA (one is NA in this sample data to illustrate this). It's fine to ignore these unpopulated values in calculations. This also means that the "total" may be different in each row and is based on the sum of counted values in each type (i.e., count of A + count of B for each year). I have tried multiple ways but it only seems to work when I isolate each specific type one at a time. I have not been able to figure out how to do it where it has similar output only showing the percentage of the total instead of the count. My lambda functions for aggfunc seem to result in incorrect values not reflective of the correct percentages. Example desired output: >>> print(df_desired_output) type A B year 2021 0.50 0.50 2022 0.66 0.33 2023 0.33 0.66 How do I get this desired output? | Division should do it: df_pivot.div(df_pivot.sum(1),axis=0).round(2) type A B year 2021 0.50 0.50 2022 0.67 0.33 2023 0.33 0.67 | 4 | 4 |
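For reference, pandas can also produce the same row percentages in a single call with pd.crosstab and normalize="index" (a sketch reusing the question's sample data; rows whose type is missing are left out of the tally):

import pandas as pd

d = {
    "year": [2021, 2021, 2021, 2021, 2022, 2022, 2022, 2023, 2023, 2023, 2023],
    "type": ["A", "B", "B", "A", "A", "B", "A", "B", pd.NA, "B", "A"],
    "observation": [22, 11, 67, 44, 2, 16, 78, 9, 10, 11, 45],
}
df = pd.DataFrame(d)
# normalize="index" divides each row's counts by that row's total
print(pd.crosstab(df["year"], df["type"], normalize="index").round(2))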
77,319,069 | 2023-10-18 | https://stackoverflow.com/questions/77319069/why-does-assign-lambda-sometimes-reads-the-entire-column-rather-each-individua | I have a DataFrame with a code in a column. I want to extract the first digit from said code and add it to a different column so I can use it to merge with a different DF. My code is: df_a = df_a.assign(index_a = lambda x: int(str(x.code)[0])) When I use: df_a = df_a.assign(index_a = lambda x: x.code) This works and I get a new DF with the extra "Code" column and the entire code. If I do any operations here like x.code + 1 or x.code * 5 it works. Then, when I try to convert each code to a string by doing: df_a = df_a.assign(index_a = lambda x: str(x.code)) Instead of getting each row with a string code, all rows receive the value of the entire column converted into a massive string. I had a similar problem in the past trying to navigate lambda functions and learned that as long as I did x + 0 before converting, everything worked okay, but this time it's not working. I'm obviously doing something wrong, but I can't figure it out. | It's so because str(x.code) converts the whole column at once (x.code is the whole Series, not a single value); try this instead df_a['code'] = df_a['code'].apply(str) | 2 | 2 |
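The accepted answer stops at converting the codes to strings; to go all the way to the first digit the question asks for, one vectorized possibility is shown below (the sample data is made up for illustration):

import pandas as pd

df_a = pd.DataFrame({"code": [123, 456, 789]})  # hypothetical codes
# Convert element-wise to str, take the first character, then back to int
df_a["index_a"] = df_a["code"].astype(str).str[0].astype(int)
print(df_a)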
77,318,706 | 2023-10-18 | https://stackoverflow.com/questions/77318706/merging-multiple-files-with-unequal-rows-based-on-the-common-column-to-form-a-co | I have similar problem of merging-multiple-files-based-on-the-common-column as https://superuser.com/questions/1245094/merging-multiple-files-based-on-the-common-column. I am very near to the solution but I am new to python. I need help with the tweaking of code for joining multiple files. My IDs and columns for individual file look like: File1.txt id SRR1071717 chr1:15039:-::chr1:15795:- 2 chr1:15948:-::chr1:16606:- 6 File2.txt id SRR1079830 chr1:11672:+::chr1:12009:+ 10 chr1:11845:+::chr1:12009:+ 7 chrY:9756574:+::chrY:9757796:+ 0 My desired output id SRR1071717 SRR1079830 chr1:15039:-::chr1:15795:- 2 0 chr1:15948:-::chr1:16606:- 6 0 chr1:11672:+::chr1:12009:+ 0 10 chr1:11845:+::chr1:12009:+ 0 7 chrY:9756574:+::chrY:9757796:+ 0 0 My code: Matrix.py import sys columns = [] data = {} ids = set() for filename in sys.argv[1:]: with open(filename, 'rU') as f: key = next(f).strip().split()[1] columns.append(key) data[key] = {} for line in f: if line.strip(): id, value = line.strip().split() try: data[key][int(id)] = value except ValueError as exc: raise ValueError( "Problem in line: '{}' '{}' '{}'".format( id, value, line.rstrip())) ids.add(int(id)) print('\t'.join(['ID'] + columns)) for id in sorted(ids): line = [] for column in columns: line.append(data[column].get(id, '0')) print('\t'.join([str(id)] + line)) I ran a python code as shown but it's not working correctly (being new to python). Current Output (two lines only!). python3 matrix.py File\*.txt Current output id SRR1071717 SRR1079830 chrY:9756574:+::chrY:9757796:+ 0 0 | Using any awk: $ cat tst.awk FNR == 1 { ++numCols } { if ( !($1 in ids2rows) ) { rows2ids[++numRows] = $1 ids2rows[$1] = numRows } rowNr = ids2rows[$1] vals[rowNr,numCols] = $2 } END { for ( rowNr=1; rowNr<=numRows; rowNr++ ) { id = rows2ids[rowNr] printf "%s", id for ( colNr=1; colNr<=numCols; colNr++ ) { val = ( (rowNr,colNr) in vals ? vals[rowNr,colNr] : 0 ) printf "%s%s", OFS, val } print "" } } $ awk -f tst.awk File1.txt File2.txt id SRR1071717 SRR1079830 chr1:15039:-::chr1:15795:- 2 0 chr1:15948:-::chr1:16606:- 6 0 chr1:11672:+::chr1:12009:+ 0 10 chr1:11845:+::chr1:12009:+ 0 7 chrY:9756574:+::chrY:9757796:+ 0 0 | 2 | 1 |
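Because the asker's own attempt (Matrix.py) is in Python, a pandas sketch of the same merge may also be useful; it assumes the two whitespace-separated files from the question exist on disk with the column names shown there:

import pandas as pd

files = ["File1.txt", "File2.txt"]
# Each file has an 'id' column plus one sample column (e.g. SRR1071717)
frames = [pd.read_csv(f, sep=r"\s+").set_index("id") for f in files]
# Outer-join on the id and fill the gaps with 0, as in the desired count matrix
matrix = pd.concat(frames, axis=1).fillna(0).astype(int)
matrix.to_csv("matrix.tsv", sep="\t", index_label="id")
print(matrix)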
77,318,492 | 2023-10-18 | https://stackoverflow.com/questions/77318492/building-wheel-for-pyarrow-pyproject-toml-did-not-run-successfully | Just had IT install Python 3.12 on my Windows machine. I do not have admin rights on my machine, which may or may not be important. During install, the following were done: Clicked "Add Python 3.n to Path" box. Went into Customize installation and made sure pip was selected, and, selected "install for all users". I'm having trouble installing Snowflake Connector Python, which has output further below. For sanity, I tried installing numpy, which seemingly worked even though "platform independent libraries not found": py -m pip install numpy1 Could not find platform independent libraries <prefix> Collecting numpy1 Using cached numpy1-0.0.1-py3-none-any.whl Installing collected packages: numpy1 Successfully installed numpy1-0.0.1 And here's the attempt at installing the Snowflake Connector. Would anyone have ideas on my problem(s)? #Have tried both, producing same errors py -m pip install snowflake-connector-python py -m pip install --upgrade snowflake-connector-python Could not find platform independent libraries <prefix> Collecting snowflake-connector-python Using cached snowflake-connector-python-3.3.0.tar.gz (716 kB) Installing build dependencies ... error error: subprocess-exited-with-error Γ pip subprocess to install build dependencies did not run successfully. β exit code: 1 β°β> [326 lines of output] Could not find platform independent libraries <prefix> Collecting setuptools>=40.6.0 Using cached setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB) Collecting wheel Using cached wheel-0.41.2-py3-none-any.whl.metadata (2.2 kB) Collecting cython Using cached Cython-3.0.4-cp312-cp312-win_amd64.whl.metadata (3.2 kB) Collecting pyarrow<10.1.0,>=10.0.1 Using cached pyarrow-10.0.1.tar.gz (994 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting numpy>=1.16.6 (from pyarrow<10.1.0,>=10.0.1) Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl.metadata (61 kB) Using cached setuptools-68.2.2-py3-none-any.whl (807 kB) Using cached wheel-0.41.2-py3-none-any.whl (64 kB) Using cached Cython-3.0.4-cp312-cp312-win_amd64.whl (2.8 MB) Using cached numpy-1.26.1-cp312-cp312-win_amd64.whl (15.5 MB) Building wheels for collected packages: pyarrow Building wheel for pyarrow (pyproject.toml): started Building wheel for pyarrow (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error Building wheel for pyarrow (pyproject.toml) did not run successfully. exit code: 1 [290 lines of output] Could not find platform independent libraries <prefix> <string>:36: DeprecationWarning: pkg_resources is deprecated as an API. 
See https://setuptools.pypa.io/en/latest/pkg_resources.html WARNING setuptools_scm.pyproject_reading toml section missing 'pyproject.toml does not contain a tool.setuptools_scm section' running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-cpython-312 creating build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\benchmark.py -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\cffi.py -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\compute.py -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\conftest.py -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\csv.py -> build\lib.win-amd64-cpython-312\pyarrow [had to remove similar rows to fit question] copying pyarrow\tests\test_util.py -> build\lib.win-amd64-cpython-312\pyarrow\tests copying pyarrow\tests\util.py -> build\lib.win-amd64-cpython-312\pyarrow\tests copying pyarrow\tests\__init__.py -> build\lib.win-amd64-cpython-312\pyarrow\tests creating build\lib.win-amd64-cpython-312\pyarrow\vendored copying pyarrow\vendored\docscrape.py -> build\lib.win-amd64-cpython-312\pyarrow\vendored copying pyarrow\vendored\version.py -> build\lib.win-amd64-cpython-312\pyarrow\vendored copying pyarrow\vendored\__init__.py -> build\lib.win-amd64-cpython-312\pyarrow\vendored creating build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\common.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\conftest.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\encryption.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_basic.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_compliant_nested_type.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_dataset.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_data_types.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_datetime.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_encryption.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_metadata.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_pandas.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_parquet_file.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\test_parquet_writer.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet copying pyarrow\tests\parquet\__init__.py -> build\lib.win-amd64-cpython-312\pyarrow\tests\parquet running egg_info writing pyarrow.egg-info\PKG-INFO writing dependency_links to pyarrow.egg-info\dependency_links.txt writing entry points to pyarrow.egg-info\entry_points.txt writing requirements to pyarrow.egg-info\requires.txt writing top-level names to pyarrow.egg-info\top_level.txt ERROR setuptools_scm._file_finders.git listing git files failed - pretending there aren't any reading manifest file 'pyarrow.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '..\LICENSE.txt' warning: no files found matching '..\NOTICE.txt' warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in 
distribution warning: no previously-included files matching '*~' found anywhere in distribution warning: no previously-included files matching '#*' found anywhere in distribution warning: no previously-included files matching '.git*' found anywhere in distribution warning: no previously-included files matching '.DS_Store' found anywhere in distribution no previously-included directories found matching '.asv' writing manifest file 'pyarrow.egg-info\SOURCES.txt' copying pyarrow\__init__.pxd -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\_compute.pxd -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\_compute.pyx -> build\lib.win-amd64-cpython-312\pyarrow copying pyarrow\_csv.pxd -> build\lib.win-amd64-cpython-312\pyarrow [had to remove similar rows to fit question] copying pyarrow\includes\libarrow_flight.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes copying pyarrow\includes\libarrow_fs.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes copying pyarrow\includes\libarrow_python.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes copying pyarrow\includes\libarrow_substrait.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes copying pyarrow\includes\libgandiva.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes copying pyarrow\includes\libplasma.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes copying pyarrow\includes\__init__.pxd -> build\lib.win-amd64-cpython-312\pyarrow\includes creating build\lib.win-amd64-cpython-312\pyarrow\src copying pyarrow\src\ArrowPythonConfig.cmake.in -> build\lib.win-amd64-cpython-312\pyarrow\src copying pyarrow\src\ArrowPythonFlightConfig.cmake.in -> build\lib.win-amd64-cpython-312\pyarrow\src copying pyarrow\src\CMakeLists.txt -> build\lib.win-amd64-cpython-312\pyarrow\src copying pyarrow\src\arrow-python-flight.pc.in -> build\lib.win-amd64-cpython-312\pyarrow\src copying pyarrow\src\arrow-python.pc.in -> build\lib.win-amd64-cpython-312\pyarrow\src creating build\lib.win-amd64-cpython-312\pyarrow\tensorflow copying pyarrow\tensorflow\plasma_op.cc -> build\lib.win-amd64-cpython-312\pyarrow\tensorflow copying pyarrow\tests\bound_function_visit_strings.pyx -> build\lib.win-amd64-cpython-312\pyarrow\tests copying pyarrow\tests\pyarrow_cython_example.pyx -> build\lib.win-amd64-cpython-312\pyarrow\tests creating build\lib.win-amd64-cpython-312\pyarrow\src\arrow creating build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\CMakeLists.txt -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\api.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\arrow_to_pandas.cc -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\arrow_to_pandas.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\arrow_to_python_internal.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\benchmark.cc -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python [had to remove similar rows to fit question] copying pyarrow\src\arrow\python\serialize.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\type_traits.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\udf.cc -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python copying pyarrow\src\arrow\python\udf.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python 
copying pyarrow\src\arrow\python\visibility.h -> build\lib.win-amd64-cpython-312\pyarrow\src\arrow\python creating build\lib.win-amd64-cpython-312\pyarrow\tests\data creating build\lib.win-amd64-cpython-312\pyarrow\tests\data\feather copying pyarrow\tests\data\feather\v0.17.0.version.2-compression.lz4.feather -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\feather creating build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\README.md -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\TestOrcFile.test1.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\TestOrcFile.test1.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\TestOrcFile.testDate1900.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\TestOrcFile.testDate1900.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\decimal.jsn.gz -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc copying pyarrow\tests\data\orc\decimal.orc -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\orc creating build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet copying pyarrow\tests\data\parquet\v0.7.1.all-named-index.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet copying pyarrow\tests\data\parquet\v0.7.1.column-metadata-handling.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet copying pyarrow\tests\data\parquet\v0.7.1.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet copying pyarrow\tests\data\parquet\v0.7.1.some-named-index.parquet -> build\lib.win-amd64-cpython-312\pyarrow\tests\data\parquet running build_ext creating C:\Users\myusername\AppData\Local\Temp\pip-install-3n6isyev\pyarrow_2f98fb6170b843699540aba751d4b9e7\build\cpp -- Running CMake for PyArrow C++ cmake -DARROW_BUILD_DIR=build -DCMAKE_BUILD_TYPE=release -DCMAKE_INSTALL_LIBDIR=lib -DCMAKE_INSTALL_PREFIX=C:\Users\myusername\AppData\Local\Temp\pip-install-3n6isyev\pyarrow_2f98fb6170b843699540aba751d4b9e7\build\dist -DPYTHON_EXECUTABLE=C:\Users\myusername\AppData\Local\Programs\Python\Python312\python.exe -DPython3_EXECUTABLE=C:\Users\myusername\AppData\Local\Programs\Python\Python312\python.exe -DPYARROW_CXXFLAGS= -DPYARROW_WITH_DATASET=off -DPYARROW_WITH_PARQUET_ENCRYPTION=off -DPYARROW_WITH_HDFS=off -DPYARROW_WITH_FLIGHT=off -G "Visual Studio 15 2017 Win64" C:\Users\myusername\AppData\Local\Temp\pip-install-3n6isyev\pyarrow_2f98fb6170b843699540aba751d4b9e7\pyarrow/src error: command 'cmake' failed: None [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pyarrow Failed to build pyarrow ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error Γ pip subprocess to install build dependencies did not run successfully. β exit code: 1 β°β> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. 
| You are getting this error because pyarrow still does not support Python 3.12. There are no wheels yet for 3.12 on PyPI: https://pypi.org/project/pyarrow/#files Here is the complete discussion: https://github.com/apache/arrow/issues/37880 A lot of packages are dependent on pyarrow. 3 weeks ago they posted: Due to the current complexity on the release process and the Apache guidelines unfortunately they won't be available just yet but I'll work on creating the pyarrow wheels for Python 3.12 and will add it to the 14.0.0 release. Last week they posted: I have merged the wheels support for 3.12. Those will be available as nightly development in our next daily builds. The PyPI availability will be ready as soon as we release Arrow 14.0.0 which we started today with preparations but takes some time (~2 weeks) Hopefully, it will be available soon. Until then you can use Python 3.11 for snowflake-connector-python. | 4 | 6 |
77,317,554 | 2023-10-18 | https://stackoverflow.com/questions/77317554/merging-multiple-dataframes-together-with-pandas | I have multiple dataframes that are structured as follows: Molecule Name Molecular weight Score Population Score Population Error A 100 12 15 0.2 B 205 0.4 17 0.8 C 367 17 11 0.82 D 510 9 19.6 0.1 Molecule Name Molecular weight Score Population Score Population Error A 100 20 15 0.2 B 205 16 17 0.8 E 367 11 11 0.82 F 780 11 12 0.5 Imagine I had multiple dataframes where the molecule names, weights, and score can vary. But the population score and population error are the same across all dataframes. I want to merge all the dataframes. The unique identifer is the Molecular weight. So if there are multiple dataframes which contain a molecular weight of 100, the scores of each get saved - take note of molecule C from the first dataframe and molecule E from the second - they have the same molecular weight so should be combined. I essentially want something that comes out like this: Molecule Name Molecular weight Score1 Score2 Population Score Population Error A 100 12 20 15 0.2 B 205 0.4 16 17 0.8 C 367 17 11 11 0.82 D 510 9 0 19.6 0.1 F 780 0 11 12 0.5 I have tried just simple merges but it always creates a new row for Molecule name E. I want to merge on Molecular weight. If the Molecular weight isn't in the original data frame then add a new row and populate the 0s. I hope this makes sense. Any and all help will be appreciated. | You could use a custom concat with de-duplication of the Score columns and groupby.first to get the first molecule name: dfs = [df1, df2] # could be more than 2 input DataFrames out = (pd.concat([d.set_index(['Molecular weight', 'Population Score', 'Population Error']) .rename(columns={'Score': f'Score{i}'}) for i, d in enumerate(dfs, start=1)], axis=1) .groupby(level=0, axis=1).first() .fillna(0, downcast='infer') .reset_index() ) Variant for future versions of pandas: dfs = [df1, df2] out = (pd.concat([d.set_index(['Molecular weight', 'Population Score', 'Population Error']) .rename(columns={'Score': f'Score{i}'}) for i, d in enumerate(dfs, start=1)], axis=1) .T.groupby(level=0).first().T .fillna(0) .reset_index() ) Output: Molecular weight Population Score Population Error Molecule Name Score1 Score2 0 100 15.0 0.20 A 12.0 20 1 205 17.0 0.80 B 0.4 16 2 367 11.0 0.82 C 17.0 11 3 510 19.6 0.10 D 9.0 0 4 780 12.0 0.50 F 0.0 11 | 2 | 2 |
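To illustrate the answer's note that the list could hold more than two input DataFrames, a third frame can be appended and fed through the exact same concat/groupby pipeline (a sketch; df3 is made up here, with Population Score and Population Error kept consistent per molecular weight, as the question requires):

df3 = pd.DataFrame({'Molecule Name': ['G', 'H'],
                    'Molecular weight': [100, 780],
                    'Score': [5, 7],
                    'Population Score': [15, 12],
                    'Population Error': [0.2, 0.5]})

dfs = [df1, df2, df3]  # the rest of the pipeline above is unchanged and now yields Score1..Score3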
77,316,898 | 2023-10-18 | https://stackoverflow.com/questions/77316898/use-apply-function-in-all-columns-at-once-in-pandas | I have a panda database with several columns, I want to make two types of filters based on a list: 1 - Filter rows that contain at least one item in the list, regardless of the column 2 - Filter rows that contain all items in the row. For example, column 1 might contain one element, and column 2 might contain another element An example of the database I have df = pd.DataFrame({'A' : ["Bonjour (coouse); je teste", "La nuit (horrible)", "La troisième voie"], 'B' : ["La première voie", "Bonjour (coouse); ou pas", "La nuit (horrible)"]}) df I've tried this filter, it only works per column, whereas I want to apply on all columns. It doesn't matter what the column is, if a row contains that value, then I display it. df.loc[df['A'].apply(lambda x: any([k in x for k in ["Bonjour (coouse)", "La troisième voie"]]))] The second solution I'd like to have is the & condition: all the values in the list must be contained in one of the columns. For example, the first value might be in column 1 and the second value in the list might be in column 2 | Since you want to match substrings, I would use a regex approach. Filter rows that contain at least one item in the list, regardless of the column Here we craft a regex, extract the substrings and keep the rows with at least a non-NA value import re target = ["Bonjour (coouse)", "La troisième voie"] pattern = '(%s)' % '|'.join(map(re.escape, target)) # '(Bonjour\\ \\(coouse\\)|La\\ troisième\\ voie)' out = df[df.apply(lambda s: s.str.extract(pattern, expand=False)) .notna().any(axis=1)] regex demo Filter rows that contain all items in the row We can craft the same regex, but instead count the number of different values that we have extracted, to have all it should match the length of the list: import re target = ["Bonjour (coouse)", "La troisième voie"] pattern = '(%s)' % '|'.join(map(re.escape, target)) # '(Bonjour\\ \\(coouse\\)|La\\ troisième\\ voie)' out = df[df.apply(lambda s: s.str.extract(pattern, expand=False)) .nunique(axis=1).eq(len(target))] Alternatively, for the last step, using set operations (less efficient): out = df[df.apply(lambda s: s.str.extract(pattern, expand=False)) .agg(set, axis=1).eq(set(target))] intermediate df.apply(lambda s: s.str.extract(pattern, expand=False)) A B 0 Bonjour (coouse) NaN 1 NaN Bonjour (coouse) 2 La troisième voie NaN | 2 | 1 |
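A simpler variant for the "at least one match" case, not taken from the answer above but reusing the same escaped-pattern idea, relies on str.contains instead of str.extract:

import re
import pandas as pd

df = pd.DataFrame({'A': ["Bonjour (coouse); je teste", "La nuit (horrible)", "La troisième voie"],
                   'B': ["La première voie", "Bonjour (coouse); ou pas", "La nuit (horrible)"]})

target = ["Bonjour (coouse)", "La troisième voie"]
pattern = '|'.join(map(re.escape, target))  # plain alternation, no capture group needed for a match test

# rows where at least one column contains at least one of the target substrings
out_any = df[df.apply(lambda s: s.str.contains(pattern, regex=True)).any(axis=1)]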
77,313,133 | 2023-10-18 | https://stackoverflow.com/questions/77313133/vs-code-sort-imports-command-has-disappeared-as-an-option-from-the-context-man | Why can I no longer sort my imports in VS Code? I have changed zero settings, and sometimes the option just does not appear. | They have recently changed this command to "Organize Imports". A comment that says it is ready for the October update: https://github.com/microsoft/vscode-python/issues/22147#issuecomment-1751173694 A comment that gives the new command: https://github.com/microsoft/vscode-python/issues/22147#issuecomment-1767140242 | 5 | 5 |
77,311,182 | 2023-10-17 | https://stackoverflow.com/questions/77311182/select-column-based-on-the-value-of-another-column-polars-python | I have a df with ten columns and another column with its values are partial name of the ten columns. Here is a similar sample: import polars as pl df = pl.DataFrame({ "ID" :["A" ,"B" ,"C" ] , "A Left" :["W1" ,"W2" ,"W3" ] , "A Right":["P1" ,"P2" ,"P3" ] , "B Left" :["G1" ,"G2" ,"G3" ] , "B Right":["Y1" ,"Y2" ,"Y3" ] , "C Left" :["M1" ,"M2" ,"M3" ] , "C Right":["K1" ,"K2" ,"K3" ] , }) df shape: (3, 7) βββββββ¬βββββββββ¬ββββββββββ¬βββββββββ¬ββββββββββ¬βββββββββ¬ββββββββββ β ID β A Left β A Right β B Left β B Right β C Left β C Right β β --- β --- β --- β --- β --- β --- β --- β β str β str β str β str β str β str β str β βββββββͺβββββββββͺββββββββββͺβββββββββͺββββββββββͺβββββββββͺββββββββββ‘ β A β W1 β P1 β G1 β Y1 β M1 β K1 β β B β W2 β P2 β G2 β Y2 β M2 β K2 β β C β W3 β P3 β G3 β Y3 β M3 β K3 β βββββββ΄βββββββββ΄ββββββββββ΄βββββββββ΄ββββββββββ΄βββββββββ΄ββββββββββ I want to add a column with its value selected from the other columns based on ID column like below: shape: (3, 8) βββββββ¬βββββββββ¬ββββββββββ¬βββββββββ¬ββββββββββ¬βββββββββ¬ββββββββββ¬ββββββββ β ID β A Left β A Right β B Left β B Right β C Left β C Right β value β β --- β --- β --- β --- β --- β --- β --- β --- β β str β str β str β str β str β str β str β str β βββββββͺβββββββββͺββββββββββͺβββββββββͺββββββββββͺβββββββββͺββββββββββͺββββββββ‘ β A β W1 β P1 β G1 β Y1 β M1 β K1 β W1-P1 β β B β W2 β P2 β G2 β Y2 β M2 β K2 β G2-Y2 β β C β W3 β P3 β G3 β Y3 β M3 β K3 β M3-K3 β βββββββ΄βββββββββ΄ββββββββββ΄βββββββββ΄ββββββββββ΄βββββββββ΄ββββββββββ΄ββββββββ I got this result using unpivot: df.join( df.unpivot(index='ID').with_columns( pl.when(pl.col("ID") == pl.col("variable").str.slice(0,1)).then(pl.col("value")) ).select("ID" , "value").drop_nulls().group_by("ID").agg(pl.col('value').str.join("-")) ,on='ID').sort("ID") However, I need to avoid unpivot because I have two groups of ten columns beside other 50 columns. I have tried using pl.col() and polars.selectors, but I couldn't get the result. import polars.selectors as cs df.with_columns( cs.by_name( ( pl.concat_str([pl.col('ID') , " Left"] ) ) ).alias("value") ) TypeError: invalid name: <Expr ['col("ID").str.concat_horizontaβ¦'] Any suggested solution? | It looks like you want to extract the "base" / "prefix" of the Left/Right columns? There are various ways you could do that: columns = pl.Series(df.select("^.+ (Left|Right)$").columns) columns = columns.str.extract(r"(.+) (Left|Right)$") shape: (3,) Series: '' [str] [ "A" "B" "C" ] You could then use pl.coalesce() to create a single column of the chosen when/then values: df.with_columns( pl.coalesce( pl.when(pl.col("ID") == col).then( pl.format("{}-{}", pl.col(f"{col} Left"), pl.col(f"{col} Right")) ) for col in columns ) .alias("value") ) shape: (3, 8) βββββββ¬βββββββββ¬ββββββββββ¬βββββββββ¬ββββββββββ¬βββββββββ¬ββββββββββ¬ββββββββ β ID β A Left β A Right β B Left β B Right β C Left β C Right β value β β --- β --- β --- β --- β --- β --- β --- β --- β β str β str β str β str β str β str β str β str β βββββββͺβββββββββͺββββββββββͺβββββββββͺββββββββββͺβββββββββͺββββββββββͺββββββββ‘ β A β W1 β P1 β G1 β Y1 β M1 β K1 β W1-P1 β β B β W2 β P2 β G2 β Y2 β M2 β K2 β G2-Y2 β β C β W3 β P3 β G3 β Y3 β M3 β K3 β M3-K3 β βββββββ΄βββββββββ΄ββββββββββ΄βββββββββ΄ββββββββββ΄βββββββββ΄ββββββββββ΄ββββββββ | 3 | 1 |
77,308,101 | 2023-10-17 | https://stackoverflow.com/questions/77308101/when-plotting-with-sns-seaborn-it-just-shows-1-chunk-of-graph | What I want to achieve is something similar to this: This is the code I added: from pandas import * from matplotlib.pyplot import * from math import * import numpy as num import pandas as pd import matplotlib import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib inline import seaborn as sns import statistics users = pd.read_table("/content/drive/MyDrive/Colab Notebooks/Datasets/users.dat",sep="::",names=['UserID','Gender','Age','Occupation','ZipCode'],encoding='latin-1') users['Age'].value_counts() sns.countplot(users["Age"]) And this is what I ended up with instead: I tried using a different file to make sure that the data was correct, I also tried making sure that I didn't misspell anything incorrectly and tried single quotation marks just to get the same result. IDK what's the cause of the problem, unless its the data itself. BTW here's the users file if you're interested: https://drive.google.com/file/d/10iF0gKWAX8GfzOTgCrtgUHMjtcqcvv3R/view | In this code, the issue is due to countplot() which may be related to the distribution of the data. Countplot() visualizes data by counting occurrences in each age group, so if some age groups have very few instances, they might not appear on the plot. To address this, you can arrange the age groups in ascending order and adjust the figure size to ensure the visibility of all age groups on the plot. Try this code: users = pd.read_table("/content/drive/MyDrive/Colab Notebooks/Datasets/users.dat", sep="::", names=['UserID', 'Gender', 'Age', 'Occupation', 'ZipCode'], encoding='latin-1') age_counts = users['Age'].value_counts().sort_index() plt.figure(figsize=(10, 6)) sns.countplot(users["Age"], order=age_counts.index) plt.show() | 2 | 2 |
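For newer seaborn releases that prefer keyword arguments, an equivalent keyword-based call (a sketch building on the answer, reusing the users DataFrame loaded in the question) would be:

import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(10, 6))
sns.countplot(data=users, x='Age', order=sorted(users['Age'].unique()))
plt.show()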
77,274,665 | 2023-10-11 | https://stackoverflow.com/questions/77274665/cannot-debug-script-with-trio-asyncio-in-pycharm | I have this (very simplified) program running with trio as base async library and with trio_asyncio library allowing me to call asyncio methods too: import asyncio import trio import trio_asyncio async def async_main(*args): print('async_main start') async with trio_asyncio.open_loop() as loop: print('async_main before trio sleep') await trio.sleep(1) print('async_main before asyncio sleep') await trio_asyncio.aio_as_trio(asyncio.sleep)(2) print('async_main after sleeps') print('async_main stop') if __name__ == '__main__': print('main start') trio.run(async_main) print('main stop') It works well, if I run it from PyCharm: main start async_main start async_main before trio sleep async_main before asyncio sleep async_main after sleeps async_main stop main stop But if I run the same code from PyCharm in debug mode (menu Run / Debug), then it raises an exception: Connected to pydev debugger (build 232.9559.58) main start async_main start async_main before trio sleep async_main before asyncio sleep Traceback (most recent call last): File "/home/vaclav/.config/JetBrains/PyCharmCE2023.2/scratches/scratch_3.py", line 12, in async_main await trio_asyncio.aio_as_trio(asyncio.sleep)(2) File "/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_adapter.py", line 54, in __call__ return await self.loop.run_aio_coroutine(f) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_base.py", line 214, in run_aio_coroutine fut = asyncio.ensure_future(coro, loop=self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/pycharm-community/plugins/python-ce/helpers/pydev/_pydevd_asyncio_util/pydevd_nest_asyncio.py", line 156, in ensure_future return loop.create_task(coro_or_future) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/asyncio/base_events.py", line 436, in create_task task = tasks.Task(coro, loop=self, name=name, context=context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/pycharm-community/plugins/python-ce/helpers/pydev/_pydevd_asyncio_util/pydevd_nest_asyncio.py", line 390, in task_new_init self._loop.call_soon(self, context=self._context) File "/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_base.py", line 312, in call_soon self._check_callback(callback, 'call_soon') File "/usr/lib/python3.11/asyncio/base_events.py", line 776, in _check_callback raise TypeError( TypeError: a callable object was expected by call_soon(), got <Task pending name='Task-1' coro=<_call_defer() running at /home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_adapter.py:16>> python-BaseException sys:1: RuntimeWarning: coroutine '_call_defer' was never awaited Exception in default exception handler Traceback (most recent call last): File "/usr/lib/python3.11/asyncio/base_events.py", line 1797, in call_exception_handler self.default_exception_handler(context) File "/home/vaclav/.cache/pypoetry/virtualenvs/maybankwithoutselenium-RhkLw-zs-py3.11/lib/python3.11/site-packages/trio_asyncio/_async.py", line 42, in default_exception_handler raise RuntimeError(message) RuntimeError: Task was destroyed but it is pending! 
Process finished with exit code 1 The source code is copied from the official trio_asyncio documentation. I have two questions: Why does the code work well when it is run without debugging, and why does it fail when it is run in the debugger? How should I modify this code so that I can still call both trio and asyncio methods and it is still possible to use the debugger with such code? | This helped me: open the Actions search window (press Shift twice and switch to the Actions tab), type Registry and choose the Registry... item, then switch off (deselect) the python.debug.asyncio.repl property. Thanks to JetBrains support for the help: PY-65970 | 3 | 8 |
77,279,039 | 2023-10-12 | https://stackoverflow.com/questions/77279039/vs-code-python-extension-v2023-18-0-stopped-resolving-all-python-imports-and-sor | My VS Code was working fine: I have a pyenv environment with a specific Python version and installed dependencies which I was using, and all was good, but suddenly it stopped recognizing all imports. They are all whited out. I also noticed that the Sort Imports option disappeared from the context menu options I have when I right click. I have not changed anything in VS Code; any idea what might be wrong? Current VS Code Python extension version: 2023.18.0 | These seem to be fresh issues coming from VSCode and the Python extension for VSCode. After doing a lot of digging around, I think both of these issues come from very recent changes to VSCode that were released just this month (October 2023). Regarding the Sort Imports option, VSCode just stopped supporting it this month; see the reference and the ticket opened on the VSCode GitHub here To still get automatic sorting of our imports, we need to explicitly install the isort extension in VSCode and use the Organize Imports command with the keyboard shortcut Shift + Alt + O. If you want to automate it completely, you can add automatic import sorting on saving. Go to Preferences (Ctrl + Shift + P) and search for Open User Settings (JSON). In the JSON file add this, or modify the "[python]" settings section if it already exists: "[python]": { ...Other settings... "editor.formatOnSave": true, "editor.codeActionsOnSave": { "source.organizeImports": true }, }, Now if you restart VSCode you should get automated sorting of imports on save. Also see some visual instructions here. Regarding the issue of imports not being recognized, I downgraded the VSCode Python extension to a previous version, 2023.2.0, and it worked. | 6 | 13 |
77,297,637 | 2023-10-15 | https://stackoverflow.com/questions/77297637/how-to-detect-barely-visible-lines-on-the-grayscale-image-and-calculate-their-le | I am trying to write computer vision code based on the OpenCV library in Python to detect horizontal lines with intensity close to background. See example of the image below. I have tried 2 approaches. The first one is based on Canny edge detection and Hough transform, but it detected only a few lines (see code and image below). import math import numpy as np import cv2 scaleFactor = 1 maskX1 = 57 maskX2 = 263 maskY1 = 30 maskY2 = 164 angleStart = -1 angleEnd = 1 verticalKernel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) sharpenKernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]]) def applyKernel(image, kernel): return cv2.filter2D(image, -1, kernel) # read image image_c = cv2.imread('images/1.png') image_c = cv2.resize(image_c, None, fx=scaleFactor, fy=scaleFactor, interpolation=cv2.INTER_CUBIC) cv2.imshow('Original Image', image_c) # convert to grayscale image_g = cv2.cvtColor(image_c, cv2.COLOR_RGB2GRAY) # image_g = cv2.bilateralFilter(image_g, 15, 15, 15) image_g = applyKernel(image_g, sharpenKernel) cv2.imshow('Sharpen Image', image_g) image_g = applyKernel(image_g, verticalKernel) cv2.imshow('Vertical Sobel Operator', image_g) # Gaussian blur and Canny threshold_low = 250 threshold_high = 300 image_canny = cv2.Canny(image_g, threshold_low, threshold_high) cv2.imshow('Canny Image', image_canny) # Visualize region of interest mask = np.zeros_like(image_g) vertices = np.array([[(maskX1 * scaleFactor, maskY1 * scaleFactor), (maskX2 * scaleFactor, maskY1 * scaleFactor), (maskX2 * scaleFactor, maskY2 * scaleFactor), (maskX1 * scaleFactor, maskY2 * scaleFactor)]], dtype=np.int32) cv2.fillPoly(mask, vertices, 255) masked_image = cv2.bitwise_and(image_canny, mask) # masked_image = image_canny cv2.imshow('Region of interest', masked_image) rho = 1 * scaleFactor # distance resolution in pixels theta = np.pi / 180 # angular resolution in radians threshold = 3 # minimum number of votes min_line_len = 10 * scaleFactor # minimum number of pixels making up a line max_line_gap = 20 * scaleFactor # maximum gap in pixels between connectable line segments lines = cv2.HoughLinesP(masked_image, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_image = np.zeros((masked_image.shape[0], masked_image.shape[1], 3), dtype=np.uint8) numLines = 0 totalLineLength = 0 for line in lines: for x1, y1, x2, y2 in line: if x2 == x1: lineAngle = 90 else: lineAngle = math.degrees(math.atan((y2 - y1) / (x2 - x1))) if angleStart < lineAngle < angleEnd: cv2.line(line_image, (x1, y1), (x2, y2), [0, 0, 255], 2) numLines = numLines + 1 totalLineLength = totalLineLength + math.sqrt((x2 - x1)**2 + (y2 - y1)**2) Ξ± = 1 Ξ² = 0.3 Ξ³ = 0 # Resultant weighted image is calculated as follows: original_img * Ξ± + img * Ξ² + Ξ³ image_with_lines = cv2.addWeighted(image_c, Ξ±, line_image, Ξ², Ξ³) cv2.imshow('Image with lines', image_with_lines) cv2.waitKey() cv2.destroyAllWindows() The second approach was based on image thresholding and contour analysis, but the results were also disappointing (see code and image below). 
import math import numpy as np import cv2 scaleFactor = 1 maskX1 = 57 maskX2 = 263 maskY1 = 30 maskY2 = 164 angleStart = -5 angleEnd = 5 verticalKernel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) sharpenKernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]]) basePath = 'images/' fileExtension = '.png' def applyKernel(image, kernel): return cv2.filter2D(image, -1, kernel) def getHoughLines(image, masked_image): rho = 1 * scaleFactor # distance resolution in pixels theta = np.pi / 180 # angular resolution in radians threshold = 3 # minimum number of votes min_line_len = 10 * scaleFactor # minimum number of pixels making up a line max_line_gap = 5 * scaleFactor # maximum gap in pixels between connectable line segments lines = cv2.HoughLinesP(masked_image, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_image = np.zeros((masked_image.shape[0], masked_image.shape[1]), dtype=np.uint8) numLines = 0 totalLineLength = 0 for line in lines: for x1, y1, x2, y2 in line: if x2 == x1: lineAngle = 90 else: lineAngle = math.degrees(math.atan((y2 - y1) / (x2 - x1))) if angleStart < lineAngle < angleEnd: cv2.line(line_image, (x1, y1), (x2, y2), 255, 2) numLines = numLines + 1 totalLineLength = totalLineLength + math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2) Ξ± = 1 Ξ² = 0.3 Ξ³ = 0 # Resultant weighted image is calculated as follows: original_img * Ξ± + img * Ξ² + Ξ³ image_with_lines = cv2.addWeighted(image, Ξ±, line_image, Ξ², Ξ³) cv2.imshow('Image with lines', image_with_lines) return image_with_lines # read image image_g = cv2.imread('images/1.png', cv2.IMREAD_GRAYSCALE) # image_g = cv2.resize(image_g, None, fx=scaleFactor, fy=scaleFactor, interpolation=cv2.INTER_CUBIC) cv2.imshow('Original Image', image_g) # Apply Gaussian blur to reduce noise # image_blurred = cv2.GaussianBlur(image_g, (5, 5), 0) image_blurred = image_g image_blurred = applyKernel(image_blurred, sharpenKernel) cv2.imshow('Sharpen Image', image_blurred) image_blurred = applyKernel(image_blurred, verticalKernel) cv2.imshow('Vertical Sobel Operator', image_blurred) # Apply adaptive thresholding to binarize the image # _, binary_image = cv2.threshold(image_blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) _, binary_image = cv2.threshold(image_blurred, 70, 255, cv2.THRESH_BINARY) cv2.imshow('Binary image', binary_image) # Visualize region of interest mask = np.zeros_like(image_g) vertices = np.array([[(maskX1 * scaleFactor, maskY1 * scaleFactor), (maskX2 * scaleFactor, maskY1 * scaleFactor), (maskX2 * scaleFactor, maskY2 * scaleFactor), (maskX1 * scaleFactor, maskY2 * scaleFactor)]], dtype=np.int32) cv2.fillPoly(mask, vertices, 255) masked_image = cv2.bitwise_and(binary_image, mask) cv2.imshow('Masked image', masked_image) # morphological operations # kernel = np.ones((2,2),np.uint8) # masked_image = cv2.morphologyEx(masked_image, cv2.MORPH_OPEN, kernel) cv2.imshow('morphologyEx', masked_image) # Perform edge detection edges = cv2.Canny(masked_image, 30, 200) cv2.imshow('Edges', edges) contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) print(contours) for contour in contours: # Calculate the length of the contour length = cv2.arcLength(contour, True) # Calculate the area of the contour area = cv2.contourArea(contour) # Filter out small contours (adjust the area threshold as needed) if area > 1: x, y, width, height = cv2.boundingRect(contour) if width > 5: # Draw the contour on the original image cv2.drawContours(image_g, [contour], -1, (0, 255, 0), 2) # Print 
or store the length and area print(f"Length: {length}, Area: {area}") cv2.imshow('Processed Image', image_g) cv2.waitKey(0) cv2.destroyAllWindows() Is there a way to detect these lines more accurately? | I came up with the following solution based on image thresholding and contours detection. Combination of 2 additional filters gave more visible lines which was easier to detect using thresholding and contours detection. def getStreakyStructuresForOneImage(imagePath, showProcessingImages, filterVertical = False): scaleFactor = 1 maskX1 = 128 # 57 maskX2 = 355 # 263 maskY1 = 50 maskY2 = 205 angleStart = -5 angleEnd = 5 verticalKernel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) sharpenKernel2 = 0.64 * np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]]) sharpenKernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]]) if filterVertical: verticalKernel = np.transpose(verticalKernel) sharpenKernel2 = np.transpose(sharpenKernel2) sharpenKernel = np.transpose(sharpenKernel) # read image image_g = cv2.imread(imagePath, cv2.IMREAD_GRAYSCALE) if showProcessingImages: cv2.imshow('Original Image', image_g) image_blurred = applyKernel(image_g, sharpenKernel) if showProcessingImages: cv2.imshow('Sharpen Image', image_blurred) image_blurred = applyKernel(image_blurred, verticalKernel) image_blurred = applyKernel(image_blurred, sharpenKernel2) if showProcessingImages: cv2.imshow('Sharpening using different kernels', image_blurred) _, binary_image = cv2.threshold(image_blurred, 70, 255, cv2.THRESH_BINARY) if showProcessingImages: cv2.imshow('Binary image', binary_image) # Visualize region of interest mask = np.zeros_like(image_g) vertices = np.array([[(maskX1 * scaleFactor, maskY1 * scaleFactor), (maskX2 * scaleFactor, maskY1 * scaleFactor), (maskX2 * scaleFactor, maskY2 * scaleFactor), (maskX1 * scaleFactor, maskY2 * scaleFactor)]], dtype=np.int32) cv2.fillPoly(mask, vertices, 255) masked_image = cv2.bitwise_and(binary_image, mask) if showProcessingImages: cv2.imshow('Masked image', masked_image) contours, _ = cv2.findContours(masked_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) image_g = cv2.cvtColor(image_g, cv2.COLOR_GRAY2RGB) masked_image = cv2.cvtColor(masked_image, cv2.COLOR_GRAY2RGB) line_image = np.zeros((masked_image.shape[0], masked_image.shape[1], 3), dtype=np.uint8) numberOfMeaningfulContours = 0 totalLineLength = 0 for contour in contours: # Calculate the length of the contour length = cv2.arcLength(contour, True) # Calculate the area of the contour area = cv2.contourArea(contour) # Filter out small contours (adjust the area threshold as needed) if area > 0: x, y, width, height = cv2.boundingRect(contour) if filterVertical: if height > 50: # Draw the contour on the original image cv2.drawContours(line_image, [contour], -1, (0, 0, 255), 2) numberOfMeaningfulContours += 1; totalLineLength += width * mmInPx; # Print or store the length and area print(f"Length: {length}, Area: {area}") else: if width > 15: # Draw the contour on the original image cv2.drawContours(line_image, [contour], -1, (0, 0, 255), 2) numberOfMeaningfulContours += 1; totalLineLength += width * mmInPx; # Print or store the length and area print(f"Length: {length}, Area: {area}") Ξ± = 1 Ξ² = 0.14 Ξ³ = 0 # Resultant weighted image is calculated as follows: original_img * Ξ± + img * Ξ² + Ξ³ image_with_lines = cv2.addWeighted(image_g, Ξ±, line_image, Ξ², Ξ³) if showProcessingImages: cv2.imshow('Processed Image', image_with_lines) averageLineLength = 0 if numberOfMeaningfulContours > 0: averageLineLength = totalLineLength / 
numberOfMeaningfulContours return image_with_lines, numberOfMeaningfulContours, totalLineLength, averageLineLength | 3 | 1 |
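A minimal usage sketch for the helper above; it assumes applyKernel and the mmInPx scale factor (millimetres per pixel) are defined elsewhere in the poster's script, since the function body refers to both:

annotated, n_lines, total_len_mm, avg_len_mm = getStreakyStructuresForOneImage(
    'images/1.png', showProcessingImages=False, filterVertical=False)
print(f'{n_lines} streaks detected, total length {total_len_mm:.1f} mm, average {avg_len_mm:.1f} mm')
cv2.imwrite('images/1_annotated.png', annotated)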
77,280,761 | 2023-10-12 | https://stackoverflow.com/questions/77280761/openai-api-error-module-openai-has-no-attribute-chatcompletion-did-you-me | I can't seem to figure out what the issue is here, I am running version 0.28.1: from what I have read I should be using ChatCompletion rather than Completion as that's what gpt-4 and 3.5-turbo supports. response = openai.ChatCompletion.create( prompt=question, temperature=0, max_tokens=3700, top_p=1, frequency_penalty=0, presence_penalty=0, stop=None, model="gpt-4", ) Looking at other answers I can also tell you that my file isn't named openai.py or anything like that. Thanks for any help in advance. | First of all, be sure you have an up-to-date OpenAI package version. If not, upgrade the OpenAI package. Python: pip install --upgrade openai NodeJS: npm update openai The code posted in your question above has a mistake. The Chat Completions API doesn't have the prompt parameter as the Completions API does. Instead, it has the messages parameter. See the official OpenAI documentation. Try the following: import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") completion = openai.ChatCompletion.create( model = "gpt-4", messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"} ] ) print(completion.choices[0].message) | 2 | 3 |
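Carrying the original call's sampling parameters over to the Chat Completions shape would look roughly like this (a sketch for the 0.28-style openai client the asker is running, assuming openai is imported and the API key is configured as in the question; question holds the prompt text):

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
    temperature=0,
    max_tokens=3700,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stop=None,
)
print(response.choices[0].message["content"])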
77,273,461 | 2023-10-11 | https://stackoverflow.com/questions/77273461/error-could-not-build-wheels-for-fasttext-which-is-required-to-install-pyproje | I'm trying to install fasttext using pip install fasttext in python 3.11.4 but I'm running into trouble when building wheels. The error reads as follows: error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.37.32822\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for fasttext Running setup.py clean for fasttext Failed to build fasttext ERROR: Could not build wheels for fasttext, which is required to install pyproject.toml-based projects I've searched the web and most hits indicated that the error has something to do with the build tools of visual studio (which the error above also indicated). I've installed/updated all my build tools and I've also installed the latest SDK as suggested here, but the error persists. Has anyone solved this problem before and can share any potential solution? | Using pip install fasttext-wheel instead solved the problem for me. | 5 | 17 |
77,306,214 | 2023-10-17 | https://stackoverflow.com/questions/77306214/retaining-highlighted-text-in-an-image | I want to remove the non highlighted region (turn it white) from this image and just retain the highlighted portions. This is my image : Getting Highlighted Region import cv2 import numpy as np image = cv2.imread('0002.jpg') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) lower_green = (30, 90, 90) upper_green = (100, 255, 255) green_mask = cv2.inRange(cv2.cvtColor(image, cv2.COLOR_BGR2HSV), lower_green, upper_green) _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV) binary = cv2.bitwise_not(binary) preserved_highlights = cv2.bitwise_and(image, image, mask=binary) image[green_mask == 0] = [255, 255, 255] #cv2.imwrite('image.jpg', image) result = cv2.add(preserved_highlights, image) cv2.imwrite('final.jpg', result) OUTPUT1 Increasing Clarity As you can see the output is quite blurry, after applying some other operations : image = cv2.imread('final.jpg') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) alpha = 1.5 beta = 0 enhanced = cv2.convertScaleAbs(gray, alpha=alpha, beta=beta) sharpening_filter = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]) sharpened = cv2.filter2D(enhanced, -1, sharpening_filter) # Save the final result cv2.imwrite('output_enhanced02.jpg', sharpened) The best I could get was FINAL OUTPUT Question Is there a way I can retain just the highlighted part like it is originally and just remove the non highlighted part? OR if that's not possible How can make my output look more readable and clear ? Thank you | I changed your code a little bit according to Christoph Rackwitz's comment. Here is the result import cv2 as cv import numpy as np image = cv.imread('Q0aCT.jpg', cv.IMREAD_COLOR) lower_green = (0, 100, 0) upper_green = (200, 255, 200) green_mask = cv.inRange(image, lower_green, upper_green) kernel = np.ones((5, 5), np.uint8) openning = cv.morphologyEx(green_mask, cv.MORPH_OPEN, kernel, iterations=1) kernel = np.ones((9, 9), np.uint8) closing = cv.dilate(openning, kernel, iterations=3) preserved_highlights = cv.bitwise_and(image, image, mask=closing) gray = cv.cvtColor(preserved_highlights, cv.COLOR_RGB2GRAY) _, preserved_text = cv.threshold(gray, 100, 255, cv.THRESH_BINARY_INV) preserved_text = cv.bitwise_and(preserved_text, preserved_text, mask=closing) cv.imwrite('final.png', 255 - preserved_text) Steps to get the result Using cv.inRange to find highlighted text Use opening morphology to remove salt noise Use close morphology to fill the black text inside the highlighted area Use a binary threshold to extract those black areas Final Result | 2 | 3 |
77,310,400 | 2023-10-17 | https://stackoverflow.com/questions/77310400/join-polars-dataframes-with-varying-multiple-similar-columns | I am using the polars library in python and have two data frames the look like this import polars as pl data1 = { 'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9] } df1 = pl.DataFrame(data1) data2 = { 'B': [4, 5, 6], 'C': [7, 8, 9], 'D': [1, 2, 3] } df2 = pl.DataFrame(data2) # the column B and C are same in both data frames # TODO: Join/Concat the data frames into one. The data2 can vary some time it can have 2 common columns, some time it can have 1 common column and some times more. and The result should look like. I was wondering if there is any built function, or some kind of flag in a function that exists already in polars, that i can use. result = { 'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9], 'D': [1, 2, 3] } I am not quite sure how to join or concat the polar data frames in order to achieve this. | You can concat with how="align" but the resulting column order differs. pl.concat([df1, df2], how="align") shape: (3, 4) βββββββ¬ββββββ¬ββββββ¬ββββββ β B β C β A β D β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β βββββββͺββββββͺββββββͺββββββ‘ β 4 β 7 β 1 β 1 β β 5 β 8 β 2 β 2 β β 6 β 9 β 3 β 3 β βββββββ΄ββββββ΄ββββββ΄ββββββ You can see how it is implemented here. It basically finds the common columns to use as on and "outer joins" all the frames together. dfs = [df1, df2] cols = pl.concat(pl.Series(df.columns) for df in dfs) join_cols = list(cols.filter(cols.is_duplicated()).unique()) result = dfs[0].join(dfs[1], how="outer", on=join_cols, suffix="") # only needed for more than 2 frames # for df in dfs[2:]: # result = result.join(df, how="outer", on=join_cols, suffix="") result = result.select(*cols.unique(maintain_order=True)) shape: (3, 4) βββββββ¬ββββββ¬ββββββ¬ββββββ β A β B β C β D β β --- β --- β --- β --- β β i64 β i64 β i64 β i64 β βββββββͺββββββͺββββββͺββββββ‘ β 1 β 4 β 7 β 1 β β 2 β 5 β 8 β 2 β β 3 β 6 β 9 β 3 β βββββββ΄ββββββ΄ββββββ΄ββββββ | 3 | 3 |
77,311,656 | 2023-10-17 | https://stackoverflow.com/questions/77311656/access-parent-object-data-member-from-sub-object-in-python | I have a setup like the following class decorator: def __init__(self, fn): self.fn = fn @staticmethod def wrap(fn): return decorator(fn) def __call__(self): print(f"decorator {self} function called") class A: @decorator.wrap def foo(self): print(f"Object {self} called") def __init__(self): self.boo = 'Boo' How do I from the decorator object access boo variable? | @S.B's solution works but is not thread-safe since multiple instances of A may be calling the foo method at about the same time in different threads, overriding the decorator object's instance attribute before a call to the __call__ method originated from another instance is made. A thread-safe approach would be to return a wrapper function from the __get__ method with the instance available through closure: class decorator: def __init__(self, fn): self.fn = fn def __get__(self, instance, owner=None): def wrapper(*args, **kwargs): print(f'decorator {self} function called; boo = {instance.boo}') return self.fn(instance, *args, **kwargs) return wrapper class A: @decorator def foo(self): print(f"Object {self} called") def __init__(self): self.boo = 'Boo' A().foo() This outputs: decorator <__main__.decorator object at 0x7f6ef3316c10> function called; boo = Boo Object <__main__.A object at 0x7f6ef3339580> called Demo: Try it online! | 2 | 2 |
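One optional refinement on top of the accepted approach (my addition, not from the answer): copying the wrapped function's metadata with functools so the decorated method keeps its name and docstring under introspection:

import functools

class decorator:
    def __init__(self, fn):
        self.fn = fn
        functools.update_wrapper(self, fn)  # copy __name__, __doc__, etc. onto the descriptor

    def __get__(self, instance, owner=None):
        @functools.wraps(self.fn)
        def wrapper(*args, **kwargs):
            print(f"decorator {self} function called; boo = {instance.boo}")
            return self.fn(instance, *args, **kwargs)
        return wrapper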
77,312,659 | 2023-10-17 | https://stackoverflow.com/questions/77312659/plotting-timeseries-line-graph-for-unique-values-in-a-column | I am trying to plot a timeseries graph for all the unique keys in my dataset, below is my dataset, I have 7 unique keys and I am trying to plot event_date on x-axis and line graph for count on y axis. I am looping through each unique key and trying to plot date vs count however getting below error. Based on the error I am not able to understand why shape for y is (1,0) import matplotlib.pyplot as plt import pandas as pd df_pandas = df.toPandas() df_pandas.event_date = pd.to_datetime(df_pandas.event_date) # converting object to pandas datetime colors = ['r', 'g', 'b', 'y', 'k', 'c', 'm'] for i, k in enumerate(df_pandas.key.unique()): plt.plot(df_pandas[df_pandas.key == k].event_date, df_pandas[df_pandas.key == k].count, '-o', label=k, c=colors[i]) plt.gcf().autofmt_xdate() ## Rotate X-axis so you can see dates clearly without overlap plt.legend() ## Show legend Traceback (most recent call last): File "/tmp/1697561012681-0/zeppelin_python.py", line 153, in <module> exec(code, _zcUserQueryNameSpace) File "<stdin>", line 13, in <module> File "/usr/local/lib64/python3.7/site-packages/matplotlib/pyplot.py", line 2813, in plot is not None else {}), **kwargs) File "/usr/local/lib64/python3.7/site-packages/matplotlib/__init__.py", line 1805, in inner return func(ax, *args, **kwargs) File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_axes.py", line 1603, in plot for line in self._get_lines(*args, **kwargs): File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_base.py", line 393, in _grab_next_args yield from self._plot_args(this, kwargs) File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_base.py", line 370, in _plot_args x, y = self._xy_from_xy(x, y) File "/usr/local/lib64/python3.7/site-packages/matplotlib/axes/_base.py", line 231, in _xy_from_xy "have shapes {} and {}".format(x.shape, y.shape)) ValueError: x and y must have same first dimension, but have shapes (21,) and (1,) event_date key count 7/23/23 0:00 389628-135052 74858 7/28/23 0:00 389628-135052 75139 7/12/23 0:00 389631-135055 60910 7/18/23 0:00 389632-135056 68850 7/26/23 0:00 389632-135056 33704 7/27/23 0:00 389630-135054 119679 7/20/23 0:00 389632-135056 71281 7/15/23 0:00 389632-135056 68854 7/23/23 0:00 389634-135058 69020 7/20/23 0:00 389629-135053 59536 7/21/23 0:00 389631-135055 71065 7/25/23 0:00 389629-135053 66887 7/15/23 0:00 389629-135053 66150 7/12/23 0:00 389633-135057 53096 7/14/23 0:00 389634-135058 62948 7/25/23 0:00 389628-135052 74872 7/15/23 0:00 389631-135055 73870 7/18/23 0:00 389631-135055 74548 7/17/23 0:00 389632-135056 68402 7/20/23 0:00 389633-135057 54665 7/15/23 0:00 389633-135057 64637 7/30/23 0:00 389630-135054 123113 7/21/23 0:00 389630-135054 67368 7/12/23 0:00 389632-135056 55618 7/19/23 0:00 389633-135057 70942 7/21/23 0:00 389633-135057 68221 7/18/23 0:00 389628-135052 76602 8/1/23 0:00 389631-135055 13252 7/17/23 0:00 389629-135053 64287 7/25/23 0:00 389634-135058 68104 7/17/23 0:00 389634-135058 66301 7/12/23 0:00 389628-135052 61841 7/18/23 0:00 389630-135054 71472 7/24/23 0:00 389629-135053 68495 7/30/23 0:00 389629-135053 122907 7/26/23 0:00 389630-135054 26650 7/30/23 0:00 389632-135056 134425 7/22/23 0:00 389634-135058 62225 7/18/23 0:00 389633-135057 61047 7/22/23 0:00 389633-135057 60926 7/18/23 0:00 389634-135058 67725 7/16/23 0:00 389633-135057 64254 7/14/23 0:00 389633-135057 61383 7/24/23 0:00 389633-135057 66471 
7/16/23 0:00 389629-135053 66548 7/19/23 0:00 389628-135052 75846 7/17/23 0:00 389631-135055 73452 7/13/23 0:00 389631-135055 82725 7/31/23 0:00 389634-135058 41786 7/26/23 0:00 389629-135053 68862 8/1/23 0:00 389633-135057 12333 7/21/23 0:00 389628-135052 72381 7/30/23 0:00 389628-135052 77991 7/19/23 0:00 389630-135054 68765 8/1/23 0:00 389630-135054 12798 7/21/23 0:00 389632-135056 66499 7/29/23 0:00 389633-135057 16644 7/20/23 0:00 389631-135055 74593 7/24/23 0:00 389630-135054 72015 7/27/23 0:00 389632-135056 98245 7/31/23 0:00 389630-135054 56117 7/22/23 0:00 389629-135053 62669 7/23/23 0:00 389631-135055 74936 7/25/23 0:00 389632-135056 69935 7/29/23 0:00 389630-135054 23579 7/13/23 0:00 389632-135056 71917 7/13/23 0:00 389633-135057 67979 7/19/23 0:00 389631-135055 74154 7/23/23 0:00 389632-135056 71347 7/27/23 0:00 389634-135058 57570 7/26/23 0:00 389633-135057 60073 7/17/23 0:00 389633-135057 66860 7/15/23 0:00 389628-135052 75962 7/29/23 0:00 389628-135052 69251 7/27/23 0:00 389631-135055 69051 7/28/23 0:00 389631-135055 74231 7/23/23 0:00 389633-135057 66237 7/30/23 0:00 389634-135058 130063 7/25/23 0:00 389631-135055 73097 7/31/23 0:00 389628-135052 75428 7/24/23 0:00 389634-135058 69958 7/13/23 0:00 389634-135058 72563 7/27/23 0:00 389629-135053 63235 8/1/23 0:00 389629-135053 12444 7/26/23 0:00 389628-135052 80514 7/14/23 0:00 389628-135052 72334 7/30/23 0:00 389631-135055 130862 7/29/23 0:00 389634-135058 21234 7/14/23 0:00 389629-135053 62070 7/30/23 0:00 389633-135057 117653 7/17/23 0:00 389628-135052 74903 7/24/23 0:00 389628-135052 76074 7/28/23 0:00 389630-135054 58201 7/25/23 0:00 389630-135054 70594 7/21/23 0:00 389629-135053 70020 7/20/23 0:00 389634-135058 59776 7/22/23 0:00 389631-135055 73678 7/19/23 0:00 389632-135056 68493 7/28/23 0:00 389632-135056 69294 7/29/23 0:00 389632-135056 16416 8/1/23 0:00 389634-135058 12202 7/16/23 0:00 389628-135052 74739 7/13/23 0:00 389628-135052 78198 7/16/23 0:00 389630-135054 70980 7/31/23 0:00 389632-135056 50267 7/26/23 0:00 389634-135058 77612 7/31/23 0:00 389631-135055 45171 7/22/23 0:00 389630-135054 70867 7/15/23 0:00 389634-135058 67374 7/31/23 0:00 389633-135057 50583 7/19/23 0:00 389629-135053 72029 7/22/23 0:00 389628-135052 75503 7/14/23 0:00 389632-135056 65704 7/12/23 0:00 389634-135058 54886 7/21/23 0:00 389634-135058 70547 7/16/23 0:00 389634-135058 68285 7/27/23 0:00 389628-135052 68078 7/17/23 0:00 389630-135054 70450 7/31/23 0:00 389629-135053 50228 7/28/23 0:00 389629-135053 65580 7/13/23 0:00 389629-135053 69155 7/23/23 0:00 389630-135054 70885 7/14/23 0:00 389630-135054 67892 7/19/23 0:00 389634-135058 73446 7/27/23 0:00 389633-135057 60125 7/28/23 0:00 389633-135057 64199 7/29/23 0:00 389631-135055 33164 7/16/23 0:00 389631-135055 73831 7/24/23 0:00 389632-135056 70333 7/16/23 0:00 389632-135056 68069 7/18/23 0:00 389629-135053 67103 7/24/23 0:00 389631-135055 73852 7/20/23 0:00 389630-135054 72357 7/15/23 0:00 389630-135054 70673 7/12/23 0:00 389630-135054 57477 7/29/23 0:00 389629-135053 18198 7/14/23 0:00 389631-135055 63845 8/1/23 0:00 389632-135056 12744 7/28/23 0:00 389634-135058 67665 7/25/23 0:00 389633-135057 68534 7/12/23 0:00 389629-135053 53612 8/1/23 0:00 389628-135052 12663 7/20/23 0:00 389628-135052 75662 7/26/23 0:00 389631-135055 75014 7/22/23 0:00 389632-135056 68687 7/13/23 0:00 389630-135054 72816 7/23/23 0:00 389629-135053 68272 | The issue with the existing code is .count mirrors the pandas.Series.count method. You may not use . notation if the column name mirrors a pandas method. 
df[df.key == k]['count'] not df[df.key == k].count Additionally, using plt.plot will still cause issues, as seen in this plot. plt.plot also requires sorting x, and y relative to x. One option is to use sns.relplot with kind='line', or use sns.lineplot. import seaborn as sns g = sns.relplot(kind='line', data=df, x='event_date', y='count', hue='key', height=7.5, aspect=1) To iteratively create the plot by selecting each group, create a figure and Axes to repeatable plot each group onto. fig, ax = plt.subplots(figsize=(10, 10)) for g in df.key.unique(): df[df.key.eq(g)].plot(x='event_date', y='count', ax=ax, label=g) Use pandas.DataFrame.pivot to reshape from a long to wide form and plot without a loop. Use pandas.DataFrame.pivot_table to aggregate data if the dates in each group have more than one value. # pivot the dataframe into a wide format dfp = df.pivot(index='event_date', columns='key', values='count') # and then plot without a loop ax = dfp.plot(figsize=(10, 10)) Sample DataFrame import pandas as pd data = {'event_date': [pd.Timestamp('2023-07-23 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-23 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-07-19 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-19 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-19 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-23 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-19 
00:00:00'), pd.Timestamp('2023-07-23 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-23 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-07-30 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-19 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-19 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-07-21 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-07-17 00:00:00'), pd.Timestamp('2023-07-31 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-23 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-07-19 00:00:00'), pd.Timestamp('2023-07-27 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-16 00:00:00'), pd.Timestamp('2023-07-18 00:00:00'), pd.Timestamp('2023-07-24 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-15 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-07-29 00:00:00'), pd.Timestamp('2023-07-14 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-28 00:00:00'), pd.Timestamp('2023-07-25 00:00:00'), pd.Timestamp('2023-07-12 00:00:00'), pd.Timestamp('2023-08-01 00:00:00'), pd.Timestamp('2023-07-20 00:00:00'), pd.Timestamp('2023-07-26 00:00:00'), pd.Timestamp('2023-07-22 00:00:00'), pd.Timestamp('2023-07-13 00:00:00'), pd.Timestamp('2023-07-23 00:00:00')], 'key': ['389628-135052', '389628-135052', '389631-135055', '389632-135056', '389632-135056', '389630-135054', '389632-135056', '389632-135056', '389634-135058', '389629-135053', '389631-135055', '389629-135053', '389629-135053', '389633-135057', '389634-135058', '389628-135052', '389631-135055', '389631-135055', '389632-135056', '389633-135057', '389633-135057', '389630-135054', '389630-135054', '389632-135056', '389633-135057', '389633-135057', '389628-135052', '389631-135055', '389629-135053', '389634-135058', '389634-135058', '389628-135052', '389630-135054', '389629-135053', '389629-135053', 
'389630-135054', '389632-135056', '389634-135058', '389633-135057', '389633-135057', '389634-135058', '389633-135057', '389633-135057', '389633-135057', '389629-135053', '389628-135052', '389631-135055', '389631-135055', '389634-135058', '389629-135053', '389633-135057', '389628-135052', '389628-135052', '389630-135054', '389630-135054', '389632-135056', '389633-135057', '389631-135055', '389630-135054', '389632-135056', '389630-135054', '389629-135053', '389631-135055', '389632-135056', '389630-135054', '389632-135056', '389633-135057', '389631-135055', '389632-135056', '389634-135058', '389633-135057', '389633-135057', '389628-135052', '389628-135052', '389631-135055', '389631-135055', '389633-135057', '389634-135058', '389631-135055', '389628-135052', '389634-135058', '389634-135058', '389629-135053', '389629-135053', '389628-135052', '389628-135052', '389631-135055', '389634-135058', '389629-135053', '389633-135057', '389628-135052', '389628-135052', '389630-135054', '389630-135054', '389629-135053', '389634-135058', '389631-135055', '389632-135056', '389632-135056', '389632-135056', '389634-135058', '389628-135052', '389628-135052', '389630-135054', '389632-135056', '389634-135058', '389631-135055', '389630-135054', '389634-135058', '389633-135057', '389629-135053', '389628-135052', '389632-135056', '389634-135058', '389634-135058', '389634-135058', '389628-135052', '389630-135054', '389629-135053', '389629-135053', '389629-135053', '389630-135054', '389630-135054', '389634-135058', '389633-135057', '389633-135057', '389631-135055', '389631-135055', '389632-135056', '389632-135056', '389629-135053', '389631-135055', '389630-135054', '389630-135054', '389630-135054', '389629-135053', '389631-135055', '389632-135056', '389634-135058', '389633-135057', '389629-135053', '389628-135052', '389628-135052', '389631-135055', '389632-135056', '389630-135054', '389629-135053'], 'count': [74858, 75139, 60910, 68850, 33704, 119679, 71281, 68854, 69020, 59536, 71065, 66887, 66150, 53096, 62948, 74872, 73870, 74548, 68402, 54665, 64637, 123113, 67368, 55618, 70942, 68221, 76602, 13252, 64287, 68104, 66301, 61841, 71472, 68495, 122907, 26650, 134425, 62225, 61047, 60926, 67725, 64254, 61383, 66471, 66548, 75846, 73452, 82725, 41786, 68862, 12333, 72381, 77991, 68765, 12798, 66499, 16644, 74593, 72015, 98245, 56117, 62669, 74936, 69935, 23579, 71917, 67979, 74154, 71347, 57570, 60073, 66860, 75962, 69251, 69051, 74231, 66237, 130063, 73097, 75428, 69958, 72563, 63235, 12444, 80514, 72334, 130862, 21234, 62070, 117653, 74903, 76074, 58201, 70594, 70020, 59776, 73678, 68493, 69294, 16416, 12202, 74739, 78198, 70980, 50267, 77612, 45171, 70867, 67374, 50583, 72029, 75503, 65704, 54886, 70547, 68285, 68078, 70450, 50228, 65580, 69155, 70885, 67892, 73446, 60125, 64199, 33164, 73831, 70333, 68069, 67103, 73852, 72357, 70673, 57477, 18198, 63845, 12744, 67665, 68534, 53612, 12663, 75662, 75014, 68687, 72816, 68272]} df = pd.DataFrame(data) dfp key 389628-135052 389629-135053 389630-135054 389631-135055 389632-135056 389633-135057 389634-135058 event_date 2023-07-12 61841 53612 57477 60910 55618 53096 54886 2023-07-13 78198 69155 72816 82725 71917 67979 72563 2023-07-14 72334 62070 67892 63845 65704 61383 62948 2023-07-15 75962 66150 70673 73870 68854 64637 67374 2023-07-16 74739 66548 70980 73831 68069 64254 68285 2023-07-17 74903 64287 70450 73452 68402 66860 66301 2023-07-18 76602 67103 71472 74548 68850 61047 67725 2023-07-19 75846 72029 68765 74154 68493 70942 73446 2023-07-20 75662 59536 72357 74593 
71281 54665 59776 2023-07-21 72381 70020 67368 71065 66499 68221 70547 2023-07-22 75503 62669 70867 73678 68687 60926 62225 2023-07-23 74858 68272 70885 74936 71347 66237 69020 2023-07-24 76074 68495 72015 73852 70333 66471 69958 2023-07-25 74872 66887 70594 73097 69935 68534 68104 2023-07-26 80514 68862 26650 75014 33704 60073 77612 2023-07-27 68078 63235 119679 69051 98245 60125 57570 2023-07-28 75139 65580 58201 74231 69294 64199 67665 2023-07-29 69251 18198 23579 33164 16416 16644 21234 2023-07-30 77991 122907 123113 130862 134425 117653 130063 2023-07-31 75428 50228 56117 45171 50267 50583 41786 2023-08-01 12663 12444 12798 13252 12744 12333 12202 | 2 | 1 |
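The pivot_table aggregation mentioned at the end of the answer, for the case where a key has more than one row per date, is a one-line change to the pivot step (a sketch using the same df as the sample above):

dfp = df.pivot_table(index='event_date', columns='key', values='count', aggfunc='sum')
ax = dfp.plot(figsize=(10, 10))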
77,284,039 | 2023-10-12 | https://stackoverflow.com/questions/77284039/how-to-properly-round-to-the-nearest-integer-in-double-double-arithmetic | I have to analyse a large amount of data using Python3 (PyPy implementation), where I do some operations on quite large floats, and must check if the results are close enough to integers. To exemplify, say I'm generating random pairs of numbers, and checking if they form pythagorean triples (are sides of right triangles with integer sides): from math import hypot from pprint import pprint from random import randrange from time import time def gen_rand_tuples(start, stop, amount): ''' Generates random integer pairs and converts them to tuples of floats. ''' for _ in range(amount): yield (float(randrange(start, stop)), float(randrange(start, stop))) t0 = time() ## Results are those pairs that results in integer hypothenuses, or ## at least very close, to within 1e-12. results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := hypot(*t)) - int(h)) < 1e-12] print('Results found:') pprint(results) print('finished in:', round(time() - t0, 2), 'seconds.') Running it I got: Python 3.9.17 (a61d7152b989, Aug 13 2023, 10:27:46) [PyPy 7.3.12 with GCC 13.2.1 20230728 (Red Hat 13.2.1-1)] on linux Type "help", "copyright", "credits" or "license()" for more information. >>> ===== RESTART: /home/user/Downloads/pythagorean_test_floats.py ==== Results found: [(2176124225.0, 2742331476.0), (342847595.0, 3794647043.0), (36.0, 2983807908.0), (791324089.0, 2122279232.0)] finished in: 2.64 seconds. Fun, it ran fast, processing 10 million datapoints in a bit over 2 seconds, and I even found some matching data. The hypothenuse is apparently integer: >>> pprint([hypot(*x) for x in results]) [3500842551.0, 3810103759.0, 2983807908.0, 2265008378.0] But not really, if we check the results using the decimal arbitrary precision module, we see the results are not actually not close enough to integers: >>> from decimal import Decimal >>> pprint([(x[0]*x[0] + x[1]*x[1]).sqrt() for x in (tuple(map(Decimal, x)) for x in results)]) [Decimal('3500842551.000000228516418075'), Decimal('3810103758.999999710375341513'), Decimal('2983807908.000000217172157183'), Decimal('2265008377.999999748566051441')] So, I think the problem is the numbers are large enough to fall in the range where python floats lack precision, so false positives are returned. Now, we can just change the program to use arbitrary precision decimals everywhere: from decimal import Decimal from pprint import pprint from random import randrange from time import time def dec_hypot(x, y): return (x*x + y*y).sqrt() def gen_rand_tuples(start, stop, amount): ''' Generates random integer pairs and converts them to tuples of decimals. ''' for _ in range(amount): yield (Decimal(randrange(start, stop)), Decimal(randrange(start, stop))) t0 = time() ## Results are those pairs that results in integer hypothenuses, or ## at least very close, to within 1e-12. results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dec_hypot(*t)) - h.to_integral_value()) < Decimal(1e-12)] print('Results found:') pprint(results) print('finished in:', round(time() - t0, 2), 'seconds.') Now we don't get any false positives, but we take a large performance hit. What previously took a bit over 2s, now takes over 100s. It appears decimals are not JIT-friendly: ====== RESTART: /home/user/Downloads/pythagorean_test_dec.py ====== Results found: [] finished in: 113.82 seconds. 
I found this answer to the question, CPython and PyPy Decimal operation performance, suggesting the use of double-double precision numbers as a faster, JIT-friendly alternative to decimals, to get better precision than built-in floats. So I pip installed the doubledouble third-party module, and changed the program accordingly: from doubledouble import DoubleDouble from decimal import Decimal from pprint import pprint from random import randrange from time import time def dd_hypot(x, y): return (x*x + y*y).sqrt() def gen_rand_tuples(start, stop, amount): for _ in range(amount): yield (DoubleDouble(randrange(start, stop)), DoubleDouble(randrange(start, stop))) t0 = time() print('Results found:') results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dd_hypot(*t)) - int(h)) < DoubleDouble(1e-12)] pprint(results) print('finished in:', round(time() - t0, 2), 'seconds.') But I get this error: ======= RESTART: /home/user/Downloads/pythagorean_test_dd.py ====== Results found: Traceback (most recent call last): File "/home/user/Downloads/pythagorean_test_dd.py", line 24, in <module> results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dd_hypot(*t)) - int(h)) < DoubleDouble(1e-12)] File "/home/user/Downloads/pythagorean_test_dd.py", line 24, in <listcomp> results = [t for t in gen_rand_tuples(1, 2**32, 10_000_000) if abs((h := dd_hypot(*t)) - int(h)) < DoubleDouble(1e-12)] TypeError: int() argument must be a string, a bytes-like object or a number, not 'DoubleDouble' I think the problem is the module doesn't specify a conversion or rounding to the nearest integer method. The best I could write was an extremely contrived "int" function, that rounds a double-double to the nearest integer by doing a round-trip through string and decimals and back to DoubleDouble: def contrived_int(dd): rounded_str = (Decimal(dd.x) + Decimal(dd.y)).to_integral_value() hi = float(rounded_str) lo = float(Decimal(rounded_str) - Decimal(hi)) return DoubleDouble(hi, lo) But it's very roundabout, defeats the purpose of sidesteping decimals and makes the progam even slower than the full-decimal version. Then I ask, is there a fast way to round a double-double precision number to the nearest integer directly, without intermediate steps going through decimals or strings? | Since Python integers have no upper limits, and you're looking for integral results, you should stick to integer inputs and integer operations. In your example, you can use math.isqrt to perform integer square root instead to avoid any imprecision of floating-point numbers altogether: results = [ (x, y) for x, y in gen_rand_tuples(1, 2 ** 32, 10_000_000) if (s := x * x + y * y) == math.isqrt(s) ** 2 ] In testing this is about as fast as your first attempt with floating-point operations, but without any imprecision: Demo: Try it online! | 6 | 5 |
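A quick way to double-check the accepted answer against the question's own data: the pair (2176124225, 2742331476) was one of the float false positives, and the exact integer test from the answer rejects it. A minimal sketch, using only the standard library:

import math

# One of the "integer hypotenuse" pairs reported by the float version.
x, y = 2176124225, 2742331476
s = x * x + y * y

# Exact test from the answer: s is a perfect square only if isqrt(s)**2 == s.
r = math.isqrt(s)
print(r, r * r == s)  # prints 3500842551 False, matching the Decimal check above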
77,303,260 | 2023-10-16 | https://stackoverflow.com/questions/77303260/how-can-i-import-a-local-module-using-databricks-asset-bundles | I want to do something pretty simple here: import a module from the local filesystem using databricks asset bundles. These are the relevant files: databricks.yml bundle: name: my_bundle workspace: host: XXX targets: dev: mode: development default: true resources: jobs: my_job: name: my_job tasks: - task_key: my_task existing_cluster_id: YYY spark_python_task: python_file: src/jobs/bronze/my_script.py my_script.py from src.jobs.common import * if __name__ == "__main__": hello_world() common.py def hello_world(): print("hello_world") And the following folder structure: databricks.yml src/ βββ __init__.py βββ jobs βββ __init__.py βββ bronze β βββ my_script.py βββ common.py I'm deploying this to my workspace + running it by using Databricks CLI v0.206.0 with the following commands: databricks bundle validate databricks bundle deploy databricks bundle run my_job I'm getting issues to import my common.py module. I'm getting the classic ModuleNotFoundError: No module named 'src' error here. I've added the __init__.py files as I typically do when doing this locally, and tried the following variations: from src.jobs.common import * from jobs.common import * from common import * from ..common import * I guess my issue is that I don't really know what the python path is here, since I'm deploying it on Databricks. How can I do something like this using databricks asset bundles? | I recently ran into a similar issue, albeit with notebook tasks, and came to the following resolution, adapted to your example file structure: In your databricks.yml file, pass an argument to your script via parameters: databricks.yml resources: jobs: my_job: name: my_job tasks: - task_key: my_task existing_cluster_id: YYY spark_python_task: python_file: src/jobs/bronze/my_script.py parameters: ['/Workspace/${workspace.file_path}/src'] main.py import sys bundle_src_path = sys.argv[1] sys.path.append(bundle_src_path) from src.jobs.common import * Caveats: As mentioned, I am using a notebook_task which allows me to pass in parameters that I can read using dbutils. I haven't tested the spark_python_task parameters passing above, but it appears similar and may at least be enough to get you into a working state. Databricks API reference The sys path technique is recommended by Databricks docs, though you may need to approach it differently based on your runtime version. This works well for non-production deployment targets. The parameter passed in for production would likely need to be modified (though I haven't made it that far myself yet)! | 5 | 1 |
77,311,583 | 2023-10-17 | https://stackoverflow.com/questions/77311583/how-can-i-recursively-iterate-through-a-directory-in-python-while-ignoring-some | I have a directory structure on my filesystem, like this: folder_to_scan/ important_file_a important_file_b important_folder_a/ important_file_c important_folder_b/ important_file_d useless_folder/ ... I want to recursively scan through folder_to_scan/, and get all the file names. At the same time, I want to ignore useless_folder/, and anything under it. If I do something like this: path_to_search = Path("folder_to_scan") [pth for pth in path_to_search.rglob("*") if pth.is_file() and 'useless_folder' not in [parent.name for parent in pth.parents]] It will work (probably - I didn't bother trying), but the problem is, useless_folder/ contains millions of files, and rglob will still traverse all of them, take ages, and only apply the filter when constructing the final list. Is there a way to tell Python not to waste time traversing useless folders (useless_folder/ in my case)? | You can easily write your own file iterator using recursion. def useless(path): # your logic to discard folders goes here ... def my_files_iter(path): if path.is_file(): yield path elif path.is_dir(): if useless(path): return for child_path in path.iterdir(): yield from my_files_iter(child_path) | 2 | 2 |
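The useless() predicate in the accepted answer is left open; a minimal sketch of one way to fill it in, assuming the directory names to skip are known up front (the name below comes from the question's example layout):

from pathlib import Path

SKIP_NAMES = {"useless_folder"}  # directory names to prune, per the question

def useless(path: Path) -> bool:
    # Prune any directory whose name is in the skip set.
    return path.name in SKIP_NAMES

def my_files_iter(path: Path):
    # Same recursion as the accepted answer: yield files, skip pruned dirs entirely.
    if path.is_file():
        yield path
    elif path.is_dir():
        if useless(path):
            return
        for child_path in path.iterdir():
            yield from my_files_iter(child_path)

for pth in my_files_iter(Path("folder_to_scan")):
    print(pth)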
77,309,717 | 2023-10-17 | https://stackoverflow.com/questions/77309717/version-controller-in-websocket-with-fastapi | So, I want to use router within the websocket call. This is what I have, at the moment (using dummy data) version->v1->init.py router.include_router(websocket_controller, prefix="/ws") router_without_prefix.include_router(websocket_controller) version->v1->websocket_controller.py So this is the enpoint: router = APIRouter() @router.websocket("/ws/something") async def websocket_endpoint(websocket: WebSocket): but I want to remove this line: router_without_prefix.include_router(websocket_controller). But when I remove this line and call the websocket I have a 403. Can someone explain to me how I can correct this please? | Edit: According to this FastAPI GitHub issue, this issue is specific to websocket routers. Including the router using include_router might not work, but adding routes individually works. Like so: app.add_websocket_route(path="/ws/something", route=websocket_endpoint) Old: Issue: The router has an extra prefix of /ws. You have a prefix of "/ws" defined in init.py, and the coroutine websocket_endpoint is accessed by "/ws/something". This means the async function is available on "/ws/ws/something". Removing the line, router_without_prefix.include_router(websocket_controller), means you also need to update the @router.websocket("/ws/something") line to just @router.websocket("/something"), since the router already has the prefix. Let me know if this works! | 2 | 3 |
77,306,748 | 2023-10-17 | https://stackoverflow.com/questions/77306748/how-is-qstyle-standardpixmap-defined-in-pyqt6 | How is the enumeration for QStyle.StandardPixmap defined in PyQt6? I've tried to replicate it as shown below: from enum import Enum class bootlegPixmap(Enum): SP_TitleBarMenuButton = 0 SP_TitleBarMinButton = 1 SP_TitleBarMaxButton = 2 for y in bootlegPixmap: print(y) I get the following output: bootlegPixmap.SP_TitleBarMenuButton bootlegPixmap.SP_TitleBarMinButton bootlegPixmap.SP_TitleBarMaxButton If I try and iterate over the original using the following code: from PyQt6.QtWidgets import QStyle for x in QStyle.StandardPixmap: print(x) I get numerical values only: 0 1 2 ... | This has got nothing to do with PyQt per se. It's just the normal behaviour of Python's enum classes. By default, the members of IntEnum/IntFlag print their numerical values; so if you want more informative output, you should use repr instead: >>> import enum >>> >>> class X(enum.IntEnum): A = 1 ... >>> f'{X.A} - {X.A!r}' 1 - <X.A: 1> By contrast, the members of Enum have opaque values, so they default to a textual representation of the values: >>> class Y(enum.Enum): A = 1 ... >>> f'{Y.A} - {Y.A!r}' Y.A - <Y.A: 1> Needless to say, PyQt always uses the integer classes, since that's closest to how they're defined in Qt: >>> from PyQt6.QtWidgets import QStyle >>> >>> QStyle.StandardPixmap.mro() [<enum 'StandardPixmap'>, <enum 'IntEnum'>, <class 'int'>, <enum 'ReprEnum'>, <enum 'Enum'>, <class 'object'>] >>> >>> f'{QStyle.StandardPixmap.SP_ArrowUp} - {QStyle.StandardPixmap.SP_ArrowUp!r}' >>> '50 - <StandardPixmap.SP_ArrowUp: 50>' | 2 | 3 |
77,307,742 | 2023-10-17 | https://stackoverflow.com/questions/77307742/how-should-i-pass-a-class-as-an-attribute-to-another-class-with-attrs | So, I just stumbled upon a hurdle concerning the use of attrs, which is quite new to me (I guess this also applies to dataclasses?). I have two classes, one I want to use as an attribute for another. This is how I would do this with regular classes: class Address: def __init__(self) -> None: self.street = None class Person: def __init__(self, name) -> None: self.name = name self.address = Address() Now with attrs, I tried to do the following: from attrs import define @define class Address: street: str | None = None @define class Person: name: str self.address = Address() Now if I try the following, I don't get the same result for a class and a dataclass, for which the reason wasn't obvious to me at first: person_1 = Person("Joe") person_2 = Person("Jane") person_1.address.street = "street" person_2.address.street = "other_street" print(person_1.address.street) I would expect the output to be "street", which is what happens with a regular class. But with attrs, the output is "other_street". I then compared the hashes of person_1.address and person_2.address, and voila, they are the same. After some thinking this is logical, with attrs I instantiate Address immediately, so everyone gets the same instance of Address, with regular classes I only instantiate them when I instantiate the parent class. Now, there is a fix available with attrs: from attrs import define, field @define class Address: street: str | None = None @define class Person: name: str address: Address = field(init=False) def __attrs_post_init__(self): self.address = Address() But this seems really cumbersome to implement every time. Is there a nice solution to this? One way would be to put the instantiation of Address outside of the class like this: address_1 = Address() person_1 = Person("Joe", address) But my issue with that is, that often I want to instantiate the class in an empty state (for example to seperate input from computed values), and this way adds an extra step to instantiation which I need to remember. So in conclusion: In this case, attrs, dataclass, pydantic etc. blur the line between what belongs to the class and what belongs to the instance, and in my case that led to an hour of "wtf happened here". So back to normal classes? I really like the default and validation possibilities of attrs though. Or is there a best practice way to handle this kind of setup? | You can use the factory argument to specify a callable that is called to return a new instance for the field during instantiation: from attrs import define, field @define class Address: street: str | None = None @define class Person: name: str address: Address = field(factory=Address) | 2 | 4 |
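A short usage check for the factory-based version, confirming each Person now carries its own Address (the attribute values are just placeholders):

from attrs import define, field

@define
class Address:
    street: str | None = None

@define
class Person:
    name: str
    address: Address = field(factory=Address)  # a new Address per instance

person_1 = Person("Joe")
person_2 = Person("Jane")
person_1.address.street = "street"
person_2.address.street = "other_street"

print(person_1.address.street)                 # street
print(person_1.address is person_2.address)    # False, separate instances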
77,304,167 | 2023-10-16 | https://stackoverflow.com/questions/77304167/using-pydantic-to-change-int-to-string | I am sometimes getting data that is either a string or int for 2 of my values. Im trying to figure out the best solution for handling this and I am using pydantic v2. Ive been studying the @field_validtaors(pre=True) but haven't been successful. Here's my code: class User(BaseModel): system_id: str db_id: str id: str record: dict isValid: bool @field_validator("id", mode='before') <<<--------- not working def transform_system_id_to_str(cls, value) -> str: return str(value) @field_validator("system_id", mode='before') <<<--------- not working def transform_system_id_to_str(cls, value: str | int) -> str: if isinstance(value, str): print('yes') return value return str(value) | You can validate more than one fields in a field_validator. Try the following code: class User(BaseModel): system_id: str db_id: str id: str record: dict isValid: bool @field_validator("id","system_id", mode='before') def transform_id_to_str(cls, value) -> str: return str(value) u = User(system_id='199', db_id="1", id='99', record={}, isValid=True) print(u) ## system_id='199' db_id='1' id='99' record={} isValid=True u = User(system_id=199, db_id="1", id=99, record={}, isValid=True) print(u) ## system_id='199' db_id='1' id='99' record={} isValid=True Before validators run before Pydantic's internal parsing and validation (e.g. coercion of a str to an int). https://docs.pydantic.dev/latest/concepts/validators/ | 5 | 4 |
77,306,963 | 2023-10-17 | https://stackoverflow.com/questions/77306963/does-class-variable-is-common-among-objects-in-python | Suppose I want to implement common variable among objects using class variables (similar to static in java/c++). When I access class variable using object, it shows default value. Then I updaded class variable via object, its not updating class variable for other objects. It is showing old value in when class variable accessed via other objects or Class Name directly. Why is it so, and what is the way in python to make class/static variable (common among objects) # Class for Computer Science Student class CSStudent: stream = 'cse' # Class Variable def __init__(self,name): self.name = name # Instance Variable # Objects of CSStudent class a = CSStudent('Geek') b = CSStudent('Nerd') print(a.stream) # prints "cse" print(b.stream) # prints "cse" print(a.name) # prints "Geek" print(b.name) # prints "Nerd" # Class variables can be accessed using class # name also print(CSStudent.stream) # prints "cse" # Now if we change the stream for just a it won't be changed for b a.stream = 'ece' b.stream = 'abc' print(CSStudent.stream) # prints 'ece' print(a.stream) # prints 'ece' print(b.stream) # prints 'abc' | You need to know the lookup procedure. First off, both instances and classes have their own namespaces. Only the class variables are shared between all instances. The namespace of classes and instances are accessible via __dict__ attribute. When you access an attribute from an instance, Python first looks at the namespace of the instance, if it can find it, it returns it, otherwise it's gonna find it in its class! class CSStudent: stream = "cse" a = CSStudent() b = CSStudent() print("stream" in a.__dict__) # False print("stream" in b.__dict__) # False print("stream" in CSStudent.__dict__) # True print(a.stream) # cse print(b.stream) # cse So a and b doesn't have that stream, only the class has. Now a.stream = "something" will add that attribute to the namespace of that specific instance. a.stream = "something" print("stream" in a.__dict__) # True print("stream" in b.__dict__) # false print("stream" in CSStudent.__dict__) # True Now if you access a.stream it finds it in the namespace of a and returns it, but because b doesn't have that, it will find it in the class.(the shared one) print(a.stream) # something print(b.stream) # cse If you want to the change to reflect all instances, you need to change it on the class which has it shared between all instances. class CSStudent: stream = "cse" a = CSStudent() b = CSStudent() print(a.stream) # cse print(b.stream) # cse CSStudent.stream = "something" print(a.stream) # something print(b.stream) # something Always think about who has what. Then consider the precedence, instance namespaces are checked first. Note: I simplified the explanation by not mentioning what descriptors are because we don't have one here. Take a look at it later. You would be surprised to see that in fact always class namespaces are checked first! | 3 | 3 |
77,306,397 | 2023-10-17 | https://stackoverflow.com/questions/77306397/wide-to-long-in-pandas-while-aligning-columns | Consider the following example: pd.DataFrame({'time' : [1,2,3], 'X1-Price': [10,12,11], 'X1-Quantity' : [2,3,4], 'X2-Price': [3,4,2], 'X2-Quantity' : [1,1,1]}) Out[92]: time X1-Price X1-Quantity X2-Price X2-Quantity 0 1 10 2 3 1 1 2 12 3 4 1 2 3 11 4 2 1 I am trying to reshape this wide dataframe into a long format. The difficulty is that I want to break down the variables by type (identified by the X1, X2 variables) while keeping price and quantity aligned on the same rows. That is, the desired output is the following Out[93]: time type quantity price 0 1 X1 2 10 1 2 X1 3 12 2 3 X1 4 11 3 1 X2 1 3 4 2 X2 1 4 5 3 X2 1 2 I am not sure how to do this. I tried the pd.wide_to_long() function but it does not work correctly. pd.wide_to_long(b, i = 'time', j = 'value', stubnames = ['X']) Out[95]: Empty DataFrame Columns: [X1-Price, X2-Price, X2-Quantity, X1-Quantity, X] Index: [] Do you have any idea? Thanks! | wide_to_long requires a specific order. First invert the chunks around the -, then change some of the default parameters: # alternative: # tmp.columns = df.columns.str.split('-').str[::-1].str.join('-') tmp = df.rename(columns=lambda s: '-'.join(s.split('-')[::-1])) out = (pd.wide_to_long(tmp, i='time', j='type', stubnames=['Quantity', 'Price'], sep='-', suffix=r'X\d+') .reset_index() ) Output: time type Quantity Price 0 1 X1 2 10 1 2 X1 3 12 2 3 X1 4 11 3 1 X2 1 3 4 2 X2 1 4 5 3 X2 1 2 | 2 | 3 |
77,305,599 | 2023-10-17 | https://stackoverflow.com/questions/77305599/how-do-i-get-pandas-to-ignore-at-start-of-header-line | I am reading a .fce file in Pandas. The start of the file looks like this: # Forces and moments acting on bodies # Direction1 = ( 1 0 0) # Direction2 = ( 0 1 0) # Moments around ( 0 0 0) # Boundary regions: 0 # Time F1-press F1-visc F1-total F2-press F2-visc F2-total M3-press M3-visc M3-total 1250 1.1630515 0.018437668 1.1814892 0.11872933 0.076623484 0.19535281 -0.079455905 -0.0024483534 -0.081904258 1250 1.1451982 0.019795173 1.1649934 0.10773641 0.076852597 0.18458901 -0.07645881 -0.0026731929 -0.079132003 I have been able to get pandas to ignore the first 5 lines since those are not useful to me. The headers are contained in the 6th line. The issue now is that Pandas is interpretting the # as one of the headers and I would like it to ignore this character. Time should be the first header. I am using the command df = pd.read_csv('DragLift.fce',sep='\s+', skiprows=5) to read the file. I have found that both read_table and read_csv work the same for this and have the same issue. Everything works fine except Pandas' interpretation of the header. The output is: # Time F1-press ... 1250 1.163051 0.018438 ... ... ... ... ... when I want: Time F1-press F1-visc ... 1250 1.163051 0.018438 ... ... ... ... ... How can I get Pandas to ignore the #? Thank you for your help. | Quick and easy option, shift your data after import: df = (pd.read_csv('DragLift.fce',sep='\s+', skiprows=5) .shift(axis=1).iloc[:, 1:] ) print(df) Alternatively, read the headers with a regex, ignore the lines with #, and pass the names manually: import re with open('DragLift.fce') as f: m = next(re.finditer('#([^\n]+)\n(?!#)', f.read()), None) names = m.group(1).split() if m else None df = pd.read_csv('DragLift.fce', sep='\s+', names=names, comment='#') Output: Time F1-press F1-visc F1-total F2-press F2-visc F2-total M3-press M3-visc M3-total 0 1250 1.163051 0.018438 1.181489 0.118729 0.076623 0.195353 -0.079456 -0.002448 -0.081904 1 1250 1.145198 0.019795 1.164993 0.107736 0.076853 0.184589 -0.076459 -0.002673 -0.079132 | 2 | 2 |
77,304,052 | 2023-10-16 | https://stackoverflow.com/questions/77304052/transactional-operations-with-google-cloud-ndb | I am using Google Cloud Datastore via the python-ndb library (Python 3). My goal is to transactionally create two entities at once. For example, when a user creates an Account entity, also create a Profile entity, such that if either entity fails to be created, then neither entity should be created. From the datastore documentation, it can be implemented like this: from google.cloud import datastore client = datastore.Client() def transfer_funds(client, from_key, to_key, amount): with client.transaction(): from_account = client.get(from_key) to_account = client.get(to_key) from_account["balance"] -= amount to_account["balance"] += amount client.put_multi([from_account, to_account]) However, there is no example provided by the python-ndb documentation. From chatgpt, I tried something like this: from google.cloud import ndb client = ndb.Client() def create_new_account(): with client.context(): @ndb.transactional def _transaction(): account = Account() account.put() profile = Profile(parent=account.key) profile.put() return account, profile try: account, profile = _transaction() except Exception as e: print('Failed to create account with profile') raise e return account, profile However, I get the error: TypeError: transactional_wrapper() missing 1 required positional argument: 'wrapped' | You're almost there. It should be @ndb.transactional() def _transaction(): Ran your code and got your error. Then amended it to mine and it went through. Source - https://github.com/googleapis/python-ndb/blob/main/google/cloud/ndb/_transaction.py#L317 | 2 | 3 |
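Putting the fix into the question's code, a sketch of what the corrected transaction could look like; the Account and Profile models are reduced to empty placeholders here since their real fields are not shown:

from google.cloud import ndb

client = ndb.Client()

class Account(ndb.Model):   # placeholder model
    pass

class Profile(ndb.Model):   # placeholder model
    pass

def create_new_account():
    with client.context():
        @ndb.transactional()        # note the call parentheses, per the accepted answer
        def _transaction():
            account = Account()
            account.put()
            profile = Profile(parent=account.key)
            profile.put()
            return account, profile

        return _transaction()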
77,304,749 | 2023-10-16 | https://stackoverflow.com/questions/77304749/python-how-to-join-string-elements-in-list-with-complex-notation | Edited to show more accurate strings I have a long list with the following grouping structure. I am showing just two groups, but in reality there are 300. The strings can be any random collection of numbers and letters: ["['P431', 'N7260', 'K492'], ['R109', 'T075X9A'], ['U8154', 'C861']", "['R878X8', 'T61'], ['F6332', 'Q7979'], ['L520'], ['B7939', 'K5132']"] For each group in the list, I want the final output to look like the following: [['P431 N7260 K492 R109 T075X9A', 'U8154 C861'], ['R878X8 T61 F6332 Q7979 L520', 'B7939 K5132']] So for each group in the list, I want to join all of the elements in every sub-list except the last sub-list, then separately join all of the elements in the last sub-list, and finally add these both together into a single sub-list, each of which will now have just two elements. Thank you in advance as I am a beginner. | You can try this: from ast import literal_eval example=[ "['P431', 'N7260', 'K492'], ['R109', 'T075X9A'], ['U8154', 'C861']", "['R878X8', 'T61'], ['F6332', 'Q7979'], ['L520'], ['B7939', 'K5132']", "['a','b','c'], 'not a list'" ] for st in example: *first_grp,last=literal_eval(st) if any(not isinstance(s, list) for s in (first_grp, last)): continue print([" ".join([e for r in first_grp for e in r]), " ".join(last)]) Prints: ['P431 N7260 K492 R109 T075X9A', 'U8154 C861'] ['R878X8 T61 F6332 Q7979 L520', 'B7939 K5132'] This is similar to Andrej Kesely solution but may overcome the 'random strings' you describe with a type check. | 3 | 2 |
77,304,625 | 2023-10-16 | https://stackoverflow.com/questions/77304625/gunicorn3-goes-to-the-development-server | When I open a server with gunicorn for my flask app, it automatically opens the development server of flask, without giving me any errors: $ gunicorn3 --workers=1 main:app \[2023-10-16 19:46:13 +0000\] \[1061\] \[INFO\] Starting gunicorn 20.1.0 \[2023-10-16 19:46:13 +0000\] \[1061\] \[INFO\] Listening at: http://127.0.0.1:8000 (1061) \[2023-10-16 19:46:13 +0000\] \[1061\] \[INFO\] Using worker: sync \[2023-10-16 19:46:13 +0000\] \[1062\] \[INFO\] Booting worker with pid: 1062 * Serving Flask app 'app' * Debug mode: off 2023-10-16 19:46:14,209:INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on http://127.0.0.1:5000 2023-10-16 19:46:14,209:INFO - Press CTRL+C to quit I already have other servers with gunicorn and I don't understand this behavior, does someone know? I've changed the port and the host, but the outcome is the same, also with the debug mode and the number of workers of gunicorn3. | If you have the following lines in your main application script then it will cause Flask's development server to start if __name__ == '__main__': app.run() You could comment out the above part Make sure that you don't have any environment variables like FLASK_ENV=development or FLASK_APP=app.py Confirm that main:app correctly points to your Flask application object | 2 | 2 |
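For reference, a minimal main.py sketch that matches the gunicorn3 --workers=1 main:app invocation and follows the accepted answer by not calling app.run() at module level; the route and response are placeholders:

# main.py, served with: gunicorn3 --workers=1 main:app
from flask import Flask

app = Flask(__name__)      # gunicorn only needs this module-level "app" object

@app.route("/")            # placeholder route
def index():
    return "ok"

# No app.run() here: gunicorn drives the app, so Flask's development server
# never starts alongside it.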
77,302,486 | 2023-10-16 | https://stackoverflow.com/questions/77302486/sort-values-in-a-nested-dictionary-from-min-to-max | I created a dictionary, named dict_dif and loaded nested data into the same. The nested structure of this dictionary has the following structure: Variables (four different variables) Regions (eleven different regions for each of the four variables) Data points (60 data points per region). The dictionary looks as follows (where I shorten the variables, regions and data points to provide a small and better overview): 'variable_one': {'V1': [0.2883961843782473, -0.2564336831277686, -0.1502803806437477, -0.3404171090176428, 'V2': [0.3053319292730037, -0.12252840653652636, -0.18759593722968465, -0.04368054841152297, 'variable_two': {'V1': [0.05129822246425414, ... Aim: I would like to sort the numeric values from the lowest to the highest for each variable and region, respectively. For example, in the sorted dictionary the numeric values should span from the lowest to the highest for region V1, for region V2, and so on individually for each variable. I tried the following code (among others) that I found on the internet: sorted_dict = {var : dict(sorted(roi.items(), key = lambda ele: ele[1])) for var, roi in dict_dif.items()} However, that code seems to sort the regions based on each regionβs first value. Is there a way to adjust this code above (or a different code) so that the order of the variables and regions remains the same, while only sorting the regionsβ values? | If I understand correctly, you want to sort the numeric values for each variable sorted_dict = { variable: {region: sorted(values) for region, values in regions.items()} for variable, regions in dict_dif.items() } | 2 | 2 |
77,277,096 | 2023-10-12 | https://stackoverflow.com/questions/77277096/error-in-calculating-dynamic-time-warping | I am using this github codes (https://github.com/nageshsinghc4/-Dynamic-Time-Warping-DTW-/blob/main/Dynamic_Time_Warping(DTW).ipynb) to calculate Dynamic Time Warping. However, when run dtw_distance, warp_path = fastdtw(x, y, dist=euclidean), I get an error that "ValueError: Input vector should be 1-D.". I have same problem with this github codes (https://github.com/ElsevierSoftwareX/SOFTX-D-22-00246) too. Here is the codes, import pandas as pd import numpy as np # Plotting Packages import matplotlib.pyplot as plt import seaborn as sbn import matplotlib as mpl mpl.rcParams['figure.dpi'] = 150 savefig_options = dict(format="png", dpi=150, bbox_inches="tight") # Computation packages from scipy.spatial.distance import euclidean from fastdtw import fastdtw def compute_euclidean_distance_matrix(x, y) -> np.array: """Calculate distance matrix This method calcualtes the pairwise Euclidean distance between two sequences. The sequences can have different lengths. """ dist = np.zeros((len(y), len(x))) for i in range(len(y)): for j in range(len(x)): dist[i,j] = (x[j]-y[i])**2 return dist def compute_accumulated_cost_matrix(x, y) -> np.array: """Compute accumulated cost matrix for warp path using Euclidean distance """ distances = compute_euclidean_distance_matrix(x, y) # Initialization cost = np.zeros((len(y), len(x))) cost[0,0] = distances[0,0] for i in range(1, len(y)): cost[i, 0] = distances[i, 0] + cost[i-1, 0] for j in range(1, len(x)): cost[0, j] = distances[0, j] + cost[0, j-1] # Accumulated warp path cost for i in range(1, len(y)): for j in range(1, len(x)): cost[i, j] = min( cost[i-1, j], # insertion cost[i, j-1], # deletion cost[i-1, j-1] # match ) + distances[i, j] return cost # Create two sequences x = [7, 1, 2, 5, 9] y = [1, 8, 0, 4, 4, 2, 0] dtw_distance, warp_path = fastdtw(x, y, dist=euclidean) cost_matrix = compute_accumulated_cost_matrix(x, y) When I run the code, I get the following error that, ValueError: Input vector should be 1-D. | There's a conflict here between what SciPy is expecting and what FastDTW is expecting. FastDTW is expecting to compare one element at a time. SciPy is expecting to get an entire vector at once. Here's what FastDTW says about the dist argument. (Source.) dist : function or int The method for calculating the distance between x[i] and y[j]. If dist is an int of value p > 0, then the p-norm will be used. If dist is a function then dist(x[i], y[j]) will be used. If dist is None then abs(x[i] - y[j]) will be used. Here's what SciPy says about the euclidean function: Computes the Euclidean distance between two 1-D arrays. The FastDTW documentation I just quoted gives you a way to solve this. A p-norm with p=2 is equivalent to the Euclidean norm. (Source.) Therefore, the simplest way to solve this problem is to provide dist=2. dtw_distance, warp_path = fastdtw.fastdtw(x, y, dist=2) | 2 | 3 |
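Plugging the integer p-norm into the question's setup gives a version that runs without the 1-D error; a minimal sketch assuming the fastdtw package is installed:

from fastdtw import fastdtw

x = [7, 1, 2, 5, 9]
y = [1, 8, 0, 4, 4, 2, 0]

# dist=2 selects the 2-norm between elements; for scalar inputs this is just
# abs(x[i] - y[j]), so no SciPy vector function is needed.
dtw_distance, warp_path = fastdtw(x, y, dist=2)
print(dtw_distance)
print(warp_path)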
77,295,731 | 2023-10-15 | https://stackoverflow.com/questions/77295731/postgresql-values-list-table-constructor-with-sqlalchemy-how-to-for-native-sq | I am trying to use PostgreSQL Values list https://www.postgresql.org/docs/current/queries-values.html (also known as table constructor) in Python with SQLAlchemy. Here the SQL working in PostgreSQL SELECT input_values.ticker FROM (VALUES ('A'), ('B'), ('C')) as input_values(ticker) I built a list of tickers and passing it as argument. It converted by SQLAlchecmy to SELECT input_values.ticker FROM (VALUES (%(input_values_1)s, %(input_values_2)s, %(input_values_3)s)) as input_values(ticker) Which looks good if would list be used as param with IN clause. But in my case this not works. How to provide parameters list correctly? Here the code I have: import logging from injector import inject from sqlalchemy import bindparam from sqlalchemy.sql import text from database.database_connection import DatabaseConnection class CompaniesDAO: FIND_NEW_QUERY = ''' SELECT input_values.ticker FROM (VALUES :input_values) as input_values(ticker) ''' @inject def __init__(self, database_connection: DatabaseConnection): self.__database_connection = database_connection def save_new(self, companies): tickers = ['A', 'B', 'C'] input_values = {'input_values': tickers} database_engine = self.__database_connection.get_engine() with database_engine.connect() as connection: query = text(CompaniesDAO.FIND_NEW_QUERY) query = query.bindparams(bindparam('input_values', expanding=True)) result = connection.execute(query, input_values) new_tickers = [row[0] for row in result] logging.info(new_tickers) I saw few related discussions like VALUES clause in SQLAlchemy and checked current solution like https://github.com/sqlalchemy/sqlalchemy/wiki/PGValues. However, I not see solution for native SQL query. Is there any? | For the general case, the Table Value Constructor (TVC) values should be a list of tuples, not a list of scalar values tickers = [('A',), ('B',), ('C',)] from which we can then build the TVC (VALUES construct) tvc = values( column("ticker", String), name="input_values" ).data(tickers) Now the easiest way to get the query is simply qry = select(text("input_values.ticker")).select_from(tvc) with which we can do engine.echo = True with engine.begin() as conn: results = conn.execute(qry).all() """ SELECT input_values.ticker FROM (VALUES (%(param_1)s), (%(param_2)s), (%(param_3)s)) AS input_values (ticker) [no key 0.00036s] {'param_1': 'A', 'param_2': 'B', 'param_3': 'C'} """ print(results) """ [('A',), ('B',), ('C',)] """ However, if you really want to use a literal SQL query you could also do tvc = values( column("ticker", String), name="input_values", literal_binds=True, ).data(tickers) qry = text( "SELECT input_values.ticker " f"FROM ({tvc.compile(engine)}) AS input_values (ticker)" ) engine.echo = True with engine.begin() as conn: results = conn.execute(qry).all() """ SELECT input_values.ticker FROM (VALUES ('A'), ('B'), ('C')) AS input_values (ticker) [generated in 0.00026s] {} """ print(results) """ [('A',), ('B',), ('C',)] """ Or if your TVC had a lot of columns and you didn't want to type them all out you could let SQLAlchemy build the column list for you tvc = values( column("ticker", String), name="input_values", literal_binds=True, ).data(tickers) basic_select = select(text("*")).select_from(tvc) tvc_compiled = str(basic_select.compile(engine))[15:] print(tvc_compiled) # (VALUES ('A'), ('B'), ('C')) AS input_values (ticker) | 2 | 2 |
77,297,624 | 2023-10-15 | https://stackoverflow.com/questions/77297624/writing-gzip-csv-file-introduces-random-chars-in-the-first-row | I am writing some csv data to a csv gzip file in python as follows import csv import gzip import io csv_rows=[["cara","vera","tara"],["rar","mar","bar"],["jump","lump","dump"]] mem_file = io.BytesIO() with gzip.GzipFile(fileobj=mem_file,mode="wb") as gz: with io.TextIOWrapper(gz, encoding='utf-8') as wrapper: writer = csv.writer(wrapper) writer.writerows(csv_rows) gz.write(mem_file.getvalue()) gz.close() mem_file.seek(0) When I gunzip the file the first column in the first row is a strange set of characters and causes the first row to actually have 4 columns The 2nd and 3rd rows are ok I have tried different data and always see this behavior in the first column of the first row. What is wrong with the code? For reference here it is what I see in the gunzipped file ?βΉ? #,eΓΏcara,vera,tara rar,mar,bar jump,lump,dump | Remove this line: gz.write(mem_file.getvalue()) (also it isgetvalue(), not getValue()). The csv.writer() already uses io.TextIOWrapper to write to gzip.GzipFile, so there's no need to write to it again. | 2 | 3 |
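For completeness, the question's snippet with that line removed, as a sketch, plus a decompress step to confirm the output is clean:

import csv
import gzip
import io

csv_rows = [["cara", "vera", "tara"], ["rar", "mar", "bar"], ["jump", "lump", "dump"]]

mem_file = io.BytesIO()
with gzip.GzipFile(fileobj=mem_file, mode="wb") as gz:
    with io.TextIOWrapper(gz, encoding="utf-8") as wrapper:
        writer = csv.writer(wrapper)
        writer.writerows(csv_rows)
    # no extra gz.write(mem_file.getvalue()) here; the csv writer already
    # wrote everything through the TextIOWrapper into the gzip stream

mem_file.seek(0)
print(gzip.decompress(mem_file.getvalue()).decode("utf-8"))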
77,269,618 | 2023-10-11 | https://stackoverflow.com/questions/77269618/why-does-an-operation-on-a-large-integer-silently-overflow | I have a list that contains very large integers and I want to cast it into a pandas column with a specific dtype. As an example, if the list contains 2**31, which is outside the limit of int32 dtype, casting it into dtype int32 throws an Overflow Error, which lets me know to use another dtype or handle the number in some other way beforehand. import pandas as pd pd.Series([2**31], dtype='int32') # OverflowError: Python int too large to convert to C long But if a number is large but inside the dtype limits (i.e. 2**31-1), and some number is added to it which results in a value that is outside the dtype limits, then instead of an OverflowError, the operation is executed without any errors, yet the value is now inverted, becoming a completely wrong number for the column. pd.Series([2**31-1], dtype='int32') + 1 0 -2147483648 dtype: int32 Why is it happening? Why doesnβt it raise an error like the first case? PS. I'm using pandas 2.1.1 and numpy 1.26.0 on Python 3.12.0. | Why does an operation on a large integer silently overflow? As a short answer, that's because of how numpy deals with overflows. On my platform (with the same versions of Python/Packages as yours) : from platform import * import numpy as np; import pandas as pd system(), version(), machine() python_version(), pd.__version__, np.__version__ ('Linux', '#34~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Sep 7 13:12:03 UTC 2', 'x86_64') ('3.12.0', '2.1.1', '1.26.0') I can reproduce your issue but with a larger integer than the one you choose as an example : pd.Series([2**63], dtype="int32") raises this : OverflowError: Python int too large to convert to C long While pd.Series([2**31], dtype="int32") raises this : ValueError: Values are too large to be losslessly converted to int32. To cast anyway, use pd.Series(values).astype(int32) Details Let's agree that you're using two different type of objects, which potentially means two different scenarios, i.e, 1) error raised or 2) no error raised : pd.Series : the Series constructor pd.Series.add : a method of the former The construction : pd.Series([2**31], dtype="int32") It is handled behind the scenes by sanitize_array which receives your input (the list [2**31], i.e [2147483648]) and call in this case maybe_cast_to_integer_array. The latter will make a classical NumPy construction using np.array : casted = np.array([2147483648], dtype="int32") DeprecationWarning: NumPy will stop allowing conversion of out-of-bound Python integers to integer arrays. The conversion of 2147483648 to int32 will fail in the future. For the old behavior, usually: np.array(value).astype(dtype) will give the desired result (the cast overflows). np.array([2147483648], dtype='int32') You may ask yourself why the warning above doesn't show off while constructing your Series, well that's because pandas silents it. Now, right after casting, pandas calls np.asarray without specifying a dtype to let NumPy infer a dtype (which is int64 here) : arr = np.asarray(arr). And since, casted.dtype < arr.dtype, the ValueError is triggered. 
The addition : pd.Series([2**31-1], dtype="int32") + 1 This one is delegated to _na_arithmetic_op that receives array([2147483647], dtype=int32) and 1 and try to add them together with the help of _evaluate_standard to make a classical operator.add operation that is equivalent to np.array([2147483647]) + 1 and since the fixed size of NumPy numeric types may cause overflow errors when a value requires more memory than available in the data type, the result is the array([-2147483648], dtype=int32) which is passed to sanitize_array to construct back a Series : pd.Series([2**31-1], dtype="int32") + 1 0 -2147483648 dtype: int32 NB : When you go beyond the limit of the int32, NumPy wraps around to the minimum value : a = np.array([2**31-1], dtype="int32"); b = 1 a+b # this gives array([-2147483648], dtype=int32) Here is some other examples : def wrap_int32(i, N, l=2**31): return ((i+N) % l) - l wrap_int32(2**31, 0) # -2147483648 wrap_int32(2**31, 1) # -2147483647 wrap_int32(2**31, 2) # -2147483646 wrap_int32(2**31, 3) # -2147483645 # ... I have a list that contains very large integers and I want to cast it into a pandas column with a specific dtype. As an example, if the list contains 2**31, which is outside the limit of int32 dtype, casting it into dtype int32 throws an OverflowError, which lets me know to use another dtype or handle the number in some other way beforehand. Maybe you should consider opening an issue so that the arithmetic operations made by pandas raise an error in case of an overflow. And as a workaround (or maybe a solution?) for your usecase, you can try catching upstream the integers that doesn't fall within the int32 range : iint32 = np.iinfo(np.int32) lst = [100, 1234567890000, -1e19, 2**31, 2**31-1, -350] out = [i for i in lst if iint32.min <= i and i <= iint32.max] # [100, 2147483647, -350] | 8 | 9 |
77,294,679 | 2023-10-14 | https://stackoverflow.com/questions/77294679/pandas-column-split-a-row-with-conditional-and-create-a-separate-column | It seems the problem is not difficult, but somehow I am not able to make it work. My problem is as follows. I have a dataframe say as follows: dfin A B C a 1 198q24 a 2 128q6 a 6 1456 b 7 67q22 b 1 56 c 3 451q2 d 11 1q789 So now what I want to do is as follows, whenever the script will encounter a 'q', it will split the values and create a separate column with values starting from 'q'. The part before 'q' will remain in the original ( or maybe can crate a new column). So my desired output should be as follows: dfout A B C D a 1 198 q24 a 2 128 q6 a 6 1456 b 7 67 q22 b 1 56 c 3 451 q2 d 11 1 q789 So what I have tried till now is as follows: dfout = dfin.replace('\q\d*', '', regex=True) Its creating one column without q, but I am not able to create the column D and not working as expected. Any help/ideas will help and be appreciated. | import pandas as pd def get_input() -> pd.DataFrame: csv_text = """ a 1 198q24 a 2 128q6 a 6 1456 b 7 67q22 b 1 56 c 3 451q2 d 11 1q789 """.strip() return pd.DataFrame(map(str.split, csv_text.splitlines()), columns=["a", "b", "c"]) def split_on_q(df_in: pd.DataFrame) -> pd.DataFrame: df = df_in.c.str.split("q", expand=True) df_out = df_in.copy() df_out["c"] = df[0] df_out["d"] = _prepend_q(df[1]) return df_out def _prepend_q(series: pd.Series) -> pd.Series: return series.apply(lambda s: None if s is None else f"q{s}") if __name__ == "__main__": print(split_on_q(get_input())) Output: a b c d 0 a 1 198 q24 1 a 2 128 q6 2 a 6 1456 None 3 b 7 67 q22 4 b 1 56 None 5 c 3 451 q2 6 d 11 1 q789 | 2 | 1 |
77,281,898 | 2023-10-12 | https://stackoverflow.com/questions/77281898/pandas-dataframe-columns-unexpectedly-out-of-order | I encountered a rather unexpected result today with a pandas dataframe. My script takes genome sequence data (in fasta format) as input and calculates several basic metrics. I store those metrics in a pandas dataframe. The script starts by defining an empty dataframe with headers: stats_df = pd.DataFrame(columns=['Assembly','Size','#Contigs','#Contigs > 3000','N50','Longest_contig']) The main body of the script then loops through all genome files and calculates all the metrics listed in the headers. That all works just fine. I then add the metrics for the genome assembly to the stats_df dataframe using these two lines: new_df_row = pd.DataFrame({'Assembly':[assembly_name],'Size':[assembly_size_in_Mbp],'#Contigs':[num_of_contigs],'#Contigs > 3000':[greater_than_3000_count],'N50':[n50],'Longest_contig':[longest_contig]}) stats_df = pd.concat([stats_df,new_df_row],ignore_index=True) The unexpected behaviour is when I view stats_df the columns are ordered: #Contigs, #Contigs > 3000, Assembly, Longest_contig, N50, Size. This is different to the order of the entries in the empty dataframe a the start and in each new row added. The metrics are all in the right place, I'm just wondering what causes the columns to move around like that? | I think the problem is that for new_df_row you're using a dictionary to create the data frame. Dictionaries are unordered (in Python 3.6 and below), so the keys do not have any fix order. This article and also this question explain the issue nicely. To fix it, you could upgrade to Python 3.7 which has ordered dictionaries. | 2 | 2 |
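If the goal is simply to keep the report columns in the declared order, one workaround (not from the accepted answer, and using placeholder values) is to reindex against the original header list after each concat:

import pandas as pd

COLUMNS = ['Assembly', 'Size', '#Contigs', '#Contigs > 3000', 'N50', 'Longest_contig']
stats_df = pd.DataFrame(columns=COLUMNS)

# placeholder metrics for one assembly
new_df_row = pd.DataFrame({'Assembly': ['asm_1'], 'Size': [4.6], '#Contigs': [120],
                           '#Contigs > 3000': [80], 'N50': [45000], 'Longest_contig': [210000]})

# reindex pins the column order regardless of how the new row was constructed
stats_df = pd.concat([stats_df, new_df_row], ignore_index=True).reindex(columns=COLUMNS)
print(stats_df.columns.tolist())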
77,294,331 | 2023-10-14 | https://stackoverflow.com/questions/77294331/how-to-relay-the-sum-of-each-nested-list-to-the-next-one | My input is this list : my_list = [[3, 4, -1], [0, 1], [-2], [7, 5, 8]] I need to sum the nested list i and pad it to the right of the list i+1, it's like a relay course. I have to mention that the original list should be untouched and I'm not allowed to copy it. wanted = [[3, 4, -1], [0, 1, 6], [-2, 7], [7, 5, 8, 5]] I tried the code below but I got a list with more elements than my original one : from itertools import pairwise wanted = [] for left, right in pairwise(my_list): wanted.extend([left, right + [sum(left)]]) print(wanted) [[3, 4, -1], [0, 1, 6], [0, 1], [-2, 1], [-2], [7, 5, 8, -2]] Can you guys explain what's happening please or suggest another solution ? | You can try itertools.accumulate: from itertools import accumulate my_list = [[3, 4, -1], [0, 1], [-2], [7, 5, 8]] print(list(accumulate(my_list, lambda l1, l2: [*l2, sum(l1)]))) Prints: [[3, 4, -1], [0, 1, 6], [-2, 7], [7, 5, 8, 5]] EDIT: As @KellyBundy pointed in the comments, the first element in the list is shared. So if it's a problem, make a copy from it e.g.: accumulate([my_list[0][:], *my_list[1:]], lambda l1, l2: [*l2, sum(l1)]) | 5 | 5 |
77,294,480 | 2023-10-14 | https://stackoverflow.com/questions/77294480/how-to-take-the-number-of-occurrences-of-a-number-in-a-list-and-create-a-new-lis | I am given a list to put into a function. The function is then suppose to return the amount of times each number was in the function. So for example, if the given list is [15,15,15,6,6,6,7,7], the output list should be [3,15,3,6,2,7]. def encode_rle(flat_data): set_list = set(flat_data) flat_data_converted = [] for value in set_list: flat_data_converted.append(value) list = [] for number in flat_data_converted: amount = flat_data.count(number) list.append(amount) new_list = flat_data_converted + list This is what I have done so far, but the numbers in the list are not attached to their occurrence amount. | def encode_rle(flat_data): result = [] i = 0 n = len(flat_data) while i < n: count = 1 while i + 1 < n and flat_data[i] == flat_data[i + 1]: count += 1 i += 1 result.extend([count, flat_data[i]]) i += 1 return result flat_data = [15,15,15,6,6,6,7,7] print(flat_data) print(encode_rle(flat_data)) Expected output: [3, 15, 3, 6, 2, 7] | 2 | 1 |
77,293,906 | 2023-10-14 | https://stackoverflow.com/questions/77293906/error-botocore-exceptions-httpclienterror-an-http-client-raised-an-unhandled-e | So I am pretty new to asynchronous programming in python3. I am creating aws sqs client using types_aiobotocore_sqs and aiobotocore libraries. But I am getting the error mentioned in the title. Following is the code implementation url.py file: @api_view(["GET"]) def root_view(request): async def run_async(): sqs_helper = await Helper.create(QueueName.TEST_QUEUE.value) message = await sqs_helper.get_queue_url() return message response = asyncio.run(run_async()) return Response(str(response)) sqs_helper file class Helper: def __init__(self, queue_name: str) -> None: self.queue_name = queue_name self.access_key_id = env("AWS_ACCESS_KEY_ID") self.secret_access_key = env("AWS_SECRET_ACCESS_KEY") self.region_name = env("REGION_NAME") self.sqs = None @classmethod async def create(cls, queue_name): instance = cls(queue_name) await instance.setup() return instance async def setup(self): self.sqs_client = await self.create_sqs_client() self.sqs = Sqs(self.sqs_client, self.queue_name) async def create_sqs_client(self): session = get_session() async with session.create_client('sqs', region_name=self.region_name, aws_access_key_id=self.access_key_id, aws_secret_access_key=self.secret_access_key) as client: client: SQSClient print("TYPE" + str(client)) return client async def get_queue_url(self): return await self.sqs.get_queue_url() Sqs client which initiating the queue class Sqs: def __init__(self, client: SQSClient, queue_name: str) -> None: self.queue_name = queue_name self._queue_url = "" self.client = client async def get_queue_url(self) -> str: if not self._queue_url: try: response = await self.client.get_queue_url(QueueName=self.queue_name) self._queue_url = response["QueueUrl"] except ClientError as err: if ( err.response.get("Error", {}).get("Code") == "AWS.SimpleQueueService.NonExistentQueue" ): raise QueueDoesNotExist( f"Queue {self.queue_name} does not exist" ) from err raise err return self._queue_url I am getting the particular error at response = await self.client.get_queue_url(QueueName=self.queue_name) in Sqs class which in the third code snippet. | When you use async with session.create_client(...) as client, the client object is limited to that context and will be None outside of it. You should assign it directly instead to ensure it's accessible outside that context: async def create_sqs_client(self): session = get_session() client = await session.create_client('sqs', region_name=self.region_name, aws_access_key_id=self.access_key_id, aws_secret_access_key=self.secret_access_key) print("TYPE" + str(client)) return client Make sure to also handle the closing of the client manually since you're not using async with which would do that for you. | 2 | 3 |
77,292,600 | 2023-10-14 | https://stackoverflow.com/questions/77292600/how-to-automatically-delete-row-after-10-minutes-after-creation | Suppose I create a database and add desired tables, columns with this code: import sqlite3 connection = sqlite3.connect("clients.db", check_same_thread=False) cursor = connection.cursor() cursor.execute("CREATE TABLE IF NOT EXISTS customers(user_id INT, user_name, dt_string)") cursor.execute("""INSERT INTO customers VALUES(?, ?. ?);""", (user_id, user_name, dt_string)) connection.commit() Now I know I can delete this row upon entry with this code cursor.execute("""DELETE FROM customers WHERE user_id=?;""", (user_id,) ) connection.commit() How can I wait for a certain amount of time before removing this entry? I know it's achievable using time.sleep and threading but I wanted to know if there is any other way. | Understand that sqlite is serverless. That is its whole purpose. So there is nothing doing anything fancy in the background. It is not like SQL servers, that may have some procedures and stuff that could implement this kind of temporality (I am not aware of anything like that in MySql neither. But, I am not well versed into it, so that doesn't prove that it doesn't exist. And at least it could exist. Redis, for example have such things). So here, if someone make the row disappear after a while, it is you. Since there is nobody else. Sqlite is just a library to open and read files, not an API to talk with "someone else", that is to a server. So, one method, for example, could be to use a timer from threading import Timer def delUser(userid): cursor = connection.cursor() cursor.execute("""DELETE FROM customers WHERE user_id=?;""", (user_id,) ) connection.commit() #... Timer(10, delUser, (user_id,)).start() But of course, you'll have to deal with multithreading caveout, if the main thread may still be writing during that time. You've already added the check_same_thread=False flag (I don't know why btw. I needed it, but you didn't so far. Probably because you've already tried similar solutions). But you also have to handle some semaphores or anyway to ensure that there is only one thread writing at a time. Quote from documentation When using multiple threads with the same connection writing operations should be serialized by the user to avoid data corruption There is a discussion about this in another question | 3 | 2 |
77,287,622 | 2023-10-13 | https://stackoverflow.com/questions/77287622/modulenotfounderror-no-module-named-kafka-vendor-six-moves-in-dockerized-djan | I am facing an issue with my Dockerized Django application. I am using the following Dockerfile to build my application: FROM python:alpine ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 ENV DJANGO_SUPERUSER_PASSWORD datahub RUN mkdir app WORKDIR /app COPY ./app . RUN mkdir -p volumes RUN apk update RUN apk add --no-cache gcc python3-dev musl-dev mariadb-dev RUN pip3 install --upgrade pip RUN pip3 install -r requirements.txt RUN apk del gcc python3-dev musl-dev CMD python3 manage.py makemigrations --noinput &&\ while ! python3 manage.py migrate --noinput; do sleep 1; done && \ python3 manage.py collectstatic --noinput &&\ python3 manage.py createsuperuser --user datahub --email admin@localhost --noinput;\ python3 manage.py runserver 0.0.0.0:8000 In my requirements.txt file: kafka-python==2.0.2 When I run my application inside the Docker container, I encounter the following error: ModuleNotFoundError: No module named 'kafka.vendor.six.moves' Compelete Error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.12/site-packages/kafka/__init__.py", line 23, in <module> from kafka.consumer import KafkaConsumer File "/usr/local/lib/python3.12/site-packages/kafka/consumer/__init__.py", line 3, in <module> from kafka.consumer.group import KafkaConsumer File "/usr/local/lib/python3.12/site-packages/kafka/consumer/group.py", line 13, in <module> from kafka.consumer.fetcher import Fetcher File "/usr/local/lib/python3.12/site-packages/kafka/consumer/fetcher.py", line 19, in <module> from kafka.record import MemoryRecords File "/usr/local/lib/python3.12/site-packages/kafka/record/__init__.py", line 1, in <module> from kafka.record.memory_records import MemoryRecords, MemoryRecordsBuilder File "/usr/local/lib/python3.12/site-packages/kafka/record/memory_records.py", line 27, in <module> from kafka.record.legacy_records import LegacyRecordBatch, LegacyRecordBatchBuilder File "/usr/local/lib/python3.12/site-packages/kafka/record/legacy_records.py", line 50, in <module> from kafka.codec import ( File "/usr/local/lib/python3.12/site-packages/kafka/codec.py", line 9, in <module> from kafka.vendor.six.moves import range ModuleNotFoundError: No module named 'kafka.vendor.six.moves' I have already tried updating the Kafka package, checking dependencies, and installing the six package manually. However, the issue still persists. Can anyone provide insights on how to resolve this error? Thank you in advance for your help! | This appears to be a Python 3.12 issue, I have the same error but in an entirely different context. Instead of FROM python:alpine I suggest you use FROM python:3.11 3.12 is still very new are there are many projects still trying to work out issues. | 15 | 14 |
77,291,235 | 2023-10-14 | https://stackoverflow.com/questions/77291235/python-object-cannot-access-bases-attribute | An class instance should have access to the classes attributes. __bases__ as per my understand is a class attribute If I have a class called C1, I can call C1.__bases__ but if I define an instance of C1, obj1 = C1(), obj1.__bases__ does not work? I am surprised because using attribute notation, it should start a search in C1's namespace and find __bases__ | Standard attribute lookup searches through an object's __dict__ and those of its class and its class's ancestors. A class's __bases__ attribute isn't defined in its __dict__. __bases__ is managed by a descriptor in type.__dict__, which handles the attribute lookup by retrieving the tuple from an internal field in the class's memory layout. Since __bases__ isn't in C1.__dict__, looking up __bases__ on an instance of C1 doesn't find it. You have to look it up on C1 itself. | 2 | 4 |
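A small demonstration of the lookup rule described in the answer:

class C1:
    pass

obj1 = C1()

print(C1.__bases__)                  # (<class 'object'>,)
print('__bases__' in C1.__dict__)    # False: it is served by a descriptor on type
print(type(obj1).__bases__)          # go through the class to read it for an instance

try:
    obj1.__bases__
except AttributeError as exc:
    print(exc)                       # instance lookup never reaches type's descriptor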
77,290,205 | 2023-10-13 | https://stackoverflow.com/questions/77290205/python-in-memory-gzip-on-existing-file | I have a situation where I have an existing file. I want to compress this file using gzip and get the base64 encoding of this file and use this string for latter operations including sending as part of data in an API call. I have the following code which works fine: import base64 import gzip base64_string_to_use_later = None with open('C:\\test.json', 'rb') as orig_file: with gzip.open('C:\\test.json.gz', 'wb') as zipped_file: zipped_file.writelines(orig_file) with gzip.open('C:\\test.json.gz', 'rb') as zipped_file: base64_string_to_use_later = base64.b64encode(zipped_file.read()) This code will take the existing file, create a compressed version and write this back to the file system. The second block takes the compressed file, opens it and fetches the base 64 encoded version. Is there a way to make this more elegant to compress the file in memory and retrieve the base64 encoded string in memory? | Use gzip.compress() to compress the data in memory instead of writing to a file. import base64 import gzip with open('C:\\test.json', 'rb') as orig_file: base64_string_to_use_later = base64.b64encode(gzip.compress(orig_file.read())) | 2 | 2 |
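A quick round-trip check for the base64 string produced above; the file path is the question's own placeholder:

import base64
import gzip

with open('C:\\test.json', 'rb') as orig_file:
    original = orig_file.read()
    base64_string_to_use_later = base64.b64encode(gzip.compress(original))

# A later consumer reverses the two steps: base64-decode, then gunzip.
restored = gzip.decompress(base64.b64decode(base64_string_to_use_later))
assert restored == original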
77,286,930 | 2023-10-13 | https://stackoverflow.com/questions/77286930/np-select-int64-and-int64-difference | def apply_km_cluster(df): conditions = [ (df['value'].between(0, 20000)), # Between 0 and 20,000 (df['value'].between(20000, 50000)), # Between 20,000 and 50,000 (df['value'].between(50000, 100000)), # Between 50,000 and 100,000 (df['value'].between(100000, 200000)), # Between 100,000 and 200,000 (df['value'] >= 200000) # Greater than or equal to 200,000 ] values = [1, 2, 3, 4, 5] # Assign values for each condition. df['cluster'] = np.select(conditions, values) return df Hi, i have df and value column be Int64 type and if i use np.select like this i get "TypeError: invalid entry 0 in condlist: should be boolean ndarray". After some research, i convert int64 type and error be resolved why Int64 get this error i cant understand, any explian. I tried Int64 and int64 types in np.select and expect it to work for both. | np.select has a test # If cond array is not an ndarray in boolean format or scalar bool, abort. for i, cond in enumerate(condlist): if cond.dtype.type is not np.bool_: raise TypeError( 'invalid entry {} in condlist: should be boolean ndarray'.format(i)) If df is a dataframe with Int64 dtype: In [316]: df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 6 entries, 0 to 5 Data columns (total 1 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 value 6 non-null Int64 dtypes: Int64(1) memory usage: 182.0 bytes In [317]: df['value'].between(0, 20000).values Out[317]: <BooleanArray> [False, False, False, True, True, False] Length: 6, dtype: boolean In [318]: _.dtype Out[318]: BooleanDtype The values is BooleanArray with a pandas extension dtype. At least for now, np.select cannot handle conditions that yield extended dtypes. I believe the pandas extended dtype are still considered experimental. https://pandas.pydata.org/docs/user_guide/boolean.html BooleanArray is currently experimental. Its API or implementation may change without warning. | 2 | 1 |
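One possible workaround, not taken from the accepted answer, is to turn each condition into a plain NumPy boolean array before handing the list to np.select (the sample values below are placeholders):

import numpy as np
import pandas as pd

df = pd.DataFrame({'value': pd.array([100, 123456, 2_000_000_000], dtype='Int64')})

conditions = [
    df['value'].between(0, 20000),
    df['value'].between(20000, 50000),
    df['value'].between(50000, 100000),
    df['value'].between(100000, 200000),
    df['value'] >= 200000,
]
values = [1, 2, 3, 4, 5]

# between() on an Int64 column returns a pandas BooleanArray, which np.select
# rejects; fill missing values and convert to numpy bools first.
conditions = [c.fillna(False).to_numpy(dtype=bool) for c in conditions]
df['cluster'] = np.select(conditions, values)
print(df)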
77,284,818 | 2023-10-13 | https://stackoverflow.com/questions/77284818/should-hypen-minus-u002d-or-hypen-u2010-be-used-for-iso-8601-datetimes | The Python interpreter gives the following when generating an ISO-8601 formatted date/time string: >>> import datetime >>> datetime.datetime.now().isoformat(timespec='seconds') '2023-10-12T22:35:02' Note that the '-' character in the string is a hyphen-minus character. When going backwards to produce the datetime object, we do the following: >>> datetime.datetime.strptime('2023-10-12T22:35:02', '%Y-%m-%dT%H:%M:%S') datetime.datetime(2023, 10, 12, 22, 35, 2) This all checks out. However, sometimes when the ISO-8601 formatted date/time string is provided from an external source, such as a parameter sent over in a GET/POST request, or in a .csv file, the hyphens are sent as the ‐ (U+2010) character, which causes the parsing to break: >>> datetime.datetime.strptime('2023‐10‐12T22:35:02', '%Y-%m-%dT%H:%M:%S') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/_strptime.py", line 568, in _strptime_datetime tt, fraction, gmtoff_fraction = _strptime(data_string, format) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/_strptime.py", line 349, in _strptime raise ValueError("time data %r does not match format %r" % ValueError: time data '2023‐10‐12T22:35:02' does not match format '%Y-%m-%dT%H:%M:%S' What is the correct standard? Is it hyphen-minus - U+002D as given by Python when converting via .isoformat(), or hyphen ‐ U+2010? Would it be best practice to accept both? | The ISO 8601 standard is not publicly available for free. Perhaps someone who has a copy can post a more definitive answer. ISO has published a brief summary of the ISO 8601 standard. The summary consistently uses HYPHEN-MINUS (0x2D). (Thanks to Giacomo Catenazzi for pointing this out in a comment.) RFC 3339 is based on ISO 8601, and it consistently uses the HYPHEN-MINUS character (0x2D), not the Unicode HYPHEN character (0x2010). Note that using HYPHEN-MINUS, which is an ASCII character, avoids issues with differing character sets. Reference: https://datatracker.ietf.org/doc/html/rfc3339 If you create timestamps intended to be consistent with ISO 8601, you should definitely use HYPHEN-MINUS. If you receive timestamps that are supposedly intended to be ISO 8601, but they include HYPHEN (0x2010) characters, you can choose to accept them. Whether you should accept them depends on the requirements of your project. If possible, ask whoever is generating timestamps to use the correct HYPHEN-MINUS characters. Once you start accepting non-standard input, you might have to do an open-ended amount of work. | 2 | 5 |
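When external input may contain typographic hyphens, one hedged approach is to normalise the common dash-like code points to HYPHEN-MINUS before parsing; the set of characters below is an illustrative selection, not an exhaustive list.

```python
import datetime

# map a few dash-like code points (U+2010..U+2014, U+2212) to ASCII hyphen-minus
DASHES = dict.fromkeys(map(ord, '\u2010\u2011\u2012\u2013\u2014\u2212'), '-')

def parse_iso(ts: str) -> datetime.datetime:
    return datetime.datetime.strptime(ts.translate(DASHES), '%Y-%m-%dT%H:%M:%S')

print(parse_iso('2023\u201010\u201012T22:35:02'))  # 2023-10-12 22:35:02
```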
77,279,451 | 2023-10-12 | https://stackoverflow.com/questions/77279451/draw-a-star-with-turtle-in-python | Initially, I needed to draw flowers using stars with a random number of branches. So, I wrote this function: def star(nb_branches:int , size:int) : for i in range(int(nb_branches / 2)) : forward(size * 3) left(180 - (360 / nb_branches)) (the arguments of this function are random and are created in another function, but that is not the problem) However, I realised that when the number of branches was odd, the star wasn't a perfect star. I think the loop and/or the angle are the source of this problem. Since then, I have tried different angles and possibilities, but I haven't yet found the answer. Here is an example of what I got from this function with odd numbers. | I was thinking about the differentiation between odd and even numbers when I realized that my initial code worked well with even numbers, but with half the branches. So, I took your function for the odd-numbered stars and intentionally doubled the number of branches for the even-numbered stars to achieve it. Finally, I added a last forward to close the star. Here is the function: def star(nb_branches:int , size:int) : if(nb_branches % 2 == 0) : nb_branches *= 2 for i in range(nb_branches) : forward(size * 3) left(180 - (360 / nb_branches)) forward(size * 3 / 2) else : for i in range(nb_branches) : forward(size * 3) left(180 - (180 / nb_branches)) forward(size * 3 / 2) There are surely some improvements to be made, but it works! | 2 | 1 |
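A small usage sketch that reproduces the accepted function's logic and draws a few stars with different branch counts; the screen setup, positions, counts and sizes are arbitrary examples.

```python
from turtle import forward, left, penup, pendown, goto, speed, done

def star(nb_branches: int, size: int):
    # same logic as the accepted answer, reproduced so this sketch runs on its own
    if nb_branches % 2 == 0:
        nb_branches *= 2
        angle = 180 - (360 / nb_branches)
    else:
        angle = 180 - (180 / nb_branches)
    for _ in range(nb_branches):
        forward(size * 3)
        left(angle)
    forward(size * 3 / 2)

speed(0)
for offset, branches in zip((-200, -50, 100), (5, 6, 7)):
    penup()
    goto(offset, 0)
    pendown()
    star(branches, 20)
done()
```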
77,287,959 | 2023-10-13 | https://stackoverflow.com/questions/77287959/if-a-row-contains-at-least-two-not-nan-values-split-the-row-into-two-separate-o | I am trying to convert datafarame to desired output format with requirements mentioned below. Provided requirements: Each row can only keep one not Nan value (except Trh1 and Trh2) I want to avoid methods that iterate over each row for performance reasons. I have only included four columns, for example, in a real scenario there are many more columns to share Example: Input: Index Schema Column Trh1 Trh2 Trh3 Trh4 0 schema_1 col_1 NaN 0.01 NaN NaN 1 schema_2 col_2 0.02 0.03 NaN NaN 2 schema_3 col_3 0.03 0.04 0.05 NaN 3 schema_4 col_4 NaN NaN 0.06 0.07 Expected output: Index Schema Column Trh1 Trh2 Trh3 Trh4 0 schema_1 col_1 NaN 0.01 NaN NaN 1 schema_2 col_2 0.02 0.03 NaN NaN 2 schema_3 col_3 0.03 0.04 NaN NaN 3 schema_3 col_3 NaN NaN 0.05 NaN 4 schema_4 col_4 NaN NaN 0.06 NaN 5 schema_4 col_4 NaN NaN NaN 0.07 I explored following approach: Split row into 2 based on condition pandas. However, this approach is only suitable for splitting a row if there are no Nan values in the two columns. | handling a jump cols = ['Index', 'Schema', 'Column', 'Trh1', 'Trh2'] special = ['Trh1', 'Trh2'] others = list(df.columns.difference(cols)) out = (df .assign(init=lambda d: d[others].isna().all(axis=1)) [cols+['init']+others] .set_index(cols).stack().to_frame() .assign(n=lambda d: d.groupby(level=range(df.index.ndim)).cumcount()) .set_index('n', append=True)[0] .unstack(-2) .reset_index() ) out.loc[out['init'].isna(), special] = np.nan out = out.drop(columns=['n', 'init']) out = out.dropna(subset=special+others, how='all') Output: Index Schema Column Trh1 Trh2 Trh3 Trh4 0 0 schema_1 col_1 NaN 0.01 NaN NaN 1 1 schema_2 col_2 0.02 0.03 NaN NaN 2 2 schema_3 col_3 0.03 0.04 NaN NaN 3 2 schema_3 col_3 NaN NaN 0.05 NaN 5 3 schema_4 col_4 NaN NaN 0.06 NaN 6 3 schema_4 col_4 NaN NaN NaN 0.07 original answer You can use reshaping with de-duplication, with stack/unstack: cols = ['Index', 'Schema', 'Column', 'Trh1', 'Trh2'] out = (df # stack and remove NaNs .set_index(cols).stack().to_frame() # deduplicate .assign(n=lambda d: d.groupby(level=range(df.index.ndim)).cumcount()) # reshape to original shape .set_index('n', append=True)[0] .unstack(-2) # cleanup .reset_index() .drop(columns='n') ) # add rows that were dropped because having no value out = pd.concat([df[df[df.columns.difference(cols)].isna().all(axis=1)], out], ignore_index=True).sort_values(by='Index') # optional NB. this requires no duplicates in the initial cols. Or with melt, which might be more memory intensive but also more robust if you have duplicates: cols = ['Index', 'Schema', 'Column', 'Trh1', 'Trh2'] out = (df.melt(cols) # drop NAs, except first row per group .loc[lambda d: d['value'].notna() | ~d[cols].duplicated()] # de-duplicate .assign(n=lambda d: d.groupby(cols, dropna=False).cumcount()) # reshape .pivot(index=cols+['n'], columns='variable', values='value') # cleanup .reset_index().rename_axis(index=None, columns=None) ) Output: Index Schema Column Trh1 Trh2 Trh3 Trh4 0 0 schema_1 col_1 NaN 0.01 NaN NaN 1 1 schema_2 col_2 0.02 0.03 NaN NaN 2 2 schema_3 col_3 0.03 0.04 0.05 NaN 3 3 schema_4 col_4 NaN NaN 0.06 NaN 4 3 schema_4 col_4 NaN NaN NaN 0.07 | 6 | 6 |
77,286,483 | 2023-10-13 | https://stackoverflow.com/questions/77286483/maturin-project-with-python-bindings-behind-feature | I'm trying to write optional Python bindings for a Rust library, using maturin and PyO3. The default layout created by maturin is my-project βββ Cargo.toml βββ python β βββ my_project β βββ __init__.py β βββ bar.py βββ pyproject.toml βββ README.md βββ src βββ lib.rs where all Rust code, including the #[pymodule] attributes go into src/lib.rs: use pyo3::prelude::*; /// Formats the sum of two numbers as string. #[pyfunction] fn sum_as_string(a: usize, b: usize) -> PyResult<String> { Ok((a + b).to_string()) } /// A Python module implemented in Rust. #[pymodule] fn rir_generator(_py: Python, m: &PyModule) -> PyResult<()> { m.add_function(wrap_pyfunction!(sum_as_string, m)?)?; Ok(()) } However, since I want to put all of this code behind a conditional feature, I am trying to put all of that wrapper code into src/python.rs and then import it into src/lib.rs using #[cfg(feature = "python")] pub mod python; But building this fails with the warning Warning: Couldn't find the symbol PyInit_my_project in the native library. Python will fail to import this module. If you're using pyo3, check that #[pymodule] uses my_project as module name If I put the code back into src/lib.rs, the warning disappears. Is there a way to put PyO3 bindings into a submodule that is then conditionally imported using features? | You are almost there. You need to add following section to the Cargo.toml to remove the warning. [features] default = ["python"] python = [] Quoting from the documentation: Features are defined in the [features] table in Cargo.toml. Each feature specifies an array of other features or optional dependencies that it enables. By default, all features are disabled unless explicitly enabled. This will cause the native library to build without PyInit_<module_name> symbol, Hence the warning. This can be changed by specifying the default feature | 4 | 2 |
77,283,252 | 2023-10-12 | https://stackoverflow.com/questions/77283252/how-to-get-a-surface-plot-to-sync-with-the-associated-scatter-points | I have a simple problem, wherein I have a training set of 80 points and a testing set of 20 points, which I am plotting against their corresponding values of bearing stress on the z axis, where the x axis is thickness and y axis is diameter of a component I am optimizing. I get the response as I want it, but it looks like it is not being rendered properly and even with very low alpha values, it seems to be causing the issue. f_bru = [9060.3, 4373.6, 2913.2, 2146.7, 1686.9, 1396.2, 1155.1, 1015.2, 1048.3, 841.1, 3302.2, 1768.5, 1245.4, 853.3, 688.7, 572.4, 578.4, 473.9, 444.6, 384.6, 2342.3, 1183.2, 801.0, 539.7, 460.9, 401.8, 331.1, 284.7, 271.5, 235.9, 1758.1, 822.2, 582.0, 373.5, 329.7, 305.3, 243.0, 218.5, 193.4, 159.9, 1402.8, 666.4, 419.5, 355.7, 269.5, 219.6, 198.7, 177.1, 143.4, 136.7, 1111.4, 530.7, 340.8, 270.6, 225.7, 188.1, 152.4, 123.6, 123.6, 105.2, 948.3, 483.5, 315.2, 239.9, 181.7, 151.9, 131.3, 117.2, 112.2, 97.7, 800.4, 392.9, 277.0, 214.3, 149.8, 145.1, 103.2, 105.7, 87.1, 85.5, 724.8, 351.1, 240.3, 168.6, 145.1, 116.8, 108.0, 83.5, 86.0, 70.5, 650.4, 318.6, 209.8, 162.8, 126.7, 98.9, 95.9, 80.9, 62.6, 65.2] t = [15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 15.0, 35.5625, 35.5625, 35.5625, 35.5625, 35.5625, 35.5625, 35.5625, 35.5625, 35.5625, 35.5625, 56.125, 56.125, 56.125, 56.125, 56.125, 56.125, 56.125, 56.125, 56.125, 56.125, 76.6875, 76.6875, 76.6875, 76.6875, 76.6875, 76.6875, 76.6875, 76.6875, 76.6875, 76.6875, 97.25, 97.25, 97.25, 97.25, 97.25, 97.25, 97.25, 97.25, 97.25, 97.25, 117.75, 117.75, 117.75, 117.75, 117.75, 117.75, 117.75, 117.75, 117.75, 117.75, 138.375, 138.375, 138.375, 138.375, 138.375, 138.375, 138.375, 138.375, 138.375, 138.375, 158.875, 158.875, 158.875, 158.875, 158.875, 158.875, 158.875, 158.875, 158.875, 158.875, 179.5, 179.5, 179.5, 179.5, 179.5, 179.5, 179.5, 179.5, 179.5, 179.5, 200.0, 200.0, 200.0, 200.0, 200.0, 200.0, 200.0, 200.0, 200.0, 200.0] d = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0] d1 = list(d) t1 = list(t) a = [(t1[i], d1[i]) for i in range(len(t))] y = np.array(f_bru) x_train, x_test, y_train, y_split = train_test_split(a,y, test_size=0.2, train_size=0.8) t_train, d_train = zip(*x_train) t_test, d_test = zip(*x_test) t_test = np.array(t_test) d_test = np.array(d_test) t_train = np.array(t_train) d_train = np.array(d_train) t = np.array(t) d = np.array(d) f_bru = np.array(f_bru) def func(td, c1 = 1, c2 = 1, c3 = 1, c4 = 1): t,d = td return c1 + c2*d**-1 + c3*t**-1 + c4*(d**-1*t**-1) popt, pcov = curve_fit(func, (t,d), f_bru) #Generate fitted surface t_fit, d_fit = np.meshgrid(t,d) f_bru_fit = func((t_fit, d_fit), *popt) #Approximated values of f_bru f_bru_predicted = func((t_test, d_test), *popt) #plot the training and validation data points and the fitted surface fig1 = plt.figure(figsize=(16,16)) axes = 
fig1.add_subplot(111, projection = '3d') axes.scatter(t_train, d_train, y_train, c = 'b', marker = 'o', label = 'Training set') axes.scatter(t_test, d_test, y_split, c = 'r', marker = 'o', label = "Testing set") #generate a plot with surface fit axes.plot_surface(t_fit, d_fit, f_bru_fit, color='y', alpha = 0.05) axes.set_xlabel('t') axes.set_ylabel('d') axes.set_zlabel('Bearing stress in MPa') plt.title('3D plot') plt.show() The surface plot that it generates looks like it could be rendered better and its opacity changes with the direction in which it is being moved, at the same time making it difficult to actually see some of the points that are being plotted as I need to view the randomness with which the points are being plotted. I checked by running a few other response surface codes given to me as an example and they appear to be perfect, however not this one. | The artifact you have is due to misordered points in your meshgrid. Once you have regressed your parameters, it does not matter which points (independent variables) you use to plot (train or test, or something else), they will be part of the same surface. So, it is sufficient to plot the surface with an ad hoc grid (rectangular and sorted): t = np.array(t) d = np.array(d) z = np.array(f_bru) tlin = np.linspace(t.min(), t.max(), 50) dlin = np.linspace(d.min(), d.max(), 50) T, D = np.meshgrid(tlin, dlin) Z = func((T, D), *popt) fig = plt.figure() axes = fig.add_subplot(111, projection = '3d') axes.plot_surface(T, D, Z, color='y', alpha = 0.5) axes.scatter(t_train, d_train, y_train, c = 'b', marker = 'o', label = 'Training set') axes.scatter(t_test, d_test, y_split, c = 'r', marker = 'o', label = "Testing set") | 2 | 2 |
77,276,769 | 2023-10-11 | https://stackoverflow.com/questions/77276769/using-the-unittest-module-in-python-to-write-non-unit-tests-is-it-bad-practice | Suppose I design a class, called A, in its own module. Consider an abbreviation of what the code might look like. # file: A.py class A: # define some useful behaviour for A class pass Then I go on to perform some unit tests for this new class. The tests would roughly be carried out as shown below. The reason for defining the create_test_A function outside of the TestA class will become apparent below. # file test_A.py import A import unittest def create_test_A(): # initialize into a useful test state before returning return A.A() class TestA(unittest.TestCase): def test(self): new_A = create_test_A() # write some clever assertions about behaviour of new_A pass Now suppose I build a new class, which is designed to work directly with instantiations of the A class. class A_bundle: def __init__(self): # define some fields that work with objects of type <class A> pass When I go to write unit tests for this new class, my first instinct would be to create an instance of it, and then create a few objects of type A to interact with it. Perhaps it would go like this. #file test_A_bundle.py import test_A # this is the part I feel weird doing import A_bundle import unittest class TestABundle(unittest.TestCase): def test_a_bundle(self): # create a couple of test objects of type A new_A1 = test_A.create_test_A() new_A2 = test_A.create_test_A() # make a new A_bundle to perform tests on new_bundle = A_bundle.A_bundle() # make some assertions about the way new_bundle # should interact with new_A1 and new_A2 Have I now gone outside the scope of unit testing and into the world of integration testing, since I'm not just testing the behaviour of the A_bundle class independently? And if so, it seems that I could still use the unittest module in this way and run some useful tests. Is this considered a bad practice? That is, using the unittest module to write and perform tests that are not, in fact, unit tests? | In my opinion you're still in the realm of unit tests, but I have a suggestion for you. The hint Pass instances of the class A as arguments to the __init__() method of class A_bundle. Following this hint I have modified your file A_bundle.py as follows: class A_bundle: def __init__(self, a1, a2): self.a1 = a1 self.a2 = a2 # define some fields that work with objects of type <class A> def sum(self): return self.a1.m_a() + self.a2.m_a() I have also defined the method sum() in the class A_bundle: this is an example of a field that works with objects of type class A.
Changes to class A In the file A.py I have defined the attribute x=10 and the method m_a() as follows: # file: A.py class A: x = 10 def m_a(self): return self.x Test methods in test_A_bundle.py Finally, I propose a file test_A_bundle.py which contains 2 test methods: your method test_a_bundle(), which creates 2 real instances of class A and an instance of class A_bundle and verifies the result of the method sum(); a second test method called test_a_bundle_with_mock(), which substitutes the real instances of A with 2 Mock objects and sets the return value of the method m_a() on these 2 Mock objects. The code of test_A_bundle.py is as follows: #file test_A_bundle.py import test_A # this is the part I feel weird doing import A_bundle import unittest from unittest import mock import A class TestABundle(unittest.TestCase): #+++++ YOUR TEST METHOD +++++ def test_a_bundle(self): # create a couple of test objects of type A new_A1 = test_A.create_test_A() new_A2 = test_A.create_test_A() # make a new A_bundle to perform tests on (passes 2 instances of # A to the __init__ method of A_bundle) new_bundle = A_bundle.A_bundle(new_A1, new_A2) self.assertEqual(20, new_bundle.sum()) def test_a_bundle_with_mock(self): mock_a1 = mock.Mock() mock_a2 = mock.Mock() mock_a1.m_a.return_value = 20 mock_a2.m_a.return_value = 30 new_bundle = A_bundle.A_bundle(mock_a1, mock_a2) self.assertEqual(50, new_bundle.sum()) if __name__ == '__main__': unittest.main() The second method shows you how to test the interaction between objects of class A and an object of class A_bundle with a unit test. In my opinion this test is not an integration test; it remains a unit test that checks the cooperation between classes. | 4 | 2 |
77,283,648 | 2023-10-12 | https://stackoverflow.com/questions/77283648/vs-code-python-extension-circa-v2018-19-no-longer-includes-support-for-linters | The Python extension for VS Code used to provide builtin support for tools like formatters and linters, including: Linting: Pylint, Flake8, Mypy, Bandit, Pydocstyle, Pycodestyle, Prospector, Pylama Formatting: autopep8, Black, YAPF What's happening to the builtin support for these tools in the Python extension? How can I get integrated support for these tools in VS Code going forward? | Basically, see https://github.com/microsoft/vscode-python/wiki/Migration-to-Python-Tools-Extensions. I'll try to summarize/quote. As announced on April 2022, our team has been working towards breaking the tools support we offer in the Python extension for Visual Studio Code into separate extensions, with the intent of improving performance, stability and no longer requiring the tools to be installed in a Python environment β as they can be shipped alongside an extension. This also allows the extensions to be shipped separately from the Python one once a new version of their respective tool becomes available. Those extensions include: ms-python.pylint, ms-python.flake8, ms-python.mypy-type-checker, ms-python.black-formatter, ms-python.autopep8, ms-python.isort, charliermarsh.ruff, matangover.mypy, eeyore.yapf. Prompts, commands, and context menu items already started getting removed in the 2018.18.0 release- Ex. Remove old linter and formatter prompts and commands #21979 and Remove sort imports from command palette and context menu #22058. Lots of removals are in the iteration plan for the October 2023 release. Official VS Code Python docs for linting and for formatting look to have been updated already- at least partially, which is nice. Settings related to linting and formatting features that are moving to their own extensions are accordingly being removed (here's the full list with migration instructions). That includes python.linting.enabled, python.formatting.provider, and a host of settings related to specific linters and formatters. Not all the linters and formatters that were supported before have extensions yet. If you want to create a linter or formatter tool extension yourself, the Python Tools Extension Template will probably help. Or, you can take a look at the list of alternatives for deprecated settings, which includes trying another extension that supports multiple linters (Ex. charliermarsh.ruff), disabling extension auto-update and sticking with an older version of the Python extension / other extension that supports the tool you want to use, or writing a task to run the tool in the integrated terminal (you can also write custom problem matching). | 10 | 17 |
77,276,144 | 2023-10-11 | https://stackoverflow.com/questions/77276144/extract-consecutive-rows-with-similar-values-in-a-column-more-with-a-specific-pa | I was looking out to extract consecutive rows with specified text repeated continuously for more than 5 times. ex: A B C 10 john 1 12 paul 1 23 kishan 1 12 teja 1 12 zebo 1 324 vauh -1 3434 krish -1 232 poo -1 4535 zoo 1 4343 doo 1 342 foo -1 123 soo 1 121 koo -1 34 loo -1 343454 moo -1 565343 noo -1 2323234 voo -1 3434 coo 1 545 xoo 1 6565 zoo 1 232321 qoo 1 34454 woo 1 546556 eoo 1 65665 roo -1 5343 too -1 3232 yoo 1 1212 uoo 1 23355667 ioo 1 787878 joo -1 I am looking out for the below result where the column value 'c' has consecutive 1's repeated more than 4 times as different groups . Output: A B C group 10 john 1 1 12 paul 1 1 23 kishan 1 1 12 teja 1 1 12 zebo 1 1 3434 coo 1 2 545 xoo 1 2 6565 zoo 1 2 232321 qoo 1 2 34454 woo 1 2 546556 eoo 1 2 | Using masks and factorize: # identify 1s m = df['C'].eq(1) # group consecutive values g = m.ne(m.shift()).cumsum() # identify stretches of 5+ 1s m2 = m & df.groupby(g)['C'].transform('size').ge(5) out = (df.loc[m2] .assign(group=pd.factorize(g[m2])[0]+1) ) Output: A B C group 0 10 john 1 1 1 12 paul 1 1 2 23 kishan 1 1 3 12 teja 1 1 4 12 zebo 1 1 17 3434 coo 1 2 18 545 xoo 1 2 19 6565 zoo 1 2 20 232321 qoo 1 2 21 34454 woo 1 2 22 546556 eoo 1 2 | 2 | 4 |
77,281,875 | 2023-10-12 | https://stackoverflow.com/questions/77281875/add-new-row-with-differ-of-last-and-first-row | I would like to create a summary row to show the difference between the last and first rows. e.g. import pandas as pd data = [ ['A',1,5], ['B',2,4], ['C',3,3], ['D',4,2], ['E',5,1], ['F',6,0] ] df = pd.DataFrame(data,columns=['name','x','y']) print(df) The output dataframe should be: : name x y : 0 A 1 5 : 1 B 2 4 : 2 C 3 3 : 3 D 4 2 : 4 E 5 1 : 5 F 6 0 : 6 diff 5 -5 What's the best way to do that? | Subtract the first row from the last row and concat the diff along the index axis: c = ['x', 'y'] diff = df[c].iloc[-1] - df[c].iloc[0] diff['name'] = 'diff' pd.concat([df, diff.to_frame().T], ignore_index=True) name x y 0 A 1 5 1 B 2 4 2 C 3 3 3 D 4 2 4 E 5 1 5 F 6 0 6 diff 5 -5 | 2 | 2 |
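An alternative sketch that appends the summary row in place with .loc instead of pd.concat; it assumes the default RangeIndex so that len(df) is a fresh label.

```python
import pandas as pd

df = pd.DataFrame(
    [['A', 1, 5], ['B', 2, 4], ['C', 3, 3], ['D', 4, 2], ['E', 5, 1], ['F', 6, 0]],
    columns=['name', 'x', 'y'],
)

cols = ['x', 'y']
# new label len(df) enlarges the frame with the name plus the two differences
df.loc[len(df)] = ['diff', *(df[cols].iloc[-1] - df[cols].iloc[0])]
print(df)
```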
77,277,139 | 2023-10-12 | https://stackoverflow.com/questions/77277139/install-python-3-12-using-mamba-on-mac | I am trying to install python 3.12 on an M1 Apple Mac using mamba as follows ... mamba install -c conda-forge python=3.12.0 It yields the following error message ... Looking for: ['python=3.12.0'] conda-forge/osx-arm64 Using cache conda-forge/noarch Using cache Could not solve for environment specs The following packages are incompatible ββ mamba is installable with the potential options β ββ mamba [1.0.0|1.1.0|...|1.5.1] would require β β ββ python_abi 3.11.* *_cp311, which can be installed; β ββ mamba [0.10.0|0.11.1|...|1.5.1] would require β β ββ python_abi 3.8.* *_cp38, which can be installed; β ββ mamba [0.10.0|0.11.1|...|1.5.1] would require β β ββ python_abi 3.9.* *_cp39, which can be installed; β ββ mamba [0.18.1|0.18.2|...|1.5.1] would require β ββ python_abi 3.10.* *_cp310, which can be installed; ββ python 3.12.0** is not installable because there are no viable options ββ python 3.12.0 would require β ββ python_abi 3.12.* *_cp312, which conflicts with any installable versions previously reported; ββ python 3.12.0rc3 would require ββ _python_rc, which does not exist (perhaps a missing channel). Any pointers on how to do this properly would be much appreciated. | Upgrading base environment Python to a version that hasn't finished migrating1 is a recipe for trouble. Generally, one does not need to upgrade base environment's Python unless the Python version goes EOL. If you would like to explore the new Python 3.12, then create a new environment: mamba create -n py312 -c conda-forge python=3.12 [1]: Migration is the process on Conda Forge by which packages get rebuilt to support new global versions, such as Python 3.12 or R 4.3. Conda Forge provides a dashboard to track the status of migrations, such as Python 3.12. | 2 | 7 |
77,274,572 | 2023-10-11 | https://stackoverflow.com/questions/77274572/multiqc-modulenotfounderror-no-module-named-imp | I am running fastqc and multiqc in an Ubuntu Linux terminal. fastqc runs perfectly without any issues, but multiqc fails to run with the message "ModuleNotFoundError: No module named 'imp'". I have no idea how to fix the missing 'imp' module. I tried to read and apply every solution I found on the internet or Google. I used the command 'conda install multiqc' to install multiqc in the existing conda environment. I tried to install it in a new conda environment. Still, it's showing the same message. The Python version currently running is 3.12.0. Could anyone help to fix the issue? | Python 3.12 is new and the consequences of the changes it introduces (like dropping the imp module) need to propagate to the community. Stay on Python 3.11 for now. | 25 | 48 |
77,280,579 | 2023-10-12 | https://stackoverflow.com/questions/77280579/pandas-vectorised-way-to-forward-fill-a-series-using-a-gradient | Consider a dataframe which has a series price with gaps containing NaN: import numpy as np import pandas as pd df = pd.DataFrame({"price": [1, 2, 3, np.nan, np.nan, np.nan, np.nan, np.nan, 9, 10]}, index=pd.date_range("2023-01-01", periods=10)) price 2023-01-01 1.0 2023-01-02 2.0 2023-01-03 3.0 2023-01-04 NaN 2023-01-05 NaN 2023-01-06 NaN 2023-01-07 NaN 2023-01-08 NaN 2023-01-09 9.0 2023-01-10 10.0 My desired result is to fill this gap using the last known gradient prior to the gap, i.e.: price 2023-01-01 1.0 2023-01-02 2.0 2023-01-03 3.0 2023-01-04 4.0 2023-01-05 5.0 2023-01-06 6.0 2023-01-07 7.0 2023-01-08 8.0 2023-01-09 9.0 2023-01-10 10.0 This is easy to achieve using iteration: gradients = (df["price"] - df["price"].shift(1)).ffill() price_values = df["price"].values for index, val in enumerate(price_values): last_price = price_values[index - 1] gradient = gradients.iloc[index] if pd.isna(val) and not pd.isna(last_price) and not pd.isna(gradient): df["price"].iat[index] = last_price + gradient price 2023-01-01 1.0 2023-01-02 2.0 2023-01-03 3.0 2023-01-04 4.0 2023-01-05 5.0 2023-01-06 6.0 2023-01-07 7.0 2023-01-08 8.0 2023-01-09 9.0 2023-01-10 10.0 This works fine but is slow. This also feels like a common use case and I would be surprised if it was not built in to pandas, but I am unable to find it in documentation. Is there a better, vectorised way to do this? | Assuming you want to use the last diff as a step size to increment the NaNs, you could compute the diff, ffill it, then use that to fillna the original Series, finally compute a groupby.cumsum: m = df['price'].notna() df['gradient_fill'] = (df['price'].fillna(df['price'].diff().ffill()) .groupby(m.cumsum()).cumsum() ) Output: NB. I changed the input for clarity. price gradient_fill 2023-01-01 1.0 1.0 2023-01-02 20.0 20.0 2023-01-03 30.0 30.0 2023-01-04 NaN 40.0 2023-01-05 NaN 50.0 2023-01-06 NaN 60.0 2023-01-07 NaN 70.0 2023-01-08 NaN 80.0 2023-01-09 9.0 9.0 2023-01-10 10.0 10.0 Intermediates: price m diff ffill fillna group gradient_fill 2023-01-01 1.0 True NaN NaN 1.0 1 1.0 2023-01-02 20.0 True 19.0 19.0 20.0 2 20.0 2023-01-03 30.0 True 10.0 10.0 30.0 3 30.0 2023-01-04 NaN False NaN 10.0 10.0 3 40.0 2023-01-05 NaN False NaN 10.0 10.0 3 50.0 2023-01-06 NaN False NaN 10.0 10.0 3 60.0 2023-01-07 NaN False NaN 10.0 10.0 3 70.0 2023-01-08 NaN False NaN 10.0 10.0 3 80.0 2023-01-09 9.0 True NaN 10.0 9.0 4 9.0 2023-01-10 10.0 True 1.0 1.0 10.0 5 10.0 You could also form groups of non-NA/NA and interpolate with a spline for each one: m = df['price'].notna() group = (m & ~m.shift(fill_value=False)).cumsum() df['spline'] = (df.groupby(group)['price'] .transform(lambda s: s.interpolate(method='spline', order=1)) ) Output: price spline 2023-01-01 1.0 1.000000 2023-01-02 20.0 20.000000 2023-01-03 30.0 30.000000 2023-01-04 NaN 42.828422 2023-01-05 NaN 54.949738 2023-01-06 NaN 67.071054 2023-01-07 NaN 79.192371 2023-01-08 NaN 91.313687 2023-01-09 9.0 9.000000 2023-01-10 10.0 10.000000 Graphical comparison of approaches: | 3 | 2 |
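A quick check applying the accepted approach back to the question's original series; it should reproduce the 1 through 10 target column exactly.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"price": [1, 2, 3, np.nan, np.nan, np.nan, np.nan, np.nan, 9, 10]},
    index=pd.date_range("2023-01-01", periods=10),
)

m = df["price"].notna()
filled = (df["price"].fillna(df["price"].diff().ffill())
          .groupby(m.cumsum()).cumsum())

assert np.allclose(filled, np.arange(1, 11))
print(filled)
```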
77,280,167 | 2023-10-12 | https://stackoverflow.com/questions/77280167/replacing-einsum-with-normal-operations | I need to replace einsum operation with standard numpy operations in the following code: import numpy as np a = np.random.rand(128, 16, 8, 32) b = np.random.rand(256, 8, 32) output = np.einsum('aijb,rjb->ira', a, b) How would I do that? | One option would be to align to a similar shape and broadcast multiply, then sum and reorder the axes: output2 = (b[None, None]*a[:,:,None]).sum(axis=(-1, -2)).transpose((1, 2, 0)) # assert np.allclose(output, output2) But this is much less efficient as it's producing a large intermediate (shape (128, 16, 256, 8, 32)): # np.einsum('aijb,rjb->ira', a, b) 68.9 ms Β± 23.1 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) # (b[None, None]*a[:,:,None]).sum(axis=(-1, -2)).transpose((1, 2, 0)) 4.66 s Β± 1.65 s per loop (mean Β± std. dev. of 7 runs, 1 loop each) Shapes: # b[None, None].shape #a i r j b (1, 1, 256, 8, 32) # a[:,:,None].shape # a i r j b (128, 16, 1, 8, 32) | 2 | 5 |
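If einsum must be avoided but the large broadcasted intermediate is a concern, a tensordot-based sketch (an addition here, not part of the accepted answer) contracts the shared j and b axes directly and then reorders the result:

```python
import numpy as np

a = np.random.rand(128, 16, 8, 32)
b = np.random.rand(256, 8, 32)

reference = np.einsum('aijb,rjb->ira', a, b)

# contract a's axes (2, 3) = (j, b) against b's axes (1, 2) = (j, b),
# giving shape (128, 16, 256) ordered (a, i, r), then move a to the back
out = np.tensordot(a, b, axes=([2, 3], [1, 2])).transpose(1, 2, 0)

assert np.allclose(reference, out)
```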
77,272,655 | 2023-10-11 | https://stackoverflow.com/questions/77272655/why-is-a-lambdef-allowed-as-a-type-hint-for-variables | The Python grammar has this rule: assignment: | NAME ':' expression ['=' annotated_rhs ] # other options for rule omitted while the expression rules permit a lambda definition (lambdef). That means this Python syntax is valid: q: lambda p: p * 4 = 1 Is there a use case for permitting a lambda there, or is this just a quirk of a somewhat loose grammar? Similarly, this allows conditional types a: int if b > 3 else str = quux, which seems a bit more sane but still unexpected. | This is specified under PEP 526 (Syntax for Variable Annotations). Python does not care about the annotation as long as "it evaluates without raising". It's the duty of the type-checker to flag it as an invalid annotation. Quoting from the PEP: Other uses of annotations While Python with this PEP will not object to: alice: 'well done' = 'A+' bob: 'what a shame' = 'F-' since it will not care about the type annotation beyond "it evaluates without raising", a type checker that encounters it will flag it, unless disabled with #type: ignore or @no_type_check. However, since Python won't care what the "type" is, if the above snippet is at the global level or in a class, __annotations__ will include {'alice': 'well done', 'bob': 'what a shame'}. These stored annotations might be used for other purposes, but with this PEP we explicitly recommend type hinting as the preferred use of annotations. For example, running your snippet with mypy will produce the following error: file.py:1: error: Invalid type comment or annotation [valid-type] | 2 | 2 |
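A small sketch showing that the interpreter only evaluates and stores the annotation object, exactly as the PEP describes; run it at module level so __annotations__ is populated (the literal values stand in for the question's b and quux).

```python
q: lambda p: p * 4 = 1            # accepted: the lambda is just evaluated and stored
a: int if 2 > 3 else str = "x"    # conditional annotations behave the same way

print(__annotations__)
# {'q': <function <lambda> at 0x...>, 'a': <class 'str'>}
```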
77,275,464 | 2023-10-11 | https://stackoverflow.com/questions/77275464/iterative-search-on-secondary-dataframe-to-return-values-to-primary-dataframe-n | I have two dataframes - df1 & df2. I need to search each value from a specific column in df1 on df2 and return 2 nearest values. data1 = np.array([(1, 150), (2, 250), (3, 350), (4, 590)]) df1 = pd.DataFrame(data1, columns=['n', 'day']) df1 n day 0 1 150 1 2 250 2 3 350 3 4 590 data2 = np.array([(120, 10.5), (180, 10.7), (350, 11.2), (620, 15.5)]) df2 = pd.DataFrame(data2, columns=['day', 'rate']) df2 day rate 0 120 10.5 1 180 10.7 2 350 11.2 3 620 15.5 I managed to find the values using this function: #to find 2 nearest values def find_rates(df, s, x, n=2): diff = (s - x).abs() return df.loc[diff.nsmallest(n).index].sort_index() Exemple: find_rates(df2, df2.day, 150, n=2) day rate 0 120.0 10.5 1 180.0 10.7 find_rates(df2, df2.day, 350, n=2) day rate 1 180.0 10.7 2 350.0 11.2 What I expect from this is to iterative search each "day" value from df1 on df2 and store 2 nearest values on df1 new columns: data3 = np.array([(1, 150, 120, 180, 10.5, 10.7), (2, 250, 180, 350, 10.7, 11.2), (3, 350, 180, 350, 10.7, 11.2), (4, 590, 350, 620, 11.2, 15.5)]) want = pd.DataFrame(data3, columns=['n', 'day', 'day1', 'day2', 'rate1', 'rate2']) want n day day1 day2 rate1 rate2 0 1.0 150.0 120.0 180.0 10.5 10.7 1 2.0 250.0 180.0 350.0 10.7 11.2 2 3.0 350.0 180.0 350.0 10.7 11.2 3 4.0 590.0 350.0 620.0 11.2 15.5 Any clues on how to do that? | Another possible solution : N = 2 day1 = df1["day"].to_numpy()[:, None] day2 = df2["day"].to_numpy(); arr2 = df2.to_numpy() idx = np.sort(np.argsort(np.abs(day2 - day1), axis=1)[:, :N]) days = pd.DataFrame(arr2[:, 0][idx], columns=["day1", "day2"]) rates = pd.DataFrame(arr2[:, 1][idx], columns=["rate1", "rate2"]) out = pd.concat([df1, days, rates], axis=1) Output : print(out) n day day1 day2 rate1 rate2 0 1 150 120.00 180.00 10.50 10.70 1 2 250 180.00 350.00 10.70 11.20 2 3 350 180.00 350.00 10.70 11.20 3 4 590 350.00 620.00 11.20 15.50 | 5 | 6 |
77,275,476 | 2023-10-11 | https://stackoverflow.com/questions/77275476/do-numpy-savez-and-numpy-savez-compressed-use-pickle | I recently encountered numpy.savez and numpy.savez_compressed. Both seem to work well with arrays of differing types, including object arrays. However, numpy.load does not work well with object type arrays. For example: import numpy as np numbers = np.full((10, 1), np.pi) strings = np.full((10, 1), "letters", dtype=object) np.savez("test.npz", numbers=numbers, strings=strings) data = np.load("test.npz") Calling data["strings"] throws the following ValueError: ValueError: Object arrays cannot be loaded when allow_pickle=False However, enabling pickle on numpy.load resolves this issue. Pickling is not discussed within the numpy.savez and numpy.savez_compressed documents...which makes me wonder why pickle is required to load the data. Do numpy.savez and numpy.savez_compressed use pickle automatically behind the scenes? | Since you have dtype=object for that, pickle will be used (serializing Python objects). Serializing with pickle is allowed by default when saving, but deserializing pickles must be explicitly requested when loading. That's because loading pickled data can execute arbitrary code, and for untrusted input this would be a security concern. | 2 | 2 |
77,275,400 | 2023-10-11 | https://stackoverflow.com/questions/77275400/create-a-typing-annotated-instance-with-variadic-args-in-older-python | The question is simply how to reproduce this: import typing a = [1, 2, 3] cls = typing.Annotated[int, *a] or even this: import typing cls1 = typing.Annotated[int, 1, 2, 3] unann_cls = typing.get_args(cls1)[0] metadata = cls1.__metadata__ cls2 = typing.Annotated[unann_cls, *metadata] in Python 3.9-3.10. Annotated[int, *a] is a syntax error in <3.11 so typing_extensions.Annotated shouldn't help here. "Nested Annotated types are flattened" suggests the following code: cls2 = unann_cls for m in metadata: cls2 = typing.Annotated[cls2, m] and it seems to work but surely there should be a more clean way? | In the expression obj[1, 2, 3] 1, 2, 3 is just a tuple literal. It's exactly the same as x = 1, 2, 3 obj[x] We're simply passing the tuple (1, 2, 3) to __getitem__. To get the behavior you want, you can wrap the tuple literal with parentheses so that the syntax for sequence unpacking inside a tuple literal kicks in: >>> typing.Annotated[(int, *a)] typing.Annotated[int, 1, 2, 3] | 2 | 2 |
77,272,373 | 2023-10-11 | https://stackoverflow.com/questions/77272373/repeat-each-item-in-a-list-based-on-condition-in-a-df-column | I have a pandas data frame, and a list. The pandas data frame has 2 columns State and dates. State Dates 0 1/1/2023 1 2/1/2023 2 3/1/2023 3 4/1/2023 0 1/1/2023 1 2/1/2023 0 1/1/2023 1 2/1/2023 2 3/1/2023 0 1/1/2023 1 2/1/2023 2 3/1/2023 3 4/1/2023 4 5/1/2023 5 6/1/2023 ... country = [A, B, C, D,...] Every time State is 0 , it should also add new item from the list corresponding to the iteration, in a new column called 'country'. State Dates country 0 1/1/2023 A 1 2/1/2023 A 2 3/1/2023 A 3 4/1/2023 A 0 1/1/2023 B 1 2/1/2023 B 0 1/1/2023 C 1 2/1/2023 C 2 3/1/2023 C 0 1/1/2023 D 1 2/1/2023 D 2 3/1/2023 D 3 4/1/2023 D 4 5/1/2023 D 5 6/1/2023 D How can get to the result above? | Another solution: country = ["A", "B", "C", "D"] df["Country"] = df.groupby(df["State"].eq(0).cumsum())["State"].transform( lambda _, i=iter(country): next(i, None) ) print(df) Prints: State Dates Country 0 0 1/1/2023 A 1 1 2/1/2023 A 2 2 3/1/2023 A 3 3 4/1/2023 A 4 0 1/1/2023 B 5 1 2/1/2023 B 6 0 1/1/2023 C 7 1 2/1/2023 C 8 2 3/1/2023 C 9 0 1/1/2023 D 10 1 2/1/2023 D 11 2 3/1/2023 D 12 3 4/1/2023 D 13 4 5/1/2023 D 14 5 6/1/2023 D | 3 | 1 |
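An alternative sketch for the same grouping idea: map the cumulative count of zero rows straight onto the country list; it assumes the list has at least as many entries as there are zeros (dates abbreviated here for brevity).

```python
import pandas as pd

df = pd.DataFrame({
    "State": [0, 1, 2, 3, 0, 1, 0, 1, 2, 0, 1, 2, 3, 4, 5],
    "Dates": ["1/1", "2/1", "3/1", "4/1", "1/1", "2/1", "1/1", "2/1",
              "3/1", "1/1", "2/1", "3/1", "4/1", "5/1", "6/1"],
})
country = ["A", "B", "C", "D"]

group = df["State"].eq(0).cumsum() - 1      # 0-based block number per row
df["country"] = group.map(dict(enumerate(country)))
print(df)
```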
77,270,468 | 2023-10-11 | https://stackoverflow.com/questions/77270468/looking-for-a-py2neo-fork | Unfortunately due to some internal politics within neo4j, the py2neo project is now EOL and was deleted from pypi. See this twitter thread. Is there anyone who might have forked the GitHub repository? | You can see updates here https://community.neo4j.com/t/farewell-py2neo-what-happens-now/64419 Copied from Above link, https://github.com/overhangio/py2neo, https://github.com/SarLecobee/py2neo, https://github.com/neo4j-field/py2neo, And One can see here as well, https://www.reddit.com/r/Neo4j/comments/174jl66/py2neo_no_longer_available/ | 2 | 5 |