Dataset columns (with min and max values, or min and max string lengths):
question_id        int64           59.5M to 79.4M
creation_date      string length   8 to 10
link               string length   60 to 163
question           string length   53 to 28.9k
accepted_answer    string length   26 to 29.3k
question_vote      int64           1 to 410
answer_vote        int64           -9 to 482
77,051,620
2023-9-6
https://stackoverflow.com/questions/77051620/nanobind-trampoline-method-name-issue
I have a trampoline class that I am trying to wrap in nanobind like so: class PyT : T { public: NB_TRAMPOLINE(T, 1); void F() override { NB_OVERRIDE(F); } }; // ... nb::class_<T, PyT>(m, "T") .def(nb::init<>()) .def("f", &PyT::F) It compiles fine, but when i go to use it in python, it only works if I call the function as F() despite wrapping it with the method name f(). How do I solve this?
You can solve this by using NB_OVERRIDE_NAME and specifying the name you want it to be overriding, so in this case it would be: void F() override { NB_OVERRIDE_NAME("f", F); }
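For completeness, a hedged sketch of how the binding then looks from the Python side once NB_OVERRIDE_NAME("f", F) is in place (the module name my_ext is hypothetical, not from the question):

```python
# Assumes the extension module from the question is importable as `my_ext`
# and exposes T bound with .def("f", &PyT::F) as shown above.
import my_ext

class PyT(my_ext.T):
    def f(self):                      # override using the Python-side name "f"
        print("overridden in Python")

t = PyT()
t.f()                                 # the trampoline now finds this override
```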
2
2
77,051,212
2023-9-6
https://stackoverflow.com/questions/77051212/python-extract-words-surround-a-word-from-another-column
I have the following dataframe: id text word 1 i am working with john he is my colleague john 2 i watched the bond movie and the bond actor was amazing bond 3 mary is my friend and we work together mary 4 hello world hello python peter I would like to create another column where I retain the 3 words at the left and right of each word in column ["word"]. Sometimes the searched word can appear more than once, or do not have any words at the left or right, sometimes the text does not include the searched word. This is what I want: id text word retain 1 i am working with john he is my colleague john am working with john he is my 2 i watched the bond movie and the bond actor was amazing bond i watched the bond movie and the bond actor was amazing 3 mary is my friend and we work together mary mary is my friend 4 hello world hello python peter Following this question: Extract words surrounding a word and inserting results in a dataframe column I tried to adapt the code: retain = df.text.str.extract("(?P<before>(?:\w+\W+){,3})df['word']\W+(?P<after>(?:\w+\W+){,3})", expand=True) but I only obtained two columns full of NaNs. Any suggestion?
You could try: import re df['retain'] = [m.group() if (m:=re.search(fr'(?:(?:\w+\W+){{,3}}){w}(?:.*{w})?\W+(?:(?:\w+\W+){{,2}}\w+)?', t)) else '' for t, w in zip(df['text'], df['word'])] Output: id text word retain 0 1 i am working with john he is my colleague john am working with john he is my 1 2 i watched the bond movie and the bond actor was amazing bond i watched the bond movie and the bond actor was amazing 2 3 mary is my friend and we work together mary mary is my friend 3 4 hello world hello python peter
2
2
77,026,971
2023-9-2
https://stackoverflow.com/questions/77026971/how-to-wrap-a-vectort-in-nanobind
In boost::python, to wrap a vector<T> you would do something like this: boost::python::class_< std::vector < T > >("T") .def(boost::python::vector_indexing_suite<std::vector< T > >()); How do I accomplish the same thing in nanobind?
Include the header <nanobind/stl/bind_vector.h> and then it's just: nb::bind_vector<std::vector<T> >(m, "TVector");
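A hedged sketch of what the resulting binding gives you on the Python side (the module name my_ext is hypothetical, and this assumes T itself is also bound with a default constructor): the bound TVector behaves like a mutable, list-like sequence.

```python
import my_ext

v = my_ext.TVector()       # wraps std::vector<T>
v.append(my_ext.T())       # list-like API: append, insert, indexing, len(), ...
print(len(v), v[0])
```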
3
2
77,008,824
2023-8-30
https://stackoverflow.com/questions/77008824/how-to-set-cookies-on-jinja2-templateresponse-in-fastapi
I am using Python FastAPI and Jinja2, all of which I am new to. I am able to set cookies alone or return html templates on their own, but I cannot work out how to do both at once. Setting cookies only works as expected, but returning a template seems to overwrite that and just returns html with no cookies. @app.get("/oauth/auth", response_class=HTMLResponse) async def login_page(request: Request, response: Response): client_Code_Req_Schema = ClientCodeReqSchema(client_id=request.query_params.get("client_id"), redirect_uri=request.query_params.get("redirect_uri"), response_type=request.query_params.get("response_type")) if check_client(client_Code_Req_Schema): response.set_cookie(key="redirect_uri", value="test") return templates.TemplateResponse("authorize.html", {"request": request}) else: raise HTTPException(status_code=400, detail="Invalid request") Many thanks for any advice. Happy to provide more info if I missed something.
You should set the cookie on the TemplateResponse that is returned from the endpoint, not on the Response object defined in the endpoint's parameters, which could be used when returning a simple (JSON) message, e.g., return {'msg': 'OK'}. Related answers that you might find helpful can be found here, here and here. Example @app.get("/oauth/auth", response_class=HTMLResponse) async def login_page(request: Request): if ... response = templates.TemplateResponse("authorize.html", {"request": request}) response.set_cookie(key="redirect_uri", value="test") return response else: raise HTTPException(status_code=400, detail="Invalid request")
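A quick way to confirm the cookie is actually attached to the rendered page is FastAPI's TestClient. This is only a sketch: the query parameter values are placeholders, and it assumes the question's app and check_client accept them.

```python
from fastapi.testclient import TestClient

client = TestClient(app)  # `app` is the question's FastAPI instance
resp = client.get(
    "/oauth/auth",
    params={"client_id": "demo", "redirect_uri": "https://example.com/cb",
            "response_type": "code"},
)
assert resp.status_code == 200
assert resp.cookies.get("redirect_uri") == "test"
```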
4
4
77,038,132
2023-9-4
https://stackoverflow.com/questions/77038132/python-pillow-pil-doesnt-recognize-the-attribute-textsize-of-the-object-imag
I already checked python version on my environment (sublime text) and it is 3.11.0, the latest, I checked pillow version which is 10.0.0, the latest, and my code looks similar to other examples online. the code has a part in Italian, but its pretty understandable. the problem is at "disegno.textsize(testo, font=font) after I run the code: line 14, in metti_testo_su_sfondo text_width, text_height = disegno.textsize(testo, font=font) ^^^^^^^^^^^^^^^^ AttributeError: 'ImageDraw' object has no attribute 'textsize' its strange because imagedraw should have the textsize attribute. I'm a novice, I hope I didn't miss anything blatant from PIL import Image, ImageDraw, ImageFont def metti_testo_su_sfondo(testo, sfondo, posizione=(10, 10), colore_testo=(0, 0, 0), dimensione_font=25): # Apri l'immagine dello sfondo immagine_sfondo = Image.open(sfondo) disegno = ImageDraw.Draw(immagine_sfondo) font = ImageFont.truetype("ARIAL.TTF", dimensione_font) text_width, text_height = disegno.textsize(testo, font=font) # Calcola le coordinate del testo centrato x = (immagine_sfondo.width - text_width) // 2 y = (immagine_sfondo.height - text_height) // 2 disegno.text((x, y), testo, fill=colore_testo, font=font) immagine_sfondo.save("spotted.png") testo_da_inserire = "Ciao, mondo!" sfondo_da_utilizzare = "spotted_bianco.jpg" metti_testo_su_sfondo(testo_da_inserire, sfondo_da_utilizzare) The objective is a code that makes me images automatically without needing to edit them manually. I checked build system, python version and pillow version. when I run the code through cmd though it gives me this error: from PIL import Image, ImageDraw, ImageFont ModuleNotFoundError: No module named 'PIL'
textsize was deprecated and has been removed in Pillow 10; the replacement is textlength, which gives you the width of the text. For the height, use the font size multiplied by the number of rows of text you wrote. Example code: w = draw.textlength(text, font=font) h = fontSize * rows
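Applied to the question's function, a sketch of the fix: textlength gives the width as in the answer, and as an alternative to the font-size-times-rows estimate, textbbox (available since Pillow 8.0) gives the height.

```python
from PIL import Image, ImageDraw, ImageFont

def metti_testo_su_sfondo(testo, sfondo, colore_testo=(0, 0, 0), dimensione_font=25):
    immagine_sfondo = Image.open(sfondo)
    disegno = ImageDraw.Draw(immagine_sfondo)
    font = ImageFont.truetype("ARIAL.TTF", dimensione_font)

    # Width via textlength; height via the bounding box of the rendered text.
    text_width = disegno.textlength(testo, font=font)
    left, top, right, bottom = disegno.textbbox((0, 0), testo, font=font)
    text_height = bottom - top

    x = (immagine_sfondo.width - text_width) // 2
    y = (immagine_sfondo.height - text_height) // 2
    disegno.text((x, y), testo, fill=colore_testo, font=font)
    immagine_sfondo.save("spotted.png")
```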
15
24
77,008,165
2023-8-30
https://stackoverflow.com/questions/77008165/official-api-for-typing-generic-orig-bases
I have a generic type for which I'd like to retrieve at runtime the type of its type variable. The following snippet runs well, but it uses Generic.__orig_bases__ which is not an official API (its use is discouraged in PEP 560 that defines it). Is there an official API to retrieve it? And if not, is there another (officially supported) way for me to code get_t in this example? import typing T = typing.TypeVar("T") class MyGeneric(typing.Generic[T]): @classmethod def get_t(cls) -> type[T]: for base in cls.__orig_bases__: if typing.get_origin(base) is MyGeneric: return typing.get_args(base)[0] raise RuntimeError("didn't work :(") class IntImplementation(MyGeneric[int]): pass assert IntImplementation.get_t() is int
I think with Python 3.12 we can safely say that __orig_bases__ is now documented and there's a function that can retrieve it: from types import get_original_bases print(get_original_bases(IntImplementation)) # (__main__.MyGeneric[int],) Reference: https://docs.python.org/3/library/types.html#types.get_original_bases
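Applied to the question's snippet, a sketch (assuming Python 3.12 or newer) of get_t built on that function:

```python
# The question's get_t rewritten on top of types.get_original_bases,
# avoiding direct access to the undocumented __orig_bases__ attribute.
import typing
from types import get_original_bases

T = typing.TypeVar("T")

class MyGeneric(typing.Generic[T]):
    @classmethod
    def get_t(cls) -> type:
        for base in get_original_bases(cls):
            if typing.get_origin(base) is MyGeneric:
                return typing.get_args(base)[0]
        raise RuntimeError("no MyGeneric[...] base found")

class IntImplementation(MyGeneric[int]):
    pass

assert IntImplementation.get_t() is int
```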
3
4
77,025,354
2023-9-1
https://stackoverflow.com/questions/77025354/with-pydantic-v2-and-model-validate-how-can-i-create-a-computed-field-from-an
The context here is that I am using FastAPI and have a response_model defined for each of the paths. The endpoint code returns a SQLAlchemy ORM instance which is then passed, I believe, to model_validate. The response_model is a Pydantic model that filters out many of the ORM model attributes (internal ids, etc.) and performs some transformations and adds some computed_fields. This all works just fine so long as all the attributes you need are part of the Pydantic model. Seems like __pydantic_context__ along with model_config = ConfigDict(from_attributes=True, extra='allow') would be a great way to hold on to some of the extra attributes from the ORM model and use them to compute new fields, however, it seems that when model_validate is used to create the instance that __pydantic_context__ remains empty. Is there some trick to getting this behavior in a clean way? I have a way to make this work, but it involves dynamically adding new attributes to my ORM model, which leaves me with a bad feeling and a big FIXME in my code. Here is some code to illustrate the problem. Note that the second test case fails. from typing import Any from pydantic import BaseModel, ConfigDict, computed_field, model_validator class Foo: def __init__(self): self.original_thing = "foo" class WishThisWorked(BaseModel): """ __pydantic_extra__ does not pick up the additional attributes when model_validate is used to instantiate """ model_config = ConfigDict(from_attributes=True, extra='allow') @computed_field @property def computed_thing(self) -> str: try: return self.__pydantic_extra__["original_thing"] + "_computed" except Exception as e: print(e) return None model = WishThisWorked(original_thing="bar") print(f'WishThisWorked (original_thing="bar") worked: {model.computed_thing == "bar_computed"}') # this is the case that I actually want to work model_orm = WishThisWorked.model_validate(Foo()) print(f'WishThisWorked model_validate(Foo()) worked: {model.computed_thing == "foo_computed"}') class WorksButKludgy(BaseModel): """ I don't like having to modify the instance passed to model_validate """ model_config = ConfigDict(from_attributes=True) computed_thing: str @model_validator(mode="before") @classmethod def _set_fields(cls, values: Any) -> Any: if type(values) is Foo: # This is REALLY gross values.computed_thing = values.original_thing + "_computed" elif type(values) is dict: values["computed_thing"] = values["original_thing"] + "_computed" return values print(f'WorksButKludgy (original_thing="bar") worked: {model.computed_thing == "bar_computed"}') model = WorksButKludgy(original_thing="bar") model_orm = WorksButKludgy.model_validate(Foo()) print(f'WorksButKludgy model_validate(Foo()) worked: {model_orm.computed_thing == "foo_computed"}')
What you could consider is having all the ORM attributes in your schema, but labelling them as excluded. Then you have access to all your ORM attributes when you want to use them in a computed field: from pydantic import BaseModel, Field, computed_field, ConfigDict from sqlalchemy.orm import declarative_base from sqlalchemy import Column, Integer, String SqlBase = declarative_base() class SqlModel(SqlBase): __tablename__ = 'sql_model' ID = Column(Integer, primary_key=True) Name = Column(String) class SqlSchema(BaseModel): model_config = ConfigDict(from_attributes=True) ID: int = Field(exclude=True) Name: str = Field(...) @computed_field @property def id_name(self) -> str: return f'{self.ID}_{self.Name}'
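A usage sketch (the FakeRow stand-in below is mine, not from the answer): because of from_attributes=True, any attribute-bearing object validates, and ID feeds the computed field while staying out of the serialized output.

```python
# Hypothetical stand-in for an ORM row.
class FakeRow:
    ID = 7
    Name = "widget"

schema = SqlSchema.model_validate(FakeRow())
print(schema.id_name)       # 7_widget
print(schema.model_dump())  # {'Name': 'widget', 'id_name': '7_widget'} -- no 'ID'
```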
11
10
77,006,745
2023-8-30
https://stackoverflow.com/questions/77006745/oserror-meta-llama-llama-2-7b-chat-hf-is-not-a-local-folder
I'm trying to replied the code from this Hugging Face blog. At first I installed the transformers and created a token to login to hugging face hub: pip install transformers huggingface-cli login After that it is said to use use_auth_token=True when you have set a token. Unfortunately after running the code I get an error: from transformers import AutoTokenizer import transformers import torch model = "meta-llama/Llama-2-7b-chat-hf" tokenizer = AutoTokenizer.from_pretrained(model, use_auth_token=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?\n', do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") Error: OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. It says that the model cannot be found, but you can find it in the list of models on hugging face here. This is the version of the transformers package I'm using: > pip show transformers Name: transformers Version: 4.33.0.dev0 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors) Author-email: [email protected] License: Apache 2.0 License Location: /Users/quinten/opt/miniconda3/lib/python3.9/site-packages Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm Required-by: spacy-transformers Does anyone know how to fix this error?
def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs): the pretrained_model_name_or_path may be either a model repo id or a local model path; in your case the model repo is "meta-llama/Llama-2-7b-chat-hf", which is right. According to https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/tree/main, this is a gated model: you must agree to the terms and conditions on that page in order to access it.
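As a sketch (not part of the original answer): once access has been granted on the model page, you can confirm that your token can actually see the gated repo before building the pipeline. login and model_info are helpers from the huggingface_hub package; the token value below is a placeholder.

```python
from huggingface_hub import login, model_info

# login() stores the token locally, equivalent to `huggingface-cli login`.
login(token="hf_your_token_here")  # hypothetical token value

# model_info raises an error for gated repos the account cannot access yet.
model_info("meta-llama/Llama-2-7b-chat-hf")
print("Access to the gated repo is working.")
```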
4
6
77,020,278
2023-9-1
https://stackoverflow.com/questions/77020278/how-to-load-a-huggingface-dataset-from-local-path
Take a simple example from this website, https://huggingface.co/datasets/Dahoas/rm-static: if I want to load this dataset online, I just directly use from datasets import load_dataset dataset = load_dataset("Dahoas/rm-static") What if I want to load the dataset from a local path? Firstly, I download the files and keep the same folder structure as the web "Files and versions" tab: -data |-test-00000-of-00001-bf4c733542e35fcb.parquet |-train-00000-of-00001-2a1df75c6bce91ab.parquet -.gitattributes -README.md -dataset_infos.json Then I put them into my folder, but loading shows an error: dataset_path ="/data/coco/dataset/Dahoas/rm-static" tmp_dataset = load_dataset(dataset_path) It shows FileNotFoundError: No (supported) data files or dataset script found in /data/coco/dataset/Dahoas/rm-static.
Save the data with save_to_disk, then load it with load_from_disk. For example: import datasets ds = datasets.load_dataset("Dahoas/rm-static") ds.save_to_disk("Path/to/save") and later, when you want to reuse it, load it back with ds = datasets.load_from_disk("Path/to/save") You can verify this by printing the dataset; you will get the same result in both cases. This is the easier way out, and the file format it is saved in is Arrow. The second method, where you download the parquet files yourself, requires you to declare the data files (and their config) explicitly and then load them.
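For that second method, a sketch of loading the downloaded parquet files directly with the generic parquet builder (the paths below reuse the ones from the question):

```python
from datasets import load_dataset

# Point the generic "parquet" loader at the local files from the question.
data_files = {
    "train": "/data/coco/dataset/Dahoas/rm-static/data/train-00000-of-00001-2a1df75c6bce91ab.parquet",
    "test": "/data/coco/dataset/Dahoas/rm-static/data/test-00000-of-00001-bf4c733542e35fcb.parquet",
}
ds = load_dataset("parquet", data_files=data_files)
print(ds)
```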
6
6
77,017,804
2023-8-31
https://stackoverflow.com/questions/77017804/polars-valueerror-could-not-convert-value-unknown-as-a-literal
I have a line of code in polars that worked prior to my most recent update of the polars package to '0.19.0'. This example ran before: import polars as pl df = pl.DataFrame( { "a": [5, 6, 7, 8, 9], "b": [5, 6, 7, 8, 9], "c": [5, 6, 7, 8, None],}) cols_1 = ["a", "b"] cols_2 = ["c"] df = df.filter(pl.all(pl.col(cols_1 + cols_2).is_not_null())) But now raises the error: ValueError: could not convert value 'Unknown' as a Literal
Disclaimer. I am just summarising the comments of @jqurious and @keraion. As of the polars release 0.19.0, you'll need to use dedicated horizontal aggregation functions, such as all_horizontal or any_horizontal. import polars as pl df = pl.DataFrame({ "a": [5, 6, 7, 8, 9], "b": [5, 6, 7, 8, 9], "c": [5, 6, 7, 8, None] }) df.filter(pl.all_horizontal(pl.col(["a", "b", "c"]).is_not_null())) The horizontal aggregation functions were introduced in release 0.18.7, and the old horizontal behaviour of pl.all was deprecated in 0.18.8 before being removed in 0.19.0.
5
3
77,024,460
2023-9-1
https://stackoverflow.com/questions/77024460/got-python-unexpected-failure-in-python-in-excel
I have Excel Version 2309 (Build 16827.20000) through the Beta Channel of the 365 Insider Program. The Python (Preview) icons do appear in the Formulas ribbon, and I can type Python code via =PY [tab]. However, none of the code works, including the built-in samples. All I get is a #PYTHON! error. When I click to show the error message, it says "The Python interpreter returned the following error: Unexpected failure". I have also tried each of the following snippets, but all return #PYTHON! 1+2 #Comment xl("A1:E5") In addition to Excel, do I need to install something else to get Python to work? Thanks in advance.
I had the same problem and finally solved it. My 365 subscription is managed by my organization and I have two accounts: the first is my email with the "company.com" domain, the second is my email with the "company.onmicrosoft.com" domain. When I log in with the first, the PY function returns the #PYTHON! error, while with the second ("onmicrosoft.com" domain) it works. It seems to be a common issue for those who have two accounts like me.
3
0
77,043,020
2023-9-5
https://stackoverflow.com/questions/77043020/compare-2-pdf-files-langchain
import streamlit as st import os import tempfile from pathlib import Path from pydantic import BaseModel, Field import streamlit as st from langchain.chat_models import ChatOpenAI from langchain.agents import Tool from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import FAISS from langchain.document_loaders import PyPDFLoader from langchain.chains import RetrievalQA from langchain.agents import initialize_agent import openai os.environ["OPENAI_API_KEY"] = "" os.environ['OPENAI_API_TYPE'] = 'azure' os.environ['OPENAI_API_VERSION'] = '2023-03-15-preview' os.environ['OPENAI_API_BASE'] = "https://summarization" #API settings for embedding openai.api_type = "azure" openai.api_base = "https://summarization" openai.api_version = '2023-03-15-' openai.api_key = "" class DocumentInput(BaseModel): question: str = Field() # Create a temporary directory in the script's folder script_dir = Path(__file__).resolve().parent temp_dir = os.path.join(script_dir, "tempDir") def main(): st.title("PDF Document Comparison") # Create a form to upload PDF files and enter a question st.write("Upload the first PDF file:") pdf1 = st.file_uploader("Choose a PDF file", type=["pdf"], key="pdf1") st.write("Upload the second PDF file:") pdf2 = st.file_uploader("Choose a PDF file", type=["pdf"], key="pdf2") question = st.text_input("Enter your question") submit_button = st.button("Compare PDFs") if submit_button: if pdf1 and pdf2: if not os.path.exists(temp_dir): os.makedirs(temp_dir) else: # Clear the previous contents of the "tempDir" folder for file in os.listdir(temp_dir): file_path = os.path.join(temp_dir, file) try: if os.path.isfile(file_path): os.unlink(file_path) except Exception as e: print(f"Error deleting file: {e}") # Save the PDF files to the "tempDir" directory pdf1_path = os.path.join(temp_dir, pdf1.name) with open(pdf1_path, 'wb') as f: f.write(pdf1.getbuffer()) pdf2_path = os.path.join(temp_dir, pdf2.name) with open(pdf2_path, 'wb') as f: f.write(pdf2.getbuffer()) llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo",engine="gpt-35-turbo") tools = [] files = [ { "name": pdf1.name, "path": pdf1_path, }, { "name": pdf2.name, "path": pdf2_path, }, ] for file in files: loader = PyPDFLoader(file["path"]) pages = loader.load_and_split() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(pages) embeddings = OpenAIEmbeddings() retriever = FAISS.from_documents(docs, embeddings).as_retriever() # Wrap retrievers in a Tool tools.append( Tool( args_schema=DocumentInput, name=file["name"], description=f"useful when you want to answer questions about {file['name']}", func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever), ) ) agent = initialize_agent( tools=tools, llm=llm, verbose=True, ) st.write(agent({"input": question})) # Now you have both PDFs saved in the "tempDir" folder # You can perform your PDF comparison here if __name__ == "__main__": main() I get the following error : pydantic.v1.error_wrappers.ValidationError: 1 validation error for Tool args_schema subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel) I am following the example from langchain documentation:https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit
I've run into this issue too. I solved it by using from pydantic.v1 import BaseModel or by installing the last v1 release: pip install pydantic==1.10.12 Pydantic released v2 on June 30, 2023, and the LangChain integration is not compatible with it.
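Applied to the question's code, the change is just the import (a sketch; everything else in the script stays the same):

```python
# Only the import changes; DocumentInput keeps its original definition.
from pydantic.v1 import BaseModel, Field

class DocumentInput(BaseModel):
    question: str = Field()
```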
2
3
77,034,675
2023-9-4
https://stackoverflow.com/questions/77034675/finding-the-minimum-cost-for-m-compatible-elements-for-group-1-and-group-2-al
Here is a problem statement. I recently had an interview question regarding this: given arrays compatible1, compatible2, and cost of length n >= 1. cost[i] represents the cost of element i. the respective compatible[i] == 1 if compatible with that group and 0 otherwise. also given min_compatible, which is the minimum number of elements to be compatible for each. for example, if item is compatible with both, it counts toward the quota for both. return the minimum cost to fulfill the compatibility requirement. My idea was: we either are compatible with one of group1, group2, or both (disjoint). pick the minimum cost that's compatible with both if both still need an element and that one is less than the minimum of the non-intersect ones combined. otherwise, just pick the minimum you need. My solution looked like this (Python): from heapq import heappush, heappop def getMinCost(cost, compatible1, compatible2, min_compatible): # set of indices compatible with each c1 = {i for i,c in enumerate(compatible1) if c == 1} c2 = {i for i,c in enumerate(compatible2) if c == 1} # not enough to fulfill reqs if len(c1) < min_compatible or len(c2) < min_compatible: return -1 # make disjoint -> one, the other, or both c1, c2, c3 = c1 - c2, c2 - c1, c2 & c1 # fill the disjoint heaps heap1 = [] heap2 = [] heap3 = [] for i,c in enumerate(cost): if i in c3: heappush(heap3, c) elif i in c2: heappush(heap2, c) elif i in c1: heappush(heap1, c) # handle edge case (one empty early) heappush(heap1, float('inf')) heappush(heap2, float('inf')) heappush(heap3, float('inf')) # amount fulfilling each a1 = 0 a2 = 0 # minimum cost counter minCost = 0 # both are below threshold, prioritize intersection while a1 < min_compatible and a2 < min_compatible: # if first cond true, advantageous to take the one # that fulfills both if heap1[0] + heap2[0] >= heap3[0]: minCost += heappop(heap3) a1 += 1 a2 += 1 elif heap1[0] <= heap2[0]: minCost += heappop(heap1) a1 += 1 else: minCost += heappop(heap2) a2 += 1 # deal with leftovers while a1 < min_compatible: if heap1[0] <= heap3[0]: minCost += heappop(heap1) else: minCost += heappop(heap3) a2 += 1 a1 += 1 while a2 < min_compatible: if heap2[0] <= heap3[0]: minCost += heappop(heap2) else: minCost += heappop(heap3) a1 += 1 a2 += 1 return minCost I succeeded in 8/15 of the test cases, but not all (and the inputs were hidden). Where did I go wrong? I cannot see it and am looking to (1) be aware of my mistake (2) not repeat it in the future. The errors were all incorrect output.
...pick the minimum cost that's compatible with both if both still need an element... I can't see your test cases so this may or may not be the issue, but you should pick a dual-compatible item even if only one of the minimum requirements still needs to be fulfilled, as long as it's more economical than picking a singular-compatible item. It seems you do attempt to pick a dual-compatible item when only one category is still needed. However, I believe you're calculating the cost of it incorrectly: if heap1[0] <= heap3[0]: When you pick a dual item (let's say for category 1), the true cost is the cost of the item minus the most expensive category 2 item that was chosen (because you would be able to knock that item off if you picked the dual one). You would need to revise your code to keep track of which items were selected (most likely by just adding three more heaps). Edit: Your comment really threw me for a loop, and it took me the better part of an hour but I finally came up with a counter-example. Consider the following items and prices: Group 1: 10, 20 Group 2: 30, 90 Dual: 10, 100 The minimum required for both groups is 3. Your algorithm starts by picking out the first dual for 10, then since the next dual costs 100, it picks the first 1 for 10, then the second 1 for 20. Now that group 1 has been fulfilled, we move on to the third while loop where your algorithm picks the first 2 for 30, which is correct, but then picks the second 2 for 90, and that is incorrect! We should pick the dual for 100, and remove the second 1 for 20, which would result in a total cost of 100 - 20 = 80 instead of 90, but nonetheless fulfilling both groups' minimums. I'd like to note that the Achilles heel of your algorithm is that the first while loop only considers the price of the very next item(s); it can't look further into the heap than the first item. That was eventually the key to finding a counter-example: your algorithm works fine if it only has to make a single decision in the second/third while, but if it has to make two or more then things can go wrong! Adding both group 1 and group 2 items simultaneously during the first while loop should fix the issue. By way of a "proof" that such an algorithm will always produce the minimal solution, consider the following: since all items of the same group provide the same amount of utility (e.g. all group 1 items are identical to each other save for price), the minimum solution will always use only the least expensive items for each group. In other words, if you sort the group 1 items from least to most expensive, there will be some index in the list such that all the items to the left of the index are in the minimum solution, and all the items to the right are not used. Next, notice that since the minimum requirement for group 1 and group 2 is identical, that "dividing index" will be the same for both group 1 and 2. For example, if the minimum solution contains 4 group 1 items (excluding dual items), then it will also contain 4 group 2 items. In this way, you can think of combining the least expensive group 1 and 2 items into a quasi-dual item, with price equal to the sum of their prices. We can do the same with the second least expensive, third, and so on, pairing them together. We can be certain that if one half of the quasi-dual item is in the minimum solution, the other half will be as well, so we can treat the two as a single item.
At this point, the algorithm is quite simple: just repeatedly pick the cheaper of the cheapest remaining dual item and the cheapest remaining quasi-dual item. If one of the groups isn't large enough to supply its minimum (e.g. if the minimum requirement is 6 but there are only 5 group 2 items), just do the algorithm normally until you run out of quasi-dual items, then pick the cheapest regular dual items.
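A hedged Python sketch of that pairing ("quasi-dual") greedy, with function and variable names of my own choosing; it follows the answer's description rather than being tested reference code:

```python
import heapq

def get_min_cost(cost, compatible1, compatible2, min_compatible):
    # Split costs into group-1-only, group-2-only and dual-compatible items.
    only1 = sorted(c for c, a, b in zip(cost, compatible1, compatible2) if a and not b)
    only2 = sorted(c for c, a, b in zip(cost, compatible1, compatible2) if b and not a)
    dual = sorted(c for c, a, b in zip(cost, compatible1, compatible2) if a and b)

    # Feasibility: each pick below covers both groups at once, so we need
    # min_compatible picks in total (equivalent to the question's -1 check).
    if len(dual) + min(len(only1), len(only2)) < min_compatible:
        return -1

    # Pair the i-th cheapest group-1-only and group-2-only items into a
    # "quasi-dual" whose price is the sum of the pair.
    quasi = [a + b for a, b in zip(only1, only2)]

    # Greedily take the cheapest of (next dual, next quasi-dual); heapq.merge
    # walks both sorted lists in order, so the first min_compatible values are
    # the cheapest feasible mix of duals and pairs.
    total = 0
    for taken, price in enumerate(heapq.merge(dual, quasi)):
        if taken == min_compatible:
            break
        total += price
    return total
```

For the counter-example above it returns 150 (the two duals at 10 and 100 plus the cheapest 10 + 30 pair), consistent with the corrected choice the answer describes.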
2
4
77,048,073
2023-9-5
https://stackoverflow.com/questions/77048073/how-to-iterate-through-paginated-results-in-the-python-sdk-for-microsoft-graph
I'm getting to grips with the Python SDK, having never used GraphQL before (but I'm familiar with the basic concept). I'm able to retrieve the odata_next_link value from responses, but I'm not sure how to use it. I note from here that: You should include the entire URL in the @odata.nextLink property in your request for the next page of results. Depending on the API that the query is being performed against, the @odata.nextLink URL value will contain either a $skiptoken or a $skip query parameter. The URL also contains all the other query parameters present in the original request. Do not try to extract the $skiptoken or $skip value and use it in a different request. However, I'm not sure how to include that URL in the next request. Currently, my queries look like response = await graph_client.groups.by_group_id(group_id).transitive_members.get() - I don't see an option there to change the base url. I thought I could do something like: query_params = GroupsRequestBuilder.GroupsRequestBuilderGetQueryParameters( skip_token = parse_qs(urlparse(response.odata_next_link).query)['$skipToken'][0] ) request_configuration = GroupsRequestBuilder.GroupsRequestBuilderGetRequestConfiguration( query_parameters=query_params ) response = await graph_client.[...].get(request_configuration) but that reports GroupsRequestBuilder.GroupsRequestBuilderGetQueryParameters.__init__() got an unexpected keyword argument 'skip_token' (and similarly for if I try naming the parameter skiptoken or skipToken) Frustratingly, there's no code example here - but, based on those examples, I did search the repo for an Iterator - with no results.
We do not have a page iterator for the Python SDK yet, however there is added support for using a raw url for the request. In the case of pagination, a user can use the odata_next_link property value to make a fresh request using with_url and get items in the next page. Here's a working example: from msgraph.generated.users.users_request_builder import UsersRequestBuilder async def get_users(): query_params = UsersRequestBuilder.UsersRequestBuilderGetQueryParameters( select=["id", "displayName", "createdDateTime"], top=5, ) request_config = UsersRequestBuilder.UsersRequestBuilderGetRequestConfiguration( query_parameters=query_params ) users_list = [] users = await client.users.get(request_configuration=request_config) for user in users.value: users_list.append(user) next_link = users.odata_next_link while next_link: users = await client.users.with_url(next_link).get() next_link = users.odata_next_link for user in users.value: users_list.append(user) print(len(users_list))
5
11
77,018,288
2023-8-31
https://stackoverflow.com/questions/77018288/sqlalchemy-how-to-customize-standard-type-like-datetime-param-binding-process
Given the following snippet t = Table( "foo", MetaData(), Column("bar", DateTime()), ) engine.execute(t.insert((datetime(1900, 1, 1),))) engine.execute(t.insert(("1900-01-01",))) the last statement works well for postgresql, while failing for Spark e.g. Cannot safely cast 'bar': string to timestamp [SQL: INSERT INTO TABLE `foo` VALUES (%(bar)s)] [parameters: {'bar': '1900-01-01'}] I can manage it with custom type like class MyDateTime(TypeDecorator): impl = DateTime def process_bind_param(self, value, dialect): if dialect.name == "hive" and isinstance(value, str): return datetime.strptime(value, "%Y-%m-%d") return value t = Table( "foo", MetaData(), Column("bar", MyDateTime()), ) but solution seems hacky while we directly checks for dialect name I need to customize existing type for dialect, not implementing new one, because we have code base with DateTime type Is there any solution for sqlalchemy to customize existing type?
There is no built-in way to customize the parameter binding processing for a standard SQLAlchemy type, such as DateTime(), for a specific dialect. However, there are a few workarounds that you can use. One workaround is to create a custom type that wraps the standard type and overrides the process_bind_param() method. For example, the following code shows a custom type that overrides the process_bind_param() method to convert a string to a datetime object for the Hive dialect: class MyDateTime(TypeDecorator): impl = DateTime def process_bind_param(self, value, dialect): if dialect.name == "hive" and isinstance(value, str): return datetime.strptime(value, "%Y-%m-%d") return value You can then use this custom type in your SQLAlchemy schema instead of the standard DateTime() type. For example: t = Table( "foo", MetaData(), Column("bar", MyDateTime()), ) Another workaround is to use a custom dialect that overrides the bind_param() method for the standard DateTime() type. For example, the following code shows a custom dialect that overrides the bind_param() method to convert a string to a datetime object for the Hive dialect: class MyHiveDialect(postgresql.PGDialect): def bind_param(self, value, type_): if type_ == DateTime and isinstance(value, str): return datetime.strptime(value, "%Y-%m-%d") return super().bind_param(value, type_) You can then use this custom dialect when creating your SQLAlchemy engine. For example: engine = create_engine("postgresql://localhost/foo", dialect=MyHiveDialect()) Which workaround you choose depends on your specific needs. If you only need to customize the parameter binding processing for a specific type for a single dialect, then the first workaround is probably the simplest solution. If you need to customize the parameter binding processing for multiple types or for multiple dialects, then the second workaround may be a better solution. Please note that both of these workarounds are considered to be "hacky" solutions. There is no official way to customize the parameter binding processing for a standard SQLAlchemy type for a specific dialect. If you need to do this, then you should be aware that your code may be brittle and may not work with future versions of SQLAlchemy.
3
3
77,047,476
2023-9-5
https://stackoverflow.com/questions/77047476/pandas-dataframe-write-parquet-and-setting-the-zstd-compression-level
I am writing out a compressed Parquet file from DataFrame as following: result_df.to_parquet("my-data.parquet", compression="zstd") How can I instruct Pandas on the compression level of zstd coding?
Using pyarrow engine you can send compression_level in kwargs to to_parquet result_df.to_parquet(path, engine='pyarrow', compression='zstd', compression_level=1) Test: import pandas as pd import pyarrow.parquet as pq path = 'my-data.parquet' result_df = pd.DataFrame({'a': range(100000)}) for i in range(10): # create the file result_df.to_parquet(path, engine='pyarrow', compression='zstd', compression_level=i) # get compressed file size metadata = pq.ParquetFile(path).metadata.row_group(0).column(0) print(f'compression level {i}: {metadata.total_compressed_size}') Output: compression level 0: 346166 compression level 1: 309501 compression level 2: 309500 compression level 3: 346166 compression level 4: 355549 compression level 5: 381823 compression level 6: 310104 compression level 7: 310088 compression level 8: 308866 compression level 9: 308866
2
1
77,028,925
2023-9-2
https://stackoverflow.com/questions/77028925/docker-compose-fails-error-externally-managed-environment
I am using a windows machine and have installed wsl to be able to use Docker desktop. Of course the build failed and then I observed python3 and pip3 in the dockerfile. So I installed ubuntu and debian via wsl and then tried to run the app (docker-compose up). It still fails and throws the following error: ERROR [test 3/5] RUN pip3 install daff==1.3.46 0.9s ------ > [test 3/5] RUN pip3 install daff==1.3.46: 0.812 error: externally-managed-environment 0.812 0.812 × This environment is externally managed 0.812 ╰─> To install Python packages system-wide, try apt install 0.812 python3-xyz, where xyz is the package you are trying to 0.812 install. 0.812 0.812 If you wish to install a non-Debian-packaged Python package, 0.812 create a virtual environment using python3 -m venv path/to/venv. 0.812 Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make 0.812 sure you have python3-full installed. 0.812 0.812 If you wish to install a non-Debian packaged Python application, 0.812 it may be easiest to use pipx install xyz, which will manage a 0.812 virtual environment for you. Make sure you have pipx installed. 0.812 0.812 See /usr/share/doc/python3.11/README.venv for more information. 0.812 0.812 note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. 0.812 hint: See PEP 668 for the detailed specification. ------ failed to solve: process "/bin/sh -c pip3 install daff==1.3.46" did not complete successfully: exit code: 1 Here's the DockerFile: FROM postgres:12 RUN apt-get update && apt-get install -y \ python3 \ python3-pip RUN pip3 install daff==1.3.46 # Copy project files COPY . /app WORKDIR /app ENV PATH=$PATH:/app/bin Any idea how to resolve the issue? Because I'm using docker, this How do I solve "error: externally-managed-environment" everytime I use pip3? doesn't help me.
This error occurs because of PEP 668. I found the resolution for local machines while going through this answer. Specifically for Docker, using the pip install command with the flag --break-system-packages worked for me. In your Dockerfile, replace the line: RUN pip3 install daff==1.3.46 with RUN pip3 install daff==1.3.46 --break-system-packages
9
19
77,016,132
2023-8-31
https://stackoverflow.com/questions/77016132/import-from-another-file-inside-the-same-module-and-running-from-a-main-py-outsi
Given the following tree: ├── main.py └── my_module ├── a.py ├── b.py └── __init__.py a.py: def f(): print('Hello World.') b.py: from a import f def f2(): f() if __name__ == '__main__': f2() main.py: from my_module.b import f2 if __name__ == '__main__': f2() When I run b.py, "Hello World." is printed successfully. However, when I run main.py, I get the following error: Traceback (most recent call last): File "/home/user/main.py", line 1, in <module> from my_module.b import f2 File "/home/user/my_module/b.py", line 1, in <module> from a import f ModuleNotFoundError: No module named 'a' I would expect the same output executing main.py and b.py
Because you have an __init__.py file in the my_module directory, Python sees the entirety of my_module as a package. That also means that all imports within my_module must be relative imports. This is easily fixed by changing your import in b.py to be a relative import. b.py from .a import f def f2(): f() if __name__ == '__main__': f2() If you want to be able to run b.py as a separate script in addition to importing it as part of the my_module package, you can make the imports conditionally relative or absolute based on whether __name__ == "__main__" if __name__ != '__main__': from .a import f def f2(): f() if __name__ == '__main__': from a import f f2() Possibly a better alternative to checking if __name__ == '__main__': would be to use try/except try: from .a import f except ImportError: # If relative import failed (running as __main__), try absolute from a import f def f2(): f() if __name__ == '__main__': f2() See also the definitive explanation of relative imports on SO
2
3
77,037,560
2023-9-4
https://stackoverflow.com/questions/77037560/how-to-count-work-days-between-date-columns-with-polars
I have the following DataFrame. df = pl.from_repr(""" ┌────────────┬───────────────┐ │ date ┆ maturity_date │ │ --- ┆ --- │ │ date ┆ date │ ╞════════════╪═══════════════╡ │ 2000-01-04 ┆ 2000-01-17 │ │ 2000-01-04 ┆ 2000-02-15 │ │ 2000-01-04 ┆ 2000-03-15 │ │ 2000-01-04 ┆ 2000-04-17 │ │ 2000-01-04 ┆ 2000-05-15 │ └────────────┴───────────────┘ """) I'm trying to get the number of the workdays between date and maturity_date (not counting saturday and sunday) I'd also like to calculate diff days that use a given calendar like a trade date calendar of stock market which is different from a normal calendar. I use this date_ranges to count workdays, but it seems only a little faster than map_elements df.with_columns( pl.date_ranges("date", "maturity_date") .list.eval(pl.element().dt.weekday() <= 5) .list.count_matches(True) .alias("workdays_diff") # pl.concat_list("date", "maturity_date").map_elements(lambda x: get_work_days(x[0], x[1])) # .alias("workdays_diff") ) shape: (5, 3) ┌────────────┬───────────────┬───────────────┐ # ┌────────────────┐ │ date ┆ maturity_date ┆ workdays_diff │ # │ tradedate_diff │ │ --- ┆ --- ┆ --- │ # │ --- │ │ date ┆ date ┆ u32 │ # │ i64 │ ╞════════════╪═══════════════╪═══════════════╡ # ╞════════════════╡ │ 2000-01-04 ┆ 2000-01-17 ┆ 10 │ # │ 10 │ │ 2000-01-04 ┆ 2000-02-15 ┆ 31 │ # │ 21 │ │ 2000-01-04 ┆ 2000-03-15 ┆ 52 │ # │ 42 │ │ 2000-01-04 ┆ 2000-04-17 ┆ 75 │ # │ 65 │ │ 2000-01-04 ┆ 2000-05-15 ┆ 95 │ # │ 80 │ └────────────┴───────────────┴───────────────┘ # └────────────────┘ Is there a faster way? Is there also a way to calculate tradedate_diff?
@Dean MacGregor's excellent answer using pure Polars wasn't quite performant enough for my use case. Using NumPy's built-in busday_count function turned out to be much faster in my case, and the result can easily be converted back to Polars with pl.from_numpy (df below is the question's DataFrame). import numpy as np from datetime import date nyse_holidays = pl.Series([ date(2000,1,17), date(2000,2,21), date(2000,4,21), date(2000,5,29), date(2000,7,4), date(2000,9,4), date(2000,11,23), date(2000,12,25), ]) out = pl.DataFrame( { "business_days": pl.from_numpy(np.busday_count(df["date"], df["maturity_date"]), ["business_days"])["business_days"], "trade_days": pl.from_numpy(np.busday_count(df["date"], df["maturity_date"], holidays=nyse_holidays), ["trade_days"])["trade_days"], } ) shape: (5, 2) ┌───────────────┬────────────┐ │ business_days ┆ trade_days │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═══════════════╪════════════╡ │ 9 ┆ 9 │ │ 30 ┆ 29 │ │ 51 ┆ 49 │ │ 74 ┆ 72 │ │ 94 ┆ 91 │ └───────────────┴────────────┘
3
3
77,045,427
2023-9-5
https://stackoverflow.com/questions/77045427/using-python-to-automate-creation-of-jira-tickets
I have been trying to write a python script to automatically raise Jira tickets, and have been running into some trouble. To be more specific, I tried to use both the issue_create and create_issue methods as outlined in the atlassian-python API reference. In the code provided below, I successfully obtain the correct project id just to verify that my authentication (PAT) works. However the second part fails and a Jira ticket (task) is not created. For reference, here is my code: from atlassian import Jira jira = Jira( url = "https://jira.example.com/", token = "MyPersonalAccessToken" ) proj = jira.get_project('key', expand=None) print(proj.get("id")) # to verify that authentication worked jira = Jira( url = "https://jira.example.com/rest/api/2/issue", token = "MyPersonalAccessToken" ) jira.issue_create( fields={ 'project': { 'key': 'key' }, 'summary': 'Testing JIRA python API', 'description': 'testing', 'issuetype': { "name": "Task" }, } ) Below is the output I get when running the code above: <PROJECT ID> Creating issue "Testing JIRA python API" Traceback (most recent call last): File "/Users/<user>/jira_python/jira.py", line 21, in <module> jira.issue_create( File "/Users/<user>/.local/share/virtualenvs/new_project-RFzzfWjC/lib/python3.10/site-packages/atlassian/jira.py", line 1435, in issue_create return self.post(url, data={"fields": fields}) File "/Users/<user>/.local/share/virtualenvs/new_project-RFzzfWjC/lib/python3.10/site-packages/atlassian/rest_client.py", line 333, in post response = self.request( File "/Users/<user>/.local/share/virtualenvs/new_project-RFzzfWjC/lib/python3.10/site-packages/atlassian/rest_client.py", line 257, in request self.raise_for_status(response) File "/Users/<user>/.local/share/virtualenvs/new_project-RFzzfWjC/lib/python3.10/site-packages/atlassian/rest_client.py", line 490, in raise_for_status raise HTTPError(error_msg, response=response) requests.exceptions.HTTPError I should also note that I have tried using just the base URL (jira.example.com) but I also received the same error. Please note that in the above code, the url and token have been modified for obvious reasons. I've tried using try-except to catch the error but to no avail. How can I find out where I'm going wrong and why my issues are not being created? Please let me know if I should provide further information, and thank you in advance.
I wanted to provide an answer here, for which I want to provide most of the credit to @matszwecja who hinted how to properly raise an exception so I can find out what's going on. After adding an exception handler, I was able to catch the two issues that were preventing my script from working as intended: The url parameter in the issue_create call should be "https://jira.example.com" instead of "https://jira.example.com/rest/api/2/issue", the issue_create function adds the correct endpoint automatically. I was missing a mandatory custom field, which was specific to my Jira project settings. Using an exception handler helped me find that out. See the code that worked below: from atlassian import Jira from requests import HTTPError jira = Jira( url = "https://jira.example.com/", token = "MyPersonalAccessToken" ) try: jira.issue_create( fields={ 'project': { 'key': 'key' }, 'summary': 'Testing JIRA python API', 'description': 'testing', 'issuetype': { "name": "Task" }, } ) except HTTPError as e: print(e.response.text) I hope this helps anyone who may run into similar issues.
3
3
77,047,680
2023-9-5
https://stackoverflow.com/questions/77047680/imshow-with-x-axis-as-log-scale-is-not-equally-spaced
I am using pcolor to generate the following plot (code below). It has a colorbar in log scale and the x-values are in log scale too. The problem is that the rectangles in this plot have different widths (I've put a red grid to show the rectangles better, at Trenton's suggestion). Is there any way in which I can make sure the width of each rectangle is the same? import matplotlib.pyplot as plt from matplotlib.colors import LogNorm import numpy as np # Generate Values x_values = np.geomspace(start=1, stop=1e-2, num=6) y_values = np.arange(start=0, stop=50, step=4, dtype=int) x_grid, y_grid = np.meshgrid(x_values, y_values) z_values = np.random.randn(len(y_values), len(x_values)) fig, ax = plt.subplots() im = ax.pcolor(x_grid, y_grid, z_values, norm=LogNorm(), ec='r', lw=2) ax.set_xscale('log') fig.colorbar(im) plt.show()
You need to specify the bin edges. Probably a better way to do this in numpy, but the idea is simple - transform to log space, get the bin edges by linear interpolation, and then transform back to normal space. import matplotlib.pyplot as plt from matplotlib.colors import LogNorm import numpy as np # Generate Values x_values = np.geomspace(start=1, stop=1e-2, num=6) y_values = np.arange(start=0, stop=50, step=4, dtype=int) # edges? logx = np.log10(x_values) edgex = np.hstack(( logx[:-1] - np.diff(logx) / 2, logx[-1] - np.diff(logx)[-1] / 2, logx[-1] + np.diff(logx)[-1] / 2)) edgex = 10**edgex edgey = np.hstack(( y_values[:-1] - np.diff(y_values) / 2, y_values[-1] - np.diff(y_values)[-1] / 2, y_values[-1] + np.diff(y_values)[-1] / 2)) np.random.seed(12345) z_values = np.random.randn(len(y_values), len(x_values)) fig, axs = plt.subplots(1, 2, layout='constrained') ax = axs[0] im = ax.pcolormesh(x_values, y_values, z_values, norm=LogNorm(), ec='r', lw=2) ax.set_xscale('log') ax.set_title('Linear gaps') ax.plot(x_values, 0 * x_values, 'dm') fig.colorbar(im) ax = axs[1] im = ax.pcolormesh(edgex, edgey, z_values, norm=LogNorm(), ec='r', lw=2) ax.plot(x_values, 0 * x_values, 'dm') ax.set_xscale('log') ax.set_title('Log gaps') fig.colorbar(im)
3
3
77,025,860
2023-9-1
https://stackoverflow.com/questions/77025860/setuptools-replacing-manifest-in-with-pyproject-toml
Does setuptools support replacing the MANIFEST.in file, which specifies files that should only be included in the sdist distribution, with a declaration in pyproject.toml?
Setuptools does not currently support replacing the MANIFEST.in with a declaration in pyproject.toml. There is also currently no specification for how to control the files included in an sdist. There are other build backends which do support this, by using their own tool subsection of pyproject.toml. For example, hatch uses [tool.hatch.build.targets.sdist] to control the sdist content, see File selection for more details about that.
2
3
77,044,403
2023-9-5
https://stackoverflow.com/questions/77044403/type-hint-function-accepting-a-union
Here is my (much simplified) code: def myfun(X:list[str|int]): for x in X: print(x) X = [1,2,3] myfun(X) Pyright complains on the last line because I provide a list of int while the function requires list[int|str]. What is the best way to deal with that case? Is there a way to say pyright to accept "subtypes"? Constraints: I do not want to define X as X:list[str|int]=[1,2,3] because, in my real case, I want X to be understood as list of int. I can call the function with myfun(list[str|int](X)) but it is really annoying.
Program to interfaces, not implementations -- Gang of Four This has been a staple chesnut of OOP lore for a long time (it dates back about 35 years now). But if you've never worked in a statically typed system before, it can be confusing what that means. After all, Python has been object oriented since its inception, and until type hinting sayings like that were never really applicable other than in a very weak sense like using duck typing instead of isinstance. But now we really need to make sure that we study the history of how to build software well in a statically typed system lest we be doomed to repeat it, and your question offers a great lens to examine this (thanks, btw). First lets get a little background out of the way. We can't talk about why your code errors without defining generics. The technical definition is that a generic type is a type that is parameterized by other types. Which is cool and all, but perhaps a more intuitive way is to think of them as types that are incomplete, a sort of fill-in-the-blank type. I realize you may know this already, but not everyone reading this will get it. When we talk about a value like a list [] it makes sense on it's own: it's a container and we can put all sorts of things inside it, the fact it's empty right now doesn't really matter. But types are categories, so it really doesn't make sense to talk about the type (category) of 'List' without saying a List of something specific. Types like List are generic in that you need another type to complete them, you can have a List[int] or a List[Tuple[int, int]] or whatever. Depending on the circumstances generics vary in how they handle subtype relationships in their parameters, this variance is the source of your problem as commenter jonrsharpe already helpfully pointed out. Explaining variance is outside the scope of this answer, but in addition to the official mypy docs you might find this resource helpful. Because you use the union type str|int that means that str and int are both subtypes of that union, and so the variance rules apply when the union is passed as a parameter to a generic type. TL;DR because of how mypy treats the variance of different generic container types you want to use a different one than List, the error message helpfully suggests Sequence instead, as another commenter STerliakov points out you only really need Iterable. But let's get back to my opening quote and talk about why the system is structured that way. Part of the reason I presume is to encourage the best practice I opened with: when designing an API down to and including function signatures, you really want to specify the bare minimum contract that the function needs to do it's work. If the only thing your function needs is an object with a getTimeStamp method that returns an integer, you really don't want to write def my_fun(x: SomeClass): because now my_fun is tightly coupled to that class. If you want to refactor SomeClass and move the getTimeStamp functionality to SomeOtherClass now every call site of my_fun is broken and needs to be changed. Refactoring tools are helpful but not really a solution: what if this is a published library on PyPI? Now what should be an internal implementation detail has leaked and you have a breaking change major version semver bump. Instead, you want to use the type system to say "this function expects that 'x' will be an object with a getTime method that returns an integer", and any object that qualifies is fine (N.B. 
this also greatly simplifies writing unit tests for the code and obviates the need for a lot of unnecessary mocking). So for your code which iterates a linear number of integers or strings you want to use a type that describes "something that contains integers or strings and works with a for loop": from typing import Iterable def my_fun(xs: Iterable[int|str]): for x in xs: print(x) foo = {1, 2, 3} # set bar = [1, 2, 3] # list baz = {"a": 1, "b": 2, "c": 3} # dict my_fun(foo) my_fun(bar) my_fun(baz) You can see now that the concrete data structure doesn't matter, we can pass in anything that conforms, even a custom user-defined object with __iter__ and __next__ methods! The typechecker encourages you to do this by making it more cumbersome if you don't and instead use a concrete data type like List, via the generic variance rules. Well, almost. See I'm assuming you typed it that way because you want to be able to pass in [1, 'a'] and not just [1, 2] or ['a', 'b'], and this cool trick fails in mypy when we use a heterogeneous structure: oops = [1, 'a'] my_fun(oops) You will get an error that you supplied List[object] where it expected Iterable[str|int]. Which unlike the best-practice-encouraging generic variance rules is arguably a flaw in the mypy typechecker, I note that my IDE gets this correct and does not show that error because it's using pyright as the LSP (I see you tagged the question pyright, so you may be ok). So that stinks, and you would need to explicitly hint oops to be oops: Iterable[str|int] = [1, 'a'], which is unfortunate.
3
3
77,046,536
2023-9-5
https://stackoverflow.com/questions/77046536/split-numpy-array-into-segments-where-condition-is-met
I have an array like so: arr = np.array([1, 2, 3, 4, -5, -6, 3, 5, 1, -2, 5, -1, -1, 10]) I want to get rid of all negative values, and split the array at each index where there was a negative value. The result should look like this: split_list = [[1, 2, 3, 4], [3, 5, 1], [5], [10]] I know how to do this using list comprehension, but since the array can get quite large and I have to do the calculation many times, I want to find a solution using numpy. I found this https://www.geeksforgeeks.org/python-split-list-into-lists-by-particular-value/, which I can use to split the array where there are negative values, but I can't simultaneously remove them.
Note that instead of numpy, you could make use of itertools.groupby this way (though, judging on this: NumPy grouping using itertools.groupby performance, pure numpy will likely be more efficient): import numpy as np from itertools import groupby arr = np.array([1, 2, 3, 4, -5, -6, 3, 5, 1, -2, 5, -1, -1, 10]) split_list = [list(group) for key, group in groupby(arr, key=lambda x:x>=0) if key] # [[1, 2, 3, 4], [3, 5, 1], [5], [10]]
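A pure-NumPy alternative (not from the accepted answer, just a sketch of one way to avoid Python-level loops): split where the sign mask changes and keep only the non-negative runs.

```python
import numpy as np

arr = np.array([1, 2, 3, 4, -5, -6, 3, 5, 1, -2, 5, -1, -1, 10])

mask = arr >= 0
# Indices where the mask flips mark the boundaries between runs.
split_points = np.flatnonzero(np.diff(mask.astype(np.int8))) + 1
runs = np.split(arr, split_points)
# Keep only the runs that start with a non-negative value.
split_list = [run.tolist() for run in runs if run.size and run[0] >= 0]
print(split_list)  # [[1, 2, 3, 4], [3, 5, 1], [5], [10]]
```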
2
6
77,045,947
2023-9-5
https://stackoverflow.com/questions/77045947/polars-join-with-or-condition
I have two dataset coming from two very different data sources. Dataframe 1 ┌────────────┬─────────────────┬─────────────────┬─────────────────┬────────────────┬──────────────┐ │ date ┆ label ┆ org_slug ┆ org_id ┆ org_name ┆ issues_count │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ str ┆ str ┆ str ┆ str ┆ i64 │ ╞════════════╪═════════════════╪═════════════════╪═════════════════╪════════════════╪══════════════╡ │ 2023-08-29 ┆ label1 ┆ org-slug-name-1 ┆ org-id-1 ┆ org-name ┆ 1 │ │ └────────────┴─────────────────┴─────────────────┴─────────────────┴────────────────┴──────────────┘ Dataframe 2 ┌────────────┬─────────────┬───────────────────┬───────────────────┬───────────┬─────────┐ │ date ┆ org_name ┆ org_id ┆ org_slug ┆ info_1 ┆ info_2 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ date ┆ str ┆ str ┆ str ┆ i64 ┆ i64 │ ╞════════════╪═════════════╪═══════════════════╪═══════════════════╪═══════════╪═════════╡ │ 2023-08-29 ┆ org-name ┆ org-id-1 ┆ org-slug-name-1 ┆ 10 ┆ 12 └────────────┴─────────────┴───────────────────┴───────────────────┴───────────┴─────────┘ I'm trying to join these two dataframe based on date. Then I would like to still joining via org_id or org_slug or org_name. These generally match, but there are times where org_slug or org_id can be null and I want to rely on the next condition. is it possible to achieve it in polars?
Assuming your data per date is small, you might get away with an inner join to a filter. However, this can quickly explode the result set and may run out of memory. print( df1.join(df2, on=["date"], how="inner") .filter( pl.any_horizontal( pl.col("org_id") == pl.col("org_id_right"), pl.col("org_slug") == pl.col("org_slug_right"), pl.col("org_name") == pl.col("org_name_right"), ) ) .drop("org_name_right", "org_id_right", "org_slug_right") ) If your data is of any significant size, it might be better suited to do multiple left joins, then include a when condition to match to the first matched join. The .with_columns(pl.lit(True).alias("joined")) will be null if the join fails, or true if is matches. # columns we want to get from the join (e.g. info_1, info_2) df2_columns = [c for c in df2.columns if c in set(df2.columns) - set(df1.columns)] # expression to select the results of the first join, # alias the output to the first join's name when_expr = [ ( # First join criteria pl.when(pl.col("joined").is_not_null()) .then(pl.col(c)) # Second join criteria .when(pl.col("joined_joinslug").is_not_null()) .then(pl.col(f"{c}_joinslug").alias(c)) # Third join criteria .when(pl.col("joined_joinname").is_not_null()) .then(pl.col(f"{c}_joinname").alias(c)) ) for c in df2_columns ] # Multiple left joins on the different join keys. print( df1.join( df2.with_columns(pl.lit(True).alias("joined")), on=["date", "org_id"], how="left", suffix="_joinid", ) .join( df2.with_columns(pl.lit(True).alias("joined")), on=["date", "org_slug"], how="left", suffix="_joinslug", ) .join( df2.with_columns(pl.lit(True).alias("joined")), on=["date", "org_name"], how="left", suffix="_joinname", ) .select(pl.col(df1.columns), *when_expr) ) Another alternative is to loop through each key set and collect the inner join matches and try the remaining anti join results, after going through each join, concatenate the results. df2_columns = [c for c in df2.columns if c in set(df2.columns) - set(df1.columns)] out_dfs = [] for joinkey in ["org_id", "org_slug", "org_name"]: out_df = df1.join( df2, on=["date", joinkey], how="inner", ).select(pl.col(df1.columns), pl.col(df2_columns)) out_dfs.append(out_df) df1 = df1.join( df2, on=["date", joinkey], how="anti", ) out_df = pl.concat(out_dfs)
2
2
77,044,491
2023-9-5
https://stackoverflow.com/questions/77044491/python-dash-how-to-use-input-from-a-dynamically-created-dropdown
I got an app that contains a button with callback function to create an unlimited amount of dropdowns which will be automatically id'ed as 'dropdown-i'. The struggle is that I don't seem to be able to actually use the values I input in these Dropdowns in another callback function (that's only trying to print them). How can I retrieve these values or how would you do this? Apparently the part value=dcc.Dropdown(id=dropdown_id).value doesn't work. import dash import dash_bootstrap_components as dbc import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output, State app = dash.Dash(__name__) app.layout = html.Div([ html.Button("Add Dropdown and Number Field", id="add-button"), html.Div(id="input-container", children=[]), html.Div(id="output"), ]) @app.callback( Output("input-container", "children"), Input("add-button", "n_clicks"), State("input-container", "children") ) def add_input(n_clicks, existing_children): if n_clicks is None: return existing_children new_input = dbc.Row([ dbc.Col(dcc.Dropdown( options=[ {'label': 'Option 1', 'value': 'option-1'}, {'label': 'Option 2', 'value': 'option-2'}, # Add more dropdown options as needed ], value='option-1', id=f'dropdown-{n_clicks}' )), dbc.Col(dcc.Input( type='number', value=0, id=f'weight-{n_clicks}' )), ]) existing_children.append(new_input) return existing_children @app.callback( Output("output", "children"), Input("add-button", "n_clicks"), State("input-container", "children") ) def process_dropdowns(n_clicks, dropdown_children): if n_clicks is None: return [] # Create a list to store the selected values from each dropdown selected_values = [] # Iterate through the dropdowns to retrieve their values for i, child in enumerate(dropdown_children): dropdown_id = f'dropdown-{i+1}' selected_value = dcc.Dropdown(id=dropdown_id).value selected_values.append(selected_value) # Process the selected values or use them as needed return f"Selected Dropdown Values: {', '.join(selected_values)}" if __name__ == "__main__": app.run_server(debug=False)
This is the typical use case for leveraging pattern-matching callback selectors. The pattern-matching callback selectors MATCH, ALL, & ALLSMALLER allow you to write callbacks that respond to or update an arbitrary or dynamic number of components. The idea is to use composite id's (type+index) using dictionaries rather than strings, so that we can identify a given component as being the nth component of a specific type. I also updated the first callback so that it makes partial property updates rather than sending back and forth all data across the network, which makes it more efficient. from dash import Dash, dcc, html, Input, Output, ALL, Patch, callback, no_update import dash_bootstrap_components as dbc app = Dash(__name__) app.layout = html.Div([ html.Button("Add Dropdown and Number Field", id="add-button"), html.Div(id="input-container", children=[]), html.Div(id="output"), ]) @app.callback( Output("input-container", "children"), Input("add-button", "n_clicks") ) def add_input(n_clicks): if n_clicks is None: return no_update patched_children = Patch() new_input = dbc.Row([ dbc.Col(dcc.Dropdown( id={'type': 'dropdown', 'index': n_clicks}, options=[ {'label': 'Option 1', 'value': 'option-1'}, {'label': 'Option 2', 'value': 'option-2'}, # Add more dropdown options as needed ], value='option-1', )), dbc.Col(dcc.Input( id={'type': 'weight', 'index': n_clicks}, type='number', value=0, )), ]) patched_children.append(new_input) return patched_children @callback( Output("output", "children"), Input({"type": "dropdown", "index": ALL}, "value"), ) def process_dropdowns(values): return html.Div( ['Selected Dropdown Values:'] + [html.Div(f"Dropdown {i + 1} = {value}") for (i, value) in enumerate(values)] ) if __name__ == "__main__": app.run_server(debug=False)
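If you also want the matching number inputs, the same pattern-matching idea extends naturally. A hedged sketch that would replace the process_dropdowns callback above in the same app (the row formatting is purely illustrative):
@callback(
    Output("output", "children"),
    Input({"type": "dropdown", "index": ALL}, "value"),
    Input({"type": "weight", "index": ALL}, "value"),
)
def process_rows(dropdown_values, weight_values):
    # ALL collects the values of every matching component, in creation order.
    rows = [
        html.Div(f"Row {i + 1}: {d} (weight = {w})")
        for i, (d, w) in enumerate(zip(dropdown_values, weight_values))
    ]
    return html.Div(["Selected rows:"] + rows)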
2
3
77,045,763
2023-9-5
https://stackoverflow.com/questions/77045763/filter-a-pandas-dataframe-if-cell-values-exist-in-another-dataframe-but-with-a-r
I have two pandas DataFrame, with the same structure. Dataframe B is a subset of DataFrame A. I want to filter DataFrame B, only if the Price value appears in DataFrame A, or it is within 1% of a value in DataFrame A. For example, even if the exact price is not present, I want to keep the value if there is a row in A with a price +/- 1%. DataFrame A: index price 0 20.23 1 10.34 2 5.28 3 12.25 4 12.32 DataFrame B: index price 0 0.23 1 10.34 2 5.26 Desired Result of filtering: index price 0 10.34 1 5.26 import pandas as pd dfA = pd.DataFrame({'index': [0, 1, 2, 3, 4], 'price': [20.23, 10.34, 5.28, 12.25, 12.32]}) dfB = pd.DataFrame({'index': [0, 1, 2], 'price': [0.23, 10.34, 5.26]}) The following will only give me the exact matches. dfB[dfB['price'].isin(dfA['price'])]
Here is another take, using .merge_asof (note: dfA and dfB needs to be sorted by price): dfA = dfA.sort_values("price") dfB = dfB.sort_values("price") x = pd.merge_asof(dfB, dfA, on="price", direction="nearest") Creates x: index_x price index_y 0 0 0.23 2 1 2 5.26 2 2 1 10.34 1 Then calculating the percentage difference: x["diff"] = np.abs(x["price"] - dfA.loc[x["index_y"], "price"].values) print(x[x["diff"] <= (x["price"] / 100)]) Prints: index_x price index_y diff 1 2 5.26 2 0.02 2 1 10.34 1 0.00 The complete code: dfA = dfA.sort_values("price") dfB = dfB.sort_values("price") x = pd.merge_asof(dfB, dfA, on="price", direction="nearest") x["diff"] = np.abs(x["price"] - dfA.loc[x["index_y"], "price"].values) x = x[x["diff"] <= (x["price"] / 100)] x = x[["index_x", "price"]].set_index("index_x").rename_axis(index=None).sort_index() print(x) Prints: price 1 10.34 2 5.26
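For small frames, a plain NumPy broadcast is another way to sketch the same 1% tolerance check (quadratic in memory, so only suitable for modest sizes):
import numpy as np

b = dfB["price"].to_numpy()[:, None]              # shape (len(B), 1)
a = dfA["price"].to_numpy()[None, :]              # shape (1, len(A))
mask = (np.abs(b - a) <= b * 0.01).any(axis=1)    # within 1% of any price in A
print(dfB[mask])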
3
4
77,045,573
2023-9-5
https://stackoverflow.com/questions/77045573/converting-list-function-to-generator-using-yield
I am trying to convert a for loop into an iterator using yield, but I have failed in my attempts. I don't understand exactly why the yield isn't giving me the expected output. Does anyone know what the problem is? Attempt at using yield: def iteration_order(dimensions): for dim in range(dimensions): order = [0, dim, 0] yield order for j in range(6): sgn = 1 if j % 2 == 0 else -1 idx = j % 3 for _ in range(dim if j < 5 else dim-1): order[idx] += sgn yield order print(list(iteration_order(2)) >>> [[0, 0, 0], [0, 1, 1], [0, 1, 1], [0, 1, 1], [0, 1, 1], [0, 1, 1], [0, 1, 1]] The code as it should work (when not using yield): def iteration_order(dimensions): full_order = [] for dim in range(dimensions): order = [[0, dim, 0]] for j in range(6): sgn = 1 if j % 2 == 0 else -1 idx = j % 3 for _ in range(dim if j < 5 else dim-1): nxt = list(order[-1]) nxt[idx] += sgn order.append(nxt) full_order.extend(order) return full_order print(iteration_order(2)) >>> [[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1], [0, 1, 1]]
The problem you see is because you are using that same list for everything. You might yield it with different value, but the generator still has reference to that list and it modifies it, giving you weird output. If you add .copy() for each yield, they will be unique lists and will behave as expected: def iteration_order(dimensions): for dim in range(dimensions): order = [0, dim, 0] yield order.copy() for j in range(6): sgn = 1 if j % 2 == 0 else -1 idx = j % 3 for _ in range(dim if j < 5 else dim-1): order[idx] += sgn yield order.copy() print(list(iteration_order(2))) # [[0, 0, 0], [0, 1, 0], [1, 1, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1], [0, 1, 1]]
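An alternative to copying is to yield immutable tuples, so later mutation of order cannot affect values already handed out; same logic as above, just a different return type:
def iteration_order(dimensions):
    for dim in range(dimensions):
        order = [0, dim, 0]
        yield tuple(order)
        for j in range(6):
            sgn = 1 if j % 2 == 0 else -1
            idx = j % 3
            for _ in range(dim if j < 5 else dim - 1):
                order[idx] += sgn
                yield tuple(order)

print(list(iteration_order(2)))
# [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1), (0, 1, 1)]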
2
2
77,043,946
2023-9-5
https://stackoverflow.com/questions/77043946/how-to-calculate-binomial-probabilities-in-python-with-very-small-numbers
I'm trying to calculate the likelihood of successfully guessing a password in Python. For example, if we take a 10 character lowercase password (26**10 possible passwords) and we can make 1 billion guesses a second, we can calculate the probability of successfully guessing the password in one hour with: from scipy.stats import binom (1 - binom.pmf(k=0, n=1000000000*3600, p=1 / 26**10)) Which gives us the the result of 0.02525515384826793 (i.e, 2.5%). However, this doesn't work as we increase the length of the password (or more strictly, as p gets closer to zero). For instance if we increase the length of the password to 12 characters: from scipy.stats import binom (1 - binom.pmf(k=0, n=1000000000*3600, p=1 / 26**12)) Then the returned value is just 0.0, which is incorrect - presumably due to the float rounding down to zero at some point. How can I calculate this to get a more precise answer? Edit: this was with SciPy 1.10.1. Testing on the latest version (1.11.2 at time of writing) gave the correct value - so it looks like this was an issue with the older version.
I think this is about the limitations of floating-point arithmetic, which I also often run into (here it is loss of precision because p is extremely small, rather than overflow). You can use the mpmath library, which provides arbitrary-precision arithmetic:
from mpmath import mp

mp.dps = 50  # Set the desired precision (adjust as needed)

total_passwords = 26**12
guesses_per_second = 1000000000
seconds_in_an_hour = 3600

p = mp.mpf(1) / mp.mpf(total_passwords)
n = mp.mpf(guesses_per_second) * mp.mpf(seconds_in_an_hour)
probability = 1 - mp.power(1 - p, n)
print(float(probability))
Extra note: floating-point numbers lose precision even when you are working with seemingly harmless numbers like 0.2 or 76.5. You should be extra careful when performing a large number of floating-point operations over the same data, as errors can build up rather quickly. I hope this helps.
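If you would rather avoid an extra dependency, the same quantity can also be computed stably with the standard library, since 1 - (1 - p)**n equals -expm1(n * log1p(-p)); this is a sketch, not part of the original answer:
import math

p = 1 / 26**12
n = 1_000_000_000 * 3600
probability = -math.expm1(n * math.log1p(-p))
print(probability)  # ~3.77e-05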
2
4
77,043,496
2023-9-5
https://stackoverflow.com/questions/77043496/is-there-a-split-function-that-returns-in-this-case
Of course we have: "1,2,3".split(",") # ["1", "2", "3"] "1".split(",") # ["1"] but also this, which is sometimes problematic in some situations (*): "".split(",") # [""] Is there a built-in way (maybe with a parameter, or a specific function) to have: "".split(",", allow_empty=True) # [] ? This would (sometimes) make sense: the input is empty, so the output list should be empty. (*) Example situation: for element in s.split(","): print(f"we have the element {element}") # works for s = "1, 2, 3" # works for s = "1" # doesn't work for s = "" => the loop should be empty
Not to my knowledge but you could use a regex with re.findall: import re re.findall(r'[^,]+', '1,2,3') # ['1', '2', '3'] re.findall(r'[^,]+', '1') # ['1'] re.findall(r'[^,]+', '') # [] Note that this would also discard empty strings within the input string: re.findall(r'[^,]+', '1,,3') # ['1', '3']
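If you'd rather not reach for re at all, a tiny wrapper around str.split that special-cases the empty string does the same job (the helper name is hypothetical):
def split_non_empty(s: str, sep: str = ","):
    return s.split(sep) if s else []

split_non_empty("1,2,3")  # ['1', '2', '3']
split_non_empty("1")      # ['1']
split_non_empty("")       # []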
3
5
77,041,507
2023-9-5
https://stackoverflow.com/questions/77041507/input-field-validation-with-two-possible-names
I'm migrating an API that was originally written in Python. The Python API allows you to send your request as camelCase or snake_case like this: This is allowed { "someInput": "nice" } This is allowed { "some_input": "nice" } This is done using a great Python library: Pydantic from pydantic import BaseModel def to_camel(string): words = string.split('_') return words[0] + ''.join(word.capitalize() for word in words[1:]) class InputModel(BaseModel): some_input: str class Config: alias_generator = to_camel allow_population_by_field_name = True This allows to create an InputModel by alias (someInput) or by field name (some_input). I want to do the same or equivalent in Go. I'm using Gin: func Routes(router *gin.Engine) { v1 := router.Group("/v1") { v1.POST("/shipments", controllers.ShipmentCreator) } } func ShipmentCreator(ctx *gin.Context) { ResponseController := new(internal.OutputModel) var body domain.ShipmentsInputModel if err := ctx.BindJSON(&body); err != nil { fmt.Println(err) } validate := validator.New() err := validate.Struct(body) if err != nil { var validationErrors validator.ValidationErrors errors.As(err, &validationErrors) for _, validationError := range validationErrors { ResponseController.AddError(internal.ErrorsModel{ Parameter: validationError.Field(), Message: validationError.Error(), }) } ctx.JSON(http.StatusBadRequest, ResponseController) return } My input struct looks something like this: type ShipmentsInputModel struct { LotId string `json:"lotId" tag:"lot_id" alias:"lot_id" validate:"required"` } This does not work when my input is: { "lot_id": "someLotId" } It returns: "message": "Key: 'ShipmentsInputModel.LotId' Error:Field validation for 'LotId' failed on the 'required' tag", How can I accept both camelCase and snake_case?
In Go, you cannot provide two JSON tags together for a single struct field. JSON tags are specified using a single string, and they are used to define how a field should be marshaled (serialized to JSON) or unmarshaled (deserialized from JSON). You cannot specify multiple tags for a single field within a struct directly. If you need to support both CamelCase and SnakeCase in your JSON output, you would typically have to choose one consistent naming convention for your struct fields and then use appropriate JSON tags for all the fields. There's a neat way of doing this. I hope this helps. package main import ( "encoding/json" "fmt" ) type ShipmentsInputModel struct { LotID } type LotID struct { LotId string `json:"lotId,omitempty"` Lot_ID string `json:"lot_id,omitempty"` } func (s *ShipmentsInputModel) setLodID(id string) { s.LotId = id s.Lot_ID = id } func main() { shipment := ShipmentsInputModel{} shipment.setLodID("someLotID") // Convert struct to JSON jsonData, err := json.Marshal(shipment) if err != nil { fmt.Println("Error:", err) return } // prints: {"lotId":"someLotID","lot_id":"someLotID"} fmt.Println(string(jsonData)) }
2
2
77,041,235
2023-9-5
https://stackoverflow.com/questions/77041235/functional-difference-between-coverage-run-m-pytest-and-pytest-cov
The Coverage tool supports generating code coverage data from Pytest tests with coverage run -m pytest .... However, there is also the Pytest-Cov plugin, which invokes Coverage and generates coverage data by adding the --cov= option to Pytest. However the Pytest-Cov documentation doesn't seem to explain anywhere how this differs from just using coverage run. Is there any practical difference, or is it just a matter of the options/configuration choices that are available?
The differences/advantages are listed in the pytest-cov GitHub README and docs. Compared to just using coverage run, this plugin does some extras: Subprocess support: you can fork or run stuff in a subprocess and it will get covered without any fuss. Xdist support: you can use all of pytest-xdist's features and still get coverage. Consistent pytest behavior: if you run coverage run -m pytest you will have a slightly different sys.path (the CWD will be in it, unlike when running pytest directly). All features offered by the coverage package should work, either through pytest-cov's command-line options or through coverage's config file.
2
3
77,041,143
2023-9-5
https://stackoverflow.com/questions/77041143/best-approach-to-split-explode-and-tidy-data-using-regex-in-python-and-pandas
I have a dataset that requires splitting, exploding, and tidying using regular expressions (regex) in Python and Pandas. The dataset consists of logs from multiple users sent through an old machine to a server API. Each cell may contain multiple messages, and my goal is to transform the data into a structured and tidy format. Here's a sample of the dataset: data = { 'text_plain': [ "5:57:11 H2045: Estatus OK updated (19:48:34) Mark P.: No Defects found on parcel", "11:04:38 Jill : Travel on Time 2:11:30 YHXO: Wheater conds OK", "6:53:07 Jill : Stable Props 22:38:15 Carl V : OK Status 6:15:34 IUJO-65: Door Open", "18:44:38 Van UHJ: Spider on site Alert", "/10:37:43/ H2046 : Movie Purchase Rejected", "10:33:46 Mark P.: Alert by Super User overwrite 21:55:22 Jill: push sent 6:54:41 YHXO: pull received", "23:20:04 Jill : Windows Closed", "5:16:58 Carl V: Is someone on the Front door?", "(17:11:49) IUJO-66 : No Response on Deck (5:10:43) Van UHJ : Flights delay 8:34:08 H2047: Buy Concert Tickets 9:05:42 Mark P.: Gen. OK", "7:00:15 Jill : Status not ok updated 21:22:34 YHXO: Front desk clear" ], 'id': [1,2,3,4,5,6,7,8,9,10] } As you can see the data needs to be split based on the psuedo timestamp pattern that we see on the column "text_Plain" (will always be in 24-hour format thankfully) followed by a username and a message. The timestamp can be enclosed in parentheses or sometimes in slashes or sometimes not even enclosed at all as you can see, BUT the recurrent pattern that will be of used always to split the data will be r'[(/]?(\d{1,2}:\d{2}:\d{2})[/)]?' My desired output will be: id timestamp user msg 1 5:57:11 H2045 Estatus OK updated 1 19:48:34 Mark P. No Defects found on parcel 2 11:04:38 Jill Travel on Time 2 2:11:30 YHXO Weather conds OK 3 6:53:07 Jill Stable Props 3 22:38:15 Carl V OK Status 3 6:15:34 IUJO-65 Door Open 4 18:44:38 Van UHJ Spider on site Alert 5 10:37:43 H2046 Movie Purchase Rejected 6 10:33:46 Mark P. Alert by Super User overwrite 6 21:55:22 Jill Push sent 6 6:54:41 YHXO Pull received 7 23:20:04 Jill Windows Closed 8 5:16:58 Carl V Is someone on the Front door? 9 17:11:49 IUJO-66 No Response on Deck 9 5:10:43 Van UHJ Flights delay 9 8:34:08 H2047 Buy Concert Tickets 9 9:05:42 Mark P. Gen. OK 10 7:00:15 Jill Status not ok updated 10 21:22:34 YHXO Front desk clear My thought of process tells me that I should first split the col "text_Plain" by my regex pattern (without REMOVING IT) so that I can get something like this: ['5:57:11 H2045: Estatus OK updated', '(19:48:34) Mark P.: No Defects found on parcel'] '[11:04:38 Jill : Travel on Time', '2:11:30 YHXO: Wheater conds OK', '6:53:07 Jill : Stable Props'] ['22:38:15 Carl V : OK Status',' 6:15:34 IUJO-65: Door Open', '18:44:38 Van UHJ: Spider on site Alert', '/10:37:43/ H2046 : Movie Purchase Rejected'] To then extract my timestamp, user, msg and get the id (afterwards exploding my plain_text col) But, here is the catch: I have to deal with 5-20 thousands data logs a day and I need to write code that can actually prove to be faster than or more efficient than the one I'm using because I'm new to progrmaming and I have read that regex can be very slow and not efficient and I am keen to learn a better or faster way. 
This is my workaround at the moment but is taking WAY TOO long for my PC to handle: pattern = '(?<=[(/])(\d{1,2}:\d{2}:\d{2})(?=[/)])' df['new_text'] = df['text_plain'].str.split(pattern) df['new_text'] = df['new_text'].apply(lambda x: [s for s in x if s.strip() != '']) df = df.explode('new_text') but it is failing as it is not keeping the pattern after the splitting and missing data along the way, also it uses .apply and I have read that we shouldn't use it with regex as it will make the operation way slower if you can clarify that would be awesome
Try (regex demo): out = ( df["text_plain"] .str.extractall( r"(?P<timestamp>\d+:\d+:\d+)\S*\s+(?P<user>[^:]+?)\s*:\s*(?P<msg>.*?)(?=\s*\S*\d+:\d+:\d+|\Z)" ) .droplevel(level=1) ) print(pd.merge(df[["id"]], out, left_index=True, right_index=True)) Prints: id timestamp user msg 0 1 5:57:11 H2045 Estatus OK updated 0 1 19:48:34 Mark P. No Defects found on parcel 1 2 11:04:38 Jill Travel on Time 1 2 2:11:30 YHXO Wheater conds OK 2 3 6:53:07 Jill Stable Props 2 3 22:38:15 Carl V OK Status 2 3 6:15:34 IUJO-65 Door Open 3 4 18:44:38 Van UHJ Spider on site Alert 4 5 10:37:43 H2046 Movie Purchase Rejected 5 6 10:33:46 Mark P. Alert by Super User overwrite 5 6 21:55:22 Jill push sent 5 6 6:54:41 YHXO pull received 6 7 23:20:04 Jill Windows Closed 7 8 5:16:58 Carl V Is someone on the Front door? 8 9 17:11:49 IUJO-66 No Response on Deck 8 9 5:10:43 Van UHJ Flights delay 8 9 8:34:08 H2047 Buy Concert Tickets 8 9 9:05:42 Mark P. Gen. OK 9 10 7:00:15 Jill Status not ok updated 9 10 21:22:34 YHXO Front desk clear
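An equivalent variant that skips the merge step is to keep id as the index while calling extractall (same regex as above):
pattern = r"(?P<timestamp>\d+:\d+:\d+)\S*\s+(?P<user>[^:]+?)\s*:\s*(?P<msg>.*?)(?=\s*\S*\d+:\d+:\d+|\Z)"
out = (
    df.set_index("id")["text_plain"]
    .str.extractall(pattern)
    .droplevel("match")   # drop the per-row match counter added by extractall
    .reset_index()
)
print(out)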
2
3
77,037,891
2023-9-4
https://stackoverflow.com/questions/77037891/typeerror-issubclass-arg-1-must-be-a-class
I am trying to use the Spacy library again for my NPL task. somedays back it was working totally fine with spacy.load("en_core_web_sm"). I thought of using medium instead of small, but now nothing is working. I play around with spacy versions from the latest (3.6.1) to older (3.2.0). I am able to install SpaCy 3.2.0 and can download (!python -m spacy download en_core_web_md) medium pipeline too, however getting the "TypeError" error while loading. ( I tried for both small & medium, and got the same error). I will appreciate your help. Thanks in Advance!! !pip install spacy==3.2.0 !python -m spacy download en_core_web_md import spacy from pydantic import BaseModel import spacy nlp = spacy.load("en_core_web_md") Error:- TypeError Traceback (most recent call last) <ipython-input-2-8bdf9ac04f54> in <module> 1 # Initialize SpaCy model ----> 2 nlp = spacy.load("en_core_web_md") ~\Anaconda3\lib\site-packages\spacy\__init__.py in load(name, vocab, disable, exclude, config) 49 RETURNS (Language): The loaded nlp object. 50 """ ---> 51 return util.load_model( 52 name, vocab=vocab, disable=disable, exclude=exclude, config=config 53 ) ~\Anaconda3\lib\site-packages\spacy\util.py in load_model(name, vocab, disable, exclude, config) 418 return get_lang_class(name.replace("blank:", ""))() 419 if is_package(name): # installed as package --> 420 return load_model_from_package(name, **kwargs) # type: ignore[arg-type] 421 if Path(name).exists(): # path to model data directory 422 return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type] ~\Anaconda3\lib\site-packages\spacy\util.py in load_model_from_package(name, vocab, disable, exclude, config) 451 """ 452 cls = importlib.import_module(name) --> 453 return cls.load(vocab=vocab, disable=disable, exclude=exclude, config=config) # type: ignore[attr-defined] 454 455 ~\Anaconda3\lib\site-packages\en_core_web_md\__init__.py in load(**overrides) 8 9 def load(**overrides): ---> 10 return load_model_from_init_py(__file__, **overrides) ~\Anaconda3\lib\site-packages\spacy\util.py in load_model_from_init_py(init_file, vocab, disable, exclude, config) 613 if not model_path.exists(): 614 raise IOError(Errors.E052.format(path=data_path)) --> 615 return load_model_from_path( 616 data_path, 617 vocab=vocab, ~\Anaconda3\lib\site-packages\spacy\util.py in load_model_from_path(model_path, meta, vocab, disable, exclude, config) 486 overrides = dict_to_dot(config) 487 config = load_config(config_path, overrides=overrides) --> 488 nlp = load_model_from_config(config, vocab=vocab, disable=disable, exclude=exclude) 489 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides) 490 ~\Anaconda3\lib\site-packages\spacy\util.py in load_model_from_config(config, vocab, disable, exclude, auto_fill, validate) 523 # registry, including custom subclasses provided via entry points 524 lang_cls = get_lang_class(nlp_config["lang"]) --> 525 nlp = lang_cls.from_config( 526 config, 527 vocab=vocab, ~\Anaconda3\lib\site-packages\spacy\language.py in from_config(cls, config, vocab, disable, exclude, meta, auto_fill, validate) 1783 # The pipe name (key in the config) here is the unique name 1784 # of the component, not necessarily the factory -> 1785 nlp.add_pipe( 1786 factory, 1787 name=pipe_name, ~\Anaconda3\lib\site-packages\spacy\language.py in add_pipe(self, factory_name, name, before, after, first, last, source, config, raw_config, validate) 786 lang_code=self.lang, 787 ) --> 788 pipe_component = self.create_pipe( 789 factory_name, 790 name=name, 
~\Anaconda3\lib\site-packages\spacy\language.py in create_pipe(self, factory_name, name, config, raw_config, validate) 669 # We're calling the internal _fill here to avoid constructing the 670 # registered functions twice --> 671 resolved = registry.resolve(cfg, validate=validate) 672 filled = registry.fill({"cfg": cfg[factory_name]}, validate=validate)["cfg"] 673 filled = Config(filled) ~\Anaconda3\lib\site-packages\thinc\config.py in resolve(cls, config, schema, overrides, validate) 744 validate: bool = True, 745 ) -> Dict[str, Any]: --> 746 resolved, _ = cls._make( 747 config, schema=schema, overrides=overrides, validate=validate, resolve=True 748 ) ~\Anaconda3\lib\site-packages\thinc\config.py in _make(cls, config, schema, overrides, resolve, validate) 793 if not is_interpolated: 794 config = Config(orig_config).interpolate() --> 795 filled, _, resolved = cls._fill( 796 config, schema, validate=validate, overrides=overrides, resolve=resolve 797 ) ~\Anaconda3\lib\site-packages\thinc\config.py in _fill(cls, config, schema, validate, resolve, parent, overrides) 848 schema.__fields__[key] = copy_model_field(field, Any) 849 promise_schema = cls.make_promise_schema(value, resolve=resolve) --> 850 filled[key], validation[v_key], final[key] = cls._fill( 851 value, 852 promise_schema, ~\Anaconda3\lib\site-packages\thinc\config.py in _fill(cls, config, schema, validate, resolve, parent, overrides) 847 field = schema.__fields__[key] 848 schema.__fields__[key] = copy_model_field(field, Any) --> 849 promise_schema = cls.make_promise_schema(value, resolve=resolve) 850 filled[key], validation[v_key], final[key] = cls._fill( 851 value, ~\Anaconda3\lib\site-packages\thinc\config.py in make_promise_schema(cls, obj, resolve) 1055 sig_args[name] = (annotation, default) 1056 sig_args["__config__"] = _PromiseSchemaConfig -> 1057 return create_model("ArgModel", **sig_args) 1058 1059 ~\Anaconda3\lib\site-packages\pydantic\main.cp38-win_amd64.pyd in pydantic.main.create_model() ~\Anaconda3\lib\site-packages\pydantic\main.cp38-win_amd64.pyd in pydantic.main.ModelMetaclass.__new__() ~\Anaconda3\lib\site-packages\pydantic\fields.cp38-win_amd64.pyd in pydantic.fields.ModelField.infer() ~\Anaconda3\lib\site-packages\pydantic\fields.cp38-win_amd64.pyd in pydantic.fields.ModelField.__init__() ~\Anaconda3\lib\site-packages\pydantic\fields.cp38-win_amd64.pyd in pydantic.fields.ModelField.prepare() ~\Anaconda3\lib\site-packages\pydantic\fields.cp38-win_amd64.pyd in pydantic.fields.ModelField._type_analysis() ~\Anaconda3\lib\typing.py in __subclasscheck__(self, cls) 772 if self._special: 773 if not isinstance(cls, _GenericAlias): --> 774 return issubclass(cls, self.__origin__) 775 if cls._special: 776 return issubclass(cls.__origin__, self.__origin__) **TypeError: issubclass() arg 1 must be a class**
You can pin the following in your requirements: typing-inspect==0.8.0 and typing_extensions==4.5.0. There appears to be a bug in Pydantic v1.10.7 and earlier, related to the recent release of typing_extensions v4.6.0, that causes errors for import spacy and any other spacy commands on Python 3.8 and 3.9. For spaCy v3.2 and v3.3, patch releases have been published with fixes for the typing_extensions requirement, so upgrade to spaCy v3.2.6+ or v3.3.3: python -m pip install 'spacy~=3.2.6' Pydantic should be v1.10.8 or higher. The complete discussion and solution is in this issue: https://github.com/explosion/spaCy/issues/12659
7
10
77,033,528
2023-9-3
https://stackoverflow.com/questions/77033528/why-cant-we-use-a-fill-value-when-reshaping-a-dataframe-array
I have this dataframe : df = pd.DataFrame([list("ABCDEFGHIJ")]) ​ 0 1 2 3 4 5 6 7 8 9 0 A B C D E F G H I J I got an error when trying to reshape the dataframe/array : np.reshape(df, (-1, 3)) ValueError: cannot reshape array of size 10 into shape (3) I'm expecting this array (or a dataframe with the same shape) : array([['A', 'B', 'C'], ['D', 'E', 'F'], ['G', 'H', 'I'], ['J', nan, nan]], dtype=object) Why NumPy can't guess the expected shape by completing the missing values with nan?
Another possible solution, based on numpy.pad, which inserts the needed np.nan into the array: n = 3 s = df.shape[1] m = s // n + 1*(s % n != 0) np.pad(df.values.flatten(), (0, m*n - s), mode='constant', constant_values=np.nan).reshape(m,n) Explanation: s // n is the integer division of the length of the original array and the number of columns (after reshape). s % n gives the remainder of the division s // n. For instance, if s = 9, then s // n is equal to 3 and s % n equal to 0. However, if s = 10, s // n is equal to 3 and s % n equal to 1. Thus, s % n != 0 is True. Consequently, 1*(s % n != 0) is equal to 1, which makes m = 3 + 1. (0, m*n - s) means the number of np.nan to insert at the left of the array (0, in this case) and the number of np.nan to insert at the right of the array (m*n - s). Output: array([['A', 'B', 'C'], ['D', 'E', 'F'], ['G', 'H', 'I'], ['J', nan, nan]], dtype=object)
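A shorter variant of the same padding idea, assuming the values can live in an object array as in the example:
import numpy as np

n = 3
a = df.to_numpy().ravel()
pad = (-a.size) % n                      # how many cells are missing in the last row
out = np.append(a, [np.nan] * pad).reshape(-1, n)
print(out)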
3
2
77,033,740
2023-9-3
https://stackoverflow.com/questions/77033740/whats-the-difference-between-astypecategory-categorical-and-factorize
Python offers many ways to convert a variable to categorical. import numpy as np import pandas as pd mydata = pd.Series(['A', 'B', 'B', 'C']) mydata 0 A 1 B 2 B 3 C dtype: object pd.factorize(mydata) # Outputs a tuple with an array and an index (array([0, 1, 1, 2], dtype=int64), Index(['A', 'B', 'C'], dtype='object')) pd.Categorical(mydata) # Outputs a pandas. How do you extract categories? ['A', 'B', 'B', 'C'] Categories (3, object): ['A', 'B', 'C'] mydata.astype('category') # Outputs a series. How do you extract categories? 0 A 1 B 2 B 3 C dtype: category Categories (3, object): ['A', 'B', 'C'] There are more alternatives such as sklearn LabelEncoder and keras to_categorical, which are used with pipelines because they preserve the conversions and allow to reapply them to new data. Could you please explain the differences, advantages or limitations of these methods, and they different applications: factorize(), categorical() and astype("category")? For example if I want to use them for a decision tree model or I just want to calculate a frequency table. Which one is easier to use or more versatile? For example if afterwards I want to modify a category or add a new one or change a value or merge two categories.
The methods factorize(), Categorical(), and astype("category") in pandas offer different ways to convert variables to categorical data: pandas.factorize docs: this method is useful for obtaining a numeric representation of an array when all that matters is identifying distinct values. It returns a tuple containing an array of encoded values and an index mapping the original categories to the encoded values. The advantage of factorize() is that it provides a simple and efficient way to convert categorical data into numerical form. However, it does not directly provide the categorical data itself, only the encoded values. pandas.Categorical docs: this method creates a pandas Categorical object (a variable that can take on a limited, and usually fixed, number of possible values, i.e. categories), which explicitly represents a categorical variable. It returns a Categorical that contains the original data along with the categories. The advantage of Categorical() is that it explicitly represents the categories and allows you to extract them using the .categories attribute. It also provides additional functionality like ordering and renaming categories. However, it does not hand you the data in numerical form the way factorize() does (although the codes are available via .codes). astype("category"): this method converts a pandas Series or column to the categorical data type. It returns a Series with the data type set to 'category'. The advantage of astype("category") is that it allows you to efficiently store and manipulate categorical data within a pandas DataFrame. It also exposes the same basic functionality through the .cat accessor, e.g. accessing the categories using the .cat.categories attribute. However, it does not give you the encoded values directly like factorize(). For a decision tree model, you can use any of these methods depending on your specific requirements. If you only need the encoded values, factorize() would be the simplest option. If you need to explicitly represent the categories and perform operations on them, Categorical() would be more suitable. If you want to efficiently store and manipulate categorical data within a DataFrame, astype("category") would be a good choice. In terms of modifying categories or adding new ones, both Categorical() and astype("category") allow you to do so. With Categorical(), you can use the .rename_categories() method to change category names and the .add_categories() method to add new categories. With astype("category"), the same methods are available through the .cat accessor (.cat.rename_categories(), .cat.add_categories()). Overall, the choice of method depends on your specific needs and the functionality you require for your model.
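To answer the "how do you extract categories?" parts of the question concretely, here is a small comparison sketch:
import pandas as pd

mydata = pd.Series(['A', 'B', 'B', 'C'])

codes, uniques = pd.factorize(mydata)     # codes -> array([0, 1, 1, 2]), uniques -> Index(['A', 'B', 'C'])

cat = pd.Categorical(mydata)
cat.categories                            # Index(['A', 'B', 'C'], dtype='object')
cat.codes                                 # array([0, 1, 1, 2], dtype=int8)

s = mydata.astype('category')
s.cat.categories                          # Index(['A', 'B', 'C'], dtype='object')
s.cat.rename_categories({'B': 'Beta'})    # change a category label
s.cat.add_categories(['D'])               # add a new (unused) category
s.value_counts()                          # frequency table, includes all categories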
5
2
77,032,448
2023-9-3
https://stackoverflow.com/questions/77032448/pytorch-lightning-how-to-save-a-checkpoint-for-every-validation-epoch
It is not clear from the docs how to save a checkpoint for every validation epoch, have it actually kept on disk and not instantly deleted, and do so without monitoring any metric. How can I do it?
A minimal setup (imports added for completeness; checkpoints_path is your own output directory):
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

max_epochs = 100
val_every_n_epochs = 1

checkpoint_callback = ModelCheckpoint(
    # dirpath=checkpoints_path,  # <--- specify this on the trainer itself for version control
    filename="fa_classifier_{epoch:02d}",
    every_n_epochs=val_every_n_epochs,
    save_top_k=-1,  # <--- this is important! -1 keeps every checkpoint instead of only the "best" ones
)

trainer = Trainer(
    callbacks=[checkpoint_callback],
    default_root_dir=checkpoints_path,
    check_val_every_n_epoch=val_every_n_epochs,
    max_epochs=max_epochs,
    gpus=1,
)
This will not delete saved checkpoints.
4
6
77,031,847
2023-9-3
https://stackoverflow.com/questions/77031847/sending-request-from-react-to-fastapi-causes-origin-http-localhost5173-has-b
I was making a POST request which sends an image file from my UI to the backend server. Backend Server: from fastapi import FastAPI, UploadFile from fastapi.middleware.cors import CORSMiddleware import numpy as np import tensorflow as tf from io import BytesIO from PIL import Image app = FastAPI() origins = ["http://localhost:5173/"] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) model = tf.keras.models.load_model("../models/1") class_names = ["Normal", "Tuberculosis"] def convert_image_to_np_array(bytes): # Read bytes as PIL image and convert it to np array np_array = np.array(Image.open(BytesIO(bytes))) return np_array @app.post("/predict") async def predict_tuberculosis(file: UploadFile): bytes = await file.read() np_array = convert_image_to_np_array(bytes) batch_image = np.expand_dims(np_array, axis=0) resized_batch_image = np.resize(batch_image, (1,256,256,3)) prediction = model.predict(resized_batch_image) label = class_names[np.argmax(prediction)] accuracy = np.max(prediction) print(accuracy) return label Frontend: import React from "react" import { useState } from "react" import axios from "axios" import "./App.css" function App() { function callPrediction(file) { const formData = new FormData() formData.append("file", file) axios.post("http://localhost:8000/predict", formData) .then(res => setResult(res.data)) .catch(err => console.log(err)) } ... Note: The file input from callPrediction has the format like this When I call the function callPrediction to send an image file as an input to the function predict_tuberculosis , I got this error popping up I did go searching for this but all the solutions I got was just adding CORS to my backend (which I have already done). I really appreciate any helps! Thank you
You should remove the trailing slash from the origin URL in: http://localhost:5173/ ^ The origin URL should instead look like this http://localhost:5173. Hence: origins = ["http://localhost:5173"] app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) For further details, please have a look at the following answers here, here, as well as here, here and here.
4
4
77,031,177
2023-9-3
https://stackoverflow.com/questions/77031177/how-to-reverse-a-singly-linked-list-using-recursive-approach-in-python
I wonder how to reverse a singly linked list using recursive approach in Python. This is LeetCode problem 206. Reverse Linked List: Given the head of a singly linked list, reverse the list, and return the reversed list. Example 1: Input: head = [1,2,3,4,5] Output: [5,4,3,2,1] Most people use an iterative approach, which is quite classic and easily comprehensible. However, I have come across a recursive method which bewilders me: class Solution: def reverseList(self, head: Optional[ListNode]) -> Optional[ListNode]: # recursive call if not head: return None newHead = head if head.next: newHead = self.reverseList(head.next) head.next.next = head head.next = None print(newHead) return newHead For instance, if I input [1,2,3], I would manually work out the process as follows, which seems intuitively right but I cannot convince myself that it will return the right thing through recursive calls of reverselist. Can you kindly tell me why? Thanks very much!
Here is how you could visualise the reversal of linked list 1→2→3: We start with: head │ ┌─┴─────────┐ ┌───────────┐ ┌───────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: ────────┤ next: None│ └───────────┘ └───────────┘ └───────────┘ The first recursive call receives head.next, and will have its own local names. To distinguish those local names from the names we alread have in the original execution of reverseList, I'll add accents to the names, where the number of acccents denotes the recursion depth. So at the first recursive call we can visualise the situation like this: head head' │ │ ┌─┴─────────┐ ┌─┴─────────┐ ┌───────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: ────────┤ next: None│ └───────────┘ └───────────┘ └───────────┘ And at the second level recursive call: head head' head'' newHead'' │ │ │ │ ┌─┴─────────┐ ┌─┴─────────┐ ┌─┴──────┴──┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: ────────┤ next: None│ └───────────┘ └───────────┘ └───────────┘ This second-level recursive execution will not enter the if block and just set the next attribute of head'' to None (which it already was) and return newHead''. Back to the first recursive execution that returned reference is assigned to newHead', so we have this: head head' newHead' │ │ │ ┌─┴─────────┐ ┌─┴─────────┐ ┌─┴─────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: ────────┤ next: None│ └───────────┘ └───────────┘ └───────────┘ In this recursive execution it is the first time we execute head.next.next = head, acting on head'. So we get: head head' newHead' │ │ │ ┌─┴─────────┐ ┌─┴─────────┐ ┌─┴─────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: ────────┤ next: ┐ │ │ │ │ ├───────────┘ │ └───────────┘ └───────────┘ └───────────┘ And then head' gets None for its next attribute: head head' newHead' │ │ │ ┌─┴─────────┐ ┌─┴─────────┐ ┌─┴─────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: None│ │ next: ┐ │ │ │ │ ├───────────┘ │ └───────────┘ └───────────┘ └───────────┘ Finally this first recursive execution returns newHead'. The original execution receives this reference and assigns it to its own newHead: head newHead │ │ ┌─┴─────────┐ ┌───────────┐ ┌─┴─────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: None│ │ next: ┐ │ │ │ │ ├───────────┘ │ └───────────┘ └───────────┘ └───────────┘ Again, head.next.next = head is executed: head newHead │ │ ┌─┴─────────┐ ┌───────────┐ ┌─┴─────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: ────────┤ next: ┐ │ │ next: ┐ │ │ ├───────────┘ ├───────────┘ │ └───────────┘ └───────────┘ └───────────┘ And finally, head.next is set to None and newHead is returned: (returned) │ ┌───────────┐ ┌───────────┐ ┌─┴─────────┐ │ val: 1 │ │ val: 2 │ │ val: 3 │ │ next: None│ │ next: ┐ │ │ next: ┐ │ │ ├───────────┘ ├───────────┘ │ └───────────┘ └───────────┘ └───────────┘ ...which is the reversed list. Hope this clarifies it.
2
3
77,023,710
2023-9-1
https://stackoverflow.com/questions/77023710/how-can-i-get-only-text-in-scrapy-selector-in-python
I hope you are doing well. <ul> <li> <s>Title:</s> De Aardappeleters </li> <li> <s>Dimensions:</s> 82 x 114 cm </li> <li> <s>Media:</s> canvas </li> <li> <s>Style:</s> Realism </li> <li> <s>Date:</s> 1885 </li> ______ <li> | <s>Genre:</s> | It is located on a page of the website here Modern | </li> ______| </ul> I have an HTML block☝ that I want to receive a text from li. But unfortunately, this li has no class or ID that I can select.This block is for a site. <li> <s>Genre:</s> Modern </li> I want to select the genre list and get the text.👇 Modern The main problem here is that this block is different on another page.👇 <ul> <li> <s>Title:</s> De Aardappeleters </li> <li> <s>Dimensions:</s> 82 x 114 cm </li> <li> <s>Media:</s> canvas </li> ______ <li> | <s>Genre:</s> |And it is located here on another page. Modern | </li> ______| <li> <s>Style:</s> Realism </li> <li> <s>Date:</s> 1885 </li> </ul> OriginalTagFind = layout.css('article ul li s::text').getall() TitleOriginal = [tag.strip() for tag in OriginalTagFind if tag.startswith('Genre:')] In my opinion, if I come to the place I have selected and print the text of the mother's list with Next Sibiling. is it possible؟
With a css selector you can use: 'li:has(s):contains("Genre:")::text' With an xpath selector you can use: "//li[s[contains(text(), 'Genre')]]/text()" I have demonstrated using both with your example below: In [1]: html = """<ul> ...: <li> ...: <s>Title:</s> ...: De Aardappeleters ...: </li> ...: <li> ...: <s>Dimensions:</s> ...: 82 x 114 cm ...: </li> ...: <li> ...: <s>Media:</s> ...: canvas ...: </li> ...: <li> ...: <s>Style:</s> ...: Realism ...: </li> ...: <li> ...: <s>Date:</s> ...: 188 ...: </li> ...: <li> ...: <s>Genre:</s> ...: Modern ...: </li> ...: </ul> """ In [2]: selector = scrapy.Selector(text=html) In [3]: ''.join(selector.xpath("//li[s[contains(text(), 'Genre')]]/text()").getall()).strip() Out[3]: 'Modern' In [4]: ''.join(selector.css('li:has(s):contains("Genre:")::text').getall()).strip() Out[4]: 'Modern'
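A slightly different anchor, if you prefer selecting from the <s> element itself, is to take its following sibling text nodes (same HTML as above):
In [5]: selector.xpath("//s[contains(text(), 'Genre')]/following-sibling::text()").get().strip()
Out[5]: 'Modern'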
3
1
77,009,691
2023-8-30
https://stackoverflow.com/questions/77009691/how-to-maximize-the-cache-hit-rate-of-the-2-element-combinations
My question is simple, but I find it difficult to get the point straight, so please allow me to explain step by step. Suppose I have N items and N corresponding indices. Each item can be loaded using the corresponding index. def load_item(index: int) -> ItemType: # Mostly just reading, but very slow. return item Also I have a function that takes two (loaded) items and calculates a score. def calc_score(item_a: ItemType, item_b: ItemType) -> ScoreType: # Much faster than load function. return score Note that calc_score(a, b) == calc_score(b, a). What I want to do is calculate the score for all 2-item combinations and find (at least) one combination that gives the maximum score. This can be implemented as follows: def dumb_solution(n: int) -> Tuple[int, int]: best_score = 0 best_combination = None for index_a, index_b in itertools.combinations(range(n), 2): item_a = load_item(index_a) item_b = load_item(index_b) score = calc_score(item_a, item_b) if score > best_score: best_score = score best_combination = (index_a, index_b) return best_combination However, this solution calls the load_item function 2*C(N,2) = N*(N-1) times, which is the bottleneck for this function. This can be resolved by using a cache. Unfortunately, however, the items are so large that it is impossible to keep all items in memory. Therefore, we need to use a size-limited cache. from functools import lru_cache @lru_cache(maxsize=M) def load(index: int) -> ItemType: # Very slow process. return item Note that M (cache size) is much smaller than N (approx. N // 10 to N // 2). The problem is that the typical sequence of combinations is not ideal for the LRU cache. For instance, when N=6, M=3, itertools.combinations generates the following sequence, and the number of calls of the load_item function is 17. [ (0, 1), # 1, 2 (0, 2), # -, 3 (0, 3), # -, 4 (0, 4), # -, 5 (0, 5), # -, 6 (1, 2), # 7, 8 (1, 3), # -, 9 (1, 4), # -, 10 (1, 5), # -, 11 (2, 3), # 12, 13 (2, 4), # -, 14 (2, 5), # -, 15 (3, 4), # 16, 17 (3, 5), # -, - (4, 5), # -, - ] However, if I rearrange the above sequence as follows, the number of calls will be 10. [ (0, 1), # 1, 2 (0, 2), # -, 3 (1, 2), # -, - (0, 3), # -, 4 (2, 3), # -, - (0, 4), # -, 5 (3, 4), # -, - (0, 5), # -, 6 (4, 5), # -, - (1, 4), # 7, - (1, 5), # -, - (1, 3), # -, 8 (3, 5), # -, - (2, 5), # 9, - (2, 4), # -, 10 ] Question: How can I generate a sequence of 2-item combinations that maximizes the cache hit rate? What I tried: The solution I came up with is to prioritize items that are already in the cache. from collections import OrderedDict def prioritizes_item_already_in_cache(n, cache_size): items = list(itertools.combinations(range(n), 2)) cache = OrderedDict() reordered = [] def update_cache(x, y): cache[x] = cache[y] = None cache.move_to_end(x) cache.move_to_end(y) while len(cache) > cache_size: cache.popitem(last=False) while items: # Find a pair where both are cached. for i, (a, b) in enumerate(items): if a in cache and b in cache: reordered.append((a, b)) update_cache(a, b) del items[i] break else: # Find a pair where one of them is cached. for i, (a, b) in enumerate(items): if a in cache or b in cache: reordered.append((a, b)) update_cache(a, b) del items[i] break else: # Cannot find item in cache. a, b = items.pop(0) reordered.append((a, b)) update_cache(a, b) return reordered For N=100, M=10, this sequence resulted in 1660 calls, which is about 1/3 of the typical sequence. For N=100, M=50 there are only 155 calls. So I think I can say that this is a promising approach. 
Unfortunately, this function is too slow and useless for large N. I was not able to finish for N=1000, but the actual data is in the tens of thousands. Also, it does not take into account how to select an item when no cached item is found. Therefore, even if it is fast, it is doubtful that it is theoretically the best solution (so please note my question is not how to make the above function faster). (Edited) Here is the complete code including everyone's answers and the test and benchmark code. import functools import itertools import math import time from collections import Counter, OrderedDict from itertools import chain, combinations, product from pathlib import Path from typing import Callable, Iterable, Tuple import joblib import matplotlib.pyplot as plt import numpy as np import pandas as pd from PIL import Image, ImageDraw ItemType = int ScoreType = int def load_item(index: int) -> ItemType: return int(index) def calc_score(item_a: ItemType, item_b: ItemType) -> ScoreType: return abs(item_a - item_b) class LRUCacheWithCounter: def __init__(self, maxsize: int): def wrapped_func(key): self.load_count += 1 return load_item(key) self.__cache = functools.lru_cache(maxsize=maxsize)(wrapped_func) self.load_count = 0 def __call__(self, key: int) -> int: return self.__cache(key) def basic_loop(iterator: Iterable[Tuple[int, int]], cached_load: Callable[[int], int]): best_score = 0 best_combination = None for i, j in iterator: a = cached_load(i) b = cached_load(j) score = calc_score(a, b) if score > best_score: best_score = score best_combination = (i, j) return best_score, best_combination def baseline(n, _): return itertools.combinations(range(n), 2) def prioritizes(n, cache_size): items = list(itertools.combinations(range(n), 2)) cache = OrderedDict() reordered = [] def update_cache(x, y): cache[x] = cache[y] = None cache.move_to_end(x) cache.move_to_end(y) while len(cache) > cache_size: cache.popitem(last=False) while items: # Find a pair where both are cached. for i, (a, b) in enumerate(items): if a in cache and b in cache: reordered.append((a, b)) update_cache(a, b) del items[i] break else: # Find a pair where one of them is cached. for i, (a, b) in enumerate(items): if a in cache or b in cache: reordered.append((a, b)) update_cache(a, b) del items[i] break else: # Cannot find item in cache. 
a, b = items.pop(0) reordered.append((a, b)) update_cache(a, b) return reordered def Matt_solution(n: int, cache_size: int) -> Iterable[Tuple[int, int]]: dest = [] def findPairs(lo1: int, n1: int, lo2: int, n2: int): if n1 < 1 or n2 < 1: return if n1 == 1: for i in range(max(lo1 + 1, lo2), lo2 + n2): dest.append((lo1, i)) elif n2 == 1: for i in range(lo1, min(lo1 + n1, lo2)): dest.append((i, lo2)) elif n1 >= n2: half = n1 // 2 findPairs(lo1, half, lo2, n2) findPairs(lo1 + half, n1 - half, lo2, n2) else: half = n2 // 2 findPairs(lo1, n1, lo2, half) findPairs(lo1, n1, lo2 + half, n2 - half) findPairs(0, n, 0, n) return dest def Kelly_solution(n: int, cache_size: int) -> Iterable[Tuple[int, int]]: k = cache_size // 2 r = range(n) return chain.from_iterable(combinations(r[i : i + k], 2) if i == j else product(r[i : i + k], r[j : j + k]) for i in r[::k] for j in r[i::k]) def Kelly_solution2(n: int, cache_size: int) -> Iterable[Tuple[int, int]]: k = cache_size - 2 r = range(n) return chain.from_iterable(combinations(r[i : i + k], 2) if i == j else product(r[i : i + k], r[j : j + k]) for i in r[::k] for j in r[i::k]) def diagonal_block(lower, upper): for i in range(lower, upper + 1): for j in range(i + 1, upper + 1): yield i, j def strip(i_lower, i_upper, j_lower, j_upper): for i in range(i_lower, i_upper + 1): for j in range(j_lower, j_upper + 1): yield i, j def btilly_solution(n: int, cache_size: int): i_lower = 0 i_upper = n - 1 k = cache_size - 2 is_asc = True while i_lower <= i_upper: # Handle a k*k block first. At the end that is likely loaded. if is_asc: upper = min(i_lower + k - 1, i_upper) yield from diagonal_block(i_lower, upper) j_lower = i_lower j_upper = upper i_lower = upper + 1 else: lower = max(i_lower, i_upper - k + 1) yield from diagonal_block(lower, i_upper) j_lower = lower j_upper = i_upper i_upper = lower - 1 yield from strip(i_lower, i_upper, j_lower, j_upper) is_asc = not is_asc def btilly_solution2(n: int, cache_size: int): k = cache_size - 2 for top in range(0, n, k): bottom = top + k # Diagonal part. for y in range(top, min(bottom, n)): # Y-axis Top to Bottom for x in range(y + 1, min(bottom, n)): # X-axis Left to Right yield y, x # Strip part. # Stripping right to left works well when cache_size is very small, but makes little difference when it is not. 
for x in range(n - 1, bottom - 1, -1): # X-axis Right to Left for y in range(top, min(bottom, n)): # Y-axis Top to Bottom yield y, x def btilly_solution3(n: int, cache_size: int): k = cache_size - 2 r = range(n) for i in r[::k]: yield from combinations(r[i : i + k], 2) yield from product(r[i + k :], r[i : i + k]) def btilly_solution4(n: int, cache_size: int): def parts(): k = cache_size - 2 r = range(n) for i in r[::k]: yield combinations(r[i : i + k], 2) yield product(r[i + k :], r[i : i + k]) return chain.from_iterable(parts()) def plot(df, series, ignore, y, label, title): df = df[df["name"].isin(series)] # plt.figure(figsize=(10, 10)) for name, group in df.groupby("name"): plt.plot(group["n"], group[y], label=name) y_max = df[~df["name"].isin(ignore)][y].max() plt.ylim(0, y_max * 1.1) plt.xlabel("n") plt.ylabel(label) plt.title(title) plt.legend(loc="upper left") plt.tight_layout() plt.grid() plt.show() def run(func, n, cache_ratio, output_dir: Path): cache_size = int(n * cache_ratio / 100) output_path = output_dir / f"{n}_{cache_ratio}_{func.__name__}.csv" if output_path.exists(): return started = time.perf_counter() for a, b in func(n, cache_size): pass elapsed_iterate = time.perf_counter() - started # test_combinations(func(n, cache_size), n) started = time.perf_counter() cache = LRUCacheWithCounter(cache_size) basic_loop(iterator=func(n, cache_size), cached_load=cache) elapsed_cache = time.perf_counter() - started output_path.write_text(f"{func.__name__},{n},{cache_ratio},{cache_size},{cache.load_count},{elapsed_iterate},{elapsed_cache}") def add_lower_bound(df): def calc_lower_bound(ni, mi): n = ni m = n * mi // 100 return m + math.ceil((math.comb(n, 2) - math.comb(m, 2)) / (m - 1)) return pd.concat( [ df, pd.DataFrame( [ {"name": "lower_bound", "n": ni, "m": mi, "count": calc_lower_bound(ni, mi)} for ni, mi in itertools.product(df["n"].unique(), df["m"].unique()) ] ), ] ) def benchmark(output_dir: Path): log_dir = output_dir / "log" log_dir.mkdir(parents=True, exist_ok=True) candidates = [ baseline, prioritizes, Matt_solution, Kelly_solution, Kelly_solution2, btilly_solution, btilly_solution2, btilly_solution3, btilly_solution4, ] nc = np.linspace(100, 500, num=9).astype(int) # nc = np.linspace(500, 10000, num=9).astype(int)[1:] # nc = np.linspace(10000, 100000, num=9).astype(int).tolist()[1:] print(nc) mc = np.linspace(10, 50, num=2).astype(int) print(mc) joblib.Parallel(n_jobs=1, verbose=5, batch_size=1)([joblib.delayed(run)(func, ni, mi, log_dir) for ni in nc for mi in mc for func in candidates]) def plot_graphs(output_dir: Path): log_dir = output_dir / "log" results = [] for path in log_dir.glob("*.csv"): results.append(path.read_text().strip()) (output_dir / "stat.csv").write_text("\n".join(results)) df = pd.read_csv(output_dir / "stat.csv", header=None, names=["name", "n", "m", "size", "count", "time", "time_full"]) df = add_lower_bound(df) df = df.sort_values(["name", "n", "m"]) for m in [10, 50]: plot( df[df["m"] == m], series=[ baseline.__name__, prioritizes.__name__, Matt_solution.__name__, Kelly_solution.__name__, Kelly_solution2.__name__, btilly_solution.__name__, "lower_bound", ], ignore=[ baseline.__name__, prioritizes.__name__, ], y="count", label="load count", title=f"cache_size = {m}% of N", ) plot( df[df["m"] == 10], series=[ baseline.__name__, prioritizes.__name__, Matt_solution.__name__, Kelly_solution.__name__, Kelly_solution2.__name__, btilly_solution.__name__, btilly_solution2.__name__, btilly_solution3.__name__, btilly_solution4.__name__, ], ignore=[ 
prioritizes.__name__, Matt_solution.__name__, ], y="time", label="time (sec)", title=f"cache_size = {10}% of N", ) class LRUCacheForTest: def __init__(self, maxsize: int): self.cache = OrderedDict() self.maxsize = maxsize self.load_count = 0 def __call__(self, key: int) -> int: if key in self.cache: value = self.cache[key] self.cache.move_to_end(key) else: if len(self.cache) == self.maxsize: self.cache.popitem(last=False) value = load_item(key) self.cache[key] = value self.load_count += 1 return value def hit(self, i, j): count = int(i in self.cache) self(i) count += int(j in self.cache) self(j) return count def visualize(): # Taken from https://stackoverflow.com/a/77024514/18125313 and modified. n, m = 100, 30 func = btilly_solution2 pairs = func(n, m) cache = LRUCacheForTest(m) # Create the images, save as animated png. images = [] s = 5 img = Image.new("RGB", (s * n, s * n), (255, 255, 255)) draw = ImageDraw.Draw(img) colors = [(255, 0, 0), (255, 255, 0), (0, 255, 0)] for step, (i, j) in enumerate(pairs): draw.rectangle((s * j, s * i, s * j + s - 2, s * i + s - 2), colors[cache.hit(i, j)]) if not step % 17: images.append(img.copy()) images += [img] * 40 images[0].save(f"{func.__name__}_{m}.gif", save_all=True, append_images=images[1:], optimize=False, duration=30, loop=0) def test_combinations(iterator: Iterable[Tuple[int, int]], n: int): # Note that this function is not suitable for large N. expected = set(frozenset(pair) for pair in itertools.combinations(range(n), 2)) items = list(iterator) actual = set(frozenset(pair) for pair in items) assert len(actual) == len(items), f"{[item for item, count in Counter(items).items() if count > 1]}" assert actual == expected, f"dup={actual - expected}, missing={expected - actual}" def test(): n = 100 # N cache_size = 30 # M def run(func): func(n, cache_size) # Measure generation performance. started = time.perf_counter() for a, b in func(n, cache_size): pass elapsed = time.perf_counter() - started # Test generated combinations. test_combinations(func(n, cache_size), n) # Measure cache hit (load count) performance. cache = LRUCacheWithCounter(cache_size) _ = basic_loop(iterator=func(n, cache_size), cached_load=cache) print(f"{func.__name__}: {cache.load_count=}, {elapsed=}") candidates = [ baseline, prioritizes, Matt_solution, Kelly_solution, Kelly_solution2, btilly_solution, btilly_solution2, btilly_solution3, btilly_solution4, ] for f in candidates: run(f) def main(): test() visualize() output_dir = Path("./temp2") benchmark(output_dir) plot_graphs(output_dir) if __name__ == "__main__": main() I have no problem with you not using the above test code or changing the behavior of basic_loop or LRUCacheWithCounter. Additional Note: The score calculation cannot be pruned using neighbor scores. The score calculation cannot be pruned using only a portion of the item. It is impossible to guess where the best combination will be. Using faster media is one option, but I'm already at my limit, so I'm looking for a software solution. Thank you for reading this long post to the end. Edit: Thanks to btilly's answer and help with Kelly's visualization, I have come to the conclusion that btilly's solution is the best and (possibly) optimal one. Here is a theoretical explanation (although I am not very good at math, so it could be wrong). Let N represent the number of indexes, M the cache size, and C the number of combinations (same as math.comb). Consider a situation where the cache is full and no further combinations can be generated without loading. 
If we add a new index at this point, the only combinations that can be generated are combinations of the newly added index and the remaining indexes in the cache. This pattern holds for each subsequent iteration. Hence, while the cache is full, the maximum number of combinations can be generated per load is M - 1. This logic holds if the cache isn't full as well. If M' indexes are currently in the cache, then the next index can generate at most M' combinations. The subsequent index can generate at most M' + 1 combinations, and so forth. In total, at most C(M,2) combinations can be generated before the cache is full. Thus, to generate C(N,2) combinations, at least M loads are required to fill the cache, at least (C(N,2) - C(M,2)) / (M - 1) loads are required after the cache is filled. From above, the load counts complexity of this problem is Ω(N^2 / M). I have plotted this formula as a lower_bound in the graphs below. Note that it is only a lower bound and no guarantee that it can actually be achieved. As an aside, Kelly's solution needs to configure k to maximize its performance. For M = 50% of N, it's about M * 2/3. For M = 30% of N, it's about M * 5/6. Although I couldn't figure out how to calculate it. As a general configuration, I use k = M - 2 (which is not best, but relatively good) in the Kelly_solution2 in the graphs below. For M = 10% of N: For M = 50% of N: Note that, in these graphs, it looks like O(N), but this is because I determined M based on N. When M does not change, it is O(N^2) as described above. Here is an animation visualizing the cache hit rate of btilly_solution2, composed by a modified version of Kelly's code. Each pixel represents a combination, with red representing combinations where both indexes are loaded, yellow where one index is loaded, and green where neither index is loaded. In addition, since I'm looking for the optimal sequence, execution time doesn't matter much. But just in case anyone is curious, here is a comparison of execution times (iteration only). btilly_solution4 (btilly's solution modified by Kelly) is almost as fast as itertools.combinations, which should be optimal in this case. Note, however, that even without the modification, it took only 112 nanoseconds per combination. That's it. Thanks to everyone involved.
Here is a simple approach that depends on the cache and gets 230 on your benchmark. def diagonal_block (lower, upper): for i in range(lower, upper + 1): for j in range(i, upper + 1): yield (i, j) def strip (i_lower, i_upper, j_lower, j_upper): for i in range(i_lower, i_upper+1): for j in range (j_lower, j_upper + 1): yield (i, j) # def your_solution_here(n: int, cache_size: int) -> Iterable[Tuple[int, int]]: def your_solution_here(n: int, cache_size: int): i_lower = 0 i_upper = n-1 k = cache_size - 2 is_asc = True while i_lower <= i_upper: # Handle a k*k block first. At the end that is likely loaded. if is_asc: upper = min(i_lower + k - 1, i_upper) yield from diagonal_block(i_lower, upper) j_lower = i_lower j_upper = upper i_lower = upper + 1 else: lower = max(i_lower, i_upper - k + 1) yield from diagonal_block(lower, i_upper) j_lower = lower j_upper = i_upper i_upper = lower - 1 yield from strip(i_lower, i_upper, j_lower, j_upper) is_asc = not is_asc A comment about how I thought this one up. We want to compare a group of objects with every other uncompared object. The group should be everything that fits in the cache except one. So we start with the first k objects, compare them with each other, then just proceed along in a strip to the end. And now we need our second group. Well, we already have the last object, and we don't need the rest. So we take k objects from the end, make that a group. Compare the group with itself, then proceed along a strip to the first object outside of our original group. Now reverse direction, and so on. At all points, i_lower represents the first object still needing comparing, and i_upper represents the last. If we're going forward, we take k objects starting at i_lower. If we're going backwards we take k objects starting at i_upper and go backwards. When I was implementing it, there were two complications. The first is that we have to worry about the edge condition when we meet in the middle. The second is that we might have to do the strip in 2 directions. I chose to only do the strip ascending. This is actually a bug. On most of the ascending loads, I did not get the first element in my cache. Oops. But it is still pretty good.
17
12
77,009,144
2023-8-30
https://stackoverflow.com/questions/77009144/asyncio-this-event-loop-is-already-running-issue
is it a good idea to run the asyncio eventloop inside a thread? import asyncio import time from sample_threading import parallel loop = asyncio.new_event_loop() async def fn(p): for i in range(5): print(i) time.sleep(5) print("done") @parallel def th(p): loop.run_until_complete(fn(p)) th(1) th(2) th(3) above code giving error raise RuntimeError('This event loop is already running') RuntimeError: This event loop is already running any suggestion ?
The error message you are having, This event loop is already running, occurs because you attempt to run an asyncio event loop that is already running. In your code, you are creating a new event loop using asyncio.new_event_loop(), but you are not explicitly setting it as the current event loop.
import asyncio
from sample_threading import parallel

async def fn(p):
    for i in range(5):
        print(i)
        # using asyncio.sleep instead of time.sleep keeps the sleep non-blocking
        # and allows other tasks to run concurrently
        await asyncio.sleep(5)
    print("done")

@parallel
def th(p):
    asyncio.run(fn(p))

th(1)
th(2)
th(3)
Running the asyncio event loop inside a thread can be a valid approach in certain scenarios, especially when you need to perform asynchronous tasks concurrently with other operations.
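If you really do want an explicit event loop per thread instead of asyncio.run, a minimal sketch (assuming sample_threading.parallel from the question runs the decorated function in its own thread) is to create and close a separate loop inside each thread:
import asyncio
from sample_threading import parallel  # from the question, assumed to run th() in its own thread

async def fn(p):
    for i in range(5):
        print(i)
        await asyncio.sleep(5)
    print("done")

@parallel
def th(p):
    # each thread owns its loop: create it, run the coroutine, then close it
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(fn(p))
    finally:
        loop.close()

th(1)
th(2)
th(3)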
4
1
77,022,582
2023-9-1
https://stackoverflow.com/questions/77022582/type-hint-for-a-special-argument
Consider the following function: from datetime import date def days_between(start_date: date, end_date: date) -> int: if start_date == "initial": start_date = date(2023, 9, 1) delta = end_date - start_date return delta.days The type hint of start date is mostly fine, but it does not cover the case where "initial" is passed. This is the only string allowed, how would you express it? The options I thought about are: start_date: date | str which seems a bit of an overkill start_date: date | "initial" maybe fine but I have not come across anything similar no change and simply add a description in the docstring For completeness, I am more interested in how to convey the use of the function to a user compared to formal correctness for static type checking.
You should use typing.Literal
from typing import Literal

def days_between(start_date: date | Literal['initial'], end_date: date) -> int:
    ...
This is included starting from Python 3.8. Literal can be used to indicate to type checkers that the annotated object has a value equivalent to one of the provided literals.
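As a rough illustration of how a checker such as mypy treats the annotation (exact message wording varies), only date objects and the exact string 'initial' are accepted:
from datetime import date

days_between(date(2023, 9, 1), date(2023, 9, 10))  # OK
days_between("initial", date(2023, 9, 10))         # OK
days_between("start", date(2023, 9, 10))           # flagged: incompatible argument type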
2
3
77,021,106
2023-9-1
https://stackoverflow.com/questions/77021106/is-there-anyway-to-generate-doc-strings-for-python-code-using-github-copilot-vs
Is there anyway to generate doc string using Github Copilot I have code and I want to generate doc string for it. def make_chat_content(self,chat_uuid,text,db_session): import uuid all_content = ChatContent.query.filter_by(chat_uuid=chat_uuid).all() chat_json = dump(all_content) chat_json.append({"role":"user","content":text}) response, total_words_generated = self.chat.get_response(chat_json) for example doc string that describes function """ Add the user's text to the chat content and generate a response. This function takes the chat UUID, user's text, and a database session as input. It appends the user's content to the existing chat content, generates a response using a chat model, and returns the response along with the total words generated. Args: chat_uuid (str): The UUID of the chat session. text (str): The text content provided by the user. db_session: The database session for querying chat content. Returns: tuple: A tuple containing the response generated by the chat model and the total number of words generated in the response. """
Looks like it can be done; I only tested it on the sample code in the post.
2
4
77,019,925
2023-9-1
https://stackoverflow.com/questions/77019925/numpy-vectorized-logic-giving-unexpected-output
I could swear this should be returning three rows that meet the logic, and I cannot for the life of me figure out why only one is coming back.. Can someone please explain this? import numpy as np data = np.array([[1, 2, 3], [2, 5, 6], [7, 8, 9], [4, 3, 4]]) target_sum = 7 x = (((data[:, 0] + data[:, 1])==target_sum) | data[:, 0] ==7) matching_rows = data[x] print(matching_rows) My terminal says: [[7 8 9]] but surely it should say: [[2 5 6] [4 3 4] [7 8 9]] No?
I think you mean: x = ((data[:, 0] + data[:, 1])==target_sum) | (data[:, 0] ==7) Notice the parenthesis position. As stated in the doc: Unlike C, all comparison operations in Python have the same priority, which is lower than that of any arithmetic. That means a | b == c is evaluated as (a | b) == c, not a | (b==c).
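To make the precedence difference concrete with the data from the question:
import numpy as np

data = np.array([[1, 2, 3], [2, 5, 6], [7, 8, 9], [4, 3, 4]])
target_sum = 7

mask = (data[:, 0] + data[:, 1]) == target_sum
print((mask | data[:, 0]) == 7)   # what the original expression computes: [False False  True False]
print(mask | (data[:, 0] == 7))   # the intended mask: [False  True  True  True]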
4
6
77,019,628
2023-8-31
https://stackoverflow.com/questions/77019628/how-to-order-facets-when-using-the-seaborn-objects-interface
I am trying to order facets in a plot produced by the seaborn objects interface. import numpy as np import seaborn as sns import seaborn.objects as so import matplotlib.pyplot as plt df = sns.load_dataset("iris") df["species"] = df["species"].astype("category") df["species"] = df["species"].cat.codes rng = np.random.default_rng(seed=0) df["subset"] = rng.choice(['A','B','C'], len(df), replace=True) fig = plt.figure(figsize=(6.4 * 2.0, 4.8 * 2.0)) _ = ( so.Plot(df, x="sepal_length", y="sepal_width") .facet(row="species", col="subset") .add(so.Dot()) .on(fig) .plot() ) However, if col_order or row_order are passed as parameters to the .facet() line an "unexpected keyword argument" TypeError is raised. _ = ( so.Plot(df, x="sepal_length", y="sepal_width") .facet( row="species", col="subset", row_order=['A','C','B'], col_order=[0,2,1] ) .add(so.Dot()) .on(fig) .plot() ) TypeError: Plot.facet() got an unexpected keyword argument 'row_order' How should facets be ordered when using the seaborn.objects interface? Note that this question is very similar to "Seaborn ordering of facets" which is the same question when the plot is generated using seaborn but not the seaborn.objects module. Ideally, an answer should also work when using the wrap parameter of facet() in the seaborn.objects interface.
Plot.facet has a single order parameter. When only col or row are used a single list can be passed and it will be used by the appropriate variable. When both col and row are used, order can be a dictionary with col/row keys: ( so.Plot(df, x="sepal_length", y="sepal_width") .facet(row="species", col="subset", order={"row": [2, 1, 0], "col": ["A", "B", "C"]}) .add(so.Dot()) .layout(size=(6.4 * 2, 4.8 * 2)) )
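And when only one of col/row is faceted, which is where wrap applies, a plain list should work as the order. A sketch based on the same data:
(
    so.Plot(df, x="sepal_length", y="sepal_width")
    .facet(col="subset", wrap=2, order=["A", "C", "B"])
    .add(so.Dot())
)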
2
3
77,017,948
2023-8-31
https://stackoverflow.com/questions/77017948/how-to-filter-duplicates-based-on-multiple-columns-in-polars
I was earlier able to filter duplicates based on multiple columns using df.filter(pl.col(['A','C']).is_duplicated()) but after the latest version update this is not working. import polars as pl df = pl.DataFrame( { "A": [1,4,4,7,7,10,10,13,16], "B": [2,5,5,8,18,11,11,14,17], "C": [3,6,6,9,9,12,12,15,18] } ) df.filter(pl.col(['A','C']).is_duplicated()) giving error df.filter(df.select( pl.col(['A','C']).is_duplicated() ) ) giving error
This behavior was noted as ambiguous in 0.16.10 and would return this error: exceptions.ComputeError: The predicate passed to 'LazyFrame.filter' expanded to multiple expressions: col("A").is_duplicated(), col("C").is_duplicated(), This is ambiguous. Try to combine the predicates with the 'all' or `any' expression. However 0.19.0 removed the deprecated behavior of all/any replaced by all_horizontal and any_horizontal. To get the same behavior as the pre-0.16.10, use df.filter(pl.all_horizontal(pl.col(['A','C']).is_duplicated())) I've modified the input slightly to reflect the differences between any_horizontal and all_horizontal import polars as pl df = pl.DataFrame( { "A": [1,3,4,7,7,10,10,13,16], "B": [2,5,5,8,18,11,11,14,17], "C": [3,6,6,9,9,12,12,15,18] } ) # print("legacy run in 0.16.9: ", df.filter(pl.col(['A','C']).is_duplicated())) print("all_horizontal: ", df.filter(pl.all_horizontal(pl.col(['A','C']).is_duplicated()))) print("any_horizontal: ", df.filter(pl.any_horizontal(pl.col(['A','C']).is_duplicated()))) legacy run in 0.16.9: shape: (4, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 7 ┆ 8 ┆ 9 │ │ 7 ┆ 18 ┆ 9 │ │ 10 ┆ 11 ┆ 12 │ │ 10 ┆ 11 ┆ 12 │ └─────┴─────┴─────┘ all_horizontal: shape: (4, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 7 ┆ 8 ┆ 9 │ │ 7 ┆ 18 ┆ 9 │ │ 10 ┆ 11 ┆ 12 │ │ 10 ┆ 11 ┆ 12 │ └─────┴─────┴─────┘ any_horizontal: shape: (6, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 3 ┆ 5 ┆ 6 │ │ 4 ┆ 5 ┆ 6 │ │ 7 ┆ 8 ┆ 9 │ │ 7 ┆ 18 ┆ 9 │ │ 10 ┆ 11 ┆ 12 │ │ 10 ┆ 11 ┆ 12 │ └─────┴─────┴─────┘
3
2
77,017,290
2023-8-31
https://stackoverflow.com/questions/77017290/optimized-way-to-find-index-pairs-of-two-values-in-a-pandas-column
I have a dataframe of predictions on text data which gives a column named "prediction". here is the code to recreate it. import pandas as pd # initialize list elements data = ['others', 'others', 'prediction1', 'others', 'others', 'others', 'others', 'others', 'others', 'others', 'prediction2', 'others', 'prediction2', 'others', 'others', 'prediction1', 'others', 'others', 'others', 'others', 'others', 'others', 'others', 'prediction2', 'others', 'others', 'others', 'others', 'prediction1', 'others'] score = [0.75, 0.75, 0.9, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.88, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75, 0.9, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75] df = pd.DataFrame( {'prediction': data, 'score': score }) I want to find a list of index pairs of [(index_prediction1,index_prediction2),(index_prediction1,index_prediction2)] in this column where when prediction1 is found, find next prediction2 which has score higher than 0.85 and similarly find more pairs like this. prediction1 has no restriction of score. Currently I'm using iterrows which I came to know are very slow. This is my current code for finding all pairs. pred_pairs = [] for i, row in df.iterrows(): if (row['prediction'] == 'prediction1'): for index, row in df.iloc[i+1:].iterrows(): if row['prediction'] == 'prediction2' and row['score']>=0.85: pred_pairs.append((i,index)) break pred_pairs I would like to know a better approach to do this.
original question You can use zip: pred_pairs = list(zip(df.index[df['prediction'].eq('prediction1')], df.index[df['prediction'].eq('prediction2')])) Output: [(8, 20), (28, 36)] Or, if the pairs always alternate, with numpy: out = (df.index[df['prediction'].isin(['prediction1', 'prediction2'])] .to_numpy().reshape(-1, 2).tolist() ) Output: [[8, 20], [28, 36]] updated question: more complex logic only keep the prediction2 > 0.85 keep the first prediction2 after each prediction1 s = df.query('(prediction == "prediction1") | ((prediction == "prediction2") & (score > 0.85))')['prediction'] out = [list(x.index[:2]) for _, x in s.groupby(s.eq('prediction1').cumsum())] Output: [[2, 12], [15, 23], [28]] If you only want the pairs in which there is a prediction2: s = df.query('(prediction == "prediction1") | ((prediction == "prediction2") & (score > 0.85))')['prediction'] out = [l for _, x in s.groupby(s.eq('prediction1').cumsum()) if len(l:=list(x.index[:2]))>1] Output: [[2, 12], [15, 23]]
2
1
77,016,287
2023-8-31
https://stackoverflow.com/questions/77016287/sorting-list-of-tuples-by-multiple-keys-that-can-be-empty
I try to sort list of tuples first by 1st element, than by 2nd (if it exists), than by 3rd (if it exists). Since the tuples are of different length, some of them does not have 2nd or/and 3rd element, so the error appears: IndexError: list index out of range on line undefined. list_to_sort = [(1, 11), (2, 0, 0), (1, 2), (2), (0, 1), (1, 2, 1), (1, 1, 1), (2, 0)] expected output: [(0, 1), (1, 1, 1), (1, 2), (1, 2, 1), (1, 11), (2), (2, 0), (2, 0, 0)] What I tried to do already, but the same error appears: IndexError: list index out of range on line undefined. 1. print(sorted(list_to_sort,key=lambda x:(x[0], x[1], x[2]))) print(sorted(list_to_sort,key=lambda x:(x[0],(x[1] is not None, x[1]), (x[2] is not None, x[2]))) def check(x): if x: return x else: return "" print(sorted(list_to_sort,key=lambda x:(x[0], check(x[1]), check(x[2]))) I want the code to check if the 2nd/3rd element is there and if yes sort with it's value, if not - suppose that it is empty or zero for sorting purposes. It seems easy, but I'm running out of options here. Any suggestions are welcome :) Thanks in advance!
I think you are looking for the basic, default sorting for a list of tuples; however, one of the items in your list might look like a tuple (2) but is an int. For it to be a tuple it would look more like (2,). If that item was a tuple then you could just do:
print(sorted(list_to_sort))
So, that gives us a clue as to how to proceed. Let's assume for a moment that you want that item to be an int but would like to sort it as if it was a tuple. We can try:
print(sorted(list_to_sort, key=lambda x: x if isinstance(x, tuple) else (x,)))
That will give you:
[(0, 1), (1, 1, 1), (1, 2), (1, 2, 1), (1, 11), 2, (2, 0), (2, 0, 0)]
3
1
77,014,428
2023-8-31
https://stackoverflow.com/questions/77014428/what-is-the-meaning-of-a-files-key-in-poetry-lock
I just installed a newer version of a package in a poetry project. Now, for every package listed in the poetry.lock file, there's an added files key, like this: [[package]] name = "..." version = "..." files = [ {...}, {...} ] This wasn't there before, only after I installed the new version of the package, and introduces a lot of changes to the poetry.lock. What is this files key and why was it created now when it wasn't there before? Can I prevent its creation?
There is little documentation on Poetry's lock file format and its changes from version to version. Regarding your question, there is a discussion on Github that briefly describes what's going on: Before lock file format version 2.0 The files entry was its own section in the lock file, so one could find, for example, the following structure: [[package]] name = "matplotlib" version = ... description = ... category = ... optional = ... python-versions = ... ... [metadata.files] matplotlib = [ {file = "matplotlib-XXX.whl", hash = "sha256:XXX"}, ... ] With lock file format version 2.0 The files entry is now part of each package's section, so a corresponding entry looks, for example, as follows: [[package]] name = "matplotlib" version = ... description = ... category = ... optional = ... python-versions = ... files = [ {file = "matplotlib-XXX.whl", hash = "sha256:XXX"}, ... ] Information-wise, both representations are equivalent, just the structure is different. Meaning of the files entries To my understanding, the entries are created to ensure that the content of a package that is about to be installed has not changed since the lock file creation. This is to ensure the full reproducibility of the project's environment at reinstallation. That's why the entries consist of file-hash pairs: file is the wheel (.whl file) that represents the Python package: usually there are different wheels for different platforms and Python versions, which is why there is a list of entries for most packages; hash is the sha256 checksum for the wheel. As to your last question: I don't think it is possible to prevent the creation of the files entries. Also, given that (a) the corresponding information serves a vital purpose of the lock file and (b) the same information has been stored before albeit in a different place, I don't think it would make much sense to do so.
2
3
77,012,106
2023-8-30
https://stackoverflow.com/questions/77012106/django-allauth-modulenotfounderror-no-module-named-allauth-account-middlewar
"ModuleNotFoundError: No module named 'allauth.account.middleware'" I keep getting this error in my django project even when django-allauth is all installed and setup??? I tried even reinstalling and changing my python to python3 but didn't change anything, can't figure out why all other imports are working but the MIDDLEWARE one... Help pls? settings.py: """ Django settings for youtube2blog2 project. Generated by 'django-admin startproject' using Django 4.2.4. For more information on this file, see For the full list of settings and their values, see """ from pathlib import Path import django import os import logging import pyfiglet import allauth # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent # Quick-start development settings - unsuitable for production # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'omegalul' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # CUSTOM CODE # os.environ['FFMPEG_PATH'] = '/third-party/ffmpeg.exe' # os.environ['FFPROBE_PATH'] = '/third-party/ffplay.exe' OFFLINE_VERSION = False def offline_version_setup(databases): if (OFFLINE_VERSION): # WRITE CODE TO REPLACE DATABASES DICT DATA FOR OFFLINE SETUP HERE return True return banner_ascii_art = pyfiglet.figlet_format("CHRIST IS KING ENTERPRISES") logger = logging.getLogger() logger.setLevel(logging.DEBUG) print("\n - CURRENT DJANGO VERSION: " + str(django.get_version())) print("\n - settings.py: Current logger level is " + str(logger.getEffectiveLevel())) logger.debug('settings.py: Logger is working.\n\n') MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' AUTHENTICATION_BACKENDS = [ # Needed to login by username in Django admin, regardless of `allauth` 'django.contrib.auth.backends.ModelBackend', # `allauth` specific authentication methods, such as login by email 'allauth.account.auth_backends.AuthenticationBackend', ] ''' NEEDED SETUP FOR SOCIAL AUTH REQUIRES DEVELOPER CREDENTIALS ON PAUSE UNTIL MVP IS DONE # Provider specific settings SOCIALACCOUNT_PROVIDERS = { 'google': { # For each OAuth based provider, either add a ``SocialApp`` # (``socialaccount`` app) containing the required client # credentials, or list them here: 'APP': { 'client_id': '123', 'secret': '456', 'key': '' } } 'apple': { } 'discord' { } } ''' LOGIN_REDIRECT_URL = 'dashboard' # # Application definition INSTALLED_APPS = [ # My Apps 'yt2b2', 'home', 'dashboard', # Django Apps 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Downloaded Apps 'rest_framework', 'embed_video', 'allauth', 'allauth.account', 'allauth.socialaccount', #'allauth.socialaccount.providers.google', #'allauth.socialaccount.providers.apple', #'allauth.socialaccount.providers.discord', ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', # Downloaded Middleware 'allauth.account.middleware.AccountMiddleware', ] ROOT_URLCONF = 'youtube2blog2.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR, 'templates')], 
'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'youtube2blog2.wsgi.application' # Database DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': BASE_DIR / 'db.sqlite3', # <--------- OFFLINE VERSION # Consider masking these secret variables using a .env file to beef up your Django app's security. Besides, Vercel allows you to list your environment variables during deployment. #'URL' : 'postgresql://postgres:[email protected]:5968/railway', #'NAME' : 'railway', #'USER' : 'postgres', #'PASSWORD' : 'oibkk5LL9sI5dzY5PAnj', #'HOST' : 'containers-us-west-128.railway.app', #'PORT' : '5968' } } # Password validation AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True # Static files (CSS, JavaScript, Images) STATIC_URL = '/static/' # the path in url STATICFILES_DIRS = [ os.path.join(BASE_DIR, "static"), ] # Default primary key field type DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField' Tried changing to python3, reinstalling django-allauth through pip, other stackoverflow solutions, shifting through allauth docs... Nothing worked until now Update: removed https links because of spam filter Error location: MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', # Downloaded Middleware 'allauth.account.middleware.AccountMiddleware', ]
The middleware is only present in the unreleased 0.56.0-dev, likely you are using 0.55.2 and following 0.56 documentation.
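Until a release that ships it is actually installed, a sketch of the interim fix is simply to drop that entry from MIDDLEWARE in settings.py (it is not needed on 0.55.x):
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    # Re-add this once django-allauth >= 0.56 is installed:
    # 'allauth.account.middleware.AccountMiddleware',
]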
10
11
77,009,676
2023-8-30
https://stackoverflow.com/questions/77009676/why-is-a-property-on-a-subclass-that-returns-a-type-consistent-with-the-same-at
class Foo: bar: str class Bat(Foo): @property def bar(self) -> str: ... Given the above code, my typechecker (mypy) raises the following complaint: error: Signature of "bar" incompatible with supertype "Foo" [override] This surprises me given that an instance of Foo or Bat will behave the same from the perspective of a caller accessing the bar attribute/property. What is the issue the typechecker is preventing by rejecting this code?
Expanding on the comments to the OP:
Older versions of Mypy had some sort of issue/bug somehow tangentially related to this that led to some discussions on the project's GitHub:
Mypy disallows overriding an attribute with a property
and that should have been fixed on versions >=v0.990
There's also a discussion which feels much closer to the OP's exposition:
overriding variable with property: error Signature incompatible with supertype
In this second case, what happens is that...
class Foo:
    bar: str
...tells Mypy that .bar will be a writable attribute, whereas just declaring the @property in the child class Bat...
class Bat(Foo):
    @property
    def bar(self) -> str: ...
would make the attribute read-only.
The most straightforward fix, in this case, might be creating a setter for .bar. The following code:
class Foo:
    bar: str


class Bat(Foo):
    @property
    def bar(self) -> str:
        return "something"

    @bar.setter
    def bar(self, value: str):
        # ... actual setter code probably
        # for self._bar or something like that
        pass
Produces:
Success: no issues found in 1 source file
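Another option, if the attribute does not need to be writable on Foo either, is a sketch that declares bar as a property on the base class as well, which also type-checks cleanly:
class Foo:
    @property
    def bar(self) -> str: ...


class Bat(Foo):
    @property
    def bar(self) -> str:
        return "something"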
5
6
77,012,002
2023-8-30
https://stackoverflow.com/questions/77012002/unexpected-behaviour-when-using-put-with-multidimensional-numpy-array
I have an array A defined as: [[0, 0], [0, 0]] I have a list I containing indices of A, for example [(0, 1), (1, 0), (1, 1)], and a list v with as many values as in I, for example [1, 2, 3]. I want to replace entries of A at indices contained in I with corresponding values stored in v. The expected result is therefore: [[0, 1], [2, 3]] I was expecting to be able to achieve this using np.put. I tried: np.put(A, I, v) However, the value of A after running the line above is: [[1., 3.], [0., 0.]] Why did put behave this way and how can I achieve the result I was expecting?
IIUC, you can do: I = np.array(I) A[I[:, 0], I[:, 1]] = v print(A) Prints: [[0 1] [2 3]]
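As for why np.put behaved that way: it indexes into the flattened array (roughly A.flat[ind] = v), so the list of pairs is itself flattened to [0, 1, 1, 0, 1, 1] and v is repeated to match, which is why only flat positions 0 and 1 (i.e. A[0, 0] and A[0, 1]) end up holding the last values written to them. If you prefer to keep the index pairs as tuples, an equivalent assignment is:
I = [(0, 1), (1, 0), (1, 1)]
v = [1, 2, 3]
A[tuple(zip(*I))] = v   # same as A[(0, 1, 1), (1, 0, 1)] = v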
2
1
77,006,611
2023-8-30
https://stackoverflow.com/questions/77006611/how-to-make-all-values-in-an-array-fall-into-a-range
Say I have a NumPy array of floats, there are positive values and negative values. I have two numbers, say they are a and b, a <= b and [a, b] is a (closed) number range. I want to make all of the array fall into the range [a, b], more specifically I want to replace all values outside of the range with the corresponding terminal value. I am not trying to scale values to fit numbers into a range, in Python that would be: mina = min(arr) scale = (b - a) / (max(arr) - mina) [a + (e - mina) * scale for e in arr] Or in NumPy: mina = arr.min() scale = (b - a) / (arr.max() - mina) a + (arr - mina) * scale I am trying to replace all values lower than a with a and all values higher than b with b, while leaving all other values unchanged, I can do it in a single list comprehension in Python: [e if a <= e <= b else (a if e < a else b) for e in arr] I can do the same with two broadcasts: arr[arr < a] = a arr[arr > b] = b Even though NumPy is way faster than Python, the above is two loops, not one, the method is inefficient but compiled. What is a faster way? I have done the measurement, multiple times, and Python is indeed much slower as expected: In [1]: import numpy as np In [2]: numbers = np.random.random(4096) * 1024 In [3]: %timeit numbers[numbers < 256] 16.1 µs ± 219 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) In [4]: %timeit numbers[numbers > 512] 20.9 µs ± 526 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [5]: %timeit [e if 256 <= e <= 512 else (256 if e < 256 else 512) for e in numbers] 927 µs ± 101 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [6]: %timeit [e if 256 <= e <= 512 else (256 if e < 256 else 512) for e in numbers.tolist()] 684 µs ± 38.2 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) Edit Fixed the code to scale a range of numbers into another range. When I asked the question I didn't think very much about it, because it wasn't relevant, but now I have given it a second thought and it was obviously wrong, so I corrected it. Another Edit Just thought about it again, my original Python code isn't as efficient as it can be, it can be done with only two comparisons whereas the original used three: [a if e < a else (b if e > b else e) for e in arr]
You can use the np.clip Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1. It is faster than the broadcasts way. Code Example: import numpy as np arr = np.array([-3, 5, 10, -7, 2, 8, -12, 15]) a = 0 b = 10 new_arr = np.clip(arr, a, b) print(new_arr) PERFORMANCE BENCHMARK For Array size of 1000 Method 1 (List comprehension) time: 0.0115 seconds Method 2 (NumPy broadcasts) time: 0.0009 seconds Method 3 (np.clip()) time: 0.0009 seconds ----------------------------------------------------------------- For Array size of 10000 Method 1 (List comprehension) time: 0.1137 seconds Method 2 (NumPy broadcasts) time: 0.0069 seconds Method 3 (np.clip()) time: 0.0017 seconds ----------------------------------------------------------------- For Array size of 100000 Method 1 (List comprehension) time: 1.3205 seconds Method 2 (NumPy broadcasts) time: 0.1152 seconds Method 3 (np.clip()) time: 0.0107 seconds ----------------------------------------------------------------- For Array size of 1000000 Method 1 (List comprehension) time: 13.8250 seconds Method 2 (NumPy broadcasts) time: 1.0064 seconds Method 3 (np.clip()) time: 0.1973 seconds
2
5
77,004,957
2023-8-30
https://stackoverflow.com/questions/77004957/what-datatype-is-considered-list-like-in-python
In the Pandas documentation here for Series.isin(values), they state: values : set or list-like What is considered list-like? For a Python dictionary temp_dict, would temp_dict.keys() and temp_dict.values() be considered list-like?
"List-like" isn't a standard Python term. Googling pandas list-like turns up pandas.api.types.is_list_like, but the documentation for that just says Check if the object is list-like. Objects that are considered list-like are for example Python lists, tuples, sets, NumPy arrays, and Pandas Series. Strings and datetime objects, however, are not considered list-like. which isn't really much of a spec. So, as a last resort, we turn to the source code, and after following a lot of imports and aliasing, we eventually find this function: cdef bint c_is_list_like(object obj, bint allow_sets) except -1: # first, performance short-cuts for the most common cases if util.is_array(obj): # exclude zero-dimensional numpy arrays, effectively scalars return not cnp.PyArray_IsZeroDim(obj) elif isinstance(obj, list): return True # then the generic implementation return ( # equiv: `isinstance(obj, abc.Iterable)` getattr(obj, "__iter__", None) is not None and not isinstance(obj, type) # we do not count strings/unicode/bytes as list-like # exclude Generic types that have __iter__ and not isinstance(obj, (str, bytes, _GenericAlias)) # exclude zero-dimensional duck-arrays, effectively scalars and not (hasattr(obj, "ndim") and obj.ndim == 0) # exclude sets if allow_sets is False and not (allow_sets is False and isinstance(obj, abc.Set)) ) So Pandas considers an object list-like if it passes this complicated series of checks. If an object is a 0-dimensional NumPy array, it's not list-like. Otherwise, if it's a list, it's list-like. Otherwise, it needs to pass all the following checks to be list-like: It needs to have an __iter__ attribute that's not None. It needs to not be a type. It needs to not be a string, a bytestring, or a "generic alias" (a type used for some typing module things). It needs to not have an ndim attribute equal to 0. In some cases, Pandas will disallow instances of collections.abc.Set, which are sets, frozensets, and certain other set-like objects. (abc is collections.abc here.) That means Pandas considers most iterable objects to be list-like. Strings, bytestrings, generic aliases, and iterable type objects (like Enum classes) are excluded, with that part about excluding iterable type objects probably being a bug - the code is trying to exclude non-iterable type objects whose instances are iterable. The 0-dimensional array and ndim==0 checks attempt to exclude objects for which positive-dimensional instances of their type would be iterable, but 0-dimensional instances aren't. Sets and other collections.abc.Set subclasses are sometimes excluded, but Series.isin doesn't pass the flag to exclude them.
10
15
77,004,785
2023-8-30
https://stackoverflow.com/questions/77004785/why-is-the-input-still-a-string-type-after-running-it-through-a-function-to-turn
I type a time like 7:30 run it through the convert() function to turn that into a float that would equal 7.5. Then call it back to main() and check what type it is now, and it gives me it's still a str not a float. def main(): meal_time = input("What time is it?").strip() convert(meal_time) print(type(meal_time)) def convert(time): hour, minu = time.split(":") hour = float(hour) #7.0 minu = float(minu) / 60 return float(hour+minu) if __name__ == "__main__": main() Input 7:30 Output type 'str'
You're not updating meal_time in main() with the float returned from convert(). Functions in Python don't modify the original variable unless it's mutable and you're explicitly changing it. Instead of: convert(meal_time) Try: meal_time = convert(meal_time) Output you should observe after the above change: <class 'float'>
2
5
76,970,173
2023-8-24
https://stackoverflow.com/questions/76970173/how-to-get-files-and-form-data-using-the-request-object-in-fastapi
I am developing a webhook in which a third-party service will hit my URL and will provide some files, now I can not use FastAPI's UploadFile = File (...) because it throws an error of the required field File I want to read the payload and files from the request object as we can do in Flask by simply doing this from flask import request files = request.files How can I achieve the same in FastAPI?
The proper approach would be to define File/UploadFile and Form type parameters in your endpoint, as demonstrated in Method 1 of this answer, as well as here, here and here (for a faster file and data uploading approach, see this answer and this answer as well). For instance: @app.post("/submit") async def register(name: str = Form(...), files: List[UploadFile] = File(...)): pass However, since you are looking for an approach using the Request object (which could be useful when dealing with arbitrary data), you could use FastAPI/Starlette's await request.form() method to parse the body, which would return a FormData object, containing all the File(s) and Form data submitted by the user. Simple Working Example from fastapi import FastAPI, Request app = FastAPI() @app.post("/submit") async def submit(request: Request): return await request.form() A more complete example is given below, which also demonstrates how to obtain every File and Form input in the FormData object returned by Starlette. For more details and references, please have a look at this answer, which the following example is based on. Also, if, for any reason, you would like to use a def instead of async def endpoint, please have a look at this answer on how to read the file contents inside a def endpoint. You might find this answer helpful as well. Complete Working Example app.py from fastapi import FastAPI, Request, Depends, HTTPException from fastapi.responses import HTMLResponse from fastapi.templating import Jinja2Templates from starlette.datastructures import FormData, UploadFile app = FastAPI() templates = Jinja2Templates(directory='templates') async def get_body(request: Request): content_type = request.headers.get('Content-Type') if content_type is None: raise HTTPException(status_code=400, detail='No Content-Type provided!') elif (content_type == 'application/x-www-form-urlencoded' or content_type.startswith('multipart/form-data')): try: return await request.form() except Exception: raise HTTPException(status_code=400, detail='Invalid Form data') else: raise HTTPException(status_code=400, detail='Content-Type not supported!') # Use this approach, if keys (names) of Form/File data are unknown to the backend beforehand @app.post('/submit') async def submit(body=Depends(get_body)): if isinstance(body, FormData): # if Form/File data received for k in body: entries = body.getlist(k) if isinstance(body.getlist(k)[0], UploadFile): # check if it is an UploadFile object for file in entries: print(f'Filename: {file.filename}. Content (first 15 bytes): {await file.read(15)}') else: data = entries if len(entries) > 1 else entries[0] print(f"{k}={data}") return 'OK' # Use this approach, if keys (names) of Form/File data are known to the backend beforehand @app.post('/other') async def other(body=Depends(get_body)): if isinstance(body, FormData): # if Form/File data received items = body.getlist('items') print(f"items={items}") msg = body.get('msg') print(f"msg={msg}") files = body.getlist('files') # returns a list of UploadFile objects if files: for file in files: print(f'Filename: {file.filename}. Content (first 15 bytes): {await file.read(15)}') return 'OK' @app.get('/', response_class=HTMLResponse) async def main(request: Request): return templates.TemplateResponse('index.html', {'request': request}) Test using HTML <form> Please make sure that each <input> element, for both form and file inputs, includes a name attribute, as shown below; otherwise, it won't be included in Starlette's FormData object. 
templates/index.html <!DOCTYPE html> <html> <body> <form method="post" action="/submit" enctype="multipart/form-data"> msg : <input type="text" name="msg" value="test"><br> item 2 : <input type="text" name="items" value="1"><br> item 2 : <input type="text" name="items" value="2"><br> <label for="fileInput">Choose file(s) to upload</label> <input type="file" id="fileInput" name="files" multiple> <input type="submit" value="submit"> </form> </body> </html> Test using JavaScript's Fetch API templates/index.html <!DOCTYPE html> <html> <body> <form id="myForm" > msg : <input type="text" name="msg" value="test"><br> item 1 : <input type="text" name="items" value="1"><br> item 2 : <input type="text" name="items" value="2"><br> </form> <label for="fileInput">Choose file(s) to upload</label> <input type="file" id="fileInput" name="files" multiple><br> <input type="button" value="Submit" onclick="submitUsingFetch()"> <p id="resp"></p> <script> function submitUsingFetch() { const resp = document.getElementById("resp"); const fileInput = document.getElementById('fileInput'); const myForm = document.getElementById('myForm'); var formData = new FormData(myForm); for (const file of fileInput.files) formData.append('files', file); fetch('/submit', { method: 'POST', body: formData, }) .then(response => response.json()) .then(data => { resp.innerHTML = JSON.stringify(data); // data is a JSON object }) .catch(error => { console.error(error); }); } </script> </body> </html> Test using Python requests test.py import requests url = 'http://127.0.0.1:8000/submit' data = {'items': ['foo', 'bar'], 'msg': 'Hello!'} files = [('files', open('a.txt', 'rb')), ('files', open('b.txt', 'rb'))] # Send Form data and files r = requests.post(url, data=data, files=files) print(r.text) # Send Form data only r = requests.post(url, data=data) print(r.text)
5
6
77,001,129
2023-8-29
https://stackoverflow.com/questions/77001129/how-to-configure-fastapi-logging-so-that-it-works-both-with-uvicorn-locally-and
I have the following FastAPI application: from fastapi import FastAPI import logging import uvicorn app = FastAPI(title="api") LOG = logging.getLogger(__name__) LOG.info("API is starting up") LOG.info(uvicorn.Config.asgi_version) @app.get("/") async def get_index(): LOG.info("GET /") return {"Hello": "Api"} The application locally is run with: uvicorn api:app --reload INFO: Will watch for changes in these directories: ['/Users/user/code/backend/api'] INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [44258] using StatReload INFO: Started server process [44260] INFO: Waiting for application startup. INFO: Application startup complete. It is not logging any of the startup messages. Later on when sending an HTTP request to the API: INFO: 127.0.0.1:50538 - "POST /api/v1/endpoint HTTP/1.1" 200 OK In the function body there is LOG.info("example") that does not get logged either. Is there a way to make FastAPI logging work with Uvicorn and also in production (independently of the execution environments like Uvicorn)?
1. Setting up the uvicorn logger Straight from the documentation: Logging --log-config <path> - Logging configuration file. Options: dictConfig() formats: .json, .yaml. Any other format will be processed with fileConfig(). Set the formatters.default.use_colors and formatters.access.use_colors values to override the auto-detected behavior. If you wish to use a YAML file for your logging config, you will need to include PyYAML as a dependency for your project or install uvicorn with the [standard] optional extras. --log-level <str> - Set the log level. Options: 'critical', 'error', 'warning', 'info', 'debug', 'trace'. Default: 'info'. --no-access-log - Disable access log only, without changing log level. --use-colors / --no-use-colors - Enable / disable colorized formatting of the log records, in case this is not set it will be auto-detected. This option is ignored if the --log-config CLI option is used. Regarding the log level As shown above, the --log-level flag specifies the lowest severity log message the logger will handle, where trace is the lowest severity/level and critical is the highest one. For instance, if the level is set to info, the logger will only handle info, warning, error and critical messages, whereas debug and trace messages will be ignored. If the level is set to trace, the logger will handle all the messages. Running uvicorn from the command line When running uvicorn using the command line interface, you could set the log level as follows. On a side note, if one would like to disable the "access log" messages only, without changing the log level, they could use the --no-access-log flag (the --access-log flag is enabled by default). Moreover, in order to change the host and/or port, one could do that using --host 0.0.0.0 and/or --port 8000. In the example below, main refers to the filename of the application (e.g., main.py)—see this answer for more details. uvicorn main:app --log-level trace Running uvicorn programmatically To run uvicorn from within a Python program, you could use the following. One could set the logging level, using the log_level flag in uvicorn.run(), as shown below. Again, if one would like to disable the "access log" messages only, they could do that by setting the access_log argument to False (i.e., access_log=False). To change the host and/or port, one could use, for instance, host='0.0.0.0' and/or port=8000. uvicorn.run(app, log_level="trace") 2. Using the uvicorn logger to log custom messages too Uvicorn, as shown in its implementation here, internally uses various loggers such as uvicorn, uvicorn.access, uvicorn.error and uvicorn.asgi. The logger, however, that comes by the name uvicorn.error seems to be the one mostly used by Uvicorn, as shown here and here, for instance, to log various warnings, errors, as well as other type of information. On the other hand, uvicorn.access logger appears to be used for logging HTTP requests; for example, see here. For uvicorn.asgi logger, see here as well. Hence, one could use the uvicorn.error logger to log their own custom messages/errors, as shown in the example below, along with the uvicorn messages (again, the logging level could be changed using the log_level flag in uvicorn.run()) The uvicorn.error logger, as shown in the implementation here, will propagate a message by default to its ancestor logger, i.e., uvicorn. 
On a side note, the parent logger, in this case uvicorn, would normally pass on the message to the highest-level logger, known as the root logger, but the uvicorn logger seems to have propagate flag set to False (see the relevant implementation), meaning that its messages won't propagate to the root logger (which is perfectly fine—as described in the official Python documentation, it is strongly advised that you do not log to the root logger in your library). For the sake of completeness, it should be noted that in order to disable this behaviour—not that you have to—on uvicorn.error logger in the example below, one could set the propagate attribute to False for that logger as well, e.g., logger.propagate = False. main.py from fastapi import FastAPI import uvicorn import logging app = FastAPI(title='api') logger = logging.getLogger('uvicorn.error') @app.get('/') async def main(): logger.info('GET /') # or logger.debug(), logger.error(), etc. return 'success' if __name__ == '__main__': uvicorn.run(app, log_level="trace") 3. Using custom-formatted uvicorn loggers to log custom messages too This approach demonstrates how to customize the uvicorn loggers, as well as use them to log both uvicorn and custom messages. To define a custom format for the uvicorn loggers, one could use the log_config attribute in uvicorn.run() to pass a logging configuration dictionary (i.e., dictConfig()), as shown in the exmaple below, including the various schema details, such as formatters, handlers and loggers. You could then define the uvicorn.error logger in main.py, as demonstrated in the previous section, and use it across your application. For the file handler in the example below, RotatingFileHandler is used, in which: You can use the maxBytes and backupCount values to allow the file to rollover at a predetermined size. When the size is about to be exceeded, the file is closed and a new file is silently opened for output. Rollover occurs whenever the current log file is nearly maxBytes in length; but if either of maxBytes or backupCount is zero, rollover never occurs, so you generally want to set backupCount to at least 1, and have a non-zero maxBytes (by default, the file would grow indefinitely). When backupCount is non-zero, the system will save old log files by appending the extensions ‘.1’, ‘.2’ etc., to the filename. For example, with a backupCount of 5 and a base file name of app.log, you would get app.log, app.log.1, app.log.2, up to app.log.5. The file being written to is always app.log. When this file is filled, it is closed and renamed to app.log.1, and if files app.log.1, app.log.2, etc. exist, then they are renamed to app.log.2, app.log.3 etc. respectively. main.py from fastapi import FastAPI import uvicorn import logging import settings app = FastAPI(title='api') logger = logging.getLogger('uvicorn.error') @app.get('/') async def main(): logger.info('GET /') # or logger.debug(), logger.error(), etc. 
return 'success' if __name__ == '__main__': uvicorn.run(app, log_config=settings.LOGGING_CONFIG) settings.py LOGGING_CONFIG = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'standard': { 'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s' }, 'custom_formatter': { 'format': "%(asctime)s [%(processName)s: %(process)d] [%(threadName)s: %(thread)d] [%(levelname)s] %(name)s: %(message)s" }, }, 'handlers': { 'default': { 'formatter': 'standard', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout', # Default is stderr }, 'stream_handler': { 'formatter': 'custom_formatter', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout', # Default is stderr }, 'file_handler': { 'formatter': 'custom_formatter', 'class': 'logging.handlers.RotatingFileHandler', 'filename': 'app.log', 'maxBytes': 1024 * 1024 * 1, # = 1MB 'backupCount': 3, }, }, 'loggers': { 'uvicorn': { 'handlers': ['default', 'file_handler'], 'level': 'TRACE', 'propagate': False }, 'uvicorn.access': { 'handlers': ['stream_handler', 'file_handler'], 'level': 'TRACE', 'propagate': False }, 'uvicorn.error': { 'handlers': ['stream_handler', 'file_handler'], 'level': 'TRACE', 'propagate': False }, 'uvicorn.asgi': { 'handlers': ['stream_handler', 'file_handler'], 'level': 'TRACE', 'propagate': False }, }, } Custom JSON Formatter (simple) One could have the log messages displayed and/or saved in JSON format, if they wish, by either using a simple JSON format such as: settings.py LOGGING_CONFIG = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'standard': ..., # same as above or customize that as well 'custom_formatter': { 'format': "{'time':'%(asctime)s', 'process_name': '%(processName)s', 'process_id': '%(process)s', 'thread_name': '%(threadName)s', 'thread_id': '%(thread)s','level': '%(levelname)s', 'logger_name': '%(name)s', 'message': '%(message)s'}" }, }, ... # the rest is the same as in the original settings.py above } Custom JSON Formatter (elegant) Or, a more elegant version, as demonstrated previously in this answer and as shown below. Please refer to that answer and this one for further details, as well as the relevant middleware and methods for logging Request and Response information, which would go into the extra parameter when logging messages in the application, for example: logger.info("some msg", extra={'extra_info': get_extra_info(request, response)}) If you don't need that kind of information, please feel free not to use the extra parameter, as well as remove the extra_info part from the get_log() function below. settings.py import logging, json class CustomJSONFormatter(logging.Formatter): def __init__(self, fmt): logging.Formatter.__init__(self, fmt) def format(self, record): logging.Formatter.format(self, record) return json.dumps(get_log(record), indent=2) def get_log(record): d = { "time": record.asctime, "process_name": record.processName, "process_id": record.process, "thread_name": record.threadName, "thread_id": record.thread, "level": record.levelname, "logger_name": record.name, "pathname": record.pathname, "line": record.lineno, "message": record.message, } if hasattr(record, "extra_info"): d["req"] = record.extra_info["req"] d["res"] = record.extra_info["res"] return d LOGGING_CONFIG = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'standard': ..., # same as above or customize that as well 'custom_formatter': { '()': lambda: CustomJSONFormatter(fmt='%(asctime)s') }, }, ... 
# the rest is the same as in the original settings.py above } Output example: { "time": "2024-10-27 11:05:00,300", "process_name": "MainProcess", "process_id": 4102, "thread_name": "AnyIO worker thread", "thread_id": 1147, "level": "INFO", "logger_name": "uvicorn.error", "pathname": "C:\\...", "line": 33, "message": "GET /", "req": { "url": "/", "headers": { "host": "localhost:8000", "user-agent": "Mozilla...", "accept": "text/html,application/xhtml+xml,..." }, "method": "GET", "http_version": "1.1", "original_url": "/", "query": {} }, "res": { "status_code": 200, "status": "OK" } } 4. Using a custom Python logger separate from uvicorn loggers In case one wished having a separate custom Python logger instead of customizing the existing uvicorn loggers, as demonstrated earlier, they would need to add a StreamHandler and/or FileHandler and set the desired level, i.e., DEBUG, INFO, WARNING, etc.—the lowest level offered by Python's logging module is DEBUG, with the default level being WARNING (if one is interested in adding a custom log level, see this post). You could either do that using a dictConfig(), as shown earlier, or directly using the logging's module functions and classes. The following example is based on this answer, which demonstrates how to customize the format of the logging messages in JSON (hence, see that answer, if you are looking for a similar format presented in the previous section), as well as this answer that shows how to log both the request and response bodies in the background. More details and examples can also be found in Python's official documentation page here. You may also want to have a look at all the available LogRecord attributes that can be used to format the logging records. Setting log_level="trace" in uvicorn.run() would set the level of the uvicorn logger to trace, as described earlier—in case one needed that as well. Also, one could still customize the uvicorn loggers, if they wish, using the LOGGING_CONFIG dictionary provided in the previous section and passing it to the settings, i.e., uvicorn.run(..., log_config=settings.LOGGING_CONFIG). In that way, one could get the uvicorn logs in an elegant format and have them saved to a file on disk as well. Working Example from fastapi import FastAPI import logging import uvicorn import sys app = FastAPI() logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) formatter = logging.Formatter("%(asctime)s [%(processName)s: %(process)d] [%(threadName)s: %(thread)d] [%(levelname)s] %(name)s: %(message)s") stream_handler = logging.StreamHandler(sys.stdout) stream_handler.setFormatter(formatter) file_handler = logging.FileHandler("info.log") file_handler.setFormatter(formatter) logger.addHandler(stream_handler) logger.addHandler(file_handler) logger.info('API is starting up') @app.get('/') async def main(): logger.info('GET /') return 'ok' if __name__ == '__main__': uvicorn.run(app, log_level="trace") # or `log_config=settings.LOGGING_CONFIG` 5. Final Notes In each of the above cases, one may wish to initialize the logger at startup inside a lifespan handler, and then add it to request.state, so that it can be accessed outside the main file of the application as well; for instance, from a submodule that uses APIRouter to create endpoints, lying inside a routers package, which is normally the case when building Bigger Applications. To do that, please have a look at this answer.
27
31
76,963,311
2023-8-23
https://stackoverflow.com/questions/76963311/llama-cpp-python-not-using-nvidia-gpu-cuda
I have been playing around with oobabooga text-generation-webui on my Ubuntu 20.04 with my NVIDIA GTX 1060 6GB for some weeks without problems. I have been using llama2-chat models sharing memory between my RAM and NVIDIA VRAM. I installed without much problems following the intructions on its repository. So what I want now is to use the model loader llama-cpp with its package llama-cpp-python bindings to play around with it by myself. So using the same miniconda3 environment that oobabooga text-generation-webui uses I started a jupyter notebook and I could make inferences and everything is working well BUT ONLY for CPU. A working example bellow, from llama_cpp import Llama llm = Llama(model_path="/mnt/LxData/llama.cpp/models/meta-llama2/llama-2-7b-chat/ggml-model-q4_0.bin", n_gpu_layers=32, n_threads=6, n_ctx=3584, n_batch=521, verbose=True), prompt = """[INST] <<SYS>> Name the planets in the solar system? <</SYS>> [/INST] """ output = llm(prompt, max_tokens=350, echo=True) print(output['choices'][0]['text'].split('[/INST]')[-1]) Of course! Here are the eight planets in our solar system, listed in order from closest to farthest from the Sun: Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune Note that Pluto was previously considered a planet but is now classified as a dwarf planet due to its small size and unique orbit. I want to make inference using GPU as well. What is wrong? Why can't I offload to gpu like the parameter n_gpu_layers=32 specifies and also like oobabooga text-generation-webui already does on the same miniconda environment whithout any problems?
After searching around and suffering quite a bit for 3 weeks, I found this issue on its repository. llama-cpp-python needs to know where the libllama.so shared library is. So exporting it before running my python interpreter, jupyter notebook etc. did the trick. For using the miniconda3 installation used by oobabooga text-generation-webui, I exported it like below: export LLAMA_CPP_LIB=/yourminicondapath/miniconda3/lib/python3.10/site-packages/llama_cpp_cuda/libllama.so Voilà!!!! On importing from llama_cpp import Llama I get ggml_init_cublas: found 1 CUDA devices: Device 0: NVIDIA GeForce GTX 1060, compute capability 6.1 And on llm = Llama(model_path="/mnt/LxData/llama.cpp/models/meta-llama2/llama-2-7b-chat/ggml-model-q4_0.bin", n_gpu_layers=28, n_threads=6, n_ctx=3584, n_batch=521, verbose=True), ... llama_model_load_internal: using CUDA for GPU acceleration llama_model_load_internal: mem required = 2381.32 MB (+ 1026.00 MB per state) llama_model_load_internal: allocating batch_size x (512 kB + n_ctx x 128 B) = 480 MB VRAM for the scratch buffer llama_model_load_internal: offloading 28 repeating layers to GPU llama_model_load_internal: offloaded 28/35 layers to GPU llama_model_load_internal: total VRAM used: 3521 MB ...
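If you prefer not to export the variable in the shell, a sketch of setting it from Python before the import should work the same way; the paths below are the ones from this answer and must be adjusted to your own environment:

import os

# must be set before llama_cpp is imported, since the shared library
# is located and loaded at import time
os.environ["LLAMA_CPP_LIB"] = (
    "/yourminicondapath/miniconda3/lib/python3.10/site-packages/"
    "llama_cpp_cuda/libllama.so"
)

from llama_cpp import Llama

llm = Llama(
    model_path="/mnt/LxData/llama.cpp/models/meta-llama2/llama-2-7b-chat/ggml-model-q4_0.bin",
    n_gpu_layers=28,
    n_threads=6,
    n_ctx=3584,
    n_batch=521,
    verbose=True,
)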
12
13
76,976,114
2023-8-25
https://stackoverflow.com/questions/76976114/how-to-get-the-days-between-today-and-a-polars-date
I'm having a bit of trouble with my python code. I originally wrote it using pandas, but I need something a bit faster, so I'm converting it to polars. After reading the mongodb into polars dataframes with race = pl.DataFrame(list(race_coll.find())) and converting the 'date_of_race' column into pl.Date type using race = race.with_columns(pl.col('date_of_race').str.strptime(pl.Date, format='%d %m %Y').cast(pl.Date)) The pandas code that worked was days_between = (pd.to_datetime('today') - race.date_of_race.values[0]) // np.timedelta64(1,'D') I have tried the following: date = pl.DataFrame({"date_of_race": [1], "value": race['date_of_race']}) days_between = (pd.to_datetime('today').normalize() - days_between[0][0]) // np.timedelta64(1,'D') TypeError: 'int' object is not subscriptable days_between = (pd.to_datetime('today').normalize() - race['date_of_race']) // np.timedelta64(1,'D') PanicException: cannot coerce datatypes: ComputeError(ErrString("failed to determine supertype of object and date")) When I print the dates, I get the following: pandas: print(race.date_of_race.values[0]) 2022-10-15T00:00:00.000000000 polars: print(race['date_of_race']) shape: (1,) Series: 'date_of_race' [date] [ 2022-10-15 ] Any help is appreciated
use a Python datetime object for the reference date, and .dt.total_days() to get the days difference. EX: import polars as pl import pandas as pd s = pl.Series([ "2022-10-30T00:00:00", "2022-10-30T01:00:00", "2022-10-30T02:00:00", "2022-10-30T03:00:00", "2022-10-30T04:00:00", "2022-10-30T05:00:00", ]).cast(pl.Datetime) diff = pd.to_datetime('today').normalize().to_pydatetime() - s # could also use the datetime module's date class here via # datetime.today().date() print(diff) # Series: '' [duration[μs]] # [ # 299d # 298d 23h # 298d 22h # 298d 21h # 298d 20h # 298d 19h # ] print(diff.dt.total_days()) # Series: '' [i64] # [ # 299 # 298 # 298 # 298 # 298 # 298 # ]
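Applied to the original DataFrame column, a sketch (the column name is taken from the question; it assumes a polars version that has .dt.total_days()):

from datetime import date
import polars as pl

race = pl.DataFrame({"date_of_race": ["15 10 2022"]}).with_columns(
    pl.col("date_of_race").str.strptime(pl.Date, format="%d %m %Y")
)

race = race.with_columns(
    (pl.lit(date.today()) - pl.col("date_of_race"))
    .dt.total_days()
    .alias("days_between")
)
print(race)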
3
4
76,988,796
2023-8-27
https://stackoverflow.com/questions/76988796/python-polars-number-of-rows-since-last-value-0
Given a polars DataFrame column like df = pl.DataFrame({"a": [0, 29, 28, 4, 0, 0, 13, 0]}) how to get a new column like shape: (8, 2) ┌─────┬──────┐ │ a ┆ dist │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞═════╪══════╡ │ 0 ┆ 1 │ │ 29 ┆ 0 │ │ 28 ┆ 0 │ │ 4 ┆ 0 │ │ 0 ┆ 1 │ │ 0 ┆ 2 │ │ 13 ┆ 0 │ │ 0 ┆ 1 │ └─────┴──────┘ The solution should preferably work with .over() for grouped values and optionally an additional rolling window function like rolling_mean(). I know of the respective question for pandas but couldn't manage to translate it.
Here's one way with rle_id to identify the groups to project over, and only doing so on the 0 groups with a when/then: df = pl.DataFrame({"a": [0, 29, 28, 4, 0, 0, 13, 0]}) df.with_columns( dist=pl.when(pl.col('a') == 0) .then(pl.col('a').cum_count().over(pl.col('a').ne(0).rle_id())) .otherwise(0) ) shape: (8, 2) ┌─────┬──────┐ │ a ┆ dist │ │ --- ┆ --- │ │ i64 ┆ u32 │ ╞═════╪══════╡ │ 0 ┆ 1 │ │ 29 ┆ 0 │ │ 28 ┆ 0 │ │ 4 ┆ 0 │ │ 0 ┆ 1 │ │ 0 ┆ 2 │ │ 13 ┆ 0 │ │ 0 ┆ 1 │ └─────┴──────┘
3
2
76,984,212
2023-8-26
https://stackoverflow.com/questions/76984212/how-to-apply-ip-lookup-using-polars
Given two tables I'd like to conduct a lookup over all ips and find the network it belongs to: I have two large tables: and the following networks: Regarding the ClientIP (First table) I thought of casting the whole column with ip_address Regarding the second column (second table) I thought of casting the whole column with ip_network Something like this: import ipaddress network = ipaddress.ip_network('99.96.0.0/13') ip_obj = ipaddress.ip_address('99.87.29.96') print(ip_obj in network) and then conduct an apply function, but it is very slow especially for tables with this kind of size. I noticed in some databases like KQL, there is a built-in support: ipv4-lookup Is there any kind of builtin support for iplookup in polars? or in pyarrows? any suggestions?
Assuming the network dataframe's IpCidr blocks are not overlapping, you could convert the IPv4 addresses to a pl.Int64 and get the max value within the CIDR block. A function using only a pl.Expr to convert an IPv4 address to pl.Int64 import polars as pl def ip_addr4_int64_expr(ipv4_str_expr: pl.Expr): return ( ipv4_str_expr.str.split(".") .list.eval( pl.element().cast(pl.Int64) * (2 ** (8 * (pl.element().cum_count(reverse=True)))).cast(pl.Int64) ) .list.sum() ) A range of addresses can be derived from the CIDR's prefix by getting the number of available hosts and adding it to the base IPv4's Int64 representation. cidr_split_ipv4_expr = pl.col("IpCidr").str.split("/").list.get(0) cidr_prefix_expr = pl.col("IpCidr").str.split("/").list.get(1).cast(pl.Int64) ip_cidr_df = ip_cidr_df.with_columns( ip_addr4_int64_expr(cidr_split_ipv4_expr).alias("ip_addr4_int64"), ( ip_addr4_int64_expr(cidr_split_ipv4_expr) - 1 + ((2 ** (32 - cidr_prefix_expr)).cast(pl.Int64)) ).alias("cidr_ip_max"), ) client_df = client_df.with_columns( ip_addr4_int64_expr(pl.col("ClientIP")).alias("ip_addr4_int64"), ) Using a join_asof, a range lookup can be done. Then null out values that return above the max IP range. client_df = ( client_df.sort("ip_addr4_int64") .join_asof(ip_cidr_df.sort("ip_addr4_int64"), on="ip_addr4_int64") .select( "ClientIP", "Timestamp", pl.when(pl.col("ip_addr4_int64") <= pl.col("cidr_ip_max")) .then(pl.col("Info")) .alias("Info"), ) ) Examples: ip_cidr_df = pl.DataFrame( { "IpCidr": [ "99.96.0.0/13", "99.88.0.0/13", "1.0.136.0/22", "1.0.128.0/21", "1.0.0.0/24", "10.0.0.0/8", "127.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", ], "Info": [ "ATT-INTERNET4", "ATT-INTERNET4", "TOT-NET TOT Public Company Limit", "TOT-NET TOT Public Company Limit", "CLOUDFLARENET", "The 10.0.0.0/8 Range", "The 127.0.0.0/8 Range", "The 172.16.0.0/12 Range", "The 192.168.0.0/16 Range", ], } ) client_df = pl.DataFrame( { "Timestamp": [ "2023-06-01 00:00:00", "2023-06-01 00:00:00", "2023-06-01 00:00:00", "2023-06-01 00:00:00", "2023-06-30 23:59:00", "2023-06-30 23:59:00", "2023-06-30 23:59:00", ], "ClientIP": [ "1.0.0.14", "99.96.1.5", "99.87.29.96", "10.0.0.1", "127.0.0.1", "172.16.0.1", "192.168.0.1", ], } ) Output: shape: (7, 3) ┌─────────────┬─────────────────────┬──────────────────────────┐ │ ClientIP ┆ Timestamp ┆ Info │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞═════════════╪═════════════════════╪══════════════════════════╡ │ 1.0.0.14 ┆ 2023-06-01 00:00:00 ┆ CLOUDFLARENET │ │ 10.0.0.1 ┆ 2023-06-01 00:00:00 ┆ The 10.0.0.0/8 Range │ │ 99.87.29.96 ┆ 2023-06-01 00:00:00 ┆ null │ │ 99.96.1.5 ┆ 2023-06-01 00:00:00 ┆ ATT-INTERNET4 │ │ 127.0.0.1 ┆ 2023-06-30 23:59:00 ┆ The 127.0.0.0/8 Range │ │ 172.16.0.1 ┆ 2023-06-30 23:59:00 ┆ The 172.16.0.0/12 Range │ │ 192.168.0.1 ┆ 2023-06-30 23:59:00 ┆ The 192.168.0.0/16 Range │ └─────────────┴─────────────────────┴──────────────────────────┘ Note: This answer assumes a dataframe consisting only of IPv4 addresses and no overlapping CIDR blocks in ip_cidr_df. The same logic could be applied by converting IPv6 addresses to a pl.Struct consisting of pl.Int64.
3
5
76,959,447
2023-8-23
https://stackoverflow.com/questions/76959447/how-can-i-reduce-the-amount-of-data-in-a-polars-dataframe
I have a csv file with a size of 28 GB, which I want to plot. That is obviously way too many data points, so how can I reduce the data? I would like to merge about 1000 data points into one by calculating the mean. This is the structure of my DataFrame: df = pl.from_repr(""" ┌─────────────────┬────────────┐ │ Time in seconds ┆ Force in N │ │ --- ┆ --- │ │ f64 ┆ f64 │ ╞═════════════════╪════════════╡ │ 0.0 ┆ 2310.18 │ │ 0.0005 ┆ 2313.23 │ │ 0.001 ┆ 2314.14 │ └─────────────────┴────────────┘ """) I thought about using group_by_dynamic, and then calculating the mean of each group, but this only seems to work when using datetimes? The time in seconds is given as a float however.
You can also group by an integer column to create groups of size N: In case of a group_by_dynamic on an integer column, the windows are defined by: “1i” # length 1 “10i” # length 10 We can add a row index and cast to pl.Int64 to use it. (df.with_row_index() .group_by_dynamic(pl.col.index.cast(pl.Int64), every="2i") .agg("force") ) shape: (4, 2) ┌───────┬────────────┐ │ index ┆ force │ │ --- ┆ --- │ │ i64 ┆ list[str] │ ╞═══════╪════════════╡ │ 0 ┆ ["A", "B"] │ │ 2 ┆ ["C", "D"] │ │ 4 ┆ ["E", "F"] │ │ 6 ┆ ["G"] │ └───────┴────────────┘
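For the original use case (averaging every 1000 samples), a sketch along the same lines; the file name is hypothetical and the column names are taken from the question:

import polars as pl

df = pl.read_csv("measurements.csv")  # hypothetical file name standing in for the 28 GB csv

downsampled = (
    df.with_row_index()
    .group_by_dynamic(pl.col("index").cast(pl.Int64), every="1000i")
    .agg(
        pl.col("Time in seconds").mean(),
        pl.col("Force in N").mean(),
    )
)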
3
1
76,973,907
2023-8-25
https://stackoverflow.com/questions/76973907/type-hint-for-a-matplotlib-color
I'm type-hinting functions in Python, and not sure what type a matplotlib color should be. I have a function like this: def plot_my_thing(data: np.ndarray, color: ???): # function def here What should the type be in ???, when it is meant to be a matplotlib color that you can feed into plt.plot() for the type hint? Right now I'm planning to just use Any. I've searched and not found an answer. There are some discussions at GitHub about it: https://github.com/matplotlib/matplotlib/issues/21505 But that seems like a package-specific problem, though I may not be understanding it.
Matplotlib introduced type hints in 3.8.0. Now, you can use matplotlib.typing.ColorType which is just an alias for all valid matplotlib colors. For more information, see the matplotlib typing api reference.
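A sketch of the function from the question with that alias (requires matplotlib 3.8 or newer):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.typing import ColorType

def plot_my_thing(data: np.ndarray, color: ColorType) -> None:
    # ColorType covers named colors, hex strings, RGB(A) tuples, etc.
    plt.plot(data, color=color)

plot_my_thing(np.arange(10), "tab:blue")
plot_my_thing(np.arange(10), (0.2, 0.4, 0.6, 0.8))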
3
6
76,967,187
2023-8-24
https://stackoverflow.com/questions/76967187/what-is-alternative-of-deleted-moneydecimal-places-display
I'm a bit stuck with hiding the decimals of a Money object. Ex: In [51]: str(Money(123, 'USD')) Out[51]: 'US$123.00' returns with 00 at the end. Previously this was resolved by money_obj.decimal_places_display = 0, but it has been removed in the latest version of djmoney https://django-money.readthedocs.io/en/latest/changes.html#id3 I have tried to use babel's format_currency, but with no success. The decimals are still there: In [54]: from babel.numbers import format_currency ...: format_currency(12345, 'USD', format='¤#') Out[54]: '$12345.00' For now my solution is quite manual; is it possible to make it better? In [55]: from babel.numbers import format_decimal ...: from djmoney.money import Money ...: ...: from utils.constants import CURRENCIES_CHOICES ...: ...: ...: def format_int(money: Money) -> str: ...: amount = format_decimal(round(money.amount), locale='en_GB') ...: currency = format_currency(0, str(money.currency), format='¤') ...: return f'{currency} {amount}' ...: ...: format_int(Money(12345, 'USD')) ...: Out[55]: '$ 12,345'
babel's format_currency allows us to display the rounded amount by passing a format string and currency_digits=False. However, by using a format string we lose the ability to localize the amount and the placement of the currency sign. Example: # Python 3.12.2 # Babel==2.14.0 from babel.numbers import format_currency # This is OK, since in en_GB the currency sign goes before the amount format_currency( currency='USD', currency_digits=False, format='¤#', locale='en_GB', number=42.24, ) # US$42 # This overrides the placement of the currency sign in fr_FR format_currency( currency='USD', currency_digits=False, format='¤#', locale='fr_FR', number=42.24, ) # $US42 If we omit the format string, then we see that in French the currency symbol should go after the amount: format_currency( currency='USD', currency_digits=False, locale='fr_FR', number=42.24, ) # 42,24\xa0$US We can round the amount and keep the localized format by using a helper function, e.g. the function published in this comment or an adapted version which always rounds the amount: from babel import Locale def format_currency_no_decimals( currency, number, locale, **kwargs, ): locale_ = Locale.parse(locale) pattern = locale_.currency_formats["standard"] return pattern.apply( number, locale_, currency=currency, force_frac=(0, 0), **kwargs ) format_currency_no_decimals( currency='USD', locale='en_GB', number=42.24, ) # US$42 format_currency_no_decimals( currency='USD', locale='fr_FR', number=42.24, ) # 42\xa0$US Note: we will get a deprecation warning when calling pattern.apply: DeprecationWarning: The force_frac parameter to NumberPattern.apply() is deprecated., so this solution isn't future proof. We can also override the currency symbol for a specific locale, e.g.: from babel import Locale Locale.parse("en_GB").currency_symbols["USD"] = "$" format_currency(currency='USD', locale='en_GB', number=42.24) # $42.24 format_currency_no_decimals(currency='USD', locale='en_GB', number=42.24) # $42
3
2
77,000,435
2023-8-29
https://stackoverflow.com/questions/77000435/how-to-create-search-options
I'm currently working on having my Discord music bot create a keyword search menu, similar to the one depicted in the image. I'm utilizing the ytsearch from the ytdlp library to retrieve the top five results for a keyword search. The search process often takes longer than 3 seconds, resulting in unsuccessful responses. I'm curious if any of you have effective strategies to reduce this search time. I've attempted to reference this article for guidance. Although the code runs well, the issue of timeouts and failed responses persists, and I'm unsure how to address it. Code snippet: from discord.commands import slash_command import yt_dlp as youtube_dl youtube_dl.utils.bug_reports_message = lambda: '' ytdlopts = { 'format': 'bestaudio/best', 'extractaudio': True, 'outtmpl': 'downloads/%(extractor)s-%(id)s-%(title)s.%(ext)s', 'restrictfilenames': True, 'noplaylist': True, 'nocheckcertificate': True, 'ignoreerrors': False, 'logtostderr': False, 'quiet': True, 'no_warnings': True, 'dump_single_json': True, 'default_search': 'auto', 'postprocessors': [{"key" : "FFmpegExtractAudio", "preferredcodec" : "mp3", "preferredquality" : "256"}], 'buffersize': 16777216, 'source_address': '0.0.0.0' # ipv6 addresses cause issues sometimes } ytdl = youtube_dl.YoutubeDL(ytdlopts) async def song_search(self, ctx): options = [] if ctx.value: to_run = partial(ytdl.extract_info, f"ytsearch5:{ctx.value}", download=False) info_dict = await asyncio.get_event_loop().run_in_executor(None, to_run) if 'entries' in info_dict: # Extract up to 5 items from the list of entries entries = info_dict['entries'][:5] for entry in entries: if 'title' in entry: options.append(entry['title']) print(options) return options @slash_command(name="play", description="play") @option("url",description="url", autocomplete = song_search) async def play_(self, ctx, *, url: str): await ctx.trigger_typing() vc = ctx.voice_client if not vc: await self.join_channel(ctx) await self.load_source_defer(ctx) player = self.get_player(ctx) source = await self.get_music_source(ctx, url) await player.queue.put(source) The error that appears: ['Create Your Own Discord Bot in Python 3.10 Tutorial (2022 Edition)', 'The EASIEST Discord Chat Bot Tutorial On The Internet (Python 3.10) 2023', 'Code a Discord Bot with Python - Host for Free in the Cloud', 'Making a Discord Bot In Python (Part 1: Setup)', 'All you need to know about Buttons in Discord.py & Pycord | Ultimate Python Guide'] Task exception was never retrieved future: <Task finished name='Task-44' coro=<ApplicationCommandMixin.on_application_command_auto_complete.<locals>.callback() done, defined at C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py:853> exception=NotFound('404 Not Found (error code: 10062): Unknown interaction')> Traceback (most recent call last): File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\bot.py", line 856, in callback return await command.invoke_autocomplete_callback(ctx) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\commands\core.py", line 1011, in invoke_autocomplete_callback return await ctx.interaction.response.send_autocomplete_result( File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\interactions.py", line 1017, in send_autocomplete_result await self._locked_response( File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\interactions.py", line 1090, in _locked_response await coro File 
"C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\webhook\async_.py", line 219, in request raise NotFound(response, data) discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction
How to create options in pycord Here's an example that I have below: breed_types = ['German Shepherd', 'Bulldog', 'etc'] # list of breed types to autocomplete dog_breeds = discord.Option(str, autocomplete=discord.utils.basic_autocomplete(breed_types), required=False) # create options using autocomplete util @bot.slash_command(name="dog", description="Random dog picture", ) async def dog(ctx, breed: dog_breeds): await ctx.respond(breed) This is what it outputs: Here's what happens when you input something: So here's how it works. breed_types is a list of different options. Technically, it is a global variable because it's where it's placed scope-wise, but I just put it there for explanation purposes (you could use a function to generate it). Then that gets put into dog_breeds which is then put into autocomplete with the discord utils handler (I cover why I use this later). This allows for autocomplete input of anything in the list. However, it can let the user input anything outside of the list breed_types which can lead to errors. How to make the options non optional If we edit the following lines: def convert_list_to_options(input_list): return [discord.OptionChoice(name=option) for option in input_list] dog_breeds = discord.Option(str, choices=convert_list_to_options(breed_types), required=False) # create options using helper function I've created the above helper function. It converts a string list into discord.OptionChoice() for the attribute choices. The list of available choices for this option. Can be a list of values or OptionChoice objects (which represent a name:value pair). If provided, the input from the user must match one of the choices in the list. Documentation However, if you have a large number of options, the above method may not work. So, instead, you can add the following (to your slash command) and use the first method if breed is not None: if breed not in breed_types: return await ctx.respond('Option does not exist', ephemeral=True) else: (insert code here) Current documentation issues Currently, the documentation says the following: The autocomplete handler for the option. Accepts an iterable of str, a callable (sync or async) that takes a single argument of AutocompleteContext, or a coroutine. Must resolve to an iterable of str. Documentation The: "Must resolve to an iterable of str." is not true. This is currently a code or documentation issue that hasn't been resolved yet. In the meanwhile, use the discord.utils helper function provided above. In context with your problem I've attempted to reference this article for guidance. Although the code runs well, the issue of timeouts and failed responses persists, and I'm unsure how to address it. If your slash command itself is taking time you could use: await ctx.defer() Using the above at the beginning of a slash command will not time the command out. This is typically used when the interaction is acknowledged and a secondary action will be done later. Documentation However, if the slash option itself is timing out, you could try this in your song_search function: await ctx.interaction.response.defer() Looking through the documentation, interaction.response (looked into source too) is of class discord.InteractionResponse
3
0
76,997,449
2023-8-29
https://stackoverflow.com/questions/76997449/invalidate-django-cached-property-in-signal-handler-without-introducing-unnecess
Let's say I have the following Django models: class Team(models.Model): users = models.ManyToManyField(User, through="TeamUser") @cached_property def total_points(self): return self.teamuser_set.aggregate(models.Sum("points"))["points__sum"] or 0 class TeamUser(models.Model): team = models.ForeignKey(Team, on_delete=models.CASCADE) user = models.ForeignKey(User, on_delete=models.CASCADE) points = models.IntegerField() I want to create a signal handler that will invalidate the team.total_points cache when TeamUser object is created/updated/deleted. I started with the following signal handler. Note the Django docs recommended the del instance.prop call. @receiver(post_save, sender=models.TeamUser) @receiver(post_delete, sender=models.TeamUser) def invalidate_cache(**kwargs): try: del kwargs["instance"].team.total_points except AttributeError: pass And some tests. Note I'm using pytest-django. def test_create_team_users(django_assert_num_queries): user = factories.UserFactory() team = factories.TeamFactory() assert team.total_points == 0 with django_assert_num_queries(1): TeamUser.objects.create(team=team, user=user, points=2) assert team.total_points == 2 with django_assert_num_queries(1): TeamUser.objects.create(team=team, user=user, points=3) assert team.total_points == 5 def test_delete_all_team_users(django_assert_num_queries): user = factories.UserFactory() team = factories.TeamFactory() for _ in range(10): TeamUser.objects.create(team=team, user=user, points=2) with django_assert_num_queries(2): TeamUser.objects.all().delete() assert team.total_points == 0 The test_create_team_users test passed but the test_delete_all_team_users test failed because the query count is 12 instead of 2. Yikes! Looks like an N+1 query. To prevent this, I updated my signal handler to only invalidate the team.total_points cache if the user object is cached on the TeamUser object. I found the is_cached method in this SO answer. @receiver(post_save, sender=models.TeamUser) @receiver(post_delete, sender=models.TeamUser) def invalidate_cache(sender, instance, **kwargs): if sender.team.is_cached(instance): try: del instance.team.total_points except AttributeError: pass Now both tests pass! Does this correctly invalidate the team.total_points cache in all cases? Is there an edge case I'm missing?
I think there is a misconception here. cached_property isn't actually a persisting database cache or a Redis cache. It lives and dies within one instance. This usually means a single request. It will be already refreshed in the next request. It's reason for existence is to not call an expensive method more than once, within the same instance. For example, if you had a template that used this team.total_points next to a team's name, but also around footnotes and maybe as html title attribute for the picture so that it shows when you hover over team's picture, with @property, this method would get called 3 times, with @cached_property, this method would only get called once. But it's not going to last for the next request or next page refresh. So "invalidating" it in a signal is most possibly unneeded. Not only unneeded but it won't even work for some cases. This would fail for example: def test_delete_all_team_users(django_assert_num_queries): user = factories.UserFactory() team = factories.TeamFactory() for _ in range(10): TeamUser.objects.create(team=team, user=user, points=2) # this assert will pass, but while doing it, total_points will be cached assert team.total_points == 20 with django_assert_num_queries(2): TeamUser.objects.all().delete() # this will fail now because team.total_points was cached before and the cache isn't cleared assert team.total_points == 0 This is because "instance.team" in the post_delete callback isn't the same team instance as the one you have in the test function. These are separate team instances that are populated by the same database row. So a signal can't work here to delete this cache. It's useless. Instead of a post_delete signal here, you should delete team.total_points within the test function itself, after deleting TeamUsers or before asserting. Then it would pass. def test_delete_all_team_users(django_assert_num_queries): user = factories.UserFactory() team = factories.TeamFactory() for _ in range(10): TeamUser.objects.create(team=team, user=user, points=2) assert team.total_points == 20 with django_assert_num_queries(2): TeamUser.objects.all().delete() del team.total_points # this will pass now assert team.total_points == 0 This too would pass with the signal because django is populating instance.team with the same team instance as the one you have in the test function in here. def test_delete_all_team_users(django_assert_num_queries): user = factories.UserFactory() team = factories.TeamFactory() for _ in range(10): TeamUser.objects.create(team=team, user=user, points=2) assert team.total_points == 20 with django_assert_num_queries(2): team.teamuser_set.all().delete() # TeamUser.objects.all().delete() # this will pass too. because signal cleared the cache this time. assert team.total_points == 0 I should also say that post_delete signal here is also costing you an extra database hit. if you get rid of it, TeamUser.objects.all().delete() should only hit your database once, instead of twice. But you would need to manually delete the cache after you do something that will change it. Besides the test functions, it will very rarely occur so I don't think it'll be much of a miss. Edit: This is not only for deleting but if you are not passing the team instance itself, but passing an id of it perhaps, post_save signals won't work as you expect either. 
This too will fail: def test_create_team_users(django_assert_num_queries): user = factories.UserFactory() team = factories.TeamFactory() assert team.total_points == 0 with django_assert_num_queries(1): # I changed this so that we're passing team_id instead of team # In post_save, team instance will be different. TeamUser.objects.create(team_id=team.id, user=user, points=2) # so this assert will fail assert team.total_points == 2
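If you still want a convenient way to drop the cache on whichever Team instance you are holding, a small helper along these lines could be added to the model; the method name is illustrative and not from the original code:

from functools import cached_property

from django.contrib.auth.models import User
from django.db import models


class Team(models.Model):
    users = models.ManyToManyField(User, through="TeamUser")

    @cached_property
    def total_points(self):
        return self.teamuser_set.aggregate(models.Sum("points"))["points__sum"] or 0

    def invalidate_total_points(self):
        # cached_property stores its value in the instance's __dict__,
        # so popping it is safe even if the property was never accessed
        # (unlike a bare `del`, which raises AttributeError in that case)
        self.__dict__.pop("total_points", None)

The tests can then call team.invalidate_total_points() right after the bulk delete instead of the bare del.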
5
3
76,996,496
2023-8-28
https://stackoverflow.com/questions/76996496/0-360-longitude-labels-in-cartopy-orthographic-projection
I'm trying to produce an orthographic figure using Cartopy. I have basically the entire thing working exactly the way I want it to...except for the labels on the longitude gridlines. While the convention for longitude is often to use 0 - 180 E or 0 - 180 W, the convention on other bodies can sometimes be 0 - 360 degrees. I want to adhere to the latter convention, as to be consistent with the longitude convention in the rest of my work. My problem is that the LongitudeFormatter does not seem to have any option that allows me to simply plot longitude gridlines with 0 - 360 labels. The documentation does give an example of how to accomplish this with PlateCarree (here), but I am using orthographic projection, and attempting to apply the given example just tells me "This formatter cannot be used with non-rectangular projections." That being said, the LongitudeFormatter otherwise seems to work completely fine with the orthographic projection? Here's a simplified example that demonstrates my question. import matplotlib.pyplot as plt import cartopy.crs as ccrs from cartopy.mpl.ticker import (LongitudeFormatter, LatitudeFormatter, LongitudeLocator,LatitudeLocator) plotproj = ccrs.Orthographic(central_longitude=90, central_latitude=45) mgeodetic = ccrs.Geodetic() fig1,ax1 = plt.subplots(1,1,figsize=(10.0,10.0),constrained_layout=True, subplot_kw={"projection":plotproj}) ax1.set_global() #grid lines longlocs = [0,30,60,90,120,150,180,210,240,270,300,330,360] gl = ax1.gridlines(draw_labels=True,x_inline=True, xpadding=0,ypadding=0,xlocs=longlocs) gl.top_labels = False gl.left_labels = False gl.xlocator = LongitudeLocator(nbins=12) gl.ylocator = LatitudeLocator(nbins=12) gl.xformatter = LongitudeFormatter(auto_hide=False) gl.yformatter = LatitudeFormatter(auto_hide=False) ax1.invert_xaxis() #makes longitude run the opposite direction, necessary for my use case This example produces the following figure: I have tried the options available for the LongitudeFormatter. direction_label = False just makes the longitude run from 0 to 180 and -180 to 0. dateline_direction_label and zero_direction_label only affect the 0 and 180 meridians. cardinal_labels is for replacing the existing E and W labels, but doesn't affect the values. I feel like I must be overlooking something obvious- how do I plot 0 to 360 longitude labels?
Q: How do I plot 0 to 360 longitude labels? A: You can replace the labels with new ones that meet your needs. The code given below should do the trick. Place it in before the line ax1.invert_xaxis(). fig1, ax1 = plt.subplots(1, 1, figsize=(10.0, 10.0), constrained_layout=True, subplot_kw={"projection": plotproj}) ax1.set_global() #grid lines gl = ax1.gridlines(draw_labels=True, x_inline=True, xpadding=0, ypadding=0) gl.top_labels = False gl.left_labels = False gl.xlocator = LongitudeLocator(nbins=12) gl.ylocator = LatitudeLocator(nbins=12) gl.xformatter = LongitudeFormatter(auto_hide=False) gl.yformatter = LatitudeFormatter(auto_hide=False) # (Relevant code) # Generate the plot to enable access to the labels' attributes ax1.draw(fig1.canvas.get_renderer()) # The current labels and the new ones to use # If you need degree symbol just add + "°" rep_vals = {'150°W': str(360-150), '120°W': str(360-120), '90°W': str(360-90), '60°W': str(360-60), '30°W': str(360-30), '0°': "0", '30°E': "30", '60°E': "60", '90°E': "90", '120°E': "120", '150°E': "150" } # Iterate through the x-labels for ea in gl.xlabel_artists: #print(ea, ea.get_position()[0], ea.get_visible()) if ea.get_visible(): # If visible, replace it with new label ea.set_text(rep_vals[ea.get_text()]) ax1.invert_xaxis()
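If you'd rather not hard-code the rep_vals mapping, here is a sketch of a small helper that converts whatever label text the formatter produced; gl is the gridliner from the code above (after the ax1.draw(...) call), and the sketch assumes labels look like '150°W', '30°E' or '0°', which may vary with formatter settings:

import re

def to_0_360(label_text):
    # convert '150°W' -> '210', '30°E' -> '30', '0°' -> '0'
    m = re.match(r"(\d+)°([EW])?", label_text)
    if not m:
        return label_text
    deg, hemi = int(m.group(1)), m.group(2)
    return str(360 - deg) if hemi == "W" and deg != 0 else str(deg)

for ea in gl.xlabel_artists:
    if ea.get_visible():
        ea.set_text(to_0_360(ea.get_text()))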
2
2
76,989,170
2023-8-27
https://stackoverflow.com/questions/76989170/change-column-value-depending-on-how-many-times-value-appears-in-other-column
I have a dataframe that looks like this: Container Event A Clean B Dry A Clean A Dry B Clean C Clean C Clean C Clean I want to introduce a new column called 'Temperature', which has the value 4 the first time a container has the event 'Clean' and value 3 for all subsequent 'Clean' events. Dry would always have value 1. The dataframe should look like this: Container Event Temperature A Clean 4 B Dry 1 A Clean 3 A Dry 1 B Clean 4 C Clean 4 C Clean 3 C Clean 3 Reproducible dataframe: d = {'Container': ['A','B','A','A','B','C','C','C'], 'Event': ['Clean', 'Dry', 'Clean', 'Dry', 'Clean', 'Clean', 'Clean', 'Clean']} df = pd.DataFrame(data=d)
make conditions and use np.select import numpy as np cond1 = ~df.duplicated() cond2 = df['Event'].eq('Clean') cond3 = df['Event'].eq('Dry') df['Temperature'] = np.select([cond1 & cond2, cond2, cond3], [4, 3, 1]) df Container Event Temperature 0 A Clean 4 1 B Dry 1 2 A Clean 3 3 A Dry 1 4 B Clean 4 5 C Clean 4 6 C Clean 3 7 C Clean 3 If there are only dry and clean, cond3 does not need to be created. import numpy as np cond1 = ~df.duplicated() cond2 = df['Event'].eq('Clean') df['Temperature'] = np.select([cond1 & cond2, cond2], [4, 3], 1) same result
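An equivalent way to build the "first Clean per container" condition, useful if the frame ever has more columns than just Container and Event (so that a plain df.duplicated() would no longer be enough); this is a sketch using groupby().cumcount(), not part of the original answer:

import numpy as np
import pandas as pd

d = {'Container': ['A','B','A','A','B','C','C','C'],
     'Event': ['Clean', 'Dry', 'Clean', 'Dry', 'Clean', 'Clean', 'Clean', 'Clean']}
df = pd.DataFrame(data=d)

is_clean = df['Event'].eq('Clean')
# first occurrence of each (Container, Event) pair
first_per_pair = df.groupby(['Container', 'Event']).cumcount().eq(0)

df['Temperature'] = np.select([is_clean & first_per_pair, is_clean], [4, 3], default=1)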
2
3
76,985,067
2023-8-26
https://stackoverflow.com/questions/76985067/i-am-having-an-import-error-with-the-fitz-library-in-pycharm
I am having this issue of importing the fitz library in PyCharm. I pip installed PyMuPDF and in my code I added "import fitz" but it is giving me this error: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fitz/_fitz.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fitz/_fitz.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fitz/_fitz.so' (no such file), '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/fitz/_fitz.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) How do I fix this error?
The reason this wasn't working on my Mac mini M2 chip was that there wasn't any wheel supporting that architecture, but there is a new wheel now. Just run this in your terminal if you're having the same problem: pip install --upgrade pymupdf
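A quick sketch to confirm you are running an arm64 build after upgrading (the exact version banner text will differ):

import platform
import fitz  # PyMuPDF

print(platform.machine())  # expect 'arm64' on an Apple Silicon Mac
print(fitz.__doc__)        # PyMuPDF / MuPDF version banner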
2
3
77,002,238
2023-8-29
https://stackoverflow.com/questions/77002238/type-checking-python-function-signatures-of-protocol-subclass-with-mypy
Is there a way to safely type-check a python class which subclasses a protocol? If I define a protocol with a certain method function signature, then implicit subclasses must define a method with a compatible signature: # protocols.py from abc import abstractmethod from dataclasses import dataclass from typing import Protocol class SupportsPublish(Protocol): @abstractmethod def publish(self, topic: str): ... def publish(m: SupportsPublish): m.publish("topic") @dataclass class Publishable: foo: str = "bar" def publish(self): print(self) publish(Publishable()) # ✗ mypy protocols.py # protocols.py:24: error: Argument 1 to "publish" has incompatible type "Publishable"; expected "SupportsPublish" [arg-type] # protocols.py:24: note: Following member(s) of "Publishable" have conflicts: # protocols.py:24: note: Expected: # protocols.py:24: note: def publish(self, topic: str) -> Any # protocols.py:24: note: Got: # protocols.py:24: note: def publish(self) -> Any # Found 1 error in 1 file (checked 1 source file) But, if I explicitly subtype SupportsPublish, mypy does not report a type error: ... @dataclass class Publishable(SupportsPublish): ... # ✗ mypy protocols.py # Success: no issues found in 1 source file Based on this blurb from the PEP, I expected the type checker to find the function signature mismatch: Note that there is little difference between explicit and implicit subtypes, the main benefit of explicit subclassing is to get some protocol methods “for free”. In addition, type checkers can statically verify that the class actually implements the protocol correctly: This is my environment: > mypy --version mypy 1.3.0 (compiled: yes) > python --version Python 3.9.17 I expected mypy to point out the function signature mismatch.
I just want to point out, as was established in the comments, it is not in fact true that if you explicitly subtype SupportsPublish, mypy does not report a type error. The problem is that you weren't type annotating your method, which essentially tells mypy "don't check this". If you do, for example: from dataclasses import dataclass from typing import Protocol class SupportsPublish(Protocol): def publish(self, topic: str) -> None: ... def publish(m: SupportsPublish): m.publish("topic") @dataclass class Publishable: foo: str = "bar" def publish(self) -> None: print(self) Then mypy will complain: (py311) Juans-MBP:~ juan$ mypy foo.py foo.py:18: error: Argument 1 to "publish" has incompatible type "Publishable"; expected "SupportsPublish" [arg-type] foo.py:18: note: Following member(s) of "Publishable" have conflicts: foo.py:18: note: Expected: foo.py:18: note: def publish(self, topic: str) -> None foo.py:18: note: Got: foo.py:18: note: def publish(self) -> None Found 1 error in 1 file (checked 1 source file) Because this is a requirement of just regular subclassing with method overriding. If you aren't going to run mypy with full --strict mode, at least somehow (through how it is invoked or by using a mypy.ini) make sure you have --disallow-untyped-defs or --disallow-untyped-calls
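With the annotations in place, explicit subclassing also gets checked at class-definition time, which is the static verification the PEP excerpt refers to. A sketch (mypy's exact wording may differ between versions):

from typing import Protocol

class SupportsPublish(Protocol):
    def publish(self, topic: str) -> None: ...

class Publishable(SupportsPublish):
    # mypy: Signature of "publish" incompatible with supertype "SupportsPublish"
    def publish(self) -> None:
        print(self)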
7
3
76,997,401
2023-8-29
https://stackoverflow.com/questions/76997401/how-to-use-mplcursors-to-annotate-with-a-complete-dataframe-row-in-a-multigrid-p
I'm trying to plot a multi-dimensional scatterplot across several visual properties (facets, hue, shape, x, y). I'm also trying to get a tooltip on cursor hover to show additional properties of the point. (I'm using seaborn + mplcursors, but I'm not married to this solution.) The problem is that the hover has the wrong index in the dataset and displays the wrong information. You can see the same in the following toy example assembled from two examples from the seaborn and mplcursors websites. I believe I've diagnosed the issue to the cursor.connect() not returning the proper index in the dataframe. I can get this example to work if I reduce the number of modifiers (hue, col, row, etc), but it doesn't work with all of these included. import seaborn as sns import matplotlib.pyplot as plt import mplcursors df = sns.load_dataset("tips") sns.relplot(data=df, x="total_bill", y="tip", hue="day", col="time", row="sex") def show_hover_panel(get_text_func=None): cursor = mplcursors.cursor( hover=2, # Transient annotation_kwargs=dict( bbox=dict( boxstyle="square,pad=0.5", facecolor="white", edgecolor="#ddd", linewidth=0.5, ), linespacing=1.5, arrowprops=None, ), highlight=True, highlight_kwargs=dict(linewidth=2), ) if get_text_func: cursor.connect( event="add", func=lambda sel: sel.annotation.set_text(get_text_func(sel.index)), # <- this doesn't appear to return the correct integer index in the dataframe ) return cursor def on_add(index): item = df.iloc[index] parts = [ f"total_bill: {item.total_bill}", f"tip: {item.tip}", f"day: ${item.day}", f"time: ${item.time}", f"sex: ${item.sex}", ] return "\n".join(parts) show_hover_panel(on_add) plt.show() What I tried: minimum viable example removing modifiers = works traced back the correct point locations based on the data BUT when I pass the index to the tooltip I notice that the index doesn't correspond to the proper index in he dataframe.
sns.relplot returns a FacetGrid which contains an axes_dict. That's a dictionary that for each column and row tells which is the corresponding subplot (ax). Based on this, you can create a new dictionary that maps the ax to the corresponding subset of the dataframe. (Note that this might occupy a lot of extra memory for a large dataframe.) The selected artist in mplcursors keeps a reference to the subplot (sel.artist.axes) which can be used as a key in the new dictionary. Here is how the example could look. The annotation function is now larger, so it needs its own function. import seaborn as sns import matplotlib.pyplot as plt import mplcursors df = sns.load_dataset("tips") g = sns.relplot(data=df, x="total_bill", y="tip", hue="day", col="time", row="sex") # create a dictionary mapping subplots to their corresponding subset of the dataframe subplot_df_dict = dict() for (sex, time), ax in g.axes_dict.items(): subplot_df_dict[ax] = df[(df['sex'] == sex) & (df['time'] == time)].reset_index(drop=True) def show_annotation(sel): ax = sel.artist.axes item = subplot_df_dict[ax].iloc[sel.index] parts = [ f"total_bill: {item.total_bill}", f"tip: {item.tip}", f"day: ${item.day}", f"time: ${item.time}", f"sex: ${item.sex}", ] sel.annotation.set_text("\n".join(parts)) def show_hover_panel(show_annotation_func=None): cursor = mplcursors.cursor( hover=2, # Transient annotation_kwargs=dict( bbox=dict( boxstyle="square,pad=0.5", facecolor="white", edgecolor="#ddd", linewidth=0.5, ), linespacing=1.5, arrowprops=None, ), highlight=True, highlight_kwargs=dict(linewidth=2), ) if show_annotation_func is not None: cursor.connect( event="add", func=show_annotation_func ) return cursor show_hover_panel(show_annotation) plt.show()
2
2
77,002,835
2023-8-29
https://stackoverflow.com/questions/77002835/im-learning-python-web-scraping-it-shows-attributeerror-when-i-scrapy-crawl-a
I'm learning python scraping with scrapy. I did exacly the same thing as the tutorial teaches. But I got an error. Please help! My Python code: import scrapy class BookSpider(scrapy.Spider): name = "books" allowed_domains = ["books.toscrape.com"] start_urls = ["https://books.toscrape.com"] def parse(self, response): books = response.css("article.product_pod") for book in books: yield{ "name":book.css("h3 a::text").get(), "price":book.css(".product_price .price_color::text").get(), "url": book.css("h3 a").attrib["href"], } The terminal shows Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "C:\Users\Administrator\python\venv\bookscraper\Scripts\scrapy.exe\__main__.py", line 7, in <module> File "C:\Users\Administrator\python\venv\bookscraper\Lib\site-packages\scrapy\cmdline.py", line 161, in execute _run_print_help(parser, _run_command, cmd, args, opts) File "C:\Users\Administrator\python\venv\bookscraper\Lib\site-packages\scrapy\cmdline.py", line 114, in _run_print_help func(*a, **kw) File "C:\Users\Administrator\python\venv\bookscraper\Lib\site-packages\scrapy\cmdline.py", line 169, in _run_command cmd.run(args, opts) File "C:\Users\Administrator\python\venv\bookscraper\Lib\site-packages\scrapy\commands\crawl.py", line 30, in run self.crawler_process.start() File "C:\Users\Administrator\python\venv\bookscraper\Lib\site-packages\scrapy\crawler.py", line 390, in start install_shutdown_handlers(self._signal_shutdown) File "C:\Users\Administrator\python\venv\bookscraper\Lib\site-packages\scrapy\utils\ossignal.py", line 19, in install_shutdown_handlers reactor._handleSignals() ^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'AsyncioSelectorReactor' object has no attribute '_handleSignals' The ossignal.py file: import signal signal_names = {} for signame in dir(signal): if signame.startswith("SIG") and not signame.startswith("SIG_"): signum = getattr(signal, signame) if isinstance(signum, int): signal_names[signum] = signame def install_shutdown_handlers(function, override_sigint=True): """Install the given function as a signal handler for all common shutdown signals (such as SIGINT, SIGTERM, etc). If override_sigint is ``False`` the SIGINT handler won't be install if there is already a handler in place (e.g. Pdb) """ from twisted.internet import reactor reactor._handleSignals() signal.signal(signal.SIGTERM, function) if signal.getsignal(signal.SIGINT) == signal.default_int_handler or override_sigint: signal.signal(signal.SIGINT, function) # Catch Ctrl-Break in windows if hasattr(signal, "SIGBREAK"): signal.signal(signal.SIGBREAK, function)
As pointed out in my comment, the issue you are describing is already being tackled by scrapy here and has to do with one of its dependencies, twisted (a day ago, a new version was released, 23.8.0, which seems to cause the issue). Another user fixed the issue by installing a previous version of twisted (see here). Basically, he installed the following version of twisted, which fixed his issue. pip install Twisted==22.10.0 Until the issue is fixed and a new version is released, I suggest using the previous version.
8
22
77,002,315
2023-8-29
https://stackoverflow.com/questions/77002315/get-2d-gauss-legendre-quadrature-points-and-weights-using-numpy
NumPy provides the np.polynomial.legendre.leggauss() function to compute the sample points and weights for Gauss-Legendre quadrature. I'm trying to use this function to get the 2D points and weights for a quadrilateral. I am able to use the function as shown here: def gauss2D(n): x, w = np.polynomial.legendre.leggauss(n) weights = [] gauss_pts = [] for i in range(n): for j in range(n): wts = w[i] * w[j] weights.append(wts) g = [x[i], x[j]] gauss_pts.append(g) return np.array(weights), np.array(gauss_pts) Which outputs the following for n=2 integration points: weights [1. 1. 1. 1.] points [[-0.57735027 -0.57735027] [-0.57735027 0.57735027] [ 0.57735027 -0.57735027] [ 0.57735027 0.57735027]] If it's possible, I would like to get rid of the for-loops by using NumPy arrays but my attempt (see function below) does not generate the correct results. def gauss2Dnumpy(n): x, w = np.polynomial.legendre.leggauss(n) weights = np.concatenate((w, w)) gauss_pts = np.zeros((n*2, n)) for i in range(n*2): for j in range(n): gauss_pts[i] = x[j], x[j] return weights, gauss_pts Is there a way to accomplish this without the for-loops?
This would be: x, w = np.polynomial.legendre.leggauss(n) gauss_pts = np.array(np.meshgrid(x,x,indexing='ij')).reshape(2,-1).T weights = (w*w[:,None]).ravel()
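Wrapped back into the original function's shape, a sketch plus a quick check against the loop version's output:

import numpy as np

def gauss2D_vectorized(n):
    x, w = np.polynomial.legendre.leggauss(n)
    gauss_pts = np.array(np.meshgrid(x, x, indexing='ij')).reshape(2, -1).T
    weights = (w * w[:, None]).ravel()
    return weights, gauss_pts

weights, gauss_pts = gauss2D_vectorized(2)
print(weights)    # [1. 1. 1. 1.]
print(gauss_pts)  # the same four (+/-0.57735027, +/-0.57735027) points as before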
2
4
77,001,865
2023-8-29
https://stackoverflow.com/questions/77001865/python-call-class-function-as-other-classes-attribute
Is it possible to create a construction like this, where a method foo of a class A is used to set the value of an attribute of another class B? Then this method should be called on an instance of A without calling A.foo() directly. class A: def foo(self): print("Hello") class B: def __init__(self): self.func = A.foo def bar(self): obj = A() obj.self.func() My use case is that in the __init__() method of B different functions of A can be chosen to set the value of the attribute self.func by an if-else statement; this is because calling the method of class A directly is not possible, since it is not clear which one will be chosen.
In this case, you could just pass the instance explicitly: class A: def foo(self): print("Hello") class B: def __init__(self): self.func = A.foo def bar(self): obj = A() self.func(obj)
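Putting that together with the if-else selection described in the question, here is a sketch; the second method name (baz) is made up for illustration:

class A:
    def foo(self):
        print("Hello")

    def baz(self):
        print("Goodbye")

class B:
    def __init__(self, use_foo=True):
        # store the unbound function; it gets bound by passing the instance later
        self.func = A.foo if use_foo else A.baz

    def bar(self):
        obj = A()
        self.func(obj)

B(use_foo=True).bar()   # Hello
B(use_foo=False).bar()  # Goodbye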
2
3
76,996,584
2023-8-28
https://stackoverflow.com/questions/76996584/multiple-tags-for-beatifulsoup
import os from bs4 import BeautifulSoup # Get a list of all .htm files in the HTML_bak folder html_files = [file for file in os.listdir('HTML_bak') if file.endswith('.htm')] # Loop through each HTML file for file_name in html_files: input_file_path = os.path.join('HTML_bak', file_name) output_file_path = os.path.join('HTML', file_name) # Read the input file with errors='ignore' with open(input_file_path, 'r', encoding='utf-8', errors='ignore') as input_file: input_content = input_file.read() # Parse the input content using BeautifulSoup with html5lib parser soup = BeautifulSoup(input_content, 'html5lib') main_content = soup.find('div', style='position:initial;float:left;text-align:left;overflow-wrap:break-word !important;width:98%;margin-left:5px;background-color:#FFFFFF;color:black;') # Overwrite the output file with modified content with open(output_file_path, 'w', encoding='utf-8') as output_file: output_file.write(str(main_content)) This code correctly scans HTML files in a folder and only pulls in the desired div based on style. However, there are sometimes tags within this div tag that I want to remove. Those tags appear as: <div class="gmail_quote">2010/2/11 some text here .... </div> How can I edit my code to also remove these tags with gmail_quote class? Update 8/29/23: I am copying an example HTML content to make sure my question is clear. I want to keep the contents of the <div style="position:initial.... after <body bgColor=#ffffff> and remove the contents of the <div class="gmail_quote">2010/2/11 ... <html><body style="background-color:#FFFFFF;"><div></div></body></html><article style="width:100%;float:left; position:left;background-color:#FFFFFF; margin: 0mm 0mm 0mm 0mm; "><style> @media print { pre { overflow-x:break-word; white-space:pre; white-space:hp-pre-wrap; white-space:-moz-pre-wrap; white-space:-o-pre-wrap; white-space:-pre-wrap; white-space:pre-wrap; word-wrap:break-word;} }pre { overflow-x:break-word; white-space:pre; white-space:hp-pre-wrap; white-space:-moz-pre-wrap; white-space:-o-pre-wrap; white-space:-pre-wrap; white-space:pre-wrap; word-wrap:break-word;} @page {size: auto; margin: 12mm 4mm 12mm 6mm; } </style> <div style="position:initial;float:left;background-color:transparent;text-align:left;width:100%;margin-left:5px;"> <html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8;"><style> .hdrfldname{color:black;font-size:20px; line-height:120%;} .hdrfldtext{overflow-wrap:break-word;color:black;font-size:20px;line-height:120%;} </style></head> <body bgColor=#ffffff> <div style="position:initial;float:left;text-align:left;font-weight:normal;width:100%;background-color:#eee9e9;"> <span class='hdrfldname'>SUBJECT: </span><span class='hdrfldtext'>lorem ipsum</span><br> <span class='hdrfldname'>FROM: </span><span class='hdrfldtext'>lorem ipsum</span><br> <span class='hdrfldname'>TO: </span><span class='hdrfldtext'>lorem ipsum</span><br> <span class='hdrfldname'>DATE: </span><span class='hdrfldtext'>2010/02/12 09:10</span><br> </div></body></html> </div> <div style="position:initial;float:left;text-align:left;overflow-wrap:break-word !important;width:98%;margin-left:5px;background-color:#FFFFFF;color:black;"><br> <html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8;"><style> pre { overflow-x:break-word; white-space:pre; white-space:hp-pre-wrap; white-space:-moz-pre-wrap; white-space:-o-pre-wrap; white-space:-pre-wrap; white-space:pre-wrap; word-wrap:break-word;} </style></head><body bgColor=#ffffff> <div> lorem ipsum 
</div> <div class="gmail_quote">2010/2/11 lorem ipsum<span dir="ltr">&lt;<a style="max-width:100%;" href="lorem ipsum">lorem ipsum</a>&gt;</span><br> </body></html> </div> </article> <div>&nbsp;<br></div>
You can modify your code to remove the div tags with the gmail_quote class by using the decompose() method of the BeautifulSoup library. This method removes a tag from the tree and then completely destroys it and its contents. # Find and remove all div tags with class gmail_quote for tag in main_content.find_all('div', {'class': 'gmail_quote'}): tag.decompose() Add this code below your main_content line. This should remove all div tags with the gmail_quote class from the main_content before writing it to the output file.
2
2
76,998,938
2023-8-29
https://stackoverflow.com/questions/76998938/select-closest-element-between-2-numpy-arrays-of-differing-sizes
I have 2 numpy arrays of different sizes. One of them 'a' contains int values, while the other (larger) np array 'b' contains float values with (say) 3-4 values per element/value in 'a'. a = np.random.randint(low = 1, high = 100, size = (7)) a # array([35, 11, 48, 20, 13, 31, 49]) b = np.array([34.78, 34.8, 35.1, 34.99, 11.3, 10.7, 11.289, 18.78, 19.1, 20.05, 12.32, 12.87, 13.5, 31.03, 31.15, 29.87, 48.1, 48.5, 49.2]) a.shape, b.shape # ((7,), (19,)) The idea is to find the value in 'b' matching each unique value in 'a' in terms of closest distance, which I am computing using abs values. To do it using a single element of 'a', or using the first element of 'a': # Compare first element of a with all elements of b- np.abs(a[0] - b).argsort() ''' array([ 3, 2, 1, 0, 14, 13, 15, 16, 17, 18, 9, 8, 7, 12, 11, 10, 4, 6, 5]) ''' # b[3] # 34.99 # np.abs(a[0] - b).argsort()[0] # 3 b[np.abs(a[0] - b).argsort()[0]] # 34.99 So, the 4th element in 'b' (b[3]) is the closest match to a[0]. To compute this for all values in 'a', I use a loop as: for e in a: idx = np.abs(e - b).argsort() print(f"{e} has nearest match = {b[idx[0]]:.4f}") ''' 35 has nearest match = 34.9900 11 has nearest match = 11.2890 48 has nearest match = 48.1000 20 has nearest match = 20.0500 13 has nearest match = 12.8700 31 has nearest match = 31.0300 49 has nearest match = 49.2000 ''' How can I achieve this without the slow for loop? Note: a.shape = 1400 and b.shape = 1.5 million (approxmately)
If you have a lot of values to check you could also try using a kdTree. For few values the overhead of building the tree won't make this worth it, but for large n and especially for searching for nearest neighbors in a multidimensional space it's a lot quicker than computing the distance for all pairs: import numpy as np from scipy import spatial a = np.random.randint(low = 1, high = 100, size = (7)) b = np.array([34.78, 34.8, 35.1, 34.99, 11.3, 10.7, 11.289, 18.78, 19.1, 20.05, 12.32, 12.87, 13.5, 31.03, 31.15, 29.87, 48.1, 48.5, 49.2]) kd_tree = spatial.KDTree(np.expand_dims(b, 1)) # This returns distance to the nearest neighbor d # and position of the nearest neighbor i d,i = kd_tree.query(np.expand_dims(a, 1), k=[1])
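To print the matches in the same form as the question's loop, a short continuation of the snippet above:

for value, dist, idx in zip(a, d.ravel(), i.ravel()):
    print(f"{value} has nearest match = {b[idx]:.4f} (distance {dist:.4f})")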
2
1
76,995,416
2023-8-28
https://stackoverflow.com/questions/76995416/why-does-pandas-multiindex-slice-require-a-column-placeholder-when-selecting-all
Consider this Pandas data frame with a MultiIndex: df = pd.DataFrame({'x': [1, 2, 3, 4]}) arrays = [[1, 1, 2, 2], [False, True, False, True]] df.index = pd.MultiIndex.from_arrays(arrays, names=('grp', 'is_even')) df x grp is_even 1 False 1 True 2 2 False 3 True 4 I can select a specific entry - say, grp == 1 & is_even == True: df.loc[(1, True)] x 2 Name: (1, True), dtype: int64 And with slice or pd.IndexSlice notation, I can select a specific value for the first index level (grp == 1), along with all values of the second level: # with slice() df.loc[slice(1), slice(None)] x grp is_even 1 False 1 True 2 # with pd.IndexSlice idx = pd.IndexSlice df.loc[idx[1, :]] # note - correct rows selected but grp index level not shown x is_even False 1 True 2 But when I try to select all values of the first index level, and a specific value for the second index level (e.g. is_even == True), this syntax pattern fails (KeyError on the second index level's value). df.loc[idx[:, True]] # also throws KeyError with df.loc[(slice(None), slice(True))] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) ~/anaconda3/envs/betterup-example-analysis/lib/python3.6/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2645 try: -> 2646 return self._engine.get_loc(key) 2647 except KeyError: # ... The error trace is long (and reproducible with this code if it's helpful to see it all), but one segment has a couple of comments that hint at where the problem is (in bold): ~/anaconda3/envs/betterup-example-analysis/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_lowerdim(self, tup) 1371 # we may have a nested tuples indexer here 1372 if self._is_nested_tuple_indexer(tup): -> 1373 return self._getitem_nested_tuple(tup) 1374 1375 # we maybe be using a tuple to represent multiple dimensions here After some experimentation, I found that adding a column placeholder resolves the error. (Doing this also restores the missing first index level in the IndexSlice example above.) df.loc[idx[:, True], :] x grp is_even 1 True 2 2 True 4 I also found I can get there with query: df.query('is_even == True') So I'm happy I have a fix, but my Pandas-fu isn't strong enough to understand why the error happens without including the column placeholder. If someone can help me grok what's happening here, I'd really appreciate it! Pandas version: 1.0.5 Python version: 3.6.15
idx[:, True] is actually just a wrapper to create: (slice(None, None, None), True) So when you pass it to loc, this uses the first part for the index and the second for the columns. By doing df.loc[idx[:, True], :] you make it explicit that the columns slicer is : A further demonstration is trying to run df.loc[idx[:, 'x']], which will correctly slice the column x. When running df.loc[slice(1, True), 'x'], you are not using 1 for the first level and True for the second but rather doing: df.loc[1:True, 'x'] If you want to get the item at index (1, True) and column x, use: df.loc[(1, True), 'x'] # 2 If you want all values of the first level, just the True in the second level and column x: df.loc[(slice(None), True), 'x'] grp is_even 1 True 2 2 True 4 Name: x, dtype: int64
2
3
76,994,428
2023-8-28
https://stackoverflow.com/questions/76994428/why-limit-to-last-not-working-in-firestore
When running the code below I get the first item not the last. Why is that? async def example(): db: google.cloud.firestore.AsyncClient = AsyncClient() await db.document("1/1").set({"a": "1"}) await db.document("1/2").set({"a": "2"}) last = ( await ( db.collection("1") .order_by("a") .limit_to_last(1) # <= not working - getting the first not last .get() ) )[0].to_dict() first = ( await ( db.collection("1") .order_by("a") .limit(1) .get() ) )[0].to_dict() print(first, last) # {'a': '1'} {'a': '1'} asyncio.run(example()) The workaround is the use of descending ordering with limit(). But I don't understand - what is the purpose of the limit_to_last() function?
If your database schema looks like this: db | --- 1 (collection) | --- 1 (document) | | | --- a: "1" | --- 2 (document) | --- a: "2" Then the following query: db.collection("1").order_by("a").limit_to_last(1) Will indeed return: {'a': '2'} Because the last element in the results is the second document. I tested and it worked. However, when you want to store numbers, it's not recommended to store them as strings but as numbers. Why? Because when you order strings, the ordering is lexicographic, not numeric.
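A tiny sketch of that ordering difference (plain Python, just for illustration):
print(sorted(["1", "2", "10"]))  # ['1', '10', '2'] -- strings compare character by character
print(sorted([1, 2, 10]))        # [1, 2, 10]       -- numbers compare by value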
2
3
76,989,290
2023-8-27
https://stackoverflow.com/questions/76989290/how-to-fool-issubclass-checks-with-a-magicmock
I have something like this: from unittest.mock import MagicMock class A: pass class B(A): pass mock_B = MagicMock(spec_set=B) assert issubclass(mock_B, A) # TypeError: issubclass() arg 1 must be a class How can I get this to pass? The question isinstance and Mocking has various answers on this topic, but from there I can't figure it out.
The problem here is that you want to treat mock_B, an instance of MagicMock, as a class, but an object in Python can only be considered as a class if it is an instance of type, which mock_B is simply not. So instead of trying to make a Mock object class-like, which is technically impossible since Python does not allow dynamically changing the type of an object, a workaround would be to make a class Mock-like. To do that, we can create a class with the same name and base classes as the class you want to mock, but then delegate all of its attribute lookups to a Mock object by making the __getattribute__ method of the class' metaclass perform such a delegation. This way, the new class passes the issubclass check easily because it is really a subclass of the intended base class A, and yet it also behaves like a Mock object because all attribute lookups are delegated to one. However, one problem with delegating all attribute lookups to a Mock object is that the resulting "class" would not behave like a class as the Mock object has no class-like attributes. We can solve this by defaulting attributes that aren't available in the Mock object to the class being mocked instead: from unittest.mock import MagicMock def mock_class(cls): class meta(type): def __getattribute__(self, name): try: return getattr(mock, name) except AttributeError: return getattr(cls, name) mock = MagicMock(spec_set=cls) return meta(cls.__name__, cls.__bases__, {}) so that: class A: pass class B(A): def foo(self): pass mock_B = mock_class(B) assert issubclass(mock_B, A) print(mock_B) print(mock_B.foo) print(mock_B.__name__) print(mock_B.__bases__) passes the issubclass assertion and outputs: <class '__main__.B'> <MagicMock name='mock.foo' id='1970990059024'> B (<class '__main__.A'>,) Demo: Try it online!
2
3
76,996,744
2023-8-29
https://stackoverflow.com/questions/76996744/using-a-list-comprehension-with-if-to-create-a-list-of-items-without-duplicati
I have a list word_list = ['cat', 'dog', 'rabbit']. I want to use list comprehension to print each individual character from the list but removes any duplicate character. This is my code: word_list = ['cat', 'dog', 'rabbit'] letter_list = [""] letter_list = [letter for word in word_list for letter in word if letter not in letter_list ] print(letter_list) this returns ['c', 'a', 't', 'd', 'o', 'g', 'r', 'a', 'b', 'b', 'i', 't'] which is not the desired result ['c', 'a', 't', 'd', 'o', 'g', 'r', 'b', 'i'] and I can't figure out why.
It is technically possible to implement deduplication with a list comprehension if initialization of some variables is allowed. You can use a set seen to keep track of letters already encountered, and a set include to record whether the current letter was already seen before it is added to the set seen: seen = set() include = set() print([ letter for word in word_list for letter in word if ( include.clear() if letter in seen else include.add(1), seen.add(letter) ) and include ]) Since Python 3.8 you can also use an assignment expression to avoid having to rely on side effects of functions, which are generally discouraged in a list comprehension: seen = set() print([ letter for word in word_list for letter in word if ( include := letter not in seen, seen := seen | {letter} ) and include ]) But if you are not dead set on implementing the deduplication with a list comprehension, it would be cleaner to use the dict.fromkeys method instead since dict keys are always unique and follow insertion order since Python 3.7: from itertools import chain print([*{}.fromkeys(chain(*word_list))]) Demo: Try it online!
5
4
76,992,029
2023-8-28
https://stackoverflow.com/questions/76992029/python-regex-matching-with-optional-prefix-and-suffix
I have a regular expression that matches parts of a string (specifically peptide sequences with modifications) and I want to use re.findall to get all parts of the string: The sequence can start with an optional prefix that is any non-capital-letter string followed by -. And the sequence can also end with an optional suffix that starts with a - followed by a non-capital-letter string. The rest of the sequence should be split by capital letters with an optional prefix for each. E.g. "foo-ABcmCD-bar" -> ['foo-','A','B','cmC','D','-bar'] "DEF" -> ['','D','E','F',''] "WHATEVER-foo" -> ['', 'W', 'H', 'A', 'T', 'E', 'V', 'E', 'R', '-foo'] "cmC-foo" -> ['', 'cmC', '-foo'] "ac-cmC-foo" -> ['ac-', 'cmC', '-foo'] What I have is: (?:(^(?:[^A-Z]+-)?)|((?:-[^A-Z]+)?$)|((?:[^A-Z]*)?[A-Z])) Capturing group 1 (^(?:[^A-Z]+-)?) is supposed to catch the optional prefix or an empty string. Capturing group 2 ((?:-[^A-Z]+)?$) is supposed to catch the optional suffix or an empty string. Capturing group 3 ((?:[^A-Z]*)?[A-Z]) is supposed to catch any capital character in the rest of the string that could have a substring of non-capital characters in front. I get the optional prefix or empty string. The suffix seems almost to work - BUT if there is a suffix, the end of line is matched twice: once with the suffix and once with an empty string. >>> re.findall(r,"foo-ABC-bar") ['foo-', 'A', 'B', 'C', '-bar', ''] >>> re.findall(r,"ABC-bar") ['', 'A', 'B', 'C', '-bar', ''] >>> re.findall(r,"ABcmC") ['', 'A', 'B', 'cmC', ''] I.e. how do I get rid of the extra empty string, or why is the $ matched twice? example: https://regex101.com/r/koZPOD/1
This question already have three answers, all about how to write a regex that works, but none about how to fix your regex. Yours is almost correct: It just needs a tiny modification. [...] why is the $ matched twice? This is the relevant part: (?:-[^A-Z]+)?$ As you see, you are matching an optional pattern before $, a zero-width assertion. That said, after matching -bar in foo-A-bar, the engine proceeds to the position right behind it, where it finds no -[^A-Z]+ but, again, $. Since the former is optional, this is recorded as another match. [...] how do I get rid of the extra empty string[?] We explicitly tell the engine to match $ iff it is preceded by either -[^A-Z]+ (i.e. has suffix), or something that is not [^A-Z] (i.e. no suffix): (?: (^(?:[^A-Z]+-)?) | ((?:-[^A-Z]+|(?<![^A-Z]))$) | ((?:[^A-Z]*)?[A-Z]) ) Try it on regex101.com. (regex101.com's Python flavor doesn't reflect the actual result so I used PCRE2 instead.) Also, the outermost (?: ) and the (?: )? in (?:[^A-Z]*)? are unnecessary; you can remove them entirely. (?<![^A-Z]) can also be simplified as (?<=[A-Z]). (^(?:[^A-Z]+-)?) | ((?:-[^A-Z]+|(?<=[A-Z]))$) | ([^A-Z]*[A-Z]) Try it on regex101.com. Remove the capturing groups to make it .findall()-friendly: pattern = re.compile(r'^(?:[^A-Z]+-)?|(?:-[^A-Z]+|(?<=[A-Z]))$|[^A-Z]*[A-Z]') for testcase in testcases: print(f'{testcase!r:<16}: {pattern.findall(testcase)}') 'foo-ABcmCD-bar': ['foo-', 'A', 'B', 'cmC', 'D', '-bar'] 'DEF' : ['', 'D', 'E', 'F', ''] 'WHATEVER-foo' : ['', 'W', 'H', 'A', 'T', 'E', 'V', 'E', 'R', '-foo'] 'foo-ABC-bar' : ['foo-', 'A', 'B', 'C', '-bar'] 'ABC-bar' : ['', 'A', 'B', 'C', '-bar'] 'ABcmC' : ['', 'A', 'B', 'cmC', ''] 'foo-abCD' : ['foo-', 'abC', 'D', ''] 'abCD' : ['', 'abC', 'D', '']
4
4
76,980,340
2023-8-25
https://stackoverflow.com/questions/76980340/how-to-calculate-the-shap-values-for-the-probability-of-a-lightgbm-binary-classi
I modeled an LGBM binary classification model; however, I am not interested in the classification itself, I am interested in the probability that a given set of observations produces a positive classification. I wanted to see how my model works using SHAP values, but I am struggling to find how to get the SHAP values for the probability output, mainly because the output is a vector with two values. Here is how I tried to do it: import shap model = LGBMClassifier() model = model.fit(X_train,y_train) # Get SHAP values explainer = shap.TreeExplainer(model, data=X_train, feature_perturbation="interventional", model_output="probability") shap_values = explainer.shap_values(X_train) The only plot that works with the SHAP values generated is the summary plot, which has values that range from -0.2 to 0.4. I just want to be sure that those values really refer to probabilities and would like to see them in a force plot, waterfall plot or any other plot.
Try link="logit" for the force_plot: import pandas as pd import numpy as np import shap import lightgbm as lgbm from sklearn.model_selection import train_test_split from sklearn.datasets import load_breast_cancer from scipy.special import expit shap.initjs() data = load_breast_cancer() X = pd.DataFrame(data.data, columns=data.feature_names) y = data.target X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42 ) model = lgbm.LGBMClassifier() model.fit(X_train, y_train) explainer_raw = shap.TreeExplainer(model) shap_values = explainer_raw(X_train) # force plot of first row for class 1 class_idx = 1 row_idx = 0 expected_value = explainer_raw.expected_value[class_idx] shap_value = shap_values[:, :, class_idx].values[row_idx] shap.force_plot( base_value=expected_value, shap_values=shap_value, features=X_train.iloc[row_idx, :], link="logit", # <-- here )
2
4
76,993,621
2023-8-28
https://stackoverflow.com/questions/76993621/generate-fake-valid-data-with-pydantic
I would like to create automated examples of valid data based on my pydantic models. How can I do this? Example: import pydantic from typing import Any class ExampleData(pydantic.BaseModel): a: int b: str = pydantic.Field(min_length=10, max_length=10) @staticmethod def example() -> dict[str, Any]: # some logic return {} ExampleData.example() """Returns { "a": 1, "b": "0123456789" } """ P.S. I suspect that pydantic provides this functionality because fastapi generates sample data, but I'm not sure if this is exactly its functionality and I couldn't find such a method. Can anyone help me understand this?
Here's how you can do it using pydantic and Faker: Installation pip install Faker pip install pydantic Script import uuid from datetime import date, datetime, timedelta from typing import List, Union from pydantic import BaseModel, UUID4 from faker import Faker # your pydantic model class Person(BaseModel): id: UUID4 name: str hobbies: List[str] age: Union[float, int] birthday: Union[datetime, date] class PersonFactory: @classmethod def generate_id(cls): return str(uuid.uuid4()) @classmethod def generate_name(cls): # Implement your own logic to generate realistic names return Faker().name() @classmethod def generate_hobbies(cls): # Implement your own logic to generate hobbies return Faker().words(nb=1) @classmethod def generate_age(cls): # Implement your own logic to generate realistic ages return Faker().random_int(1940, 2023) @classmethod def generate_birthday(cls): # Implement your own logic to generate realistic birthdays return Faker().date_of_birth(tzinfo=None, minimum_age=18, maximum_age=80) @classmethod def build(cls): id = cls.generate_id() name = cls.generate_name() hobbies = cls.generate_hobbies() birthday = cls.generate_birthday() age = datetime.now().year - birthday.year return Person(id=id, name=name, hobbies=hobbies, age=age, birthday=birthday) result = PersonFactory.build() print(result) Output: id=UUID('4b7ffc04-48a1-4f4d-8c5b-d3167717dd69') name='Katherine Brown' hobbies=['stay'] age=30 birthday=datetime.date(1993, 2, 17) This will generate a fake person data everytime you run PersonFactory.build()
6
6
76,993,635
2023-8-28
https://stackoverflow.com/questions/76993635/python-itertools-zip-longest-with-mutable-fillvalue
In a code that evaluates a web response, I would like to zip elements from several lists. However, the elements of the iterators are dict's. Therefore, I would like to fill up the missing values also with dict's, but each generated element should have it's own dict instance. The following code groups elements from each list by itertools.zip_longest. As long as there is a non mutable fillvalue specified, there is no problem. import collections import itertools l1 = [{"a": 100}, {"b": 200}, {"c": 300}] l2 = [{"d": 400}] ll = list(itertools.zip_longest(l1, l2, fillvalue=0)) print(ll) -> [({'a': 100}, {'d': 400}), ({'b': 200}, 0), ({'c': 300}, 0)] Now, when a mutable fillvalue is specified, all the fillvalue's share the same instance and so changing one, changes all: import collections import itertools l1 = [{"a": 100}, {"b": 200}, {"c": 300}] l2 = [{"d": 400}] ll = list(itertools.zip_longest(l1, l2, fillvalue=dict())) ll[1][1]["x"] = 150 print(ll) -> [({'a': 100}, {'d': 400}), ({'b': 200}, {'x': 150}), ({'c': 300}, {'x': 150})] To prevent that all the dicts share the same instance I used copy.deepcopy: import collections import copy import itertools l1 = [{"a": 100}, {"b": 200}, {"c": 300}] l2 = [{"d": 400}] ll = list(itertools.zip_longest(l1, l2, fillvalue=copy.deepcopy(dict()))) ll[1][1]["x"] = 150 print(ll) -> [({'a': 100}, {'d': 400}), ({'b': 200}, {'x': 150}), ({'c': 300}, {'x': 150})] As a result, still all dict's from the fillvalue share the same instance. I would like to add that ll = [item or dict() for item in itertools.zip_longest(l1, l2)] works neither, assuming a fillvalue of None. So, how can I make each fillvalue unique?
You can use a sentinel value and wrap zip_longest in another pipeline. For example: sentinel = object() l = list(tuple({} if x is sentinel else x for x in items) for items in zip_longest(l1, l2, fillvalue=sentinel)) For this example, you can probably use None rather than a custom sentinel object.
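For example, a minimal sketch of the None variant applied to the data from the question:
from itertools import zip_longest

l1 = [{"a": 100}, {"b": 200}, {"c": 300}]
l2 = [{"d": 400}]
ll = [tuple({} if x is None else x for x in items)
      for items in zip_longest(l1, l2)]  # {} is evaluated per element, so each fill is a fresh dict
ll[1][1]["x"] = 150
print(ll)  # [({'a': 100}, {'d': 400}), ({'b': 200}, {'x': 150}), ({'c': 300}, {})]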
2
3
76,992,453
2023-8-28
https://stackoverflow.com/questions/76992453/pandas-describe-for-datetime-column
I am using pandas to perform data analysis. I have a datetime column and I am using describe() as below. s = pd.Series([np.datetime64("2000-01-01"), np.datetime64("2010-01-01"), np.datetime64("2010-01-01")]) print('--------------------') print(s) print('--------------------') print(s.dtype) print('--------------------') print(s.describe()) I am getting output as below: -------------------- 0 2000-01-01 1 2010-01-01 2 2010-01-01 dtype: datetime64[ns] -------------------- datetime64[ns] -------------------- count 3 mean 2006-09-01 08:00:00 min 2000-01-01 00:00:00 25% 2004-12-31 12:00:00 50% 2010-01-01 00:00:00 75% 2010-01-01 00:00:00 max 2010-01-01 00:00:00 dtype: object I was going through the documentation: the above output summary should be generated for numeric datatypes, as opposed to non-numeric datatypes like object, category, and timestamp, which have a different summary. pandas version: pandas = "^2.0.0" documentation: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.describe.html What am I missing? Is the datetime datatype treated as a numeric datatype inside describe()?
Yes, internally datetime objects are stored as integers (number of nanoseconds since epoch), making them inherently numeric data. For the reported data, all values (['count', 'mean', 'min', '25%', '50%', '75%', 'max']) are valid as they represent the count or a date in the min-max range. You can note that std is absent, since computing a standard deviation wouldn't make sense. The behavior on datetime data is actually demonstrated in the describe documentation: Describing a timestamp Series. s = pd.Series([ np.datetime64("2000-01-01"), np.datetime64("2010-01-01"), np.datetime64("2010-01-01") ]) s.describe() count 3 mean 2006-09-01 08:00:00 min 2000-01-01 00:00:00 25% 2004-12-31 12:00:00 50% 2010-01-01 00:00:00 75% 2010-01-01 00:00:00 max 2010-01-01 00:00:00 dtype: object Internally, describe is handled in pandas/core/methods/describe.py. The select_describe_func function is choosing how to handle the data based on the dtype and a specific approach (describe_timestamp_1d) is used for datetime data. Follow-up: "Using astype on the above example, if the datatype is set as datetime then summary is numeric and if the datatype is set as object then summary is non-numeric. So, In general while performing analysis on such column, do we set datatype as datetime or object, which is preferred?" When using s.astype('object').describe() you get the same behavior as you would have for strings: s.astype('object').describe() count 3 # number of values unique 2 # number of unique values top 2010-01-01 00:00:00 # most frequent value freq 2 # count of the most frequent value dtype: object It's up to you to determine which behavior you need, if you use your dates as datetime information then keep the original type, if you use them as a Categorical (for instance if you have 2 or 3 different dates and their absolute value is not meaningful other than to define a group, then the non-numeric approach is probably fine)
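A small sketch of the "stored as integers" point (an illustration; it assumes pandas 2.x, where astype("int64") on datetime64[ns] is allowed):
import numpy as np
import pandas as pd

s = pd.Series([np.datetime64("2000-01-01"), np.datetime64("2010-01-01")])
print(s.astype("int64"))
# 0     946684800000000000
# 1    1262304000000000000
# dtype: int64  -- nanoseconds since the Unix epoch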
3
3
76,992,421
2023-8-28
https://stackoverflow.com/questions/76992421/attributeerror-module-ssl-has-no-attribute-protocol-tlsv1-3
I am trying to setup a tls context in python. I want to force TLSv1.3 usng: context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_3) This does not work as I receive the following error: AttributeError: module 'ssl' has no attribute 'PROTOCOL_TLSv1_3' I am on Ubuntu 20.04 and am using python version 3.8 and openssl version 1.1.1f. Why doesn't it support TLSv1.3?
TLS 1.3 protocol will be available with PROTOCOL_TLS in OpenSSL >= 1.1.1. There is no dedicated PROTOCOL constant for just TLS 1.3. https://docs.python.org/3.11/library/ssl.html#ssl.SSLContext Footnote 3. According to the protocol version description, this is how you set TLS 1.3 as the minimum version supported: client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.maximum_version = ssl.TLSVersion.TLSv1_3 https://docs.python.org/3.11/library/ssl.html#protocol-versions https://docs.python.org/3.11/library/ssl.html#tls-1-3
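As a quick sanity check before configuring anything, you can ask the ssl module what it was built against (minimal sketch; HAS_TLSv1_3 exists since Python 3.7):
import ssl

print(ssl.OPENSSL_VERSION)  # TLS 1.3 needs OpenSSL >= 1.1.1
print(ssl.HAS_TLSv1_3)      # True if the linked OpenSSL supports TLS 1.3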
2
5
76,989,937
2023-8-28
https://stackoverflow.com/questions/76989937/mongoengine-aggregate-lookup-on-id-not-working
Campaign Record { "_id": { "$oid": "64eb81337f7d9f6e1107fc3a" }, "_cls": "CampaignModel", "campaignNumber": "CMPN-00005", "account": { "$oid": "64d80694fe75052b39c0e615" }, "campaignName": "title", "campaignDescription": "wdesc", "campaignStartDate": { "$date": "2023-08-17T00:00:00.000Z" }, "campaignEndDate": { "$date": "2023-08-27T00:00:00.000Z" }, "isWorkOrderGeneratedFlag": false, "isInvoiceGenerated": false, "campaignCost": "20000", "clientName": "", "attachedBillboardList": [], "accountId": "64d80694fe75052b39c0e615" } Booking Record { "_id": { "$oid": "64eb81337f7d9f6e1107fc3b" }, "_cls": "BookingModel", "campaignId": "64eb81337f7d9f6e1107fc3a", "billboardId": "64e0d18aaec5d7ef7bf848dd", "startDate": { "$date": "2023-09-01T00:00:00.000Z" }, "endDate": { "$date": "2023-09-10T00:00:00.000Z" }, "costPerDay": 200, "billboard": { "_ref": { "$ref": "BillBoard", "$id": { "$oid": "64e0d18aaec5d7ef7bf848dd" } }, "_cls": "BillBoardModel" } } I am trying to join the campaign with booking on _id from the campaign record and campaignId in the booking record. Campaign Mongoengine class class CampaignModel(BaseModel): meta = {'collection': 'Campaign'} campaignNumber = me.SequenceField(sequence_name="CAMPAIGN", value_decorator=generate_campaign_number) account = me.ReferenceField(UserModel) campaignName = me.StringField() campaignDescription = me.StringField() campaignStartDate = me.DateField(default=datetime.datetime.now) campaignEndDate = me.DateField(default=datetime.datetime.now) isWorkOrderGeneratedFlag = me.BooleanField(default=False) isInvoiceGenerated = me.BooleanField(default=False) campaignCost = me.StringField() clientName = me.StringField() attachedBillboardList = me.ListField() Booking Mongoengine Class class BookingModel(BaseModel): meta = {'collection': 'Booking'} campaignId = me.StringField() campaign = me.LazyReferenceField(CampaignModel,dbref=False) billboardId = me.StringField() accountId = me.StringField() billBoard = me.LazyReferenceField(BillBoardModel,dbref=False) startDate = me.DateField() endDate = me.DateField() costPerDay = me.FloatField() query records = CampaignModel.objects.aggregate(*[ { "$lookup": { "from": "Booking", # Tag collection database name "localField": "_id", # Reference field "foreignField": "campaignId", # Primary key of the Tag collection "as": "attaches" } }]) Expected: Records from campaign collection with records from Booking collection as an array in attaches key. But the attaches key was empty.
The problem was that you were storing the campaignId as a string type in the Booking collection. Make sure that both values being compared are of the same type. Thus, you need to convert the campaignId in the Booking documents to ObjectId type, as below: db.Campaign.aggregate([ { "$lookup": { "from": "Booking", let: { campaignId: "$_id" }, pipeline: [ { $match: { $expr: { $eq: [ "$$campaignId", { $toObjectId: "$campaignId" } ] } } } ], "as": "attaches" } } ]) Demo @ Mongo Playground
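Translated back into the MongoEngine call from the question, the same pipeline would look roughly like this (a sketch; it assumes MongoDB 4.0+ for $toObjectId):
records = CampaignModel.objects.aggregate(*[
    {
        "$lookup": {
            "from": "Booking",
            "let": {"campaignId": "$_id"},
            "pipeline": [
                # match Booking.campaignId (string) against Campaign._id (ObjectId)
                {"$match": {"$expr": {"$eq": ["$$campaignId", {"$toObjectId": "$campaignId"}]}}}
            ],
            "as": "attaches"
        }
    }
])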
2
3
76,986,516
2023-8-27
https://stackoverflow.com/questions/76986516/how-to-retrieve-all-spark-session-config-variables
In Databricks I can set a config variable at session level, but it is not found in the context variables: spark.conf.set(f"dataset.bookstore", '123') #dataset_bookstore spark.conf.get(f"dataset.bookstore")#123 scf = spark.sparkContext.getConf() allc = scf.getAll() scf.contains(f"dataset.bookstore") # False I understand there is a difference between session-level and context-level config variables; how can I retrieve all session-level variables using spark.conf? Note: all_session_vars = spark.conf.getAll() returns AttributeError: 'RuntimeConfig' object has no attribute 'getAll' so it looks like a runtime-level config
Pyspark's RuntimeConfig indeed seems to be limited compared to its Scala counterpart. However if we peek at its source code... class RuntimeConfig: def __init__(self, jconf: JavaObject) -> None: """Create a new RuntimeConfig that wraps the underlying JVM object.""" self._jconf = jconf we will be able to retrieve its underlying JVM object and through that call its getAll method: c = spark.conf._jconf.getAll() c.contains("dataset.bookstore") # True
5
6
76,988,753
2023-8-27
https://stackoverflow.com/questions/76988753/inner-classes-cant-reference-eachother-is-there-a-more-pythonic-way
I want to use inner classes (mainly dataclass and Enums) to keep things encapsulated. They hold data and defines that are only relevant to the main class, so I'd like to keep them inside it. I get the sense that this is not the most Pythonic way to do things, but I'm not sure how to make it better. The real problem is that I need some of those inner classes to contain variables that use types of the other inner classes, and Python doesn't seem to allow that. This is what I would like to do (this is just a pared down example). This keeps everything as part of DataPacket, so that when you reference the inner classes you use DataPacket to get to it. ie. DataPacket.DataStatus.GOOD, etc, and it's clear where that "define" comes from. However the DataStatus reference in SensorData is not found unless it is moved out of the DataPacket class. from dataclasses import dataclass from typing import List from enum import IntEnum class DataPacket: class DataStatus(IntEnum): GOOD = 0 ERROR = 1 UNKNOWN = 255 @dataclass class SensorData(): sensor_name: int = 0 sensor_type: int = 0 unit_type: int = 0 status: DataStatus = DataStatus.UNKNOWN value: float = 0 @dataclass class Sensors(): data: List[SensorData] count: int = 0 def build_packet(self): sensors = self.Sensors([]) # Read data from device to fill in sensor values sensors.count = 1 data = self.SensorData() data.sensor_name = 1 data.sensor_type = 2 data.unit_type = 3 data.status = self.DataStatus.GOOD data.value = 100 sensors.data.append(data) return sensors packet = DataPacket() sensors = packet.build_packet() if sensors.data[0].status == DataPacket.DataStatus.GOOD: print(sensors.data[0].value) else: print("Sensors data error") This is how to get it to work, but I don't like this structure: from dataclasses import dataclass from typing import List from enum import IntEnum class DataStatus(IntEnum): GOOD = 0 ERROR = 1 UNKNOWN = 255 @dataclass class SensorData(): sensor_name: int = 0 sensor_type: int = 0 unit_type: int = 0 status: DataStatus = DataStatus.UNKNOWN value: float = 0 @dataclass class Sensors(): data: List[SensorData] count: int = 0 class DataPacket: def build_packet(self): sensors = Sensors([]) # Read data from device to fill in sensor values sensors.count = 1 data = SensorData() data.sensor_name = 1 data.sensor_type = 2 data.unit_type = 3 data.status = DataStatus.GOOD data.value = 100 sensors.data.append(data) return sensors packet = DataPacket() sensors = packet.build_packet() if sensors.data[0].status == DataStatus.GOOD: print(sensors.data[0].value) else: print("Sensors data error") Thanks for your help/suggestions!
The problem you are encountering is because class bodies do not create enclosing scopes. Nesting class definitions isn't a common pattern in Python. You are going to be working against the language to get it to work. Here is a minimal example of your problem: >>> class Foo: ... class Bar: ... pass ... class Baz: ... bar = Bar() ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 4, in Foo File "<stdin>", line 5, in Baz NameError: name 'Bar' is not defined Here is a way to get around it, refer to Bar where it is in scope, and assign to the class attribute after the class definition: >>> class Foo: ... class Bar: ... pass ... class Baz: ... pass ... Baz.bar = Bar() ... But really, you should just keep the solution you have already, which is perfectly Pythonic. Note, this workaround means you are going to have to abandon the dataclasses.dataclass code generator, since that requires annotations in the class body. Or, I suppose, you could use string annotations: >>> import dataclasses >>> class Foo: ... class Bar: ... pass ... @dataclasses.dataclass ... class Baz: ... bar: "Bar" ... But if you want a default value, which actually requires the class, it's not going to work. Your reasoning for nesting the classes is that "They hold data and defines that are only relevant to the main class, so I'd like to keep them inside it." but the main unit of code organization is the module in Python. Everything being in a module here is perfectly acceptable.
2
3
76,963,279
2023-8-23
https://stackoverflow.com/questions/76963279/parallelize-accelerate-loops-of-tensor-additions
Background: I am working on a program that first shifts the different channels of a tensor along the "column" dimension with different distances, and then performs a summation along the "channel" dimension to merge the different dimensions into one. Specifically, given a tensor x of size (B,C,H,W) and step size S, where B, C, H, W represent the batch size, channel number, height, and width, respectively, the i-th channel of x is shifted by distance (i-1)*S, and then the C channels are summed into one. Here is an 1D toy example. Assume that I have a 3-channel tensor x as x = torch.tensor( [[1,1,1], [2,2,2], [3,3,3]] ) Now I set the step size as 1, and then perform a shift on the tensor as x_shifted = torch.tensor( [[1,1,1,0,0], [0,2,2,2,0], [0,0,3,3,3]] ) Here, the first channel is shifted by distance 0, the second channel is shifted by distance 1, and the third channel is shifted by distance 2. Finally, all the three channels are summed and merged into one as y = torch.tensor( [[1,3,6,5,3]] ) Question: I have implemented the original process w.r.t. 2D image tensors in the following code: import torch import torch.nn.functional as F from time import time ############################################# # Parameters ############################################# B = 16 C = 28 H = 256 W = 256 S = 2 T = 1000 device = torch.device('cuda') seed = 2023 torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) ############################################# # Method 1 ############################################# alpha = torch.zeros(B, 1, 1, W+(C-1)*S, device=device) for i in range(C): alpha[..., (i*S):(i*S+W)] += 1 def A(x, mask): z = x * mask y = torch.zeros(B, 1, H, W+(C-1)*S, device=x.device) for i in range(C): y[..., (i*S):(i*S+W)] += z[:, (i):(i+1)] return y def A_pinv(y, mask): z = y / alpha.to(y.device) x = torch.cat([z[..., (i*S):(i*S+W)] for i in range(C)], dim=1) / mask return x ############################################# # Method 2 ############################################# kernel = torch.zeros(1, C, 1, (C-1)*S+1, device=device) for i in range(C): kernel[:, C-i-1, :, i*S] = 1 def A_fast(x, mask): return F.conv2d(x * mask, kernel.to(x.device), padding=(0, (C-1)*S)) def A_pinv_fast(y, mask): return F.conv_transpose2d(y / alpha.to(y.device), kernel, padding=(0, (C-1)*S)) / mask ############################################# # Test 1 ############################################# start_time = time() MAE = 0 for i in range(T): x = torch.rand(B, C, H, W, device=device) mask = torch.rand(1, 1, H, W, device=device) mask[mask == 0] = 1e-12 y = A(x, mask) x_init = A_pinv(y, mask) y_init = A(x_init, mask) MAE += (y_init - y).abs().mean().item() MAE /= T end_time = time() print('---') print('Test 1') print('Running Time:', end_time - start_time) print('MAE:', MAE) ############################################# # Test 2 ############################################# start_time = time() MAE = 0 for i in range(T): x = torch.rand(B, C, H, W, device=device) mask = torch.rand(1, 1, H, W, device=device) mask[mask == 0] = 1e-12 y = A_fast(x, mask) x_init = A_pinv_fast(y, mask) y_init = A_fast(x_init, mask) MAE += (y_init - y).abs().mean().item() MAE /= T end_time = time() print('---') print('Test 2') print('Running Time:', end_time - start_time) print('MAE:', MAE) Here, Method 1 implements the process with a for loop, while I believe that Method 2 implements the process equivalently by using a 2D convolution operation. 
To be more specific, functions A and A_pinv realize the forward compression process and its "pseudo-inverse", respectively. Their "fast" versions in Method 2 are expected to be faster with a parallelized implementation. However, when I run the code, I find that Method 1 is still much faster than Method 2 by a large margin. My question is: can we effectively accelerate Method 1? More specifically, can we parallelize the for loops to make the "Shift+Summation" process faster?
Large-kernel convolutions are not necessarily efficient. torch.scatter_add_ can sum over the shifted elements directly. I didn't write the pseudo inverse (I think it was to check for correctness? I compared this new method with your Method1/Method2). out_W = W + (C-1)*S i_list = torch.arange(C, dtype=torch.long, device=device) y_list = torch.arange(H, dtype=torch.long, device=device) x_list = torch.arange(W, dtype=torch.long, device=device) indices = x_list + i_list.view(C, 1, 1)*S + y_list.view(1, H, 1)*(out_W) indices = indices.view(1, C*H*W).expand(B, C*H*W) """ functionally equivalent to: for i in range(C): for y in range(H): for x in range(W): indices[i*H*W+y*W+x] = x + i*S + y*(out_W) """ def A_faster(x, mask): y = torch.zeros(B, H*out_W, device=x.device) y.scatter_add_(1, indices, (x*mask).view(B, C*H*W)) return y.view(B, 1, H, out_W) Surprisingly, your method 1 holds up well even for larger C (or scatter does not scale well). For C=28: --- Test 1 Running Time: 1.4626126289367676 --- Test 2 Running Time: 2.808514356613159 --- Test 3 Running Time: 1.3663663864135742 --- |Test1 - Test2|: tensor(9.2172e-07, device='cuda:0') --- |Test1 - Test3|: tensor(7.5425e-09, device='cuda:0') --- |Test2 - Test3|: tensor(9.2173e-07, device='cuda:0') For C=512 (method 2 skipped as it is too slow): --- Test 1 Running Time: 27.37247085571289 --- Test 3 Running Time: 24.335933446884155 --- |Test1 - Test3|: tensor(3.9411e-08, device='cuda:0') Full testing code: import torch import torch.nn.functional as F from time import time ############################################# # Parameters ############################################# B = 16 C = 28 H = 256 W = 256 S = 2 T = 1000 device = torch.device('cuda') seed = 2023 torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) ############################################# # Method 1 ############################################# alpha = torch.zeros(B, 1, 1, W+(C-1)*S, device=device) for i in range(C): alpha[..., (i*S):(i*S+W)] += 1 def A(x, mask): z = x * mask y = torch.zeros(B, 1, H, W+(C-1)*S, device=x.device) for i in range(C): y[..., (i*S):(i*S+W)] += z[:, (i):(i+1)] return y def A_pinv(y, mask): z = y / alpha.to(y.device) x = torch.cat([z[..., (i*S):(i*S+W)] for i in range(C)], dim=1) / mask return x ############################################# # Method 2 ############################################# kernel = torch.zeros(1, C, 1, (C-1)*S+1, device=device) for i in range(C): kernel[:, C-i-1, :, i*S] = 1 def A_fast(x, mask): return F.conv2d(x * mask, kernel.to(x.device), padding=(0, (C-1)*S)) def A_pinv_fast(y, mask): return F.conv_transpose2d(y / alpha.to(y.device), kernel, padding=(0, (C-1)*S)) / mask ############################################# # Method 3 ############################################# out_W = W + (C-1)*S i_list = torch.arange(C, dtype=torch.long, device=device) y_list = torch.arange(H, dtype=torch.long, device=device) x_list = torch.arange(W, dtype=torch.long, device=device) indices = x_list + i_list.view(C, 1, 1)*S + y_list.view(1, H, 1)*(out_W) indices = indices.view(1, C*H*W).expand(B, C*H*W) """ functionally equivalent to: for i in range(C): for y in range(H): for x in range(W): indices[i*H*W+y*W+x] = x + i*S + y*(out_W) """ def A_faster(x, mask): y = torch.zeros(B, H*out_W, device=x.device) y.scatter_add_(1, indices, (x*mask).view(B, C*H*W)) return y.view(B, 1, H, out_W) ############################################# # Test 1 ############################################# torch.cuda.synchronize() start_time = time() for i in range(T): 
x = torch.rand(B, C, H, W, device=device) mask = torch.rand(1, 1, H, W, device=device) mask[mask == 0] = 1e-12 y = A(x, mask) torch.cuda.synchronize() end_time = time() print('---') print('Test 1') print('Running Time:', end_time - start_time) ############################################# # Test 2 ############################################# torch.cuda.synchronize() start_time = time() for i in range(T): x = torch.rand(B, C, H, W, device=device) mask = torch.rand(1, 1, H, W, device=device) mask[mask == 0] = 1e-12 y = A_fast(x, mask) torch.cuda.synchronize() end_time = time() print('---') print('Test 2') print('Running Time:', end_time - start_time) ############################################# # Test 3 ############################################# torch.cuda.synchronize() start_time = time() for i in range(T): x = torch.rand(B, C, H, W, device=device) mask = torch.rand(1, 1, H, W, device=device) mask[mask == 0] = 1e-12 y = A_faster(x, mask) torch.cuda.synchronize() end_time = time() print('---') print('Test 3') print('Running Time:', end_time - start_time) error = 0 for _ in range(T): error += (A(x, mask) - A_fast(x, mask)).abs().mean() error /= T print('---') print('|Test1 - Test2|: ', error) error = 0 for _ in range(T): error += (A(x, mask) - A_faster(x, mask)).abs().mean() error /= T print('---') print('|Test1 - Test3|: ', error) error = 0 for _ in range(T): error += (A_fast(x, mask) - A_faster(x, mask)).abs().mean() error /= T print('---') print('|Test2 - Test3|: ', error)
2
2
76,985,136
2023-8-26
https://stackoverflow.com/questions/76985136/change-the-logic-of-custom-type-from-extra-types-phonenumber
There is the PhoneNumber type to validate phone numbers: from pydantic import BaseModel from pydantic_extra_types.phone_numbers import PhoneNumber class User(BaseModel): name: str phone_number: PhoneNumber user = User(name='John', phone_number='+447911123456') print(user.phone_number) #> tel:+44-7911-123456 I'm satisfied with it except for the unwanted "tel:" prefix. I applied a trick like this: from pydantic import BaseModel from pydantic_extra_types.phone_numbers import PhoneNumber class User(BaseModel): name: str phone_number: PhoneNumber @model_validator(mode="after") def remove_prefix_from_phone(self) -> "User": if self.phone_number: self.phone_number = self.phone_number.removeprefix("tel:") return self user = User(name='John', phone_number='+447911123456') print(user.phone_number) #> +44-7911-123456 But the problem is that I convert the field into a plain str, and the PhoneNumber class is not even considered a str subtype, unlike EmailStr for example. It seems like a bad move, and MyPy marks it as an error as well: incompatible types in assignment (meanwhile, the validation logic is fine). Maybe I could do it smarter? I couldn't think of other options, but it should be done within the Pydantic model for sure, because I load data from the DB in a web service and the prefix would exist in the auto-generated response. Would be glad for any piece of advice!
The PhoneNumber class has these class variables: #https://github.com/pydantic/pydantic-extra-types/blob/092251d226edcf4e06bbe4f904da177fad20a6de/pydantic_extra_types/phone_numbers.py#L26 default_region_code: str | None = None phone_format: str = 'RFC3966' min_length: int = 7 max_length: int = 64 If we follow a bunch of code chains, we end up in the phonenumber repository that PhoneNumber depends on. This is the last stop for your data before being returned as a PhoneNumber, We can see that phone_format ultimately affects the phone number in the following ways: #https://github.com/daviddrysdale/python-phonenumbers/blob/2f06ef6db2ca83f3856fbb8019a0c665f5971b13/python/phonenumbers/phonenumberutil.py#L1726 def _prefix_number_with_country_calling_code(country_code, num_format, formatted_number): """A helper function that is used by format_number and format_by_pattern.""" if num_format == PhoneNumberFormat.E164: return _PLUS_SIGN + unicod(country_code) + formatted_number elif num_format == PhoneNumberFormat.INTERNATIONAL: return _PLUS_SIGN + unicod(country_code) + U_SPACE + formatted_number elif num_format == PhoneNumberFormat.RFC3966: #_RF3966_PREFIX = 'tel:' return _RFC3966_PREFIX + _PLUS_SIGN + unicod(country_code) + U_DASH + formatted_number else: return formatted_number It may be notable that the above returns only the formatted number if you use an unrecognized format. There is a 'NATIONAL' format that is oddly absent from the above conditions. Using it should trigger the else. This should solve your problem. from pydantic import BaseModel from pydantic_extra_types.phone_numbers import PhoneNumber PhoneNumber.phone_format = 'E164' #'INTERNATIONAL', 'NATIONAL' class User(BaseModel): name: str phone_number: PhoneNumber
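Reusing the model from the question, a quick check might look like this; the printed value is an expectation based on the formatting code above, not a verified output:
from pydantic import BaseModel
from pydantic_extra_types.phone_numbers import PhoneNumber

PhoneNumber.phone_format = 'E164'  # must be set before validation runs

class User(BaseModel):
    name: str
    phone_number: PhoneNumber

user = User(name='John', phone_number='+447911123456')
print(user.phone_number)  # expected: +447911123456 -- no 'tel:' prefix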
2
4
76,983,253
2023-8-26
https://stackoverflow.com/questions/76983253/randomly-sampling-points-within-an-octagon-using-python
How do I randomly sample 2d points uniformly from within an octagon using Python / Numpy? We can say that the octagon is centered at the origin (0, 0). The following is what I've done: import numpy as np import matplotlib.pyplot as plt def sample_within_octagon(num_points): points = np.zeros((num_points, 2)) # Generate random angle in radians angles = np.random.uniform(0, 2 * np.pi, size=(num_points,)) # Calculate the maximum radius for the given angle # This is wrong. max_radii = 1.0 / np.sqrt(2) / np.cos(np.pi / 8 - angles % (np.pi / 4)) # Generate random radius within the bounds of the octagon # Use square-root to prevent it from being more dense in center. radii = np.sqrt(np.random.uniform(0, max_radii)) # Convert polar coordinates to Cartesian coordinates x = radii * np.cos(angles) y = radii * np.sin(angles) points[:, 0] = x points[:, 1] = y return points num_points = 10000 random_points = sample_within_octagon(num_points) plt.scatter( np.array(random_points)[:, 0], np.array(random_points)[:, 1], s=1); plt.axis('equal'); The above code is mostly correct, but the max_radii calculation is incorrect, because the edges are slightly curved outward. I am not necessarily committed to the overall approach of the above algorithm, so any algorithm will do. Having said that, I would slightly prefer an approach that (like the above, if it had actually worked correctly) would generalize to 16-gons and so on.
In your code, the formula for max_radii needs a little modification, try the following: import matplotlib.pyplot as plt import numpy as np from scipy import interpolate def sample_within_octagon(num_points, inv_transform_evals=10000): points = np.zeros((num_points, 2)) # Angle offset for each octagon segment offset = np.pi / 8.0 # Generate random angle in radians max_radii_in = np.linspace(0, 2 * np.pi, inv_transform_evals) max_radii_out = 1 / np.cos(np.abs(max_radii_in % (np.pi / 4) - offset)) max_radii_cdf = np.cumsum(max_radii_out / max_radii_out.sum()) f = interpolate.interp1d(np.array([0.] + list(max_radii_cdf)), np.array([0.] + list(max_radii_in))) angles_out = np.random.uniform(0, 1, num_points) angles = f(angles_out) # Calculate max radius based on octagon geometry max_radii = 1 / np.cos(np.abs(angles % (np.pi / 4) - offset)) # Generate random radius with square root scaling radii = np.sqrt(np.random.uniform(0, 1, num_points)) * max_radii # Convert to Cartesian coordinates points[:, 0] = radii * np.cos(angles) points[:, 1] = radii * np.sin(angles) return points # Generate and plot points num_points = 10000 points = sample_within_octagon(num_points) plt.scatter(points[:, 0], points[:, 1], s=1) plt.axis('equal') plt.show() Note: The above solution has been modified by the OP - @calmcc based on suggestions in the comments of the question.
3
2
76,981,933
2023-8-26
https://stackoverflow.com/questions/76981933/cannot-type-password-in-pypi
I'm trying to publish a module to PyPI. When I run the command twine upload dist/*, everything works fine and I can type in my name, but then I cannot type anything in the password field other than pressing enter. I've tried googling it, but none of the solutions I found worked, and there isn't much information on this anyway. Help will be appreciated.
Um, this might be a really basic answer here (I can't comment on posts yet), but, are you sure you're not just dealing with the fact that by default in most command line interfaces/shells, you won't see any kind of "activity" (not even censored/starred out characters) when typing in a password? So yeah, you should just try typing out your password and then hitting enter. It can be a bit annoying when, say, connecting to a server via SSH and you have a lengthy passphrase to type on your private key, but think you might've missed a character (and know which one and where), but because it doesn't display anything, you can't use the arrow keys to go back and position the cursor to the right place to insert it. You have to just hit backspace an appropriate number of times and start over.
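For what it's worth, the same no-echo behaviour is easy to reproduce in plain Python with getpass (just an illustration of the prompt behaviour, not twine's actual code):
import getpass

password = getpass.getpass("Password: ")  # nothing is echoed while you type
print(f"Read {len(password)} characters")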
2
3
76,970,335
2023-8-24
https://stackoverflow.com/questions/76970335/how-to-convert-this-python-code-with-a-ssl-client-certificate-to-kotlin-ktor
I have some Python code that makes a HTTP request: import requests response = requests.get( url, cert = tuple(clientCertPath, pkeyPath), // paths to crt.pem and pkey.pem verify = serverCertPath // path to server-ca.crt file ) I'd like to rewrite this to Kotlin using ktor. This is what I've come up with so far: val serverCert = serverCertPath.inputStream().use { CertificateFactory.getInstance("X.509").generateCertificate(it) as X509Certificate } val keyStore = KeyStore.getInstance(...).apply { load(null, null) setCertificateEntry("serverCert", serverCert) } val trustManagerFactory = ... // init with keystore val sslContext = SSLContext.getInstance("TLS") // and init with above config val client = HttpClient(Java) { engine { config { sslContext(sslContext) } } } // So far so good. This server certificate config seems to work and cover the 'verify' parameter. Now for the other cert. val clientCert = CertificateFactory.getInstance("X.509").let { clientCertPath.inputStream().use { stream -> it.generateCertificate(stream) as X509Certificate } } client.request(url) { this.method = HttpMethod.Get // how to supply client cert? } And now I'm stuck. How do I apply the client certificate to the request? Either client-level configuration or request-level configuration would be fine with me. Also, I still haven't used the pkeyPath. Where do I do that?
There are a couple of ways to load PEM files and create an SSLContext out of them. However, it is a bit verbose with traditional Java. If you just want to go with that without the need for additional libraries, you can give the following Stack Overflow topic a try: How to build a SSLSocketFactory from PEM certificate and key without converting to keystore? Or else you can try my library, which is a bit less verbose. What you need to do is load your PEM files and create a key manager and trust manager, which you can use to create an SSLContext. Ktor will be able to use this object. An example code snippet would be: var keyManager = PemUtils.loadIdentityMaterial("certificate-chain.pem", "private-key.pem"); var trustManager = PemUtils.loadTrustMaterial("some-trusted-certificate.pem"); var sslFactory = SSLFactory.builder() .withIdentityMaterial(keyManager) .withTrustMaterial(trustManager) .build(); var sslContext = sslFactory.getSslContext(); You can add the library to your project with the following snippet: implementation("io.github.hakky54:sslcontext-kickstart-for-pem:8.1.5")
3
1
76,981,232
2023-8-26
https://stackoverflow.com/questions/76981232/unpacking-for-list-indices
I often find I have something like this: cur = [0, 0] # the indices into array matrix = [[1,1,1]] where I do matrix[cur[0]][cur[1]] Is there any sort of unpacking syntax here? Like: matrix[*cur]
If you switch to NumPy and you're using Python 3.11+, then yes, that works. import numpy as np cur = [0, 0] matrix = np.array([[1, 2, 3]]) print(matrix[*cur]) # -> 1 Before Python 3.11, you can just convert to tuple: print(matrix[tuple(cur)]) # -> 1 Based on the name matrix, NumPy might be a better solution in other ways too. For example you get elementwise operations. Note: The Python 3.11 syntax change doesn't seem to be documented in the normal places (what's new and grammar). From a quick look, I only found it mentioned under typing.Unpack: "* couldn’t be used in certain places". It is covered in PEP 646 though, which introduced typing.TypeVarTuple.
3
6
76,980,525
2023-8-25
https://stackoverflow.com/questions/76980525/what-is-wrong-with-this-matrix-multiplication-in-python
In the code below (Jupyter notebook), the output is wrong; how do I correct the code? import numpy as np A = [1,22,231,1540] B = [[-1,1,0],[1,-1,1],[0,6,0],[0,6,0]] C = [[113401201],[10649],[1]] result = np.dot(np.dot(A, B),C) print(result) output [-1800609408] The actual answer is different. I want to find the error and correct it
You're probably running this code on 32-bit platform (where integers are 32 bit): A = np.array(A, dtype=np.int32) B = np.array(B, dtype=np.int32) C = np.array(C, dtype=np.int32) result = np.dot(np.dot(A, B), C) print(result) Prints (incorrectly, because the value overflows): [-1800609408] To correct it, use 64-bit values: A = np.array(A, dtype=np.int64) B = np.array(B, dtype=np.int64) C = np.array(C, dtype=np.int64) result = np.dot(np.dot(A, B), C) print(result) Prints: [2494357888]
2
6
76,980,315
2023-8-25
https://stackoverflow.com/questions/76980315/sorting-dataframe-contents-based-on-last-column-values
I am trying to sort and display the contents of a file. The code I have does not produce the expected output. I have set ascending=True or ascending=False but it still does not work. Thank in advance. sampledata.txt # Source file ADS43 11.468 02:45 982AS2S 5.657 02:45 K72KSU3 -3.398 02:45 JJS7AS 3.238 02:45 LO92SA 2.221 02:45 22SA8A -1.931 02:45 ADS43 11.468 03:00 982AS2S -5.657 03:00 K72KSU3 3.398 03:00 JJS7AS -2.238 03:00 LO92SA 7.221 03:00 111AS2 -10.756 03:00 P352AS -1.912 03:30 982AS2S -12.595 03:30 K72KSU3 -9.153 03:30 JJS7AS 12.238 03:30 LO92SA 17.221 03:30 111AS2 -13.756 03:30 Current Code: #output is not properly sorted based on the last column values data = {} for row in open('sampledata.txt'): cols = row.rstrip().split() if cols[2] not in data: data[cols[2]] = {} data[cols[2]][cols[0]] = cols[1] df = pd.DataFrame(data) df2 = df.sort_values(by=[df.columns[-1]], ascending=False) print (df2) Current Output: #- Based on sampledata.txt 02:45 03:00 03:30 LO92SA 2.221 7.221 17.221 JJS7AS 3.238 -2.238 12.238 K72KSU3 -3.398 3.398 -9.153 111AS2 NaN -10.756 -13.756 982AS2S 5.657 -5.657 -12.595 P352AS NaN NaN -1.912 ADS43 11.468 11.468 NaN 22SA8A -1.931 NaN NaN Intended Output: #- sorted based on the last column from largest to smallest 02:45 03:00 03:30 LO92SA 2.221 7.221 17.221 JJS7AS 3.238 -2.238 12.238 P352AS NaN NaN -1.912 K72KSU3 -3.398 3.398 -9.153 982AS2S 5.657 -5.657 -12.595 111AS2 NaN -10.756 -13.756 ADS43 11.468 11.468 NaN 22SA8A -1.931 NaN NaN
I've executed your code near-verbatim and don't receive the error you report. data = {} for row in ''' ADS43 11.468 02:45 982AS2S 5.657 02:45 K72KSU3 -3.398 02:45 JJS7AS 3.238 02:45 LO92SA 2.221 02:45 22SA8A -1.931 02:45 ADS43 11.468 03:00 982AS2S -5.657 03:00 K72KSU3 3.398 03:00 JJS7AS -2.238 03:00 LO92SA 7.221 03:00 111AS2 -10.756 03:00 P352AS -1.912 03:30 982AS2S -12.595 03:30 K72KSU3 -9.153 03:30 JJS7AS 12.238 03:30 LO92SA 17.221 03:30 111AS2 -13.756 03:30'''.split('\n'): cols = row.rstrip().split() if cols[2] not in data: data[cols[2]] = {} data[cols[2]][cols[0]] = cols[1] df = pd.DataFrame(data) df.sort_values(by=df.columns[-1], ascending=True) That yields this (mis-sorted) data frame but the problem with the sort here is not the same problem as that which you report in your OP (as of this version), which seems merely to be the unsorted version of df. The aetiology of the issue is that the data are being sorted as strings and not as numbers. 02:45 03:00 03:30 P352AS NaN NaN -1.912 982AS2S 5.657 -5.657 -12.595 111AS2 NaN -10.756 -13.756 K72KSU3 -3.398 3.398 -9.153 JJS7AS 3.238 -2.238 12.238 LO92SA 2.221 7.221 17.221 ADS43 11.468 11.468 NaN 22SA8A -1.931 NaN NaN I think a proper way to solve this problem would be just to run the following. It loads the long form data in numeric format and then pivots (I just prefer stack and unstack) the data into position. df0 = pd.read_csv(YOUR_FILE, sep='\s+', header=None, index_col=0) d = df0.set_index(2, append=True)[1].unstack() # gets rid of multi-index and names d.columns.name = None d.index.name = None d.sort_values(by=d.columns[-1], ascending=False) That yields what you seem to want: 02:45 03:00 03:30 LO92SA 2.221 7.221 17.221 JJS7AS 3.238 -2.238 12.238 P352AS NaN NaN -1.912 K72KSU3 -3.398 3.398 -9.153 982AS2S 5.657 -5.657 -12.595 111AS2 NaN -10.756 -13.756 22SA8A -1.931 NaN NaN ADS43 11.468 11.468 NaN
2
2
76,980,131
2023-8-25
https://stackoverflow.com/questions/76980131/how-to-filter-pandas-dataframe-so-that-the-first-and-last-rows-within-a-group-ar
I have a dataframe like below: data = [ [123456, "2017", 150.235], [123456, "2017", 160], [123456, "2017", 135], [123456, "2017", 135], [123456, "2017", 135], [123456, "2018", 202.5], [123456, "2019", 168.526], [123456, "2020", 175.559], [123456, "2020", 176], [123456, "2021", 206.667], [789101, "2017", 228.9], [789101, "2018", 208], [789101, "2018", 208], [789101, "2018", 208], ] df = pd.DataFrame( data, columns=[ "ID", "year", "value", ], ) df In this dataframe I have an ID column and 2+ years, and each year can have one or more value rows. I would like to filter this dataframe so that all of the earliest-year rows (even if there are duplicate values) and all of the latest-year rows (again, even if there are duplicate values) are kept. My desired output is: I found another SO question that was similar: g = df.groupby("ID") (pd.concat([g.head(1), g.tail(1)]) .drop_duplicates() .sort_values('ID') .reset_index(drop=True)) but it only returns the first row within the first year, and I want all of the rows. Can anyone please advise?! Thank you !!
Try: out = df.groupby("ID", group_keys=False).apply( lambda x: x[(x.year == x.year.min()) | (x.year == x.year.max())] ) print(out) Prints: ID year value 0 123456 2017 150.235 1 123456 2017 160.000 2 123456 2017 135.000 3 123456 2017 135.000 4 123456 2017 135.000 9 123456 2021 206.667 10 789101 2017 228.900 11 789101 2018 208.000 12 789101 2018 208.000 13 789101 2018 208.000
2
1
76,979,303
2023-8-25
https://stackoverflow.com/questions/76979303/pandas-filter-dataframe-by-column-of-sets-using-multiple-conditions-regarding-s
I have a problem whereby I want to filter the following dataframe such that I only return the rows where we have both a pie and a non-pie item: ID set 1 apple pie, banana loaf 2 banana pie, apple pie 3 banana loaf, apple tart Thus, the expected output would be: ID set 1 apple pie, banana loaf Note that every set in the set column contains exactly two items. What I have tried so far: df[(any("pie" in s for s in df['set'])) & (any("pie" not in s for s in df['set']))] I expect I am doing something that is breaking Pandas dataframe filtering convention but not sure what exactly. Any help appreciated!
You could use apply on your dataframe: df[df.set.apply(lambda x: len([s for s in x if "pie" in s]) == 1)] Results: ID set 0 1 [apple pie, banana loaf]
3
2