question_id: int64 (values 59.5M to 79.4M)
creation_date: string (lengths 8 to 10)
link: string (lengths 60 to 163)
question: string (lengths 53 to 28.9k)
accepted_answer: string (lengths 26 to 29.3k)
question_vote: int64 (values 1 to 410)
answer_vote: int64 (values -9 to 482)
78,149,918
2024-3-12
https://stackoverflow.com/questions/78149918/split-record-into-two-records-with-a-calculation-based-on-condition
As title says, Let's say I have following dataframe import pandas as pd df = pd.DataFrame({'UID':['A','B','C','D'],'FlagVal':[0,100,50,90],'TrueVal':[1000,1000,1000,1000]}) ndf = df.loc[~df['FlagVal'].between(0,100,inclusive='neither')] mdf = df.loc[df['FlagVal'].between(0,100,inclusive='neither')] I want to split records into two where FlagVal is between 0 and 100. i.e. mdf , and perform a few calculations - def split_record(row): gtd = row.copy() ungtd = row.copy() gtd['UID'] = row['UID'] + '_T1' gtd['Flag'] = 'Y' ungtd['UID'] = row['UID'] + '_T2' ungtd['Flag'] = '' gtd['TrueVal'] = float(gtd['TrueVal'])*(float(gtd['FlagVal'])/100.0) ungtd['TrueVal'] = float(ungtd['TrueVal'])*(1 - (float(ungtd['FlagVal'])/100.0)) gtd['FlagVal'] = 100 ungtd['FlagVal'] = 0 result_data = pd.DataFrame([gtd, ungtd]) return result_data split_df = pd.concat([split_record(row) for _, row in mdf.iterrows()], ignore_index=True) then merge with 'ndf'. This works just fine for ~1000 records or so but it takes a toll when record count is in millions. I tried using apply function with axis=1, but not sure how to merge it with data again. Can you point me to a correct function to optimize it?
You can create split_df in two steps (this will be very fast): # create the 100% part: mdf_100 = pd.DataFrame({"TrueVal": mdf["FlagVal"].div(100) * mdf["TrueVal"]}).assign( FlagVal=100, UID=mdf["UID"] + "_T1", Flag="Y" ) # create the 0% part: mdf_0 = pd.DataFrame( {"TrueVal": (1 - mdf["FlagVal"].div(100)) * mdf["TrueVal"]} ).assign(FlagVal=0, UID=mdf["UID"] + "_T2", Flag="") split_df = pd.concat([mdf_100, mdf_0]).sort_index() print(split_df) Prints: TrueVal FlagVal UID Flag 2 500.0 100 C_T1 Y 2 500.0 0 C_T2 3 900.0 100 D_T1 Y 3 100.0 0 D_T2 EDIT: With multiple TrueValX columns: cols = ["TrueVal1", "TrueVal2", "TrueVal3"] # create the 100% part: mdf_100 = pd.DataFrame({c: mdf["FlagVal"].div(100) * mdf[c] for c in cols}).assign( FlagVal=100, UID=mdf["UID"] + "_T1", Flag="Y" ) # create the 0% part: mdf_0 = pd.DataFrame({c: (1 - mdf["FlagVal"].div(100)) * mdf[c] for c in cols}).assign( FlagVal=0, UID=mdf["UID"] + "_T2", Flag="" ) split_df = pd.concat([mdf_100, mdf_0]).sort_index() print(split_df) Prints: TrueVal1 TrueVal2 TrueVal3 FlagVal UID Flag 2 500.0 1000.0 1500.0 100 C_T1 Y 2 500.0 1000.0 1500.0 0 C_T2 3 900.0 1800.0 2700.0 100 D_T1 Y 3 100.0 200.0 300.0 0 D_T2 Input df: UID FlagVal TrueVal1 TrueVal2 TrueVal3 0 A 0 1000 2000 3000 1 B 100 1000 2000 3000 2 C 50 1000 2000 3000 3 D 90 1000 2000 3000
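The question also asks to merge the result back with ndf; a minimal sketch of that last step, assuming the untouched rows should simply carry an empty Flag (adjust if the real rule differs):

    # Recombine the untouched rows (ndf) with the split rows (split_df)
    # and restore the original column order.
    final_df = pd.concat([ndf.assign(Flag=""), split_df], ignore_index=True)
    final_df = final_df[["UID", "FlagVal", "TrueVal", "Flag"]]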
2
1
78,149,859
2024-3-12
https://stackoverflow.com/questions/78149859/is-there-a-way-to-integrate-vector-embeddings-in-a-langhcain-agent
I'm trying to use the Langchain ReAct Agents and I want to give them my pinecone index for context. I couldn't find any interface that let me provide the LLM that uses the ReAct chain my vector embeddings as well. Here I set up the LLM and retrieve my vector embedding. llm = ChatOpenAI(temperature=0.1, model_name="gpt-4") retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k}) Here I start my ReAct Chain. prompt = hub.pull("hwchase17/structured-chat-agent") agent = create_structured_chat_agent(llm, tools, prompt) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) result = agent_executor.invoke( { "input": question, "chat_history": chat_history } ) Before using the ReAct Agent, I used the vector embedding like this. crc = ConversationalRetrievalChain.from_llm(llm, retriever) result = crc.invoke({'question': systemPrompt, 'chat_history': chat_history}) chat_history.append((question, result['answer'])) Is there any way to combine both methods and have a ReAct agent that also uses vector Embeddings?
You can specify the retriever as a tool for the agent. Example: from langchain.tools.retriever import create_retriever_tool retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k}) retriever_tool = create_retriever_tool( retriever, "retriever_name", "A detailed description of the retriever and when the agent should use it.", ) tools = [retriever_tool] agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) References: Agent > Retriever (LangChain)
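For completeness, the agent itself still has to be built with this tools list so that it can actually call the retriever; a minimal sketch reusing the setup from the question:

    # Rebuild the agent with the tools list that now contains the retriever tool
    # (same llm and hub prompt as in the question).
    prompt = hub.pull("hwchase17/structured-chat-agent")
    agent = create_structured_chat_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    result = agent_executor.invoke({"input": question, "chat_history": chat_history})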
2
2
78,149,546
2024-3-12
https://stackoverflow.com/questions/78149546/multiindex-dataframe-to-a-standard-index-df
How do I convert a MultiIndex DataFrame to a Standard index DF? import pandas as pd df1 = pd.DataFrame({'old_code': ['00000001', '00000002', '00000003', '00000004'], 'Desc': ['99999991', '99999992 or 99999922', 'Use 99999993 or 99999933', '99999994']}, ) df1.set_index('old_code', inplace=True) df2=df1["Desc"].str.extractall(r"(?P<new_code>\d{7,9})") print(df2.head(10)) My output looks like this old_code match new_code 00000001 0 99999991 00000002 0 99999992 1 99999922 00000003 0 99999993 1 99999933 00000004 0 99999994 I'm trying to get it in a format like this? old_code new_code 00000001 99999991 00000002 99999992 00000002 99999922 00000003 99999993 00000002 99999933 00000004 99999994
You can drop the second level index and then reset the index: print(df2.droplevel(1).reset_index()) Prints: old_code new_code 0 00000001 99999991 1 00000002 99999992 2 00000002 99999922 3 00000003 99999993 4 00000003 99999933 5 00000004 99999994
2
1
78,149,052
2024-3-12
https://stackoverflow.com/questions/78149052/in-a-dataframe-how-can-i-find-if-each-row-number-appears-in-a-column-of-lists
Given a data frame, I need to create a new column 'flag' which will have value True in row i if i is not in any of the lists in column 'id' and False otherwise. For the example below, flag for rows 1, 3, 5, and 6 should be True. I attempt it with lambda, but cannot get it work. import pandas as pd pos = pd.DataFrame(columns=['id', 'pred']) pos.loc[1,'id'] = [4, 4] pos.loc[2,'id'] = [2] pos.loc[3,'id'] = [2, 4] pos.loc[4,'id'] = [2] pos.loc[5,'id'] = [2, 4] pos.loc[6,'id'] = [4] for i in range(0, len(pos)): pos['flag'] = pos.apply(lambda x: int(i in x['id']), axis=True) print(pos) id pred flag 1 [4, 4] NaN False 2 [2] NaN False 3 [2, 4] NaN False 4 [2, 6] NaN False 5 [2, 6] NaN False 6 [6] NaN False
With (~)isin/explode: pos["flag"] = ~pos.index.isin(pos["id"].explode()) # .astype(int) ? Output : id pred flag 1 [4, 4] NaN True 2 [2] NaN False 3 [2, 4] NaN True 4 [2] NaN False 5 [2, 4] NaN True 6 [4] NaN True [6 rows x 3 columns]
3
3
78,148,461
2024-3-12
https://stackoverflow.com/questions/78148461/how-do-i-create-a-preface-page-before-the-title-page-in-quarto
I am using quarto, python, and jupyter lab/notebook to be able to create reproducible reports. Our reports include a preface page before the title page with additional information about what is included in the report. So, how do I place content before the title page? The final document will be a .pdf. After much googling I suspect it has something to do with modifying before-body.tex but it wasn't clear to me if or how I could move the title page.
Yes, you need to use before-body.tex latex template partials. Include your preface contents before the title. before-body.tex $if(has-frontmatter)$ \frontmatter $endif$ % ----------- PREFACE ---------- \begin{center} \Large{Preface} \end{center} Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut vulputate bibendum mauris vel aliquam. Fusce vulputate non ligula vitae accumsan % ------------------------------- $if(title)$ $if(beamer)$ \frame{\titlepage} $else$ \maketitle $endif$ $if(abstract)$ \begin{abstract} $abstract$ \end{abstract} $endif$ $endif$ And, then use this template partial in the YAML of the ipynb file.
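A minimal sketch of that YAML, assuming before-body.tex sits next to the notebook (the relevant option is Quarto's template-partials for the PDF format):

    format:
      pdf:
        template-partials:
          - before-body.tex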
3
2
78,134,640
2024-3-10
https://stackoverflow.com/questions/78134640/polars-expanding-window-at-fixed-points
I have a Polars dataframe with 3 columns - group, date, value. The goal is to calculate cumsum(value) for each expanding window ends at the first time point at each year for each group. For example, for the following sample dataframe: import polars as pl df = pl.DataFrame( { "date": [ "2020-03-01", "2020-05-01", "2020-11-01", "2021-01-01", "2021-02-03", "2021-06-08", "2022-01-05", "2020-07-01", "2020-09-01", "2022-01-05", "2023-02-04", ], "group": [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2], "value": [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4], }, ).with_columns(pl.col("date").str.strptime(pl.Date)) The result I am looking for is: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ group ┆ value β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════║ β”‚ 2020-03-01 ┆ 1 ┆ 1 β”‚ β”‚ 2021-01-01 ┆ 1 ┆ 10 β”‚ β”‚ 2022-01-05 ┆ 1 ┆ 28 β”‚ β”‚ 2020-07-01 ┆ 2 ┆ 1 β”‚ β”‚ 2022-01-05 ┆ 2 ┆ 6 β”‚ β”‚ 2023-02-04 ┆ 2 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Basically, at the first date of each year, calculate the cumulative sum of value from beginning up to (including) this particular date, for each group respectively. I tried group_by_dynamic and rolling, but still unable to find a concise and clear way to solve this problem. Any idea is welcome. Thanks!
A "naive" approach could be to calculate the full cum_sum and then keep the first yearly row of each group. df.with_columns( cum_sum = pl.col("value").cum_sum().over("group") ).filter( pl.col("date").dt.year().is_first_distinct().over("group") ) shape: (6, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ group ┆ value ┆ cum_sum β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═══════β•ͺ═════════║ β”‚ 2020-03-01 ┆ 1 ┆ 1 ┆ 1 β”‚ β”‚ 2021-01-01 ┆ 1 ┆ 4 ┆ 10 β”‚ β”‚ 2022-01-05 ┆ 1 ┆ 7 ┆ 28 β”‚ β”‚ 2020-07-01 ┆ 2 ┆ 1 ┆ 1 β”‚ β”‚ 2022-01-05 ┆ 2 ┆ 3 ┆ 6 β”‚ β”‚ 2023-02-04 ┆ 2 ┆ 4 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ A more "direct" (efficient) approach could be to take the yearly sum first, then calculate the cum_sum: (df.group_by( "group", pl.col("date").dt.year().alias("year"), maintain_order=True ) .agg( pl.col("date").first(), pl.col("value").first(), pl.col("value").sum().alias("sum"), ) .with_columns( pl.when(pl.col("group").is_first_distinct()) .then(pl.col("value")) .otherwise( pl.col("value") + pl.col("sum").cum_sum().shift().over("group") ) .alias("cum_sum") ) ) shape: (6, 6) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ group ┆ year ┆ date ┆ value ┆ sum ┆ cum_sum β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i32 ┆ date ┆ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════β•ͺ════════════β•ͺ═══════β•ͺ═════β•ͺ═════════║ β”‚ 1 ┆ 2020 ┆ 2020-03-01 ┆ 1 ┆ 6 ┆ 1 β”‚ β”‚ 1 ┆ 2021 ┆ 2021-01-01 ┆ 4 ┆ 15 ┆ 10 β”‚ β”‚ 1 ┆ 2022 ┆ 2022-01-05 ┆ 7 ┆ 7 ┆ 28 β”‚ β”‚ 2 ┆ 2020 ┆ 2020-07-01 ┆ 1 ┆ 3 ┆ 1 β”‚ β”‚ 2 ┆ 2022 ┆ 2022-01-05 ┆ 3 ┆ 3 ┆ 6 β”‚ β”‚ 2 ┆ 2023 ┆ 2023-02-04 ┆ 4 ┆ 4 ┆ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
4
78,146,458
2024-3-12
https://stackoverflow.com/questions/78146458/strategy-to-make-django-models-with-lots-of-memberfunctions-more-manageable
I have a huge model because of its many member functions. I would like to organize the functions into separate classes according to their purpose, so everything is a bit more organized. Using OneToOneField is not a good option for me, because my app heavily relies on query caching (with cacheops). And I noticed that getting the latest version of a data object (from db) may be attached to a an outdated OneToOneField (from cache). Also, the number of data members is not a problem, only the number of member functions. So I would prefer to use just one data model. For example, I could do it this way: # models.py class MyModel(models.Model): # members # separate member functions class CustomModelFunctions: @staticmethod def some_function(my_model_instance): # logic # call the function on the my_model obj # instead of calling my_model.some_function() CustomModelFunctions.some_function(my_model) This works, but the problem is that these functions can not be used in django templates, since the my_model argument can not be passed. For example, I don't see how I could replace the following template code: {% if my_model.some_function %} Does anyone know a workaround for this? Or a different approach to organize models with a lot of member functions?
I have a huge model because of its many member functions. I would like to organize the functions into separate classes according to their purpose, so everything is a bit more organized. Probably the best way would be to work with a proxy, like: class FunctionCategory(type): def __get__(self, obj, objtype=None): return self(obj) class FunctionCategoryBase(metaclass=FunctionCategory): def __init__(self, obj): self.obj = obj # separate member functions class FunctionCategory1(FunctionCategoryBase): def some_function11(self): # use self.obj return None def some_function12(self): # use self.obj return None # separate member functions class FunctionCategory2(FunctionCategoryBase): def some_function21(self): # use self.obj return None def some_function22(self): # use self.obj return None def some_function23(self): # use self.obj return None class MyModel(models.Model): # members functions1 = FunctionCategory1 functions2 = FunctionCategory2 Then you can use: my_model_object.functions1.some_function12() This will thus for my_model_object.functions1 create a FunctionCategory1 where it injects the model object.
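Because functions1 resolves to a bound, zero-argument method on the instance, the template check from the question should then only need one extra attribute hop, e.g. {% if my_model.functions1.some_function11 %} (untested, but Django templates call zero-argument callables automatically).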
2
2
78,147,903
2024-3-12
https://stackoverflow.com/questions/78147903/how-to-create-a-conditional-incremented-column-in-polars
I'd like to create a conditional incremented column in polars. It should start from 1 and increment only if a certain condition (pl.col('code') == 'L') is met. import polars as pl df = pl.DataFrame({'file': ['a.txt','a.txt','a.txt','a.txt','b.txt','b.txt','c.txt','c.txt','c.txt','c.txt','c.txt'], 'code': ['X','Y','Z','L','A','A','B','L','C','L','X'] }) df.with_columns(pl.int_range(start=1, end=pl.len()+1).over('file').alias('rrr') ) This produces a simple unconditional increment. But how do I add conditions?
Not sure which output exactly you're expecting, but here's an example of incrementing the counter only at rows which meet the criteria, using cum_sum(): df.with_columns( pl.when(pl.col('code') == 'L').then(pl.lit(1)).otherwise(pl.lit(0)).alias('rrr') ).with_columns( pl.col('rrr').cum_sum().over('file') + 1 ) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ file ┆ code ┆ rrr β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ i32 β”‚ β•žβ•β•β•β•β•β•β•β•ͺ══════β•ͺ═════║ β”‚ a.txt ┆ X ┆ 1 β”‚ β”‚ a.txt ┆ Y ┆ 1 β”‚ β”‚ a.txt ┆ Z ┆ 1 β”‚ β”‚ a.txt ┆ L ┆ 2 β”‚ β”‚ b.txt ┆ A ┆ 1 β”‚ β”‚ b.txt ┆ A ┆ 1 β”‚ β”‚ c.txt ┆ B ┆ 1 β”‚ β”‚ c.txt ┆ L ┆ 2 β”‚ β”‚ c.txt ┆ C ┆ 2 β”‚ β”‚ c.txt ┆ L ┆ 3 β”‚ β”‚ c.txt ┆ X ┆ 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
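As a side note, the when/then step can likely be collapsed, since taking the cumulative sum of the boolean condition counts the matches directly; an untested sketch that should give the same values (the integer dtype may differ):

    df.with_columns(
        ((pl.col('code') == 'L').cum_sum().over('file') + 1).alias('rrr')
    )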
2
1
78,147,344
2024-3-12
https://stackoverflow.com/questions/78147344/how-to-get-the-number-of-the-row-that-contains-a-specific-value-in-the-given-col
I need to get the number of the row that contains a specific value in the given column. It is guaranteed that such value exists in the column and in a unique row. My attempt: import pandas as pd pos = pd.DataFrame(columns=['id', 'pred']) pos.loc[1,'id'] = [4, 4, 4] pos.loc[2,'id'] = [2, 3, 3] pos.loc[3,'id'] = [1, 1, 2] pos.loc[4,'id'] = [1, 2, 1] print(pos) print(pos[pos['id'] == [1, 1, 2]].index[0]) I am getting an error: ValueError: ('Lengths must match to compare', (6,), (3,))
You need to loop. If you want the position: next(i for i, x in enumerate(pos['id']) if [1,1,2] == x) Output: 2 Or, to have the index: next(i for i, x in zip(pos.index, pos['id']) if [1,1,2] == x) Output: 3
4
3
78,145,973
2024-3-12
https://stackoverflow.com/questions/78145973/bug-in-large-sparse-csr-binary-matrices-multiplication-result
This boggles my mind, is this a known bug or am I missing something? If a bug is there a way to circumvent it? Suppose I have a relatively small binary (0/1) n x q scipy.sparse.csr_matrix, as in: import numpy as np from scipy import sparse def get_dummies(vec, vec_max): vec_size = vec.size Z = sparse.csr_matrix((np.ones(vec_size), (np.arange(vec_size), vec)), shape=(vec_size, vec_max), dtype=np.uint8) return Z q = 100 ns = np.round(np.random.random(q)*100).astype(np.int16) Z_idx = np.repeat(np.arange(q), ns) Z = get_dummies(Z_idx, q) Z <5171x100 sparse matrix of type '<class 'numpy.uint8'>' with 5171 stored elements in Compressed Sparse Row format> Here Z is a standard dummy variables matrix, with n=5171 observations and q=100 variables: Z[:5, :5].toarray() array([[1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]], dtype=uint8) E.g., if the first 5 variables have... ns[:5] array([21, 22, 37, 24, 99], dtype=int16) frequencies, we would also see this in Z's column sums: Z[:, :5].sum(axis=0) matrix([[21, 22, 37, 24, 99]], dtype=uint64) Now, as expected, if I multiply Z.T @ Z I should get a q x q diagonal matrix, on the diagonal the frequencies of the q variables: print((Z.T @ Z).shape) print((Z.T @ Z)[:5, :5].toarray() (100, 100) [[21 0 0 0 0] [ 0 22 0 0 0] [ 0 0 37 0 0] [ 0 0 0 24 0] [ 0 0 0 0 99]] Now for the bug: If n is really large (for me it happens around n = 100K already): q = 1000 ns = np.round(np.random.random(q)*1000).astype(np.int16) Z_idx = np.repeat(np.arange(q), ns) Z = get_dummies(Z_idx, q) Z <495509x1000 sparse matrix of type '<class 'numpy.uint8'>' with 495509 stored elements in Compressed Sparse Row format> The frequencies are large, the sum of Z's columns is as expected: print(ns[:5]) Z[:, :5].sum(axis=0) array([485, 756, 380, 87, 454], dtype=int16) matrix([[485, 756, 380, 87, 454]], dtype=uint64) But the Z.T @ Z is messed up! In the sense that I'm not getting the right frequencies on the diagonal: print((Z.T @ Z).shape) print((Z.T @ Z)[:5, :5].toarray()) (1000, 1000) [[229 0 0 0 0] [ 0 244 0 0 0] [ 0 0 124 0 0] [ 0 0 0 87 0] [ 0 0 0 0 198]] Amazingly, there is some relation to the true frequencies: import matplotlib.pyplot as plt plt.scatter(ns, (Z.T @ Z).diagonal()) plt.xlabel('real frequencies') plt.ylabel('values on ZZ diagonal') plt.show() What is going on? I'm using standard colab: import scipy as sc print(np.__version__) print(sc.__version__) 1.25.2 1.11.4 PS: Obviously if I wanted just Z.T @ Z's output matrix there are easier ways to get it, this is a very simplified reduced problem, thanks.
You are using uint8 in get_dummies print(Z.dtype, (Z.T @ Z)[:5, :5].dtype) # uint8, uint8 So the result is overflowing. The pattern is because the result is the true result modulo 256. If you change the dtype to uint16, the problem disappears. The fact that sum increases the precision of the dtype is a bit surprising, but it is documented and consistent with what NumPy sum does.
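A minimal variant of the question's helper with a wider dtype; uint16 is enough for the counts in this example, and a larger type would be needed if frequencies could exceed 65535:

    def get_dummies(vec, vec_max):
        vec_size = vec.size
        # uint16 keeps the products in Z.T @ Z from wrapping around at 256
        Z = sparse.csr_matrix(
            (np.ones(vec_size), (np.arange(vec_size), vec)),
            shape=(vec_size, vec_max),
            dtype=np.uint16,
        )
        return Z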
2
2
78,140,395
2024-3-11
https://stackoverflow.com/questions/78140395/python-function-compatible-with-sync-and-async-client
I'm developing a library that should support sync and async users while minimizing code duplication inside the library. The ideal case would be implementing the library async (since the library does remote API calls) and adding support for sync with a wrapper or so. Consider the following function as part of the library. # Library code async def internal_function(): """Internal part of my library doing HTTP call""" pass My idea was to provide a wrapper that detects if the users of the library use async or not. # Library code def api_call(): """Public api of my library""" if asyncio.get_event_loop().is_running(): # Async caller return internal_function() # This returns a coroutine, so the caller needs to await it # Sync caller, so we need to run the coroutine in a blocking way return asyncio.run(internal_function()) At first, this seemed to be the solution. With this I can support users that call the function from an async function, users that call the function from a notebook (that's also async), and users that call the function from a plain python script (here it falls back to the blocking asyncio.run). However, there are cases when the function is called from within an event loop, but the direct caller is a sync function. # Code from users of the library async def entrypoint(): """This entrypoint is called in an event loop, e.g. within fastlib""" return legacy_sync_code() # Call to sync function def legacy_sync_code(): """Legacy code that does not support async""" # Call to library code from sync function, # expect value not coroutine, could not use await api_response = api_call() return api_response.json() # Needs to be value, not coroutine In the last line, the call of json() fails. api_call() wrongly inferrs that the caller can await the response, and returns the coroutine. In order to support these kind of users, I would need to identify if the direct calling function is sync and expects a value instead of a corouting, have a way to await the result of internal_function() in a blocking way. Using asyncio.run does only work in plain python scripts and fails if the code was called form an event loop higher up in the stack trace. The first point could be mitigated if my library provided two functions, e.g., api_call_async() and api_call_sync(). I hope I make my point clear enough. I don't see why this should be fundamentally impossible, however, I can accept if the design of Python does not allow me to support sync and async users in a completely transparent way.
Do not try to be too flexible or solve problems that have not occurred yet. That would make your life harder and your code more complex to develop and maintain, and in the end not as flexible as you had hoped. There is nothing wrong with not babysitting fellow developers and letting them take a bit of responsibility. Let them make their own choice about whether to go sync or async. Also note that what looks like a benefit to you (in terms of implementation details etc., like detecting sync/async) may be a deal breaker for others, because you might be doing something you weren't asked to do in the first place. It's easy to outsmart yourself for no real benefit and then waste a lot of time fixing problems you just happily created. I would personally go for the separate api_call_sync() and api_call_async() approach, but also make sure my library documentation is crystal clear and discusses all the edge cases you currently see as potentially problematic, so my users get the full picture of the battlefield and can take the best shot they can.
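A minimal sketch of that two-function approach, reusing internal_function from the question. Note that asyncio.run raises RuntimeError when called while an event loop is already running in the same thread, which is exactly the case a single auto-detecting entry point cannot handle transparently:

    import asyncio

    async def api_call_async():
        """Async entry point: await this from async code."""
        return await internal_function()

    def api_call_sync():
        """Sync entry point for plain synchronous callers.

        Raises RuntimeError if invoked from inside a running event loop;
        such callers should switch to api_call_async().
        """
        return asyncio.run(internal_function())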
3
4
78,144,684
2024-3-12
https://stackoverflow.com/questions/78144684/how-to-find-closed-objects-among-an-array-of-lines
I have a list of tuples. Each tuple contains coordinate points (x, y, x1, y1,...) forming a line. All these lines form a drawing. There are 5 closed objects in this picture. How can I get the number of these objects with their coordinates? Coordinates_list = [(939, 1002, 984, 993, 998, 1001, 1043, 995, 1080, 1004, 1106, 994, 1147, 1003, 1182, 995, 1223, 1005), (939, 1002, 939, 900), (939, 900, 961, 916), (961, 916, 1031, 898), (1031, 898, 1080, 906), (1080, 906, 1190, 896), (1190, 896, 1225, 897), (1223, 1005, 1225, 897), (939, 1002, 1031, 898, 1106, 994, 1190, 896, 1182, 995)] I tried to use the DFS (Depth First Search) Algorithm, but it always returned fewer objects than there actually were. But the data here is presented differently - here there are no more than two points in the tuple, but this does not change the drawing def find_closed_figures(lines): def dfs(line_idx, visited): visited.add(line_idx) for neighbor_idx, line in enumerate(lines): if neighbor_idx not in visited and lines[line_idx][3:6] == line[0:3]: dfs(neighbor_idx, visited) closed_figures_count = 0 visited_lines = set() for idx, line in enumerate(lines): if idx not in visited_lines: dfs(idx, visited_lines) closed_figures_count += 1 return closed_figures_count coordinates_list = [(939, 1002, 0, 984, 993, 0), (984, 993, 0, 998, 1001, 0), (998, 1001, 0, 1043, 995, 0), (1043, 995, 0, 1080, 1004, 0), (1080, 1004, 0, 1106, 994, 0), (1106, 994, 0, 1147, 1003, 0), (1147, 1003, 0, 1182, 995, 0), (1182, 995, 0, 1223, 1005, 0), (939, 1002, 0, 939, 900, 0), (939, 900, 0, 961, 916, 0), (961, 916, 0, 1031, 898, 0), (1031, 898, 0, 1080, 906, 0), (1080, 906, 0, 1190, 896, 0), (1190, 896, 0, 1225, 897, 0), (1223, 1005, 0, 1225, 897, 0), (939, 1002, 0, 1031, 898, 0), (1031, 898, 0, 1106, 994, 0), (1106, 994, 0, 1190, 896, 0), (1190, 896, 0, 1182, 995, 0)] closed_figures_count = find_closed_figures(coordinates_list) print(closed_figures_count)
Here is a condensed version of Grismar's answer, where he has a good explanation as well. I just removed the manual search for segments and treated each point as a node directly. import networkx as nx lines = [ (939, 1002, 984, 993, 998, 1001, 1043, 995, 1080, 1004, 1106, 994, 1147, 1003, 1182, 995, 1223, 1005), (939, 1002, 939, 900), (939, 900, 961, 916), (961, 916, 1031, 898), (1031, 898, 1080, 906), (1080, 906, 1190, 896), (1190, 896, 1225, 897), (1223, 1005, 1225, 897), (939, 1002, 1031, 898, 1106, 994, 1190, 896, 1182, 995) ] point_lists = [[pair for pair in zip(line[::2],line[1::2])] for line in lines] edges = [segment for point_list in point_lists for segment in zip(point_list[:-1], point_list[1:])] G = nx.from_edgelist(edges) cycles = list(nx.chordless_cycles(G)) print('The number of chord-less cycles:', len(cycles)) print('The cycles:') for cycle in cycles: print("\t", cycle)
3
2
78,143,341
2024-3-11
https://stackoverflow.com/questions/78143341/explode-a-polars-dataframe-column-without-duplicating-other-column-values
As a minimum example, let's say we have next polars.DataFrame: df = pl.DataFrame({"sub_id": [1,2,3], "engagement": ["one:one,two:two", "one:two,two:one", "one:one"], "total_duration": [123, 456, 789]}) sub_id engagement total_duration 1 one:one,two:two 123 2 one:two,two:one 456 3 one:one 789 then, we explode "engagement" column df = df.with_columns(pl.col("engagement").str.split(",")).explode("engagement") and receive: sub_id engagement total_duration 1 one:one 123 1 two:two 123 2 one:two 456 2 two:one 456 3 one:one 789 For visualization I use Plotly, and code would be following: import plotly.express as px fig = px.bar(df, x="sub_id", y="total_duration", color="engagement") fig.show() Resulting plot: Now it basically means that subscribers 1 and 2 have their total_duration (total watched time) doubled. How could I remain total_duration per sub, but leaving engagement groups as shown on the plot legend?
An option to handle this in polars would be to split total_duration equally between engagement rows within sub_id. For this, we simply divide total_duration by the number of rows of the given sub_id. ( df .with_columns( pl.col("engagement").str.split(",") ) .explode("engagement") .with_columns( pl.col("total_duration") / pl.len().over("sub_id") ) ) shape: (5, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ sub_id ┆ engagement ┆ total_duration β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ════════════β•ͺ════════════════║ β”‚ 1 ┆ one:one ┆ 61.5 β”‚ β”‚ 1 ┆ two:two ┆ 61.5 β”‚ β”‚ 2 ┆ one:two ┆ 228.0 β”‚ β”‚ 2 ┆ two:one ┆ 228.0 β”‚ β”‚ 3 ┆ one:one ┆ 789.0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
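If the result above is assigned to, say, df_split (a name chosen here for illustration), the plotting code from the question can be reused unchanged and the stacked bars now sum to the original total_duration per sub_id:

    import plotly.express as px

    fig = px.bar(df_split, x="sub_id", y="total_duration", color="engagement")
    fig.show()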
6
3
78,144,401
2024-3-12
https://stackoverflow.com/questions/78144401/how-could-i-handle-superclass-and-subclass-cases
Problem statement: I have 100s of py files which defines pydantic schema. Suddenly I need to treat empty string as None. I am expecting minimal changes across all the files. Approach I implemented: I created an inherited class such as class ConstrainedStr(str): @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v: str, field: Field) -> Optional[str]: v = v.strip() if v == "": return None return v Then, in all the py files I just added an import statement from package.module import ConstrainedStr as str Luckily, It worked at the first sight. But I ended up with an issue where I have a function User file: from package.module import ConstrainedStr as str def validate(cls, value:str): //sample value 'asd' if isinstance(value, str): validation_rule() Here, this conditional statement failed. Question How could I avoid major changes to achieve this? Is there a way? Why that isinstance check failed. Constrainedstr is also a str right? Is my understanding wrong? When I digged into this further, just to understand the behaviour for the question #2, I found the below. type(ConstrainedStr) return type<class>. `type('123')` returns type<str> But when I check the builtins package, str is also a class. But type function returns the type for str as str but for ConstrainedStr as class. Hence, My question #2 popped up. Another example: class A(int): pass isinstance(2, A) # False This is my #2 question.
While an instance of a subclass is automatically an instance of the base class, the reverse isn't true. This is analogous to how every cat is a mammal, but not every mammal is a cat. Similarly, since ConstrainedStr is a subclass of str, an instance of the child class ConstrainedStr is automatically an instance of the base class str, while conversely an instance of the base class str, such as 'asd', cannot automatically be considered an instance of a child class such as ConstrainedStr. To make the reverse instance check also return True, you can customize the behavior of isinstance by defining __instancecheck__ on a metaclass. Also customize issubclass by defining __subclasscheck__ to keep the relationships consistent. To achieve the desired behavior, your custom __instancecheck__ should return True if the given instance is an instance of any of the base classes: class SubstitutableMeta(type): def __instancecheck__(cls, instance): return any(isinstance(instance, base) for base in cls.__bases__) def __subclasscheck__(cls, subclass): return any(issubclass(subclass, base) for base in cls.__bases__) class ConstrainedStr(str, metaclass=SubstitutableMeta): pass print(isinstance('asd', ConstrainedStr)) # outputs True
3
3
78,143,771
2024-3-11
https://stackoverflow.com/questions/78143771/polars-replacing-the-first-n-rows-with-certain-values
In Pandas we can replace the first n rows with certain values: import pandas as pd import numpy as np df = pd.DataFrame({"test": np.arange(0, 10)}) df.iloc[0:5] = np.nan df out: test 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 5.0 6 6.0 7 7.0 8 8.0 9 9.0 What would be the equivalent operations in Python Polars?
You can use DataFrame.with_row_index(): import polars as pl df = pl.DataFrame({"test": np.arange(1, 11)}) print( df.with_row_index() .with_columns( pl.when(pl.col("index") < 5) .then(None) .otherwise(pl.col("test")) .alias("test") ) .drop("index") ) Prints: shape: (10, 1) β”Œβ”€β”€β”€β”€β”€β”€β” β”‚ test β”‚ β”‚ --- β”‚ β”‚ f64 β”‚ β•žβ•β•β•β•β•β•β•‘ β”‚ Null β”‚ β”‚ Null β”‚ β”‚ Null β”‚ β”‚ Null β”‚ β”‚ Null β”‚ β”‚ 6 β”‚ β”‚ 7 β”‚ β”‚ 8 β”‚ β”‚ 9 β”‚ β”‚ 10 β”‚ β””β”€β”€β”€β”€β”€β”€β”˜ or by pl.int_range() which gives more flexibility in cases of groups. print( (df .with_columns( pl.int_range(0, pl.len(), dtype=pl.UInt32).over(True).alias("index") ) .with_columns( pl.when(pl.col("index") < 5) .then(None) .otherwise(pl.col("test")) .alias("test") ) .drop("index") ) )
3
3
78,143,178
2024-3-11
https://stackoverflow.com/questions/78143178/how-to-check-multiple-items-for-nullity-in-an-and-fashion-in-python
I was wondering if there's a Pythonic/hacky way to check for all (or none) items and dependencies in a set to be null. Suppose I have a variable foo that is a dict, but can be nullable: foo: Optional[dict] = None And I want to check that both foo is not None or one of its keys: if foo is not None and foo.get('bar') is not None: pass Using AND, it will "short-circuit" before anything else, so I'm safe checking for other keys on a nullable item. I was thinking about ways of doing that more neatly, like we can do with any() or all(). I though about doing it in a list comprehension, but it would fail: if not all(item is not None for item in [foo, foo.get('bar'), foo.get('baz')]: # Value Error! :sad One way could be using a try/catch, but that is ugly and won't make it look fun... We can't go for walrus operator as it is not assignable in a list comprehension: if not all (item for item in [foo := foo or {}, foo.get('bar'), foo.get('baz')]: # SyntaxError, can't use walrus in list comprehension I was also wondering if there's not a very well maintened tool that serves this purpose. Any better ideas?
Simply separate the foo from all the foo.gets and use an and: if foo is not None and all(foo.get(key) is not None for key in ('bar', 'baz')): You can also use key in foo to check if the key exists in the dictionary, this is actually safer because if the key exists and has a value of None in the dictionary, then using the above approach won't distinguish between the two cases: if foo is not None and all(key in foo for key in ('bar', 'baz')):
2
3
78,141,652
2024-3-11
https://stackoverflow.com/questions/78141652/pandas-remove-rows-with-only-one-type-of-value-repeated-or-not
I am trying to remove rows whose all values are the same, or a combination of the same values. I have, for example a dataframe like: data = {'A': ['1, 1, 1', '1', '2', '3', '1'], 'B': ['1', '1,1,1,1', '2', '4', '1'], 'C': ['1, 1', '2', '3', '5', '1']} I want to remove rows whose values in all columns are '1' or any combination of '1'. The final result should be something like: data = {'A': ['1', '2', '3'], 'B': ['1,1,1,1', '2', '4'], 'C': ['2', '3', '5']} I've tried the following: def remove_rows_with_ones(value): return all(x == '1' for x in value.split(',')) mask = df.apply(lambda row: any(remove_rows_with_ones(val) for val in row), axis=1) df_filtered = df[~mask] But it does not seem to work.
You could convert to string and check if each cell contains a character other than 1 (or space/comma), if at least one True, keep the row: out = df[df.apply(lambda s: s.astype(str).str.contains('[^1 ,]')).any(axis=1)] Or, with your original idea of splitting the strings on ', ': import re out = df[~df.applymap(lambda c: all(x=='1' for x in re.split(', *', c))).all(axis=1)] # pandas β‰₯ 2.1 out = df[~df.map(lambda c: all(x=='1' for x in re.split(', *', c))).all(axis=1)] Output: A B C 1 1 1,1,1,1 2 2 2 2 3 3 3 4 5 Intermediates: # df.apply(lambda s: s.astype(str).str.contains('[^1 ,]')) A B C 0 False False False 1 False False True 2 True True True 3 True True True 4 False False False # df.map(lambda c: all(x=='1' for x in re.split(', *', c))) A B C 0 True True True 1 True True False 2 False False False 3 False False False 4 True True True
2
2
78,140,607
2024-3-11
https://stackoverflow.com/questions/78140607/how-to-relate-list-items-from-two-pandas-dataframe-columns
I have a DataFrame that contains two columns composed by lists. One column has the categories of certain item, the other column has the points associated to this category. import pandas as pd cat = [ ['speed', 'health', 'strength', 'health'], ['strength', 'speed', 'speed'], ['strength', 'speed', 'health', 'speed'] ] pts = [ [1, 2, 1.5, -1], [2, -1.5, 1.5], [-1, 2, 0, 1.5] ] s_cat = pd.Series(cat, name='cat') s_pts = pd.Series(pts, name='pts') df = pd.concat([s_cat, s_pts], axis=1) Output: cat pts 0 [speed, health, strength, health] [1, 2, 1.5, -1] 1 [strength, speed, speed] [2, -1.5, 1.5] 2 [strength, speed, health, speed] [-1, 2, 0, 1.5] I would like to relate these lists to get the sum of the points from each category to a new column, and if possible, the count of positive and negative points of each category. I expect something like this: cat pts speed_sum health_sum strength_sum speed_pos speed_neg health_pos health_neg strength_pos strength_neg 0 [speed, health, strength, health] [1, 2, 1.5, -1] 1.0 1.0 1.5 1 0 1 1 1 0 1 [strength, speed, speed] [2, -1.5, 1.5] 0.0 NaN 2.0 1 1 0 0 1 0 2 [strength, speed, health, speed] [-1, 2, 0, 1.5] 3.5 0.0 -1.0 2 0 0 0 0 1
Try: def pos(vals): return (vals > 0).sum() def neg(vals): return (vals < 0).sum() tmp = df.explode(["cat", "pts"]) tmp = tmp.pivot_table( index=tmp.index, columns="cat", values="pts", aggfunc=["sum", pos, neg], ) tmp.columns = [f"{b}_{a}" for a, b in tmp.columns] out = pd.concat([df, tmp], axis=1) print(out) Prints: cat pts health_sum speed_sum strength_sum health_pos speed_pos strength_pos health_neg speed_neg strength_neg 0 [speed, health, strength, health] [1, 2, 1.5, -1] 1 1 1.5 1.0 1.0 1.0 1.0 0.0 0.0 1 [strength, speed, speed] [2, -1.5, 1.5] NaN 0.0 2 NaN 1.0 1.0 NaN 1.0 0.0 2 [strength, speed, health, speed] [-1, 2, 0, 1.5] 0 3.5 -1 0.0 2.0 0.0 0.0 0.0 1.0
2
3
78,140,231
2024-3-11
https://stackoverflow.com/questions/78140231/vscode-pylance-false-positive-reaction-on-importerror
I am trying to write cross-platform app in python. I want to use uvloop (only if possible), using this code: import asyncio try: import uvloop asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) except ImportError: pass However, pylance shows message that "Import "uvloop" could not be resolved" in Windows (because uvloop is not available for this OS, so that's okay). It's not a big deal, everything works, but this message is annoying. I can set ignore rule, but I want to keep other unresolved import warnings, because in other cases they are pretty useful. How can I tell Pylance, that in this specific case I'm okay with it?
Set an inline ignore statement: import asyncio try: import uvloop # type: ignore asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) except ImportError: pass It'd be better to be more specific with the rule you want to ignore. The above will ignore all type checking errors. https://github.com/microsoft/pylance-release/issues/196#issuecomment-668099106
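To silence only the unresolved-import diagnostic rather than all type checking, Pyright (which powers Pylance) also accepts a rule-specific ignore comment; assuming the diagnostic in question is reportMissingImports:

    import uvloop  # pyright: ignore[reportMissingImports]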
2
2
78,122,541
2024-3-7
https://stackoverflow.com/questions/78122541/unexpected-keyword-argument-use-dora-when-attempting-to-generate-summary-from
I have fine-tuned a Mistral7B LLM using LoRA in 16 bit configuration using samsum training set from Hugginggace. The idea is to feed the fine-tuned LLM a conversation an it should generate a summary. here is my trining script: # set the train & validation data set from datasets import load_dataset train_dataset = load_dataset('json', data_files='/data/datasets/summarisation/samsum/train.json', split='train') eval_dataset = load_dataset('json', data_files='/data/datasets/summarisation/samsum/validation.json', split='train') # Set up the Accelerator. I'm not sure if we really need this for a QLoRA given its description (I have to read more about it) but it seems it can't hurt, and it's helpful to have the code for future reference. You can always comment out the accelerator if you want to try without. from accelerate import FullyShardedDataParallelPlugin, Accelerator from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig fsdp_plugin = FullyShardedDataParallelPlugin( state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=False), optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=False), ) accelerator = Accelerator(fsdp_plugin=fsdp_plugin) # Let's use Weights & Biases to track our training metrics. You'll need to apply an API key when prompted. Feel free to skip this if you'd like, and just comment out the `wandb` parameters in the `Trainer` definition below. import wandb, os wandb.login() wandb_project = "mistral-samsun-finetune" if len(wandb_project) > 0: os.environ["WANDB_PROJECT"] = wandb_project # Formatting prompts # Then create a `formatting_func` to structure training examples as prompts. def formatting_func(example): text = f"### Dialog: {example['dialogue']}\n ### Summary: {example['summary']}" return text ### 2. Load Base Model # Let's now load Mistral - mistralai/Mistral-7B-v0.1 - using 4-bit quantization! import torch from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, AutoConfig base_model_id = "mistralai/Mistral-7B-v0.1" # 4 bit config #bnb_config = BitsAndBytesConfig( # load_in_4bit=True, # bnb_4bit_use_double_quant=True, # bnb_4bit_quant_type="nf4", # bnb_4bit_compute_dtype=torch.bfloat16 #) #model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto") #16 bit config = AutoConfig.from_pretrained(base_model_id) config.quantization = "int8" # Set quantization to int8 for 16-bit precision model = AutoModelForCausalLM.from_pretrained(base_model_id, config=config) ### 3. Tokenization #Set up the tokenizer. Add padding on the left as it [makes training use less memory](https://ai.stackexchange.com/questions/41485/while-fine-tuning-a-decoder-only-llm-like-llama-on-chat-dataset-what-kind-of-pa). #For `model_max_length`, it's helpful to get a distribution of your data lengths. Let's first tokenize without the truncation/padding, so we can get a length distribution. 
tokenizer = AutoTokenizer.from_pretrained( base_model_id, padding_side="left", add_eos_token=True, add_bos_token=True, ) tokenizer.pad_token = tokenizer.eos_token def generate_and_tokenize_prompt(prompt): return tokenizer(formatting_func(prompt)) # return tokenizer(prompt, padding="max_length", truncation=True) # Reformat the prompt and tokenize each sample: tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt) tokenized_val_dataset = eval_dataset.map(generate_and_tokenize_prompt) # Let's get a distribution of our dataset lengths, so we can determine the appropriate `max_length` for our input tensors. import matplotlib.pyplot as plt def plot_data_lengths(tokenized_train_dataset, tokenized_val_dataset): lengths = [len(x['input_ids']) for x in tokenized_train_dataset] lengths += [len(x['input_ids']) for x in tokenized_val_dataset] print(len(lengths)) # Plotting the histogram plt.figure(figsize=(10, 6)) plt.hist(lengths, bins=20, alpha=0.7, color='blue') plt.xlabel('Length of input_ids') plt.ylabel('Frequency') plt.title('Distribution of Lengths of input_ids') plt.show() plot_data_lengths(tokenized_train_dataset, tokenized_val_dataset) #From here, you can choose where you'd like to set the max_length to be. You can truncate and pad training examples to fit them to your chosen size. Be aware that choosing a larger max_length has its compute tradeoffs. # I'm using my personal notes to train the model, and they vary greatly in length. I spent some time cleaning the dataset so the samples were about the same length, cutting up individual notes if needed, but being sure to not cut in the middle of a word or sentence. # Now let's tokenize again with padding and truncation, and set up the tokenize function to make labels and input_ids the same. This is basically what self-supervised fine-tuning is. #max_length = 512 # This was an appropriate max length for my dataset #We have a dynamic max length max_length = max(max(len(x['input_ids']) for x in tokenized_train_dataset), max(len(x['input_ids']) for x in tokenized_val_dataset)) print(f"Max length: {max_length}") wandb.init() wandb.log({"Max length": max_length}) def generate_and_tokenize_prompt2(prompt): result = tokenizer( formatting_func(prompt), truncation=True, max_length=max_length, padding="max_length", ) result["labels"] = result["input_ids"].copy() return result tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt2) tokenized_val_dataset = eval_dataset.map(generate_and_tokenize_prompt2) # Check that `input_ids` is padded on the left with the `eos_token` (2) and there is an `eos_token` 2 added to the end, and the prompt starts with a `bos_token` (1). print(tokenized_train_dataset[1]['input_ids']) # Now all the samples should be the same length, `max_length`. plot_data_lengths(tokenized_train_dataset, tokenized_val_dataset) # Set Up LoRA # Now, to start our fine-tuning, we have to apply some preprocessing to the model to prepare it for training. For that use the prepare_model_for_kbit_training method from PEFT. from peft import prepare_model_for_kbit_training model.gradient_checkpointing_enable() model = prepare_model_for_kbit_training(model) def print_trainable_parameters(model): # """ # Prints the number of trainable parameters in the model. 
# """ trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) # T #Let's print the model to examine its layers, as we will apply QLoRA to all the linear layers of the model. Those layers are q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, and lm_head. print(model) #Here we define the LoRA config. #`r` is the rank of the low-rank matrix used in the adapters, which thus controls the number of parameters trained. A higher rank will allow for more expressivity, but there is a compute tradeoff. #`alpha` is the scaling factor for the learned weights. The weight matrix is scaled by `alpha/r`, and thus a higher value for `alpha` assigns more weight to the LoRA activations. #The values used in the QLoRA paper were `r=64` and `lora_alpha=16`, and these are said to generalize well, but we will use `r=32` and `lora_alpha=64` so that we have more emphasis on the new fine-tuned data while also reducing computational complexity. from peft import LoraConfig, get_peft_model config = LoraConfig( r=32, lora_alpha=64, target_modules=[ # "q_proj", # "k_proj", # "v_proj", # "o_proj", # "gate_proj", # "up_proj", # "down_proj", "lm_head", ], bias="none", lora_dropout=0.05, # Conventional task_type="CAUSAL_LM", ) #apply Lora model = get_peft_model(model, config) print_trainable_parameters(model) #See how the model looks different now, with the LoRA adapters added: print(model) #5 Run Training! #Overfitting is when the validation loss goes up (bad) while the training loss goes down significantly, meaning the model is learning the training set really well, but is unable to generalize to new datapoints. # In most cases, this is not desired, but since I am just playing around with a model to generate outputs like my journal entries, I was fine with a moderate amount of overfitting. #With that said, a note on training: you can set the max_steps to be high initially, and examine at what step your model's performance starts to degrade. #There is where you'll find a sweet spot for how many steps to perform. For example, say you start with 1000 steps, and find that at around 500 steps the model starts overfitting, as described above. #Therefore, 500 steps would be yt spot, so you would use the checkpoint-500 model repo in your output dir (mistral-journal-finetune) as your final model in step 6 below. #If you're just doing something for fun like I did and are OK with overfitting, you can try different checkpoint versions with different degrees of overfitting. #You can interrupt the process via Kernel -> Interrupt Kernel in the top nav bar once you realize you didn't need to train anymore. 
if torch.cuda.device_count() > 1: # If more than 1 GPU model.is_parallelizable = True model.model_parallel = True model = accelerator.prepare_model(model) import transformers from datetime import datetime project = "finetune-lora" base_model_name = "mistral" run_name = base_model_name + "-" + project trainer = transformers.Trainer( model=model, train_dataset=tokenized_train_dataset, eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, warmup_steps=500, per_device_train_batch_size=2, gradient_accumulation_steps=1, gradient_checkpointing=True, max_steps=10000, learning_rate=2.5e-5, # Want a small lr for finetuning bf16=True, optim="paged_adamw_8bit", logging_steps=25, # When to start reporting loss logging_dir="./logs", # Directory for storing logs save_strategy="steps", # Save the model checkpoint every logging step save_steps=25, # Save checkpoints every 50 steps evaluation_strategy="steps", # Evaluate the model every logging step eval_steps=25, # Evaluate and save checkpoints every 50 steps do_eval=True, # Perform evaluation at the end of training report_to="wandb", # Comment this out if you don't want to use weights & baises run_name=f"{run_name}-{datetime.now().strftime('%Y-%m-%d-%H-%M')}" # Name of the W&B run (optional) ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! trainer.train() The script above works as expected. Now I have a seconds script where I feed a file path that contains the conversation it should return a summary. This is my second script: import sys import torch from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig from peft import PeftModel def read_file(file_path): try: with open(file_path, 'r') as file: file_content = file.read() return file_content except FileNotFoundError: print(f"The file at path '{file_path}' was not found.") except Exception as e: print(f"An error occurred: {e}") return None if __name__ == "__main__": if len(sys.argv) != 2: print("Usage: python script.py <file_path>") sys.exit(1) file_path = sys.argv[1] content = read_file(file_path) if content is None: sys.exit(1) print(f"File with dialogue '{file_path}':") # Load Base Model base_model_id = "mistralai/Mistral-7B-v0.1" config = AutoConfig.from_pretrained(base_model_id) #model = AutoModelForCausalLM.from_pretrained(base_model_id, config=config) config.quantization = "int8" # Set quantization to int8 for 16-bit precision model = AutoModelForCausalLM.from_pretrained(base_model_id, config=config) tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token=True, trust_remote_code=True) # Load the LoRA adapter from the appropriate checkpoint directory new_checkpoint_path = "mistral-finetune-lora16/checkpoint-10000" # Since PeftModel.from_pretrained expects only one argument, we need to pass the model and the checkpoint path separately ft_model = PeftModel.from_pretrained(base_model_id, new_checkpoint_path) # Generate output using the loaded model and tokenizer base_prompt = " ### Dialog: \n ### Summary: #" eval_prompt = f"{base_prompt[:13]}{content}{base_prompt[13:]}" model_input = tokenizer(eval_prompt, return_tensors="pt").to("cuda") ft_model.eval() with torch.no_grad(): output = ft_model.generate(**model_input, max_new_tokens=100, repetition_penalty=1.15)[0] decoded_output = tokenizer.decode(output, skip_special_tokens=True) print(decoded_output) When I'm running the second script it throws this error: 
Traceback (most recent call last): File "run-mistral-lora-samsun.py", line 43, in <module> ft_model = PeftModel.from_pretrained(base_model_id, new_checkpoint_path) File "/bigdata/usr/src/mistral-train-full-samsum/lib/python3.8/site-packages/peft/peft_model.py", line 325, in from_pretrained config = PEFT_TYPE_TO_CONFIG_MAPPING[ File "/bigdata/usr/src/mistral-train-full-samsum/lib/python3.8/site-packages/peft/config.py", line 152, in from_pretrained return cls.from_peft_type(**kwargs) File "/bigdata/usr/src/mistral-train-full-samsum/lib/python3.8/site-packages/peft/config.py", line 119, in from_peft_type return config_cls(**kwargs) TypeError: __init__() got an unexpected keyword argument 'use_dora' Does anyone know why this happens? I have specify that I have done the same thing using QLoRa (4 bit) and it worked as expected chat GPT suggestion: tried initialising the ft_model this way : ft_model = PeftModel.from_pretrained("lora", new_checkpoint_path, model=model) but I get a different exception message: Traceback (most recent call last): File "run-mistral-lora-samsun.py", line 44, in <module> ft_model = PeftModel.from_pretrained("lora", new_checkpoint_path, model=model) TypeError: from_pretrained() got multiple values for argument 'model'
I had the same issue; simply upgrading the peft library to the latest version solved the problem for me.
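For reference, that upgrade is typically just pip install --upgrade peft in the environment running the inference script. The traceback suggests the adapter checkpoint's config was written by a newer peft release that stores a use_dora field, which the older installed LoraConfig does not accept, so bringing the library up to date (or matching the version used for training) removes the mismatch.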
2
3
78,130,914
2024-3-9
https://stackoverflow.com/questions/78130914/fill-null-in-polars-dataframe-by-sampling-the-rest-of-the-column
How would you fill_null in a DataFrame by sampling the rest of the column for a unique value each time a null is occurred? Example: Suppose I have this DataFrame: >>> daf=pl.DataFrame({"a":[1,2, None, None, 5, 6, None, None, None, 10], "b":[None, "two", "three", None, "five", "six", "seven", None, "nine", "ten"]}) >>> daf shape: (10, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════║ β”‚ 1 ┆ null β”‚ β”‚ 2 ┆ two β”‚ β”‚ null ┆ three β”‚ β”‚ null ┆ null β”‚ β”‚ 5 ┆ five β”‚ β”‚ 6 ┆ six β”‚ β”‚ null ┆ seven β”‚ β”‚ null ┆ null β”‚ β”‚ null ┆ nine β”‚ β”‚ 10 ┆ ten β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I'd like to fill the nulls in all columns by sampling the rest of the column. For example, the nulls in "a" should get a random value from [1,2,5,6,10], and the nulls in "b" should get a random value from ["two", "three", "five", "six", "seven", "nine", "ten"]. But there shouldn't be str in "a" or i64 in "b". And the value used to fill the null shouldn't be the same for each null in a column. I was able to do it by looping over the columns using a standard Python for loop: >>> for c in daf.columns: ... repl=daf[c].drop_nulls().sample(daf[c].len(), with_replacement=True) ... daf=daf.with_columns(pl.col(c).fill_null(repl)) ... >>> daf shape: (10, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═══════║ β”‚ 1 ┆ two β”‚ β”‚ 2 ┆ two β”‚ β”‚ 1 ┆ three β”‚ β”‚ 2 ┆ six β”‚ β”‚ 5 ┆ five β”‚ β”‚ 6 ┆ six β”‚ β”‚ 5 ┆ seven β”‚ β”‚ 10 ┆ two β”‚ β”‚ 5 ┆ nine β”‚ β”‚ 10 ┆ ten β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ but I was trying to figure out how to do it using expressions, and couldn't get it. Is there a way to do this in a more compact way, using the Polars expression syntax?
Solution This seems to work. It's based on your original idea, it just does the same operations to each column purely with expressions. daf.select( pl.col("*").fill_null( pl.col("*").drop_nulls().sample(pl.len(), with_replacement=True) ) ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ═══════║ β”‚ 1 ┆ two β”‚ β”‚ 2 ┆ two β”‚ β”‚ 6 ┆ three β”‚ β”‚ 10 ┆ six β”‚ β”‚ 5 ┆ five β”‚ β”‚ 6 ┆ six β”‚ β”‚ 1 ┆ seven β”‚ β”‚ 2 ┆ two β”‚ β”‚ 2 ┆ nine β”‚ β”‚ 10 ┆ ten β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Explanation pl.col("*") (or pl.all()) just selects each column individually and applies the subsequent operation to it. Later on it retains the same structure, because for example this works: >>> df = pl.DataFrame({"a": [1, 2, 3], "b": [100, 200, 300]}) >>> df.select(pl.col("*").mul(pl.col("*").mean())) shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════║ β”‚ 2.0 ┆ 20000.0 β”‚ β”‚ 4.0 ┆ 40000.0 β”‚ β”‚ 6.0 ┆ 60000.0 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ If I use a LazyFrame and explain, this is what it does under the hood: >>> daf.select(pl.col("*").fill_null(pl.col("*").drop_nulls().sample(pl.len(), with_replacement=True))).explain() SELECT [col("a").fill_null([col("a").drop_nulls().sample([len()])]), col("b").fill_null([col("b").drop_nulls().sample([len()])])] FROM DF ["a", "b"]; PROJECT 2/2 COLUMNS; SELECTION: "None" What was a bit surprising to me that the fill_null part actually works. The documentation for fill_null is not very clear on the fact that you can pass an expression to it that returns something else than a single value and it works this way. Also one would think that using 1 or 100 instead of pl.len() would result to the same, but no. 1 returns the same sample for each null replacement, and value any other than 1 or pl.len() gives a shape error. It seems to do a table-sized or operation and replace every null value with the value from the corresponding spot from the other side (which can still be null), which is kind of interesting. >>> daf.select(pl.col("*").fill_null(pl.col("*").reverse())) shape: (10, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════║ β”‚ 1 ┆ ten β”‚ β”‚ 2 ┆ two β”‚ β”‚ null ┆ three β”‚ β”‚ null ┆ seven β”‚ β”‚ 5 ┆ five β”‚ β”‚ 6 ┆ six β”‚ β”‚ null ┆ seven β”‚ β”‚ null ┆ three β”‚ β”‚ 2 ┆ nine β”‚ β”‚ 10 ┆ ten β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜
2
2
78,118,265
2024-3-7
https://stackoverflow.com/questions/78118265/efficiently-calculate-angle-between-three-points-over-triplets-of-rows-in-a-nump
Suppose we have a numpy array A of size M x N which we interpret as M vectors of dimension N. For three vectors a,b,c we'd like to compute the cosine of the angle they form: cos(angle(a,b,c)) = np.dot((a-b)/norm(a-b), (c-b)/norm(c-b)) We want to compute this quantity over triplets of A, of which there should be (M choose 2)*(M-2) unique triplets (by symmetry of a and c; please correct me if I'm wrong on this). We can of course accomplish this with a triply nested for loop, but I am hoping this can be done in a vectorized manner. I am pretty sure I can use some broadcasting tricks to compute an array that includes the desired outputs and more, but I am hoping someone can furnish a recipe that computes exactly the unique quantities, preferably without extra computation. Thanks. edit. for completeness, naive impl. with loops: angles = [] for i in range(len(A)): for j in range(len(A)): for k in range(i+1, len(A)): if j not in (i,k): d1 = A[i] - A[j] d2 = A[k] - A[j] ang = np.dot(d1/np.linalg.norm(d1), d2/np.linalg.norm(d2)) angles.append(ang)
of which there should be (M choose 2)*(M-2) unique triplets (by symmetry of a and c; please correct me if I'm wrong on this) I think that's right. I counted M * ((M-1) choose 2), and that's equivalent. I am hoping someone can furnish a recipe that computes exactly the unique quantities, preferably without extra computation. Well, let's start with the easy thing - vectorizing your loop, assuming we have arrays of indices i, j, and k pre-generated. def cosang1(A, i, j, k): d1 = A[i] - A[j] d2 = A[k] - A[j] d1_hat = np.linalg.norm(d1, axis=1, keepdims=True) d2_hat = np.linalg.norm(d2, axis=1, keepdims=True) # someone will almost certainly suggest a better way to do this ang = np.einsum("ij, ij -> i", d1/d1_hat, d2/d2_hat) return ang This reduces the problem to computing the arrays of indices, assuming that indexing the arrays is a small enough fraction of the total computation time. I can't see a way to avoid the redundant calculations without doing this sort of thing. Then, if we're willing to allow redundant calculation, the simplest way to generate the indices is with np.meshgrid. def cosang2(A): i = np.arange(len(A)) i, j, k = np.meshgrid(i, i, i) i, j, k = i.ravel(), j.ravel(), k.ravel() return cosang1(A, i, j, k) For A of shape (30, 3) on Colab, the Python loop method took 160ms, and this solution took 7ms. It's very easy to generate the unique sets of indices quickly if we're allowed to use Numba. This is basically just your code broken out into a function: from numba import jit # generate unique tuples of indices of vectors @jit(nopython=True) def get_ijks(M): ijks = [] for i in range(M): for j in range(M): for k in range(i+1, M): if j not in (i, k): ijks.append((i, j, k)) return ijks (Of course, we could also just use Numba on your whole loop.) This took a little less than half the time as the redundant vectorized solution. It might be possible to generate the indices efficiently using pure NumPy. Initially, I thought it was going to be as simple as: i = np.arange(M) j, k = np.triu_indices(M, 1) i, j, k = np.broadcast_arrays(i, j[:, None], k[:, None]) i, j, k = i.ravel(), j.ravel(), k.ravel() That's not quite right, but it is possible to start with that and fix those indices with a loop over range(M) (better than triple nesting, anyway!). Something like: # generates the same set of indices as `get_ijks`, # but currently a bit slower. def get_ijks2(M): i = np.arange(M) j, k = np.triu_indices(M-1, 1) i, j, k = np.broadcast_arrays(i[:, None], j, k) i, j, k = i.ravel(), j.ravel(), k.ravel() for ii in range(M): # this can be improved by using slices # instead of masks where possible mask0 = i == ii mask1 = (j >= ii) & mask0 mask2 = (k >= ii) & mask0 j[mask1] += 1 k[mask1 | mask2] += 1 return j, i, k # intentionally swapped due to the way I think about this ~~I think this can be sped up by using just slices and no masks, but I can't do that tonight.~~ Update: as indicated in the comment, the last loop wasn't necessary! def get_ijks3(M): i = np.arange(M) j, k = np.triu_indices(M-1, 1) i, j, k = np.broadcast_arrays(i[:, None], j, k) i, j, k = i.ravel(), j.ravel(), k.ravel() mask1 = (j >= i) mask2 = (k >= i) j[mask1] += 1 k[mask1 | mask2] += 1 return j, i, k # intentionally swapped And that's quite a bit faster than the Numba loop. I'm actually surprised that worked out! 
All the code together, in case you want to run it: from numba import jit import numpy as np rng = np.random.default_rng(23942342) M = 30 N = 3 A = rng.random((M, N)) # generate unique tuples of indices of vectors @jit(nopython=True) def get_ijks(M): ijks = [] for i in range(M): for j in range(M): for k in range(i+1, M): if j not in (i, k): ijks.append((i, j, k)) return ijks # attempt to generate the same integers efficiently # without Numba def get_ijks2(M): i = np.arange(M) j, k = np.triu_indices(M-1, 1) i, j, k = np.broadcast_arrays(i[:, None], j, k) i, j, k = i.ravel(), j.ravel(), k.ravel() for ii in range(M): # this probably doesn't need masks mask0 = i == ii mask1 = (j >= ii) & mask0 mask2 = (k >= ii) & mask0 j[mask1] += 1 k[mask1 | mask2] += 1 return j, i, k # intentionally swapped due to the way I think about this # proposed method def cosang1(A, i, j, k): d1 = A[i] - A[j] d2 = A[k] - A[j] d1_hat = np.linalg.norm(d1, axis=1, keepdims=True) d2_hat = np.linalg.norm(d2, axis=1, keepdims=True) ang = np.einsum("ij, ij -> i", d1/d1_hat, d2/d2_hat) return ang # another naive implementation def cosang2(A): i = np.arange(len(A)) i, j, k = np.meshgrid(i, i, i) i, j, k = i.ravel(), j.ravel(), k.ravel() return cosang1(A, i, j, k) # naive implementation provided by OP def cosang0(A): angles = [] for i in range(len(A)): for j in range(len(A)): for k in range(i+1, len(A)): if j not in (i,k): d1 = A[i] - A[j] d2 = A[k] - A[j] ang = np.dot(d1/np.linalg.norm(d1), d2/np.linalg.norm(d2)) angles.append(ang) return angles %timeit cosang0(A) %timeit get_ijks(len(A)) ijks = np.asarray(get_ijks(M)).T %timeit cosang1(A, *ijks) %timeit cosang2(A) # 180 ms Β± 34.7 ms per loop (mean Β± std. dev. of 7 runs, 10 loops each) # 840 Β΅s Β± 68.3 Β΅s per loop (mean Β± std. dev. of 7 runs, 1 loop each) # 2.19 ms Β± 126 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) # <ipython-input-1-d2a3835710f2>:26: RuntimeWarning: invalid value encountered in divide # ang = np.einsum("ij, ij -> i", d1/d1_hat, d2/d2_hat) # 8.13 ms Β± 1.78 ms per loop (mean Β± std. dev. of 7 runs, 100 loops each) cosangs0 = cosang0(A) cosangs1 = cosang1(A, *ijks) cosangs2 = cosang2(A) np.testing.assert_allclose(cosangs1, cosangs0) # passes %timeit get_ijks2(M) # 1.73 ms Β± 242 Β΅s per loop (mean Β± std. dev. of 7 runs, 1000 loops each) i, j, k = get_ijks2(M) cosangs3 = cosang1(A, i, j, k) np.testing.assert_allclose(np.sort(cosangs3), np.sort(cosangs0)) # passes %timeit get_ijks3(M) # 184 Β΅s Β± 25.8 Β΅s per loop (mean Β± std. dev. of 7 runs, 10000 loops each) i, j, k = get_ijks3(M) cosangs4 = cosang1(A, i, j, k) np.testing.assert_allclose(np.sort(cosangs4), np.sort(cosangs0)) # passes
2
4
78,132,690
2024-3-9
https://stackoverflow.com/questions/78132690/strange-base64-python-decoding
From a stream of TCP segments, I pulled binary data with the help of wireshark, which I later found out is a bmp file. Then I load one big line of binary data, clear it from spaces, newline character and intermediate "=". I mean their separation at the end of each TCP segment. Then I execute the following code: import base64 tst = sec_orig.replace('\n', '').replace(' ', '').replace('=', '') decoded_data = base64.b64decode(tst + '=', altchars=None, validate=True) Raw data from wireshark: Qk02MAEAAAAAADYEAAAoAAAAQAEAAPAAAAABAAgAAAAAAAAAAABCCwAAQgsAAAABAAAAAQAAAAAA AAAAAAAAAKAAAPAAAPAAAAAA/PwA/PwAAPxw/AD8/PwAoKCgAEBAQABQMAAAWFhYANCg0ACgkHAA oJBwANDQ0ADYyLQA1JgAANyEAAC4uLgAaPR0APCoAAAgICAAAJD8AAAA+ACoqKgAvLy8AMzMzADc 3NwA7OzsAPz8/AAAAAAAAAAQAAAAIAAAADAAAABEAAAAVAAAAGQAAAB0AAAAiAAAAJgAAACoAAAA vAAAAMwAAADcAAAA7AAAAPwAAAAAAAAQAAAAIAAAADAAAABEAAAAVAAAAGQAAAB0AAAAiAAAAJgA AACoAAAAvAAAAMwAAADcAAAA7AAAAPwAAAAAAAAQAAAAIAAAADAAAABEAAAAVAAAAGQAAAB0AAAA iAAAAJgAAACoAAAAvAAAAMwAAADcAAAA7AAAAPwAAAD8AAAA/BAAAPwgAAD8MAAA/EQAAPxUAAD8 ZAAA/HQAAPyIAAD8mAAA/KgAAPy8AAD8zAAA/NwAAPzsAAD8/AAA/PwAAOz8AADc/AAAzPwAALz8 AACo/AAAmPwAAIj8AAB0/AAAZPwAAFT8AABE/AAAMPwAACD8AAAQ/AAAAPwAAAD8AAAA/BAAAPwg AAD8MAAA/EQAAPxUAAD8ZAAA/HQAAPyIAAD8mAAA/KgAAPy8AAD8zAAA/NwAAPzsAAD8/AAA/PwA AOz8AADc/AAAzPwAALz8AACo/AAAmPwAAIj8AAB0/AAAZPwAAFT8AABE/AAAMPwAACD8AAAQ/AAA APwAAAD8ABAA/AAgAPwAMAD8AEQA/ABUAPwAZAD8AHQA/ACIAPwAmAD8AKgA/AC8APwAzAD8ANwA /ADsAPwA/AD8APwA/AD8AOwA/ADcAPwAzAD8ALwA/ACoAPwAmAD8AIgA/AB0APwAZAD8AFQA/ABE APwAMAD8ACAA/AAQAPwAAAAAAAAA ... BQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUF BQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUF BQUFBQUFBQUFBQUFBQUFBQkFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUF BQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUJ And I get a certain binary string from base64: b'BM60\x01\x00\x00\x00\x00\x006\x04\x00\x00(\x00\x00\x00@\x01\x00\x00\xf0\x00\x00\x00\x01\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00B\x0b\x00\x00B\x0b\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa0\x00\x00\xf0\x00\x00\xf0\x00\x00\x00\x00\xfc\xfc\x00\xfc\xfc\x00\x00\xfcp\xfc\x00\xfc\xfc\xfc\x00\xa0\xa0\xa0\x00@@@\x00P0\x00\x00XXX\x00\xd0\xa0\xd0\x00\xa0\x90p\x00\xa0\x90p\x00\xd0\xd0\xd0\x00\xd8\xc8\xb4\x00\xd4\x98\x00\x00\xdc\x84\x00\x00\xb8\xb8\xb8\x00h\xf4t\x00\xf0\xa8\x00\x00 \x00\x00\x90\xfc\x00\x00\x00\xf8\x00\xa8\xa8\xa8\x00\xbc\xbc\xbc\x00\xcc\xcc\xcc\x00\xdc\xdc\xdc\x00\xec\xec\xec\x00 ... \t\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\x05\t' Then using the Pillow library I display this image obtained from the TCP stream segments and get the following image: from PIL import Image import io Image.open(io.BytesIO(decoded_data)) distorted bmp image As far as I understand, somewhere somehow incorrectly taken shift relative to bmp color matrix, but I do not understand where I make a mistake, can you please suggest The image is displayed, but with a staggered pitch that I don't know how to normalize yet.
Using this script, I get an uncorrupted image:
import io
from pathlib import Path

import yaml
from PIL import Image

packets = yaml.safe_load(Path("5bksih1B.txt").read_text())
raw = b"".join([p["data"] for p in packets])
im = Image.open(io.BytesIO(raw))
im.show()
There are two 320x240 images in the full data you pasted.
Note: the "marker" Qk02MA you noticed is actually the start of a bitmap header:
>>> import base64
>>> base64.b64decode("Qk02MA==")
b'BM60'
To continuously read frames from a stream, first parse this header, and then consume the correct amount of bytes from a buffer.
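In case a concrete starting point helps for that last step, here is a minimal sketch (my own addition, relying only on the fixed BITMAPFILEHEADER layout in which bytes 2-5 hold the total file size as a little-endian integer) that splits an accumulated byte buffer into complete BMP frames:
import struct

def extract_bmps(buffer: bytes):
    """Split a byte buffer into complete BMP files by reading each file header."""
    frames = []
    pos = 0
    while pos + 14 <= len(buffer) and buffer[pos:pos + 2] == b"BM":
        # bfSize: total size of this BMP file, stored little-endian at offset 2
        (size,) = struct.unpack_from("<I", buffer, pos + 2)
        if pos + size > len(buffer):
            break  # the last frame is still incomplete, wait for more segments
        frames.append(buffer[pos:pos + size])
        pos += size
    return frames
Any trailing bytes after the last complete frame can simply stay in the buffer until more TCP segments arrive.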
2
5
78,122,836
2024-3-7
https://stackoverflow.com/questions/78122836/difference-between-numpy-power-and-for-certain-values
I have a numpy array where the entries in f**2 differ from f[i]**2, but only for some specific value. import numpy as np np.set_printoptions(precision = 16) f = np.array([ -40709.6555510835, -40708.6555510835, -33467.081758611654, -27653.379955714125]) f2 = f**2 # f2 = np.power(f,2) print("outside loop", np.abs(f[1]**2 - f2[1]), np.abs(1.0 - f2[1] / f[1]**2), f.dtype, f[1].dtype, f2.dtype, f2[1].dtype) for i, val in enumerate(f): print("inside loop", i, np.abs(val**2 - f2[i]), np.abs(1.0 - f2[i] / val**2), val.dtype, f2.dtype, f2[i].dtype) Produces output: outside loop 2.384185791015625e-07 2.220446049250313e-16 float64 float64 float64 float64 inside loop 0 0.0 0.0 float64 float64 float64 inside loop 1 2.384185791015625e-07 2.220446049250313e-16 float64 float64 float64 inside loop 2 0.0 0.0 float64 float64 float64 inside loop 3 0.0 0.0 float64 float64 float64 I do note that this is a relative error on the order of epsilon. This issue goes away when using np.power instead of ** in the definition of f2. Even so, why is f[i]**2 not the same as the ith value of f**2 (even if only for certain values in f). I'm using python 3.10.6 and the latest numpy 1.26.4. Edit: The fundamental issue is captured in: import numpy as np f = np.array([-40708.6555510835]) print((f[0])**2 - (f**2)[0]) which displays a value of -2.384185791015625e-07 I would like to know why that specific number has this specific issue. If you'd like confirmation, or to try different values for f, see this demo.
The results are different because f**2 calls numpy.square, while f[0]**2 and numpy.power(f, 2) call numpy.power. numpy.ndarray.__pow__ is written in C. It looks like this: static PyObject * array_power(PyObject *a1, PyObject *o2, PyObject *modulo) { PyObject *value = NULL; if (modulo != Py_None) { /* modular exponentiation is not implemented (gh-8804) */ Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } BINOP_GIVE_UP_IF_NEEDED(a1, o2, nb_power, array_power); if (fast_scalar_power(a1, o2, 0, &value) != 0) { value = PyArray_GenericBinaryFunction(a1, o2, n_ops.power); } return value; } The value = PyArray_GenericBinaryFunction(a1, o2, n_ops.power); is a Python function call to the numpy.power ufunc object, but first, it tries a fast_scalar_power function. This function tries to optimize exponentiation with common scalar powers, such as 2. For the f**2 operation, fast_scalar_power detects the exponent of 2, and delegates the operation to numpy.square: else if (exponent == 2.0) { fastop = n_ops.square; } For numpy.power(f, 2), this is of course a direct call to numpy.power. numpy.power doesn't go through fast_scalar_power, and doesn't have any special handling for an exponent of 2. (Depending on what underlying power implementation it hits, that implementation might still have special handling for 2, though.) For scalars, I believe numpy.float64.__pow__ actually just calls array_power: static PyObject * gentype_power(PyObject *m1, PyObject *m2, PyObject *modulo) { if (modulo != Py_None) { /* modular exponentiation is not implemented (gh-8804) */ Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } BINOP_GIVE_UP_IF_NEEDED(m1, m2, nb_power, gentype_power); return PyArray_Type.tp_as_number->nb_power(m1, m2, Py_None); } so it hits fast_scalar_power, but one of the first checks in fast_scalar_power is if (PyArray_Check(o1) && An instance of numpy.float64 does not pass PyArray_Check, which checks for objects whose type is exactly numpy.ndarray. Thus, the scalar goes through the general numpy.power code path.
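If you want to check whether your own NumPy build dispatches this way, a quick sanity check (just a sketch; the exact bit patterns can vary between builds) is to compare the outputs directly against numpy.square and numpy.power:
import numpy as np

f = np.array([-40708.6555510835])

print((f ** 2)[0] == np.square(f)[0])    # expected True: f**2 is routed to np.square
print(f[0] ** 2 == np.power(f, 2)[0])    # expected True: scalar ** is routed to np.power
print((f ** 2)[0] == np.power(f, 2)[0])  # can be False: square and power round differently here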
10
6
78,128,890
2024-3-8
https://stackoverflow.com/questions/78128890/pandas-dataframe-title-caption-in-plain-text-to-string-output
As an example: import pandas as pd df = pd.DataFrame({ "Hello World": [1, 2, 3, 4], "And Some More": [10.0, 20.0, 30.0, 40.0], }) df_caption = "Table 1: My Table" df.style.set_caption(df_caption) # only works for HTML; https://stackoverflow.com/q/57958432 with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.width', None, 'max_colwidth', 50, 'display.float_format', "{:.2f}".format): df_str = df.to_string() print(df_str) ... outputs: Hello World And Some More 0 1 10.00 1 2 20.00 2 3 30.00 3 4 40.00 ... and clearly, there is no table title/caption in the plain text output of .to_string(). Sure I can just print(df_caption) myself separately - but is it otherwise somehow possible to add dataframe (table) caption on the Pandas DataFrame object, so that it is output in the string generated by .to_string()?
DataFrame.style has a specific use that does not relate to printing DataFrames in the console. From the code documentation: Contains methods for building a styled HTML representation of the DataFrame. DataFrame.to_string() has many attributes, but none of them relate to displaying a caption or a name. It does take a header, but that relates specifically to the column names. DataFrame.__repr__ uses DataFrame.to_string, so no captions here either. In conclusion: it's not "possible to add dataframe (table) caption on the Pandas DataFrame object, so that it is output in the string generated by .to_string()". You can, of course, create your own function to do that: data = { "Name": ["Alice", "Bob", "Charlie", "David", "Emily"], "Age": [25, 30, 35, 40, 45], "City": ["New York", "Los Angeles", "Chicago", "Houston", "Boston"], } df = pd.DataFrame(data) def print_df(df, name): print(df) print(f"{name = }") print_df(df, name="Example DataFrame") Name Age City 0 Alice 25 New York 1 Bob 30 Los Angeles 2 Charlie 35 Chicago 3 David 40 Houston 4 Emily 45 Boston name = 'Example DataFrame'
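As a side note (not something pandas will print for you automatically), the DataFrame.attrs dict can carry a caption along with the object, so a small helper can pick it up wherever the table is rendered. A sketch:
import pandas as pd

df = pd.DataFrame({"Hello World": [1, 2, 3, 4], "And Some More": [10.0, 20.0, 30.0, 40.0]})
df.attrs["caption"] = "Table 1: My Table"   # attrs is a plain dict attached to the DataFrame

def to_string_with_caption(frame: pd.DataFrame) -> str:
    caption = frame.attrs.get("caption", "")
    body = frame.to_string()
    return f"{caption}\n{body}" if caption else body

print(to_string_with_caption(df))
# Note: attrs is experimental and may not survive every DataFrame operation.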
3
1
78,130,743
2024-3-8
https://stackoverflow.com/questions/78130743/parsing-a-csv-file-to-add-details-to-xml-file
Need help to parse a CSV file which has the following details into an XML file.
The CSV file is in the following format:
name,ip,timeout
domain\user1,10.119.77.218,9000
domain\user2,2.80.189.26,9001
domain\user3,4.155.10.110,9002
domain\user4,9.214.119.86,9003
domain\user5,4.178.187.27,9004
domain\user6,3.76.178.117,9005
The above details from the CSV need to be added to an XML file which has the following format:
<login>
<entry name="domain\user1" ip="10.119.77.218" timeout="9000"/>
<entry name="domain\user2" ip="2.80.189.26" timeout="9001"/>
<entry name="domain\user11" ip="4.155.10.110" timeout="12000"/>
...
</login>
Need some script because there are tons of files which need to be converted.
I tried the following tool: https://www.convertcsv.com/csv-to-xml.htm
But the above tool converts each individual row to a separate entry, which is not what is needed.
Looking for your feedback. Thank you.
If the structure of CSV/XML file is simple you can use csv module and construct the string directly (for more complicated scenarios I recommend lxml/bs4 modules): import csv with open("your_data.csv", "r") as f: reader = csv.reader(f) header = next(reader) data = list(reader) out = ["<login>"] for line in data: s = " ".join(f'{k}="{v}"' for k, v in zip(header, line)) out.append(f"\t<entry {s} />") out.append("</login>") print(*out, sep="\n") Prints: <login> <entry name="domain\user1" ip="10.119.77.218" timeout="9000" /> <entry name="domain\user2" ip="2.80.189.26" timeout="9001" /> <entry name="domain\user3" ip="4.155.10.110" timeout="9002" /> <entry name="domain\user4" ip="9.214.119.86" timeout="9003" /> <entry name="domain\user5" ip="4.178.187.27" timeout="9004" /> <entry name="domain\user6" ip="3.76.178.117" timeout="9005" /> </login>
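If the values may ever contain characters that need XML escaping (&, <, quotes), a variant of the same idea using the standard-library xml.etree.ElementTree handles the escaping for you. A sketch along the same lines:
import csv
import xml.etree.ElementTree as ET

with open("your_data.csv", newline="") as f:
    login = ET.Element("login")
    for row in csv.DictReader(f):
        # each CSV column becomes an attribute; special characters are escaped for us
        ET.SubElement(login, "entry", attrib=row)

ET.indent(login)  # pretty-printing helper, available since Python 3.9
print(ET.tostring(login, encoding="unicode"))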
3
1
78,130,544
2024-3-8
https://stackoverflow.com/questions/78130544/groupby-as-list-into-new-column
How can I aggreate all product names of the grouped dataframe into a new column as list or set: import pandas as pd # 2.0.3 df = pd.DataFrame( { "customer_id": [1, 2, 3, 2, 1], "order_id": [1, 2, 3, 4, 1], "products": ["foo", "bar", "baz", "foo", "bar"], "amount": [1, 1, 1, 1, 1] } ) print(df) grouped = df.groupby(["customer_id", "order_id"]) df["product_order_count"] = grouped["amount"].transform("sum") df["all_products"] = grouped["products"].agg(list).reset_index() print(df) Although I followed another question (Pandas groupby: How to get a union of strings) an exception is thrown: Traceback (most recent call last): File "C:\temp\tt.py", line 15, in <module> df["all_orders"] = grouped["products"].agg(list).reset_index() File "c:\Users\foo\.venvs\kapa_monitor-38\lib\site-packages\pandas\core\frame.py", line 3940, in __setitem__ self._set_item_frame_value(key, value) File "c:\Users\foo\.venvs\kapa_monitor-38\lib\site-packages\pandas\core\frame.py", line 4094, in _set_item_frame_value raise ValueError( ValueError: Cannot set a DataFrame with multiple columns to the single column all_products Expected output (all_products, as list or set): customer_id order_id products amount product_order_count all_products 0 1 1 foo 1 2 'foo', 'bar' 1 2 2 bar 1 1 'bar' 2 3 3 baz 1 1 'baz' 3 2 4 foo 1 1 'foo' 4 1 1 bar 1 2 'foo', 'bar'
You could use transform with a function that returns something that is the same length as the group:
df["all_products"] = grouped["products"].transform(lambda x: [list(x)]*len(x))
Output:
   customer_id  order_id products  amount  product_order_count all_products
0            1         1      foo       1                    2   [foo, bar]
1            2         2      bar       1                    1        [bar]
2            3         3      baz       1                    1        [baz]
3            2         4      foo       1                    1        [foo]
4            1         1      bar       1                    2   [foo, bar]
Or you can join the strings (I don't really recommend lists in the data):
df["all_products"] = grouped["products"].transform(','.join)
which gives
   customer_id  order_id products  amount  product_order_count all_products
0            1         1      foo       1                    2      foo,bar
1            2         2      bar       1                    1          bar
2            3         3      baz       1                    1          baz
3            2         4      foo       1                    1          foo
4            1         1      bar       1                    2      foo,bar
2
2
78,130,203
2024-3-8
https://stackoverflow.com/questions/78130203/pandas-make-list-size-in-a-column-same-as-in-another-column
I have two columns: serial_number and inv_number containing lists. If there is one inv_number for multiple serial_number, I need to make the size of inv_number's list the same as serial_number's. serial_number inv_number 28 [Π‘029768, Π‘029775] [101040031171, 101040031172] 29 [090020960190402011, 090020960190402009] [210134002523, 210134002524] 31 [1094] [410124000215] 32 [01] [101040022094] 33 [F161B5, F17D86, F17D8D, F1825C, F1825A, F1825D] [101040026976] Here at the index 33 we have 6 serial numbers but one inventory number, so it should be changed to [101040026976, 101040026976, 101040026976, 101040026976, 101040026976, 101040026976] I've tried to do it by "multiplying" values to make a list (like [value] * N): si.loc[si['inv_number'].apply(len)==1, 'inv_number'].apply (lambda x: [str(x[0])] * si['serial_number'].apply(len).values) but it gives me an error: UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U12'), dtype('int64')) -> None How can I solve this problem?
Try: mask = (df["serial_number"].str.len() > 1) & (df["inv_number"].str.len() == 1) df.loc[mask, "inv_number"] = df["serial_number"].str.len() * df.loc[mask, "inv_number"] print(df) Prints: serial_number inv_number 28 [Π‘029768, Π‘029775] [101040031171, 101040031172] 29 [090020960190402011, 090020960190402009] [210134002523, 210134002524] 31 [1094] [410124000215] 32 [01] [101040022094] 33 [F161B5, F17D86, F17D8D, F1825C, F1825A, F1825D] [101040026976, 101040026976, 101040026976, 101040026976, 101040026976, 101040026976]
4
4
78,129,071
2024-3-8
https://stackoverflow.com/questions/78129071/break-wrap-long-text-of-column-names-in-pandas-dataframe-plain-text-to-string-ou
Consider this example: import pandas as pd df = pd.DataFrame({ "LIDSA": [0, 1, 2, 3], "CAE": [3, 5, 7, 9], "FILA": [1, 2, 3, 4], # 2 is default, so table idx 1 is default "VUAMA": [0.5, 1.0, 1.5, 2.0], }) df_colnames = { # https://stackoverflow.com/q/48243818 "LIDSA": "Lorem ipsum dolor sit amet", "CAE": "Consectetur adipiscing elit", "FILA": "Fusce imperdiet libero arcu", "VUAMA": "Vitae ultricies augue molestie ac", } # "Pandas autodetects the size of your terminal window if you set pd.options.display.width = 0" https://stackoverflow.com/q/11707586 with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.width', 0, 'max_colwidth', 20, 'display.float_format', "{:.2f}".format): df_str = df.rename(df_colnames,axis=1).to_string() print(df_str) This results with the terminal stdout printout, at the time 111 characters wide: Lorem ipsum dolor sit amet Consectetur adipiscing elit Fusce imperdiet libero arcu Vitae ultricies augue molestie ac 0 0 3 1 0.50 1 1 5 2 1.00 2 2 7 3 1.50 3 3 9 4 2.00 So, only the last column got line-broken (and correspondingly, the values for it). I would have preferred that each long column name gets line-broken / word-wrapped at say 20 characters, and then the values output correspondingly, something like: Lorem ipsum dolor Consectetur Fusce imperdiet Vitae ultricies sit amet adipiscing elit libero arcu augue molestie ac 0 0 3 1 0.50 1 1 5 2 1.00 2 2 7 3 1.50 3 3 9 4 2.00 I thought 'max_colwidth', 20 would do that, but apparently it doesn't. I even tried adding explicit linebreaks in the long column names, but they just get rendered as \n, and the column name is still in one line (as noted also in Linebreaks in pandas column names) So, is it possible to "word-wrap"/"line break" long column names in Pandas for plain text string output?
You could use textwrap.wrap and tabulate: # pip install tabulate from textwrap import wrap from tabulate import tabulate df_colnames_wrap = {k: '\n'.join(wrap(v, 20)) for k,v in df_colnames.items()} print(tabulate(df.rename(columns=df_colnames_wrap), headers='keys', tablefmt='plain')) Output: Lorem ipsum dolor Consectetur Fusce imperdiet Vitae ultricies sit amet adipiscing elit libero arcu augue molestie ac 0 0 3 1 0.5 1 1 5 2 1 2 2 7 3 1.5 3 3 9 4 2 With float formatting: print(tabulate(df.rename(columns=df_colnames_wrap) .convert_dtypes(), headers='keys', tablefmt='plain', floatfmt='.2f' )) Output: Lorem ipsum dolor Consectetur Fusce imperdiet Vitae ultricies sit amet adipiscing elit libero arcu augue molestie ac 0 0 3 1 0.50 1 1 5 2 1.00 2 2 7 3 1.50 3 3 9 4 2.00
3
5
78,127,976
2024-3-8
https://stackoverflow.com/questions/78127976/how-to-add-new-columns-from-a-list-to-an-existing-dataframe-in-polars
I'd like to add new empty columns (elements of mylist) to an existing dataframe. This code does it: import polars as pl df = pl.DataFrame({'a': [1,2,3]}) mylist = [f'col{i}' for i in range(1,4)] data = [[''] for i in range(1,len(mylist)+1)] df.join(pl.DataFrame(data=data,schema=mylist), how='cross') But is there a more polars way to do this? Something using .with_columns() or pl.any()? So that I do not have to create a new dataframe and join it?
You seem to be looking for pl.lit() which allows you to create an "expression" from a literal value. You can then use .alias() to choose the name. df.with_columns(pl.lit('').alias(col) for col in mylist) shape: (3, 4) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ a ┆ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ══════β•ͺ══════║ β”‚ 1 ┆ ┆ ┆ β”‚ β”‚ 2 ┆ ┆ ┆ β”‚ β”‚ 3 ┆ ┆ ┆ β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ map() could also be used instead of a comprehension. df.with_columns(map(pl.lit('').alias, mylist))
4
5
78,128,662
2024-3-8
https://stackoverflow.com/questions/78128662/converting-pytorch-bfloat16-tensors-to-numpy-throws-typeerror
When you try to convert a Torch bfloat16 tensor to a numpy array, it throws a TypeError: import torch x = torch.Tensor([0]).to(torch.bfloat16) x.numpy() # TypeError: Got unsupported ScalarType BFloat16 import numpy as np np.array(x) # same error Is there a work-around to make this conversion?
Currently, numpy does not support bfloat16**. One work-around is to upcast the tensor from bfloat16 to single-precision before making the conversion:
x.float().numpy()
The PyTorch maintainers are also considering adding a force=True option to the Tensor.numpy method to do this automatically.
** although that may change thanks to work by @jakevdp
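A small end-to-end sketch of the work-around, with made-up values:
import torch

x = torch.tensor([0.0, 1.5, -2.25], dtype=torch.bfloat16)
arr = x.float().numpy()                            # upcast to float32, then convert
print(arr.dtype)                                   # float32
back = torch.from_numpy(arr).to(torch.bfloat16)    # round-trip if bfloat16 is needed again
print(back.dtype)                                  # torch.bfloat16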
3
4
78,128,367
2024-3-8
https://stackoverflow.com/questions/78128367/cross-merge-by-group-in-pandas
I am trying to cross merge two dataframes but limiting the merge so only combinations within the same group are provided. The pandas documentation says When performing a cross merge, no column specifications to merge on are allowed. At the moment to achieve this I'm using a for loop and concatenating the resulting dfs but is there a more efficient way? Example input data: import pandas as pd df1 = pd.DataFrame({ 'group': [1, 1, 2, 2], 'field_a': ['apple', 'pear', 'banana', 'papaya'] }) df2 = pd.DataFrame({ 'group': [1, 1, 2, 2], 'field_b': ['apple', 'strawberry', 'coconut', 'papaya'] }) Example required output: pd.DataFrame({'group': [1, 1, 1, 1, 2, 2, 2, 2], 'field_a': ['apple', 'apple', 'pear', 'pear', 'banana', 'banana', 'papaya', 'papaya'], 'field_b': ['apple', 'strawberry', 'apple', 'strawberry', 'coconut', 'papaya', 'coconut', 'papaya']}) Current approach: cols = ['group', 'field_a', 'field_b'] all_possible_matches = pd.DataFrame({ col: [] for col in cols }) for group in [1, 2]: combined = df1[df1['group'] == group].merge(df2[df2['group'] == group][['field_b']], how='cross') all_possible_matches = pd.concat([all_possible_matches, combined])
A cross-merge by group would be equivalent to a merge on the group: out = df1.merge(df2, on='group') # if "group" is the only common column # out = df1.merge(df2) Output: group field_a field_b 0 1 apple apple 1 1 apple strawberry 2 1 pear apple 3 1 pear strawberry 4 2 banana coconut 5 2 banana papaya 6 2 papaya coconut 7 2 papaya papaya Before how='cross' was available in pandas, a way to perform a cross-merge was actually to add a dummy key and merge on that: # before df1['key'] = 1 df2['key'] = 1 df1.merge(df2, on='key').drop(columns=['key']) # now df1.merge(df2, how='cross')
2
2
78,127,307
2024-3-8
https://stackoverflow.com/questions/78127307/how-to-read-parquet-files-from-aws-s3-with-polars
Following the documentation for reading from cloud storage, I have created the below script that fails.
import boto3
import polars as pl
import os

session = boto3.Session(profile_name=os.environ["AWS_PROFILE"])
credentials = session.get_credentials()
current_credentials = credentials.get_frozen_credentials()

# Specify your S3 bucket and file path
s3_bucket = "bucket"
s3_file_path = "path/file.parquet"

# Create the full S3 path
s3_path = f"s3://{s3_bucket}/{s3_file_path}"

storage_options = {
    'aws_access_key_id': current_credentials.access_key,
    'aws_secret_access_key': current_credentials.secret_key,
    'aws_region': 'us-east-1'
}

df = pl.scan_parquet(s3_path, storage_options=storage_options)
This gives the output below, which I understand is a common error for not having permissions to access the file.
ComputeError: Generic S3 error: Client error with status 403 Forbidden: No Body
versions:
Python '3.9.18'
polars '0.20.14'
boto3 '1.34.58'
Running on macOS. I am also successfully able to read the parquet using pandas by just setting the AWS_PROFILE env var.
Am I using the storage_options incorrectly? It doesn't seem able to take an 'aws_profile' key-value pair to extract local config credentials itself?
You might also need to pass the corresponding session token as follows. import polars as pl import boto3 profile_name = "your-profile" s3_path = "s3://my-s3-bucket/my_file.parquet" session = boto3.session.Session(profile_name=profile_name) credentials = session.get_credentials().get_frozen_credentials() df = pl.read_parquet( s3_path, storage_options={ "aws_access_key_id": credentials.access_key, "aws_secret_access_key": credentials.secret_key, "aws_session_token": credentials.token, "aws_region": session.region_name, }, ) From the AWS access keys documentation: Specifies an AWS session token used as part of the credentials to authenticate the user. You receive this value as part of the temporary credentials returned by successful requests to assume a role. A session token is required only if you manually specify temporary security credentials. However, we recommend you always use temporary security credentials instead of long-term credentials. For security recommendations, see Security best practices in IAM.
4
5
78,125,310
2024-3-8
https://stackoverflow.com/questions/78125310/how-to-calculate-mode-when-using-python-polars-in-aggregation
I am involved in a data-mining project and have some problems while doing feature engineering. One of my goals is to aggregate data according to the primary key and to produce new columns. So I wrote this:
df = df.group_by("case_id").agg(date_expr(df, df_base))

def date_expr(df, df_base):
    # Join df and df_base on 'case_id' column
    df = df.join(df_base[['case_id','date_decision']], on="case_id", how="left")
    for col in df.columns:
        if col[-1] in ("D",):
            df = df.with_columns(pl.col(col) - pl.col("date_decision"))
            df = df.with_columns(pl.col(col).dt.total_days())
    cols = [col for col in df.columns if col[-1] in ("D",)]

    # Generate expressions for max, min, mean, mode, and std of date differences
    expr_max = [pl.max(col).alias(f"max_{col}") for col in cols]
    expr_min = [pl.min(col).alias(f"min_{col}") for col in cols]
    expr_mean = [pl.mean(col).alias(f"mean_{col}") for col in cols]
    expr_mode = [pl.mode(col).alias(f"mode_{col}") for col in cols]
    expr_std = [pl.std(col).alias(f"std_{col}") for col in cols]

    return expr_max + expr_min + expr_mean + expr_mode + expr_std
However, I get an error: AttributeError: module 'polars' has no attribute 'mode'. I looked up the polars documentation on GitHub and found there is no DataFrame.mode(), only Series.mode(), which I thought might be the reason for the error. I asked ChatGPT, which could not help, because the failing code came from it in the first place.
Besides, this is only an example dealing with the float type. What about the string type? Can I apply your method there as well? I am looking forward to your kind help!!
In your example it fails because there's no syntactic sugar for Expr.mode() as it is for aggregate functions (for example, pl.max() is a syntactic sugar for Expr.max(). The mode() is actually not aggregation function but computation one, which means it just calculates the most occuring value(s) within the column. So, given the DataFrame like this: df = ( pl.DataFrame({ 'aD' : [200, 200, 300, 400, 1, 3], 'bD': [2, 3, 6, 4, 5, 1], 'case_id': [1,1,1,2,2,2] }) ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ aD ┆ bD ┆ case_id β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════════║ β”‚ 200 ┆ 2 ┆ 1 β”‚ β”‚ 200 ┆ 3 ┆ 1 β”‚ β”‚ 300 ┆ 6 ┆ 1 β”‚ β”‚ 400 ┆ 4 ┆ 2 β”‚ β”‚ 1 ┆ 5 ┆ 2 β”‚ β”‚ 3 ┆ 1 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ you can calculate mode() with the following code: df.with_columns( pl.col('aD').mode(), pl.col('bD').mode() ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ aD ┆ bD ┆ case_id β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════════║ β”‚ 200 ┆ 1 ┆ 1 β”‚ β”‚ 200 ┆ 5 ┆ 1 β”‚ β”‚ 200 ┆ 6 ┆ 1 β”‚ β”‚ 200 ┆ 4 ┆ 2 β”‚ β”‚ 200 ┆ 2 ┆ 2 β”‚ β”‚ 200 ┆ 3 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ given that, we still can calculate the results you need. I'll simplify you function a bit by using selectors and Expr.prefix(): import polars.selectors as cs def date_expr(): # Generate expressions for max, min, mean, mode, and std of date differences expr_max = cs.ends_with('D').max().name.prefix("max_") expr_min = cs.ends_with('D').min().name.prefix("min_") expr_mean = cs.ends_with('D').mean().name.prefix("mean_") expr_mode = cs.ends_with('D').mode().first().name.prefix("mode_") expr_std = cs.ends_with('D').std().name.prefix("std_") return expr_max, expr_min, expr_mean, expr_std, expr_mode df.group_by("case_id").agg(date_expr()) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ case_id ┆ max_aD ┆ max_bD ┆ min_aD ┆ … ┆ std_aD ┆ std_bD ┆ mode_aD ┆ mode_bD β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ i64 ┆ i64 ┆ ┆ f64 ┆ f64 ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ════════β•ͺ════════β•ͺ════════β•ͺ═══β•ͺ════════════β•ͺ══════════β•ͺ═════════β•ͺ═════════║ β”‚ 2 ┆ 400 ┆ 5 ┆ 1 ┆ … ┆ 229.787583 ┆ 2.081666 ┆ 3 ┆ 4 β”‚ β”‚ 1 ┆ 300 ┆ 6 ┆ 200 ┆ … ┆ 57.735027 ┆ 2.081666 ┆ 200 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ note that I've used Expr.first() to the one of the values for mode - as there might be different ones with the same frequency. You can use list expressions to specify which one you'd like to get.
4
3
78,126,235
2024-3-8
https://stackoverflow.com/questions/78126235/is-it-possible-to-exclude-first-n-values-in-each-window-when-using-rolling-to
This is my DataFrame: import pandas as pd df = pd.DataFrame({'a': [150, 106, 119, 131, 121, 140, 160, 119, 170]}) And this is the expected output. I want to create column b: a b 0 150 140 1 106 160 2 119 160 3 131 161 4 121 NaN 5 140 NaN 6 160 NaN 7 119 NaN 8 170 NaN I want to get the maximum value in a rolling window of 6. However I want to ignore the first value of each window. In this picture I have shown the windows that I want. Red cells are the ones that should be excluded from calculations and green ones are the maximum value of the window that are in b. I prefer a generic solution. For example getting max() after ignoring first N values of each window. These are some of my attempts that did not work: # attempt 1 df['b'] = df.a.shift(-1).rolling(6).max() # attempt 2 df['b'] = df.a.rolling(6, closed='left').max() # attempt 3 for i in range(3): x = df.iloc[i+1:i+6]
The naive approach would be to use groupby.transform and drop the first item: N = 6 df['out'] = df.loc[::-1, 'a'].rolling(N).apply(lambda x: x.iloc[:-1].max()) However, since your operation is independent of this value, better compute the rolling.max on N-1 and shift after the fact. Which can simplify the code to: N = 6 df['out'] = df.loc[::-1, 'a'].rolling(N-1).max().shift() Note that since rolling uses the previous values by default, we need to first invert the order of the Series with [::-1]. Output: a out 0 150 140.0 1 106 160.0 2 119 160.0 3 131 170.0 4 121 NaN 5 140 NaN 6 160 NaN 7 119 NaN 8 170 NaN generalization To generalize to skipping skip values: N = 6 skip = 2 df['out'] = df.loc[::-1, 'a'].rolling(N-skip).max().shift(skip) Example (changing the second 119 to 118 for clarity): a skip=0 skip=1 skip=2 skip=3 skip=4 skip=5 skip=6 0 150 150.0 140.0 140.0 140.0 140.0 140.0 NaN 1 106 160.0 160.0 160.0 160.0 160.0 160.0 NaN 2 119 160.0 160.0 160.0 160.0 160.0 118.0 NaN 3 131 170.0 170.0 170.0 170.0 170.0 170.0 NaN 4 121 NaN NaN NaN NaN NaN NaN NaN 5 140 NaN NaN NaN NaN NaN NaN NaN 6 160 NaN NaN NaN NaN NaN NaN NaN 7 118 NaN NaN NaN NaN NaN NaN NaN 8 170 NaN NaN NaN NaN NaN NaN NaN
2
3
78,124,783
2024-3-7
https://stackoverflow.com/questions/78124783/coding-whack-a-mole-and-my-keypress-wont-register-for-the-current-mole
I'm trying to get a whack-a-mole game running for a homework assignment. The program executes fine and it generates a random mole that hops between squares. The mole is supposed to be hit using the numberpad at the reference, so 7 on the top-left and so on. However, whenever it plays, it tells me I miss every time. With experimenting, I found that if I predict where the mole goes, I'll hit it, which means it's not doing the comparison until the next loop. What needs to happen here? import turtle import random import time t = turtle.Turtle() t.hideturtle() mole_x, mole_y = 0, 0 # Set up screen wn = turtle.Screen() wn.title("Whack-A-Mole") wn.bgcolor("green") wn.setup(width=600, height=600) wn.tracer(0) # Draws a square with top-left position (x,y) and side length size def drawsq(x, y, size): t.penup() t.goto(x, y) t.pendown() for i in range(4): t.forward(size) t.right(90) t.penup() # Draw a circle at center (x,y) with radius r def drawcr(x, y, r): t.penup() t.goto(x, y - r) t.pendown() t.circle(r) def molecoords(): coords = [-150, 0, 150] x = random.choice(coords) y = random.choice(coords) return x, y #Draws the mole def draw_mole(x, y): drawcr(x, y, 50) # Body drawcr(x - 40, y - 40, 7) # Left foot drawcr(x + 40, y - 40, 7) # Right foot drawcr(x - 55, y + 15, 7) # Left hand drawcr(x + 55, y + 15, 7) # Right hand t.penup() # Head t.goto(x - 45, y + 20) t.setheading(-50) t.pendown() t.circle(60, 100) t.setheading(0) drawcr(x - 10, y + 35, 2) # Left eye drawcr(x + 10, y + 35, 2) # Right eye drawgrid(x - 7, y + 20, 2, 1, 7) # Teeth t.goto(x, y + 22) # Nose t.fillcolor("black") # Set the fill color t.begin_fill() # Begin filling t.pendown() t.left(60) t.forward(5) t.left(120) t.forward(5) t.left(120) t.forward(5) t.end_fill() t.setheading(0) # Draw a grid with x rows and y columns with squares of side length size starting at (tlx,tly) def drawgrid(tlx, tly, x, y, size): for i in range(x): for j in range(y): drawsq(tlx + (i * size), tly - j * size, size) def check_hit(key): target_positions = { '1': (-150, -150), '2': (0, -150), '3': (150, -150), '4': (-150, 0), '5': (0, 0), '6': (150, 0), '7': (-150, 150), '8': (0, 150), '9': (150, 150) } target_x, target_y = target_positions.get(key) if (mole_x, mole_y) == (target_x, target_y): print("Hit!") else: print("Miss!") def on_key_press(key): check_hit(key) def game_loop(): global mole_x, mole_y start = time.time() duration = 30 while time.time() - start < duration: t.clear() drawgrid(-225, 225, 3, 3, 150) mole_x, mole_y = molecoords() draw_mole(mole_x, mole_y) wn.update() time.sleep(2) # Bind key press events wn.listen() wn.onkeypress(lambda: on_key_press('1'), '1') wn.onkeypress(lambda: on_key_press('2'), '2') wn.onkeypress(lambda: on_key_press('3'), '3') wn.onkeypress(lambda: on_key_press('4'), '4') wn.onkeypress(lambda: on_key_press('5'), '5') wn.onkeypress(lambda: on_key_press('6'), '6') wn.onkeypress(lambda: on_key_press('7'), '7') wn.onkeypress(lambda: on_key_press('8'), '8') wn.onkeypress(lambda: on_key_press('9'), '9') game_loop() def gameover(): t.penup() wn.clear() wn.bgcolor("black") t.goto(0, 100) t.pencolor("White") t.write("Time's Up!", align="center", font=("Arial", 80, "bold")) gameover() turtle.done() It's reading the inputs properly, just not applying them at the right time.
There's a good deal going on here (I generally suggest minimizing your question code to isolate the issue), but the fundamental problem is using while and sleep in a turtle program. Generally, neither belong. The usual tool to implement loops and events is Screen().ontimer(). Here's a partial rewrite that cleans up a few things (uses basic docstrings, removes some repetition) and reorganizes the code a bit to accommodate an ontimer design. There's plenty of room for improvement, left as an exercise. import random from turtle import Screen, Turtle def draw_square(x, y, size): """Draws a square with top-left position (x,y) and side length size""" t.penup() t.goto(x, y) t.pendown() for i in range(4): t.forward(size) t.right(90) t.penup() def draw_circle(x, y, r): """Draw a circle at center (x,y) with radius r""" t.penup() t.goto(x, y - r) t.pendown() t.circle(r) def new_mole_position(): """Chooses random mole coordinates""" coords = [-150, 0, 150] x = random.choice(coords) y = random.choice(coords) return x, y def draw_mole(x, y): """Draws the mole at (x, y)""" draw_circle(x, y, 50) # Body draw_circle(x - 40, y - 40, 7) # Left foot draw_circle(x + 40, y - 40, 7) # Right foot draw_circle(x - 55, y + 15, 7) # Left hand draw_circle(x + 55, y + 15, 7) # Right hand t.penup() # Head t.goto(x - 45, y + 20) t.setheading(-50) t.pendown() t.circle(60, 100) t.setheading(0) draw_circle(x - 10, y + 35, 2) # Left eye draw_circle(x + 10, y + 35, 2) # Right eye draw_grid(x - 7, y + 20, 2, 1, 7) # Teeth t.goto(x, y + 22) # Nose t.fillcolor("black") # Set the fill color t.begin_fill() # Begin filling t.pendown() t.left(60) t.forward(5) t.left(120) t.forward(5) t.left(120) t.forward(5) t.end_fill() t.setheading(0) def draw_grid(tlx, tly, x, y, size): """ Draws a grid with x rows and y columns with squares of side length size starting at (tlx,tly) """ for i in range(x): for j in range(y): draw_square(tlx + (i * size), tly - j * size, size) def check_hit(key): """Determines whether a key corresponds to the mole"s location""" target_positions = { "1": (-150, -150), "2": (0, -150), "3": (150, -150), "4": (-150, 0), "5": (0, 0), "6": (150, 0), "7": (-150, 150), "8": (0, 150), "9": (150, 150) } target_x, target_y = target_positions.get(key) if (mole_x, mole_y) == (target_x, target_y): print("Hit!") else: print("Miss!") def move_mole(): """ Moves the mole and triggers the next mole move if the timer hasn"t expired """ global mole_x, mole_y if game_over: return mole_x, mole_y = new_mole_position() t.clear() draw_grid(-225, 225, 3, 3, 150) draw_mole(mole_x, mole_y) wn.update() wn.ontimer(move_mole, 2000) def end_game(): """Ends the game""" global game_over game_over = True t.penup() wn.clear() wn.bgcolor("black") t.goto(0, 100) t.pencolor("White") t.write("Time's Up!", align="center", font=("Arial", 20, "bold")) def bind_key(k): """Bind key k to check hit on the corresponding square""" wn.onkeypress(lambda: check_hit(k), k) t = Turtle() t.hideturtle() mole_x, mole_y = 0, 0 # Set up screen wn = Screen() wn.tracer(0) wn.title("Whack-A-Mole") wn.bgcolor("green") wn.setup(width=600, height=600) # Bind key press events wn.listen() for i in range(1, 10): bind_key(str(i)) duration = 30 game_over = False wn.ontimer(end_game, duration * 1000) move_mole() wn.exitonclick()
4
1
78,125,526
2024-3-8
https://stackoverflow.com/questions/78125526/computeerror-with-polars-dataframe-while-trying-to-chain-expressions-in-a-single
I am trying to do a series of operations on a single column in a lazy polars DataFrame and I am trying to avoid using with_columns_seq repeatedly in doing so, but there's a ComputeError stating a duplicate column name. Is there a better alternative to this? df = ( df .with_columns_seq([ pl.col('sentiment').cast(pl.UInt8), pl.col('review').map_elements(lambda x: BeautifulSoup(x).get_text()), pl.col('review').str.replace(r"[^a-zA-Z0-9]", " "), pl.col('review').str.to_lowercase(), # pl.col('review').str.split_by(' ') ]) ) df.collect().head() and the error is ComputeError: the name: 'review' passed to `LazyFrame.with_columns` is duplicate It's possible that multiple expressions are returning the same default column name. If this is the case, try renaming the columns with `.alias("new_name")` to avoid duplicate column names. tried to compose all operations but failed due to a duplicate column error
Explanation Your error message tells you already almost everything to grasp the problem: It's possible that multiple expressions are returning the same default column name. If this is the case, try renaming the columns with .alias("new_name") to avoid duplicate column names. However, it does not in this case actually suggest a solution that you probably want. The error comes because each of the lines starting with pl.col("review") creates a new column review, because .with_columns_seq does not mean that you run an each of those expressions to updated data one after another. It just means that parallel processing is not used to do this operation, and you might as well just use with_columns. The usual solution would be to use alias("new_name"), like the error message helpfully suggests. However, in your situation it would just create two columns with slightly differing data .with_columns_seq([ pl.col('review').map_elements(lambda x: x), pl.col('review').str.to_lowercase().alias("lowercase"), ]) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ sentiment ┆ review ┆ lowercase β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════════════════β•ͺ══════════════════════║ β”‚ 1 ┆ I love this movie ┆ i love this movie β”‚ β”‚ 0 ┆ I hate this movie!! ┆ i hate this movie!! β”‚ ... This seems to be not something you want. Instead you need a way to do each of those operations to the new review column sequantially. Solution To actually run these expressions for review sequentially, you pipe them together. The power of expressions is that every expression produces a new expression, and that they can be piped together. You can run an expression by passing them to one of Polars execution contexts. So you would do this instead: df = ( df .with_columns([ pl.col('sentiment').cast(pl.UInt8), pl.col('review').map_elements(lambda x: BeautifulSoup(x).get_text()) .str.replace_all(r"[^a-zA-Z0-9]", " ") .str.to_lowercase() ]) )
3
5
78,122,301
2024-3-7
https://stackoverflow.com/questions/78122301/how-to-handle-multi-line-results-of-user-defined-functions-in-polars
I'd like to parse lines of text to multiple columns and lines in polars, with user defined function. import polars as pl df = pl.DataFrame({'file': ['aaa.txt','bbb.txt'], 'text': ['my little pony, your big pony','apple+banana, cake+coke']}) def myfunc(p_str: str) -> list: res = [] for line in p_str.split(','): x = line.strip().split(' ') res.append({f'word{e+1}': w for e, w in enumerate(x)}) return res If I just run a test it's fine, list of dicts is created: myfunc(df['text'][0]) [{'word1': 'my', 'word2': 'little', 'word3': 'pony'}, {'word1': 'your', 'word2': 'big', 'word3': 'pony'}] Even creating a dataframe of it is easy: pl.DataFrame(myfunc(df['text'][0])) But trying to do map_elements() fails: (df.with_columns(pl.struct(['text']).map_elements(lambda x: myfunc(x['text'])).alias('aaa') ) ) thread '' panicked at crates/polars-core/src/chunked_array/builder/list/anonymous.rs:161:69: called Result::unwrap() on an Err value: InvalidOperation(ErrString("It is not possible to concatenate arrays of different data types.")) --- PyO3 is resuming a panic after fetching a PanicException from Python. --- What I'd like as a result is something like: file word1 word2 word3 aaa.txt my little pony aaa.txt your big pony bbb.txt apple+banana bbb.txt cake+coke Any idea?
The error is likely due to the complex return type returned by the UDF. Polars quickly infers the dtype of the inner dictionaries. However, the dictionaries being evaluated later might have a different number of fields and the error is raised. A simpler reproducible example would be the following. import random import polars as pl def ufunc(x: int): return [ {f"word_{i}": "elephant" for i in range(random.randint(1, 4))} for _ in range(random.randint(1, 4)) ] pl.DataFrame({"id": [1, 2]}).with_columns(pl.col("id").map_elements(ufunc)) Originally, I thought correctly setting the returned_dtype parameter of pl.Expr.map_elements would resolve the issue. However, I didn't find a way to specify a struct dtype with varying number of fields. Still, when wrapping the return value into another list, polars seems to infer the returned dtype correctly. Then, we can obtain the first (and only) list element using .list.first, explode the lines into rows using pl.DataFrame.explode, and finally unnest the inner structs. ( pl.DataFrame({"id": [1, 2]}) .with_columns( pl.col("id").map_elements(lambda x: [ufunc(x)]).list.first() ) .explode("id") .unnest("id") ) shape: (7, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ word_0 ┆ word_1 ┆ word_2 ┆ word_3 β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ══════════β•ͺ══════════║ β”‚ elephant ┆ elephant ┆ null ┆ null β”‚ β”‚ elephant ┆ elephant ┆ elephant ┆ elephant β”‚ β”‚ elephant ┆ null ┆ null ┆ null β”‚ β”‚ elephant ┆ null ┆ null ┆ null β”‚ β”‚ elephant ┆ elephant ┆ null ┆ null β”‚ β”‚ elephant ┆ elephant ┆ elephant ┆ null β”‚ β”‚ elephant ┆ null ┆ null ┆ null β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
2
2
78,124,238
2024-3-7
https://stackoverflow.com/questions/78124238/pydantic-non-default-argument-follows-default-argument
I don't understand why the code: from typing import Optional from pydantic import Field from pydantic.dataclasses import dataclass @dataclass class Klass: field1: str = Field(min_length=1) field2: str = Field(min_length=1) field3: Optional[str] throws the error: TypeError: non-default argument 'field3' follows default argument if by default Field default kwarg is PydanticUndefined. Why are field1 and field2 default arguments? I'm using python 3.8 and pydantic 2.6 I tried field3: Optional[str] = Field(...) and it works. I expected the code block above to work because all fields are required and none has default values.
TL;DR field3 is a required argument with type Optional[str], not an optional argument, because you didn't assign anything to field3 in the class definition. field1 and field2 are technically optional, because the Field object you assign to each provides a default of PydanticUndefined. That value, though, causes a validation error at runtime if you don't supply another argument in its place. The dataclass decorator is constructing a def statement to define your class's __init__ method that looks something like def __init__(self, field1=PydanticUndefined, field2=PydanticUndefined, field3): ... The constructed statement is then execed, which is why you get the error about a non-default argument when the class is defined, rather than when you try to instantiate the class. To make field3 optional, you have to provide a default value. field3: Optional[str] = None This makes the defined statement something like def __init__(self, field1=PydanticUndefined, field2=PydanticUndefined, field3=None): ... You can't (as far as I know) make field1 or field2 truly required; the PydanticUndefined value just causes __init__ to raise a ValidationError rather than a TypeError if no explicit argument is passed. >>> Klass() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/chepner/py311/lib/python3.11/site-packages/pydantic/_internal/_dataclasses.py", line 134, in __init__ s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s) pydantic_core._pydantic_core.ValidationError: 2 validation errors for Klass field1 Field required [type=missing, input_value=ArgsKwargs(()), input_type=ArgsKwargs] For further information visit https://errors.pydantic.dev/2.5/v/missing field2 Field required [type=missing, input_value=ArgsKwargs(()), input_type=ArgsKwargs] For further information visit https://errors.pydantic.dev/2.5/v/missing I haven't dug into the source to see exactly how that happens, but I assume it's something resembling def __init__(self, field1=PydanticUndefined, ...): if field1 is PydanticUndefined: # prepare ValidationError exception if field2 is PydanticUndefined: # prepare ValidationError exception if <ValidationError needs to be raised>: raise ValidationError(...) If desired, you can provide "real" default values for field1 and field2 by adding the default keyword argument to Field.
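For illustration, here is a sketch of what providing "real" defaults via the default keyword could look like; the default values are made up:
from typing import Optional
from pydantic import Field
from pydantic.dataclasses import dataclass

@dataclass
class Klass:
    field1: str = Field(default="placeholder", min_length=1)  # genuine default, no error if omitted
    field2: str = Field(default="placeholder", min_length=1)
    field3: Optional[str] = None

print(Klass())  # Klass(field1='placeholder', field2='placeholder', field3=None)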
5
3
78,124,477
2024-3-7
https://stackoverflow.com/questions/78124477/can-an-awk-file-be-executed-with-subprocess
I have 3 files all within the same directory: ex.awk data.csv file.py I want to execute my awk file using subprocess so I can call it within a PyTest testcase in the future. These are my sample files: ex.awk #!/usr/bin/awk -f BEGIN { FS = ","; } { print NF; } END {print "DONE"} data.csv 10,34,32 4,76,8304 4759,5869,2940 file.py import subprocess cmd = ['awk', '-f', 'ex.awk', 'data.csv'] subprocess.run(cmd) On bash, the following command works: $ awk -f ex.awk data.csv 3 3 3 DONE I'm trying to get the same result when running file.py, but I get the following FileNotFoundError: [WinError 2] The system cannot find the file specified Am I missing some args in my array? Did I need to use some other subprocess args like shell or others?
The FileNotFoundError when executing the awk script from Python on Windows is often due to the way Windows resolves file paths and Unix/Linux-specific utilities: awk may not be directly available in the Windows command prompt environment. So you need to provide the absolute path to the awk executable, or update the PATH environment variable to make it available.
Try these after adjusting the 'C:/Program Files/Git/usr/bin/awk.exe' path as necessary:
import subprocess

cmd = ['C:/Program Files/Git/usr/bin/awk.exe', '-f', 'ex.awk', 'data.csv']
subprocess.run(cmd, shell=True)
Alternatively, to execute it within a Unix-like shell provided by Git Bash or WSL:
import subprocess

cmd = 'awk -f ex.awk data.csv'
subprocess.run(cmd, shell=True, executable='C:/Program Files/Git/bin/bash.exe')
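A more portable variant, offered as my own suggestion rather than part of the answer above, is to let Python locate awk on the PATH with shutil.which, so the script fails with a clear message when awk is genuinely missing:
import shutil
import subprocess

awk = shutil.which("awk")  # full path of awk if it is on PATH, otherwise None
if awk is None:
    raise FileNotFoundError("awk was not found on PATH; install it or pass its full path")

result = subprocess.run([awk, "-f", "ex.awk", "data.csv"], capture_output=True, text=True)
print(result.stdout)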
2
1
78,123,745
2024-3-7
https://stackoverflow.com/questions/78123745/python-itertools-product-challenge-to-expand-a-dict-with-tuples
Given a dictionary like this with some items being tuples...
params = {
    'a': 'static',
    'b': (1, 2),
    'c': ('X', 'Y')
}
I need the "product" of the items as a list of dicts like this, with the tuples expanded so each item in b is matched with each item in c...
[{ 'a': 'static', 'b': 1, 'c': 'X' }, { 'a': 'static', 'b': 1, 'c': 'Y' }, { 'a': 'static', 'b': 2, 'c': 'X' }, { 'a': 'static', 'b': 2, 'c': 'Y' }]
I can easily separate the initial input into a list of non-tuple items and tuple items, and apply the key of each tuple to the values as a "tag" prior to multiplication so they look like this: 'b##1', 'b##2', 'c##X', 'c##Y'. Then parse those back into the above dict after multiplication.
If I always saw 2 tuple items (like b and c), I could easily pass both to itertools.product. But there could be 0..n tuple items, and product() doesn't multiply a list of lists in this way. Can anyone think of a solution?
from itertools import product

TAG = '##'
tuples = []
non_tuples = []

# separate tuples and non-tuples from the input, and prepend the key of each tuple as a tag on the value to parse out later
for key, value in params.items():
    if type(value) is tuple:
        for x in value:
            tuples.append(f'{key}{TAG}{x}')
    else:
        non_tuples.append({key: value})

print(list(product(tuples)))  # BUG: doesn't distribute each value of b with each value of c
product takes multiple iterables, but the key thing to remember is that an iterable can contain a single item. In cases where a value in your original dict isn't a tuple (or maybe a list), you want to convert it to a tuple containing a single value and pass that to product: params_iterables = {} for k, v in params.items(): if isinstance(v, (tuple, list)): params_iterables[k] = v # v is already a tuple or a list else: params_iterables[k] = (v, ) # A tuple containing a single value, v which gives: params_iterables = {'a': ('static',), 'b': (1, 2), 'c': ('X', 'Y')} Then, simply get the product of the values in params_iterables: result = [] for values in product(*params_iterables.values()): result.append(dict(zip(params, values))) The dict(zip(params, values)) line creates a dict where the first element of values is assigned the first key in params, and so on. This dict is then appended to result, which gives the desired output: [{'a': 'static', 'b': 1, 'c': 'X'}, {'a': 'static', 'b': 1, 'c': 'Y'}, {'a': 'static', 'b': 2, 'c': 'X'}, {'a': 'static', 'b': 2, 'c': 'Y'}]
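The same idea also fits in a couple of lines with a dict comprehension, if you prefer a more compact sketch (this assumes the same params dict as above):
from itertools import product

params_iterables = {k: v if isinstance(v, (tuple, list)) else (v,) for k, v in params.items()}
result = [dict(zip(params_iterables, combo)) for combo in product(*params_iterables.values())]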
2
3
78,120,071
2024-3-7
https://stackoverflow.com/questions/78120071/how-to-calculate-day-difference-between-rows-with-specific-conditions-in-pandas
I am a beginner of Pandas. I have a dataframe like below and want to calculate the "CV" day difference since "Visit" per "ID", as the "CV_days_after_visit" in the expected result. May I know how I can achieve this with Pandas in Python import pandas as pd data1 = {'ID': ['A2A', 'A2A', 'A2A', 'BB3', 'BB3', 'BB3', '5EE', '5EE'], 'Action': ['Visit', 'CV', 'CV', 'Visit', 'Visit', 'CV', 'Visit', 'CV'], 'date': ['2023/4/1', '2023/4/5', '2023/4/7', '2023/5/5', '2023/5/29', '2023/5/30', '2023/6/1', '2023/6/10']} df = pd.DataFrame(data1) print (df) ID Acton date 0 A2A Visit 2023/4/1 1 A2A CV 2023/4/5 2 A2A CV 2023/4/7 3 BB3 Visit 2023/5/5 4 BB3 Visit 2023/5/29 5 BB3 CV 2023/5/30 6 5EE Visit 2023/6/1 7 5EE CV 2023/6/10 Thanks in advance.
You can subtract forward filling by GroupBy.ffill dates if Visit by Series.sub and convert timedeltas to days by Series.dt.days, last if necessary remove 0 days by Series.mask: #convert column to datetimes df['date'] = pd.to_datetime(df['date']) m = df['Acton'].eq('Visit') df['CV_days_after_visit'] = (df['date'].sub(df['date'].where(m).groupby(df['ID']).ffill()) .dt.days .mask(m)) print (df) ID Acton date CV_days_after_visit 0 A2A Visit 2023-04-01 NaN 1 A2A CV 2023-04-05 4.0 2 A2A CV 2023-04-07 6.0 3 BB3 Visit 2023-05-05 NaN 4 BB3 Visit 2023-05-29 NaN 5 BB3 CV 2023-05-30 1.0 6 5EE Visit 2023-06-01 NaN 7 5EE CV 2023-06-10 9.0 How it working: df['date'] = pd.to_datetime(df['date']) m = df['Acton'].eq('Visit') print (df.assign(m1 = df['date'].where(m), ffill = df['date'].where(m).groupby(df['ID']).ffill(), sub = df['date'].sub(df['date'].where(m).groupby(df['ID']).ffill()), days = df['date'].sub(df['date'].where(m).groupby(df['ID']).ffill()) .dt.days)) ID Acton date m1 ffill sub days 0 A2A Visit 2023-04-01 2023-04-01 2023-04-01 0 days 0 1 A2A CV 2023-04-05 NaT 2023-04-01 4 days 4 2 A2A CV 2023-04-07 NaT 2023-04-01 6 days 6 3 BB3 Visit 2023-05-05 2023-05-05 2023-05-05 0 days 0 4 BB3 Visit 2023-05-29 2023-05-29 2023-05-29 0 days 0 5 BB3 CV 2023-05-30 NaT 2023-05-29 1 days 1 6 5EE Visit 2023-06-01 2023-06-01 2023-06-01 0 days 0 7 5EE CV 2023-06-10 NaT 2023-06-01 9 days 9
2
5
78,119,374
2024-3-7
https://stackoverflow.com/questions/78119374/how-do-i-dynamically-import-a-function-by-its-pythonic-path
I have a function in a submodule that normally can be imported like this: from core.somepack import my_func Instead I would like to import it lazily by a given pythonic string core.somepack.my_func. What is the best way to do it? my_func = some_function_i_am_asking_for('core.somepack.my_func') my_func()
Try using Python's built-in importlib module. It lets you import a module or a function at runtime, e.g. when the module name is not known until runtime. Here is sample code for a lazy import using importlib:
import importlib

def lazy_import(fn_path):
    module_path, fn_name = fn_path.rsplit('.', 1)
    module = importlib.import_module(module_path)
    fn = getattr(module, fn_name)
    return fn
You can call the lazy_import function with your 'module_name.func_name' string.
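Applied to the path from the question (assuming core.somepack is importable in the current environment), usage would look like this; note that the module is only imported at the moment lazy_import is called, not when your own module is loaded:
my_func = lazy_import('core.somepack.my_func')
my_func()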
2
2
78,118,100
2024-3-7
https://stackoverflow.com/questions/78118100/pandas-filter-dataframe-by-difference-between-adjacent-rows
I have the following data in a dataframe. Timestamp MeasureA MeasureB MeasureC MeasureD 0.00 26.46 63.60 3.90 0.67 0.94 26.52 78.87 1.58 0.42 1.94 30.01 82.04 1.13 0.46 3.00 30.19 82.00 1.17 0.36 4.00 30.07 81.43 1.13 0.42 5.94 30.02 82.46 1.05 0.34 8.00 30.22 82.48 0.98 0.35 9.00 30.00 82.21 1.13 0.33 10.00 30.00 82.34 1.12 0.34 And I'd like to filter the entries using some non-uniform intervals. Let say that my intervals are [1.0, 1.5] What I'm trying to achieve is that, we take the first row (row0), and to get the next valid row, we look at what is the next row whose Timestamp value is greater or equal to row0 + 1.0. In this scenario, the next valid row will be the one with the 1.94 timestamp. Then, for the next valid row, we will use the next item in the intervals array. Which is 1.5. That will make the next row the one with a timestamp value of 4.00. Since 1.94 + 1.5 is equal to 3.44. For the next row, we go back and start from the beginning of the array of intervals. After going through all the data, the resulting dataframe should be: Timestamp MeasureA MeasureB MeasureC MeasureD 0.00 26.46 63.60 3.90 0.67 1.94 30.01 82.04 1.13 0.46 4.00 30.07 81.43 1.13 0.42 5.94 30.02 82.33 1.11 0.35 8.00 30.22 82.48 0.98 0.35 9.00 30.00 82.21 1.13 0.33 Is there a way to achieve this with the existing filtering methods in pandas?
Try: from itertools import cycle # the interval: A, B = 1.0, 1.5 comparing, out, last_t = cycle([B, A]), [], float("-inf") j = next(comparing) for i, t in zip(df.index, df.Timestamp): if t >= last_t + j: out.append(i) last_t = t j = next(comparing) print(df.loc[out]) Prints: Timestamp MeasureA MeasureB MeasureC MeasureD 0 0.00 26.46 63.60 3.90 0.67 2 1.94 30.01 82.04 1.13 0.46 4 4.00 30.07 81.43 1.13 0.42 5 5.94 30.02 82.46 1.05 0.34 6 8.00 30.22 82.48 0.98 0.35 7 9.00 30.00 82.21 1.13 0.33
3
3
78,117,898
2024-3-6
https://stackoverflow.com/questions/78117898/web-scraping-wikipedia-table-using-beautiful-soup-getting-none-returned
New to web scraping and coding in general. This is probably an easy problem for someone more experienced... maybe not... here it is: Trying to web scrape a table from wikipedia. I've located the table in the html and added that info in my code. However when I run it I get 'none' returned instead of confirmation the table has been correctly located. from bs4 import BeautifulSoup from urllib.request import urlopen url = 'https://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles' html = urlopen(url) soup = BeautifulSoup(html, 'html.parser') table = soup.find('table',{'class':'wikitable sortable plainrowheaders jquery-tablesorter'}) print(table) Return: None
Remove the jquery-tablesorter from the "class" string - this class is added by javascript and beautifulsoup doesn't see it (note: always observe the real HTML document the servers sends you, that's what is beautifulsoup seeing - press ctrl-U in your browser): from urllib.request import urlopen from bs4 import BeautifulSoup url = "https://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles" html = urlopen(url) soup = BeautifulSoup(html, "html.parser") table = soup.find("table", {"class": "wikitable sortable plainrowheaders"}) print(table) Prints: <table class="wikitable sortable plainrowheaders" style="text-align:center"> <caption>Name of song, core catalogue release, songwriter, lead vocalist and year of original release </caption> <tbody><tr> <th scope="col">Song </th> <th scope="col">Core catalogue release(s) </th> <th scope="col">Songwriter(s) </th> ...
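As a side note, for plain Wikipedia tables pandas.read_html can often do the fetching and parsing in one step; a rough sketch (it needs lxml or html5lib installed, and the match string here is just an assumption about text appearing in the target table):
import pandas as pd

url = "https://en.wikipedia.org/wiki/List_of_songs_recorded_by_the_Beatles"
tables = pd.read_html(url, match="Song")  # list of DataFrames whose text matches "Song"
print(tables[0].head())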
2
2
78,113,717
2024-3-6
https://stackoverflow.com/questions/78113717/why-python-is-running-as-32-bit-on-64-bit-windows-10-with-64-bit-python-installe
I have a Python script that runs on Windows 10 Pro x64. When I open Task Manager, it shows that Python is running as a 32-bit application. This is weird because:
Python 3.12.2 is installed as the 64-bit variant (double-checked)
Installer got from here
This is the only Python instance installed on this PC
Script was launched via double-click on it (if this is important info)
Please explain to me:
Why is it still shown as 32-bit in Task Manager? Or is it an error of Task Manager?
Is it possible to force running all Python scripts as 64-bit? If yes: how?
UPDATE / ANSWER
According to this documentation it is the 32-bit launcher, but the Python will still be the 64-bit version. To bypass this, right-click on the Python script, select "Open with" and choose the 64-bit Python executable there. Typically, something like this: C:\Program Files\Python312\python.exe. Then, when double-clicking the Python script, it will appear in Task Manager like this:
do this only if this is the only instance of Python installed on the PC
Your Python launcher is 32-bit, but it runs the 64-bit Python interpreter.
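A quick way to confirm from inside the script that the interpreter itself is 64-bit (regardless of what the launcher process looks like in Task Manager):
import struct
import platform

print(struct.calcsize("P") * 8)    # 64 on a 64-bit interpreter, 32 on a 32-bit one
print(platform.architecture()[0])  # e.g. '64bit'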
3
2
78,116,908
2024-3-6
https://stackoverflow.com/questions/78116908/pandas-slice-3-level-multiindex-based-on-a-list-with-2-levels
Here is a minimal example: import pandas as pd import numpy as np np.random.seed(0) idx = pd.MultiIndex.from_product([[1,2,3], ['a', 'b', 'c'], [6, 7]]) df = pd.DataFrame(np.random.randn(18), index=idx) selection = [(1, 'a'), (2, 'b')] I would like to select all the rows in df that have as index that starts with any of the items in selection. So I would like to get the sub dataframe of df with the indices: (1, 'a', 6), (1, 'a', 7), (2, 'b', 6), (2, 'b', 7) What is the most straightforward/pythonian/pandasian way of doing this? What I found: sel = [id[:2] in selection for id in df.index] df.loc[sel]
You could use boolean indexing with isin: out = df[df.index.isin(selection)] Output: 0 1 a 6 1.560268 7 0.674709 2 b 6 0.848069 7 0.130719 If you want to select other levels, drop the unused leading levels: # here we want to select on levels 1 and 2 selection = [('a', 6), ('b', 7)] df[df.index.droplevel(0).isin(selection)] Output: 0 1 a 6 1.560268 b 7 0.137769 2 a 6 0.754946 b 7 0.130719 3 a 6 -2.275646 b 7 -2.199944
3
2
78,116,325
2024-3-6
https://stackoverflow.com/questions/78116325/pyspark-where-clause-can-work-on-a-column-that-doesnt-exist
I noticed by accident a weird behavior of pyspark. Basically, it can execute where function on a column that doesn't exist in a dataframe: print(spark.version) df = spark.read.format("csv").option("header", True).load("abfss://some_abfs_path/df.csv") print(type(df), df.columns.__len__(), df.count()) c = df.columns[0] # A column name before renaming df = df.select(*[col(x).alias(f"{x}_new") for x in df.columns]) # Add suffix to column names print(c in df.columns) try: df.select(c) except: print("SO THIS DOESN'T WORK, WHICH MAKES SENSE.") # BUT WHY DOES THIS WORK: print(df.where(col(c).isNotNull()).count()) # IT'S USING c AS f"{c}_new" print(df.where(col(f"{c}_new").isNotNull()).count()) Outputs: 3.1.2 <class 'pyspark.sql.dataframe.DataFrame'> 102 1226791 False SO THIS DOESN'T WORK, WHICH MAKES SENSE. 1226791 1226791 As you can see, the weird part is that when column c doesn't exist in df after column renaming, it can still be used for where function. My intuition is pyspark compiles where before select renaming under the hood. But it will be a horrible design in that case and doesn't explain why both old and new column names could work. Would appreciate any insights, thanks. I'm running things on Azure Databricks.
When in doubt, use df.explain() to figure out what's going on under the hood. This will confirm your intution: Spark context available as 'sc' (master = local[*], app id = local-1709748307134). SparkSession available as 'spark'. >>> df = spark.read.option("header", True).option("inferSchema", True).csv("taxi.csv") >>> c = df.columns[0] >>> from pyspark.sql.functions import * >>> df = df.select(*[col(x).alias(f"{x}_new") for x in df.columns]) >>> df.explain() == Physical Plan == *(1) Project [VendorID#17 AS VendorID_new#51, tpep_pickup_datetime#18 AS tpep_pickup_datetime_new#52, tpep_dropoff_datetime#19 AS tpep_dropoff_datetime_new#53, passenger_count#20 AS passenger_count_new#54, trip_distance#21 AS trip_distance_new#55, RatecodeID#22 AS RatecodeID_new#56, store_and_fwd_flag#23 AS store_and_fwd_flag_new#57, PULocationID#24 AS PULocationID_new#58, DOLocationID#25 AS DOLocationID_new#59, payment_type#26 AS payment_type_new#60, fare_amount#27 AS fare_amount_new#61, extra#28 AS extra_new#62, mta_tax#29 AS mta_tax_new#63, tip_amount#30 AS tip_amount_new#64, tolls_amount#31 AS tolls_amount_new#65, improvement_surcharge#32 AS improvement_surcharge_new#66, total_amount#33 AS total_amount_new#67] +- FileScan csv [VendorID#17,tpep_pickup_datetime#18,tpep_dropoff_datetime#19,passenger_count#20,trip_distance#21,RatecodeID#22,store_and_fwd_flag#23,PULocationID#24,DOLocationID#25,payment_type#26,fare_amount#27,extra#28,mta_tax#29,tip_amount#30,tolls_amount#31,improvement_surcharge#32,total_amount#33] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/Users/charlie/taxi.csv], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<VendorID:int,tpep_pickup_datetime:string,tpep_dropoff_datetime:string,passenger_count:int,... >>> df = df.where(col(c).isNotNull()) >>> df.explain() == Physical Plan == *(1) Project [VendorID#17 AS VendorID_new#51, tpep_pickup_datetime#18 AS tpep_pickup_datetime_new#52, tpep_dropoff_datetime#19 AS tpep_dropoff_datetime_new#53, passenger_count#20 AS passenger_count_new#54, trip_distance#21 AS trip_distance_new#55, RatecodeID#22 AS RatecodeID_new#56, store_and_fwd_flag#23 AS store_and_fwd_flag_new#57, PULocationID#24 AS PULocationID_new#58, DOLocationID#25 AS DOLocationID_new#59, payment_type#26 AS payment_type_new#60, fare_amount#27 AS fare_amount_new#61, extra#28 AS extra_new#62, mta_tax#29 AS mta_tax_new#63, tip_amount#30 AS tip_amount_new#64, tolls_amount#31 AS tolls_amount_new#65, improvement_surcharge#32 AS improvement_surcharge_new#66, total_amount#33 AS total_amount_new#67] +- *(1) Filter isnotnull(VendorID#17) +- FileScan csv [VendorID#17,tpep_pickup_datetime#18,tpep_dropoff_datetime#19,passenger_count#20,trip_distance#21,RatecodeID#22,store_and_fwd_flag#23,PULocationID#24,DOLocationID#25,payment_type#26,fare_amount#27,extra#28,mta_tax#29,tip_amount#30,tolls_amount#31,improvement_surcharge#32,total_amount#33] Batched: false, DataFilters: [isnotnull(VendorID#17)], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/Users/charlie/taxi.csv], PartitionFilters: [], PushedFilters: [IsNotNull(VendorID)], ReadSchema: struct<VendorID:int,tpep_pickup_datetime:string,tpep_dropoff_datetime:string,passenger_count:int,... From bottom to top: FileScan to read the data, Filter to discard unneeded data, Project to apply the alias. It's a sensible way for Spark to construct its DAG - discard data as eagerly as possible so you don't waste time operating on it - but as you've noticed, it can lead to unexpected behavior. 
If you'd like to avoid this, use df.checkpoint() to materialize the DataFrame prior to your df.where() statement - this will give you the expected error when you attempt to reference the old column name: >>> from pyspark.sql.functions import * >>> spark.sparkContext.setCheckpointDir("file:/tmp/") >>> df = spark.read.option("header", True).option("inferSchema", True).csv("taxi.csv") >>> c = df.columns[0] >>> df = df.select(*[col(x).alias(f"{x}_new") for x in df.columns]) >>> df = df.checkpoint() >>> df.explain() == Physical Plan == *(1) Scan ExistingRDD[VendorID_new#51,tpep_pickup_datetime_new#52,tpep_dropoff_datetime_new#53,passenger_count_new#54,trip_distance_new#55,RatecodeID_new#56,store_and_fwd_flag_new#57,PULocationID_new#58,DOLocationID_new#59,payment_type_new#60,fare_amount_new#61,extra_new#62,mta_tax_new#63,tip_amount_new#64,tolls_amount_new#65,improvement_surcharge_new#66,total_amount_new#67] >>> df = df.where(col(c).isNotNull()) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/homebrew/opt/apache-spark/libexec/python/pyspark/sql/dataframe.py", line 3325, in filter jdf = self._jdf.filter(condition._jc) File "/opt/homebrew/opt/apache-spark/libexec/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__ File "/opt/homebrew/opt/apache-spark/libexec/python/pyspark/errors/exceptions/captured.py", line 185, in deco raise converted from None pyspark.errors.exceptions.captured.AnalysisException: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `VendorID` cannot be resolved. Did you mean one of the following? [`VendorID_new`, `extra_new`, `RatecodeID_new`, `mta_tax_new`, `DOLocationID_new`].; 'Filter isnotnull('VendorID) +- LogicalRDD [VendorID_new#51, tpep_pickup_datetime_new#52, tpep_dropoff_datetime_new#53, passenger_count_new#54, trip_distance_new#55, RatecodeID_new#56, store_and_fwd_flag_new#57, PULocationID_new#58, DOLocationID_new#59, payment_type_new#60, fare_amount_new#61, extra_new#62, mta_tax_new#63, tip_amount_new#64, tolls_amount_new#65, improvement_surcharge_new#66, total_amount_new#67], false >>>
2
3
78,112,700
2024-3-6
https://stackoverflow.com/questions/78112700/why-am-i-getting-mean-of-empty-slice-warnings-without-nans
I've got a dataframe keptdata. I have a for loop to search through the rows of keptdata, and at one point need to average the previous values. This is the relevant line in my code that produces the warning:
avg = np.average(keptdata.iloc[i-500:i].price[keptdata.price != 0])
Here i is the variable that's being looped over in the for loop, price is a column in the dataframe, and I'm including a row in the average only if price is not zero. Why am I getting a warning? Googling the warning, it seems to happen if there are NaNs in the dataframe, but I checked for that using keptdata.isnull().any().any(), which returns False. Another possibility is that if i-500 is negative, that might be causing problems, but I rewrote the for loop to start with i = 700 and I still get the warning. Removing keptdata.price != 0 seems to resolve the issue, but that changes the line fundamentally. Can the cause of the warning be fixed? Do I have any options other than suppressing the warning?
I figured this one out. The problem was that the keptdata dataframe had a series of at least five hundred 0's for price. This means that after applying the condition keptdata.price != 0, there was no longer anything to take an average over, and numpy sensibly returns a "Mean of empty slice" warning.
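One way to make the averaging line robust against such all-zero windows is to guard against an empty selection, for example with a window-local mask (keptdata, price and i are the names from the question):
import numpy as np

window = keptdata.iloc[i - 500:i]
prices = window.price[window.price != 0]
avg = np.average(prices) if len(prices) else np.nan  # avoid the "Mean of empty slice" warning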
2
2
78,112,019
2024-3-6
https://stackoverflow.com/questions/78112019/pass-parameters-to-mainargv-from-within-another-python-script
I have a Python script that passes arguments like "-i on" to argv in its main definition; this is part of the code:
def main(argv):
    status = "none"
    try:
        opts, args = getopt.getopt(argv,"hi:")
    except getopt.GetoptError:
        print sys.argv[0], ' -i on|off'
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print sys.argv[0]," -i on|off"
            print "Switches invertor on and off"
            sys.exit()
        elif opt in ("-i"):
            status = arg

    up = UPower()
    if (up.connect() < 0):
        print "Could not connect to the device"
        exit -2

    newstatR = 0
    if (status == "on"):
        newstatR = 1
I have another script from which I would like to call this script, like this:
import sys
from flask import Flask, request
sys.path.insert(0, '/home/domoticame/epever-inverter-read')
import ivctl
...
ivctl.main('-i on')
This however doesn't reliably work; the execution is different than on the command line. It seems to execute the off command instead of the on command.
Have you tried to pass them as a list? import sys from flask import Flask, request sys.path.insert(0, '/home/domoticame/epever-inverter-read') import ivctl ... ivctl.main('-i on'.split())
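If the argument string might ever contain quoted values or extra spaces, shlex.split is a slightly safer way to turn it into the list that getopt expects:
import shlex

ivctl.main(shlex.split('-i on'))  # shlex.split('-i on') -> ['-i', 'on']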
2
3
78,076,401
2024-2-28
https://stackoverflow.com/questions/78076401/uninstall-uv-python-package-installer
I recently installed uv on linux using the command line provided in the documentation: curl -LsSf https://astral.sh/uv/install.sh | sh It created an executable in /home/currentuser/.cargo/bin/uv. Now, just out of curiosity, I would like to know how to remove it properly. Is it as simple as deleting a file ? Or is there any script or command line which is cleaner? Please note that I also tried with pip install/uninstall uv, and it worked perfectly, but the installation path was different.
The official documentation now has complete uninstall instructions: # Remove all data that uv has stored uv cache clean rm -r "$(uv python dir)" rm -r "$(uv tool dir)" # Remove binaries rm ~/.local/bin/uv ~/.local/bin/uvx If your original installation was a version earlier than 0.5.2 (November 2024), then replace the last line with: rm ~/.cargo/bin/uv ~/.cargo/bin/uvx These instructions cover macOS and Linux. See the documentation for Windows instructions.
5
7
78,090,426
2024-3-1
https://stackoverflow.com/questions/78090426/cannot-import-name-binom-test-from-scipy-stats-error-when-using-borutashap
I am trying to use borutashap for feature selection in a machine learning project. I can't run:
from BorutaShap import BorutaShap
I always come across this error:
ImportError: cannot import name 'binom_test' from 'scipy.stats'
I upgraded both borutashap and scipy to make sure they are the latest versions. I am using a conda env, and I removed the whole env, recreated a new env and reinstalled all libraries, but I still get the same error. Could anyone help me with this issue? Thank you in advance.
The function scipy.stats.binom_test() was removed from SciPy 1.12.0. Source. There is an alternative, scipy.stats.binomtest(), which is called slightly differently. Here are the docs for old and new. I believe you could fix this either by downgrading SciPy to 1.11.4 or by applying a fix to BorutaShap to call the new binomial test function. I believe you'd need to install the package in editable form, then edit Boruta-Shap/src/BorutaShap.py to do this, on two lines: Line 9: Change from scipy.stats import binom_test, ks_2samp to from scipy.stats import binomtest, ks_2samp Line 885: Change return [binom_test(x, n=n, p=p, alternative=alternative) for x in array] to return [binomtest(x, n=n, p=p, alternative=alternative).pvalue for x in array] See also this issue on the BorutaShap issue tracker.
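If you would rather not pin SciPy or edit the installed package, another option sometimes suggested is a small compatibility shim run before importing BorutaShap; this is only a hedged sketch (it monkey-patches scipy.stats and only handles a scalar x), so treat it as a stopgap rather than a proper fix:
import scipy.stats as stats

if not hasattr(stats, "binom_test"):
    def binom_test(x, n=None, p=0.5, alternative="two-sided"):
        # delegate to the replacement API and return just the p-value
        return stats.binomtest(int(x), n=n, p=p, alternative=alternative).pvalue
    stats.binom_test = binom_test

from BorutaShap import BorutaShap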
3
5
78,102,296
2024-3-4
https://stackoverflow.com/questions/78102296/how-to-group-dataframe-rows-into-list-in-polars-group-by
import polars as pl

df = pl.DataFrame({
    "Letter": ["A", "A", "B", "B", "B", "C", "C", "D", "D", "E"],
    "Value": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
})
I want to group Letter and collect the corresponding Value entries in a list. Related Pandas question: How to group dataframe rows into list in pandas groupby I know the pandas code will not work here:
df.group_by("a")["b"].apply(list)
TypeError: 'GroupBy' object is not subscriptable
The expected output is:
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Letter ┆ Value     β”‚
β”‚ ---    ┆ ---       β”‚
β”‚ str    ┆ list[i64] β”‚
β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════║
β”‚ A      ┆ [1, 2]    β”‚
β”‚ B      ┆ [3, 4, 5] β”‚
β”‚ C      ┆ [6, 7]    β”‚
β”‚ D      ┆ [8, 9]    β”‚
β”‚ E      ┆ [10]      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
You could do this. import polars as pl df = pl.DataFrame( { 'Letter': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D','D','E'], 'Value': [1, 2, 3, 4, 5, 6, 7, 8, 9,10] } ) g = df.group_by('Letter', maintain_order=True).agg(pl.col('Value')) print(g) This will print β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Letter ┆ Value β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ list[i64] β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ═══════════║ β”‚ A ┆ [1, 2] β”‚ β”‚ B ┆ [3, 4, 5] β”‚ β”‚ C ┆ [6, 7] β”‚ β”‚ D ┆ [8, 9] β”‚ β”‚ E ┆ [10] β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ maintain_order=True is required if you want to order of the groups to be consistent with the input data.
9
9
78,080,159
2024-2-29
https://stackoverflow.com/questions/78080159/how-to-type-annotate-a-polars-dataframe-created-from-an-arrow-table
In python, I'm trying to create a polars DataFrame from a pyarrow table, like so: import pyarrow as pa import polars as pl table = pa.table( { "a": [1, 2, 3], "b": [4, 5, 6], } ) df = pl.from_arrow(table) The return-type of pl.from_arrow() is (DataFrame | Series), even though my df really always is a DataFrame. This results in type-checker-warnings further down the line, where I want to perform actions on the data. For instance: df.select(pl.col("a")) Cannot access member "select" for type "Series" Member "select" is unknown Pylance(reportAttributeAccessIssue) I could try to find a work-around, but haven't found a "proper" way to do this yet. For instance, I could do: df = pl.DataFrame(table) df.select(pl.col("a")) # or maybe even if (isinstance(df, pl.Series)): raise TypeError What would be the proper way to deal with this situation? How to assure the type-checker that pl.from_arrow() really returns a DataFrame?
You can use the cast function. from typing import cast import pyarrow as pa import polars as pl table = pa.table( { "a": [1, 2, 3], "b": [4, 5, 6], } ) df = cast(pl.DataFrame, pl.from_arrow(table)) reveal_type(df) Since pl.from_arrow can also return a Series, this force the typechecker to consider the type you put there if you're sure about it.
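An alternative to cast that also gives you a runtime check is isinstance-based narrowing, which Pylance/pyright understand as well:
df = pl.from_arrow(table)
if not isinstance(df, pl.DataFrame):
    raise TypeError(f"expected a DataFrame, got {type(df).__name__}")
df.select(pl.col("a"))  # df is now narrowed to pl.DataFrame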
3
2
78,079,857
2024-2-29
https://stackoverflow.com/questions/78079857/apache-arrow-with-apache-spark-unsupportedoperationexception-sun-misc-unsafe
I am trying to integrate Apache Arrow with Apache Spark in a PySpark application, but I am encountering an issue related to sun.misc.Unsafe or java.nio.DirectByteBuffer during the execution. import os import pandas as pd from pyspark.sql import SparkSession extra_java_options = os.getenv("SPARK_EXECUTOR_EXTRA_JAVA_OPTIONS", "") spark = SparkSession.builder \ .appName("ArrowPySparkExample") \ .getOrCreate() spark.conf.set("Dio.netty.tryReflectionSetAccessible", "true") spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true") pdf = pd.DataFrame(["midhun"]) df = spark.createDataFrame(pdf) result_pdf = df.select("*").toPandas() Error Message: in stage 0.0 (TID 11) (192.168.140.22 executor driver): java.lang.UnsupportedOperationException: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available at org.apache.arrow.memory.util.MemoryUtil.directBuffer(MemoryUtil.java:174) at org.apache.arrow.memory.ArrowBuf.getDirectBuffer(ArrowBuf.java:229) Environment: Apache Spark version: 3.4 Apache Arrow version: 1.5 Java version: jdk 21
Same issue with: Apache Spark version 3.5.1 Java JDK 21 Downgrading java to test minimum supported version. Update: Java JDK 17 resolves this issue, please see supported version of Spark: https://spark.apache.org/docs/latest/ (see Spark runs on Java)
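If staying on a newer JDK is a hard requirement, a workaround that is often suggested (hedged; whether it is sufficient depends on the exact Spark/Arrow combination) is to open the relevant java.base packages to the JVM via the extra Java options when building the session, for example:
from pyspark.sql import SparkSession

opens = "--add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED"
spark = (
    SparkSession.builder
    .appName("ArrowPySparkExample")
    .config("spark.driver.extraJavaOptions", opens)
    .config("spark.executor.extraJavaOptions", opens)
    .getOrCreate()
)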
4
2
78,084,538
2024-2-29
https://stackoverflow.com/questions/78084538/openai-assistants-api-how-do-i-upload-a-file-and-use-it-as-a-knowledge-base
My goal is to create a chatbot that I can provide a file to that holds a bunch of text, and then use the OpenAI Assistants API to actually use the file when querying my chatbot. I will use the gpt-3.5-turbo model to answer the questions. The code I have is the following: file_response = client.files.create( file=open("website_content.txt", "rb"), purpose="assistants" ) query_response = client.assistants.query( assistant_id="my_assistant_id", input="Tell me about xxx?", files=[file_response['id']] ) However, this is not working, for what I think could be a few things. For one, I don't fully understand the way it is supposed to work, so I was looking for some guidance. I have already created an assistant via the dashboard, but now I want to just upload a file and then query it. Do I have to use something else, like "threads" via the API, or no? How do I do this?
Note: The code below works with the OpenAI Assistants API v1. In April 2024, the OpenAI Assistants API v2 was released. See the migration guide. I created a customer support chatbot and made a YouTube tutorial about it. The process is as follows: Step 1: Upload a File with an "assistants" purpose my_file = client.files.create( file=open("knowledge.txt", "rb"), purpose='assistants' ) Step 2: Create an Assistant my_assistant = client.beta.assistants.create( model="gpt-3.5-turbo-1106", instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", name="Customer Support Chatbot", tools=[{"type": "retrieval"}] ) Step 3: Create a Thread my_thread = client.beta.threads.create() Step 4: Add a Message to a Thread my_thread_message = client.beta.threads.messages.create( thread_id=my_thread.id, role="user", content="What can I buy in your online store?", file_ids=[my_file.id] ) Step 5: Run the Assistant my_run = client.beta.threads.runs.create( thread_id=my_thread.id, assistant_id=my_assistant.id, ) Step 6: Periodically retrieve the Run to check on its status to see if it has moved to completed keep_retrieving_run = client.beta.threads.runs.retrieve( thread_id=my_thread.id, run_id=my_run.id ) Step 7: Retrieve the Messages added by the Assistant to the Thread once the run status is "completed" all_messages = client.beta.threads.messages.list( thread_id=my_thread.id ) print(f"User: {my_thread_message.content[0].text.value}") print(f"Assistant: {all_messages.data[0].content[0].text.value}") See the full code. Important note The assistant might sometimes behave strangely. The Assistants API is still in beta, and it seems that OpenAI has trouble keeping it realiable, as discussed on the official OpenAI forum. The assistant might sometimes answer that it cannot access the files you uploaded. You might think you did something wrong, but if you run identical code later or the next day, the assistant will successfully access all files and give you an answer. The weird responses I got were the following: Assistant: I currently do not have access to the file you uploaded. Could you provide some details about what you're selling or any specific questions you have in mind? Assistant: I currently don't have the ability to directly access the contents of the file you uploaded. However, if you can provide some details or specific questions about the than happy to assist you in finding the information you need. Assistant: I currently don't have visibility into the specific contents of the file you've uploaded. Could you provide more details about the file or its contents so that I can assist you further? Assistant: I see you've uploaded a file. How can I assist you with it?
2
6
78,092,914
2024-3-2
https://stackoverflow.com/questions/78092914/django-cant-change-username-in-custom-user-model
I have the following User model in Django's models.py: class User(AbstractBaseUser): username = models.CharField(max_length=30, unique=True, primary_key=True) full_name = models.CharField(max_length=65, null=True, blank=True) email = models.EmailField( max_length=255, unique=True, validators=[EmailValidator()] ) When trying to update a username via shell python manage.py shell: userobj = User.objects.get(username="username1") userobj.username = user.lower() userobj.save() It seems that it is trying to create a new user, for which the UNIQUE constraint of the e-mail is violated: django.db.utils.IntegrityError: UNIQUE constraint failed: data_app_user.email Any solutions? Thank you!
This is one of the reasons not to use a username as primary key. Indeed, Django uses the primary key as token to check if two items are the same. If the primary key changes, and you save it, it will first try to update the user with that primary key, and if that fails, make an insert. This can thus produce several unwanted scenarios: one is that it duplicates data, in case the new primary key does not exist, but perhaps even worse: it can result in copying the data of the user to a record of the "new" username, and thus essentially overriding data of that user. As a general rule of thumb, it is better to consider primary keys "blackbox values". Yes, an AutoField works with an integer, but it is better to assume that it is a token where we don't know the details. The fact that it is an integer, is merely an implementation detail. In most database courses, one learns that the primary key is the (subset of) column(s) that makes a record unique, and from a database perspective, that is the right approach, but it is a "Djangoism" that the primary keys use a uniform type, and often don't store real information. This is also useful if you would later work with a GenericForeignKey [Django-doc], so that means there are essentially two options: using an AutoField, or a UUIDField, but regardless, it is better not to store information in the primary key. If you really want to insist, you can update the primary key with: User.objects.filter(pk='username1').update(pk='username2') Which will make a query that looks like: UPDATE app_name_user SET username = 'username2' WHERE username = 'username1' But that is probably not a good idea either, since now you omit validation, signals, and all other "mechanics" that Django has put in place. So I would strongly advise to rewrite the model and don't make username the primary key. You can still set it unique=True [Django-doc].
2
2
78,095,982
2024-3-3
https://stackoverflow.com/questions/78095982/having-one-vector-column-for-multiple-text-columns-on-qdrant
I have a products table that has a lot of columns, which from these, the following ones are important for our search: Title 1 to Title 6 (title in 6 different languages) Brand name (in 6 different languages) Category name (in 6 different languages) Product attributes like size, color, etc. (in 6 different languages) We are planning on using qdrant vector search to implement fast vector queries. But the problem is that all the data important for searching, are in different columns and I do not think (correct me if I am wrong) generating vector embeddings separately for all the columns is the best solution. I came up with the idea of mixing the columns together and generating separate collections; and I came up with this solution because the title, the category, brand and attrs columns are essentially the same just in different langs. Also I use the "BAAI/bge-m3" model which is a multilingual text embedding model that supports more than 100 langs. So, in short, I created different collections for different languages, and for each collection I have a vector column containing the vector for the combined text of title, brand, color, and category in each language and when searched, because we already know which language the website is, we will search in that specific language collection. Now, the question is, is this a valid method? What are the pros and cons of this method? I know for sure that when combined, I can not give different weights to different parts of this vector. For example one combined text of title, category, color, and brand may look like this: "Koala patterned hoodie children blue Bubito" or Something like: "Striped t-shirt men navy blue Zara" Now, user may search "blue hoodie for men", but due to the un-weighted structure of the combined vector, it will not retrieve the best results. I may be wrong and this may be one of the best results, but please tell me more about the pros and cons of this method, and if you can, give me a better idea. It is important to note that currently we have more than 300,000(300K) products and they will grow to more than 1,000,000 (1M) in the near future.
You seem like you have thought this through already, and your method is valid, practical, simple and scalable. Here is a quick overview of what I think about your particular question. Pros of your method By segregating data into collections based on language, you ensure that searches are conducted within the correct linguistic context. It's quite rare for users to mix languages in search terms so I feel like you are right on this point. In terms of scalability, your approach seems optimal as you can expand linearly as your database grows. The separation of the languages could allow you to separate the databases for the different regions, Chinese in China, English in England and query only the one from the right region. Combining relevant fields into a single vector for each language streamlines the search process. This approach reduces the complexity of managing multiple vectors for each product, which can lower overhead and improve efficiency Cons of your method As you stated previously, combining fields without weighting can lead to less precise search outcomes because there is no way to tell which keywords are important. The combined vector approach might not always accurately reflect the nuances of the data. For instance, a product's title, brand, and category might not always align perfectly with the user's search intent, especially if the brand name is a common word in the user's language, which could lead to feeling like the "Verbatim" mode of Google. Alternative approaches Weighted Vector Combination Instead of merging all fields into a single vector, consider creating separate vectors for each field (title, brand, category, attributes) and then combining them with weights that reflect their importance. This method allows for more precise control over search relevance but requires more computational resources and complexity, and a fair bit of judgement if you fine-tune the weights yourself. Another solution in a similar fashion would be to hard code some "important" keywords, or pin them to search in a specific column. This might be doable if your catalog includes a few "main" categories of products, but can be very tedious/un-doable if your products are very diverse. Semantic Search with Fine-Tuning Utilize the BAAI/bge-m3 model to generate embeddings for each field individually, then combine these embeddings in a manner that allows for weighting. This could involve training a custom model on your data to better understand the significance of different fields in the context of your products. This approach essentially automates the previous one, but requires you to already have data about the search intent and the keywords used by the clients. This method is also fairly complicated to implement but could yield good results if you combine it with analytics from the websites so that it can learn over time. I hope this can help you, I would be interested to know what method you will end up using.
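If you later want to experiment with the weighted-vector idea without any retraining, one simple hedged sketch is to embed each field separately with the same model and take a normalised weighted sum before upserting to Qdrant; the weights and the *_emb variables below are purely illustrative:
import numpy as np

def combine(field_vectors, weights):
    # field_vectors: {field name: embedding}, all with the same dimension
    # weights: {field name: importance weight}
    vec = sum(weights[name] * np.asarray(v, dtype=float) for name, v in field_vectors.items())
    return vec / np.linalg.norm(vec)  # renormalise so cosine similarity still behaves

product_vector = combine(
    {"title": title_emb, "brand": brand_emb, "category": category_emb},
    {"title": 0.6, "brand": 0.2, "category": 0.2},
)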
4
2
78,100,006
2024-3-4
https://stackoverflow.com/questions/78100006/how-to-change-cpu-affinity-on-linux-with-python-in-realtime
I know I can use os.sched_setaffinity to set affinity, but it seems that I can't use it to change affinity in realtime. Below is my code: First, I have a cpp program // test.cpp #include <iostream> #include <thread> #include <vector> void workload() { unsigned long long int sum = 0; for (long long int i = 0; i < 50000000000; ++i) { sum += i; } std::cout << "Sum: " << sum << std::endl; } int main() { unsigned int num_threads = std::thread::hardware_concurrency(); std::cout << "Creating " << num_threads << " threads." << std::endl; std::vector<std::thread> threads; for (unsigned int i = 0; i < num_threads; ++i) { threads.push_back(std::thread(workload)); } for (auto& thread : threads) { thread.join(); } return 0; } Then, I compile it g++ test.cpp -O0 and I'll get an a.out file in the same directory. Then, still in the same directory, I have a python file # test.py from subprocess import Popen import os import time a = set(range(8, 16)) b = set(range(4, 12)) if __name__ == "__main__": proc = Popen("./a.out", shell=True) pid = proc.pid print("pid", pid) tic = time.time() while True: if time.time() - tic < 10: os.sched_setaffinity(pid, a) print("a", os.sched_getaffinity(pid)) else: os.sched_setaffinity(pid, b) print("b", os.sched_getaffinity(pid)) res = proc.poll() if res is None: time.sleep(1) else: break a.out would run a long time, and my expect for test.py is: in the first 10 seconds, I would see cpu 8~15 busy while 0~7 idle; and after 10 seconds, I would see cpu 4~11 busy while others idle. But as I observed with htop, I found that in the first 10 seconds, my observation indeed met my expect, however after 10 seconds, I could see b {4, 5, 6, 7, 8, 9, 10, 11} every second, as if I successfully set the affinity; but on htop, I still found that cpu 8~15 busy while 0~7 idle until the program normally stopped, which means I faild to set the affinity. I'd like to ask why would this happen? I read the manual but didn't find anything to mention about it. And it seems that python's os.sched_setaffinity doesn't return anything so I can't see the result. I'm using AMD cpu, but I don't think that matters.
The root of the problem is that on Linux, sched_setaffinity() affects a thread, not a process. The main thread has the same id as the process, but subsequent threads have different ids, and they inherit the CPU affinity from the parent thread. Apparently Python is quick enough to set the CPU affinity of the main thread, so the child threads inherited the initial CPU mask from the main thread, but changing the CPU affinity of the main thread later won't affect already running threads (but it would, in theory, affect new threads if they are spawned from the main thread later). To work around it, you can introduce a helper function like this:
def ChangeProcessAffinity(pid, cpus):
    for tid in map(int, os.listdir(f'/proc/{pid}/task/')):
        try:
            os.sched_setaffinity(tid, cpus)
        except OSError:
            pass  # the thread may have exited already; maybe log the error instead
...then call that in your while loop. Note that there are two problems with this:
The solution using procfs is specific to Linux,
The solution is inherently racy: it's possible that threads start or stop after listing them, which is also why you need an exception handler.
I don't think there is an atomic way to set the CPU affinity for all threads in a process; the taskset command line utility (which you already found) does approximately the same as I suggested above. If you want a more robust solution, I think it's best to implement it inside the C++ binary itself (if you have control over it), since it knows when threads are started or stopped.
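If shelling out is acceptable, the taskset utility mentioned above can also retarget every thread of the process in one call; a hedged sketch (Linux only, and it shares the same inherent raciness):
import subprocess

def change_process_affinity(pid, cpus):
    cpu_list = ",".join(map(str, sorted(cpus)))
    # -a/--all-tasks applies the new mask to every thread of the process
    subprocess.run(["taskset", "-a", "-p", "-c", cpu_list, str(pid)], check=True)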
2
2
78,097,487
2024-3-3
https://stackoverflow.com/questions/78097487/pylance-in-visual-studio-code-does-not-recognise-poetry-virtual-env-dependencies
I am using Poetry to manage a Python project. I create a virtual environment for Poetry using a normal poetry install and pyproject.toml workflow. Visual Studio Code and its PyLance does not pick up project dependencies in Jupyter Notebook. Python stdlib modules are recognised The modules of my application are recognised The modules in the dependencies and libraries my application uses are not recognised Instead, you get an error Import "xxx" could not be resolved Pylance (reportMissingImports) An example screenshot with some random imports that show what is recognised and what is not (tradeexecutor package is Poetry project, then some random Python packages dependency are not recognised).: The notebook still runs fine within Visual Studio Code, so the problem is specific to PyLance, the virtual environment is definitely correctly set up. Some Python Language Server output (if relevant): 2024-03-01 10:15:40.628 [info] [Info - 10:15:40] (28928) Starting service instance "trade-executor" 2024-03-01 10:15:40.656 [info] [Info - 10:15:40] (28928) Setting pythonPath for service "trade-executor": "/Users/moo/code/ts/trade-executor" 2024-03-01 10:15:40.657 [info] [Info - 10:15:40] (28928) Setting environmentName for service "trade-executor": "3.10.13 (trade-executor-8Oz1GdY1-py3.10 venv)" 2024-03-01 10:15:40.657 [info] [Info - 10:15:40] (28928) Loading pyproject.toml file at /Users/moo/code/ts/trade-executor/pyproject.toml 2024-03-01 10:15:40.657 [info] [Info - 10:15:40] (28928) Pyproject file "/Users/moo/code/ts/trade-executor/pyproject.toml" has no "[tool.pyright]" section. 2024-03-01 10:15:41.064 [info] [Info - 10:15:41] (28928) Found 763 source files 2024-03-01 10:15:41.158 [info] [Info - 10:15:41] (28928) Background analysis(4) root directory: file:///Users/moo/.vscode/extensions/ms-python.vscode-pylance-2024.2.2/dist 2024-03-01 10:15:41.158 [info] [Info - 10:15:41] (28928) Background analysis(4) started 2024-03-01 10:15:41.411 [info] [Info - 10:15:41] (28928) Indexer background runner(5) root directory: file:///Users/moo/.vscode/extensions/ms-python.vscode-pylance-2024.2.2/dist (index) 2024-03-01 10:15:41.411 [info] [Info - 10:15:41] (28928) Indexing(5) started 2024-03-01 10:15:41.662 [info] [Info - 10:15:41] (28928) scanned(5) 1 files over 1 exec env 2024-03-01 10:15:42.326 [info] [Info - 10:15:42] (28928) indexed(5) 1 files over 1 exec Also looks like PyLance correctly finds the virtual environment in the earlier Python Language Server output: 2024-03-03 19:36:56.784 [info] [Info - 19:36:56] (41658) Pylance language server 2024.2.2 (pyright version 1.1.348, commit cfb1de0c) starting 2024-03-03 19:36:56.789 [info] [Info - 19:36:56] (41658) Server root directory: file:///Users/moo/.vscode/extensions/ms-python.vscode-pylance-2024.2.2/dist 2024-03-03 19:36:56.789 [info] [Info - 19:36:56] (41658) Starting service instance "trade-executor" 2024-03-03 19:36:57.091 [info] [Info - 19:36:57] (41658) Setting pythonPath for service "trade-executor": "/Users/moo/Library/Caches/pypoetry/virtualenvs/trade-executor-8Oz1GdY1-py3.10/bin/python" 2024-03-03 19:36:57.093 [info] [Info - 19:36:57] (41658) Setting environmentName for service "trade-executor": "3.10.13 (trade-executor-8Oz1GdY1-py3.10 venv)" 2024-03-03 19:36:57.096 [info] [Info - 19:36:57] (41658) Loading pyproject.toml file at /Users/moo/code/ts/trade-executor/pyproject.toml How to diagnose the issue further and then fix the issue?
Some other people were having the same issue.
Pylance does not show auto import information from site-packages directory #3281
Pylance not indexing all files and symbols for sqlalchemy even with package depth of 4
Visual Studio Code's PyLance implementation seems to have some internal limits that may prevent indexing all files. However, this was not the case for me. Instead, PyLance was somehow corrupted. Running:
PyLance: Clear all persistent indices
from the command palette fixed the issue for me. After this, PyLance seemed to behave.
6
1
78,088,888
2024-3-1
https://stackoverflow.com/questions/78088888/how-to-calculate-the-midpoints-of-each-triangle-edges-of-an-icosahedron
I am trying to compute the midpoints of each triangle edge of an icosahedron to get an icosphere, which is the composition of an icosahedron subdivided into 6 or more levels. I tried to calculate the newly created vertices of each edge, but some points were missing. I tried to normalize each midpoint, but the points still weren't evenly spread out and some points were missing.
import matplotlib.pyplot as plt
import numpy as np

num_points = 12
indices = np.arange(0, num_points, dtype='float')
r = 1
vertices = [
    [0.0, 0.0, -1.0],
    [0.0, 0.0, 1.0]
]  # poles

# icosahedron
for i in range(num_points):
    theta = np.arctan(1 / 2) * (180 / np.pi)  # angle 26 degrees
    phi = np.deg2rad(i * 72)

    if i >= (num_points / 2):
        theta = -theta
        phi = np.deg2rad(36 + i * 72)

    x = r * np.cos(np.deg2rad(theta)) * np.cos(phi)
    y = r * np.cos(np.deg2rad(theta)) * np.sin(phi)
    z = r * np.sin(np.deg2rad(theta))

    vertices.append([x, y, z])

vertices = np.array(vertices)
Icosahedron:
# Triangle Subdivision
for _ in range(2):
    for j in range(0, len(vertices), 3):
        v1 = vertices[j]
        v2 = vertices[j + 1]
        v3 = vertices[j + 2]

        m1_2 = ((v1 + v2) / 2)
        m2_3 = ((v2 + v3) / 2)
        m1_3 = ((v1 + v3) / 2)

        m1_2 /= np.linalg.norm(m1_2)
        m2_3 /= np.linalg.norm(m2_3)
        m1_3 /= np.linalg.norm(m1_3)

        vertices = np.vstack([vertices, m1_2, m2_3, m1_3])

print(vertices)
plt.figure().add_subplot(projection='3d').scatter(vertices[:, 0], vertices[:, 1], vertices[:, 2])
plt.show()
icosphere attempt:
I used this as a reference to create an icosahedron https://www.songho.ca/opengl/gl_sphere.html and what I am expecting to achieve is this:
Geodesic polyhedron:
I tried debugging the subdivision of the edges on its own and it performed well:
import numpy as np
import matplotlib.pyplot as plt

vertices = [[1, 1], [2, 3], [3, 1]]
vertices = np.array(vertices)

for j in range(2):
    for i in range(0, len(vertices), 3):
        v1 = vertices[i]
        v2 = vertices[i + 1]
        v3 = vertices[i + 2]

        m1_2 = (v1 + v2) / 2
        m1_3 = (v1 + v3) / 2
        m2_3 = (v2 + v3) / 2

        vertices = np.vstack([vertices, m1_2, m1_3, m2_3])

plt.figure().add_subplot().scatter(vertices[:, 0], vertices[:, 1])
plt.plot(vertices[:, 0], vertices[:, 1], '-ok')
plt.show()
Midpoints of each edge:
Pardon the JavaScripty answer (because that's easier for showing things off in the answer itself =), and the "cabinet projection", which is a dead simple way to turn 3D into 2D but it'll look "strangly squished" (unless you're playing an isometric platformer =) Icosahedrons are basically fully defined by their edge length, so once you've decided on that one value, everything else is locked in, and you can generate your 12 icosahedron points pretty easily. In your case, if we stand the icosahedron on a "pole" at (0,0,0), then its height is defined by the edge length as: h = edge * sqrt(1/2 * (5 + sqrt(5))) (Or if we want maximum fives, h = edge * ((5 + 5 ** 0.5) * 0.5) ** 0.5) And we can generate two rings of 5 vertices each, one ring at height h1 = edge * sqrt(1/2 - 1 / (2 * sqrt(5)));, spaced at 2Ο€/5 angular intervals, and then the other at height h2 = h - h1 (no need to do additional "real" math!) with the same angular intervals, but offset by Ο€/5. That gives us this: function sourceCode() { const edge = 45; const h = edge * sqrt(1 / 2 * (5 + sqrt(5))); const poles = [ [0, 0, 0], [0, 0, h] ]; function setup() { setSize(300, 120); setProjector(150, 100); play(); } function draw() { clear(); const [bottom, top] = poles; const [p1, p2] = generatePoints(poles); drawIcosahedron(bottom, p1, p2, top); } function generatePoints(poles) { // get an angle offset based on the mouse, // to generate with fancy rotation. let ratio = (frame / 2000) % TAU; if (pointer.active) ratio = (pointer.x / width); const ao = TAU * ratio; // create our bottom and top rings: const r = edge * sqrt(0.5 + sqrt(5) / 10); const h1 = edge * sqrt(1 / 2 - 1 / (2 * sqrt(5))); const h2 = h - h1; return [h1, h2].map((h, id) => { const ring = []; for (let i = 0; i < 5; i++) { const a = ao + i * TAU / 5 + id * PI / 5; const p = [r * cos(a), r * sin(a), h]; ring.push(p); } return ring; }); } function drawIcosahedron(b, p1, p2, t) { b = project(...b); p1 = p1.map(p => project(...p)); p2 = p2.map(p => project(...p)); t = project(...t); // draw our main diagonal setColor(`lightgrey`); line(...b, ...t); // then draw our bottom ring p1.forEach((p, i) => { // connect down line(...b, ...p); // and connect self line(...p, ...p1[(i + 1) % 5]); }); // and then our top ring p2.forEach((p, i) => { // connect down line(...p, ...p1[i]); line(...p, ...p1[(i + 1) % 5]); // connect self line(...p, ...p2[(i + 1) % 5]); // and because this is the last ring, connect up line(...t, ...p); }); setColor(`black`); [b, ...p1, ...p2, t].forEach(p => point(...p)); } function pointerMove() { redraw(); } } customElements.whenDefined('graphics-element').then(() => { graphicsElement.loadFromFunction(sourceCode); }); <script type="module" src="https://cdnjs.cloudflare.com/ajax/libs/graphics-element/1.10.0/graphics-element.js"></script> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/graphics-element/1.10.0/graphics-element.css"> <graphics-element id="graphicsElement" title="An icosahedron"></graphics-element> (mouse-over that graphic for more precise 3D rotational control) To then find all the midpoints, we could do a lot of maths, but why bother: we can just linearly interpolate between points: const lerp = (a,b) => [ (a[0] + b[0]) / 2, # average for x coordinates (a[1] + b[1]) / 2, # average for y coordinates (a[2] + b[2]) / 2, # average for z coordinates ]; // We end up with six "rings". 
Three constructed "from below" const m1 = p1.map((p) => avg(b, p)); const m2 = p1.map((p,i) => avg(p, p1[(i+1)%5])); const m3 = p1.map((p,i) => avg(p, p2[i])); // And three constructed "from above" const m4 = p2.map((p,i) => avg(p, p1[(i+1)%5])); const m5 = p2.map((p,i) => avg(p, p2[(i+1)%5])); const m6 = p2.map((p) => avg(t, p)); So adding that to the previous graphic: function sourceCode() { const edge = 45; const h = edge * sqrt(1 / 2 * (5 + sqrt(5))); const poles = [ [0, 0, 0], [0, 0, h] ]; function setup() { setSize(300, 120); setProjector(150, 100); play(); } function draw() { clear(); const [bottom, top] = poles; const [p1, p2] = generatePoints(poles); drawIcosahedron(bottom, p1, p2, top); drawMidpoints(...generateMidPoints(bottom, p1, p2, top)); } function generatePoints(poles) { // get an angle offset based on the mouse, // to generate with fancy rotation. let ratio = (frame / 2000) % TAU; if (pointer.active) ratio = (pointer.x / width); const ao = TAU * ratio; // create our bottom and top rings: const r = edge * sqrt(0.5 + sqrt(5) / 10); const h1 = edge * sqrt(1 / 2 - 1 / (2 * sqrt(5))); const h2 = h - h1; return [h1, h2].map((h, id) => { const ring = []; for (let i = 0; i < 5; i++) { const a = ao + i * TAU / 5 + id * PI / 5; const p = [r * cos(a), r * sin(a), h]; ring.push(p); } return ring; }); } function generateMidPoints(b, p1, p2, t) { // we could use math, but why both when we can lerp? const avg = (a, b) => [ (a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2, ]; const m1 = p1.map((p) => avg(b, p)); const m2 = p1.map((p, i) => avg(p, p1[(i + 1) % 5])); const m3 = p1.map((p, i) => avg(p, p2[i])); const m4 = p2.map((p, i) => avg(p, p1[(i + 1) % 5])); const m5 = p2.map((p, i) => avg(p, p2[(i + 1) % 5])); const m6 = p2.map((p) => avg(t, p)); return [m1, m2, m3, m4, m5, m6]; } function drawIcosahedron(b, p1, p2, t) { b = project(...b); p1 = p1.map(p => project(...p)); p2 = p2.map(p => project(...p)); t = project(...t); setColor(`lightgrey`); line(...b, ...t); p1.forEach((p, i) => { line(...b, ...p); line(...p, ...p1[(i + 1) % 5]); }); p2.forEach((p, i) => { line(...p, ...p1[i]); line(...p, ...p1[(i + 1) % 5]); line(...p, ...p2[(i + 1) % 5]); line(...t, ...p); }); setColor(`black`); [b, ...p1, ...p2, t].forEach(p => circle(...p, 2)); } function drawMidpoints(m1, m2, m3, m4, m5, m6) { m1 = m1.map(p => project(...p)); m2 = m2.map(p => project(...p)); m3 = m3.map(p => project(...p)); m4 = m4.map(p => project(...p)); m5 = m5.map(p => project(...p)); m6 = m6.map(p => project(...p)); setColor(`salmon`); const midpoints = [m1, m2, m3, m4, m5, m6].flat(); midpoints.forEach((p, i) => circle(...p, 2)); } function pointerMove() { redraw(); } } customElements.whenDefined('graphics-element').then(() => { graphicsElement.loadFromFunction(sourceCode); }); <script type="module" src="https://cdnjs.cloudflare.com/ajax/libs/graphics-element/1.10.0/graphics-element.js"></script> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/graphics-element/1.10.0/graphics-element.min.css"> <graphics-element id="graphicsElement" title="All edge midpoints highlighted"></graphics-element> (And again, mouse-over that graphic to get a better 3D feel) That still leaves "normalizing" each newly derived vertex so that it lies on the same sphere that the original icosahedral vertices, by scaling it relative to the center as the icosahedron, with a new length equal to half the distance to our poles (i.e., the same as every vertex in our original icosahedron): function scaleToSphere(r, 
...pointArrays) { // Nothing too fancy, just scale the vector // relative to the icosahedral center by moving // it down, scaling it, and moving it back up: const resize = (p) => { const q = [p[0], p[1], p[2] - r]; const d = (q[0]**2 + q[1]**2 + q[2]**2)**0.5; p[0] = q[0] * r/d; p[1] = q[1] * r/d; p[2] = q[2] * r/d + r; }; pointArrays.forEach(arr => arr.forEach(p => resize(p))); } So that really only leaves the "tedious" job of linking up all our edges... let's add scaleToSphere to the previous code, with a new drawIcosaspheron function that draws all our edges: function sourceCode() { const edge = 45; const h = edge * sqrt(1 / 2 * (5 + sqrt(5))); const poles = [ [0, 0, 0], [0, 0, h] ]; function setup() { setSize(300, 120); setProjector(150, 100); play(); } function draw() { clear(); const [bottom, top] = poles; const [p1, p2] = generatePoints(); const midpoints = generateMidPoints(bottom, p1, p2, top); scaleToSphere(h / 2, ...midpoints); drawIcosaspheron(bottom, p1, p2, ...midpoints, top); } function scaleToSphere(r, ...pointArrays) { const resize = (p) => { const q = [p[0], p[1], p[2] - r]; const d = (q[0] ** 2 + q[1] ** 2 + q[2] ** 2) ** 0.5; p[0] = q[0] * r / d; p[1] = q[1] * r / d; p[2] = q[2] * r / d + r; }; pointArrays.forEach(arr => arr.forEach(p => resize(p))); } function generatePoints() { // get an angle offset based on the mouse, // to generate with fancy rotation. let ratio = (frame / 2000) % TAU; if (pointer.active) ratio = (pointer.x / width); const ao = TAU * ratio; // create our bottom and top rings: const r = edge * sqrt(0.5 + sqrt(5) / 10); const h1 = edge * sqrt(1 / 2 - 1 / (2 * sqrt(5))); const h2 = h - h1; return [h1, h2].map((h, id) => { const ring = []; for (let i = 0; i < 5; i++) { const a = ao + i * TAU / 5 + id * PI / 5; const p = [r * cos(a), r * sin(a), h]; ring.push(p); } return ring; }); } function generateMidPoints(b, p1, p2, t) { // we could use math, but why both when we can lerp? 
const avg = (a, b) => [ (a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2, ]; const m1 = p1.map((p) => avg(b, p)); const m2 = p1.map((p, i) => avg(p, p1[(i + 1) % 5])); const m3 = p1.map((p, i) => avg(p, p2[i])); const m4 = p2.map((p, i) => avg(p, p1[(i + 1) % 5])); const m5 = p2.map((p, i) => avg(p, p2[(i + 1) % 5])); const m6 = p2.map((p) => avg(t, p)); return [m1, m2, m3, m4, m5, m6]; } function drawIcosaspheron(b, p1, p2, m1, m2, m3, m4, m5, m6, t) { b = project(...b); p1 = p1.map(p => project(...p)); p2 = p2.map(p => project(...p)); m1 = m1.map(p => project(...p)); m2 = m2.map(p => project(...p)); m3 = m3.map(p => project(...p)); m4 = m4.map(p => project(...p)); m5 = m5.map(p => project(...p)); m6 = m6.map(p => project(...p)); t = project(...t); // Draw our mesh based on rings: setColor(`lightgrey`); // first ring: m1.forEach((p, i) => { line(...b, ...p); line(...p, ...m1[(i + 1) % 5]); }); // second ring, which is {p1,m2} p1.forEach((p,i) => { line(...p, ...m1[i]); line(...p, ...m2[i]); }); m2.forEach((p,i) => { line(...p, ...m1[i]); line(...p, ...m1[(i+1) % 5]); line(...p, ...p1[(i+1) % 5]); }); // third ring, which is {m3,m4} m3.forEach((p,i) => { line(...p, ...p1[i]); line(...p, ...m2[i]); line(...p, ...m4[i]); }); m4.forEach((p,i) => { line(...p, ...m2[i]); line(...p, ...m2[(i+1) % 5]); line(...p, ...m3[(i+1) % 5]); }); // fourth ring, which is {p2,m5} p2.forEach((p,i) => { line(...p, ...m3[i]); line(...p, ...m4[i]); line(...p, ...m5[i]); }); m5.forEach((p,i) => { line(...p, ...m3[(i+1) % 5]); line(...p, ...m4[i]); line(...p, ...p2[(i+1) % 5]); }); // fifth ring, which is m6 m6.forEach((p, i) => { line(...p, ...p2[i]); line(...p, ...m5[i]); line(...p, ...m5[(i + 5 - 1) % 5]); line(...p, ...m6[(i + 1) % 5]); line(...t, ...p); }); setColor(`black`); [b, ...p1, ...p2, t].forEach(p => circle(...p, 2)); setColor(`salmon`); [...m1, ...m2, ...m3, ...m4, ...m5, ...m6].forEach(p => circle(...p, 2)); } function pointerMove() { redraw(); } } customElements.whenDefined('graphics-element').then(() => { graphicsElement.loadFromFunction(sourceCode); }); <script type="module" src="https://cdnjs.cloudflare.com/ajax/libs/graphics-element/1.10.0/graphics-element.js"></script> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/graphics-element/1.10.0/graphics-element.min.css"> <graphics-element id="graphicsElement" title="After spherical scaling"></graphics-element> Of course, if you want to keep going, the manual edge-building is silly, but since we're building rings of vertices, we (and by "we" of course I mean "you" ;) can write our point generation to be a little smarter and automatically "merge" all new points on a ring into that same ring, so that edge building is a matter of starting with "five edges from the bottom pole to first ring", then iterating over each higher ring so that it cross-connects all points to the previous ring (as well as adding all edges of the ring itself, of course) until we reach the top pole, which just needs five edges to the ring below it.
2
5
78,095,645
2024-3-3
https://stackoverflow.com/questions/78095645/how-do-i-get-selenium-on-python-to-connect-to-an-existing-instance-of-firefox
I am trying to use Selenium to connect to existing instance of Firefox - the documentation says to use something like this options=webdriver.FirefoxOptions() options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe' webdriver_service = Service(r'c:\tmp\geckodriver.exe') driver = webdriver.Firefox(service = webdriver_service, service_args=['--marionette-port', '2828', '--connect-existing']) However, I get the error driver = webdriver.Firefox(service = webdriver_service, service_args=['--marionette-port', '2828', '--connect-existing']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: WebDriver.__init__() got an unexpected keyword argument 'service_args' I see other questions about "unexpected keyword argument", they say latest versions of Selenium has other ways of passing arguments through Options. I tried options.add_argument('--marionette-port') options.add_argument('2828') options.add_argument('--connect-existing') But it still seems to create a new instance of Firefox I have started firefox with the following arguments "C:\Program Files\Mozilla Firefox\firefox.exe" -marionette -start-debugger-server 2828 How do I fix this? These are my versions Python version python --version Python 3.12.2 Selenium version pip show selenium Name: selenium Version: 4.18.1 Geckodriver version geckodriver --version geckodriver 0.34.0 (c44f0d09630a 2024-01-02 15:36 +0000) Firefox 123.0 (64-bit) Windows 11
You should put the service_args parameter in the Service class and not on Firefox. I believe this should work options=webdriver.FirefoxOptions() options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe' webdriver_service = Service(r'c:\tmp\geckodriver.exe', service_args=['--marionette-port', '2828', '--connect-existing']) driver = webdriver.Firefox(service = webdriver_service)
2
2
78,108,792
2024-3-5
https://stackoverflow.com/questions/78108792/why-is-builtin-sorted-slower-for-a-list-containing-descending-numbers-if-each
I sorted four similar lists. List d consistently takes much longer than the others, which all take about the same time: a: 33.5 ms b: 33.4 ms c: 36.4 ms d: 110.9 ms Why is that? Test script (Attempt This Online!): from timeit import repeat n = 2_000_000 a = [i // 1 for i in range(n)] # [0, 1, 2, 3, ..., 1_999_999] b = [i // 2 for i in range(n)] # [0, 0, 1, 1, 2, 2, ..., 999_999] c = a[::-1] # [1_999_999, ..., 3, 2, 1, 0] d = b[::-1] # [999_999, ..., 2, 2, 1, 1, 0, 0] for name in 'abcd': lst = globals()[name] time = min(repeat(lambda: sorted(lst), number=1)) print(f'{name}: {time*1e3 :5.1f} ms')
As alluded to in the comments by btilly and Amadan, this is due to how the Timsort sorting algorithm works. Detailed description of the algorithm is here. Timsort speeds up operation on partially sorted arrays by identifying runs of sorted elements. A run is either "ascending", which means non-decreasing: a0 <= a1 <= a2 <= ... or "descending", which means strictly decreasing: a0 > a1 > a2 > ... Note that a run is always at least 2 long, unless we start at the array's last element. Your arrays a, b and c each consist of just one run. The array d has 1 million runs. The reason why the descending run cannot be >= is to make the sort stable, i.e. keep the order of equal elements: The definition of descending is strict, because the main routine reverses a descending run in-place, transforming a descending run into an ascending run. Reversal is done via the obvious fast "swap elements starting at each end, and converge at the middle" method, and that can violate stability if the slice contains any equal elements. Using a strict definition of descending ensures that a descending run contains distinct elements. Python 3.11 has slightly improved version of timsort, sometimes called powersort, but it uses the same run detection and thus has the same performance.
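To make the run structure concrete, here is a small sketch (not from the original answer) that counts maximal runs exactly as defined above, non-decreasing or strictly decreasing, for the four lists from the question; it should report 1 run each for a, b and c, and roughly a million short runs for d:

def count_runs(lst):
    # Count maximal runs the way Timsort's run detection defines them:
    # "ascending" means non-decreasing, "descending" means strictly decreasing.
    runs, i, n = 0, 0, len(lst)
    while i < n:
        runs += 1
        if i == n - 1:
            break
        if lst[i + 1] >= lst[i]:            # non-decreasing run
            while i + 1 < n and lst[i + 1] >= lst[i]:
                i += 1
        else:                               # strictly decreasing run
            while i + 1 < n and lst[i + 1] < lst[i]:
                i += 1
        i += 1
    return runs

n = 2_000_000
a = [i // 1 for i in range(n)]
b = [i // 2 for i in range(n)]
c = a[::-1]
d = b[::-1]

for name, lst in [("a", a), ("b", b), ("c", c), ("d", d)]:
    print(name, count_runs(lst))   # a: 1, b: 1, c: 1, d: 1_000_000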
32
43
78,110,125
2024-3-5
https://stackoverflow.com/questions/78110125/how-to-dynamically-create-fastapi-routes-handlers-for-a-list-of-pydantic-models
I have a problem using FastAPI, trying to dynamically define routes. It seems that the final route/handler defined overrides all the previous ones (see image below) Situation: I have a lot of Pydantic models as a list (imported from models.py; simplified in this example), and would like to create a GET endpoint for each model. Something like this (my app is more complex, and there is a reason it is structured as it is, but I've simplified it down to this in order to find the problem, which still occurs): # models.py class Thing(BaseNode): value: str class Other(BaseNode): value: str model_list = [Thing, Other] # app.py from models import model_list app = FastAPI() # Iterate over the models list and define a get function for model in model_list: @app.get(f"/{model.__name__.lower()}") def get() -> str: print(model.__name__) return f"Getting {model.__name__}" I'm obviously doing something wrong (or FastAPI references the handler function in some weird way... by module/function name(?) so the get() function is being re-defined for each iteration?) Any ideas, or a better way to structure this? (I can't manually write out a new function for each model!) Thanks! UPDATE: If I define the get() function inside another function, it works: for model in model_list: def get(model): def _get(): print("Getting", model) return f"Getting {model.__name__}" return _get app.get(f"/{model.__name__.lower()}", name=f"{model.__name__}.View")(get(model))
Most likely the issue lies in the fact that you are using the same model reference for every endpoint created, and hence, it is always assigned the last value given in the for loop. You should rather use a helper function that returns an inner function, as demonstrated in Option 2 of this answer. Example from fastapi import FastAPI, APIRouter, Request from pydantic import BaseModel class One(BaseModel): value: str class Two(BaseModel): value: str app = FastAPI() models = [One, Two] def create_endpoint(m_name: str): async def endpoint(request: Request): print(request.url) return f"You called {m_name}" return endpoint for m in models: app.add_api_route(f"/{m.__name__.lower()}", create_endpoint(m.__name__), methods=["GET"])
3
4
78,109,250
2024-3-5
https://stackoverflow.com/questions/78109250/running-a-python-file-that-imports-from-airflow-package-requires-airflow-instan
I am running into a weird import issue with Airflow. I want to create a module from which others can import. I also want to run unit tests on this module. However, I noticed that as soon as you import anything from the airflow package, it will try and run Airflow. Example: # myfile.py from airflow import DAG print("Hello world") Then run it with python myfile.py, results in: (.venv) c:\Users\Jarro\Development\airflow-tryout-import>python myfile.py WARNING:root:OSError while attempting to symlink the latest log directory Traceback (most recent call last): File "c:\Users\Jarro\Development\airflow-tryout-import\myfile.py", line 1, in <module> from airflow import DAG File "C:\Users\Jarro\Development\airflow-tryout-import\.venv\Lib\site-packages\airflow\__init__.py", line 68, in <module> settings.initialize() File "C:\Users\Jarro\Development\airflow-tryout-import\.venv\Lib\site-packages\airflow\settings.py", line 559, in initialize configure_orm() File "C:\Users\Jarro\Development\airflow-tryout-import\.venv\Lib\site-packages\airflow\settings.py", line 237, in configure_orm raise AirflowConfigException( airflow.exceptions.AirflowConfigException: Cannot use relative path: `sqlite:///C:\Users\Jarro/airflow/airflow.db` to connect to sqlite. Please use absolute path such as `sqlite:////tmp/airflow.db`. Aside from the error itself, I am actually way more concerned that it seems I am not able to import things from Airflow, without there being side-effects (such as database initializations). Am I going all wrong about this? Is there another way I can import things from Airflow without these side effects, for example for typing purposes?
This is happening because of how imports work in Python. You're importing from the airflow package, which has an __init__ file. If you inspect it, there's a piece of code in it that does all of the Airflow init stuff. Lines 67-68 in __init__.py: if not os.environ.get("_AIRFLOW__AS_LIBRARY", None): settings.initialize() If you just want to import things for typing like you said, you can set the _AIRFLOW__AS_LIBRARY environment variable to anything and it will not initialize.
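For illustration only (this snippet is not from the original answer), a minimal sketch of how that flag could be set before the import; the variable name comes from the Airflow source quoted above, and the value "1" is an arbitrary choice since only the variable's presence matters:

import os

# Must be set before the first `import airflow`, because the check in
# airflow/__init__.py runs at import time.
os.environ["_AIRFLOW__AS_LIBRARY"] = "1"

from airflow import DAG  # no settings.initialize() side effects now

print("Hello world")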
2
3
78,106,081
2024-3-5
https://stackoverflow.com/questions/78106081/how-to-generate-graph-from-pandas-dataframe-with-full-hierachical-data
Having a Pandas dataframe with sample hierarchical data i.e. data = pd.DataFrame({ "manager_id": ["A", "A", "B", "A", "C", "A", "B"], "employee_id": ["B", "C", "C", "D", "E", "E", "E"] } ) Given that the data consists of all the descendent relationships for each manager. For example in each manager id (e.g. "A"), the employee id comprises both the employees (e.g. "B") directly managed by manager "A" and the employees managed by employee "B" (e.g. "C", "D"). How can this be represented into a directed graph in networkx? The edges of the graph should be [('A', 'B'), ('A', 'D'), ('B', 'C'), ('C', 'E')]
Read the DataFrame as a directed graph with from_pandas_edgelist, then sort the nodes with topological_sort and only keep the edge with the parent that has the greatest topological index per child: G = nx.from_pandas_edgelist(data, source='manager_id', target='employee_id', create_using=nx.DiGraph) # topological order order = {n: i for i,n in enumerate(nx.topological_sort(G))} # {'A': 0, 'B': 1, 'D': 2, 'C': 3, 'E': 4} # for each node, only keep the parent that has the greatest topological index parents = {} for source, target in G.edges(): p = parents.setdefault(target, source) if order[source] > order[p]: parents[target] = source # parents # {'C': 'B', 'E': 'C', 'B': 'A', 'D': 'A'} # remove shorter edges G.remove_edges_from(G.edges - set(zip(parents.values(), parents.keys()))) # or # G = G.edge_subgraph(list(zip(parents.values(), parents.keys()))) Output: Graph before filtering: Variant You can also compute an order with topological_generations to have the same number for nodes of the same generation. order = {n: i for i, l in enumerate(nx.topological_generations(G)) for n in l} # {'A': 0, 'B': 1, 'D': 1, 'C': 2, 'E': 3} In link to your other question, you could also compute the relative generation difference and only keep the edges with a generation difference of 1 between the two nodes: G = nx.from_pandas_edgelist(data, source='manager_id', target='employee_id', create_using=nx.DiGraph) order = {n: i for i, l in enumerate(nx.topological_generations(G)) for n in l} # {'A': 0, 'B': 1, 'D': 1, 'C': 2, 'E': 3} # compute the relative generation difference # keep the edges with a difference of 1 keep = [e for e in G.edges if order[e[1]]-order[e[0]] == 1] # [('A', 'B'), ('A', 'D'), ('B', 'C'), ('C', 'E'), ('F', 'G')] G = G.edge_subgraph(list(zip(parents.values(), parents.keys())))
3
2
78,108,812
2024-3-5
https://stackoverflow.com/questions/78108812/display-information-using-django-tags
I am writing a site with Django; the purpose of this site is to create a test to assess students' knowledge. I need help with outputting the answer options for each question. I keep the questions in a list and the answer options in a nested list, for example: questions = ["question1", "question2", "question3"] answers = [["answer1", "answer2", "answer3"], ["answer1", "answer2", "answer3", "answer4"], ["answer1", "answer2", "answer3"]] and I need this data to be displayed in the following format: question1 answer1 answer2 answer3 question2 answer1 answer2 answer3 answer4 question3 answer1 answer2 answer3 Here is my code; it does not work correctly, I know, as I have not yet figured out how to implement it with Django template tags: {% for question in questions %} <p>{{question}}</p> <ul> {% for answer in answers %} {% for current in answer %} <li><input type="radio" id="option{{ forloop.parentloop.counter }}_{{ forloop.counter }}" name="answer{{ forloop.parentloop.counter }}" value="{{ current }}">{{ current }}</li> {% endfor %} {% endfor %} </ul> {% endfor %}
Perform the joining in the view: def my_view(request): questions = ['question1', 'question2', 'question3'] answers = [ ['answer11', 'answer12', 'answer13'], ['answer21', 'answer22', 'answer23', 'answer24'], ['answer31', 'answer32', 'answer33'], ] return render(request, 'some_template.html', {'qas': zip(questions, answers)}) and in the template work with: {% for question, answers in qas %} <p>{{question}}</p> <ul> {% for answer in answers %} <li><input type="radio" id="option{{ forloop.parentloop.counter }}_{{ forloop.counter }}" name="answer{{ forloop.parentloop.counter }}" value="{{ answer }}">{{ answer }}</li> {% endfor %} </ul> {% endfor %} That being said, what you do is primitive obsession [refactoring.guru]: expressing data in terms of lists, strings, etc. If data has a certain structure, it makes more sense to define a dedicated class for it, and add logic to it. For example, logic to render the id="" for the option, and logic to parse the submitted data back.
3
3
78,106,827
2024-3-5
https://stackoverflow.com/questions/78106827/pandas-find-last-value-before-a-given-timestamp
For the dataframe below, I am trying to add a column to each row that captures the ask_size at different time intervals, for the sake of example, say 1 millisecond. So for instance, for row 1, the size 1ms before should be 165 since that is the prevailing ask size 1ms before - even though the previous timestamp (2024-02-12 09:00:00.178941829) was way before, it is still the **prevailing ** size 1 millisecond before. For another example, row 3 to 8 should be all 203, since that is the size at timestamp 2024-02-12 09:00:00.334723166, which would be the last timestamp 1ms before row 3 to 8. Been reading up on merge_asof, tried a few things below, but no luck. Any help appreciated! Table example idx event_timestamp ask_size 0 2024-02-12 09:00:00.178941829 165 1 2024-02-12 09:00:00.334673928 166 2 2024-02-12 09:00:00.334723166 203 3 2024-02-12 09:00:00.339505589 203 4 2024-02-12 09:00:00.339517572 241 5 2024-02-12 09:00:00.339585194 276 6 2024-02-12 09:00:00.339597200 276 7 2024-02-12 09:00:00.339679756 277 8 2024-02-12 09:00:00.339705796 312 9 2024-02-12 09:00:00.343967540 275 10 2024-02-12 09:00:00.393306026 275 Raw DATA data = { 'event_timestamp': ['2024-02-12 09:00:00.178941829', '2024-02-12 09:00:00.334673928', '2024-02-12 09:00:00.334723166', '2024-02-12 09:00:00.339505589', '2024-02-12 09:00:00.339517572', '2024-02-12 09:00:00.339585194', '2024-02-12 09:00:00.339597200', '2024-02-12 09:00:00.339679756', '2024-02-12 09:00:00.339705796', '2024-02-12 09:00:00.343967540'], 'ask_size_1_x': [165.0, 166.0, 203.0, 203.0, 241.0, 276.0, 276.0, 277.0, 312.0, 275.0] } df = pd.DataFrame(data) Attempt data['1ms'] = data['event_timestamp'] - pd.Timedelta(milliseconds=1) temp = data[['event_timestamp','ask_size_1']] temp_time_shift = data[['1ms','ask_size_1']] temp2 = pd.merge_asof( temp, temp_time_shift, left_on = 'event_timestamp', right_on = '1ms', direction='backward' ) EDIT Suggestion: import pandas as pd data = { 'event_timestamp': [ '2024-02-12 09:00:00.393306026', '2024-02-12 09:00:00.393347792', '2024-02-12 09:00:00.393351971', '2024-02-12 09:00:00.393355738', '2024-02-12 09:00:00.393389724', '2024-02-12 09:00:00.542780521', '2024-02-12 09:00:00.542841917', '2024-02-12 09:00:00.714845055', '2024-02-12 09:00:00.714908862', '2024-02-12 09:00:00.747016524' ], 'ask_size_1': [275.0, 275.0, 237.0, 237.0, 202.0, 202.0, 202.0, 262.0, 261.0, 263.0] } df = pd.DataFrame(data) df['event_timestamp'] = pd.to_datetime(df['event_timestamp']) # Convert 'event_timestamp' to datetime format tolerance = pd.Timedelta('1ms') df['out'] = pd.merge_asof(df['event_timestamp'].sub(tolerance), df[['event_timestamp', 'ask_size_1']], direction='forward', tolerance=tolerance )['ask_size_1'] The output is the below, you can see row 7 for instance, both the ask_size and out are the same. The out should be the last ask_size at least 1ms before row 7, which would be row 6, with a value of 202. Looking at it the yellow could technically be NaN since there is no value at a timestamp greater than 1ms before. event_timestamp ask_size_1 out 0 2024-02-12 09:00:00.393306026 275.0 275.0 1 2024-02-12 09:00:00.393347792 275.0 275.0 2 2024-02-12 09:00:00.393351971 237.0 275.0 3 2024-02-12 09:00:00.393355738 237.0 275.0 4 2024-02-12 09:00:00.393389724 202.0 275.0 5 2024-02-12 09:00:00.542780521 202.0 202.0 6 2024-02-12 09:00:00.542841917 202.0 202.0 7 2024-02-12 09:00:00.714845055 262.0 262.0 8 2024-02-12 09:00:00.714908862 261.0 262.0 9 2024-02-12 09:00:00.747016524 263.0 263.0 Expected output:
IIUC you can indeed use a merge_asof. You however need to adapt the parameters to perform the search in the correct order: delta = pd.Timedelta('1ms') df['out'] = pd.merge_asof(df['event_timestamp'].sub(delta), df, direction='backward')['ask_size_1'] NB. I'm assuming here that the timestamps are already sorted. If not you need to sort them before running the merge_asof. Output: event_timestamp ask_size_1 out 0 2024-02-12 09:00:00.393306026 271.0 NaN 1 2024-02-12 09:00:00.393347792 275.0 NaN 2 2024-02-12 09:00:00.393351971 237.0 NaN 3 2024-02-12 09:00:00.393355738 237.0 NaN 4 2024-02-12 09:00:00.393389724 202.0 NaN 5 2024-02-12 09:00:00.542780521 206.0 202.0 6 2024-02-12 09:00:00.542841917 51.0 202.0 7 2024-02-12 09:00:00.714845055 262.0 51.0 8 2024-02-12 09:00:00.714908862 261.0 51.0 9 2024-02-12 09:00:00.747016524 263.0 261.0 If you want to get the 271 for the yellow values, you could adapt is slightly: tmp = pd.concat([pd.DataFrame({'event_timestamp': [df['event_timestamp'].iloc[0]-delta], 'ask_size_1': [df['ask_size_1'].iloc[0]]}), df]) delta = pd.Timedelta('1ms') df['out'] = pd.merge_asof(df['event_timestamp'].sub(delta), tmp, direction='backward', allow_exact_matches=False)['ask_size_1'] Output: event_timestamp ask_size_1 out 0 2024-02-12 09:00:00.393306026 271.0 NaN 1 2024-02-12 09:00:00.393347792 275.0 271.0 2 2024-02-12 09:00:00.393351971 237.0 271.0 3 2024-02-12 09:00:00.393355738 237.0 271.0 4 2024-02-12 09:00:00.393389724 202.0 271.0 5 2024-02-12 09:00:00.542780521 206.0 202.0 6 2024-02-12 09:00:00.542841917 51.0 202.0 7 2024-02-12 09:00:00.714845055 262.0 51.0 8 2024-02-12 09:00:00.714908862 261.0 51.0 9 2024-02-12 09:00:00.747016524 263.0 261.0
4
1
78,107,054
2024-3-5
https://stackoverflow.com/questions/78107054/convert-time-zone-function-to-retrieve-the-values-based-on-the-timezone-specif
I'm attempting to determine the time based on the timezone specified in each row using Polars. Consider the following code snippet: import polars as pl from datetime import datetime from polars import col as c df = pl.DataFrame({ "time": [datetime(2023, 4, 3, 2), datetime(2023, 4, 4, 3), datetime(2023, 4, 5, 4)], "tzone": ["Asia/Tokyo", "America/Chicago", "Europe/Paris"] }).with_columns(c.time.dt.replace_time_zone("UTC")) df.with_columns( tokyo=c.time.dt.convert_time_zone("Asia/Tokyo").dt.hour(), chicago=c.time.dt.convert_time_zone("America/Chicago").dt.hour(), paris=c.time.dt.convert_time_zone("Europe/Paris").dt.hour() ) In this example, I've computed the time separately for each timezone to achieve the desired outcome, which is [11, 22, 6], corresponding to the hour of the time column according to the tzone timezone. Even then it is difficult to collect the information from the correct column. Unfortunately, the following simple attempt to dynamically pass the timezone from the tzone column directly into the convert_time_zone function does not work: df.with_columns(c.time.dt.convert_time_zone(c.tzone).dt.hour()) # TypeError: argument 'time_zone': 'Expr' object cannot be converted to 'PyString' What would be the most elegant approach to accomplish this task?
The only way to do this which fully works with lazy execution is to use the polars-xdt plugin: import polars as pl import polars_xdt as xdt from datetime import datetime df = pl.DataFrame( { "time": [ datetime(2023, 4, 3, 2), datetime(2023, 4, 4, 3), datetime(2023, 4, 5, 4), ], "tzone": ["Asia/Tokyo", "America/Chicago", "Europe/Paris"], } ).with_columns(pl.col("time").dt.replace_time_zone("UTC")) df.with_columns( result=xdt.to_local_datetime("time", pl.col("tzone")).dt.hour(), ) Result: Out[6]: shape: (3, 3) ┌─────────────────────────┬─────────────────┬────────┐ │ time ┆ tzone ┆ result │ │ --- ┆ --- ┆ --- │ │ datetime[μs, UTC] ┆ str ┆ i8 │ ╞═════════════════════════╪═════════════════╪════════╡ │ 2023-04-03 02:00:00 UTC ┆ Asia/Tokyo ┆ 11 │ │ 2023-04-04 03:00:00 UTC ┆ America/Chicago ┆ 22 │ │ 2023-04-05 04:00:00 UTC ┆ Europe/Paris ┆ 6 │ └─────────────────────────┴─────────────────┴────────┘ https://github.com/pola-rs/polars-xdt If you don't need lazy execution, then as other answers have suggested, you can iterate over the unique values of your time zone column ('tzone' here).
5
4
78,106,285
2024-3-5
https://stackoverflow.com/questions/78106285/polars-dataframe-rolling-sum-look-ahead
I want to calculate rolling_sum, but not over x rows above the current row, but over the x rows below the current row. My solution is to sort the dataframe with descending=True before applying the rolling_sum and sort back to descending=False. My solution: import polars as pl # Dummy dataset df = pl.DataFrame({ "Date": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5], "Close": [-1, 1, 2, 3, 4, 4, 3, 2, 1, -1], "Company": ["A", "A", "A","A", "A", "B", "B", "B", "B", "B"] }) # Solution using sort twice ( df .sort(by=["Company", "Date"], descending=[True, True]) .with_columns( pl.col("Close").rolling_sum(3).over("Company").alias("Cumsum_lead") ) .sort(by=["Company", "Date"], descending=[False, False]) ) Is there a better solution? With better I mean: more computational efficient and/or less code / easier to read Thanks! EDIT: I just thought of one other solution which is avoids sorting / reversing the column altogether: using shift ( df .with_columns( pl.col("Close") .rolling_sum(3) .shift(-2) .over("Company").alias("Cumsum_lead")) )
You can avoid sorting the rows and instead reverse the specific column twice using pl.Expr.reverse. ( df .with_columns( pl.col("Close") .reverse().rolling_sum(3).reverse() .over("Company").alias("Cumsum_lead") ) ) For readability, this could also be wrapped into a helper function. def rolling_sum_lead(expr: pl.Expr, window_size: int) -> pl.Expr: return expr.reverse().rolling_sum(window_size).reverse() ( df .with_columns( rolling_sum_lead(pl.col("Close"), 3).over("Company").alias("Cumsum_lead") ) ) Note. On my machine, this takes 124 µs ± 5.67 µs per loop in contrast to 205 µs ± 6.9 µs per loop for the solution using pl.DataFrame.sort.
4
2
78,105,407
2024-3-5
https://stackoverflow.com/questions/78105407/how-do-i-pass-in-a-parameter-name-into-a-function-as-an-argument-in-python
How do I pass in a parameter name into a function as an argument? def edit_features_retry(editType,data): AGOLlayer.edit_features(editType=data) edit_features_retry("adds", datalayer) error: TypeError: edit_features() got an unexpected keyword argument 'editType' If I do it this way as a string: def edit_features_retry(editType,data): AGOLlayer.edit_features(editType=data) edit_features_retry(adds, datalayer) error: NameError: name 'adds' is not defined
You can use dictionary unpacking. Here's a simple example: def f(a = 1, b = 2): print(f"{a = }, {b = }") f(b=5) # output: "a = 1, b = 5" kwargs = {"b": 5} f(**kwargs) # output: "a = 1, b = 5" Which can be applied to your scenario: def edit_features_retry(editType, data): AGOLlayer.edit_features(**{editType: data}) edit_features_retry("adds", datalayer)
2
3
78,106,064
2024-3-5
https://stackoverflow.com/questions/78106064/how-can-i-import-a-py-file-in-ibm-quantum-lab
I created a separate Python file for numerical calculations when implementing a quantum circuit. Now, I'm trying to use IBM Quantum Lab to apply this code to a circuit using Qiskit. %matplotlib inline import math import decomposition_2qubit as d2 import numpy as np from qiskit import QuantumCircuit, execute, Aer from qiskit.visualization import plot_histogram from qiskit.extensions import * from qiskit.quantum_info import Statevector matrix = (1/2)*np.array([ [1,1,1,1], [1,-1,1,-1], [1,1,-1,-1], [1,-1,-1,1]]) qc = QuantumCircuit(2,2) #qc.x(0) #qc.x(1) qc.barrier() d2.twoqubit_to_single(qc, matrix) ket = Statevector(qc) ket.draw('latex') qc.draw('mpl') However, when I try to import the decomposition_2qubit.py file, I encounter this error. How can I resolve this? Traceback (most recent call last): Cell In[1], line 4 import decomposition_2qubit as d2 ModuleNotFoundError: No module named 'decomposition_2qubit' Use %tb to get the full traceback. I have the 'decomposition_2qubit.py' file located in the same folder as the current Jupyter Notebook file I am using. Additionally, since IBM Quantum Lab appears to use Jupyter Notebook as its base, I tried searching for solutions when encountering such errors in Jupyter Notebook, but unfortunately, I couldn't resolve the issue.
You are probably facing an issue with the file path. As suggested here, you can either specify your own import path explicitly or just prepend path modifiers like .. and/or /.
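As an illustration (not from the original answer), one way to make the sibling file importable regardless of how the kernel resolves paths is to put its folder on sys.path before importing; this sketch assumes the notebook's current working directory is the folder that also contains decomposition_2qubit.py, as described in the question:

import os
import sys

# Make sure the folder holding decomposition_2qubit.py is on the module
# search path before the import is attempted.
sys.path.insert(0, os.getcwd())

import decomposition_2qubit as d2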
3
2
78,105,399
2024-3-5
https://stackoverflow.com/questions/78105399/what-is-the-best-way-to-filter-groups-by-two-lambda-conditions-and-create-a-new
This is my DataFrame: import pandas as pd df = pd.DataFrame( { 'a': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'z', 'z', 'z', 'p', 'p', 'p', 'p'], 'b': [1, -1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1] } ) And this the expected output. I want to create column c: a b c 0 x 1 first 1 x -1 first 2 x 1 first 3 x 1 first 4 y -1 second 5 y 1 second 6 y 1 second 7 y -1 second 11 p 1 first 12 p 1 first 13 p 1 first 14 p 1 first Groups are defined by column a. I want to filter df and choose groups that either their first b is 1 OR their second b is 1. I did this by this code: df1 = df.groupby('a').filter(lambda x: (x.b.iloc[0] == 1) | (x.b.iloc[1] == 1)) And for creating column c for df1, again groups should be defined by a and then if for each group first b is 1 then c is first and if the second b is 1 then c is second. Note that for group p, both first and second b is 1, for these groups I want c to be first. Maybe the way that I approach the issue is totally wrong.
A generic method that works with any number of positions for the first 1: d = {0: 'first', 1: 'second'} s = (df.groupby('a')['b'] .transform(lambda g: g.reset_index()[g.values==1] .first_valid_index()) .replace(d) ) out = df.assign(c=s).dropna(subset=['c']) Notes: if you remove the replace step you will get an integer in c if you use map in place of replace you can ignore the positions that are not defined as a dictionary key Output: a b c 0 x 1 first 1 x -1 first 2 x 1 first 3 x 1 first 4 y -1 second 5 y 1 second 6 y 1 second 7 y -1 second 11 p 1 first 12 p 1 first 13 p 1 first 14 p 1 first Example from comments: df = pd.DataFrame({'a': ['x', 'x', 'x', 'x', 'y', 'y', 'y', 'y', 'z', 'z', 'z', 'p', 'p', 'p', 'p'], 'b': [1, -1, 1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1]}) d = {0: 'first', 1: 'second'} s = (df.groupby('a')['b'] .transform(lambda g: g.reset_index()[g.values==1] .first_valid_index()) .map(d) ) out = df.assign(c=s).dropna(subset=['c']) a b c 0 x 1 first 1 x -1 first 2 x 1 first 3 x 1 first 4 y -1 second 5 y 1 second 6 y 1 second 7 y -1 second 11 p 1 first 12 p 1 first 13 p 1 first 14 p 1 first You can also only filter the rows with: m1 = df.groupby('a').cumcount().le(1) m2 = df['b'].eq(1) out = df.loc[df['a'].isin(df.loc[m1&m2, 'a'])]
6
2
78,105,655
2024-3-5
https://stackoverflow.com/questions/78105655/django-rest-framework-django-channels-errno-111-connect-call-failed-127
I am programming a project with Django (4.2.6), Django Rest Framework (3.14.0), Channels (4.0.0) and Channels-Redis(4.2.0), which acts as a backend for a mobile application. So far, I haven't had any problems with the Rest API connections and the endpoints I developed. I try to test the connection through websockets, but I get the following error: Exception inside application: Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('::1', 6379, 0, 0), [Errno 111] Connect call failed ('127.0.0.1', 6379). Traceback (most recent call last): File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/connection.py", line 274, in connect await self.retry.call_with_retry( File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/retry.py", line 59, in call_with_retry return await do() File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/connection.py", line 681, in _connect reader, writer = await asyncio.open_connection( File "/usr/lib/python3.10/asyncio/streams.py", line 48, in open_connection transport, _ = await loop.create_connection( File "/usr/lib/python3.10/asyncio/base_events.py", line 1084, in create_connection raise OSError('Multiple exceptions: {}'.format( OSError: Multiple exceptions: [Errno 111] Connect call failed ('::1', 6379, 0, 0), [Errno 111] Connect call failed ('127.0.0.1', 6379) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/django/contrib/staticfiles/handlers.py", line 101, in __call__ return await self.application(scope, receive, send) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/routing.py", line 62, in __call__ return await application(scope, receive, send) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/routing.py", line 116, in __call__ return await application( File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/consumer.py", line 94, in app return await consumer(scope, receive, send) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/consumer.py", line 58, in __call__ await await_many_dispatch( File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/utils.py", line 50, in await_many_dispatch await dispatch(result) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/consumer.py", line 73, in dispatch await handler(message) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/djangochannelsrestframework/consumers.py", line 84, in websocket_connect await super().websocket_connect(message) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels/generic/websocket.py", line 173, in websocket_connect await self.connect() File "/home/tecnicus/Documentos/Repositorios/MyProyect/MyApp/api/consumers.py", line 16, in connect await self.model_change.subscribe() File 
"/home/tecnicus/Documentos/Repositorios/virtualenvs/Selene_Backend_Copy_Test/lib/python3.10/site-packages/djangochannelsrestframework/observer/base_observer.py", line 144, in subscribe await consumer.add_group(group_name) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/djangochannelsrestframework/consumers.py", line 102, in add_group await self.channel_layer.group_add(name, self.channel_name) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/channels_redis/core.py", line 504, in group_add await connection.zadd(group_key, {channel: time.time()}) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/client.py", line 605, in execute_command conn = self.connection or await pool.get_connection(command_name, **options) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/connection.py", line 1076, in get_connection await self.ensure_connection(connection) File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/connection.py", line 1109, in ensure_connection await connection.connect() File "/home/tecnicus/Documentos/Repositorios/virtualenvs/virtualenv1/lib/python3.10/site-packages/redis/asyncio/connection.py", line 282, in connect raise ConnectionError(self._error_message(e)) redis.exceptions.ConnectionError: Error connecting to localhost:6379. Multiple exceptions: [Errno 111] Connect call failed ('::1', 6379, 0, 0), [Errno 111] Connect call failed ('127.0.0.1', 6379). WebSocket DISCONNECT /ws/ [127.0.0.1:33284] As I am a newbie when it comes to the 3 frameworks mentioned, I tried to follow the documentation of them as best I could, plus the tutorials that I found about it. 
Settings.py INSTALLED_APPS = [ 'daphne', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'channels', 'rest_framework', 'rest_framework_simplejwt', 'rest_framework_simplejwt.token_blacklist', 'djangochannelsrestframework', 'django_cleanup.apps.CleanupConfig', 'drf_yasg', 'django_filters', 'rangefilter', # Aplicaciones 'MyApp', ] ASGI_APPLICATION = 'SeleneServer.asgi.application' CHANNEL_LAYERS = { "default": { "BACKEND": "channels_redis.core.RedisChannelLayer", "CONFIG": { "hosts": [("localhost", 6379)], }, }, } REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework_simplejwt.authentication.JWTAuthentication', 'rest_framework.authentication.SessionAuthentication'), 'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend'] } SIMPLE_JWT = { "ACCESS_TOKEN_LIFETIME": timedelta(hours=12), "REFRESH_TOKEN_LIFETIME": timedelta(days=7), 'UPDATE_LAST_LOGIN': True } Asgi.py import os from channels.routing import ProtocolTypeRouter, URLRouter from channels.auth import AuthMiddlewareStack from django.core.asgi import get_asgi_application import agenda.api.router os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'SeleneServer.settings') django_asgi_app = get_asgi_application() application = ProtocolTypeRouter({ 'http': django_asgi_app, 'websocket': AuthMiddlewareStack( URLRouter( agenda.api.router.websocket_urlpatterns ) ) }) Router.py from django.urls import re_path from MyApp.api.consumers import MyConsumer websocket_urlpatterns = [ re_path(r"^ws/", MyConsumer.as_asgi()), ] Consumers.py from djangochannelsrestframework import permissions from djangochannelsrestframework.generics import GenericAsyncAPIConsumer from djangochannelsrestframework.mixins import ListModelMixin from djangochannelsrestframework.observer import model_observer from MyApp.models import MyModel from MyApp.api.serializers import MyModelSerializer class MyConsumer(ListModelMixin, GenericAsyncAPIConsumer): queryset = MyModel.objects.all() serializer_class = MyModelSerializer permissions = (permissions.AllowAny,) async def connect(self, **kwargs): await self.model_change.subscribe() await super().connect() @model_observer(MyModel) async def model_change(self, message, observer=None, **kwargs): await self.send_json(message) @model_change.serializer def model_serialize(self, instance, action, **kwargs): return dict(data=MyModelSerializer(instance=instance).data, action=action.value) I am not using docker or anything similar, since I only want the project to work locally on my computer first, before uploading it to the Vercel server. I tried to try running the project without Daphne, because I heard that for the latest versions of Channels it was not a required dependency, but the result was that when testing the websocket, the server threw an error that the address to which the request was made cannot be found: Not Found: /ws/ [05/Mar/2024 02:31:15] "GET /ws/ HTTP/1.1" 404 3003). I also tried to remove the AuthMiddlewareStack from asgi.py to test a connection that does not require the user to be logged in, but the results are the same, with or without daphne. 
I don't know if you recommend another way to check the websocket connection, since I have used Browser WebSocket Client for Chrome, and websocket-client for python, always at ws://127.0.0.1:8000/ws/, since the frontend, which will be developed in React Native and will not be developed yet, but I must test if the websocket system will work, even if it is at a very basic.
Channels needs a Redis backend (the channel layer configured in CHANNEL_LAYERS) to be able to operate. You need to install and start a Redis server listening on localhost:6379 to resolve the error.
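As an illustration (this snippet is not part of the original answer), once a Redis server is running on the localhost:6379 host/port configured in CHANNEL_LAYERS above, a quick sanity check along these lines can confirm both raw connectivity and the channel layer itself; it assumes the redis and channels-redis packages from the question are installed:

import asyncio

from redis.asyncio import Redis
from channels_redis.core import RedisChannelLayer

async def main():
    # 1) Raw connectivity: PING returns True when redis-server is reachable.
    redis = Redis(host="localhost", port=6379)
    print("redis ping:", await redis.ping())
    await redis.close()

    # 2) Round trip through the same backend that Channels will use.
    layer = RedisChannelLayer(hosts=[("localhost", 6379)])
    await layer.send("probe", {"type": "hello"})
    print("channel layer message:", await layer.receive("probe"))

asyncio.run(main())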
2
2
78,105,003
2024-3-5
https://stackoverflow.com/questions/78105003/401-unauthorized-error-when-making-post-request-to-google-apps-script-web-app-wi
I am trying to send a POST request to a Google Apps Script web app using a Python script. While accessing the web app's URL in a web browser works fine with GET requests, sending a POST request with Python results in a 401 Unauthorized error. The authentication process is as follows: from google_auth_oauthlib.flow import InstalledAppFlow SCOPES = ['openid', 'https://www.googleapis.com/auth/script.projects', 'https://www.googleapis.com/auth/script.webapp.deploy'] CLIENT_SECRETS_FILE = 'client_secret.json' flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES, redirect_uri='http://localhost:80/') credentials = flow.run_local_server(port=80) headers = { 'Authorization': f'Bearer {credentials.token}', 'Content-Type': 'application/json', } # Checking token validity if credentials.valid: print("Token is valid.") else: print("Token is not valid.") # Checking if the token has expired if credentials.expired: print("Token has expired.") else: print("Token is still valid.") print("Scopes granted to the token:", credentials.scopes) print("Access token:", credentials.token) try: response = requests.post(url, headers=headers, json={"text": "hello from python with OAuth"}) response.text response.raise_for_status() # 400 or 500 Error except requests.exceptions.HTTPError as err: print(f"HTTP error occurred: {err}") # HTTP Error except Exception as err: print(f"An error occurred: {err}") # Other Error And the appsscript.json settings for the Apps Script web app are as follows: { "timeZone": "Asia/Seoul", "dependencies": {}, "exceptionLogging": "STACKDRIVER", "runtimeVersion": "V8", "webapp": { "executeAs": "USER_ACCESSING", "access": "DOMAIN" } } The token is valid, and there are no issues with the granted scopes. According to the Google Cloud Console, the user has owner permissions, and the Apps Script API is enabled. Despite these settings and conditions, I can't understand why the POST request is failing. The web app's doPost function is configured to handle JSON type POST data. I would appreciate any advice on what I might be missing. Thank you. What I Tried: I sent a POST request to a Google Apps Script web app using a Python script. I authenticated using OAuth 2.0, included the access token in the header, and sent the request. The doPost function in Google Apps Script was set up to receive and process JSON data. What I Expected: Having successfully completed the authentication process and obtained a valid access token, I expected the web app to receive the POST request and respond appropriately. What Actually Happened: While accessing the web app via a web browser with a GET request works fine, sending a POST request via the Python script results in a 401 Unauthorized error. Consequently, the web app is not processing the request.
About the status code 401, it means The HyperText Transfer Protocol (HTTP) 401 Unauthorized response status code indicates that the client request has not been completed because it lacks valid authentication credentials for the requested resource.. Ref In order to request Web Apps with the access token, in the current stage, the scope of Drive API is required to be included. I guessed that when I saw your showing script, this might be due to the reason for your current issue. So, how about the following modification? From: SCOPES = ['openid', 'https://www.googleapis.com/auth/script.projects', 'https://www.googleapis.com/auth/script.webapp.deploy'] To: SCOPES = ["https://www.googleapis.com/auth/drive"] or SCOPES = ["https://www.googleapis.com/auth/drive.readonly"] or, if you are required to use the scopes of 'openid', 'https://www.googleapis.com/auth/script.projects', 'https://www.googleapis.com/auth/script.webapp.deploy', please modify as follows. SCOPES = ["https://www.googleapis.com/auth/drive", 'openid', 'https://www.googleapis.com/auth/script.projects', 'https://www.googleapis.com/auth/script.webapp.deploy'] or SCOPES = ["https://www.googleapis.com/auth/drive.readonly", 'openid', 'https://www.googleapis.com/auth/script.projects', 'https://www.googleapis.com/auth/script.webapp.deploy'] Note: Unfortunately, I have no information about your Google Apps Script. So, this answer is for modifying your showing Python script. And, it supposes that your Google Apps Script works fine. Please be careful about this. Reference: Taking advantage of Web Apps with Google Apps Script (Author: me)
3
1
78,104,696
2024-3-5
https://stackoverflow.com/questions/78104696/new-column-sampled-from-list-based-on-column-value
values = [1,2,3,2,3,1] colors = ['r','g','b'] expected_output = ['r', 'g', 'b', 'g', 'b', 'r'] # how to create this in pandas? df = pd.DataFrame({'values': values}) df['colors'] = expected_output I want to make a new column in my dataframe where the colors are selected based on values in an existing column. I remember doing this in xarray with a vectorised indexing trick, but I can't remember if the same thing is possible in pandas. It feels like it should be a basic indexing task. The current answers are a nice start, thanks! They take a bit too much advantage of the numerical nature of "values" though. I'd rather something generic that would also work if say values = ['a', 'b', 'c', 'b', 'c', 'a'] I guess the "map" method probably still works.
Code use numpy indexing import numpy as np df['colors'] = np.array(colors)[df['values'] - 1] df values color 0 1 r 1 2 g 2 3 b 3 2 g 4 3 b 5 1 r If you want to solve this problem using only Pandas, use map function. (with @Onyambu comment) m = dict(enumerate(colors, 1)) df['colors'] = df['values'].map(m)
7
5
78,103,638
2024-3-4
https://stackoverflow.com/questions/78103638/sklearn-linearregression-dont-require-iterations-and-learning-rate-as-paramet
As far as I know the cost is minimized using the Gradient Descent algorithm by updating the weights (repeating until convergence). In the case of linear regression the weights are m: slope and c: intercept (constant value). import numpy as np from sklearn.linear_model import LinearRegression x= np.asarray([(i+np.random.randint(1,7)) for i in range(1,31)]).reshape(-1,1) y= np.dot([3],x.T) + 5 reg = LinearRegression() reg.fit(x,y) I have used the sklearn library, but here we're not passing the number of iterations or the learning rate as input at initialization time or when calling reg.fit(). Why doesn't sklearn's linear regression ask for iterations and a learning rate? Are they set to some default values, or does it use some other method?
Just to add a demonstration to Muhammed's answer (you should accept it, btw), here is an example import numpy as np np.random.seed(12) # Just to have a reproducible example from sklearn.linear_model import LinearRegression X=np.random.normal(0,1,(20,10)) # Or any array of X you want. Just an MRE Y=np.random.normal(0,1,(20,)) # Learning with LinearRegression, without intercept (so learning L, with Y=LX, L being an array of coefficients) reg=LinearRegression(fit_intercept=False) reg.fit(X,Y) print(reg.coef_) # The coefficients. # With my random example and seed, shows #[-0.06999151 -0.0586993 0.77203288 0.11928812 0.05656448 -0.37281412 # -0.35447307 0.06957882 0.26701851 0.06950227] # Meaning that the model predicts Y=-0.06999151*X₀ -0.0586993*X₁ + ... # For example, prediction for a given X Xtest=np.random.randint(-5,5, (1,10)) # A single sample of 10 features reg.predict(Xtest) # returns 4.49749641 # Which is simply sum(Xtest[0,i]*reg.coef_[i] for i in range(10)) # Or, using a linear algebra operation reg.coef_@Xtest[0] # Now, Moore-Penrose's version Coef = np.linalg.inv(X.T@X)@X.T@Y print(Coef) #[-0.06999151 -0.0586993 0.77203288 0.11928812 0.05656448 -0.37281412 # -0.35447307 0.06957882 0.26701851 0.06950227] # See, same coefficients! Not "approximately the same". But the same... # including the non-significant decimal places, where you would expect some # numerical error. Showing that it is really the same computation done, not an equivalent one # prediction is likewise Coef@Xtest[0] So, no mystery here. LinearRegression is just a Moore-Penrose pseudo-inverse. Aka a least-squares solution. Aka an orthogonal projection (same thing: the point P of subspace Vec(X₁,X₂,...) for which the distance ‖X-P‖ is minimum is also the orthogonal projection of X onto subspace Vec(X₁,X₂,...)). And even if you have no recollection of notions such as subspace, Vec, Moore-Penrose, ... (I say "recollection" because, probably, if you are doing this kind of stuff, you had some math lessons at university/college at some point; and that is something that is taught in any scientific curriculum in the world... but that most people quickly forget later), at least, you can see that it is not an iterative process. Just a formula (XᵀX)⁻¹XᵀY. I've simplified my example here, because I've removed the intercept. But the intercept is just the coefficient applied to an extra "1" vector. X1=np.ones((20,11)) X1[:,:10]=X CoefI = np.linalg.inv(X1.T@X1)@X1.T@Y # Returns # array([-0.1068548 , -0.09027332, 0.73712907, 0.1136123 , 0.0904737 , # -0.36593051, -0.38649945, 0.02849317, 0.18063291, 0.05866195, # -0.17597287]) regI=LinearRegression() regI.fit(X,Y) regI.coef_ #array([-0.1068548 , -0.09027332, 0.73712907, 0.1136123 , 0.0904737 , # -0.36593051, -0.38649945, 0.02849317, 0.18063291, 0.05866195]) # aka the first 10 coefficients (the ones applied to the 10 "real" columns of X) regI.intercept_ #-0.17597287204667314 # aka the 11th coefficient of Moore-Penrose's inverse. That is the one # applied to the "all 1" vector. # Comparison of predictions is almost as easy regI.predict(Xtest) CoefI[:10]@Xtest[0]+CoefI[10] # both return the same 4.633604110000001 So, even with an intercept, it is still just a linear algebra formula, not an iterative process. Maybe sklearn is more efficient. But that is not obvious with normal-size datasets (with small examples, like my 20×10, direct Moore-Penrose is 10 times faster. But that is probably just because of the overhead of class initialization.
But even with big datasets like 2000×1000, still not huge though, Moore-Penrose is still 3 times faster. Maybe it is because sklearn ensures some better conditioning. Or maybe it works better with way bigger datasets with sparse values. I don't know). From a math perspective it does nothing more than a Moore-Penrose inverse. From an implementation perspective, it is not easy to exhibit examples of what it does better (it is not faster, and I could not generate examples where it is more stable).
2
3
78,102,561
2024-3-4
https://stackoverflow.com/questions/78102561/is-there-a-way-to-have-tuples-woking-fine-as-index-in-pandas
I would like to use a MultiIndex in Pandas where at every level I have a nested tuple. I know I could in principle unpack the thing but this would be less legible and annoying. In general, the elements of the tuple (a class name and some parameters) have meaning only together, I would like to make it harder to end up with nonsensical pairs, the tuples have different lengths, and I'd like to use MultiIndex.from_product. Everything works fine when creating the DataFrame and accessing values, but when writing I get results I wasn't expecting. In a simple example, the following code: import pandas as pd index=pd.MultiIndex.from_arrays([[("foo","spam"),("foo","spam")],[("bar","egg"),("bar","egg")],[("baz","bacon"),("pam","bacon")]]) this_index = (("foo","spam"),("bar","egg"),("baz","bacon")) df = pd.DataFrame(index=index, columns=["value"]) print(df) print(df.loc[this_index]) df.loc[this_index]=0 # df.loc[this_index,"value"]=0 print(df) First prints the table I expected (three tuples as index and NaNs in the column value), then prints the correctly retrieved value NaN, but at the last line shows two extra columns named "bar" and "egg" both set to 0: value bar egg (foo, spam) (bar, egg) (baz, bacon) 0 0.0 0.0 (pam, bacon) NaN NaN NaN In this case, using the commented line for the assignment gives the expected result. However, in my case, I need "spam", "egg", and "bacon" to be tuples as well. If I change lines 2 and 3 in the code above putting: index=pd.MultiIndex.from_arrays([[("foo",("spam",)),("foo",("spam",))],[("bar",("egg",)),("bar",("egg",))],[("baz",("bacon",)),("pam",("bacon",))]]) this_index = (("foo",("spam",)),("bar",("egg",)),("baz",("bacon",))) I have again the expected behaviour with the first two prints, the third gives (now somehow expected): value bar (egg,) (foo, (spam,)) (bar, (egg,)) (baz, (bacon,)) 0 0.0 0.0 (pam, (bacon,)) NaN NaN NaN But trying the same workaround as above gives: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (3, 2) + inhomogeneous part. And I couldn't find any way to adapt the trick. The best workaround I found at the moment is to use str() on the tuples and then parse again the content if needed, but I feel like there should be a better way. The only trace I found here is an unanswered comment to this answer.
If I understand correctly, your issue is with this assignments: index=pd.MultiIndex.from_arrays([[("foo",("spam",)),("foo",("spam",))],[("bar",("egg",)),("bar",("egg",))],[("baz",("bacon",)),("pam",("bacon",))]]) this_index = (("foo",("spam",)),("bar",("egg",)),("baz",("bacon",))) df = pd.DataFrame(index=index, columns=["value"]) df.loc[this_index, 'value']=0 Which you can solve using a list for the columns or for the index: df.loc[this_index, ['value']] = 0 # or df.loc[[this_index], 'value'] = 0 Output: value (foo, (spam,)) (bar, (egg,)) (baz, (bacon,)) 0 (pam, (bacon,)) NaN
2
1
78,101,392
2024-3-4
https://stackoverflow.com/questions/78101392/switch-row-column-on-a-chartsheet-using-openpyxl
I have a large Excel sheet with data. Due to several reasons, I have data that is structured in a transposed way: Histogram bin 0 bin 1 bin 2 bin 3 bin 4 bin 5 test 1 0 1 5 2 1 0 test 2 0 0 1 7 2 0 When I plot this data, I get 5 series with each 2 values in my plot. What I would like is 2 Series with each 5 points in the plot (displaying the histogram). This corresponds to pressing the "Switch Row/Column" button in Excel. Unfortunately it is not possible to manually press this button for each chartsheet as there are a lot of them. How could I achieve this effect within python? Any help would be greatly appreciated. I have made a simplified version of my code. It uses the same table as the one shown above. I realize using MultiIndex is not required in this case, in my full code however, it is required. import pandas as pd from openpyxl import load_workbook from openpyxl.chart import LineChart, Reference excel_file = 'test.xlsx' # create and write data excel_tests_data = [[0, 1, 5, 2, 1, 0], [0, 0, 1, 7, 2, 0]] df = pd.DataFrame(excel_tests_data) df.columns=pd.MultiIndex.from_tuples([('bin '+str(i), ) for i in range(6)]) df.to_excel(excel_file, startrow=0, sheet_name='test') # open workbook workbook = load_workbook(excel_file) worksheet = workbook.worksheets[0] # create chartsheet chartsheet = workbook.create_chartsheet() chartsheet.title = "Testsheet" chart = LineChart() data = Reference(worksheet, min_col = 2, min_row = 3, max_row = 4, max_col = 7) chart.title = "test" chart.add_data(data) chartsheet.add_chart(chart) # save and close workbook workbook.save(excel_file) workbook.close()
If you are referring to the Switch Row/Column button, you can simply turn on the from_rows parameter when adding the data, i.e. replace chart.add_data(data) with chart.add_data(data, from_rows=True) Output (test.xlsx):
2
2
78,101,029
2024-3-4
https://stackoverflow.com/questions/78101029/multidimensional-indexing-in-numpy-with-tuples-as-indices-for-certain-axes
I have a 3D numpy array which is really an array of matrices. I want to set the diagonals to zero by using the following approach. When I print the tuple and din it is exactly the same, but it returns different views of the array. m = np.random.normal(0, 0.2, (10, 4, 4)) din = np.diag_indices(m.shape[1], ndim = 2) m[:, np.array([0,1,2,3]), np.array([0,1,2,3])]) # It returns an array of diagonals as expected m[:, tuple(din)] # It returns the array What did I miss here?
As described in comments, you need to unpack the indices. Since python 3.11, you can use: m[:, *din] Output: array([[ 8.61622699e-02, -1.46919069e-01, -9.37771599e-02, 1.94698315e-03], [ 1.60933774e-01, -2.77077615e-02, -1.74135776e-01, -1.72223723e-01], [-1.54804225e-01, 1.08146714e-01, 2.51844877e-01, -2.91622737e-02], [ 1.22213756e-02, 1.59703456e-02, -1.41757563e-01, -5.02470362e-02], [ 1.49296012e-01, -9.60208199e-03, -4.82484338e-01, 1.58012139e-02], [-3.09847219e-01, -1.13959996e-01, -6.71019475e-01, 3.17810448e-01], [ 2.04860543e-04, -2.16311908e-01, 1.39098046e-01, -1.40102017e-01], [-5.82402679e-02, 2.55831587e-01, -3.74597159e-01, 1.23205316e-01], [-1.23942861e-01, 1.40365188e-02, -2.16884333e-02, -2.08800511e-02], [ 1.02934324e-01, -1.81953630e-01, 2.35600757e-01, -2.29315601e-01]]) This syntax is however not supported in older python versions, in this case you can build a single tuple: m[tuple((slice(None), *din))] # or m[(slice(None), *din)]
2
3
78,100,258
2024-3-4
https://stackoverflow.com/questions/78100258/as-index-false-groupby-doesnt-work-with-count
When I run groupby on a dataframe with as_index set to False, count seems to not work at all. For example, import pandas as pd word_list = ['a', 'b', 'c', 'a', 'c', 'c', 'b', 'a', 'c'] df = pd.DataFrame(word_list, columns=['word']) as_counts = df.groupby('word', as_index=False).count() print(as_counts) word 0 a 1 b 2 c If you try the same with .size() as in as_counts = df.groupby('word', as_index=False).size() it works as expected word size 0 a 3 1 b 2 2 c 4 What difference between the two functions explains this difference in output? This post is asking a different question entirely as, in this case, the frame is neither MultiIndex nor seeking to maintain non-unique indices. Equally, none of the answers in that post address the problem of this question.
groupby.count is designed to work on the columns other than the grouping columns while groupby.size is designed to output a single new column (named size), irrespective of the number of existing columns. This makes sense since count depends on the NA status, while size is always constant within a group (the number of rows, irrespective of the values). This can easily be seen by adding several dummy columns: df.assign(dummy1=1, dummy2=None).groupby('word', as_index=False).count() # as many output columns as the input columns (minus grouper) # only the non NA values are counted word dummy1 dummy2 0 a 3 0 1 b 2 0 2 c 4 0 VS: df.assign(dummy1=1, dummy2=None).groupby('word', as_index=False).size() # grouper columns + ONE "size" column word size 0 a 3 1 b 2 2 c 4 This is actually explicitly stated in the documentation: DataFrameGroupBy.size() Returns: DataFrame or Series Number of rows in each group as a Series if as_index is True or a DataFrame if as_index is False. So back to your original example, groupby.counts outputs nothing since there are no columns to use (the grouping columns are excluded from the aggregation unless specifically sliced from the DataFrameGroupBy object). workaround As noted elsewhere, the workaround if you want to count one of the grouping columns is to explicitly slice it: df.groupby('word', as_index=False)['word'].count() word 0 3 1 2 2 4 Amusingly, you can slice it more than once and get as many columns in the output: df.groupby('word', as_index=False)[['word', 'word']].count() word word 0 3 3 1 2 2 2 4 4
2
3
78,089,594
2024-3-1
https://stackoverflow.com/questions/78089594/python-reading-long-lines-with-asyncio-streamreader-readline
The asyncio version of readline() (1) allows reading a single line from a stream asynchronously. However, if it encounters a line that is longer than a limit, it will raise an exception (2). It is unclear how to resume reading after such an exception is raised. I would like a function similar to readline(), that simply discards parts of lines that exceed the limit, and continues reading until the stream ends. Does such a method exist? If not, how to write one? 1: https://docs.python.org/3/library/asyncio-stream.html#streamreader 2: https://github.com/python/cpython/blob/3.12/Lib/asyncio/streams.py#L549
The standard library implementation of asyncio.StreamReader.readline is almost what you want. That function raises a ValueError in the event of a buffer overrun, but it first removes a chunk of data from the buffer and discards it. If you catch the ValueError and keep calling readline, the stream would keep going. But you would lose the first chunk of data, which is not what you want. To achieve what you want you can call a slightly lower-level function StreamReader.readuntil(separator). If you call reader.readuntil('\n') it's equivalent to reader.readline. On an overrun this function raises a asyncio.LimitOverrunError, which has an instance variable named consumed. This is an integer representing the current length of the buffer contents. There are a couple of strange details here. Despite the variable's name, these bytes have not been "consumed" - they are still in the buffer. In order to keep the stream going you have to consume them yourself, which you can do by calling the function StreamReader.readexactly. The second strange thing is that the value of LimitOverrunError.consumed can be greater than the value of the keyword argument limit= that you originally passed to the open_connection constructor. That argument was supposed to set the size of the input buffer but it appears that it merely sets the threshold for raising the LimitOverrunError. Consequently, the buffer might contain the separator already, but fortunately the character count in LimitOverrunError.consumed does not include the separator. So a call to readexactly will not consume the separator. You can achieve exactly what you want by calling readuntil, with added logic to handle LimitOverrunError. In the exception handler you save the first chunk of data and continue to read from the stream. When you finally encounter the end of the line you return the first chunk that you saved. Here is a full working demo program. To test the code I set limit=512 in asyncio.open_connection. The server transmits a line that's longer than this followed by some shorter lines. The client reads each of the lines without any exception being raised. The function you are asking for is readline_no_limit. import asyncio _PORT = 54242 MY_LIMIT = 512 async def readline_no_limit(reader): """ Return a bytes object. If there has not been a buffer overrun the returned value will end with include the line terminator, otherwise not. 
The length of the returned value may be greater than the limit specified in the original call to open_connection.""" discard = False first_chunk = b'' while True: try: chunk = await reader.readuntil(b'\n') if not discard: return chunk break except asyncio.LimitOverrunError as e: print(f"Overrun detected, buffer length now={e.consumed}") chunk = await reader.readexactly(e.consumed) if not discard: first_chunk = chunk discard = True return first_chunk async def client(): await asyncio.sleep(1.0) reader, writer = await asyncio.open_connection(host='localhost', port=_PORT, limit=MY_LIMIT) writer.write(b"Hello\n") while True: line = await readline_no_limit(reader) print(f"Received {len(line)} bytes, first 40: {line[:40]}") if line == b"End\n": break writer.write(b"Quit\n") async def got_cnx(reader, writer): while True: msg = await reader.readline() if msg == b"Quit\n": break if msg != b"Hello\n": continue long_line = ' '.join([format(x, "x") for x in range(1000)]) writer.write(bytes(long_line, encoding='utf8')) writer.write(b"\n") writer.write(b"Short line\n") writer.write(b"End\n") async def main(): def shutdown(_future): print("\nClient quit, server shutdown") server.close() client_task = asyncio.create_task(client()) server = await asyncio.start_server(got_cnx, host='localhost', port=_PORT, start_serving = False) client_task.add_done_callback(shutdown) try: await server.serve_forever() except asyncio.CancelledError: print("Server closed normally") if __name__ == "__main__": asyncio.run(main()) Output: Overrun detected, buffer length now=3727 Received 3727 bytes, first 40: b'0 1 2 3 4 5 6 7 8 9 a b c d e f 10 11 12' Received 11 bytes, first 40: b'Short line\n' Received 4 bytes, first 40: b'End\n' Client quit, server shutdown Server closed normally Python 3.11, Ubuntu
4
2
78,095,190
2024-3-3
https://stackoverflow.com/questions/78095190/how-to-run-a-background-task-when-using-websockets-in-fastapi-starlette
My ultimate goal is to write code that only triggers other servers when calling an endpoint and waits for data from a specific channel in Redis to come in. I don't want to know external server's business logic is done due to latency. The call_external_server background task function below will notify the external server to send data to Redis (pub/sub). However, it doesn't execute. Here is my code: async def call_external_server(channel, text): print("call_external_server start") async with aiohttp.ClientSession() as session: async with session.get(f"http://localhost:9000/pub?channel={channel}&text={text}") as resp: print(resp) print("call_external_server finished") return {"response": "external_server is done"} @app.websocket("/ws") async def websocket_endpoint(channel: str, websocket: WebSocket, background_task: BackgroundTasks): await websocket.accept() client_info = dict(websocket.headers) text = client_info.get("text") redis_reader: redis.client.PubSub = await get_redis_pubsub() await redis_reader.subscribe(channel) # Problem is Here background_task.add_task(call_external_server, channel, text) # Background task Doesn't work properly try: while True: message = await redis_reader.get_message(ignore_subscribe_messages=True) if message is not None: decoded_msg = message["data"].decode() if decoded_msg == STOPWORD: print("(Reader) STOP") break await websocket.send_text(decoded_msg) except Exception as e: print(e) await websocket.close() return await websocket.close() return
A BackgroundTask is executed after returning a response to a client's request (not on ongoing websocket connections)β€”see this answer, as well as this answer and this related comment for more details on Background Tasks. As explained by @tiangolo (the creator of FastAPI): Background Tasks internally depend on a Request, and they are executed after returning the request to the client. In the case of a WebSocket, as the WebSocket is not really "finished" but is an ongoing connection, it wouldn't be a background task. You could, however, have functions executed in the background, using one of the options described in this answer. If your background task is an async def function, you should rather use Option 3 (see the linked answer above for more details), i.e., using asyncio.create_task(). Have a look at this related answer as well. Example async def call_external_server(channel, text): pass @app.websocket("/ws") async def websocket_endpoint(websocket: WebSocket): await websocket.accept() # ... asyncio.create_task(call_external_server(channel, text)) I would also suggest having a look at this answer, as well as this answer and this answer on how to spawn an HTTP Client once at application startup and reuse it every time is needed, instead of creating a new connection every time the endpoint is called (as shown in the example provided in your question).
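To make the suggestion concrete, here is a rough self-contained sketch; the endpoint path, the echo loop and the hard-coded URL are illustrative assumptions, not the original business logic:
import asyncio
import aiohttp
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def call_external_server(channel: str, text: str) -> None:
    # runs concurrently with the websocket loop below
    # (for brevity a new ClientSession is created here; see the links above for reusing one)
    async with aiohttp.ClientSession() as session:
        async with session.get(
            "http://localhost:9000/pub", params={"channel": channel, "text": text}
        ) as resp:
            print("external server responded with status", resp.status)

@app.websocket("/ws")
async def websocket_endpoint(channel: str, websocket: WebSocket):
    await websocket.accept()
    # schedule the call without awaiting it, so the receive loop starts immediately
    task = asyncio.create_task(call_external_server(channel, "some text"))
    try:
        while True:
            # placeholder loop: echo messages back to the client
            msg = await websocket.receive_text()
            await websocket.send_text(msg)
    finally:
        task.cancel()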
2
4
78,089,964
2024-3-1
https://stackoverflow.com/questions/78089964/syntax-highlights-intellisense-and-autocomplete-not-working-in-jupyter-notebook
for some background, I usually use vscode for creating input for my calculation, especially using the jupyter notebook extension to my WSL Ubuntu. so I've been setting up my vscode using standard Python and Jupyter noteboook extensions. Everything looked good and ran smoothly as the day before until two days ago (29-02-2024) when I was modifying my input and the python IntelliSense started not responding and syntax highlight was not responding. At first, I thought maybe my Python kernel shutting down so I reconnected it to my standard Python at /bin/python3, and nothing was changed, so I restarted my vscode with the hope that everything back again with a clean start. It seems to work since the script I added as the previous session (before closing) has been colored with proper syntax highlights, but when I add the next syntax, commenting, the highlights don't work and all syntax isn't colored properly. It looks like the color came from the previous line or line after that like it was locked for each line with the color loaded during the first opening of my jupyter notebook. as an example, this is my dummy syntax on the notebook cell when I loaded my notebook first load and this is when I modified it slightly by deleting some dummy comments, the color highlights are not working properly. It all also happened when I added another syntax to the cell. deleting dummy comments and typing syntax this is my extension used right now vscode extension on WSL side Hence, I thought it was a problem with my extension since when I restart my vscode, it gives me some notification regarding the February 2024 update. So I removed and reinstalled my Python and jupyter notebook extensions either on the local or remote WSL side, and nothing worked. I am also surfing through some StackOverflow feeds and vscode forum, trying some of the others' recommendations in comment sections, been reinstalling my vscode after clean uninstalled my vscode from my Windows side (deleting %APPDATA%\Code and %USERPROFILE%.vscode folders) and also from the WSL side (~/.vscode-server), nothing changes on the syntax highlights on my ipynb. Some comment sections also talk about disabling Dependency Analytics, but I haven't used Dependency Analytics before. On a mission to specify which part of my vscode setup that broken, I tried to modify other ipynb files, and it still didn't work. But, when I check my ipynb file on another PC at the office, it can work properly there. Other findings were the problem with syntax highlights, intellisense, and autocorrection/autocomplete only happened on my jupyter notebook on the WSL Ubuntu side. The highlights of ipynb worked properly on the local Windows side but since the program that I need is on the WSL side, then the intellisense will not work properly, but at least the highlights work on Windows side. I also try to check my python.py script on the WSL side, and the syntax highlights ... work perfectly. So this should be the problem specific to WSL Ubuntu, especially on jupyter notebook settings of vscode. additional info, I have tried to install another Linux distribution from WSL and try to connect using vscode and the problem with ipynb still exists. So I think there must be some problem with the vscode specifically on jupyter notebook settings on syntax highlights and intelisense. So, the question is, which part of my vscode needs to be checked so I can get all of my jupyter notebook features to be fully operated normally as it was before? 
I need some insight since I think even a complete reinstall of my WSL Ubuntu (literally get rid of WSL from Windows, including the WSL2kernel, then reinstalling everything, starting from reinstalling the WSL2 kernel, then the Ubuntu, build-essential, etc) still didn't help me at this point. Now, I only use the Developer: Reload Window command to reload my vscode window after modifying my ipynb. other info: vscode version 1.87.0 019f4d1419fbc8219a181fab7892ebccf7ee29a2 x64 WSL2 Ubuntu extension used on WSL side: isort, Jupyter, Jupyter Cell Tags, Jupyter Notebook, Jupyter Slide Show, Pylance, Python, Python Debugger, Jupyter Keymap
According to this reply on GitHub, the fix will come in the stable recovery release 1.87.1. You can either wait or try an older version of VS Code, such as v1.86.2.
2
2
78,095,216
2024-3-3
https://stackoverflow.com/questions/78095216/pyright-lsp-install-in-neovim-nodeutil-module-not-found
Fresh installed neovim on a new LinuxMint machine with lsp-config, mason plugins and pyright LSP Server (through Mason) and found out that it was not working with lsp.log registering this !!! .../vim/lsp/rpc.lua:734 "rpc" "/home/manjunath/.local/share/nvim-kick/mason/bin/pyright-langserver" "stderr" "internal/modules/cjs/loader.js:818 throw err; ^ Error: Cannot find module 'node:util' Require stack: - /home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-langserver.js - /home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/langserver.index.js at Function.Module._resolveFilename (internal/modules/cjs/loader.js:815:15) at Function.Module._load (internal/modules/cjs/loader.js:667:27) at Module.require (internal/modules/cjs/loader.js:887:19) at require (internal/modules/cjs/helpers.js:74:18) at Object.9632 (/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-langserver.js:1:557) at o (/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-langserver.js:1:1142) at Object.1264 (/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/vendor.js:2:794958) at o (/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-langserver.js:1:1142) at Object.1476 (/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-internal.js:1:441151) at o (/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-langserver.js:1:1142) { code: 'MODULE_NOT_FOUND', requireStack: [ '/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/dist/pyright-langserver.js', '/home/manjunath/.local/share/nvim-kick/mason/packages/pyright/node_modules/pyright/langserver.index.js' ] } Actions taken: Fresh installed node and npm again. Checked with a sample node app using utils module - WORKING Outcome: Working pyright LSP server with Neovim
Found out what the issue was: the Node.js package from apt was outdated. Installing the latest version from here fixed it.
6
10
78,092,187
2024-3-2
https://stackoverflow.com/questions/78092187/highlight-the-changes-or-areas-of-discrepancy-between-the-two-images
These are my images, I want to Convert the PDF documents (file_1.pdf and file_2.pdf) into images. Compare the images to detect any differences between them. Highlight the changes or areas of discrepancy between the two images. This is my code from pdf2image import convert_from_path import cv2 import numpy as np from PIL import Image, ImageOps from IPython.display import display` def process_and_display_image(pdf_path, target_size=(800, 600),save_path='processed_image.jpeg'): images = convert_from_path(pdf_path) image = images[0] image = ImageOps.exif_transpose(image) image.thumbnail(target_size, Image.Resampling.LANCZOS) image_np = np.array(image) image_np = cv2.cvtColor(image_np, cv2.COLOR_RGB2GRAY) image_processed = Image.fromarray(image_np) display(image_processed) image_processed.save(save_path, 'JPEG') print(f"Image saved as {save_path}") # Display image from PDF process_and_display_image("file_1.pdf",save_path='file_1.jpeg') process_and_display_image("file_2.pdf",save_path='file_2.jpeg') import matplotlib.pyplot as plt image1 = cv2.imread('file_1.jpeg', cv2.IMREAD_UNCHANGED) image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB) image2 = cv2.imread('file_2.jpeg', cv2.IMREAD_UNCHANGED) image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2RGB) if image1.shape == image2.shape: difference = cv2.absdiff(image1,image2) difference = 255 - difference if image1.shape == image2.shape: overlay = cv2.addWeighted(image1, 0.5, image2, 0.5, 0) difference = overlay plt.imshow(difference) plt.axis('off') plt.show() My output is -> Expected output ->
I'd like to present some simple operations to get exactly the colors you want, without thresholding. im1 and im2 (grayscale): # assuming white sheet of paper, ink is the inverse # positive = added # negative = deleted signed_difference = (255 - im2).astype(np.int16) - (255 - im1).astype(np.int16) (white = added, black = removed. If you need to look at it, imshow("window", signed_difference / (255*2) + 0.5)) Starting from a version that contains only "ink" that's in both pictures, I'll add colored "ink" for the additions and deletions. The reshape and expand_dims stuff helps numpy do the right thing. canvas = np.maximum(im1, im2) # same ink in both images canvas = cv.cvtColor(canvas, cv.COLOR_GRAY2BGR) # add colored ink: subtract the inverse of those colors to "ink" the page add_color = np.array([255, 128, 0]).reshape((1, 1, 3)) # additions blue del_color = np.array([0, 0, 255]).reshape((1, 1, 3)) # deletions red strength = np.expand_dims(np.abs(signed_difference) / 255, axis=2) # those are just for knowing *what pixels* are affected at all. # results are all grayscale, no thresholding. add_mask = np.expand_dims(signed_difference > 0, axis=2) del_mask = np.expand_dims(signed_difference < 0, axis=2) canvas = np.where(add_mask, canvas - (255 - add_color) * strength, canvas) canvas = np.where(del_mask, canvas - (255 - del_color) * strength, canvas) canvas = np.clip(canvas, 0, 255).astype(np.uint8) And that's it. You'll want to make sure your pictures are aligned well, even sub-pixel accurately. You can use findTransformECC() for sub-pixel refinement. The worse the initial alignment, the more blurring (gaussFiltSize) you'll need. All kinds of "fixed borders" will ruin this process. Make sure to only feed the "art" in, not any framing. The core of that operation: H = np.eye(3).astype(np.float32) (rv, H) = cv.findTransformECC( templateImage=im1, inputImage=im2, warpMatrix=H[:2], motionType=cv.MOTION_AFFINE, criteria=(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 20, -1), # -1 to ignore eps and use all iterations inputMask=None, gaussFiltSize=5 ) im2_aligned = cv.warpAffine(im2, H, (im1.shape[1], im1.shape[0]), flags=cv.INTER_CUBIC + cv.WARP_INVERSE_MAP) And this is what your input looks like, aligned and colored, without borders: You'll notice that it also colors gray-to-gray differences. Your reference picture does not show that. Your reference picture was probably made by the original CAD program. The CAD program is aware of the geometry and colors it according to flexible styles. I can only work on the image data. Sure, it's possible to exclude "gray" from the coloring but then you'll also see the gray pixels near black pixels (fuzzy lines) not-colored. Here's a complete gray-to-gray display:
2
2
78,097,177
2024-3-3
https://stackoverflow.com/questions/78097177/how-to-get-specific-unique-combinations-of-a-dataframe-using-only-dataframe-oper
I have this data that contains entries of 2 players playing a game one after another then after they both gain a win or lose they get a shared score (logic of this is unimportant numbers are random anyway this is just an example to describe what I want). So there are scores obtained for every possible outcome after player P1 and player P2 have played. The logic of the game is not important, all I want to know is if I can create a new dataframe of all unique combinations of these 4 players playing using my initial dataframe. So calculate a new score for all possible combinations of these 4 Players if they all play and get a score together, let's say their total scores would be summed up. Example: Player_1 Player_2 Player_3 Player_4 Outcome_1 Outcome_2 Outcome_3 Outcome_4 Score P1 P2 P3 P4 win win win win 72 and other possible unique combinations. The key is to get a score of 30 from the combination where both P1 and P2 win and get 42 from the combination where both P3 and P4 have won and sum them to create the score if these 4 players have played and they have all won. I can do this with generating unique combinations etc., but in a real use case with larger parameters etc. its too long and results in dirty hard to read code. What I want to know is is there a way to achieve this using only operations such as merge, groupby, join, agg etc. import pandas as pd data = { "Player_1": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2", "P1", "P1", "P1", "P1", "P3", "P3", "P3", "P3"], "Player_2": ["P2", "P2", "P2", "P2", "P3", "P3", "P3", "P3", "P4", "P4", "P4", "P4", "P4", "P4", "P4", "P4"], "Outcome_1": ["win", "win", "lose", "lose", "win", "win", "lose", "lose", "win", "win", "lose", "lose", "win", "win", "lose", "lose"], "Outcome_2": ["win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose", "win", "lose"], "Score": [30, 45, 12, 78, 56, 21, 67, 90, 15, 32, 68, 88, 42, 74, 8, 93] } df = pd.DataFrame(data) print(df) Player_1 Player_2 Outcome_1 Outcome_2 Score 0 P1 P2 win win 30 1 P1 P2 win lose 45 2 P1 P2 lose win 12 3 P1 P2 lose lose 78 4 P2 P3 win win 56 5 P2 P3 win lose 21 6 P2 P3 lose win 67 7 P2 P3 lose lose 90 8 P1 P4 win win 15 9 P1 P4 win lose 32 10 P1 P4 lose win 68 11 P1 P4 lose lose 88 12 P3 P4 win win 42 13 P3 P4 win lose 74 14 P3 P4 lose win 8 15 P3 P4 lose lose 93
I hope I've understood your question right. From the comments I'm supposing you have total 4 players in the dataframe: from itertools import product p1, p2, p3, p4 = np.unique(df[["Player_1", "Player_2"]].values) df = df.set_index(["Player_1", "Player_2", "Outcome_1", "Outcome_2"]) all_data = [] for p1o1, p2o2, p3o1, p4o2 in product(["win", "lose"], repeat=4): all_data.append( ( p1, p2, p3, p4, p1o1, p2o2, p3o1, p4o2, df.loc[(p1, p2, p1o1, p2o2), "Score"] + df.loc[(p3, p4, p3o1, p4o2), "Score"], ) ) out = pd.DataFrame( all_data, columns=[ "Player_1", "Player_2", "Player_3", "Player_4", "Outcome_1", "Outcome_2", "Outcome_3", "Outcome_4", "Score", ], ) Prints: Player_1 Player_2 Player_3 Player_4 Outcome_1 Outcome_2 Outcome_3 Outcome_4 Score 0 P1 P2 P3 P4 win win win win 72 1 P1 P2 P3 P4 win win win lose 104 2 P1 P2 P3 P4 win win lose win 38 3 P1 P2 P3 P4 win win lose lose 123 4 P1 P2 P3 P4 win lose win win 87 5 P1 P2 P3 P4 win lose win lose 119 6 P1 P2 P3 P4 win lose lose win 53 7 P1 P2 P3 P4 win lose lose lose 138 8 P1 P2 P3 P4 lose win win win 54 9 P1 P2 P3 P4 lose win win lose 86 10 P1 P2 P3 P4 lose win lose win 20 11 P1 P2 P3 P4 lose win lose lose 105 12 P1 P2 P3 P4 lose lose win win 120 13 P1 P2 P3 P4 lose lose win lose 152 14 P1 P2 P3 P4 lose lose lose win 86 15 P1 P2 P3 P4 lose lose lose lose 171
3
3
78,094,782
2024-3-3
https://stackoverflow.com/questions/78094782/make-code-faster-to-solve-which-letter-represents-which-digit-in-a-given-equatio
I am going to do a math competition which allows writing programs to solve problems. The code attempts to solve this problem: WHITE+WATER=PICNIC where each distinct letter represents a different digit. Find the number represented by PICNIC. No imports are allowed (tqdm was just a progress bar). What I tried is below. At the rate of which my computer goes, it is not fast enough for the times competition because it needs to range over 10 to the power of digits there is. Is there a clear solution to this problem? from tqdm import tqdm def find_solution(): for W in tqdm(range(10)): for H in tqdm(range(10), desc='H'): for I in tqdm(range(10), desc='I'): for T in tqdm(range(10)): for E in tqdm(range(10)): for A in (range(10)): for R in (range(10)): for P in (range(1, 10)): # P cannot be 0 for C in (range(10)): for N in (range(10)): white = W * 10000 + H * 1000 + I * 100 + T * 10 + E water = W * 10000 + A * 1000 + T * 100 + E * 10 + R picnic = P * 100000 + I * 10000 + C * 1000 + N * 100 + I * 10 + C if white + water == picnic: return {'W': W, 'H': H, 'I': I, 'T': T, 'E': E, 'A': A, 'R': R, 'P': P, 'C': C, 'N': N} return None solution = find_solution() if solution: print("Solution found:") print(solution) else: print("No solution found.")
Observations: P=1 due to a carry to its sixth digit W must be 5-9 due to a carry being generated (1) No letter can be the same value as another letter. def find_solution(): P = 1 for W in range(5,10): for H in range(10): if H in (P,W): continue for I in range(10): if I in (H,P,W): continue for T in range(10): if T in (I,H,P,W): continue for E in range(10): if E in (T,I,H,P,W): continue for A in range(10): if A in (E,T,I,H,P,W): continue for R in range(10): if R in (A,E,T,I,H,P,W): continue for C in range(10): if C in (R,A,E,T,I,H,P,W): continue for N in range(10): if N in (C,R,A,E,T,I,H,P,W): continue white = W * 10000 + H * 1000 + I * 100 + T * 10 + E water = W * 10000 + A * 1000 + T * 100 + E * 10 + R picnic = P * 100000 + I * 10000 + C * 1000 + N * 100 + I * 10 + C if white + water == picnic: print(f' WHITE= {white}') print(f' WATER= {water}') print(f'PICNIC={picnic}') print() find_solution() Output (2 solutions, ~1 second): WHITE= 83642 WATER= 85427 PICNIC=169069 WHITE= 85642 WATER= 83427 PICNIC=169069 Faster, of course, if you can use libraries (~0.3 sec). Note the documentation of itertools.permutations has a rough Python-only implementation, but it is slower than the hard-coded version above: from itertools import permutations def find_solution(): P=1 for W,H,I,T,E,A,R,C,N in permutations([0,2,3,4,5,6,7,8,9]): if W < 5: continue white = W * 10000 + H * 1000 + I * 100 + T * 10 + E water = W * 10000 + A * 1000 + T * 100 + E * 10 + R picnic = P * 100000 + I * 10000 + C * 1000 + N * 100 + I * 10 + C if white + water == picnic: print(f' WHITE= {white}') print(f' WATER= {water}') print(f'PICNIC={picnic}') print() find_solution()
2
3
78,094,521
2024-3-2
https://stackoverflow.com/questions/78094521/how-to-shift-matrix-upper-right-triangular-values-to-lower-right
Given the following upper triangular matrix, I need to shift the values to the bottom as shown: input_array = np.array([ [np.nan, 1, 2, 3], [np.nan, np.nan, 4, 5], [np.nan, np.nan, np.nan, 6], [np.nan, np.nan, np.nan, np.nan], ]) result = np.array([ [np.nan, np.nan, np.nan, np.nan], [np.nan, np.nan, np.nan, 3], [np.nan, np.nan, 2, 5], [np.nan, 1, 4, 6], ]) What is the best way to do this in numpy in terms of fewer lines and speed? Here is the solution I've got using meshgrid: import numpy as np n = input_array.shape[0] j, i = np.meshgrid(np.arange(n), np.arange(n)) i2 = i - np.arange(1,n+1)[::-1] result = input_array[i2, j]
Try: ind = (~np.isnan(input_array)).argsort(axis=0, kind='stable') print(np.take_along_axis(input_array, ind, axis=0)) kind='stable' keeps the relative order of the non-NaN values within each column (a non-stable sort is not guaranteed to, since the sort keys are just booleans). Prints: [[nan nan nan nan] [nan nan nan 3.] [nan nan 2. 5.] [nan 1. 4. 6.]]
2
1
78,094,327
2024-3-2
https://stackoverflow.com/questions/78094327/streamlining-generic-property-creation
I'm new to Python and I find that when making classes I keep having to write @property def example(self): return self._example @example.setter def example(self, example): if #some conditions here: self._example = example I am working on a project and there are over a dozen of these across it, so I tried to make a function that would take a property name and some conditions and make a getter, setter and deleter for it. class Util: @classmethod def custom_property(cls, prop, conds=None, deletable=False): def getter(self): return getattr(self, f'_{prop}') def setter(self, value): if conds is None or conds(value): setattr(self, f'_{prop}', value) else: raise ValueError(f'Invalid value for {prop}') def deleter(self): if deletable: del self._value else: print("This attribute can't be deleted.") # setattr(target, prop, property(getter, setter, deleter)) return [getter, setter, deleter] and here is an example of me trying to use it: from utils import Util def name_conds(name): if not isinstance(name, str): raise TypeError("City name must be a string") if not name.strip(): raise ValueError("City name cannot be blank") if not 2 <= len(name) <= 25: raise ValueError("City name length must be between 2 and 25 characters") return True class City: def __init__(self, name): self._name = name [name_getter, name_setter, name_deleter] = Util.custom_property("name", name_conds) name = property(name_getter, name_setter, name_deleter) Unfortunately, this doesn't totally work. If I make a city with a name "asdf" and try to change it to "a", I get the right error that the length is wrong, but if I instantiate a new city with a name "a", I am allowed to with no problem. Surely there is some way people have gotten around writing properties out like I have? Also, as the commented line in Util shows, I wanted to be able to use the method in a quicker way by just calling Util.custom_property inside whatever class needed a property, but that added the property to Util (of course, since it is adding it to cls). It seems there is no way to give the class City as an argument inside City, but then calling custom_property outside of city might mean the functions in city can't reference name properly.
The problem isn't with your property, but with the fact that you aren't using the property in City.__init__. _name is really a private implementation detail of the property itself (though the value is stored on the instance of City), so City methods should not use it directly, either. While I'm answering the question, I'll point out that custom_property doesn't have to be a class method (or a method of any class); it can be an ordinary function, so the cls parameter can be dropped. Further, it can return a property instance itself that you can assign directly to an attribute of City. Finally, you can optimize setter by defining one that unconditionally sets the value if conds is None, and you can just omit the deleter if deletable is False. def custom_property(prop, conds=None, deletable=False): name = f'_{prop}' def getter(self): return getattr(self, name) if conds is None: def setter(self, value): setattr(self, name, value) else: def setter(self, value): if conds(value): setattr(self, name, value) else: raise ValueError(f'Invalid value for {prop}') if deletable: def deleter(self): delattr(self, name) else: deleter = None return property(getter, setter, deleter) Then from utils import custom_property def name_conds(name): if not isinstance(name, str): raise TypeError("City name must be a string") if not name.strip(): raise ValueError("City name cannot be blank") if not 2 <= len(name) <= 25: raise ValueError("City name length must be between 2 and 25 characters") return True class City: def __init__(self, name): self.name = name # Use the property, not _name name = custom_property("name", name_conds)
3
3
78,093,070
2024-3-2
https://stackoverflow.com/questions/78093070/pandas-how-to-transform-multi-select-column-to-index-column
I have a dataframe looking like that : 2023-12-31 2024-01-31 myvalue1 myvalue2 myvalue1 myvalue2 a 2 6 2 6 b 3 7 3 7 c 4 8 4 8 which can be created by : df = pd.DataFrame(index=['a', 'b', 'c'], data= {'myvalue1':[2,3, 4], 'myvalue2':[6,7,8]})\ .reindex(['myvalue1', 'myvalue2'] * 2, axis=1) \ .set_axis(pd.MultiIndex.from_product([['2023-12-31','2024-01-31'], ['myvalue1', 'myvalue2']]), axis=1) How can I get the sum in each column in the following format : myvalue1 myvalue2 2023-12-31 9 21 2024-01-31 9 21 Note that this is only the example where pair of columns are the same. Thank you
sum the rows and reshape with unstack: out = df.sum().unstack() Output: myvalue1 myvalue2 2023-12-31 9 21 2024-01-31 9 21
3
3
78,092,661
2024-3-2
https://stackoverflow.com/questions/78092661/which-one-should-i-used-in-order-to-check-the-uniqueness-of-enum-values-in-pytho
I have an enum in Python, which I want to make sure that its values are unique. I see that there are 2 ways I can use to achieve it: Wrapping the class with @verify(UNIQUE) Wrapping the class with @unique What is the difference with using each one of them? Which one should I use to gain the best performance?
You can use whichever you prefer. In terms of implementation both are the same, and I mean exactly the same: it's literally copy-pasted code in enum.py in the CPython repo. This code is for @verify(UNIQUE) and this one for @unique. I would suggest using @verify if you have other checks as well, but if you only want to check for uniqueness it is better to use @unique, since it only references the code needed.
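For reference, a minimal sketch of the two spellings (the Color and Status enums are just made-up examples; @verify(UNIQUE) needs Python 3.11+, and both raise ValueError at class creation time if two members share a value):
from enum import Enum, unique, verify, UNIQUE

@unique
class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

@verify(UNIQUE)  # equivalent check, available since Python 3.11
class Status(Enum):
    OPEN = 1
    CLOSED = 2

# Duplicating a value (e.g. CLOSED = 1) would make either class
# definition fail immediately with a ValueError.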
5
3
78,076,552
2024-2-28
https://stackoverflow.com/questions/78076552/how-can-i-generate-detect-signals-2-4ghz-and-generate-spectrograms-from-them-l
I'm new to the world of RF and signal processing. I have a USRP B210 and I'm trying to detect and generate spectrograms that will be later used to train an AI model that identifies if the signal is a wifi, bluetooth, drone... which means we're focusing on 2.4GHz I guess. this is the code provided by ettus research on import uhd import numpy as np usrp = uhd.usrp.MultiUSRP() num_samps = 10000 # number of samples received center_freq = 100e6 # Hz sample_rate = 1e6 # Hz gain = 50 # dB usrp.set_rx_rate(sample_rate, 0) usrp.set_rx_freq(uhd.libpyuhd.types.tune_request(center_freq), 0) usrp.set_rx_gain(gain, 0) # Set up the stream and receive buffer st_args = uhd.usrp.StreamArgs("fc32", "sc16") st_args.channels = [0] metadata = uhd.types.RXMetadata() streamer = usrp.get_rx_stream(st_args) recv_buffer = np.zeros((1, 1000), dtype=np.complex64) # Start Stream stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.start_cont) stream_cmd.stream_now = True streamer.issue_stream_cmd(stream_cmd) # Receive Samples samples = np.zeros(num_samps, dtype=np.complex64) for i in range(num_samps//1000): streamer.recv(recv_buffer, metadata) samples[i*1000:(i+1)*1000] = recv_buffer[0] # Stop Stream stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.stop_cont) streamer.issue_stream_cmd(stream_cmd) print(len(samples)) print(samples[0:10]) I tried the basic code provided by Ettus Research on their website and experimented with different values and parameters, sometimes adding a filter... But It seems I'm still very far from my objective This is the last code I used import uhd import numpy as np import matplotlib.pyplot as plt from scipy.signal import butter, lfilter def butter_lowpass_filter(data, cutoff_freq, sample_rate, order=5): nyquist = 0.5 * sample_rate normal_cutoff = cutoff_freq / nyquist b, a = butter(order, normal_cutoff, btype='low', analog=False) y = lfilter(b, a, data) return y usrp = uhd.usrp.MultiUSRP() num_samps = 1000000 center_freq = 2.4e9 sample_rate = 1e7 gain = 2 duration = 0.25 order = 10 cutoff_frequency = 2.5e6 denoised_samples = butter_lowpass_filter(samples, cutoff_frequency, sample_rate, order) usrp.set_rx_rate(sample_rate, 0) usrp.set_rx_freq(uhd.libpyuhd.types.tune_request(center_freq), 0) usrp.set_rx_gain(gain, 0) st_args = uhd.usrp.StreamArgs("fc32", "sc16") st_args.channels = [0] metadata = uhd.types.RXMetadata() streamer = usrp.get_rx_stream(st_args) recv_buffer = np.zeros((1, 1000), dtype=np.complex64) stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.start_cont) stream_cmd.stream_now = True streamer.issue_stream_cmd(stream_cmd) samples = np.zeros(num_samps, dtype=np.complex64) for i in range(num_samps // 1000): streamer.recv(recv_buffer, metadata) samples[i * 1000:(i + 1) * 1000] = recv_buffer[0] stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.stop_cont) streamer.issue_stream_cmd(stream_cmd) denoised_samples = butter_lowpass_filter(samples, cutoff_frequency, sample_rate) print(len(denoised_samples)) print(denoised_samples[0:10]) plt.specgram(denoised_samples, NFFT=int(duration * sample_rate), Fs=sample_rate, noverlap=int(duration * sample_rate * 0.9), cmap='viridis') plt.xlabel('Time (s)') plt.ylabel('Frequency (Hz)') plt.title('Spectrogram aver filter pass-bas 2') cbar = plt.colorbar() cbar.set_label('Intensity (dB)') plt.show() This is What I get: This is my objective:
Here's an example of my code that I used to get a signal similar to the one you're trying to get import uhd import numpy as np import matplotlib.pyplot as plt import scipy usrp = uhd.usrp.MultiUSRP() num_samps = 1000 * 1000 # number of samples received num_samps_h = 2000 # number of samples received num_samps_w = 2000 # number of samples received center_freq = 2422e6 # Hz sample_rate = 15e6 # Hz gain = 50 # dB usrp.set_rx_rate(sample_rate, 0) usrp.set_rx_bandwidth(20e6) usrp.set_rx_freq(uhd.libpyuhd.types.tune_request(center_freq), 0) usrp.set_rx_gain(gain, 0) print("bandwidth", usrp.get_rx_bandwidth()) # Set up the stream and receive buffer st_args = uhd.usrp.StreamArgs("fc32", "sc16") st_args.channels = [0] metadata = uhd.types.RXMetadata() streamer = usrp.get_rx_stream(st_args) recv_buffer = np.zeros((1, num_samps_w), dtype=np.complex64) # Start Stream stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.start_cont) stream_cmd.stream_now = True streamer.issue_stream_cmd(stream_cmd) # Receive Samples samples = np.zeros((num_samps_h, num_samps_w), dtype=np.complex64) for i in range(num_samps_h): streamer.recv(recv_buffer, metadata) samples[i] = scipy.fft.fft(recv_buffer[0]) # Stop Stream stream_cmd = uhd.types.StreamCMD(uhd.types.StreamMode.stop_cont) streamer.issue_stream_cmd(stream_cmd) print(len(samples)) print(samples[0:10]) abs_samples = np.absolute(samples) print(abs_samples.max(), abs_samples.mean(), abs_samples.std()) max_display_value = abs_samples.mean() + 2 * abs_samples.std() abs_samples[np.where(abs_samples > max_display_value)] = max_display_value plt.imshow(abs_samples) plt.show() As a result, I managed to get waterfall images of the spectra, as in the attached pictures. Horizontal frequency, vertical time enter image description here
3
1
78,092,439
2024-3-2
https://stackoverflow.com/questions/78092439/calculating-the-average-of-a-specific-row-with-loc-loc-is-not-finding-the-row-v
I'm new to Python and Stack Overflow, so please forgive any shortcomings in my posting. I want to calculate the row average of a specific row (e.g. Bahrain) and I'm having problems achieving it. I used df.loc but it's returning a KeyError with the name of the country, as it doesn't recognize the name. My dataframe looks something like this: Country 1990 1995 2000 2005 2010 2015 0 Bahrain 5 4 3 2 1 5 1 Maldives 10 9 8 7 6 5 2 Germany 7 4 3 2 1 7 . .... .. .. .. .. .. .. . .... .. .. .. .. .. .. . .... .. .. .. .. .. .. I wrote this and I get a KeyError with the name of the row: mean=df.loc['Bahrain'].mean(axis=1) I am getting the following error (screenshot of the KeyError in the original post). I prefer to have an output in the format of numpy.float64.
You need to select the row where df["Country"] equals 'Bahrain'; df.loc['Bahrain'] fails because 'Country' is a regular column, not the index. Also, you cannot mix text with numbers when taking the average, so the Country column has to be excluded. Here is how you can do it: rowAverage = df[df['Country'] == 'Bahrain'].iloc[:, 1:].mean(axis=1) (axis=1 averages across the year columns, giving a one-element Series.)
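If you want the plain numpy.float64 scalar the question asks for, one possible variant (assuming all the remaining columns are numeric) is to make Country the index so that df.loc['Bahrain'] works the way the original attempt expected:
row_average = df.set_index('Country').loc['Bahrain'].mean()
print(type(row_average))  # numpy.float64 when all year columns are numeric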
3
4
78,091,150
2024-3-2
https://stackoverflow.com/questions/78091150/conditional-python-polars-cum-sum-over-a-group-is-there-a-better-way
I want to create a column that is a cumulative sum over a group column but the cumulative sum only happens when the 'days' column meets a certain condition. I have come up with what I regard as a "duct tape" solution, there must be a more elegant way. import polars as pl # Create a DataFrame with literal values df = pl.DataFrame({ "days": [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 6, 7, 1], "amount": [100, 200, 150, 300, 250, 180, 220, 280, 210, 320,21,456,111], "group": ["A", "B", "A", "C", "B", "A", "C", "B", "C", "A","C","B","B"] }) # Display the DataFrame print(df) #My duct tape solution df = ( df .with_columns( pl.when(pl.col("days") > 2) .then(pl.col("amount")) .otherwise(0).alias("3+days_amount") ) .with_columns( pl.col("3+days_amount").cum_sum().over("group").alias("group_cumsum") ) ) print(df) shape: (13, 3) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ days ┆ amount ┆ group β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ════════β•ͺ═══════║ β”‚ 0 ┆ 100 ┆ A β”‚ β”‚ 1 ┆ 200 ┆ B β”‚ β”‚ 2 ┆ 150 ┆ A β”‚ β”‚ 3 ┆ 300 ┆ C β”‚ β”‚ 4 ┆ 250 ┆ B β”‚ β”‚ … ┆ … ┆ … β”‚ β”‚ 3 ┆ 210 ┆ C β”‚ β”‚ 4 ┆ 320 ┆ A β”‚ β”‚ 6 ┆ 21 ┆ C β”‚ β”‚ 7 ┆ 456 ┆ B β”‚ β”‚ 1 ┆ 111 ┆ B β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ shape: (13, 5) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ days ┆ amount ┆ group ┆ 3+days_amount ┆ group_cumsum β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ i64 ┆ str ┆ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•ͺ════════β•ͺ═══════β•ͺ═══════════════β•ͺ══════════════║ β”‚ 0 ┆ 100 ┆ A ┆ 0 ┆ 0 β”‚ β”‚ 1 ┆ 200 ┆ B ┆ 0 ┆ 0 β”‚ β”‚ 2 ┆ 150 ┆ A ┆ 0 ┆ 0 β”‚ β”‚ 3 ┆ 300 ┆ C ┆ 300 ┆ 300 β”‚ β”‚ 4 ┆ 250 ┆ B ┆ 250 ┆ 250 β”‚ β”‚ … ┆ … ┆ … ┆ … ┆ … β”‚ β”‚ 3 ┆ 210 ┆ C ┆ 210 ┆ 510 β”‚ β”‚ 4 ┆ 320 ┆ A ┆ 320 ┆ 320 β”‚ β”‚ 6 ┆ 21 ┆ C ┆ 21 ┆ 531 β”‚ β”‚ 7 ┆ 456 ┆ B ┆ 456 ┆ 706 β”‚ β”‚ 1 ┆ 111 ┆ B ┆ 0 ┆ 706 β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ polars expressions seem very elegant generally, hoping there's something I am missing.
As mentioned by @jqurious, a single pl.DataFrame.with_columns call can be used to compute the conditional cumulative sum. Moreover, the conditional amount can be computed without a pl.when().then().otherwise() construct, by multiplying the amount with the condition. ( df .with_columns( (pl.col("amount") * (pl.col("days") > 2)).cum_sum().over("group").alias("group_sum") ) )
2
2
78,091,020
2024-3-2
https://stackoverflow.com/questions/78091020/how-do-i-check-that-parameters-of-a-variadic-callable-are-all-of-a-certain-subcl
This might be a tough one. Suppose I have a type JSON = Union[Mapping[str, "JSON"], Sequence["JSON"], str, int, float, bool, None] And I have a function def memoize[**P, T: JSON](fn: Callable[P,T]) -> Callable[P,T]: # ...make the function memoizable return wrapped_fn How do I constrain the parameters of fn to all be subtypes of JSON? Alternatively, if this can't be done statically, how do I check this inside memoize before creating the wrapper? I tried giving bounds to the ParamSpec variable **P, but it seems this isn't implemented yet. I also tried issubclass but this doesn't play nice with typehints.
There isn't a way of currently doing this, if fn has an arbitrary signature. I believe that the next best thing is generating errors at the call site (see Pyright playground): import collections.abc as cx import typing as t type JSON = cx.Mapping[str, JSON] | cx.Sequence[JSON] | str | int | float | bool | None class _JSONOnlyCallable(t.Protocol): def __call__(self, /, *args: JSON, **kwargs: JSON) -> JSON: ... def memoize[F: cx.Callable[..., t.Any]](fn: F, /) -> F | _JSONOnlyCallable: return fn @memoize def f(a: int, b: str, c: set[int]) -> str: ... >>> f(1, "", {1, 2}) ^^^^^^ pyright: Argument of type "set[int]" cannot be assigned to parameter "args" of type "JSON" in function "__call__"
2
2
78,089,462
2024-3-1
https://stackoverflow.com/questions/78089462/how-to-extract-dominant-frequency-from-numpy-array
I am working with simulations of populations of neurons, and would like to extract the dominant frequency from their local field potential. This takes the form of just a single vector of values, which I have plotted here. Clearly, there is some oscillatory activity happening, and I am interested in what frequency the large scale oscillations are occurring at. I am not very familiar with Fourier transforms outside of the general idea of them, but I believe they are the tool needed here? The sampling distance in this simulation was 0.00006 seconds or the sampling rate is 1/0.00006 Hz. I tried using the numpy.fft.fft function, but I am a little unsure about how to interpret the results. I expected a large peak around 1, corresponding to what appears like 1 Hz oscillations. Here is the code I ran, and the resulting plot. raw = np.loadtxt(r"./file.txt") #calculate the frequencies associated with our signal freqs = fft.rfftfreq(len(raw),0.0006) sp = fft.rfft(raw) plt.plot(freqs, sp.real, freqs, sp.imag) I believe the high sampling rate is causing the calculation of such high frequencies, and I know I am just interested in those from ~0-4 Hz so I can plot those as: Or to make it easier to visualize, just ~1-4 Hz: Note, I am not calculating the FFTs any differently, just plotting different regions of the results. My issue is I am unclear about how to interpret the results or also how to realize if I am making an error somewhere in my code. I can see some spikes in the results, but they are not where I expected them to be, nor am I sure on how to determine when they are significant. Any help with this is appreciated. :^)
Why you don't find a peak at 1 Just to reproduce the probable problem with my own [mre] import numpy as np import matplotlib.pyplot as plt # time with a 0.00006 second per sample t=np.arange(0,12,0.00006) # y is roughly sin(2Ο€.f.t)⁴. But with some noise. # Because of power 4, it is a 2f periodic signal. So I choose f # around 0.5 to mimic the 1Hz signal. And I choose not exactly 0.5 # but a more specific 0.47, to verify my ability to find it back y=(np.sin(2*np.pi*0.47*t)+np.random.normal(0,0.1,(len(t),)))**4 plt.plot(t,y) plt.show() Not the same signal, but close enough for demo purpose. Now, what you did is freqs = np.fft.rfftfreq(len(y),0.0006) sp = np.fft.rfft(y) Which gives (once zoomed on the interesting part) With a big value at 0, because signal is not 0 on average (it is the constant term, or 0Hz) Another one just before 0.1 (0.094, we surmise). And a small one (first harmonic) before 0.2. And then, because you were sure that anything before 1 should be ignored (but why? even if there was no error, if fft tells your the biggest peak is at 0.1 Hz, it is worth wondering why) , you zoomed on the big flat 0 area. And because of the noise, with a zoom, nothing is really flat. What you should have done is freqs = np.fft.rfftfreq(len(y),0.00006) Note that this has no influence whatsover on fft. The fft line doesn't even use freqs as parameter. It is only when plotting that it just changes what is indicated on x axis. Same exact plot. With just a different ticks on x-axis. (I did not put lot of efforts to zoom exactly likewise, since I did that with the mouse. But you see that it is the same, except than what was 0.1 is now 1) A more accurate estimation You may have noted that all we see is that peak in fft is just slightly under 1. Probably my 0.94. But hard to really tell if it is 0.94, 0.96 ... Because it is a very low resolution in low frequencies. (First frequency is 0.08333... that is a period of 12 second, the the entire span. Second is twice that, so period of 6 seconds. Etc. There is no frequency in freqs between 0.91666 and 1. Hard to find accurately the 0.94 from that). Here, I feel that you are more looking for a repetition period than a frequency (same thing, you may say. But in fft, focus is on providing a base of periodic function that can reproduce the signal. Not to map all frequencies. For high frequencies that is almost the same thing, since resolution is High there. But for low frequencies/high periods, it is not). So I would rather just try to find the best autocorrelation here. For example, simply with a convolution conv=np.convolve(y,y[::-1]) It gives you this What is interesting, of course here, are the peaks. They occur for the shift when signals match best with itself (Think of the signal printed on 2 transparent graph. That you slide on each other). And there, we have not fixed a base frequency for a new vector base. Unit are "samples". What I see roughly is the position of 2nd peak, at 17760. (The one at the middle is when y is matched with itself with no shift at all. Then each peak is with a shift of 1,2,3,... or -1,-2,-3... periods) And 17760*0.06 is 1.0596 seconds. So 1/1.0596 = 0.94 Hz...
2
3
78,088,193
2024-3-1
https://stackoverflow.com/questions/78088193/integrate-a-function-that-takes-as-input-a-scalar-value-and-outputs-a-matrix
I have a function that takes as an input a scalar value, and outputs a matrix (following is an MRE with size 2x2, my actual function has problem specific matrix dimensions, like say 100x100). I'd like to integrate said function over a range. import numpy as np from scipy.integrate import quad def func(x): return np.array([[np.cos(x), np.sin(x)], [np.sin(x), np.cos(x)]]) quad(func, 0, np.pi) This gives me the following error : Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/scipy/integrate/_quadpack_py.py", line 465, in quad retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/scipy/integrate/_quadpack_py.py", line 577, in _quad return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: only size-1 arrays can be converted to Python scalars I wanted to avoid looping over each individual function, so I vectorized the integration operation, in the hopes that it would accommodate this sort of a function : grid_quad = np.vectorize(quad) grid_quad(func, 0, np.pi) I still get the same error : Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/numpy/lib/function_base.py", line 2329, in __call__ return self._vectorize_call(func=func, args=vargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/numpy/lib/function_base.py", line 2407, in _vectorize_call ufunc, otypes = self._get_ufunc_and_otypes(func=func, args=args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/numpy/lib/function_base.py", line 2367, in _get_ufunc_and_otypes outputs = func(*inputs) ^^^^^^^^^^^^^ File "/scipy/integrate/_quadpack_py.py", line 465, in quad retval = _quad(func, a, b, args, full_output, epsabs, epsrel, limit, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/scipy/integrate/_quadpack_py.py", line 577, in _quad return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: only size-1 arrays can be converted to Python scalars When I explicitly wrote out my integrand as an array of arrays, this method worked efunc = [[np.cos, np.sin],[np.sin, np.cos]] grid_quad(efunc, 0, np.pi) and gave the expected output (array([[4.92255263e-17, 2.00000000e+00], [2.00000000e+00, 4.92255263e-17]]), array([[2.21022394e-14, 2.22044605e-14], [2.22044605e-14, 2.21022394e-14]])) which is a matrix that contains the values obtained by integrating the "function" associated with each matrix position. In pen and paper terms, I have a matrix whose entries are functions that depend on a variable. I would like to integrate each one of these functions over an interval, and receive as output a matrix of the definite integral values. Is there a "(num)pythonic" way to achieve this?
Try scipy.integrate.quad_vec. import numpy as np from scipy.integrate import quad_vec def func(x): return np.array([[np.cos(x), np.sin(x)], [np.sin(x), np.cos(x)]]) quad_vec(func, 0, np.pi) # (array([[2.22044605e-16, 2.00000000e+00], # [2.00000000e+00, 2.22044605e-16]]), # 1.3312465980656846e-13) The first output is your result, the second is the error estimate.
2
2
78,086,474
2024-3-1
https://stackoverflow.com/questions/78086474/recursive-matrix-construction-with-numpy-array-issues-broadcasting
The following is a seemingly simple recursion for finding a hypercube matrix. The recursion is defined as: (Formula) I tried to put it into code but I keep running into broadcasting issues with numpy. Instead of taking the size of the previous matrix, I tried just taking powers of 2 (for the parameter of the identity matrix) to reduce recursive calls. import numpy as np from numpy import linalg as LA Q1 = np.array([[0, 1], [1, 0]]) def Qn(n): if n <= 1: return Q1 else: return np.array([[Qn(n-1), np.identity(int(np.exp2(n-1)))], [np.identity(int(np.exp2(n-1))), Qn(n-1)]]) Q3= Qn(3) eig_value, eig_vectors = LA.eig(Q3) print(eig_value) Q1 is the base case of my matrix. It should be very simple, yet I keep running into issues. Traceback (most recent call last): File "e:\Coding\Python\test.py", line 15, in <module> Q3= Qn(3) File "e:\Coding\Python\test.py", line 12, in Qn return np.array([[Qn(n-1), np.identity(int(np.exp2(n-1)))], ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (2, 2) + inhomogeneous part. I get this error ^^
Use np.block instead of np.array to assemble the block matrix. import numpy as np from numpy import linalg as LA Q1 = np.array([[0, 1], [1, 0]]) def Qn(n): if n <= 1: return Q1 else: Qnm1 = Qn(n-1) I = np.eye(2**(n-1)) return np.block([[Qnm1, I], [I, Qnm1]]) Q3 = Qn(3) eig_value, eig_vectors = LA.eig(Q3) print(eig_value) # [ 3. -3. -1. -1. -1. 1. 1. 1.]
2
2
78,088,133
2024-3-1
https://stackoverflow.com/questions/78088133/all-columns-to-uppercase-in-a-dataframe
Suppose there are hundreds of columns in a DataFrame. Some of the column names are in lower and some are in upper case. Now, I want to convert all the columns to upper case. import polars as pl df = pl.DataFrame({ "foo": [1, 2, 3, 4, 5, 8], "baz": [5, 4, 3, 2, 1, 9], }) What I tried: df.columns = [x.upper() for x in df.columns] It worked, but is there any other way preferably without a for loop?
.rename() also accepts a Callable. df.rename(str.upper) shape: (6, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ FOO ┆ BAZ β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1 ┆ 5 β”‚ β”‚ 2 ┆ 4 β”‚ β”‚ 3 ┆ 3 β”‚ β”‚ 4 ┆ 2 β”‚ β”‚ 5 ┆ 1 β”‚ β”‚ 8 ┆ 9 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
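If your installed polars version does not accept a callable in .rename(), a dict comprehension built from df.columns achieves the same thing:
df = df.rename({c: c.upper() for c in df.columns})  # same FOO/BAZ result as above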
3
3