Dataset columns:
question_id: int64 (values 59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
78,719,078
2024-7-8
https://stackoverflow.com/questions/78719078/how-to-type-hint-a-pl-date
Suppose we create some dates: import polars as pl df = pl.DataFrame( [ pl.Series("start", ["2023-01-01"], dtype=pl.Date).str.to_date(), pl.Series("end", ["2024-01-01"], dtype=pl.Date).str.to_date(), ] ) Now I can create a date range from these: dates = pl.date_range(df[0, "start"], df[0, "end"], "1mo", eager=True) But I want to define a function which takes a couple of dates and spits out a range, as a wrapper around pl.date_range: def my_date_range(start: pl.Date, end: pl.Date) -> pl.Series: return pl.date_range(start, end, "1mo", eager=True) The above doesn't typecheck with pyright/Pylance, because: Argument of type "Date" cannot be assigned to parameter "start" of type "IntoExprColumn | date | datetime" in function "date_range" Type "Date" is incompatible with type "IntoExprColumn | date | datetime" "Date" is incompatible with "date" "Date" is incompatible with "datetime" "Date" is incompatible with "Expr" "Date" is incompatible with "Series" "Date" is incompatible with "str"PylancereportArgumentType If I check out type(df[0, "start"]), I see: datetime.date and pl.Date is no good because isinstance(df[0, "start"], pl.Date) == False. I cannot figure out how to import datetime.date in order to use it as a type annotation (trying import polars.datetime as dt raises No module named 'polars.datetime'). How can this be done? Or put differently: how should my_date_range's date arguments be annotated?
Since datetime.date is compatible with the start and end parameters expected by pl.date_range(), this should be sufficient: import polars as pl from datetime import date def my_date_range(start: date, end: date) -> pl.Series: return pl.date_range(start, end, "1mo", eager=True)
2
3
78,716,751
2024-7-7
https://stackoverflow.com/questions/78716751/removing-one-field-from-a-struct-in-polars
I want to remove one field from a struct. Currently, I have it set up like this, but is there a simpler way to achieve this? import polars as pl import polars.selectors as cs def remove_one_field(df: pl.DataFrame) -> pl.DataFrame: meta_data_columns = (df.select('meta_data') .unnest('meta_data') .select(cs.all() - cs.by_name('system_data')).columns) print(meta_data_columns) return (df.unnest('meta_data') .select(cs.all() - cs.by_name('system_data')) .with_columns(meta_data=pl.struct(meta_data_columns)) .drop(meta_data_columns)) # Example usage input_df = pl.DataFrame({ "id": [1, 2], "meta_data": [{"system_data": "to_remove", "user_data": "keep"}, {"user_data": "keep_"}] }) output_df = remove_one_field(input_df) print(output_df) ['user_data'] shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ meta_data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ struct[1] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════║ β”‚ 1 ┆ {"keep"} β”‚ β”‚ 2 ┆ {"keep_"} β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Something like select on fields within a struct?
You can use struct.field(), which accepts either a list of strings or multiple string arguments. You know your DataFrame's schema(), so you can easily create the list of fields you want: fields = [c[0] for c in input_df.schema["meta_data"] if c[0] != "system_data"] input_df.with_columns( meta_data = pl.struct( pl.col.meta_data.struct.field(fields) ) ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ meta_data β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ struct[1] β”‚ β•žβ•β•β•β•β•β•ͺ═══════════║ β”‚ 1 ┆ {"keep"} β”‚ β”‚ 2 ┆ {"keep_"} β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
5
5
78,713,279
2024-7-5
https://stackoverflow.com/questions/78713279/pyspark-retrieve-the-value-from-the-field-dynamically-specified-in-other-field
I'm working with PySpark and have a challenging scenario where I need to dynamically retrieve the value of a field specified in another field of the same DataFrame. I then need to compare this dynamically retrieved value with a fixed value. Here’s the context: I have a DataFrame df_exploded with the following schema: root |-- \_id: string (nullable = true) |-- name: string (nullable = true) |-- precondition: struct (nullable = true) | |-- field: string (nullable = true) | |-- matchingType: string (nullable = true) | |-- matchingValue: string (nullable = true) |-- businessTransaction: struct (nullable = true) | |-- someField1: string (nullable = true) | |-- someField2: string (nullable = true) | |-- someField3: string (nullable = true) The precondition field is a struct containing: field: the name of the field in businessTransaction whose value I need to compare. matchingType: the type of comparison (e.g., equals, greater than). matchingValue: the value to compare against. Objective: For each row, I need to dynamically retrieve the value from the field specified in precondition.field (which can vary across rows) within businessTransaction and compare it with precondition.matchingValue. Question: How can I dynamically retrieve and compare the value from a field specified in another field of the same DataFrame in PySpark? Is there a way to use expr or another function to evaluate the column path stored in dynamic_field_path to get the actual value? Example Data: +----+----+----------------------------+---------------------------+ | id|name|precondition |businessTransaction | | 1 |John|{someField1, equals, 100} |{someField1 -\> 100, ...} | | 2 |Jane|{someField2, equals, 200} |{someField2 -\> 150, ...} | +----+----+----------------------------+---------------------------+ In this example, I need to dynamically compare businessTransaction.someField1 with 100 for the first row and businessTransaction.someField2 with 200 for the second row. Any help or guidance on how to achieve this would be greatly appreciated! Here’s a simplified version of my approach: from pyspark.sql.functions import col, concat_ws, expr, when # Create the dynamic field path df_exploded = df_exploded.withColumn( 'dynamic_field_path', concat_ws(".", lit("businessTransaction"), col('precondition.field')) ) # Try to retrieve the value from the dynamically specified field using expr df_exploded = df_exploded.withColumn( 'dynamic_field', expr("dynamic_field_path") ) # Check preconditions df_preconditions_checked = df_exploded.withColumn( "is_matching_precondition", when( col('dynamic_field') == col('precondition.matchingValue'), True ).otherwise(False) ) # Filter distinct _id where is_matching_precondition is True df_matching_preconditions = df_preconditions_checked.filter(col("is_matching_precondition") == True).select(col('_id')).distinct() Problem: The above code does not work as intended. The dynamic_field column ends up containing the literal string path instead of the actual value from the dynamically specified field. I receive an error indicating that columns are not iterable.
You can create a udf to handle the dynamic comparison part as follows: from pyspark.sql import SparkSession from pyspark.sql import functions as F from pyspark.sql.types import StructType, StructField, StringType spark = SparkSession.builder.getOrCreate() data = [ ("1", "John", ("someField1", "equals", "100"), {"someField1": "100", "someField2": "120", "someField3": "150"}), ("2", "Jane", ("someField2", "greaterThan", "200"), {"someField1": "100", "someField2": "150", "someField3": "180"}) ] schema = StructType([ StructField("_id", StringType(), True), StructField("name", StringType(), True), StructField("precondition", StructType([ StructField("field", StringType(), True), StructField("matchingType", StringType(), True), StructField("matchingValue", StringType(), True) ]), True), StructField("businessTransaction", StructType([ StructField("someField1", StringType(), True), StructField("someField2", StringType(), True), StructField("someField3", StringType(), True) ]), True) ]) df = spark.createDataFrame(data, schema) df.printSchema() # root # |-- _id: string (nullable = true) # |-- name: string (nullable = true) # |-- precondition: struct (nullable = true) # | |-- field: string (nullable = true) # | |-- matchingType: string (nullable = true) # | |-- matchingValue: string (nullable = true) # |-- businessTransaction: struct (nullable = true) # | |-- someField1: string (nullable = true) # | |-- someField2: string (nullable = true) # | |-- someField3: string (nullable = true) def evaluate_precondition(row): field = row.precondition.field matching_type = row.precondition.matchingType matching_value = row.precondition.matchingValue actual_value = row.businessTransaction[field] if matching_type == "equals": return int(actual_value) == int(matching_value) elif matching_type == "greaterThan": return int(actual_value) > int(matching_value) elif matching_type == "lessThan": return int(actual_value) < int(matching_value) else: return False evaluate_precondition_udf = F.udf(evaluate_precondition) df_preconditions_checked = df.withColumn( "is_matching_precondition", evaluate_precondition_udf(F.struct("precondition", "businessTransaction")) ) df_preconditions_checked.show(truncate=False) # +---+----+------------------------------+-------------------+------------------------+ # |_id|name|precondition |businessTransaction|is_matching_precondition| # +---+----+------------------------------+-------------------+------------------------+ # |1 |John|{someField1, equals, 100} |{100, 120, 150} |true | # |2 |Jane|{someField2, greaterThan, 200}|{100, 150, 180} |false | # +---+----+------------------------------+-------------------+------------------------+ df_matching_preconditions = ( df_preconditions_checked .filter(F.col("is_matching_precondition") == True) .select(F.col("_id")).distinct() ) df_matching_preconditions.show() # +---+ # |_id| # +---+ # | 1| # +---+ To make this work we are passing a new struct column to the UDF and then fetching the values "dynamically" from it, inside the UDF. Also, you can expand the comparison operators as per your choice in the UDF.
2
1
78,716,778
2024-7-7
https://stackoverflow.com/questions/78716778/how-can-i-use-groupby-in-a-way-that-each-group-is-grouped-with-the-previous-over
My DataFrame: import pandas as pd df = pd.DataFrame( { 'a': list('xxxxxxxxxxyyyyyyyyy'), 'b': list('1111222333112233444') } ) Expected output is a list of groups: a b 0 x 1 1 x 1 2 x 1 3 x 1 4 x 2 5 x 2 6 x 2 a b 4 x 2 5 x 2 6 x 2 7 x 3 8 x 3 9 x 3 a b 10 y 1 11 y 1 12 y 2 13 y 2 a b 12 y 2 13 y 2 14 y 3 15 y 3 a b 14 y 3 15 y 3 16 y 4 17 y 4 18 y 4 Logic: Grouping starts with df.groupby(['a', 'b']) and then after that I want to join each group with its previous one which gives me the expected output. Maybe the initial grouping that I mentioned is not necessary. Note that in the expected output a column cannot contain both x and y. Honestly overlapping rows is not what I have used to do when using groupby. So I don't know how to try to do it. I tried df.b.diff() but It is not even close.
You can combine groupby, itertools.pairwise and concat: from itertools import pairwise out = [pd.concat([a[1], b[1]]) for a, b in pairwise(df.groupby(['a', 'b']))] Functional variant: from itertools import pairwise from operator import itemgetter out = list(map(pd.concat, pairwise(map(itemgetter(1), df.groupby(['a', 'b']))))) Note that you might need to use sort=False in groupby if you want to keep the original order. Output: [ a b 0 x 1 1 x 1 2 x 1 3 x 1 4 x 2 5 x 2 6 x 2, a b 4 x 2 5 x 2 6 x 2 7 x 3 8 x 3 9 x 3, a b 7 x 3 8 x 3 9 x 3 10 y 1 11 y 1, a b 10 y 1 11 y 1 12 y 2 13 y 2, a b 12 y 2 13 y 2 14 y 3 15 y 3, a b 14 y 3 15 y 3 16 y 4 17 y 4 18 y 4]
2
4
78,716,735
2024-7-7
https://stackoverflow.com/questions/78716735/scraping-table-from-web-page
I'm trying to scrape a table from a webpage using Selenium and BeautifulSoup but I'm not sure how to get to the actual data using BeautifulSoup. webpage: https://leetify.com/app/match-details/5c438e85-c31c-443a-8257-5872d89e548c/details-general I tried extracting table rows (tag <tr>) but when I call find_all, the array is empty. When I inspect element, I see several elements with a tr tag, why don't they show up with BeautifulSoup.find_all() ?? I tried extracting table rows (tag <tr>) but when I call find_all, the array is empty. Code: from selenium import webdriver from bs4 import BeautifulSoup driver = webdriver.Chrome() driver.get("https://leetify.com/app/match-details/5c438e85-c31c-443a-8257-5872d89e548c/details-general") html_source = driver.page_source soup = BeautifulSoup(html_source, 'html.parser') table = soup.find_all("tbody") print(len(table)) for entry in table: print(entry) print("\n")
why don't they show up with BeautifulSoup.find_all() ?? After taking a quick glance, it seems like it takes a long time for the page to load. The thing is, when you pass driver.page_source to BeautifulSoup, not all the HTML/CSS has loaded yet. So the solution would be to use an explicit wait: Wait until page is loaded with Selenium WebDriver for Python or even (less recommended): from time import sleep sleep(10) but I'm not 100% sure, since I don't currently have Selenium installed on my machine. However, I'd like to offer a completely different solution: if you take a look at your browser's network calls (press F12 in your browser to open the developer tools), you'll see that the data (the table) you're looking for is loaded by sending a GET request to their API. The endpoint is: https://api.leetify.com/api/games/5c438e85-c31c-443a-8257-5872d89e548c which you can view directly from your browser. So you can use the requests library to make a GET request to that endpoint directly, which will be much more efficient: import requests from pprint import pprint response = requests.get('https://api.leetify.com/api/games/5c438e85-c31c-443a-8257-5872d89e548c') data = response.json() pprint(data) Prints (truncated): {'agents': [{'gameFinishedAt': '2024-07-06T07:10:02.000Z', 'gameId': '5c438e85-c31c-443a-8257-5872d89e548c', 'id': '63e38340-d1ae-4e19-b51c-e278e3325bbb', 'model': 'customplayer_tm_balkan_variantk', 'steam64Id': '76561198062922849', 'teamNumber': 2}, {'gameFinishedAt': '2024-07-06T07:10:02.000Z', 'gameId': '5c438e85-c31c-443a-8257-5872d89e548c', 'id': 'e10f9fc4-759d-493b-a17f-a85db2fcd09d', 'model': 'customplayer_ctm_fbi_variantg', 'steam64Id': '76561198062922849', 'teamNumber': 3}, This approach bypasses the need to wait for the page to load, allowing you to access the data directly.
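A minimal sketch of the explicit-wait route mentioned above, in case you do want to stick with Selenium; the "tbody tr" selector and the 30-second timeout are assumptions and may need adjusting to the actual page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get("https://leetify.com/app/match-details/5c438e85-c31c-443a-8257-5872d89e548c/details-general")
# Wait (up to 30 s) until at least one table row is present in the DOM.
WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "tbody tr"))
)
soup = BeautifulSoup(driver.page_source, "html.parser")
print(len(soup.find_all("tr")))  # should now be non-zero once the table has rendered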
2
2
78,715,993
2024-7-6
https://stackoverflow.com/questions/78715993/how-do-i-add-legend-handles-in-matplotlib
I would like to add a legend to my Python plot, with a title and legend handles. My sincere apologies as a complete novice in Python, I got my code from a post. The code below works, but I want to add a legend. All the plots I have googled deal with line plots with several lines. import geopandas as gpd import matplotlib.pyplot as plt from datetime import date from mpl_toolkits.basemap import Basemap map_df = gpd.read_file("../Shapefiles/lso_adm_fao_mlgca_2019/lso_admbnda_adm1_FAO_MLGCA_2019.shx") risks_df=pd.read_csv("../Output/Wndrisks.csv") merged_df = map_df.merge(risks_df, left_on=["ADM1_EN"], right_on=["District"]) d = {1: "green", 2: "yellow", 3: "orange", 4: "red"} colors = map_df["ADM1_EN"].map(risks_df.set_index("District")["risk"].map(d)) ax = map_df.plot(color=colors, edgecolor="k", alpha=0.7, legend=True, legend_kwds={"label": "Risk Level", "orientation": "vertical"}) map = Basemap(projection='merc', llcrnrlon=26.5,llcrnrlat=-31.0,urcrnrlon=30.0,urcrnrlat=-28.5, epsg=4269) map.drawlsmask(land_color='grey',ocean_color='aqua',lakes=True) legend = plt.legend(handles=[one, two, three, four], title="Risk Levels", loc=4, fontsize='small', fancybox=True) plt.title(f"Strong Wind Risks 01-10Jun24", y=1.04) plt.tick_params( axis="both", # affect both the X and Y which="both", # get rid of both major and minor ticks top=False, # get rid of ticks top/bottom/left/right bottom=False, left=False, right=False, labeltop=False, # get rid of labels top/bottom/left/right labelbottom=False, labelleft=False, labelright=False) plt.axis("off") # Get rid of the border around the map plt.subplots_adjust(right=0.85) # Nudge the country to the left a bit plt.savefig('wndriskmap.png', dpi=300) plt.show() The data is for the form: "District","risk" "Berea",3 "Butha-Buthe",4 "Leribe",4 "Mafeteng",4 "Maseru",4 "Mohale's Hoek",4 "Mokhotlong",4 "Qacha's Nek",4 "Quthing",4 "Thaba-Tseka",4 The plot I get is as attached I can attach the shapefile if required. I want the legend to have a title "Risk Level" and the levels 1=no risk, 2=low risk, 3=medium risk and 4=high risk. what I have included in legend = plt.legend(...) does not work. Assistance will be appreciated.
You could map the risks to the labels and make a categorical plot : from matplotlib.colors import ListedColormap colors = {1: "green", 2: "yellow", 3: "orange", 4: "red"} # or a list labels = {1: "no risk", 2: "low risk", 3: "medium risk", 4: "high risk"} catego = map_df["risk"].astype(str).str.cat(map_df["risk"].map(labels), sep="- ") fig, ax = plt.subplots(figsize=(5, 5)) map_df.plot( column=catego, categorical=True, edgecolor="k", alpha=0.7, cmap=ListedColormap(colors.values()), legend=True, legend_kwds={ "title": "Risk Level", "shadow": True, "loc": "lower right", }, ax=ax, ) ax.set_axis_off() ax.set_title("Strong Winds in 10-D Risks 01-10Jun") NB: I used the risks as a prefix/workaround to preserve the labels' order in the legend.
3
1
78,715,315
2024-7-6
https://stackoverflow.com/questions/78715315/filter-openstreetmap-edges-on-surface-type
I'm accessing OpenStreetMap data using osmnx, using: import osmnx as ox graph = ox.graph_from_place('Bennekom') nodes, edges = ox.graph_to_gdfs(graph) I know from openstreetmap.org/edit that all (?) street features have an attribute Surface, which can be Unpaved, Asphalt, Gravel, et cetera. However, that info is not included in the GeoDataFrames, so I cannot filter or select certain surface types. Is that somehow possible?
You need to include the surface key in the useful_tags_way : import osmnx as ox ox.settings.useful_tags_way = ["surface"] # << add this line graph = ox.graph_from_place("Bennekom") nodes, edges = ox.graph_to_gdfs(graph) NB: You might need to explode the surface column in order to get all matches when filtering. print( edges.iloc[:, :-1] .sample(frac=1, random_state=111) .to_string(max_rows=10, max_cols=5, index=False) ) osmid surface oneway reversed length [1213295165, 6920271] [paving_stones, asphalt] False True 34.747 6920805 asphalt False True 69.793 189224820 asphalt False True 21.313 701436450 NaN False False 14.168 6920602 paving_stones False False 67.469 ... ... ... ... ... [1281296947, 604729524] asphalt False True 92.612 173209511 NaN False True 84.315 604729512 NaN False True 157.071 857498475 NaN False False 17.064 365986402 NaN False True 14.977
2
2
78,714,232
2024-7-6
https://stackoverflow.com/questions/78714232/how-to-convert-binary-to-string-uuid-without-udf-in-apache-spark-pyspark
I can't find a way to convert a binary to a string representation without using a UDF. Is there a way with native PySpark functions and not a UDF? from pyspark.sql import DataFrame, SparkSession import pyspark.sql.functions as F import uuid from pyspark.sql.types import Row, StringType spark_test_instance = (SparkSession .builder .master('local') .getOrCreate()) df: DataFrame = spark_test_instance.createDataFrame([Row()]) df = df.withColumn("id", F.lit(uuid.uuid4().bytes)) df = df.withColumn("length", F.length(df["id"])) uuidbytes_to_str = F.udf(lambda x: str(uuid.UUID(bytes=bytes(x), version=4)), StringType()) df = df.withColumn("id_str", uuidbytes_to_str(df["id"])) df = df.withColumn("length_str", F.length(df["id_str"])) df.printSchema() df.show(1, truncate=False) gives: root |-- id: binary (nullable = false) |-- length: integer (nullable = false) |-- id_str: string (nullable = true) |-- length_str: integer (nullable = false) +-------------------------------------------------+------+------------------------------------+----------+ |id |length|id_str |length_str| +-------------------------------------------------+------+------------------------------------+----------+ |[0A 35 DC 67 13 C8 47 7E B0 80 9F AB 98 CA FA 89]|16 |0a35dc67-13c8-477e-b080-9fab98cafa89|36 | +-------------------------------------------------+------+------------------------------------+----------+
You can use hex for getting the id_str: from pyspark.sql import SparkSession import pyspark.sql.functions as F import uuid spark = SparkSession.builder.getOrCreate() data = [(uuid.uuid4().bytes,)] df = spark.createDataFrame(data, ["id"]) df = df.withColumn("id_str", F.lower(F.hex("id"))) df.show(truncate=False) # +-------------------------------------------------+--------------------------------+ # |id |id_str | # +-------------------------------------------------+--------------------------------+ # |[8C 76 42 18 BD CA 47 A1 9C D7 9D 74 0C 3C A4 76]|8c764218bdca47a19cd79d740c3ca476| # +-------------------------------------------------+--------------------------------+ Update: You can also use regexp_replace to get the exact id_str, in the same format you have shared, as follows: from pyspark.sql import SparkSession import pyspark.sql.functions as F import uuid spark = SparkSession.builder.getOrCreate() data = [(uuid.uuid4().bytes,)] df = spark.createDataFrame(data, ["id"]) df = df.withColumn( "id_str", F.regexp_replace( F.lower(F.hex("id")), "(.{8})(.{4})(.{4})(.{4})(.{12})", "$1-$2-$3-$4-$5" ) ) df.show(truncate=False) # +-------------------------------------------------+------------------------------------+ # |id |id_str | # +-------------------------------------------------+------------------------------------+ # |[F8 25 99 3E 2D 7A 40 E4 A1 24 C0 28 B9 30 F6 03]|f825993e-2d7a-40e4-a124-c028b930f603| # +-------------------------------------------------+------------------------------------+
3
2
78,712,629
2024-7-5
https://stackoverflow.com/questions/78712629/gpt-langchain-experimental-agent-allow-dangerous-code
I'm creating a chatbot in VS Code where it will receive csv file through a prompt on Streamlit interface. However from the moment that file is loaded, it is showing a message with the following content: ValueError: This agent relies on access to a python repl tool which can execute arbitrary code. This can be dangerous and requires a specially sandboxed environment to be safely used. Please read the security notice in the doc-string of this function. You must opt-in to use this functionality by setting allow_dangerous_code=True.For general security guidelines, please see: https://python.langchain.com/v0.2/docs/security/ Traceback File "c:\Users\ \langchain-ask-csv\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script exec(code, module.__dict__) File "C:\Users\ \langchain-ask-csv\main.py", line 46, in <module> main() File "C:\Users\ \langchain-ask-csv\main.py", line 35, in main agent = create_csv_agent( OpenAI(), csv_file, verbose=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Users\ \langchain-ask-csv\.venv\Lib\site-packages\langchain_experimental\agents\agent_toolkits\csv\base.py", line 66, in create_csv_agent return create_pandas_dataframe_agent(llm, df, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Users\ T\langchain-ask-csv\.venv\Lib\site-packages\langchain_experimental\agents\agent_toolkits\pandas\base.py", line 248, in create_pandas_dataframe_agent raise ValueError( Here's is part of the code where I'm passing the file: def main(): load_dotenv() # Load the OpenAI API key from the environment variable if os.getenv("OPENAI_API_KEY") is None or os.getenv("OPENAI_API_KEY") == "": print("OPENAI_API_KEY is not set") exit(1) else: print("OPENAI_API_KEY is set") st.set_page_config(page_title="Ask your CSV") st.header("Ask your CSV πŸ“ˆ") csv_file = st.file_uploader("Upload a CSV file", type="csv") if csv_file is not None: agent = create_csv_agent( OpenAI(), csv_file, verbose=True) user_question = st.text_input("Ask a question about your CSV: ") if user_question is not None and user_question != "": with st.spinner(text="In progress..."): st.write(agent.run(user_question)) if __name__ == "__main__": main() I checked the link given as suggestion and also tried to search on similar reports but haven't had success. What might be wrong and how to fix it?
The referenced security notice is in https://api.python.langchain.com/en/latest/agents/langchain_experimental.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html. Just do what the message tells you. Do a security analysis, create a sandbox environment for your thing to run in, and then add allow_dangerous_code=True to the arguments you pass to create_csv_agent, which just forwards the argument to create_pandas_dataframe_agent and run it in the sandbox.
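A minimal sketch of that change, assuming the rest of the setup from the question stays the same; only run this inside the sandboxed environment you have prepared:
agent = create_csv_agent(
    OpenAI(),
    csv_file,
    verbose=True,
    allow_dangerous_code=True,  # opt-in flag forwarded to create_pandas_dataframe_agent
)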
2
2
78,709,252
2024-7-5
https://stackoverflow.com/questions/78709252/recursive-types-in-python-and-difficulties-inferring-the-type-of-typex
Trying to build recursive types to annotate a nested data structure, I hit the following. This code is correct according to mypy: IntType = int | list["IntType"] | tuple["IntType", ...] StrType = str | list["StrType"] | tuple["StrType", ...] def int2str(x: IntType) -> StrType: if isinstance(x, list): return list(int2str(v) for v in x) if isinstance(x, tuple): return tuple(int2str(v) for v in x) return str(x) But not this one, which should be equivalent: IntType = int | list["IntType"] | tuple["IntType", ...] StrType = str | list["StrType"] | tuple["StrType", ...] def bad_int2str(x: IntType) -> StrType: if isinstance(x, (list, tuple)): return type(x)(bad_int2str(v) for v in x) # error here return str(x) The error message is line 6: error: Incompatible return value type ( got "list[int | list[IntType] | tuple[IntType, ...]] | tuple[int | list[IntType] | tuple[IntType, ...], ...]", expected "str | list[StrType] | tuple[StrType, ...]" ) [return-value] line 6: error: Generator has incompatible item type "str | list[StrType] | tuple[StrType, ...]"; expected "int | list[IntType] | tuple[IntType, ...]" [misc] I would assume mypy could infer that type(x) is either list or tuple. Is this a limitation of mypy or is there something fishy with this code? If so, where does the limitation come from?
There's no type erasure for type(x). What should mypy say about the following? x: list[int] = [1] reveal_type(type(x)) If we ask, it says: Revealed type is "type[builtins.list[builtins.int]]" So, when you ask for type(x)(some_strtype_iterator), it rightfully complains that you try to construct a list[IntType] | tuple[IntType, ...] from an iterable of StrType. Both errors hint at that fact: mypy thinks that you return a list or tuple of IntType, because the error inside the expression does not make it lose an already known type. And it also points out that you can't build a list/tuple of IntType from a generator yielding StrType - you didn't intend to build an IntType sequence, but type(x) mandates that. I just started a discussion on the typing forum to clarify the reasons for not erasing the generics for type(x).
2
2
78,706,223
2024-7-4
https://stackoverflow.com/questions/78706223/how-to-efficiently-compute-running-geometric-mean-of-a-numpy-array
Rolling arithmetic mean can simply be computed with Numpy's 'convolve' function, but how could I efficiently create an array of running geometric means of some array a and a given window size? To give an example, for an array: [0.5 , 2.0, 4.0] and window size 2, (with window size decreasing at the edges) I want to quickly generate the array: [0.5, 1.0, 2.83, 4.0]
import numpy as np from numpy.lib.stride_tricks import sliding_window_view from scipy.stats import gmean window = 2 a = [0.5, 2.0, 4.0] padded = np.pad(a, window - 1, mode="constant", constant_values=np.nan) windowed = sliding_window_view(padded, window) result = gmean(windowed, axis=1, nan_policy="omit") print(result) # >>> [0.5 1. 2.82842712 4. ] Use the built-in gmean() from scipy. Pad with nan, which, in combination with gmean(…, nan_policy="omit"), produces decreasing window sizes at the boundaries. Use sliding_window_view() to create the running result. Put everything together (see above). If you don't need the decreasing window sizes at the boundaries (this is referring to your comment), you can skip the padding step and choose the nan_policy that suits you best. Update: Realizing that gmean() provides a weights argument, we can replace the nan padding with an equivalent weights array (1 for the actual values, 0 for the padded values), and then we are free again to choose the nan_policy of our liking, even in the case of decreasing window sizes at the boundaries. This means we could write: padded = np.pad(a, window - 1, mode="constant", constant_values=1.) windowed = sliding_window_view(padded, window) weights = sliding_window_view( np.pad(np.ones_like(a), window - 1, mode="constant", constant_values=0.), window) result = gmean(windowed, axis=1, weights=weights) – which will produce exactly the same result as above. My gut feeling tells me that the original version is faster, but I did not do any speed tests.
2
3
78,711,101
2024-7-5
https://stackoverflow.com/questions/78711101/change-mantissa-in-scientific-notation-from-0-1-instead-of-1-10
I want to format a number so that the mantissa is between 0 and 1 instead of 1 and 10 in scientific notation. For example: a=7.365 print("{:.6e}".format(a)) This will output 7.365000e+00, but I want it to be 0.736500e+01.
I suppose you could always try constructing your own preformatted string. (No idea if this works in Python 2.7, though). import math def fixit( x, n ): if x == 0.0: return f" {x:.{n}e}" s = ' ' if x >= 0 else '-' y = math.log10( abs(x) ) m = math.floor(y) + 1 z = 10 ** ( y - m ) return s + f"{z:.{n}f}e{m:+03d}" for x in [ -7635, -763.5, -76.35, -7.635, -0.7635, -0.07635, -0.007635, 0.007635, 0.07635, 0.7635, 7.635, 76.35, 763.5, 7635 ]: print( fixit( x, 6 ) ) for x in [ -10, -1, -0.1, -0.01, 0.0, 0.01, 0.1, 1, 10 ]: print( fixit( x, 6 ) ) Output: -0.763500e+04 -0.763500e+03 -0.763500e+02 -0.763500e+01 -0.763500e+00 -0.763500e-01 -0.763500e-02 0.763500e-02 0.763500e-01 0.763500e+00 0.763500e+01 0.763500e+02 0.763500e+03 0.763500e+04 -0.100000e+02 -0.100000e+01 -0.100000e+00 -0.100000e-01 0.000000e+00 0.100000e-01 0.100000e+00 0.100000e+01 0.100000e+02
2
1
78,710,552
2024-7-5
https://stackoverflow.com/questions/78710552/unexpected-generator-behaviour-when-not-assigned-to-a-variable
Could someone explain the difference between these two executions? Here is my generator function: def g(n): try: yield n print("first") except BaseException as e: print(f"exception {e}") raise e finally: print("second") When I execute: >>> a = next(g(2)) exception second Could someone explain why a = next(g(2)) raises an exception and what is the difference with the below execution? >>> x = g(2) >>> next(x) 2 My expectation was that when I execute a = next(g(2)) the function g(2) returns a generator and next() returns the first yield of the generator but apparently an exception is raised. Why is this happening?
In the first scenario, the generator is garbage collected right after next(), which closes it: a GeneratorExit is raised inside the generator, your except block prints it, and the finally block runs. In the second, the generator is kept alive by x, preventing this and allowing normal operation. Edit: a GeneratorExit exception is thrown into the generator when it is closed; after that, the finally block is executed. Also, from the docs: If the generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), the generator-iterator’s close() method will be called, allowing any pending finally clauses to execute.
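A small sketch illustrating the difference, using the g() from the question: keeping a reference to the generator prevents it from being finalized right away.
gen = g(2)      # the generator object stays referenced
a = next(gen)   # yields 2; nothing extra is printed yet
print(a)        # 2
gen.close()     # only now GeneratorExit is thrown: prints "exception " and then "second"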
3
4
78,710,347
2024-7-5
https://stackoverflow.com/questions/78710347/pyspark-join-fields-in-json-to-a-dataframe
I am trying to pull out some fields from a JSONn string into a dataframe. I can achieve this by put each field in a dataframe then join all the dataframes like below. But is there some easier way to do this? Because this is just an simplified example and I have a lot more fields to extract in my project. from pyspark.sql import Row s = '{"job_id":"123","settings":{"task":[{"taskname":"task1"},{"taskname":"task2"}]}}' json_object = json.loads(s) # json_object job_id_l = [Row(job_id=json_object['job_id'])] job_id_df = spark.createDataFrame(job_id_l) # display(job_id_df) tasknames = [] for t in json_object['settings']["task"]: tasknames.append(Row(taskname=t["taskname"])) tasknames_df = spark.createDataFrame(tasknames) # display(tasknames_df) job_id_df.crossJoin(tasknames_df).display() Result: job_id taskname 123 task1 123 task2
Update 1: You don't even have to explicitly define the schema here; instead, you may simply use schema_of_json as follows: from pyspark.sql import SparkSession from pyspark.sql.functions import from_json, col, explode, schema_of_json, lit spark = SparkSession.builder.getOrCreate() s = '{"job_id":"123","settings":{"task":[{"taskname":"task1"},{"taskname":"task2"}]}}' schema = schema_of_json(lit(s)) result_df = ( spark.createDataFrame([s], "string") .select(from_json(col("value"), schema).alias("data")) .select("data.job_id", explode("data.settings.task.taskname").alias("taskname")) ) result_df.show() # +------+--------+ # |job_id|taskname| # +------+--------+ # | 123| task1| # | 123| task2| # +------+--------+ As you mentioned that there are a lot more fields, this will take some work off your hands. Original answer: An easier way is to mirror the schema of the JSON string s and use from_json as follows: from pyspark.sql import SparkSession from pyspark.sql.functions import from_json, col, explode spark = SparkSession.builder.getOrCreate() s = '{"job_id":"123","settings":{"task":[{"taskname":"task1"},{"taskname":"task2"}]}}' schema = "struct<job_id:string, settings:struct<task:array<struct<taskname:string>>>>" result_df = ( spark.createDataFrame([s], "string") .select(from_json(col("value"), schema).alias("data")) .select("data.job_id", explode("data.settings.task.taskname").alias("taskname")) ) result_df.show() # +------+--------+ # |job_id|taskname| # +------+--------+ # | 123| task1| # | 123| task2| # +------+--------+
2
1
78,705,284
2024-7-4
https://stackoverflow.com/questions/78705284/using-variable-out-of-nested-function
class Solution: def isBalanced(self, root: Optional[TreeNode]) -> bool: balanced = [True] def node_height(root): if not root or not balanced[0]: return 0 left_height = node_height(root.left) right_height = node_height(root.right) if abs(left_height - right_height) > 1: balanced[0] = False return 0 return 1 + max(left_height, right_height) node_height(root) return balanced[0] I understand why the code above works but when I change the value of variable 'balanced' to balanced = True instead of balanced = [True] in line 3 and change balanced[0] to balanced in line 6, 13 and 19, I get an error. class solution: def func(): a = 5 b = 7 c = True def nested_func(): return (a + b, c) return nested_func() print('sum is:', func()) So I tried the code above to test (maybe nested function cannot get variable that has boolean as a value) but was able to get the result of 'sum is: (12, True)' which shows that nested functions are able to get variables outside of them. Could someone explain this?
Nested functions can access variables from their parent functions. However, in your first case, you are not only accessing the value of the balanced variable, but also attempting to modify it. Whenever you assign to a variable inside a nested function, Python treats that name as local to the nested function, even if a variable with the same name exists in the parent function. So in this code def node_height(root): if not root or not balanced: return 0 left_height = node_height(root.left) right_height = node_height(root.right) if abs(left_height - right_height) > 1: balanced = False # You are creating the local variable here, # but you are trying to access it above in the first if statement. # That is why you are getting the UnboundLocalError return 0 return 1 + max(left_height, right_height) As for the list approach: lists are mutable, so you can update them in place inside the nested function without rebinding the name, and the change is visible in every scope, which is why it works. And in this example, as already answered by Aemyl and mentioned above ("Nested functions can access variables from their parent functions"), you are only reading the values of the parent's variables, not rebinding them, so you don't get any error. class solution: def func(): a = 5 b = 7 c = True def nested_func(): return (a + b, c) return nested_func() print('sum is:', func())
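A minimal sketch of the other common fix (besides the mutable-list trick): declare the name nonlocal so the nested function rebinds the enclosing variable instead of creating its own local one. The TreeNode plumbing from the original is omitted for brevity:
def is_balanced(root) -> bool:
    balanced = True

    def node_height(node):
        nonlocal balanced          # rebind the variable from the enclosing scope
        if not node or not balanced:
            return 0
        left = node_height(node.left)
        right = node_height(node.right)
        if abs(left - right) > 1:
            balanced = False
            return 0
        return 1 + max(left, right)

    node_height(root)
    return balanced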
4
5
78,708,646
2024-7-4
https://stackoverflow.com/questions/78708646/how-to-translate-this-sql-line-to-work-on-flask-sqlalchemy
I need some help to translate a sql command to work on Flask. I am trying to filter the results of a query. This command below do what I need in mysql: select t.date, t.name from `capacitydata`.`allflash_dev` t inner join ( select name, max(date) as MaxDate from `capacitydata`.`allflash_dev` group by name) tm on t.name = tm.name and t.date = tm.MaxDate; But I am trying without success to do the same on Flask. So far I got the query working, but showing all the lines # Creating Models class Block(db.Model): __tablename__ = "allflash_dev" index = db.Column(db.Date, nullable=False, unique=True, primary_key=True) date = db.Column(db.Date, nullable=False) name = db.Column(db.String(45), nullable=False) raw = db.Column(db.String(45), nullable=False) free = db.Column(db.String(45), nullable=False) frep = db.Column(db.String(45), nullable=False) util = db.Column(db.String(45), nullable=False) utip = db.Column(db.String(45), nullable=False) def create_db(): with app.app_context(): db.create_all() # Home route @app.route("/") def block(): details = Block.query.order_by(Block.date.desc()) return render_template("block.html", details=details) Thanks in advance. I tried something like this but didn't work: details = db.select([Block.name, db.func.max(Block.date)]).group_by(Block.date, Block.name)
You can use the following; execute the statement with your Flask-SQLAlchemy session/query. from sqlalchemy import func, select from sqlalchemy.orm import aliased allflash_dev = aliased(Block) tm = ( select(allflash_dev.name, func.max(allflash_dev.date).label("MaxDate")) .group_by(allflash_dev.name) .subquery() ) statement = select(Block.date, Block.name).join( tm, (Block.name == tm.c.name) & (Block.date == tm.c.MaxDate) ) This generates the following SQL: SELECT allflash_dev.date, allflash_dev.name FROM allflash_dev INNER JOIN ( SELECT allflash_dev_1.name AS name, max(allflash_dev_1.date) AS `MaxDate` FROM allflash_dev AS allflash_dev_1 GROUP BY allflash_dev_1.name ) AS anon_1 ON allflash_dev.name = anon_1.name AND allflash_dev.date = anon_1.`MaxDate`
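A short usage sketch inside the Flask route, assuming the statement built above and SQLAlchemy 1.4+/2.0-style execution on the Flask-SQLAlchemy session; the template would then read row.date and row.name:
@app.route("/")
def block():
    rows = db.session.execute(statement).all()   # list of Row objects with .date and .name
    return render_template("block.html", details=rows)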
2
1
78,709,058
2024-7-4
https://stackoverflow.com/questions/78709058/subtracting-pandas-series-from-all-elements-of-another-pandas-series-with-a-comm
I have a pandas series.groupby objects, call it data. If I print out the elements, it looks like this: <pandas.core.groupby.generic.SeriesGroupBy object at ***> (1, 0 397.44 1 12.72 2 422.40 Name: value, dtype: float64) (2, 3 398.88 4 6.48 5 413.52 Name: value, dtype: float64) (3, 6 398.40 7 68.40 8 18.96 9 56.64 10 406.56 Name: value, dtype: float64) (4, 11 398.64 12 14.64 13 413.76 Name: value, dtype: float64) ... I want to make an equivalent object, where the entries are the cumulative sum of each sublist in the series, minus the first entry of that list. So, for example, the first element would become: (1, 0 0 #(= 397.44 - 397.44) 1 12.72 #(= 397.44 + 12.72 - 397.44) 2 435.12 #(= 397.44 + 12.72 + 422.40 - 397.44) I can get the cumulative sum easily enough using apply: cumulative_sums = data.apply(lambda x: x.cumsum()) but when I try to subtract the first element of the list in what I would think of as the intuitive way (lambda x: x.cumsum()-x[0]) , I get a KeyError. How can I achieve what I am trying to do?
Try: cumulative_sums = data.apply(lambda x: x.cumsum() - x.iat[0]) print(cumulative_sums) Prints: a b 1 0 0.00 1 12.72 2 435.12 2 3 0.00 4 6.48 5 420.00 3 6 0.00 7 68.40 8 87.36 9 144.00 10 550.56 Name: value, dtype: float64
2
1
78,708,937
2024-7-4
https://stackoverflow.com/questions/78708937/optimize-this-python-code-that-involves-matrix-inversion
I have this line of code that involves a matrix inversion: X = A @ B @ np.linalg.pinv(S) A is an n by n matrix, B is an n by m matrix, and S is an m by m matrix. m is smaller than n but usually not orders of magnitude smaller. Usually m is about half of n. S is a symmetrical positive definite matrix. How do I make this line of code run faster in Python? I can do X = np.linalg.solve(S.T, (A@B).T).T But I am also curious if I can take advantage of the fact that S is symmetrical.
So your problem is XS = AB = C. As you've stated, this can be rewritten as S'X' = B'A' = C'. C is of size n x m, so C' supplies n right-hand-side columns, and this batched problem can be solved using scipy.linalg.solve. In this case, I recommend the scipy alternative (rather than numpy) because you have stated that S is symmetric, so you can pass the assume_a="sym" argument so that scipy selects a solver that takes advantage of the matrix structure. So, your code will look like this: X = scipy.linalg.solve(S.T, (A@B).T, assume_a="sym").T
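A self-contained sketch of this suggestion with made-up shapes, just to check it against the pinv version:
import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
M = rng.standard_normal((m, m))
S = M @ M.T + m * np.eye(m)                        # symmetric positive definite

X = scipy.linalg.solve(S.T, (A @ B).T, assume_a="sym").T
print(np.allclose(X, A @ B @ np.linalg.pinv(S)))   # True
Since S is also positive definite, assume_a="pos" (a Cholesky-based solve) is another option worth trying.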
2
2
78,706,643
2024-7-4
https://stackoverflow.com/questions/78706643/how-to-speed-up-the-interpolation-for-this-particular-example
I made a script that performs tri-linear interpolation on a set of points using pandas for data handling and Numba for computational efficiency. Currently, it requires $\mathcal{O}(1) \text{ s}$ if considering $10^{5}$ points. This is the code, assuming some test tabulated data: import numpy as np import pandas as pd from numba import jit # Define the symbolic function def custom_function(x, y, z): return np.sin(y) * np.cos(3 * y)**(1 + 5 * x) * np.exp(-np.sqrt(z**2 + x**2) * np.cos(3 * y) / 20) / z # Define the grid ranges x_range = np.arange(0.5, 5.5, 0.5) y_range = np.logspace(np.log10(0.0001), np.log10(0.1), int((np.log10(0.1) - np.log10(0.0001)) / 0.1) + 1) z_range = np.arange(0.5, 101, 5) # Generate the DataFrame data = {'x': [], 'y': [], 'z': [], 'f': []} for x in x_range: for y in y_range: for z in z_range: data['x'].append(x) data['y'].append(y) data['z'].append(z) data['f'].append(custom_function(x, y, z)) df = pd.DataFrame(data) # Define the tri-linear interpolation function using Numba @jit(nopython=True, parallel=True) def trilinear_interpolation(rand_points, grid_x, grid_y, grid_z, distr): results = np.empty(len(rand_points)) len_y, len_z = grid_y.shape[0], grid_z.shape[0] for i in range(len(rand_points)): x, y, z = rand_points[i] idx_x1 = np.searchsorted(grid_x, x) - 1 idx_x2 = idx_x1 + 1 idx_y1 = np.searchsorted(grid_y, y) - 1 idx_y2 = idx_y1 + 1 idx_z1 = np.searchsorted(grid_z, z) - 1 idx_z2 = idx_z1 + 1 idx_x1 = max(0, min(idx_x1, len(grid_x) - 2)) idx_x2 = max(1, min(idx_x2, len(grid_x) - 1)) idx_y1 = max(0, min(idx_y1, len_y - 2)) idx_y2 = max(1, min(idx_y2, len_y - 1)) idx_z1 = max(0, min(idx_z1, len_z - 2)) idx_z2 = max(1, min(idx_z2, len_z - 1)) x1, x2 = grid_x[idx_x1], grid_x[idx_x2] y1, y2 = grid_y[idx_y1], grid_y[idx_y2] z1, z2 = grid_z[idx_z1], grid_z[idx_z2] z111 = distr[idx_x1, idx_y1, idx_z1] z211 = distr[idx_x2, idx_y1, idx_z1] z121 = distr[idx_x1, idx_y2, idx_z1] z221 = distr[idx_x2, idx_y2, idx_z1] z112 = distr[idx_x1, idx_y1, idx_z2] z212 = distr[idx_x2, idx_y1, idx_z2] z122 = distr[idx_x1, idx_y2, idx_z2] z222 = distr[idx_x2, idx_y2, idx_z2] xd = (x - x1) / (x2 - x1) yd = (y - y1) / (y2 - y1) zd = (z - z1) / (z2 - z1) c00 = z111 * (1 - xd) + z211 * xd c01 = z112 * (1 - xd) + z212 * xd c10 = z121 * (1 - xd) + z221 * xd c11 = z122 * (1 - xd) + z222 * xd c0 = c00 * (1 - yd) + c10 * yd c1 = c01 * (1 - yd) + c11 * yd result = c0 * (1 - zd) + c1 * zd results[i] = np.exp(result) return results # Provided x value fixed_x = 2.5 # example provided x value # Random points for which we need to perform tri-linear interpolation num_rand_points = 100000 # Large number of random points rand_points = np.column_stack(( np.full(num_rand_points, fixed_x), np.random.uniform(0.0001, 0.1, num_rand_points), np.random.uniform(0.5, 101, num_rand_points) )) # Prepare the grid and distribution values grid_x = np.unique(df['x']) grid_y = np.unique(df['y']) grid_z = np.unique(df['z']) distr = np.zeros((len(grid_x), len(grid_y), len(grid_z))) for i in range(len(df)): ix = np.searchsorted(grid_x, df['x'].values[i]) iy = np.searchsorted(grid_y, df['y'].values[i]) iz = np.searchsorted(grid_z, df['z'].values[i]) distr[ix, iy, iz] = df['f'].values[i] # Perform tri-linear interpolation interpolated_values = trilinear_interpolation(rand_points, grid_x, grid_y, grid_z, distr) # Display the results for point, value in zip(rand_points[:10], interpolated_values[:10]): print(f"Point {point}: Interpolated value: {value}") I am wondering if there are any optimization techniques or best practices 
that I can apply to further speed up this code, especially given that all x-values are fixed. Any suggestions or advice would be greatly appreciated!
First of all, grid_x, grid_y and grid_z are small so a binary search is not the most efficient way to find a value. A basic linear search is faster for small arrays. Here is an implementation: @nb.njit('(float64[::1], float64)', inline='always') def searchsorted_opt(arr, val): i = 0 while i < arr.size and val > arr[i]: i += 1 return i When the array is as significantly more items, then you can start at the middle of the array and skip 1 item over N (typically with a small N). When the array is huge, then a binary search becomes a fast solution. One can build an index to avoid cache misses or used cache-friendly data structures like B-tree. In practice, I do not expect such a data structure to be useful in your case since you operate on a 3D grid so the 3 arrays should certainly always be rather small. An alternative solution is to build a lookup-table (LUT) based on values in the grid_* arrays. For items following a uniform distribution, you can do something like idx = LUT[int(searchedValue * stride + offset)]. In other cases, you can compute a polynomial correction before the integer conversion so for the LUT access to be uniform and for the LUT to stay small. For smooth functions, you can directly compute the function or a polynomial approximation of it and then truncate the result -- no need for a LUT. But again, this only worth it if the grid_* arrays are significantly bigger. Moreover, your code does not currently benefit from multiple threads. You need to explicitly use prange instead of range as pointed out by max9111 in comments. Finally, you can specify a signature so to avoid a possible lazy compilation time, as pointed out by dankal444. Here is the resulting code: import numba as nb @nb.njit('(float64[:,::1], float64[::1], float64[::1], float64[::1], float64[:,:,::1])', parallel=True) def trilinear_interpolation(rand_points, grid_x, grid_y, grid_z, distr): results = np.empty(len(rand_points)) len_y, len_z = grid_y.shape[0], grid_z.shape[0] for i in nb.prange(len(rand_points)): x, y, z = rand_points[i] idx_x1 = searchsorted_opt(grid_x, x) - 1 idx_x2 = idx_x1 + 1 idx_y1 = searchsorted_opt(grid_y, y) - 1 idx_y2 = idx_y1 + 1 idx_z1 = searchsorted_opt(grid_z, z) - 1 idx_z2 = idx_z1 + 1 idx_x1 = max(0, min(idx_x1, len(grid_x) - 2)) idx_x2 = max(1, min(idx_x2, len(grid_x) - 1)) idx_y1 = max(0, min(idx_y1, len_y - 2)) idx_y2 = max(1, min(idx_y2, len_y - 1)) idx_z1 = max(0, min(idx_z1, len_z - 2)) idx_z2 = max(1, min(idx_z2, len_z - 1)) x1, x2 = grid_x[idx_x1], grid_x[idx_x2] y1, y2 = grid_y[idx_y1], grid_y[idx_y2] z1, z2 = grid_z[idx_z1], grid_z[idx_z2] z111 = distr[idx_x1, idx_y1, idx_z1] z211 = distr[idx_x2, idx_y1, idx_z1] z121 = distr[idx_x1, idx_y2, idx_z1] z221 = distr[idx_x2, idx_y2, idx_z1] z112 = distr[idx_x1, idx_y1, idx_z2] z212 = distr[idx_x2, idx_y1, idx_z2] z122 = distr[idx_x1, idx_y2, idx_z2] z222 = distr[idx_x2, idx_y2, idx_z2] xd = (x - x1) / (x2 - x1) yd = (y - y1) / (y2 - y1) zd = (z - z1) / (z2 - z1) c00 = z111 * (1 - xd) + z211 * xd c01 = z112 * (1 - xd) + z212 * xd c10 = z121 * (1 - xd) + z221 * xd c11 = z122 * (1 - xd) + z222 * xd c0 = c00 * (1 - yd) + c10 * yd c1 = c01 * (1 - yd) + c11 * yd result = c0 * (1 - zd) + c1 * zd results[i] = np.exp(result) return results Note that np.exp can be computed faster using both the SIMD-friendly library Intel SVML (only for x86-64 CPUs) and multiple threads, by moving np.exp away of the loop and computing it in a second step (while making sure SVML can be using by Numba on the target platform). 
That being said, the speed-up should be small since np.exp only takes a small fraction of the execution time. The provided code is 6.7 times faster on my i5-9600KF CPU with 6 cores. I do not think this can be optimized further using Numba on mainstream CPUs (besides using the aforementioned methods). At least, it is certainly not possible with the current target input (especially since everything fits in the L3 cache on my machine and distr even fits in the L2 cache).
2
2
78,707,895
2024-7-4
https://stackoverflow.com/questions/78707895/type-of-iterator-is-any-in-zip
The following script: from collections.abc import Iterable, Iterator class A(Iterable): _list: list[int] def __init__(self, *args: int) -> None: self._list = list(args) def __iter__(self) -> Iterator[int]: return iter(self._list) a = A(1, 2, 3) for i in a: reveal_type(i) for s, j in zip("abc", a): reveal_type(j) yields the following mypy output: $ mypy test.py test.py:17: note: Revealed type is "builtins.int" test.py:20: note: Revealed type is "Any" Success: no issues found in 1 source file Why is the type Any when iterating on zip, but not on the object directly? Note, subclassing class A(Iterable[int]) does allow for correct type resolution, but that's not the question here ;)
Comparing for loop and zip object is not an apples-to-apples comparison. For loop How is for i in X checked? At least in current mypy source, for loop is checked by looking up __iter__ signature on the type of X and using its return type. So it doesn't look at your Iterable inheritance, it finds the nearest __iter__ in MRO which happens to be the one you provided. Why is it implemented this way? For loop is a language native construct, defined at the syntactical level - in some sense it's more fundamental than existence of some protocol types encapsulating iteration behaviour. zip Now, zip is just a class. It defines its own __iter__ method, so for a,b in zip(A, B) will be checked according to its return type. This class is generic in iterator type, supporting up to 5 distinct iterables and falling back to general case for 6 and more. Here's what it looks like: typeshed permalink. There is no black magic involved. I'll quote the relevant part: class zip(Iterator[_T_co]): ... @overload def __new__(cls, iter1: Iterable[_T1], /, *, strict: bool = ...) -> zip[tuple[_T1]]: ... @overload def __new__(cls, iter1: Iterable[_T1], iter2: Iterable[_T2], /, *, strict: bool = ...) -> zip[tuple[_T1, _T2]]: ... .... def __iter__(self) -> Self: ... def __next__(self) -> _T_co: ... (there are version-dependent branches and more overloads, I'll omit them here for brewity) You hit the two-arg overload. It looks for _T1 and _T2 by "solving" the following equation: Iterable[_T2] = A Here's the culprit: you declare that A is an Iterable[Any]. Omitting generic type vars is fully equivalent to providing Any instead. There is no room for type checker inference. So mypy happily says "wow, I already know that A <: Iterable[Any], hence _T2 = Any, next". It does not touch the MRO and does not look into Iterable semantics at all: it just wants to know what type var should parameterize it. At this point, mypy already knows that A is a subtype of Iterable[Any] - you told it so! Even if your class doesn't contain __iter__ at all, mypy would have already reported that at definition, no need to repeat here. It would be wasteful to recheck protocol conformance at every usage site, and would also produce a bunch of undesired errors. Imagine that, for some strange reason, you have the class as follows: class B(Iterable[str]): def __iter__(self) -> Iterator[int]: ... # type: ignore[override] Oops. This makes no sense here, but may happen in real life for more complex protocols. If mypy were to check __iter__ everywhere, you'd have to # type: ignore all places where B is used as Iterable[str]. Structural subtyping Side note: Iterable quacks like a typing.Protocol. If you do not inherit from it at all, your class still can be a subtype of Iterable - that's called structural subtyping. It will even pass assert issubclass(A, Iterable)! If you don't declare implementation explicitly, type checkers should infer the corresponding type as precisely as possible. So your case written without explicit Iterable inheritance would produce the desired outcome. General note I do understand your problem with this. Actually I'm also not sure that this approach is fully justified: probably for loop should also treat the iterable as Iterable, solving the same equation? But that's how the implementation currently behaves, and both approaches are equally valid IMO.
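A short sketch of the two ways out that follow from this explanation (both make _T2 resolve to int instead of Any): either parameterize the base class explicitly, or drop the inheritance and let structural subtyping infer Iterable[int]:
from collections.abc import Iterable, Iterator

class B(Iterable[int]):                  # explicit parameter instead of Iterable[Any]
    def __init__(self, *args: int) -> None:
        self._list = list(args)
    def __iter__(self) -> Iterator[int]:
        return iter(self._list)

class C:                                 # no inheritance: structural subtyping kicks in
    def __init__(self, *args: int) -> None:
        self._list = list(args)
    def __iter__(self) -> Iterator[int]:
        return iter(self._list)

for s, j in zip("abc", B(1, 2, 3)):
    reveal_type(j)                       # Revealed type is "builtins.int"
for s, j in zip("abc", C(1, 2, 3)):
    reveal_type(j)                       # Revealed type is "builtins.int"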
2
1
78,704,660
2024-7-4
https://stackoverflow.com/questions/78704660/how-to-format-the-dataframe-into-a-2d-table
I have following issue with formatting a pandas dataframe into a 2D format. My data is: +----+------+-----------+---------+ | | Jobs | Measure | Value | |----+------+-----------+---------| | 0 | Job1 | Temp | 43 | | 1 | Job1 | Humidity | 65 | | 2 | Job2 | Temp | 48 | | 3 | Job2 | TempS | 97.4 | | 4 | Job2 | Humidity | nan | | 5 | Job3 | Humidity | 55 | | 6 | Job1 | Temp | 41 | | 7 | Job1 | Duration | 23 | | 8 | Job3 | Temp | 39 | | 9 | Job1 | Temp | nan | | 10 | Job1 | Humidity | 55 | | 11 | Job2 | Temp | 48 | | 12 | Job2 | TempS | 97.4 | | 13 | Job2 | Humidity | nan | | 14 | Job3 | Humidity | 55 | | 15 | Job1 | Temp | nan | | 16 | Job1 | Duration | 25 | | 17 | Job3 | Temp | nan | | 18 | Job2 | Humidity | 61 | +----+------+-----------+---------+ and my code for now is: from tabulate import tabulate import pandas as pd df = pd.read_csv('logs.csv') #print(df) print(tabulate(df, headers='keys', tablefmt='psql')) grouped = df.groupby(['Jobs','Measure'], dropna=True) average_temp = grouped.mean() errors = df.groupby(['Jobs','Measure']).agg(lambda x: x.isna().sum()) frames = [average_temp, errors] df_merged = pd.concat(frames, axis=1).set_axis(['Avg', 'Error'], axis='columns') print(df_merged) and the output of the print is: Table-1 Avg Error Jobs Measure Job1 Duration 24.0 0 Humidity 60.0 0 Temp 42.0 2 Job3 Humidity 55.0 0 Temp 39.0 1 Job2 Humidity 61.0 2 TempS 97.4 0 Temp 48.0 0 How can I format this table into something like this: Table-2 Jobs Avg.Temp Err.Temp Avg.Humidity Err.Humidity Avg.Duration ... Job1 42.0 2 60.0 0 24.0 Job2 48.0 0 61.0 0 - Job3 39.0 1 55.0 1 - So, what we see is that for example, Avg.Temp for Job1 in Table-2 is the Avg. value of Job1->Temp in Table-1. Another thing is that not all Jobs need to have the same measure fields and can also differ like for Job2 we have 'TempS'. Update: using the answer from user24714682 the table looks like this. +--------------+----------------+----------------+--------------+------------+------------------+------------------+----------------+--------------+ | Jobs | Avg.Duration | Avg.Humidity | Avg.S.Temp | Avg.Temp | Error.Duration | Error.Humidity | Error.S.Temp | Error.Temp | |--------------+----------------+----------------+--------------+------------+------------------+------------------+----------------+--------------| | Job1 | 24 | 60 | nan | 42 | 0 | 0 | nan | 2 | | Job3 | nan | 55 | nan | 39 | nan | 0 | nan | 1 | | Job2 | nan | 61 | 97.4 | 48 | 1 | 2 | 0 | 0 | +--------------+----------------+----------------+--------------+------------+------------------+------------------+----------------+--------------+ How can I now sort the columns in that way to first show the Measure that has the highest total Error count first and the descending to the rest of total Error counts. example: +--------------+------------+--------------+----------------+------------------+... | Jobs | Avg.Temp | Error.Temp | Avg.Humidity | Error.Humidity | |--------------+------------+--------------|----------------+------------------+... | Job1 | 42 | 2 | 60 | 0 | | Job3 | 39 | 1 | 55 | 0 | | Job2 | 48 | 0 | 61 | 2 | +--------------+------------+--------------+----------------+------------------+... In the above table the columns are sorted 1st Avg.Temp bacause it is the sensor with highest total error count of 3 and then it shows Avg.Humidity because it has the 2nd highest total error count and so on.
You can use unstack() and join(): import pandas as pd from tabulate import tabulate data = { 'Jobs': ['Job1', 'Job1', 'Job2', 'Job2', 'Job2', 'Job3', 'Job1', 'Job1', 'Job3', 'Job1', 'Job1', 'Job2', 'Job2', 'Job2', 'Job3', 'Job1', 'Job1', 'Job3', 'Job2'], 'Measure': ['Temp', 'Humidity', 'Temp', 'TempS', 'Humidity', 'Humidity', 'Temp', 'Duration', 'Temp', 'Temp', 'Humidity', 'Temp', 'TempS', 'Humidity', 'Humidity', 'Temp', 'Duration', 'Temp', 'Humidity'], 'Value': [43, 65, 48, 97.4, None, 55, 41, 23, 39, None, 55, 48, 97.4, None, 55, None, 25, None, 61] } df = pd.DataFrame(data) grouped = df.groupby(['Jobs', 'Measure']) average_temp = grouped.mean() errors = df.groupby(['Jobs', 'Measure']).agg(lambda x: x.isna().sum()) frames = (average_temp, errors) df_merged = pd.concat(frames, axis=1).set_axis(['Avg', 'Error'], axis='columns') df_avg, df_err = df_merged['Avg'].unstack(), df_merged['Error'].unstack() res = pd.concat((df_avg, df_err), axis=1, keys=('Avg', 'Error')) res.columns = ['.'.join(col).strip() for col in res.columns.values] print(tabulate(res, headers='keys', tablefmt='psql')) Prints +--------+----------------+----------------+------------+-------------+------------------+------------------+--------------+---------------+ | Jobs | Avg.Duration | Avg.Humidity | Avg.Temp | Avg.TempS | Error.Duration | Error.Humidity | Error.Temp | Error.TempS | |--------+----------------+----------------+------------+-------------+------------------+------------------+--------------+---------------| | Job1 | 24 | 60 | 42 | nan | 0 | 0 | 2 | nan | | Job2 | nan | 61 | 48 | 97.4 | nan | 2 | 0 | 0 | | Job3 | nan | 55 | 39 | nan | nan | 0 | 1 | nan | +--------+----------------+----------------+------------+-------------+------------------+------------------+--------------+---------------+
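The update about ordering columns by total error count is not addressed above; a hedged sketch of one way to do it, reusing the df_avg and df_err frames from the answer's code:
# Measures ordered by total error count, highest first
order = df_err.sum().sort_values(ascending=False).index
res = pd.concat((df_avg, df_err), axis=1, keys=('Avg', 'Error'))
# interleave Avg.X, Error.X per Measure in that order
res = res[[(k, m) for m in order for k in ('Avg', 'Error')]]
res.columns = ['.'.join(col) for col in res.columns.values]
print(tabulate(res, headers='keys', tablefmt='psql'))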
3
1
78,699,964
2024-7-3
https://stackoverflow.com/questions/78699964/how-can-one-combine-iterables-keeping-only-the-first-element-with-each-index
Let's say I have a number of iterables: [[1, 2], [3, 4, 5, 6], [7, 8, 9], [10, 11, 12, 13, 14]] How can I get only each element that is the first to appear at its index in any of the iterables? In this case: [1, 2, 5, 6, 14] Visualized: [1, 2] [_, _, 5, 6] [_, _, _] [_, _, _, _, 14]
can it done in more functional style? Sure, but I wouldn't. Davis Herring's approach is already lovely. Here's a "more functional" way, but more obscure to my eyes: from itertools import zip_longest SKIP = object() def chain_suffixes(*iters): return (next(a for a in s if a is not SKIP) for s in zip_longest(*iters, fillvalue=SKIP)) print(list(chain_suffixes( [1, 2], [3, 4, 5, 6], [7, 8, 9], [10, 11, 12, 13, 14]))) which prints [1, 2, 5, 6, 14] EDIT BTW, "more functional" may be in the eye of the beholder. To my eyes, various forms of generator comprehension are just as "functional" as other ways of spelling it. At an extreme, it's possible to rewrite the above without using comprehensions at all, or even typing "def". from itertools import zip_longest SKIP = object() not_SKIP = lambda x: x is not SKIP first_not_SKIP = lambda s: next(filter(not_SKIP, s)) chain_suffixes = lambda *iters: map( first_not_SKIP, zip_longest(*iters, fillvalue=SKIP))
3
3
78,701,979
2024-7-3
https://stackoverflow.com/questions/78701979/casting-rdd-to-a-different-type-from-float64-to-double
I have a code like below, which uses pyspark. test_truth_value = RDD. test_predictor_rdd = RDD. valuesAndPred = test_truth_value.zip(lasso_model.predict(test_predictor_rdd)).map(lambda x: ((x[0]), (x[1]))) metrics = RegressionMetrics(valuesAndPred) When i run the code, I get the following error pyspark.errors.exceptions.base.PySparkTypeError: [CANNOT_ACCEPT_OBJECT_IN_TYPE] `DoubleType()` can not accept object `-44604.288415296396` in type `float64`. This happens with the below portion. metrics = RegressionMetrics(valuesAndPred) In general, I would fix the type of RDD by following something like the below link answer. Pyspark map from RDD of strings to RDD of list of doubles However...I have three questions now. What is the difference between float64 and double? Swift Difference Between Double and Float64 From this link, it seems like the pyspark is differenciating float64 and double? When I created the previous RDDs, I already casted them into double like below. double_cast_list = ['price','bed','bath','acre_lot','house_size'] for cast_item in double_cast_list: top_zip_df = top_zip_df.withColumn(cast_item, col(cast_item).cast(DoubleType())) lasso_df = top_zip_df.select('price','bed','bath','acre_lot','house_size') train_df, test_df = lasso_df.randomSplit(weights = [0.7,0.3], seed = 100) def scaled_rdd_generation(df): rdd = df.rdd.map(lambda row: LabeledPoint(row[0], row[1::])) # separate the features and the lables from rdd - only need to standardize the features. features_rdd = rdd.map(lambda row: row.features) # this is possible, because the LabeledPoint class has label and feature columns already built in scaler = StandardScaler(withMean = True, withStd = True) # for the standard scaler, you need to fit the scaler and then transforme the df. # scaler.fit(rdd) -> computes the mean and variance and stores as a model to be used later scaler_model = scaler.fit(features_rdd) scaled_feature_rdd = scaler_model.transform(features_rdd) # rdd zip method: zips RDD with another one. returns key-value pair. scaled_rdd = rdd.zip(scaled_feature_rdd).map(lambda x: LabeledPoint(x[0].label, x[1])) return scaled_rdd model_save_path = r'C:\Users\ra064640\OneDrive - Honda\Desktop\Spark\Real Estate Linear Regression' train_scaled_rdd = scaled_rdd_generation(train_df) test_scaled_rdd = scaled_rdd_generation(test_df) test_predictor_rdd = test_scaled_rdd.map(lambda x: x.features) test_truth_value = test_scaled_rdd.map(lambda x: x.label) Where in there is it transforming the double to float64? How should I fix this? I do not see a function similar to double(x[0]) as suggested by float(x[0]) in the previous link. Thanks!
First, as mentioned in the Spark docs, here's the difference between the float and double types: FloatType: Represents 4-byte single-precision floating point numbers. DoubleType: Represents 8-byte double-precision floating point numbers. Second, as you mentioned, the error comes from here: valuesAndPred = test_truth_value.zip(lasso_model.predict(test_predictor_rdd)).map(lambda x: ((x[0]), (x[1]))) metrics = RegressionMetrics(valuesAndPred) More specifically, the issue may have arisen because of this part: lasso_model.predict(test_predictor_rdd). Finally, to fix this you may try casting the predictions to Python floats with lasso_model.predict(test_predictor_rdd).map(float). Modified code: valuesAndPred = test_truth_value.zip(lasso_model.predict(test_predictor_rdd).map(float)).map(lambda x: ((x[0]), (x[1]))) metrics = RegressionMetrics(valuesAndPred)
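A small illustration of the float64/double mismatch, assuming the predictions come back as numpy.float64 (which DoubleType rejects, while a plain Python float is accepted):
import numpy as np

x = np.float64(-44604.288415296396)
print(type(x))         # <class 'numpy.float64'>  -> rejected by DoubleType
print(type(float(x)))  # <class 'float'>          -> accepted
# hence the .map(float) applied to lasso_model.predict(test_predictor_rdd)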
2
1
78,701,305
2024-7-3
https://stackoverflow.com/questions/78701305/pandas-string-selection
I would like to extract rows containing a particular string - the string can be a part of a larger, space-separated string (which I would want to count in), or can be a part of another (continuous) string (which I would NOT want to count in). The string can be either at start, middle or end of the string value. Example - say I would like to extract any row containing "HC": df = pd.DataFrame(columns=['test']) df['test'] = ['HC', 'CHC', 'HC RD', 'RD', 'MRD', 'CEA', 'CEA HC'] test 0 HC 1 CHC 2 HC RD 3 RD 4 MRD 5 CEA 6 CEA HC Desired output test 0 HC 2 HC RD 6 CEA HC
You can use the str.contains method with the regex query \bHC\b >>> df[df['test'].str.contains(r'\bHC\b')] test 0 HC 2 HC RD 6 CEA HC \b: Word boundary
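If a regex-free alternative is preferred, a sketch of the same filter on whitespace-split tokens (equivalent for this data, though likely slower on large frames):
df[df['test'].str.split().map(lambda tokens: 'HC' in tokens)]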
2
1
78,702,365
2024-7-3
https://stackoverflow.com/questions/78702365/how-to-quickly-find-the-minimum-element-to-the-right-for-every-element-of-a-nump
Let's say I have an array: a = [1,4,3,6,4] I want to get an array where for every element I have the smallest value in a to the right (inclusive). That is, for a, I would want to create an array: [1,3,3,4,4] Is there a quick, concise way to do this?
Your question is actually ambiguous. Do you want to consider the following value or all following values? considering all following values compute a cumulative minimum on the reversed array with minimum.accumulate: a = np.array([1,4,3,6,4]) out = np.minimum.accumulate(a[::-1])[::-1] Output: array([1, 3, 3, 4, 4]) With pure Python and itertools.accumulate: from itertools import accumulate a = [1,4,3,6,4] out = list(accumulate(a[::-1], min))[::-1] # [1, 3, 3, 4, 4] considering only the next value You could shift the values and use np.minimum: a = np.array([1,4,3,6,4]) out = np.minimum(a, np.r_[a[1:], a[-1]]) Output: array([1, 3, 3, 4, 4]) or compare the successive values and build a mask to modify a in place: a = np.array([1,4,3,6,4]) m = a[:-1] > a[1:] # or # m = np.diff(a) < 0 a[np.r_[m, False]] = a[1:][m] Modified a: array([1, 3, 3, 4, 4]) difference a = np.array([1,4,3,6,5,6,4]) np.minimum.accumulate(a[::-1])[::-1] # array([1, 3, 3, 4, 4, 4, 4]) np.minimum(a, np.r_[a[1:], a[-1]]) # array([1, 3, 3, 5, 5, 4, 4])
2
5
78,702,312
2024-7-3
https://stackoverflow.com/questions/78702312/python-pandas-market-calendars
Question on calendar derivation logic in the Python module https://pypi.org/project/pandas-market-calendars/. Does this module depend on any third party API's to get the calendars or the calendars are derived based on rules within the code?
There is no communication with a third party API, everything is hardcoded. You can easily see this in the source code. There is a calendars folder with definitions of each calendar (see for example the file for ASX). The project's documentation also mentions that: As of v2.0 this package provides a mirror of all the calendars from the exchange_calendars package, which itself is the now maintained fork of the original trading_calendars package. This adds over 50 calendars. There is a list of available calendars here
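A quick offline check of this, with the API as I recall it (no network access involved):
import pandas_market_calendars as mcal

print(mcal.get_calendar_names()[:5])   # calendars bundled with the package
nyse = mcal.get_calendar('NYSE')
# trading sessions computed from the hardcoded rules
print(nyse.schedule(start_date='2024-01-02', end_date='2024-01-10'))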
2
1
78,702,014
2024-7-3
https://stackoverflow.com/questions/78702014/how-to-get-the-current-domain-name-in-django-template
How to get the current domain name in a Django template? Similar to {{domain}} for auth_views. I tried {{ domain }}, {{ site }}, {{ site_name }} according to the documentation below. It didn't work. <p class="text-right">&copy; Copyright {% now 'Y' %} {{ site_name }}</p> It can be either an IP address (192.168.1.1:8000) or mydomain.com https://docs.djangoproject.com/en/5.0/ref/contrib/sites/ In the syndication framework, the templates for title and description automatically have access to a variable {{ site }}, which is the Site object representing the current site. Also, the hook for providing item URLs will use the domain from the current Site object if you don’t specify a fully-qualified domain. In the authentication framework, django.contrib.auth.views.LoginView passes the current Site name to the template as {{ site_name }}.
You can use {{ request.get_host }}
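One assumption behind this: request is only available in templates when the request context processor is enabled, which the default startproject settings already do. A sketch of the relevant settings entry:
# settings.py
TEMPLATES = [{
    'BACKEND': 'django.template.backends.django.DjangoTemplates',
    'APP_DIRS': True,
    'OPTIONS': {
        'context_processors': [
            'django.template.context_processors.request',  # makes {{ request }} available
            'django.contrib.auth.context_processors.auth',
        ],
    },
}]
With that in place, the question's footer would become: <p class="text-right">&copy; Copyright {% now 'Y' %} {{ request.get_host }}</p>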
4
1
78,700,997
2024-7-3
https://stackoverflow.com/questions/78700997/how-to-generate-a-hierarchical-colourmap-in-matplotlib
I have a hierarchical dataset that I wish to visualise in this manner. I've been able to construct a heatmap for it. I want to generate a colormap in matplotlib such that Level 1 get categorical colours while Level 2 get different shades of the Level 1 colour. I was able to get Level 1 colours from a "tab20" palette but I can't figure out how to generate shades of the base Level 1 colour. EDIT: Just to be clear, this needs to be a generic script. So I can't hard code the colormap. MWE At the moment this just creates a colormap based on the level 1 values. I am not sure how to generate the shades for the level 2 colours: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import matplotlib as mpl df = pd.DataFrame({"Level 2": [4, 5, 6, 6, 7], "Level 1": [0, 0, 1, 1, 1]}).T colours = mpl.colormaps["tab20"].resampled(len(df.loc["Level 1"].unique())).colors colour_dict = { item: colour for item, colour in zip(df.loc["Level 1"].unique(), colours) } sns.heatmap( df, cmap=mpl.colors.ListedColormap([colour_dict[item] for item in colour_dict.keys()]), ) colours In this example, 4 and 5 should be shades of the colour for 0 and 6 and 7 should be shades of the colour for 1. Edit 2 Applying @mozway's answer below, this is the heatmap I see: This is with 423 values in level 2 and n=500.
What about combining several gradients to form a multi-colored cmap, then rescaling your data? import matplotlib as mpl from matplotlib.colors import LinearSegmentedColormap df = pd.DataFrame({"Level 2": [1, 2, 1, 2, 3, 2, 3, 4, 0, 1, 5], "Level 1": [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]}).T n = 5 # max value per level level1 = pd.factorize(df.loc['Level 1'])[0]*n n_levels = df.loc['Level 1'].nunique() cmap = mpl.colormaps['tab10'] i = np.linspace(0, 1, num=n_levels+1) colors = list(zip(np.sort(np.r_[i, i[1:-1]-0.001]), [x for c in cmap.colors[:n_levels+1] for x in (c, 'w')])) multi_cmap = LinearSegmentedColormap.from_list('hierachical', colors) tmp = pd.DataFrame({'Level 2': level1+df.loc['Level 2'], 'Level 1': level1 }).T sns.heatmap(tmp, cmap=multi_cmap, vmin=0, vmax=n*n_levels, square=True, cbar_kws={'orientation': 'horizontal'}) Output: If you want to annotate with the real values: sns.heatmap(tmp, annot=df, cmap=multi_cmap, vmin=0, vmax=n*n_levels, square=True, cbar_kws={'orientation': 'horizontal'}) Output: further customization If you want to reverse the order of the gradients: import matplotlib as mpl from matplotlib.colors import LinearSegmentedColormap df = pd.DataFrame({"Level 2": [1, 2, 1, 2, 3, 2, 3, 4, 0, 1, 5], "Level 1": [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]}).T n = 5 # max value per level level1 = pd.factorize(df.loc['Level 1'])[0]*n n_levels = df.loc['Level 1'].nunique() cmap = mpl.colormaps['tab10'] i = np.linspace(0, 1, num=n_levels+1) colors = list(zip(np.sort(np.r_[i, i[1:-1]-0.001]), [x for c in cmap.colors[:n_levels+1] for x in ('w', c)])) multi_cmap = LinearSegmentedColormap.from_list('hierachical', colors) tmp = pd.DataFrame({'Level 2': level1+df.loc['Level 2'], 'Level 1': level1+0.999*n }).T sns.heatmap(tmp, annot=df, cmap=multi_cmap, vmin=0, vmax=n*n_levels, square=True, cbar_kws={'orientation': 'horizontal'}) Output: Output with n = 10
2
1
78,700,935
2024-7-3
https://stackoverflow.com/questions/78700935/advanced-logic-with-groupby-apply-and-transform-compare-row-value-with-previo
I have the following pandas dataframe: d= {'Time': [0,1,2,0,1,2,2,3,4], 'Price': ['Auction', 'Auction','800','900','By Negotiation','700','250','250','Make Offer'],'Item': ['Picasso', 'Picasso', 'Picasso', 'DaVinci', 'DaVinci', 'DaVinci', 'Dali', 'Dali', 'Dali']} df = pd.DataFrame(data=d) I would like to create a fourth column 'Listing-history' which would specify the following: 'first seen' if the listing is seen for the first time (this is not necessarily Time == 0) 'ongoing listing' if there is no change to the price field from one timepoint to the next 'Price->Auction' if the price field changes from a numeric value (which is actually encoded as a string in my dataframe) to the 'Auction' string, and vice versa if the price changes from a numeric value to the 'Auction' string. I would like the code to be agnostic to the exact price field string, for example: 'Price->By Negotiation' if the price field changes from a numeric value to 'By Negotiation' I want to group by Item, and then apply the above logic. Finding whether a listing is 'first seen' is pretty straight forward using something like the following: df['Price_coerced_to_numeric'] = pd.to_numeric(df['Price'], errors='coerce') df['Price_diff'] = df.groupby(['Item'])['Price_coerced_to_numeric'].diff(1) I suspect there is a way of using pandas apply and transform but I haven't been able to work it out. Any tips much appreciated.
You could use groupby.shift and numpy.select: # replace numbers by "Price" price = df['Price'].mask(pd.to_numeric(df['Price'], errors='coerce') .notna(), 'Price') # get previous price prev_price = price.groupby(df['Item']).shift() # identify first row per Item m1 = ~df['Item'].duplicated() # identify change in price m2 = price.ne(prev_price) # combine conditions df['Listing-history'] = np.select([m1, m2], ['first seen', prev_price+'->'+price], 'ongoing listing') Output: Time Price Item Listing-history 0 0 Auction Picasso first seen 1 1 Auction Picasso ongoing listing 2 2 800 Picasso Auction->Price 3 0 900 DaVinci first seen 4 1 By Negotiation DaVinci Price->By Negotiation 5 2 700 DaVinci By Negotiation->Price 6 2 250 Dali first seen 7 3 250 Dali ongoing listing 8 4 Make Offer Dali Price->Make Offer If you really want to use groupby.transform you could refactor the code a bit: def history(col): price = col.mask(pd.to_numeric(col, errors='coerce').notna(), 'Price') prev_price = price.shift() return ((prev_price+'->'+price) .where(price.ne(prev_price), 'ongoing listing') .fillna('first seen') ) df['Listing-history'] = df.groupby('Item')['Price'].transform(history) A variant if you can have NaNs in the original column: def history(col): price = col.mask(pd.to_numeric(col, errors='coerce').notna(), 'Price') prev_price = price.shift() out = (prev_price+'->'+price).where(price.ne(prev_price), 'ongoing listing') out.iat[0] = 'first seen' return out df['Listing-history'] = df.groupby('Item')['Price'].transform(history)
2
1
78,700,714
2024-7-3
https://stackoverflow.com/questions/78700714/polars-groupby-mean-on-list
I want to compute the mean over groups of embedding vectors. For example: import polars as pl pl.DataFrame({ "id": [1,1 ,2,2], "values": [ [1,1,1], [3, 3, 3], [1,1,1], [2, 2, 2] ] }) shape: (4, 2) id values i64 list[i64] 1 [1, 1, 1] 1 [3, 3, 3] 2 [1, 1, 1] 2 [2, 2, 2] Expected result: import numpy as np pl.DataFrame({ "id":[1,2], "values": np.array([ [[1,1,1], [3, 3, 3]], [[1,1,1], [2, 2, 2]] ]).mean(axis=1) }) shape: (2, 2) id values i64 list[f64] 1 [2.0, 2.0, 2.0] 2 [1.5, 1.5, 1.5]
int_ranges() to create ordinality for lists. explode() to explode lists. group_by() twice, first to calculate mean() and then to combine results into lists. ( df .with_columns(i = pl.int_ranges(pl.col.values.list.len())) .explode('values', 'i') .group_by('id', 'i', maintain_order = True) .mean() .group_by('id', maintain_order = True) .agg('values') ) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ values β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════║ β”‚ 1 ┆ [2.0, 2.0, 2.0] β”‚ β”‚ 2 ┆ [1.5, 1.5, 1.5] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
1
78,700,174
2024-7-3
https://stackoverflow.com/questions/78700174/attributeerror-module-pyperclip-has-no-attribute-waitforpaste
I installed pyperclip (a clipboard utility) on a Windows VM and somehow some of the functions are not working. I am trying to use the waitForPaste() function as below: import pyperclip as pyc import os pyc.waitForPaste() However, this is producing the following error: AttributeError Traceback (most recent call last) Cell In[18], line 1 ----> 1 pyc.waitForPaste() AttributeError: module 'pyperclip' has no attribute 'waitForPaste' Strangely, the following code works just fine! I was able to write this snippet: pyc.copy('The text to be copied to the clipboard.') pyc.paste() Which produced: 'The text to be copied to the clipboard.' I am not facing the same issue on macOS, only on Windows.
A CTRL+F for "waitForPaste" in the pyperclip source code suggests that the function no longer exists despite still being in the documentation; the only hits it gets are in the documentation. You have a few options if you still need this functionality: Find a workaround. This could be a similar function from another library or your own custom implementation of it. Find an old version. It seems like waitForPaste() was implemented at some time but later deleted; if you look back far enough in the history, you might be able to find a version that works. Ask for the feature back. You can try the pyperclip issues page, although there seems to be quite a bit of backlog there at the moment. Add the feature back yourself. pyperclip is open-source and publicly editable; if you write a working implementation or find the original implementation in the history, you can create a pull request to add it to the official source. Of course, there might be a reason this feature was deleted; if that is the case, it's probably best to let sleeping dogs lie. Corroborating evidence (not crucial) The pyperclip docs contain the following code snippet and error message to demonstrate the timeout argument in waitForPaste(). >>> import pyperclip >>> pyperclip.waitForNewPaste(5) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\github\pyperclip\src\pyperclip\__init__.py", line 689, in waitForNewPaste raise PyperclipTimeoutException('waitForNewPaste() timed out after ' + str(timeout) + ' seconds.') pyperclip.PyperclipTimeoutException: waitForNewPaste() timed out after 5 seconds. However, line 689 no longer exists in __init__.py; the last line is 658, which seems to imply that this function once existed but has since been deleted.
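For option 1 (a workaround), a rough polling-based stand-in — the function name and timeout handling are my own, and it mirrors waitForNewPaste-style behaviour using only pyperclip.paste(), which still exists:
import time
import pyperclip

def wait_for_new_paste(timeout=None, poll=0.05):
    # return the clipboard contents once they differ from what was there at call time
    original = pyperclip.paste()
    start = time.monotonic()
    while True:
        current = pyperclip.paste()
        if current != original:
            return current
        if timeout is not None and time.monotonic() - start > timeout:
            raise TimeoutError(f'no new clipboard content within {timeout} seconds')
        time.sleep(poll)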
2
0
78,699,625
2024-7-3
https://stackoverflow.com/questions/78699625/how-to-call-asyncio-create-task-within-asyncio-create-task
I have been attempting to run 10 different looping tasks simultaneously with asyncio. All 10 tasks call "asyncio.create_task()" within their loops. I can't use "asyncio.run()" on all of them because this functions blocks the thread its called on until the task is done. So I thought that I could simply circumvent this by calling a function with "asyncio.run()" and then inside that function call my 10 looping functions through "asyncio.create_task()". The 10 looping functions are indeed called, but they themselves cannot use "asyncio.create_task()". Any suggestions on how to fix this issue? This is a based code I have written to demonstrate my issue: async def task2(): print("Task 2 was called") async def task1(): print("Task 1 was called") asyncio.create_task(task2()) async def main(): print("Main was called") asyncio.create_task(task1()) asyncio.run(main()) It prints: Main was called Task 1 was called Any ideas or suggestions would be appreciated :D
You can create all the tasks and then asyncio.gather() them: await asyncio.gather(*[task1(), task2()])
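A fuller sketch of how that fits the question's example — main() must await the gathered tasks (or the tasks it creates) before returning, otherwise asyncio.run() can exit before task2 runs:
import asyncio

async def task2():
    print("Task 2 was called")

async def task1():
    print("Task 1 was called")
    await task2()                    # or: await asyncio.create_task(task2())

async def main():
    print("Main was called")
    await asyncio.gather(task1())    # keeps the loop alive until task1 (and task2) finish

asyncio.run(main())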
3
2
78,697,859
2024-7-2
https://stackoverflow.com/questions/78697859/how-to-solve-complex-equations-numerically-in-python
Originally, I had the following equation: 2mgv×sin(\alpha) = CdA×\rho(v^2 + v_{wind}^2 + 2vv_{wind}cos(\phi))^(3/2) which I could express as the following non-linear equation: (K × v)^(2/3) = v^2 + v_{wind}^2 + 2vv_{wind}cos(\phi) To solve this, I need to use a numerical approach. I tried writing code for this in Python using fsolve from scipy.optimize. However, the results I got are not too promising. What else should I try? Should I use a different approach/package, or does my code just need to be improved? I also experienced that the result is highly dependent on v_initial_guess. Please note that I would consider myself a beginner in programming. I also tried writing code to solve the equation for v using the Newton-Raphson method but I wasn't too successful. Here is my code: import numpy as np from scipy.optimize import fsolve m = 80 g = 9.81 alpha = np.radians(10) #incline CdA = 0.65 rho = 1.225 v_w = 10 phi = np.radians(30) #wind angle with the direction of motion sin_alpha = np.sin(alpha) cos_phi = np.cos(phi) def equation(v): K = ((m * g * sin_alpha) / ((CdA * rho))**(2/3)) return K * v**(2/3) - v**2 - 2*v*v_w*cos_phi - v_w**2 v_initial_guess = 30 v_solution = fsolve(equation, v_initial_guess, xtol=1e-3) print("v:", v_solution[0]) EDIT: This is what my code looks like now: import numpy as np from scipy.optimize import fsolve import matplotlib.pyplot as plt m = 80 g = 9.81 alpha = np.radians(2) # incline CdA = 0.321 rho = 1.22 v_w = 5 phi = np.radians(180) # wind angle with the direction of motion sin_alpha = np.sin(alpha) cos_phi = np.cos(phi) def lhs(v): return m * g * v * sin_alpha def rhs(v): return 0.5 * CdA * rho * (v**2 + v_w**2 + 2*v*v_w*cos_phi)**(3) def difference(v): return lhs(v) - rhs(v) # fsolve to find the intersection v_initial_guess = 8 v_intersection = fsolve(difference, v_initial_guess)[0] v_values = np.linspace(0.1, 50, 500) lhs_values = lhs(v_values) rhs_values = rhs(v_values) plt.figure(figsize=(10, 6)) plt.plot(v_values, lhs_values, label='$2mgv\\sin(\\alpha)$', color='blue') plt.plot(v_values, rhs_values, label='$CdA\\rho(v^2 + v_{wind}^2 + 2vv_{wind}\\cos(\\phi))^{3/2}$', color='red') plt.xlabel('Velocity (v)') plt.xlim(0, 20) plt.title('LHS and RHS vs. Velocity') plt.legend() plt.grid(True) plt.ylim(0, 2000) plt.show() print(f"The intersection occurs at v = {v_intersection:.2f} m/s") P_grav_check = m *g * sin_alpha * v_intersection P_air_check = 0.5 * CdA * rho * (v_intersection ** 2 + v_w ** 2 + 2 * v_intersection * v_w * cos_phi) ** (3) print(P_grav_check) print(P_air_check)
The solution, using the secant method is the following: import numpy as np import matplotlib.pyplot as plt m = 60 g = 9.81 alpha = np.radians(2) # incline CdA = 0.321 rho = 1.225 v_wind = 4 phi = np.radians(60) # wind angle with the direction of motion mu = 0.005 cos_phi = np.cos(phi) sin_alpha = np.sin(alpha) lhs = m * g * sin_alpha # gravitational force def rhs(v): return 0.5 * CdA * rho * (v + v_wind *cos_phi)**2 # drag force def difference(v): return lhs - rhs(v) def secant_method(func, v0, v1, tolerance=1e-6, max_iterations=1000): for _ in range(max_iterations): f_v0 = func(v0) f_v1 = func(v1) v_new = v1 - f_v1 * (v1 - v0) / (f_v1 - f_v0) if abs(v_new - v1) < tolerance: return v_new v0, v1 = v1, v_new raise ValueError("Secant method did not converge") v_initial_guess_1 = 15 v_initial_guess_2 = 25 v_intersection = secant_method(difference, v_initial_guess_1, v_initial_guess_2) v_values = np.linspace(0.1, 50, 500) rhs_values = rhs(v_values) P_grav_check = m * g * sin_alpha * v_intersection P_air_check = 0.5 * CdA * rho * (v_intersection + v_wind * cos_phi)**2 * v_intersection P_friction = mu * m * g * np.cos(alpha) * v_intersection alpha_degree = np.degrees(alpha) alpha_percentage = np.tan(alpha) *100 print(f"\nopposing component of v_wind: ",v_wind *cos_phi, "m/s") print(f"The intersection occurs at v = {v_intersection:.4f} m/s") print(f" at a {alpha_degree} degrees incline which equals to {alpha_percentage:.2f} %") print(f"\nP_grav: {P_grav_check:.2f} W") print(f"P_air_check: {P_air_check:.2f} W") print(f"P_friction: {P_friction:.2f} W") print(f"P = {P_grav_check + P_air_check + P_friction}") plt.figure(figsize=(10, 6)) plt.plot(v_values, [lhs]*len(v_values), label='$mg\\sin(\\alpha)$', color='blue') plt.plot(v_values, rhs_values, label='$0.5CdA\\rho ({v + v_{wind}\\cos(\\phi)})^2$', color='red') plt.xlabel('Velocity (v)') plt.xlim(0, 20) plt.title('Gravitational vs. Drag Force') plt.legend() plt.grid(True) plt.ylim(0, 200) plt.show()
3
0
78,669,908
2024-6-26
https://stackoverflow.com/questions/78669908/why-is-re-pattern-generic
import re x = re.compile(r"hello") In the above code, x is determined to have type re.Pattern[str]. But why is re.Pattern generic, and then specialized to string? What does a re.Pattern[int] represent?
re.Pattern was made generic because you can also compile a bytes pattern that will operate only on bytes objects: p = re.compile(b'fo+ba?r') p.search(b'foobar') # fine p.search('foobar') # TypeError: cannot use a bytes pattern on a string-like object At type-checking time, it is defined as generic over AnyStr: class Pattern(Generic[AnyStr]): ... ...where AnyStr is a TypeVar with two constraints, str and bytes: AnyStr = TypeVar("AnyStr", str, bytes) re.Pattern[int] is therefore meaningless and would cause a type-checking error.
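A small sketch of where the type parameter matters in annotations (the function name is made up):
import re

def find_words(p: re.Pattern[str], text: str) -> list[str]:
    # only accepts patterns compiled from str
    return p.findall(text)

find_words(re.compile(r"\w+"), "hello world")     # fine: Pattern[str]
# find_words(re.compile(rb"\w+"), "hello world")  # rejected by a type checker: Pattern[bytes]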
4
5
78,689,083
2024-6-30
https://stackoverflow.com/questions/78689083/combination-of-pso-and-gekko-error-intermediate-variable-with-no-equality
I want to optimize the best polynomial coefficients which describes a temperature profile in the interval [0, tf], where tf=100 min: p0 = 0.1 p1 = (t - 50)*(3**0.5/500) p2 = (t**2 - 100*t + 5000/3)*(3/(500000000**0.5)) T = coef0*p0 + coef1*p1 + coef2*p2 This temperature profile should maximize biodiesel concentration at final time (J = max x4(tf)), subject to a system of ODEs regarding to reaction velocities, and the inequality constraint: 298 K < T < 338 K. This problem is originated from an article whose authors proposed a combination of Monte Carlo algorithm random search and Genetic Algorithm for the polynomials parameters parameterization, and Ode45 in Matlab environment for x4(tf) optimization. Article: doi.org/10.1016/j.cherd.2021.11.001 I tried a combination of Particle Swarm Optimization and GEKKO. I want that the PSO finds the best polynomials coefficients so that GEKKO module maximizes x4(tf). So my intention is that after every PSO iteration, it computes the coefficients into the GEKKO optimization block, then the PSO algorithm verifies if that is the best objective function: I know I have to get somewhere x4(tf) = 0.8 mol/L. import matplotlib.pyplot as plt import numpy as np from gekko import GEKKO xmin=np.array([-12,-12,-12]) # Minimum bounds xmax=np.array([3400,3400,3400]) # Maximum bounds # H=abs(xmax-xmin) # DiferenΓ§a entre mΓ‘ximo e mΓ­nimo N=30 # Particle number # PSO parameters c1=0.8 # individual c2=1.2 # social D=3 # Dimension tmax = 500 # Maximum iterations m = GEKKO() def objective_function(x): coef0, coef1, coef2 = x[0], x[1], x[2] tf = 100 # Final time m.time = np.linspace(0, tf, tf) # Time interval t = m.time # Temperature profile with coefficients to be optimized p0 = 0.1 p1 = (t - 50)*(3**0.5/500) p2 = (t**2 - 100*t + 5000/3)*(3/(500000000**0.5)) T = coef0*p0 + coef1*p1 + coef2*p2 # Arrhenius equations A1, b1 = 3.92e7, 6614.83 A2, b2 = 5.77e5, 4997.98 A3, b3 = 5.88e12, 9993.96 A4, b4 = 0.98e10, 7366.64 A5, b5 = 5.35e3, 3231.18 A6, b6 = 2.15e4, 4824.87 k1 = m.Intermediate(A1*m.exp(-b1/T)) k2 = m.Intermediate(A2*m.exp(-b2/T)) k3 = m.Intermediate(A3*m.exp(-b3/T)) k4 = m.Intermediate(A4*m.exp(-b4/T)) k5 = m.Intermediate(A5*m.exp(-b5/T)) k6 = m.Intermediate(A6*m.exp(-b6/T)) # Decision variables' initial values x1 = m.Var(value=0.3226) x2 = m.Var(value=0) x3 = m.Var(value=0) x4 = m.Var(value=0) x5 = m.Var(value=1.9356) x6 = m.Var(value=0) # Dynamic model m.Equation(x1.dt() == -k1*x1*x5 + k2*x2*x4) m.Equation(x2.dt() == k1*x1*x5 - k2*x2*x4 - k3*x2*x5 + k4*x3*x4) m.Equation(x3.dt() == k3*x2*x5 - k4*x3*x4 - k5*x3*x5 + k6*x6*x4) m.Equation(x4.dt() == k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4) m.Equation(x5.dt() == -(k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4)) m.Equation(x6.dt() == k5*x3*x5 - k6*x6*x4) p = np.zeros(tf) p[-1] = 1.0 final = m.Param(value=p) m.Maximize(x4*final) m.options.IMODE = 6 m.solve(disp=False, debug=True) f = x4.value[-1] return f #Inicialize PSO parameters x=np.zeros((N,D)) X=np.zeros(N) p=np.zeros((N,D)) # best position P=np.zeros(N) # best f_obj value v=np.zeros((N,D)) for i in range(N): # iteration for each particle for d in range(D): x[i,d]=xmin[d]+(xmax[d]- xmin[d])*np.random.uniform(0,1) # inicialize position v[i,d]=0 # inicialize velocity (dx) X[i]= objective_function(x[i,:]) p[i,:]=x[i,:] P[i]=X[i] if i==0: g=np.copy(p[0,:]) ############ G=P[0] # fobj global value if P[i]<G: g=np.copy(p[i,:]) #################### G=P[i] # registering best fobj of i # Plotting fig, axs = plt.subplots(2, 2, 
gridspec_kw={'hspace': 0.7, 'wspace': 0.7}) axs[0, 0].plot(x[:,0],x[:,1],'ro') axs[0, 0].set_title('IteraΓ§Γ£o Inicial') axs[0, 0].set_xlim([xmin[0], xmax[0]]) axs[0, 0].set_ylim([xmin[1], xmax[1]]) #Iterations tmax=500 for tatual in range(tmax): for i in range(N): R1=np.random.uniform(0,1) # random value for R1 R2=np.random.uniform(0,1) # random value for R2 # Inertia wmax=0.9 wmin=0.4 w=wmax-(wmax-wmin)*tatual/tmax # inertia factor v[i,:]=w*v[i,:]+ c1*R1*(p[i,:]-x[i,:])+c2*R2*(g-x[i,:]) # velocity x[i,:]=x[i,:]+v[i,:] # position for d in range(D): # guarantee of bounds if x[i,d]<xmin[d]: x[i,d]=xmin[d] v[i,d]=0 if x[i,d]>xmax[d]: x[i,d]=xmax[d] v[i,d]=0 X[i]=objective_function(x[i,:]) if X[i]<P[i]: p[i,:]=x[i,:] # particle i best position P[i]=X[i] # P update (best fobj) if P[i]< G: # verify if it's better than global fobj g=np.copy(p[i,:]) # registering best global position G=P[i] if tatual==49: axs[0, 1].plot(x[:,0],x[:,1],'ro') axs[0, 1].set_title('IteraΓ§Γ£o 20') axs[0, 1].set_xlim([xmin[0], xmax[0]]) axs[0, 1].set_ylim([xmin[1], xmax[1]]) if tatual==99: axs[1, 0].plot(x[:,0],x[:,1],'ro') axs[1, 0].set_title('IteraΓ§Γ£o 100') axs[1, 0].set_xlim([xmin[0], xmax[0]]) axs[1, 0].set_ylim([xmin[1], xmax[1]]) if tatual==499: axs[1, 1].plot(x[:,0],x[:,1],'ro') axs[1, 1].set_title('IteraΓ§Γ£o 499') axs[1, 1].set_xlim([xmin[0], xmax[0]]) axs[1, 1].set_ylim([xmin[1], xmax[1]]) for ax in axs.flat: ax.set(xlabel='x1', ylabel='x2') print('Optimal x:', g) print('Optimal Fobj(x):', objective_function(g)) However, I am returned with this weird GEKKO error: Traceback (most recent call last): exec(code, globals, locals) P[i] = objective_function(x[i]) m.solve(disp=False) raise Exception(apm_error) Exception: @error: Intermediate Definition Error: Intermediate variable with no equality (=) expression -72.31243538-74.05913067-75.76877023-77.43020035-79.03172198 STOPPING... I've tried both GEKKO and PSO codes separately for testing purposes, and both work. But I can't make them work together. I've tried so many things, besides I'm newby in GEKKO. If you guys could help me, I'd appreciate it a lot. Thank you very much in advance!
Change how T is defined because it is an array of values. Also, optimization is not needed when Gekko is used for simulation. T = m.Param(coef0*p0 + coef1*p1 + coef2*p2,lb=298,ub=338) #m.Maximize(x4*final) m.options.IMODE = 4 For the Temperature profile T, Gekko expects m.Param() inputs when they are vectors. Here is the new code: PSO Temperature Profile Optimization import matplotlib.pyplot as plt import numpy as np from gekko import GEKKO xmin=np.array([-12,-12,-12]) # Minimum bounds xmax=np.array([3400,3400,3400]) # Maximum bounds # H=abs(xmax-xmin) # DiferenΓ§a entre mΓ‘ximo e mΓ­nimo N=30 # Particle number # PSO parameters c1=0.8 # individual c2=1.2 # social D=3 # Dimension tmax = 500 # Maximum iterations m = GEKKO(remote=False) def objective_function(x): coef0, coef1, coef2 = x[0], x[1], x[2] tf = 100 # Final time m.time = np.linspace(0, tf, tf) # Time interval t = m.time # Temperature profile with coefficients to be optimized p0 = 0.1 p1 = (t - 50)*(3**0.5/500) p2 = (t**2 - 100*t + 5000/3)*(3/(500000000**0.5)) T = m.Param(coef0*p0 + coef1*p1 + coef2*p2,lb=298,ub=338) # Arrhenius equations A1, b1 = 3.92e7, 6614.83 A2, b2 = 5.77e5, 4997.98 A3, b3 = 5.88e12, 9993.96 A4, b4 = 0.98e10, 7366.64 A5, b5 = 5.35e3, 3231.18 A6, b6 = 2.15e4, 4824.87 k1 = m.Intermediate(A1*m.exp(-b1/T)) k2 = m.Intermediate(A2*m.exp(-b2/T)) k3 = m.Intermediate(A3*m.exp(-b3/T)) k4 = m.Intermediate(A4*m.exp(-b4/T)) k5 = m.Intermediate(A5*m.exp(-b5/T)) k6 = m.Intermediate(A6*m.exp(-b6/T)) # Decision variables' initial values x1 = m.Var(value=0.3226) x2 = m.Var(value=0) x3 = m.Var(value=0) x4 = m.Var(value=0) x5 = m.Var(value=1.9356) x6 = m.Var(value=0) # Dynamic model m.Equation(x1.dt() == -k1*x1*x5 + k2*x2*x4) m.Equation(x2.dt() == k1*x1*x5 - k2*x2*x4 - k3*x2*x5 + k4*x3*x4) m.Equation(x3.dt() == k3*x2*x5 - k4*x3*x4 - k5*x3*x5 + k6*x6*x4) m.Equation(x4.dt() == k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4) m.Equation(x5.dt() == -(k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4)) m.Equation(x6.dt() == k5*x3*x5 - k6*x6*x4) p = np.zeros(tf) p[-1] = 1.0 final = m.Param(value=p) #m.Maximize(x4*final) m.options.IMODE = 4 m.solve(disp=False, debug=True) f = x4.value[-1] print(f'Objective: {f} with c0: {coef0:0.2f}, c1: {coef1:0.2f}, c2: {coef2:0.2f}') return f #Inicialize PSO parameters x=np.zeros((N,D)) X=np.zeros(N) p=np.zeros((N,D)) # best position P=np.zeros(N) # best f_obj value v=np.zeros((N,D)) for i in range(N): # iteration for each particle for d in range(D): x[i,d]=xmin[d]+(xmax[d]- xmin[d])*np.random.uniform(0,1) # inicialize position v[i,d]=0 # inicialize velocity (dx) X[i]= objective_function(x[i,:]) p[i,:]=x[i,:] P[i]=X[i] if i==0: g=np.copy(p[0,:]) ############ G=P[0] # fobj global value if P[i]<G: g=np.copy(p[i,:]) #################### G=P[i] # registering best fobj of i # Plotting fig, axs = plt.subplots(2, 2, gridspec_kw={'hspace': 0.7, 'wspace': 0.7}) axs[0, 0].plot(x[:,0],x[:,1],'ro') axs[0, 0].set_title('IteraΓ§Γ£o Inicial') axs[0, 0].set_xlim([xmin[0], xmax[0]]) axs[0, 0].set_ylim([xmin[1], xmax[1]]) #Iterations tmax=500 for tatual in range(tmax): for i in range(N): R1=np.random.uniform(0,1) # random value for R1 R2=np.random.uniform(0,1) # random value for R2 # Inertia wmax=0.9 wmin=0.4 w=wmax-(wmax-wmin)*tatual/tmax # inertia factor v[i,:]=w*v[i,:]+ c1*R1*(p[i,:]-x[i,:])+c2*R2*(g-x[i,:]) # velocity x[i,:]=x[i,:]+v[i,:] # position for d in range(D): # guarantee of bounds if x[i,d]<xmin[d]: x[i,d]=xmin[d] v[i,d]=0 if x[i,d]>xmax[d]: x[i,d]=xmax[d] v[i,d]=0 
X[i]=objective_function(x[i,:]) if X[i]<P[i]: p[i,:]=x[i,:] # particle i best position P[i]=X[i] # P update (best fobj) if P[i]< G: # verify if it's better than global fobj g=np.copy(p[i,:]) # registering best global position G=P[i] if tatual==49: axs[0, 1].plot(x[:,0],x[:,1],'ro') axs[0, 1].set_title('IteraΓ§Γ£o 20') axs[0, 1].set_xlim([xmin[0], xmax[0]]) axs[0, 1].set_ylim([xmin[1], xmax[1]]) if tatual==99: axs[1, 0].plot(x[:,0],x[:,1],'ro') axs[1, 0].set_title('IteraΓ§Γ£o 100') axs[1, 0].set_xlim([xmin[0], xmax[0]]) axs[1, 0].set_ylim([xmin[1], xmax[1]]) if tatual==499: axs[1, 1].plot(x[:,0],x[:,1],'ro') axs[1, 1].set_title('IteraΓ§Γ£o 499') axs[1, 1].set_xlim([xmin[0], xmax[0]]) axs[1, 1].set_ylim([xmin[1], xmax[1]]) for ax in axs.flat: ax.set(xlabel='x1', ylabel='x2') print('Optimal x:', g) print('Optimal Fobj(x):', objective_function(g)) The output shows that the PSO algorithm is finding values that are within the desired range of x4(tf)>=0.8. Objective: 0.69040319802 with c0: 480.37, c1: 1051.17, c2: 1058.53 Objective: 0.72953879156 with c0: 586.82, c1: 46.14, c2: 1942.71 Objective: 0.80819933391 with c0: 2070.86, c1: 2254.40, c2: 318.32 Objective: 0.78391848015 with c0: 597.28, c1: 178.80, c2: 3303.17 Objective: 0.76343411822 with c0: 2237.15, c1: 505.65, c2: 959.68 Objective: 0.76438551815 with c0: 1274.94, c1: 1470.23, c2: 1192.04 Objective: 0.78609899052 with c0: 2079.49, c1: 910.99, c2: 1578.82 Objective: 0.76287892509 with c0: 1204.67, c1: 1508.01, c2: 1147.28 Objective: 0.47995105183 with c0: 243.01, c1: 865.63, c2: 136.56 Objective: 0.79657986275 with c0: 1549.06, c1: 2994.10, c2: 1932.67 Objective: 0.81878356134 with c0: 2615.95, c1: 2228.12, c2: 99.50 Objective: 0.81402792602 with c0: 3301.50, c1: 3027.91, c2: 1771.54 Objective: 0.76571967789 with c0: 1422.29, c1: 486.43, c2: 1752.17 Objective: 0.79774007627 with c0: 1946.12, c1: 3114.17, c2: 2915.55 Objective: 0.77935267715 with c0: 393.45, c1: 2859.27, c2: 2975.77 Objective: 0.81096055569 with c0: 2799.38, c1: 493.08, c2: 1955.81 Objective: 0.78843478051 with c0: 2353.52, c1: 975.26, c2: 1331.36 Objective: 0.81830447795 with c0: 3341.08, c1: 588.18, c2: 3206.25 Objective: 0.80886129367 with c0: 2370.03, c1: 1513.05, c2: 3226.52 Objective: 0.70160310864 with c0: 300.87, c1: 676.89, c2: 1835.97 Objective: 0.82196905283 with c0: 3349.32, c1: 3316.72, c2: 687.28 Objective: 0.81945046508 with c0: 3363.60, c1: 359.99, c2: 1237.87 Objective: 0.75740562215 with c0: 1026.78, c1: 1414.63, c2: 1488.10 Objective: 0.81832258587 with c0: 3366.10, c1: 315.24, c2: 2954.72 Objective: 0.75937879255 with c0: -7.19, c1: 1969.84, c2: 2659.16 Objective: 0.76653792682 with c0: 977.31, c1: 606.03, c2: 2199.25 Objective: 0.7927165714 with c0: 1223.03, c1: 787.54, c2: 3077.25 ... A limitation of the PSO Temperature Profile Optimization is that the profile is limited to polynomial solutions. PSO is slow, but it does have the advantage of better finding the global optimum. Gekko Temperature Profile Optimization The other way to solve this is to use Gekko's optimization capabilities with gradient-based solvers. 
import matplotlib.pyplot as plt import numpy as np from gekko import GEKKO m = GEKKO(remote=False) tf = 100 # Final time m.time = np.linspace(0, tf, tf) # Time interval t = m.time # Temperature profile T = m.MV(300,lb=298,ub=338) T.STATUS = 1 # Arrhenius equations A1, b1 = 3.92e7, 6614.83 A2, b2 = 5.77e5, 4997.98 A3, b3 = 5.88e12, 9993.96 A4, b4 = 0.98e10, 7366.64 A5, b5 = 5.35e3, 3231.18 A6, b6 = 2.15e4, 4824.87 k1 = m.Intermediate(A1*m.exp(-b1/T)) k2 = m.Intermediate(A2*m.exp(-b2/T)) k3 = m.Intermediate(A3*m.exp(-b3/T)) k4 = m.Intermediate(A4*m.exp(-b4/T)) k5 = m.Intermediate(A5*m.exp(-b5/T)) k6 = m.Intermediate(A6*m.exp(-b6/T)) # Decision variables' initial values x1 = m.Var(value=0.3226) x2 = m.Var(value=0) x3 = m.Var(value=0) x4 = m.Var(value=0) x5 = m.Var(value=1.9356) x6 = m.Var(value=0) # Dynamic model m.Equation(x1.dt() == -k1*x1*x5 + k2*x2*x4) m.Equation(x2.dt() == k1*x1*x5 - k2*x2*x4 - k3*x2*x5 + k4*x3*x4) m.Equation(x3.dt() == k3*x2*x5 - k4*x3*x4 - k5*x3*x5 + k6*x6*x4) m.Equation(x4.dt() == k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4) m.Equation(x5.dt() == -(k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4)) m.Equation(x6.dt() == k5*x3*x5 - k6*x6*x4) p = np.zeros(tf) p[-1] = 1.0 final = m.Param(value=p) m.Maximize(x4*final) m.options.IMODE = 6 m.solve(disp=True, debug=True) f = x4.value[-1] print(f'Objective: {f}') plt.figure(figsize=(6,4)) plt.subplot(2,1,1) plt.plot(t,T,'r--',label='Temperature') plt.grid(); plt.legend(); plt.ylabel('T (K)') plt.subplot(2,1,2) plt.plot(t,x1,label='x1') plt.plot(t,x2,label='x2') plt.plot(t,x3,label='x3') plt.plot(t,x4,label='x4') plt.plot(t,x5,label='x5') plt.plot(t,x6,label='x6') plt.grid(); plt.legend() plt.xlabel('Time'); plt.ylabel('Mole frac') plt.tight_layout() plt.show() A solution of x4(tf)=0.83177 is found in 0.832 seconds. EXIT: Optimal Solution Found. The solution was found. The final value of the objective function is -0.8313902000699634 --------------------------------------------------- Solver : IPOPT (v3.12) Solution time : 0.8324999999999999 sec Objective : -0.8313902000699634 Successful solution --------------------------------------------------- Objective: 0.83177020385 The solver drives the temperature to the maximum allowable temperature to maximize x4. Control the rate of change of the MV with T.DMAX=10 to limit the amount that the temperature can change each time period. It is also possible to optimize the values of the coefficients of a polynomial for a differentiable temperature profile. 
import matplotlib.pyplot as plt import numpy as np from gekko import GEKKO m = GEKKO(remote=False) tf = 100 # Final time m.time = np.linspace(0, tf, tf) # Time interval t = m.time # Temperature profile c0, c1, c2 = m.Array(m.FV,3,lb=-1000,ub=1e4) c0.STATUS = 1; c1.STATUS = 1; c2.STATUS = 1 p0 = 0.1 p1 = m.Param((t - 50)*(3**0.5/500)) p2 = m.Param((t**2 - 100*t + 5000/3)*(3/(500000000**0.5))) T = m.Var(lb=298,ub=338) m.Equation(T==c0*p0 + c1*p1 + c2*p2) # Arrhenius equations A1, b1 = 3.92e7, 6614.83 A2, b2 = 5.77e5, 4997.98 A3, b3 = 5.88e12, 9993.96 A4, b4 = 0.98e10, 7366.64 A5, b5 = 5.35e3, 3231.18 A6, b6 = 2.15e4, 4824.87 k1 = m.Intermediate(A1*m.exp(-b1/T)) k2 = m.Intermediate(A2*m.exp(-b2/T)) k3 = m.Intermediate(A3*m.exp(-b3/T)) k4 = m.Intermediate(A4*m.exp(-b4/T)) k5 = m.Intermediate(A5*m.exp(-b5/T)) k6 = m.Intermediate(A6*m.exp(-b6/T)) # Decision variables' initial values x1 = m.Var(value=0.3226) x2 = m.Var(value=0) x3 = m.Var(value=0) x4 = m.Var(value=0) x5 = m.Var(value=1.9356) x6 = m.Var(value=0) # Dynamic model m.Equation(x1.dt() == -k1*x1*x5 + k2*x2*x4) m.Equation(x2.dt() == k1*x1*x5 - k2*x2*x4 - k3*x2*x5 + k4*x3*x4) m.Equation(x3.dt() == k3*x2*x5 - k4*x3*x4 - k5*x3*x5 + k6*x6*x4) m.Equation(x4.dt() == k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4) m.Equation(x5.dt() == -(k1*x1*x5 - k2*x2*x4 + k3*x2*x5 - k4*x3*x4 + k5*x3*x5 - k6*x6*x4)) m.Equation(x6.dt() == k5*x3*x5 - k6*x6*x4) p = np.zeros(tf) p[-1] = 1.0 final = m.Param(value=p) m.Maximize(x4*final) m.options.IMODE = 6 m.solve(disp=True, debug=True) f = x4.value[-1] print(f'Objective: {f}') print(f'c0: {c0.value[0]}, c1: {c1.value[0]}, c2: {c2.value[0]}') plt.figure(figsize=(6,4)) plt.subplot(2,1,1) plt.plot(t[1:],T[1:],'r--',label='Temperature') plt.grid(); plt.legend(); plt.ylabel('T (K)') plt.subplot(2,1,2) plt.plot(t,x1,label='x1') plt.plot(t,x2,label='x2') plt.plot(t,x3,label='x3') plt.plot(t,x4,label='x4') plt.plot(t,x5,label='x5') plt.plot(t,x6,label='x6') plt.grid(); plt.legend() plt.xlabel('Time'); plt.ylabel('Mole frac') plt.tight_layout() plt.savefig('results.png',dpi=300) plt.show() The initial condition is not calculated so it is left out of the plot. plt.plot(t[1:],T[1:],'r--',label='Temperature') Another interesting problem that is similar to this one is the Oil Shale Pyrolysis and other benchmark problems listed on the website.
2
1
78,696,026
2024-7-2
https://stackoverflow.com/questions/78696026/jupyter-notebook-how-to-direct-the-output-to-a-specific-cell
Is there a way to specify the output cell where a function should print its output? In my specific case, I have some threads running, each with a logger. The logger output is printed on any running cell, interfering with that cell's intended output. Is there a way I can force the logger to print only on cell #1, for example?
You could use the following approach: Redirect all log messages in the root logger (which you will get by calling getLogger()) to a QueueHandler to accumulate the log messages in a queue.Queue. In the intended output cell, start a QueueListener that wraps a StreamHandler. The QueueListener, as its name implies, will listen to new items on the logging queue. It will pass new items to the StreamHandler, which will actually print them. Assuming we want to print below cell 1, this could look as follows: # Cell 1 import logging, queue, threading, time from logging.handlers import QueueHandler, QueueListener log_queue = queue.Queue(-1) logging.getLogger().addHandler(QueueHandler(log_queue)) listener = QueueListener(log_queue, logging.StreamHandler()) listener.start() In cell 2, we will simulate some activity: # Cell 2 def log_activity_1(): while True: logging.getLogger().warning("Activity 1") time.sleep(1) threading.Thread(target=log_activity_1, daemon=True).start() And likewise, in cell 3: # Cell 3 def log_activity_2(): while True: logging.getLogger().warning("Activity 2") time.sleep(2) threading.Thread(target=log_activity_2, daemon=True).start() The output will happen, basically in real time, under the cell that contains the listener.start() call, thus under cell 1 (and only there) in our case. It will look as expected: For each logged "Activity 2", we will see "Activity 1" logged in alternation and approximately twice as often, as we sleep 2 seconds in the former, and 1 second in the latter: Once processing has finished, we can stop the QueueListener (either programmatically or manually) via listener.stop() – or rather, we should stop the listener this way, following its documentation: if you don’t call [stop()] before your application exits, there may be some records still left on the queue, which won’t be processed.
5
6
78,694,739
2024-7-2
https://stackoverflow.com/questions/78694739/why-does-flask-seem-to-require-a-redirect-after-post
I have an array of forms I want rendered in a flask blueprint called fans. I am using sqlalchemy to SQLLite during dev to persist the data and flask-wtforms to render. The issue appears to be with DecimalRangeField - if I have two or more fans and change the slider on just one, the other slider appears to move to match it, despite the data value of the DecimalRangeField being unchanged. Note: the code below is working, the issue arises when the redirect line I highlighted below is deleted. Here is the routes.py code with the "fix" of the redirect added: @bp.route('/', methods=['GET', 'POST']) def fans_index(): fans = Fan.query.all() if fans.__len__() == 0: return redirect(url_for('fans.newfan')) form = FanForm(request.form) if form.validate_on_submit(): # request.method == 'POST' for fan in fans: if fan.name == form.name.data: fan.swtch = form.swtch.data fan.speed = round(form.speed.data) db.session.commit() return redirect(url_for('fans.fans_index')) # <-- THIS is required, why? else: # request.method == 'GET' pass forms = [] for fan in fans: form = FanForm() form.name.data = fan.name form.swtch.data = fan.swtch form.speed.data = fan.speed forms.append(form) return render_template('fans_index.html', title='Fans!', forms=forms) Here is the form used: class FanForm(FlaskForm): name = HiddenField('Name') swtch = BooleanField('Switch', render_kw={'class': 'swtch'}) speed = DecimalRangeField('Speed', render_kw={'class': 'speed'}, validators=[DataRequired()]) submit = SubmitField('Save Fan') And here is the html template: <h1>Fans</h1> <div class="container"> <div class="row"> {% for form in forms %} <div class="col mx-1 shadow-5-strong border border-white rounded" style="max-width: 220px"> <h2 class="ms-1">{{ form.name.data }}:</h2> <form class="mx-auto ms-3" name="{{ form.name.data }}" action="" method="post"> {{ form.hidden_tag() }} <div>{{ form.name }}</div> <p> {{ form.speed.label }}: <span class="speed_display_val">{{ form.speed.data | round }}%</span><br> {{ form.speed(min=20) }}<br> {% for error in form.speed.errors %} <span style="color: red;">[{{ error }}]</span> {% endfor %} </p> <p> {{ form.swtch.label }} <span class="ms-3">{{ form.swtch }}</span><br> {% for error in form.swtch.errors %} <span style="color: red;">[{{ error }}]</span> {% endfor %} </p> <p>{{ form.submit }}</p> </form> </div> {% endfor %} </div> </div> I also have some simple javascript for this page to animate the slider and do submits when a user moves the slider or clicks a checkbox for turning the fans on and off: /* script to animate the slider value changing and post data on slider mouseup or switch click */ const values = Array.from(document.getElementsByClassName('speed_display_val')); const speeds = Array.from(document.getElementsByClassName('speed')); const swtches = Array.from(document.getElementsByClassName('swtch')); //Note: switch is a reserved word in JS speeds.forEach((speed, i) => { speed.oninput = (e) => { values[i].textContent = Math.round(e.target.value) + '%' }; speed.onmouseup = () => { speed.form.requestSubmit() }; }); swtches.forEach((swtch) => { swtch.onclick = () => { swtch.form.requestSubmit() }; });
FlaskForm will automatically use the values from flask.request.form and flask.request.files. To work around this, you can pass None for the formdata attribute of the form. This way, the redirect that resets flask.request.form is no longer necessary. Your code would then look something like this. @bp.route('/', methods=['GET', 'POST']) def fans_index(): if Fan.query.count() == 0: return redirect(url_for('.newfan')) form = FanForm(request.form) if form.validate_on_submit(): if fan := Fan.query.filter_by(name=form.name.data).first(): fan.swtch = form.swtch.data fan.speed = round(form.speed.data) db.session.commit() forms = [FanForm(formdata=None, obj=fan) for fan in Fan.query.all()] return render_template('fans_index.html', **locals())
2
2
78,689,530
2024-6-30
https://stackoverflow.com/questions/78689530/type-error-when-running-model-trained-in-roboflow-in-production-environment
I can make inference from my trained Roboflow model using Google Colab and the AWS cloud9 test environment. To do this, I used the following code: from roboflow import Roboflow rf = Roboflow(api_key="xxxxxxxxxxxxxxxxx") path = "/context/image.jpeg" project = rf.workspace().project("xxxxxx") model = project.version(x).model # infer on a local image currency_result= model.predict(path, confidence=40).json()``` However, when I put it into production I got the following error: [ERROR] TypeError: expected str, bytes or os.PathLike object, not NoneType Traceback (most recent call last): File "/var/lang/lib/python3.8/imp.py", line 234, in load_module return load_source(name, filename, file) File "/var/lang/lib/python3.8/imp.py", line 171, in load_source module = _load(spec) File "<frozen importlib._bootstrap>", line 702, in _load File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/var/task/app.py", line 1, in <module> from roboflow import Roboflow File "/var/lang/lib/python3.8/site-packages/roboflow/__init__.py", line 10, in <module> from roboflow.adapters import rfapi File "/var/lang/lib/python3.8/site-packages/roboflow/adapters/rfapi.py", line 8, in <module> from roboflow.config import API_URL, DEFAULT_BATCH_NAME, DEFAULT_JOB_NAME File "/var/lang/lib/python3.8/site-packages/roboflow/config.py", line 51, in <module> API_URL = get_conditional_configuration_variable("API_URL", "https://api.roboflow.com") File "/var/lang/lib/python3.8/site-packages/roboflow/config.py", line 22, in get_conditional_configuration_variable default_path = os.path.join(os.getenv("HOME"), ".config/roboflow/config.json") File "/var/lang/lib/python3.8/posixpath.py", line 76, in join a = os.fspath(a) What I found interesting was that I was able to run the code in cloud9 using sam local invoke, but in the production environment I got the above error.
As @iurisilvio pointed out, try adding the HOME environment variable to your Lambda configuration. The stack trace hints at where the error is: os.getenv("HOME") is None, and hence os.path.join(os.getenv("HOME"), ".config/roboflow/config.json") results in an error. Below is from Python 3.8: >>> import os >>> os.path.join(None, "foo") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.8/posixpath.py", line 76, in join a = os.fspath(a) TypeError: expected str, bytes or os.PathLike object, not NoneType
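If changing the Lambda configuration is not convenient, a hedged alternative is to point HOME at Lambda's writable /tmp directory before roboflow is imported:
import os
os.environ.setdefault("HOME", "/tmp")  # must run before the roboflow import below

from roboflow import Roboflow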
3
1
78,689,702
2024-6-30
https://stackoverflow.com/questions/78689702/different-embeddings-for-same-sentences-with-torch-transformer
Hey all and apologies in advance for what is probably a fairly basic question - I have a theory about what's causing the issue here, but would be great to confirm with people who know more about this than I do. I've been trying to implement this python code snippet in Google colab. The snippet is meant to work out similarity for sentences. The code runs fine, but what I'm finding is that the embeddings and distances change every time I run it, which isn't ideal for my intended use case. import torch from scipy.spatial.distance import cosine from transformers import AutoModel, AutoTokenizer # Import our models. The package will take care of downloading the models automatically tokenizer = AutoTokenizer.from_pretrained("qiyuw/pcl-bert-base-uncased") model = AutoModel.from_pretrained("qiyuw/pcl-bert-base-uncased") # Tokenize input texts texts = [ "There's a kid on a skateboard.", "A kid is skateboarding.", "A kid is inside the house." ] inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt") # Get the embeddings with torch.no_grad(): embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output # Calculate cosine similarities # Cosine similarities are in [-1, 1]. Higher means more similar cosine_sim_0_1 = 1 - cosine(embeddings[0], embeddings[1]) cosine_sim_0_2 = 1 - cosine(embeddings[0], embeddings[2]) print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (texts[0], texts[1], cosine_sim_0_1)) print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (texts[0], texts[2], cosine_sim_0_2)) I think the issue must be model specific since I receive the warning about newly initialized pooler weights, and pooler_output is ultimately what the code reads to inform similarity: Some weights of RobertaModel were not initialized from the model checkpoint at qiyuw/pcl-roberta-large and are newly initialized: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Switching to an alternative model which does not give this warning (for example, sentence-transformers/all-mpnet-base-v2) makes the outputs reproducible, so I think it is because of the above warning about initialization of weights. So here are my questions: Can I make the output reproducible by initialising/seeding the model differently? If I can't make the outputs reproducible, is there a way in which I can improve the accuracy to reduce the amount of variation between runs? Is there a way to search huggingface models for those which will initialise the pooler weights so I can find a model which does suit my purposes? Thanks in advance
You are correct the model layer weights for bert.pooler.dense.bias and bert.pooler.dense.weight are initialized randomly. You can initialize these layers always the same way for a reproducible output, but I doubt the inference code that you have copied from there readme is correct. As already mentioned by you the pooling layers are not initialized and their model class also makes sure that the pooling_layer is not added: ... self.bert = BertModel(config, add_pooling_layer=False) ... The evaluation script of the repo should be called, according to the readme with the following command: python evaluation.py --model_name_or_path qiyuw/pcl-bert-base-uncased --mode test --pooler cls_before_pooler When you look into it, your inference code for qiyuw/pcl-bert-base-uncased should be the following way: import torch from scipy.spatial.distance import cosine from transformers import AutoModel, AutoTokenizer # Import our models. The package will take care of downloading the models automatically tokenizer = AutoTokenizer.from_pretrained("qiyuw/pcl-bert-base-uncased") model = AutoModel.from_pretrained("qiyuw/pcl-bert-base-uncased") # Tokenize input texts texts = [ "There's a kid on a skateboard.", "A kid is skateboarding.", "A kid is inside the house." ] inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt") # Get the embeddings with torch.inference_mode(): embeddings = model(**inputs) embeddings = embeddings.last_hidden_state[:, 0] # Calculate cosine similarities # Cosine similarities are in [-1, 1]. Higher means more similar cosine_sim_0_1 = 1 - cosine(embeddings[0], embeddings[1]) cosine_sim_0_2 = 1 - cosine(embeddings[0], embeddings[2]) print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (texts[0], texts[1], cosine_sim_0_1)) print("Cosine similarity between \"%s\" and \"%s\" is: %.3f" % (texts[0], texts[2], cosine_sim_0_2)) Output: Cosine similarity between "There's a kid on a skateboard." and "A kid is skateboarding." is: 0.941 Cosine similarity between "There's a kid on a skateboard." and "A kid is inside the house." is: 0.779 Can I make the output reproducible by initialising/seeding the model differently? Yes, you can. Use torch.maunal_seed: import torch from transformers import AutoModel, AutoTokenizer model_random = AutoModel.from_pretrained("qiyuw/pcl-bert-base-uncased") torch.manual_seed(42) model_repoducible1 = AutoModel.from_pretrained("qiyuw/pcl-bert-base-uncased") torch.manual_seed(42) model_repoducible2 = AutoModel.from_pretrained("qiyuw/pcl-bert-base-uncased") print(torch.allclose(model_random.pooler.dense.weight, model_repoducible1.pooler.dense.weight)) print(torch.allclose(model_random.pooler.dense.weight, model_repoducible2.pooler.dense.weight)) print(torch.allclose(model_repoducible1.pooler.dense.weight, model_repoducible2.pooler.dense.weight)) Output: False False True
3
1
78,689,321
2024-6-30
https://stackoverflow.com/questions/78689321/how-to-solve-sql-compilation-error-object-snowpark-temp-stage-flgviwvuc-alre
I have been using Snowflake for ML work. I have built a multiple linear regression. I am writing an output df as a table using session.write_pandas. I get the error 'SQL compilation error: Object 'SNOWPARK_TEMP_STAGE_XXXXX' already exists.' in a Snowflake Streamlit app. It goes away if I refresh the page, but why does it occur? I could not find issues related to this on the web. I get this error only if I write the dataframe with write_pandas. Below is the code where I get this error. from snowflake.snowpark.context import get_active_session session = get_active_session() session.write_pandas(df_final, "TEMPTABLE",database="db",schema="schema", auto_create_table=True, overwrite=True) I have tried to clear or drop the stage every time before the write_pandas call. import uuid # Drop the stage if it already exists session.sql(f"DROP STAGE IF EXISTS {stage_name}").collect() # Create a new temporary stage session.sql(f"CREATE TEMPORARY STAGE {stage_name}").collect() This does not work, so what is the real reason I get this error? Please help, I have no idea why this happens.
Trying to patch away two types of errors when writing from a pandas DataFrame() to tables in Snowflake: SNOWPARK_TEMP_FILE_FORMAT_XXXX already exists SNOWPARK_TEMP_STAGE_XXXX already exists There is also a race condition as to which one occurs first, so we have to write a try-except within a try-except to resolve this issue. This issue is seen in all Snowpark-related tools like Streamlit in Snowflake, Python worksheets and Python stored procedures etc. # EXAMPLE: # Modify this fix to your own use accordingly ✅ # Snowflake may patch this BUG in the future. # This BUG is really annoying. import re from snowflake.snowpark.context import get_active_session session = get_active_session() def try_to_write_pd_to_table(my_session, my_df, my_tablename, my_db, my_sch): try: my_session.write_pandas(my_df, my_tablename, database=f"{my_db}", schema=f"{my_sch}", auto_create_table=True, overwrite=True) except Exception as e1: try: tmp_file_format=re.findall('SNOWPARK_TEMP_FILE_FORMAT_[A-Z]{10}', str(e1)) if(len(tmp_file_format)>0): my_session.sql(f'DROP FILE FORMAT IF EXISTS {my_db}.{my_sch}.{tmp_file_format[0]};').collect(); tmp_stage=re.findall('SNOWPARK_TEMP_STAGE_[A-Z]{10}', str(e1)) if(len(tmp_stage)>0): my_session.sql(f'DROP STAGE IF EXISTS {my_db}.{my_sch}.{tmp_stage[0]};').collect(); my_session.write_pandas(my_df, my_tablename, database=f"{my_db}", schema=f"{my_sch}", auto_create_table=True, overwrite=True) except Exception as e2: tmp_file_format=re.findall('SNOWPARK_TEMP_FILE_FORMAT_[A-Z]{10}', str(e2)) if(len(tmp_file_format)>0): my_session.sql(f'DROP FILE FORMAT IF EXISTS {my_db}.{my_sch}.{tmp_file_format[0]};').collect(); tmp_stage=re.findall('SNOWPARK_TEMP_STAGE_[A-Z]{10}', str(e2)) if(len(tmp_stage)>0): my_session.sql(f'DROP STAGE IF EXISTS {my_db}.{my_sch}.{tmp_stage[0]};').collect(); my_session.write_pandas(my_df, my_tablename, database=f"{my_db}", schema=f"{my_sch}", auto_create_table=True, overwrite=True) try_to_write_pd_to_table(session, df_final, "TEMPTABLE", 'LLM_DATABASE', 'ML_SCHEMA');
3
1
78,679,676
2024-6-27
https://stackoverflow.com/questions/78679676/how-to-remove-white-background-on-kivy-app-icon
The Kivy app icon is not filling the entire space designed for it. I am not fluent in English, but I will describe my problem as best as I can. I am using an icon of size 480 x 320 and it works fine on the smartphone, but instead of occupying the entire space, the icon is reduced to approximately 50% of its size. The rest is occupied by a white background. How do I make the icon occupy the entire space? The only change I made in the spec file related to the icon was: icon.filename = directory_name/icon_name.png
There are in fact two ways to add icons for Kivy apps: the quick one that you used, which is simple but does not allow resizing of the icon, and an adaptive one which works as follows. You have to edit the following lines of the buildozer.spec file instead: # (str) Adaptive icon of the application (used if Android API level is 26+ at runtime) icon.adaptive_foreground.filename = ./resources/ic_launcher.png icon.adaptive_background.filename = ./resources/ic_launcher_background.png The two images must be defined as explained on this page: https://developer.android.com/develop/ui/views/launch/icon_design_adaptive Both images should have the same size; I recommend using 512x512. The foreground should have content only in the center square of dimension 312x312 (the recommended dimension).
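As a small illustration (an addition, not part of the original answer), the foreground image can be prepared with Pillow; the 512/312 sizes follow the recommendation above, while the file names and the use of Pillow itself are assumptions.

from PIL import Image

def make_adaptive_foreground(src_path, dst_path, canvas=512, content=312):
    # Scale the existing icon into the centre "safe zone" of a transparent
    # 512x512 canvas, so only the middle 312x312 square carries content.
    icon = Image.open(src_path).convert("RGBA")
    icon.thumbnail((content, content))
    out = Image.new("RGBA", (canvas, canvas), (0, 0, 0, 0))
    out.paste(icon, ((canvas - icon.width) // 2, (canvas - icon.height) // 2), icon)
    out.save(dst_path)

make_adaptive_foreground("icon_name.png", "./resources/ic_launcher.png")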
2
1
78,689,283
2024-6-30
https://stackoverflow.com/questions/78689283/exposing-11434-port-in-docker-container-to-access-ollama-local-model
I am trying to connect local Ollama 2 model, that uses port 11434 on my local machine, with my Docker container running Linux Ubuntu 22.04. I can confirm that Ollama model definitely works and is accessible through http://localhost:11434/. In my Docker container, I am also running GmailCTL service and was able to successfully connect with Google / Gmail API to read and send emails from Google account. Now I want to wait for an email and let the LLM answer the email back to the sender. However, I am not able to publish the 11434 port in order to connect model with container. I tried setting up devcontainer.json file to forward the ports: { "name": "therapyGary", "build": { "context": "..", "dockerfile": "../Dockerfile" }, "forwardPorts": [80, 8000, 8080, 11434] } I tried exposing the ports in the Dockerfile: EXPOSE 80 EXPOSE 8000 EXPOSE 8080 EXPOSE 11434` These seem to add the ports to the container and Docker is aware of them, but when I check the port status for the currently used image, I get this message: "Error: No public port '11434' published for 5ae41009199a" I also tried setting up the docker-compose.yaml file: services: my_service: image: 53794c7c792c # Replace with your actual Docker image name ports: - "11434:11434" - "8000:8000" - "8080:8080" - "80:80" But there seems to be a problem with it, where any container with it automatically stops. I tried stopping the Ollama model, before running the container as to not create a conflict, but that did not help either. Any suggestions are very welcome. Thanks! -- edit -- adding Dockerfile code: FROM ubuntu:22.04 ENV DEBIAN_FRONTEND=noninteractive ENV GMAILCTL_VERSION=0.10.1 RUN apt-get update && apt-get install -y python3 python3-pip xdotool curl software-properties-common libreoffice unzip && apt-get clean RUN pip3 install --upgrade pip RUN pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib pandas requests RUN useradd -ms /bin/bash devuser RUN mkdir -p /workspace && chown -R devuser:devuser /workspace USER root WORKDIR /workspace COPY . . RUN chown -R devuser:devuser /workspace EXPOSE 80 EXPOSE 8000 EXPOSE 8080 EXPOSE 11434 CMD [ "bash" ]
So remove the EXPOSE 11434 statement, what that does is let you connect to a service in the docker container using that port. 11434 is running on your host machine, not your docker container. To let the docker container see port 11434 on your host machine, you need use the host network driver, so it can see anything on your local network. To do this, you can use the runArgs parameter: { "name": "therapyGary", "build": { "context": "..", "dockerfile": "../Dockerfile" }, "forwardPorts": [80, 8000, 8080, 11434] } would become { "name": "therapyGary", "build": { "context": "..", "dockerfile": "../Dockerfile" }, "runArgs": ["--net=host"] } Then, from within your container, you should be able to contact the LLM on port 11434 by referencing localhost or 127.0.0.1 from your container. E.g. in netcat nc localhost 11434. If you're using Docker Desktop, you need to enable host networking by going into Features in development tab in Settings and select the Enable host networking option, per the documentation here: Docker Desktop As a side note, you can use --net=host or --network=host, both work on my machine using Windows 11 and Docker Desktop. If you want to use a docker compose yaml file, you would use the network_mode parameter: services: my_service: image: 53794c7c792c # Replace with your actual Docker image name network_mode: "host" Because you're putting the container on the host network, there is no need to expose ports, since it's like plugging your container directly into your network. See the Note in the documentation. References: Dev Container Image Specific Properties Docker Engine Host Network Driver Reference
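To confirm the setup works, a minimal check (an illustrative addition, not from the original answer) can be run inside the container once host networking is enabled; the root URL is the one the question already reports as reachable on the host.

import urllib.request

# With --net=host (or network_mode: "host"), the host's Ollama service should
# be reachable from inside the container as localhost:11434.
try:
    with urllib.request.urlopen("http://localhost:11434/", timeout=5) as resp:
        print(resp.status, resp.read().decode())
except Exception as exc:
    print("Ollama not reachable from the container:", exc)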
4
0
78,683,593
2024-6-28
https://stackoverflow.com/questions/78683593/defining-dynamic-constraints-for-scipy-optimize-in-python
I wanted to abstract the following function that calculates minimum value of a objective function and values when we can get this minimal value for arbitrary number of g's. I started with simple case of two variables, which works fine import numpy as np from scipy.optimize import minimize def optimize(g_0, s, g_max, eff, dist): objective = lambda x: s[0] * x[0] + s[1] * x[1] cons = [ {'type': 'ineq', 'fun': lambda x: x[0] - ((dist[0] + dist[1]) / eff - g_0)}, # g_1 > (dist[0] + dist[1]) / eff - g_0 {'type': 'ineq', 'fun': lambda x: x[1] - ((dist[0] + dist[1] + dist[2]) / eff - x[0] - g_0)}, # g_2 > (dist[0] + dist[1] + dist[2]) / eff - g_1 - g_0 {'type': 'ineq', 'fun': lambda x: g_max - (dist[0] / eff - g_0) - x[0]}, # g_1 < g_max - (dist[0] / eff - g_0) {'type': 'ineq', 'fun': lambda x: g_max - (g_0 + x[0] - (dist[0] - dist[1]) / eff) - x[1]}, # g_2 < g_max - (g_0 + g_1 - (dist[0] - dist[1]) / eff) ] # General constraints for all g for i in range(len(s)): cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i]}) # g_i > 0 cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - x[i]}) # g_i < g_max # Bounds for the variables (g_1 and g_2) g1_lower_bound = max(0, (dist[0] + dist[1]) / eff - g_0) g1_upper_bound = min(g_max, g_max - (dist[0] / eff - g_0)) # Initial guess for the variables x0 = [g1_lower_bound, max(0, ((dist[0] + dist[1] + dist[2]) / eff - g1_lower_bound - g_0) + 1)] solution = minimize(objective, x0, method='SLSQP', bounds=[(g1_lower_bound, g1_upper_bound), (0, g_max)], constraints=cons) g_1, g_2 = map(round, solution.x) return g_1, g_2, round(solution.fun, 2) g_0 = 80 s = [4.5, 3] g_max = 135 eff = 5 dist = [400, 500, 600] optimal_g1, optimal_g2, minimum_value = optimize(g_0, s, g_max, eff, dist) print(f"Optimal values: g_1 = {optimal_g1}, g_2 = {optimal_g2}") print(f"Minimum value of the objective function: {minimum_value}") Then I started to abstract it for any arbitrary numbers of g (i>=1). 
Here's the generale rule for inequality So far I came to this code, but when I tried to send the exact same parametrs it gives different result import numpy as np from scipy.optimize import minimize def optimize(g_0, s, g_max, eff, dist): objective = lambda x: sum(s[i] * x[i] for i in range(len(x))) cons = [] for i in range(len(s)): cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i] - (sum(dist[:i+2]) / eff - sum([g_0] + x[:i]))}) # g_i > sum of dists from dist[0] to dist[i] / eff - sum of g from g_0 to g_(i-1) cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - (sum([g_0] + x[:i]) - (sum(dist[:i+1]) / eff)) - x[i]}) # g_i < g_max - (sum of g from g_0 to g_(i-1) - sum of dists from dist[0] to dist[i] / eff) # General constraints to ensure each g is between 0 and g_max cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i]}) # g_i > 0 cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - x[i]}) # g_i < g_max g1_lower_bound = max(0, (dist[0] + dist[1]) / eff - g_0) g1_upper_bound = min(g_max, g_max - (dist[0] / eff - g_0)) # Initial guess for the variables x0 = [g1_lower_bound, max(0, ((dist[0] + dist[1] + dist[2]) / eff - g1_lower_bound - g_0) + 1)] solution = minimize(objective, x0, method='SLSQP', bounds=[(0, g_max) for _ in range(len(s))], constraints=cons) g_values = list(map(round, solution.x)) return g_values, round(solution.fun, 2) g_0 = 80 s = [4.5, 3] g_max = 135 eff = 5 dist = [400, 500, 600] optimal_g_values, minimum_value = optimize(g_0, s, g_max, eff, dist) print(f"Optimal values: g = {optimal_g_values}") print(f"Minimum value of the objective function: {minimum_value}") The first function gives following result, which is also correct one: Optimal values: g_1 = 100, g_2 = 120 Minimum value of the objective function: 810.0 But the second, after trying to make it for arbitrary len of g it returns: Optimal values: g = [135, 85] Minimum value of the objective function: 862.5 I checked the lambda functions in second function and they seems to be correct. 
Also when I try to execute this code: import numpy as np from scipy.optimize import minimize def optimize(g_0, s, g_max, eff, dist): objective = lambda x: sum(s[i] * x[i] for i in range(len(x))) cons = [ {'type': 'ineq', 'fun': lambda x: x[0] - ((dist[0] + dist[1]) / eff - g_0)}, # g_1 > (dist[0] + dist[1]) / eff - g_0 {'type': 'ineq', 'fun': lambda x: x[1] - ((dist[0] + dist[1] + dist[2]) / eff - x[0] - g_0)}, # g_2 > (dist[0] + dist[1] + dist[2]) / eff - g_1 - g_0 ] for i in range(len(s)): #cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i] - (sum(dist[:i+2]) / eff - sum([g_0] + x[:i]))}) # g_i > sum of dists from dist[0] to dist[i] / eff - sum of g from g_0 to g_(i-1) cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - (sum([g_0] + x[:i]) - (sum(dist[:i+1]) / eff)) - x[i]}) # g_i < g_max - (sum of g from g_0 to g_(i-1) - sum of dists from dist[0] to dist[i] / eff) # General constraints to ensure each g is between 0 and g_max cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i]}) # g_i > 0 cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - x[i]}) # g_i < g_max g1_lower_bound = max(0, (dist[0] + dist[1]) / eff - g_0) g1_upper_bound = min(g_max, g_max - (dist[0] / eff - g_0)) # Initial guess for the variables x0 = [g1_lower_bound, max(0, ((dist[0] + dist[1] + dist[2]) / eff - g1_lower_bound - g_0) + 1)] solution = minimize(objective, x0, method='SLSQP', bounds=[(0, g_max) for _ in range(len(s))], constraints=cons) g_values = list(map(round, solution.x)) return g_values, round(solution.fun, 2) g_0 = 80 s = [4.5, 3] g_max = 135 eff = 5 dist = [400, 500, 600] optimal_g_values, minimum_value = optimize(g_0, s, g_max, eff, dist) print(f"Optimal values: g = {optimal_g_values}") print(f"Minimum value of the objective function: {minimum_value}") It gives the correct answer: Optimal values: g_1 = 100, g_2 = 120 Minimum value of the objective function: 810.0 But if we apply for the other constrain: import numpy as np from scipy.optimize import minimize def optimize(g_0, s, g_max, eff, dist): objective = lambda x: sum(s[i] * x[i] for i in range(len(x))) cons = [ {'type': 'ineq', 'fun': lambda x: g_max - (dist[0] / eff - g_0) - x[0]}, # g_1 < g_max - (dist[0] / eff - g_0) {'type': 'ineq', 'fun': lambda x: g_max - (g_0 + x[0] - (dist[0] - dist[1]) / eff) - x[1]}, # g_2 < g_max - (g_0 + g_1 - (dist[0] - dist[1]) / eff) ] for i in range(len(s)): cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i] - (sum(dist[:i+2]) / eff - sum([g_0] + x[:i]))}) # g_i > sum of dists from dist[0] to dist[i] / eff - sum of g from g_0 to g_(i-1) #cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - (sum([g_0] + x[:i]) - (sum(dist[:i+1]) / eff)) - x[i]}) # g_i < g_max - (sum of g from g_0 to g_(i-1) - sum of dists from dist[0] to dist[i] / eff) # General constraints to ensure each g is between 0 and g_max cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i]}) # g_i > 0 cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - x[i]}) # g_i < g_max g1_lower_bound = max(0, (dist[0] + dist[1]) / eff - g_0) g1_upper_bound = min(g_max, g_max - (dist[0] / eff - g_0)) # Initial guess for the variables x0 = [g1_lower_bound, max(0, ((dist[0] + dist[1] + dist[2]) / eff - g1_lower_bound - g_0) + 1)] solution = minimize(objective, x0, method='SLSQP', bounds=[(0, g_max) for _ in range(len(s))], constraints=cons) g_values = list(map(round, solution.x)) return g_values, round(solution.fun, 2) g_0 = 80 s = [4.5, 3] g_max = 135 eff = 5 dist = [400, 500, 600] optimal_g_values, minimum_value = optimize(g_0, 
s, g_max, eff, dist) print(f"Optimal values: g = {optimal_g_values}") print(f"Minimum value of the objective function: {minimum_value}") It gives the correct values, but wrong minimal value of objective function: Optimal values: g = [100, 120] Minimum value of the objective function: 810.65 At this point my best guess is that either I have written wrong lambda function in second function or that something unexpected happening in loop
I see two mistakes. One is a mistake in the way you originally specified the two-variable constraint. The second is a mistake in the way the N-variable constraint is specified. I also see some opportunities for general improvements. Let's start with the mistake in the original specification: {'type': 'ineq', 'fun': lambda x: g_max - (g_0 + x[0] - (dist[0] - dist[1]) / eff) - x[1]}, # g_2 < g_max - (g_0 + g_1 - (dist[0] - dist[1]) / eff) Based on the inequality you are trying to implement, the expression (dist[0] - dist[1]) ought to be (dist[0] + dist[1]). Second, I see a mistake relating to the way that NumPy implements +, in the following two lines of code: cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i] - (sum(dist[:i+2]) / eff - sum([g_0] + x[:i]))}) # g_i > sum of dists from dist[0] to dist[i] / eff - sum of g from g_0 to g_(i-1) cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - (sum([g_0] + x[:i]) - (sum(dist[:i+1]) / eff)) - x[i]}) # g_i < g_max - (sum of g from g_0 to g_(i-1) - sum of dists from dist[0] to dist[i] / eff) To explain why, let me start with some simple examples. If you use +, for Python lists this means "concatenate," or put the second list at the end of the first list. For example: >>> [1] + [2, 3] [1, 2, 3] >>> [1] + [] [1] >>> [1, 2] + [3, 4] [1, 2, 3, 4] However, in NumPy, + means add. If either of the operands to + is a NumPy array, then the two arrays are added together. For example: >>> np.array([1]) + [2, 3] array([3, 4]) >>> np.array([1]) + [] array([], dtype=float64) >>> np.array([1, 2]) + [3, 4] array([4, 6]) These are very different results. The effect of that is that the expression sum([g_0] + x[:i]) does different things depending on whether x is a list or array. When SciPy is optimizing your function, x will always be a NumPy array. For that reason, I suggest that you replace sum([g_0] + x[:i]) with (g_0 + sum(x[:i])), which provides the same results for both lists and arrays. Also, these constraints are redundant: cons.append({'type': 'ineq', 'fun': lambda x, i=i: x[i]}) # g_i > 0 cons.append({'type': 'ineq', 'fun': lambda x, i=i: g_max - x[i]}) # g_i < g_max These do the same thing as your bounds, and bounds are more efficient than constraints. I would remove these. Also, given that all of your constraints are a linear function of x and some constants, you might find that scipy.optimize.LinearConstraint is more appropriate. Your constraints (except the redundant ones) are equivalent to the following LinearConstraint: A = np.zeros((len(s), len(s))) cons_lb = np.zeros(len(s)) cons_ub = np.zeros(len(s)) for i in range(len(s)): A[i, :i + 1] = 1 cons_lb[i] = sum(dist[:i+2]) / eff - g_0 cons_ub[i] = g_max - g_0 + (sum(dist[:i+1]) / eff) cons = LinearConstraint(A, cons_lb, cons_ub) This gives a number of benefits. (SciPy does not need numeric differentiation with LinearConstraint, and evaluating a matrix multiply is much faster than evaluating a number of Python functions.) Speaking of differentiation, you can also speed this up by providing a jacobian, and using NumPy to calculate your objective function. For 2 variables this does not matter much, but for N variables, numeric differentiation takes N calls to your objective function, so it's best to avoid it for large N. 
Here is the final code after improving it: import numpy as np from scipy.optimize import minimize, LinearConstraint def optimize(g_0, s, g_max, eff, dist): s = np.array(s) objective = lambda x: np.sum(s * x) jac = lambda x: s A = np.zeros((len(s), len(s))) cons_lb = np.zeros(len(s)) cons_ub = np.zeros(len(s)) for i in range(len(s)): A[i, :i + 1] = 1 cons_lb[i] = sum(dist[:i+2]) / eff - g_0 cons_ub[i] = g_max - g_0 + (sum(dist[:i+1]) / eff) cons = LinearConstraint(A, cons_lb, cons_ub) g1_lower_bound = max(0, (dist[0] + dist[1]) / eff - g_0) g1_upper_bound = min(g_max, g_max - (dist[0] / eff - g_0)) # Initial guess for the variables x0 = [g1_lower_bound, max(0, ((dist[0] + dist[1] + dist[2]) / eff - g1_lower_bound - g_0) + 1)] solution = minimize( objective, x0, jac=jac, method='SLSQP', bounds=[(0, g_max) for _ in range(len(s))], constraints=cons ) g_values = list(map(round, solution.x)) return g_values, round(solution.fun, 2) This is about 3x faster, and fixes the bug with the expression sum([g_0] + x[:i]).
2
1
78,680,128
2024-6-27
https://stackoverflow.com/questions/78680128/redis-om-python-custom-primary-key
I've been trying to create a custom PK based on fields in the model. https://redis.io/learn/develop/python/redis-om "The default ID generation function creates ULIDs, though you can change the function that generates the primary key for models if you'd like to use a different kind of primary key." I would like to customise the generation of primary keys based on some of the fields of the model (instance). I noticed that if I create a field called "pk", redis-om does take this value as the primary key. But is there a way I can just declaratively assign fields as primary keys?
You can add Field(primary_key=True) to any of the attributes in your model. Here is the code provided in their example with the default pk: import datetime from typing import Optional from redis_om import HashModel class Customer(HashModel): first_name: str last_name: str email: str join_date: datetime.date age: int bio: Optional[str] = "Super dope" andrew = Customer( first_name="Andrew", last_name="Brookins", email="[email protected]", join_date=datetime.date.today(), age=38) print(andrew.pk) # > '01FJM6PH661HCNNRC884H6K30C' andrew.save() assert Customer.get(andrew.custom_pk) == andrew To assign your own primary key, you would update the code as follows: import datetime from typing import Optional from redis_om import HashModel, Field class Customer(HashModel): custom_pk:str = Field(primary_key=True) first_name: str last_name: str email: str join_date: datetime.date age: int bio: Optional[str] = "Super dope" andrew = Customer( custom_pk = "customPkValue", first_name="Andrew", last_name="Brookins", email="[email protected]", join_date=datetime.date.today(), age=38) andrew.save() assert Customer.get(andrew.custom_pk) == andrew
3
1
78,696,011
2024-7-2
https://stackoverflow.com/questions/78696011/how-can-i-set-all-form-fields-readonly-in-odoo-16-depending-on-a-field
In Odoo 16, I'm trying to make all fields from a form view readonly depending on the value of other field of the same form. First I've tried the following: <xpath expr="//field" position="attributes"> <attribute name="attrs">{'readonly': [('my_field', '=', True)]}</attribute> </xpath> With no result. I can't use <form edit="false"> neither since I have to check the field value. A rule with <field name="perm_write">1</field> works, but it doesn't behave as I need, since it allows you to modify the whole form until you click on Save and get the permission error. And overwriting get_view is not a valid option, since cannot depend on my_field value. The only solution I can find is to modify each field of the form with xpath, which is pretty disturbing, and is not consistent if in the future more fields are added to the form view via other apps. Does anyone have a better solution for this?
Extending get_view actually is a good idea. There is a module in server-ux repo of the OCA where something similar is done: when something is saved in a one2many field, every field in the form view will be set to readonly. To do this, the readonly modifier is rewritten for each field. The module: base_tier_validation The interesting part of code: get_view Code for your case: @api.model def get_view(self, view_id=None, view_type="form", **options): res = super().get_view( view_id=view_id, view_type=view_type, **options, ) if view_type == "form": doc = etree.XML(res["arch"]) for field in doc.xpath("//field[@name][not(ancestor::field)]"): modifiers = json.loads( field.attrib.get("modifiers", '{"readonly": false}') ) if modifiers.get("readonly") is not True: modifiers["readonly"] = OR( [ modifiers.get( "readonly", [] ) or [], [("my_field", "=", True)], ] ) field.attrib["modifiers"] = json.dumps(modifiers) res["arch"] = etree.tostring(doc, pretty_print=True) return res
2
2
78,699,450
2024-7-2
https://stackoverflow.com/questions/78699450/what-is-the-fastest-way-to-calculate-a-daily-balance-with-compound-interest-in-p
I have a DataFrame (DF) with deposits and withdrawals aggregated by day, and I want to know what is the fastest way to calculate the balance for each day. Because it must be able to scale. Answers in both Pandas and Spark are welcome! Here is an example of how the input DF looks like: Input date deposit withdrawal 2024-01-01 100.00 0.00 2024-01-02 0.00 0.00 2024-01-03 50.00 30.00 2024-01-04 0.00 0.00 2024-01-05 0.00 200.00 2024-01-06 20.00 0.00 2024-01-07 20.00 0.00 2024-01-08 0.00 0.00 These deposits and withdrawals are from an investment account that yields 10% per day. Unless the balance is negative. In this case, the daily return must be zero. The pseudo-code calculations to get the daily_return and balance columns are: Movements = Previous day balance + Deposit - Withdrawal Interest = 0.1 if Movements > 0 else 0 Daily return = Movements * Interest Balance = Movements + Daily return And below is an example of the desired output DF: Desired output date deposit withdrawal daily_return balance 2024-01-01 100.00 0.00 10.00 110.00 2024-01-02 0.00 0.00 11.00 121.00 2024-01-03 50.00 30.00 14.10 155.10 2024-01-04 0.00 0.00 15.51 170.61 2024-01-05 0.00 200.00 0.00 -29.39 2024-01-06 20.00 0.00 0.00 -9.39 2024-01-07 20.00 0.00 1.06 11.67 2024-01-08 0.00 0.00 1.17 12.84 What I have I have a solution in Pandas that achieves the desired output, however it iterates over every line of the DF, i.e. it's slow. Is there a way to vectorize this calculation to speed it up? Or maybe another approach? Here is my implementation: import pandas as pd df = pd.DataFrame({ "date": pd.date_range(start="2024-01-01", end="2024-01-08"), "deposit": [100.0, 0.0, 50.0, 0.0, 0.0, 20.0, 20.0, 0.0], "withdrawal": [0.0, 0.0, 30.0, 0.0, 200.0, 0.0, 0.0, 0.0] }) daily_returns = [] balances = [] prev_balance = 0 for _, row in df.iterrows(): movements = prev_balance + row["deposit"] - row["withdrawal"] interest = 0.1 if movements > 0 else 0 daily_return = movements * interest balance = movements + daily_return daily_returns.append(daily_return) balances.append(balance) prev_balance = balance df["daily_return"] = daily_returns df["balance"] = balances
For this type of computations I'd use numba, e.g.: from numba import njit @njit def calculate(deposits, withdrawals, out_daily_return, out_balance): prev_balance = 0 for i, (deposit, withdrawal) in enumerate(zip(deposits, withdrawals)): movements = prev_balance + deposit - withdrawal interest = 0.1 if movements > 0 else 0 daily_return = movements * interest balance = movements + daily_return out_daily_return[i] = daily_return out_balance[i] = balance prev_balance = balance df["daily_return"] = 0.0 df["balance"] = 0.0 calculate( df["deposit"].values, df["withdrawal"].values, df["daily_return"].values, df["balance"].values, ) print(df) Prints: date deposit withdrawal daily_return balance 0 2024-01-01 100.0 0.0 10.0000 110.0000 1 2024-01-02 0.0 0.0 11.0000 121.0000 2 2024-01-03 50.0 30.0 14.1000 155.1000 3 2024-01-04 0.0 0.0 15.5100 170.6100 4 2024-01-05 0.0 200.0 -0.0000 -29.3900 5 2024-01-06 20.0 0.0 -0.0000 -9.3900 6 2024-01-07 20.0 0.0 1.0610 11.6710 7 2024-01-08 0.0 0.0 1.1671 12.8381 Quick benchmark: from time import monotonic df = pd.DataFrame( { "date": pd.date_range(start="2024-01-01", end="2024-01-08"), "deposit": [100.0, 0.0, 50.0, 0.0, 0.0, 20.0, 20.0, 0.0], "withdrawal": [0.0, 0.0, 30.0, 0.0, 200.0, 0.0, 0.0, 0.0], } ) df = pd.concat([df] * 1_000_000) print(f"{len(df)=}") start_time = monotonic() df["daily_return"] = 0.0 df["balance"] = 0.0 calculate( df["deposit"].values, df["withdrawal"].values, df["daily_return"].values, df["balance"].values, ) print("Time =", monotonic() - start_time) Prints on my AMD 5700x: len(df)=8000000 Time = 0.11215395800536498
3
4
78,699,293
2024-7-2
https://stackoverflow.com/questions/78699293/efficiently-reparsing-string-series-in-a-dataframe-into-a-struct-recasting-th
Consider the following toy example: import polars as pl xs = pl.DataFrame( [ pl.Series( "date", ["2024 Jan", "2024 Feb", "2024 Jan", "2024 Jan"], dtype=pl.String, ) ] ) ys = ( xs.with_columns( pl.col("date").str.split(" ").list.to_struct(fields=["year", "month"]), ) .with_columns( pl.col("date").struct.with_fields(pl.field("year").cast(pl.Int16())) ) .unnest("date") ) ys shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ year ┆ month β”‚ β”‚ --- ┆ --- β”‚ β”‚ i16 ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════║ β”‚ 2024 ┆ Jan β”‚ β”‚ 2024 ┆ Feb β”‚ β”‚ 2024 ┆ Jan β”‚ β”‚ 2024 ┆ Jan β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ I think it would be more efficient to do the operations on a unique series of date data (I could use replace, but I have opted for join for no good reason): unique_dates = ( pl.DataFrame([xs["date"].unique()]) .with_columns( pl.col("date") .str.split(" ") .list.to_struct(fields=["year", "month"]) .alias("struct_date") ) .with_columns( pl.col("struct_date").struct.with_fields( pl.field("year").cast(pl.Int16()) ) ) ) unique_dates shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ struct_date β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ struct[2] β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════════════║ β”‚ 2024 Jan ┆ {2024,"Jan"} β”‚ β”‚ 2024 Feb ┆ {2024,"Feb"} β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ zs = ( xs.join(unique_dates, on="date", left_on="date", right_on="struct_date") .drop("date") .rename({"struct_date": "date"}) .unnest("date") ) zs shape: (4, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ year ┆ month β”‚ β”‚ --- ┆ --- β”‚ β”‚ i16 ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ═══════║ β”‚ 2024 ┆ Jan β”‚ β”‚ 2024 ┆ Feb β”‚ β”‚ 2024 ┆ Jan β”‚ β”‚ 2024 ┆ Jan β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ What can I do to improve the efficiency of this operation even further? Am I using polars idiomatically enough?
.str.splitn() should be more efficient as it avoids the List creation + .list.to_struct(). .struct.field() can also be used to "unnest" the fields directly. xs.select( pl.col.date.str.splitn(" ", 2) .struct.rename_fields(["year", "month"]) .struct.with_fields(pl.field("year").cast(pl.Int16)) .struct.field("year", "month") ) shape: (4, 2) ┌──────┬───────┐ │ year ┆ month │ │ --- ┆ --- │ │ i16 ┆ str │ ╞══════╪═══════╡ │ 2024 ┆ Jan │ │ 2024 ┆ Feb │ │ 2024 ┆ Jan │ │ 2024 ┆ Jan │ └──────┴───────┘
3
1
78,698,840
2024-7-2
https://stackoverflow.com/questions/78698840/get-the-dpi-for-a-pyplot-figure
At least in Spyder, the PyPlot plots can be low resolution, e.g., from: import numpy as np import matplotlib.pyplot as plt # import seaborn as sns rng = np.random.default_rng(1) scat = plt.scatter( *rng.integers( 10, size=(2,10) ) ) My web surfing has brought me to a suggested solution: Increase the dots-per-inch. plt.rcParams['figure.dpi']=300 Surfing indicates that the default is 100. After much experimentation, is there a way to "get" the DPI from the plotted object, e.g., scat above? There doesn't seem to be a scat.dpi, scat.getdpi, or scat.get_dpi. Afternote: Thanks to BigBen for pointing out the object-oriented interface. It requires the definition of a figure and axes therein before actually plotting the data. His code patterns seem to return a DPI, but in Spyder, the displayed figure isn't updated with the plotted data. Web surfing yields indications that "inline" plots in Spyder are static. I'm wasn't entirely sure if this was the cause, since the plots in the "Plots" window aren't inline (top half of above image), but they also fail to show the plotted data. Eventually, I found that setting the following did allow BigBen's solution to work: Tools -> Preferences -> IPython console -> Graphics -> Graphics backend -> Automatic. A separate window opens for the figure and axes when it is defined by the source code, and it is updated with plot of the data when scatter is invoked.
There doesn't seem to be a scat.dpi, scat.getdpi, or scat.get_dpi. This makes sense, because scatter returns a PathCollection. The relevant property and method are Figure.dpi or Figure.get_dpi. Highly suggest you use the object-oriented interface: fig, ax = plt.subplots() ax.scatter(*rng.integers(10, size=(2,10))) print(fig.dpi) # returns 100.0 Using the pyplot interface: plt.gcf().dpi or plt.gcf().get_dpi()
2
1
78,698,110
2024-7-2
https://stackoverflow.com/questions/78698110/is-s-rfind-in-python-implemented-using-iterations-in-backward-direction
Does rfind iterates over a string from end to start? I read the docs https://docs.python.org/3.12/library/stdtypes.html#str.rfind and see str.rfind(sub[, start[, end]]) Return the highest index in the string where substring sub is found, such that sub is contained within s[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. And the doc says not much about the implementation. Maybe there are some implementation notes somewhere else in the docs. I have tried to look up the source code using my IDE (Visual Code) and it showed me something pretty much like an interface stub for some hidden native (C/C++) code. def rfind(self, sub: str, start: SupportsIndex | None = ..., end: SupportsIndex | None = ..., /) -> int: ... Then I have tried to find the appropriate source code on Github in Python repositories but it turned out not so easy. I am a newbie in Python. So while it may be obvious for everyone around how to simply look up the source code needed to find the answer it is not straightforward for me.
The sources are easier to navigate if you're familiar with some history of Python. Specifically, the type str was historically called unicode in CPython and is still called unicode in the C sources. So, for string methods: headers are found in Include/unicodeobject.h implementation can be found in Objects/unicodeobject.c Python str.rfind will eventually get to C unicode_rfind_impl. In the main branch you can find auto-generated declarations at Objects/clinic/unicodeobject.c.h#L1074-L1116 and impl at Objects/unicodeobject.c#L12721-L12731.. Note: These declarations generated Argument Clinic are a relatively recent development from gh-117431: Adapt str.find and friends to use the METH_FASTCALL calling convention (Apr 2024), and they are not used in any officially released version yet. For current 3.12.4 you should look in unicodeobject.c directly. You'll note that unicode_rfind_impl calls any_find_slice, passing in the direction -1. This direction <= 0 is used to select a specific implementation depending on the width of the underlying unicode: asciilib_rfind_slice (for both strings ASCII) ucs1lib_rfind_slice ucs2lib_rfind_slice ucs4lib_rfind_slice These calls end up in stringlib routines (stringlib/find.h:rfind_slice -> stringlib/find.h:rfind -> stringlib/fastsearch.h:FASTSEARCH). Then, for the special case of 1-char substrings, we continue to stringlib/fastsearch.h:rfind_char and eventually end up here where CPython seems to use either memrchr or reverse iteration, depending on the glibc version. For longer substrings we go to stringlib/fastsearch.h:default_rfind, implemented here, which looks like some sort of Boyer-Moore algo with a bloom filter. An old effbot page describes an earlier version of this code as a "simplication (sic) of Boyer-Moore, incorporating ideas from Horspool and Sunday ... (with) ... a simple Bloom filter", but the implementation detail may have shifted somewhat since then (2006). Finally, you can use stdlib timeit interactively to emprically verify that str.rfind does not meaningfully slow down when dealing with longer strings. Taken by itself, this does not guarantee there is a reverse iteration, but it's certainly evidence that the implementation isn't just a naive iteration from the start. >>> s1 = 'x' * 1000 + 'y' >>> s2 = 'x' * 1_000_000 + 'y' >>> timeit s1.rfind('y') 68.7 ns Β± 0.183 ns per loop (mean Β± std. dev. of 7 runs, 10,000,000 loops each) >>> timeit s2.rfind('y') 68.9 ns Β± 0.00835 ns per loop (mean Β± std. dev. of 7 runs, 10,000,000 loops each) Compare with putting the 'y' at the start, where we go from nanos to micros: >>> s1 = 'y' + ('x' * 1000) >>> s2 = 'y' + ('x' * 1_000_000) >>> timeit s1.rfind('y') 73.6 ns Β± 0.00866 ns per loop (mean Β± std. dev. of 7 runs, 10,000,000 loops each) >>> timeit s2.rfind('y') 22.3 ΞΌs Β± 10.8 ns per loop (mean Β± std. dev. of 7 runs, 10,000 loops each)
4
6
78,697,829
2024-7-2
https://stackoverflow.com/questions/78697829/folium-plugins-featuregroupsubgroup-how-to-remove-the-name-of-the-tiles-from-th
I'm building a map with Folium. I used plugins.FeatureGroupSubGroup in order to create four subgroup so that one can filter the markers. As you can see in the picture below, at the top of the white box there is the name of the tiles I'm using (Cartodb dark_matter). Is there any chance to have the box without that writing? If so, how can I remove it? I tried to search for a solution on StackOverflow and in Folium documentation but couldn't find an answer.
One option would be to add the TileLayer manually and turn-off its control : import folium m = folium.Map(location=[0, 0], zoom_start=6, tiles=None) t = folium.TileLayer(tiles="cartodbdark_matter", control=False).add_to(m) # irrelevant / just for reproducibility from folium.plugins import FeatureGroupSubGroup, MarkerCluster mcg = MarkerCluster(control=False).add_to(m) coordinates = [[-1, -1], [-1, 1], [1, 1], [1, -1]] groups = ["A", "B", "C", "D"] for coo, grp in zip(coordinates, groups): sg = FeatureGroupSubGroup(mcg, grp).add_to(m) _ = folium.Marker(coo).add_to(sg) folium.LayerControl(collapsed=False).add_to(m) Output (m) :
2
2
78,696,575
2024-7-2
https://stackoverflow.com/questions/78696575/error-failed-to-build-installable-wheels-for-some-pyproject-toml-based-projects
I am trying to install Pyrebase to my NewLoginApp Project using PyCharm IDE and Python. I checked and upgraded the version of the software and I selected the project as my interpreter, but I still get this error: ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (pycryptodome) Below is the screenshot of the error that I am getting: Below is the whole code I wrote to the terminal to fix the problem:
I think you should install Pyrebase4: pip install Pyrebase4 or pip3 install Pyrebase4 https://pypi.org/project/Pyrebase4/ A simple Python wrapper for the Firebase API with current deps. This is the more recent package, last released on Apr 30, 2024. The old one was: pip install Pyrebase It was last released on Jan 7, 2017, so it will not be supported by new Python versions. https://pypi.org/project/Pyrebase/
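A minimal sketch (an illustrative addition; all config values are placeholders) to confirm the install works once Pyrebase4 is in place; the package is still imported under the module name pyrebase:

import pyrebase  # the Pyrebase4 package installs the "pyrebase" module

# Placeholder config: substitute your own Firebase project settings.
config = {
    "apiKey": "your-api-key",
    "authDomain": "your-app.firebaseapp.com",
    "databaseURL": "https://your-app.firebaseio.com",
    "storageBucket": "your-app.appspot.com",
}

firebase = pyrebase.initialize_app(config)
auth = firebase.auth()  # e.g. auth.sign_in_with_email_and_password(email, password)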
7
4
78,692,458
2024-7-1
https://stackoverflow.com/questions/78692458/extract-the-closest-two-numbers-that-multiply-to-create-a-given-number
I made a CPU based raytracer in PyGame which uses a tile per thread to render each section of the screen. Currently I divide the screen vertically between lines, this doesn't give me the best distribution: I want to divide threads in even boxes covering both the X and Y directions. For example: If my resolution is x = 640, y = 320 and I have 4 threads, I want a list of 4 boxes representing tile boundaries in the form (x_min, y_min, x_max, y_max), in this case the result being [(0, 0, 320, 160), (320, 0, 640, 160), (0, 160, 320, 320), (320, 160, 640, 320)]. Problem is I don't see how to automatically divide the number of threads into a 2D grid: I want to extract the closest two whole numbers that multiply to match the thread setting. If this number can't be divided evenly, jump to the closest one that can... for instance no two integers can multiply to create 7, use 6 or 8 instead. I tried math.sqrt but it only works for perfectly divisible numbers like 16, even when rounding that it won't give accurate results for values like 32. What is the simplest solution? Examples: 4 = 2 * 2, 6 = 2 * 3, 8 = 2 * 4, 9 = 3 * 3, 16 = 4 * 4, 24 = 4 * 6, 32 = 4 * 8, 64 = 8 * 8.
If I understand you correctly, you are looking for the following: Given a number x > 0, find the pair of factors (z, h) where the absolute difference of z and h is at a minimum. x is the number of available threads and (z, h) will then be the number of tilings you will have (vertically, horizontally). If the number of pixels doesn't divide evenly into your number of tilings, you can add the remainder to the last tiling. The most efficient algorithm that I can think of is as follows: start with x, the number we want to factor. calculate sqrt(x) and round it down to the nearest integer. Call this s. the smaller factor we're looking for will be the largest divisor of x that's less than or equal to s. In Python code it'll look something like this: import math def closest_factors(x): s = int(math.sqrt(x)) for z in range(s, 0, -1): if x % z == 0: return z, x // z x = 60 factor1, factor2 = closest_factors(x) print(f"The factors of {x} with the least difference are: {factor1} and {factor2}") print(f"Their difference is: {abs(factor1 - factor2)}") There may be a mathematical formula for this that speeds it up, but I'm not familiar with it. Updated according to @MirceaKitsune's suggestion in the comments.
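Building on the closest_factors() helper above, a short sketch (an illustrative addition, not from the original answer) turns the factor pair into the (x_min, y_min, x_max, y_max) boxes the question asks for. Assigning the smaller factor to rows is an assumption that suits a screen wider than it is tall, and any remainder pixels are folded into the last row and column.

def make_tiles(width, height, threads):
    # Split the screen into a rows x cols grid of (x_min, y_min, x_max, y_max) boxes.
    rows, cols = closest_factors(threads)  # closest_factors returns the smaller factor first
    tile_w, tile_h = width // cols, height // rows
    boxes = []
    for r in range(rows):
        for c in range(cols):
            x_max = width if c == cols - 1 else (c + 1) * tile_w    # last column absorbs remainder
            y_max = height if r == rows - 1 else (r + 1) * tile_h   # last row absorbs remainder
            boxes.append((c * tile_w, r * tile_h, x_max, y_max))
    return boxes

print(make_tiles(640, 320, 4))
# [(0, 0, 320, 160), (320, 0, 640, 160), (0, 160, 320, 320), (320, 160, 640, 320)]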
2
1
78,695,836
2024-7-2
https://stackoverflow.com/questions/78695836/in-a-gradio-tab-gui-a-button-calls-the-other-tab
I have the following script import gradio as gr # Define the function for the first tab def greet(text): return f"Hello {text}" # Define the function for the second tab def farewell(text): return f"Goodbye {text}" # Create the interface for the first tab with gr.Blocks() as greet_interface: input_text = gr.Textbox(label="Input Text") output_text = gr.Textbox(label="Output Text") button = gr.Button("Submit") button.click(greet, inputs=input_text, outputs=output_text) # Create the interface for the second tab with gr.Blocks() as farewell_interface: input_text = gr.Textbox(label="Input Text") output_text = gr.Textbox(label="Output Text") button = gr.Button("Submit") button.click(farewell, inputs=input_text, outputs=output_text) # Combine the interfaces into tabs with gr.Blocks() as demo: with gr.Tabs(): with gr.TabItem("Greet"): greet_interface.render() with gr.TabItem("Farewell"): farewell_interface.render() # Launch the interface # demo.launch() demo.launch(server_name="0.0.0.0", server_port=7861) I am scratching my head because this works in one environment I have, and yet in the other it fails terribly. How it fails: In the second tab (farewell) when the button is pressed, it actually calls the greet function. farewell is never called. I can see that some processing is being done in the output_text of the second tab but it never is completed. Instead the output text of the first tab is filled I can not comprehend why this is happening. The only difference I have is that of the environments The environment where it works: Python 3.11.1 use venv gradio 4.37.2 The environment where it fails Python 3.9.16 use poetry gradio 4.32.2 Can someone help me with this strange occurrence? Is tabbed gradio buggy? Btw, I already tried using completely different variables per tab but that does not work
Yes, tabbed gradio was buggy. It was fixed in version 4.36.1. See the changelog: "Fixes TabbedInterface bug where only first interface events get triggered." This explains the difference you see between the two versions (4.32.2 < 4.36.1 < 4.37.2).
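A quick way to confirm which version an environment is actually running (a small illustrative addition):

import gradio as gr
print(gr.__version__)  # should be >= 4.36.1 for the TabbedInterface fix

# To upgrade the failing environment (shell command, not Python):
#   pip install --upgrade "gradio>=4.36.1"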
2
2
78,694,862
2024-7-2
https://stackoverflow.com/questions/78694862/problems-with-recursive-functions
I have recently challenged myself to write the Depth First Search algorithm for maze generation and am on the home stretch of completing it but I have been battling a specific error for most of the final half of the project. I use binary for notating the connections between two neighboring nodes on the tree (learn network theory if you haven't already, it's absolutely wonderful and a very relevant field to programming) which goes as follows: 0:no directions,1:left,2:right,4:up,8:down, and any of these added together with produce their directions, ie: 3:left-right, 12:up-down, 7:left-right-up... The following function is the primary function and theoretically works for any size 2d list (not considering Python cutting me off because of too many iterations >:^<). def DepthFirstSearch(map,inX,inY,priorPoses,iteration,seed,mapSize): if len(priorPoses) == mapSize: print(F"Finished in {iteration} iterations") print(map) return map x = inX y = inY mapHold = map history = priorPoses random.seed(seed + iteration) if CheckNeighbors(map, x, y) == []: CheckPriorPositions(map,priorPoses) print(F"Check prior positions, {CheckNeighbors(map,x,y)}") return DepthFirstSearch(mapHold,CheckPriorPositions(map,priorPoses)[0],CheckPriorPositions(map,priorPoses)[1], priorPoses,iteration+1,seed,mapSize) else: move = CheckNeighbors(map, x, y) move = random.choice(move) if move == 1: mapHold[inY][inX] += move x -= 1 mapHold[y][x] += 2 else: if move == 2: mapHold[inY][inX] += move x += 1 mapHold[y][x] += 1 else: if move == 4: mapHold[inY][inX] += move y += 1 mapHold[y][x] += 8 else: if move == 8: mapHold[inY][inX] += move y -= 1 mapHold[y][x] += 4 history.append([x,y]) return DepthFirstSearch(mapHold,x,y,priorPoses,iteration+1,seed,mapSize) The CheckNeighbors function works perfectly fine but the CheckPriorPositions function has been cause for concern but I can't find a problem, I'll include it anyway though. If you have any tips on it then please give a tip, I somewhat feel like I'm missing something that would completeley trivialize this CheckPriorPositions function. def CheckPriorPositions(map,priorPoses): posesToSearch = priorPoses posesToSearch.reverse() for poses in range(0,len(posesToSearch)): if CheckNeighbors(map,posesToSearch[poses][0],posesToSearch[poses][1]) != []: return posesToSearch[poses] The particular error I keep getting thrown is as follows: Traceback (most recent call last): File "C:\Users\Wyatt\Desktop\python prjects\DepthFirstSearchMazeGenProject\DepthFirstSearch.py", line 87, in <module> DepthFirstSearch(testMapD,0,0,testHistoryD,0,5,4) File "C:\Users\Wyatt\Desktop\python prjects\DepthFirstSearchMazeGenProject\DepthFirstSearch.py", line 71, in DepthFirstSearch return DepthFirstSearch(mapHold,x,y,priorPoses,iteration+1,seed,mapSize) File "C:\Users\Wyatt\Desktop\python prjects\DepthFirstSearchMazeGenProject\DepthFirstSearch.py", line 71, in DepthFirstSearch return DepthFirstSearch(mapHold,x,y,priorPoses,iteration+1,seed,mapSize) File "C:\Users\Wyatt\Desktop\python prjects\DepthFirstSearchMazeGenProject\DepthFirstSearch.py", line 71, in DepthFirstSearch return DepthFirstSearch(mapHold,x,y,priorPoses,iteration+1,seed,mapSize) File "C:\Users\Wyatt\Desktop\python prjects\DepthFirstSearchMazeGenProject\DepthFirstSearch.py", line 46, in DepthFirstSearch return DepthFirstSearch(mapHold,CheckPriorPositions(map,priorPoses)[0],CheckPriorPositions(map,priorPoses)[1], TypeError: 'NoneType' object is not subscriptable I don't really know where to start, but I do have some test data to give. 
The following scenarios are simplified versions of real scenarios meant to test the functions: testMapA = [[0,0],[0,0],[0,0]] testHistoryA = [] DepthFirstSearch(testMapA,0,0,testHistoryA,0,5,6) testMapB = [[4,0],[10,5],[2,9]] testHistoryB = [[0,0],[0,1],[1,1],[1,2],[0,2]] DepthFirstSearch(testMapB,0,2,testHistoryB,5,5,6) testMapC = [[4,0],[14,5],[8,8]] testHistoryC = [[0,0],[0,1],[0,2],[1,1],[1,2]] DepthFirstSearch(testMapC,1,2,testHistoryC,5,5,6) testMapD = [[0,0],[0,0]] testHistoryD = [] DepthFirstSearch(testMapD,0,0,testHistoryD,0,5,4) testMapE = [[0,0]] testHistoryE = [] DepthFirstSearch(testMapE,0,0,testHistoryE,0,5,2)
There are several issues: With a depth-first traversal you shouldn't recur deeper when backtracking. Backtracking involves getting out of recursion. That also means you don't need an explicit stack like you have priorPoses, ...etc. The callstack serves for that purpose. For the reason in the previous point, you wouldn't need CheckPriorPositions, but it is also the cause of the runtime error: it can return None when the if condition is never true. And you have code that tries to take the first index of the returned value, so that raises an exception when it happens to be None. Your code suggests that you think that an assignment like history = priorPoses makes a copy of the list. This is not true. Both names will reference the same list, so that mutations you do to that list using history will be reflected in what you see with priorPoses and vice versa. For instance, the call of priorPoses.reverse() is such a mutation. I didn't find code that checks whether a cell was already visited. It looks like your approach would eventually "break all walls". You'd need to mark cells as already visited to avoid that you make multiple routes to the same cell. So I would suggest introducing a matrix with booleans to hold that "visited" information. Calling the maze map shadows the native Python function with the same name. You don't want to do that. Call it maze or grid or board... but not map. You can also reduce much of your code by avoiding repetition of similar code. The bit configuration allows for using bit operators, like ^ to get an opposite direction. Not a problem, but I would not use function names that start with a capital. That is commonly done for class names. I prefer snake_case for function names: import random LEFT = 1 RIGHT = 2 UP = 4 DOWN = 8 SIDES = ((0, -1), (0, 1), (-1, 0), (1, 0)) def get_moves_to_nonvisited(visited, y, x): height = len(visited) width = len(visited[0]) return [ (side, y + dy, x + dx) for side, (dy, dx) in enumerate(SIDES) if 0 <= y + dy < height and 0 <= x + dx < width and not visited[y + dy][x + dx] ] def depth_first_search(maze): row = [False] * len(maze[0]) visited = [row[:] for _ in maze] def recur(y, x): visited[y][x] = True while True: moves = get_moves_to_nonvisited(visited, y, x) if not moves: return # just backtrack! 
move, y2, x2 = random.choice(moves) maze[y][x] |= 1 << move # Formula to convert 0,1,2,3 to 1,2,4,8 maze[y2][x2] |= 1 << (move ^ 1) # Formula for opposite direction recur(y2, x2) recur(0, 0) I used this helper function to visualise the maze: CROSSES = " β•΄β•Άβ”€β•΅β”˜β””β”΄β•·β”β”Œβ”¬β”‚β”€β”œβ”Ό" def stringify(maze): row = [0] * (len(maze[0])*2+1) spots = [row[:] for _ in range(len(maze)*2+1)] for y, row in enumerate(maze): y *= 2 for x, cell in enumerate(row): x *= 2 for x2 in range(x, x+4, 2): if (cell & 1) == 0: spots[y ][x2] |= DOWN spots[y+1][x2] |= UP | DOWN spots[y+2][x2] |= UP cell >>= 1 for y2 in range(y, y+4, 2): if (cell & 1) == 0: spots[y2][x ] |= RIGHT spots[y2][x+1] |= RIGHT | LEFT spots[y2][x+2] |= LEFT cell >>= 1 return "\n".join( "".join(CROSSES[spot] * (1 + (x % 2)*2) for x, spot in enumerate(row)) for y, row in enumerate(spots) ) An example run: width = height = 8 maze = [[0] * width for _ in range(height)] depth_first_search(maze) print(stringify(maze)) ...outputs something like this: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ β”‚ β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β” └───┐ β•· ╢───┐ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β•· β”œβ”€β”€β”€β” └───┴───┐ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β•΅ β”œβ”€β”€β”€β”€β”€β”€β”€β•΄ β”‚ └──── β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€β”€β•΄ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β•΄ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ └──────── └───┐ ╢──────── β”‚ β”‚ β”‚ β”‚ β”‚ ╢───┐ └───┐ β”œβ”€β”€β”€β”€β”€β”€β”€β•΄ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€β”€β•΄ └───┐ β•΅ β•΅ β”Œβ”€β”€β”€β•΄ β”‚ β”‚ β”‚ β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Iterative solution Using recursion is elegant, but as here a recursive call is made for every visit, the recursion depth could get equal to the total number of cells. For a 100x100 maze this would be 10000, which will exceed the (default) maximum recursion depth that Python allows. To avoid that, you could work with an iterative solution that uses an explicit stack: def depth_first_search(maze): row = [False] * len(maze[0]) visited = [row[:] for _ in maze] stack = [(0, 0)] visited[0][0] = True while stack: y, x = stack[-1] moves = get_moves_to_nonvisited(visited, y, x) if not moves: stack.pop() else: move, y2, x2 = random.choice(moves) maze[y][x] |= 1 << move maze[y2][x2] |= 1 << (move ^ 1) stack.append((y2, x2)) visited[y2][x2] = True
3
4
78,695,350
2024-7-2
https://stackoverflow.com/questions/78695350/pyparsing-back-to-basics
In attempting to put together a very simple example to illustrate an issue I'm having with pyparsing, I have found that I can't get my simple example to work - this example is barely more complex than Hello, World! Here is my example code: import pyparsing as pp import textwrap as tw text = ( """ A) Red B) Green C) Blue """ ) a = pp.AtLineStart("A)") + pp.Word(pp.alphas) b = pp.AtLineStart("B)") + pp.Word(pp.alphas) c = pp.AtLineStart("C)") + pp.Word(pp.alphas) grammar = a + b + c grammar.run_tests(tw.dedent(text).strip()) I would expect it to return ["A)", "Red", "B)", "Green", "C)", "Blue"] but instead I get: A) Red A) Red ^ ParseException: not found at line start, found end of text (at char 6), (line:1, col:7) FAIL: not found at line start, found end of text (at char 6), (line:1, col:7) B) Green B) Green ^ ParseException: Expected 'A)', found 'B' (at char 0), (line:1, col:1) FAIL: Expected 'A)', found 'B' (at char 0), (line:1, col:1) C) Blue C) Blue ^ ParseException: Expected 'A)', found 'C' (at char 0), (line:1, col:1) FAIL: Expected 'A)', found 'C' (at char 0), (line:1, col:1) Why would it say it's found end of text after the first line??? Why is it expecting A) after the first line??? (Note: textwrap.dedent() and strip() have no impact on the results of this script.)
My dude! You forgot to wrap raw string into a list! I tested for 30 mins and felt something was odd, then found this in document: run_tests(tests: Union[str, List[str]], ...) -> ... Execute the parse expression on a series of test strings, showing each test, the parsed results or where the parse failed. Quick and easy way to run a parse expression against a list of sample strings. Parameters: tests - a list of separate test strings, or a multiline string of test strings Basically you (and me for last half hour) by doing this: grammar.run_tests(tw.dedent(text).strip()) ...Was telling it to treat each line as individual tests! """ # You are a test 0 now! A) Red # You are a test 1 B) Green # test 2 for you, C) Blue # test 3 for ya, """ # finally you're test 4! Mhahahah! (And of course the use of pp.line_end so line actually consume end of line) >>> import pyparsing as pp ... import textwrap as tw ... ... text = ( ... """ ... A) Red ... B) Green ... C) Blue ... """ ... ) ... ... a = pp.AtLineStart("A)") + pp.Word(pp.alphas) + pp.line_end.suppress() ... b = pp.AtLineStart("B)") + pp.Word(pp.alphas) + pp.line_end.suppress() ... c = pp.AtLineStart("C)") + pp.Word(pp.alphas) ... ... grammar = a + b + c ... grammar.run_tests([tw.dedent(text).strip()]) A) Red B) Green C) Blue ['A)', 'Red', 'B)', 'Green', 'C)', 'Blue'] >>>
2
2
78,678,526
2024-6-27
https://stackoverflow.com/questions/78678526/iterate-over-intflag-enumeration-using-iter-differs-in-python-3-8-and-3-12-4
I have the following working in Python 3.8.1: @unique class Encoder(IntFlag): # H1 sensor signal H1 = 0x00 # H2 sensor signal H2 = 0x01 # H3 sensor signal H3 = 0x02 Then I am trying to catch with an assert if the value is not within the enum, i.e. from enum import unique, IntFlag signal = Encoder.H1 assert signal in iter(Encoder), f"Valid Encoder line is integer 0 to 2 inclusive." I noticed that on Python 3.8 the expression signal in iter(Encoder) returns True, but it returns False in Python 3.12.4. It might be a change in some version between 3.8.1 and 3.12.4, but I am not sure where to start looking to get this working in both.
enum.Flag implements __contains__. For membership checks, drop the iter call and just use: signal in Encoder This will work in both 3.8.1 and 3.12.4. Note: The change in iteration behavior for flags happened in Python 3.11 and is mentioned in the changelog here. Also, you should probably be using IntEnum rather than IntFlag anyway. IntFlag would be appropriate in a use case where Encoder.H2 | Encoder.H3 was meaningful, and meant "both of these". In your use case, Encoder.H2 | Encoder.H3 is an invalid value.
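To make the suggestion concrete, here is a small, hedged sketch (the Encoder class mirrors the one in the question; the IntEnum variant and the is_valid_line helper are illustrative assumptions, not part of the original answer):
from enum import IntEnum, unique

@unique
class Encoder(IntEnum):  # IntEnum, since combining members with | is not meaningful here
    H1 = 0x00  # H1 sensor signal
    H2 = 0x01  # H2 sensor signal
    H3 = 0x02  # H3 sensor signal

signal = Encoder.H1
# Membership check without iter(); behaves the same on 3.8 and 3.12
assert signal in Encoder, "Valid Encoder line is integer 0 to 2 inclusive."

# Checking a raw integer in a version-independent way
def is_valid_line(value: int) -> bool:
    try:
        Encoder(value)  # raises ValueError if the value is not a member
    except ValueError:
        return False
    return True

assert is_valid_line(2)
assert not is_valid_line(7)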
2
3
78,690,568
2024-7-1
https://stackoverflow.com/questions/78690568/openpyxl-delete-rows-doesnt-completely-remove-rows-if-row-height-is-set
Any suggestions? If I ever set the row height in openpyxl, delete_rows will appear to remove the rows, but when I save and open in Excel, the rows are empty but not all row information is removed. The scroll bar still scrolls to where the last row was prior to deleting. So "ghosts" of the empty rows remain. Sample code: from openpyxl import Workbook # Create a new workbook and select the active sheet wb = Workbook() ws = wb.active # Write values into the cells for row in range(1, 101): for col in range(1, 6): cell = ws.cell(row=row, column=col) cell.value = f"Cell {row}-{col}" # Set row height for row in ws.iter_rows(): ws.row_dimensions[row[0].row].height = 14.4 # Unset row height for row in ws.iter_rows(): ws.row_dimensions[row[0].row].height = None # Delete the rows ws.delete_rows(35, 100-35) # Save the workbook wb.save('test.xlsx') If I comment out both row height loops, it works correctly. If either loop is included, remnants of the empty rows remain.
Yes, the delete clears the cells; however, since the row dimensions have been adjusted (by changing the height) they now exist in the Sheet profile. The row delete function does not remove that from the Sheet. The result is that the workbook retains the rows up to 100 defined in the Sheet, with those above row 35 having no cell details. You can fix this by deleting the row dimensions too. for x in range(36, 101): del ws.row_dimensions[x] Of course deleting the row dimension instead of setting to None would also effect the same change; # Unset row height for row in ws.iter_rows(): # ws.row_dimensions[row[0].row].height = None del ws.row_dimensions[row[0].row]
2
1
78,695,314
2024-7-2
https://stackoverflow.com/questions/78695314/weird-behavior-when-updating-the-values-using-iloc-in-pandas-dataframe
While copying a pandas dataframe, ideally, we should use .copy(), which is a deep copy by default. We could also achieve the same using new_df = pd.Dataframe(old_df), which is also intuitive (and common style across most programming languages since in principle, we are calling a copy constructor). In both cases, they have different memory identifers as shown below. But when I change the old_df using .iloc, it changes the value for the new_df. Is this the expected behaviour? I couldn't figure out this behaviour using the docs. Am I missing something trivial? # ! pip install smartprint from smartprint import smartprint as sprint import pandas as pd data = {'A': [1, 2, 3], 'B': [4, 5, 6]} original_df = pd.DataFrame(data) new_df = pd.DataFrame(original_df) original_df['A'] = original_df['A'].apply(lambda x: x * 1000) print ("============ Changing a column with pd.Dataframe(old_df); No change") sprint(original_df) sprint(new_df) sprint (id(new_df), id(original_df)) data = {'A': [1, 2, 3], 'B': [4, 5, 6]} original_df = pd.DataFrame(data) new_df = pd.DataFrame(original_df) original_df.iloc[0,:] = 20 print ("============ Using .iloc with pd.Dataframe(old_df); Change") sprint(original_df) sprint(new_df) sprint (id(new_df), id(original_df)) data = {'A': [1, 2, 3], 'B': [4, 5, 6]} original_df = pd.DataFrame(data) new_df = original_df.copy() original_df.iloc[0,:] = 20 print ("============ Using .iloc with old_df.copy(); No change") sprint(original_df) sprint(new_df) sprint (id(new_df), id(original_df)) Output: ============ Changing a column with pd.Dataframe(old_df); No change original_df : A B 0 1000 4 1 2000 5 2 3000 6 new_df : A B 0 1 4 1 2 5 2 3 6 id(new_df), id(original_df) : 140529132556544 140529514510944 ============ Using .iloc with pd.Dataframe(old_df); Change original_df : A B 0 20 20 1 2 5 2 3 6 new_df : A B 0 20 20 1 2 5 2 3 6 id(new_df), id(original_df) : 140528893052000 140528894065584 ============ Using .iloc with old_df.copy(); No change original_df : A B 0 20 20 1 2 5 2 3 6 new_df : A B 0 1 4 1 2 5 2 3 6 id(new_df), id(original_df) : 140529057828336 140528940223984 My python and pandas versions are listed below: import sys sys.version Out[16]: '3.8.17 (default, Jul 5 2023, 16:18:40) \n[Clang 14.0.6 ]' pd.__version__ Out[17]: '1.4.3'
This is the expected behavior of DataFrame, it is also happening when you're passing a numpy array as input: a = np.array([[1,2],[3,4]]) df = pd.DataFrame(a) a[0][0] = 9 print(df) 0 1 0 99 2 1 3 4 It is actually well described in the DataFrame documentation: copy: bool or None, default None Copy data from inputs. For dict data, the default of None behaves like copy=True. For DataFrame or 2d ndarray input, the default of None behaves like copy=False. If data is a dict containing one or more Series (possibly of different dtypes), copy=False will ensure that these inputs are not copied. a = np.array([[1,2],[3,4]]) df = pd.DataFrame(a, copy=True) a[0][0] = 99 print(df) 0 1 0 1 2 1 3 4 what about changing a full column? In this case you don't mutate the column but overwrite it with a new one. This means that the underlying objects are now different: a = np.array([[1,2],[3,4]]) df = pd.DataFrame(a) df[0] = 99 print(a) [[1 2] [3 4]]
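A minimal sketch of the fix for the DataFrame-input case from the question, assuming the goal is a fully independent copy (copy=True is the documented parameter quoted above):
import pandas as pd

data = {'A': [1, 2, 3], 'B': [4, 5, 6]}
original_df = pd.DataFrame(data)
new_df = pd.DataFrame(original_df, copy=True)  # force a copy instead of sharing the data

original_df.iloc[0, :] = 20
print(new_df)  # first row is still 1, 4: the copy is unaffected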
2
1
78,689,947
2024-6-30
https://stackoverflow.com/questions/78689947/how-to-restructure-instance-segmentation-predictions-into-a-custom-dictionary-fo
I'm performing instance segmentation using a model trained in RoboFlow, for the prediction result I'm getting: [InstanceSegmentationInferenceResponse(visualization=None, frame_id=None, time=None, image=InferenceResponseImage(width=720, height=1280), predictions=[ InstanceSegmentationPrediction(x=352.0, y=569.0, width=420.0, height=1066.0, confidence=0.9057266712188721, class_name='animal', class_confidence=None, points=[Point(x=244.125, y=36.0), Point(x=243.0, y=38.0), Point(x=241.875, y=38.0), Point(x=237.375, y=46.0), Point(x=236.25, y=46.0), Point(x=232.875, y=52.0), Point(x=232.875, y=54.0), Point(x=193.5, y=128.0), Point(x=354.375, y=42.0), Point(x=353.25, y=40.0), Point(x=351.0, y=40.0), Point(x=349.875, y=38.0), Point(x=347.625, y=38.0), Point(x=346.5, y=36.0)], class_id=0, detection_id='d5c78348-38e1-4281-aa68-9edcbf2cad9e', parent_id=None), InstanceSegmentationPrediction(x=367.5, y=536.0, width=43.0, height=38.0, confidence=0.8523976802825928, class_name='moeda', class_confidence=None, points=[Point(x=354.375, y=518.0), Point(x=353.25, y=520.0), Point(x=351.0, y=520.0), Point(x=352.125, y=550.0), Point(x=357.75, y=550.0), Point(x=360.0, y=554.0), Point(x=374.625, y=554.0), Point(x=375.75, y=552.0), Point(x=376.875, y=552.0),Point(x=381.375, y=518.0)], class_id=1, detection_id='c93327f3-afce-4038-932b-1fc623fcc949', parent_id=None)])] However, I need to reorganize my data so that it looks like this: {'predictions': [{'x': 352.0, 'y': 565.0, 'width': 420.0, 'height': 1060.0, 'confidence': 0.9052466154098511, 'class': 'animal', 'points': [{'x': 244.125, 'y': 36.0}, {'x': 243.0, 'y': 38.0}, {'x': 241.875, 'y': 38.0}, {'x': 239.625, 'y': 42.0}, {'x': 238.5, 'y': 42.0}, {'x': 232.875, 'y': 52.0}, {'x': 232.875, 'y': 54.0}, {'x': 229.5, 'y': 60.0}, {'x': 229.5, 'y': 62.0}, {'x': 209.25, }, {'x': 354.375, 'y': 42.0}, {'x': 353.25, 'y': 40.0}, {'x': 351.0, 'y': 40.0}, {'x': 349.875, 'y': 38.0}, {'x': 348.75, 'y': 38.0}, {'x': 347.625, 'y': 36.0}], 'class_id': 0, 'detection_id': '71c453d6-3654-4a53-a56c-4db6a012cfe8', 'image_path': '/content/17.jpeg', 'prediction_type': 'InstanceSegmentationModel'}, {'x': 368.0, 'y': 536.0, 'width': 44.0, 'height': 38.0, 'confidence': 0.8534654378890991, 'class': 'moeda', 'points': [{'x': 354.375, 'y': 518.0}, {'x': 353.25, 'y': 520.0}, {'x': 351.0, 'y': 520.0}, {'x': 348.75, 'y': 524.0}, {'x': 347.625, 'y': 524.0}, {'x': 346.5, 'y': 526.0}, {'x': 346.5, 'y': 542.0}, {'x': 348.75, 'y': 542.0}, {'x': 351.0, 'y': 546.0}, {'x': 351.0, {'x': 387.0, 'y': 530.0}, {'x': 385.875, 'y': 528.0}, {'x': 385.875, 'y': 526.0}, {'x': 384.75, 'y': 524.0}, {'x': 384.75, 'y': 522.0}, {'x': 383.625, 'y': 522.0}, {'x': 381.375, 'y': 518.0}], 'class_id': 1, 'detection_id': 'a37bfa9e-c169-4d4b-b46e-213ad9508d9b', 'image_path': '/content/17.jpeg', 'prediction_type': 'InstanceSegmentationModel'}], 'image': {'width': '720', 'height': '1280'}} With this, I can access the points by doing: if result['predictions']: points_1 = result['predictions'][1]['points'] points_o = result['predictions'][0]['points'] I need to access the points of the first data, I tried to change InstanceSegmentationInferenceResponse to the specific dictionary but I couldn't do it.
I'm assuming you're getting this as a response from the inference call on Roboflow and you want to use a more JSON-esque approach to things. I used the following (assuming you set your response to 'model_response'): json_data_string = model_response.model_dump_json() results = json.loads(json_data_string) Some really good documentation for how it's set up is here.
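A hedged sketch of how the parsed dictionary could then be used, assuming model_response is a single InstanceSegmentationInferenceResponse (index into the list first, e.g. responses[0], if the SDK handed back a list as in the question) and that the dumped JSON keeps the predictions/points field names shown above:
import json

json_data_string = model_response.model_dump_json()
result = json.loads(json_data_string)

if result["predictions"]:
    points_first = result["predictions"][0]["points"]  # expected to be a list of {"x": ..., "y": ...} dicts
    print(points_first[0]["x"], points_first[0]["y"])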
4
2
78,692,255
2024-7-1
https://stackoverflow.com/questions/78692255/merge-dataframe-based-on-substring-column-labeld-while-keep-the-original-columns
I have a dataframe having columns with the a label pattern (name/startDateTime/endDateTime) import pandas as pd pd.DataFrame({ "[RATE] BOJ presser/2024-03-19T07:30:00Z/2024-03-19T10:30:00Z": [1], "[RATE] BOJ/2024-01-23T04:00:00Z/2024-01-23T07:00:00Z": [2], "[RATE] BOJ/2024-03-19T04:00:00Z/2024-03-19T07:00:00Z": [3], "[RATE] BOJ/2024-04-26T03:00:00Z/2024-04-26T06:00:00Z": [4], "[RATE] BOJ/2024-04-26T03:00:00Z/2024-04-26T08:00:00Z": [5], "[RATE] BOJ/2024-06-14T03:00:00Z/2024-06-14T06:00:00Z": [6], "[RATE] BOJ/2024-06-14T03:00:00Z/2024-06-14T08:00:00Z": [7], "[RATE] BOJ/2024-07-31T03:00:00Z/2024-07-31T06:00:00Z": [8], "[RATE] BOJ/2024-07-31T03:00:00Z/2024-07-31T08:00:00Z": [9], "[RATE] BOJ/2024-09-20T03:00:00Z/2024-09-20T06:00:00Z": [10], "[RATE] BOJ/2024-09-20T03:00:00Z/2024-09-20T08:00:00Z": [11], "[RATE] BOJ/2024-10-31T04:00:00Z/2024-10-31T07:00:00Z": [12], "[RATE] BOJ/2024-10-31T04:00:00Z/2024-10-31T09:00:00Z": [13], "[RATE] BOJ/2024-12-19T04:00:00Z/2024-12-19T07:00:00Z": [14], "[RATE] BOJ/2024-12-19T04:00:00Z/2024-12-19T09:00:00Z": [15], }) I would like to merge the columns (summing its values) having same name and start date (without the time), the column name should be the orignal one (First to be used) This should give the following result pd.DataFrame({ "[RATE] BOJ presser/2024-03-19T07:30:00Z/2024-03-19T10:30:00Z": [1], "[RATE] BOJ/2024-01-23T04:00:00Z/2024-01-23T07:00:00Z": [2], "[RATE] BOJ/2024-03-19T04:00:00Z/2024-03-19T07:00:00Z": [3], "[RATE] BOJ/2024-04-26T03:00:00Z/2024-04-26T06:00:00Z": [9], "[RATE] BOJ/2024-06-14T03:00:00Z/2024-06-14T06:00:00Z": [13], "[RATE] BOJ/2024-07-31T03:00:00Z/2024-07-31T06:00:00Z": [17], "[RATE] BOJ/2024-09-20T03:00:00Z/2024-09-20T06:00:00Z": [21], ... }) In my example, every column has one raw, but in reality it has multiple based on datetime index
There are a couple of ways to do this, since you are operating along the columns you can do this in either a pure pandas or more typical Python approach. The whole idea is that you need to find the unique groups in your columns and perform aggregation within those groups. Pandas groupby(…) Pandas (version <= 2.1.0) supported grouped operations along the columns, however this functionality has been deprecated. So for a pure pandas approach you may need to transpose your DataFrame (which can be an expensive operation depending) and plan like so: import pandas as pd groupings = ( df.columns.str.extract(r'([^/]+)/(\d{4}-\d{2}-\d{2})') ) unique = groupings.assign(orig=df.columns).drop_duplicates([0, 1]) result = ( df.T .groupby([col.values for _, col in groupings.items()]).sum() .set_axis(unique['orig']).T ) print(result.T) # transpose just for viewing output # 0 # orig # [RATE] BOJ presser/2024-03-19T07:30:00Z/2024-03... 2 # [RATE] BOJ/2024-01-23T04:00:00Z/2024-01-23T07:0... 3 # [RATE] BOJ/2024-03-19T04:00:00Z/2024-03-19T07:0... 9 # [RATE] BOJ/2024-04-26T03:00:00Z/2024-04-26T06:0... 13 # [RATE] BOJ/2024-06-14T03:00:00Z/2024-06-14T06:0... 17 # [RATE] BOJ/2024-07-31T03:00:00Z/2024-07-31T06:0... 21 # [RATE] BOJ/2024-09-20T03:00:00Z/2024-09-20T06:0... 25 # [RATE] BOJ/2024-10-31T04:00:00Z/2024-10-31T07:0... 29 # [RATE] BOJ/2024-12-19T04:00:00Z/2024-12-19T07:0... 1 Pandas groupby(…, axis=1) If you have an older version of pandas you can use .groupby(…, axis=1) this should free you of a possibly expensive transpose. import pandas as pd groupings = ( df.columns.str.extract(r'([^/]+)/(\d{4}-\d{2}-\d{2})') ) unique = groupings.assign(orig=df.columns).drop_duplicates([0, 1]) result = ( df.groupby([col.values for _, col in groupings.items()], axis=1).sum() .set_axis(unique['orig'], axis=1) ) print(result.T) # transpose just for viewing output # 0 # orig # [RATE] BOJ presser/2024-03-19T07:30:00Z/2024-03... 2 # [RATE] BOJ/2024-01-23T04:00:00Z/2024-01-23T07:0... 3 # [RATE] BOJ/2024-03-19T04:00:00Z/2024-03-19T07:0... 9 # [RATE] BOJ/2024-04-26T03:00:00Z/2024-04-26T06:0... 13 # [RATE] BOJ/2024-06-14T03:00:00Z/2024-06-14T06:0... 17 # [RATE] BOJ/2024-07-31T03:00:00Z/2024-07-31T06:0... 21 # [RATE] BOJ/2024-09-20T03:00:00Z/2024-09-20T06:0... 25 # [RATE] BOJ/2024-10-31T04:00:00Z/2024-10-31T07:0... 29 # [RATE] BOJ/2024-12-19T04:00:00Z/2024-12-19T07:0... 1 More Python Instead of relying on explicit .groupby operations you can create your groups in Python. Considering these operations are performed along each column this should be quite performant as well and does not require a possibly expensive transpose or a deprecate API. The idea here is to use itertools to create the groupings, store the intermediate results in a dictionary and recreate a new DataFrame from those parts. import pandas as pd from itertools import groupby def extract_unique(column): splitted = column.split('/') return splitted[0], splitted[1][:10] result = {} for _, col_group in groupby(sorted(df.columns), key=extract_unique): first, *remaining = col_group result[first] = df[[first, *remaining]].sum(axis=1) result = pd.DataFrame(result) print(result.T) # 0 # [RATE] BOJ presser/2024-03-19T07:30:00Z/2024-03... 1 # [RATE] BOJ/2024-01-23T04:00:00Z/2024-01-23T07:0... 2 # [RATE] BOJ/2024-03-19T04:00:00Z/2024-03-19T07:0... 3 # [RATE] BOJ/2024-04-26T03:00:00Z/2024-04-26T06:0... 9 # [RATE] BOJ/2024-06-14T03:00:00Z/2024-06-14T06:0... 13 # [RATE] BOJ/2024-07-31T03:00:00Z/2024-07-31T06:0... 17 # [RATE] BOJ/2024-09-20T03:00:00Z/2024-09-20T06:0... 
21 # [RATE] BOJ/2024-10-31T04:00:00Z/2024-10-31T07:0... 25 # [RATE] BOJ/2024-12-19T04:00:00Z/2024-12-19T07:0... 29
2
2
78,689,833
2024-6-30
https://stackoverflow.com/questions/78689833/texttestrunner-doesnt-recognize-modules-when-executing-tests-in-a-different-pro
I am currently working on a project where I need to run tests inside a different file structure like this: /my_project β”œβ”€β”€ __init__.py β”œβ”€β”€ ...my python code /given_proj β”œβ”€β”€ __init__.py β”œβ”€β”€ /package β”‚ β”œβ”€β”€ __init__.py β”‚ └── main.py └── /tests └── test_main.py Current approach From inside my project I want to execute the tests within the given project. My current approach is using unittest.TextTestRunner like this: unittest.TextTestRunner().run(unittest.defaultTestLoader.discover('../given_proj/tests')). Problem with the current approach Of course the test file wants to import from main.py like this: from package.main import my_function. However, when I run my code, the tests fail to run because the "package" module cannot be found: ...\given_proj\tests\test_main.py", line 2, in <module> from package.main import my_function ModuleNotFoundError: No module named 'package' When I run the tests with python -m unittest discover -s tests from the command line in the directory of the given_proj they run fine. What I tried I tried changing the working directory to given_proj with os.chdir('../given_proj'), however it produces the same result. What I also tried was importing the module manually with importlib.import_module(). There I am not sure if I did it wrong or if it doesn't work either. My question How do I make the tests run as if I had run them from the directory they are supposed to be run from? Ideally I wouldn't need to change the "given_project" at all, because I want to do this with multiple projects. Reproducing it I reduced it to a very minimal project, if anybody wants to try and reproduce it. The file structure is the one at the top of the post. All __init__.py files are empty. /my_project/main.py: import os import unittest if __name__ == "__main__": dirname = "../given_proj/tests" #either "./" or "../" depending on where you run the python file from unittest.TextTestRunner().run(unittest.defaultTestLoader.discover(dirname)) /given_proj/package/main.py: def my_function(num): return num*2 /given_proj/tests/test_main.py: import unittest from package.main import my_function class TestMain(unittest.TestCase): def test_my_function(self): result = my_function(5) self.assertEqual(result, 10) result = my_function(10) self.assertEqual(result, 20) result = my_function(0) self.assertEqual(result, 0) if __name__ == '__main__': unittest.main()
A possible solution is to add the following instructions in your file test_main.py: import unittest import sys # <-- add this import sys.path.insert(1, '..') # <-- add this instruction print(sys.path) # <--- TO CHECK THE CONTENT OF sys.path #from package.main import my_function # <--- comment your import from given_proj.package.main import my_function # <--- add this import class TestMain(unittest.TestCase): def test_my_function(self): result = my_function(5) self.assertEqual(result, 10) result = my_function(10) self.assertEqual(result, 20) result = my_function(0) self.assertEqual(result, 0) if __name__ == '__main__': unittest.main() If I execute test method test_my_function() on my system it passes. The instruction sys.path.insert(1, '..') add a path where the test code search package. Output on my system If I change directory to the folder given_proj and execute the following command: > cd /path/to/given_proj /path/to/given_proj> python tests/test_main.py The output of the execution is the following: ['/path/to/given_proj/tests', '..', '/other/paths'] . Ran 1 test in 0.000s OK In the output you can see the print of the content of sys.path.
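If editing given_proj is not an option (as the question prefers), a runner-side alternative is to extend sys.path from my_project instead; this is a hedged, untested sketch that assumes the script is run from inside /my_project with the relative paths used in the question:
import sys
import unittest

sys.path.insert(0, "../given_proj")  # makes `package` importable without touching given_proj
suite = unittest.defaultTestLoader.discover("../given_proj/tests")
unittest.TextTestRunner().run(suite)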
3
2
78,679,818
2024-6-27
https://stackoverflow.com/questions/78679818/terminology-for-ordinal-index-versus-dataframe-index
I've set up a dataframe: df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'], 'Age': [25, 30, 35], 'Location': ['Seattle', 'New York', 'Kona']}, index=([10, 20, 30])) I ask the user to put in which row of the data they want to see. They input a 0, indicating the first row. However, df.loc[0] does not refer to the first row. Instead, it doesn't exist, because the index only has the values 10, 20, and 30. Is there terminology to distinguish these two types of indexes? The best I could come up with is ordinal indexes (for "What number row is this?") and dataframe indexes (for "What is the index of this row in the dataframe?"). To clarify, under my definitions, df.index(ordinal_index) == df_index. Is there a standard terminology for this?
If you look at the documentation you've linked, pandas uses the term index label to describe what you call a dataframe index. It's more clearly explained in this documentation on indexing, where the term position is used to refer to what you call an ordinal index. (Note that some popular answers on Stack Overflow use the term location rather than position.) Note that this terminology might get confusing because: The df.index property is essentially a mapping from row positions to row labels. (And this issue is especially compounded since the positions and labels are usually equivalent for most dataframe rows.) The df.columns.get_loc(column_label) function returns the position of a column with the given column label (and yet is called get_loc, which is why some people use the term location instead of position). The df.loc property expects to be accessed with a label (not a position/location): e.g. df.loc[row_label, column_label]. It's the df.iloc property that should be accessed with a position/location. (I'm answering my own question because I found an answer just before I posted the question - I've also updated this answer as I've found more accurate info.)
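A short sketch of the two access styles on the dataframe from the question (nothing new here beyond standard pandas indexing):
import pandas as pd

df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'],
                   'Age': [25, 30, 35],
                   'Location': ['Seattle', 'New York', 'Kona']},
                  index=[10, 20, 30])

print(df.iloc[0])            # by position: the first row
print(df.loc[10])            # by label: the row whose index label is 10
print(df.index.get_loc(10))  # 0, i.e. the position of the label 10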
2
2
78,693,052
2024-7-1
https://stackoverflow.com/questions/78693052/how-to-efficiently-calculate-the-share-of-an-aggregated-column
I have the following DataFrame and want to calculate the "share". import pandas as pd d = {"col1":["A", "A", "A", "B", "B", "B"], "col2":["start_amount", "mid_amount", "end_amount", "start_amount", "mid_amount", "end_amount"], "amount":[0, 2, 8, 1, 2, 3]} df_test = pd.DataFrame(d) df_test["share"] = 0 for i in range(len(df_test)): df_test.loc[i, "share"] = df_test.loc[i, "amount"] / df_test.loc[(df_test["col1"] == df_test.loc[i, "col1"]) & (df_test["col2"] == "end_amount"), "amount"].values This works but is far from efficient. Is there a better way to do my calculation?
This is equivalent to selecting the rows with "end_amount", then performing a map per "col1" to then divide "amount": s = df_test.loc[df_test['col2'].eq('end_amount')].set_index('col1')['amount'] df_test['share'] = df_test['amount']/df_test['col1'].map(s) Output: col1 col2 amount share 0 A start_amount 0 0.000000 1 A mid_amount 2 0.250000 2 A end_amount 8 1.000000 3 B start_amount 1 0.333333 4 B mid_amount 2 0.666667 5 B end_amount 3 1.000000
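An alternative, hedged sketch that stays vectorised, assuming each col1 group contains exactly one end_amount row (as in the question's data):
import pandas as pd

d = {"col1": ["A", "A", "A", "B", "B", "B"],
     "col2": ["start_amount", "mid_amount", "end_amount",
              "start_amount", "mid_amount", "end_amount"],
     "amount": [0, 2, 8, 1, 2, 3]}
df_test = pd.DataFrame(d)

# keep only the end_amount values, then broadcast each group's single value to every row
end = df_test["amount"].where(df_test["col2"].eq("end_amount"))
df_test["share"] = df_test["amount"] / end.groupby(df_test["col1"]).transform("max")
print(df_test)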
4
2
78,688,703
2024-6-30
https://stackoverflow.com/questions/78688703/altair-barchart-is-blank-matplotlib-equivalent-shows-correct-visualization
I have been using altair for a time and this is the first time that I have experienced this issue, I have this simple code: import pandas as pd import altair as alt # extract data into a nested list data = { "Name of District": ["Kollam", "Beed", "Kalahandi", "West Medinipur", "Birbhum", "Howrah"], "No. of Cases": [19, 11, 42, 145, 199, 85], } # create a dataframe from the extracted data df = pd.DataFrame( data ) # Display the first 5 rows print(df.head().to_markdown(index=False, numalign="left", stralign="left")) # Create a bar chart with `Name of District` on the x-axis and `No. of Cases` on the y-axis chart = alt.Chart(df).mark_bar().encode( x='Name of District', y='No. of Cases', tooltip=['Name of District', 'No. of Cases'] ).properties( title='Bar Chart of No. of Cases by Name of District' ).interactive() # Save the chart chart.save('no_of_cases_by_name_of_district_bar_chart.json') display(chart) it returns this plot: If I plot it using matplotlib, then i get the correct plot: I have tried this on colab and on my local machine and I have gotten the same results, why could this happen?
Altair requires that special characters be escaped in the column names. See documentation here. This code should produce the plot you're looking for. chart = alt.Chart(df).mark_bar( ).encode( x='Name of District', y='No\. of Cases', tooltip=['Name of District', 'No\. of Cases'] ).properties( title='Bar Chart of No. of Cases by Name of District' ).interactive()
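If escaping feels awkward, another hedged option is to rename the column so it contains no special characters at all; this sketch reuses the df built in the question and assumes the shorter label "Cases" is acceptable:
import altair as alt

chart = alt.Chart(df.rename(columns={"No. of Cases": "Cases"})).mark_bar().encode(
    x="Name of District",
    y="Cases",
    tooltip=["Name of District", "Cases"],
).properties(
    title="Bar Chart of No. of Cases by Name of District"
).interactive()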
3
2
78,692,959
2024-7-1
https://stackoverflow.com/questions/78692959/polars-truncates-decimals
I'm trying to truncate floating point numbers in my DataFrame to a desired number of decimal places. I've found that this can be done using Pandas and NumPy here, but I've also seen that it might be possible with polars.Config.set_float_precision. Below is my current approach, but I think I might be taking extra steps. import polars as pl data = { "name": ["Alice", "Bob", "Charlie"], "grade": [90.23456, 80.98765, 85.12345], } df = pl.DataFrame(data) ( df # Convert to string .with_columns( pl.col("grade").map_elements( lambda x: f"{x:.5f}", return_dtype=pl.String ).alias("formatted_grade") ) # Slice to get desired decimals .with_columns( pl.col("formatted_grade").str.slice(0, length = 4) ) # Convert back to Float .with_columns( pl.col("formatted_grade").cast(pl.Float64) ) )
You can use the Polars - Numpy integration like this: df = df.with_columns(truncated_grade=np.trunc(pl.col("grade") * 10) / 10) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ name ┆ grade ┆ truncated_grade β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ f64 ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ══════════β•ͺ═════════════════║ β”‚ Alice ┆ 90.23456 ┆ 90.2 β”‚ β”‚ Bob ┆ 80.98765 ┆ 80.9 β”‚ β”‚ Charlie ┆ 85.12345 ┆ 85.1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Full code: import numpy as np import polars as pl data = { "name": ["Alice", "Bob", "Charlie"], "grade": [90.23456, 80.98765, 85.12345], } df = pl.DataFrame(data) df = df.with_columns(truncated_grade=np.trunc(pl.col("grade") * 10) / 10) print(df)
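A pure-Polars sketch that avoids the NumPy dependency, under the assumption that the grades are non-negative (for non-negative values floor() and trunc() agree):
import polars as pl

df = pl.DataFrame({
    "name": ["Alice", "Bob", "Charlie"],
    "grade": [90.23456, 80.98765, 85.12345],
})
df = df.with_columns(truncated_grade=(pl.col("grade") * 10).floor() / 10)
print(df)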
3
6
78,690,388
2024-7-1
https://stackoverflow.com/questions/78690388/python3-8-cannot-import-name-windowed-complete
Ubuntu 20.04 Python 3.8 Got error ImportError: cannot import name 'windowed_complete' from 'more_itertools' However more_itertools is clearly installed. Is it possible, that Python 3.9 required to run windowed_complete ? Or how this error should be resolved ? Traceback (most recent call last): File "diarize.py", line 9, in <module> from nemo.collections.asr.models.msdd_models import NeuralDiarizer File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/asr/__init__.py", line 15, in <module> from nemo.collections.asr import data, losses, models, modules File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module> from nemo.collections.asr.losses.audio_losses import SDRLoss File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module> from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module> from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module> from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module> from nemo.collections.common.parts.preprocessing import collections, parsers File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/common/parts/preprocessing/collections.py", line 23, in <module> from nemo.collections.common.parts.preprocessing import manifest, parsers File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/common/parts/preprocessing/parsers.py", line 23, in <module> from nemo.collections.common.parts.preprocessing import cleaners File "/home/admin/.local/lib/python3.8/site-packages/nemo/collections/common/parts/preprocessing/cleaners.py", line 17, in <module> import inflect File "/home/admin/.local/lib/python3.8/site-packages/inflect/__init__.py", line 80, in <module> from more_itertools import windowed_complete ImportError: cannot import name 'windowed_complete' from 'more_itertools' (/usr/lib/python3/dist-packages/more_itertools/__init__.py)```
pip install more-itertools==10.3.0 (latest version) solved it
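The traceback above shows the import resolving to /usr/lib/python3/dist-packages (the Debian/Ubuntu system copy) rather than a pip-installed copy under ~/.local, so an older system version was likely shadowing the upgrade. A small diagnostic sketch, assuming Python 3.8+:
import more_itertools
from importlib.metadata import version  # importlib.metadata is in the stdlib from 3.8

print(more_itertools.__file__)      # which copy is actually being imported
print(version("more-itertools"))    # which version the environment metadata reports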
2
1
78,688,251
2024-6-30
https://stackoverflow.com/questions/78688251/making-sale-line-id-field-invisible
I'm working on customizing Odoo and have encountered an issue with the sale_line_id field. I'm inheriting the project.task model in my custom module (wsl_available_drivers). In view, I need to make the sale_line_id field invisible. <record id="view_task_form2" model="ir.ui.view"> <field name="name">Project.task.view.form.inherit.available.drivers</field> <field name="model">project.task</field> <field name="inherit_id" ref="project.view_task_form2"/> <field name="arch" type="xml"> <xpath expr="//field[@name='user_ids']" position="before"> <field name="first_user" invisible="1"/> <field name="Journey_start_date"/> <field name="recurring_task" invisible="1"/> </xpath> <xpath expr="//field[@name='sale_line_id']" position="attribute"> <attribute name="invisible">1</attribute> </xpath> </field> </record> When attempting to hide sale_line_id using XPath, I receive an error stating that the field does not exist in the project.task model. Upon further investigation: I found that sale_line_id is related to the Task model, with relatedmodel pointing to sale.order.line. However, I cannot locate the sale_line_id field within the sale.order.line model or its related modules.
The sale_project module adds two sale_line_id fields, and the visibility of the field depends on the group to which the user belongs; the second field has an additional invisible attribute. The XPath expression will match the first node it finds. If you need to hide the two fields, add another XPath to target the second field. Example: <record id="view_task_form_inherit" model="ir.ui.view"> <field name="name">project.task.form.inherit</field> <field name="model">project.task</field> <field name="inherit_id" ref="sale_project.view_sale_project_inherit_form"/> <field name="arch" type="xml"> <xpath expr="//field[@name='sale_line_id']" position="attributes"> <attribute name="invisible">1</attribute> </xpath> <xpath expr="//field[@name='sale_line_id'][2]" position="attributes"> <attribute name="invisible">1</attribute> </xpath> </field> </record> For more details, check the View resolution and Inheritance specs sections of the documentation.
3
1
78,692,139
2024-7-1
https://stackoverflow.com/questions/78692139/sum-multiple-rows-from-multiple-columns-in-a-dataframe-for-a-group
For each group in a groupby, I want to sum certain rows from several columns and output them in a new column, is_m_days. Each Group (a Group has CT/RT and has a Quantity from 1 or 2 or 3 or more rows, randomly mixed up) in 'ATEXT' For the Sum, each group has a Row before and after. DataFrame: data = {'ATEXT': ['', 'CT', 'RT', '', '', '', '', 'CT', 'CT', 'CT', 'TT', ''], 'BEGUZ_UE': [11.0, 23.0, 33.0, 15.0, 12.75, 19.75, 14.75, 23.0, 24.0, 24.0, 33.0, 15.0], 'subtract': [0.0, 0.0, 0.0, 0.2, np.nan, np.nan, 2.0, np.nan, np.nan, np.nan, np.nan, 0.0], 'add': [3.92, 0.0, 0.0, 0.0, np.nan, np.nan, 0.0, np.nan, np.nan, np.nan, np.nan, 3.57], 'UE_more_days': [np.nan, np.nan, 56.0, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 104.0, np.nan]} Result should be: ATEXT BEGUZ_UE subtract add UE_more_days is_m_days 0 11.00 *0.00* *3.92* 1 CT *23.00* 0.00 0.00 2 RT *33.00* 0.00 0.00 56.0 3 *15.00* 0.20 0.00 *74.92* 4 12.75 5 19.75 6 14.75 *2.00* *0.00* 7 CT *23.00* 8 RT *24.00* 9 CT *24.00* 10 CT *33.00* 104.0 11 *15.00* 0.00 3.57 *117.00* 12 etc My try was: m = df['ATEXT'].eq("") cond = (~m) & m.shift(-1) df['UE_more_days'] = (df['BEGUZ_UE'].mask(m) .groupby(m.cumsum()).cumsum() .where(cond) ) tmv = (df[['subtract', 'add']] .shift() .groupby(m.cumsum()) .transform('max') .eval('add-subtract') ) df['is_m_days'] = (df.groupby(m[::-1].cumsum())['BEGUZ_UE'] .transform('sum') .add(tmv) .where(cond) .shift() ) Is there a better solution?
Your approach is good, you could simplify it to use a single groupby (with extra boolean masks): m1 = df['ATEXT'].eq('') m2 = m1 & m1.shift(fill_value=True) m3 = m1!=m2 group = m2.cumsum() df.loc[m3, 'is_m_days'] = (pd .DataFrame({'A': df['BEGUZ_UE'].mask(m2), 'B': df['add'].sub(df['subtract']).where(m2)}) .groupby(group).transform('sum').sum(axis=1) ) Output: ATEXT BEGUZ_UE subtract add UE_more_days is_m_days 0 11.00 0.0 3.92 NaN NaN 1 CT 23.00 0.0 0.00 NaN NaN 2 RT 33.00 0.0 0.00 56.0 NaN 3 15.00 0.2 0.00 NaN 74.92 4 12.75 NaN NaN NaN NaN 5 19.75 NaN NaN NaN NaN 6 14.75 2.0 0.00 NaN NaN 7 CT 23.00 NaN NaN NaN NaN 8 CT 24.00 NaN NaN NaN NaN 9 CT 24.00 NaN NaN NaN NaN 10 TT 33.00 NaN NaN 104.0 NaN 11 15.00 0.0 3.57 NaN 117.00
3
2
78,676,400
2024-6-27
https://stackoverflow.com/questions/78676400/customizing-pygtk-file-chooser-dialog
I am creating a Gtk file chooser dialog as follows (see below for full example): dialog = Gtk.FileChooserDialog( title="Select a File", action=Gtk.FileChooserAction.OPEN) I would also like to add a checkbox and dropdown combobox as extra widgets. Adding one extra widget works fine: cb = Gtk.CheckButton("Only media files") dialog.set_extra_widget(cb) However, I would like to have a label and a combo box as well. I tried this: cb = Gtk.CheckButton("Only media files") dialog.set_extra_widget(cb) db = Gtk.ComboBoxText() db.append_text("Option 1") db.append_text("Option 2") db.set_active(0) dialog.set_extra_widget(db) However this only shows the combo box, not the check button. I thought maybe only one widget is allowed, so I created an hbox: cb = Gtk.CheckButton("Only media files") db = Gtk.ComboBoxText() db.append_text("Option 1") db.append_text("Option 2") db.set_active(0) hbox = Gtk.HBox(spacing=10) hbox.pack_start(cb, False, False, 0) hbox.pack_start(db, False, False, 0) dialog.set_extra_widget(hbox) Nope, nothing is shown. That doesn't work either. Then I read in the manual that "To pack widgets into a custom dialog, you should pack them into the Gtk.Box, available via Gtk.Dialog.get_content_area()." So I tried this: cb = Gtk.CheckButton("Only media files") db = Gtk.ComboBoxText() db.append_text("Option 1") db.append_text("Option 2") db.set_active(0) hbox = Gtk.HBox(spacing=10) hbox.pack_start(cb, False, False, 0) hbox.pack_start(db, False, False, 0) dbox = dialog.get_content_area() dbox.pack_start(hbox, False, False, 0) Thus, my question is this: how can I add multiple custom widgets to the standard file chooser dialog from pygtk? Here is a minimal reproducible (I hope) example. Just exchange the code between scisors with the fragments above if you want to test it. import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk def on_button_clicked(button): dialog = Gtk.FileChooserDialog( title="Select a File", action=Gtk.FileChooserAction.OPEN) # 8< ------------------- cb = Gtk.CheckButton("Only media files") db = Gtk.ComboBoxText() db.append_text("Option 1") db.append_text("Option 2") db.set_active(0) hbox = Gtk.HBox(spacing=10) hbox.pack_start(cb, False, False, 0) hbox.pack_start(db, False, False, 0) dbox = dialog.get_content_area() dbox.pack_start(hbox, False, False, 0) # 8< ------------------- response = dialog.run() dialog.destroy() window = Gtk.Window() window.set_default_size(300, 100) window.connect("destroy", Gtk.main_quit) button = Gtk.Button(label="Open FileChooserDialog") button.connect("clicked", on_button_clicked) window.add(button) window.show_all() Gtk.main()
It appears that solution #2 was the right way to go. However, I did not use the show_all method to actually show the widgets, which was the problem. With the following fragment the file chooser dialog with extra widgets works: cb = Gtk.CheckButton("Only media files") db = Gtk.ComboBoxText() db.append_text("Option 1") db.append_text("Option 2") db.set_active(0) hbox = Gtk.VBox(spacing=10) hbox.pack_start(cb, False, False, 0) hbox.pack_start(db, False, False, 0) hbox.show_all() dialog.set_extra_widget(hbox)
2
1
78,690,267
2024-7-1
https://stackoverflow.com/questions/78690267/how-to-add-a-column-with-json-representation-of-rows-in-polars-dataframe
I want to use polars to take a csv input and get for each row another column (e.g called json_per_row) where the entry per row is the json representation of the entire row. I also want to select only a subset of the columns to be included alongside the json_per_row column. Ideally I don’t want to hardcode the number / names of the columns of my input but just to illustrate I’ve provided a simple example below: # Input: csv with columns time, var1, var2,... s1 = pl.Series("time", [100, 200, 300]) s2 = pl.Series("var1", [1,2,3]) s3 = pl.Series("var2", [4,5,6]) # I want to add this column with polars somehow output_col = pl.Series("json_per_row", [ json.dumps({ "time": 100, "var1":1, "var2":4 }), json.dumps({ "time": 200, "var1":2, "var2":5 }), json.dumps({ "time":300 , "var1":3, "var2":6 }) ]) # Desired output df = pl.DataFrame([s1, output_col]) print(df) So is there a way to do this with the functions in the polars library? I'd rather not use json.dumps if it's not needed since as the docs say it can affect performance if you have to bring in external / user defined functions. Thanks
you can use read_csv() to read your csv data, but here I'll just use Series data you provided. .struct() to combine all the columns into one struct column. struct.json_encode() to convert to json. ( pl.DataFrame([s1,s2,s3]) .select( pl.col.time, json_per_row = pl.struct(pl.all()).struct.json_encode() ) ) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ time ┆ json_per_row β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ════════════════════════════════║ β”‚ 100 ┆ {"time":100,"var1":1,"var2":4} β”‚ β”‚ 200 ┆ {"time":200,"var1":2,"var2":5} β”‚ β”‚ 300 ┆ {"time":300,"var1":3,"var2":6} β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
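Putting it together with the csv input mentioned in the question; input.csv and the time column name are placeholders taken from the example, not part of the original answer:
import polars as pl

df = pl.read_csv("input.csv")  # columns: time, var1, var2, ...
out = df.select(
    pl.col("time"),
    json_per_row=pl.struct(pl.all()).struct.json_encode(),
)
print(out)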
3
2
78,689,910
2024-6-30
https://stackoverflow.com/questions/78689910/why-is-numpy-converting-an-object-int-type-to-an-object-float-type
This could be a bug, or could be something I don't understand about when numpy decides to convert the types of the objects in an "object" array. X = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [1158941147679947299,0] Y = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [11589411476799472995,0] Z = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [115894114767994729956,0] print(type(X[0]),X[0]) # <class 'int'> 7047216832217320738 print(type(Y[0]),Y[0]) # <class 'float'> 1.7477687161336848e+19 print(type(Z[0]),Z[0]) # <class 'int'> 121782390452532103395 The arrays themselves remain "object" type (as expected). It is unexpected that the Y array's objects got converted to "floats". Why is that happening? As a consequence I immediately loose precision in my combinatorics. To make things even stranger, removing the 0 fixes things: X = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [1158941147679947299] Y = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [11589411476799472995] Z = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [115894114767994729956] print(type(X[0]),X[0]) # <class 'int'> 7047216832217320738 print(type(Y[0]),Y[0]) # <class 'int'> 17477687161336846434 print(type(Z[0]),Z[0]) # <class 'int'> 121782390452532103395 I have tried other things, such as using larger/smaller numbers, but rarely (if ever) end up with "floats". It is something very specific about the size of these particular "int" values. Better code that shows the problem. import numpy as np A = np.array([1,1],dtype=object) + [2**62,0] B = np.array([1,1],dtype=object) + [2**63,0] C = np.array([1,1],dtype=object) + [2**64,0] D = np.array([1,1],dtype=object) + [2**63] E = np.array([1,1],dtype=object) + [2**63,2**63] print(type(A[0]),A[0]) # <class 'int'> 4611686018427387905 print(type(B[0]),B[0]) # <class 'float'> 9.223372036854776e+18 print(type(C[0]),C[0]) # <class 'int'> 18446744073709551617 print(type(D[0]),D[0]) # <class 'int'> 9223372036854775809 print(type(E[0]),E[0]) # <class 'int'> 9223372036854775809
In [323]: X = np.array([5888275684537373439, 1945629710750298993],dtype=object) Case 1 - not too large integer in second argument: In [324]: X+[1158941147679947299,0] Out[324]: array([7047216832217320738, 1945629710750298993], dtype=object) Same thing if we explicity make an object array: In [325]: X+np.array([1158941147679947299,0],object) Out[325]: array([7047216832217320738, 1945629710750298993], dtype=object) 2nd case - conversion to floats: In [326]: X+[11589411476799472995,0] Out[326]: array([1.7477687161336848e+19, 1.945629710750299e+18], dtype=object) Again with explicit object it's ok: In [327]: X+np.array([11589411476799472995,0],object) Out[327]: array([17477687161336846434, 1945629710750298993], dtype=object) Converting the list to array, without dtype spec makes a float - which propagates through the sum: In [328]: np.array([11589411476799472995,0]) Out[328]: array([1.15894115e+19, 0.00000000e+00]) where as the first case is small enough to be int64: In [329]: np.array([1158941147679947299,0]) Out[329]: array([1158941147679947299, 0], dtype=int64) third case - remaining int: In [330]: X+[115894114767994729956,0] Out[330]: array([121782390452532103395, 1945629710750298993], dtype=object) In [331]: X+np.array([115894114767994729956,0],object) Out[331]: array([121782390452532103395, 1945629710750298993], dtype=object) This is large enough to remain object dtype: In [332]: np.array([115894114767994729956,0]) Out[332]: array([115894114767994729956, 0], dtype=object) So the key difference is in how the list is made into an array. Object dtype is a fallback option, something that's used when it can't make a "regular" numeric array. You should always assume that object dtype math is a 'step child', something that's chosen as second best. The second case, without the 0, is another dtype: In [334]: np.array([11589411476799472995]) Out[334]: array([11589411476799472995], dtype=uint64) It is never wise to make assumptions about when a list is converted into an object dtype array. It that feature is important, make it explicit!
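The takeaway as a minimal sketch (same numbers as the question; the only change is making the second operand an explicit object array so no float64 intermediate is created):
import numpy as np

X = np.array([5888275684537373439, 1945629710750298993], dtype=object)
Y = X + np.array([11589411476799472995, 0], dtype=object)  # explicit object dtype
print(type(Y[0]), Y[0])  # <class 'int'> 17477687161336846434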
2
1
78,682,941
2024-6-28
https://stackoverflow.com/questions/78682941/updating-callback-parameters-for-sending-post-requests-to-a-site
It is simple enough to send a GET request to the url https://apps.fpb.org.za/erms/fpbquerytitle.aspx?filmtitle=&Submit1=Search to get the 1st page of search results of the wesbite. However, I cannot figure out how to request subsequent pages, which involve sending a POST request to the same url but with a lot form data parameters. I have managed to get the results of page 2 as follows with requests.session() as s: ### REQUESTING FIRST PAGE r = s.request('get', 'https://apps.fpb.org.za/erms/fpbquerytitle.aspx?filmtitle=&Submit1=Search') soup = BeautifulSoup(r.content, 'html.parser') ### REQUESTING SECOND PAGE page_number = 2 form_data = { '__EVENTTARGET': '', '__EVENTARGUMENT': '', '__VIEWSTATE': soup.find('input', id = '__VIEWSTATE').get('value'), '__VIEWSTATEGENERATOR': soup.find('input', id = '__VIEWSTATEGENERATOR').get('value'), '__EVENTVALIDATION': soup.find('input', id = '__EVENTVALIDATION').get('value'), 'vwfpbquerytitle_DXKVInput': soup.find('input', id = 'vwfpbquerytitle_DXKVInput').get('value'), 'vwfpbquerytitle$CallbackState': soup.find('input', id = 'vwfpbquerytitle_CallbackState').get('value'), 'vwfpbquerytitle$DXSelInput': soup.find('input', id = 'vwfpbquerytitle_DXSelInput').get('value'), 'vwfpbquerytitle_DXHFPWS': soup.find('input', id = 'vwfpbquerytitle_DXHFPWS').get('value'), 'popupControlWS': soup.find('input', id = 'popupControlWS').get('value'), 'DXScript': '1_155,1_87,1_147,1_97,1_123,1_106,1_113,1_84,1_139,1_137,1_98,1_135', '__CALLBACKID': 'vwfpbquerytitle', '__CALLBACKPARAM': ( f"c0:KV|777;{soup.find('input', id = 'vwfpbquerytitle_DXKVInput').get('value')};GB|20;12|PAGERONCLICK3|PN{page_number - 1};", ), } r2 = s.request('post', 'https://apps.fpb.org.za/erms/fpbquerytitle.aspx?filmtitle=&Submit1=Search', data = form_data) soup2 = BeautifulSoup(r.content, 'html.parser') But the last response r2 does not give me any indication for what the constantly changing __CALLBACKPARAM and vwfpbquerytitle$CallbackState parameters should be for the next request, as they are not present anywhere in the HTML response. In fact the only reason I can set the __CALLBACKPARAM correctly for the 2nd request above is because I can see by inspecting the website that the patterns such as c0:KV|777, etc. are always there for that 2nd request. So my question is how can I request subsequent pages past this point? Because setting __CALLBACKPARAM in the previous way, e.g. page_number = 5 '__CALLBACKPARAM': f"c0:KV|777;{soup.find('input', id = 'vwfpbquerytitle_DXKVInput').get('value')};GB|20;12|PAGERONCLICK3|PN{page_number - 1};" just gives me the same 2nd page of results as before
The only variable part of __CALLBACKPARAM that matters is: PAGERONCLICK{}, 2{};12 and PN{}. __VIEWSTATE, __CALLBACKID and vwfpbquerytitle$CallbackState don't actually need to be changed. credit to @xoxouser for figuring out that 777 was the length of str(array), it was the only part that seemed random, but in the end it did not actually matter; as long as it is consistent with the supplied array. Here's the code: import requests headers = { 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36', } params = { 'filmtitle': '', 'Submit1': 'Search', } data = { "__VIEWSTATE": "/wEPDwUKLTMyMDI3MDk3MQ9kFgICAw9kFgICAQ88KwAaAgAPFgIeD0RhdGFTb3VyY2VCb3VuZGdkFzwrAAYBBRQrAAJkZGQYAgUeX19Db250cm9sc1JlcXVpcmVQb3N0QmFja0tleV9fFgUFD3Z3ZnBicXVlcnl0aXRsZQUVdndmcGJxdWVyeXRpdGxlJERYSEZQBSJ2d2ZwYnF1ZXJ5dGl0bGUkRFhIRlAkVFBDRkNtMSRUQyRPBSJ2d2ZwYnF1ZXJ5dGl0bGUkRFhIRlAkVFBDRkNtMSRUQyRDBQxwb3B1cENvbnRyb2wFEUVudGl0eURhdGFTb3VyY2UxDzwrAAkBAQ9oZGQs+akQiwv01gWVLky5TJXi94fPtkrfInkrLr1p5PmBGQ==", "__CALLBACKID": "vwfpbquerytitle", "vwfpbquerytitle$CallbackState": "BwMHAgIERGF0YQb+GgAAAACfmwAAn5sAAAAAAABkAAAAAAYAAAAKbWF0ZXJpYWxpZAptYXRlcmlhbGlkBAAADW1hdGVyaWFsdGl0bGUNbWF0ZXJpYWx0aXRsZQcAABBwdWJsaWNhdGlvbl95ZWFyEHB1YmxpY2F0aW9uX3llYXIDAAALcnVudGltZV9taW4LcnVudGltZV9taW4HAAAUbWF0Zm9ybWF0ZGVzY3JpcHRpb24UbWF0Zm9ybWF0ZGVzY3JpcHRpb24HAAACY2ECY2EHAAAAAAAABwAHAAcABwAG//8EBV1DAQAHAg8gQSBET0dTIEpPVVJORVkDBuMHBwIDMTA4BwISRElHSVRBTCBWSURFTyBESVNDBwIKNy05UEcgRCAgVgcABwAG//8EBVAuAQAHAhEgQkFMTEVSUyBTRUFTT04gMQMG4AcHAgMyNzEHAhJESUdJVEFMIFZJREVPIERJU0MHAgoxNiBEICBMICBTBwAHAAb//wQFlCABAAcCDSBCT0JCWSBKQVNPT1MDBt4HBwIDMTIwBwIIVEhFQVRSRSAHAgRQRyBWBwAHAAb//wQGFmsHAgwgQ0xPVUQgQVRMQVMDBt0HBwIDMTY1BwISRElHSVRBTCBWSURFTyBESVNDBwIIMTYgTCBTIFYHAAcABv//BAUSUAEABwIoIERFTU9OIFNMQVlFUiAtVE8gVEhFIFNXT1JEU01JVEggVklMTEFHRQMG5wcHAgMxMTAHAgdUSEVBVFJFBwIEMTMgVgcABwAG//8EBcRSAQAHAgogSU1BR0lOQVJZAwboBwcCAzEwNAcCB1RIRUFUUkUHAg0xNiBEICBIICBMICBWBwAHAAb//wQFXEoBAAcCHiBNQVlBIFRIRSBCRUUgMztUSEUgR09MREVOIE9SQgMG5QcHAgI4OAcCB1RIRUFUUkUHAgNQRyAHAAcABv//BAWtIgEABwIWIE1JS0UgJiBNT0xMWSBTRUFTT04gNAMG3gcHAgM0MTMHAhJESUdJVEFMIFZJREVPIERJU0MHAgcxMyBEICBMBwAHAAb//wQFpVIBAAcCFSBNT1RIRVLigJlTIElOU1RJTkNUIAMG6AcHAgI5NAcCB1RIRUFUUkUHAg4xNiBDVCAgRCAgUyAgVgcABwAG//8EBdgjAQAHAhAgUk9CT1QgT1ZFUkxPUkRTAwbeBwcCAjkwBwIIVEhFQVRSRSAHAgkxMC0xMlBHIFYHAAcABv//BAW1UAEABwIlIFNQSURFUi1NQU4gOiBBQ1JPU1MgVEhFIFNQSURFUi1WRVJTRQMG5wcHAgMxNDAHAgdUSEVBVFJFBwIHMTMgTCAgVgcABwAG//8EBTxTAQAHAhYgVEhFIEJPWSBBTkQgVEhFIEhFUk9OAwboBwcCAzEyNQcCB1RIRUFUUkUHAgc3LTlQRyBWBwAHAAb//wQFliMBAAcCHCBUSEUgQ1VSU0UgT0YgSElHSFdBWSBTSEVJTEEDBt4HBwIDMTMwBwIIVEhFQVRSRSAHAg4xMyBIICBMICBTViAgVgcABwAG//8EBTxQAQAHAicgVEhFIFVOTElLRUxZIFBJTEdSSU1BR0UgT0YgSEFST0xEIEZSWSADBucHBwIDMTA4BwIHVEhFQVRSRQcCCjE2IEQgIEwgIFYHAAcABv//BAWZMwEABwIQIFRIRSBXSE9MRSBUUlVUSAMG3wcHAgEyBwIIVEhFQVRSRSAHAgMxMCAHAAcABv//BAYdAgcCAyIxIgMG3QcHAgMxMDcHAhJESUdJVEFMIFZJREVPIERJU0MHAgZQRyBMIFYHAAcABv//BAYzYQcCEyMxIENIRUVSTEVBREVSIENBTVADBtsHBwICOTIHAhJESUdJVEFMIFZJREVPIERJU0MHAggxNiBMIE4gUwcABwAG//8EBURSAQAHAhcjTE9WRU1ZU0VMRklFIChGRUFUVVJFKQMG5wcHAgI4NAcCB1RIRUFUUkUHAgoxMyBMICBQICBWBwAHAAb//wQFAU8BAAcCDChVTilDUkVESVRFRAMG5gcHAgI0NgcCBk9OTElORQcCCTEwLTEyUEcgTAcABwAG//8EBgA9BwIDLjQ1AwbXBwcCAjkzBwISRElHSVRBTCBWSURFTyBESVNDBwIIMTggTCBTIFYHAAcABv//BAZlKQcCCytUSEUrREFNTkVEAwbVBwcCAjcxBwISRElHSVRBTCBWSURFTyBESVNDBwIEUEcgTAcABwAG//8EBuoKBwIMMSBHSUFOVCBMRUFQAwbSBwcCAzEyOQcCEkRJR0lUQUwgVklERU8gRElTQwcCBDE2IEwHAAcABv//BAaOGwcCDDEgR0lBTlQgTEV
BUAMG0gcHAgMxMjkHAgVWSURFTwcCBDE2IEwHAAcABv//BAZaAwcCHzEgUFcgV1JFU1RMSU5HIC0gQkxPT0QgQU5EIEdVVFMDBtgHBwIDMjMwBwISRElHSVRBTCBWSURFTyBESVNDBwIDMTYgBwAHAAb//wQGEhMHAgIxMAMGzwcHAgMxMTYHAhJESUdJVEFMIFZJREVPIERJU0MHAgMxMyAHAAcABv//BAYhQAcCAjEwAwbXBwcCAzEyMgcCEkRJR0lUQUwgVklERU8gRElTQwcCAzEzIAcABwAG//8EBlRRBwIaMTAgQUNUSU9OIE1PVklFUyAoQk9YIFNFVCkDBtUHBwIBMAcCEkRJR0lUQUwgVklERU8gRElTQwcCBjE2IEwgVgcABwAG//8EBa4xAQAHAhMxMCBDTE9WRVJGSUVMRCBMQU5FAwbgBwcCAjk5BwISRElHSVRBTCBWSURFTyBESVNDBwIJMTAtMTJQRyBWBwAHAAb//wQFWC0BAAcCEzEwIENMT1ZFUkZJRUxEIExBTkUDBt8HBwIDMTA0BwIIVEhFQVRSRSAHAgQxMyBWBwAHAAb//wQFVCwBAAcCHzEwIENMT1ZFUkZJRUxEIExBTkUgKFRSQUlMRVIgQSkDBt8HBwIBMwcCCFRIRUFUUkUgBwIIMTAtMTJQRyAHAAcABv//BAXnNgEABwITMTAgREFZUyBJTiBTVU4gQ0lUWQMG4QcHAgI4NwcCCFRIRUFUUkUgBwIHMTMgTCAgVgcABwAG//8EBek2AQAHAhkxMCBEQVlTIElOIFNVTiBDSVRZIFRSTCAxAwbhBwcCATIHAghUSEVBVFJFIAcCCTEwLTEyUEcgVgcABwAG//8EBlhPBwIOMTAgREFZUyBUTyBXQVIDBtkHBwIDMTAyBwISRElHSVRBTCBWSURFTyBESVNDBwIEMTAgTAcABwAG//8EBhBQBwILMTAgREVBRCBNRU4DBtkHBwICOTMHAhJESUdJVEFMIFZJREVPIERJU0MHAgYxOCBMIFYHAAcABv//BAYhJQcCDTEwIEdMQURJQVRPUlMDBtYHBwIDMTAxBwISRElHSVRBTCBWSURFTyBESVNDBwIEMTAgVgcABwAG//8EBkw5BwIQMTAgSVRFTVMgT1IgTEVTUwMG1wcHAgI2NQcCEkRJR0lUQUwgVklERU8gRElTQwcCA1BHIAcABwAG//8EBphNBwIQMTAgSVRFTVMgT1IgTEVTUwMG2QcHAgI3NgcCEkRJR0lUQUwgVklERU8gRElTQwcCA1BHIAcABwAG//8EBskvBwIdMTAgS1VORyBGVSBDTEFTU0lDUyAoQk9YIFNFVCkDBtUHBwIBMAcCEkRJR0lUQUwgVklERU8gRElTQwcCBDE2IFYHAAcABv//BAUKUwEABwISMTAgTElWRVMgKEZFQVRVUkUpAwboBwcCAjg4BwIHVEhFQVRSRQcCBFBHIFYHAAcABv//BAXSUgEABwIQMTAgTElWRVMgKFRSTCBBKQMG6AcHAgEyBwIGT05MSU5FBwIEUEcgVgcABwAG//8EBoNUBwIWMTAgTUFHTklGSUNFTlQgS0lMTEVSUwMG2QcHAgI4NgcCEkRJR0lUQUwgVklERU8gRElTQwcCBjEzUEcgVgcABwAG//8EBvlpBwIWMTAgTUlOVVRFUyBUTyBNSUROSUdIVAMG3AcHAgI5OAcCEkRJR0lUQUwgVklERU8gRElTQwcCCjE4IEwgTiBTIFYHAAcABv//BAbKLwcCKDEwIE1PVklFUyAtIEhPTExZV09PRCBCQUQgQk9ZUyAoQk9YIFNFVCkDBtUHBwIBMAcCEkRJR0lUQUwgVklERU8gRElTQwcCCjE2IEwgTiBTIFYHAAcABv//BAb2LgcCJTEwIE1PVklFUyAtIEhPTExZV09PRCBESVZBUyAoQk9YIFNFVCkDBtUHBwIBMAcCEkRJR0lUQUwgVklERU8gRElTQwcCAzE4IAcABwAG//8EBjlKBwIfMTAgUVVFU1RJT05TIEZPUiBUSEUgREFMQUkgTEFNQQMG2AcHAgI4NwcCEkRJR0lUQUwgVklERU8gRElTQwcCA1BHIAcABwAG//8EBkNDBwITMTAgUklMTElOR1RPTiBQTEFDRQMG1wcHAgMxMDYHAhJESUdJVEFMIFZJREVPIERJU0MHAgQxMCBWBwAHAAb//wQGthoHAhoxMCBUSElOR1MgSSBIQVRFIEFCT1VUIFlPVQMGzwcHAgI5NAcCEkRJR0lUQUwgVklERU8gRElTQwcCAzEwIAcABwAG//8EBrcaBwIaMTAgVEhJTkdTIEkgSEFURSBBQk9VVCBZT1UDBs8HBwICOTQHAgVWSURFTwcCAzE2IAcABwAG//8EBf7/AAAHAhoxMCBUSElOR1MgSSBIQVRFIEFCT1VUIFlPVQMGzwcHAgI5NQcCB1RIRUFUUkUHAgNQRyAHAAcABv//BAX1PQEABwIHMTAgWCAxMAMG4gcHAgI4MwcCEkRJR0lUQUwgVklERU8gRElTQwcCBzE2IEwgIFYHAAcABv//BAVpBwEABwIJMTAsMDAwIEJDAwbYBwcCAzEwOAcCB1RIRUFUUkUHAgQxMyBWBwAHAAb//wQGJlkHAgkxMCwwMDAgQkMDBtoHBwIDMTA0BwIMQkxVLVJBWSBESVNDBwIEMTMgVgcABwAG//8EBlZFBwIJMTAsMDAwIEJDAwbYBwcCAzEwNAcCEkRJR0lUQUwgVklERU8gRElTQwcCBDEzIFYHAAcABv//BAaqXAcCCDEwMCBEQVlTAwbaBwcCAzEzMwcCEkRJR0lUQUwgVklERU8gRElTQwcCBjEzUEcgVgcABwAG//8EBiBfBwIWMTAwIERBWVMgSU4gVEhFIEpVTkdMRQMG2gcHAgI5MwcCEkRJR0lUQUwgVklERU8gRElTQwcCBjEzIEwgVgcABwAG//8EBplfBwIIMTAwIEZFRVQDBtsHBwICOTIHAhJESUdJVEFMIFZJREVPIERJU0MHAggxNiBMIFMgVgcABwAG//8EBqASBwIJMTAwIEdJUkxTAwbSBwcCAjkyBwIFVklERU8HAgQxNiBTBwAHAAb//wQGoRIHAgkxMDAgR0lSTFMDBtIHBwICOTIHAhJESUdJVEFMIFZJREVPIERJU0MHAgQxNiBTBwAHAAb//wQGmA8HAg0xMDAgTUlMRSBSVUxFAwbTBwcCAjk4BwISRElHSVRBTCBWSURFTyBESVNDBwIEMTYgTAcABwAG//8EBj0gBwINMTAwIE1JTEUgUlVMRQMG0wcHAgI5OAcCBVZJREVPBwIEMTYgTAcABwAG//8EBtRdBwIOMTAwIE1JTExJT04gQkMDBtoHBwICODUHAhJESUdJVEFMIFZJREVPIERJU0MHAgYxNiBMIFYHAAcABv//BAXcMwEABwILMTAwIFNUUkVFVFMDBuEHBwICODkHAhJESUdJVEFMIFZJREVPIERJU0MHAgoxNiBEICBMICBWBwAHAAb//wQGcgkHAh
oxMDAgV09NRU4gKEFLQSBHSVJMIEZFVkVSKQMG0gcHAgI5MQcCEkRJR0lUQUwgVklERU8gRElTQwcCBDEzIFMHAAcABv//BAZzCQcCGjEwMCBXT01FTiAoQUtBIEdJUkwgRkVWRVIpAwbSBwcCAjkxBwIFVklERU8HAgQxMyBTBwAHAAb//wQGfmsHAhExMDAlIFZPTFJPT00gTUVMVAMG3QcHAgI1NAcCEkRJR0lUQUwgVklERU8gRElTQwcCBjEzIEwgRAcABwAG//8EBe5IAQAHAgkxMDAlIFdPTEYDBuQHBwICOTYHAgdUSEVBVFJFBwIGNy05UEcgBwAHAAb//wQFJkEBAAcCETEwMCwgVEhFIFNFQVNPTiA1AwbiBwcCAzUzMwcCEkRJR0lUQUwgVklERU8gRElTQwcCDTE4IEQgIEggIEwgIFYHAAcABv//BAXdUAEABwIKMTAwMSBEQVlTIAMG5wcHAgI5OAcCBk9OTElORQcCCDE2IFNWICBWBwAHAAb//wQGZWsHAg8xMDBNICAgTEVFVUxPT1ADBt0HBwICOTMHAhJESUdJVEFMIFZJREVPIERJU0MHAgNQRyAHAAcABv//BAUODAEABwINMTAwTSBMRUVVTE9PUAMG3QcHAgI5NAcCB1RIRUFUUkUHAgNQRyAHAAcABv//BAbIGgcCHjEwMSBEQUxNQVRJT05TIChFWFRSQSBGT09UQUdFKQMG0QcHAgE5BwISRElHSVRBTCBWSURFTyBESVNDBwICQSAHAAcABv//BAYRBwcCHDEwMSBEQUxNQVRJT05TIChMSVZFIEFDVElPTikDBtMHBwICOTkHAhJESUdJVEFMIFZJREVPIERJU0MHAgJBIAcABwAG//8EBYT/AAAHAg0xMDEgUkVZS0pBVklLAwbSBwcCAzEwNAcCB1RIRUFUUkUHAgQxOCBTBwAHAAb//wQFPAABAAcCDjEwMiBEQUxNQVRJT05TAwbQBwcCAzEwMAcCB1RIRUFUUkUHAgJBIAcABwAG//8EBhIHBwIdMTAyIERBTE1BVElPTlMoRVhUUkEgRk9PVEFHRSkDBtEHBwIDMTI3BwISRElHSVRBTCBWSURFTyBESVNDBwICQSAHAAcABv//BAW0PAEABwIMMTAyIE5PVCBPVVQgAwbiBwcCAzEyMAcCCFRIRUFUUkUgBwIIMTAtMTJQRyAHAAcABv//BAVQPAEABwIXMTAyIE5PVCBPVVQgLSBUUkFJTEVSIEEDBuIHBwIBMgcCCFRIRUFUUkUgBwICQSAHAAcABv//BAZqYAcCIjEwNDA6IENIUklTVElBTklUWSBJTiBUSEUgTkVXIEFTSUEDBtsHBwICNzgHAhJESUdJVEFMIFZJREVPIERJU0MHAgNQRyAHAAcABv//BAZkOQcCDTEwVEggQU5EIFdPTEYDBtcHBwIDMTA0BwISRElHSVRBTCBWSURFTyBESVNDBwIGMTYgTCBWBwAHAAb//wQG3zwHAhgxMSBEQVlTIDExIE5JR0hUUyBQQVJUIDEDBtcHBwICNjAHAhJESUdJVEFMIFZJREVPIERJU0MHAgNQRyAHAAcABv//BAbePAcCGDExIERBWVMgMTEgTklHSFRTIFBBUlQgMgMG1wcHAgI0NQcCEkRJR0lUQUwgVklERU8gRElTQwcCA1BHIAcABwAG//8EBihCBwILMTEgVEggIEhPVVIDBtgHBwICODkHAhJESUdJVEFMIFZJREVPIERJU0MHAgNQRyAHAAcABv//BAYKKAcCBTExLjE0AwbVBwcCAjgyBwIFVklERU8HAgYxNiBMIFYHAAcABv//BAYdVQcCBTExLjE0AwbVBwcCAjgyBwISRElHSVRBTCBWSURFTyBESVNDBwIGMTYgTCBWBwAHAAb//wQFTi8BAAcCETExLjIyLjYzIFNFQVNPTiAxAwbgBwcCAzQxOAcCEkRJR0lUQUwgVklERU8gRElTQwcCBzEzIEwgIFYHAAcABv//BAZLTwcCBTExOjE0AwbZBwcCAjgyBwISRElHSVRBTCBWSURFTyBESVNDBwIIMTYgTCBTIFYHAAcABv//BAaiOQcCBTExOjU5AwbXBwcCAzEwMAcCEkRJR0lUQUwgVklERU8gRElTQwcCBDEzIEwHAAcABv//BAY1ZQcCCDExLTExLTExAwbbBwcCAjkzBwISRElHSVRBTCBWSURFTyBESVNDBwIGMTMgViBCBwAHAAb//wQGPU0HAg4xMVRIICBIT1VSLFRIRQMG2AcHAgI5MgcCEkRJR0lUQUwgVklERU8gRElTQwcCA1BHIAcABwAG//8EBSwGAQAHAgkxMVRIIEhPVVIDBtcHBwICOTEHAgdUSEVBVFJFBwIDUEcgBwAHAAb//wQGhhQHAh8xMVRIIEhPVVIgIFRIRSAtIFNUT1JJRVMgRlJPTSBBAwbOBwcCATcHAhJESUdJVEFMIFZJREVPIERJU0MHAgJBIAcABwAG//8EBrcOBwIMMTIgQU5HUlkgTUVOAwbRBwcCAjkzBwISRElHSVRBTCBWSURFTyBESVNDBwIDUEcgBwAHAAb//wQGaTMHAhQxMiBET0dTIE9GIENIUklTVE1BUwMG1QcHAgMxMDMHAhJESUdJVEFMIFZJREVPIERJU0MHAgNQRyAHAAcABv//BAZGQgcCEDEyIEhPVVJTIFRPIExJVkUDBtgHBwICODkHAhJESUdJVEFMIFZJREVPIERJU0MHAgYxMCBTIFYHAAcABv//BAaHHQcCCjEyIE1PTktFWVMDBtEHBwIDMTI0BwISRElHSVRBTCBWSURFTyBESVNDBwIGMTMgTCBWBwAHAAb//wQFRicBAAcCFTEyIE1PTktFWVMgLSBTRUFTT04gMQMG3wcHAgM1MzcHAhJESUdJVEFMIFZJREVPIERJU0MHAgQxNiBWBwAHAAb//wQFIDABAAcCFTEyIE1PTktFWVMgLSBTRUFTT04gMgMG4AcHAgM1NTgHAhJESUdJVEFMIFZJREVPIERJU0MHAgcxNiBIICBWBwAHAAb//wQGJxsHAg8xMiBPJ0NMT0NLIEhJR0gDBtIHBwIDMTI4BwISRElHSVRBTCBWSURFTyBESVNDBwICQSAHAAcABv//BAbGIQcCDzEyIE8nQ0xPQ0sgSElHSAMG1AcHAgEwBwISRElHSVRBTCBWSURFTyBESVNDBwICQSAHAAcABv//BAVRCAEABwIJMTIgUk9VTkRTAwbZBwcCAzEwNwcCB1RIRUFUUkUHAgYxMyBMIFYCBVN0YXRlB2AHBgcAAgEHAQIBBwICAQcDAgEHBAIBBwUCAQcABwAHAAcAAgAFAAAAgAkCCm1hdGVyaWFsaWQHAQIKbWF0ZXJpYWxpZAQJAgACAAMHBAIABwACAQWfmwAABwACAQcABwACCFBhZ2VTaXplAwdk" } page = 1 digits = len(str(page - 1)) callback_param = 
"c0:KV|777;['82781','77392','73876','27414','86034','86724','84572','74413','86693','74712','86197','86844','74646','86076','78745','541','24883','86596','85761','15616','10597','2794','7054','858','4882','16417','20820','78254','77144','76884','79591','79593','20312','20496','9505','14668','19864','12233','86794','86738','21635','27129','12234','12022','19001','17219','6838','6839','65534','81397','67433','22822','17750','23722','24352','24473','4768','4769','3992','8253','24020','78812','2418','2419','27518','84206','82214','86237','27493','68622','6856','1809','65412','65596','1810','81076','80976','24682','14692','15583','15582','16936','10250','21789','77646','20299','14754','25909','19773','67116','5254','3767','13161','16966','7559','75590','77856','6951','8646','67665'];{args};" args = f'GB|{digits + 19};12|PAGERONCLICK{digits + 2}|PN{page - 1}' data["__CALLBACKPARAM"] = callback_param.format(args=args) url = 'https://apps.fpb.org.za/erms/fpbquerytitle.aspx' response = requests.post(url, params=params, headers=headers, data=data) print(response.text)
2
2
78,689,213
2024-6-30
https://stackoverflow.com/questions/78689213/polars-cumulative-count-over-sequential-dates
Here's some sample data import polars as pl df = pl.DataFrame( { "date": [ "2024-08-01", "2024-08-02", "2024-08-03", "2024-08-04", "2024-08-04", "2024-08-05", "2024-08-06", "2024-08-08", "2024-08-09", ], "type": ["A", "A", "A", "A", "B", "B", "B", "A", "A"], } ).with_columns(pl.col("date").str.to_date()) And my desired output would look something like this shape: (9, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ type ┆ days_in_a_row β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ str ┆ i64 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════════════║ β”‚ 2024-08-01 ┆ A ┆ 1 β”‚ β”‚ 2024-08-02 ┆ A ┆ 2 β”‚ β”‚ 2024-08-03 ┆ A ┆ 3 β”‚ β”‚ 2024-08-04 ┆ A ┆ 4 β”‚ β”‚ 2024-08-04 ┆ B ┆ 1 β”‚ β”‚ 2024-08-05 ┆ B ┆ 2 β”‚ β”‚ 2024-08-06 ┆ B ┆ 3 β”‚ β”‚ 2024-08-08 ┆ A ┆ 1 β”‚ β”‚ 2024-08-09 ┆ A ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Where my days_in_a_row counter gets reset upon a date gap greater than 1 day. What I've tried so far df.with_columns(days_in_a_row=pl.cum_count("date").over("type")) Which gives me shape: (9, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ type ┆ days_in_a_row β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ str ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════════════║ β”‚ 2024-08-01 ┆ A ┆ 1 β”‚ β”‚ 2024-08-02 ┆ A ┆ 2 β”‚ β”‚ 2024-08-03 ┆ A ┆ 3 β”‚ β”‚ 2024-08-04 ┆ A ┆ 4 β”‚ β”‚ 2024-08-04 ┆ B ┆ 1 β”‚ β”‚ 2024-08-05 ┆ B ┆ 2 β”‚ β”‚ 2024-08-06 ┆ B ┆ 3 β”‚ β”‚ 2024-08-08 ┆ A ┆ 5 β”‚ β”‚ 2024-08-09 ┆ A ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Which is not resetting after the gap. I can't quite nail this one down. I've also tried variations with df .with_columns(date_gap=pl.col("date").diff().over("type")) .with_columns(days_in_a_row=(pl.cum_count("date").over("date_gap", "type"))) Which get's closer, but it still ends up not resetting where I'd want it shape: (9, 4) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ type ┆ date_gap ┆ days_in_a_row β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ str ┆ duration[ms] ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ══════════════β•ͺ═══════════════║ β”‚ 2024-08-01 ┆ A ┆ null ┆ 1 β”‚ β”‚ 2024-08-02 ┆ A ┆ 1d ┆ 1 β”‚ β”‚ 2024-08-03 ┆ A ┆ 1d ┆ 2 β”‚ β”‚ 2024-08-04 ┆ A ┆ 1d ┆ 3 β”‚ β”‚ 2024-08-04 ┆ B ┆ null ┆ 1 β”‚ β”‚ 2024-08-05 ┆ B ┆ 1d ┆ 1 β”‚ β”‚ 2024-08-06 ┆ B ┆ 1d ┆ 2 β”‚ β”‚ 2024-08-08 ┆ A ┆ 4d ┆ 1 β”‚ β”‚ 2024-08-09 ┆ A ┆ 1d ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
What about first creating a secondary grouper based on the duration β‰₯ 1day? from datetime import timedelta (df.with_columns(group=pl.col("date").diff().gt(timedelta(days=1)) .fill_null(True).cum_sum().over("type")) .with_columns(days_in_a_row=pl.cum_count("date").over(["type", "group"])) ) Output: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ type ┆ group ┆ days_in_a_row β”‚ β”‚ --- ┆ --- ┆ --- ┆ --- β”‚ β”‚ date ┆ str ┆ u32 ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════β•ͺ═══════════════║ β”‚ 2024-08-01 ┆ A ┆ 1 ┆ 1 β”‚ β”‚ 2024-08-02 ┆ A ┆ 1 ┆ 2 β”‚ β”‚ 2024-08-03 ┆ A ┆ 1 ┆ 3 β”‚ β”‚ 2024-08-04 ┆ A ┆ 1 ┆ 4 β”‚ β”‚ 2024-08-04 ┆ B ┆ 1 ┆ 1 β”‚ β”‚ 2024-08-05 ┆ B ┆ 1 ┆ 2 β”‚ β”‚ 2024-08-06 ┆ B ┆ 1 ┆ 3 β”‚ β”‚ 2024-08-08 ┆ A ┆ 2 ┆ 1 β”‚ β”‚ 2024-08-09 ┆ A ┆ 2 ┆ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
3
3
78,689,352
2024-6-30
https://stackoverflow.com/questions/78689352/why-doesnt-randrangestart-100-raise-an-error
In the docs for randrange(), it states: Keyword arguments should not be used because they can be interpreted in unexpected ways. For example randrange(start=100) is interpreted as randrange(0, 100, 1). If the signature is random.randrange(start, stop[, step]), why doesn't randrange(start=100) raise an error, since stop is not passed a value? Why would randrange(start=100) be interpreted as randrange(0, 100, 1)? I'm not trying to understand the design choice of whoever wrote the code, so much as understanding how it's even possible. I thought parameters without default values need to be passed arguments or else a TypeError would be raised.
This is because the code for randrange must also support the alternative signature: random.randrange(stop) The implementation performs a check to see whether the call uses this signature or the other one: random.randrange(start, stop[, step]) It does so with this code: istart = _index(start) if stop is None: if istart > 0: return self._randbelow(istart) This makes sense when you call randrange(100), where it translates into a call of _randbelow(100), which is intended. But when you call randrange(start=100), then we get into the same flow, and the value of start is also interpreted as stop.
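A quick sketch that demonstrates the behaviour described above (hedged: the printed values are random, and this relies on the current CPython implementation of randrange):
import random

print(random.randrange(100))        # some value in [0, 100)
print(random.randrange(start=100))  # same range: the keyword "start" ends up acting as stop
print(random.randrange(10, 100))    # some value in [10, 100)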
2
3
78,688,751
2024-6-30
https://stackoverflow.com/questions/78688751/how-to-explicitly-list-allowed-keyword-arguments-in-python-for-ide-support
I'm using argparse.ArgumentParser in python and giving users the option to add their own functions to my script. For that, I'm giving them a function that generates data that gets used in argparse's add_argument() function. I noticed that add_argument's signature is add_argument(self, *args, **kwargs) but when looking at the documentation in vscode I can see all the different arguments it can get (action, nargs, const, default, etc.). How does python do it? As far as I know when you give the user **kwargs it is shown as **kwargs and that's it. I looked all over this site and the internet in general but I can't find the syntax used for it.
VSCode's language server is pulling from the stub file in the typeshed, where the current definition is: def add_argument( self, *name_or_flags: str, action: _ActionStr | type[Action] = ..., nargs: int | _NArgsStr | _SUPPRESS_T | None = None, const: Any = ..., default: Any = ..., type: _ActionType = ..., choices: Iterable[_T] | None = ..., required: bool = ..., help: str | None = ..., metavar: str | tuple[str, ...] | None = ..., dest: str | None = ..., version: str = ..., **kwargs: Any, ) -> Action: ... Sidenote: It looks like you're using Pylance, which is adding some detail to the types. For example with _T@add_argument it's saying that _T is scoped to add_argument. I'm using Jedi which shows different details. Reference Editing Python in Visual Studio Code Β§ Autocomplete and IntelliSense Pylance is the default language server for Python in VS Code, and is installed alongside the Python extension to provide IntelliSense features. Pylance is based on Microsoft's Pyright static type checking tool, leveraging type stubs (.pyi files) and lazy type inferencing to provide a highly-performant development experience. See also Stub files - mypy documentation
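If the goal is to get that same IDE experience for your own wrapper function (rather than relying on stub files), one option is to spell out the supported options as keyword-only parameters and forward them to add_argument(). A minimal sketch, where the handful of parameters shown is just an illustration and not argparse's full set:
from argparse import ArgumentParser
from typing import Any

def add_user_argument(
    parser: ArgumentParser,
    *name_or_flags: str,
    action: str | None = None,
    default: Any = None,
    help: str | None = None,
    **kwargs: Any,
):
    # Forward only the options that were actually supplied
    given = {k: v for k, v in {"action": action, "default": default, "help": help}.items()
             if v is not None}
    return parser.add_argument(*name_or_flags, **given, **kwargs)

parser = ArgumentParser()
add_user_argument(parser, "--verbose", action="store_true", help="enable verbose output")
print(parser.parse_args(["--verbose"]))  # Namespace(verbose=True)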
3
2
78,684,997
2024-6-29
https://stackoverflow.com/questions/78684997/efficiently-storing-data-that-is-partially-columnar-into-a-duckdb-database-in-a
I have some partially columnar data like this: "hello", "2024 JAN", "2024 FEB" "a", 0, 1 If it were purely columnar, it would look like: "hello", "year", "month", "value" "a", 2024, "JAN", 0 "a", 2024, "FEB", 1 Suppose the data is in the form of a numpy array, like this: import numpy as np data = np.array([["hello", "2024 JAN", "2024 FEB"], ["a", "0", "1"]], dtype="<U") data array([['hello', '2024 JAN', '2024 FEB'], ['a', '0', '1']], dtype='<U8') Imagine also that I created a table: import duckdb as ddb conn = ddb.connect("hello.db") conn.execute("CREATE TABLE columnar (hello VARCHAR, year UINTEGER, month VARCHAR, value VARCHAR);") How could I go about efficiently inserting data into the DuckDB table columnar? The naive/easy way would be to brute-force transform the data into a columnar format in-memory, in Python, before inserting it into the DuckDB table. Here I mean specifically: import re data_header = data[0] data_proper = data[1:] date_pattern = re.compile(r"(?P<year>[\d]+) (?P<month>JAN|FEB)") common_labels: list[str] = [] known_years: set[int] = set() known_months: set[str] = set() header_to_date: Dict[str, tuple[int, str]] = dict() for header in data_header: if matches := date_pattern.match(header): year, month = int(matches["year"]), str(matches["month"]) known_years.add(year) known_months.add(month) header_to_date[header] = (year, month) else: common_labels.append(header) # hello, year, month, value new_rows_per_old_row = len(known_years) * len(known_months) new_headers = ["year", "month", "value"] purely_columnar = np.empty( ( 1 + data_proper.shape[0] * new_rows_per_old_row, len(common_labels) + len(new_headers), ), dtype=np.object_, ) purely_columnar[0] = common_labels + ["year", "month", "value"] for rx, row in enumerate(data_proper): common_data = [] ym_data = [] for header, element in zip(data_header, row): if header in common_labels: common_data.append(element) else: year, month = header_to_date[header] ym_data.append([year, month, element]) for yx, year_month_value in enumerate(ym_data): purely_columnar[ 1 + rx * new_rows_per_old_row + yx, : len(common_labels) ] = common_data purely_columnar[ 1 + rx * new_rows_per_old_row + yx, len(common_labels) : ] = year_month_value print(f"{purely_columnar=}") purely_columnar= array([[np.str_('hello'), 'year', 'month', 'value'], [np.str_('a'), 2024, 'JAN', np.str_('0')], [np.str_('a'), 2024, 'FEB', np.str_('1')]], dtype=object) Now it is easy enough to store this data in DuckDB: purely_columnar_data = np.transpose(purely_columnar[1:]) conn.execute( """INSERT INTO columnar SELECT * FROM purely_columnar_data """ ) conn.sql("SELECT * FROM columnar") β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ hello β”‚ year β”‚ month β”‚ value β”‚ β”‚ varchar β”‚ uint32 β”‚ varchar β”‚ varchar β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ a β”‚ 2024 β”‚ JAN β”‚ 0 β”‚ β”‚ a β”‚ 2024 β”‚ FEB β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ But is there any other way in which I can insert the data into a DuckDB in a purely columnar form, apart from brute-forcing the data into a purely columnar form first? Note: I have tagged this question with postgresql because DuckDB's SQL dialect closely follows that of PostgreSQL.
Note: I added another row ("b",1,0) to make the data a bit more substantive so that it's easier to see what's happening. Essentially what you have is a "pivoted" dataset: D SELECT * FROM 'pivoted-data.csv'; β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ hello β”‚ 2024 JAN β”‚ 2024 FEB β”‚ β”‚ varchar β”‚ int64 β”‚ int64 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ a β”‚ 0 β”‚ 1 β”‚ β”‚ b β”‚ 1 β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ So UNPIVOT it: D SELECT hello, string_split(month, ' ')[1]::INTEGER AS year, string_split(month, ' ')[2] AS month, value FROM (UNPIVOT 'pivoted-data.csv' ON '2024 JAN', '2024 FEB' INTO NAME month VALUE value); β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ hello β”‚ year β”‚ month β”‚ value β”‚ β”‚ varchar β”‚ int32 β”‚ varchar β”‚ int64 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€ β”‚ a β”‚ 2024 β”‚ JAN β”‚ 0 β”‚ β”‚ a β”‚ 2024 β”‚ FEB β”‚ 1 β”‚ β”‚ b β”‚ 2024 β”‚ JAN β”‚ 1 β”‚ β”‚ b β”‚ 2024 β”‚ FEB β”‚ 0 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ Following which you could use one of the string functions to parse the month further. EDIT: I had a crack at splitting the string, it's not the most efficient, but I figured the query optimiser probably sees the code duplication, and optimises it. EDIT2: Okay, EXPLAIN confirms that the duplication is optimised away: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚ β”‚β”‚ Physical Plan β”‚β”‚ β”‚β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ PROJECTION β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ β”‚ β”‚ hello β”‚ β”‚ year β”‚ β”‚ month β”‚ β”‚ value β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ PROJECTION β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ β”‚ β”‚ hello β”‚ β”‚ string_split(month, ' ') β”‚ β”‚ value β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ PROJECTION β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ β”‚ β”‚ #0 β”‚ β”‚ #3 β”‚ β”‚ #4 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ FILTER β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ β”‚ β”‚ (value IS NOT NULL) β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ β”‚ β”‚ EC: 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ UNNEST β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ READ_CSV_AUTO β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ 
─ ─ ─ ─ β”‚ β”‚ hello β”‚ β”‚ 2024 JAN β”‚ β”‚ 2024 FEB β”‚ β”‚ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ β”‚ β”‚ EC: 3 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
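For completeness, the same UNPIVOT can be driven from Python against in-memory data instead of a CSV file. A rough sketch (the pandas detour and the hard-coded "hello" label column are my own assumptions; adapt them to however the numpy array is actually held):
import duckdb
import numpy as np
import pandas as pd

data = np.array([["hello", "2024 JAN", "2024 FEB"],
                 ["a", "0", "1"],
                 ["b", "1", "0"]], dtype="<U8")
pivoted = pd.DataFrame(data[1:], columns=data[0])

conn = duckdb.connect("hello.db")
conn.register("pivoted", pivoted)

# Build the ON clause from every column except the common label column
date_cols = [c for c in pivoted.columns if c != "hello"]
on_clause = ", ".join(f'"{c}"' for c in date_cols)

conn.execute("CREATE TABLE IF NOT EXISTS columnar (hello VARCHAR, year UINTEGER, month VARCHAR, value VARCHAR)")
conn.execute(f"""
    INSERT INTO columnar
    SELECT hello,
           string_split(month, ' ')[1]::INTEGER AS year,
           string_split(month, ' ')[2] AS month,
           value
    FROM (UNPIVOT pivoted ON {on_clause} INTO NAME month VALUE value)
""")
print(conn.sql("SELECT * FROM columnar"))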
2
1
78,687,512
2024-6-30
https://stackoverflow.com/questions/78687512/move-buttons-to-east-with-grid-method
So a little stylistic thing I want to do is move some buttons to the east side of the window while having them all on the same row. Sounds easy enough and the grid method has a sticky parameter that should make this easy to do, right? No. This is the basic code from tkinter import * window = Tk() window.geometry("200x200") btn1 = Button(window, text="btn1") btn1.grid(row=0, column=0, sticky="E") btn2 = Button(window, text="btn2") btn2.grid(row=0, column=1, sticky="E") window.mainloop() What I want is: ------------------------- | ------ ------ | | |btn1| |btn2| | | ------ ------ | ------------------------- What I'm getting is: ------------------------- | ------ ------ | | |btn1| |btn2| | | ------ ------ | ------------------------- Which is sticking to the West even though I specified East What am I doing wrong? How can I get the result I want?
Give column 0 some weight. It will force columns 1 and 2 to the right. from tkinter import * window = Tk() window.geometry("200x200") window.columnconfigure(0, weight=3) btn1 = Button(window, text="btn1") btn1.grid(row=0, column=1) btn2 = Button(window, text="btn2") btn2.grid(row=0, column=2) window.mainloop() Another way to do this is using pack() and Frame() from tkinter import * window = Tk() window.geometry("200x200") cbframe = Frame(window) btn1 = Button(cbframe, text="btn1") btn1.pack(side="right", fill=None, expand=False) btn2 = Button(cbframe, text="btn2") btn2.pack(side="right", fill=None, expand=False) cbframe.pack(side="top", fill="both", expand=False) window.mainloop()
2
2
78,684,832
2024-6-29
https://stackoverflow.com/questions/78684832/issue-installing-matplotlib-on-python-32-bit
I'm trying to install Matplotlib on a 32-bit version of Python. When I run pip install matplotlib, I get the following error when it tries to "Prepare metadata." WARNING: Failed to activate VS environment: Could not find C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe ..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']] The following exception(s) were encountered: Running `icl ""` gave "[WinError 2] The system cannot find the file specified" Running `cl /?` gave "[WinError 2] The system cannot find the file specified" Running `cc --version` gave "[WinError 2] The system cannot find the file specified" Running `gcc --version` gave "[WinError 2] The system cannot find the file specified" Running `clang --version` gave "[WinError 2] The system cannot find the file specified" Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified" Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified" I really just want this to work on any 32-bit version of Python. This is on Python 3.9.1, but I've tried the most recent version of Python and I get the same error. I've tried different Matplotlib versions too. Really early Matplotlib versions (2.0.x,1.4.3) give me a different error, but it still doesn't work. Thanks for your help!
I had this same issue and here's how I solved it. Originally, I wanted to use Numpy, Scipy, Matplotlib, PyQt, and pyinstaller on a 32-bit version of Python. I've found out these libraries need to have "wheels" (a wheel is essentially a prebuilt binary package) that are compatible with 32-bit Python. That means that on pypi.org under "Download Files" of the specific library's version, there needs to be a file listed that goes like "library-version-pythoncompatibility-win32.whl". This means that it can be installed on a 32-bit version of Python. Additionally, the libraries I listed aren't very robust in terms of 32-bit versions of Python, so I needed to find versions of the above libraries that are all compatible with the same version of 32-bit Python. This is what I got to work. Python: 3.9.13 (32-bit) numpy: 1.23.5 scipy: 1.8.1 matplotlib: 3.7.5 PyQt5: 5.15.10 pyinstaller: 6.8.0 Each library version is compatible with more Python 32-bit versions than just 3.9.13, but I found that 3.9.13 works for all of them at the same time. I am obviously not a computer scientist, so please add anything that might be more useful.
2
3
78,686,295
2024-6-29
https://stackoverflow.com/questions/78686295/import-python-libraries-in-jinja2-templates
I have this template: % template.tmpl file: % set result = fractions.Fraction(a*d + b*c, b*d) %} The solution of ${{a}}/{{b}} + {{c}}/{{d}}$ is ${{a * d + b*c}}/{{b*d}} = {{result.numerator}}/{{result.denominator}}$ which I invoke by from jinja2 import Template import fractions with open("c.jinja") as f: t = Template(f.read()) a = 2 b = 3 c = 4 d = 5 print(t.render(a = a, b = b, c=c, d=d)) I get jinja2.exceptions.UndefinedError: 'fractions' is undefined but I want The solution of $2/3 + 4/5$ is $22/15=22/15$. Is this possible to achieve that?
It can be done in two ways. Calculate the result in Python code and then pass it as a parameter to the template. You can pass the fractions module as a parameter to the template. Folder structure: . β”œβ”€β”€ c.jinja └── template_example.py Option 1: Pass result to the template: template_example.py: from jinja2 import Template import fractions with open("c.jinja") as f: t = Template(f.read()) a = 2 b = 3 c = 4 d = 5 result = fractions.Fraction(a * d + b * c, b * d) print(t.render(a=a, b=b, c=c, d=d, result=result)) c.jinja: The solution of ${{a}}/{{b}} + {{c}}/{{d}}$ is ${{a * d + b*c}}/{{b*d}} = {{result.numerator}}/{{result.denominator}}$ Option 2: Pass the fractions: template_example.py: from jinja2 import Template import fractions with open("c.jinja") as f: t = Template(f.read()) a = 2 b = 3 c = 4 d = 5 print(t.render(a=a, b=b, c=c, d=d, fractions=fractions)) c.jinja: {% set result = fractions.Fraction(a*d + b*c, b*d) %} The solution of ${{a}}/{{b}} + {{c}}/{{d}}$ is ${{a * d + b*c}}/{{b*d}} = {{result.numerator}}/{{result.denominator}}$ Output for both options: The solution of $2/3 + 4/5$ is $22/15 = 22/15$ I would recommend Option 1: Pass the result to the template. This approach keeps the template focused on its primary role: rendering text with provided data, while the Python code handles all the computations.
2
3
78,686,253
2024-6-29
https://stackoverflow.com/questions/78686253/encountering-valueerror-upon-joining-two-pandas-dataframes-on-a-datetime-index-c
I have two tables which I need to join on a date column. I want to preserve all the dates in both tables, with the empty rows in each table just being filled with NaNs in the final combined array. I think an outer join is what I'm looking for. So I've written this code (with data_1 and data_2 acting as mockups of my actual tables) import pandas as pd def main(): data_1 = [["May-2024", 10, 5], ["June-2024", 3, 5], ["April-2015", 1, 3]] df1 = pd.DataFrame(data_1, columns = ["Date", "A", "B"]) df1["Date"] = pd.to_datetime(df1["Date"], format="%B-%Y") print(df1) data_2 = [["01-11-2024", 10, 5], ["01-06-2024", 3, 5], ["01-11-2015", 1, 3]] df2 = pd.DataFrame(data_2, columns = ["Date", "C", "D"]) df2["Date"] = pd.to_datetime(df2["Date"], format="%d-%m-%Y") print(df2) merged = df1.join(df2, how="outer", on=["Date"]) print(merged) if __name__ == "__main__": main() But when I try and perform an outer join on two pandas dataframes, I get the error ValueError: You are trying to merge on object and int64 columns for key 'Date'. If you wish to proceed you should use pd.concat I checked the datatype of both columns by printing print(df1["Date"].dtype, df2["Date"].dtype) and they both seem to be datetime64[ns] datetime64[ns] datetimes. So I'm not quite sure why I'm getting a ValueError Any help is appreciated, thanks.
You need to use merge, not join (that will use the index): # ensure datetime df1['Date'] = pd.to_datetime(df1['Date'], format='%B-%Y') df2['Date'] = pd.to_datetime(df2['Date'], dayfirst=True) # use merge merged = df1.merge(df2, how='outer', on=['Date']) For join to work: merged_df = df1.set_index('Date').join(df2.set_index('Date')).reset_index() Output: Date A B C D 0 2015-04-01 1.0 3.0 NaN NaN 1 2015-11-01 NaN NaN 1.0 3.0 2 2024-05-01 10.0 5.0 NaN NaN 3 2024-06-01 3.0 5.0 3.0 5.0 4 2024-11-01 NaN NaN 10.0 5.0
3
1
78,685,143
2024-6-29
https://stackoverflow.com/questions/78685143/how-to-show-one-figure-in-loop-in-python
I want to show a figure that is calculated in a loop, let say with 5 iterations. This is the code that I wrote import numpy as np import matplotlib.pyplot as plt x = np.linspace(0,1,100) y = np.linspace(0,1,100) xx,yy = np.meshgrid(x,y) for n in range(5): a = np.sin(xx-2*n) plt.imshow(a,interpolation='bilinear') plt.show() With this code, I got 5 figures. How to make it runs in one figure for each iteration? I used google collab, is it possible to make the result (figure) opened in new window (undocked) like in matlab?
You can simulate an animation by using a display/clear_output from ipython (used by Colab): import time import matplotlib.pyplot as plt import numpy as np from IPython.display import clear_output, display x = np.linspace(0, 1, 100) y = np.linspace(0, 1, 100) xx, yy = np.meshgrid(x, y) # you could initialize a subplots or whatever here.. for n in range(5): a = np.sin(xx - 2 * n) plt.imshow(a, interpolation="bilinear") # this one is optional (to verbose my output) plt.gca().set_title(f"Plot nΒ°{n+1}", fontweight="bold") # added these three lines display(plt.gcf()) clear_output(wait=True) time.sleep(0.5) plt.show(); NB: This works in any IPython environment (e.g., Jupyter, Colab, ..).
2
1
78,682,716
2024-6-28
https://stackoverflow.com/questions/78682716/spark-getitem-shortcut
I am doing the following in spark sql: spark.sql(""" SELECT data.data.location.geometry.coordinates[0] FROM df""") This works fine, however I do not want to use raw SQL, I use dataframe API like so: df.select("data.data.location.geometry.coordinates[0]") Unfortunately this does not work: AnalysisException: [DATATYPE_MISMATCH.UNEXPECTED_INPUT_TYPE] Cannot resolve "data.data.location.geometry.coordinates[0]" due to data type mismatch: Parameter 2 requires the "INTEGRAL" type, however "0" has the type "STRING".; 'Project [data#680.data.location.geometry.coordinates[0] AS 0#697] +- Relation [data#680,id#681,idempotencykey#682,source#683,specversion#684,type#685] json I know that I can use the F.col api and go with a getItem(0), but is there a built-in way to have the shortcut of getItem? '.' is the shortcut of getField is there one for array slicing? Thank you for your insight
TL;DR; use selectExpr() df.selectExpr('data.data.location.geometry.coordinates[0]') Looks like indexing inside an array using [] is considered an expression, where as data.data.location... is considered a column name. DataFrame.select() takes column name as arguments, so doesn't understand [0], what you want is DataFrame.selectExpr(). >>> >>> df = spark.createDataFrame([({ ... "data": { ... "location": { ... "geometry": { ... "coordinates": [41.84201, -89.485937] ... } ... } ... } ... },)]).withColumnRenamed('_1', 'data') >>> df.show(truncate=False) +-----------------------------------------------------------------------------+ |data | +-----------------------------------------------------------------------------+ |{data -> {location -> {geometry -> {coordinates -> [41.84201, -89.485937]}}}}| +-----------------------------------------------------------------------------+ >>> df.printSchema() root |-- data: map (nullable = true) | |-- key: string | | |-- key: string | | |-- value: map (valueContainsNull = true) | | | |-- key: string | | | |-- value: map (valueContainsNull = true) | | | | |-- key: string | | | | |-- value: array (valueContainsNull = true) | | | | | |-- element: double (containsNull = true) >>> >>> df.select('data.data.location.geometry.coordinates').show(truncate=False) +----------------------+ |coordinates | +----------------------+ |[41.84201, -89.485937]| +----------------------+ >>> df.select('data.data.location.geometry.coordinates[1]').show() +--------------+ |coordinates[1]| +--------------+ | null| +--------------+ >>> >>> df.selectExpr('data.data.location.geometry.coordinates[1]').show() +----------------------------------------------+ |data[data][location][geometry][coordinates][1]| +----------------------------------------------+ | -89.485937| +----------------------------------------------+ >>> df.selectExpr( ... 'data.data.location.geometry.coordinates[0] as coord_lat', ... 'data.data.location.geometry.coordinates[1] as coord_long' ... ).show() +---------+----------+ |coord_lat|coord_long| +---------+----------+ | 41.84201|-89.485937| +---------+----------+ >>>
2
1
78,683,884
2024-6-28
https://stackoverflow.com/questions/78683884/is-there-any-benefit-of-using-a-dictionary-comprehension-if-an-equivalent-dictz
Recently I have been working with dictionaries and I was to told to make dictionaries out of two lists like this: zipped = {key: value for key, value in zip(drinks, caffeine)} Later I forgot how to do that and found a different way that seems simpler to me: zipped = dict(zip(drinks, caffeine)) Is there something worse about using this second example instead of the first?
Dictionary/list comprehensions are great, flexible tools. You were probably taught the comprehension form because it is more flexible and aligns with current coding practices. There's nothing wrong with either version, though; dict(zip(...)) is perfectly idiomatic for this exact case. Always read company coding guidelines where possible.
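Both expressions build exactly the same dictionary; the comprehension only starts to pay off when you need to transform or filter while building. A small illustration (the drink and caffeine values here are made up):
drinks = ["espresso", "green tea", "decaf"]
caffeine = [64, 28, 2]

assert {k: v for k, v in zip(drinks, caffeine)} == dict(zip(drinks, caffeine))

# Extra logic is where the comprehension wins, e.g. filtering:
strong = {k: v for k, v in zip(drinks, caffeine) if v > 10}
print(strong)  # {'espresso': 64, 'green tea': 28}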
2
1
78,681,694
2024-6-28
https://stackoverflow.com/questions/78681694/how-to-detect-edges-in-image-using-python
I'm having problems trying to detect edges in images corresponding to holes in a glass sample. Images look like this: images Single sample: Each image contains part of a hole that was cut into a glass sample. Inspecting the images with my eyes, I can clearly see two regions, the glass and the hole. Sadly, all methods of trying to properly detect the edge haven't led to good results. The main reason why my attempts have failed, is the insufficient contrast between the glass and the hole, I believe. The hole is not cut through the entire thickness of the glass, leaving a glass bottom in the hole scattering light back into the camera thus making the contrast worse for me. Image processing things I've already tried: blurring (gaussian, bilateral) sharpening adjusting contrast mean shift filtering adaptive thresholding canny edge detection sobel edge detection watershed segmentation When taking the images I'm using an industrial camera with a ring light made of LEDs. The ring light can only be turned on or off, I can't adjust directions of the light or brightness. Taking images with various exposure times and analogue gains hasn't yielded much, since the contrast would stay the same throughout the measurements. Does anyone have an idea what steps I could take in order to properly detect the edges in the images? Be it image processing, programming or tips on how to take better pictures, any idea is appreciated! Here's an excerpt of my script: import cv2 image = cv2.imread( r'path/to/images') blurred = cv2.GaussianBlur(image, (7, 7), 9) gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY) thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 71, 5) edges = cv2.Canny(thresh, 100, 200) contours, _ = cv2.findContours( edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) output_image = image.copy() for contour in contours: cv2.drawContours(output_image, [contour], -1, (0, 255, 0), 2) cv2.imshow('Circle Edge', cv2.resize(output_image, (1000, 1000))) cv2.waitKey(0) cv2.destroyAllWindows() with the detected edges painted in green on the original image (found in the images in this post) Thanks in advance! mentioned in the previous text already various image processing / manipulation steps taken
It looks to me as if the variance/uniformity is significantly different in the two regions of the image, so maybe consider calculating the variance/standard-deviation within each 25x25 pixel block and normalising the result. I am doing it with ImageMagick here, because I am quicker with that, but you can do the same with OpenCV: magick YOURIMAGE.bmp -statistic standarddeviation 25x25 -normalize result.png If you then flood fill the homogeneous area, with some tolerance, you get a clean separation of the hole region from the glass.
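For anyone who wants to stay in Python, here is a rough OpenCV sketch of the same local-standard-deviation idea (the file names, the 25x25 window and the Otsu threshold at the end are my own assumptions, so treat it as a starting point rather than a drop-in solution):
import cv2
import numpy as np

# Local standard deviation in a 25x25 window, the OpenCV counterpart of
# `-statistic standarddeviation 25x25 -normalize`
gray = cv2.imread("sample.bmp", cv2.IMREAD_GRAYSCALE).astype(np.float32)

mean = cv2.blur(gray, (25, 25))
mean_of_sq = cv2.blur(gray * gray, (25, 25))
local_std = np.sqrt(np.maximum(mean_of_sq - mean * mean, 0))

# Stretch to 0-255, then let Otsu pick the cut-off between the two regions
std_u8 = cv2.normalize(local_std, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(std_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("std_mask.png", mask)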
2
4
78,684,010
2024-6-28
https://stackoverflow.com/questions/78684010/cartesian-product-of-dict-of-lists-in-python
I would like to have a Python function cartesian_product which takes a dictionary of lists as input and returns as output a list containing as elements all possible dictionary which can be formed by taking from each list one element. Here would be an example: Calling cartesian_product({1: ['a', 'b'], 2: ['c', 'd'], 3: ['e', 'f']}) should return [ {1: 'a', 2: 'c', 3: 'e'}, {1: 'a', 2: 'c', 3: 'f'}, {1: 'a', 2: 'd', 3: 'e'}, {1: 'a', 2: 'd', 3: 'f'}, {1: 'b', 2: 'c', 3: 'e'}, {1: 'b', 2: 'c', 3: 'f'}, {1: 'b', 2: 'd', 3: 'e'}, {1: 'b', 2: 'd', 3: 'f'} ]
This should do the trick: import itertools def cartesian_product(d): return [dict(zip(d, p)) for p in itertools.product(*d.values())] Demo: >>> d = {1: ['a', 'b'], 2: ['c', 'd'], 3: ['e', 'f']} >>> from pprint import pp >>> pp(cartesian_product(d)) [{1: 'a', 2: 'c', 3: 'e'}, {1: 'a', 2: 'c', 3: 'f'}, {1: 'a', 2: 'd', 3: 'e'}, {1: 'a', 2: 'd', 3: 'f'}, {1: 'b', 2: 'c', 3: 'e'}, {1: 'b', 2: 'c', 3: 'f'}, {1: 'b', 2: 'd', 3: 'e'}, {1: 'b', 2: 'd', 3: 'f'}]
2
1
78,683,848
2024-6-28
https://stackoverflow.com/questions/78683848/attribute-error-none-type-object-in-linked-list
I tried to create a simple single-linked list as the following: class node: def __init__(self, data=None): self.data = data self.next = None class linkedlist: def __init__(self): self.head = node() def append(self, data): curr = self.head new_node = node(data) while curr.next != None: curr = curr.next curr.next = new_node def total(self): curr = self.head total = 0 while curr.next != None: curr = curr.next total += 1 return total def display(self): # added total here as well curr = self.head em = [] total = 0 while curr.next != None: curr = curr.next em.append(curr.data) total += 1 print(f"LinkedList: {em} \n Total: {self.total()} ") def look(self, index): if index >= self.total(): print("ERROR: Index error" ) curr_index = self.head idx = 0 while True: curr_index = curr_index.next if idx == index: print(curr_index.data) idx += 1 Whenever I am calling the look() function, I am getting the following error: curr_index = curr_index.next ^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'next' What could this mean? As far as I know, the curr_index is using the self.head, which is inturn using node(). The node() have does have the next attribute. This error isn't there when I call the other functions in the class. Moreover, the other functions -- except the look() function -- returns their respective values error-free. Where is my code going wrong?
In this implementation, each node points to the next node of the list, and the last one points to None, signifying that's the end of the list. The loop in look tries to iterate over the nodes until it gets to the indexth item and then prints it, but the loop is not terminated there. In fact, it will continue indefinitely until at some point it passes the last element, making the current item None, and then fail with this error. One approach is to explicitly terminate the loop once you've reached the desired element: def look(self, index): if index >= self.total(): # Need to raise here, not just print # because printing out an error won't prevent the function from moving on raise ValueError("ERROR: Index error") curr_index = self.head idx = 0 while True: curr_index = curr_index.next if idx == index: print(curr_index.data) return # We're done here, return idx += 1 Having said that, checking idx == index on each iteration is superfluous. You could just advance curr_index index times explicitly: def look(self, index): if index >= self.total(): # Need to raise here, not just print # because printing out an error won't prevent the function from moving on raise ValueError("ERROR: Index error") curr_index = self.head for _ in range(index): curr_index = curr_index.next print(curr_index.data)
2
4
78,677,255
2024-6-27
https://stackoverflow.com/questions/78677255/optimal-multiplication-of-two-3d-arrays-having-a-variable-dimension
I would like to multiply tensors R = {R_1, R_2, ..., R_M} and X = {X_1, X_2, ..., X_M} where R_i and X_i are 3Γ—3 and 3Γ—N_i matrices, respectively. How can I make maximum use of NumPy functionalities during the formation of the R_i Γ— X_i arrays? My MWE is the following: import numpy as np np.random.seed(0) M = 5 R = [np.random.rand(3, 3) for _ in range(M)] X = [] for i in range(M): N_i = np.random.randint(1, 6) X_i = np.random.rand(3, N_i) X.append(X_i) result = np.zeros((3, 0)) for i in range(M): R_i = R[i] X_i = X[i] result = np.hstack((result, np.dot(R_i, X_i))) print(result) Edit #1: Thanks for everyone who helped me with his valuable comments. Meanwhile I was thinking about the role of N_is in my real problem and came to the conclusion that the number of unique N_is is in fact small (1 to 5; 2 is the most common one, but 1 is also very frequent). In this case, would there be a more efficient solution to the treatment of multiplications? Another aspect which would be important: in practice, I store a 3 Γ— N matrix X, not the individual X_i blocks. The columns of X are not ordered w.r.t. the R list. Instead, I store only an index vector p which provides the correct ordering for the X columns. In this case, an einsum version would be the following (in comparison with the "direct" multiplication): import numpy as np M = 30 N = 100 np.random.seed(0) p = np.random.randint(M, size=N) R = np.random.rand(M, 3, 3) X = np.random.rand(3, N) result_einsum = np.einsum('ijk,ki->ji', R[p], X) result_direct = np.zeros((3, N)) for i in range(N): result_direct[:, i] = np.dot(R[p[i]], X[:, i]) print(np.allclose(result_einsum, result_direct)) Edit #2: It seems that Numba helps quite a lot: import numpy as np import numba from timeit import Timer M = 30 N = 100 np.random.seed(0) p = np.random.randint(M, size=N) R = np.random.rand(M, 3, 3) X = np.random.rand(3, N) @numba.njit def numba_direct(R, p, X, result_direct, N): for i in range(N): p_i = p[i] for j in range(3): res = 0.0 for k in range(3): res += R[p_i, j, k] * X[k, i] result_direct[j, i] = res result_direct = np.zeros((3, N)) numba_direct(R, p, X, result_direct, N) result_einsum = np.einsum('ijk,ki->ji', R[p], X) print(np.allclose(result_einsum, result_direct)) ntimes = 10000 einsum_timer = Timer(lambda: np.einsum('ijk,ki->ji', R[p], X)) einsum_time = einsum_timer.timeit(number=ntimes) numba_direct_timer = Timer(lambda: numba_direct(R, p, X, result_direct, N)) numba_direct_time = numba_direct_timer.timeit(number=ntimes) print(f'Einsum runtime: {einsum_time:.4f} seconds') print(f'Numba direct runtime: {numba_direct_time:.4f} seconds') The execution times are the following for the above code: Einsum runtime: 0.0979 seconds Numba direct runtime: 0.0129 seconds
I know I am neither @mozway nor @hpaulj (this is referring to @chrslg's comment), but indeed there seems to be a feasible solution with einsum: np.einsum("ijk,ki->ji", np.repeat(R, [x.shape[1] for x in X], axis=0), np.hstack(X)) Here is the full code with which I tested: import numpy as np from timeit import Timer np.random.seed(0) L, M, timeit_n_times = 3, 5, 10_000 R = [np.random.rand(L, L) for _ in range(M)] X = [] for i in range(M): N_i = np.random.randint(1, 6) X_i = np.random.rand(L, N_i) X.append(X_i) def original(R, X): result = np.zeros((L, 0)) for i in range(M): R_i = R[i] X_i = X[i] result = np.hstack((result, np.dot(R_i, X_i))) return result def list_stack(R, X): result = [] for i in range(M): R_i = R[i] X_i = X[i] result.append(np.dot(R_i, X_i)) return np.hstack(result) def preallocate(R, X): result=np.empty((L, sum(x.shape[1] for x in X))) k=0 for i in range(M): R_i = R[i] X_i = X[i] result[:, k:k+X_i.shape[1]] = np.dot(R_i, X_i) k += X_i.shape[1] return result def with_einsum(R, X): return np.einsum("ijk,ki->ji", np.repeat(R, [x.shape[1] for x in X], axis=0), np.hstack(X)) assert np.allclose(original(R, X), list_stack(R, X)) assert np.allclose(original(R, X), preallocate(R, X)) assert np.allclose(original(R, X), with_einsum(R, X)) print("original", Timer(lambda: original(R, X)).timeit(timeit_n_times)) print("list_stack", Timer(lambda: list_stack(R, X)).timeit(timeit_n_times)) print("preallocate", Timer(lambda: preallocate(R, X)).timeit(timeit_n_times)) print("with_einsum", Timer(lambda: with_einsum(R, X)).timeit(timeit_n_times)) Some observations: The calculation times of list_stack() and preallocate() are marginally different, but are always faster than original(), which is also what is suggested and implied in @chrslg's answer. Whether or not with_einsum() is faster or slower than the others depends pretty much on the shape of the problem: For small, but many LΓ—L matrices (L==3 in the question), with_einsum() wins.Here are the results for L, M, timeit_n_times = 3, 500, 10_000: original 19.07941191700229 list_stack 6.437358160997974 preallocate 7.638774587001535 with_einsum 3.6944152869982645 For large, but few LΓ—L matrices, with_einsum() loses.Here are the results for L, M, timeit_n_times = 100, 5, 100_000: original 6.661783802999707 list_stack 4.67112236200046 preallocate 4.94899292899936 with_einsum 28.233751856001618 I did not check the influence of the size and variation of N_i. Update Based on the update to the question and @chrslg's thoughts, I tried whether zero-padding would also be a viable way to go. Bottom line is: probably not (at least not for the given example). 
Here is the new testing code: from timeit import Timer import numpy as np M, N, timeit_n_times = 30, 100, 10_000 np.random.seed(0) p = np.random.randint(M, size=N) R = np.random.rand(M, 3, 3) X = np.random.rand(3, N) einsum = lambda: np.einsum('ijk,ki->ji', R[p], X) def direct(): res = np.zeros((3, N)) for i in range(N): res[:, i] = np.dot(R[p[i]], X[:, i]) return res # Rearrange data for padding max_count = np.max(np.unique(p, return_counts=True)[1]) counts = np.zeros(M, dtype=int) X_padded = np.zeros((M, max_count, 3)) for i, idx in enumerate(p): X_padded[idx, counts[idx]] = X[:, i] counts[idx] += 1 padded = lambda: np.einsum('ijk,ilk->ilj', R, X_padded) result_einsum = einsum() result_direct = direct() result_padded_raw = padded() # Extract padding result (reverse steps of getting from X to X_padded) result_padded = np.zeros((3, N)) counts = np.zeros(M, dtype=int) for i, idx in enumerate(p): result_padded[:, i] = result_padded_raw[idx, counts[idx]] counts[idx] += 1 assert np.allclose(result_einsum, result_direct) assert np.allclose(result_padded, result_direct) print("einsum", Timer(einsum).timeit(timeit_n_times)) print("direct", Timer(direct).timeit(timeit_n_times)) print("padded", Timer(padded).timeit(timeit_n_times)) Here, we first rearrange the data into X_padded, which is MΓ—max_countΓ—3-shaped, where max_count is the maximum number of references to one of the indices in R from p. We iteratively fill up X_padded for each index, leaving unused space zero-filled. In the end, we can again use a version of einsum to calculate the result (padded = lambda: np.einsum('ijk,ilk->ilj', R, X_padded)). If we want to compare the result to the results of the other methods, then we need to rearrange it back again, basically inverting the steps of getting from X to X_padded. Observations: Memory-wise, we have: Saved on the side of R, as we don't need to replicate the individual 3Γ—3 matrices, any more. Paid on the side of X, as we need zero-padding. Speed-wise, we have lost a bit, it seems;here are the results for M, N, timeit_n_times = 30, 100, 100_000: einsum 0.7587459350002064 direct 13.305958173999898 padded 1.502786388000004 Note that this does not even include the times for rearranging the data. So probably padding won't help here – at least not in the way that I tried.
4
4
78,680,411
2024-6-28
https://stackoverflow.com/questions/78680411/how-to-predict-the-resulting-type-after-indexing-a-pandas-dataframe
I have a Pandas DataFrame, as defined here: df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'], 'Age': [25, 30, 35], 'Location': ['Seattle', 'New York', 'Kona']}, index=([10, 20, 30])) However, when I index into this DataFrame, I can't accurately predict what type of object is going to result from the indexing: # (1) str df.iloc[0, df.columns.get_loc('Name')] # (2) Series df.iloc[0:1, df.columns.get_loc('Name')] # (3) Series df.iloc[0:2, df.columns.get_loc('Name')] # (4) DataFrame df.iloc[0:2, df.columns.get_loc('Name'):df.columns.get_loc('Age')] # (5) Series df.iloc[0, df.columns.get_loc('Name'):df.columns.get_loc('Location')] # (6) DataFrame df.iloc[0:1, df.columns.get_loc('Name'):df.columns.get_loc('Location')] Note that each of the pairs above contain the same data. (e.g. (2) is a Series that contains a single string, (4) is a DataFrame that contains a single column, etc.) Why do they output different types of objects? How can I predict what type of object will be output? Given the data, it looks like the rule is based on how many slices (colons) you have in the index: 0 slices ((1)): scalar value 1 slice ((2), (3), (5)): Series 2 slices ((4), (6)): DataFrame However, I'm not sure if this is always true, and even if it is always true, I want to know the underlying mechanism as to why it is like that. I've spent a while looking at the indexing documentation, but it doesn't seem to describe this behavior clearly. The documentation for the iloc function also doesn't describe the return types. I'm also interested in the same question for loc instead of iloc, but, since loc is inclusive, the results aren't quite as bewildering. (That is, you can't get pairs of indexes with different types where the indexes should pull out the exact same data.)
You got the general idea. To make it simple, what matters is not the number of items but the type of indexer. You can index as 0D (with a scalar); let's just consider the index for now: df.iloc[0] df.loc[0] or 1D (with a slice or iterable): df.loc[[0]] df.loc[1:2] df.loc[:0] Then the rule is simple: consider both axes; if both are 0D you get a scalar (here a string), if both are 1D you get a DataFrame, otherwise a Series: columns 0D 1D index 0D scalar Series 1D Series DataFrame Some examples to illustrate this: type(df.iloc[1:2, 1:2]) # 1D / 1D # pandas.core.frame.DataFrame type(df.iloc[:0, :0]) # 1D / 1D # pandas.core.frame.DataFrame (EMPTY DataFrame) type(df.iloc[[], []]) # 1D / 1D # pandas.core.frame.DataFrame (EMPTY DataFrame) type(df.iloc[[1,2], 0]) # 1D / 0D # pandas.core.series.Series type(df.iloc[0, [0]]) # 0D / 1D # pandas.core.series.Series type(df.iloc[0, 0]) # 0D / 0D # str
2
1
78,676,602
2024-6-27
https://stackoverflow.com/questions/78676602/how-can-i-determine-whether-a-word-document-has-a-password
I am trying to read word documents using Python. However, I am stuck in places where the document is password protected, as I do not have the password for the file(s). How can I detect if the file has password, so can I ignore such files from opening? Currently, the below code opens a dialog/prompt window in MS-Word to enter the password and keeps waiting for a response. word = win32.gencache.EnsureDispatch('Word.Application') doc = word.Documents.Open(r"D:\appointment\PasswordProtectedDoc.doc")
Well, I figured out the answer myself. I passed a wrong password as a parameter and it raised an exception saying Invalid/Incorrect password, which I can handle in a try/except block. A little strange, but it works well. word = win32.gencache.EnsureDispatch('Word.Application') word.Visible = False try: doc = word.Documents.Open(r"D:\appointment\PasswordProtectedDoc.doc",PasswordDocument='ddd') doc.Activate () except: print(f"Failed to open") finally: word.Quit()
2
2
78,680,015
2024-6-27
https://stackoverflow.com/questions/78680015/behavior-of-object-new-python-dunder-what-is-happening-under-the-hood
I'm experimenting with metaprogramming in Python (CPython 3.10.13) and noticed some weird behavior with object.__new__ (well, weird to me, at least). Take a look at the following experiment (not practical code, just an experiment) and the comments. Note that object.__new__ seems to change it's behavior based on the first argument: # Empty class inherit __new__ and __init__ from object class Empty: pass # Confirmation of inheritance assert Empty.__new__ is object.__new__, "Different __new__" assert Empty.__init__ is object.__init__, "Different __init__" empty_obj = Empty() uinit_empty_obj = object.__new__(Empty) assert type(empty_obj) is type(uinit_empty_obj), "Different types" try: object.__new__(Empty, 10, 'hi', hello='bye') except TypeError as e: # repr(e) mentioned the Empty class print(repr(e)) # Overwrite the object __new__ and __init__ methods # __new__ and __init__ with the same signature class Person: def __new__(cls, name, age): """Does nothing bassicaly. Just overwrite `object.__new__`.""" print(f'Inside {cls.__name__}.__new__') return super().__new__(cls) def __init__(self, name, age): print(f'Inside {type(self).__name__}.__init__') self.name = name self.age = age a_person = Person('John Doe', 25) uinit_person = Person.__new__(Person, 'Michael', 40) try: # Seems an obvious error since object() doesn't take any arguments another_uinit_person = object.__new__(Person, 'Ryan', 25) except TypeError as e: # Indeed raises TypeError, but now there isn't a mention of the Person class in repr(e) print('`another_uinit_person` :', repr(e)) # Now, some weird things happen (well, weird for me). # Inherit __new__ from object and overwrite __init__. # __new__ and __init__ with unmatching signatures. # A basic Python class. Works just fine like suppose to. class Vehicle: def __init__(self, model): self.model = model # Confirmation of __new__ inheritance. assert Vehicle.__new__ is object.__new__, "Nop, it isn't" a_vehicle = Vehicle('Honda') # I would understand if CPython autogenerated a __new__ method matching __init__ # or a __new__ method that accepts all arguments. # The following try-except-else suggests the last, but the assert statement above # indicates that Vehicle.__new__ is actually object.__new__. try: # Doesn't raise any exceptions uinit_vehicle = Vehicle.__new__(Vehicle, 'Honda', 10, ('four-wheels',), hello='bye') except Exception as e: print(repr(e)) else: print('`uinit_vehicle` : constructed just fine', uinit_vehicle) # Now the following runs just fine try: # Doesn't raise any exceptions another_unit_vehicle = object.__new__(Vehicle, 'Toyota') another_unit_vehicle = object.__new__(Vehicle, 'Toyota', 100, four_wheels=True) except Exception as e: print(repr(e)) else: print('`another_unit_vehicle` : constructed just fine:', another_unit_vehicle) I got the following output: TypeError('Empty() takes no arguments') Inside Person.__new__ Inside Person.__init__ Inside Person.__new__ `another_uinit_person` : TypeError('object.__new__() takes exactly one argument (the type to instantiate)') `uinit_vehicle` : constructed just fine <__main__.Vehicle object at 0x00000244D15A7A90> `another_unit_vehicle` : constructed just fine: <__main__.Vehicle object at 0x00000244D15A7A30> My questions: Why the first TypeError mentioned the Empty class and the second just object.__new__? Why object.__new__(Person, 'Ryan', 25) raised TypeError and object.__new__(Vehicle, 'Toyota') and object.__new__(Vehicle, 'Toyota', 100, four_wheels=True) didn't? Basically: what object.__new__ does under the hood? 
It seems to me that it is performing a somewhat weird check on the first argument's __new__ and/or __init__ override methods, if any.
Python's object.__init__ and object.__new__ base methods suppress errors about excess arguments in the common situation where exactly one of them has been overridden, and the other has not. The non-overridden method will ignore the extra arguments, since they usually get passed in automatically (rather than by an explicit call to __new__ or __init__ where the programmer should know better). That is, neither of these classes will cause issues in the methods they inherit: class OnlyNew: def __new__(self, *args): pass # __init__ is inherited from object class OnlyInit: def __init__(self, *args): pass # __new__ is inherited from object # tests: object.__new__(OnlyInit, 1, 2, 3, 4) # no error object.__init__(object.__new__(OnlyNew), 1, 2, 3, 4) # also no error However, when you override one of the methods, you must avoid excess arguments when you call the base class version of the method you overrode. # bad tests: try: object.__new__(OnlyNew, 1, 2, 3, 4) except Exception as e: print(e) # object.__new__() takes exactly one argument (the type to instantiate) try: object.__init__(object.__new__(OnlyInit), 1, 2, 3, 4) except Exception as e: print(e) # object.__init__() takes exactly one argument (the instance to initialize) Furthermore, if you override both __new__ and __init__, you need to call both of the base class methods with no extra arguments, since you should know what you're doing if you're implementing both methods. class OverrideBoth: def __new__(self, *args): pass def __init__(self, *args): pass # more bad tests, object has zero tolerance for extra arguments in this situation try: object.__new__(OverrideBoth, 1, 2, 3, 4) except Exception as e: print(e) # object.__new__() takes exactly one argument (the type to instantiate) try: object.__init__(object.__new__(OverrideBoth), 1, 2, 3, 4) except Exception as e: print(e) # object.__init__() takes exactly one argument (the instance to initialize) You can see the implementation of these checks in the CPython source code. Even if you don't know C very well, it's pretty clear what it's doing. There's a different code path that handles classes like your Empty that don't override either method (which is why that exception message is a bit different).
4
6
78,679,962
2024-6-27
https://stackoverflow.com/questions/78679962/gitlab-ci-artefacts
I am doing my first CI project and I have recently got confused about artefacts... Say I have config with next jobs: cleanup_build: tags: - block_autotest stage: cleanup script: - Powershell $env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe --log-level=error --run $env:JOBS_FOLDER_PATH\clear.py install_block: tags: - block_autotest stage: installation script: - Powershell $env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe --log-level=error --run $env:JOBS_FOLDER_PATH\setup_block.py "install_block" job is not to be done if the job "cleanup_build" has failed. So, I have to create some kind of artifact after "cleanup_build" has succeeded so this artefact is visible at the stage "installation" for the job "install_block". At the job "install_block" I could use python to address the artifact and ensure the one exists. Also I have created a speciad folder for artifacts: ARTEFACTS_FOLDER_PATH: $CI_PROJECT_DIR\autotest\artefacts So within the job "cleanup_build" I create a file "clean" at the artefact folder. But it seems that CI reloads repository at project directory, because if I leave just "cleanup_build" job (delete "install_block" from yml) I can see the "clean" file at the project, but if I leave both jobs this file dissapears before "install_block" job begins...
By default, every job starts with a 'clean' workspace. If one job modifies the workspace, it is not persisted into any other job. To pass files between jobs, you must explicitly declare artifacts to be passed between each job. Also note that the artifact path is relative to the workspace root. stages: - one - two my_job: stage: one script: - echo "change" > myfile.txt artifacts: paths: # this must be a relative path, not absolute! - myfile.txt my_next_job: stage: two script: # or use 'type' instead of 'cat' on Windows - cat myfile.txt When the job with artifacts: defined completes, you will see a message at the end of the job log indicating the number of matched/uploaded artifacts. Also note that, by default, if a job fails, subsequent jobs will not run. That is: the when: behavior is on_success by default. You can enable jobs to run additionally (or only) on failures by using when: always (or when: on_failure).
2
2
78,679,802
2024-6-27
https://stackoverflow.com/questions/78679802/how-to-make-a-functools-reduce-implementation-that-looks-similarly-as-reduce
Here is an R example of using Reduce x <- c(1, 2, 2, 4, 10, 5, 5, 7) Reduce(\(a, b) if (tail(a, 1) != b) c(a, b) else a, x) # equivalent to `rle(x)$values` The code above is to sort out the extract unique values in terms of run length, which can be easily obtained by rle(x)$values. I know in Python there is itertools.groupby that performs the same thing as rle in R, BUT, what I am curious about is: Is it possible to have a highly similar translation by using functools.reduce in Python to achieve the same functionality, say, for example from functools import reduce x = [1,2,2,4,10,5,5,7] reduce(lambda a, b: a + [b] if a[-1]!= b else a, x) but which unfortunately gives errors like { "name": "TypeError", "message": "'int' object is not subscriptable", "stack": "--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[58], line 4 1 from functools import reduce 2 x = [1,2,2,4,10,5,5,7] ----> 4 reduce(lambda a, b: a + [b] if a[-1]!= b else a, x) Cell In[58], line 4, in <lambda>(a, b) 1 from functools import reduce 2 x = [1,2,2,4,10,5,5,7] ----> 4 reduce(lambda a, b: a + [b] if a[-1]!= b else a, x) TypeError: 'int' object is not subscriptable" } My question is: Is there any one-liner of reduce in Python that looks like R code?
You could use a list as the initial value:

from functools import reduce
x = [1,2,2,4,10,5,5,7]

reduce(lambda a, b: a + [b] if a[-1]!= b else a, x, [x[0]])

[1, 2, 4, 10, 5, 7]

Note that you could also use groupby from itertools:

from itertools import groupby

[i for i,j in groupby(x)]

[1, 2, 4, 10, 5, 7]
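As a side note (this addition is mine, not part of the original answer), groupby also gives you the run lengths, so a full rle equivalent, values plus lengths, is only slightly longer:

from itertools import groupby

x = [1, 2, 2, 4, 10, 5, 5, 7]

# analogous to rle(x)$values and rle(x)$lengths in R
values = [k for k, _ in groupby(x)]
lengths = [len(list(g)) for _, g in groupby(x)]

print(values)   # [1, 2, 4, 10, 5, 7]
print(lengths)  # [1, 2, 1, 1, 2, 1]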
3
5
78,679,079
2024-6-27
https://stackoverflow.com/questions/78679079/finding-count-of-unique-sequence-made-up-of-values-spanned-over-several-columns
A contains columns x, y1, y2, y3 & y4. I am interested in studying the {y1,y2,y3,y4} sequence w.r.t. x. To find the unique {y1,y2,y3,y4} sequences occurring for each x, I do the following:

B = pd.DataFrame()
for x_temp in A['x'].unique():
    B = pd.concat([B, A[A['x'] == x_temp][['x','y1','y2','y3','y4']]])
B = B.drop_duplicates().sort_values(by=['x','y1','y2','y3','y4'])
del x_temp

I want to introduce a new column called 'count' in B, which contains the number of times that row's {y1,y2,y3,y4} combination occurred in A.

B['count'] = A.apply(lambda row: (A['y1'] == row['y1']) & (A['y2'] == row['y2']) &
                                 (A['y3'] == row['y3']) & (A['y4'] == row['y4']), axis=1).sum()

This works; however, it does not work if A or B has missing values. I want it to treat missing values as a unique value as well. Example:

A = pd.DataFrame({'x':['1','1','1','1','2','2','2','2','2','1'],
                  'y1':['1','2','2',np.nan,'2',np.nan,'2','2','2','1'],
                  'y2':['2','1','2','2',np.nan,np.nan,'2','2','1','2'],
                  'y3':['1','1',np.nan,'2',np.nan,'2',np.nan,np.nan,'1','1'],
                  'y4':['2','2','2',np.nan,np.nan,'1','2','2','2','2']})

B = pd.DataFrame()
for x_temp in A['x'].unique():
    B = pd.concat([B, A[A['x'] == x_temp][['x','y1','y2','y3','y4']]])
B = B.drop_duplicates().sort_values(by=['x','y1','y2','y3','y4'])
del x_temp

B['count'] = A.apply(lambda row: (A['y1'] == row['y1']) & (A['y2'] == row['y2']) &
                                 (A['y3'] == row['y3']) & (A['y4'] == row['y4']), axis=1).sum()

print(B)

   x   y1   y2   y3   y4  count
0  1    1    2    1    2      2
1  1    2    1    1    2      2
2  1    2    2  NaN    2      0
3  1  NaN    2    2  NaN      0
8  2    2    1    1    2      2
6  2    2    2  NaN    2      0
4  2    2  NaN  NaN  NaN      0
5  2  NaN  NaN    2    1      0
Assuming you want to count the values and then get the sum across the different xs, you could use:

cols = ['x','y1','y2','y3','y4']
out = (A[cols].value_counts(dropna=False, sort=False)
       .reset_index(name='count')
       .sort_values(by=cols, na_position='last')
       .assign(count=lambda x: x.groupby(cols[1:], dropna=False)
                                ['count'].transform('sum')
              )
      )

Output:

   x   y1   y2   y3   y4  count
0  1    1    2    1    2      2
1  1    2    1    1    2      2
2  1    2    2  NaN    2      3
3  1  NaN    2    2  NaN      1
4  2    2    1    1    2      2
5  2    2    2  NaN    2      3
6  2    2  NaN  NaN  NaN      1
7  2  NaN  NaN    2    1      1

If you want to set the rows with NaNs as 0:

cols = ['x','y1','y2','y3','y4']
out = (A[cols].value_counts(dropna=False, sort=False)
       .reset_index(name='count')
       .sort_values(by=cols, na_position='last')
       .assign(count=lambda x: x.groupby(cols[1:])
                                ['count'].transform('sum')
                                .fillna(0).convert_dtypes()
              )
      )

Output:

   x   y1   y2   y3   y4  count
0  1    1    2    1    2      2
1  1    2    1    1    2      2
2  1    2    2  NaN    2      0
3  1  NaN    2    2  NaN      0
4  2    2    1    1    2      2
5  2    2    2  NaN    2      0
6  2    2  NaN  NaN  NaN      0
7  2  NaN  NaN    2    1      0

If you don't want to sum across the xs:

cols = ['x','y1','y2','y3','y4']
out = (A[cols].value_counts(dropna=False, sort=False)
       .reset_index(name='count')
       .sort_values(by=cols, na_position='last')
      )

Output:

   x   y1   y2   y3   y4  count
0  1    1    2    1    2      2
1  1    2    1    1    2      1
2  1    2    2  NaN    2      1
3  1  NaN    2    2  NaN      1
4  2    2    1    1    2      1
5  2    2    2  NaN    2      2
6  2    2  NaN  NaN  NaN      1
7  2  NaN  NaN    2    1      1
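As an addition of mine (not in the original answer): for the last variant, where you do not sum across the xs, an arguably simpler equivalent is a plain groupby with dropna=False (available since pandas 1.1), which also treats NaN as its own key; a minimal sketch, up to row ordering:

cols = ['x', 'y1', 'y2', 'y3', 'y4']

# count each (x, y1..y4) combination, keeping NaN as a valid group key
out = (A.groupby(cols, dropna=False)
        .size()
        .reset_index(name='count')
      )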
3
1
78,676,973
2024-6-27
https://stackoverflow.com/questions/78676973/how-can-i-preserve-the-previous-value-to-find-the-row-that-is-greater-than-it
This is my DataFrame:

import pandas as pd
df = pd.DataFrame(
    {
        'start': [3, 11, 9, 19, 22],
        'end': [10, 17, 10, 25, 30]
    }
)

And the expected output is creating column x:

   start  end    x
0      3   10   10
1     11   17   17
2      9   10  NaN
3     19   25   25
4     22   30  NaN

Logic: I explain it row by row. For row 0, x is df.end.iloc[0]. Now this value of x needs to be preserved until a greater value is found in the next rows of the start column. So 10 is saved and the process moves to row 1. Is 11 > 10? Yes, so x of the second row is 17. For the next row: is 9 > 17? No, so the value is NaN and the process moves to the next row. Since no value greater than 17 has been found yet, 17 is preserved. Is 19 > 17? Yes, so x is set to 25. And for the last row, since 22 < 25, NaN is selected.

I have provided additional examples with different df and the desired outputs:

df = pd.DataFrame({'start': [3, 20, 11, 19, 22],'end': [10, 17, 21, 25, 30]})

   start  end     x
0      3   10  10.0
1     20   17  17.0
2     11   21   NaN
3     19   25  25.0
4     22   30   NaN

df = pd.DataFrame({'start': [3, 9, 11, 19, 22],'end': [10, 17, 21, 25, 30]})

   start  end     x
0      3   10  10.0
1      9   17   NaN
2     11   21  21.0
3     19   25   NaN
4     22   30  30.0

df = pd.DataFrame({'start': [3, 11, 9, 19, 22],'end': [10, 17, 21, 25, 30]})

   start  end     x
0      3   10  10.0
1     11   17  17.0
2      9   21   NaN
3     19   25  25.0
4     22   30   NaN

This gives me the result. Is there a vectorized way to do this?

l = []
for ind, row in df.iterrows():
    if ind == 0:
        x = row['end']
        l.append(x)
        continue
    if row['start'] > x:
        x = row['end']
        l.append(x)
    else:
        l.append(np.NaN)
updated answer

If the previous end should be propagated, then the logic cannot be vectorized. However, it is possible to be much faster than iterrows by using numba:

import numpy as np
from numba import jit

@jit(nopython=True)
def f(start, end):
    prev_e = -np.inf
    out = []
    for s, e in zip(start, end):
        if s > prev_e:
            out.append(e)
            prev_e = e
        else:
            out.append(None)
    return out

df['x'] = f(df['start'].to_numpy(), df['end'].to_numpy())

Output:

# example 1
   start  end     x
0      3   10  10.0
1     11   17  17.0
2      9   10   NaN
3     19   25  25.0
4     22   30   NaN

# example 2
   start  end     x
0      3   10  10.0
1     20   17  17.0
2     11   21   NaN
3     19   25  25.0
4     22   30   NaN

# example 3
   start  end     x
0      3   10  10.0
1      9   17   NaN
2     11   21  21.0
3     19   25   NaN
4     22   30  30.0

# example 4
   start  end     x
0      3   10  10.0
1     11   17  17.0
2      9   21   NaN
3     19   25  25.0
4     22   30   NaN

original answer

IIUC, you could use shift to form a boolean mask and mask to hide the non-valid values:

df['x'] = df['end'].mask(df['start'].le(df['end'].shift()))

The trick here is to compare start <= end.shift, which will result in False for the first row because of the NaN. If you wanted to exclude the first row, you would use df['end'].where(df['start'].gt(df['end'].shift())).

Output:

   start  end     x
0      3   10  10.0
1     11   17  17.0
2      9   10   NaN
3     19   25  25.0
4     22   30   NaN

Intermediates:

   start  end     x  end.shift  start<=end.shift
0      3   10  10.0        NaN             False
1     11   17  17.0       10.0             False
2      9   10   NaN       17.0              True
3     19   25  25.0       10.0             False
4     22   30   NaN       25.0              True
4
3
78,678,138
2024-6-27
https://stackoverflow.com/questions/78678138/how-to-apply-hierarchical-numbering-to-indented-titles
I have a table of contents that uses indentation to track the hierarchy, like:

- title1
-- title1-1
-- title1-2
--- title1-2-1
--- title1-2-2
- title2
-- title2-1
-- title2-2
- title3
- title4

I want to translate it into a numbered format like:

1 title1
1.1 title1-1
1.2 title1-2
1.2.1 title1-2-1
1.2.2 title1-2-2
2 title2
2.1 title2-1
2.2 title2-2
3 title3
4 title4

This is just an example where the string "title-*" could be any heading text. Also, the nesting depth could be greater than in this example. This comes from my real work, where I collect headings, or manually hand-written headings, in a Word document and reformat these candidate headings from beginning to end, aiming to correct any wrong order and indentation. I have tried this myself, and while most of these headings were transformed into the desired format, for some it did not work out. How should this be done?
You could use the replacer callback of re.sub to implement the logic. In that callback use a stack (that is maintained across multiple replacements) to track the chapter numbers of upper "levels".

Code:

import re

def add_numbers(s):
    stack = [0]

    def replacer(s):
        indent = len(s.group(0)) - 1
        del stack[indent+1:]
        if indent >= len(stack):
            stack.append(0)
        stack[indent] += 1
        return ".".join(map(str,stack))

    return re.sub(r"^-+", replacer, s, flags=re.M)

Here is how you would call it on your example:

message_string = """- title1
-- title1-1
-- title1-2
--- title1-2-1
--- title1-2-2
- title2
-- title2-1
-- title2-2
- title3
- title4"""

res = add_numbers(message_string)
print(res)

This prints:

1 title1
1.1 title1-1
1.2 title1-2
1.2.1 title1-2-1
1.2.2 title1-2-2
2 title2
2.1 title2-1
2.2 title2-2
3 title3
4 title4
3
5
78,676,407
2024-6-27
https://stackoverflow.com/questions/78676407/polars-pandas-equivalent-of-selecting-column-names-from-a-list
I have two DataFrames in polars, one with the metadata and one with the actual data (LazyFrames are used as the actual data is larger):

import polars as pl

df = pl.LazyFrame(
    {
        "ID": ["CX1", "CX2", "CX3"],
        "Sample1": [1, 1, 1],
        "Sample2": [2, 2, 2],
        "Sample3": [4, 4, 4],
    }
)

df_meta = pl.LazyFrame(
    {
        "sample": ["Sample1", "Sample2", "Sa,mple3", "Sample4"],
        "qc": ["pass", "pass", "fail", "pass"]
    }
)

I need to select the columns in df for samples that have passing qc, using the information in df_meta. As you can see, df_meta has an additional sample, which of course we are not interested in as it's not part of our data. In pandas, I'd do (not very elegant but it does the job):

df.loc[:, df.columns.isin(df_meta.query("qc == 'pass'")["sample"])]

However, I'm not sure how to do this in polars. Reading through SO and the docs didn't give me a definite answer. I've tried:

df.with_context(
    df_meta.filter(pl.col("qc") == "pass").select(pl.col("sample").alias("meta_ids"))
).with_columns(
    pl.all().is_in("meta_ids")
).collect()

which however raises an exception:

InvalidOperationError: `is_in` cannot check for String values in Int64 data

I assume it's checking the contents of the columns, but I'm interested in the column names. I've also tried:

meta_ids = df_meta.filter(pl.col("qc") == "pass").get_column("sample")
df.select(pl.col(meta_ids))

but as expected, an exception is raised as there's one sample not accounted for in the first DataFrame:

ColumnNotFoundError: Sample4

What would be the correct way to do this?
Just to build upon https://stackoverflow.com/a/78676922/ - I find require_all=False rather cryptic. It is also possible to take the set intersection with the cs.all() selector:

>>> meta_ids = df_meta.filter(pl.col("qc") == "pass")["sample"]

>>> cs.all() & cs.by_name(meta_ids)
(cs.all() & cs.by_name('Sample1', 'Sample2', 'Sample4'))

df.select(cs.all() & cs.by_name(meta_ids))

shape: (3, 2)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Sample1 ┆ Sample2 β”‚
β”‚ ---     ┆ ---     β”‚
β”‚ i64     ┆ i64     β”‚
β•žβ•β•β•β•β•β•β•β•β•β•ͺ═════════║
β”‚ 1       ┆ 2       β”‚
β”‚ 1       ┆ 2       β”‚
β”‚ 1       ┆ 2       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
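For reference (my addition, not part of the original answer), the require_all=False approach being compared against would look roughly like this, assuming a polars version where cs.by_name supports that keyword:

import polars.selectors as cs

meta_ids = df_meta.filter(pl.col("qc") == "pass")["sample"]

# keep only the listed columns that actually exist in df;
# names missing from df (e.g. "Sample4") are ignored instead of raising
df.select(cs.by_name(meta_ids, require_all=False))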
2
2
78,676,983
2024-6-27
https://stackoverflow.com/questions/78676983/get-rows-from-related-table-orm-object-via-array-agg
I want to get the product table rows as an array using the array_agg function. In Postgres this works great, but in SQLAlchemy this seems to only be possible with data types such as integers, strings, and so on. How do I implement this in SQLAlchemy?

Code in PostgreSQL:

select DISTINCT seller.title, array_agg(product), COUNT(product.id)
from seller_product
INNER JOIN seller ON seller.ozon_id = seller_product.id_seller
INNER JOIN product ON product.ozon_id = seller_product.id_product
WHERE start_id = 36
GROUP BY seller.title
ORDER BY COUNT(product.id) DESC

Python code. I'm trying to do it like this:

slct_stmt_now = select(Seller, func.array_agg(Product.__table__), func.count(Product.id)).distinct().select_from(seller_product)

slct_stmt_now = slct_stmt_now.join(Seller, Seller.ozon_id == seller_product.columns["id_seller"]).join(Product, Product.ozon_id == seller_product.columns["id_product"]).where(seller_product.columns["start_id"] == LAST_START_ID).group_by(Seller)

now_data_txt = session.execute(slct_stmt_now.order_by(func.count(Product.id).desc())).all()
The GitHub discussion that @snakecharmerb cites above includes the comment that "relationship does exactly what you are looking for in an object oriented way".

# https://stackoverflow.com/q/78676983/2144390
# fmt: off
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine, select
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship
# fmt: on

engine = create_engine("postgresql://scott:[email protected]/test")

Base = declarative_base()

seller_product = Table(
    "seller_product",
    Base.metadata,
    Column(
        "id_seller", Integer, ForeignKey("seller.ozon_id"), primary_key=True
    ),
    Column(
        "id_product", Integer, ForeignKey("product.ozon_id"), primary_key=True
    ),
)

class Product(Base):
    __tablename__ = "product"

    id = Column(Integer, primary_key=True)
    ozon_id = Column(Integer, nullable=False)
    name = Column(String, nullable=True)

    def __repr__(self):
        return f"Product({repr(self.name)})"

class Seller(Base):
    __tablename__ = "seller"

    id = Column(Integer, primary_key=True)
    ozon_id = Column(Integer, nullable=False)
    title = Column(String, nullable=True)

    # using this instead of array_agg()
    products = relationship("Product", secondary=seller_product)

    def __repr__(self):
        return (
            f"Seller(title={repr(self.title)}, products={repr(self.products)})"
        )

engine.echo = True
with Session(engine) as sess:
    a_seller = sess.scalars(
        select(Seller).options(joinedload(Seller.products))
    ).first()
    """
    SELECT seller.id, seller.ozon_id, seller.title, product_1.id AS id_1,
        product_1.ozon_id AS ozon_id_1, product_1.name
    FROM seller
    LEFT OUTER JOIN (
        seller_product AS seller_product_1
        JOIN product AS product_1
            ON product_1.ozon_id = seller_product_1.id_product
    ) ON seller.ozon_id = seller_product_1.id_seller
    """
    print(a_seller)
    """
    Seller(title='Harbor Freight', products=[Product('wrench'), Product('hammer')])
    """
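If you also want the product counts and the ordering from the original SQL (this sketch is mine, not from the accepted answer, and the exact query shape is an assumption), something along these lines should work with the same models:

from sqlalchemy import func, select

# sellers with their product counts, most products first;
# grouping by the primary key lets PostgreSQL select the other Seller columns
stmt = (
    select(Seller, func.count(Product.id).label("n_products"))
    .join(seller_product, Seller.ozon_id == seller_product.c.id_seller)
    .join(Product, Product.ozon_id == seller_product.c.id_product)
    .group_by(Seller.id)
    .order_by(func.count(Product.id).desc())
)

with Session(engine) as sess:
    for seller, n_products in sess.execute(stmt):
        print(seller.title, n_products)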
2
1
78,676,841
2024-6-27
https://stackoverflow.com/questions/78676841/dealing-with-columns-with-datatype-object
I read a stored numpy array with dtype object into polars. I want to change the column dtype from Object to Float64 in the polars DataFrame. Minimal example:

# generate sample data
data = {"1": ["1.0", "2.0", "3.0", "4.0"], "2": [10, 20, 30, 40]}
df = pl.DataFrame(data, schema={'1':pl.Object, '2':pl.Object})

# what I want to do
df.with_columns(pl.col('2').cast(pl.Float64, strict=False))

The last line yields an error:

polars.exceptions.ComputeError: cannot cast 'Object' type

How can I do this?
Usually, by the time an object dtype made it into your dataframe, it is already too late to perform any native polars operation on it. Still, you can apply pl.Expr.map_elements to map a user-defined function over elements.

df.with_columns(
    pl.col("1").map_elements(lambda x: x, return_dtype=pl.String),
    pl.col("2").map_elements(lambda x: x, return_dtype=pl.Int32),
)

shape: (4, 2)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ 1   ┆ 2   β”‚
β”‚ --- ┆ --- β”‚
β”‚ str ┆ i32 β”‚
β•žβ•β•β•β•β•β•ͺ═════║
β”‚ 1.0 ┆ 10  β”‚
β”‚ 2.0 ┆ 20  β”‚
β”‚ 3.0 ┆ 30  β”‚
β”‚ 4.0 ┆ 40  β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
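Following the same pattern (this addition is mine, not the answerer's), the cast to Float64 that the question asks for could look roughly like this, assuming column "1" holds numeric strings and column "2" holds plain Python ints as in the example data:

df = df.with_columns(
    # parse the stringified numbers in column "1"
    pl.col("1").map_elements(float, return_dtype=pl.Float64),
    # convert the Python ints in column "2"
    pl.col("2").map_elements(float, return_dtype=pl.Float64),
)
print(df.schema)  # both columns should now be Float64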
2
3
78,676,685
2024-6-27
https://stackoverflow.com/questions/78676685/why-does-x-1-not-reverse-a-grouped-pandas-dataframe-column-in-python-3-12-3
I have a large pandas DataFrame (160k records, 60 columns of mostly text), but for this example I have the following DataFrame:

df1 = pd.DataFrame([{"GROUP": "A", "COL2": "1", "COL3": "P"},
                    {"GROUP": "A", "COL2": "2", "COL3": "Q"},
                    {"GROUP": "A", "COL2": "3", "COL3": "R"},
                    {"GROUP": "B", "COL2": "4", "COL3": "S"},
                    {"GROUP": "B", "COL2": "5", "COL3": "T"},
                    {"GROUP": "B", "COL2": "6", "COL3": "U"},
                    {"GROUP": "B", "COL2": "7", "COL3": "V"}])

I am trying to create another column that reverses the order of COL2, but only within the groups A and B... so I would expect the values, from top to bottom, to be 3,2,1,7,6,5,4. I expected this to be achieved with this line of code:

df1['REVERSED_COL'] = df1.groupby("GROUP")["COL2"].transform(lambda x: x[::-1])

Instead, the new column comes back in the original, un-reversed order (the screenshot of the output is not reproduced here).

This line of code worked when I was using Python 3.11.7. However, I recently upgraded to 3.12.3 (and all other modules, including pandas to 2.2.1) and removed the 3.11.7 interpreter from my machine, so I can't go back and test it again. I also have another machine using Python 3.7.4 where the same line of code still works as expected.

I tried using x[::-1] on a plain list, and that reversed it as expected. This code:

x = [1,2,3,4,5,6,7,8,9]
print(x)
y = x[::-1]
print(y)

Results in:

[1, 2, 3, 4, 5, 6, 7, 8, 9]
[9, 8, 7, 6, 5, 4, 3, 2, 1]

My question is, am I doing something wrong? Is there a different way I can do this? I checked the Python and pandas documentation for Python changes and pandas 2.2.1 changes (respectively), but I couldn't find anything relevant enough.
Because the output of transform is aligned back to the original index when it is a Series, which in your case reverts it back to its original order. To avoid this you must convert to array/list (with values/array/to_numpy):

df1['REVERSED_COL'] = (df1.groupby("GROUP")["COL2"]
                          .transform(lambda x: x[::-1].array)
                       )

Output:

  GROUP COL2 COL3 REVERSED_COL
0     A    1    P            3
1     A    2    Q            2
2     A    3    R            1
3     B    4    S            7
4     B    5    T            6
5     B    6    U            5
6     B    7    V            4
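As a small addition of mine, any of the conversions mentioned above (values/array/to_numpy) works the same way; for example the to_numpy() variant:

df1['REVERSED_COL'] = (df1.groupby("GROUP")["COL2"]
                          .transform(lambda x: x.to_numpy()[::-1])  # no index, so no re-alignment
                       )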
3
1
78,676,037
2024-6-27
https://stackoverflow.com/questions/78676037/turn-a-list-of-tuples-into-pandas-dataframe-with-single-column
I have a list of tuples like:

tuple_lst = [('foo', 'bar'), ('bar', 'foo'), ('ping', 'pong'), ('pong', 'ping')]

And I want to create a DataFrame with one column containing each tuple pair, like:

| one col |
| -------- |
| ('foo', 'bar') |
| ('bar', 'foo') |
| ('ping', 'pong') |
| ('pong', 'ping') |

I tried:

df = pd.DataFrame(tuple_lst, columns='one col')

But this throws an error as it's trying to split the tuples into 2 separate columns. I know that if I pass a list of 2 column names here, it would produce a DataFrame with 2 columns, which is not what I want. I guess I could then put these two columns back together into a list of tuples, but this feels like a lot of work to break them up and put them back together. I feel there must be a simpler way to do this? I need the output to be a DataFrame, not a Series, so I can add other columns etc. later on.
Use a dictionary; this will ensure the DataFrame constructor doesn't try to interpret the data as 2D:

pd.DataFrame({'one col': tuple_lst})

You could also have used a Series and converted with to_frame:

pd.Series(tuple_lst).to_frame(name='one col')

Or, closer to your original approach, which could be useful if you have constraints on the format passed to the constructor, although it is not as efficient (for small lists):

pd.DataFrame(pd.Series(tuple_lst), columns=['one col'])

Output:

        one col
0    (foo, bar)
1    (bar, foo)
2  (ping, pong)
3  (pong, ping)

timings

For small lists pd.DataFrame(pd.Series(tuple_lst), columns=['one col']) is not as efficient, but for large lists all solutions are equivalent (the benchmark plot from the original answer is not reproduced here).
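As an addition of mine (not part of the original answer), a rough sketch of how such a timing comparison could be reproduced with timeit, using an arbitrary list size:

import timeit

import pandas as pd

tuple_lst = [('foo', 'bar'), ('bar', 'foo'), ('ping', 'pong'), ('pong', 'ping')] * 25_000

constructors = {
    "dict constructor": lambda: pd.DataFrame({'one col': tuple_lst}),
    "Series.to_frame": lambda: pd.Series(tuple_lst).to_frame(name='one col'),
    "Series + columns": lambda: pd.DataFrame(pd.Series(tuple_lst), columns=['one col']),
}

for label, fn in constructors.items():
    # average seconds per run over 20 runs
    print(label, timeit.timeit(fn, number=20) / 20)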
3
2