question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
77,707,524 | 2023-12-23 | https://stackoverflow.com/questions/77707524/how-to-remove-redundancy-when-computing-sums-for-many-rings | I have this code to compute the sum of the values in a matrix that are closer than some distance but further away than another. Here is the code with some example data: square = [[ 3, 0, 1, 3, -1, 1, 1, 3, -2, -1], [ 3, -1, -1, 1, 0, -1, 2, 1, -2, 0], [ 2, 2, -2, 0, 1, -3, 0, -2, 2, 1], [ 0, -3, -3, -1, -1, 3, -2, 0, 0, 3], [ 2, 2, 3, 2, -1, 0, 3, 0, -3, -1], [ 1, -1, 3, 1, -3, 3, -2, 0, -3, 0], [ 2, -2, -2, -3, -2, 1, -2, 0, 0, 3], [ 0, 3, 0, 1, 3, -1, 2, -3, 0, -2], [ 0, -2, 2, 2, 2, -2, 0, 2, 1, 3], [-2, -2, 0, -2, -2, 2, 0, 2, 3, 3]] def enumerate_matrix(matrix): """ Enumerate the elements in the matrix. """ for x, row in enumerate(matrix): for y, value in enumerate(row): yield x, y, value def sum_of_values(matrix, d): """ Calculate the sum of values based on specified conditions. """ total_sum = 0 for x, y, v in enumerate_matrix(matrix): U = x * x + x + y * y + y + 1 if d * d * 2 < U < (d + 1) ** 2 * 2: total_sum += v return total_sum For this case, I want to compute sum_of_values(square, x) for x in [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5]. This is fast enough but I also want to do it for much larger matrices and the code is then doing a lot of redundant computation. How can I remove this redundancy? For example: import numpy as np square = np.random.randint(-3, 4, size=(1000, 1000)) for i in range(1000): result = sum_of_values(square, i + 0.5) print(f"Sum of values: {result} {i}") This is too slow as I will need to perform this calculation for thousands of different matrices. How can the redundant calculations in my code be removed? The key problem I think is that enunerate_matrix should only be looking at cells in the matrix that are likely to be the right distance instead of repeatedly rechecking all the cells in the matrix . Timings For a 400 by 400 matrix my code takes approx 26 seconds. def calc_values(matrix, n): scores = [] for i in tqdm(range(n)): result = sum_of_values(square, i + 0.5) scores.append(result) return scores n = 400 square = np.random.randint(-3, 4, size=(n, n)) %timeit calc_values(square, n) RomanPerekhrest's code takes approx 119ms even including making the U_arrays matrix. Reinderien's code takes approx 149ms. | Traverse the input square matrix just once to generate an array of pairs where calculated U parameter mapped to the respective value. Then apply a vectorized operation to sum up values filtered by U params matched the condition. 
def make_U_array(mtx): """Make an array of (U, value) pairs""" arr = np.array([(x * x + x + y * y + y + 1, value) for x, row in enumerate(mtx) for y, value in enumerate(row)]) return arr def sum_values_by_cond(U_values, d): # mask values where U parameters fit the condition m = (d * d * 2 < U_values[:, 0]) & (U_values[:, 0] < (d + 1) ** 2 * 2) return np.sum(U_values[:, 1][m]) Update: alternative and faster version of make_U_array function based on np.indices (to get row/column indices), it should give about 3x time speedup compared to a previous list-comprehension approach: def make_U_array(mtx): """Make an array of (U, value) pairs""" x, y = np.indices(mtx.shape) x, y = x.flatten(), y.flatten() # row/column indices arr = np.column_stack((x * x + x + y * y + y + 1, np.ravel(mtx))) return arr Sample case (assuming you initial square array): U_arr = make_U_array(square) for i in range(10): result = sum_values_by_cond(U_arr, i + 0.5) print(f"Sum of values: {result} {i}") Sum of values: 6 0 Sum of values: 3 1 Sum of values: -1 2 Sum of values: 4 3 Sum of values: 3 4 Sum of values: 3 5 Sum of values: -11 6 Sum of values: 4 7 Sum of values: 7 8 Sum of values: 3 9 | 3 | 2 |
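As a follow-up sketch (not from the post itself), all of the ring sums can also be produced in a single pass with `np.bincount`: for integer `U`, the condition `2*d*d < U < 2*(d + 1)**2` with `d = i + 0.5` assigns every cell to exactly one ring index `i`. The `ring_sums` helper below is a hypothetical name, and the index formula is my own derivation, so it is worth spot-checking against `sum_of_values` on a small matrix before relying on it.

```python
import numpy as np

def ring_sums(matrix, n):
    """Sketch: ring sums for d = 0.5, 1.5, ..., n - 0.5 in one pass."""
    matrix = np.asarray(matrix)
    x, y = np.indices(matrix.shape)
    U = x * x + x + y * y + y + 1
    # For integer U, 2*d*d < U < 2*(d + 1)**2 with d = i + 0.5 is equivalent
    # to i == floor(sqrt(U / 2) - 0.5), so each cell belongs to exactly one ring.
    ring = np.floor(np.sqrt(U / 2) - 0.5).astype(int)
    sums = np.bincount(ring.ravel(), weights=matrix.ravel(), minlength=n)
    return sums[:n]

square = np.random.randint(-3, 4, size=(400, 400))
print(ring_sums(square, 400))
```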
77,707,167 | 2023-12-23 | https://stackoverflow.com/questions/77707167/pandas-create-new-column-that-contains-the-most-recent-index-where-a-condition | In the below example, I wish to return the last index ocurrance relative to the current row in which the "lower" was >= the "upper" column. I am able to do this with the results as expected but it is not truly vectorized and is very inefficient for larger dataframes. import pandas as pd # Sample DataFrame data = {'lower': [7, 1, 6, 1, 1, 1, 1, 11, 1, 1], 'upper': [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]} df = pd.DataFrame(data=data) df['DATE'] = pd.date_range('2020-01-01', periods=len(data['lower'])) df['DATE'] = pd.to_datetime(df['DATE']) df.set_index('DATE', inplace=True) # new columns that contains the most recent index of previous rows, where the previous "lower" is greater than or equal to the current "upper" def get_most_recent_index(row): previous_indices = df.loc[:row.name - pd.Timedelta(minutes=1)] recent_index = previous_indices[previous_indices['lower'] >= row['upper']].index.max() return recent_index df['prev'] = df.apply(get_most_recent_index, axis=1) print(df) How would I rewrite this to be most efficient? EDIT: Firstly, thank you to you all for your responses. On the topic of performance between the four viable solutions, the clear winner is bisect, proposed by Andrej Kesely. I have excluded pyjanitor as with any data volume approaching my set, we quickly run into memory alloc errors. baseline: 1min 35s ± 5.15 s per loop (mean ± std. dev. of 2 runs, 2 loops each) bisect: 1.76 s ± 82.5 ms per loop (mean ± std. dev. of 2 runs, 2 loops each) enumerate: 1min 13s ± 2.17 s per loop (mean ± std. dev. of 2 runs, 2 loops each) import pandas as pd import numpy as np from bisect import bisect_left import janitor def get_sample_df(rows=100_000): # Sample DataFrame data = {'lower': np.random.default_rng(seed=1).uniform(1,100,rows), 'upper': np.random.default_rng(seed=2).uniform(1,100,rows)} df = pd.DataFrame(data=data) df = df.astype(int) df['DATE'] = pd.date_range('2020-01-01', periods=len(data['lower']), freq="min") df['DATE'] = pd.to_datetime(df['DATE']) df.set_index('DATE', inplace=True) return df def get_baseline(): df = get_sample_df() # new columns that contains the most recent index of previous rows, where the previous "lower" is greater than or equal to the current "upper" def get_most_recent_index(row): previous_indices = df.loc[:row.name - pd.Timedelta(minutes=1)] recent_index = previous_indices[previous_indices['lower'] >= row['upper']].index.max() return recent_index df['prev'] = df.apply(get_most_recent_index, axis=1) return df def get_pyjanitor(): df = get_sample_df() df.reset_index(inplace=True) # set the DATE column as an index # after the operation you can set the original DATE # column as an index left_df = df.assign(index_prev=df.index) right_df = df.assign(index_next=df.index) out=(left_df .conditional_join( right_df, ('lower','upper','>='), ('index_prev','index_next','<'), df_columns='index_prev', right_columns=['index_next','lower','upper']) ) # based on the matches, we may have multiple returns # what we need is the closest to the current row closest=out.index_next-out.index_prev grouper=[out.index_next, out.lower,out.upper] min_closest=closest.groupby(grouper).transform('min') closest=closest==min_closest # we have out matches, which is defined by `index_prev` # use index_prev to get the relevant DATE prev=out.loc[closest,'index_prev'] prev=df.loc[prev,'DATE'].array # avoid index alignment here 
index_next=out.loc[closest,'index_next'] # now assign back to df, based on index_next and prev prev=pd.Series(prev,index=index_next) df = df.assign(prev=prev) return df def get_bisect(): df = get_sample_df() def get_prev_bs(lower, upper, _date): uniq_lower = sorted(set(lower)) last_seen = {} for l, u, d in zip(lower, upper, _date): # find index of element that is >= u idx = bisect_left(uniq_lower, u) max_date = None for lv in uniq_lower[idx:]: if lv in last_seen: if max_date is None: max_date = last_seen[lv] elif last_seen[lv] > max_date: max_date = last_seen[lv] yield max_date last_seen[l] = d df["prev"] = list(get_prev_bs(df["lower"], df["upper"], df.index)) return df def get_enumerate(): df = get_sample_df() df.reset_index(inplace=True) date_list=df["DATE"].values.tolist() lower_list=df["lower"].values.tolist() upper_list=df["upper"].values.tolist() new_list=[] for i,(x,y) in enumerate(zip(lower_list,upper_list)): if i==0: new_list.append(None) else: if (any(j >= y for j in lower_list[0:i])): for ll,dl in zip(reversed(lower_list[0:i]),reversed(date_list[0:i])): if ll>=y: new_list.append(dl) break else: continue else: new_list.append(None) df['prev']=new_list df['prev']=pd.to_datetime(df['prev']) return df print("baseline:") %timeit -n 2 -r 2 get_baseline() # Unable to allocate 37.2 GiB for an array with shape (4994299505,) and data type int64 # print("pyjanitor:") # %timeit -n 2 get_pyjanitor() print("bisect:") %timeit -n 2 -r 2 get_bisect() print("enumerate:") %timeit -n 2 -r 2 get_enumerate() | I'm not sure if this can be vectorized (because you have variables that are dependent on past state). But you can try to speed up the computation using binary search, e.g.: from bisect import bisect_left def get_prev(lower, upper, _date): uniq_lower = sorted(set(lower)) last_seen = {} for l, u, d in zip(lower, upper, _date): # find index of element that is >= u idx = bisect_left(uniq_lower, u) max_date = None for lv in uniq_lower[idx:]: if lv in last_seen: if max_date is None: max_date = last_seen[lv] elif last_seen[lv] > max_date: max_date = last_seen[lv] yield max_date last_seen[l] = d df["prev_new"] = list(get_prev(df["lower"], df["upper"], df.index)) print(df) Prints: lower upper prev prev_new DATE 2020-01-01 7 2 NaT NaT 2020-01-02 1 3 2020-01-01 2020-01-01 2020-01-03 6 4 2020-01-01 2020-01-01 2020-01-04 1 5 2020-01-03 2020-01-03 2020-01-05 1 6 2020-01-03 2020-01-03 2020-01-06 1 7 2020-01-01 2020-01-01 2020-01-07 1 8 NaT NaT 2020-01-08 11 9 NaT NaT 2020-01-09 1 10 2020-01-08 2020-01-08 2020-01-10 1 11 2020-01-08 2020-01-08 | 3 | 2 |
77,705,057 | 2023-12-22 | https://stackoverflow.com/questions/77705057/how-to-pass-a-pem-key-in-github-actions-via-environment-variable-without-charac | I have a GitHub environment secret {{ secrets.GITHUBAPP_KEY }} that holds a .pem key. In a workflow step, I'm trying to pass the secret to an env variable GITHUBAPP_KEY - name: Do Certain Step run: Insert fancy command here env: GITHUBAPP_KEY: "${{ secrets.GITHUBAPP_KEY }}" Here is the error the GitHub Actions workflow gets when I run it: error: error parsing STDIN: invalid Yaml document separator: --END RSA PRIVATE KEY-----" The key is in the correct format, and I have also wrapped double quotes around the secrets context, yet the file still does not get parsed correctly. How can I solve this issue? | As the SSH private key spans multiple lines, you can assign it to an env var as a multiline string by using pipe | like this: env: GITHUBAPP_KEY: | ${{ secrets.GITHUBAPP_KEY }} See https://yaml-multiline.info/ for interactive examples. | 2 | 5 |
77,704,406 | 2023-12-22 | https://stackoverflow.com/questions/77704406/how-to-isolate-parts-of-a-venn-diagram | Here is a Venn Diagram I drew in Python: !pip install matplotlib_venn from matplotlib_venn import venn2 import matplotlib.pyplot as plt from matplotlib_venn import venn2 import matplotlib.pyplot as plt set1_size = 20 set2_size = 25 intersection_size = 5 venn2(subsets=(set1_size, set2_size, intersection_size)) plt.show() This makes a brown part, a green part and a red part. I want to extract the green part separately, the brown part separately and the red part separately. Is there something that can be done to do this? I tried looking but found nothing. Is it even possible? Or is it possible to just have the venn diagram as blank color, and then just individually highlight certain parts? Ex: center as yellow, (left circle without center) as purple, etc? | Possible arguments for .get_patch_by_id() include '10', '11', and '01'. See also this link. See the patch documentation for other patch parameters (e.g. color) that you can set. from matplotlib_venn import venn2 import matplotlib.pyplot as plt set1_size = 20 set2_size = 25 intersection_size = 5 fig, ax = plt.subplots() vd = venn2(subsets=(set1_size, set2_size, intersection_size)) vd.get_patch_by_id('10').set(visible=False, label=None) vd.get_label_by_id('10').set_text("") vd.get_patch_by_id('11').set(visible=False, label=None) vd.get_label_by_id('11').set_text("") plt.show() | 2 | 2 |
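Following up on the second part of that question (recoloring regions rather than hiding them), a small sketch using the same `get_patch_by_id` IDs from the accepted answer; the particular colors and alpha values are just placeholders.

```python
from matplotlib_venn import venn2
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
vd = venn2(subsets=(20, 25, 5))

# '10' = left-only region, '01' = right-only region, '11' = intersection
vd.get_patch_by_id('10').set(color='purple', alpha=0.6)     # left circle without the centre
vd.get_patch_by_id('01').set(color='lightgrey', alpha=0.4)  # right circle without the centre
vd.get_patch_by_id('11').set(color='yellow', alpha=0.8)     # centre

plt.show()
```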
77,704,256 | 2023-12-22 | https://stackoverflow.com/questions/77704256/create-filtered-value-using-pandas | I have a csv file in which the first line reads something like the following: Pyscip_V1.11 Ref: #001=XYZ_0[1234] #50=M3_0[112] #51=M3_1[154] #52=M3_2[254]... and so on. What I'd like to do is create filtered value such that the first column is Ref and it takes all the values after the # sign like 001,50,51,52... The second column name is ID and it takes the value after = like XYZ_0,M3_0,M3_1,M3_2,M3_3... And finally make a third column which takes all the values present in the square brackets like 1234,112,154,254,... header_pattern = r'Pyscip_V(\d+\.\d+) Ref:' version_match = re.search(header_pattern, first_line.iloc[0, 0]) version_number = version_match.group(1) if version_match else '' matches = re.findall(r'#(\d+)=(\w+_\d)\[([\d]+)\]', first_line.iloc[0, 0]) parsed_df = [] for match in matches: row_dict = { 'Ref': match[0] if match[0] else '', 'ID': match[1] if match[1] else '', 'Ser_No': match[2] if match[2] else '' } parsed_df.append(row_dict) new_df = pd.DataFrame(parsed_df) However, I only get enpty dataframe. What seems to be the problem here? Edit: the data from 3rd row looks like the following: ID Date XYZ_0 M3_0 M3_1 M3_2 1 22.12.2023 12.6 0.5 1.2 2.3 The expected outcome is Ref ID Num 001 XYZ_0 1234 50 M3_0 112 51 M3_1 154 | I would open the csv file, extract the first line and process it, only after read the rest of the CSV with pandas. For that, your original approach and regex are fine. import re import pandas as pd with open('my_csv.csv') as f: first_line = next(f) header_df = pd.DataFrame(re.findall(r'#(\d+)=(\w+_\d)\[([\d]+)\]', first_line), columns=['Ref', 'ID', 'Num']) data_df = pd.read_csv(f, sep=r'\s+') print(header_df) print(data_df) Output: # header_df Ref ID Num 0 001 XYZ_0 1234 1 50 M3_0 112 2 51 M3_1 154 3 52 M3_2 254 # data_df ID Date XYZ_0 M3_0 M3_1 M3_2 0 1 22.12.2023 12.6 0.5 1.2 2.3 | 3 | 2 |
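A small follow-up sketch showing one way to line the parsed header metadata up with the measurement columns; `meta`, `channel` and `reading` are made-up names for this illustration, not part of the original answer.

```python
# attach the parsed Ref/Num metadata to each measurement column
meta = header_df.set_index('ID').astype({'Num': int})
print(meta.loc['M3_1', 'Num'])   # 154

# or tag long-format readings with their Ref/Num
tidy = (data_df
        .melt(id_vars=['ID', 'Date'], var_name='channel', value_name='reading')
        .merge(meta, left_on='channel', right_index=True, how='left'))
print(tidy)
```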
77,698,828 | 2023-12-21 | https://stackoverflow.com/questions/77698828/difference-between-x-in-list-vs-x-in-dict-and-x-in-set-where-x-is-a-pol | This is mostly a question about python: How is x in [a, b, c] evaluated differently than x in {a, b, c}. The context I'm struggling with is like this: import polars as pl s = pl.Series(["a", "b"], dtype=pl.Categorical) s.dtype in [pl.Categorical, pl.Enum] # True s.dtype in {pl.Categorical, pl.Enum} # False s.dtype in {pl.Categorical: 1, pl.Enum: 2} # False I want to understand python better. I also wonder whether polars can do anything to make the second case work, since it currently seems like a footgun. | EDIT: polars dtypes do not follow the usual equals-contract in multiple ways, and this is considered by design: https://github.com/pola-rs/polars/issues/9564 (thanks @jqurious), specifically, they violate transitivity and hashcode-consistency. In polars, a specific datatype is considered equal to a more general datatype even when different specific datatypes are not equal to each other. eg "pl.List == pl.List(str) => True pl.List(int) == pl.List(str) => False". Original answer, where I assumed hashcode-inconsistency is all that is happening here: This is a broken equality-implementation. See this: import polars as pl s = pl.Series(["a", "b"], dtype=pl.Categorical) print(s.dtype is pl.Categorical) # False print(s.dtype == pl.Categorical) # True print(hash(s.dtype) == hash(pl.Categorical) ) # False s.dtype is a new object that is considered equal to pl.Categorical. However, the __hash__-function is not overridden correctly: Two objects that are considered equal must have the same hashcode (or ban usage of the hash-function). Algorithms may rely on this and produce weird results if this is not true for a class! If you override __eq__ on a class, read this: https://docs.python.org/3/reference/datamodel.html#object.__hash__ What is happening here is that obj in list iterates the list and tests equality of the elements with obj, while obj in set uses the hashcode of obj to locate the object in the set, and if no object with that hashcode exists in the set the conclusion is that it is not in the set. | 2 | 5 |
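If set-like syntax is still wanted despite the hash inconsistency described above, a simple workaround is to fall back to explicit equality, which keeps the `==` semantics of the list case and sidesteps hash-based containment entirely:

```python
import polars as pl

s = pl.Series(["a", "b"], dtype=pl.Categorical)

wanted = (pl.Categorical, pl.Enum)
print(any(s.dtype == t for t in wanted))   # True, same result as `in [pl.Categorical, pl.Enum]`
```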
77,701,771 | 2023-12-22 | https://stackoverflow.com/questions/77701771/unable-to-connect-to-mysql-container-unable-to-create-tables-due-to-2003-hy000 | I am trying to connect mysql container with my python flask app and I have this error when running docker-compose build --no-cache && docker-compose up: Creating network "hdbresalepricealert_sql_network" with the default driver Creating hdbresalepricealert_mysql_1 ... done Creating hdbresalepricealert_python_app_1 ... done Attaching to hdbresalepricealert_mysql_1, hdbresalepricealert_python_app_1 mysql_1 | 2023-12-22 04:48:45+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.2.0-1.el8 started. mysql_1 | 2023-12-22 04:48:46+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mysql_1 | 2023-12-22 04:48:46+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.2.0-1.el8 started. mysql_1 | '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock' mysql_1 | 2023-12-22T04:48:46.269421Z 0 [System] [MY-015015] [Server] MySQL Server - start. mysql_1 | 2023-12-22T04:48:46.860627Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. mysql_1 | 2023-12-22T04:48:46.866169Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.2.0) starting as process 1 mysql_1 | 2023-12-22T04:48:46.891660Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. mysql_1 | 2023-12-22T04:48:47.157634Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended. mysql_1 | 2023-12-22T04:48:47.482067Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. mysql_1 | 2023-12-22T04:48:47.482195Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel. mysql_1 | 2023-12-22T04:48:47.485815Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. mysql_1 | 2023-12-22T04:48:47.512210Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock mysql_1 | 2023-12-22T04:48:47.512345Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.2.0' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL. python_app_1 | INFO:services.dbService:Entering create tables method from dbservice python_app_1 | ERROR:services.dbService:Unable to create tables due to 2003 (HY000): Can't connect to MySQL server on 'mysql:3307' (111) python_app_1 | * Serving Flask app 'app' python_app_1 | * Debug mode: on python_app_1 | INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. python_app_1 | * Running on all addresses (0.0.0.0) python_app_1 | * Running on http://127.0.0.1:5000 python_app_1 | * Running on http://172.22.0.3:5000 python_app_1 | INFO:werkzeug:Press CTRL+C to quit python_app_1 | INFO:werkzeug: * Restarting with stat python_app_1 | INFO:services.dbService:Entering create tables method from dbservice python_app_1 | ERROR:services.dbService:Unable to create tables due to 2003 (HY000): Can't connect to MySQL server on 'mysql:3307' (111) python_app_1 | WARNING:werkzeug: * Debugger is active! 
python_app_1 | INFO:werkzeug: * Debugger PIN: 111-520-781 My docker-compose file: version: '3' services: mysql: image: mysql:latest ports: - "3307:3306" environment: MYSQL_ROOT_PASSWORD: root MYSQL_DATABASE: db MYSQL_USER: user MYSQL_PASSWORD: password networks: - sql_network volumes: - /mysql_data :/var/lib/mysql python_app: build: . ports: - "5000:5000" networks: - sql_network depends_on: - mysql networks: sql_network: Relevant portion of the dbService.py: config = { 'host': 'mysql', # This should match the service name in Docker Compose 'port': '3307', # This should match the exposed port on the host 'user': 'user', 'password': 'password', 'database': 'db', } def create_tables(): logger.info("Entering create tables method from dbservice") try: connection = mysql.connector.connect(**config) cursor = connection.cursor() # Check if 'emails' table already exists cursor.execute("SHOW TABLES LIKE 'emails'") table_exists = cursor.fetchone() if not table_exists: cursor.execute( ''' CREATE TABLE emails ( id INT PRIMARY KEY AUTO_INCREMENT, created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, email VARCHAR(255) NOT NULL, verified BOOLEAN NOT NULL, flatType VARCHAR(255) NOT NULL, streetName VARCHAR(255) NOT NULL, blkFrom INT NOT NULL, blkTo INT NOT NULL, lastSent TIMESTAMP, token VARCHAR(255)) ''') connection.commit() connection.close() except Exception as e: logger.error(f"Unable to create tables due to {e}") My dockerfile: FROM python:3.12.1-bullseye ARG REPO_DIR="." ARG PROJECT_USER="randy" ARG HOME_DIR="/home/$PROJECT_USER" ARG DESTINATION_FOLDER="hdb" WORKDIR $HOME_DIR # The copy is from local (where the docker command is executed) to this COPY $REPO_DIR $DESTINATION_FOLDER RUN pip install -r $DESTINATION_FOLDER/app/run-requirements.txt RUN groupadd -g 2222 $PROJECT_USER && useradd -u 2222 -g 2222 -m $PROJECT_USER RUN chown -R 2222:2222 $HOME_DIR && \ rm /bin/sh && ln -s /bin/bash /bin/sh USER 2222 WORKDIR $HOME_DIR/${DESTINATION_FOLDER} CMD [ "python3", "app/app.py" ] Relevant portion of my app.py: time.sleep(10) create_tables() Essentially I'm trying to create the tables first using the create tables function in the dbservice.py, I included the wait as I read that if I connected before the mysql container is up there might be some issues. I'm able to connect to the sql container using docker exec -it <container image> bash and login, no problem, I see that the container is up and running in docker-compose ps. I just cannot connect to the mysql container when running docker-compose build --no-cache && docker-compose up. Its been bugging me for days and I have scoured the web for all possible solutions including chatgpt but to no avail. I think it may be some small gotcha that I've missed. Can anyone help? | You need to access MySQL with 3306 because it's the port that MySQL listens to inside the Docker container. If you try accessing mysql database with a python script running outside Docker, you'll need to use port 3307 because database is connected with this port on your local machine. "3307:3306" means you connect your database to your local machine with port 3307, and you connect your MySQL database inside your Docker container network with port 3306. | 2 | 3 |
77,702,504 | 2023-12-22 | https://stackoverflow.com/questions/77702504/pydantic-exclude-computed-field-from-dump | In pydantic v2, the following code: from __future__ import annotations import pydantic from pprint import pprint class System(pydantic.BaseModel): id: int name: str subsystems: list[System] | None = None @pydantic.computed_field() @property def computed(self) -> str: return self.name.upper() systems = System( id=1, name="Main system", subsystems=[ System(id=2, name="Subsystem A"), System(id=3, name="Subsystem B"), ], ) pprint(systems.model_dump(), indent=2) prints: { 'computed': 'MAIN SYSTEM', 'id': 1, 'name': 'Main system', 'subsystems': [ { 'computed': 'SUBSYSTEM A', 'id': 2, 'name': 'Subsystem A', 'subsystems': None}, { 'computed': 'SUBSYSTEM B', 'id': 3, 'name': 'Subsystem B', 'subsystems': None}]} I want to exclude the computed field computed. pprint(systems.model_dump(exclude={'computed': True}), indent=2) prints only root element without computed: { 'id': 1, 'name': 'Main system', 'subsystems': [ { 'computed': 'SUBSYSTEM A', 'id': 2, 'name': 'Subsystem A', 'subsystems': None}, { 'computed': 'SUBSYSTEM B', 'id': 3, 'name': 'Subsystem B', 'subsystems': None}]} How to exclude computed field (e.g. computed) from dump? I want to serialize via .model_dump() because: I have many models and models those include System are serialized via .model_dump() method as well. .model_dump() has some useful parameters like exclude_defaults, by_alias and more. I use them as well. | According to the documentation on computed_field: computed_field Decorator to include property and cached_property when serializing models or dataclasses. This is useful for fields that are computed from other fields, or for fields that are expensive to compute and should be cached. In other words, if don't want to include (= exclude) a field we shouldn't use computed_field decorator: from __future__ import annotations import pydantic from pprint import pprint class System(pydantic.BaseModel): id: int name: str subsystems: list[System] | None = None # comment or remove it # @pydantic.computed_field() @property def computed(self) -> str: return self.name.upper() systems = System( id=1, name="Main system", subsystems=[ System(id=2, name="Subsystem A"), System(id=3, name="Subsystem B"), ], ) pprint(systems.model_dump(), indent=2) print(systems.computed) # but it is still there Output is: { 'id': 1, 'name': 'Main system', 'subsystems': [ {'id': 2, 'name': 'Subsystem A', 'subsystems': None}, {'id': 3, 'name': 'Subsystem B', 'subsystems': None}]} MAIN SYSTEM | 3 | 2 |
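If removing the decorator is not an option (for instance when the computed field should appear in some dumps but not others), one fallback is to post-process the dict that `model_dump()` returns; `drop_computed` below is a hypothetical helper, not a pydantic API, and it leaves parameters like `by_alias` or `exclude_defaults` untouched because they are applied before the stripping step.

```python
from typing import Any

def drop_computed(obj: Any, key: str = "computed") -> Any:
    """Recursively drop `key` from the nested dicts/lists returned by model_dump()."""
    if isinstance(obj, dict):
        return {k: drop_computed(v, key) for k, v in obj.items() if k != key}
    if isinstance(obj, list):
        return [drop_computed(v, key) for v in obj]
    return obj

pprint(drop_computed(systems.model_dump()), indent=2)
```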
77,698,814 | 2023-12-21 | https://stackoverflow.com/questions/77698814/mamba-error-while-creating-a-new-virtual-environment-could-not-open-lock-file | When I run the following command: mamba create --name eco-tech-h2gam-venv regionmask cartopy I get this error: Looking for: ['regionmask', 'cartopy'] error libmamba Could not open lockfile 'C:\ProgramData\anaconda3\pkgs\cache\cache.lock' Any idea how to overcome it? Additional Details OS: Windows 11 Anaconda 3 base distribution | Lock files can be cleared using the mamba clean --locks command. See in the documentation (abridged): $ mamba clean -h # usage: mamba clean [-h] [-a] [-i] [-p] [-t] [-f] [-c [TEMPFILES ...]] [-l] [--json] [-v] # [-q] [-d] [-y] [--locks] # # Removal Targets: # --locks Remove lock files. | 2 | 1 |
77,701,188 | 2023-12-22 | https://stackoverflow.com/questions/77701188/how-to-get-features-selected-using-boruta-in-a-pandas-dataframe-with-headers | I have this boruta code, and I want to generate the results in pandas with columns included model = RandomForestRegressor(n_estimators=100, max_depth=5, random_state=42) # let's initialize Boruta feat_selector = BorutaPy( verbose=2, estimator=model, n_estimators='auto', max_iter=10, # numero di iterazioni da fare random_state=42, ) # train Boruta # N.B.: X and y must be numpy arrays feat_selector.fit(np.array(X), np.array(y)) # print support and ranking for each feature print("\n------Support and Ranking for each feature------\n") for i in range(len(feat_selector.support_)): if feat_selector.support_[i]: print("Passes the test: ", X.columns[i], " - Ranking: ", feat_selector.ranking_[i], "✔️") else: print("Doesn't pass the test: ", X.columns[i], " - Ranking: ", feat_selector.ranking_[i], "❌") # features selected by Boruta X_filtered = feat_selector.transform(np.array(X)) My selected result is this: X.columns[feat_selector.support_] Index(['J80', 'J100', 'J160', 'J200', 'J250'], dtype='object') X_filtered array([[12.73363 , 8.518314 , 5.2625847 , ..., 0.06733382]]) How do I generate the result in Pandas dataframe with the headers? Now I have up to 25 headers. | Since support_ is a boolean mask, you can index the columns and create a new dataframe. X_filtered = pd.DataFrame( feat_selector.transform(X.values), columns=X.columns[feat_selector.support_] ) Then again, with the latest master version, you can pass a dataframe to transform() and flag return_df=True. So that would look like: X_filtered = feat_selector.transform(X, return_df=True) | 2 | 1 |
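To get the per-feature ranking (not just the selected columns) back into a labelled frame, a small sketch built from the same `ranking_` and `support_` attributes used in the question's print loop; `ranking_df` is just an illustrative name.

```python
import pandas as pd

ranking_df = (
    pd.DataFrame({
        "feature": X.columns,
        "ranking": feat_selector.ranking_,
        "selected": feat_selector.support_,
    })
    .sort_values("ranking")
    .reset_index(drop=True)
)
print(ranking_df)
```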
77,701,100 | 2023-12-21 | https://stackoverflow.com/questions/77701100/github-self-hosted-runner-complains-of-python-version | I am trying to set a GitHub self-hosted runner for my project and I keep getting the following error: Run actions/setup-python@v3 Version 3.9 was not found in the local cache Error: Version 3.9 with arch x64 not found The list of all available versions can be found here: https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json I have to install pytorch, and so I could not use the runners provided by GitHub. The python version of the machine where the runner is installed is Python 3.9.13, and the operational system is AlmaLinux release 9.3 (Shamrock Pampas Cat). I am also linking the .yaml file responsible for the workflow: name: Python application on: push: branches: [ "main" ] pull_request: branches: [ "main" ] permissions: contents: read jobs: build: # Due to cuda constraings, I needed to set up a self-hosted runner! #runs-on: ubuntu-latest runs-on: self-hosted steps: - uses: actions/checkout@v3 - name: Set up Python 3.9 uses: actions/setup-python@v3 with: python-version: "3.9" cache: "pip" env: AGENT_TOOLSDIRECTORY: /opt/hostedtoolcache - name: Install dependencies run: | python -m pip install --upgrade pip pip install flake8 pytest # Do I need to create the full enviroment =p #conda env create -f environment.yml #conda activate flow_corrections if [ -f requirements.yml ]; then pip install -r requirements.yml; fi - name: Lint with flake8 run: | # stop the build if there are Python syntax errors or undefined names flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics - name: Test with pytest run: | pytest *.py I know that the python version written here is 3.9, but I also tested writing 3.9.13 and I got the same error! 
I am also making available the requirements.yml for the condo environment: name: flow_corrections channels: - conda-forge - defaults dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=2_kmp_llvm - _py-xgboost-mutex=2.0=gpu_0 - alsa-lib=1.2.8=h166bdaf_0 - attr=2.5.1=h166bdaf_1 - boost-histogram=1.4.0=py310hd41b1e2_1 - brotli=1.1.0=hd590300_1 - brotli-bin=1.1.0=hd590300_1 - bzip2=1.0.8=hd590300_5 - ca-certificates=2023.11.17=hbcca054_0 - cairo=1.16.0=hbbf8b49_1016 - certifi=2023.11.17=pyhd8ed1ab_0 - click=8.1.7=unix_pyh707e725_0 - colorama=0.4.6=pyhd8ed1ab_0 - contourpy=1.2.0=py310hd41b1e2_0 - cramjam=2.7.0=py310hcb5633a_1 - cuda-cudart=12.0.107=hd3aeb46_7 - cuda-cudart_linux-64=12.0.107=h59595ed_7 - cuda-nvrtc=12.0.76=hd3aeb46_2 - cuda-nvtx=12.0.76=h59595ed_1 - cuda-version=12.0=hffde075_2 - cudnn=8.8.0.121=h264754d_4 - cycler=0.12.1=pyhd8ed1ab_0 - dbus=1.13.6=h5008d03_3 - exceptiongroup=1.2.0=pyhd8ed1ab_0 - expat=2.5.0=hcb278e6_1 - fastparquet=2023.10.1=py310h1f7b6fc_0 - filelock=3.13.1=pyhd8ed1ab_0 - font-ttf-dejavu-sans-mono=2.37=hab24e00_0 - font-ttf-inconsolata=3.000=h77eed37_0 - font-ttf-source-code-pro=2.038=h77eed37_0 - font-ttf-ubuntu=0.83=h77eed37_1 - fontconfig=2.14.2=h14ed4e7_0 - fonts-conda-ecosystem=1=0 - fonts-conda-forge=1=0 - fonttools=4.46.0=py310h2372a71_0 - freetype=2.12.1=h267a509_2 - fsspec=2023.12.1=pyhca7485f_0 - gettext=0.21.1=h27087fc_0 - glib=2.78.3=hfc55251_0 - glib-tools=2.78.3=hfc55251_0 - gmp=6.3.0=h59595ed_0 - gmpy2=2.1.2=py310h3ec546c_1 - graphite2=1.3.13=h58526e2_1001 - gst-plugins-base=1.22.3=h938bd60_1 - gstreamer=1.22.3=h977cf35_1 - harfbuzz=7.3.0=hdb3a94d_0 - hep_ml=0.7.2=pyhd8ed1ab_0 - hist=2.7.2=ha770c72_1 - hist-base=2.7.2=pyhd8ed1ab_1 - histoprint=2.4.0=pyhd8ed1ab_0 - icu=72.1=hcb278e6_0 - iminuit=2.24.0=py310hc6cd4ac_0 - iniconfig=2.0.0=pyhd8ed1ab_0 - jinja2=3.1.2=pyhd8ed1ab_1 - joblib=1.3.2=pyhd8ed1ab_0 - keyutils=1.6.1=h166bdaf_0 - kiwisolver=1.4.5=py310hd41b1e2_1 - krb5=1.20.1=h81ceb04_0 - lame=3.100=h166bdaf_1003 - lcms2=2.15=h7f713cb_2 - ld_impl_linux-64=2.40=h41732ed_0 - lerc=4.0.0=h27087fc_0 - libabseil=20230802.1=cxx17_h59595ed_0 - libblas=3.9.0=20_linux64_openblas - libbrotlicommon=1.1.0=hd590300_1 - libbrotlidec=1.1.0=hd590300_1 - libbrotlienc=1.1.0=hd590300_1 - libcap=2.69=h0f662aa_0 - libcblas=3.9.0=20_linux64_openblas - libclang=16.0.6=default_hb11cfb5_3 - libclang13=16.0.6=default_ha2b6cf4_3 - libcublas=12.0.1.189=hd3aeb46_3 - libcufft=11.0.0.21=hd3aeb46_2 - libcups=2.3.3=h36d4200_3 - libcurand=10.3.1.50=hd3aeb46_1 - libcusolver=11.4.2.57=hd3aeb46_2 - libcusparse=12.0.0.76=hd3aeb46_2 - libdeflate=1.19=hd590300_0 - libedit=3.1.20191231=he28a2e2_2 - libevent=2.1.12=hf998b51_1 - libexpat=2.5.0=hcb278e6_1 - libffi=3.4.2=h7f98852_5 - libflac=1.4.3=h59595ed_0 - libgcc-ng=13.2.0=h807b86a_3 - libgcrypt=1.10.3=hd590300_0 - libgfortran-ng=13.2.0=h69a702a_3 - libgfortran5=13.2.0=ha4646dd_3 - libglib=2.78.3=h783c2da_0 - libgpg-error=1.47=h71f35ed_0 - libhwloc=2.9.3=default_h554bfaf_1009 - libiconv=1.17=h166bdaf_0 - libjpeg-turbo=2.1.5.1=hd590300_1 - liblapack=3.9.0=20_linux64_openblas - libllvm16=16.0.6=h5cf9203_2 - libmagma=2.7.2=h173bb3b_1 - libmagma_sparse=2.7.2=h173bb3b_1 - libnsl=2.0.1=hd590300_0 - libnvjitlink=12.0.76=hd3aeb46_2 - libogg=1.3.4=h7f98852_1 - libopenblas=0.3.25=pthreads_h413a1c8_0 - libopus=1.3.1=h7f98852_1 - libpng=1.6.39=h753d276_0 - libpq=15.3=hbcd7760_1 - libprotobuf=4.24.4=hf27288f_0 - libsndfile=1.2.2=hc60ed4a_1 - libsqlite=3.44.2=h2797004_0 - libstdcxx-ng=13.2.0=h7e041cc_3 - libsystemd0=255=h3516f8a_0 - 
libtiff=4.6.0=h29866fb_1 - libuuid=2.38.1=h0b41bf4_0 - libuv=1.46.0=hd590300_0 - libvorbis=1.3.7=h9c3ff4c_0 - libwebp-base=1.3.2=hd590300_0 - libxcb=1.15=h0b41bf4_0 - libxgboost=1.7.6=cuda120_h75debf4_6 - libxkbcommon=1.6.0=h5d7e998_0 - libxml2=2.11.5=h0d562d8_0 - libzlib=1.2.13=hd590300_5 - llvm-openmp=17.0.6=h4dfa4b3_0 - lz4-c=1.9.4=hcb278e6_0 - magma=2.7.2=h51420fd_1 - markupsafe=2.1.3=py310h2372a71_1 - matplotlib=3.8.2=py310hff52083_0 - matplotlib-base=3.8.2=py310h62c0568_0 - mkl=2022.2.1=h84fe81f_16997 - mpc=1.3.1=hfe3b2da_0 - mpfr=4.2.1=h9458935_0 - mpg123=1.32.3=h59595ed_0 - mplhep=0.3.31=pyhd8ed1ab_0 - mplhep_data=0.0.3=pyhd8ed1ab_0 - mpmath=1.3.0=pyhd8ed1ab_0 - munkres=1.1.4=pyh9f0ad1d_0 - mysql-common=8.0.33=hf1915f5_6 - mysql-libs=8.0.33=hca2cd23_6 - nccl=2.19.4.1=h3a97aeb_0 - ncurses=6.4=h59595ed_2 - networkx=3.2.1=pyhd8ed1ab_0 - nspr=4.35=h27087fc_0 - nss=3.95=h1d7d5a4_0 - numpy=1.26.2=py310hb13e2d6_0 - openjpeg=2.5.0=h488ebb8_3 - openssl=3.1.4=hd590300_0 - packaging=23.2=pyhd8ed1ab_0 - pandas=2.1.3=py310hcc13569_0 - pcre2=10.42=hcad00b1_0 - pillow=10.0.1=py310h29da1c1_1 - pip=23.3.1=pyhd8ed1ab_0 - pixman=0.42.2=h59595ed_0 - pluggy=1.3.0=pyhd8ed1ab_0 - ply=3.11=py_1 - pthread-stubs=0.4=h36c2ea0_1001 - pulseaudio-client=16.1=hb77b528_5 - py-xgboost=1.7.6=cuda120_py310h6bc6e9e_6 - pyparsing=3.1.1=pyhd8ed1ab_0 - pyqt=5.15.9=py310h04931ad_5 - pyqt5-sip=12.12.2=py310hc6cd4ac_5 - pytest=7.4.3=pyhd8ed1ab_0 - python=3.10.13=hd12c33a_0_cpython - python-dateutil=2.8.2=pyhd8ed1ab_0 - python-tzdata=2023.3=pyhd8ed1ab_0 - python_abi=3.10=4_cp310 - pytorch=2.1.0=cuda120py310ha3a684c_301 - pytz=2023.3.post1=pyhd8ed1ab_0 - pyyaml=6.0.1=py310h5eee18b_0 - qt-main=5.15.8=h01ceb2d_12 - readline=8.2=h8228510_1 - scikit-learn=1.3.2=py310h1fdf081_2 - scipy=1.11.4=py310hb13e2d6_0 - setuptools=68.2.2=pyhd8ed1ab_0 - sip=6.7.12=py310hc6cd4ac_0 - six=1.16.0=pyh6c4a22f_0 - sleef=3.5.1=h9b69904_2 - sympy=1.12=pypyh9d50eac_103 - tbb=2021.11.0=h00ab1b0_0 - threadpoolctl=3.2.0=pyha21a80b_0 - tk=8.6.13=noxft_h4845f30_101 - toml=0.10.2=pyhd8ed1ab_0 - tomli=2.0.1=pyhd8ed1ab_0 - tornado=6.3.3=py310h2372a71_1 - typing-extensions=4.8.0=hd8ed1ab_0 - typing_extensions=4.8.0=pyha770c72_0 - tzdata=2023c=h71feb2d_0 - uhi=0.4.0=pyhd8ed1ab_0 - unicodedata2=15.1.0=py310h2372a71_0 - wheel=0.42.0=pyhd8ed1ab_0 - xcb-util=0.4.0=hd590300_1 - xcb-util-image=0.4.0=h8ee46fc_1 - xcb-util-keysyms=0.4.0=h8ee46fc_1 - xcb-util-renderutil=0.3.9=hd590300_1 - xcb-util-wm=0.4.1=h8ee46fc_1 - xgboost=1.7.6=cuda120_py310h6bc6e9e_6 - xkeyboard-config=2.40=hd590300_0 - xorg-kbproto=1.0.7=h7f98852_1002 - xorg-libice=1.1.1=hd590300_0 - xorg-libsm=1.2.4=h7391055_0 - xorg-libx11=1.8.7=h8ee46fc_0 - xorg-libxau=1.0.11=hd590300_0 - xorg-libxdmcp=1.1.3=h7f98852_0 - xorg-libxext=1.3.4=h0b41bf4_2 - xorg-libxrender=0.9.11=hd590300_0 - xorg-renderproto=0.11.1=h7f98852_1002 - xorg-xextproto=7.3.0=h0b41bf4_1003 - xorg-xf86vidmodeproto=2.3.1=h7f98852_1002 - xorg-xproto=7.0.31=h7f98852_1007 - xz=5.2.6=h166bdaf_0 - yaml=0.2.5=h7f98852_2 - zlib=1.2.13=hd590300_5 - zstd=1.5.5=hfc55251_0 - pip: - zuko==1.0.1 prefix: /home/home1/institut_3a/daumann/.conda/envs/flow_corrections I cannot really see what is wrong, and I don't have much experience with this, so I would appreciate any help! Thanks in advance. | As documented, the setup-python action requires that you use a specific version of Ubuntu or Windows (same as the GitHub Actions environments) for the Action to work properly. 
As noted (emphasis added): Python distributions are only available for the same environments that GitHub Actions hosted environments are available for. If you are using an unsupported version of Ubuntu such as 19.04 or another Linux distribution such as Fedora, setup-python may not work. Since you noted you are using AlmaLinux (which is not an Ubuntu or even Debian-based OS), that would likely be the reason why you are experiencing this issue. | 4 | 2 |
77,700,624 | 2023-12-21 | https://stackoverflow.com/questions/77700624/weird-behavior-checking-if-np-nan-is-in-list-from-pandas-dataframe | It seems that checking if np.nan is in a list after pulling the list from a pandas dataframe does not correctly return True as expected. I have an example below to demonstrate: from numpy import nan import pandas as pd basic_list = [0.0, nan, 1.0, 2.0] nan_in_list = (nan in basic_list) print(f"Is nan in {basic_list}? {nan_in_list}") df = pd.DataFrame({'test_list': basic_list}) pandas_list = df['test_list'].to_list() nan_in_pandas_list = (nan in pandas_list) print(f"Is nan in {pandas_list}? {nan_in_pandas_list}") I would expect the output of this program to be: Is nan in [0.0, nan, 1.0, 2.0]? True Is nan in [0.0, nan, 1.0, 2.0]? True But instead it is Is nan in [0.0, nan, 1.0, 2.0]? True Is nan in [0.0, nan, 1.0, 2.0]? False What is the cause of this odd behavior or am I missing something? Edit: Adding on to this, if I run the code: for item in pandas_list: print(type(item)) print(item) it has the exact same output as if I were to swap pandas_list with basic_list. However pandas_list == basic_list evaluates to False. | TL;DR pandas is using different nan object than np.nan and in operator for list checks if the object is the same. The in operator invokes __contains__ magic method of list, here is source code: static int list_contains(PyListObject *a, PyObject *el) { PyObject *item; Py_ssize_t i; int cmp; for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(a); ++i) { item = PyList_GET_ITEM(a, i); Py_INCREF(item); cmp = PyObject_RichCompareBool(item, el, Py_EQ); Py_DECREF(item); } return cmp; } You see there is PyObject_RichCompareBool called which states: If o1 and o2 are the same object, PyObject_RichCompareBool() will always return 1 for Py_EQ and 0 for Py_NE. So: basic_list = [0.0, nan, 1.0, 2.0] for v in basic_list: print(v == nan, v is nan) print(nan in basic_list) Prints: False False False True False False False False True And: df = pd.DataFrame({"test_list": basic_list}) pandas_list = df["test_list"].to_list() for v in pandas_list: print(v == nan, v is nan) print(nan in pandas_list) Prints: False False False False False False False False False Evidently, pandas is using different nan object. | 3 | 2 |
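Given that identity-based `in` checks are fragile for NaN, a couple of alternatives (a sketch, using the `df` and `pandas_list` from the question) that test for "any NaN" by value rather than by object identity:

```python
import math
import numpy as np

# value-based check on the plain Python list
print(any(isinstance(v, float) and math.isnan(v) for v in pandas_list))  # True

# or stay in numpy / pandas, which compare by value rather than identity
print(np.isnan(pandas_list).any())    # True
print(df["test_list"].isna().any())   # True
```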
77,693,813 | 2023-12-20 | https://stackoverflow.com/questions/77693813/populate-nan-cells-in-dataframe-table-based-on-reference-table-based-on-specific | I have two tables. The first reference table is below: | Name | Target | Bonus | |------|--------:|------:| | Joe | 40 | 46 | | Phil | 38 | 42 | | Dean | 65 | 70 | The Python code to generate the table is: # Data for the table data = { 'Name': ['Joe', 'Phil', 'Dean'], 'Target': [40, 38, 65], 'Bonus': [46, 42, 70] } # Creating the DataFrame ref = pd.DataFrame(data) My second table is below: | week | Metrics | Joe | Dean | |------------|---------|----:|-----:| | 11/6/2023 | Target | 40 | 65 | | 11/6/2023 | Bonus | 46 | 70 | | 11/6/2023 | Score | 33 | 71 | | 11/13/2023 | Target | 40 | NaN | | 11/13/2023 | Bonus | 46 | NaN | | 11/13/2023 | Score | 45 | NaN | | 11/20/2023 | Target | 40 | 65 | | 11/20/2023 | Bonus | 46 | 70 | | 11/20/2023 | Score | 35 | 68 | | 11/27/2023 | Target | NaN | 65 | | 11/27/2023 | Bonus | NaN | 70 | | 11/27/2023 | Score | NaN | 44 | | 12/4/2023 | Target | 40 | 65 | | 12/4/2023 | Bonus | 46 | 70 | | 12/4/2023 | Score | 42 | 66 | The Python code to generate this table is: # Data for the new table data = { 'week': ['11/6/2023', '11/6/2023', '11/6/2023', '11/13/2023', '11/13/2023', '11/13/2023', '11/20/2023', '11/20/2023', '11/20/2023', '11/27/2023', '11/27/2023', '11/27/2023', '12/4/2023', '12/4/2023', '12/4/2023'], 'Metrics': ['Target', 'Bonus', 'Score', 'Target', 'Bonus', 'Score', 'Target', 'Bonus', 'Score', 'Target', 'Bonus', 'Score', 'Target', 'Bonus', 'Score'], 'Joe': [40, 46, 33, 40, 46, 45, 40, 46, 35, None, None, None, 40, 46, 42], 'Dean': [65, 70, 71, None, None, None, 65, 70, 68, 65, 70, 44, 65, 70, 66] } # Creating the DataFrame df = pd.DataFrame(data) As you can see Dean has a week where his Target, Bonus, and Score cells are blank. So does Joe in a later week. In these specific instances where the cell is NaN I want to populate them using the following rules: Get Target and Bonus cell values for each person from the first reference table and populate the NaN cell accordingly. Set the Score cell equal to the Target cell value for the person. 
My desired output table would look like this: | week | Metrics | Joe | Dean | |------------|---------|----:|-----:| | 11/6/2023 | Target | 40 | 65 | | 11/6/2023 | Bonus | 46 | 70 | | 11/6/2023 | Score | 33 | 71 | | 11/13/2023 | Target | 40 | 65 | | 11/13/2023 | Bonus | 46 | 70 | | 11/13/2023 | Score | 45 | 65 | | 11/20/2023 | Target | 40 | 65 | | 11/20/2023 | Bonus | 46 | 70 | | 11/20/2023 | Score | 35 | 68 | | 11/27/2023 | Target | 40 | 65 | | 11/27/2023 | Bonus | 46 | 70 | | 11/27/2023 | Score | 40 | 44 | | 12/4/2023 | Target | 40 | 65 | | 12/4/2023 | Bonus | 46 | 70 | | 12/4/2023 | Score | 42 | 66 | | Only one block of NaN per column at most Another possible solution, which loops through the df columns corresponding to each person and, for each block of NaN (identified by loc), assigns the corresponding block of values in ref (also identified by loc): names = ['Joe', 'Dean'] d = ref.assign(Score = ref['Target']) for x in names: df.loc[df[x].isna(), x] = d.loc[d['Name'].eq(x), 'Target':'Score'].T.values General case In case there is more than a single block of NaN per person, we need to change the code slightly: names = ['Joe', 'Dean'] d = ref.assign(Score = ref['Target']) for x in names: n_blocks = df[x].isna().sum() // 3 df.loc[df[x].isna(), x] = np.tile(d.loc[d['Name'].eq(x), 'Target':'Score'] .values.flatten(), n_blocks) Edit To satisfy the new requirement of the OP: Instead of order Target, Bonus and Score, it is needed the order Bonus, Target and Score. In such a case, we need to readjust the previous code: names = ['Joe', 'Dean'] d = ref.assign(Score = ref['Target']) d = d[['Name', 'Bonus', 'Target', 'Score']] for x in names: n_blocks = df[x].isna().sum() // 3 df.loc[df[x].isna(), x] = np.tile(d.loc[d['Name'].eq(x), 'Bonus':'Score'] .values.flatten(), n_blocks) Output: week Metrics Joe Dean 0 11/6/2023 Target 40.0 65.0 1 11/6/2023 Bonus 46.0 70.0 2 11/6/2023 Score 33.0 71.0 3 11/13/2023 Target 40.0 65.0 4 11/13/2023 Bonus 46.0 70.0 5 11/13/2023 Score 45.0 65.0 6 11/20/2023 Target 40.0 65.0 7 11/20/2023 Bonus 46.0 70.0 8 11/20/2023 Score 35.0 68.0 9 11/27/2023 Target 40.0 65.0 10 11/27/2023 Bonus 46.0 70.0 11 11/27/2023 Score 40.0 44.0 12 12/4/2023 Target 40.0 65.0 13 12/4/2023 Bonus 46.0 70.0 14 12/4/2023 Score 42.0 66.0 | 2 | 2 |
77,700,489 | 2023-12-21 | https://stackoverflow.com/questions/77700489/how-to-perform-a-conditional-sort-in-polars | I'm performing a binary classification and I want to manually review cases where the model either made an incorrect guess or it was correct but with low confidence. I want the most confident incorrect predictions to appear first, followed by less confident predictions, followed by correct predictions sorted from least to most confident. I want to manually check these examples to see if there's a pattern to the types of examples where the model needs help. My real project involves images created using Stable Diffusion, so I can create more targeted training examples if I see patterns. Here's a simplified example of my data. import polars as pl pl.DataFrame({"name":["Alice", "Bob", "Caroline", "Dutch", "Emily", "Frank", "Gerald", "Henry", "Isabelle", "Jack"], "truth":[1,0,1,0,1,0,0,1,1,0], "prediction": [1,1,1,0,0,1,0,1,1,0], "confidence": [0.343474,0.298461,0.420634,0.125515,0.772971,0.646964,0.833705,0.837181,0.790773,0.144983]}).with_columns( (1*(pl.col("truth") == pl.col("prediction"))).alias("correct_prediction") ) Emily should appear first because she's the highest-confidence incorrect classification. After the other wrong predictions, Dutch should appear next because he has the lowest-confidence correct guess. name truth prediction confidence correct_prediction Emily 1 0 0.772971 0 Frank 0 1 0.646964 0 Bob 0 1 0.298461 0 Dutch 0 0 0.125515 1 Jack 0 0 0.144983 1 Alice 1 1 0.343474 1 Caroline 1 1 0.420634 1 Isabelle 1 1 0.790773 1 Gerald 0 0 0.833705 1 Henry 1 1 0.837181 1 I'm moving from Pandas to Polars and can't figure out how to perform this sort. According to the documentation, you can use an expression with sort(), but it's not clear how I can include an if statement in the expression. I'd also be open to calculating a new sort column and then performing a simple sort() on that, if there's some formula that would do what I want. I know I could split the DataFrame into correct_predictions and incorrect_predictions, use different sorting logic on each, and then concat() them back together. I'm looking for something more elegant and less messy. | You made your example with random numbers and then listed off the names as though they're fixed so therefore I don't have the same order as you're saying but from the description I believe this is right. df.sort([ (good_pred:=pl.col('truth').eq(pl.col('prediction'))), (good_pred-1)*pl.col('confidence'), pl.col('confidence') ]) shape: (10, 5) ┌──────────┬───────┬────────────┬────────────┬────────────────────┐ │ name ┆ truth ┆ prediction ┆ confidence ┆ correct_prediction │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ f64 ┆ i32 │ ╞══════════╪═══════╪════════════╪════════════╪════════════════════╡ │ Bob ┆ 0 ┆ 1 ┆ 0.359116 ┆ 0 │ │ Emily ┆ 1 ┆ 0 ┆ 0.088916 ┆ 0 │ │ Frank ┆ 0 ┆ 1 ┆ 0.03878 ┆ 0 │ │ Alice ┆ 1 ┆ 1 ┆ 0.100859 ┆ 1 │ │ Jack ┆ 0 ┆ 0 ┆ 0.175356 ┆ 1 │ │ Henry ┆ 1 ┆ 1 ┆ 0.307952 ┆ 1 │ │ Gerald ┆ 0 ┆ 0 ┆ 0.376189 ┆ 1 │ │ Isabelle ┆ 1 ┆ 1 ┆ 0.586589 ┆ 1 │ │ Dutch ┆ 0 ┆ 0 ┆ 0.701469 ┆ 1 │ │ Caroline ┆ 1 ┆ 1 ┆ 0.989262 ┆ 1 │ └──────────┴───────┴────────────┴────────────┴────────────────────┘ The first sort expression checks if truth == prediction. When it doesn't then it returns false which appears first in the sort. In the second sort expression, I'm relying on bools being converted to 0/1. Since the first results you want to see are false, that's a 0. I subtract 1 from that and multiply it by the confidence. 
Since the result of that is either the negative of the confidence or zero it means that you get confidence in descending order for wrong predictions without interfering with correct prediction order. The last sort expression is just the confidence as-is which will apply to the rows where truth==prediction. | 2 | 2 |
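The question also mentions being open to a computed sort column followed by a plain `sort()`; a sketch of that variant (assuming a polars version that accepts keyword arguments in `with_columns`, otherwise use `.alias("sort_key")`). Negating the confidence for wrong predictions puts them first, most confident first, while correct predictions keep ascending confidence order.

```python
out = (
    df.with_columns(
        sort_key=pl.when(pl.col("truth") == pl.col("prediction"))
                   .then(pl.col("confidence"))
                   .otherwise(-pl.col("confidence"))
    )
    .sort("sort_key")
    .drop("sort_key")
)
print(out)
```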
77,699,109 | 2023-12-21 | https://stackoverflow.com/questions/77699109/replace-nan-values-in-a-column-with-values-from-a-different-column-from-the-same | I'm very new to polars and i'm trying to translate some pandas statements. The pandas line is as follows: df.loc[df['col_x'].isna(), 'col_y'] = df['col_z'] that is to say: replace the values of col_y corresponding to null values of col_x with values of col_z. In polars i'm trying with select, but to no avail. | Here you have the solution for Polars and Pandas: Polars: df = ( df .with_columns(pl.when(pl.col('col_x').is_nan()).then(pl.col('col_z')) .otherwise(pl.col('col_y')).alias('col_y')) ) Pandas: df["col_y"] = np.where(pd.isnull(df['col_x']), df['col_z'], df['col_y']) I hope this helps! | 2 | 3 |
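One detail worth noting about that answer: `is_nan()` only matches floating-point NaN values. If `col_x` holds proper nulls (`None`/null in polars, which is what pandas' `isna()` usually corresponds to), swap in `is_null()` instead; a sketch:

```python
df = (
    df
    .with_columns(pl.when(pl.col('col_x').is_null()).then(pl.col('col_z'))
                    .otherwise(pl.col('col_y')).alias('col_y'))
)
```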
77,696,026 | 2023-12-21 | https://stackoverflow.com/questions/77696026/cannot-close-browser-in-drissionpage | I have move from Selenium/SeleniumBase to DrissionPage for it's Undetectable Ability. Everything was easy to understand. Except the last part of it. That is Closing the Browser. The Docs says that page.quit() would close the browser. But When I run it in my local machine I get this error: Exception in thread Thread-1: Traceback (most recent call last): File "C:\Users\PC\AppData\Local\Programs\Python\Python39\lib\threading.py", line 973, in _bootstrap_inner self.run() File "C:\Users\PC\AppData\Local\Programs\Python\Python39\lib\threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "C:\Users\PC\AppData\Local\Programs\Python\Python39\lib\site-packages\DrissionPage\chromium_driver.py", line 123, in _recv_loop message = loads(message_json) File "C:\Users\PC\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads return _default_decoder.decode(s) File "C:\Users\PC\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "C:\Users\PC\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Here's The least of code: from DrissionPage import ChromiumPage from time import sleep page = ChromiumPage() page.get('https://www.google.com/') sleep(5) page.quit() My specs are: { "OS": "Windows 10 64 Bit", "OS_Version": "10.0 Build (10240)", "Browser": "Google Chrome", "Browser Version": "120.0.6099.130 (Official Build) (64-bit)" } How to fix this error and close it? Or, is there any alternative way to close it? | I am the author of DrissionPage, thank you for liking my work. Version 4.0 is fine. Now in beta, the latest is 4.0.0b26. pip install DrissionPage==4.0.0b26 | 2 | 4 |
77,692,905 | 2023-12-20 | https://stackoverflow.com/questions/77692905/opencv-contour-width-measurement-at-intervals | I wrote a OpenCV python script to identify the contour of an object in a microscope image. Selected contour image example: I need to meassure the width of the object at N intervals, where width is the distance between the right side to the left side of the contour in that interval, the line meassured is always horisontal. this will later be repeated on about 30000 images. Thsi is my latest attempt but it obviously not working # Find contours in the inverse binary image contours, _ = cv2.findContours(255 - binary_opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # Choose the largest contour (assumes the blob is the largest connected component) largest_contour = max(contours, key=cv2.contourArea) # Divide the contour into 10 equal horizontal intervals num_intervals = 10 interval_height = len(largest_contour) // num_intervals # Create evenly spaced intervals along the y-axis intervals_y = np.linspace(0, len(largest_contour), num_intervals + 1, dtype=int) # Draw and display each interval on the original image for i in range(num_intervals): start_idx = intervals_y[i] end_idx = intervals_y[i + 1] # Get points in the interval interval_points = largest_contour[start_idx:end_idx] # Draw the contour of the interval on the original image cv2.drawContours(image, [interval_points], -1, (0, 0, 255), 2) I am new to openCV so any tip would be appreciated | I had a little attempt at this as follows: #!/usr/bin/env python3 import cv2 as cv import numpy as np # Load image and extract width and height im = cv.imread('osjKP.jpg') h, w = im.shape[:2] # Convert to HSV to find all the green pixels HSV = cv.cvtColor(im, cv.COLOR_BGR2HSV) lo = np.array([50,200, 100]) # lower limit for green in HSV space hi = np.array([70,255, 255]) # upper limit for green in HSV space greens = cv.inRange(HSV, lo, hi) cv.imwrite('DEBUG-greens.png', greens) # Floodfill with white from the centre _, result, _, _ = cv.floodFill(greens, None, (int(w/2), int(h/2)), 255) cv.imwrite('DEBUG-floodfilled.png', result) # Count the non-zero pixels in each row, i.e. widths[0] is the width of the top row widths = np.count_nonzero(result, axis=1) print(widths) My assumption that it is safe to flood fill from the centre may not be correct for your other images (e.g. if the item you are measuring is narrow) so you may want to fill from the top-left and top-right corner and then invert, for example. If you only want the widths of every 20th row, change the last couple of lines to: # Count the non-zero pixels in each row N = 20 widths = np.count_nonzero(result[::N,:], axis=1) print(widths) Output [ 918 902 907 899 896 904 911 894 912 913 897 884 912 916 895 910 892 878 869 871 888 881 870 846 856 860 866 896 876 873 878 880 881 889 892 898 900 917 931 954 966 976 982 990 992 1030] | 2 | 1 |
77,694,820 | 2023-12-20 | https://stackoverflow.com/questions/77694820/how-can-i-implement-retry-policy-with-httpx-in-python | I need to communicate with other services in my Python and FastAPI application, therefore I use the httpx library to be able to communicate asynchronously. So, I have the following code for POST requests: from typing import Any, Dict, Optional, Tuple from fastapi import File from httpx._client import AsyncClient async def post( *, url: str, files: Optional[Dict[str, File]] = None, json: Optional[Dict[str, Any]] = None, data: Optional[Dict[str, str]] = None, params: Optional[Dict[str, str]] = None, timeout: int = 10000 ) -> Tuple[bool, Any]: try: async with AsyncClient() as client: response = await client.post(url, files=files, json=json, data=data, timeout=timeout) response = response.json() if response.status_code == 200 else None if not response: return False, None return True, response except Exception as e: print(e) return False, None I would like to implement a retry policy so that if a request fails, it is retried, for example, up to 3 times. Is this possible and makes sense with httpx and async? I was looking at some tutorials on the internet but they seem to be outdated since the information they contain does not work Update: I tried the following approach with HTTPTransport but it didn't work for me: from httpx import HTTPTransport # here try: async with AsyncClient(transport=transport) as client: # here response = await client.post(url, files=files, json=json, data=data, timeout=timeout) response = response.json() if response.status_code == 200 else None if not response: return False, None return True, response except Exception as e: print(e) return False, None transport = HTTPTransport(retries=3) I get: 'HTTPTransport' object has no attribute 'aenter' | With the Async HTTPTransport class of httpx you can configure retry: from typing import Any, Dict, Optional, Tuple from fastapi import File from httpx import AsyncHTTPTransport from httpx._client import AsyncClient transport = AsyncHTTPTransport(retries=3) async def post( *, url: str, files: Optional[Dict[str, File]] = None, json: Optional[Dict[str, Any]] = None, data: Optional[Dict[str, str]] = None, params: Optional[Dict[str, str]] = None, timeout: int = 100 ) -> Tuple[bool, Any]: if params: parameters = [key + "=" + parameter for key, parameter in params.items()] parameters = "&".join(parameters) url += "?" + parameters try: async with AsyncClient(transport=transport) as client: response = await client.post(url, files=files, json=json, data=data, timeout=timeout) response = response.json() if response.status_code == 200 else None if not response: return False, None return True, response except Exception as e: print(e) return False, None I tried the above and it worked! | 3 | 6 |
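For completeness, a small driver sketch for calling the accepted answer's `post` helper (the URL and payload are placeholders). Note that the `retries` argument on the httpx transport only retries failures to establish a connection; it does not re-send requests that came back with an error status code.

```python
import asyncio

async def main():
    ok, payload = await post(
        url="https://example.com/api/endpoint",  # placeholder URL
        json={"ping": "pong"},
    )
    print(ok, payload)

asyncio.run(main())
```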
77,693,044 | 2023-12-20 | https://stackoverflow.com/questions/77693044/transform-a-string-representing-a-list-in-each-cell-of-a-polars-dataframe-column | I am new to polars library and the title says it all regarding what I am trying to do. Doing this with the pandas library I would use apply() and the build in eval() function of Python. since eval("[1,2,3]") returns [1,2,3]. This can be done in polars as well - below I have an expected output example - but polars strongly recommends to use its Expression API. I searched the Expr.str attribute but didn't find an expression that does this. Am I missing something or should go with apply()? data = {'col_string': ['[1,2,3]', '[4,5,6]']} df = pl.DataFrame(data) df = df.with_columns(pl.col('col_string').map_elements(eval).alias('col_list')) shape: (2, 2) ┌────────────┬───────────┐ │ col_string ┆ col_list │ │ --- ┆ --- │ │ str ┆ list[i64] │ ╞════════════╪═══════════╡ │ [1,2,3] ┆ [1, 2, 3] │ │ [4,5,6] ┆ [4, 5, 6] │ └────────────┴───────────┘ | As long as your string column is valid JSON, you could use polars.Expr.str.json_decode as follows. df.with_columns( pl.col("col_string").str.json_decode().alias("col_list") ) Output. shape: (2, 2) ┌────────────┬───────────┐ │ col_string ┆ col_list │ │ --- ┆ --- │ │ str ┆ list[i64] │ ╞════════════╪═══════════╡ │ [1,2,3] ┆ [1, 2, 3] │ │ [4,5,6] ┆ [4, 5, 6] │ └────────────┴───────────┘ | 4 | 2 |
77,692,836 | 2023-12-20 | https://stackoverflow.com/questions/77692836/how-to-promote-qpdfview-in-qt-designer | I am trying to upgrade my PDF viewer on a form from QWebEngineView to QPdfView (PyQt6). I originally promoted QWebEngineView in the QT Designer without an issue but as far as PDF functionality its a bit limited so upgrading to QPdfView. I promoted using QWidget as base: <?xml version="1.0" encoding="UTF-8"?> <ui version="4.0"> <class>MainWindow</class> <widget class="QMainWindow" name="MainWindow"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>800</width> <height>600</height> </rect> </property> <property name="windowTitle"> <string>MainWindow</string> </property> <widget class="QWidget" name="centralwidget"> <widget class="QPdfView" name="widget" native="true"> <property name="geometry"> <rect> <x>109</x> <y>69</y> <width>291</width> <height>371</height> </rect> </property> </widget> <widget class="QComboBox" name="comboBox"> <property name="geometry"> <rect> <x>60</x> <y>490</y> <width>79</width> <height>22</height> </rect> </property> </widget> <widget class="QPushButton" name="pushButton"> <property name="geometry"> <rect> <x>60</x> <y>530</y> <width>80</width> <height>22</height> </rect> </property> <property name="text"> <string>PushButton</string> </property> </widget> </widget> <widget class="QMenuBar" name="menubar"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>800</width> <height>19</height> </rect> </property> </widget> <widget class="QStatusBar" name="statusbar"/> </widget> <customwidgets> <customwidget> <class>QPdfView</class> <extends>QWidget</extends> <header>PyQt6.QtPdfWidgets</header> <container>1</container> </customwidget> </customwidgets> <resources/> <connections/> </ui> However, when I try to load it using: import sys from PyQt6 import QtWidgets, uic app = QtWidgets.QApplication(sys.argv) window = uic.loadUi("mainwindow.ui") window.show() app.exec() I get the following: Traceback (most recent call last): File "/home/serveracct/proj/promos/main.py", line 14, in <module> window = MainWindow() File "/home/serveracct/proj/promos/main.py", line 10, in __init__ uic.loadUi("test2.ui", self) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/load_ui.py", line 86, in loadUi return DynamicUILoader(package).loadUi(uifile, baseinstance) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/Loader/loader.py", line 62, in loadUi return self.parse(filename) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 1014, in parse self._handle_widget(ui_file.widget) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 842, in _handle_widget self.traverseWidgetTree(el) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 818, in traverseWidgetTree handler(self, child) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 280, in createWidget self.traverseWidgetTree(elem) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 818, in traverseWidgetTree handler(self, child) File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 271, in createWidget self.stack.push(self._setupObject(widget_class, parent, elem)) File 
"/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/uiparser.py", line 233, in _setupObject obj = self.factory.createQtObject(class_name, object_name, File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/objcreator.py", line 119, in createQtObject return self._cpolicy.instantiate(ctor, object_name, ctor_args, File "/home/serveracct/.pyenv/versions/3.10.8/lib/python3.10/site-packages/PyQt6/uic/Loader/qobjectcreator.py", line 145, in instantiate return ctor(*ctor_args, **ctor_kwargs) TypeError: QPdfView(parent: Optional[QWidget]): not enough arguments I tried promoting QPdfView in Qt Designer and tried to load the window using uic.loadUi() | This is a "bug" (quotes required) caused by the fact that, unlike any other widget, the parent argument of QPdfView is required. When child widgets are created by uic, they are always created with their parent in the constructor, using the parent=[parent widget] keyword. PyQt classes are a bit peculiar, and their constructors don't behave exactly like normal Python classes. Consider the following case: class MyClass: def __init__(self, parent): pass With the above, doing MyClass(parent=None) will be perfectly valid. The QPdfView constructor is the following: QPdfView(parent: Optional[QWidget]) This would make us think that using keyworded parent argument will be equally acceptable as using it as a positional one, but, in reality this won't work. I suspect that the main problem relies on Qt, and PyQt just "fails" because it correctly follows the wrong signature: the parent shouldn't be mandatory, and that can be demonstrated by the fact that you can perfectly do QPdfView(None). There are two possible workarounds: fix the PyQt file that constructs the classes, or use a custom QPdfView subclass as a promoted widget. Fix qobjectcreator.py The culprit is the qobjectcreator.py file; its path should be the following: <python-lib>/site-packages/PyQt6/uic/Loader/. Then go to the instantiate() function (around line 136) and insert the following before the return: if ctor.__name__ == 'QPdfView' and 'parent' in ctor_kwargs: ctor_args = (ctor_kwargs.pop('parent'), ) Use a promoted subclass This is probably a better choice, as it doesn't require changing PyQt files and ensures compatibility for future versions. Just create a QPdfView subclass like this: class MyPdfView(QPdfView): def __init__(self, parent=None): super().__init__(parent) Then use MyPdfView instead of QPdfView as class name for the promoted widget, using the file name as header. You can even include that class in your main script, just ensure that you properly use the if __name__ == '__main__': block to create the QApplication. | 2 | 3 |
77,689,241 | 2023-12-20 | https://stackoverflow.com/questions/77689241/how-can-literal-be-invoked-with-square-brackets-e-g-literaltest1 | The following code snippet shows a function named Literal being invoked with square brackets: Literal['r', 'rb', 'w', 'wb'] The example is from the CPython GitHub repo. Though we can use square brackets to call instances by using the __getitem__ method in Python classes, it doesn't seem to be the case here. | In the source code there are these decorators @_TypedCacheSpecialForm # this is a class @_tp_cache(typed=True) # not relevant here def Literal(self, *parameters): ... _TypedCacheSpecialForm is a class; when using a class as a decorator, the decorated function, here def Literal, is used to initialize an instance. Written differently you could write: Literal = _TypedCacheSpecialForm(lambda self, *parameters: ...) # or def _literal_function(self, *parameters): ... #contents of def Literal Literal = _TypedCacheSpecialForm(_literal_function) In short: Literal is an instance of a class which has a __getitem__ method. With the call to __getitem__, Literal._getitem is invoked, which is the actual function declared as def Literal in the source code. Literal[str] -> __getitem__(str) -> _getitem(str)/def Literal(str) then returns an instance of the _LiteralGenericAlias class, which you could say is the actual type hint. | 2 | 2 |
77,690,364 | 2023-12-20 | https://stackoverflow.com/questions/77690364/how-to-return-plain-text-in-fastapi | In this example, the entrypoint http://127.0.0.1:8000/ returns formatted text: "Hello \"World\"!" The quotes are masked by a slash, and quotes are added both at the beginning and at the end. How to return unformatted text, identical to my string Hello "World"!. import uvicorn from fastapi import FastAPI app = FastAPI() @app.get("/",) def read_root(): return 'Hello "World"!' uvicorn.run(app) | When you return a string from a FastAPI endpoint, FastAPI will automatically convert it to a JSON response, which is why the quotes are being escaped in your example. JSON strings must escape double quotes with a backslash. If you want to return unformatted text (plain text) and not JSON, you can do so by using FastAPI's PlainTextResponse object (docs here). I'm using FastApi version 0.104 here: import uvicorn from fastapi import FastAPI from fastapi.responses import PlainTextResponse app = FastAPI() @app.get( "/", response_class=PlainTextResponse, ) def read_root(): return 'Hello "World"!' uvicorn.run(app) | 8 | 13 |
77,689,970 | 2023-12-20 | https://stackoverflow.com/questions/77689970/python-not-working-on-a-bash-script-but-working-on-the-terminal-how-to-solve-it | I am trying to run a Python program through a Bash script. When I execute the python file by shell it works with no problem, but when I try to execute the script, the python keyword is not recognized. I have a fresh Ubuntu 23.10 installation and I added, on top of the ~/.bashrc file, the following line: alias python=python3 How to solve this problem? Toy example For sake of simplicity I made a toy example that displays the same behavior. I have an hello-world.py file, whose content is simply print("Hello World!") If I run it with the shell ($ python hello-world.py), it works smoothly: Hello World! Now I created a script start.sh whose content is: #!/bin/bash python hello-world.py But if I execute it $ ./start.sh I get the following error: ./start.sh: line 3: python: command not found | So, the issue is that your shebang isn't running bash as a login shell (which is pretty normal). You are aliasing python to python3 in your .bashrc, but that will only get run in a login shell. So try adding: #!/bin/bash -l to your shebang. Although, to be clear, I would consider this a hack. I would suggest that ideally you just use python3 in the shell script. Or, create a symlink. If you are on ubuntu, you can use python-is-python3: https://packages.ubuntu.com/focal/python-is-python3 | 2 | 3 |
77,680,073 | 2023-12-18 | https://stackoverflow.com/questions/77680073/ignore-specific-rules-in-specific-directory-with-ruff | I am using Ruff, a formatter or linter tool for Python code. I want to ignore some specific rules, and to write that config in pyproject.toml. My package structure is as follows. . ├── LICENSE ├── pyproject.toml ├── README.md ├── src │ └── mypackage/mymodule.py └── tests ├── doc.md └── test_mymodule.py And, I want to ignore rules pydocstyle (D) in the tests/ directory. | You can use Ruff's per-file-ignores and specify individual files, or select a directory tree with a wildcard (star). You can ignore a "letter class" by specifying only the letter. Example: Ignore specific rule in a specific file To ignore line-length violations in your tests, add this to pyproject.toml: [tool.ruff.lint.per-file-ignores] "foofile.py" = ["E501"] Example: Ignore a class of rules in a whole directory tree To ignore all pydocstyle errors (starting with "D") in your tests, add this to pyproject.toml: [tool.ruff.lint.per-file-ignores] "tests/*" = ["D"] The related Ruff docs are here. | 20 | 29 |
77,684,593 | 2023-12-19 | https://stackoverflow.com/questions/77684593/is-there-a-way-to-retain-keys-in-polars-left-join | Is there a way to retain both the left and right join keys when performing a left join using the Python Polars library? Currently it seems that only the left join key is retained but the right join key gets dropped. Please see example code below: import polars as pl df = pl.DataFrame( { "foo": [1, 2, 3], "bar": [6.0, 7.0, 8.0], "ham": ["a", "b", "c"], } ) other_df = pl.DataFrame( { "apple": ["x", None, "z"], "ham": ["a", "b", "d"], } ) df.join(other_df, on="ham", how="left") In other data manipulation libraries this is possible for example using the R dplyr library you can do the below or even in standard SQL we can access the right join key in the SELECT clause. library(dplyr) df <- data.frame( foo = c(1, 2, 3), bar = c(6.0, 7.0, 8.0), ham = c("a", "b", "c") ) other_df <- data.frame( apple = c("x", NA, "z"), ham = c("a", "b", "d") ) df %>% left_join(other_df, by = c("ham" = "ham"), suffix = c("", "_right"), keep = TRUE) | join now has a coalesce= parameter to control this behaviour. df.join(other_df, on="ham", how="left", coalesce=False) shape: (3, 5) ┌─────┬─────┬─────┬───────┬───────────┐ │ foo ┆ bar ┆ ham ┆ apple ┆ ham_right │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ f64 ┆ str ┆ str ┆ str │ ╞═════╪═════╪═════╪═══════╪═══════════╡ │ 1 ┆ 6.0 ┆ a ┆ x ┆ a │ │ 2 ┆ 7.0 ┆ b ┆ null ┆ b │ │ 3 ┆ 8.0 ┆ c ┆ null ┆ null │ └─────┴─────┴─────┴───────┴───────────┘ | 3 | 1 |
77,676,556 | 2023-12-18 | https://stackoverflow.com/questions/77676556/whats-the-correct-way-to-use-user-local-python-environment-under-pep668 | I have tried to install Python packages on Ubuntu 24.04, but found I cannot do that with --user as I could on 22.04. PEP 668 says it is for avoiding package conflicts between system-wide packages and user-installed packages. Example: $ pip install setuptools --user error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. I am really confused by the current rules and cannot install any package into my user-local environment. How can I manage my user-local environment now? And how can I use the latest pip (not the Linux-distro version) and other packages by default for the current user? My environment (Dockerfile just to reproduce): FROM ubuntu:24.04 # add python RUN apt install -y python3-pip python3-venv python-is-python3 pipx USER ubuntu WORKDIR /app I know I can use some environment-management tools (pyenv) to do that, but is there any built-in method to bring my user-local environment back? | I finally found that the best solution is to use third-party tools (pyenv, conda, mini-forge). Pyenv uses its own python and pip, independent of the system's, and uses them by default. See $ where pip /home/john/.pyenv/shims/pip /home/john/.local/bin/pip /usr/local/bin/pip /usr/bin/pip /bin/pip So the system python and the daily-used python are separated. Users can use pip install xxx like in previous days. | 8 | 3 |
77,657,891 | 2023-12-14 | https://stackoverflow.com/questions/77657891/in-polars-how-can-you-add-row-numbers-within-windows | How can I add row numbers within windows in a DataFrame? This example depicts what I want (in the counter column): >>> df = pl.DataFrame([{'groupings': 'a', 'target_count_over_windows': 1}, {'groupings': 'a', 'target_count_over_windows': 2}, {'groupings': 'a', 'target_count_over_windows': 3}, {'groupings': 'b', 'target_count_over_windows': 1}, {'groupings': 'c', 'target_count_over_windows': 1}, {'groupings': 'c', 'target_count_over_windows': 2}, {'groupings': 'd', 'target_count_over_windows': 1}, {'groupings': 'd', 'target_count_over_windows': 2}, {'groupings': 'd', 'target_count_over_windows': 3}]) >>> df shape: (9, 2) ┌───────────┬───────────────────────────┐ │ groupings ┆ target_count_over_windows │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═══════════╪═══════════════════════════╡ │ a ┆ 1 │ │ a ┆ 2 │ │ a ┆ 3 │ │ b ┆ 1 │ │ c ┆ 1 │ │ c ┆ 2 │ │ d ┆ 1 │ │ d ┆ 2 │ │ d ┆ 3 │ └───────────┴───────────────────────────┘ Obviously the df.with_row_numbers() method exists, but it adds row numbers for the whole dataframe. | You can generate an .int_range() over each group with a window function. df.with_columns(count = 1 + pl.int_range(pl.len()).over("groupings")) shape: (9, 3) ┌───────────┬───────────────────────────┬───────┐ │ groupings ┆ target_count_over_windows ┆ count │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═══════════╪═══════════════════════════╪═══════╡ │ a ┆ 1 ┆ 1 │ │ a ┆ 2 ┆ 2 │ │ a ┆ 3 ┆ 3 │ │ b ┆ 1 ┆ 1 │ │ c ┆ 1 ┆ 1 │ │ c ┆ 2 ┆ 2 │ │ d ┆ 1 ┆ 1 │ │ d ┆ 2 ┆ 2 │ │ d ┆ 3 ┆ 3 │ └───────────┴───────────────────────────┴───────┘ | 2 | 4 |
77,683,711 | 2023-12-19 | https://stackoverflow.com/questions/77683711/writing-nested-dictionary-to-excel-using-openpyxl-in-python | I want to write a nested dictionary to a sheet using openpyxl in which the keys should be used as header and values which are also a inner dictionary which has list values in it as well to be the records for header in which parent keys they are present. My code hasn't been able to achieve the complete the scenario but I was able divide data into required stages which I want but the issues are blank cells and List values which I want to be divided like: LKP_GetData -> ('FIL_UpdateChangedRecord', 'EXP_DetectChanges') in stage3 which has list values should be broken to LKP_GetData -> ('FIL_UpdateChangedRecord') and LKP_GetData -> ('EXP_DetectChanges') in stage 3 and placed in individual cells. Same goes with other such data in other stages My code: import openpyxl # Your nested dictionary nested_dict = { 0: {'DIM_NRC_CUSTOMER_TMP': ['SQ_DIM_NRC_CUSTOMER_TMP'], 'SAT_CONTACT': ['SQ_HUB_CUSTOMER'], 'SAT_CUSTOMER_DERIVED': ['SQ_HUB_CUSTOMER'], 'SAT_CUSTOMER': ['SQ_HUB_CUSTOMER'], 'SAT_CUSTOMER_PREDICTED': ['SQ_HUB_CUSTOMER'], 'HUB_CUSTOMER': ['SQ_HUB_CUSTOMER']}, 1: {'SQ_DIM_NRC_CUSTOMER_TMP': ['UPDTRANS'], 'SQ_HUB_CUSTOMER': ['LKP_SH_MOBILE_PH_NO', 'EXP_PREPARE_DATA', 'lkp_LNK_CUSTOMER_CUST_GROUP', 'LKP_CIS_CUSTOMER', 'lkp_REF_EARLY_ADOPTERS', 'LKP_GPRS_USAGE_AVG', 'LKP_EMP_FLAG', 'lkp_SH_MOBILE_PH_Active', 'exp_MOBILE_PH_NO']}, 2: {'UPDTRANS': ['DIM_NRC_CUSTOMER_Update'], 'LKP_SH_MOBILE_PH_NO': ['EXP_PREPARE_DATA'], 'EXP_PREPARE_DATA': ['FIL_InsertNewRecord', 'FIL_UpdateChangedRecord', 'EXP_DetectChanges', 'LKP_GetData'], 'lkp_LNK_CUSTOMER_CUST_GROUP': ['EXP_PREPARE_DATA'], 'LKP_CIS_CUSTOMER': ['EXP_PREPARE_DATA'], 'lkp_REF_EARLY_ADOPTERS': ['EXP_PREPARE_DATA'], 'LKP_GPRS_USAGE_AVG': ['EXP_PREPARE_DATA'], 'LKP_EMP_FLAG': ['EXP_PREPARE_DATA'], 'lkp_SH_MOBILE_PH_Active': ['EXP_PREPARE_DATA'], 'exp_MOBILE_PH_NO': ['lkp_SH_MOBILE_PH_Active']}, 3: {'FIL_InsertNewRecord': ['UPD_ForceInserts'], 'FIL_UpdateChangedRecord': ['UPD_ChangedUpdate'], 'EXP_DetectChanges': ['FIL_InsertNewRecord', 'FIL_UpdateChangedRecord'], 'LKP_GetData': ['FIL_UpdateChangedRecord', 'EXP_DetectChanges']}, 4: {'UPD_ForceInserts': ['DIM_NRC_CUSTOMER_Insert'], 'UPD_ChangedUpdate': ['DIM_NRC_CUSTOMER_TMP_Insert']} } # Create a new Excel workbook workbook = openpyxl.load_workbook("C:\\Desktop\\multiple_sheets_details_new.xlsx") sheet = workbook.create_sheet(title = "MAPPING",index=0) # Write headers headers = [f'Stage{i}' for i in nested_dict.keys()] sheet.append(headers) # Write data to Excel for key in set(key for inner_dict in nested_dict.values() for key in inner_dict): row_data = [] for inner_dict in nested_dict.values(): if key in inner_dict and isinstance(inner_dict[key], list): row_data.append(f'{key} -> {tuple(inner_dict[key])}') else: row_data.append(inner_dict.get(key, [""])[0]) sheet.append(row_data) # Save the workbook workbook.save("C:\\Desktop\\multiple_sheets_details_new.xlsx") The issue is I am getting an output where there are blank cells in each column record(stage 0, stage 1, etc.) and I want them to be in sequence in each column without any blank cells. Also the List values should be broken in to key value records in the same column. 
Like LKP_GetData -> ('FIL_UpdateChangedRecord', 'EXP_DetectChanges') in stage3 which has list values should be broken to LKP_GetData -> ('FIL_UpdateChangedRecord') and LKP_GetData -> ('EXP_DetectChanges') in stage 3 and placed in individual cells My Output: I am expecting the output to look like: | Thanks for the expected result screen shot, I was close. Either way the easiest is probably just reading the dict and adding the values to the cells as they are read. Here is a quick option which I think is correct though may not be sorted the same way as your example; import openpyxl from openpyxl.utils.cell import get_column_letter from openpyxl.styles import Font # Your nested dictionary nested_dict = { 0: {'DIM_NRC_CUSTOMER_TMP': ['SQ_DIM_NRC_CUSTOMER_TMP'], 'SAT_CONTACT': ['SQ_HUB_CUSTOMER'], 'SAT_CUSTOMER_DERIVED': ['SQ_HUB_CUSTOMER'], 'SAT_CUSTOMER': ['SQ_HUB_CUSTOMER'], 'SAT_CUSTOMER_PREDICTED': ['SQ_HUB_CUSTOMER'], 'HUB_CUSTOMER': ['SQ_HUB_CUSTOMER']}, 1: {'SQ_DIM_NRC_CUSTOMER_TMP': ['UPDTRANS'], 'SQ_HUB_CUSTOMER': ['LKP_SH_MOBILE_PH_NO', 'EXP_PREPARE_DATA', 'lkp_LNK_CUSTOMER_CUST_GROUP', 'LKP_CIS_CUSTOMER', 'lkp_REF_EARLY_ADOPTERS', 'LKP_GPRS_USAGE_AVG', 'LKP_EMP_FLAG', 'lkp_SH_MOBILE_PH_Active', 'exp_MOBILE_PH_NO']}, 2: {'UPDTRANS': ['DIM_NRC_CUSTOMER_Update'], 'LKP_SH_MOBILE_PH_NO': ['EXP_PREPARE_DATA'], 'EXP_PREPARE_DATA': ['FIL_InsertNewRecord', 'FIL_UpdateChangedRecord', 'EXP_DetectChanges', 'LKP_GetData'], 'lkp_LNK_CUSTOMER_CUST_GROUP': ['EXP_PREPARE_DATA'], 'LKP_CIS_CUSTOMER': ['EXP_PREPARE_DATA'], 'lkp_REF_EARLY_ADOPTERS': ['EXP_PREPARE_DATA'], 'LKP_GPRS_USAGE_AVG': ['EXP_PREPARE_DATA'], 'LKP_EMP_FLAG': ['EXP_PREPARE_DATA'], 'lkp_SH_MOBILE_PH_Active': ['EXP_PREPARE_DATA'], 'exp_MOBILE_PH_NO': ['lkp_SH_MOBILE_PH_Active']}, 3: {'FIL_InsertNewRecord': ['UPD_ForceInserts'], 'FIL_UpdateChangedRecord': ['UPD_ChangedUpdate'], 'EXP_DetectChanges': ['FIL_InsertNewRecord', 'FIL_UpdateChangedRecord'], 'LKP_GetData': ['FIL_UpdateChangedRecord', 'EXP_DetectChanges']}, 4: {'UPD_ForceInserts': ['DIM_NRC_CUSTOMER_Insert'], 'UPD_ChangedUpdate': ['DIM_NRC_CUSTOMER_TMP_Insert']} } # Create a new Excel workbook workbook = openpyxl.load_workbook("C:\\Desktop\\multiple_sheets_details_new.xlsx") sheet = workbook.create_sheet(title="MAPPING", index=0) # Write headers headers = [f'Stage{i}' for i in nested_dict.keys()] sheet.append(headers) for col, inner_dict in enumerate(nested_dict.values(), 1): row = 2 # Start from row 2 sheet.cell(row=row-1, column=col).font = Font(bold=True) # Bold the header cells sheet.column_dimensions[get_column_letter(col)].width = 60 # Set a larger column width for key in inner_dict: for item in inner_dict[key]: sheet.cell(row=row, column=col).value = f"{key} -> {str(item)}" row += 1 # Save the workbook workbook.save("C:\\Desktop\\multiple_sheets_details_new.xlsx") | 2 | 2 |
77,686,009 | 2023-12-19 | https://stackoverflow.com/questions/77686009/when-i-want-to-logout-in-drf-api-browsable-it-shows-http-error-405 | I'm new to DRF and now I have an error on logout in the DRF browsable API: when I want to log out in the browsable API it shows me Error 405 (Method Not Allowed). This is my settings.py: REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework.authentication.BasicAuthentication', ), } This is my urls.py: urlpatterns = [ path('admin/', admin.site.urls), path('api-auth/', include('rest_framework.urls')) ] Please tell me what I can do to fix this error. As I said, I'm new to DRF, so I just wrote this code and nothing else. I think this error is because of BasicAuthentication. | It depends on the Django version you are using. If you are using version 5, try downgrading to 4.2.7. | 2 | 2 |
77,688,251 | 2023-12-19 | https://stackoverflow.com/questions/77688251/why-is-a-celery-task-not-consumed-by-a-worker-that-is-free | I have an application in Python with a microservices architecture and pipelines for which I use Celery along with RabbitMQ and Redis. In an application flow (machine learning training) 8 methodologies are needed, therefore a first worker called "Worker Training" sends 8 tasks to another worker called "Worker Training Model". This second one has 3 replicas to be able to finish the training faster. At first it works well, each worker consumes a methodology and processes it until finishing and consuming the next one. However I am seeing that for example at this moment, 5 of the 8 tasks have already been completed, there are 2 workers (of the 3 replicas) processing a task, but there is 1 worker doing nothing! It should be processing the last missing methodology! Any idea why this happens? I think that in the end it will end up being processed in one of the other 2 workers when they finish with the one they have at the moment, but I need to be more efficient and not have workers doing nothing that can consume tasks. My RabbitMQ dashboard looks like this right now: (That task is the missing methodology and should be done by the free worker...) | This is most likely caused by the prefetching. Read the Prefetch Limits section, and the section after to find out how to make worker reserve as many tasks as they have (free) worker processes. In short (TLDR): task_acks_late = True worker_prefetch_multiplier = 1 | 2 | 0 |
77,686,348 | 2023-12-19 | https://stackoverflow.com/questions/77686348/how-to-combine-different-styles-in-one-styler-map | I have a dataframe with 3 kind of data types - int64, float64 and object (string). How to apply format for two different data types in one function? Sample: def abc_style_format(cell_val): default = "" style_a = "background-color: green" style_b = "background-color: gold" style_c = "background-color: salmon" style_float = "precision=2" match cell_val: case "A": return style_a case "B": return style_b case "C": return style_c if type(cell_val) == "float64": return style_float return default df_abc.style.map(abc_style_format) Block match/case works properly, but part precision=2 doesn't work. The result: | Here is one way to do it by defining as many helper functions as needed: import pandas as pd def _abc_style_format(s: pd.Series) -> list[str]: """Helper function to format 'abc' columns. Args: s: target column. Returns: list of styles to apply to each target column cell. """ background_colors = [] for v in s: match v: case "A": background_colors.append("background-color: green") case "B": background_colors.append("background-color: gold") case "C": background_colors.append("background-color: salmon") return background_colors def _qty_format(s: pd.Series) -> list[str]: """Helper function to format 'qty' column. Args: s: target column. Returns: list of styles to apply to each target column cell. """ font_colors = [] for v in s: if v > 150: font_colors.append("color: red") else: font_colors.append("color: green") return font_colors And one main function: from pandas.io.formats.style import Styler def style(df: pd.DataFrame) -> Styler: """Function to style whole dataframe. Args: df: target dataframe. Returns: styled dataframe. """ styler = df.style for col in df.columns: match df[col].dtype: case "object": styler = styler.apply(_abc_style_format, subset=col) case "float64": styler = styler.format(precision=2, subset=col) case "int64": styler = styler.apply(_qty_format, subset=col) case _: pass return styler Then, with the following toy dataframe: df = pd.DataFrame( { "description": ["product1", "product2", "product3"], "qty": [130, 160, 110], "gm": [43.20000, 40.70000, 35.20000], "abc_gm_category": ["A", "B", "C"], "abc_amount_category": ["C", "A", "B"], } ) You can simply do: styled_df = style(df) styled_df Which gets you the expected output: | 2 | 0 |
77,662,046 | 2023-12-14 | https://stackoverflow.com/questions/77662046/oserror-winerror-6-the-handle-is-invalid-python | Hey, I am trying to make an application working in Windows. It's basically adjusting your monitor brightness. There is a function that minimizes the application, and I have an issue with minimizing. When clicking the minimize button a second time, I get this error: Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1921, in __call__ return self.func(*args) File "C:\Users\user\OneDrive\Desktop\Python_simple_projects\Desktop_brightness_app\test.py", line 249, in minimize_app self.icon.run() File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\pystray\_base.py", line 212, in run self._run() File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\pystray\_win32.py", line 120, in _run self._hwnd = self._create_window(self._atom) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\pystray\_win32.py", line 244, in _create_window hwnd = win32.CreateWindowEx( File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\site-packages\pystray\_util\win32.py", line 204, in _err raise ctypes.WinError() OSError: [WinError 6] The handle is invalid. Here are a few parts of the code below: import ttkbootstrap as tb import ttkbootstrap as ttk import PIL.Image import pystray class DesktopBrightnessApp: def __init__(self): self.minimize_button = tb.Button(self.root, text="Minimize", command=self.minimize_app) self.minimize_button.place(x=215, y=200) self.icon = pystray.Icon('icon', img, menu=pystray.Menu( pystray.MenuItem("Open GUI", self.on_move_clicked), pystray.MenuItem("Exit", self.on_move_clicked) )) def on_move_clicked(self, icon, item): if str(item) == "Open GUI": icon.stop() self.root.deiconify() elif str(item) == "Exit": icon.stop() self.root.destroy() def minimize_app(self): self.root.withdraw() self.icon.run() def run(self): self.root.mainloop() if __name__ == "__main__": app = DesktopBrightnessApp() app.run() | When icon.stop() is executed, you need to create a new instance of pystray.Icon in order to run icon.run(): class DesktopBrightnessApp: def __init__(self): ... # convert img to instance variable self.img self.img = ... ... # create the icon menu once self.menu = pystray.Menu( pystray.MenuItem("Open GUI", self.on_move_clicked), pystray.MenuItem("Exit", self.on_move_clicked) ) # don't create instance of pystray.Icon here ... def minimize_app(self): self.root.withdraw() # create instance of pystray.Icon here self.icon = pystray.Icon('icon', self.img, menu=self.menu) self.icon.run() ... | 2 | 3 |
77,651,663 | 2023-12-13 | https://stackoverflow.com/questions/77651663/ffmpeg-transform-mulaw-8000khz-audio-buffer-data-into-valid-bytes-format | I'm trying to read a bytes variable using ffmpeg, but the audio stream I listen to, sends me buffer data in mulaw encoded buffer like this: https://github.com/boblp/mulaw_buffer_data/blob/main/buffer_data I'm having trouble running the ffmpeg_read function from the transformers library found here: def ffmpeg_read(bpayload: bytes, sampling_rate: int) -> np.array: """ Helper function to read an audio file through ffmpeg. """ ar = f"{sampling_rate}" ac = "1" format_for_conversion = "f32le" ffmpeg_command = [ "ffmpeg", "-i", "pipe:0", "-ac", ac, "-ar", ar, "-f", format_for_conversion, "-hide_banner", "-loglevel", "quiet", "pipe:1", ] try: with subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE) as ffmpeg_process: output_stream = ffmpeg_process.communicate(bpayload) except FileNotFoundError as error: raise ValueError("ffmpeg was not found but is required to load audio files from filename") from error out_bytes = output_stream[0] audio = np.frombuffer(out_bytes, np.float32) if audio.shape[0] == 0: raise ValueError( "Soundfile is either not in the correct format or is malformed. Ensure that the soundfile has " "a valid audio file extension (e.g. wav, flac or mp3) and is not corrupted. If reading from a remote " "URL, ensure that the URL is the full address to **download** the audio file." ) return audio But everytime I get: raise ValueError( "Soundfile is either not in the correct format or is malformed. Ensure that the soundfile has " "a valid audio file extension (e.g. wav, flac or mp3) and is not corrupted. If reading from a remote " "URL, ensure that the URL is the full address to **download** the audio file." ) If I grab any wav file I can do something like this: import wave with open('./emma.wav', 'rb') as fd: contents = fd.read() print(contents) And running it through the function does work! So my question would be: How can I transform my mulaw encoded buffer data into a valid bytes format that works with ffmpeg_read()? EDIT: I've found a way using pywav (https://pypi.org/project/pywav/) # 1 stands for mono channel, 8000 sample rate, 8 bit, 7 stands for MULAW encoding wave_write = pywav.WavWrite("filename.wav", 1, 8000, 8, 7) wave_write.write(mu_encoded_data) wave_write.close() This is the result: https://github.com/boblp/mulaw_buffer_data/blob/main/filename.wav the background noise is acceptable. However, I want to use a FFMPEG instead to avoid creating a tmp file. 
| This worked for me: import subprocess import numpy as np import io def ffmpeg_read_mulaw(bpayload: bytes, sampling_rate: int) -> np.array: ar = f"{sampling_rate}" ac = "1" format_for_conversion = "f32le" ffmpeg_command = [ "ffmpeg", "-f", "mulaw", "-ar", ar, "-ac", ac, "-i", "pipe:0", "-b:a",#change the bitrate "256k", #change the bitrate to 256k "-f", format_for_conversion, "-hide_banner", "-loglevel", "quiet", "pipe:1", ] try: with subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE) as ffmpeg_process: output_stream = ffmpeg_process.communicate(bpayload) except FileNotFoundError as error: raise ValueError("ffmpeg was not found but is required to load audio files from filename") from error out_bytes = output_stream[0] audio = np.frombuffer(out_bytes, np.float32) if audio.shape[0] == 0: raise ValueError("Failed to decode mu-law encoded data with FFMPEG.") return audio # Example usage: # mu_encoded_data is your mu-law encoded buffer data mu_encoded_data = b"\x7F\xFF\x80\x01\x7F\xFF" sampling_rate = 8000 decoded_audio = ffmpeg_read_mulaw(mu_encoded_data, sampling_rate) print(decoded_audio) | 3 | 2 |
77,654,869 | 2023-12-13 | https://stackoverflow.com/questions/77654869/plumbum-fails-to-connect-via-jump-host-but-raw-command-succeeds | I'm trying to SSH into final_machine via the jump host portal_machine. Both remote machines are Linux, local is Windows. When I run the following command in a cmd, I can successfully connect ssh {username}@{final_machine} -oProxyCommand="ssh -W {final_machine}:22 {username}@{portal_machine}" And I can also successfully connect through python with ssh_command = plumbum.local["ssh"][f"{username}@{final_machine}", "-o", f"ProxyCommand=ssh -W {final_machine}:22 {username}@{portal_machine}"] ssh_command() However, I need to connect via an SshMachine object for compatibility, and when I try the following, it fails plumbum.SshMachine(final_machine, user=username, ssh_opts=[fr'-o ProxyCommand="ssh -W {final_machine}:22 {username}@{portal_machine}"']) with error Return code: | None Command line: | 'true ' Host: | {final machine} Stderr: | CreateProcessW failed error:2 I've tried replacing ssh with C:\Windows\System32\OpenSSH\ssh.exe, but no change. I have SSH keys set up from my machine to portal_machine, my machine to final_machine, and portal_machine to final_machine. Any other suggestions for how to debug would be appreciated. When I connect simply to the portal machine, it works fine. | I would rather use ProxyJump than ProxyCommand, and move as many settings as possible into ~/.ssh/config. Sample configuration for the mentioned hosts: HOST portal HostName portal.machine.host.name.or.IP User username IdentityFile ~/.ssh/your.portal.key HOST final HostName final.machine.host.name.or.IP User username IdentityFile ~/.ssh/your.final.key ProxyJump portal Tunnel then with: machine = plumbum.SshMachine('final') pwd = machine['pwd'] pwd() # '/home/ec2-user\n' | 2 | 1 |
77,677,185 | 2023-12-18 | https://stackoverflow.com/questions/77677185/how-can-i-reference-an-environment-variable-for-python-path-in-vscode-launch-con | I am using vscode on my desktop and laptop and each machine would generate a random folder name with a hash for the virtual environment path when creating the virtual environment using poetry. In order to use the same launch.json file in my project for both computers, I'd like to reference an environment variable instead of hard coding the virtual environment path names. I've tried the below but vscode is stating "The Python path in your debug configuration is invalid." How can I reference the environment variable for the "python" path? my ~/.zshrc: export PROJ_VENV=$HOME/.cache/pypoetry/virtualenvs/myproj-NMmw6p6o-py3.12 my launch.json: { "version": "0.2.0", "configurations": [ { "name": "Python: Django", "type": "python", "python": "${env:PROJ_VENV}/bin/python", "request": "launch", "program": "${workspaceFolder}/src/manage.py", "args": [ "runserver", ], "django": true } ] } | As of vscode 1.85, specifying "python": "${env:PROJ_VENV}/bin/python" in launch.json will not work. As a workaround, you can remove "python" from launch.json and set "python.defaultInterpreterPath": "${env:PROJ_VENV}/bin/python" in settings.json. You can then select "Use Python from python.defaultInterpreterPath" when selecting your python interpreter. | 3 | 1 |
77,686,427 | 2023-12-19 | https://stackoverflow.com/questions/77686427/how-to-set-vscode-mypy-type-checker-args-argument-for-python | I am learning Python. I use VS Code as my IDE, and I have some very strange and annoying errors. Examples: when I import modules import telegram I get the error "telegram" is not accessed Pylance invalid syntaxMypysyntax (module) telegram on my files, which is very annoying. I have to write import telegram #type:ignore for the error to go away. When I also write async def hello(self, update: Update, context: ContextTypes.DEFAULT_TYPE): await context.bot.send_message(chat_id=update.effective_chat.id, text="Hello, my name is MAS.") I get this error on chat_id=update.effective_chat.id Item "None" of "Optional[Chat]" has no attribute "id"Mypyunion-attr Item "None" of "Optional[Chat]" has no attribute "id"Mypyunion-attr Item "None" of "Optional[Chat]" has no attribute "id"Mypyunion-attr Item "None" of "Optional[Chat]" has no attribute "id"Mypyunion-attr Item "None" of "Optional[Chat]" has no attribute "id"Mypyunion-attr (property) effective_chat: Chat | None . Even with these errors I run the files perfectly without any issues; I don't know why VS Code is acting up. It is impacting my productivity. I have to add #type:ignore to make these errors go away. I have these running extensions. I have searched all of Google and Stack Overflow and found nothing that solves my problem. I came across mypy-type-checker.args on the VS Code marketplace. I believe it could solve my problem, but there are no docs on how to set the arguments; nothing at all on the VS Code marketplace. The docs for mypy even made less sense to me as a beginner. | I enabled Copilot and asked it to explain the problem to me. It told me to do if update.effective_chat is not None: await context.bot.send_message(chat_id=update.effective_chat.id, text="Hello, my name is MAS, i am your bot that provides movies to you for free.") instead of await context.bot.send_message(chat_id=update.effective_chat.id, text="Hello, my name is MAS, i am your bot that provides movies to you for free.") My code is now error-free. If you have a Copilot subscription, enable the extension to 6x your productivity. | 2 | 0 |
77,686,072 | 2023-12-19 | https://stackoverflow.com/questions/77686072/issues-with-azure-identity-when-using-federated-credentials | I'm running a GH Actions pipeline which is configured to log in to the Azure CLI using federated credentials (and this works fine): env: ARM_USE_OIDC: true steps: - name: az login uses: azure/login@v1 with: client-id: ${{ env.ARM_CLIENT_ID }} tenant-id: ${{ env.ARM_TENANT_ID }} subscription-id: ${{ env.ARM_SUBSCRIPTION_ID }} Then I'm running pytest which does some Python code, some TF runs, etc. Plain Az CLI\TF calls using subprocess work; however, when I'm using AzureCliCredential or DefaultAzureCredential, calls to get_token fail with: Output: ERROR: AADSTS700024: Client assertion is not within its valid time range. Current time: 2023-12-19T07:20:31.3554289Z, assertion valid from 2023-12-19T05:59:24.0000000Z, expiry time of assertion 2023-12-19T06:04:24.0000000Z The same code was working previously using certificate auth. EDIT: What was confusing me is the fact that the tokens I'm getting have a proper lifetime, which doesn't match the error. What I'm thinking now is that the underlying OIDC token issued by GitHub has only 5 minutes of lifetime, hence it doesn't matter if the OAuth tokens have a 1h lifetime. EDIT: related issues: https://github.com/Azure/login/issues/372 https://github.com/Azure/login/issues/180 | The underlying OIDC token issued by GitHub has only 5 minutes of lifetime, hence it doesn't matter if the OAuth tokens have a 1h lifetime. My workaround is to refresh the Azure CLI auth every time I need to use it, until anything can be done about this (at least I don't see a way to extend the token lifetime: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) def get_azure_credentials(): token_request = os.environ.get("ACTIONS_ID_TOKEN_REQUEST_TOKEN") token_uri = os.environ.get("ACTIONS_ID_TOKEN_REQUEST_URL") subprocess_helper(f'token=$(curl -H "Authorization: bearer {token_request}" "{token_uri}&audience=api://AzureADTokenExchange" | jq .value -r) && az login --service-principal -u {CLIENT_ID} -t {TENANT_ID} --federated-token $token') return AzureCliCredential() PS. subprocess_helper is a wrapper around: subprocess.run(["/bin/sh", "-c", run_me], stdout=subprocess.PIPE, stderr=subprocess.PIPE) PPS. If anyone knows a better solution, I'm up for it. | 2 | 0 |
77,657,592 | 2023-12-14 | https://stackoverflow.com/questions/77657592/r-equivalent-of-python-add-volume-function | I am working with the R programming language. I found this tutorial/website over here that shows how to fill the volume under the surface in Python/Plotly (https://community.plotly.com/t/fill-volume-under-the-surface/64944/2): import numpy as np import plotly.express as px import plotly.graph_objects as go f = lambda x,y: np.cos(x)+np.sin(y)+4 #surface x, y = np.meshgrid( np.linspace(-5, 5, 100), np.linspace(-5, 5, 100) ) z = f(x,y) #patch of surface lower_x, upper_x = -1, 2 lower_y, upper_y = -2, 4 int_x, int_y = np.meshgrid( np.linspace(lower_x, upper_x, 50), np.linspace(lower_y, upper_y, 70) ) int_z = f(int_x,int_y) fig= go.Figure() fig.add_surface(x=x[0], y=y[:, 0], z=z, showscale=False, colorscale=px.colors.sequential.Pinkyl, opacity=0.6) fig.add_surface(x=int_x[0],y=int_y[:, 0],z=int_z, showscale=False, colorscale=px.colors.sequential.Agsunset, opacity=1) lower_z= 0 upper_z= int_z.max() X, Y, Z = np.mgrid[lower_x:upper_x:50j, lower_y:upper_y:50j, lower_z:upper_z:75j] vals = Z-f(X,Y) fig.add_volume(x=X.flatten(), y=Y.flatten(), z=Z.flatten(), value= vals.flatten(), surface_show=True, surface_count=2, colorscale=[[0, px.colors.sequential.Agsunset[4]],[1.0, px.colors.sequential.Agsunset[4]]], showscale=False, isomin=-upper_z, isomax=0) #isomin=-upper_z corresponds to z=f(x,y)-upper_z, and isomax=0, to z=f(x,y) fig.update_layout(height=600, width=600, scene_zaxis_range=[0, upper_z+0.5], scene_camera_eye=dict(x=1.85, y=1.85, z=0.7)) My Question: I am trying to replicate this code in R. Doing some research into this, I noticed that Plotly within R does not have an add_volume() function as in Python. I tried to attempt to work around this and find a new way to do this: Attempt 1: library(plotly) f <- function(x, y) { cos(x) + sin(y) + 4 } # Surface x <- seq(-5, 5, length.out = 100) y <- seq(-5, 5, length.out = 100) z <- outer(x, y, f) # Patch of surface lower_x <- -1 upper_x <- 2 lower_y <- -2 upper_y <- 4 int_x <- seq(lower_x, upper_x, length.out = 50) int_y <- seq(lower_y, upper_y, length.out = 70) int_z <- outer(int_x, int_y, f) fig <- plot_ly() %>% add_surface(x = ~x, y = ~y, z = ~z, showscale = FALSE, opacity = 0.6, colors = c('#e31a1c', '#fb9a99')) %>% add_surface(x = ~int_x, y = ~int_y, z = ~int_z, showscale = FALSE, opacity = 1, colors = c('#a6cee3', '#1f78b4')) fig <- layout(fig, scene = list(zaxis = list(range = c(0, max(int_z) + 0.5)), camera = list(eye = list(x = 1.85, y = 1.85, z = 0.7)))) fig Attempt 2: Identify the Area library(plotly) f <- function(x, y) { cos(x) + sin(y) + 2 } # surface x <- seq(-5, 5, length.out = 200) y <- seq(-5, 5, length.out = 200) z <- outer(x, y, f) # patch of surface lower_x <- -1 upper_x <- 2 lower_y <- -2 upper_y <- 4 int_x <- seq(lower_x, upper_x, length.out = 200) int_y <- seq(lower_y, upper_y, length.out = 200) int_z <- outer(int_x, int_y, f) fig <- plot_ly(x = ~x, y = ~y, z = ~z, type = "surface", showscale = FALSE, opacity = 0.4) %>% add_surface(x = ~int_x, y = ~int_y, z = ~int_z, showscale = FALSE, opacity = 1) %>% layout(showlegend = FALSE) fig But so far, I can't get anything to work. Can someone please show me how to do this? Thanks! | You can add a volume trace using add_trace(type = "volume"). The input is different from the surface plot though. You need a regular 3D grid of points given as vectors of x, y and z co-ordinates, such as you might create with expand.grid in R. Each of these points must also have a value. 
Only values above the isomin argument and below the isomax argument get filled. To fill a volume within an enclosed surface, you can simply calculate whether each 3D gridpoint should be filled or not. Taking your own example, the set-up would be: f <- function(x, y) cos(x) + sin(y) + 2 x <- y <- seq(-5, 5, length.out = 200) z <- outer(x, y, f) xmin <- -1 xmax <- 2 ymin <- -2 ymax <- 4 volume_df <- expand.grid(x = seq(xmin, xmax, length.out = 20 * (xmax - xmin)), y = seq(ymin, ymax, length.out = 20 * (ymax - ymin)), z = seq(0, 4, length.out = 80)) volume_df$value <- with(volume_df, ifelse(z > f(x, y), 0, 1)) The plotting code is then straightforward: library(plotly) plot_ly(showscale = FALSE) %>% add_surface(x = x, y = y, z = z, opacity = 0.4) %>% add_trace(type = "volume", data = volume_df, x = ~x, y = ~y, z = ~z, value = ~value, opacity = 0.2, isomin = 0.5, colorscale = list(c(0, 1), c("red", "red"))) | 2 | 2 |
77,688,198 | 2023-12-19 | https://stackoverflow.com/questions/77688198/how-to-decode-string-address-into-pubkey-address-on-solana | I am querying data from raydium using their SDK, but i am trying to do it in Python instead. So, I am obtaining an address: <BN: b870e12dd379891561d2e9fa8f26431834eb736f2f24fc2a2a4dff1fd5dca4df> And I know it correspond to : PublicKey(DQyrAcCrDXQ7NeoqGgDCZwBvWDcYmFCjSb9JtteuvPpz)] But anyone know how one could decode the first into the second please, so i could implement this in Python. Thanks | With Solana use base58 for encode address. In python, you can use module base58 to encode from hex string address. import base58 hex_sting="b870e12dd379891561d2e9fa8f26431834eb736f2f24fc2a2a4dff1fd5dca4df" byte_string = bytes.fromhex(hex_sting) encoded_string = base58.b58encode(byte_string).decode() # DQyrAcCrDXQ7NeoqGgDCZwBvWDcYmFCjSb9JtteuvPpz | 2 | 3 |
77,688,214 | 2023-12-19 | https://stackoverflow.com/questions/77688214/how-to-decode-string-address-into-pubkey-address-on-solana | I am querying data from Raydium using their SDK, but I am trying to do it in Python instead. So, I am obtaining an address: <BN: b870e12dd379891561d2e9fa8f26431834eb736f2f24fc2a2a4dff1fd5dca4df> And I know it corresponds to: PublicKey(DQyrAcCrDXQ7NeoqGgDCZwBvWDcYmFCjSb9JtteuvPpz)] But does anyone know how one could decode the first into the second, so I could implement this in Python? Thanks | Solana uses base58 to encode addresses. In Python, you can use the base58 module to encode the hex string address. import base58 hex_string="b870e12dd379891561d2e9fa8f26431834eb736f2f24fc2a2a4dff1fd5dca4df" byte_string = bytes.fromhex(hex_string) encoded_string = base58.b58encode(byte_string).decode() # DQyrAcCrDXQ7NeoqGgDCZwBvWDcYmFCjSb9JtteuvPpz | 2 | 3 |
77,682,768 | 2023-12-19 | https://stackoverflow.com/questions/77682768/python-streaming-from-webcam-through-an-ml-process-to-the-internet | I am trying to develop a program in Python that streams a webcam captured video, through an ML process, to "the internet". Are there particular packages for Python that are designed to handle the live webcam capture and streaming/receiving from the internet? The ML processes I understand fairly well but currently require loading from storage whereas I'm trying to understand essentially how to make a P2P video chat client in Python. Any pointer are greatly appreciated | You can capture video from webcam using the opencv-python like the following: import cv2 cap = cv2.VideoCapture(0) # Open the default camera (0) while True: ret, frame = cap.read() # Read a frame from the webcam # Perform your ML processing here on 'frame' cv2.imshow('Webcam', frame) # Display the frame if cv2.waitKey(1) & 0xFF == ord('q'): # Press 'q' to exit break cap.release() cv2.destroyAllWindows() For streaming video over a network, you can use imagezmq library for streaming OpenCV images from one computer to another. It's based on ZeroMQ, a messaging library that allows you to connect multiple devices over a network. for sender device: import cv2 import zmq import base64 context = zmq.Context() socket = context.socket(zmq.PUB) socket.bind("tcp://*:5555") # Set the appropriate address and port cap = cv2.VideoCapture(0) while True: ret, frame = cap.read() # Perform your ML processing here on 'frame' _, buffer = cv2.imencode('.jpg', frame) jpg_as_text = base64.b64encode(buffer) socket.send(jpg_as_text) cap.release() for receiver device: import zmq import cv2 import numpy as np import base64 context = zmq.Context() socket = context.socket(zmq.SUB) socket.connect("tcp://sender_ip:5555") # Replace 'sender_ip' with the actual sender's IP socket.setsockopt_string(zmq.SUBSCRIBE, '') while True: jpg_as_text = socket.recv() jpg_original = base64.b64decode(jpg_as_text) jpg_as_np = np.frombuffer(jpg_original, dtype=np.uint8) frame = cv2.imdecode(jpg_as_np, flags=1) cv2.imshow('Receiver', frame) if cv2.waitKey(1) & 0xFF == ord('q'): # Press 'q' to exit break cv2.destroyAllWindows() This is a basic approach for video streaming setup using OpenCV for webcam capture, ML processing on frames, and imagezmq for streaming frames over a network. But do keep in mind that setting up a P2P video chat client involves more complexities, such as handling network discovery, establishing connections, handling latency, and potentially incorporating signaling protocols. For a full-fledged P2P video chat, you might consider additional libraries or frameworks specifically designed for real-time communication, like WebRTC. You can get a more clear idea from this tutorial I found: https://pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/ | 2 | 3 |
77,685,797 | 2023-12-19 | https://stackoverflow.com/questions/77685797/create-a-dynamic-case-when-statement-based-on-pyspark-dataframe | I have a dataframe called df with features, col1, col2, col3. Their values should be combined and produce a result. What result each combination will produce is defined in mapping_table. However, mapping_table sometimes has the value '*'. This means that this feature can have any value; it doesn't affect the result. This makes a join impossible(?) to make, since I need to evaluate which features to use in the join for every row. What would be a good pyspark solution for this problem? from pyspark.sql import SparkSession from pyspark.sql.functions import col # Create a Spark session spark = SparkSession.builder.appName("example").getOrCreate() # Example DataFrames map_data = [('a', 'b', 'c', 'good'), ('a', 'a', '*', 'very good'), ('b', 'd', 'c', 'bad'), ('a', 'b', 'a', 'very good'), ('c', 'c', '*', 'very bad'), ('a', 'b', 'b', 'bad')] columns = ["col1", "col2", 'col3', 'result'] mapping_table = spark.createDataFrame(map_data, columns) data = [('a', 'b', 'c'), ('a', 'a', 'b'), ('c', 'c', 'a'), ('c', 'c', 'b'), ('a', 'b', 'b'), ('a', 'a', 'd')] columns = ["col1", "col2", 'col3'] df = spark.createDataFrame(data, columns) | Transform map_data into a case statement: from pyspark.sql import functions as F ressql = 'case ' for m in map_data: p = [f"{p[0]} = '{p[1]}'" for p in zip(columns, m[:3]) if p[1] != "*"] ressql = ressql + ' when ' + ' and '.join(p) + f" then '{m[3]}'" ressql = ressql + ' end' df.withColumn('result', F.expr(ressql)).show() ressql is now case when col1 = 'a' and col2 = 'b' and col3 = 'c' then 'good' when col1 = 'a' and col2 = 'a' then 'very good' when col1 = 'b' and col2 = 'd' and col3 = 'c' then 'bad' when col1 = 'a' and col2 = 'b' and col3 = 'a' then 'very good' when col1 = 'c' and col2 = 'c' then 'very bad' when col1 = 'a' and col2 = 'b' and col3 = 'b' then 'bad' end Result: +----+----+----+---------+ |col1|col2|col3| result| +----+----+----+---------+ | a| b| c| good| | a| a| b|very good| | c| c| a| very bad| | c| c| b| very bad| | a| b| b| bad| | a| a| d|very good| +----+----+----+---------+ | 2 | 2 |
77,660,162 | 2023-12-14 | https://stackoverflow.com/questions/77660162/pygame-how-to-reset-alpha-layer-without-fill0-0-0-255-every-frame | I'm programming a 2D game with a background and a fog of war. Both are black pygame.Surface, one with an alpha chanel (fog of war) and one without (background). Their sizes are the same and are constants. The only thing that change is the alpha channel of some pixels in fog of war. Profiling my code, I observed that 15% of execution time was spent in filling those surfaces with black. I go from 3.5s/1000 frames to 3s/1000 frames with/without filling. I was wondering: is there any more optimized way to compute things? Here is a minimal example which goes from 2sec to 1.5 sec with or without filling on my machine : import pygame import random import cProfile from pstats import Stats pygame.init() wh = 1000 screen = pygame.display.set_mode((wh, wh)) fog_of_war = pygame.Surface((wh, wh), pygame.SRCALPHA) pr = cProfile.Profile() pr.enable() for i in range(1000): screen.fill((255, 255, 255)) fog_of_war.fill((0, 0, 0, 255)) pygame.draw.circle(fog_of_war, (0, 0, 0, 0), (wh/2+random.randint(-5,5), wh/2+random.randint(-5,5)), 50) screen.blit(fog_of_war, (0, 0)) pygame.display.flip() pr.disable() s = Stats(pr) s.strip_dirs() s.sort_stats('tottime').print_stats(5) pygame.quit() I see maybe two ways of doing so but I don't know how to implement them: Maybe create the filled surfaces once, and find a way to say "at the begining of each frame, screen is this and fog of war is this" Specifically for the fog of war layer: resetting only the alpha layer to 255 everywhere without touching to RGB which will always be 0 Thanks for your help :) Edit: On the real thing, there are several circles which are moving from one frame to another, so that screen has to be reseted to black in some way. One solution could have been to create a third surface outside the loop called "starting_black_screen", to fill it black and to blit it to screen every frame but it's actually slower than filling the screen black. | I found a solution for the second option from https://github.com/pygame/pygame/issues/1244, which explains how to modify the alpha layer. The key function goes like this: def reset_alpha(s): surface_alpha = np.array(s.get_view('A'), copy=False) surface_alpha[:,:] = 255 return s Though doing so requires to create a numpy array with individual alpha pixel values, which takes about as much time as fog_of_war.fill((0, 0, 0, 255)). Pygame's fill method seems optimized as hell to do 4x the job of Numpy's fill in the same time. Here is complete code: import pygame import numpy as np import random import cProfile from pstats import Stats pygame.init() wh = 1000 def reset_alpha(s): surface_alpha = np.array(s.get_view('A'), copy=False) surface_alpha[:,:] = 255 return s screen = pygame.display.set_mode((wh, wh)) fog_of_war = pygame.Surface((wh, wh), pygame.SRCALPHA) pr = cProfile.Profile() pr.enable() fog_of_war.fill((0, 0, 0, 255)) for i in range(1000): screen.fill((255, 255, 255)) fog_of_war = reset_alpha(fog_of_war) pygame.draw.circle(fog_of_war, (0, 0, 0, 0), (wh/2+random.randint(-5,5), wh/2+random.randint(-5,5)), 50) screen.blit(fog_of_war, (0, 0)) pygame.display.flip() pr.disable() s = Stats(pr) s.strip_dirs() s.sort_stats('tottime').print_stats(5) pygame.quit() | 3 | 1 |
77,686,302 | 2023-12-19 | https://stackoverflow.com/questions/77686302/using-python-variable-in-sql-query | I would like to use python to do an SQL query that prompts the user for input to complete the query. Currently when I run my code it will tell me I have a TypeError (see below for full error message). My code will be posted below first. import pyodbc from datetime import datetime import pandas as pd def today(): return datetime.now().strftime('%Y-%m-%d') ## return (datetime.now()+timedelta(days=1)).strftime('%Y-%m-%d') #create connection string conn = pyodbc.connect('Driver={SQL Server};' 'Server=OPTSAPPS02,1433\\SQLEXPRESS;' 'Database=OPTS Production Tracking Service;' 'UID=fetch;' 'PWD=Reader123') cursor = conn.cursor() #create query parameter for target targetDTString = today()+' 23:59' targetDate = datetime.strptime(targetDTString,'%Y-%m-%d %H:%M') ###FIND A WAY TO TAKE INPUT FOR QUERY AND MAKE IT SO YOU CAN SEARCH/FILTER BY MOTHER AND STAMPER NUMBER#### query = cursor.execute("""SELECT TOOL_OUT, TANK, STATION, FORMING_CURRENT, TIME, BUILDING_CURRENT, CYCLE_TIME FROM ELECTROFORMING WHERE PROCESS='ELECTROFORMING' AND (TOOL_OUT=?)""", (input("TOOL_OUT: "))) DF = pd.read_sql_query(query,conn) print(DF) Traceback (most recent call last): File "C:\Users\mberardi\Documents\python scripts\print-eforming-stats.py", line 39, in DF = pd.read_sql_query(query,conn) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\mberardi\AppData\Local\anaconda3\Lib\site-packages\pandas\io\sql.py", line 486, in read_sql_query return pandas_sql.read_query( ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\mberardi\AppData\Local\anaconda3\Lib\site-packages\pandas\io\sql.py", line 2328, in read_query cursor = self.execute(sql, params) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\mberardi\AppData\Local\anaconda3\Lib\site-packages\pandas\io\sql.py", line 2260, in execute raise TypeError("Query must be a string unless using sqlalchemy.") TypeError: Query must be a string unless using sqlalchemy. | You have to pass query directly as a string without using conn.cursor() and pass the TOOL_OUT value into params as a tuple: query = """SELECT TOOL_OUT, TANK, STATION, FORMING_CURRENT, TIME, BUILDING_CURRENT, CYCLE_TIME FROM ELECTROFORMING WHERE PROCESS='ELECTROFORMING' AND (TOOL_OUT=?)""" tool_out = input("TOOL_OUT: ") DF = pd.read_sql_query(query, conn, params=(tool_out,)) print(DF) | 2 | 2 |
77,686,120 | 2023-12-19 | https://stackoverflow.com/questions/77686120/get-lowest-elementwise-values-from-multiple-numpy-arrays-which-may-be-different | I can get the lowest values from multiple arrays by using the following: first_arr = np.array([0,1,2]) second_arr = np.array([1,0,3]) third_arr = np.array([3,0,4]) fourth_arr = np.array([1,1,9]) print(np.minimum.reduce([first_arr, second_arr, third_arr, fourth_arr])) Result = [0 0 2] However if the arrays are different lengths or empty I get an error: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4,) + inhomogeneous part. The arrays will often be different lengths or empty. How can I handle this? I want to compare all elements that exist. So, changing the above example: first_arr = np.array([0,1]) second_arr = np.array([1,0,3]) third_arr = np.array([3,0,4]) fourth_arr = np.array([1,1,9]) print(np.minimum.reduce([first_arr, second_arr, third_arr, fourth_arr])) The result should be [0 0 3]. | If you don't mind the overhead, use pandas's DataFrame.min: l = [first_arr, second_arr, third_arr, fourth_arr] out = pd.DataFrame(l).min().to_numpy() Or with itertools.zip_longest and numpy.nanmin: from itertools import zip_longest l = [first_arr, second_arr, third_arr, fourth_arr] out = np.nanmin(np.c_[list(zip_longest(*l, fillvalue=np.nan))], axis=1) Output: array([0., 0., 3.]) | 3 | 2 |
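For completeness, a pure-NumPy sketch of the same idea (the function name is illustrative): pad every array to the longest length with +inf, stack, and take the column-wise minimum.

import numpy as np

def elementwise_min(arrays):
    # +inf padding means missing positions never win the minimum.
    arrays = [np.asarray(a, dtype=float) for a in arrays if len(a)]
    width = max(len(a) for a in arrays)
    padded = np.full((len(arrays), width), np.inf)
    for row, a in zip(padded, arrays):
        row[:len(a)] = a
    return padded.min(axis=0)

# elementwise_min([first_arr, second_arr, third_arr, fourth_arr]) -> array([0., 0., 3.])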
77,655,853 | 2023-12-13 | https://stackoverflow.com/questions/77655853/gekko-optimization-exit-invalid-number-in-nlp-function-or-derivative-detected | Hello, I am trying to optimize a (currently simple) set of equations in order to obtain the value of kd with a fit. Generate testing data (works fine): import numpy as np from gekko import GEKKO def solve_gekko_equations(H0input, D0input, Kinput): m = GEKKO(remote=False) d0 = m.Const(D0input) h0 = m.Const(H0input) hdLimit=min([D0input,H0input]) d = m.Var(1,0,D0input) h = m.Var(1,0,H0input) hd = m.Var(1,0,hdLimit) kd = m.Const(Kinput) m.Equations([h + hd == h0, d + hd == d0, kd == h * d / hd]) m.solve(disp=False) return d.value[0], h.value[0], hd.value[0] H0values= np.linspace(0, 12, 100) solve_gekko_equations_vec = np.vectorize(solve_gekko_equations) dvalues, hvalues, hdvalues = solve_gekko_equations_vec(H0values, 2, 0.5) Optimize kd: from gekko import GEKKO m = GEKKO(remote=False) kd = m.FV(value=1) kd.STATUS = 1 d0 = m.Const(2) hdmeas = m.Param(value=hdvalues) h0 = m.Param(value=H0values) d = m.Var(1,0,2) h = m.Var(1,0) hd = m.Var(1,0) m.Equation(h + hd == h0) m.Equation(d + hd == d0) # Objective m.Obj(h * d / hd) m.Minimize(((hd-hdmeas)/hdmeas)**2) # Solve m.solve(disp=False) m.options.IMODE = 2 # Final objective print('Final Objective: '+ str(m.options.objfcnval)) # Print solution print('Solution') print('kd: ' + str(kd.value)) I constantly receive this error: EXIT: Invalid number in NLP function or derivative detected. Please help. | Thank you very much for your answer; the optimization now finishes, but the result is wrong. We generated testing data for kd=0.5, but the fitting result in the suggested solution is kd=1.0. It seemed that whatever initialValue was passed to m.FV(value=initialValue) ended up being the reported solution. So I modified the code to only provide one minimize function: import numpy as np from gekko import GEKKO H0values= np.linspace(0, 12, 100) dvalues, hvalues, hdvalues = solve_gekko_equations_vec(H0values, 2, 0.5) print(len(hdvalues)) print(len(H0values)) print('optimize kd') m = GEKKO(remote=False) kd = m.FV(value=2) kd.STATUS = 1 d0 = 2 hdmeas = m.Param(value=hdvalues) h0 = m.Param(value=H0values) d = m.Var(1,lb=0,ub=2) h = m.Var(1,lb=0) hd = m.Var(1,lb=0) m.Equation(h + hd == h0) m.Equation(d + hd == d0) m.Equation(kd == (h * d)/hd) # Objective m.Minimize(((hd-hdmeas)/(hdmeas+0.001))**2) # Solve m.options.IMODE = 2 m.solve(disp=True) # Final objective print('Final Objective: '+ str(m.options.objfcnval)) # Print solution print('Solution') print('kd: ' + str(kd.value[0])) The solve_gekko_equations_vec is the same as in my initial question. Now the result is correct, but it strongly depends on the range of the numbers. This was just an example; the final ranges will be around 1e-9 to 1e-6. If I use the current code and make a range with np.linspace(0, 0.00012, 100), it already does not find a solution. Is there maybe a more robust minimization approach, or other suggestions? Thank you very much, I really appreciate your help! Stephan | 3 | 0 |
77,683,810 | 2023-12-19 | https://stackoverflow.com/questions/77683810/fastest-way-to-use-cmaps-in-polars | I would like to create a column of color lists of type [r,g,b,a] from another float column using a matplotlib cmap. Is there a faster way then: data.with_columns( pl.col("floatCol")/100) .map_elements(cmap1) ) Minimal working example: import matplotlib as mpl import polars as pl cmap1 = mpl.colors.LinearSegmentedColormap.from_list("GreenBlue", ["limegreen", "blue"]) data = pl.DataFrame( { "floatCol": [12,135.8, 1235.263,15.236], "boolCol": [True, True, False, False] } ) data = data.with_columns( pl.when( pl.col("boolCol").not_() ) .then( mpl.colors.to_rgba("r") ) .otherwise( ( pl.col("floatCol")/100) .map_elements(cmap1) ) .alias("c1") ) | If you're using map_elements, then it can (nearly) always be faster ;) data.with_columns( pl.when(pl.col("boolCol").not_()) .then(mpl.colors.to_rgba("r")) .otherwise((pl.col("floatCol") / 100).map_batches(lambda x: pl.Series(cmap1(x)))) .alias("c1") ) Use map_batches to operate over the entire column | 3 | 3 |
77,681,935 | 2023-12-18 | https://stackoverflow.com/questions/77681935/can-this-simpler-case-of-the-2d-maximum-submatrix-problem-be-solved-more-quickly | Given an n by m matrix of integers, there is a well known problem of finding the submatrix with maximum sum. This can be solved with a 2d version of Kadane's algorithm. An explanation of an O(nm^2) time algorithm for the 2D version which is based on Kadane's algorithm can be found here. I am interested in a simpler version where only submatrices that include the top left cell of the matrix are considered. Can this version be solved faster in O(nm) time? If so, how can you do it? I would also like to see what the optimal submatrix is, not just its sum. | Yes, you can do it in O(nm), for example by accumulating over the rows and then accumulating over the columns. That gives you all the sums, so just take the max. Example implementation to find the max sum: from itertools import accumulate max(max(accumulate(c)) for c in zip(*map(accumulate, A))) And to also find the bottom-right corner indexes of the max sum submatrix: max( (sum, i, j) for j, column in enumerate(zip(*map(accumulate, A))) for i, sum in enumerate(accumulate(column)) ) A few variations and testing (fast4 returns the indexes of the bottom-right cell of the max sum submatrix instead of the sum, and fast5 to fast7 return sum and indexes): import random from itertools import accumulate, chain, count, repeat def sum_until(A, i, j): return sum(sum(row[:j+1]) for row in A[:i+1]) def naive(A): return max( sum(sum(row[:width]) for row in A[:height]) for height in range(len(A) + 1) for width in range(len(A[0]) + 1) ) def fast1(A): return max(max(accumulate(c)) for c in zip(*map(accumulate, A))) def fast2(A): return max(map(max, map(accumulate, zip(*map(accumulate, A))))) def fast3(A): return max(chain(*map(accumulate, zip(*map(accumulate, A))))) def fast4(A): sums = *chain(*map(accumulate, zip(*map(accumulate, A)))), return divmod(sums.index(max(sums)), len(A))[::-1] def fast5(A): return max( max(zip(accumulate(c), count(), repeat(j))) for j, *c in zip(count(), *map(accumulate, A)) ) def fast6(A): return max( (sum, i, j) for j, column in enumerate(zip(*map(accumulate, A))) for i, sum in enumerate(accumulate(column)) ) def fast7(A): return max(map(max, map(zip, map(accumulate, zip(*map(accumulate, A))), map(count, repeat(0)), map(repeat, count()) ) )) random.seed(4) for _ in range(8): A = [random.choices(range(-15, 16), k=23) for _ in range(42)] print( {naive(A), fast1(A), fast2(A), fast3(A), sum_until(A, *fast4(A))}, {fast5(A), fast6(A), fast7(A)} ) Output for eight test cases, showing that all solutions find the same max sum (Attempt This Online!): {311} {(311, 23, 22)} {429} {(429, 33, 21)} {252} {(252, 28, 3)} {262} {(262, 39, 13)} {196} {(196, 41, 22)} {513} {(513, 39, 14)} {252} {(252, 41, 4)} {365} {(365, 16, 14)} | 3 | 1 |
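The same O(nm) idea can be written directly in NumPy; this is a sketch of the answer's accumulate-then-accumulate trick, not additional code from the answer itself:

import numpy as np

def best_top_left_submatrix(A):
    A = np.asarray(A)
    # prefix[i, j] is the sum of A[:i+1, :j+1], i.e. of the submatrix anchored
    # at the top-left cell with bottom-right corner (i, j).
    prefix = A.cumsum(axis=0).cumsum(axis=1)
    i, j = np.unravel_index(prefix.argmax(), prefix.shape)
    return prefix[i, j], (i, j)  # maximum sum and its bottom-right corner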
77,682,802 | 2023-12-19 | https://stackoverflow.com/questions/77682802/is-there-a-way-for-selenium-to-connect-to-an-already-open-instance | I need to access a browser that is already open. I'm using Python, but if you have a solution in another language, I welcome the help! I'm trying to access a website, and when I solve its captcha, even manually (the browser is started with Selenium), it simply reports that it's wrong even though it's right, and I can't access the certificate. In anonymous (incognito) mode it shows the certificate, but clicking it still fails. I have tried accessing in anonymous mode and as another user, in different ways, so the site wouldn't recognize the robot. | You can run Chrome with the command: chrome.exe --remote-debugging-port=9222 --user-data-dir="D:/ChromeProfile" Here --user-data-dir points to a new directory for user data. In Selenium, you should then create the driver like this: from selenium.webdriver.chrome.options import Options from selenium import webdriver chrome_options = Options() chrome_options.add_experimental_option("debuggerAddress", "127.0.0.1:9222") driver = webdriver.Chrome(options=chrome_options) Remember to use the same port. | 2 | 3 |
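If you prefer to launch that Chrome instance from Python rather than from a terminal, a small sketch is below; the executable path and profile directory are machine-specific assumptions, not part of the answer:

import subprocess

subprocess.Popen([
    r"C:\Program Files\Google\Chrome\Application\chrome.exe",  # adjust to your install
    "--remote-debugging-port=9222",
    r"--user-data-dir=D:\ChromeProfile",
])
# ...then attach Selenium with the debuggerAddress option shown in the answer.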
77,680,831 | 2023-12-18 | https://stackoverflow.com/questions/77680831/signalr-communication-between-python-and-aspnet | I COMMUNICATE BETWEEN ASP NET C# AND PYTHON using a websocket, or more specifically, I use a signalR library in both cases. I'm trying to get my server to respond in some way when a Python client sends a message. nothing helps for now. anyone had this problem? it is enough that after the client sends the message, the function in the hub named SendGemstoneDetailMessage will work.that's what I mean Python Code: import logging from signalrcore.hub_connection_builder import HubConnectionBuilder import asyncio class SignalRClient: def __init__(self): self.hub_connection = None def connect(self): try: self.hub_connection = HubConnectionBuilder() \ .configure_logging(logging.DEBUG) \ .with_automatic_reconnect({ "type": "raw", "keep_alive_interval": 10, "reconnect_interval": 5, "max_attempts": 5 }) \ .with_url("https://localhost:7294/WebSocketMessageHub", options={"verify_ssl": False}) \ .build() self.hub_connection.on("ReceiveMessage", self.receive_message) self.hub_connection.start() self.hub_connection.on_open(lambda: print("connection opened and handshake received ready to send messages")) self.hub_connection.on_close(lambda: print("connection closed")) self.hub_connection.on_error(lambda data: print(f"An exception was thrown closed{data.error}")) print("Connected to SignalR Hub") # Send a message after connection self.hub_connection.send("SendGemstoneDetailMessage", ["dwdw"]) except Exception as e: print(f"Failed to connect to the server: {e}") def receive_message(self, *args, **kwargs): # Handle received messages print(f"ITS MEE: {args}") async def listen_forever(self): while True: await asyncio.sleep(1) # not blocking threads async def main(): signalr_client = SignalRClient() signalr_client.connect() await signalr_client.listen_forever() if __name__ == "__main__": asyncio.run(main()) C# code: public class WebSocketHub : Hub { private readonly IWebSocketService _webSocketService; public WebSocketHub(IWebSocketService webSocketService) { _webSocketService = webSocketService; } public override Task OnConnectedAsync() { _webSocketService.ConnectAdmin(Context); return base.OnConnectedAsync(); } public async Task SendGemstoneDetailMessage(List<string> messag) { Console.WriteLine("HELLO"); } } Logs from client in python: 2023-12-18 18:41:25,857 - SignalRCoreClient - DEBUG - Handler registered started ReceiveMessage 2023-12-18 18:41:25,857 - SignalRCoreClient - DEBUG - Connection started 2023-12-18 18:41:25,857 - SignalRCoreClient - DEBUG - Negotiate url:https://localhost:7294/WebSocketMessageHub/negotiate /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py:1100: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings warnings.warn( Connected to SignalR Hub 2023-12-18 18:41:27,021 - SignalRCoreClient - DEBUG - Response status code200 2023-12-18 18:41:27,021 - SignalRCoreClient - DEBUG - start url:wss://localhost:7294/WebSocketMessageHub?id=0JER4PQd5JeF97061acC2Q 2023-12-18 18:41:27,021 - SignalRCoreClient - DEBUG - Sending message InvocationMessage: invocation_id 6c1a818b-9aae-4d95-b08b-85c2c08606fc, target SendGemstoneDetailMessage, arguments ['dwdw'] 2023-12-18 18:41:27,021 - SignalRCoreClient - DEBUG - {"type": 1, "headers": {}, "target": "SendGemstoneDetailMessage", "arguments": ["dwdw"], "invocationId": "6c1a818b-9aae-4d95-b08b-85c2c08606fc"} 2023-12-18 18:41:27,021 - SignalRCoreClient - WARNING - Connection closed socket is already closed. 2023-12-18 18:41:27,022 - SignalRCoreClient - INFO - on_reconnect not defined 2023-12-18 18:41:27,022 - SignalRCoreClient - DEBUG - Negotiate url:https://localhost:7294/WebSocketMessageHub/negotiate?id=0JER4PQd5JeF97061acC2Q /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py:1100: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#tls-warnings warnings.warn( 2023-12-18 18:41:27,541 - SignalRCoreClient - DEBUG - Response status code200 2023-12-18 18:41:27,541 - SignalRCoreClient - DEBUG - start url:wss://localhost:7294/WebSocketMessageHub?id=cQ-mlq9xR3QcJLxuREJVyw 2023-12-18 18:41:27,738 - SignalRCoreClient - DEBUG - -- web socket open -- 2023-12-18 18:41:27,738 - SignalRCoreClient - DEBUG - Sending message <signalrcore.messages.handshake.request.HandshakeRequestMessage object at 0x103513bd0> 2023-12-18 18:41:27,738 - SignalRCoreClient - DEBUG - {"protocol": "json", "version": 1} 2023-12-18 18:41:28,272 - SignalRCoreClient - DEBUG - -- web socket open -- 2023-12-18 18:41:28,272 - SignalRCoreClient - DEBUG - Sending message <signalrcore.messages.handshake.request.HandshakeRequestMessage object at 0x1058e3b50> 2023-12-18 18:41:28,272 - SignalRCoreClient - DEBUG - {"protocol": "json", "version": 1} 2023-12-18 18:41:28,280 - SignalRCoreClient - DEBUG - Message received{} 2023-12-18 18:41:28,280 - SignalRCoreClient - DEBUG - Evaluating handshake {} connection opened and handshake received ready to send messages 2023-12-18 18:41:31,695 - SignalRCoreClient - DEBUG - Message received{"type":7,"error":"Connection closed with an error.","allowReconnect":true} 2023-12-18 18:41:31,695 - SignalRCoreClient - DEBUG - Raw message incomming: 2023-12-18 18:41:31,695 - SignalRCoreClient - DEBUG - {"type":7,"error":"Connection closed with an error.","allowReconnect":true} 2023-12-18 18:41:31,695 - SignalRCoreClient - INFO - Close message received from server 2023-12-18 18:41:31,695 - SignalRCoreClient - DEBUG - Connection stop | # Send a message after connection # self.hub_connection.send("SendGemstoneDetailMessage", ["dwdw"]) self.hub_connection.send("SendGemstoneDetailMessage", [["dwdw"]]) You are missing [] in your frontend code, since you are using SendGemstoneDetailMessage(List<string> messag), you should use [["dwdw"]]. If you want send string messagem like SendGemstoneDetailMessage(string messag), and ["dwdw"] should work. | 2 | 1 |
77,681,797 | 2023-12-18 | https://stackoverflow.com/questions/77681797/how-to-mock-a-constant-value | I have a structure like so: mod1 ├── mod2 │ ├── __init__.py │ └── utils.py └── tests └── test_utils.py where __init__.py: CONST = -1 utils.py: from mod1.mod2 import CONST def mod_function(): print(CONST) test_utils.py: from mod1.mod2.utils import mod_function def test_mod_function(mocker): mock = mocker.patch("mod1.mod2.CONST") mock.return_value = 1000 mod_function() I'm using pytest with mocker. By running python -m pytest -s ./mod1/tests I expected to see 1000 as an output, but got -1. Why? How to patch a constant from the __init__.py file? | What patch("mod1.mod2.CONST") does is really to set the CONST attribute of the module object mod1.mod2 to a different object. It does not affect any existing names referencing the original mod1.mod2.CONST object. When utils.py does: from mod1.mod2 import CONST it creates a name CONST in the namespace of the mod1.mod2.utils module, referencing the object -1 currently referenced by the mod1.mod2.CONST name. And when test_mod_function then does: mock = mocker.patch("mod1.mod2.CONST") it modifies mod1.mod2 such that its CONST attribute now references a Mock object, while the CONST name in mod1.mod2.utils continues to reference the object -1, which is why setting mock.return_value does not affect the outcome of mod_function. To properly test mod_function with a mock CONST you can either patch CONST in the namespace where mod_function is defined: mock = mocker.patch("mod1.mod2.utils.CONST") or you can defer the import of mod1.mod2.utils until mod1.mod2.CONST has been patched: def test_mod_function(mocker): mock = mocker.patch("mod1.mod2.CONST") mock.return_value = 1000 from mod1.mod2.utils import mod_function mod_function() | 2 | 1 |
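Following the answer's first option, a minimal sketch of the corrected test; the replacement value is passed directly, since CONST is a plain value and setting return_value on a Mock would have no effect:

from mod1.mod2.utils import mod_function

def test_mod_function(mocker, capsys):
    # Patch the name in the namespace where mod_function actually looks it up.
    mocker.patch("mod1.mod2.utils.CONST", 1000)
    mod_function()
    assert capsys.readouterr().out.strip() == "1000"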
77,680,320 | 2023-12-18 | https://stackoverflow.com/questions/77680320/getting-different-score-values-between-manual-cross-validation-and-cross-val-sco | I created a python for loop to split the training dataset into stratified KFolds and used a classifier inside the loop to train it. Then used the trained model to predict with the validation data. The metrics achieved using this process where quite different to that achieved with the cross_val_score function. I expected the same results using both methods. This code is for text classification and I use TF-IDF to vectorize the text Code for manual implementation of cross validation: #Importing metrics functions to measure performance of a model from sklearn.metrics import f1_score, accuracy_score, precision_score, recall_score from sklearn.model_selection import StratifiedKFold data_validation = [] # list used to store the results of model validation using cross validation skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) accuracy_val = [] f1_val = [] # use ravel function to flatten the multi-dimensional array to a single dimension for train_index, val_index in (skf.split(X_train, y_train)): X_tr, X_val = X_train.ravel()[train_index], X_train.ravel()[val_index] y_tr, y_val = y_train.ravel()[train_index] , y_train.ravel()[val_index] tfidf=TfidfVectorizer() X_tr_vec_tfidf = tfidf.fit_transform(X_tr) # vectorize the training folds X_val_vec_tfidf = tfidf.transform(X_val) # vectorize the validation fold #instantiate model model= MultinomialNB(alpha=0.5, fit_prior=False) #Training the empty model with our training dataset model.fit(X_tr_vec_tfidf, y_tr) predictions_val = model.predict(X_val_vec_tfidf) # make predictions with the validation dataset acc_val = accuracy_score(y_val, predictions_val) accuracy_val.append(acc_val) f_val=f1_score(y_val, predictions_val) f1_val.append(f_val) avg_accuracy_val = np.mean(accuracy_val) avg_f1_val = np.mean(f1_val) # temp list to store the metrics temp = ['NaiveBayes'] temp.append(avg_accuracy_val) #validation accuracy score temp.append(avg_f1_val) #validation f1 score data_validation.append(temp) #Create a table ,using dataframe, which contains the metrics for all the trained and tested ML models result = pd.DataFrame(data_validation, columns = ['Algorithm','Accuracy Score : Validation','F1-Score : Validation']) result.reset_index(drop=True, inplace=True) result Output: Algorithm Accuracy Score : Validation F1-Score : Validation 0 NaiveBayes 0.77012 0.733994 Now code to use cross_val_score function: from sklearn.model_selection import cross_val_score from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer scores = ['accuracy', 'f1'] #Text vectorization of training and testing datasets using NLP technique TF-IDF tfidf=TfidfVectorizer() X_tr_vec_tfidf = tfidf.fit_transform(X_train) skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) nb=MultinomialNB(alpha=0.5, fit_prior=False) for score in ["accuracy", "f1"]: print (f'{score}: {cross_val_score(nb,X_tr_vec_tfidf,y_train,cv=skf,scoring=score).mean()} ') Output: accuracy: 0.7341283583255231 f1: 0.7062017090972422 As can be seen the accuracy and f1 metrics are quite different using the two methods. The difference in metrics is much worse when I use the KNeighborsClassfier. | TL;DR: The two ways of calculation are not equivalent due to the different way you handle the TF-IDF transformation; the first calculation is the correct one. 
In the first calculation you correctly apply fit_transform only to the training data of each fold, and transform to the validation data fold: X_tr_vec_tfidf = tfidf.fit_transform(X_tr) # vectorize the training folds X_val_vec_tfidf = tfidf.transform(X_val) # vectorize the validation fold But in the second calculation you do not do that; instead, you apply fit_transform to the whole of the training data, before it is split to training and validation folds: X_tr_vec_tfidf = tfidf.fit_transform(X_train) hence the difference. The fact that you seem to get a better accuracy with the second, wrong way of calculation, is due to information leakage (your validation data is not actually unseen, they have participated in the TF-IDF transformation). The correct way to use cross_val_score when we have transformations is via a pipeline (API, User's Guide): from sklearn.pipeline import Pipeline tfidf = TfidfVectorizer() nb = MultinomialNB(alpha=0.5, fit_prior=False) pipeline = Pipeline([('transformer', tfidf), ('estimator', nb)]) skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) scores = cross_val_score(pipeline, X_train, y_train, cv = skf) | 2 | 3 |
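Building on the answer's pipeline (a sketch assuming pipeline, skf, X_train and y_train as defined above), cross_validate can evaluate both metrics in a single pass instead of calling cross_val_score once per scorer:

from sklearn.model_selection import cross_validate

cv_results = cross_validate(pipeline, X_train, y_train, cv=skf, scoring=["accuracy", "f1"])
print("accuracy:", cv_results["test_accuracy"].mean())
print("f1:", cv_results["test_f1"].mean())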
77,676,879 | 2023-12-18 | https://stackoverflow.com/questions/77676879/determining-and-filtering-out-low-frequent-component-from-a-real-dataset | I am trying to use FFT to filter out low frequency components from a signal and retain high frequency components in a real dataset (hourly electricity demand in California). I have tried this so far: X = fft(df['demand']) y_fft_shift = fftshift(X) n = 20 # number of low freq components to be removed (selected randomly) m = len(X)/2 sig_fft_filtered_img = y_fft_shift.copy() sig_fft_filtered_img[int(m-n):int(m+n+1)] = 0 y_ifft_shift = ifftshift(sig_fft_filtered_img) y_ifft = ifft(y_ifft_shift) # compare original signal vs filtered signal plt.figure(figsize = (25, 6)) plt.plot(df['demand'],'b') #df['hour'], plt.plot(abs(y_ifft.real),'r') plt.xlabel('Datetime') plt.ylabel('demand') plt.title('original vs filtered signal') plt.xticks(rotation=25) plt.show() I am not sure whether (a) my implementation is correct, and (b) the results obtained from inverse discrete fourier transform is the expected results. For instance, if I don't take abs(y_ifft.real) I get negative values. I tried the following two approaches on synthetic signal before applying the second approach to the real dataset. from scipy.fftpack import fft, ifft, fftfreq sr = 2000 # sampling rate ts = 1.0/sr # sampling interval t = np.arange(0,1,ts) #generate a signal freq = 1. x = 3*np.sin(2*np.pi*freq*t) freq = 4. x += np.sin(2*np.pi*freq*t) freq = 7. x += 0.5* np.sin(2*np.pi*freq*t) y = fft(x, axis=0) #FT of original signal freq = fftfreq(len(x), d=1.0/len(x)) #compute freq. # define the cut-off frequency cut_off = 4.5 # high-pass filter by assign zeros to the # FFT amplitudes where the absolute # frequencies smaller than the cut-off sig_fft_filtered[np.abs(freq) < cut_off] = 0 # get the filtered signal in time domain filtered = ifft(sig_fft_filtered) I compared the output from the above with the below code with a criterion that I want to remove only lowest four frequency component: y = fft(x, axis=0) y_fft_shift = fftshift(y) n = 4 m = len(x)/2 # y_fft_shift[m-n+1:m+n+1] = 0 sig_fft_filtered_img = y_fft_shift.copy() sig_fft_filtered_img[int(m-n):int(m+n+1)] = 0 y_ifft_shift = ifftshift(sig_fft_filtered_img) y_ifft = ifft(y_ifft_shift) Dataset used: link to electricity demand dataset used above P.S.: Many answers on SO helped me to understand the concept on image denoising using FFT as well as on synthetic data but couldn't find much on applying FFT on real dataset Ref: High Pass Filter for image processing in python by using scipy/numpy How to interpret the output of scipy.fftpack.fft? How to interpret the results of the Discrete Fourier Transform (FFT) in Python Understanding FFT output in python | Don't use the complex versions of the FFT; use the real ones. import matplotlib import numpy as np from matplotlib import pyplot as plt sr = 2_000 # sampling rate ts = 1/sr # sampling interval t = np.arange(0, 1, ts) # generate a signal freq = 1 y = 3*np.sin(2*np.pi*freq*t) freq = 4 y += np.sin(2*np.pi*freq*t) freq = 7 y += 0.5*np.sin(2*np.pi*freq*t) fft = np.fft.rfft(y) # FT of original signal f = np.fft.rfftfreq(len(y), d=ts) # compute freq. 
# define the cut-off frequency f_cutoff = 4.5 i_cutoff = round(f_cutoff/sr * len(t)) # high-pass filter by assign zeros to the # FFT amplitudes where the absolute # frequencies smaller than the cut-off fft_filtered = fft.copy() fft_filtered[:1 + i_cutoff] = 0 # get the filtered signal in time domain y_filtered = np.fft.irfft(fft_filtered) matplotlib.use('TkAgg') ax_t: plt.Axes ax_f: plt.Axes fig, (ax_t, ax_f) = plt.subplots(nrows=2, ncols=1) ax_t.plot(t, y, label='unfiltered') ax_t.plot(t, y_filtered, label='filtered') ax_t.set_xlabel('t (s)') ax_t.set_ylabel('y') ax_t.legend() ax_f.loglog(f, np.abs(fft), label='unfiltered') ax_f.loglog(f, np.abs(fft_filtered), label='filtered') ax_f.set_xlabel('f (Hz)') ax_f.set_ylabel('|y|') ax_f.legend() plt.show() | 2 | 1 |
77,679,383 | 2023-12-18 | https://stackoverflow.com/questions/77679383/validationerror-1-validation-error-for-structuredtool | I was getting an error when trying to use a Pydantic schema as an args_schema parameter value on a @tool decorator, following the DeepLearning.AI course. My code was: from pydantic import BaseModel, Field class SearchInput(BaseModel): query: str = Field(description="Thing to search for") @tool(args_schema=SearchInput) def search(query: str) -> str: """Searches for weather online""" return "21c" And was getting this error: ValidationError: 1 validation error for StructuredTool args_schema subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel) | Downgrading to pydantic 1.10.10 worked for me. Add pydantic==1.10.10 to your requirements.txt and install it with pip install -r requirements.txt or with the command pip install pydantic==1.10.10 | 3 | 3 |
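An alternative worth trying before downgrading (whether it resolves the error depends on the langchain version in use): pydantic 2.x ships a v1 compatibility namespace, so the schema can be declared against it while keeping pydantic 2 installed.

from pydantic.v1 import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="Thing to search for")

# The @tool(args_schema=SearchInput) decorator then receives a v1-style model.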
77,678,603 | 2023-12-18 | https://stackoverflow.com/questions/77678603/create-multiple-functions-in-one-azure-function-app | I want to create multiple Python Azure Functions within one Azure Function App using the Azure CLI/Core Tools: one HTTP-triggered function and one blob-triggered function. I have the following folder structure: ├── azure-functions │ ├── az-func-blob │ │ ├── .python_packages │ │ ├── .vscode │ │ ├── .gitignore │ │ ├── function_app.py │ │ ├── host.json │ │ ├── local_settings.json │ │ ├── requirements.txt │ ├── az-func-http │ │ ├── .python_packages │ │ ├── .vscode │ │ ├── .gitignore │ │ ├── function_app.py │ │ ├── host.json │ │ ├── local_settings.json │ │ ├── requirements.txt │ ├── README.md Note: I created the individual function directories with the following commands within my project dir "azure-functions": Blob function: func init az-func-blob --worker-runtime python --model V2 cd az-func-blob func new --template "Blob trigger" --name BlobTrigger Http function: func init az-func-http --worker-runtime python --model V2 cd az-func-http func new --template "HTTP trigger" --name HTTPTrigger If I navigate to the directory of one of these functions and run the command func azure functionapp publish SPIEFUNC2 it works fine. However, running this command again in the other function directory will overwrite the trigger/function in Azure instead of appending a new one, even though the trigger names are different. Apparently that's just how it works in Azure, but I don't know how to create multiple functions within one Function App. I saw this post here where someone suggested to use the same "host.json" for all functions. I don't know how to do this, as every host.json is exactly the same anyway and looks like this: { "version": "2.0", "logging": { "applicationInsights": { "samplingSettings": { "isEnabled": true, "excludedTypes": "Request" } } }, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[4.*, 5.0.0)" } } Can someone please explain in detail how to set this up using only one Azure Function App instead of multiple function apps? | I am able to deploy multiple functions to a Function App using the func azure functionapp publish <function_APP_Name> command. Thanks @Thomas for the comment: you need to run the func init --worker-runtime python --model V2 command once and func new twice. I have created one HTTP-triggered function and one blob-triggered function this way. When you run func init --worker-runtime python --model V2, all the default files for the function app get created. Each subsequent func new adds a trigger to it. I have deployed the functions to the function app, the deployment completed successfully, and both functions are visible in the function app. | 3 | 2 |
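For reference, a rough sketch of what the combined function_app.py can look like in the Python v2 programming model; the route, container path and connection setting name are placeholders, not values taken from the answer:

import logging
import azure.functions as func

app = func.FunctionApp()

@app.route(route="HTTPTrigger", auth_level=func.AuthLevel.FUNCTION)
def http_trigger(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("Hello from the HTTP trigger")

@app.blob_trigger(arg_name="myblob", path="samples/{name}", connection="AzureWebJobsStorage")
def blob_trigger(myblob: func.InputStream):
    # Runs whenever a blob appears in the configured container.
    logging.info("Blob trigger processed %s", myblob.name)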
77,678,481 | 2023-12-18 | https://stackoverflow.com/questions/77678481/using-libclang-python-bindings-how-do-you-retrieve-annotations-added-to-c-cla | Using the following C++ struct as an example: __attribute__((annotate("MyAttribute"))) struct TestComponent { __attribute__((annotate("MyAttribute"))) int32_t testInt; __attribute__((annotate("MyAttribute"))) bool testBool; __attribute__((annotate("MyAttribute"))) char testChar; }; Given a node (cursor) from clang using (clang module's cindex) while parsing the AST, using the following I can get the annotations on the class members when node is pointing to TestComponent and has a kind equal to CursorKind.STRUCT_DECL: def get_annotations(node): annotations = [c.displayname for c in node.get_children() if c.kind == clang.cindex.CursorKind.ANNOTATE_ATTR] return annotations But class/struct annotations never show up as children of a struct/class. Is there a way to get to those? Even in the C bindings - I can try Monkeypatching it, but couldn't find anything. | tl;dr: attribute should be placed between struct and TestComponent struct __attribute__((annotate("MyAttribute"))) TestComponent If you print tu.diagnostics, clang does issue a -Wignored-attributes warning: test.cpp:2:17: warning: attribute 'annotate' is ignored, place it after "struct" to apply attribute to type declaration [-Wignored-attributes] Once fixed, ANNOTATE_ATTR is visible in AST: test.cpp CursorKind.TRANSLATION_UNIT TestComponent CursorKind.STRUCT_DECL MyStruct CursorKind.ANNOTATE_ATTR testInt CursorKind.FIELD_DECL MyInt CursorKind.ANNOTATE_ATTR testBool CursorKind.FIELD_DECL MyBool CursorKind.ANNOTATE_ATTR testChar CursorKind.FIELD_DECL MyChar CursorKind.ANNOTATE_ATTR My test code for reference: code = """ __attribute__((annotate("InvalidAttribute"))) struct __attribute__((annotate("MyStruct"))) TestComponent { __attribute__((annotate("MyInt"))) int testInt; __attribute__((annotate("MyBool"))) bool testBool; __attribute__((annotate("MyChar"))) char testChar; }; """ from clang.cindex import Cursor, Index, Config Config.set_library_file('C:/llvm-16/bin/libclang.dll') index = Index.create() args = ['-x', 'c++', '-std=c++20', 'test.cpp'] tu = index.parse(None, args, unsaved_files=[('test.cpp', code)]) for d in tu.diagnostics: print(d) def recurse(c: Cursor, indent=0): print(' ' * indent, c.spelling, c.kind) for n in c.get_children(): recurse(n, indent + 2) recurse(tu.cursor) | 2 | 3 |
77,676,117 | 2023-12-17 | https://stackoverflow.com/questions/77676117/why-does-python-read3-seek-to-end-of-file | I am trying to understand file IO in Python, with the different modes for open, and reading and writing to the same file object (just self learning). I was surprised by the following code (which was just me exploring): with open('blah.txt', 'w') as f: # create a file with 13 characters f.write("hello, world!") with open('blah.txt', 'r+') as f: for i in range(5): # write one character f.write(str(i)) # then print current position and next 3 characters print(f"{f.tell()}: ", f"{f.read(3)}") with open('blah.txt', 'r') as f: # look at how the file was modified print(f.read()) Which output: 1: ell 14: 15: 16: 17: 0ello, world!1234 As I expected, the first character was overwritten by 0, then the next 3 characters read were ell, but I expected the 1 to be written over the o in hello, then the next 3 characters read to be , w. I'm reading the docs here, but I don't see where it explains the behavior that I observed. It appears that the first read, no matter what the size, seeks to the end of the file. Can anyone provide a link to where it explains this in the docs? I tried searching for a similar question on this site, but while there were many questions related to read, none that I found mentioned this behavior. UPDATE After more exploration, it is not the first read that seeks to the end of the file, but rather the second write that does. Again, I'm not sure why, which is why I'm hoping to find somewhere in the docs that explains this behavior. Here's my change to the code above that shows that it's not the first read: with open('blah.txt', 'w') as f: # create a file with 13 characters f.write("hello, world!") with open('blah.txt', 'r+') as f: for i in range(3): # write one character f.write(str(i)) # then print current position and next 3 characters print(f"{f.tell()}: ", f"{f.read(3)}") print(f"{f.tell()}: ", f"{f.read(3)}") with open('blah.txt', 'r') as f: # look at how the file was modified print(f.read()) Which output: 1: ell 4: o, 14: 14: 15: 15: 0ello, world!12``` | Consider this example: with open('test.txt', 'w') as f: f.write('HelloEmpty') with open('test.txt', 'r+') as f: print(f.read(5)) print(f.write('World')) f.flush() f.seek(0) print(f.read(10)) You might expect this to print: Hello 5 HelloWorld Instead, it prints: Hello 5 HelloEmpty And the file contains 'HelloEmptyWorld' after execution. Even though this code: with open('test.txt', 'w') as f: f.write('HelloEmpty') with open('test.txt', 'r+') as f: print(f.read(5)) print(f.read(5)) Works as expected and prints: Hello Empty So, the .read() doesn't position the pointer at the end of the file, otherwise the second print statement should have caused an error or come up empty. However, consider this example: with open('test.txt', 'w') as f: for _ in range(10000): f.write('HelloEmpty') with open('test.txt', 'r+') as f: print(f.read(5)) print(f.write('World')) If you execute this code, and then look at the file, you will find that at position 8193, the word 'World' has been written. So, it appears that Python reads the text data in 8192 byte or character chunks, and although consecutive calls to .read() track the position in the read buffer, calls to .write() will use the actual file pointer, which has been moved 8k ahead (or to the end of the file, whichever comes first). 
Whether it's characters or bytes, you can tell from this: with open('test.txt', 'w', encoding='utf16') as f: for _ in range(10000): f.write('HelloEmpty') with open('test.txt', 'r+', encoding='utf16') as f: print(f.read(5)) print(f.write('World')) Now, the character size in the file is 2 bytes, and the word 'world' gets written at position 4097 (in characters, but still after byte 8192), so the buffer size is in bytes. (note that 8192 and 4096 are powers of two, in case those numbers seem arbitrary; also note that the first large file is exactly 100000 bytes in size, as expected, but the second one is 200002 bytes in size, which causes the world 'World' to be offset a bit, due to the encoding chosen and the byte order mark) Edit, a bit more information: Consider this: with open('test1.txt', 'w') as f: f.write('x' * 100000) with open('test1.txt', 'r+') as f: s1 = f.read(5) # 5. read first 5 bytes, expected to be 'xxxxx' f.seek(0) # 6. seek back to beginning of file f.write('y' * 5) # 7. write 5 bytes of 'yyyyy' f.read(5) # 8. then read another 5 bytes (discarded, but would be 'xxxxx') f.flush() # 9. flush the buffer, if any f.seek(0) # 10. seek the beginning of the file once more s2 = f.read(5) # 11. read 5 characters, expected to be 'yyyyy', but in fact 'xxxxx' (Win 10, Python 3.11.6) print(s1, s2) # without line 8: with open('test2.txt', 'w') as f: f.write('x' * 100000) with open('test2.txt', 'r+') as f: s1 = f.read(5) f.seek(0) f.write('y' * 5) # f.read(5) f.flush() f.seek(0) s2 = f.read(5) print(s1, s2) # the expected result Output: xxxxx xxxxx xxxxx yyyyy This shows that performing a .read() after a .write(), before flushing the buffer, can cause very unexpected results. You'll find that test2.txt will have yyyyy written at the start, as expected, but test1.txt will have it written after the first read buffer. I'm not sure if this shouldn't in fact be considered a bug in Python... | 3 | 2 |
77,672,842 | 2023-12-16 | https://stackoverflow.com/questions/77672842/infer-type-hint-superclass-init-args-in-child | I'm trying to see whether it is possible to solve the following problem Suppose I have to classes: class A(): def __init__(self, param: str, **kwargs) -> None: ... class B(A): def __init__(self, **kwargs) -> None: ... super().__init__(**kwargs) When using pyright, I don't get any (type) checking on arguments to class A since the B's **kwargs just take all arguments. Is it possible to type hint class B to mimic A._init_'s signature? In older posts the main solution seems to be to just redefine param, however in my case A's signature might change over time and I would prefer it if I didn't have to touch B in those cases. I was hoping this was maybe achievable with the "somewhat recent" additions of TypeVar and ParamSpec. I went down a rabbit hole where I would have class B have a more generic signature: P = ParamSpec('P') T = TypeVar('T') class B(Generic[P, T]): def __init__(self, cls: Type[T], *args: P.args, **kwargs: P.kwargs) -> None: ... super().__init__(**kwargs) But in this case B obviously wouldn't inherit from my generic T and I'm not even sure whether that's possible or would even solve my problem so I stopped pursuing that road | If you merely want to run some custom logic in the child class' __init__, a decorator approach will preserve the signature without the need to explicitly repeat it. from typing import Callable, Concatenate, ParamSpec, Protocol, TypeVar P = ParamSpec("P") SelfT = TypeVar("SelfT", contravariant=True) class Init(Protocol[SelfT, P]): def __call__(__self, self: SelfT, *args: P.args, **kwds: P.kwargs) -> None: ... def overinit(init: Callable[Concatenate[SelfT, P], None]) -> Init[SelfT, P]: def __init__(self: SelfT, *args: P.args, **kwargs: P.kwargs) -> None: # put your code here init(self, *args, **kwargs) return __init__ class Parent: def __init__(self, a: int, b: str, c: float) -> None: self.a = a self.b = b self.c = c class Child(Parent): __init__ = overinit(Parent.__init__) Child(1, "123", 123) | 4 | 3 |
77,674,469 | 2023-12-17 | https://stackoverflow.com/questions/77674469/does-python-c-or-exec-create-temporary-files | Where are temporary files saved when running the commands or are there no temporary files created? python -c "print('Hello, World!')" and exec("print('Hello, World!')") Both Windows and macOS. Does Python run directly or is it compiled into a temporary file and then run? I use Google Translate. | No, neither of these operations creates any temporary file. Python creates .pyc files only as a performance optimization to speed future loading of the same code, and works perfectly well without them. This is typical behavior for interpreted languages as opposed to compiled ones. As Python is generally compiled only to bytecode and not to native executables (even tools that create native executables from Python generally pack an interpreter and source code or bytecode for it to work on into that executable, instead of properly compiling to native CPU instructions); and as that bytecode can be interpreted in-process by the same tool that compiles Python source into said bytecode, Python is for this purpose effectively interpreted. (There's a larger discussion of the semantics at Is Python interpreted, or compiled, or both?, but the definition doesn't really matter for practical purposes so I'm not going to go into it further here). What would create a temporary file (with most but not all shells) is: python <<EOF print("Hello, world!") EOF ...but in that case the file is created by the shell, not by Python itself. | 2 | 1 |
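A quick way to see this from Python itself: code compiled from a string carries "<string>" as its filename, so no file on disk is involved (a small illustrative sketch):

code_obj = compile("print('Hello, World!')", "<string>", "exec")
print(code_obj.co_filename)  # "<string>" – compiled entirely in memory
exec(code_obj)               # prints Hello, World! without touching the filesystem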
77,672,765 | 2023-12-16 | https://stackoverflow.com/questions/77672765/how-can-a-line-at-the-end-of-the-code-suddenly-cause-a-bug-in-an-earlier-line | Somehow I managed to create a program that gives an error in an earlier line by adding a line at the end. The code below runs without any error. However, if I uncomment the last line an error occurs in the line before that: Traceback (most recent call last): File "/path/to/code.py", line 233, in <module> print(current_step.right.down) AttributeError: 'NoneType' object has no attribute 'down This bug is caused because the program tries to jump to a node that is outside of the grid, but I don't understand why that bug doesn't show up if comment out the last line. class Puzzle: def __init__(self, year, day): self.year = year self.day = day grid = '''7.77F7F|-F.J-J7-LF|-7.FFL7F-L-7--7-JF-7F.LL.7-|FFF7..F-7-J777FF.77.L-FL-7-FF77-L7-F-F--FJFF|-F77F-7F7-.L-FFL-|-7-LJ77F7-F-FJ77.77J.J77F-L77. F-F-J|FL-J7-L|.L|FJ|LF-7JL|J.|J.||LLJJLJ-.L7F7|L7|L7-L7J.|LJ-JJ-JJ-|F77.|.LLFJLLJ-JFJJF|-F-J7.LL7.J7J-7.7-|.LJFF.|-J7L--|..|-F--J||LF-|7.|-. FF-JF77.L7--7F--JLF|LL-L-.7-|J-L7|7F|.J.L7.|.-7L-77J.L|.L--7-|..FL7LFJLF|7JLL|||J..|7|7.|L|F|7JFL-JF7-F-|.J-7F7..FJ|||L7F77.FL-|77JL|-FF-||7 |.||L7JFLF7L7.|J|LLJ7JF-JL-7|7-FL7L|J7JF--.|7||-FFJL|F|F|77F-|-|7L--|FLJ|L7.L|-J--.J-|-F|.LJ.-7LLJ7L7F|-F-|7LJ.FF|F||F|.||LF-L-|JL7FL.LLF-77 7F|-|L7JF|7JL.F-7.F-J-FJLJFLJ7J||L-|L--J|J--J|J|F77LF---7-F7F7FF7F|FJ7JLJ.J7F7LF|L7LF.|.LF-77|F-J|JLF7F777F-|J7FFL7|F7LJ777.-7F|.FJJ|7FL|||| L-JJJLJ-FJ-7FJ7FJ-F-7.L7F|F|7F7LJ7FL7|--|F7J.777||F7L-7FJFJ|||F7JF-J|77|L|JFLJ.FJF|7LJ-.F-JFF-J..|77|LJL77J.|LJ7.L--.L7--F-J||J-|L|JF-77LL7J LL7F--7FJ.F--L77.|FF|-LJJF7-JFJLF77F7J|7FF7F7F7FJ||L7FJ|FJFJ|LJL77-LLJL7.|-FJ.|.FL|LFJLFL7F|..FFF7J7L-7FJL|-L7LF7FLJ7FF-.JJ7FJ7JF-J-J7LJFF|J F.F|FJ--7.F--7|F--||..FJJ7JJLL7FFL77.|L|JF7|||||FJ|FJL7|L7L7|F--J|||LFJ77F-|FL7FLJL|J7.|-|J..|7FJ|.|LFJ|.L||-7FJ|L7L-|.|..|LF7FFJ7JFF-J7-J.J --LJ|||.L-F-LFLJLLJ--7--LJ-.LFLFJLJ7FF.FFJ||LJLJ|FJL--JL7L7|||F7-LFJ-JF|JJFF7JL--F|F7.-J|.|L-F7L7L-7FL7L77L--LL|JL7L7.777.|JL|JJ.|F-J.L7-LL. L7.L|-7-L-77-JFL.|JF-J|J|LFFF-J|L-JFF--7L7|L-7F-JL----7FJFJLJLJL7JJLF-JJFLFJ|-|7FF-JL7FF7-F-L||7L-7|F7|FJF77F7|L|-J7L--77LF...FLLL77|F7||J.J FJJ.|FJ|JFL77-LLF7FJ-F-.7F7--|FJJ-FFJF7L-JL7L||F-7F7F-J|FJF-----J7.7L7F-7-L7L7F-7|F7FJ7FF7F7|||JFFJ||LJL7||F7F-7|.F---FF|JF-J77F|.JFJ--L--J| |-|L7-7J77JLJF|.L.FL7.FJ7F7J7FF7.FFL-JL7F-7L7|||FJ||L-7|L7L-7F7F7--|7L--FF-L7|L7|||||.FFJ|||FJ|JFJFJL--7|||||7|LF7-JFFF7JFJJ.7-J|-|FL77||-7F F---|LJLF-7-JJ-JJ|LF7FL7|F7-|L|L7F7F7|LLJLL7||||L7||F7||L|F-J|||||F|-7-F--7F||FJ|||LJF-JFJ||L7L7L7L7F7FJ||||L7F7||FLFF|L7JJ|7L|.--FJ--J--.7- LJJ.L7|7LFJ|FLJ7L|-JJ7-FL7LF--JFJ|||L7.FF7-|||||FJ|||LJL7|L--JLJL-7JL|J|F7L-J|L7LJL7JL7FJFJ|FJFJFJFJ|||FJ|||FJ|LJL77JF|FJ|F|.7|7--J7FL7.LFJ| -JF7.L--|L-J7F-J|L7J|FF7L7.L--7|FJ||FJF7||FJLJ|||7||L--7LJF7F-7F--JJFF7LJL--7|FJF--J|FJL7L7|L7|FL7L7|LJL-JLJL7L--7L7-F||7||LJL||FJJ|-|FF.--J |-||7-|7|7.L|JF7|J|L7L-F-F--7FJ||FJ||7||||L7F7LJL7|L---JF-JLJ-LJ|F--7|L7FF7FJLJFJF-7-L-7|FJL-JL7FJFJL--7F----JF--JFJF-JL--7F|-JJJJFJ.F7JFLJ. 
F7.F--F-LL77.7L-J.FF77-7FL-7|L7|||-||FJ|||JLJL7F-JL----7L--7F7F--JF-JL7L7|||F--JFJFJFF7||L---7FJL7||LF-JL--7F7L--7|-L7F---J---L77F|FFJ.|L|-7 LJ-|7-7.FJL7..777.FJ.77|F--JL-JLJ|FJ|L7LJL7-F7|L-7LF7F7L7F-J|LJF--JJF7L7||||L-7FJFJ.FJ||L-7F7||F7|L7FJF----J|L-7FJ|F7||F-7.|F|L|-|77.JF--L-7 LF-LFFJ-7-LLF77JLFFJ7JLL|F----7F-JL7|FL--7L7|||F-JFJLJL-JL7J|F-JF-7FJ|FJ||||F-JL7L-7L7|L-7||LJ||||FJL7L-7F--JF-JL7LJLJLJFJ-7F|7J.L7JFJ-|7FJ| -J.|||F-77J||FJL7LF7J.|LLJF7F-J|F-7|L7F7FJFJ||||F7L-----7FJFJL-7L7||FJ|FJ||||F7FJF-JFJL7.||L-7||||L7L|F-J|F--JF-7L7F-7F-J|J||L7L|F-.J-FJFJ.7 L|-L|JF77..LLL--F-J|L-F7F7||L-7LJFJ|FJ||L7L7||||||F7|F7.||FJF7FJFJ|||FJ|FJ|||||L7L-7L7FJFJL--JLJ||FJFJL-7||.F7|FJFJL7|L-7JJLJ-|7||L7|FJJ-L-L F77.7FJJJF7.L|J.|F7L7L|||LJ|F7L-7|FJ|FJL7|FJ|||LJ||L7||FJ||FJLJFJFJ||L7|L7LJ|||FJF-JFJL7|F---7F7LJL7|F7FJ|L7|LJL-JF-JL7FJ|FFJFL|JLF|F|LJ|L|| LJL-J-J..F7FF-.FLJ|FJ.||L-7LJL--JLJFJL7FJ||FJ|L-7LJFJ|||FJ|L7F7|FJFJ|FJL7L-7||LJFJF7L7FJLJF-7LJL--7|||LJ.L7LJF----JF-7LJLFJ.LF-|JJL.-LJ.FJL7 F||LJ.7FJ-|7|JF7LFJL7FJ|F7L--7F-7F7L--JL-JLJFJF7|F-JJ|||L7|FJ|||L7L7||F-JF-JLJF7|L||FJL-7FJFJF7F--JLJL7F7LL7FJF7F7FJFJF-77-||.F|7-.FLF-|L-77 J----7.--77FF-JL-JF-JL7LJ|7F7LJFJ|L---7F----JFJ||L-7FJ||FJ|L7||L7|FJ|||F7L--7FJLJFJ||F--JL7L7||L--7F-7LJ|F-J|FJLJLJFJJL7L--7-FJL|-LF-JFJFFJ| L-|-L7.JJF-7L----7|F77L-7L7||F7L-JF---JL7-F-7L7LJF7|L7||L-JFJ||FJ|L7LJLJL7F-J|F7|L7|||F7-FJFJ||F-7|L7L--JL-7|L7F--7L-7FJF--J-F7.|FJ.|F7.F|-7 |F-JL77FFL7|F7F-7|||L7F7L7|||||F7.L----7L7L7L7|F7|LJFJ||F--JFJ|L7L-JF7F7FJL-7|||F7|||||L7L7||||L7LJFJF-7F7FJL-JL-7L--J|FJF7.FJL-7|.J-JJFLL-J F-JFFJFFF7|||LJFJ||L7|||F|LJ||LJL7JF7F7|FJFJFJLJ||F7L7LJ|F7.L7L7L7F-JLJLJF-7|||LJLJLJ|L7|FJL7|L7L-7L-JFJ||L7F----JF-7F|L-JL7L7F-J-JJJ7F7.L-J |J.||-F-JLJ|L-7L-JL-JLJL7L7FJL7F7L-J||||L7|FJF--J||L7L-7||L7FJFJFJ|F7F7F7L7LJ|L-7F--7L7|LJF-JL7|F7|F--JFJL-JL--7F7|FJFJF---JFJL7JJ.JFFJ-7J7F F..|J-L---7|F7L--7F-7F-7L7|L-7LJL--7LJ||FJ|L7L--7||FJF-J|L7||FJJL7LJLJ||L7L-7|FFJL-7L-JL-7|F7.|LJ||||F7L--7F--7LJLJL-JFJF--7L7FJJF|.FJ|J|7F| FF7.|FJ7F7|||L---JL7LJ7|FJL-7L7|F-7L7FJ||FJFJ7F-JLJ|LL7FJFJLJL7F-JF---JL7L7FJL7|F7FJ|F-7FJLJL7L-7||L7|L7F7||F7L----7F-JJ|F-JFJL-7F77|LJ-J-J7 |L--|7|L|LJLJF7F7F7|F--J|7F7L7L7|FJFJL7||L7L-7L-7F-JF-JL7L7F--JL-7L--7F7L7LJF-J||LJF7L7|L-7F-JLFJLJFJ|FJ|LJLJL7-F77LJJF-JL7FJF--J|L7J|L-J.J. 
|LL7|JL-L7F7FJ||||||L7F7L-JL7L7|||FJF-J||JL7FJ-FJ|F7L--7|FJL7F7F7|F--J||7L-7L7FJL-7||FJL7-|L7F7L7F7|.||-L-7F--JFJL-7-FJF-7||FJF7J|FJ|F7|F7-| ||F-JFJF7|||L7||||LJ7||L---7L-JLJ|L7|F7|L7FJ|F7|FJ||F7FJ||F-J|LJ|||F7FJ|F7|L7||F--J|LJF-JFJFJ||FJ|LJFJL77FJL---JF--JFJFJFLJ||-|L7||F--77||F| F|7JFLFJLJ|L7|LJLJ-F7LJF7F7L-7F-7L7|||||FJ|FJ||||-|||||FJ||F7|F-J||||L7|||F7||||F7-L7FJF7L7|J|||FJF7L7FJFJF----7L--7|FJF77FJL7L7||||F-J7-FF7 L7J-7-L-7FJJLJF7LF-JL7FJLJL7FJ|FL7||LJLJ|FJL7||||FJLJ||L7|LJ||L7FJ|||FJ||||||||LJL7FJ|FJ|FJ|FJLJL7||FJ|FJFJLF7LL---J|L7||FJF-JFJLJLJL7JJ|JL| 7LLJ|7.FJL-7|F|L-JF-7|L---7|L7|F-JLJF-7FJ|F-J|LJ|||F7||FLJF-JL7||FJ||L7||||||LJFF-JL7LJFLJFJL7LF-J|LJ|LJ7L--JL------JFJ||L7L--JF--7F-J.FL7F| |7L-77-L7F-JF7L---JFJL----JL7LJL----JFJL7||F7L-7||FJLJ|F77L-7FJLJ|FJ|FJLJLJ|L--7L-7FJLF---JF-JFJF7|F--------------7F7L-JL-JF-7FJF7|L7J.|JFL7 LL7.L-7FLJF-JL---7FJF-7F---7L-7F--7F-JF7|||||F7|LJL7F7LJL7F-J|F7F|L7||-F---JF--J7FJL-7|F-7FJF-JFJ||L7F7F------7F-7|||F7F7F7|.|L-J||FJJ.-7|-- ..-7LL|7|LL-----7LJFJLLJF-7L-7|L-7|L-7||LJLJ|||L-7FJ||F-7|L-7LJL7L-JLJFJF7F7L--7FJF7FJ||FJ|-L-7|7LJFJ|LJF-----J|FJ|||||||||L7|F--J|||F-JLL-7 -F7F..|J7.LF-7F7L--J-FF7L7|F-J|F-JL--J|L7F--J||F7|L7||L7|L7FJF--JF7F-7L7|LJL7F7|L7|||FJ|L7L-7FJ|F7JL-JF7L-7F7F7|L7LJLJLJ|||FJ|L--7LJ-L7FJJ77 F-7|L7|.F-F|FJ|L----7FJL-J|L-7|L------JFJL7F7|||LJFJ|L7||FJL7L---J||FJJLJF--J|LJFJ|LJ|FJFJF-J|FJ|L----JL--J|||LJFJLF7F-7|||L7L7F-JF7JJLJ7.-7 7LLJ7LJFF--J|FJF---7|L---7|F7||F7F-----JF7LJLJ|L7FL7L7|||L7J|F7F-7LJL---7|F-7L7|L7L-7||LL7L-7||7|F-7F-7F-7F|||F-JF7|||FJLJL-J||L7|||L7J.L-|| L7L|LJ-FL--7|L-JF7J|L----JLJLJLJ|L------JL---7L7|F-JFJ|||FJFJ||L7L7F-7F7|LJ|L7L-7|F-J||F7L7FJ||FJL7LJ-|L7L-JLJL--JLJLJL---7JF7L-JFJL7J-F.-L- |LFF.|.F---JL7F-JL-JF----------7|F7F-7F------JF||L7FJFJ|LJFJFJ|FJFJL7|||L--7FJF7|||F7LJ|L-JL7|||F-JJF7L-JF-7F-7F7F7F7F7F7FJFJL---JF-JJ.F7.|J F--F77FL----7|L-----JF---------JLJLJL|L-------7||FJL7L7L7FJFJ-LJJ|F7|LJ|F-7|L7||||LJL-7L7F7FJ|||L7F-JL---J|LJLLJ||LJ||LJLJ7L7F---7L7JF-J|-77 |J-FF-7J.F--JL--7F7F7L7F7F7F7F7F----7|F7F7F--7|LJ|F7L7L7|L7L--7F7LJ|L-7||FJ|-|||||F7F-JL||||FJ|L7|L--7F7F-7F---7LJF7LJJF7F7FJ|F-7L-JFJF-J--J L|LLL7|FFL-----7|||||FLJLJLJ||||F---J||LJ|L-7|L-7|||FJ7|L7|F7FJ|L7FJF7||||FJFJ|LJ||||F--J|LJL-JLLJF--J|LJFJ|F7FJF-JL--7|LJLJFJ|FJ|JL|FJ7.L|J LL7-L||F7F7F-7FJLJLJL-7F7F7FJ|LJL7F7|LJF7L--JL--JLJ||F-JFJ|||L-JFJL7|LJLJLJFJFJF-J|||L--7|7F7.F--7L---JF7L-J||L7L7F---J|F---J|||F777||F7F7|J .LFJL|LJLJ|L7LJF7F---7||||LJFJF7LLJL7F-JL---7FF7F-7LJL7FJFJ||F-7L7FJL---7F7L7L7|F7|||F--JL7|L7L-7||F77FJL---JL-JFJL7F7FJ|F7|F7|||L7FJLJLJL-7 --L7.L7F-7L-JF7|||F77||||L--J|||F7F-JL7F---7L-JLJFJFF-J|FJFJ||-L-JL-7F--J|||L7|LJ||LJL7F7FJL7|-FJL-JL7L----7-F-7|F-J||L7||L-J||||FJ|F---7F7| |FLJL7LJFL---J||||||FJLJ|F----JLJ|L---JL--7L7F7F7L7FJF7|L7|FJL--7JF7|L---JL7FJL7FJ|F-7||||F7|L7|F-7F7L----7L7L7|||.FJL7|||F--J|||L-JL--7||LJ FF.FFLFF-----7LJLJ|LJF-7|L----7F7L-------7L7||LJL7|L-JLJ7||L7F-7|FJ||F7F7F7|L7FJL-JL7|||LJ||L7LJL7LJL--7F7L7L7|||L-JF7LJLJ|F--JLJF7F7F7|||J7 L7.-|-FJF---7L----JF7|FJ|F7F--J|L-------7L-JLJF7FLJF-----J|FJ|J|LJFJ||||||LJFJL--7F-JLJ|F-JL-JF-7L7FF--J|L7L7LJLJF-7|L----J|F-7F-JLJ||LJLJ-7 FJJ.F.|FJF--JF-7F--JLJL7LJLJF7FJJF------JF--7FJL7F7L--7F7FJL7L7L7FJJLJ|||L7.L7F7FJ|F---JL----7|7L7|FJF--JFJ.L7F--J7|L------JL7|L---7LJJLL|-| L7LF|-LJ|L-7FJ-LJLF----JF-7FJ||F7L----7F7L-7LJF7LJL--7|||L7L|FJFJL---7LJL7|F-J|||FLJF-7F-----JL-7||L7L7F-JF--J|F7F7L7F-7F7F-7||F7F-J.F7.JJLJ LJ-J|LL-F--JL7F--7L----7|J|L7LJ|L----7LJL--JF-JL-----JLJL7|FJL7L7F7F7L7-FJ|L7FJ|L--7L7LJF--7F7F7||||L-JL-7L-7FJ|||L7|L7||||L||LJ|L7JF|L77.|7 
.|..LFJ.L---7|L-7L-----J|FJFJF7|F---7L-----7|FF-----7F7F7LJL--JFJ||||FJLL7L7LJFJF--JFL-7|F-J|||||||F-----JF7|L-JLJFJ|FJ||||FJL-7L-JF-JFJJJ-L -LJFLLF--LF-J|F-JF7F7F7FJL7|FJ|||F--JF7F--7LJFJF----J|||L7F--7FJFJ||LJ||LL-J|LL-JF7-F--J||F-J||LJLJL------J||F7F7FJ.LJ|||||L7F7L-7L|F-JJ|LFJ L|-F77||..L--JL--JLJLJLJ.LLJL7LJ|L---JLJF-JF-JFJFF7F7||L7||F-J|FJ7||JJ|7-LJL|J7F-JL-JF7FJLJF7LJLF7F7F------JLJ|||L----7||||FJ||F7L-JL-7|L77J F|-JLLJ-7.F7F7F-------7F7F--7L7FJF------JF-JF-JF7||||||FJLJL-7LJF7||7F|-7JJ.|.FJF-7F7|LJ|F-J|F--J||||F------7FLJL----7|LJLJL-JLJL7F-7FJ77|L- LFJ-7J|.LFJ|||L------7LJ|L-7|.LJ|L-----7FJF7L--J||||||LJF-7F-JF-J|LJ7|||.|LFFFJFJJLJLJF-7L-7|L7F7|||LJF77F--JF77F7F7.||FF-------7|L7||||L77| J.L.7-|7.L7LJL7F7F7F7|F-JF7||F--------7LJFJL7F7FJ|||||F7L7|L--JF7L-7JLJ7.J.FJ|FJFFF7F7|FJF7||JLJ|||L7FJ|FJF--JL-JLJL7LJFJF-7F-7FJL7||L77L7J7 .|.F||LF7LL--7LJLJLJLJL--J|||L-------7|F7L-7LJLJ-|LJLJ||FJL--7FJL7FJJ.|77.F--||--FJLJLJL-JLJL7F7||L7|L7|L7L7F---7F--JF7|FJLLJ|||JJLJL-JJL|.L F|.F7|J|FL|F7L---7F7F---7FJ||F7F7F---JLJ|F-JF7F--JF7F-J|L---7|L-7||7.-JF---JLLJ7LL-7F-7F7F--7LJLJL-JL-JL-J-|L--7|L---J||L----7LJL7.FLJ|-L77| ||-LJ-7|LJFJL----J|||F--JL-JLJLJ|L---7F7|L--JLJF-7||L7FL---7|L--JLJ-|FJ||FLJ.J.77FFJ|FLJLJF7L-7F7F7F7F-7F--JF7FJL7F7F-J|F----JF|FF77JL7.LJ77 LJ.LL7JJ77L7F----7|||L7F-7F7F-7FJF7F7LJ|L7F-7F7L7LJL-JF7F7|LJ|F7F77.L|L|77.|FF-F--JFJF7F--JL--J|LJ|||||LJF-7|LJF7LJLJFFJL---7F7F7|L-7-777JL7 FJ-JJLJ.77FJ|F---J|||.|L7||||FJL-J|||F-JF|||||L7L-----JLJL-7F-JLJL--7J|LJLF|-|||F7FJFJLJF7F7F7FJF-J|||F--JFJL--JL7F7F7|F----J|LJ||F-J-LL|.7J F7LFF7|F--JFJ|F7F7|||FJFJLJLJL-7F7LJ|L--7||FJL7|F---7F--7F7LJF-7F7F-J.7.L-7J7|-LJLJ|L--7|LJLJLJ.L--J||L--7L-----7LJLJ||L7LF-7|F-J|L-7FFJJFFJ |7.FL|7L7F-JFJ|LJ||||L7|F------J||F7L---J|||F7||L--7|L-7|||F-J.||LJ|LJ-L--FJFJ7.||LFFF-JL-77F--7FF-7||7F7L-7F--7L-7F-JL7L-JFJ||F7|F-J7||J|J7 J--7|LJ|LJ|LL7|LFJ|||FJ|L7F7F--7|LJL----7LJLJ|||F--J|F7||||L--7LJ-|7|-|7FL|||L-L77-FFJF---JFJF7L-JFJLJFJL--J|F-JF7LJ7F7|F--JFJLJLJL7LF777L-J |J|.F-F-|7F.LLJFJFJLJL-JFLJLJJFJ|F--7F7FJ7F-7|||L--7||||||L7F-JF-7.7J-F|...J77LFLFF7L7|F--7L7|L--7|F7FJF---7||LFJ|F7FJLJ|F7-|F7F-7FJ-F77-|LL LFFF7LF-FJ7.FLFL-JF-7F-7F7F7F7L-JL-7LJ|L--JFJ||L7F7|||||LJFJL--JFJ7|..FJ.|-|JJLF-FJL-J||F-J7LJJF-JLJLJFJF--J|L7|FJ|||F-7LJL-J|LJJLJF-J|7|L.J FFJ||--7|F7-|.FF-7L7|L7LJ|||||F7F--JF7L----JL||LLJLJLJ||F7L7F7F7|J7JL-||F7|J|7F|.L7F7FJ||F7F7F7L7F--7FJLL--7|FJ|L-JLJL7|F7F-7L---7FJF7L77FJ. J7-JJ-LFJJJ.LF7L7L7|L7L-7LJLJLJ|L---JL------7LJF7F7F7LLJ|L7LJ||LJ||.|.L|J|7-JF-LL7||||FJLJ||||L-J|F-J|F----J||FJF-7F--JLJ|L7|F--7LJFJ|FJLJJ7 L||L.F.|7J-F.||7L7||FJF7L-----7|F7F-7F------JF-JLJ||L7F7|FJF7LJF7F7-F-FJL|-7J|J.LFLJLJL7F7LJ|L7F7|L7FJL7F---JLJFJJ|L----7L-J|L-7L7FJFLJJ-|.F LF-7FF-J7-FJF|L--JLJL7|L-77F7FJLJ|L7LJF7F-7F7|F7F-J|FJ||||FJL--JLJL7J..F7J|-.L77L|J|F7.LJL-7||LJLJJLJF7LJF-7F7FJF7L7F7F7L--7|F7|JLJJLJ|...FF |L-J-7..--F7FJF--7F-7|L-7L-JLJF-7L7L--JLJFJ||LJ|L--JL-JLJLJF-7F7F--J||-.F7J.7-J|.LJF||F-7F7||F------7||F-JFJ||L-JL7LJ||L---JLJ||.||.|JJL.--. F-F|-|F7.F|LJFJF7LJFJL--JF7F-7|JL7L-----7|FJL7FJF--7F-7F7F7|FJ|LJ-|7LJL.LJ.FLLFF7JFFJ|L7|||||L-----7|||L-7L7||F7F7L-7|L--7F7F7||F777..LLFJ|7 JFF--|F--L|F7|FJL--JF7F--JLJ|LJF7L-----7LJL-7|L7|F7LJ-LJLJLJL7|7JFJLF7JFLL-F7FF||F7L7|FJLJLJ|F-----JLJL-7L7LJ||LJL--JL--7LJLJ|LJ|L-7--F7J7L| |7J|.L-7F7LJLJL----7|||F-------JL-----7L7F7FJL-JLJ|F7FF----7LLJ7LJJL-J77.FFF7-FJLJL7|LJF---7|L7F--7F7F-7L-JF7|L----7F7F7|F7F-JF-JF-J7J|F.JFJ J|FL7|.7JF7||F-7F7L||||L--------7F---7L-J|LJF7F7F7LJL7L7F--J77|7-|JLF7-F-7F||FL--7FJL--JF--JL7LJF7LJLJFJF7-|||F-7F-J||||LJ|L--JF-J7L77LJFL|. 
|L7FF--J-J7|FL7LJ|FJ|LJF--------J|F--JF7F|F7|||LJ|F7FJFJL7LF7F7.F-7.FLJL7|7||F77FJL--7JFJF--7L--JL7F7FJFJL7||LJ-LJLFJLJL7LL----JF777LFJL-JF. F-J-J|J.|-F7F7L-7||FJ.FJF7F7F--7FJL---JL-J|LJ|L7FJ||L-JF7L-JLJL-7|F7JF77|L-JLJL7|F7F7L-JFJF-JF---7LJLJFJF-J|L------JF--7L----7FFJL7--..|.FL| L7J-|77FLFJLJL-7||LJF7L-JLJ||F7LJ..F7F----JF7L-JL-J|F7FJ|F7F----JF-7-FLFL-----7|LJLJL7F-JFJF7|F--JF7F-J.L--JF-------JF7|F----JFJF-J||77F|--J |||F.F-JJ|F---7LJL--JL----7LJ|L--7FJ|L---7FJL-----7LJ||LLJ|L-7LF-JFJF--F---7F-JL7F7F7LJF7L7|LJL---JLJF77F---J|F7LF-7F||LJ7F---JFJ7FLF7F7.7.7 F77JL.|FF||F-7L-----7F--7FJF7L--7|L7|F7F7LJF-----7L-7|L7F7|F-J.L-7|F7-LL--7|L-7FJ|LJL-7|L7LJF7F7F7J.FJL7L-----JL7L7L-JL--7|F-7FJLFJ|FJFL-JFJ ||L--7-JLLJL7L-----7LJLFJL-J|F7FJL-JLJ|||F7L----7L--JL-J|LJ|F77F7||||F---7|L7FJL7L---7||FJF-JLJLJL7FJF7L--------JFJF7F7F7LJL7||F7J||J.||7JFJ |-7LL|JF--F-JF----7L-7FJF--7LJLJF7F--7LJ||L-7F-7L-7F7F77|F-J|L7|LJ|||L--7||FJL7FJF--7|LJL7L-7F-7F7LJFJL-----7F7F-JFJLJLJL---J|LJL7F7LFFF-F-7 |7|L||F|L||F7|F---JF7LJFJF-JF7F-JLJF7L-7|L-7|L7|F7||LJL7LJF7|FJ|F7||L7JFJ|||JFJL7L-7|L-7FJF7LJ7LJ|F7L7F-----J|||F-JJ7F7F----7L7F-JF7.|LFJ||| --LFFJ7J7FLJ|||F7F7||F7|FJF-JLJF---JL7FJL--JL7|||LJ|F-7L7FJ||L7LJ||L7L-JFJ|L7|F7L7FJ|F7||-||F----J||FJL7F7F7FJLJL--7FJ||F---J-LJF7||-7FLFJJ. LF7||L-7-FJFJ|LJ||||||LJL-JFF--JF--7.LJF----7LJ||F-J|7L-J|FJL7L7FJ|J|F--JFJFJLJ|FJL7||||L7|LJF--7FJ||F7LJLJLJF----7LJFJ|L-----7FJ|||77LF77L. ||LJJ7.J.L-L-JF7LJLJLJF-----JF--J|FJF--JF7F7|F7LJL-7L7F7||L-7|FJ|FJFJ|F7-L7|LF-JL--JLJ||FJL--JF7LJ|LJ||F7F7F7L---7|F-JFJF-7F--JL7LJL-77||7JF L7LJ7-|-FF|JF-JL---7F7L--7F--JF7F7L-JF--JLJ|LJL7LF7|FJ|L7L7FJ||FJL7L7LJL-7|L7L--7F-7F-J|L7F7F7|L7F---JLJLJLJL-7F7|||F7L7|F|L---7L7F7FJJFJ|-7 |--L-F---7JLL----7FJ||7F-J|F--JLJL---JF7F7||F7FJFJ||L7|FJFJ|7||L-7L7|F---JL7L--7LJFJL-7|FJ|||||FJLS---------7|||||LJ|L-JL7|F---JFJ||L77L7-.| |LF..L|LFL7.LF---JL-JL7L7FJL---------7|LJL7LJLJ7|FJ|FJ|L7L7|FJ|JFJFJ|L7FF7FJF--J-FJF--J||FJLJLJ|F7F77F7F---7L7||LJFFJF7F-J||F7-FJFJL-J7..LF7 |.|.FJ|L|L|7FL7F-----7L7LJF------7|F7LJF-7L7F--7|L7|L7L7|FJ|L7L7L7L7|FJFJ|L7|F77FJFJF-7|||F---7|||||FJLJF-7L-JLJF7FJFJ||F7LJ|L7L7L7F7F--7-F7 |7FFJ-L-J|F-JLLJF7FF7|FJF7L---7F7L-JL--JJL-JL7FJ|FJ|FJFJ|L7L7L7|.|FJ||JL7|FJ|||FJFJFJFJ||LJF7|||||||L---JFJ-F--7|LJFJ|LJ||F7|FJFJFJ|LJF7L7J| .FJ|LF|-|7.||7-FJL-JLJL7||F--7LJL--7F-----7F7||FJL7||7L7|-L7L7||FJL-JL7FJLJFJ|||FJ7L7L7|L7FJL7|||LJL7F7F-JF7|F-JL7FJF7F7|||||L-JFJFJF7|L-JL- -L-7-LL-L--7LF-JF7F7F-7|||L-7L-----J|F----J|LJ|L-7|||F-JL7J|FJ||L7F---JL--7|FJ||L-7.L7||FJL-7||||F--J|LJF-JLJ|F7FJL-JLJLJLJLJF-7L-JFJLJJ.|7. 
|-FJ|.J7FL-|-L-7|||LJ||||L7FL-------JL-7.F7L-7L7-||||L-7FJFJL7||FJL7F7F-7J|||FJ|F-JF-J|||F7FJ||LJL7F7L--JF--7LJLJF7F--7F7F--7|-L7F7L-7JLFL|7 |-J-J-77FLF.F|-LJLJJF-J|L7|F7F------7F7L7|L7.L7|FJLJL7L||FJF-J||L7FJ||L7|FJ||L7||F7L-7|||||L7|L-7FJ||F---JF7L7F--J|L-7LJLJF7|L-7LJ|F7|-F-7LJ -J.F|F-LJLF--7F7F---JF7L7||||L7F-7F7LJL-JL7L77||L-7F-JFJ|L7|F7||FJL7||FJ||FJ|FJ|LJ|F7|||||L7|L7FJL7||L----JL7LJF-7|F7L7F7F||L--JF7LJLJ|L-JLL LF7|F-JL7|L7FJ||L----JL7LJLJL-J|L||L7F7F-7L7|FJL7FJL7FL7L7||||||L-7LJ||FJ|L7||FJF-J|||LJ||FJL7LJF-J||F-7F--7L--JFJLJL-J|L-JL---7|L7-||JFLJL| L|JF7-7---FJ|FJ|F7F7|F7L7F7F-7FJFJ|FJ|||FJ7||L7FJ|F-JF7|FJ||||||F7L-7|||7L7|||L7|F7||L7FJ||F7|F-JF7||L7|L-7L---7|F--7F7|F------J|FJF7--J7-F| 7-FJL7LF--L7|L7LJLJL-JL-J|||FJ|JL-JL7|||L7FJL7||FJ|F7|||L7||||||||F7|||L7FJ||L7||||||FJ|FJ|||||F7|||L7||F7L---7|LJF-J|LJL-----7FJL-J|.-J|.F7 L-7---FJLFFJ|FL---7F-7F7FJLJL-JF----J|LJFJL7LLJ|L7LJ||||FJ|||LJLJ|||||L7LJL|L7||||LJ|L7|L7|||||||||L7|||||F7F-J|F7|F7|F-7F----J|F---J7J7F7F7 FF--JF-7.LL7L-----J|FJ|LJF--7F-JF--7FJF-JF-JF--JFJF-J|LJL7||L--7FJ|||L-JF--JFJ|||L7FJFJL7|||||LJ||L-JLJ|||||L7JLJ|LJLJ|-||-F---JL---7JFL7FJJ L7JL-JJ|7|F|F7F7F7FJL7|F7|F7LJF7|F-J|FJF7L7.L7F7|FL7FJF--J|L7F7|L7|LJF--JF7FJFLJL7||FJF-J|LJ|L-7|L---7FJ|||L7L--7L7F-7L7LJFJF7F7F---JFJL|-|J FF7L7|.LLF-J||||||L7FJ||LJ|L7FJ||L7FJL7|L-JF-J||L-7|L7L--7|FJ||L7LJF-JF--J|L-7JF-J|LJLL-7L-7|F-J|F---JL7||L7|F--J||L7L7L--JFJLJ|L--77J.LFJ|7 J.FL7F7J|L-7|LJLJL-J|FJL-7|||L7LJJLJF7|L-7FJF7|L7FJL-J7F-J|L7|L7||FJF7L--7|F7L7|F7L7-F--JF7|||F-JL7F7LFLJL7||L--7FJFJ|L7F7FJF7FJF--J-7F7|-JJ FLJFLJL--||||JLLF7F-J|F-7LJFJFJ.F7F-JLJF7|L7||||||F----JF-JFJL7||FJFJ|F7FJ|||FJ||L7|FJF-7|||LJL7F7LJL7F---J||F7FJL7L--7|||L-J|L7L7J|LFJ--77. FJ.-7.|.-LLLJ.F-JLJF7|L7L--JFJF-JLJF7F7|LJ|LJ||FJ||F7F-7L77L7FJLJL7L7LJ|L7LJ||FJ|LLJL-JJLJ|L-7F||L7F7||F7F7|LJLJF7L7F-J||L7F-JFL-J.J.77F|--| L-7FJ.F-|7J.FFL---7|LJFJF7F7L7|F7F7||||L--77FJ|L7||||L7|FJF-JL---7L-JF7L7L-7LJL7|F---7LF7FJF-JFJL7|||||||||L----J|FJL-7||FJL----7.-L-LJLLJ7| J-777.7LL---LF----J|F-JFJLJL-JLJ||LJLJ|F--JFJFJ7||LJ|FJ||JL-7F7F7L7F-JL-JF7L-7FJ|L--7L-J|L7L7FL7FJLJLJ|||||F7F7F7|L7F-J||L-7F7F7L7.|J|JFL-JJ LF|J7LJFJ.|7FL--7F-JL7FJF-------JL-7F-JL-7FJFJF-JL7|LJFJL-7|||LJL-JL7F--7|L7FJL7||F7L--7L-JFJF-JL---7FJ|||||LJ||LJ7|L-7||F7LJ|||FJ7F7FFJJJL7 FL7-|J.JJ7LFF---JL77|||FJF7F-7F7F-7||F-7FJ|FJFJF-7|F--JF7FJFJ|F--7F-JL7FJ|J||F7||FJL--7L7F-J7L-7F--7|L7||||L-7|L7F7L-7||||L7FJ||L--77FJL-7|F 7J.||-7J7|7|L-7F7FJF7||L7|||FJ|||FJ|||FJ|FJ|FL7|FJ||F7FJ|L7L7LJF-JL7F7|L7|FJ|||LJL-7F7L-JL---7FJL-7LJFJ|||L7FJL7LJL7.|||LJFJL-JL7F-JJL-7|LJ- -7F-J.LF-|-L..||LJFJLJ|.LJLJL7|||L-JLJL7|L-JF7||L7|LJ|L7L7|FJF7L--7||||FJ|L7||L----J|L7F7F7F7|L7F7|F7L-J||FJ|F-JF-7L7LJL7FJ7J|JL|L--7J.|F7LL LFJLLF-7LF7JF-LJLL|F-7L-7F---J|LJF-----JL---J||L7||F-JFJFJ|L7||F7FJ||||L7L7|||F7F---JL||||LJLJFJ||||L--7LJ|FJL7FJ|L7L-7FJL-7F77-L7F-J.77FJLL .|7F|LJL-J|7|LLF--LJLL7FJ|F-7FJF-JF---7F7F---JL-JLJ|F7L7L-JFJ||||L7LJ||FJFJ||LJ||F7F--J||L7F--JFJ|LJF-7L-7|L7FJL--7|F7||F-7LJ|--FJ|7|F-FJ7.| F--7LJJ.|L|.L7FL-LJ||FLJFLJFJL7|F-JF--J||L----7F---J||FJF--JFJ||L7L-7||L7|FJ|FFJ||LJF-7||FJ|F7FJFJF7L7L--JL-JL7F-7|||||||FJF-JJ.L7L7-|JL|F.F LJ7||JF-|JFJ7L--|J||7FLLJ-LL7FJLJF7L--7|L7F---JL---7|||7L7F7|FJ|||F-J||FJ|L7L7L7|L7FJFJ|LJFJ|||-L7|L-JF7-F--7F|L7LJLJLJLJL7|J|.-LL7|7L77LL-7 F-7.-.7FL.LJF7J-|F|.J7LLL-|7LJLLFJL---J|-|L--77F7F-J|||F7LJLJL7|FJL-7||L7|LL7|FJL7|L7L7L-7L7||L7FJL---JL-JF7L7|FJ|-|||7|.LLJJFJ..F||7-FJ|.|J L.L-LFF-FJ.-L-J.FLF-F7-.FL|F-77FJF7F7F7L7|F-7L7||L7FJ|||L-----J|L7F-J|L7|L7F|||F-J|FJFJF7L7||L-J|F7F7F--7FJ|FJ|L-7-J7JFJ-L|L-FJ-|-||--FJL-|J 
.FL7.-FJ|L7.||L-JFJ|LF.77FLL7L-JFJLJLJ|FJ|L7L7LJ|FJ||LJL--7F7F7|FJL-7|FJL7L7||LJF-J|FJFJL-JLJJF-J|LJ|L-7LJFLJ7L--J||JL|L77|..|7-|.LJJFJ.|-|- |JF-L7LF|.-7J.77.|F7F7-77F-L|F7FJF--7FJL7L7L7L7FJ|FJF-----J|||LJL7F-JLJF-JFJ||F-JF7|L7L---7F--JF7L-7|F7L7F-7J7J7|JF---JFJLFJ7F-7LL777FJ.7F-J L-F-LF--J.JJ.FL77L-JF|FLJ||.LJLJ-|F7LJF7|J|FJL|L7||7|F-7F-7|LJF--JL--7JL7FJF||L-7||L7|F-7FJL-7FJL7FJLJL7LJFJLJ.FJ.F-|F-J77|L-JJJ-|LLLL-|-L.| 7LF7.FJ|FFJ|F|JL7--7.LJ-FL7F|J..FLJ|F7||L7LJF-JFJ|L7||FJ|FJ|F-JF-7F-7L7||L7FJ|F-J||FJ|L7|L-7FJ|F-JL--7FJF-J|J.FLJF|F-7F-7---J.||FL7L7J|L|J.| L.-|--J|.|.7--7J7J..FJL-L-J||--FF-|LJ||L7L7FL-7L7L7|LJ|FJ|FJL-7|FJL7L-JFJFJ|FJL7FJ||FJFJL7FJ|FJL7F7F-JL7L--7|FJL-J|7.|7|JLLL-7FL---7L.---777 J-L|-|7.FF7|.L|.|J|.---|7|7F7FLL|FFJL||LL7|JJLL7|FJ|F-J|FJL-7FJ|L-7L--7|FJJ||JJLJ7LJL7L7.LJ7||7LLJ|L--7L7F7|J-|L|7.F-7||7-L77L7-J..F-FL.LJ-7 L7|F-7J7-FJ7F7LJ-F7.F|-F-77LL7LJ|LJLFJ|-LLJJL|LLJL7|L-7||F--JL7L-7|F7FJLJFLLJF--FF---JFJF---JL7-F7|F7FJJ||LJJFL.-J.LFJJJF-7L|7J|L-F-FLJ.|||. .|-L-|L-JL.77L7|-L|-FF.||.--FF.|F-||L-JLL-LL-F7LF-J|LFJ||L77-FJF7|||LJJF77||F|J|LL--7FJJL7F-7FJFJLJ|LJF7LJ..-7JFJ-L--JJ-J-LLFJ.||FJJ.|.JLF|J .LJL77-JF|7|7-LL.LJLLJ.LF-|-7.LLJ.F-JL|-L7L|.7||L--JJ|FJL7L-7L-J|||L--7||L-L7L77.J7F||F--J|FJL7L7F7L--JL7J77F-J||F|J||.L|J|F|7-L-JJ77J.FFFJ. -J..L7JLFJF7-7F|77L7-77F-FJ-|.|-F.|L|L7|LJ7F7JFF|J|F-JL7LL--J|LFJ||F-7LJL7.L--.J-FFFJ||F7FJL-7L7|||F-7F-J|FF-.FLJ-J.7F7.-.F7|FJFJ7LFJF-|-JF7 J--7JL7FJF|LJ-LJL--JL|F--JJ-|FF7LLF--FJJ|F-.7|FLJ|LL--7L----7J7L7||L7|F7FJJJ|FJJ-FFL-JLJ|L7|FJFJLJ|L7LJLLFFJL|77||7.L|LJL-JLLJ7-7LJ|-J.||.|| |-L|JL||F7L777||LJ-L.FJF7J.FLF--7FLF-|JL7-LJ-|L|-JJ..FJF-7F-JJFL|||FJ||||J.-JJ|F-|JJ|FF-JFJFJFJ7.||FJ-J7|J|7F|-L-J-J-J7J.|FJLJ.FJF7..F77L7-L |||.F.LL||-L-J7-LFF7-|--LJ.LF|JLJ7FF7L|FF-7J.LFJ-77F-|FJFJ|.|J|FLJLJLLJLJJ7|LJFLJLLJ|7L-7|-L-JJ7.FJ|-F7JL--J-J7JJLFJLL7-FFL77FLJJL7-.7J||F-. 
FLJ7LLJFL-7FJF77.LL7J|F||7|-LJ7.FJ7|LF77JLJ7FF-LJ-J77LJ|L-J--7.L-L|JL.||JLJF-FJL77.-.L.LLJJ.|.F|FL7|-L7FFJ-L-J|J7FJ7JFJ7|F7L77|FF-L-7|-LFJ.F FJ.7LL--L-7J--L-7LJJL-|-FFL-F------JF|-7-7.-FLJ.L|LL|JL|JLJJLL7L-JJJLFJ.LL-JLJL7--F--J-|.J.L-J-LFJLJ7J-7-JLLFLJ.J.LL.JJ-JLJL-J|---L7LFJLLL|.'''.split("\n") class Node: instances = dict() def __init__(self, char, row, column): self.row = row self.column = column self.char = char self.__class__.instances[f'{row}-{column}'] = self self.part_of_loop = False @property def connects_to(self): if self.char == "F": return {self.down, self.right} if self.char == "L": return {self.up, self.right} if self.char == "J": return {self.up, self.left} if self.char == "7": return {self.down, self.left} if self.char == "|": return {self.down, self.up} if self.char == "-": return {self.left, self.right} return set() @property def key(self): return self.__class__.get_key(self.row, self.column) @property def up(self): return self.__class__.get_instance(self.row - 1, self.column) @property def down(self): return self.__class__.get_instance(self.row + 1, self.column) @property def right(self): return self.__class__.get_instance(self.row, self.column + 1) @property def left(self): return self.__class__.get_instance(self.row, self.column - 1) @classmethod def get_instance(cls, row, column): key = cls.get_key(row, column) if key in cls.instances: return cls.instances[key] else: if 0 <= row < len(grid) and 0 <= column < len(grid[0]): char = grid[row][column] return cls(char, row, column) else: return None @classmethod def get_key(cls, row, column): return f'{row}-{column}' def __repr__(self): return f'{self.row},{self.column}: {self.char}' for i, row in enumerate(grid): if 'S' in row: j = row.index('S') start = Node.get_instance(i, j) start.char = '-' previous_step = start current_step = list(start.connects_to)[0] buggy_node = Node.get_instance(51, 139) while current_step != start: next_step = list(current_step.connects_to.difference([previous_step]))[0] previous_step, current_step = current_step, next_step if current_step == buggy_node: if not previous_step.row < current_step.row: print(current_step.right.down) # weird = [node for node in set() if node.column > 0] The code above was not the exact code in which I noticed this. I've tried to strip as much code as possible that still caused the bug to occur. Weirdly enough, during this process. The bug would sometimes flip around. (sometimes the code would only cause an error if the line was commented out and sometimes, the code would only cause an error if the line was executed as code. It doesn't end there: If the class Puzzle is removed from the start of the program (it is no longer referenced in the code), the code won't run into an error in either situation. | The problem has to do with this line: current_step = list(start.connects_to)[0] start.connects_to is a set, which has no defined order, so if you convert it to a list and take the first element, the result can be arbitrary. Changing the structure of the code (e.g. adding or removing a statement at the end) changes the internal state of the interpreter and can influence which element you get, but this depends on the implementation and I can't explain it. Depending on the initial value of current_step, later during the execution of the code, either print(current_step.right.down) is executed while current_step.right is None, or not. | 2 | 3 |
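A small illustration of the point made in that answer, using a minimal stand-in class rather than the question's Node objects: taking "the first" element of a set via list(...)[0] is arbitrary (Node hashes by id, so the order can change between runs), while picking by an explicit key makes the starting direction reproducible. The class name P and the key are inventions for this sketch.

```python
class P:  # minimal stand-in for Node: default hash is id-based
    def __init__(self, row, column):
        self.row, self.column = row, column

candidates = {P(0, 1), P(1, 0)}          # like start.connects_to
arbitrary = list(candidates)[0]          # which element you get is not defined
chosen = min(candidates, key=lambda p: (p.row, p.column))  # reproducible choice
print((arbitrary.row, arbitrary.column), (chosen.row, chosen.column))
```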
77,659,069 | 2023-12-14 | https://stackoverflow.com/questions/77659069/how-to-understand-and-debug-memory-usage-with-jax | I am new to JAX and trying to learn use it for running some code on a GPU. In my example I want to search for regular grids in a point cloud (for indexing X-ray diffraction data). With test_mats[4_000_000,3,3] the memory usage seems to be 15 MB. But with test_mats[5_000_000,3,3] I get an error about it wanting to allocate 19 GB. I can't tell whether this is a glitch in JAX, or because I am doing something wrong. My example code and output are below. I guess the problem is that it wants to create a temporary array of (N, 3, gvec.shape[1]) before doing the reduction, but I don't know how to see the memory profile for what happens inside the jitted/vmapped function. import sys import os import jax import jax.random import jax.profiler print('jax.version.__version__',jax.version.__version__) import scipy.spatial.transform import numpy as np # (3,N) integer grid spot positions hkls = np.mgrid[-3:4, -3:4, -3:4].reshape(3,-1) Umat = scipy.spatial.transform.Rotation.random( 10, random_state=42 ).as_matrix() a0 = 10.13 gvec = np.swapaxes( Umat.dot(hkls)/a0, 0, 1 ).reshape(3,-1) def count_indexed_peaks_hkl( ubi, gve, tol ): """ See how many gve this ubi can account for """ hkl_real = ubi.dot( gve ) hkl_int = jax.numpy.round( hkl_real ) drlv2 = ((hkl_real - hkl_int)**2).sum(axis=0) npks = jax.numpy.where( drlv2 < tol*tol, 1, 0 ).sum() return npks def testsize( N ): print("Testing size",N) jfunc = jax.vmap( jax.jit(count_indexed_peaks_hkl), in_axes=(0,None,None)) key = jax.random.PRNGKey(0) test_mats = jax.random.orthogonal(key, 3, (N,) )*a0 dev_gvec = jax.device_put( gvec ) scores = jfunc( test_mats, gvec, 0.01 ) jax.profiler.save_device_memory_profile(f"memory_{N}.prof") os.system(f"~/go/bin/pprof -top {sys.executable} memory_{N}.prof") testsize(400000) testsize(500000) Output is: gpu4-03:~/Notebooks/JAXFits % python mem.py jax.version.__version__ 0.4.16 Testing size 400000 File: python Type: space Showing nodes accounting for 15.26MB, 99.44% of 15.35MB total Dropped 25 nodes (cum <= 0.08MB) flat flat% sum% cum cum% 15.26MB 99.44% 99.44% 15.26MB 99.44% __call__ 0 0% 99.44% 15.35MB 100% [python] 0 0% 99.44% 1.53MB 10.00% _pjit_batcher 0 0% 99.44% 15.30MB 99.70% _pjit_call_impl 0 0% 99.44% 15.30MB 99.70% _pjit_call_impl_python 0 0% 99.44% 15.30MB 99.70% _python_pjit_helper 0 0% 99.44% 15.35MB 100% bind 0 0% 99.44% 15.35MB 100% bind_with_trace 0 0% 99.44% 15.30MB 99.70% cache_miss 0 0% 99.44% 15.30MB 99.70% call_impl_cache_miss 0 0% 99.44% 1.53MB 10.00% call_wrapped 0 0% 99.44% 13.74MB 89.51% deferring_binary_op 0 0% 99.44% 15.35MB 100% process_primitive 0 0% 99.44% 15.30MB 99.70% reraise_with_filtered_traceback 0 0% 99.44% 15.35MB 100% testsize 0 0% 99.44% 1.53MB 10.00% vmap_f 0 0% 99.44% 15.31MB 99.74% wrapper Testing size 500000 2023-12-14 10:26:23.630474: W external/tsl/tsl/framework/bfc_allocator.cc:296] Allocator (GPU_0_bfc) ran out of memory trying to allocate 19.18GiB with freed_by_count=0. The caller indicates that this is not a failure, but this may mean that there could be performance gains if more memory were available. Traceback (most recent call last): File "~/Notebooks/JAXFits/mem.py", line 38, in <module> testsize(500000) File "~/Notebooks/JAXFits/mem.py", line 33, in testsize scores = jfunc( test_mats, gvec, 0.01 ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 20596777216 bytes. 
-------------------- For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these. | The vmapped function is attempting to create an intermediate array of shape [N, 3, 3430]. For N=400_000, with float32 this amounts to 15GB, and for N=500_000 this amounts to 19GB. Your best option in this situation is probably to split your computation into sequentially-executed batches using lax.map or similar. Unfortunately there's not currently any automatic way to do that kind of chunked vmap, but there is a relevant feature request at https://github.com/google/jax/issues/11319, and there are some useful suggestions in that thread. | 2 | 2 |
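The lax.map suggestion can be sketched concretely. This is a minimal illustration, not the answer's own code: it restates the question's count_indexed_peaks_hkl and assumes the number of matrices divides evenly into the chosen chunk size; the helper name and chunk_size default are made up here.

```python
import jax
import jax.numpy as jnp

def count_indexed_peaks_hkl(ubi, gve, tol):
    hkl_real = ubi.dot(gve)
    drlv2 = ((hkl_real - jnp.round(hkl_real)) ** 2).sum(axis=0)
    return jnp.where(drlv2 < tol * tol, 1, 0).sum()

def chunked_scores(test_mats, gvec, tol, chunk_size=50_000):
    # vmap inside each chunk; lax.map runs the chunks sequentially, so the
    # intermediate is only [chunk_size, 3, gvec.shape[1]] at any one time.
    inner = jax.vmap(count_indexed_peaks_hkl, in_axes=(0, None, None))
    n_chunks = test_mats.shape[0] // chunk_size       # assumes an exact split
    chunks = test_mats.reshape(n_chunks, chunk_size, 3, 3)
    scores = jax.lax.map(lambda m: inner(m, gvec, tol), chunks)
    return scores.reshape(-1)
```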
77,668,149 | 2023-12-15 | https://stackoverflow.com/questions/77668149/in-python-is-there-a-way-to-get-the-code-object-of-top-level-code | Is it possible to get the code object of top level code within a module? For example, if you have a python file like this: myvar = 1 print('hello from top level') def myfunction(): print('hello from function') and you want to access the code object for myfunction, then you can use myfunction.__code__. For example, myfunction.__code__.co_consts will contain the string 'hello from function' etc... Is there a way to get the code object for the top level code? That is, for the code: myvar = 1 print('hello from top level') I would like something like __main__.__code__.co_consts that will contain 'hello from top level', but I cannot find any way to get this. Does such a thing exist? | The code that is executed at the top level of a module is not directly accessible as a code object in the same way that functions' code objects are, because the top-level code is executed immediately when the module is imported or run, and it doesn't exist as a separate entity like a function does. But when Python runs a script, it compiles it first to bytecode and stores it in a code object. The top-level code (the __main__ module) has a code object, but it is not directly exposed, so you need to use the inspect module to dig deeper: import inspect def get_top_level_code_object(): frame = inspect.currentframe() # Go back to the top-level frame while frame.f_back: frame = frame.f_back # The code object is stored in f_code return frame.f_code if __name__ == "__main__": top_level_code_obj = get_top_level_code_object() print(top_level_code_obj.co_consts) would yield (0, None, <code object get_top_level_code_object at 0x7f970ad658f0, file "/tmp/test.py", line 3>, '__main__') | 19 | 19 |
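As a complement to the frame-walking approach in that answer, here is a hedged alternative sketch: recompiling the script's own source with compile() yields an equivalent module-level code object (not the very object currently executing). It assumes the program was started from a file, so __main__.__file__ exists.

```python
import __main__

# Re-read and recompile the running script to get a module-level code object.
with open(__main__.__file__) as f:
    source = f.read()

top_level_code = compile(source, __main__.__file__, "exec")
print(top_level_code.co_consts)  # includes top-level string constants
```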
77,653,860 | 2023-12-13 | https://stackoverflow.com/questions/77653860/how-to-insert-a-png-without-background-to-another-png-in-pyvips | I have two png images with transparent background example: and I'm trying to achieve something similar to the PIL paste() function in pyvips, where I can merge two images and 'remove the transparent background' using a mask. PIL example: image = Image.open(filepath, 'r') image_2 = Image.open(filepath, 'r') image.paste(image_2, (50, 50), mask=mask) Expected Result: I've already tried to use the pyvips insert() function, but both images retain their backgrounds. Pyvips insert: image = pyvips.Image.new_from_file(path, access='sequential') image_2 = pyvips.Image.new_from_file(path, access='sequential') image = image.insert(image_2, 50,50) Pyvips Result: How can I have the "expected result" with pyvips? | You can do it like this, using the composite() function: import pyvips # Load background and foreground images bg = pyvips.Image.new_from_file('WA8rm.png', access='sequential') fg = pyvips.Image.new_from_file('Ny50t.png', access='sequential') # Composite foreground over background and save result bg.composite(fg, 'over').write_to_file('result.png') If you want to add an offset in x and y, use: bg.composite(fg, 'over', x=200, y=100).write_to_file('result.png') | 2 | 1 |
77,664,047 | 2023-12-15 | https://stackoverflow.com/questions/77664047/find-all-combinations-of-a-polars-dataframe-selecting-one-element-from-each-row | I have a Polars Dataframe, and I would like to contrust a new Dataframe that consists of all possible combinations choosing 1 element from each row. Visually like so: An input Dataframe | Column A | Column B | Column C | | -------- | -------- | -------- | | A1 | B1 | C1 | | A2 | B2 | C2 | | A3 | B3 | C3 | would give | Column A | Column B | Column A | | -------- | -------- | -------- | | A1 | A2 | A3 | | A1 | A2 | B3 | | A1 | A2 | C3 | | A1 | B2 | A3 | | A1 | B2 | B3 | | A1 | B2 | C3 | | A1 | C2 | A3 | | A1 | C2 | B3 | | A1 | C2 | C3 | | B1 | A2 | A3 | | B1 | A2 | B3 | | B1 | A2 | C3 | | B1 | B2 | A3 | | B1 | B2 | B3 | | B1 | B2 | C3 | | B1 | C2 | A3 | | B1 | C2 | B3 | | B1 | C2 | C3 | etc... I have tried to implement this by simply using a 2D array and a double for loop which is fairly simple, however, I would really like to implment this using Polars' Dataframes and being compliant with the way Polars is built as I'm hoping it can compute much faster than a double for loop would. This library is still fairly new to me so please let me know if there is some kind of misunderstanding on my end. | There's no direct "combinations" functionality as far as I am aware. One possible approach is to .implode() then .explode() each column. df = pl.from_repr(""" ┌──────────┬──────────┬──────────┐ │ Column A ┆ Column B ┆ Column C │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞══════════╪══════════╪══════════╡ │ A1 ┆ B1 ┆ C1 │ │ A2 ┆ B2 ┆ C2 │ │ A3 ┆ B3 ┆ C3 │ └──────────┴──────────┴──────────┘ """) (df.select(pl.all().implode()) .explode("Column A") .explode("Column B") .explode("Column C") ) shape: (27, 3) ┌──────────┬──────────┬──────────┐ │ Column A ┆ Column B ┆ Column C │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str │ ╞══════════╪══════════╪══════════╡ │ A1 ┆ B1 ┆ C1 │ │ A1 ┆ B1 ┆ C2 │ │ A1 ┆ B1 ┆ C3 │ │ A1 ┆ B2 ┆ C1 │ │ … ┆ … ┆ … │ │ A3 ┆ B2 ┆ C3 │ │ A3 ┆ B3 ┆ C1 │ │ A3 ┆ B3 ┆ C2 │ │ A3 ┆ B3 ┆ C3 │ └──────────┴──────────┴──────────┘ You can add .unique() in the case of any duplicate values. Instead of having to name each column, you could use the Lazy API and a loop: out = df.lazy().select(pl.all().implode()) for col in df.columns: out = out.explode(col) out.collect() | 2 | 2 |
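The implode/explode chain in that answer produces the full cross product of the three columns. A small sanity check one could run against itertools.product is sketched below; this is illustrative code built on the answer's loop, not part of the accepted answer itself.

```python
from itertools import product
import polars as pl

df = pl.DataFrame({"Column A": ["A1", "A2", "A3"],
                   "Column B": ["B1", "B2", "B3"],
                   "Column C": ["C1", "C2", "C3"]})

out = df.lazy().select(pl.all().implode())
for col in df.columns:
    out = out.explode(col)
result = out.collect()

expected = list(product(*(df[c].to_list() for c in df.columns)))
assert sorted(result.rows()) == sorted(expected)  # 27 combinations either way
```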
77,664,095 | 2023-12-15 | https://stackoverflow.com/questions/77664095/what-are-the-differences-between-pipx-and-pip-user | I have tried to install any python packages on Ubuntu:24.04, but found I cannot do that as in 22:04 PEP668 said it is for avoiding package conflict between system-wide package and user installed package. But what's the differences between using pipx and pip --user? And why the --user option not work, it also installs packages to the user's own home. Example: $ pip install setuptools --user error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. But if I do that with pipx: $ pipx install pip No apps associated with package pip or its dependencies. If you are attempting to install a library, pipx should not be used. Consider using pip or a similar tool instead. I am really confused with current rules. How can I manage my user global environment now? And how can I use latest pip(not linux-distro version) and other packages by default for current user? My environment: FROM ubuntu:24.04 # add python RUN apt install -y python3-pip python3-venv python-is-python3 pipx USER ubuntu WORKDIR /app | pip --user is a function provided by pip, and pipx is a software package. Ubuntu can guarantee that the packages you install using pipx will not pollute the system environment, but pip does not have this guarantee. Additionally, you should install setuptools using apt install python3-setuptools. | 6 | 3 |
77,663,828 | 2023-12-15 | https://stackoverflow.com/questions/77663828/regex-difference-between-ruby-python | I just started Advent of Code 2023 and am trying to use it to learn a few new programming languages. I have (some) familiarity with python, and literally just installed ruby today. Day 1, part 2, I am using a Regex to search for digits as well as their spelled out versions. The regex in python (which yields the correct result): (?=(0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine)) When I use this exact regex in Ruby, I get a nil result. Interestingly, when I use this regex, I do get the exact same result in both python and ruby, but it is the incorrect answer: r"0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine" So I believe the answer has to do with the positive lookahead assertion, but I don't know why, and what it is doing differently. Below are both of the files. Python: import re input = open("../resources/input.txt","r") lines = input.readlines() targets = [ '0','1','2','3','4','5','6','7','8','9', 'zero','one','two','three','four','five','six','seven','eight','nine' ] values = { '0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, 'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9 } sum = 0 for line in lines: numbers = re.findall(r"(?=("+'|'.join(targets)+r"))", line) firstDigitValue = values[numbers[0]] * 10 lastDigitValue = values[numbers[-1]] sum += (firstDigitValue+lastDigitValue) print(sum) Ruby: # Init vars sum = 0 reg = /\d|zero|one|two|three|four|five|six|seven|eight|nine/ reg2 = /(?=(0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine))/ reg3 = /0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine/ values = { '0' => 0, '1' => 1, '2' => 2, '3' => 3, '4' => 4, '5' => 5, '6' => 6, '7' => 7, '8' => 8, '9' => 9, 'zero' => 0, 'one' => 1, 'two' => 2, 'three' => 3, 'four' => 4, 'five' => 5, 'six' => 6, 'seven' => 7, 'eight' => 8, 'nine' => 9 } # Pipe the file line by line and do per line File.foreach("../resources/input.txt", chomp: true) do |line| # Get the first and last digits as their values numbers = line.scan(reg3) firstDigitValue = values[numbers[0]] * 10 lastDigitValue = values[numbers[-1]] # accumulate sum += (firstDigitValue+lastDigitValue) end puts sum | 0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine The problem with this regex, in both Python and Ruby, is that you fail to account for overlapping matches. I made the exact same mistake doing this problem earlier this month. If the phrase eightwo, for instance, appears in your puzzle input, then both Python and Ruby will match the "eight" part and then start looking for more matches at the "w", so they won't see the word "two". (?=(0|1|2|3|4|5|6|7|8|9|zero|one|two|three|four|five|six|seven|eight|nine)) This corrects the problem by putting the whole match into a lookahead (it's probably not efficient, but we're doing coding challenges so it's good enough). When considering overlaps, lookaheads aren't considered part of the pattern, so we start searching basically right where we left off. However, in Ruby, when you have capture groups in your regular expression, then String#scan behaves differently. If the pattern contains groups, each individual result is itself an array containing one entry per group. So your output actually looks like [["4"], ["one"], ["eight"], ["nine"]] You just need to deal with this extra nesting layer. 
first_digit_value = values[numbers[0][0]] * 10 last_digit_value = values[numbers[-1][0]] | 2 | 4 |
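The overlap issue described in that answer is easy to reproduce in Python as well; this is an illustrative snippet with a hypothetical input string, not part of the answer:

```python
import re

line = "eightwo"
plain = re.findall(r"one|two|eight", line)              # ['eight']  -- 'two' is skipped
overlapping = re.findall(r"(?=(one|two|eight))", line)  # ['eight', 'two']
print(plain, overlapping)
```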
77,662,696 | 2023-12-14 | https://stackoverflow.com/questions/77662696/speed-up-iterative-algorithm-in-python | I'm implementing the forward recursion for Hidden Markov Model: the necessary steps (a-b) are shown here Below is my implementation from scipy.stats import norm import numpy as np import random def gmmdensity(obs, w, mu, sd): # compute probability density function of gaussian mixture gauss_mixt = norm.pdf(obs, mu, sd)[:,None]*w return gauss_mixt def alpha(obs, states, A, pi, w, mu, sigma): dens = np.sum(gmmdensity(obs, w, mu, sigma), axis = 2) # scaling factor is used to renormalize probabilities in order to # avoid numerical underflow scaling_factor = np.ones(len(obs)) alpha_matrix = np.zeros((len(states), len(obs))) # for t = 0 alpha_matrix[:,0] = pi*dens[0] scaling_factor[0] = 1/np.sum(alpha_matrix[:,0], axis = 0) alpha_matrix[:,0] *= scaling_factor[0] # for t == 1:T for t in range(1, len(obs)): alpha_matrix[:,t] = np.matmul(alpha_matrix[:,t-1], A)*dens[t] scaling_factor[t] = 1/np.sum(alpha_matrix[:,t], axis = 0) alpha_matrix[:,t] *= scaling_factor[t] return alpha_matrix, scaling_factor Let's generate some data to run the algorithm obs = np.concatenate((np.random.normal(0, 1, size = 500), np.random.normal(1.5, 1, size = 500))).reshape(-1,1) N = 2 # number of hidden states M = 3 # number of mixture components states = list(range(N)) pi = np.array([0.5, 0.5]) # initial probabilities A = np.array([[0.8, 0.2], [0.3, 0.7]]) # transition matrix mu = np.array([np.min(obs), np.median(obs), np.max(obs)]) # means of mixture components sigma = np.array([1, 1, 1]) # variances of mixture components w = np.array([[0.2, 0.3, 0.5], [0.6, 0.2, 0.2]]) # weights of mixture components Let's see how fast the algorithm is %timeit alpha(obs, states, A, pi, w, mu, sigma) 13.6 ms ± 1.24 ms per loop (mean ± std. dev. of 7 runs, 100 loops each) Is there any possibility to make this code faster? I thought about using numba or cython, but it never fully worked in this case. | TL;DR: Numba can strongly speed up this code, especially with basic loops. This answer comes lately compared to the others, but it provides the fastest implementation so far. This is indeed typically a situation where Numba can strongly speed up the computation. Indeed, np.matmul takes more than 1 µs on my machine to multiply a vector of only 2 items with a matrix of 2x2. This should take clearly no more than 0.01 µs on my machine (assuming data are in the L1 cache). Most of the time is lost in the Numpy call overheads. np.sum takes > 3 µs while it should take about few nanoseconds (for the same reason) : it is just a sum of 2 numbers! To solve this efficiently in Numba you need to use basic loops and avoid creating new temporary arrays (allocations are expensive in such a basic loop). Note that gmmdensity cannot be easily translated to Numba since it calls norm.pdf of Scipy which is AFAIK not supported by Numba yet. 
Here is a fast implementation: import numba as nb import numpy as np @nb.njit('(int64, int64, float64[:,::1], float64[:,::1], float64[::1])') def fast_iteration(obsLen, statesLen, dens, A, pi): # scaling factor is used to renormalize probabilities in order to # avoid numerical underflow scaling_factor = np.ones(obsLen) alpha_matrix = np.zeros((statesLen, obsLen)) # for t = 0 alpha_matrix[:,0] = pi*dens[0] scaling_factor[0] = 1/np.sum(alpha_matrix[:,0], axis = 0) alpha_matrix[:,0] *= scaling_factor[0] tmp = np.zeros(statesLen) for t in range(1, obsLen): # Line: alpha_matrix[:,t] = np.matmul(alpha_matrix[:,t-1], A)*dens[t] for i in range(statesLen): s = 0.0 for k in range(statesLen): s += alpha_matrix[k, t-1] * A[k, i] tmp[i] = s * dens[t, i] # Line: scaling_factor[t] = 1/np.sum(alpha_matrix[:,t], axis = 0) s = 0.0 for i in range(statesLen): s += tmp[i] scaling_factor[t] = 1.0 / s # Line: alpha_matrix[:,t] *= scaling_factor[t] for i in range(statesLen): alpha_matrix[i, t] = tmp[i] * scaling_factor[t] return alpha_matrix, scaling_factor @nb.njit('(float64[:,::1], float64[::1], float64[::1])') def fast_normal_pdf(obs, mu, sigma): n, m = obs.shape[0], mu.size assert obs.shape[1] == 1 assert mu.size == sigma.size inv_sigma = 1.0 / sigma one_over_sqrt_2pi = 1 / np.sqrt(2 * np.pi) result = np.empty((n, m)) for i in range(n): for j in range(m): tmp = (obs[i, 0] - mu[j]) * inv_sigma[j] tmp = np.exp(-0.5 * tmp * tmp) result[i, j] = one_over_sqrt_2pi * inv_sigma[j] * tmp return result @nb.njit('(float64[:,::1], float64[:,::1], float64[::1], float64[::1])') def fast_gmmdensitysum(obs, w, mu, sigma): pdf = fast_normal_pdf(obs, mu, sigma) result = np.zeros((pdf.shape[0], w.shape[0])) for i in range(pdf.shape[0]): for k in range(w.shape[0]): s = 0.0 for j in range(pdf.shape[1]): s += pdf[i, j] * w[k, j] result[i, k] = s return result def fast_alpha(obs, states, A, pi, w, mu, sigma): dens = fast_gmmdensitysum(obs, w, mu, sigma) return fast_iteration(len(obs), len(states), dens, A, pi) # Usage obs = np.concatenate((np.random.normal(0, 1, size = 500), np.random.normal(1.5, 1, size = 500))).reshape(-1,1) N = 2 # number of hidden states M = 3 # number of mixture components states = list(range(N)) pi = np.array([0.5, 0.5]) # initial probabilities A = np.array([[0.8, 0.2], [0.3, 0.7]]) # transition matrix mu = np.array([np.min(obs), np.median(obs), np.max(obs)]) # means of mixture components sigma = np.array([1, 1, 1], dtype=np.float64) # variances of mixture components w = np.array([[0.2, 0.3, 0.5], [0.6, 0.2, 0.2]]) # weights of mixture components Note that the type of sigma has been explicitly provided so it is correct. fast_normal_pdf and fast_normal_pdf are optimized implementations inspired by the good answer of @NickODell. Benchmark Here are performance results on my machine with a i5-9600KF CPU: Initial code: 10417.4 µs x1 Andrej Kesely: 974.6 µs x11 Nick ODell: 335.1 µs x31 This implementation: 53.9 µs x193 <------ While this implementation is bigger than others, it is clearly the fastest implementation by a large margin. It is 193 times faster than the initial implementation on my machine, and more than 6 times faster than the other fastest one so far. fast_gmmdensitysum is the most expensive part (>60%), especially the np.exp function (~40%) which is hard to optimize further. | 2 | 3 |
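Since that answer replaces a NumPy/SciPy implementation with hand-written loops, a quick numerical cross-check is worth running. The snippet below is only a usage sketch: it assumes the question's alpha, the answer's fast_alpha, and the inputs (obs, states, A, pi, w, mu, sigma) are already defined as shown in this row.

```python
import numpy as np

alpha_ref, scale_ref = alpha(obs, states, A, pi, w, mu, sigma)
alpha_fast, scale_fast = fast_alpha(obs, states, A, pi, w, mu, sigma)

np.testing.assert_allclose(alpha_ref, alpha_fast, rtol=1e-6, atol=1e-12)
np.testing.assert_allclose(scale_ref, scale_fast, rtol=1e-6, atol=1e-12)
print("fast_alpha matches the reference implementation")
```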
77,659,393 | 2023-12-14 | https://stackoverflow.com/questions/77659393/subtracting-a-numpy-array-by-a-list-is-slow | Given a numpy array of shape 4000x4000x3, which is an image with 3 channels, subtract each channel by some values. I can do it as following: Implementation 1 values = [0.43, 0.44, 0.45] image -= values Or, Implementation 2 values = [0.43, 0.44, 0.45] for i in range(3): image[...,i] -= values[i] Surprisingly, the later solution is 20x faster than the former when performing on that large image shape, but I don't clearly know why. Could you please explain what helps the second implementation faster? Thanks. Simple script for confirmation: import time import numpy as np image = np.random.rand(4000, 4000, 3).astype("float32") values = [0.43, 0.44, 0.45] st = time.time() for i in range(3): image[..., i] -= values[i] et = time.time() print("Implementation 2", et - st) st = time.time() image -= values et = time.time() print("Implementation 1", et - st) Output Implementation 2 0.030953645706176758 Implementation 1 0.8593623638153076 | TL;DR: the performance issue is caused by multiple factors : Numpy iterators introduce a big overhead on small broadcasted arrays values is implicitly converted to a np.float64 array causing a slow np.float64-based subtraction the memory layout is sub-optimal in all implementations Numpy internal iterators One issue comes from the Numpy implementation. Numpy uses an inefficient code for handling small array that are broadcasted. This is due to an internal concept called Numpy iterators which introduce a significant overhead when arrays are small because of the number of repeated iteration on the same small array. This concept is necessary in Numpy to make the computing code more generic and to support many features, especially broadcasting. On top of that, Numpy cannot benefit from using SIMD instructions in this case because the array is too small to even fit in a SIMD register of mainstream CPUs. A simple way to prove this hypothesis is to flatten the array and analyse the performance of subtracting arrays of different size containing the same repeated values: view = image.reshape(-1, 3) %time view -= np.tile(values, 1) view = image.reshape(-1, 6) %time view -= np.tile(values, 2) view = image.reshape(-1, 12) %time view -= np.tile(values, 4) view = image.reshape(-1, 24) %time view -= np.tile(values, 8) view = image.reshape(-1, 384) %time view -= np.tile(values, 128) view = image.reshape(-1, 3*4000) %time view -= np.tile(values, 4000) # Pathological case : huge generated array view = image.reshape(-1, 3*4000*1000) %time view -= np.tile(values, 4000*1000) Here are results on my machine with an Intel Xeon W-2255 CPU using Numpy 1.20.3: Implementation 2 0.06000018119812012 Implementation 1 0.1734762191772461 Wall time: 175 ms Wall time: 137 ms Wall time: 106 ms Wall time: 95 ms Wall time: 67 ms <----- Wall time: 69 ms Wall time: 108 ms One can see that the bigger the second array the better in the above benchmark. However, when the subtracted array is too big, it takes too much time to generate the repeated array using np.tile. Indeed, the array is so big it does not fit in the (fast) caches of the CPU but the (slow) DRAM. Since the operation is rather memory bound, it can be more than twice slower. One can see that the fastest timing of the flatten operation is close to "Implementation 2". It is a bit slower (probably still certainly due to Numpy internal iterators). 
Datatypes and implicit conversions Another big issue is that values is a list of float objects. It is implicitly converted to a 1D array of np.float64 values while image is of type np.float32. Due to the type promotion rules in Numpy, the operation is done using the np.float64 type which is significantly slower and not needed here anyway. We can fix that by creating the Numpy array explicitly with the right data-type: values = np.array(values, dtype=np.float32). Here are the results of the same (above) benchmark code but now using a np.float32 array: Wall time: 75 ms Wall time: 46 ms Wall time: 39 ms Wall time: 31 ms Wall time: 23 ms Wall time: 18 ms <----- We can see that using only np.float32-based operations is much faster. More explanations Could you please explain what helps the second implementation faster? The second implementation operates on np.float32 values and does not suffer from the iterator overhead since there is no broadcasting done internally. This is because the right-hand-side is a unique float scalar. Numpy directly converts the value to a np.float32 internally in this case for the sake of performance since the left-hand-side is a np.float32 array. You can check that by manually converting the value to np.float32. Note that Numpy apparently converts the value to a np.float32 one even when it is a np.float64 one internally. However, this implementation is not efficient because it walks through the whole array image 3 times (so the whole array needs to be read and written back from/to the DRAM 3 times). Final optimized code In short, you can write: st = time.time() image -= np.tile(np.array(values, dtype=np.float32), image.shape[1]).reshape(-1, 3) et = time.time() print("Implementation 3", et - st) # 0.018 seconds Notes on memory layouts While repeating values image.shape[1] times is guaranteed to work and it should be generally quite fast, the best solution is to avoid using the layout height x width x components which tends to be inefficient (fundamentally not SIMD friendly), especially with Numpy. Instead, it is better to use the layout component x height x width (or even possibly height x component x width which tends to be a good trade-off between the two). See this post for example. Note that, for the same reason, it is much better to use a (2,N)-shaped array than the common (N,2) one. | 2 | 2 |
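A compact sketch of the two take-aways from that answer (a float32-typed subtrahend and a channels-first layout); this is illustrative code, not the answer's benchmark:

```python
import numpy as np

values32 = np.array([0.43, 0.44, 0.45], dtype=np.float32)

# Keep everything float32 so no float64 upcast happens during the subtraction.
image_hwc = np.random.rand(4000, 4000, 3).astype("float32")
image_hwc -= values32

# Channels-first layout: each channel is one large contiguous plane.
image_chw = np.random.rand(3, 4000, 4000).astype("float32")
image_chw -= values32[:, None, None]
```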
77,661,139 | 2023-12-14 | https://stackoverflow.com/questions/77661139/how-do-i-stop-the-ruff-linter-from-moving-imports-into-an-if-type-checking-state | I have a pydantic basemodel that looks something like: from pathlib import Path from pydantic import BaseModel class Model(BaseModel): log_file: Path And my ruff pre-commit hook is re-ordering it to: from typing import TYPE_CHECKING from pydantic import BaseModel if TYPE_CHECKING: from pathlib import Path class Model(BaseModel): log_file: Path Which is then causing the bug: pydantic.errors.ConfigError: field "log_file" not yet prepared so type is still a ForwardRef, you might need to call Model.update_forward_refs() Which I dont want to do. How can I stop ruff from re-ordering the imports like this? My .pre-commit-config.yaml file looks like: repos: - repo: https://github.com/charliermarsh/ruff-pre-commit rev: "v0.0.291" hooks: - id: ruff args: [--fix, --exit-non-zero-on-fix] - repo: https://github.com/psf/black rev: 23.9.1 hooks: - id: black language_version: python3 and my pyproject.toml file: [tool.black] line-length = 120 include = '\.pyi?$' exclude = ''' /( \.eggs | \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist # The following are specific to Black, you probably don't want those. | blib2to3 | tests/data | profiling )/ ''' [tool.ruff] line-length = 120 ignore = ["F405", "B008"] select = ["E", "F", "B", "C4", "DTZ", "PTH", "TCH", "I001"] # unfixable = ["C4", "B"] exclude = ["docs/conf.py", "Deployment/make_deployment_bundle.py"] [tool.ruff.per-file-ignores] "**/__init__.py" = ["F401", "F403"] [tool.ruff.isort] split-on-trailing-comma = true known-first-party = ["influxabart"] no-lines-before = ["local-folder"] section-order = ["future","standard-library","third-party","first-party","this","local-folder"] [tool.ruff.isort.sections] "this" = ["InfluxTools"] | Haven't reproduced this, but I think your list of select options is causing this. I know that this isn't the default behaviour of ruff. select = ["E", "F", "B", "C4", "DTZ", "PTH", "TCH", "I001"] Even though the official readme says TC* is the code for flake8-type-checking, this issue hints that TCH* is. From the README: TC001 Move application import into a type-checking block TC002 Move third-party import into a type-checking block TC003 Move built-in import into a type-checking block So TC003 is what's happening to you. Or TCH003 is what I'm guessing ruff calls it. So I think the solution is to remove that "TCH" field from the select in your pyproject.toml. | 4 | 3 |
77,661,003 | 2023-12-14 | https://stackoverflow.com/questions/77661003/how-to-type-hint-a-method-that-returns-an-instance-of-a-class-passed-as-paramete | using Python 3.11 and Mypy 1.7. I am trying to properly type hint a method that takes a class as parameter, with a default value, and returns an instance of that class. The class passed as parameter must be a subclass of a specific base class. How should I do that? I tried to use a type variable with upper bound like this: from typing import TypeVar class BaseClass: pass class ImplClass(BaseClass): pass T = TypeVar("T", bound=BaseClass) def method(return_type: type[T] = ImplClass) -> T: return return_type() But mypy complains with this message: scratch_10.py: note: In function "method": scratch_10.py:9:35: error: Incompatible default for argument "return_type" (default has type "type[ImplClass]", argument has type "type[T]") [assignment] def method(return_type: type[T] = ImplClass) -> T: ^~~~~~~~~ But as far as I understand this error is wrong since the default ImplClass is a subclass of BaseClass which is the upper bound of T, so it respects the type constraints. Note that removing the upperbound does not help: T = TypeVar("T") #no upperbound this time def method(return_type: type[T] = ImplClass) -> T: return return_type() (same error message) But removing the default value and explicitly calling method with ImplClass as parameter works without any complaints: def method(return_type: type[T]) -> T: # no default value this time return return_type() method(ImplClass) Success, no issues found in 1 source file So I don't understand why mypy complains about the default value but not about the actual parameter with the exact same value. Is that a false positive in mypy, or am I doing something wrong? | This is a known limitation of mypy. See the GitHub issues #8739 - mypy reporting error with default value for typevar and #3737 - Unable to specify a default value for a generic parameter for its discussion. You can make this work using typing.overload to separately annotate the case where the default value is used: from typing import TypeVar, overload class BaseClass: pass class ImplClass(BaseClass): pass T = TypeVar("T", bound=BaseClass) @overload def method() -> ImplClass: ... @overload def method(return_type: type[T]) -> T: ... def method(return_type=ImplClass): return return_type() reveal_type(method()) # ok: ImplClass reveal_type(method(BaseClass)) # ok: BaseClass reveal_type(method(ImplClass)) # ok: ImplClass # reveal_type(method(int)) # error This passes type checking with mypy (Python 3.12, v1.7.1), and outputs main.py:23: note: Revealed type is "__main__.ImplClass" main.py:24: note: Revealed type is "__main__.BaseClass" main.py:25: note: Revealed type is "__main__.ImplClass" Success: no issues found in 1 source file Try it yourself online. | 3 | 4 |
77,660,815 | 2023-12-14 | https://stackoverflow.com/questions/77660815/how-to-add-dfs-with-different-number-of-column-axis-levels-but-sharing-same-axi | I have two multi-level column dataframes: data = { 'A': { 'X': {'Value1': 2, 'Value2': 1}, 'Y': {'Value1': 2, 'Value2': 3} }, 'B': { 'X': {'Value1': 10, 'Value2': 11}, 'Y': {'Value1': 10, 'Value2': 11} } } df = pd.DataFrame(data) Which looks like this... Group A B Subgroup X Y X Y Metric Value1 Value2 Value1 Value2 Value1 Value2 Value1 Value2 2023-01-01 2 1 2 3 10 11 10 11 2023-01-02 2 1 2 3 10 11 10 11 2023-01-03 2 1 2 3 10 11 10 11 2023-01-04 2 1 2 3 10 11 10 11 2023-01-05 2 1 2 3 10 11 10 11 df2: data = { 'A': {'Value1': [3, 3, 1, 3, 3], 'Value2': [5, 2, 2, 2, 2]}, 'B': {'Value1': [3, 4, 7, 3, 3], 'Value2': [2, 2, 7, 2, 2]} } df_2 = pd.DataFrame(data, index=pd.to_datetime(['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05'])) # Convert to constructor df2 = df_2.unstack().unstack() Which looks like this... Group A B Metric Value1 Value2 Value1 Value2 2023-01-01 3 5 3 2 2023-01-02 3 2 4 2 2023-01-03 1 2 7 7 2023-01-04 3 2 3 2 2023-01-05 3 2 3 2 and would like to add df2 to df1, but for each combination of Group and Metric, add across each Subgroup that match to look like this... Group A B Subgroup X Y X Y Metric Value1 Value2 Value1 Value2 Value1 Value2 Value1 Value2 2023-01-01 5 6 5 5 13 13 13 13 2023-01-02 5 3 6 5 14 15 14 13 2023-01-03 3 3 9 10 17 18 17 18 2023-01-04 5 3 5 5 13 14 13 13 2023-01-05 5 3 5 5 13 14 13 13 Any help would be appreciated. Some ideas like merging but I think i lose the middle subgroup level above but could have been doing it incorrectly. | I think the easiest might be to droplevel on df1, then add, finally recreate the DataFrame: pd.DataFrame(df1.droplevel('Subgroup', axis=1).add(df2).to_numpy(), index=df1.index, columns=df1.columns) Alternatively, reindex df2 and convert to numpy: df1 += df2.reindex_like(df1.droplevel('Subgroup', axis=1)).to_numpy() Output: Group A B Subgroup X Y X Y Metric Value1 Value2 Value1 Value2 Value1 Value2 Value1 Value2 2023-01-01 5 5 6 8 13 13 13 13 2023-01-02 5 5 3 5 14 14 13 13 2023-01-03 3 3 3 5 17 17 18 18 2023-01-04 5 5 3 5 13 13 13 13 2023-01-05 5 5 3 5 13 13 13 13 | 2 | 1 |
77,659,552 | 2023-12-14 | https://stackoverflow.com/questions/77659552/google-generative-ai-api-error-user-location-is-not-supported-for-the-api-use | I'm trying to use the Google Generative AI gemini-pro model with the following Python code using the Google Generative AI Python SDK: import google.generativeai as genai import os genai.configure(api_key=os.environ['GOOGLE_CLOUD_API_KEY']) model = genai.GenerativeModel('gemini-pro') response = model.generate_content('Say this is a test') print(response.text) I'm getting the following error: User location is not supported for the API use. I've searched the official documentation and a few Google GitHub repositories for a while but haven't found any location restrictions stated for API usage. I live in Austria, Europe. Full traceback: Traceback (most recent call last): File "C:\Users\xxxxx\Desktop\gemini-pro.py", line 7, in <module> response = model.generate_content('Say this is a test') File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\google\generativeai\generative_models.py", line 243, in generate_content response = self._client.generate_content(request) File "C:\Users\xxxxx\AppData\Local\Programs\Python\Python310\lib\site-packages\google\ai\generativelanguage_v1beta\services\generative_service\client.py", line 566, in generate_content response = rpc( File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__ return wrapped_func(*args, **kwargs) File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\retry.py", line 372, in retry_wrapped_func return retry_target( File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\retry.py", line 207, in retry_target result = target() File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout return func(*args, **kwargs) File "C:\Users\xxxxx\AppData\Roaming\Python\Python310\site-packages\google\api_core\grpc_helpers.py", line 81, in error_remapped_callable raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.FailedPrecondition: 400 User location is not supported for the API use. EDIT I finally found the "Available regions". It's the last item in the sidebar. As of today, Austria is not on the list. I clicked through Google websites, and there were no location restrictions stated. At least on the official Gemini webpage, they could have added an asterisk somewhere to indicate that it is not available everywhere in the world. 
The Gemini API and Google AI Studio are available in the following countries and territories: Algeria American Samoa Angola Anguilla Antarctica Antigua and Barbuda Argentina Armenia Aruba Australia Azerbaijan The Bahamas Bahrain Bangladesh Barbados Belize Benin Bermuda Bhutan Bolivia Botswana Brazil British Indian Ocean Territory British Virgin Islands Brunei Burkina Faso Burundi Cabo Verde Cambodia Cameroon Caribbean Netherlands Cayman Islands Central African Republic Chad Chile Christmas Island Cocos (Keeling) Islands Colombia Comoros Cook Islands Côte d'Ivoire Costa Rica Curaçao Democratic Republic of the Congo Djibouti Dominica Dominican Republic Ecuador Egypt El Salvador Equatorial Guinea Eritrea Eswatini Ethiopia Falkland Islands (Islas Malvinas) Fiji Gabon The Gambia Georgia Ghana Gibraltar Grenada Guam Guatemala Guernsey Guinea Guinea-Bissau Guyana Haiti Heard Island and McDonald Islands Honduras India Indonesia Iraq Isle of Man Israel Jamaica Japan Jersey Jordan Kazakhstan Kenya Kiribati Kyrgyzstan Kuwait Laos Lebanon Lesotho Liberia Libya Madagascar Malawi Malaysia Maldives Mali Marshall Islands Mauritania Mauritius Mexico Micronesia Mongolia Montserrat Morocco Mozambique Namibia Nauru Nepal New Caledonia New Zealand Nicaragua Niger Nigeria Niue Norfolk Island Northern Mariana Islands Oman Pakistan Palau Palestine Panama Papua New Guinea Paraguay Peru Philippines Pitcairn Islands Puerto Rico Qatar Republic of the Congo Rwanda Saint Barthélemy Saint Kitts and Nevis Saint Lucia Saint Pierre and Miquelon Saint Vincent and the Grenadines Saint Helena, Ascension and Tristan da Cunha Samoa São Tomé and Príncipe Saudi Arabia Senegal Seychelles Sierra Leone Singapore Solomon Islands Somalia South Africa South Georgia and the South Sandwich Islands South Korea South Sudan Sri Lanka Sudan Suriname Taiwan Tajikistan Tanzania Thailand Timor-Leste Togo Tokelau Tonga Trinidad and Tobago Tunisia Türkiye Turkmenistan Turks and Caicos Islands Tuvalu Uganda United Arab Emirates United States United States Minor Outlying Islands U.S. Virgin Islands Uruguay Uzbekistan Vanuatu Venezuela Vietnam Wallis and Futuna Western Sahara Yemen Zambia Zimbabwe | I've looked at multiple sources and all signs point to the same issue: You're likely in a location where the generative AI isn't supported. While you're using the new Gemini Pro, the API is very similar to the previous iteration "PaLM", and they likely have the same regional limitations. https://www.googlecloudcommunity.com/gc/AI-ML/When-has-europe-access-to-PALM-and-Makersuite/m-p/644100 Error message (FailedPrecondition: 400 User location is not supported for the API use.) when using the 51GB Google Colab runtime and Palm API EU tends to have stringent laws and regulations around new technology, so it usually takes longer for these things to roll out there to give companies time to work out the legal boundaries. (E.g does our AI produce copyrighted works when prompted). | 6 | 6 |
77,658,160 | 2023-12-14 | https://stackoverflow.com/questions/77658160/how-to-delete-an-object-which-contains-a-list-of-its-bound-methods | At this code, an object of Foo() class is still alive after creating new one. I guess that reason is in the circular references on appending object's list property. So, how to let garbage collector free old object, withot manual calling gc.collect()? import gc class Foo(): def __init__(self): self.functions = [] print('CREATE', self) def some_func(self): for i in range(3): self.functions.append(self.print_func) print(self.functions) def print_func(self): print('I\'m a test') def __del__(self): print('DELETE', self) foo = Foo() foo.some_func() foo = Foo() # gc.collect() input = input() input() at the end is just for keeping program running. Real project with this problem contains while loop, so keeping old unused objects may cause a memory leak. Now, output: CREATE <__main__.Foo object at 0x000002747001D850> [<bound method Foo.print_func of <__main__.Foo object at 0x000002747001D850>>, <bound method Foo.print_func of <__main__.Foo object at 0x000002747001D850>>, <bound method Foo.print_func of <__main__.Foo object at 0x000002747001D850>>] CREATE <__main__.Foo object at 0x000002746FD0BB90> Output with calling gc.collect(): CREATE <__main__.Foo object at 0x0000021E45F0D8E0> [<bound method Foo.print_func of <__main__.Foo object at 0x0000021E45F0D8E0>>, <bound method Foo.print_func of <__main__.Foo object at 0x0000021E45F0D8E0>>, <bound method Foo.print_func of <__main__.Foo object at 0x0000021E45F0D8E0>>] CREATE <__main__.Foo object at 0x0000021E45CDBB90> DELETE <__main__.Foo object at 0x0000021E45F0D8E0> It's a result, that I want to get, but without using gc | You can use weakref.WeakMethod to avoid creating strong references to the self.print_func method in the list attribute. Note that to call a weakly referenced method you would have to dereference it first by calling the weakref before calling the actual method: from weakref import WeakMethod class Foo(): def __init__(self): self.functions = [] print('CREATE', self) def some_func(self): for i in range(3): self.functions.append(WeakMethod(self.print_func)) print(self.functions) def print_func(self): print('I\'m a test') def __del__(self): print('DELETE', self) foo = Foo() foo.some_func() foo.functions[0]()() foo = Foo() input() This outputs: CREATE <__main__.Foo object at 0x0000018F0B397150> [<weakref at 0x0000018F0B18E0A0; to 'Foo' at 0x0000018F0B397150>, <weakref at 0x0000018F0B18E1F0; to 'Foo' at 0x0000018F0B397150>, <weakref at 0x0000018F0B18E490; to 'Foo' at 0x0000018F0B397150>] I'm a test CREATE <__main__.Foo object at 0x0000018F0B397190> DELETE <__main__.Foo object at 0x0000018F0B397150> Demo: https://replit.com/@blhsing1/LightheartedTurquoiseChord | 2 | 2 |
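One follow-up worth noting on that accepted answer: once the owning Foo instance is collected, each WeakMethod dereferences to None, so callers should guard the call. This sketch reuses the names from the answer and assumes it picks up right after foo.some_func() in the answer's example.

```python
funcs = foo.functions   # keep the list alive independently of foo
foo = Foo()             # old instance is freed; its WeakMethods now point at nothing

for ref in funcs:
    method = ref()      # None once the original Foo has been garbage collected
    if method is not None:
        method()
    else:
        print("owner is gone; skipping dead reference")
```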
77,656,160 | 2023-12-13 | https://stackoverflow.com/questions/77656160/pytorch-random-number-generators-and-devices | I always put on top of my Pytorch's notebooks a cell like this: device = ( "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu" ) torch.set_default_device(device) In this convenient way, I can use the GPU, if the system has one, MPS on a Mac, or the cpu on a vanilla system. EDIT: Please note that, due to torch.set_default_device(device), any tensor is created, by default, on the device, e.g.: Now, I'm trying to use a Pytorch generator: g = torch.Generator(device=device).manual_seed(1) and then: A = torch.randn((3, 2), generator=g) No problem whatsoever on my Macbook (where the device is MPS) or on systems with cpu only. But on my Cuda-enabled desktop, I get: RuntimeError: Expected a 'cpu' device type for generator but found 'cuda' Any solution? If I just abstain from specifying the device for the generator, it will use the cpu, but then the tensor A will be created on the CPU too... | When defining the tensor "A", it is defaulting to creating the tensor on the CPU. This results in an error, because the generator you are passing in is on the CUDA device. You can solve this by passing in the "device" parameter into torch.randn() as follows: device = ("cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu") g = torch.Generator(device=device).manual_seed(1) A = torch.randn((3, 2), device=device, generator=g) Edit: Just noticed that you specified set_default_device() in your starting block. While this works for tensors created with torch.Tensor or similar, it doesn't work for PyTorch factory functions that return a tensor (e.g. torch.randn()). Here's the documentation to support this: https://pytorch.org/docs/stable/generated/torch.set_default_device.html. | 2 | 4 |
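A small helper pattern that keeps the generator and the tensors on the same device, built on the fix from that answer; the helper name is an invention for this sketch.

```python
import torch

def pick_device_and_generator(seed: int):
    device = ("cuda" if torch.cuda.is_available()
              else "mps" if torch.backends.mps.is_available()
              else "cpu")
    generator = torch.Generator(device=device).manual_seed(seed)
    return device, generator

device, g = pick_device_and_generator(1)
A = torch.randn((3, 2), device=device, generator=g)  # generator and tensor share a device
```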
77,656,117 | 2023-12-13 | https://stackoverflow.com/questions/77656117/when-can-you-use-numpy-arrays-as-dict-values-in-numba | I am confused by the type rules for numba dicts. Here is an MWE that works: import numpy as np import numba as nb @nb.njit def foo(a, b, c): d = {} d[(1,2,3)] = a return d a = np.array([1, 2]) b = np.array([3, 4]) t = foo(a, b, c) But if I change the definition of foo as follows this fails: @nb.njit def foo(a, b, c): d = {} d[(1,2,3)] = np.array(a) return d TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in function array>) found for signature: >>> array(array(int64, 1d, C)) There are 2 candidate implementations: - Of which 2 did not match due to: Overload in function 'impl_np_array': File: numba/np/arrayobj.py: Line 5384. With argument(s): '(array(int64, 1d, C))': Rejected as the implementation raised a specific error: TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<intrinsic np_array>) found for signature: >>> np_array(array(int64, 1d, C), none) There are 2 candidate implementations: - Of which 2 did not match due to: Intrinsic in function 'np_array': File: numba/np/arrayobj.py: Line 5358. With argument(s): '(array(int64, 1d, C), none)': Rejected as the implementation raised a specific error: TypingError: array(int64, 1d, C) not allowed in a homogeneous sequence raised from /home/raph/python/mypython3.10/lib/python3.10/site-packages/numba/core/typing/npydecl.py:482 During: resolving callee type: Function(<intrinsic np_array>) During: typing of call at /home/raph/python/mypython3.10/lib/python3.10/site-packages/numba/np/arrayobj.py (5395) File "../../python/mypython3.10/lib/python3.10/site-packages/numba/np/arrayobj.py", line 5395: def impl(object, dtype=None): return np_array(object, dtype) ^ raised from /home/raph/python/mypython3.10/lib/python3.10/site-packages/numba/core/typeinfer.py:1086 During: resolving callee type: Function(<built-in function array>) During: typing of call at <ipython-input-99-e05437a34ab9> (4) File "<ipython-input-99-e05437a34ab9>", line 4: def foo(a, b, c): <source elided> d = {} d[(1,2,3)] = np.array(a) ^ Why is this? | This doesn't have anything with how numba handles dictionaries. This piece of code fails with the same error: @nb.njit def foo2(a, b, c): x = np.array(a) return x When you look at the error message you see that numba doesn't know how to initialize np.array from other np.array: No implementation of function Function(<built-in function array>) found for signature: >>> array(array(int64, 1d, C)) If you change the code to: @nb.njit def foo2(a, b, c): x = np.array([*a]) return x the compilation succeeds. | 2 | 2 |
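For completeness, another variant that compiles under nopython mode is to copy the array instead of rebuilding it with np.array; this sketch assumes the goal is simply a fresh copy stored under the tuple key, and the function name is made up here.

```python
import numba as nb
import numpy as np

@nb.njit
def foo_copy(a):
    d = {}
    d[(1, 2, 3)] = a.copy()   # ndarray.copy() is supported in nopython mode
    return d

print(foo_copy(np.array([1, 2])))
```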
77,655,440 | 2023-12-13 | https://stackoverflow.com/questions/77655440/can-you-protect-a-python-variable-with-exec | This is kind of a hacky Python question. Consider the following Python code: def controlled_exec(code): x = 0 def increment_x(): nonlocal x x += 1 globals = {"__builtins__": {}} # remove every global (including all python builtins) locals = {"increment_x": increment_x} # expose only the increment function exec(code, globals, locals) return x I expect this function to provide a controlled code API, which simply counts the number of increment_x() calls. I tried it and I get the correct behavior. # returns 2 controlled_exec("""\ increment_x() increment_x() """) I assume this way of doing is not secure, but I wonder out of curiosity. Can I set x to an arbitrary value (say negative) by executing code via controlled_exec(...) ? How would I do that? | Yes, x can be set to an arbitrary value by code executed by controlled_exec. Consider the following demo: def controlled_exec(code): x = 0 def increment_x(): nonlocal x x += 1 print(f"{x=}") globals = {"__builtins__": {}} # remove every global (including all python builtins) locals = {"increment_x": increment_x} # expose only the increment function exec(code, globals, locals) return x controlled_exec("""\ increment_x() increment_x.__closure__[0].cell_contents = -100 increment_x() """) This outputs x=1 x=-99 This is really no way to "secure" exec. No matter what you do, at the end of the day you're executing arbitrary Python code, which is just as free to access and modify the state of the Python interpreter as the code that you write. To highlight just how difficult it would be meaningfully secure exec, note that despite your attempt to hide builtins from the executed code, an attacker can still access any builtin via increment_x.__globals__['__builtins__'] This is just one of dozens of tricks that someone trying to exploit this function could use. The fact that we can modify the variable x is really a tame example. There's literally nothing stopping the code passed to controlled_exec from doing more disastrous things, like wiping your hard drive or downloading malware. | 3 | 5 |
77,651,219 | 2023-12-13 | https://stackoverflow.com/questions/77651219/finding-the-first-row-that-meets-conditions-of-a-mask-and-selecting-one-row-afte | This is my dataframe: import pandas as pd df = pd.DataFrame( { 'a': [100, 1123, 123, 100, 1, 0, 1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g'], } ) And this is the output that I want. I want to create column x: a b c x 0 100 1000 a NaN 1 1123 11123 b NaN 2 123 1123 c NaN 3 100 0 d NaN 4 1 55 e e 5 0 0 f NaN 6 1 1 g NaN By using a mask: mask = ( (df.a > df.b) ) First of all I need to find the first occurrence of this mask which in my example is row number 3. Then I want to move one row below it and use the value in column c to create column x. So in my example, the first occurrence of mask is row 3. One row after it is row 4. That is why e is selected for column x. Note that in row 4 which is one row after the mask, no condition is needed. For example for row 4, It is NOT necessary that df.a > df.b. This is what I have tried: df.loc[mask.cumsum().eq(1) & mask, 'x'] = df.c.shift(-1) I provide some additional dfs for convenience to test whether the code works in other examples. For instance what if there are no cases that meet the conditions of mask. In that case I just want a column of NaN for x. df = pd.DataFrame({'a': [1000, 11230, 12300, 10000, 1000, 10000, 100000], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g']}) df = pd.DataFrame({'a': [1, 1, 1, -1, -1, -1, -1], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g']}) df = pd.DataFrame({'a': [-1, -1, -1, -1, -1, -1, 100000], 'b': [1000, 11123, 1123, 0, 55, 0, 1], 'c': ['a', 'b', 'c', 'd', 'e', 'f', 'g']}) | You can generate a mask that indicates one location past where the first value of a is greater than b: mask = (df.a > df.b).shift(fill_value=False) mask = mask & ~mask.cumsum().shift().astype(bool) You can then use that mask to set the value of x equal to c: df.loc[mask, 'x'] = df['c'] Output for each of your dfs: a b c x 0 100 1000 a NaN 1 1123 11123 b NaN 2 123 1123 c NaN 3 100 0 d NaN 4 1 55 e e 5 0 0 f NaN 6 1 1 g NaN a b c x 0 1000 1000 a NaN 1 11230 11123 b NaN 2 12300 1123 c c 3 10000 0 d NaN 4 1000 55 e NaN 5 10000 0 f NaN 6 100000 1 g NaN a b c x 0 1 1000 a NaN 1 1 11123 b NaN 2 1 1123 c NaN 3 -1 0 d NaN 4 -1 55 e NaN 5 -1 0 f NaN 6 -1 1 g NaN a b c x 0 -1 1000 a NaN 1 -1 11123 b NaN 2 -1 1123 c NaN 3 -1 0 d NaN 4 -1 55 e NaN 5 -1 0 f NaN 6 100000 1 g NaN More generically, you can use cummax and shift by N to select the next N values: N = 3 mask = (df.a > df.b).shift(fill_value=False).cummax() mask = mask & ~mask.cumsum().shift(N, fill_value=0).astype(bool) | 4 | 5 |
77,652,094 | 2023-12-13 | https://stackoverflow.com/questions/77652094/how-to-post-json-data-that-include-unicode-characters-to-fastapi-using-python-re | When a FastAPI endpoint expects a Pydantic model and one is passed with a string it works as expected unless that string contains unicode characters. First I create an example application for FastAPI with an example model. serv.py from pydantic import BaseModel class exClass(BaseModel): id: int = Field(example=1) text: str = Field(example="Text example") app = FastAPI(debug=True) @app.post("/example") async def receive_pyd(ex: exClass): print(ex) return True if __name__ == "__main__": import uvicorn uvicorn.run(app, host="127.0.0.1", port=8000) The client that shows the error in question client.py from pydantic import BaseModel, Field import requests class exClass(BaseModel): id: int = Field(example=1) text: str = Field(example="Text example") ex1 = exClass(id=1, text="working example") ex2 = exClass(id=2, text="this’ will fail") ex3 = exClass(id=3, text="🤗 <- also non-working") r = requests.post(f"http://127.0.0.1:8000/example", data=ex1.model_dump_json()) print(r.text) r = requests.post(f"http://127.0.0.1:8000/example", data=ex2.model_dump_json()) print(r.text) r = requests.post(f"http://127.0.0.1:8000/example", data=ex3.model_dump_json()) print(r.text) Output: true Invalid HTTP request received. Invalid HTTP request received. When text contains unicode characters the result is a 422 Unprocessable Entity. I have tried ex.dict(), model_dump(), and using json instead of data in the requests call. Enabling debugging in FastAPI/starlette bubbles up that the Invalid HTTP request is a JSON decode error. | This is not a problem of Pydantic and FastAPI. You should encode you request data like it's shown below: r = requests.post( f"http://127.0.0.1:8000/example", data=ex1.model_dump_json().encode('utf-8') ) print(r.text) r = requests.post( f"http://127.0.0.1:8000/example", data=ex2.model_dump_json().encode('utf-8') ) print(r.text) r = requests.post( f"http://127.0.0.1:8000/example", data=ex3.model_dump_json().encode('utf-8') ) print(r.text) | 3 | 4 |
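A possible alternative to the manual .encode('utf-8') in the answer above: let requests serialize the payload itself. json.dumps escapes non-ASCII characters by default, so the unicode text survives the round trip; this sketch assumes Pydantic v2's model_dump() (consistent with the question's use of model_dump_json).

# hedged sketch: pass a dict and let requests handle encoding and the Content-Type header
r = requests.post("http://127.0.0.1:8000/example", json=ex2.model_dump())
print(r.text)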
77,646,307 | 2023-12-12 | https://stackoverflow.com/questions/77646307/correlation-matrix-like-dataframe-in-polars | I have Polars dataframe data = { "col1": ["a", "b", "c", "d"], "col2": [[-0.06066, 0.072485, 0.548874, 0.158507], [-0.536674, 0.10478, 0.926022, -0.083722], [-0.21311, -0.030623, 0.300583, 0.261814], [-0.308025, 0.006694, 0.176335, 0.533835]], } df = pl.DataFrame(data) I want to calculate cosine similarity for each combination of column col1 The desired output should be the following: ┌─────────────────┬──────┬──────┬──────┬──────┐ │ col1_col2 ┆ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ f64 ┆ f64 ┆ f64 │ ╞═════════════════╪══════╪══════╪══════╪══════╡ │ a ┆ 1.0 ┆ 0.86 ┆ 0.83 ┆ 0.54 │ │ b ┆ 0.86 ┆ 1.0 ┆ 0.75 ┆ 0.41 │ │ c ┆ 0.83 ┆ 0.75 ┆ 1.0 ┆ 0.89 │ │ d ┆ 0.54 ┆ 0.41 ┆ 0.89 ┆ 1.0 │ └─────────────────┴──────┴──────┴──────┴──────┘ Where each value represents cosine similarity between respective column values. I'm using following cosine similarity function from numpy.linalg import norm cosine_similarity = lambda a,b: (a @ b.T) / (norm(a)*norm(b)) I tried to use it with pivot method df.pivot(on="col1", values="col2", index="col1", aggregate_function=cosine_similarity) However I'm getting the following error AttributeError: 'function' object has no attribute '_pyexpr' | Update: Polars 1.8.0 added native list arithmetic allowing us to write a much more efficient cosine similarity expression. Combinations We can add a row index and use .join_where() to generate the row "combinations". df = df.with_row_index().lazy() df.join_where(df, pl.col.index <= pl.col.index_right).collect() shape: (10, 6) ┌───────┬──────┬─────────────────────────────────┬─────────────┬────────────┬─────────────────────────────────┐ │ index ┆ col1 ┆ col2 ┆ index_right ┆ col1_right ┆ col2_right │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ u32 ┆ str ┆ list[f64] ┆ u32 ┆ str ┆ list[f64] │ ╞═══════╪══════╪═════════════════════════════════╪═════════════╪════════════╪═════════════════════════════════╡ │ 0 ┆ a ┆ [-0.06066, 0.072485, … 0.15850… ┆ 0 ┆ a ┆ [-0.06066, 0.072485, … 0.15850… │ │ 0 ┆ a ┆ [-0.06066, 0.072485, … 0.15850… ┆ 1 ┆ b ┆ [-0.536674, 0.10478, … -0.0837… │ │ 0 ┆ a ┆ [-0.06066, 0.072485, … 0.15850… ┆ 2 ┆ c ┆ [-0.21311, -0.030623, … 0.2618… │ │ 0 ┆ a ┆ [-0.06066, 0.072485, … 0.15850… ┆ 3 ┆ d ┆ [-0.308025, 0.006694, … 0.5338… │ │ 1 ┆ b ┆ [-0.536674, 0.10478, … -0.0837… ┆ 1 ┆ b ┆ [-0.536674, 0.10478, … -0.0837… │ │ 1 ┆ b ┆ [-0.536674, 0.10478, … -0.0837… ┆ 2 ┆ c ┆ [-0.21311, -0.030623, … 0.2618… │ │ 1 ┆ b ┆ [-0.536674, 0.10478, … -0.0837… ┆ 3 ┆ d ┆ [-0.308025, 0.006694, … 0.5338… │ │ 2 ┆ c ┆ [-0.21311, -0.030623, … 0.2618… ┆ 2 ┆ c ┆ [-0.21311, -0.030623, … 0.2618… │ │ 2 ┆ c ┆ [-0.21311, -0.030623, … 0.2618… ┆ 3 ┆ d ┆ [-0.308025, 0.006694, … 0.5338… │ │ 3 ┆ d ┆ [-0.308025, 0.006694, … 0.5338… ┆ 3 ┆ d ┆ [-0.308025, 0.006694, … 0.5338… │ └───────┴──────┴─────────────────────────────────┴─────────────┴────────────┴─────────────────────────────────┘ Cosine Similarity You can write the formula using Expressions e.g. list arithmetic, .list.sum() and Expr.sqrt(). 
cosine_similarity = lambda x, y: ( (x * y).list.sum() / ( (x * x).list.sum().sqrt() * (y * y).list.sum().sqrt() ) ) out = ( df.join_where(df, pl.col.index <= pl.col.index_right) .select( col = "col1", other = "col1_right", cosine = cosine_similarity( x = pl.col.col2, y = pl.col.col2_right ) ) ) # out.collect() shape: (10, 3) ┌─────┬───────┬──────────┐ │ col ┆ other ┆ cosine │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ f64 │ ╞═════╪═══════╪══════════╡ │ a ┆ a ┆ 1.0 │ │ a ┆ b ┆ 0.856754 │ │ a ┆ c ┆ 0.827877 │ │ a ┆ d ┆ 0.540282 │ │ b ┆ b ┆ 1.0 │ │ b ┆ c ┆ 0.752199 │ │ b ┆ d ┆ 0.411564 │ │ c ┆ c ┆ 1.0 │ │ c ┆ d ┆ 0.889009 │ │ d ┆ d ┆ 1.0 │ └─────┴───────┴──────────┘ Pivot You can vertically concat/stack the reverse pairings and then .pivot() for the matrix shape. pl.concat( [ out, out.filter(pl.col.col != pl.col.other).select(col="other", other="col", cosine="cosine") ] ).collect().pivot("other", index="col") shape: (4, 5) ┌─────┬──────────┬──────────┬──────────┬──────────┐ │ col ┆ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ f64 ┆ f64 ┆ f64 ┆ f64 │ ╞═════╪══════════╪══════════╪══════════╪══════════╡ │ a ┆ 1.0 ┆ 0.856754 ┆ 0.827877 ┆ 0.540282 │ │ b ┆ 0.856754 ┆ 1.0 ┆ 0.752199 ┆ 0.411564 │ │ c ┆ 0.827877 ┆ 0.752199 ┆ 1.0 ┆ 0.889009 │ │ d ┆ 0.540282 ┆ 0.411564 ┆ 0.889009 ┆ 1.0 │ └─────┴──────────┴──────────┴──────────┴──────────┘ | 6 | 3 |
77,630,410 | 2023-12-9 | https://stackoverflow.com/questions/77630410/fastapi-swagger-interface-showing-operation-level-options-override-server-option | I have been recently finding a popup that tells me some operation level options override the global server options: As per image: I do not understand whether this is a bug in my application or anything but normal. I would not like to show the user that message. EDIT: This is my main.py file: from fastapi import FastAPI,Depends from ml.model_recommendation import predict_intention from ml.training_ml import create_ml import crud,models,schemas from db import SessionLocal,engine from sqlalchemy.orm import Session from typing import Optional from enum import Enum import numpy as np app = FastAPI(title="ML prediction",description="API to serve data used for prediction of intended remediation date (IRD)") class Tags(Enum): ITEMS = "Retrieve ITSO and Software Versions" DELETE = "Delete Data" INSERT = "Get IRD predictions" DOWNLOAD = "Download Data" @app.on_event("startup") def on_startup(): models.Base.metadata.create_all(bind=engine) # Dependency def get_db(): db = SessionLocal() try: yield db finally: db.close() @app.get( "/service_owners/", response_model=list[schemas.Evergreen], tags=[Tags.ITEMS] ) def read_owners(service_owner: str ,software_product_version_name: Optional[str] = None,db: Session = Depends(get_db),skip: int = 0, limit: int = 100): owners = crud.get_service_owners(db,service_owner=service_owner,software_product_version_name=software_product_version_name,skip=skip,limit=limit) return owners This is my crud.py function: import models from sqlalchemy.orm import Session from sqlalchemy import select from typing import Optional def get_service_owners(db: Session, service_owner: str,software_product_version_name: Optional[str] = None,skip: int = 0, limit: int = 100): if software_product_version_name: stmt = (select(models.Evergreen.service_owner,models.Evergreen.software_product_version_name) .where(models.Evergreen.service_owner.ilike(f'%{service_owner}%')) .where(models.Evergreen.software_product_version_name.ilike(f'%{software_product_version_name}%')) ).distinct().offset(skip).limit(limit) return db.execute(stmt).all() stmt = select(models.Evergreen.service_owner,models.Evergreen.software_product_version_name).where(models.Evergreen.service_owner.ilike(f'%{service_owner}%')).distinct().offset(skip).limit(limit) return db.execute(stmt).all() And this my schema.py for validation: from typing import Optional from pydantic import BaseModel from typing_extensions import TypedDict class Evergreen(BaseModel): service_owner: Optional[str] software_product_version_name: Optional[str] class Config: from_attributes = True class Items(TypedDict): service_owner: str software_product_version_name: str class Pred(TypedDict): service_owner: str software_product_version_name: str future_expectation: int | This option seem to appear for OAS 3.1 documents. This must be a bug in swagger-ui. Although servers are not overridden on operation level, swagger-ui defaults to '/' and always displays the override. | 2 | 0 |
77,631,477 | 2023-12-9 | https://stackoverflow.com/questions/77631477/runtimeerror-cant-create-new-thread-at-interpreter-shutdown-python-3-12 | I am making a mouse jiggler and I can't get acces to the tray icon once the loop in the script is launched. I have asked another question but no one has answered yet, so I kept digging deeper into the topic in order to resolve my issue. So, I found a solution as I assumed it would be - threads. I kinda understand the code I found, but not entirely, so maybe now the issue is about my understanding and not the code itself. I found the same questions with the same error published not long ago but there were no answers. So, I might presume there is a bug with threading.py in Python 3.12 - I dunno. Here's my code: from PIL import Image import pyautogui import time import pystray import threading import os class MyClass: def __init__(self): self.__running = False self.__stop_event = threading.Event() def run(self): while not self.__stop_event.is_set(): pyautogui.moveRel(50, 0, duration = 0) pyautogui.moveRel(-50,0, duration = 0) time.sleep(5) print("running") def change_running_state(self): print(f"Running: {self.__running}") self.__running = not self.__running print(f"Running: {self.__running}") if self.__running: self.__stop_event.clear() t = threading.Thread(target=self.run) t.start() else: self.__stop_event.set() if __name__ == "__main__": def start(icon, item): print("start") cl.change_running_state() def stop(icon, item): print("stop") cl.change_running_state() def exit_program(icon, item): print("exit") cl.change_running_state() icon.stop() os._exit(1) image = Image.open("macos.jpg") cl = MyClass() icon = pystray.Icon("macos", image) icon.menu=pystray.Menu( pystray.MenuItem("Start", start), pystray.MenuItem("Stop", stop), pystray.MenuItem("Exit", exit_program), ) icon.run_detached() The tray icon works and menu items appear there whenever I click on the icon now. And menu items should have worked, I thought, too. Instead, I get the following: start Running: False Running: True An error occurred when calling message handler Traceback (most recent call last): File "C:\Users\derby\AppData\Local\Programs\Python\Python312\Lib\site-packages\pystray\_win32.py", line 412, in _dispatcher return int(icon._message_handlers.get( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\derby\AppData\Local\Programs\Python\Python312\Lib\site-packages\pystray\_win32.py", line 224, in _on_notify descriptors[index - 1](self) File "C:\Users\derby\AppData\Local\Programs\Python\Python312\Lib\site-packages\pystray\_base.py", line 328, in inner callback(self) File "C:\Users\derby\AppData\Local\Programs\Python\Python312\Lib\site-packages\pystray\_base.py", line 453, in __call__ return self._action(icon, self) ^^^^^^^^^^^^^^^^^^^^^^^^ File "c:\Users\derby\PythonWorkspace\Mouse Jiggler\Jiggler copy.py", line 34, in start cl.change_running_state() File "c:\Users\derby\PythonWorkspace\Mouse Jiggler\Jiggler copy.py", line 28, in change_running_state t.start() File "C:\Users\derby\AppData\Local\Programs\Python\Python312\Lib\threading.py", line 971, in start _start_new_thread(self._bootstrap, ()) RuntimeError: can't create new thread at interpreter shutdown So what's going on here? I might surely be tired after 6 hours coding, but still it is hard for me, novice programmer, to understand that by myself. | The solution is to use Python version under 3.12. Now I use 3.11.7. The code works perfectly fine. | 4 | 0 |
77,630,013 | 2023-12-9 | https://stackoverflow.com/questions/77630013/how-to-run-any-quantized-gguf-model-on-cpu-for-local-inference | In ctransformers library, I can only load around a dozen supported models. How can I run local inference on CPU (not just on GPU) from any open-source LLM quantized in the GGUF format (e.g. Llama 3, Mistral, Zephyr, i.e. ones unsupported in ctransformers)? | llama-cpp-python is my personal choice, because it is easy to use and it is usually one of the first to support quantized versions of new models. To install it for CPU, just run pip install llama-cpp-python. Compiling for GPU is a little more involved, so I'll refrain from posting those instructions here since you asked specifically about CPU inference. I also recommend installing huggingface_hub (pip install huggingface_hub) to easily download models. Once you have both llama-cpp-python and huggingface_hub installed, you can download and use a model (e.g. mixtral-8x7b-instruct-v0.1-gguf) like so: ## Imports from huggingface_hub import hf_hub_download from llama_cpp import Llama ## Download the GGUF model model_name = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF" model_file = "mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf" # this is the specific model file we'll use in this example. It's a 4-bit quant, but other levels of quantization are available in the model repo if preferred model_path = hf_hub_download(model_name, filename=model_file) ## Instantiate model from downloaded file llm = Llama( model_path=model_path, n_ctx=16000, # Context length to use n_threads=32, # Number of CPU threads to use n_gpu_layers=0 # Number of model layers to offload to GPU ) ## Generation kwargs generation_kwargs = { "max_tokens":20000, "stop":["</s>"], "echo":False, # Echo the prompt in the output "top_k":1 # This is essentially greedy decoding, since the model will always return the highest-probability token. Set this value > 1 for sampling decoding } ## Run inference prompt = "The meaning of life is " res = llm(prompt, **generation_kwargs) # Res is a dictionary ## Unpack and the generated text from the LLM response dictionary and print it print(res["choices"][0]["text"]) # res is short for result Keep in mind that mixtral is a fairly large model for most laptops and requires ~25+ GB RAM, so if you need a smaller model, try using one like llama-13b-chat-gguf (model_name="TheBloke/Llama-2-13B-chat-GGUF"; model_file="llama-2-13b-chat.Q4_K_M.gguf") or mistral-7b-openorca-gguf (model_name="TheBloke/Mistral-7B-OpenOrca-GGUF"; model_file="mistral-7b-openorca.Q4_K_M.gguf"). | 11 | 20 |
77,634,768 | 2023-12-10 | https://stackoverflow.com/questions/77634768/sqlalchemy-cant-communicate-to-mysql-server-upon-app-start | I'm have troubles with SQLalchemy in Flask app first minute after the app start (or restart). It looks like logger exceptions of a sort: sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (2006, 'MySQL server has gone away') <---- most often one sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically. sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column 'users.id'" sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column 'payments.id'" sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (2013, 'Lost connection to MySQL server during query') Then everything gets back to normal. It's not a critical issue but annoying. I tried to run warming up queries upon application start: with app.app_context(): for _ in range(5): try: db.session.execute(statement) logger.info("DB connection is successfully established") return except Exception as e: logger.warning(e) time.sleep(1) raise Exception("Couldn't establish a DB connection") It passes through just fine but then I see same issues. It doesn't happen in development environment, only in production where Flask app runs on uwsgi server. Is there a way to fix it? Update: connection URI looks like this: "mysql+mysqldb://user:password@localhost/mydb?unix_socket=/var/run/mysqld/mysqld.sock" | Seems How to correctly setup Flask + uWSGI + SQLAlchemy to avoid database connection issues worked for them. This is a copy paste of the answer just to avoid "link only answers". I would still recommend anyone arriving at this question to refer the original answer linked above. The SQLAlchemy manual provides two examples how to approach this: Using Connection Pools with Multiprocessing. The first approach involving Engine.dispose() can be approached using uwsgidecorators.postfork as suggested by Hett - simple example that should work if there's only the default binding involved: db = SQLAlchemy() def create_app(): app = Flask(__name__) app.config["SQLALCHEMY_DATABASE_URI"] = "postgres:///test" db.init_app(app) def _dispose_db_pool(): with app.app_context(): db.engine.dispose() try: from uwsgidecorators import postfork postfork(_dispose_db_pool) except ImportError: # Implement fallback when running outside of uwsgi... raise return app Meta: I am posting this as an answer as per OP's comment | 2 | 3 |
77,633,160 | 2023-12-9 | https://stackoverflow.com/questions/77633160/name-c-is-not-defined | I tried to import pytorch in Jupyter notebook: import torch When I run it I get this error message: NameError Traceback (most recent call last) <ipython-input-7-db4b44599dae> in <module> ----> 1 import torch 2 import numpy as np 3 from torch import nn 4 from tqdm.auto import tqdm 5 from torchvision import transforms /opt/anaconda3/lib/python3.8/site-packages/torch/__init__.py in <module> 463 raise # If __file__ is not None the cause is unknown, so just re-raise. 464 --> 465 for name in dir(_C): 466 if name[0] != '_' and not name.endswith('Base'): 467 __all__.append(name) NameError: name '_C' is not defined I have tried potential solutions but nothing worked. I tried deleting and reinstalling Pytorch and Cython, but I kept getting the same error. | I had this same error, I found the issue was an outdated package. I had to manually specify the latest typing_extensions: pip3 install typing_extensions==4.9.0 | 2 | 2 |
77,626,710 | 2023-12-8 | https://stackoverflow.com/questions/77626710/data-is-normally-distributed-but-ks-test-return-a-statistic-of-1-0 | I have an age variable. When I plotted it using the kde & qq-plot, the distribution seemed normal; however, when I performed the ks-test, the test statistics = 1.0, p = 0.0. Can someone please help me explain this observation? I use the ks-test on other variables, and the result was consistent with the visualization for others. # library import numpy as np import seaborn as sns import matplotlib.pyplot as plt import scipy.stats as sps # the age variable age = np.array([87, 88, 75, 76, 80, 88, 90, 80, 83, 85, 71, 73, 75, 93, 95, 68, 69, 66, 68, 78, 80, 83, 81, 82, 85, 76, 77, 88, 90, 80, 81, 85, 86, 87, 88, 92, 80, 82, 84, 72, 76, 61, 64, 86, 87, 82, 84, 69, 71, 73, 74, 64, 66, 77, 80, 60, 62, 86, 88, 91, 90, 92, 79, 80, 82, 84, 88, 89, 69, 70, 73, 75, 82, 85, 88, 89, 81, 83, 84, 86, 88, 71, 73, 75, 70, 73, 72, 73, 68, 69, 71, 75, 77, 83, 85, 77, 78, 66, 66, 68, 68, 69, 69, 70, 71, 71, 72, 92, 94, 97, 74, 78, 82, 84, 85, 87, 65, 67, 71, 73, 81, 83, 85, 78, 79, 80, 75, 78, 68, 70, 72, 79, 81, 83, 80, 81, 78, 81, 82, 61, 62, 67, 68, 71, 73, 88, 90, 81, 82, 80, 82, 84, 85, 86, 83, 84, 70, 72, 75, 76, 77, 73, 75, 66, 69, 71, 69, 73, 89, 91, 92, 69, 71, 73, 66, 68, 69, 82, 84, 78, 80, 63, 65, 96, 98, 78, 80, 70, 72, 73, 75, 76, 75, 78, 83, 84, 61, 63, 71, 72, 74, 89, 91, 74, 77, 66, 67, 80, 83, 77, 80, 82, 71, 74, 76, 82, 84, 86, 69, 74, 75, 70, 71, 86, 87, 70, 72, 77, 79, 81, 83, 62, 65, 76, 78, 73, 75, 76, 78, 73, 75, 73, 74, 76, 78, 67, 71, 81, 83, 85, 76, 78, 73, 74, 86, 88, 70, 71, 74, 75, 77, 79, 81, 81, 84, 86, 76, 79, 78, 80, 82, 65, 67, 78, 81, 70, 71, 74, 78, 74, 75, 73, 75, 67, 68, 76, 78, 81, 65, 68, 69, 71, 89, 91, 93, 77, 79, 68, 73, 80, 82, 77, 78, 80, 82, 81, 83, 73, 75, 66, 68, 69, 75, 77, 78, 81, 73, 75, 73, 76, 73, 76, 76, 78, 77, 79, 80, 82, 84, 77, 79, 78, 80, 71, 73, 76, 77, 81, 75, 79, 60, 62, 64, 70, 72, 73, 84, 87, 89, 68, 70, 89, 90, 93, 79, 81, 74, 75, 77, 73, 75, 66, 66, 68, 72, 72, 73, 80, 82, 86, 61, 63, 65]) # Visualization fig, ax = plt.subplots(1,2) # Making (row, col) of plots fig.set_figheight(4) # set height fig.set_figwidth(8) # set width sns.kdeplot(age, color = 'red', alpha = .1, fill = 'true', ax = ax[0]) # Distribution plot sm.qqplot(age, fit = True, line = '45', ax = ax[1]) # qqplot fig.tight_layout() # Tight layout plt.show() # show plots # KS test (because n > 50) print('n =', age.size) sps.kstest(age, 'norm') | @Timur Shtatland is correct. Your code is: sps.kstest(age, 'norm') without specifying the parameters of the normal distribution, you are comparing your data to a standard normal distribution (with mean 0 and standard deviation 1). So it not surprising that the p-value for the test is effectively zero. 
Instead you should use the mean and standard deviation of your data: import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm from scipy import stats # Data data = np.array([87, 88, 75, 76, 80, 88, 90, 80, 83, 85, 71, 73, 75, 93, 95, 68, 69, 66, 68, 78, 80, 83, 81, 82, 85, 76, 77, 88, 90, 80, 81, 85, 86, 87, 88, 92, 80, 82, 84, 72, 76, 61, 64, 86, 87, 82, 84, 69, 71, 73, 74, 64, 66, 77, 80, 60, 62, 86, 88, 91, 90, 92, 79, 80, 82, 84, 88, 89, 69, 70, 73, 75, 82, 85, 88, 89, 81, 83, 84, 86, 88, 71, 73, 75, 70, 73, 72, 73, 68, 69, 71, 75, 77, 83, 85, 77, 78, 66, 66, 68, 68, 69, 69, 70, 71, 71, 72, 92, 94, 97, 74, 78, 82, 84, 85, 87, 65, 67, 71, 73, 81, 83, 85, 78, 79, 80, 75, 78, 68, 70, 72, 79, 81, 83, 80, 81, 78, 81, 82, 61, 62, 67, 68, 71, 73, 88, 90, 81, 82, 80, 82, 84, 85, 86, 83, 84, 70, 72, 75, 76, 77, 73, 75, 66, 69, 71, 69, 73, 89, 91, 92, 69, 71, 73, 66, 68, 69, 82, 84, 78, 80, 63, 65, 96, 98, 78, 80, 70, 72, 73, 75, 76, 75, 78, 83, 84, 61, 63, 71, 72, 74, 89, 91, 74, 77, 66, 67, 80, 83, 77, 80, 82, 71, 74, 76, 82, 84, 86, 69, 74, 75, 70, 71, 86, 87, 70, 72, 77, 79, 81, 83, 62, 65, 76, 78, 73, 75, 76, 78, 73, 75, 73, 74, 76, 78, 67, 71, 81, 83, 85, 76, 78, 73, 74, 86, 88, 70, 71, 74, 75, 77, 79, 81, 81, 84, 86, 76, 79, 78, 80, 82, 65, 67, 78, 81, 70, 71, 74, 78, 74, 75, 73, 75, 67, 68, 76, 78, 81, 65, 68, 69, 71, 89, 91, 93, 77, 79, 68, 73, 80, 82, 77, 78, 80, 82, 81, 83, 73, 75, 66, 68, 69, 75, 77, 78, 81, 73, 75, 73, 76, 73, 76, 76, 78, 77, 79, 80, 82, 84, 77, 79, 78, 80, 71, 73, 76, 77, 81, 75, 79, 60, 62, 64, 70, 72, 73, 84, 87, 89, 68, 70, 89, 90, 93, 79, 81, 74, 75, 77, 73, 75, 66, 66, 68, 72, 72, 73, 80, 82, 86, 61, 63, 65]) # Fit a normal distribution to the data mu, std = norm.fit(data) shapiro_test = stats.shapiro(data) print("\nShapiro-Wilk Test:") print("Statistic: {:.2f}".format(shapiro_test[0])) print("p-value: {:.2f}".format(shapiro_test[1])) # Perform the KS test for normality ks_statistic, p_value = stats.kstest(data, 'norm', args=(mu, std)) print("\nKolmogorov-Smirnov Test:") print("Statistic: {:.2f}".format(ks_statistic)) print("p-value: {:.2f}".format(p_value))``` which produces this: Shapiro-Wilk Test: Statistic: 0.99 p-value: 0.07 Kolmogorov-Smirnov Test: Statistic: 0.05 p-value: 0.21 I would also plot a histogram of the data and then overlay a normal density with the parameters from your data: # Create histogram of the data count, bins, ignored = plt.hist(data, 20, density=True, alpha=0.5, color='gray') # Plot the PDF of the normal distribution xmin, xmax = plt.xlim() x = np.linspace(xmin, xmax, 100) p = norm.pdf(x, mu, std) plt.plot(x, p, 'k', linewidth=2) title = "Fit results: mu = %.2f, std = %.2f" % (mu, std) plt.title(title) plt.xlabel('Accuracy') plt.ylabel('Density') plt.show() | 3 | 0 |
77,636,721 | 2023-12-10 | https://stackoverflow.com/questions/77636721/how-to-get-a-certain-value-from-enum | I have the following class: class YesOrNo(enum.Enum): YES = "Y" NO = "N" I am getting my inputs such as YesOrNo("true") or YesOrNo("false"). To make this work, I think I need to change the class to: class YesOrNo(enum.Enum): YES = "true" NO = "false" But I also have a case where, whenever a variable's value is saved as YesOrNo.YES.value, it should answer "Y". I can't seem to figure out how to pull that off. | You want the _missing_ method: import enum class YesOrNo(enum.Enum): YES = "Y" NO = "N" @classmethod def _missing_(cls, value): if value.lower() in ('y', 'yes', 'true', 't'): return cls.YES elif value.lower() in ('n', 'no', 'false', 'f'): return cls.NO Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 2 | 2 |
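A quick usage check of the _missing_ approach above, assuming the YesOrNo class exactly as defined in the answer:

print(YesOrNo("true"))         # YesOrNo.YES -- "true" is not a member value, so _missing_ handles it
print(YesOrNo("Y"))            # YesOrNo.YES -- normal lookup by value
print(YesOrNo("false").value)  # 'N'
print(YesOrNo.YES.value)       # 'Y'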
77,634,718 | 2023-12-10 | https://stackoverflow.com/questions/77634718/server-for-saving-tcp-packets-from-routers | Asking for advice here. So I have a business that sells a specific kind of routers. One of the features is to monitor the activity of the router (i.e. making sure it's "live"). I asked the manufacturer of the routers to program them to send a TCP "ping" packet to my server IP address, to a specific port, every 15 minutes, with very basic data for monitoring (router serial number, "I'm alive" string). I have a Windows Server running on a static IP which receives the data. I tested this by sending a packet using netcat, and I can see it's received correctly. I used freeware called Packet Sender to open a specific TCP port and log the incoming traffic. What I need to do now is run code on my server which listens to the specific port, and saves the data (router serial number, time of transmission, etc.) in a DB. This way, customers will later be able to log into their account, and view their router's "activity". There will also be advanced features like notifying the customer whenever the server doesn't receive data from their router over a period of X minutes/hours, etc. I'm asking for advice as to how to set this up in the best way. I read about simple Python code that can listen to a port (like in this answer), and I guess I can use that to save the data to a local DB. But would that be best? What about race conditions (i.e. multiple routers sending data at the same time)? I guess these transmissions wuould need to enter into some basic queue and be dealt with one by one? Is there some good exiting framework I can use that will do all this for me? Any advice as to how to implement this would be great. I know some basic Python backend programming. | Correct answer with TCP You can basically use https://docs.python.org/3/library/socketserver.html#socketserver-tcpserver-example to accept tcp pings and write it to a database. 
Demo database using the standard library sqllite3 https://docs.python.org/3/library/sqlite3.html Demo server using standard library Httpserver https://docs.python.org/3.10/library/http.server.html#module-http.server to do some basic fetching import socketserver import time from http.server import HTTPServer, BaseHTTPRequestHandler from threading import Thread import sqlite3 con = sqlite3.connect("test.db", check_same_thread = False) cur = con.cursor() try: cur.execute("CREATE TABLE TCPPING(time, port, data)") except sqlite3.OperationalError: print("Table is already made") def parse_and_add_to_db(line: str, port): cur.execute(f"INSERT INTO TCPPING VALUES('{time.time()}', {port}, '{line}')") def tcp_reciever_thread(): class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler): def handle(self): data = self.request.recv(1024).decode("latin-1") parse_and_add_to_db(data, self.client_address[1]) self.request.sendall(b"ADDED") class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer): pass HOST, PORT = "localhost", 6000 ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler).serve_forever() tcp_reciever = Thread(target=tcp_reciever_thread) tcp_reciever.daemon = True tcp_reciever.start() #Server class functionserver(BaseHTTPRequestHandler): def do_GET(self): a, b = self.path[1:].split("/") print(a, b) data = cur.execute(f"SELECT * from TCPPING where {a}=='{b}'").fetchall() #OUTPUT self.send_response(200) self.end_headers() self.wfile.write((("\n".join(map(str,data))).encode())) self.wfile.flush() def log_message(args,*kwargs): return None httpd = HTTPServer(("127.0.0.1", 5000), functionserver) httpd.serve_forever() client.py import socket ip = "localhost" port = 6000 def client(message): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.connect((ip, port)) sock.sendall(message.encode()) response = str(sock.recv(1024), 'ascii') print("Received: {}".format(response)) client("abc") Wrong older answer where I thought it was ICMP ping and used tcp dump to capture icmp packets do not really have a "port" for listening to. I would suggest using tcpdump to capture this data. If possible you should use standard post requests to get the data. Maxmillion's answer is basically how to scale up an app for enterprise, not how to make it. For accomplishing the basic requirements on local, we only need the standard library and tcpdump. 
Overview We have a database using sqllite3 https://docs.python.org/3/library/sqlite3.html We have a daemon thread that uses subprocess.Popen https://docs.python.org/3/library/subprocess.html to get tcpdump https://www.tcpdump.org/ output, parse it and write it to the database In the main thread we start a Httpserver https://docs.python.org/3.10/library/http.server.html#module-http.server to do some basic fetching from subprocess import PIPE, Popen from http.server import HTTPServer, BaseHTTPRequestHandler from threading import Thread import sqlite3 con = sqlite3.connect("test.db", check_same_thread = False) cur = con.cursor() try: cur.execute("CREATE TABLE PINGS(time, ip1, ip2, type, id, seq, length)") except sqlite3.OperationalError: print("Table is already made") def parse_and_add_to_db(line: str): line = line.replace(",", "") time, _1, ip1, _2, ip2, _3, _4, type, _5, id, _6, seq, _7, length = line.split() ip2 = ip2[:-1] # print(f"inserting ('{time}', '{ip1}', '{ip2}', '{type}', {id}, {seq}, {length})") cur.execute(f"INSERT INTO PINGS VALUES('{time}', '{ip1}', '{ip2}', '{type}', {id}, {seq}, {length})") def tcpdump_thread(): # proc = Popen("sudo tcpdump -l -w /dev/null -i utun3 icmp --print -f -xx".split(), stdin=PIPE, stdout=PIPE, stderr=PIPE) # if we want the packet hex proc = Popen("sudo tcpdump -l -w /dev/null -i YOURINTERFACE icmp --print -f".split(), stdin=PIPE, stdout=PIPE, stderr=PIPE) # start a process to moitor pings assert proc.stdin is not None and proc.stdout is not None and proc.stderr is not None proc.stdin.write("YOURPASSWORDHERE\n".encode()) # password proc.stdin.flush() print("START") while True: parse_and_add_to_db(proc.stdout.readline().decode("latin-1")) bg_tcpdump = Thread(target=tcpdump_thread) bg_tcpdump.daemon = True bg_tcpdump.start() #Server class functionserver(BaseHTTPRequestHandler): def do_GET(self): a, b = self.path[1:].split("/") print(a, b) data = cur.execute(f"SELECT * from PINGS where {a}=='{b}'").fetchall() #OUTPUT self.send_response(200) self.end_headers() self.wfile.write((("\n".join(map(str,data))).encode())) self.wfile.flush() def log_message(args,*kwargs): return None httpd = HTTPServer(("127.0.0.1", 5000), functionserver) httpd.serve_forever() You can run this (tcpdump needs sudo and you need to put the correct interface to listen to pings on so you need to configure those) and access localhost:5000/ip1/1.1.1.1 in your browser and run ping 1.1.1.1 in the background and see the database entries pop up. If you want to do some parsing with the packet hex data you can use the line with # if we want the packet hex and you can uncomment the print statement in parse to debug. Hope this helps. Additional this is a very crude demo code do not directly put strings into sql for queries and possibly have a different process (instead of running the tcpdump and parsing in the server process) Even if you scale up using a proper database and fastapi/flask/django and sqlAlchemy for the server instead of this, the logic should be similar. For the suggested best practices - If you do not have a database, choose one on the cloud which will have an api. You don't need to bother about race conditions when using a production quality database. And tcp dump should have no issues keeping up with multiple pings. For alerts you could query the db at regular intervals (polling) for certain conditions from your webapp. For dumping the ping data you just need to dump it from your ping monitoring app. 
Overall it is not an overly difficult system to design, and it does not need advanced networking knowledge to create an unoptimized version that works reliably. Using C/C++/Rust or other low-level languages to get into the kernel/driver/network stack will give much better performance. But this is a Python question. | 3 | 2 |
77,635,098 | 2023-12-10 | https://stackoverflow.com/questions/77635098/pathlib-with-backlashes-within-input-string | I tried to check if a given path exists on Windows/Linux (using Path.exists()). A FileNotFoundError is raised on Linux only; obviously, looking at the POSIX path object, the Windows-style path is not converted to POSIX form. I would like to know why pathlib does not convert the Windows-style path to a POSIX path (currently running on Windows): from pathlib import PosixPath, WindowsPath, Path, PurePosixPath, PureWindowsPath raw_string = r'.\mydir\myfile' print(Path(raw_string)) print(PurePosixPath(raw_string)) Output: .\mydir\myfile .\mydir\myfile Both the Windows and Linux pathlib imports of the raw_string show the same Windows path as the output, and such a Windows path is of course not usable on Linux (second output). Shouldn't pathlib take care of these conversions so that Path objects are platform-agnostic? So that the last output should look like: ./mydir/myfile | Found a solution: Path(PureWindowsPath(raw_string)) works on either platform. It’s useful if you’re developing on Windows and want to simply copy-paste long Windows-formatted file paths. raw_string = r'.\mydir\myfile' print(Path(PureWindowsPath(raw_string))) Output: mydir/myfile And you do need PureWindowsPath: if, on a Linux system, you use print(Path(WindowsPath(raw_string))) instead, you will get the error: NotImplementedError: cannot instantiate 'WindowsPath' on your system | 2 | 1 |
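A small related sketch: if only the forward-slash string is needed (rather than a concrete Path object), PureWindowsPath.as_posix() gives it directly. Note that pathlib collapses the leading '.\', so the result here is 'mydir/myfile', matching the answer's output.

from pathlib import PureWindowsPath

raw_string = r'.\mydir\myfile'
print(PureWindowsPath(raw_string).as_posix())  # mydir/myfile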
77,642,254 | 2023-12-11 | https://stackoverflow.com/questions/77642254/connect-to-dbus-signal-correct-syntax-pyside6 | can you help me with the correct syntax to connect to a DBus signal? This is one of my many tries which at least runs and matches the signature in the docs: from PySide6 import QtDBus from PySide6.QtCore import Q from PySide6.QtWidgets import QMainWindow class MainWindow(QMainWindow): __slots__ = ["__mainwidget"] __mainwidget:QWidget def __init__ (self, *args, **kwargs): super().__init__(*args, **kwargs) service = 'org.freedesktop.DBus' path = '/org/freedesktop/DBus' iface = 'org.freedesktop.DBus' conn = QtDBus.QDBusConnection.systemBus() conn.connect(service, path, iface, "NameOwnerChanged", self, "nochangeslot") #smp = QtDBus.QDBusInterface(service, path, iface, connection=QtDBus.QDBusConnection.systemBus() def nochangeslot(self, arg:object) -> None: print(arg) pass But it doesn't work and looks weird with the slot as string... On the output I see: qt.dbus.integration: Could not connect "org.freedesktop.DBus" to ochangeslot Please consider this is a PySide6 issue and not a PyQt5 issue, the signature of the call is slightly different and my code doesn't hang like in similar topics on stackoverflow. Thank you in advance for any help! | This es the full solution to my initial post, I got this running with the QT/PySide support and they also acknowledged the hangig bug and a Python crash: import sys from PySide6.QtWidgets import QMainWindow from PySide6 import QtDBus, QtCore from PySide6.QtCore import QLibraryInfo, qVersion, Slot from PySide6.QtWidgets import QApplication, QMainWindow class MainWindow(QMainWindow): def __init__ (self, *args, **kwargs): super().__init__(*args, **kwargs) service = "org.freedesktop.DBus" path = "/org/freedesktop/DBus" iface = "org.freedesktop.DBus" conn = QtDBus.QDBusConnection.systemBus() #without this, the conn.connect call hangs, seems to be a bug, is already reported and fixed. conn.registerObject('/', self) conn.connect(service, path, iface, "NameOwnerChanged", self, QtCore.SLOT("nameownerchanged(QString, QString, QString)")) pass @Slot(str, str, str) def nameownerchanged(self, arg1:str, arg2:str, arg3:str) -> None: print(arg1) print(arg2) print(arg3) pass if __name__ == '__main__': print('Python {}.{}.{} {}'.format(sys.version_info[0], sys.version_info[1], sys.version_info[2], sys.platform)) print(QLibraryInfo.build()) app = QApplication(sys.argv) window = MainWindow() window.setWindowTitle(qVersion()) window.resize(800, 480) window.show() sys.exit(app.exec()) | 2 | 1 |
77,635,685 | 2023-12-10 | https://stackoverflow.com/questions/77635685/getting-amplitude-value-of-mp3-file-played-in-python-on-raspberry-pi | I am playing an mp3 file with Python on a Raspberry Pi: output_file = "sound.mp3" pygame.mixer.init() pygame.mixer.music.load(output_file) pygame.mixer.music.play() while pygame.mixer.music.get_busy(): pass sleep(0.2) Now while the mp3 file is playing, I would like to get the current amplitude of the sound played. Is there a way to do this? The only examples I was able to find work on a microphone input but not with a file being played. | There's a VU meter code on github working with microphone. You can easily change its input stream to wave file. If you want to work with .mp3 files, check this SO link for converting .mp3 to .wav on air without storing data on storage. import wave def main(): audio = pyaudio.PyAudio() try: # from wav file chunk = 1024 wf = wave.open("kimi_no_shiranai.wav", 'rb') stream = audio.open(format = audio.get_format_from_width(wf.getsampwidth()), channels = wf.getnchannels(), rate = wf.getframerate(), output = True) data = wf.readframes(chunk) maximal = Amplitude() while data: # writing to the stream is what *actually* plays the sound. stream.write(data) data = wf.readframes(chunk) amp = Amplitude.from_data(data) if amp > maximal: maximal = amp amp.display(scale=100, mark=maximal) # from microphone # stream = audio.open(format=pyaudio.paInt16, # channels=2, # rate=RATE, # input=True, # frames_per_buffer=INPUT_FRAMES_PER_BLOCK # ) # maximal = Amplitude() # while True: # data = stream.read(INPUT_FRAMES_PER_BLOCK) # amp = Amplitude.from_data(data) # if amp > maximal: # maximal = amp # amp.display(scale=100, mark=maximal) finally: stream.stop_stream() stream.close() audio.terminate() Edit: for converting .mp3 to .wav use this code. It doesn't have syncing problem. It stores converted wave into BytesIO buffer. from pydub import AudioSegment sound = AudioSegment.from_mp3("kimi_no_shiranai.mp3") wav_form = sound.export(format="wav") wf = wave.open(wav_form, 'rb') stream = audio.open(format = audio.get_format_from_width(sound.sample_width), channels = sound.channels, rate = sound.frame_rate, output = True) SO link 1 SO link 2 | 5 | 3 |
77,645,141 | 2023-12-12 | https://stackoverflow.com/questions/77645141/how-do-you-install-two-versions-of-python-on-a-docker-image-and-switch-them-on-b | I need to create a docker image that features two versions of Python, lets say, 3.9 & 3.10. The idea is to install both versions of Python and then install the pip/wheel/lib directories from Python 3.9 as it is the default version we want to use. I then created sys links to the pip/wheel/lib directories. Then using a .sh script, we will pass the Python version via a build argument, and using that ARG, we will select the alternative version of Python, this case being 10, which will make the 3.10 version the version used by the build. But i'm not sure what the best way to do this, the dockerfile. My Dockerfile looks like this: COPY --from=python1 --chown=developer:developer /usr/local/bin/py* /usr/local/bin/ COPY --from=python1 --chown=developer:developer /usr/local/bin/pip* /usr/local/bin/ COPY --from=python1 --chown=developer:developer /usr/local/bin/wheel /usr/local/bin/ COPY --from=python1 --chown=developer:developer /usr/local/lib/lib* /usr/local/lib/ RUN for link in /usr/local/bin/py* /usr/local/bin/pip* /usr/local/bin/wheel /usr/local/lib/lib*; do \ [ -e "$link" ] || ln -s "$link" /usr/local/bin/; \ done # Copy Base Python Version COPY --from=python1 --chown=developer:developer /usr/local/lib/python3.9/ /usr/local/lib/python3.9/ # Copy Alternative Python Version COPY --from=python2 --chown=developer:developer /usr/local/lib/python3.10/ /usr/local/lib/python3.10/ # Copy the version-switching script COPY sagacity-cd/swap.sh /usr/local/bin/ # Run the script to set the default version based on the build argument USER root RUN chmod +x /usr/local/bin/swap.sh && \ /usr/local/bin/swap.sh ${DEFAULT_PYTHON_VERSION} USER developer # Copy the version-switching script COPY sagacity-cd/swap.sh /usr/local/bin/ # Run the script to set the default version based on the build argument USER root RUN chmod +x /usr/local/bin/swap.sh && \ /usr/local/bin/swap.sh ${DEFAULT_PYTHON_VERSION} USER developer My swap.sh looks like this: #!/bin/bash if [ "$1" == "3.9" ]; then rm /usr/local/bin/python ln -s /usr/local/bin/python3.9 /usr/local/bin/python elif [ "$1" == "3.10" ]; then rm /usr/local/bin/python ln -s /usr/local/bin/python3.10 /usr/local/bin/python else echo "Invalid version specified. Usage: $0 [3.9|3.10]" exit 1 fi | generic image that has both and we want to just swap the version on our CI/CD build when we build the Lambda, via Terraform passing in build args If you're already going to build the Lambda via Terraform, why don't you just build the Lambda with the correct version of Python? Say the Dockerfile for the Lambda is like this: ARG PY_VERSION FROM python:${PY_VERSION}-bookworm # then all your lambda stuff COPY . /app ENTRYPOINT /app/run.sh Now you can build your lambda for different versions: docker build --build-arg="PY_VERSION=3.10" . docker build --build-arg="PY_VERSION=3.9" . docker build --build-arg="PY_VERSION=3.8" . This is mentioned here in the docs. If you really want a base image for the different versions, you can use the same thing, and pre-build the different versions into your own container repo, and then use the same mechanism to build the next layer (but that seems like overkill to me). | 2 | 1 |
77,627,725 | 2023-12-8 | https://stackoverflow.com/questions/77627725/cannot-connect-to-my-couchbase-cluster-with-python-sdk | I can reach my couchbase cluster with a simple cURL: curl -u $CB_USERNAME:$CB_PASSWORD http://$CB_HOST/pools/default/buckets but I do not manage to connect using the Python SDK: from datetime import timedelta from couchbase.auth import PasswordAuthenticator from couchbase.cluster import Cluster from couchbase.options import ClusterOptions import os # Configuration CB_HOST = os.environ.get('CB_HOST') CB_BUCKET = os.environ.get('CB_BUCKET') CB_USERNAME = os.environ.get('CB_USERNAME') CB_PASSWORD = os.environ.get('CB_PASSWORD') # Initialize Couchbase connection auth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD) options = ClusterOptions(auth) cluster = Cluster(f'couchbase://{CB_HOST}', options) This gives this error: Traceback (most recent call last): File "/Users/luc/code/couchbase/examples/main.py", line 27, in <module> cluster = Cluster(f'couchbase://{CB_HOST}', options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/cluster.py", line 99, in __init__ self._connect() File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/logic/wrappers.py", line 98, in wrapped_fn raise e File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/logic/wrappers.py", line 82, in wrapped_fn ret = fn(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/luc/code/couchbase/examples/venv/lib/python3.11/site-packages/couchbase/cluster.py", line 105, in _connect raise ErrorMapper.build_exception(ret) couchbase.exceptions.UnAmbiguousTimeoutException: UnAmbiguousTimeoutException(<ec=14, category=couchbase.common, message=unambiguous_timeout (14), C Source=/Users/couchbase/jenkins/workspace/python/sdk/python-packaging-pipeline/py-client/src/connection.cxx:199>) Any hints what I'm doing wrong ? | Just to eliminate possible network/connectivity issues, I suggest running SDK Doctor. From the docs: SDK doctor is a tool to diagnose application-server-side connectivity issues with your Couchbase Cluster. It makes the same connections to the Couchbase Server cluster that Couchbase SDKs make during bootstrapping, and then reports on the state of the connections made — giving diagnostic information that can help to solve puzzling network issues. Hopefully that will help you identify any issues. | 2 | 2 |
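In addition to SDK Doctor, a hedged sketch (assuming Couchbase Python SDK 4.x) that often surfaces the underlying problem: raise the connect timeout and call wait_until_ready(), which fails during bootstrap rather than on the first operation when the client can reach the management port (8091, which the curl test exercises) but not the KV port (11210). Variable names are taken from the question.

from datetime import timedelta
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, ClusterTimeoutOptions

auth = PasswordAuthenticator(CB_USERNAME, CB_PASSWORD)
# assumption: ClusterTimeoutOptions kwargs as in SDK 4.x
timeouts = ClusterTimeoutOptions(connect_timeout=timedelta(seconds=30), kv_timeout=timedelta(seconds=10))
cluster = Cluster(f'couchbase://{CB_HOST}', ClusterOptions(auth, timeout_options=timeouts))
cluster.wait_until_ready(timedelta(seconds=30))  # gives the SDK time to bootstrap; errors here point at connectivity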
77,623,089 | 2023-12-7 | https://stackoverflow.com/questions/77623089/pyvis-network-shows-html-which-is-not-100-percent-of-the-page | With this simple code the output is as expected, but only in about 30% of the browser window. import networkx as nx from pyvis.network import Network G = nx.Graph() G.add_node(1) G.add_node(2) G.add_node(3) G.add_edge(1, 2) G.add_edge(1, 3) net = Network(height="100%", width="100%", directed=True, notebook=True) net.from_nx(G) net.show("simple.html") Is there a better parameter to really use 100% of the browser? Or is there a better library to actually achieve something like that with Python? | The reason why you cannot change the canvas size when you set it to 100% is that it is not 100% of the browser window, it's 100% of its parent's height. If you change the width and background color, you see the canvas is inside another frame. net = Network(height="500px", width="500px", directed=True, notebook=True, bgcolor="#ff0000") You just need to give its size in pixels, and it will change its parent's size too. net = Network(height="600px", width="100%", directed=True, notebook=True) | 3 | 1 |
77,645,110 | 2023-12-12 | https://stackoverflow.com/questions/77645110/iterate-on-langchain-document-items | I loaded pdf files from a directory and I need to split them to smaller chunks to make a summary. The problem is that I can't iterate on documents object in a for loop and I get an error like this: AttributeError: 'tuple' object has no attribute 'page_content' How can I iterate on my document items to call the summary function for each of them? Here is my code: # Load the documents from langchain.document_loaders import DirectoryLoader document_directory = "pdf_files" loader = DirectoryLoader(document_directory) documents = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=50) # Iterate on long pdf documents to make chunks (2 pdf files here) for doc in documents: # it fails on this line texts = text_splitter.split_documents(doc) chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=prompt, combine_prompt=prompt) | And if you want to use a for loop, you can try: for file in os.listdir(pdf_folder_path): if file.endswith('.pdf'): pdf_path = os.path.join(pdf_folder_path, file) loader = PyPDFLoader(pdf_path) documents.extend(loader.load()) text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200) chunked_documents = text_splitter.split_documents(documents) | 2 | 2 |
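The loop in the answer above assumes a few names that are easy to miss; below is a self-contained version of the same idea (folder name taken from the question, documents list initialised explicitly). As a side note, split_documents expects a list of Document objects, which is why split_documents(doc) in the question fails — text_splitter.split_documents([doc]) would also fix the original loop.

import os
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

pdf_folder_path = "pdf_files"  # directory from the question
documents = []                 # the answer's loop extends this list

for file in os.listdir(pdf_folder_path):
    if file.endswith('.pdf'):
        loader = PyPDFLoader(os.path.join(pdf_folder_path, file))
        documents.extend(loader.load())

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunked_documents = text_splitter.split_documents(documents)  # takes a list of Documents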
77,648,771 | 2023-12-12 | https://stackoverflow.com/questions/77648771/deserializtion-parsing-error-kafkaprotobuf-python | Serialization code (Go lang) 1. Producer func NewProducer(kafkaBrokerURL string, kafkaSchemaRegistryUrl string) { producerConfig := getKafkaProducerConfig(config.EnvConfig) producer, err := confluent_kafka.NewProducer(producerConfig) if err != nil { log.WithFields(log.Fields{"err": err}).Error("Failed to create Kafka Producer") log.Panicf("Unable to create Kafka Producer") } client, err := schemaregistry.NewClient(schemaregistry.NewConfig(kafkaSchemaRegistryUrl)) if err != nil { log.WithFields(log.Fields{"err": err}).Error("Failed to create Kafka Client") log.Panicf("Unable to create Kafka Client") } serializer, err := protobuf.NewSerializer(client, serde.ValueSerde, protobuf.NewSerializerConfig()) if err != nil { log.WithFields(log.Fields{"err": err}).Error("Failed to create Kafka Serializer") log.Panicf("Unable to create Kafka Serializer") } KafkaProducerInstance = &KafkaProducer{ producer: producer, serializer: serializer, } log.Info("Created Kafka Producer and Serializer") } 2. Sending Kafka Message func producerHelper[kdt KafkaMesageDataTypes](message kdt, topicName string) { deliveryChan := make(chan confluent_kafka.Event) payload, err := KafkaProducerInstance.serializer.Serialize(topicName, &message) if err != nil { log.Errorf("Failed to serialize payload: %v\n", err) close(deliveryChan) return } err = KafkaProducerInstance.producer.Produce(&confluent_kafka.Message{ TopicPartition: confluent_kafka.TopicPartition{Topic: &topicName, Partition: confluent_kafka.PartitionAny}, Value: payload, }, deliveryChan) if err != nil { log.Errorf("Failed to Produce: %v\n", err) close(deliveryChan) return } e := <-deliveryChan m := e.(*confluent_kafka.Message) if m.TopicPartition.Error != nil { log.Errorf("Delivery failed: %v\n", m.TopicPartition.Error) close(deliveryChan) return } else { log.Infof("Delivered message to topic %s [%d] at offset %v\n", *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset) } close(deliveryChan) } Trying to consumer the meesage (Diff, Application in Python) from confluent_kafka import Consumer, KafkaError import KafkaDiagnoseResult_pb2 # replace with your generated module name from google.protobuf.message import DecodeError # Kafka consumer configuration conf = { 'bootstrap.servers': "localhost:9092/v3/", # Replace with your Kafka server address 'group.id': "myGroup", 'auto.offset.reset': 'earliest' } # Create a consumer instance consumer = Consumer(conf) # Subscribe to a topic from confluent_kafka import Consumer, KafkaError import KafkaDiagnoseResult_pb2 from google.protobuf.message import DecodeError # Kafka consumer configuration conf = { 'bootstrap.servers': "localhost:9092/v3/", 'group.id': "myGroup", 'auto.offset.reset': 'earliest' } # Create a consumer instance consumer = Consumer(conf) # Subscribe to a topic consumer.subscribe(['diagnosis']) try: while True: msg = consumer.poll(1.0) if msg is None: continue if msg.error(): if msg.error().code() == KafkaError._PARTITION_EOF: # End of partition event continue else: print(msg.error()) break # Deserialize the message try: data = KafkaDiagnoseResult_pb2.KafkaDiagnoseRequest() data.ParseFromString(msg.value()) except DecodeError as e: print(f"Error parsing message: {e}") print(f"Raw message data: {msg.value()}") print("Received message: ", data) except KeyboardInterrupt: pass finally: consumer.close() Error Error parsing message I am trying to debug it but unable to. 
The proto file is the same in both applications; I used protoc to generate the pb2 file. Your help is appreciated, thank you. I can get the message in raw format: Raw message data: b'\x00\x00\x00\x00\x02\x02\x08\n$1775100a-1a47-48b2-93b7-b7a331be59b4\x12\tcompleted' I have tried decoding it using UTF-8; it fails, as not all the fields are read. print(" Decode 1: ", dict_str) print("Decode 2: ", ast.literal_eval(dict_str)) Output from the above code: Unparsed Message: b'\x00\x00\x00\x00\x02\x02\x08\n$ccb0ad7e-abb2-4af6-90d1-187381f9d47e\x12\tcompleted' Decode 1: $ccb0ad7e-abb2-4af6-90d1-187381f9d47e completed Inner exception: source code string cannot contain null bytes | Your Go client is serializing with the Schema Registry, meaning your Python code must do the same. The records are not "just Protobuf", since there is a schema ID also encoded in the bytes, so a regular Protobuf parser will fail. There is example code in the repo for consuming Protobuf with the Registry integration: https://github.com/confluentinc/confluent-kafka-python/blob/master/examples/protobuf_consumer.py | 2 | 1 |
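A hedged sketch of what the linked example boils down to for the record above; this is not the answer author's code, the module, topic, group, and broker names are taken from the question, and the exact ProtobufDeserializer configuration can vary between confluent-kafka-python versions:

from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry.protobuf import ProtobufDeserializer
import KafkaDiagnoseResult_pb2

# Deserializer that strips the Schema Registry framing (magic byte + schema ID)
# before parsing the Protobuf payload.
protobuf_deserializer = ProtobufDeserializer(
    KafkaDiagnoseResult_pb2.KafkaDiagnoseRequest,
    {'use.deprecated.format': False},
)

consumer = DeserializingConsumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'myGroup',
    'auto.offset.reset': 'earliest',
    'value.deserializer': protobuf_deserializer,
})
consumer.subscribe(['diagnosis'])

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        print(msg.error())
        continue
    print('Received message:', msg.value())  # already a KafkaDiagnoseRequest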
77,641,455 | 2023-12-11 | https://stackoverflow.com/questions/77641455/how-to-limit-memory-usage-while-scanning-parquet-from-s3-and-join-using-polars | As a follow-up question to this question, I would like to find a way to limit the memory usage when scanning, filtering and joining a large dataframe saved in S3 cloud, with a tiny local dataframe. Suppose my code looks like the following: import pyarrow.dataset as ds import polars as pl import s3fs # S3 credentials secret = ... key = ... endpoint_url = ... # some tiny dataframe sellers_df = pl.DataFrame({'seller_id': ['0332649223', '0192491683', '0336435426']}) # scan, filter and join with huge dataframe on S3 fs = s3fs.S3FileSystem(endpoint_url=endpoint_url, key=key, secret=secret) dataset = ds.dataset(f'{s3_bucket}/benchmark_dt/dt_partitions', filesystem=fs, partitioning='hive') scan_df = pl.scan_pyarrow_dataset(dataset) \ .filter(pl.col('dt') >= '2023-05-17') \ .filter(pl.col('dt') <= '2023-10-18') \ .join(sellers_df.lazy(), on='seller_id', how='inner').collect() And my parquet files layout, looks like the following: -- dt_partitions -- dt=2023-06-09 -- data.parquet -- dt=2023-06-10 -- data.parquet -- dt=2023-06-11 -- data.parquet -- dt=2023-06-12 -- data.parquet ... When running the code I notice that Polars first loads the entire dataset to the memory, according to the given dates, and after performs the join. This causes me severe memory problems. Is there any way to perform the join in pre-defined batches/streaming to save memory? Thanks in advance. Edit: This is the explain plan (you can see no streaming applied): INNER JOIN: LEFT PLAN ON: [col("seller_id")] PYTHON SCAN PROJECT */3 COLUMNS SELECTION: ((pa.compute.field('dt') >= '2023-10-17') & (pa.compute.field('dt') <= '2023-10-18')) RIGHT PLAN ON: [col("seller_id")] DF ["seller_id"]; PROJECT */1 COLUMNS; SELECTION: "None" END INNER JOIN INNER JOIN: LEFT PLAN ON: [col("seller_id")] However, when using is_in: PYTHON SCAN PROJECT */3 COLUMNS SELECTION: ((pa.compute.field('seller_id')).isin(["0332649223","0192491683","0336435426","3628932648","5241104373","1414317462","4028203396","6445502649","1131069079","9027417785","6509736571","9214134975","7722199293","1617136891","8786329949","8260764409","5103636478","3444202168","9066806312","3961998994","7345385102","2756955097","7038039666","0148664533","5120870693","8843132164","6424549457","8242686761","3148647530","8329075741","0803877447","2228154163","8661602117","2544985488","3241983296","4756084729","5317176976","0658022895","3802149808","2368104663","0835399702","0806598632","9753553141","3473629988","1145080603","5731199445","7622500016","4980968502","6713967792","8469333969"]) & ((pa.compute.field('dt') >= '2023-10-17') & (pa.compute.field('dt') <= '2023-10-18'))) Followed @Dean MacGregor answer, added os.environ['AWS_ALLOW_HTTP'] = 'true' and it worked: --- STREAMING INNER JOIN: LEFT PLAN ON: [col("seller_id")] Parquet SCAN s3://test-bucket/benchmark_dt/dt_partitions/dt=2023-10-17/part-0.parquet PROJECT */3 COLUMNS RIGHT PLAN ON: [col("seller_id")] DF ["seller_id"]; PROJECT */1 COLUMNS; SELECTION: "None" END INNER JOIN --- END STREAMING | polars doesn't (know how to) do predicate pushdown to the pa dataset. The development efforts are to bolster its own cloud hive reading so perhaps give that a try? The syntax for using it is a bit different than pyarrow. It should look roughly like this with possible nuance for how to enter your auth info. 
import polars as pl scan_df = pl.scan_parquet(f's3://{s3_bucket}/benchmark_dt/dt_partitions/**/*.parquet', storage_options={'access_key_id':key, 'secret_access_key':secret} ) Note the difference in syntax: the path needs the "s3://" prefix and you have to use globbing patterns for the hive structure. For the auth, use the storage_options parameter with a dictionary of key/value pairs supported by Object Store. It doesn't rely on, or utilize, fsspec/s3fs as pyarrow does. From there you can do scan_df.filter(pl.col('dt') >= '2023-05-17') \ .filter(pl.col('dt') <= '2023-10-18') \ .join(sellers_df.lazy(), on='seller_id', how='inner').collect() However, for predicate pushdown to work, since your hive doesn't partition by seller_id, your underlying parquet files would need to have row_groups that are separated by seller_id and statistics that denote that separation. Even without predicate pushdown, it is still possible to stream the data, but you need to change collect() to collect(streaming=True). If you need to access a local/custom S3 endpoint, then set os.environ['AWS_ALLOW_HTTP'] = 'true' to tell Object Store to connect to a non-AWS URL. Here are more environment variables that it will look for. | 2 | 2 |
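A hedged, consolidated sketch combining the accepted answer above with the question's edit; the bucket name and credentials are placeholders rather than values from a real setup, and the seller IDs are the ones from the question:

import os
import polars as pl

os.environ['AWS_ALLOW_HTTP'] = 'true'  # only needed for a custom/local S3 endpoint

s3_bucket = 'test-bucket'
key, secret = '...', '...'
sellers_df = pl.DataFrame({'seller_id': ['0332649223', '0192491683', '0336435426']})

scan_df = pl.scan_parquet(
    f's3://{s3_bucket}/benchmark_dt/dt_partitions/**/*.parquet',
    storage_options={'access_key_id': key, 'secret_access_key': secret},
)

result = (
    scan_df
    .filter(pl.col('dt') >= '2023-05-17')
    .filter(pl.col('dt') <= '2023-10-18')
    .join(sellers_df.lazy(), on='seller_id', how='inner')
    .collect(streaming=True)  # stream chunk-wise instead of materialising the whole scan
)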
77,642,547 | 2023-12-11 | https://stackoverflow.com/questions/77642547/class-methods-are-not-at-the-same-address-for-both-parent-and-child-class | In the code below I try (for educational reasons) to implement this logic with class methods. What I observe is that (a) the func1 inside the NO_CALCULATE list is not at the same address as func1 of the Parent class, and (b) the elements inside the CALCULATE list are classmethod objects, not (bound) methods like the one in the NO_CALCULATE list. As a result, my code evaluates the if statement as True and calculates all the functions. Can someone explain (a), (b) and the overall behaviour as well? (Reminder: I know how to implement this logic without class methods, but I am trying to do it this way for educational purposes.) class Parent: @classmethod def func1(cls): print("hello func1") @classmethod def func2(cls): print("hello func2") @classmethod def func3(cls): print("hello func3") CALCULATE = [func1, func2, func3] NO_CALCULATE = [] @classmethod def calculate_kpis(cls): for func in cls.CALCULATE: if func not in cls.NO_CALCULATE: func.__get__(cls)() class Child(Parent): NO_CALCULATE = [Parent.func1] #REMOVE THIS CALCULATION FROM CHILD if __name__ == "__main__": p1 = Child() p1.calculate_kpis() | What do you mean by "address"? In Python we don't use object addresses at all. There is, yes, an object ID, that as an implementation detail is the object pointer, (its address), but it is not usable as such from pure Python code. So, assuming you are seeing the ID's: both classmethods and ordinary instance methods are dynamic objects, and they are created new when they are retrieved. The underlying function-object that is used for each method is the same object, always - but when we use a Parent.func3 expression to retrieve the func3 classmethod, Python triggers an underlying mechanism (see write ups for "Python's descriptor protocol", if you want to dig on that), which creates a new "method object" which is bound to the target class (or instance, in the case of normal methods). This object is not cached - that is also an implementation detail, so each time it is retrieved, a new one is created. One can check these assertions by either comparing their ID, or using the is operator. Pasting directly from my interactive interpreter: In [56]: class Parent: ...: @classmethod ...: def func1(cls): ...: ... ...: In [57]: class Child(Parent): ...: pass ...: In [58]: id(Parent.func1) Out[58]: 139726686424000 In [59]: id(Parent.func1) Out[59]: 139726691941312 In [60]: id(Child.func1) Out[60]: 139727237259712 In [61]: Parent.func1 is Parent.func1 Out[61]: False In [62]: Parent.func1.__func__ is Child.func1.__func__ Out[62]: True As you can see, methods are always different objects, but the underlying function, reachable through the __func__ attribute in a method, is the same. This explains why your logic for "NO_CALCULATE" does not work: methods lack a custom equality comparison, and then Python defaults to identity (is): and the class method in the NO_CALCULATE list does not have the same identity as the one retrieved dynamically inside the calculate_kpis for loop. The remedy would be to compare by the __func__ attribute of each method, or rather, by the method name, as a string: that will prevent methods defined in other super-classes (such as mixins or intermediary classes) from running as well: ... class Parent: ... @classmethod def calculate_kpis(cls): for func in cls.CALCULATE: # compares using the method/function name, as string.
if func.__name__ not in cls.NO_CALCULATE: # No need to use the descriptor inner method # `__get__`: the first parameter can be filled # in as a positional parameter normally func(cls) # (rather than func.__get__(cls)()) ... class Child(Parent): NO_CALCULATE = ["func1"] #List by method name, as string. ... | 2 | 2 |
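A compact, runnable variant of the fix for the record above (hedged: it differs slightly from the answer's snippet by comparing through __func__.__name__ and by binding with __get__(None, cls), which works on the raw classmethod objects stored in CALCULATE):

class Parent:
    @classmethod
    def func1(cls): print("hello func1")
    @classmethod
    def func2(cls): print("hello func2")
    @classmethod
    def func3(cls): print("hello func3")

    CALCULATE = [func1, func2, func3]   # raw classmethod objects at class-body time
    NO_CALCULATE = []

    @classmethod
    def calculate_kpis(cls):
        for func in cls.CALCULATE:
            # compare by the underlying function's name, not by object identity
            if func.__func__.__name__ not in cls.NO_CALCULATE:
                func.__get__(None, cls)()   # bind the classmethod to cls, then call it

class Child(Parent):
    NO_CALCULATE = ["func1"]   # method names as strings

Child().calculate_kpis()       # prints "hello func2" and "hello func3"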
77,644,599 | 2023-12-12 | https://stackoverflow.com/questions/77644599/why-does-adding-a-break-statement-significantly-slow-down-the-numba-function | I have the following Numba function: @numba.njit def count_in_range(arr, min_value, max_value): count = 0 for a in arr: if min_value < a < max_value: count += 1 return count It counts how many values are in the range in the array. However, I realized that I only needed to determine if they existed. So I modified it as follows: @numba.njit def count_in_range2(arr, min_value, max_value): count = 0 for a in arr: if min_value < a < max_value: count += 1 break # <---- break here return count Then, this function becomes slower than before the change. Under certain conditions, it can be surprisingly more than 10 times slower. Benchmark code: from timeit import timeit rng = np.random.default_rng(0) arr = rng.random(10 * 1000 * 1000) # To compare on even conditions, choose the condition that does not terminate early. min_value = 0.5 max_value = min_value - 1e-10 assert not np.any(np.logical_and(min_value <= arr, arr <= max_value)) n = 100 for f in (count_in_range, count_in_range2): f(arr, min_value, max_value) elapsed = timeit(lambda: f(arr, min_value, max_value), number=n) / n print(f"{f.__name__}: {elapsed * 1000:.3f} ms") Result: count_in_range: 3.351 ms count_in_range2: 42.312 ms Further experimenting, I found that the speed varies greatly depending on the search range (i.e. min_value and max_value). At various search ranges: count_in_range2: 5.802 ms, range: (0.0, -1e-10) count_in_range2: 15.408 ms, range: (0.1, 0.09999999990000001) count_in_range2: 29.571 ms, range: (0.25, 0.2499999999) count_in_range2: 42.514 ms, range: (0.5, 0.4999999999) count_in_range2: 24.427 ms, range: (0.75, 0.7499999999) count_in_range2: 12.547 ms, range: (0.9, 0.8999999999) count_in_range2: 5.747 ms, range: (1.0, 0.9999999999) Can someone explain to me what is going on? I am using Numba 0.58.1 under Python 3.10.11. Confirmed on both Windows 10 and Ubuntu 22.04. EDIT: As an appendix to Jérôme Richard's answer: As he pointed out in the comments, the performance difference that depends on a search range is likely due to branch prediction. For example, when min_value is 0.1, min_value < a has a 90% chance of being true, and a < max_value has a 90% chance of being false. So mathematically it can be predicted correctly with 81% accuracy. I have no idea how the CPU does this, but I have come up with a way to check if this logic is correct. First, by partitioning the array with values above and below the threshold, and second, by mixing it with a certain probability of error. When the array is partitioned, the number of branch prediction misses should be unaffected by the threshold. When we include errors in it, the number of misses should increase depending on the errors. 
Here is the updated benchmark code: from timeit import timeit import numba import numpy as np @numba.njit def count_in_range(arr, min_value, max_value): count = 0 for a in arr: if min_value < a < max_value: count += 1 return count @numba.njit def count_in_range2(arr, min_value, max_value): count = 0 for a in arr: if min_value < a < max_value: count += 1 break # <---- break here return count def partition(arr, threshold): """Place the elements smaller than the threshold in the front and the elements larger than the threshold in the back.""" less = arr[arr < threshold] more = arr[~(arr < threshold)] return np.concatenate((less, more)) def partition_with_error(arr, threshold, error_rate): """Same as partition, but includes errors with a certain probability.""" less = arr[arr < threshold] more = arr[~(arr < threshold)] less_error, less_correct = np.split(less, [int(len(less) * error_rate)]) more_error, more_correct = np.split(more, [int(len(more) * error_rate)]) mostly_less = np.concatenate((less_correct, more_error)) mostly_more = np.concatenate((more_correct, less_error)) rng = np.random.default_rng(0) rng.shuffle(mostly_less) rng.shuffle(mostly_more) out = np.concatenate((mostly_less, mostly_more)) assert np.array_equal(np.sort(out), np.sort(arr)) return out def bench(f, arr, min_value, max_value, n=10, info=""): f(arr, min_value, max_value) elapsed = timeit(lambda: f(arr, min_value, max_value), number=n) / n print(f"{f.__name__}: {elapsed * 1000:.3f} ms, min_value: {min_value:.1f}, {info}") def main(): rng = np.random.default_rng(0) arr = rng.random(10 * 1000 * 1000) thresholds = np.linspace(0, 1, 11) print("#", "-" * 10, "As for comparison", "-" * 10) bench( count_in_range, arr, min_value=0.5, max_value=0.5 - 1e-10, ) print("\n#", "-" * 10, "Random Data", "-" * 10) for min_value in thresholds: bench( count_in_range2, arr, min_value=min_value, max_value=min_value - 1e-10, ) print("\n#", "-" * 10, "Partitioned (Yet Still Random) Data", "-" * 10) for min_value in thresholds: bench( count_in_range2, partition(arr, threshold=min_value), min_value=min_value, max_value=min_value - 1e-10, ) print("\n#", "-" * 10, "Partitioned Data with Probabilistic Errors", "-" * 10) for ratio in thresholds: bench( count_in_range2, partition_with_error(arr, threshold=0.5, error_rate=ratio), min_value=0.5, max_value=0.5 - 1e-10, info=f"error: {ratio:.0%}", ) if __name__ == "__main__": main() Result: # ---------- As for comparison ---------- count_in_range: 3.518 ms, min_value: 0.5, # ---------- Random Data ---------- count_in_range2: 5.958 ms, min_value: 0.0, count_in_range2: 15.390 ms, min_value: 0.1, count_in_range2: 24.715 ms, min_value: 0.2, count_in_range2: 33.749 ms, min_value: 0.3, count_in_range2: 40.007 ms, min_value: 0.4, count_in_range2: 42.168 ms, min_value: 0.5, count_in_range2: 37.427 ms, min_value: 0.6, count_in_range2: 28.763 ms, min_value: 0.7, count_in_range2: 20.089 ms, min_value: 0.8, count_in_range2: 12.638 ms, min_value: 0.9, count_in_range2: 5.876 ms, min_value: 1.0, # ---------- Partitioned (Yet Still Random) Data ---------- count_in_range2: 6.006 ms, min_value: 0.0, count_in_range2: 5.999 ms, min_value: 0.1, count_in_range2: 5.953 ms, min_value: 0.2, count_in_range2: 5.952 ms, min_value: 0.3, count_in_range2: 5.940 ms, min_value: 0.4, count_in_range2: 6.870 ms, min_value: 0.5, count_in_range2: 5.939 ms, min_value: 0.6, count_in_range2: 5.896 ms, min_value: 0.7, count_in_range2: 5.899 ms, min_value: 0.8, count_in_range2: 5.880 ms, min_value: 0.9, count_in_range2: 5.884 ms, min_value: 1.0, 
# ---------- Partitioned Data with Probabilistic Errors ---------- # Note that min_value = 0.5 in all the following. count_in_range2: 5.939 ms, min_value: 0.5, error: 0% count_in_range2: 14.015 ms, min_value: 0.5, error: 10% count_in_range2: 22.599 ms, min_value: 0.5, error: 20% count_in_range2: 31.763 ms, min_value: 0.5, error: 30% count_in_range2: 39.391 ms, min_value: 0.5, error: 40% count_in_range2: 42.227 ms, min_value: 0.5, error: 50% count_in_range2: 38.748 ms, min_value: 0.5, error: 60% count_in_range2: 31.758 ms, min_value: 0.5, error: 70% count_in_range2: 22.600 ms, min_value: 0.5, error: 80% count_in_range2: 14.090 ms, min_value: 0.5, error: 90% count_in_range2: 6.027 ms, min_value: 0.5, error: 100% I am satisfied with this result. | TL;DR: Numba uses LLVM which not able to automatically vectorize the code when there is a break. One way to fix this is to compute the operation chunk by chunk. Numba is based on the LLVM compiler toolchain to compile the Python code to a native one. Numlba generates an LLVM intermediate representation (IR) from the Python code and then gives that to LLVM so it can generate a fast native code. All the low-level optimizations are made by LLVM, not actually Numba itself. In this case, LLVM is not able to automatically vectorize the code when there is a break. Numba doesn't do any pattern recognition here nor run any code on the GPU (basic numba.njit code is always run on the CPU). Note that "vectorization" in this context means generating SIMD instructions from a scalar IR code. This word has a different meaning in the context of a Numpy Python code (which means calling native functions so to reduce the overhead but the native functions are not necessarily using SIMD instructions). Under the hood I reproduced the issue with Clang which is a C++ compiler using also the LLVM toolchain. Here is the equivalent C++ code: #include <cstdint> #include <cstdlib> #include <vector> int64_t count_in_range(const std::vector<double>& arr, double min_value, double max_value) { int64_t count = 0; for(int64_t i=0 ; i<arr.size() ; ++i) { double a = arr[i]; if (min_value < a && a < max_value) { count += 1; } } return count; } This code results in the following assembly main loop: .LBB0_6: # =>This Inner Loop Header: Depth=1 vmovupd ymm8, ymmword ptr [rcx + 8*rax] vmovupd ymm9, ymmword ptr [rcx + 8*rax + 32] vmovupd ymm10, ymmword ptr [rcx + 8*rax + 64] vmovupd ymm11, ymmword ptr [rcx + 8*rax + 96] vcmpltpd ymm12, ymm2, ymm8 vcmpltpd ymm13, ymm2, ymm9 vcmpltpd ymm14, ymm2, ymm10 vcmpltpd ymm15, ymm2, ymm11 vcmpltpd ymm8, ymm8, ymm4 vandpd ymm8, ymm12, ymm8 vpsubq ymm3, ymm3, ymm8 vcmpltpd ymm8, ymm9, ymm4 vandpd ymm8, ymm13, ymm8 vpsubq ymm5, ymm5, ymm8 vcmpltpd ymm8, ymm10, ymm4 vandpd ymm8, ymm14, ymm8 vpsubq ymm6, ymm6, ymm8 vcmpltpd ymm8, ymm11, ymm4 vandpd ymm8, ymm15, ymm8 vpsubq ymm7, ymm7, ymm8 add rax, 16 cmp rsi, rax jne .LBB0_6 The instructions vmovupd, vcmpltpd and vandpd, etc. show that the assembly code fully uses SIMD instructions. If we add a break, then it is not the case any-more: .LBB0_4: # =>This Inner Loop Header: Depth=1 vmovsd xmm2, qword ptr [rcx + 8*rsi] # xmm2 = mem[0],zero vcmpltpd xmm3, xmm2, xmm1 vcmpltpd xmm2, xmm0, xmm2 vandpd xmm2, xmm2, xmm3 vmovq rdi, xmm2 sub rax, rdi test dil, 1 jne .LBB0_2 lea rdi, [rsi + 1] cmp rdx, rsi mov rsi, rdi jne .LBB0_4 Here vmovsd moves a scalar value in the loop (and rsi is incremented of 1 per loop iteration). This later code is significantly less efficient. 
Indeed, it operates on only one item per iteration, as opposed to 16 items for the previous code. We can use the compilation flag -Rpass-missed=loop-vectorize to check that the loop is indeed not vectorized. Clang explicitly reports: remark: loop not vectorized [-Rpass-missed=loop-vectorize] To know the reason, we can use the flag -Rpass-analysis=loop-vectorize: loop not vectorized: could not determine number of loop iterations [-Rpass-analysis=loop-vectorize] Thus, we can conclude that the LLVM optimizer does not support this pattern of code. Solution One way to avoid this issue is to operate on chunks. The computation of each chunk can be fully vectorized by Clang, and you can return early at the first chunk that contains a match. Here is untested code: @numba.njit def count_in_range_faster(arr, min_value, max_value): count = 0 for i in range(0, arr.size, 16): if arr.size - i >= 16: # Optimized SIMD-friendly computation of 1 chunk of size 16 tmp_view = arr[i:i+16] for j in range(0, 16): if min_value < tmp_view[j] < max_value: count += 1 if count > 0: return 1 else: # Fallback implementation (variable-sized chunk) for j in range(i, arr.size): if min_value < arr[j] < max_value: count += 1 if count > 0: return 1 return 0 The C++ equivalent code is properly vectorized. One needs to check that this is also true for the Numba code with count_in_range_faster.inspect_llvm(), but the following timings show that the above implementation is faster than the two others. Performance results Here are results on a machine with a Xeon W-2255 CPU using Numba 0.56.0: count_in_range: 7.112 ms count_in_range2: 35.317 ms count_in_range_faster: 5.827 ms <---------- | 4 | 6 |
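A hedged convenience sketch (not part of the answer above) for eyeballing the generated code from Python via Numba's own inspection API, which the answer mentions:

import numpy as np

# trigger compilation for a float64 array signature first
count_in_range_faster(np.zeros(32), 0.25, 0.75)

for sig, asm in count_in_range_faster.inspect_asm().items():
    print(sig)
    print(asm)   # look for packed SIMD instructions such as vcmppd/vandpd in the hot loop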
77,628,411 | 2023-12-8 | https://stackoverflow.com/questions/77628411/how-to-convert-asynciterable-to-asyncio-task | I am using Python 3.11.5 with the below code: import asyncio from collections.abc import AsyncIterable # Leave this iterable be, the question is about # how to use many instances of this in parallel async def iterable() -> AsyncIterable[int]: yield 1 yield 2 yield 3 # How can one get multiple async iterables to work with asyncio.gather? # In other words, since asyncio.gather works with asyncio.Task, # (1) How can one convert an async iterable to a Task? # (2) How can one use asyncio.gather to run many of these tasks in parallel, # keeping the results 1-1 with the source iterables? results_1, results_2, results_3 = asyncio.gather(iterable(), iterable(), iterable()) To restate the question, how can one get: An AsyncIterable as an asyncio task, where the task iterates until exhaustion Run multiple of these tasks in parallel, storing the results on a per-task basis (e.g. for use with asyncio.gather)? I am looking for a 1 - 3 line snippet showing how to connect these dots. | According to your code snippet, you're trying to pass an async generator function (iterable) directly to asyncio.gather; however, it expects awaitables (coroutines, Tasks, or Futures). So, to fix the issue, one possible solution is to create a new coroutine that consumes each async iterable() and collects its items into a list. The code will be like this: async def collect(async_iterable): return [item async for item in async_iterable] async def main(): return await asyncio.gather( collect(iterable()), collect(iterable()), collect(iterable()) ) results = asyncio.run(main()) In this way, you would have three [1, 2, 3] lists, but there is still a problem: the iterable()s are not actually interleaved concurrently. If you add a print inside collect(), you will see that they run sequentially. Therefore, to get asynchronous behaviour between your iterables, you need at least one await statement within the generator, so the event loop has a point at which it can switch between them. Here we can use await asyncio.sleep(0) as a trick! Here's the whole code: import asyncio from collections.abc import AsyncIterable async def iterable() -> AsyncIterable[int]: yield 1 await asyncio.sleep(0) yield 2 await asyncio.sleep(0) yield 3 async def collect(async_iterable: AsyncIterable[int]) -> list: result = [] async for item in async_iterable: print(item) result.append(item) return result async def main(): tasks = [ asyncio.create_task(collect(iterable())), asyncio.create_task(collect(iterable())), asyncio.create_task(collect(iterable())) ] results = await asyncio.gather(*tasks) return results results_1, results_2, results_3 = asyncio.run(main()) print(results_1, results_2, results_3) Out: 1 1 1 2 2 2 3 3 3 [1, 2, 3] [1, 2, 3] [1, 2, 3] | 2 | 3 |
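Since the question above targets Python 3.11, here is a hedged alternative sketch (not from the accepted answer) using asyncio.TaskGroup instead of gather, reusing the iterable() and collect() helpers defined above:

import asyncio

async def main():
    async with asyncio.TaskGroup() as tg:
        tasks = [tg.create_task(collect(iterable())) for _ in range(3)]
    # the TaskGroup block only exits once every task has finished
    return [t.result() for t in tasks]

results_1, results_2, results_3 = asyncio.run(main())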
77,645,123 | 2023-12-12 | https://stackoverflow.com/questions/77645123/export-pandas-io-excel-object-into-excel-file | I have a response object that contains an Excel file, which I am decoding like this: xl = pd.ExcelFile(io.BytesIO(response.content)) Now I want to export this into a .xlsx file. This object has the pandas.io.excel._base.ExcelFile type. Is there any way to export this? I have tried .to_excel but that didn't work. | ExcelFile's purpose is to read and parse excel files into pandas objects. If your object is already an excel file, why pass it to ExcelFile at all? Directly save the response.content to a file: with open('outfile.xlsx', 'wb') as f: f.write(response.content) If you want to save the individual sheets, you can indeed use pandas: xl = pd.ExcelFile(io.BytesIO(response.content)) for name in xl.sheet_names: xl.parse(name).to_excel(f'{name}.xlsx') | 2 | 1 |
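A hedged follow-up sketch for the record above (not part of the accepted answer): writing every sheet into a single new .xlsx file with pd.ExcelWriter; the output filename is just illustrative and response is assumed to be the HTTP response from the question:

import io
import pandas as pd

xl = pd.ExcelFile(io.BytesIO(response.content))
with pd.ExcelWriter('exported.xlsx') as writer:
    for name in xl.sheet_names:
        xl.parse(name).to_excel(writer, sheet_name=name, index=False)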