question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
---|---|---|---|---|---|---|
76,711,533 | 2023-7-18 | https://stackoverflow.com/questions/76711533/how-to-use-the-python-openai-client-with-both-azure-and-openai-at-the-same-time | OpenAI offers a Python client, currently in version 0.27.8, which supports both Azure and OpenAI. Here are examples of how to use it to call the ChatCompletion for each provider: # openai_chatcompletion.py """Test OpenAI's ChatCompletion endpoint""" import os import openai import dotenv dotenv.load_dotenv() openai.api_key = os.environ.get('OPENAI_API_KEY') # Hello, world. api_response = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "user", "content": "Hello!"} ], max_tokens=16, temperature=0, top_p=1, frequency_penalty=0, presence_penalty=0, ) print('api_response:', type(api_response), api_response) print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message) And: # azure_openai_35turbo.py """Test Microsoft Azure's ChatCompletion endpoint""" import os import openai import dotenv dotenv.load_dotenv() openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_version = "2023-05-15" openai.api_key = os.getenv("AZURE_OPENAI_KEY") # Hello, world. # In addition to the `api_*` properties above, mind the difference in arguments # as well between OpenAI and Azure: # - OpenAI from OpenAI uses `model="gpt-3.5-turbo"`! # - OpenAI from Azure uses `engine="‹deployment name›"`! ⚠️ # > You need to set the engine variable to the deployment name you chose when # > you deployed the GPT-35-Turbo or GPT-4 models. # This is the name of the deployment I created in the Azure portal on the resource. api_response = openai.ChatCompletion.create( engine="gpt-35-turbo", # engine = "deployment_name". messages=[ {"role": "user", "content": "Hello!"} ], max_tokens=16, temperature=0, top_p=1, frequency_penalty=0, presence_penalty=0, ) print('api_response:', type(api_response), api_response) print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message) i.e. api_type and other settings are globals of the Python library. Here is a third example to transcribe audio (it uses Whisper, which is available on OpenAI but not on Azure): # openai_transcribe.py """ Test the transcription endpoint https://platform.openai.com/docs/api-reference/audio """ import os import openai import dotenv dotenv.load_dotenv() openai.api_key = os.getenv("OPENAI_API_KEY") audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb") transcript = openai.Audio.transcribe( model="whisper-1", file=audio_file, prompt="Part of a Bosnian language class.", response_format="verbose_json", ) print(transcript) These are minimal examples but I use similar code as part of my webapp (a Flask app). Now my challenge is that I'd like to: Use the ChatCompletion endpoint from Azure; but: Use the Transcribe endpoint from OpenAI (since it's not available on Azure) Is there any way to do so? I have a few options in mind: Changing the globals before every call. But I'm worried that this might cause side-effects I did not expect. Duplicating/Forking the library to have two versions run concurrently, one for each provider, but this also feels very messy. Use an alternative client for OpenAI's Whisper, if any. I'm not too comfortable with these and feel I may have missed a more obvious solution. Or of course… Alternatively, I could just use Whisper with a different provider (e.g. Replicate) or an alternative to Whisper altogether. 
See also Someone reported the issue (but without a solution) on GitHub (openai/openai-python): Using Azure and OpenAI at the same time #411 | Each API in the library accepts per-method overrides for the configuration options. If you want to access the Azure API for chat completions, you can explicitly pass in your Azure config. For the transcribe endpoint, you can explicitly pass the OpenAI config. For example: import os import openai api_response = openai.ChatCompletion.create( api_base=os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_KEY"), api_type="azure", api_version="2023-05-15", engine="gpt-35-turbo", messages=[ {"role": "user", "content": "Hello!"} ], max_tokens=16, temperature=0, top_p=1, frequency_penalty=0, presence_penalty=0, ) print(api_response) audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb") transcript = openai.Audio.transcribe( api_key=os.getenv("OPENAI_API_KEY"), model="whisper-1", file=audio_file, prompt="Part of a Bosnian language class.", response_format="verbose_json", ) print(transcript) | 3 | 6 |
76,715,807 | 2023-7-18 | https://stackoverflow.com/questions/76715807/seeking-optimization-for-computation-heavy-mathematical-function-in-numpy | I've recently been developing a Python script that implements a specific mathematical function, which is shown in the figure below where indexation is periodic and 1 <= j <= n. The function is relatively complex and is inspired by a previous question. The main purpose of the code is to evaluate the mathematical function for large arrays x (of size 5000 or more). Here is the current implementation of this function in Python: import numpy as np import sys, os def compute_y(x, L, v): n = len(x) y = np.zeros(n) for k in range(L+1): # Generate the indices for the window around each j, with periodic wrapping indices = np.arange(-k, k+1) # Compute the weights weights_1 = k - np.abs(indices) weights_2 = k + 1 - np.abs(indices) weights_d = np.ones(2*k+1) # For each j, take the elements in the window around j and multiply by the weights x_matrix = np.take(x, (np.arange(n)[:, None] + indices) % n, mode='wrap') exp_1 = np.exp(-np.sum(weights_1[None, :] * x_matrix, axis=1)/v) exp_2 = np.exp(-np.sum(weights_2[None, :] * x_matrix, axis=1)/v) denom = np.sum(weights_d[None, :] * x_matrix, axis=1) # Compute the weighted sum for each j and add to the total y += (exp_1 - exp_2)/denom return y # Test the function x = np.random.rand(5000) L = len(x)//2 v = 1.4 y = compute_y(x, L, v) The issue I'm currently facing is that this code, although functional and vectorized, is significantly slower than desired, particularly when applied to large arrays. I believe the primary source of this slowness is the for loop which deals with generating the indices, computing the weights, and summing and calculating the exponentials. I am therefore looking for guidance and suggestions on how to speed up this code, specifically for arrays of size 5000 or larger. I am particularly interested in methods that take advantage of vectorization through Numpy to expedite the computations. Any help would be much appreciated. Thank you in advance! | Convolutions The most important thing to notice, I think, is that this is more-or-less a convolution: exp_1 = np.exp(-np.sum(weights_1[None, :] * x_matrix, axis=1)/v) If you look at the values of x_matrix, they are the x values for x in position -1, 0, 1, ... around each x value. Then it's being multiplied by weights. That's a convolution. Why do we care? This is helpful because somebody has already made fast libraries for doing convolutions. The core idea here is to replace np.take() with three convolutions, and avoid creating the x_matrix array. import scipy.signal def compute_y_convolution(x, L, v): n = len(x) y = np.zeros(n) x3 = np.tile(x, 3) for k in range(L+1): # Generate the indices for the window around each j, with periodic wrapping indices = np.arange(-k, k+1) # Compute the weights weights_1 = k - np.abs(indices) weights_2 = k + 1 - np.abs(indices) weights_d = np.ones(2*k + 1) conv = scipy.signal.convolve(x3, weights_d, mode='same')[n:-n] exp_1 = np.exp(-(scipy.signal.convolve(x3, weights_1, mode='same')[n:-n])/v) exp_2 = np.exp(-(scipy.signal.convolve(x3, weights_2, mode='same')[n:-n])/v) # Compute the weighted sum for each j and add to the total y += (exp_1 - exp_2)/conv return y (Why does this create 3 copies of the x array before doing the convolution? At the edge of each array, you want it to wrap around and access elements on the other end of the array, but scipy.signal.convolve will just treat those parts of the convolution as zero.) 
This works pretty well, and it achieves a 141x speedup on a 5000 element array. (718 seconds vs. 5.06 seconds) Re-using convolutions Convolutions are pretty expensive, and we end up doing three of them every loop in the previous example. Can we do better? Let's print out the weights used by the convolution each loop: k=0 weights_1=array([0]) weights_2=array([1]) weights_d=array([1.]) k=1 weights_1=array([0, 1, 0]) weights_2=array([1, 2, 1]) weights_d=array([1., 1., 1.]) We can notice three things: The weights in the denominator are all one, which is a uniform filter response. Scipy has a specialized function which is faster for uniform filters. The value of weights_2 is equivalent to the value of weights_1 plus the uniform filter. The value of weights_1 is equivalent to the value of weights_2 in the previous loop. Using those observations, we can go from 3 to 1 convolutions. def compute_y_reuse(x, L, v): n = len(x) y = np.zeros(n) last_exp_2_raw = np.zeros(n) for k in range(L+1): uniform = scipy.ndimage.uniform_filter(x, size=2*k + 1, mode='wrap') * (2*k + 1) exp_1_raw = last_exp_2_raw exp_1 = np.exp(-exp_1_raw/v) exp_2_raw = exp_1_raw + uniform exp_2 = np.exp(-exp_2_raw/v) # Compute the weighted sum for each j and add to the total y += (exp_1 - exp_2)/uniform last_exp_2_raw = exp_2_raw return y This achieves a 1550x speedup versus the original on a 5000 element array. (718 seconds vs. 0.462 seconds) Removing last convolution I looked into this further, and tried to remove the last convolution. Essentially, the idea is that in the previous loop, we calculated the sum of the N closest elements, and the next loop, we calculate the sum of the N+2 closest elements, so we can just add up the 2 elements on the very edge. I tried to use np.roll() for this, but found it was slower than uniform_filter(), because it must copy the array. Eventually I found this thread, which let me figure out how to solve this. Also, since exp_1_raw is the same as the previous iteration's exp_2_raw, we can re-use the np.exp() call by saving the output from that iteration. def fast_roll_add(dst, src, shift): dst[shift:] += src[:-shift] dst[:shift] += src[-shift:] def compute_y_noconv(x, L, v): n = len(x) y = np.zeros(n) last_exp_2_raw = np.zeros(n) last_exp_2 = np.ones(n) uniform = x.copy() for k in range(L+1): if k != 0: fast_roll_add(uniform, x, k) fast_roll_add(uniform, x, -k) exp_1_raw = last_exp_2_raw exp_1 = last_exp_2 exp_2_raw = exp_1_raw + uniform / v exp_2 = np.exp(-exp_2_raw) # Compute the weighted sum for each j and add to the total y += (exp_1 - exp_2) / uniform last_exp_2_raw = exp_2_raw last_exp_2 = exp_2 return y This achieves a 3100x speedup versus the original on a 5000 element array. (718 seconds vs. 0.225 seconds) It also no longer requires scipy as a dependency. | 2 | 5 |
76,704,097 | 2023-7-17 | https://stackoverflow.com/questions/76704097/pytube-exceptions-regexmatcherror-get-transform-object-could-not-find-match-fo | While downloading a video using the PyTube library using this code: yt.streams.get_highest_resolution().download("PATH", f"PATH.mp4") I get an error: raise RegexMatchError(caller="get_transform_object", pattern=pattern) pytube.exceptions.RegexMatchError: get_transform_object: could not find match for var for={(.*?)}; I've seen a lot of fixes on Stack Overflow and in the Git repository of PyTube, but they seem to go into different parts of cypher.py. I would like to know how I could alternate get_transform_object class in cypher.py to match the RegEx check. | Here is a quick fix in the meantime as the library makes an update. -> In file .venv/lib/python3.10/site-packages/pytube/cipher.py I am using python 3.10 and my virtual environment is called .venv You just have to find the library pytube and go to the file cipher.py and edit its source code for now. -> Find the method get_transform_object and replace it as below def get_transform_object(js: str, var: str) -> List[str]: pattern = r"var %s={(.*?)};" % re.escape(var) logger.debug("getting transform object") regex = re.compile(pattern, flags=re.DOTALL) transform_match = regex.search(js) if not transform_match: # i commented out the line raising the error # raise RegexMatchError(caller="get_transform_object", pattern=pattern) return [] # Return an empty list if no match is found return transform_match.group(1).replace("\n", " ").split(", ") | 5 | 10 |
76,678,629 | 2023-7-13 | https://stackoverflow.com/questions/76678629/how-do-i-terminate-the-script-once-q-is-pressed | Below is a complete script I am trying to automate the process of pinging multiple routers and to do that every 2 hrs but I also want the ability to terminate it at any given time. def start(): for file_name in file_list: unrechable = [] rechable = [] print("Processing:"+file_name,end="\n") open_file(file_name, rechable, unrechable) if len(unrechable) > 0: print("These IP from " + file_name + " are Unrechable:") for i in unrechable: print(i,end="\n") print("") else: print("All IP's are Rechable from " + file_name) return ''' ''' def open_file(file_name, rechable, unrechable): df = pd.read_excel("D:/Network/"+file_name+".xlsx") col_IP = df.loc[:, "IP"].tolist() col_name = df.loc[:, "Location"].tolist() check(col_IP, col_name, rechable, unrechable) return ''' ''' def check(col_IP, col_name, rechable, unrechable): for ip in range(len(col_IP)): response = os.popen(f"ping {col_IP[ip]} ").read() if("Request timed out." or "unreachable") in response: print(response) unrechable.append(str(col_IP[ip] + " at " + col_name[ip])) else: print(response) rechable.append(str(col_IP[ip] + " at " + col_name[ip])) return ''' ''' def main1(): while(True): start() print("Goint to Sleep for 2Hrs") time.sleep(60) def qu(): while(True): if(keyboard.is_pressed('q')): print("exit") os._exit ''' ''' file_list = ["ISP" , "NVR"] if __name__ == '__main__': p2 = Thread(target=main1) p3 = Thread(target=qu) p2.start() p3.start() I have created 2 threads here one runs the main script the other looks for keyboard interrupt. But once q is pressed only one of the threads terminates. I later found out it is impossible to terminate both threads at one and I am completely lost at this point | I added a queue to a thread that listens to any keyboard interrupts and once the condition is met adds it to the queue. def qu1(): while(True): if keyboard.is_pressed("q"): qu_main.put("q") print("added q") break else: pass return Next I put my sleep timer in a loop. I wanted my script to execute in 2hr intervals therefore I added a 1s timer and looped it 7200 times. while(x < 7200): if(not qu_main.empty()): if(qu_main.get() == 'q'): print("ending the Cycle") return else: print("sleep") x += 1 time.sleep(1) else: print("sleep") x += 1 time.sleep(1) | 2 | 0 |
76,686,267 | 2023-7-14 | https://stackoverflow.com/questions/76686267/what-is-the-new-way-to-declare-mongo-objectid-with-pydantic-v2-0 | This week, I started working with MongoDB and Flask, so I found a helpful article on how to use them together by using PyDantic library to define MongoDB's models. However, the article is somewhat outdated, mostly could be updated to new PyDantic's version, but the problem is that the ObjectId is a third party field and that changed drastically between versions. The article defines the ObjectId using the following code: from bson import ObjectId from pydantic.json import ENCODERS_BY_TYPE class PydanticObjectId(ObjectId): """ Object Id field. Compatible with Pydantic. """ @classmethod def __get_validators__(cls): yield cls.validate #The validator is doing nothing @classmethod def validate(cls, v): return PydanticObjectId(v) #Here you modify the schema to tell it that it will work as an string @classmethod def __modify_schema__(cls, field_schema: dict): field_schema.update( type="string", examples=["5eb7cf5a86d9755df3a6c593", "5eb7cfb05e32e07750a1756a"], ) #Here you encode the ObjectId as a string ENCODERS_BY_TYPE[PydanticObjectId] = str In the past, this code worked well. However, I recently discovered that the latest version of PyDantic has a more complex way of defining custom data types. I've tried following the Pydantic documentation, but I'm still confused and haven't been able to implement it successfully. I've tried the implementation to do the implementation for third party types, but it's not working. It's almost the same code of the documentation, but changing ints for strings, and the third party callabels for ObjectId. Again, I'm not sure why it's not working. from bson import ObjectId from pydantic_core import core_schema from typing import Annotated, Any from pydantic import BaseModel, GetJsonSchemaHandler, ValidationError from pydantic.json_schema import JsonSchemaValue class PydanticObjectId(ObjectId): """ Object Id field. Compatible with Pydantic. """ x: str def __init__(self): self.x = '' class _ObjectIdPydanticAnnotation: @classmethod def __get_pydantic_core_schema__( cls, _source_type: Any, _handler: ObjectId[[Any], core_schema.CoreSchema], ) -> core_schema.CoreSchema: @classmethod def validate_object_id(cls, v: ObjectId) -> PydanticObjectId: if not ObjectId.is_valid(v): raise ValueError("Invalid objectid") return PydanticObjectId(v) from_str_schema = core_schema.chain_schema( [ core_schema.str_schema(), core_schema.no_info_plain_validator_function(validate_object_id), ] ) return core_schema.json_or_python_schema( json_schema=from_str_schema, python_schema=core_schema.union_schema( [ # check if it's an instance first before doing any further work core_schema.is_instance_schema(PydanticObjectId), from_str_schema, ] ), serialization=core_schema.plain_serializer_function_ser_schema( lambda instance: instance.x ), ) @classmethod def __get_pydantic_json_schema__( cls, _core_schema: core_schema.CoreSchema, handler: GetJsonSchemaHandler ) -> JsonSchemaValue: # Use the same schema that would be used for `int` return handler(core_schema.int_schema()) I've searched for answers on StackOverflow, but all the answers I've found refer to older versions of Pydantic and use code that's similar to what I pasted above. If anyone knows of an alternative solution or can provide clear guidance on how to define a custom data type in the latest version of PyDantic, I would greatly appreciate it. 
Update A constant error that I'm getting because I'm not creating right the ObjectId type is this Unable to generate pydantic-core schema for <class 'bson.objectid.ObjectId'>. Set arbitrary_types_allowed=True in the model_config to ignore this error or implement __get_pydantic_core_schema__ on your type to fully support it. If you got this error by calling handler() within __get_pydantic_core_schema__ then you likely need to call handler.generate_schema(<some type>) since we do not call __get_pydantic_core_schema__ on <some type> otherwise to avoid infinite recursion. For further information visit https://errors.pydantic.dev/2.0.2/u/schema-for-unknown-type And the answer is to declare it as an unknown type, but I don't want it, I want to declare it as an ObjectId. | Generally best to ask questions like this on pydantic's GitHub discussions. Your solution is pretty close, I think you just have the wrong core schema. I think our documentation on using custom types via Annotated cover this fairly well, but just to help you, here is a working implementation: from typing import Annotated, Any from bson import ObjectId from pydantic_core import core_schema from pydantic import BaseModel from pydantic.json_schema import JsonSchemaValue class ObjectIdPydanticAnnotation: @classmethod def validate_object_id(cls, v: Any, handler) -> ObjectId: if isinstance(v, ObjectId): return v s = handler(v) if ObjectId.is_valid(s): return ObjectId(s) else: raise ValueError("Invalid ObjectId") @classmethod def __get_pydantic_core_schema__(cls, source_type, _handler) -> core_schema.CoreSchema: assert source_type is ObjectId return core_schema.no_info_wrap_validator_function( cls.validate_object_id, core_schema.str_schema(), serialization=core_schema.to_string_ser_schema(), ) @classmethod def __get_pydantic_json_schema__(cls, _core_schema, handler) -> JsonSchemaValue: return handler(core_schema.str_schema()) class Model(BaseModel): id: Annotated[ObjectId, ObjectIdPydanticAnnotation] print(Model(id='64b7abdecf2160b649ab6085')) print(Model(id='64b7abdecf2160b649ab6085').model_dump_json()) print(Model(id=ObjectId())) print(Model.model_json_schema()) print(Model(id='foobar')) # will error | 17 | 12 |
76,701,617 | 2023-7-17 | https://stackoverflow.com/questions/76701617/the-following-arguments-are-not-supported-with-the-native-keras-format-opti | I am building a Keras deep learning Algorithm on dogs vs cats dataset. I am able to run my code in colab. But in Jupyter lab I am getting this error. The following argument(s) are not supported with the native Keras format: ['options'] Below is the code: import os import shutil import pathlib original_dir = pathlib.Path("/content/drive/MyDrive/Reva/dogs_vs_cats/train/train") new_base_dir = pathlib.Path("/content/drive/MyDrive/Reva/dogs_vs_cats/") def make_subset(subset_name, start_index, end_index): for category in ("cat", "dog"): dir = new_base_dir / subset_name / category # Check if the folder exists and delete it if it does if os.path.exists(dir): shutil.rmtree(dir) # Create the folder again os.makedirs(dir) fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)] for fname in fnames: shutil.copyfile(src=original_dir / fname, dst=dir / fname) make_subset("train", start_index=0, end_index=1000) make_subset("validation", start_index=1000, end_index=1500) make_subset("test", start_index=1500, end_index=2500) from tensorflow.keras.utils import image_dataset_from_directory train_dataset = image_dataset_from_directory( new_base_dir / "train", image_size=(180, 180), batch_size=32) validation_dataset = image_dataset_from_directory( new_base_dir / "validation", image_size=(180, 180), batch_size=32) test_dataset = image_dataset_from_directory( new_base_dir / "test", image_size=(180, 180), batch_size=32) from tensorflow import keras from tensorflow.keras import layers inputs = keras.Input(shape=(180, 180, 3)) x = layers.Rescaling(1./255)(inputs) x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x) x = layers.MaxPooling2D(pool_size=2)(x) x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x) x = layers.MaxPooling2D(pool_size=2)(x) x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x) x = layers.MaxPooling2D(pool_size=2)(x) x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x) x = layers.MaxPooling2D(pool_size=2)(x) x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x) x = layers.Flatten()(x) outputs = layers.Dense(1, activation="sigmoid")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(loss="binary_crossentropy", optimizer="rmsprop", metrics=["accuracy"]) callbacks = [ keras.callbacks.ModelCheckpoint( filepath="convnet_from_scratch.keras", save_best_only=True, monitor="val_loss") ] history = model.fit( train_dataset, epochs=30, validation_data=validation_dataset, callbacks=callbacks) I need to know how to resolve the above code. Any suggestions to improve the time required to run the code is also welcome. | As I mentioned in the comments, there seems to be a weird behaviour related to keras saving and also versioning of TF/Keras. I could replicate your error when running TF/Keras with version 2.13 (newest right now) on colab. Standard install on colab is 2.12, where the error doesn't come up. So one solution would be to downgrade TF/Keras to 2.12.x, or change keras.callbacks.ModelCheckpoint( filepath="convnet_from_scratch.keras", ..) to keras.callbacks.ModelCheckpoint( filepath="convnet_from_scratch.x", ..) where x stands for whatever you fancy (NOT "keras") to not save in the .keras format. | 9 | 15 |
76,717,109 | 2023-7-18 | https://stackoverflow.com/questions/76717109/how-to-optimize-a-for-loop-in-python-that-references-values-in-a-data-frame-that | The code below first updates the signal column with a 1 or -1 if certain conditions are met, otherwise the signal column is set to 0. In the for-loop, the signal column gets updated to the previous signal value if certain conditions are met. I would like to replace the for-loop with a faster solution that is still able to update the signal column with the previous signal value considering the previous signal value could be updated through the process. df['signal'] = np.where(np.logical_and(df['ind_1'] == 1, df['ind_2'] <= threshold), 1, 0) df['signal'] = np.where(np.logical_and(df['ind_1'] == -1, df['ind_2'] >= 100 - threshold), -1, df['signal']) for i in range(1, len(df)): if df.at[i, 'signal'] == 0 and df.at[i - 1, 'signal'] != 0: if df.at[i, 'ind_1'] == 0: df.at[i, 'signal'] = df.at[i - 1, 'signal'] | The fastest way to manipulate data in a dataframe is through vectorization. Let me explain using below code for 1,000,000 records: import pandas as pd import numpy as np from time import time df = pd.DataFrame({ 'ind_1': np.random.randint(-1, 2, size=(1000000, )), 'ind_2': np.random.randint(0, 101, size=(1000000, )) }) threshold = 40 df['signal'] = np.where(np.logical_and(df['ind_1'] == 1, df['ind_2'] <= threshold), 1, 0) df['signal'] = np.where(np.logical_and(df['ind_1'] == -1, df['ind_2'] >= 100 - threshold), -1, df['signal']) df2 = df.copy() #duplicate dataframe for comparison I'm using df to measure the time taken for the original solution. And the duplicated df2 to measure time taken for my proposed solution. In my proposed solution, I'm using df2['signal_shift1'] = df2['signal'].shift(1) to move the signal column down 1 row, so that the record can be compared across the same row. Then your original conditions: A: df.at[i, 'signal'] == 0 and B: df.at[i - 1, 'signal'] != 0 and C: df.at[i, 'ind_1'] == 0 becomes this, comparable on the same row: A&C: df2['signal'].abs() + df2['ind_1'].abs() == 0 and B: df2['signal_shift1'] != 0 Note that I've combined conditions A==0 and C==0 to become abs(A)+abs(C)==0 #original solution (using df) ti = time() for i in range(1, len(df)): if df.at[i, 'signal'] == 0 and df.at[i - 1, 'signal'] != 0: if df.at[i, 'ind_1'] == 0: df.at[i, 'signal'] = df.at[i - 1, 'signal'] print('Time taken original solution: {:.3f} sec'.format(time() - ti)) #proposed solution (using df2) ti = time() df2['signal_compare'] = 0 #initialize break condition while not df2['signal_compare'].equals(df2['signal']): df2['signal_compare'] = df2['signal'].copy() #condition to break while-loop df2['signal_shift1'] = df2['signal'].shift(1) df2.at[0, 'signal_shift1'] = df2.at[0, 'signal'] #to remove null value after .shift(1) df2['signal'] = np.where(np.logical_and(df2['signal'].abs() + df2['ind_1'].abs() == 0, df2['signal_shift1'] != 0), df2['signal_shift1'], df2['signal']).astype('int') print('Time taken proposed solution: {:.3f} sec'.format(time() - ti)) #check if original solution and proposed solution are the same print('Output columns are the same:', df['signal'].equals(df2['signal'])) print(df2) The output shows that the proposed solution is completed within a shorter time, while having the same column signal results. It's very obvious that vectorization is much faster than for-loop iteration! 
Time taken original solution: 6.501 sec Time taken proposed solution: 0.202 sec Output columns are the same: True ind_1 ind_2 signal signal_compare signal_shift1 0 -1 80 -1 -1 -1.0 1 0 14 -1 -1 -1.0 2 0 14 -1 -1 -1.0 3 1 46 0 0 -1.0 4 0 23 0 0 0.0 ... ... ... ... ... ... 999995 0 70 0 0 0.0 999996 -1 88 -1 -1 0.0 999997 0 73 -1 -1 -1.0 999998 -1 39 0 0 -1.0 999999 -1 83 -1 -1 0.0 [1000000 rows x 5 columns] | 3 | 1 |
76,716,898 | 2023-7-18 | https://stackoverflow.com/questions/76716898/is-python-dict-insertion-order-preserved-after-deleting-elements | This StackOverflow answer says python dicts keep insertion order of keys as of python 3.7. The comments to that answer discuss the implementation details of what happens when a key is deleted. I'd like to know: what does the language spec guarantee about key order in the face of deletes (preferably with a link)? Based on the discussion, I bet it guarantees insertion order of the undeleted elements, but I've been unable to find confirmation. | The language guarantees that undeleted elements will remain in the same order after a key is deleted. The Python language reference states (emphasis mine): Dictionaries preserve insertion order, meaning that keys will be produced in the same order they were added sequentially over the dictionary. Replacing an existing key does not change the order, however removing a key and re-inserting it will add it to the end instead of keeping its old place. The bolded text would not be true if the order of keys changed after a deletion. | 4 | 3 |
76,713,315 | 2023-7-18 | https://stackoverflow.com/questions/76713315/passing-nested-tuple-to-values-in-psycopg3 | I'm trying to updates some psycopg2 code to psycopg3. I'm trying to do a selection based on a set of values passed from Python (joining with an existing table). Without the join, a simplified example is: with connection.cursor() as cur: sql = "WITH sources (a,b,c) AS (VALUES %s) SELECT a,b+c FROM sources;" data = (('hi',2,0), ('ho',5,2)) cur.execute(sql, (data,) ) print(cur.fetchone()); I get an error ProgrammingError: syntax error at or near "'("(hi,2,0)","(ho,5,2)")'" LINE 1: WITH sources (a,b,c) AS (VALUES '("(hi,2,0)","(ho,5,2)")') S... The psycopg2 code used extras.execute_values instead, which is not available in psycopg3. Is there a way to pass the values for an intermediate table using psycopg3? | One way to do it: import psycopg con = psycopg.connect("dbname=test host=localhost user=postgres") with con.cursor() as cur: rs = [] sql = "SELECT %s, %s + %s" data = [('hi',2,0), ('ho',5,2)] cur.executemany(sql, data, returning=True ) while True: rs.append(cur.fetchone()) if not cur.nextset(): break print(rs) [('hi', 2), ('ho', 7)] From here psycopg cursor classes: executemany( ... ) Note Using the usual fetchone(), fetchall(), you will be able to read the records returned by the first query executed only. In order to read the results of the following queries you can call nextset() to move to the following result set. A typical use case for executemany(returning=True) might be to insert a bunch of records and to retrieve the primary keys inserted, taken from a PostgreSQL sequence. In order to do so, you may execute a query such as INSERT INTO table VALUES (...) RETURNING id. Because every INSERT is guaranteed to insert exactly a single record, you can obtain the list of the new ids using a pattern such as: cur.executemany(query, records) ids = [] while True: ids.append(cur.fetchone()[0]) if not cur.nextset(): break Warning More explicitly, fetchall() alone will not return all the values returned! You must iterate on the results using nextset(). UPDATE data_combined = [y for x in data for y in x] data_combined ['hi', 2, 0, 'ho', 5, 2] qry = sql.Composed( [sql.SQL("select a, b + c from ( "), sql.SQL('VALUES '), sql.SQL(",").join( sql.SQL("({})").format(sql.SQL(',').join(sql.Placeholder() * len(data[0]))) * len(data)), sql.SQL(") as t(a, b, c)")]) print(qry.as_string(con)) select a, b + c from ( VALUES (%s,%s,%s),(%s,%s,%s)) as t(a, b, c) cur.execute(qry, data_combined) cur.fetchall() [('hi', 2), ('ho', 7)] Used sql.Composed to build up a query with a variable number of VALUES and placeholders. Combined the tuple of tuples into a flat list and passed it to the query. | 3 | 2 |
76,713,531 | 2023-7-18 | https://stackoverflow.com/questions/76713531/create-a-list-in-a-list-from-a-separate-list-which-contains-the-intervals-as-an | As the title suggests, I would like to create a list in a list from a second list. For this I prepared the following example: list1 = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] list2 = [3, 4, 2, 6] solution = [[1, 2, 3], [4, 5, 6, 7], [8, 9], [10, 11, 12, 13, 14, 15]] I have already found a solution, but I find it very cumbersome and I am sure there is a faster and easier solution to my problem. list1 = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] list2 = [3, 4, 2, 6] j = 0 solution= [] for i in list2: solution.append(list1[j:i+j]) j += i Thanks a lot already guys | One possible solution is using itertools.islice: from itertools import islice list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] list2 = [3, 4, 2, 6] out = iter(list1) out = [list(islice(out, v)) for v in list2] print(out) Prints: [[1, 2, 3], [4, 5, 6, 7], [8, 9], [10, 11, 12, 13, 14, 15]] Another using := operator: list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] list2 = [3, 4, 2, 6] out = 0 out = [list1[out:(out:=out+a)] for a in list2] print(out) | 2 | 6 |
76,712,042 | 2023-7-18 | https://stackoverflow.com/questions/76712042/how-to-properly-execute-a-convert-command-using-python | I'm new to Python, so, I'm sorry if my question may seems dumb. I'm trying to convert GIF files to APNG using this tool. As it says in the bottom of this page, to convert GIF to APNG you must execute the following command: $ vertopal convert GIF_INPUT_FILE --to apng It works fine when I run the command in the terminal, but I cannot implement it in my Python code. Here is my code: import subprocess discord_sticker_converter = subprocess.call("vertopal convert funny cat.gif --to apng") print(discord_sticker_converter) This is what is printed in the output: usage: vertopal [options] <command> [<args>] vertopal: error: unrecognized arguments: cat.gif 2 How should I call the convert command properly? | Let's talk about your command: "vertopal convert funny cat.gif --to apng" The name "funny cat.gif" has space in it. subprocess calls shlex.split() to split this string into a list of string tokens and the result is: >>> shlex.split("vertopal convert funny cat.gif --to apng") ['vertopal', 'convert', 'funny', 'cat.gif', '--to', 'apng'] As you can see, the file name is broken into 2 tokens: "funny" and "cat.gif". That is why it failed. To fix this situation, you need to properly quote your file name: >>> shlex.split("vertopal convert 'funny cat.gif' --to apng") ['vertopal', 'convert', 'funny cat.gif', '--to', 'apng'] The function subprocess.call() is flexible: it can take a single string and try to split it, or it can take a list of strings. That means both of these will work: subprocess.call("vertopal convert 'funny cat.gif' --to apng") subprocess.call(['vertopal', 'convert', 'funny cat.gif', '--to', 'apng']) The first form is more compact and is what you would do in the terminal. The second form is more explicit. Which form to use is your choice. As for me, I will choose the second form because the Zen of python said Explicit is better than implicit. | 3 | 3 |
76,712,736 | 2023-7-18 | https://stackoverflow.com/questions/76712736/why-cant-i-name-a-module-array | Given the following directory structure: app/ └── array/ ├── test/ │ ├── __init__.py │ └── test_array.py ├── __init__.py └── functions.py When I run pytest from the app directory, I get the following error: ERROR collecting array/test/test_array.py _______________________________________ ImportError while importing test module '/path/to/app/array/test/test_array.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) E ModuleNotFoundError: No module named 'array.test'; 'array' is not a package Simply renaming the module to arrays works, but I don't understand what's wrong with array. | Because it collides with the stdlib module of the same name. | 3 | 4 |
76,710,726 | 2023-7-18 | https://stackoverflow.com/questions/76710726/cannot-run-jupyter-notebook-on-ubuntu-22-04 | I have Ubuntu 22.04 with python 3.10. When I try to open jupyter notebook from terminal this error occurrs: Traceback (most recent call last): File "/home/anaconda3/lib/python3.10/site-packages/notebook/services/sessions/sessionmanager.py", line 9, in <module> import sqlite3 File "/home/anaconda3/lib/python3.10/sqlite3/__init__.py", line 57, in <module> from sqlite3.dbapi2 import * File "/home/anaconda3/lib/python3.10/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: /home/anaconda3/lib/python3.10/lib-dynload/_sqlite3.cpython-310-x86_64-linux-gnu.so: undefined symbol: sqlite3_trace_v2 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/anaconda3/bin/jupyter-notebook", line 5, in <module> from notebook.notebookapp import main File "/home/anaconda3/lib/python3.10/site-packages/notebook/notebookapp.py", line 83, in <module> from .services.sessions.sessionmanager import SessionManager File "/home/anaconda3/lib/python3.10/site-packages/notebook/services/sessions/sessionmanager.py", line 12, in <module> from pysqlite2 import dbapi2 as sqlite3 ModuleNotFoundError: No module named 'pysqlite2' When I checked sqlite3, it is intalled in home/anaconda3/lib/python3.10/sqlite3/ and it contains dbapi2.py. Should I somehow reorganize the folders? PS: When I tried pip install pysqlite2 another error occurred: ERROR: Could not find a version that satisfies the requirement pysqlite2 (from versions: none) ERROR: No matching distribution found for pysqlite2 | It looks like you performed a manual installation of Jupyter (and required dependencies) directly in your home directory. I'm assuming /home/anaconda3/ is your home directory, and anaconda3 is just a badly chosen name. Rename the lib subdirectory: mv lib lib_aside You might want to do something similar for include/ or bin/ directories directly in your home as well (but check first what is in bin/ ; if that is all Python/Jupyter related, or that there's software you actually need that is not default installable). Comment out any PYTHONPATH setting in your .bashrc that points to that lib/ directory (and subdirectories). Comment out any PATH alteration to the bin/ in your home directory as well. If you don't need it, comment-out the section with Conda stuff as well. Install Jupyter from your package manager. E.g. sudo apt install python3-notebook Start a new terminal, and test: which python which jupyter These should point to /usr/bin Start your notebook: jupyter notebook | 3 | 1 |
76,710,614 | 2023-7-18 | https://stackoverflow.com/questions/76710614/dag-import-error-attributeerror-taskdecorator-object-has-no-attribute-upd | I'm facing an issue which my dag cannot be imported, but cannot figure out why: from airflow.sensors.sql import SqlSensor import pendulum from airflow.decorators import task,dag @dag( dag_id = "database_monitor", schedule_interval = '*/10 * * * *', start_date=pendulum.datetime(2023, 7, 16, 21,0,tz="UTC"), catchup=False,) def Pipeline(): check_db_alive = SqlSensor( task_id="check_db_alive", conn_id="evergreen", sql="SELECT pg_is_in_recovery()", success= lambda x: x == False, poke_interval= 60, #timeout = 60 * 2, mode = "reschedule", ) @task() def alert_of_db_inrecovery(): import requests # result = f"Former primary instance is in recovery, task_instance_key_str: {kwargs['task_instance_key_str']}" data = {"@key":"kkll", "@version" : "alertapi-0.1", "@type":"ALERT", "object" : "Testobject", "severity" : "MINOR", "text" : str("Former primary instance is in recovery") } requests.post('https://httpevents.systems/api/sendAlert',verify=False,data=data) check_db_alive >> alert_of_db_inrecovery dag = Pipeline() I get this error: AttributeError: '_TaskDecorator' object has no attribute 'update_relative' | You need to call the Python task flow operator i.e change check_db_alive >> alert_of_db_inrecovery to check_db_alive >> alert_of_db_inrecovery() check correct code from airflow.sensors.sql import SqlSensor import pendulum from airflow.decorators import task, dag @dag( dag_id="database_monitor", schedule_interval='*/10 * * * *', start_date=pendulum.datetime(2023, 7, 16, 21, 0, tz="UTC"), catchup=False, ) def Pipeline(): check_db_alive = SqlSensor( task_id="check_db_alive", conn_id="evergreen", sql="SELECT pg_is_in_recovery()", success=lambda x: x == False, poke_interval=60, # timeout = 60 * 2, mode="reschedule", ) @task def alert_of_db_inrecovery(): import requests # result = f"Former primary instance is in recovery, task_instance_key_str: {kwargs['task_instance_key_str']}" data = {"@key": "kkll", "@version": "alertapi-0.1", "@type": "ALERT", "object": "Testobject", "severity": "MINOR", "text": str("Former primary instance is in recovery") } requests.post('https://httpevents.systems/api/sendAlert', verify=False, data=data) check_db_alive >> alert_of_db_inrecovery() dag = Pipeline() Ref: https://airflow.apache.org/docs/apache-airflow/stable/tutorial/taskflow.html | 6 | 15 |
76,700,313 | 2023-7-16 | https://stackoverflow.com/questions/76700313/nicegui-tables-how-to-use-selection-and-click-events | I want to either click or select a single row in a table, retrieve the corresponding row's data, and then process it further with handlers. But the documentation is slim at the moment, and I have never used Quasar or Vue.js before. Here's my attempt: from nicegui import ui columns = [ {'name': 'title', 'label': 'Title', 'field':'title', 'required': True, 'align': 'left'}, {'name': 'author', 'label': 'Author', 'field':'author', 'align': 'left'}, {'name': 'year', 'label': 'Year', 'field':'year', 'sortable': True, 'align': 'left'}, ] rows = [ {'title': 'Some title A', 'url' : 'https://example.com/search?q=A', 'author': 'Alice', 'year': 2023}, {'title': 'Some title B', 'url' : 'https://example.com/search?q=B', 'author': 'Bob', 'year': 2022}, {'title': 'Some title C', 'url' : 'https://example.com/search?q=C', 'author': 'Carol', 'year': 2021} ] table = ui.table(title='Example table', columns=columns, rows=rows, row_key='name', pagination=5, selection='single', on_select=lambda e: ui.notify(e.selection)) table.add_slot('body', r''' <q-tr :props="props" @row-click="rowClick"> <q-td> <a :href="props.row.url">{{ props.row.title }}</a> </q-td> <q-td>{{ props.row.author }}</q-td> <q-td>{{ props.row.year }}</q-td> </q-tr> ''') table.on('rowClick', lambda *args: print(args), [None, None, None]) # c.f. nicegui issues #664, #672, #1095 ui.run() The code above does not work. There are no exceptions, but nothing is happening if I click on a row (and the check boxes to select a row are not shown; I might have disabled them inadvertently using my template). If i I had to choose, I would prefer to handle a click event. | Here is a working example table: table = ui.table(title='Example table', columns=columns, rows=rows, row_key='title', pagination=5, selection='single') table.add_slot('body-cell-title', r'<td><a :href="props.row.url">{{ props.row.title }}</a></td>') table.on('rowClick', lambda e: print(e.args)) It looks like your body template broke the row-click event, but I'm not 100% sure. Anyway, using the "body-cell-title" slot, you don't need to write the template for the whole body, but for the "title" cell only. And note that, when using features like filters and selection, the row_key should match a column name with unique values. Here I'm using "title". | 3 | 3 |
76,703,878 | 2023-7-17 | https://stackoverflow.com/questions/76703878/swap-columns-from-multidimensional-array | I have this array: my_array = np.arange(1216800).reshape(2, 100, 78, 78) The shape now is: (2, 100, 78, 78) and I want to reorder to : (100, 78, 78, 2). I tried something like: my_array[:, :, 2, :], my_array[:, :, :, 3] = my_array[:, :, :, 3], my_array[:, :, 2, :].copy() to swap first those columns, but I am receiving the same array. I saw this but whatever I try, I am having the same array. | Here is the code that you want: import numpy as np my_array = np.arange(1216800).reshape(2, 100, 78, 78) reordered_array = np.transpose(my_array, (1, 2, 3, 0)) print(reordered_array.shape) # Output: (100, 78, 78, 2) | 2 | 3 |
76,705,388 | 2023-7-17 | https://stackoverflow.com/questions/76705388/python-telegram-bot-send-message-with-buttons | i'd like to send a message through telegram bot with buttons. Once button is pressed i need to know which button was that and then change the text that came with the button. I've almost found out how to do things separately but i can't unite them. To send a message with buttons i need either to write /start or have to press a menu button. I need those buttons to appear after the message without user having to press anything. This is a script that i've found in the official description with added functions to send a message #!/usr/bin/env python # pylint: disable=unused-argument, wrong-import-position # This program is dedicated to the public domain under the CC0 license. """ Basic example for a bot that uses inline keyboards. For an in-depth explanation, check out https://github.com/python-telegram-bot/python-telegram-bot/wiki/InlineKeyboard-Example. """ import logging import asyncio from telegram import __version__ as TG_VER try: from telegram import __version_info__ except ImportError: __version_info__ = (0, 0, 0, 0, 0) # type: ignore[assignment] if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://docs.python-telegram-bot.org/en/v{TG_VER}/examples.html" ) from telegram import Bot, InlineKeyboardButton, InlineKeyboardMarkup, Update from telegram.ext import ApplicationBuilder, Application, CallbackQueryHandler, CommandHandler, ContextTypes # Enable logging logging.basicConfig( format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", level=logging.INFO ) # set higher logging level for httpx to avoid all GET and POST requests being logged logging.getLogger("httpx").setLevel(logging.WARNING) logger = logging.getLogger(__name__) async def start(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: """Sends a message with three inline buttons attached.""" keyboard = [ [ InlineKeyboardButton("Option 1", callback_data="1"), InlineKeyboardButton("Option 2", callback_data="2"), ], [InlineKeyboardButton("Option 3", callback_data="3")], ] reply_markup = InlineKeyboardMarkup(keyboard) await update.message.reply_text("Please choose:", reply_markup=reply_markup) async def button(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: """Parses the CallbackQuery and updates the message text.""" query = update.callback_query # CallbackQueries need to be answered, even if no notification to the user is needed # Some clients may have trouble otherwise. See https://core.telegram.org/bots/api#callbackquery await query.answer() await query.edit_message_text(text=f"Selected option: {query.data}") async def help_command(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: """Displays info on how to use the bot.""" await update.message.reply_text("Use /start to test this bot.") # using telegram.Bot async def send(chat, msg): await Bot('<TOKEN>').sendMessage(chat_id=chat, text=msg) # using ApplicationBuilder async def send_more(chat, msg): application = ApplicationBuilder().token('<TOKEN>').build() await application.bot.sendMessage(chat_id=chat, text=msg) def main() -> None: """Run the bot.""" # Create the Application and pass it your bot's token. 
application = Application.builder().token("<TOKEN>").build() # asyncio.run(send_more('<CHAT_ID>', 'Hello there!')) application.add_handler(CommandHandler("start", start)) application.add_handler(CallbackQueryHandler(button)) application.add_handler(CommandHandler("help", help_command)) asyncio.run(send('<CHAT_ID>', 'Hello there!')) # Run the bot until the user presses Ctrl-C application.run_polling(allowed_updates=Update.ALL_TYPES) if __name__ == "__main__": main() <<< EDIT >>> The new code: #!/usr/bin/env python # pylint: disable=unused-argument, wrong-import-position # This program is dedicated to the public domain under the CC0 license. """ Basic example for a bot that uses inline keyboards. For an in-depth explanation, check out https://github.com/python-telegram-bot/python-telegram-bot/wiki/InlineKeyboard-Example. """ import logging from telegram import __version__ as TG_VER try: from telegram import __version_info__ except ImportError: __version_info__ = (0, 0, 0, 0, 0) # type: ignore[assignment] if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://docs.python-telegram-bot.org/en/v{TG_VER}/examples.html" ) from telegram import Bot, InlineKeyboardButton, InlineKeyboardMarkup, Update from telegram.ext import ApplicationBuilder, Application, CallbackQueryHandler, CommandHandler, ContextTypes # Enable logging logging.basicConfig( format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", level=logging.INFO ) # set higher logging level for httpx to avoid all GET and POST requests being logged logging.getLogger("httpx").setLevel(logging.WARNING) logger = logging.getLogger(__name__) prolong_1y = 0 prolong_2y = 0 noprolong = 0 async def button(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None: """Parses the CallbackQuery and updates the message text.""" query = update.callback_query # CallbackQueries need to be answered, even if no notification to the user is needed # Some clients may have trouble otherwise. See https://core.telegram.org/bots/api#callbackquery await query.answer() global prolong_1y, prolong_2y, noprolong if "prolong_1y" in query: prolong_1y = 1 prolong_2y = 0 noprolong = 0 elif "prolong_2y" in query: prolong_1y = 0 prolong_2y = 1 noprolong = 0 elif "noprolong" in query: prolong_1y = 0 prolong_2y = 0 noprolong = 1 await query.edit_message_text(text=f"Selected option: {query.data}") print(f"prolong_1y => {prolong_1y}, prolong_2y => {prolong_2y} and noprolong => {noprolong}") # using telegram.Bot async def send(chat, msg, reply_markup): await Bot('<TOKEN>').sendMessage(chat_id=chat, text=msg, reply_markup=reply_markup) async def post_init(application: Application) -> None: """Sends a message with three inline buttons attached.""" keyboard = [ [ InlineKeyboardButton("Option 1", callback_data="prolong_1y"), InlineKeyboardButton("Option 2", callback_data="prolong_2y"), ], [InlineKeyboardButton("Option 3", callback_data="noprolong")], ] reply_markup = InlineKeyboardMarkup(keyboard) await send('<CHAT_ID>', 'Hello there!', reply_markup) def main() -> None: """Run the bot.""" # Create the Application and pass it your bot's token. 
application = Application.builder().token("<TOKEN>").post_init(post_init).build() application.add_handler(CallbackQueryHandler(button)) # Run the bot until the user presses Ctrl-C application.run_polling(allowed_updates=Update.ALL_TYPES) if __name__ == "__main__": main() The error that i receive after pressing on either Choice button: 2023-07-19 13:50:13,278 - apscheduler.scheduler - INFO - Scheduler started 2023-07-19 13:50:13,278 - telegram.ext.Application - INFO - Application started 2023-07-19 13:50:16,557 - telegram.ext.Application - ERROR - No error handlers are registered, logging exception. Traceback (most recent call last): File "/home/admin2/.local/lib/python3.11/site-packages/telegram/ext/_application.py", line 1173, in process_update await coroutine File "/home/admin2/.local/lib/python3.11/site-packages/telegram/ext/_basehandler.py", line 141, in handle_update return await self.callback(update, context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/media/smb_general/ΠΠ΄ΠΌΡΠ½ΡΡΡΡΡΠ²Π°Π½Π½Ρ/Source/ElReports/elreports/inlinekeyboard.py", line 50, in button if "prolong_1y" in query: ^^^^^^^^^^^^^^^^^^^^^ File "/home/admin2/.local/lib/python3.11/site-packages/telegram/_telegramobject.py", line 247, in __getitem__ return getattr(self, item) ^^^^^^^^^^^^^^^^^^^ TypeError: attribute name must be string, not 'int' <<< EDIT 2 >>> Silly me, changed query to query.data now everything works! | If you want to attach buttons to the message sent in send, you can just use the corresponding parameter of send_message for that. Not that message.reply_text as used in start is just a shortcut for that method. Moreover, you don't need to manually initialize a bot in send and to manually run that method via asyncio.run. I recommend to instead make use of post_init which allows to run a custom logic as part of the startup logic of application.run_polling. Disclaimer: I'm currently the maintainer of python-telegram-bot. | 2 | 5 |
76,702,175 | 2023-7-17 | https://stackoverflow.com/questions/76702175/using-scipy-interpolate-in-python-i-want-to-erase-certain-values-and-interpolat | In my code, I have certain values like my_list = [725.998474, 0.0, 0.0, 0.0, 0.0, 789.507934, 792.585388, 801.612916, 799.38916, 809.280518, 809.186036, 811.899414, .... , 412.314528] In my code, I want to interpolate the points where the list is 0.0 because they are outliers. But it is not working as interpolation only works for empty values. How can I erase those 0.0 values and do interpolation? | Using the data you provided, we can create a "cleaned" list of the x and y data using numpy. You said that the values equal to 0 are the outliers, but checking equality with floating point numbers can lead to issues, so I used np.isclose. With the outliers removed, you can interpolate the cleaned data. import numpy as np from scipy.interpolate import make_interp_spline import matplotlib.pyplot as plt plt.close("all") y = np.array([725.998474, 0.0, 0.0, 0.0, 0.0, 789.507934, 792.585388, 801.612916, 799.38916, 809.280518, 809.186036, 811.899414, 412.314528]) x = np.arange(len(y)) outliers = np.isclose(y, 0) y_clean = y[~outliers] x_clean = x[~outliers] spline = make_interp_spline(x_clean, y_clean) y_interped = spline(x) fig, ax = plt.subplots() ax.plot(x_clean, y_clean, ".", label="cleaned", zorder=3) ax.plot(x, y_interped, label="interpolated") ax.legend() ax.set_xlabel("x") ax.set_ylabel("y") fig.show() If, as @Reinderien suggested, your actual condition is that values below 100, for example, are outliers, then you could change it to that condition (i.e. outliers = y < 100). | 2 | 5 |
76,705,656 | 2023-7-17 | https://stackoverflow.com/questions/76705656/python-refer-to-parent-class-from-static-class | Is it possible to access the parent class name within a static class? For example, how to I print the parent class name in the bar method below? class Static: def bar(self): parent_name = ??? print(parent_name) class A: object = Static() def foo(self): A.object.bar() class B: object = Static() def foo(self): B.object.bar() A().foo() B().foo() | You're looking for the __set_name__ method. It is called on every static attribute of a class when the class is finished constructing. One of the parameters is the "owner" class (A or B), so all you need is to store that reference. Note that this requires a separate Static instance for each parent class, but you're doing that already. class Static: def bar(self): print(self.parent_name) def __set_name__(self, owner, name): self.parent_name = owner.__name__ class A: object = Static() def foo(self): A.object.bar() class B: object = Static() def foo(self): B.object.bar() A().foo() B().foo() I personally avoid these magic methods when possible, so you might also consider taking @JonSG's advice and put the owner class in the Static constructor instead. This way the relationship is much clearer, at the cost of adding them after-the-fact. class Static: def __init__(self, owner): self.parent_name = owner.__name__ def bar(self): print(self.parent_name) class A: def foo(self): A.object.bar() A.object = Static(A) class B: def foo(self): B.object.bar() B.object = Static(B) A().foo() B().foo() Alternatively, one might leverage a metaclass to do some of the work. This has the potential to also simplify the implementations of A() and B() class Static: def __init__(self, parent_name) -> None: self.parent_name = parent_name def bar(self): print(self.parent_name) class AB_Meta(type): def __init__(cls, name, bases, dct): cls.object = Static(name) class A(metaclass=AB_Meta): def foo(self): A.object.bar() class B(metaclass=AB_Meta): def foo(self): B.object.bar() A().foo() B().foo() Should give you: A B | 2 | 3 |
76,704,752 | 2023-7-17 | https://stackoverflow.com/questions/76704752/merging-records-based-on-consecutive-dates-in-python | I want to merge records of my dataframe if dates are same.Here in the below example I want to merge date (13,14,15), (25,26), (30,31) together as there are continuous dates. I want to break the merging of record if there is any single day break. cust date description CUST123 2020-06-13 observed increased loss rate CUST123 2020-06-13 cut performed job CUST123 2020-06-14 working tight area CUST123 2020-06-15 production shut neighbouring app CUST123 2020-07-17 loss pressure slow gain trip CUST123 2020-08-25 established circulation load CUST123 2020-08-26 performed sticky test CUST123 2020-08-28 job meeting prior low energy CUST123 2020-08-30 performed maintenance service CUST123 2020-08-31 reconnected control line expected output cust date description CUST123 2020-06-13 observed increased loss rate cut performed job working tight area production shut neighbouring app CUST123 2020-07-17 loss pressure slow gain trip CUST123 2020-08-25 established circulation load performed sticky test CUST123 2020-08-28 job meeting prior low energy CUST123 2020-08-30 performed maintenance service reconnected control line | In order to merge records of a dataframe if dates are same, you could do: merged_df = df.groupby(['cust', 'date'])['description'].apply(' '.join).reset_index() which outputs: cust date description 0 CUST123 2020-06-13 observed increased loss rate cut performed job 1 CUST123 2020-06-14 working tight area 2 CUST123 2020-06-15 production shut neighbouring app 3 CUST123 2020-07-17 loss pressure slow gain trip 4 CUST123 2020-08-25 established circulation load 5 CUST123 2020-08-26 performed sticky test 6 CUST123 2020-08-28 job meeting prior low energy 7 CUST123 2020-08-30 performed maintenance service 8 CUST123 2020-08-31 reconnected control line EDIT: If you want to merge the consecutive dates, keeping the first date of the consecutive range, you could it like this: # Sort DataFrame by 'date' (in case 'df' is not already sorted) df.sort_values('date', inplace=True) # Initialize variables merged_data = [] prev_row = None # Loop through the rows for _, row in df.iterrows(): if prev_row is None or row['cust'] != prev_row['cust'] or (row['date'] - prev_row['date']).days > 1: merged_data.append({'cust': row['cust'], 'date': row['date'], 'description': row['description']}) else: merged_data[-1]['description'] += ' ' + row['description'] prev_row = row # Create merged DataFrame merged_df = pd.DataFrame(merged_data) print(merged_df) Output: 0 CUST123 2020-06-13 observed increased loss rate cut performed job... 1 CUST123 2020-07-17 loss pressure slow gain trip 2 CUST123 2020-08-25 established circulation load performed sticky ... 3 CUST123 2020-08-28 job meeting prior low energy 4 CUST123 2020-08-30 performed maintenance service reconnected cont... | 2 | 3 |
76,703,126 | 2023-7-17 | https://stackoverflow.com/questions/76703126/selecting-all-rows-which-contain-values-greater-than-a-percentage-of-average | I have a DataFrame which has 3 numeric columns A,B,C. I need to extract only those rows where the values in all these 3 columns A,B,C are more than 40% of their row average. df = pd.DataFrame([['AA',10,8,12],['BB',10,2,18],['CC',10,6,14]], columns=['ID','A', 'B', 'C']) print(df) ID A B C 0 AA 10 8 12 1 BB 10 2 18 2 CC 10 6 14 I mean the following: For Row 1, the mean of A,B,C is 30/3=10, and I want that in Row 1 all values, be it A or B or C, should be more than 40% of 10, i.e. 4. Similarly for Row 2 and Row 3. In case even one element is less than that, we remove that row. My attempt: I used the any() function, but that doesn't help me when the row average is involved. I always get an empty DF. df = df[(df[['A','B','C']] > (0.4*df[['A','B','C']].mean(axis=1))).all(1)] print(df) ID A B C I was expecting this: ID A B C 0 AA 10 8 12 2 CC 10 6 14 The average of all rows is 10, so if I had hardcoded it, it would work, like this: df[(df[['A','B','C']] > 0.4*10).all(1)] How can I do this dynamically? Thanks. | Another possible solution: a = df[['A', 'B', 'C']].values df[(a > 0.4*(a.mean(1)[:, None])).all(1)] Output: ID A B C 0 AA 10 8 12 2 CC 10 6 14 | 3 | 1
76,702,207 | 2023-7-17 | https://stackoverflow.com/questions/76702207/two-level-sorting-in-a-nested-tuple-of-nested-lists-in-python | I have a deeply nested tuple of nested lists as follows: ip = (array([[[ 50, 73]], [[ 50, 107]], [[ 55, 108]], [[ 55, 121]], [[978, 87]], [[977, 86]], [[977, 73]]], dtype=int32), array([[[ 669, 3]], [[ 668, 4]], [[ 667, 4]], [[1033, 71]], [[1035, 69]], [[1035, 4]], [[ 848, 4]], [[ 847, 3]], [[ 813, 3]], [[ 718, 4]], [[ 717, 3]]], dtype=int32), array([[[ 17, 3]], [[ 16, 4]], [[ 0, 4]], [[ 0, 49]], [[197, 49]], [[197, 8]], [[ 84, 4]], [[ 83, 3]]], dtype=int32)) The length of the main tuple in above example is 3. I want to perform a 2 level sorting on the above structure. First I want to sort all the 3 elements in the main list in increasing order based on the first value of nested list. So in the above case the third element will come first as it has the lowest value of the first element i.e. 0. Second should be the first element as it has the second lowest value of 50and last should be the third element as it has the third lowest value of 1035. The output of the first level sorting should be: op = (array([[[ 17, 3]], [[ 16, 4]], [[ 0, 4]], [[ 0, 49]], [[197, 49]], [[197, 8]], [[ 84, 4]], [[ 83, 3]]], dtype=int32), array([[[ 50, 73]], [[ 50, 107]], [[ 55, 108]], [[ 55, 121]], [[978, 87]], [[977, 86]], [[977, 73]]], dtype=int32), array([[[ 669, 3]], [[ 668, 4]], [[ 667, 4]], [[1033, 71]], [[1035, 69]], [[1035, 4]], [[ 848, 4]], [[ 847, 3]], [[ 813, 3]], [[ 718, 4]], [[ 717, 3]]], dtype=int32), ) Now I want to perform the same sorting again on the above op but instead of the first value of the nested list I want to sort based on the second value of the nested list. So now the final output would be as follows: final_op = (array([[[ 17, 3]], [[ 16, 4]], [[ 0, 4]], [[ 0, 49]], [[197, 49]], [[197, 8]], [[ 84, 4]], [[ 83, 3]]], dtype=int32), array([[[ 669, 3]], [[ 668, 4]], [[ 667, 4]], [[1033, 71]], [[1035, 69]], [[1035, 4]], [[ 848, 4]], [[ 847, 3]], [[ 813, 3]], [[ 718, 4]], [[ 717, 3]]], dtype=int32), array([[[ 50, 73]], [[ 50, 107]], [[ 55, 108]], [[ 55, 121]], [[978, 87]], [[977, 86]], [[977, 73]]], dtype=int32) ) Any help is appreciated! Thanks in advance! | You can use sorted on your tuple and specify the item using the key parameter First sort ip = sorted(ip, key=lambda x: x[0][0][0]) print(ip) [array([[[ 17, 3]], [[ 16, 4]], [[ 0, 4]], [[ 0, 49]], [[197, 49]], [[197, 8]], [[ 84, 4]], [[ 83, 3]]]), array([[[ 50, 73]], [[ 50, 107]], [[ 55, 108]], [[ 55, 121]], [[978, 87]], [[977, 86]], [[977, 73]]]), array([[[ 669, 3]], [[ 668, 4]], [[ 667, 4]], [[1033, 71]], [[1035, 69]], [[1035, 4]], [[ 848, 4]], [[ 847, 3]], [[ 813, 3]], [[ 718, 4]], [[ 717, 3]]])] And the second sort ip = sorted(ip, key=lambda x: x[0][0][1]) print(ip) [array([[[ 17, 3]], [[ 16, 4]], [[ 0, 4]], [[ 0, 49]], [[197, 49]], [[197, 8]], [[ 84, 4]], [[ 83, 3]]]), array([[[ 669, 3]], [[ 668, 4]], [[ 667, 4]], [[1033, 71]], [[1035, 69]], [[1035, 4]], [[ 848, 4]], [[ 847, 3]], [[ 813, 3]], [[ 718, 4]], [[ 717, 3]]]) array([[[ 50, 73]], [[ 50, 107]], [[ 55, 108]], [[ 55, 121]], [[978, 87]], [[977, 86]], [[977, 73]]])] If you want it back as tuple just do tuple(ip) | 3 | 2 |
76,700,089 | 2023-7-16 | https://stackoverflow.com/questions/76700089/adding-a-column-in-a-dataframe-based-on-thresholds-and-size-of-group | I have a DataFrame with x and y coordinates, where the index represents a timestamp. We may assume it is an object that moves every timestep. The distance between consecutive timestamps is expected to increase. However, if the distance doesn't increase by a certain threshold, I consider it a potential "waiting" position. I use the word potential, because the data is quite noisy, and a single 'waiting' condition is not enough to be really sure that the object was not moving. Thus, I require at least 3 or more consecutive 'waiting' conditions, before I can be sure the object was indeed not moving. I would like to detect these waiting positions and label them accordingly in a new column. Example : x y timestamp 2023-07-01 00:00:00 1 5 2023-07-01 00:01:00 2 6 2023-07-01 00:02:00 3 7 2023-07-01 00:03:00 4 8 2023-07-01 00:04:00 4 8 2023-07-01 00:05:00 5 9 2023-07-01 00:06:00 6 9 2023-07-01 00:07:00 7 10 2023-07-01 00:08:00 7 10 2023-07-01 00:09:00 7 10 2023-07-01 00:10:00 7 10 2023-07-01 00:11:00 8 11 2023-07-01 00:12:00 9 11 To compute the distance, I already shifted the dataframe by 1, and caluclate the distance: x y distance timestamp 2023-07-01 00:00:00 1 5 NaN 2023-07-01 00:01:00 2 6 1.414214 2023-07-01 00:02:00 3 7 1.414214 2023-07-01 00:03:00 4 8 1.414214 2023-07-01 00:04:00 4 8 0.000000 2023-07-01 00:05:00 5 9 1.414214 2023-07-01 00:06:00 6 9 1.000000 2023-07-01 00:07:00 7 10 1.414214 2023-07-01 00:08:00 7 10 0.000000 2023-07-01 00:09:00 7 10 0.000000 2023-07-01 00:10:00 7 10 0.000000 2023-07-01 00:11:00 8 11 1.414214 2023-07-01 00:12:00 9 11 1.000000 Now, assume if the distance is lower than 1, it could potentially be a waiting position: x y distance condition_fulfilled timestamp 2023-07-01 00:00:00 1 5 NaN NaN 2023-07-01 00:01:00 2 6 1.414214 False 2023-07-01 00:02:00 3 7 1.414214 False 2023-07-01 00:03:00 4 8 1.414214 False 2023-07-01 00:04:00 4 8 0.000000 True 2023-07-01 00:05:00 5 9 1.414214 False 2023-07-01 00:06:00 6 9 1.000000 False 2023-07-01 00:07:00 7 10 1.414214 False 2023-07-01 00:08:00 7 10 0.000000 True 2023-07-01 00:09:00 7 10 0.000000 True 2023-07-01 00:10:00 7 10 0.000000 True 2023-07-01 00:11:00 8 11 1.414214 False 2023-07-01 00:12:00 9 11 1.000000 False Since I require at least 3 consecutive fulfilled condtions, the expected output would be: x y distance status timestamp 2023-07-01 00:00:00 1 5 NaN moving 2023-07-01 00:01:00 2 6 1.414214 moving 2023-07-01 00:02:00 3 7 1.414214 moving 2023-07-01 00:03:00 4 8 1.414214 moving 2023-07-01 00:04:00 4 8 0.000000 moving 2023-07-01 00:05:00 5 9 1.414214 moving 2023-07-01 00:06:00 6 9 1.000000 moving 2023-07-01 00:07:00 7 10 1.414214 moving 2023-07-01 00:08:00 7 10 0.000000 waiting 2023-07-01 00:09:00 7 10 0.000000 waiting 2023-07-01 00:10:00 7 10 0.000000 waiting 2023-07-01 00:11:00 8 11 1.414214 moving 2023-07-01 00:12:00 9 11 1.000000 moving | Try: # fill the first NaN df['condition_fulfilled'] = df['condition_fulfilled'].bfill() tmp = (df['condition_fulfilled'] != df['condition_fulfilled'].shift()).cumsum() df['status'] = df.groupby(tmp)['condition_fulfilled'].transform(lambda x: 'waiting' if x.all() and len(x) >= 3 else 'moving') print(df) Prints: x y distance condition_fulfilled status timestamp 2023-07-01 00:00:00 1 5 NaN False moving 2023-07-01 00:01:00 2 6 1.414214 False moving 2023-07-01 00:02:00 3 7 1.414214 False moving 2023-07-01 00:03:00 4 8 1.414214 False moving 2023-07-01 
00:04:00 4 8 0.000000 True moving 2023-07-01 00:05:00 5 9 1.414214 False moving 2023-07-01 00:06:00 6 9 1.000000 False moving 2023-07-01 00:07:00 7 10 1.414214 False moving 2023-07-01 00:08:00 7 10 0.000000 True waiting 2023-07-01 00:09:00 7 10 0.000000 True waiting 2023-07-01 00:10:00 7 10 0.000000 True waiting 2023-07-01 00:11:00 8 11 1.414214 False moving 2023-07-01 00:12:00 9 11 1.000000 False moving | 3 | 1 |
76,698,632 | 2023-7-16 | https://stackoverflow.com/questions/76698632/how-to-access-pep-526-local-variable-annotations-in-python | I made a variable annotation of a local variable: def a(): x: int = 1 But the annotations dict is empty: > a.__annotations__ {} Same when I use inspect.get_annotations() or typing.get_type_hints(). How can I access the annotation? Is this feature implemented at all? I'm using Python 3.10.7. | OK, I found a way. As long as a function has a fqdn, its source code can be parsed and local annotations show there. import ast, inspect def a(): k: int = 1 print(ast.dump(ast.parse(inspect.getsource(a)))) gives Module(body=[FunctionDef(name='a', args=arguments(posonlyargs=[], args=[], kwonlyargs=[], kw_defaults=[], defaults=[]), body=[AnnAssign(target=Name(id='k', ctx=Store()), annotation=Name(id='int', ctx=Load()), value=Constant(value=1), simple=1)], decorator_list=[])], type_ignores=[]) This is a pretty hacky way, but works for me. | 2 | 2 |
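A small extension of the approach in the accepted answer above (my own sketch, not from the answer): instead of dumping the whole tree, walk the parsed source and collect the AnnAssign nodes into a dict. It shares the original limitation that inspect.getsource must be able to locate the function's source, and ast.unparse needs Python 3.9+.

```
import ast
import inspect

def local_annotations(func):
    # Collect {name: annotation source} from the AnnAssign nodes in the function body
    tree = ast.parse(inspect.getsource(func))
    return {
        node.target.id: ast.unparse(node.annotation)
        for node in ast.walk(tree)
        if isinstance(node, ast.AnnAssign) and isinstance(node.target, ast.Name)
    }

def a():
    k: int = 1

print(local_annotations(a))  # {'k': 'int'}
```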
76,697,475 | 2023-7-16 | https://stackoverflow.com/questions/76697475/how-do-i-safely-handle-asyncio-event-loop-handling-with-telethon-and-telegram-bo | I'm creating a simple telegram bot in which I'm making use of the telegram bot and telethon api(as I want to retrieve all members in a chat and with the bot api, i can't do this unless all users are admins). I'm able to currently print out the users in a chat successfully. However when I terminate the script, I keep getting RuntimeErrors and warnings on certain telethon API coroutines not being awaited and Runtime errors for closed event loops I did do some reading up on asyncio: https://medium.com/dev-bits/a-minimalistic-guide-for-understanding-asyncio-in-python-52c436c244ea and added the following code to my main function to run the client. Here's a sample of what it looks like: from typing import Final from telethon import TelegramClient, events from telegram import Update, Bot from telegram.ext import Application, CommandHandler, MessageHandler, filters, ContextTypes TOKEN: Final = <token> api_id = <api_id> api_hash = <api_hash> BOT_USERNAME = <bot_username> bot = Bot(token=TOKEN) bot_client = TelegramClient('sessions/session_master', api_id, api_hash).start(bot_token=TOKEN) async def start_command(update: Update, context: ContextTypes.DEFAULT_TYPE): await update.message.reply_text('hello, bot starting..') async def tag_all_command(update: Update, context: ContextTypes.DEFAULT_TYPE): await tag_all(update) async def tag_all(update: Update): group_members = await bot_client.get_participants(update.effective_chat.id) for user in group_members: if user.username or user.first_name: print(f' {user.username or user.first_name}') if __name__ == '__main__': print('Starting bot...') app = Application.builder().token(TOKEN).build() # Commands app.add_handler(CommandHandler('start', start_command)) app.add_handler(CommandHandler('all', tag_all_command)) # Messages app.add_handler(MessageHandler(filters.TEXT, handle_message)) # Polling print('Polling...') app.run_polling(poll_interval=3) try: bot_client.run_until_disconnected() except KeyboardInterrupt: bot_client.disconnect() With the code above , I thought this would safely close the event loop, however whenever I terminate the script in pycharm, I get the following errors below, could someone help me pinpoint my mistake?: C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\client\telegrambaseclient.py:643: RuntimeWarning: coroutine 'TelegramBaseClient._disconnect_coro' was never awaited pass RuntimeWarning: Enable tracemalloc to get the object allocation traceback Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\bot.py", line 104, in <module> bot_client.run_until_disconnected() File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\client\updates.py", line 95, in run_until_disconnected return self.loop.run_until_complete(self._run_until_disconnected()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 628, in run_until_complete self._check_closed() File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 519, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed sys:1: RuntimeWarning: coroutine 'UpdateMethods._run_until_disconnected' was never awaited RuntimeWarning: Enable 
tracemalloc to get the object allocation traceback Task was destroyed but it is pending! task: <Task pending name='Task-3' coro=<Connection._send_loop() running at C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\connection\connection.py:313> wait_for=<Future pending cb=[Task.task_wakeup()]>> Task was destroyed but it is pending! task: <Task pending name='Task-4' coro=<Connection._recv_loop() running at C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\connection\connection.py:332> wait_for=<Future pending cb=[Task.task_wakeup()]>> Task was destroyed but it is pending! task: <Task pending name='Task-5' coro=<MTProtoSender._send_loop() running at C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\mtprotosender.py:462> wait_for=<Future pending cb=[Task.task_wakeup()]>> Task was destroyed but it is pending! task: <Task pending name='Task-6' coro=<MTProtoSender._recv_loop() running at C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\mtprotosender.py:505> wait_for=<Future pending cb=[Task.task_wakeup()]>> Task was destroyed but it is pending! task: <Task pending name='Task-7' coro=<UpdateMethods._update_loop() running at C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\client\updates.py:425> wait_for=<Future pending cb=[Task.task_wakeup()]>> Task was destroyed but it is pending! task: <Task pending name='Task-8' coro=<UpdateMethods._keepalive_loop() running at C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\client\updates.py:459> wait_for=<Future pending cb=[Task.task_wakeup()]>> Unexpected exception in the send loop Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\queues.py", line 158, in get await getter GeneratorExit During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\connection\connection.py", line 313, in _send_loop self._send(await self._send_queue.get()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\queues.py", line 160, in get getter.cancel() # Just in case getter is not done yet. 
^^^^^^^^^^^^^^^ File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 761, in call_soon self._check_closed() File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 519, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <coroutine object Connection._send_loop at 0x000001F8E5CD8740> RuntimeError: coroutine ignored GeneratorExit Exception ignored in: <coroutine object Connection._recv_loop at 0x000001F8E5CDC8B0> Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\connection\connection.py", line 350, in _recv_loop File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\connection\connection.py", line 258, in disconnect File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\helpers.py", line 174, in _cancel File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 761, in call_soon File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 519, in _check_closed RuntimeError: Event loop is closed Unhandled error while receiving data Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\queues.py", line 158, in get await getter GeneratorExit During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\mtprotosender.py", line 505, in _recv_loop body = await self._connection.recv() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\connection\connection.py", line 299, in recv result, err = await self._recv_queue.get() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\queues.py", line 160, in get getter.cancel() # Just in case getter is not done yet. ^^^^^^^^^^^^^^^ File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 761, in call_soon self._check_closed() File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 519, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <coroutine object MTProtoSender._recv_loop at 0x000001F8E5CB5B40> Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\mtprotosender.py", line 520, in _recv_loop File "C:\Users\Jeevan Mahtani\PycharmProjects\pythonProject\venv\Lib\site-packages\telethon\network\mtprotosender.py", line 428, in _start_reconnect File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 434, in create_task File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 519, in _check_closed RuntimeError: Event loop is closed sys:1: RuntimeWarning: coroutine 'MTProtoSender._reconnect' was never awaited Task was destroyed but it is pending! 
task: <Task pending name='Task-20' coro=<Queue.get() running at C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\queues.py:158> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[_release_waiter(<Future pendi...ask_wakeup()]>)() at C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\tasks.py:421]> Exception ignored in: <coroutine object Queue.get at 0x000001F8E5CD8E40> Traceback (most recent call last): File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\queues.py", line 160, in get File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 761, in call_soon File "C:\Users\Jeevan Mahtani\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 519, in _check_closed RuntimeError: Event loop is closed Process finished with exit code 1 | telegram package by default is evasive from some aspects, so you need to specify some stuff manually, you have couple issues: main one is run_polling' close_loop not being set to False, it will close the loop on KeyboardInterrupt too and shutdown without giving chance for telethon to disconnect since they share the same loop. run_polling also handles KeyboardInterrupt for you, no need for you to have your own try except. and finally, using telethon's run_until_disconnected is useless and wrong. because run_polling will lock and run the event loop for telethon too. run_until_disconnected was never reached to begin with, until you exit run_polling with a signal. only then then it will run, and lock twice, so you shouldn't use it. something like this should do: print('Polling...') try: app.run_polling(poll_interval=3, close_loop=False) finally: bot_client.disconnect() | 2 | 3 |
76,692,916 | 2023-7-15 | https://stackoverflow.com/questions/76692916/in-nixos-what-is-the-difference-between-installing-from-pkgs-or-python311packag | I had an issue when I installed Yapf this way: environment.systemPackages = with pkgs; [ (python311.withPackages(ps: with ps; [ toml python-lsp-server pyls-isort flake8 ])) pkgs.yapf ]; This gave me the error: $ yapf autoapp.py yapf: toml package is needed for using pyproject.toml as a configuration file And I solved it when I did: environment.systemPackages = with pkgs; [ (python311.withPackages(ps: with ps; [ toml python-lsp-server pyls-isort flake8 yapf ])) ]; Why was the first configuration giving me an installed version of yapf that couldn't import toml? | These are the same package - you can see this by checking the source links from the package search page. Adding it to withPackages links the Python package with the interpreter, making it possible to run things like python -m yapf ... or import yapf within the Python REPL. If you simply list pkgs.yapf at the top level, the package isn't linked to the Python interpreter, and so only things like man pages and executables are available in the resulting environment. | 3 | 2
76,693,948 | 2023-7-15 | https://stackoverflow.com/questions/76693948/new-version-selenium-desired-capabilities-deprecate | I want to use the ChromeDriver Performance Log to monitor network traffic. I'm looking for some usage like this: capabilities = DesiredCapabilities.CHROME # enable performance log capabilities['loggingPrefs'] = {"performance","all"} self.driver = webdriver.Chrome( desired_capabilities=capabilities ) # get performance log logs = driver.get_log("performance") Since my version of selenium is 4.10.0, the desired_capabilities property is deprecated. webdriver.Chrome(service=service, options=opts) This option is allowed, but I could not use desired_capabilities. I want to know how to set up desired_capabilities for Chrome, or how to open the Performance Log in order to trace traffic details. | DesiredCapabilities, deprecated earlier, is now completely removed in Selenium v4.10. Solution You have to use an instance of ChromeOptions and the set_capability() method as follows: from selenium.webdriver.chrome.options import Options options = Options() options.set_capability('goog:loggingPrefs', {'performance': 'ALL'}) # other configurations driver = Chrome(service=service, options=options) | 2 | 3
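For completeness, a sketch (mine, not from the answer) of reading the entries once the driver has been created with the options above. The nested layout of each log entry is an assumption that can vary across Chrome versions, so treat the key names as illustrative.

```
import json

# assumes `driver` was created with the goog:loggingPrefs capability shown above
logs = driver.get_log("performance")
for entry in logs:
    # each entry's "message" field is assumed to be a JSON string wrapping a DevTools event
    event = json.loads(entry["message"])["message"]
    if event.get("method", "").startswith("Network."):
        print(event["method"])
```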
76,694,215 | 2023-7-15 | https://stackoverflow.com/questions/76694215/python-type-casting-when-preallocating-list | This question might already have an answer, so please guide me to one if you know any. I couldn't find one myself, though this question feels like a common one. So, consider the following pattern: arr = [None] * n for i in range(n): # do some computations # ... # even more computations arr[i] = MyClass(some_computed_value) So far so good, I tend to use this pattern from time to time. Now, let us be thorough in our attempt to provide all the code with type annotations. The problem is that we preallocate our array with Nones, so it has the type list[None]. But we want it to be list[MyClass]. How do we proceed? The most straightforward solution is making it optional: arr: list[Optional[MyClass]] = [None] * n This solves the type checker issue, but now it's our issue since that Optional prohibits us from performing even basic operations on the result arr[0].my_method() # error: NoneType has no attribute "my_method" Long story short, I end up with the following pattern: arr_: Any = [None] * n for i in range(n): # ... arr_[i] = MyClass(some_computed_value) arr = typing.cast(list[MyClass], arr_) This is ugly, inconvenient, barely readable and boilerplate. What do you do? | You can lie to your type checker when you first initialize arr to "trick" it that arr never contains None entries. For the purposes of static type checking, arr will be a list[MyClass], even though it briefly contains None entries at runtime. You of course assume responsibility for making sure that this assumption plays out. For example, from typing import cast, reveal_type n = 1000 arr: list[int] = [cast(int, None)] * n # or alternatively arr: list[int] = [None] * n # type: ignore for i in range(n): arr[i] = i reveal_type(arr) reveal_type(arr[0]) passes type checking with both mypy and pyright, and outputs # mypy tmp.py:9: note: Revealed type is "builtins.list[builtins.int]" tmp.py:10: note: Revealed type is "builtins.int" Success: no issues found in 1 source file # pyright file.py file.py:9:13 - information: Type of "arr" is "list[int]" file.py:10:13 - information: Type of "arr[0]" is "int" 0 errors, 0 warnings, 2 informations | 2 | 3 |
76,693,541 | 2023-7-15 | https://stackoverflow.com/questions/76693541/why-does-numpy-return-a-different-type-for-arrays-and-scalars | I have some whole numbers stored in np.float64 arrays and scalars, which I want to convert to native Python int. This is my attempt: import numpy as np a = np.array([1, 2, 3], dtype=np.float64) b = np.float64(4) def float_to_int(x): x_object = x.astype(object) return np.floor(x_object) # Array inputs are converted to int print(type(float_to_int(a)[0])) # >> <class 'int'> # Scalar inputs are left as np.float64 print(type(float_to_int(b))) # >> <class 'numpy.float64'> There are 3 things I don't understand here: Why is the type casting different for scalars and arrays? Why did np.floor() do type casting at all (for array inputs)? How can I reliably cast np.float64 to int for scalars and arrays? | I believe that since Numpy and Python data types are related but inherently different, you have to explicitly convert them to Python data types. One way to do it would be: a = a.astype(np.int64).tolist() b = int(b) or alternatively a = a.astype(np.int64).astype(object) b = b.astype(np.int64).astype(object) When you convert a numpy array to the object data type, it internally stores the values as Python objects. The object data type is flexible and inferred during conversion. | 2 | 1
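A quick check (my own) that the first conversion shown in the answer really hands back native Python ints for both the array and the scalar:

```
import numpy as np

a = np.array([1, 2, 3], dtype=np.float64)
b = np.float64(4)

print(type(a.astype(np.int64).tolist()[0]))  # <class 'int'>
print(type(int(b)))                          # <class 'int'>
print(type(a[0]))                            # <class 'numpy.float64'>, i.e. still numpy without conversion
```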
76,690,286 | 2023-7-14 | https://stackoverflow.com/questions/76690286/how-to-convert-frames-from-directory-to-video-without-adding-the-first-frame-eve | I am fairly new to this platform. I will do my best to be clear as possible. My goal is to convert frames from my directory into a video. For some reason, it is including the first frame in every 15 fps. Here is my code below: import cv2 import glob from pathlib import Path def play_frames(frames_dir, fps, output_video_path): # Get the list of frames in the directory frames = sorted(glob.glob(str(frames_dir / '*.png'))) # Read the first frame to get the width and height frame = cv2.imread(frames[0]) height, width, _ = frame.shape # Create the video writer object fourcc = cv2.VideoWriter_fourcc(*'mp4v') video_writer = cv2.VideoWriter(output_video_path, fourcc, fps, (width, height)) for frame_path in frames: # Read the frame frame = cv2.imread(frame_path) # Display the frame cv2.imshow('Frame', frame) cv2.waitKey(int(1000 / fps)) # Write the frame to the video video_writer.write(frame) # Release the video writer video_writer.release() cv2.destroyAllWindows() # Directory path frames_dir = Path('./detected_objects') fps = 15 output_video_path = 'vid2output.mp4' # Play the frames and save as video play_frames(frames_dir, fps, output_video_path) I have tried countless of codes here, but all the codes (that I have tried) keeps adding the first frame in every fps. What am I doing wrong? Is there another way to convert images to video without adding the first frame in every fps? (I have checked my frames in the directory, and they are all in the correct sequence.) I appreciate everyone's help and happy to provide more information! Update: I added resolution size to the code, but the first frame keeps showing up every fps. import cv2 import glob import numpy as np from pathlib import Path def play_frames(frames_dir, fps, output_video_path): # Get the list of frames in the directory frames = sorted(glob.glob(str(frames_dir / '*.png'))) # Read the first frame to get the width and height frame = cv2.imread(frames[0]) height, width, _ = frame.shape # Calculate the maximum width and height among all frames max_width = width max_height = height for frame_path in frames[1:]: # Read each frame to get its dimensions frame = cv2.imread(frame_path) height, width, _ = frame.shape # Update the maximum width and height if necessary max_width = max(max_width, width) max_height = max(max_height, height) # Create the video writer object fourcc = cv2.VideoWriter_fourcc(*'mp4v') video_writer = cv2.VideoWriter(output_video_path, fourcc, fps, (max_width, max_height)) # Resize and pad each frame before writing to the video for frame_path in frames: frame = cv2.imread(frame_path) # Resize the frame to the maximum dimensions resized_frame = cv2.resize(frame, (max_width, max_height)) # Write the resized frame to the video video_writer.write(resized_frame) # Release the video writer video_writer.release() # Directory path frames_dir = Path('./detected_objects') fps = 15 output_video_path = 'vid1output.mp4' # Play the frames and save as video play_frames(frames_dir, fps, output_video_path) | sorted(), when given a list of strings, sorts them as strings, not as numbers. 
The following demonstrates why this is a problem: >>> ', '.join(sorted([str(i) for i in range(67)])) '0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 3, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 4, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 5, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 6, 60, 61, 62, 63, 64, 65, 66, 7, 8, 9' See how instead of 0, 1, 2 we get 0, 1, 10? Similarly, we jump back to 2 between 19 and 20; jump to 3 between 29 and 30; 4 between 39 and 40; etc. It's easy to see how this could look like jumping back to frame 1, even though that's not actually quite what it's doing. You need to either pad your filenames so they all have the same number of digits (001.png instead of 1.png), or tell Python to sort them numerically instead of as strings. To do the latter, use the key argument to sorted, passing it as an argument a function that extracts the correct number. For example: import os.path def file_number(filename): return int(os.path.basename(filename).split('.')[0]) frames = sorted(glob.glob(str(frames_dir / '*.png')), key=file_number) | 3 | 2 |
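A tiny self-contained sketch (the file names are made up) showing both fixes suggested in the answer above, numeric sorting via key= and zero-padded names:

```
import os.path

def file_number(filename):
    return int(os.path.basename(filename).split('.')[0])

names = ["10.png", "2.png", "1.png"]
print(sorted(names))                   # ['1.png', '10.png', '2.png']  <- string order
print(sorted(names, key=file_number))  # ['1.png', '2.png', '10.png']  <- numeric order

# Or pad the names when writing the frames, so that string order and numeric order coincide:
print([f"{i:03d}.png" for i in (1, 2, 10)])  # ['001.png', '002.png', '010.png']
```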
76,684,195 | 2023-7-14 | https://stackoverflow.com/questions/76684195/how-to-best-filter-exceptions-on-their-cause-or-context | Given a base exception type: class MyModuleError(Exception): pass Suppose we have code that explicitly raises it, using exception chaining: def foo(): try: #some code except (ZeroDivisionError, OSError) as e: raise MyModuleError from e Now, in the calling code... try: foo() except MyModuleError as e: # Now what? How can I idiomatically write the except clause, so that the exception handling depends on the __cause__ (chained exception)? I thought of these approaches: a) using type(e) like: # filter here t=type(e.__cause__) if t is ZeroDivisionError: doStuff() elif t is OSError: doOtherStuff() else: raise b) using isinstance() like: # filter here if isinstance(e.__cause__, ZeroDivisionError): doStuff() elif isinstance(e.__cause__, OSError): doOtherStuff() else: raise c) re-raising like: # filter here try: raise e.__cause__ except ZeroDivisionError: doStuff() except OSError: doOtherStuff() except: raise e #which should be the "outer" exception | The re-raising is not recommended. In general, it is not idiomatic to raise something deliberately so that it can be immediately caught.[1] It's also less performant (exception handling can involve quite a bit of overhead when the exception is actually raised), and creates a reference cycle between the exception object and the current stack frame, which in the reference implementation of Python will leak memory when the auxiliary garbage collector is disabled. (This is why exception names created with as are explicitly deleted after the except block in 3.x.) On general principle, isinstance is the preferred means of type-checking rather than comparing type results directly, because isinstance automatically takes subtyping into account. The normal functionality of except accounts for subtyping (e.g. except IOError: will catch a FileNotFoundError, which is normally desirable); it stands to reason that, in any normal circumstance, type-checking of the chained exception should do this as well. There is no explicit built-in functionality for this; therefore, approach b) is recommended here. [1] Yes, for loops are internally implemented this way, using StopIteration. That's an implementation detail, and user code isn't intended to look like it's doing such things. | 3 | 3
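A small sketch of the recommended approach b); the FileNotFoundError detail is my own illustration (it assumes the file does not exist) to show why isinstance beats an exact type() comparison on the chained cause:

```
class MyModuleError(Exception):
    pass

def foo():
    try:
        open("definitely-missing.txt")  # raises FileNotFoundError, a subclass of OSError
    except (ZeroDivisionError, OSError) as e:
        raise MyModuleError from e

try:
    foo()
except MyModuleError as e:
    print(type(e.__cause__) is OSError)      # False: the exact type check misses the subclass
    print(isinstance(e.__cause__, OSError))  # True: isinstance matches it, like `except OSError:` would
```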
76,689,402 | 2023-7-14 | https://stackoverflow.com/questions/76689402/use-literal-style-for-just-multiline-strings-in-ruamel-yaml | I would like to have a custom ruamel.yaml dumper that uses Literal style for all multiline strings and the default style otherwise. For example: import sys import ruamel.yaml data = {"a": "hello", "b": "hello\nthere\nworld"} print("Default style") yaml = ruamel.yaml.YAML() yaml.dump(data, sys.stdout) print() print("style='|'") yaml = ruamel.yaml.YAML() yaml.default_style = "|" yaml.dump(data, sys.stdout) This produces: Default style a: hello b: "hello\nthere\nworld" style='|' "a": |- hello "b": |- hello there world My desired output is: a: hello b: |- hello there world | There are multiple ways to achieve what you want. If you have control over building up the data structure, it is often easiest to add a LiteralScalarString if appropriate: import sys import ruamel.yaml def lim(s): # literal if multi-line if '\n' in s: return ruamel.yaml.scalarstring.LiteralScalarString(s) return s data = {'a': lim('hello'), 'b': lim('hello\nthere\nworld')} yaml = ruamel.yaml.YAML() yaml.dump(data, sys.stdout) which gives: a: hello b: |- hello there world This gives you easy fine control over what gets dumped as literal style. If you don't add all the data individually (but e.g. read them from a JSON file), you can walk over your data structure after it is fully constructed and update it in place: import sys import ruamel.yaml def tmtl(d): """translate multi-line to literal, only acts on dict values and sequence items, not on keys """ if isinstance(d, dict): for k, v in d.items(): if isinstance(v, str) and '\n' in v: d[k] = ruamel.yaml.scalarstring.LiteralScalarString(v) else: tmtl(v) elif isinstance(d, list): for idx, item in enumerate(d): if isinstance(item, str) and '\n' in item: d[idx] = ruamel.yaml.scalarstring.LiteralScalarString(item) data = {'a': 'hello', 'b': 'hello\nthere\nworld'} tmtl(data) yaml = ruamel.yaml.YAML() yaml.dump(data, sys.stdout) which gives: a: hello b: |- hello there world If you cannot update your data, you could rewrite tmtl in the program above so it builds a new data structure and returns that, but at that point it is IMO easier to change the representer: import sys import ruamel.yaml CKS = ruamel.yaml.comments.CommentedKeySeq # so you can have sequences as keys in a mapping class MyRepresenter(ruamel.yaml.representer.RoundTripRepresenter): def represent_str(self, s): if '\n' in s: return self.represent_scalar('tag:yaml.org,2002:str', s, style='|') return self.represent_scalar('tag:yaml.org,2002:str', s) MyRepresenter.add_representer(str, MyRepresenter.represent_str) data = {'a': 'hello', 'b': 'hello\nthere\nworld', CKS((1, 2)): ['nested works\nas well\n\n']} yaml = ruamel.yaml.YAML() yaml.Representer = MyRepresenter yaml.dump(data, sys.stdout) which gives: a: hello b: |- hello there world [1, 2]: - |+ nested works as well ... As you can see the trailing newlines of the final literal style scalar automatically causes the chomping indicator to change from strip (-) to keep (+) and the explicit document end marker (...) to appear. | 2 | 3 |
76,689,146 | 2023-7-14 | https://stackoverflow.com/questions/76689146/pandas-read-csv-ignore-first-cell | I have a .csv file like this: Str; Int; Flt A; 123; 0.1 B; 456; 0.2 C; 789; 0.3 I want to get a DataFrame like this: Int; Flt A; 123; 0.1 B; 456; 0.2 C; 789; 0.3 I read the csv like this: df = pd.read_csv('data.csv', index_col=0, sep=";") And the problem is that I can't use df.loc["A", "Int"] to get the cell value. If I drop Str; from the csv, everything works fine. So the idea is to use the first row as column names and the first column as row names. I understand that the first element can't be used both as a col name and a row name; is there any way to drop such an ambiguous value? | You have a whitespace issue. import io import pandas as pd fp = io.StringIO(''' Str; Int; Flt A; 123; 0.1 B; 456; 0.2 C; 789; 0.3 '''.strip()) df = pd.read_csv(fp, index_col=0, sep=";") print("Index: ", df.index) print("Columns: ", df.columns) # Now look at the whitespace: print(df.loc[' A',' Int']) Yields: Index: Index([' A', ' B', ' C'], dtype='object', name='Str') Columns: Index([' Int', ' Flt'], dtype='object') 123 So when you get rid of "Str", it appears you're dealing with the whitespace issue. So instead do: df = pd.read_csv(fp, index_col=0, sep=";", skipinitialspace=True) print(df.loc['A','Int']) | 2 | 3
76,686,704 | 2023-7-14 | https://stackoverflow.com/questions/76686704/numpy-array-bool-operation-slow | I started profiling my application and I tested to following code: a = np.random.random((100000,)) def get_first_index(value, arr): firstIndex = np.argmax(arr > value) if firstIndex <= 0: raise Exception('No index found') return firstIndex for i in range(0, 1000): get_first_index(0.5, a) It just returns me the first index of an element bigger than the given value. On my machine it takes around 0.01s for array size 50k and 1k calls. I was wondering what causes the slow down. My first suspect was np.argmax but I boiled it down to the boolean comparison arr > value. It spends 99% of the time creating the bool comparison. Is there any faster way I am not aware of? Test code for profiling: a = np.random.random((100000,)) def test_function(a, b): return a < b import cProfile, pstats profiler = cProfile.Profile() profiler.enable() for i in range(0, 1000): test_function(0.5, a) profiler.disable() stats = pstats.Stats(profiler).sort_stats('tottime') stats.print_stats() | The reason your approach is slow is that all elements of arr are compared under all circumstances, even if the first element of arr is greater than value. While numpy does not have an API optimized for this kind of processing, Numba can be used instead, which can be easily implemented as follows. import timeit import numba import numpy as np np.random.seed(0) a = np.random.random((100000,)) def get_first_index(value, arr): firstIndex = np.argmax(arr > value) if firstIndex <= 0: raise Exception("No index found") return firstIndex @numba.njit("i8(f8,f8[:])", cache=True) def get_first_index2(value, arr): for i in range(len(arr)): if arr[i] > value: return i raise Exception("No index found") x = 0.9999 print(timeit.timeit(lambda: get_first_index(x, a), number=1000)) print(timeit.timeit(lambda: get_first_index2(x, a), number=1000)) 0.01520979999999994 0.001343500000000053 As a side note, if arr is a sorted array, np.searchsorted is even faster. | 2 | 5 |
76,685,996 | 2023-7-14 | https://stackoverflow.com/questions/76685996/cant-open-lib-odbc-driver-17-for-sql-server-file-not-found-0-sqldriverco | I have searched a lot for the solution but still struggling with this problem. I'm trying to connect to a SQL Server instance running on 127.0.0.1:1433. However, I'm getting a sqlalchemy.exc.DBAPIError with the following error message: sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)") I think I need to install the ODBC driver, but I'm not sure if it needs to be installed on the SQL Server Docker image or on my local VM. If the answer is the Docker image, then I think my /etc/odbcinst.ini file is correctly configured as follows: [ODBC Driver 17 for SQL Server] Description=Microsoft ODBC Driver 17 for SQL Server Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.2.1 UsageCount=1 But if the ODBC driver needs to be installed on my local VM, then my /etc/odbcinst.ini file is empty. Here's the Python code I used to connect to the SQL Server instance: from sqlalchemy import create_engine server = "127.0.0.1,1433" user = "sa" password = "Pass@12345" db_name = "test_database" engine = create_engine(f'mssql+pyodbc://{user}:{password}@{server}/{db_name}?driver=ODBC Driver 17 for SQL Server') connection = engine.connect() print("connected") Another question is what should i do if there is @ in password? sqlserver: sqlserver:2022-latest docker image, runs on 127.0.0.1:1433 os: Ubuntu 22.04 python: 3.10.6 sqlalchemy: 2.0.16 Any help would be greatly appreciated. Thanks! | Solved by these steps: 1- install ODBC driver on local machine using this script for ubuntu official doc: if ! [[ "18.04 20.04 22.04 22.10" == *"$(lsb_release -rs)"* ]]; then echo "Ubuntu $(lsb_release -rs) is not currently supported."; exit; fi sudo su curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list > /etc/apt/sources.list.d/mssql-release.list exit sudo apt-get update sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18 # optional: for bcp and sqlcmd sudo ACCEPT_EULA=Y apt-get install -y mssql-tools18 echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bashrc source ~/.bashrc # optional: for unixODBC development headers sudo apt-get install -y unixodbc-dev Note: DNS filtering may cause issues during installation 2- Edit script as bellow to if password contains char @ from sqlalchemy import create_engine from urllib.parse import quote_plus server = "127.0.0.1:1433" user = "sa" password = "Pass@12345" db_name = "test_database" dsn = "ODBC Driver 18 for SQL Server" engine = create_engine(f"mssql+pyodbc://{user}:%s@{server}/{db_name}?TrustServerCertificate=yes&driver={dsn}" % quote_plus(password)) connection = engine.connect() print("connected") | 10 | 7 |
76,683,407 | 2023-7-13 | https://stackoverflow.com/questions/76683407/how-to-pass-multiple-parameters-to-insert-into-values-using-sqlalchemy-conne | The following is valid SQL for PostgreSQL. INSERT INTO schema.table (id, letter) VALUES (1, 'a'), (2, 'b'), ... I would like to execute a similar, parameterized statement using SQLAlchemy. I am using SQLAlchemy 1.4* Connection.execute(), but I do not have access to the table mapper class (ORM model). The following does not work, but does demonstrate what I am trying to achieve. statement: str = """ INSERT INTO schema.table (id, letter) VALUES :values """ values: Tuple[Tuple[int, str], ...] = ( (1, "a"), (2, "b"), ) with engine.connect() as connection: connection.execute( sqlalchemy.text(statement), {"values": values}, ) In this exact example, values is a tuple; it gets bound as a tuple of tuples, which does not fit what I am trying to achieve ("INSERT has more expressions than target columns"). Likewise, using a list of tuples creates a SQL ARRAY. Q: Why don't you just use an f-string or "...".format(...)? A: I've read that this is poor practice. Question: How can I properly "unpack" the values parameters? Alternatively, what is the best way to achieve what I am seeking (without using the Table ORM class)? * Yes, I am aware that SQLAlchemy 1.4 is deprecated at the time of this question's posting. | You can use the following, you were almost there, you just had to change the insert query and make the binding as a list of dicts. As seen in the docs and slightly modified for your use-case. statement: str = "INSERT INTO schema.table (id, letter) VALUES (:id, :letter)" values = ( (1, "a"), (2, "b"), ) values = [{'id': id, 'letter': letter} for id, letter in values] with engine.connect() as connection: connection.execute(text(statement), values) | 2 | 3 |
76,680,977 | 2023-7-13 | https://stackoverflow.com/questions/76680977/using-enum-values-as-type-variables-without-using-literal | I'm trying to represent physical dimensions (length, time, temperature, ...), and cannot find a nice way to do so, that is compatible with type hinting and generics. Ideally, I want to be able to define an Enum whose names are types themselves (a metaenum ?): from enum import Enum class Dim(Enum): TIME = "t" MASS = "m" I can type hint dimensions (dim: Dim) but cannot do things like from typing import Generic, TypeVar T = TypeVar("T", bound=Dim) # only accepts `Dim` class PhysicalQuantity(Generic[T]): pass class Container: some_time: PhysicalQuantity[Dim.TIME] # doesn't work because these are values. Is there a construct as simple as Enum, but to make types instead of values ? Reasons why I want to keep Enum: very easy to define very easy to associate to a value (str) Ability to sort of "think of Dim as the type, and Dim.TIME as a subtype" There are functional solutions, however I'm asking this to get a "best way" more than a "working way". Here's what I found: The simplest solution is to do use Literal: SomeGenericType[Literal[Dim.TIME]], but this is both annoying to write each time and counter-intuitive for people who expect Dim.TIME to behave as a type. Switching to classes, the most intuitive idea: class Dimension: pass class TIME(Dimension): pass doesn't work, because I want type(TIME) to be Dim, to reproduce Enum behavior That leads to using a metaclass: class Dimension(type): # ... complete __init__ and __new__ to get TIME.symbol = "t" class TIME(metaclass=Dimension, symbol="t"): pass This works, but I lose the ability to do Dim.TIME, to get Dim.TIME from Dim('t'), ... | Is there a construct as simple as Enum, but to make types instead of values? Yes, the metaclass. A metaclass makes types. It is simple in terms of usage i.e. creation of new types, but you do need to put in some more work to set it up properly. Semantically, you could think of the Dimension is a type and Time, Distance etc. as instances of it. In other words the type of the Time class is Dimension. This seems to reflect your view since you said: I want type(Time) to be Dim Now a Quantity could be considered the abstract base class of type Dimension. Something without a symbol. Time would inherit from Quantity (thus also being of type Dimension). No generics so needed so far. Now you can define a Container that is generic in terms of the type(s) of quantity (i.e. instances of Dimension) it holds. The metaclass and base class could look like this: from __future__ import annotations from typing import Any, ClassVar, TypeVar, overload T = TypeVar("T", bound=type) class Dimension(type): _types_registered: ClassVar[dict[str, Dimension]] = {} @overload def __new__(mcs, o: object, /) -> Dimension: ... @overload def __new__( mcs: type[T], name: str, bases: tuple[type, ...], namespace: dict[str, Any], /, **kwargs: Any, ) -> T: ... def __new__( mcs, name: Any, bases: Any = None, namespace: Any = None, /, **kwargs: Any, ) -> type: if bases is None and namespace is None: return mcs._types_registered[name] symbol = kwargs.pop("symbol", None) dim = super().__new__(mcs, name, bases, namespace, **kwargs) if symbol is not None: mcs._types_registered[symbol] = dim return dim class Quantity(metaclass=Dimension): # abstract base (no symbol) pass And to create new Dimension classes you just inherit from Quantity: from typing import Generic, TypeVar, reveal_type # ... 
import Quantity class Time(Quantity, symbol="t"): pass DimTime = Dimension("t") print(DimTime) # <class '__main__.Time'> print(type(Time)) # <class '__main__.Dimension'> reveal_type(DimTime) # mypy note: Revealed type is "Dimension" Q = TypeVar("Q", bound=Quantity) class Container(Generic[Q]): """generic container for `Dimension` instances (i.e. quantities)""" some_quantity: Q I realize this completely bypasses your Enum question, but since you even phrased the question as an XY Problem yourself by explaining what your actual intent was, I thought I'd give it a go and suggest a different approach. | 4 | 2 |
76,679,508 | 2023-7-13 | https://stackoverflow.com/questions/76679508/how-to-set-multiple-values-in-pandas-in-column-of-dtype-np-array | I have a column of Numpy arrays in Pandas, something like: col1 col2 col3 0 1 a None 1 2 b [2, 4] 2 3 c None The [2, 4] is really np.array([2, 4]). Now I need to impute the missing values, and I have a list of arrays for that. For example: vals_to_impute = [np.array([1, 2]), np.array([1, 4])] I tried: mask = col3.isna() df.loc[mask, "col3"] = vals_to_impute This results in error: ValueError: Must have equal len keys and value when setting with an ndarray I tried converting to Numpy array, extracting column etc., nothing worked. Is it actually possible to set this in a vectorized operation, or do I have to do a manual loop? | I managed to do it using pd.Series instead of list. I also had to input an index to this Series so that the insertion is correct. Maybe it can be done easier. df = pd.DataFrame({ "col1": [1, 2, 3], "col2": ["a", "b", "c"], "col3": [None, np.array([2, 4]), None] }) mask = df["col3"].isna() vals_to_impute = pd.Series( [np.array([1, 2]), np.array([1, 4])], index=mask[mask].index ) df.loc[mask, "col3"] = vals_to_impute print(df) Output: col1 col2 col3 0 1 a [1, 2] 1 2 b [2, 4] 2 3 c [1, 4] | 2 | 3 |
76,672,343 | 2023-7-12 | https://stackoverflow.com/questions/76672343/openai-api-chatcompletion-and-completion-give-totally-different-answers-with-sa | I'm exploring the usage of different prompts on gpt3.5-turbo. Investigating over the differences between "ChatCompletion" and "Completion", some references say that they should be more or less the same, for example: https://platform.openai.com/docs/guides/gpt/chat-completions-vs-completions Other sources say, as expected, that ChatCompletion is more useful for chatbots, since you have "roles" (system, user and assistant), so that you can orchestrate things like few-shot examples and/or memory of previous chat messages. While Completion is more useful for summarization, or text generation. But the difference seems to be much bigger. I can't find references where they explain what is happening under the hood. The following experiment gives me totally diferent results, even when using the same model with the same parameters. With ChatCompletion import os import openai openai.api_type = "azure" openai.api_version = "2023-03-15-preview" openai.api_base = ... openai.api_key = ... chat_response = openai.ChatCompletion.create( engine="my_model", # gpt-35-turbo messages = [{"role":"user","content":"Give me something intresting:\n"}], temperature=0, max_tokens=800, top_p=0.95, frequency_penalty=0, presence_penalty=0, stop=None) print(chat_response.choices[0]['message']['content']) Result is a fact about a war: Did you know that the shortest war in history was between Britain and Zanzibar in 1896? It lasted only 38 minutes! With Completion regular_response = openai.Completion.create( engine="my_model", # gpt-35-turbo prompt="Give me something intresting:\n", temperature=0, max_tokens=800, top_p=0.95, frequency_penalty=0, presence_penalty=0, stop=None) print(regular_response['choices'][0]['text']) Result is a python code and some explanation of what it does: ``` import random import string def random_string(length): return ''.join(random.choice(string.ascii_letters) for i in range(length)) print(random_string(10)) ``` Output: ``` 'JvJvJvJvJv' ``` This code generates a random string of length `length` using `string.ascii_letters` and `random.choice()`. `string.ascii_letters` is a string containing all ASCII letters (uppercase and lowercase). `random.choice()` returns a random element from a sequence. The `for` loop generates `length` number of random letters and `join()` concatenates them into a single string. The result is a random string of length `length`. This can be useful for generating random passwords or other unique identifiers.<|im_end|> Notes I'm using the same parameters (temperature, top_p, etc). The only difference is the ChatCompletion/Completion api. The model is the same in both cases, gpt-35-turbo. I'm keeping the temperature low so I can get more consistent results. Other prompts also give totally different answers, like if I try something like "What is the definition of song?" The Question Why is this happening? Shouldn't same prompts give similar results given that they are using the same model? Is there any reference material where OpenAI explains what it is doing under the hood? | I actually found the answer by chance reviewing some old notebooks. 
It's all on the hidden tags, or as I found out now, the Chat Markup Language (ChatML): https://github.com/openai/openai-python/blob/main/chatml.md This prompt with the Completion api now returns almost the same answer as the ChatCompletion: prompt = """<|im_start|>system <|im_end|> <|im_start|>user Give me something intresting: <|im_end|> <|im_start|>assistant """ regular_response = openai.Completion.create( engine="my_model", # gpt-35-turbo prompt=prompt, temperature=0, max_tokens=800, top_p=0.95, frequency_penalty=0, presence_penalty=0, stop=None) print(regular_response['choices'][0]['text']) Result now is the same fact about a war (with the ending tag): Did you know that the shortest war in history was between Britain and Zanzibar in 1896? The war lasted only 38 minutes, with the British emerging victorious.<|im_end|> It seems that all that the ChatCompletion api is doing is adding those tags in between your prompts. | 3 | 6 |
76,677,507 | 2023-7-13 | https://stackoverflow.com/questions/76677507/f-string-not-formatting-floats-into-an-integer | Usually, I use % string formatting. But as I discovered that f-strings are the new way of string formatting and are faster as well, I want to use them. However, I am facing a problem when formatting a float into an integer using an f-string. Following is what I have tried: Using % string formatting '%5d'%1223 # yields --> ' 1223' '%5d'%1223.555 # yields --> ' 1223' Using f-string formatting f'{1223:5d}' # yields --> ' 1223' ==> correct f'{1223.555:5d}' # gives error # error is "ValueError: Unknown format code 'd' for object of type 'float'" Am I missing something? | The reason for this error is that the format specifier d is specifically for formatting integers, not floats. You can use the general format specifier f instead, or convert to int first. f'{int(1223.555):5d}' | 5 | 5
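One nuance worth noting (my own addition): int() truncates toward zero, which matches what '%5d' did with a float, while the f specifier rounds, so the two are not interchangeable for every value:

```
x = 1223.555
print(f'{int(x):5d}')  # ' 1223', truncated, same result as '%5d' % x
print(f'{x:5.0f}')     # ' 1224', rounded to the nearest integer instead
```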
76,674,718 | 2023-7-12 | https://stackoverflow.com/questions/76674718/how-to-efficiently-append-same-element-n-times-to-a-non-empty-list | In Python, it's known that the most efficient way to create a list with n repetitions of the same element (let's say the string 's') is by using list multiplication, as shown below: lst = ['s'] * 1000 However, when the list is non-empty initially, what would be the most optimal method to append the same element n times? Here are a couple of methods that come to mind: Method1: lst = [1,2,3] for _ in range(1000): lst.append('s') Method2: lst = [1,2,3] lst.extend(['s'] * 1000) # or # lst.extend(['s' for _ in range(1000)]) But it's worth noting that Method 2 does create a temporary long list, e.g. ['s' for _ in range(1000)]. Are there any alternative approaches that are more efficient, both in terms of time complexity and space usage? Or among the existing methods, which one is deemed the most efficient? | I propose: def addition(N): lst = [1, 2, 3] return lst+['s']*N I profiled the approaches: import itertools from performance_measurement import run_performance_comparison def method1(N): lst = [1, 2, 3] for _ in range(N): lst.append("s") return lst def method2(N): lst = [1, 2, 3] lst.extend(["s"] * N) return lst def generator_approach(N): #@user2390182 lst = [1, 2, 3] lst.extend("s" for _ in range(N)) return lst def addition(N): lst = [1, 2, 3] return lst + ["s"] * N def itertools_approach(N): #@juanpa.arrivillaga lst = [1, 2, 3] lst.extend(itertools.repeat("s", N)) return lst approaches = [method1, method2, generator_approach, addition, itertools_approach] for approach in approaches[1:]: data = [100] assert approach(*data) == approaches[0](*data) run_performance_comparison( approaches, [ 1_000, 3_000, 5_000, 10_000, 30_000, 100_000, 300_000, 500_000, 1_000_000, ], title="Performance Comparison", number_of_repetitions=10, ) It seems to me ['s']*N is very performant, but itertools beats it. Profiling code: import timeit from functools import partial import matplotlib.pyplot as plt from typing import List, Dict, Callable from contextlib import contextmanager @contextmanager def data_provider(data_size, setup=lambda N: N, teardown=lambda: None): data = setup(data_size) yield data teardown() def run_performance_comparison(approaches: List[Callable], data_size: List[int], setup=lambda N: [N], teardown=lambda: None, number_of_repetitions=5, title='Performance Comparison', data_name='N'): approach_times: Dict[Callable, List[float]] = {approach: [] for approach in approaches} for N in data_size: with data_provider(N, setup, teardown) as data: for approach in approaches: function = partial(approach, *data) approach_time = min(timeit.Timer(function).repeat(repeat=number_of_repetitions, number=1)) approach_times[approach].append(approach_time) for approach in approaches: plt.plot(data_size, approach_times[approach], label=approach.__name__) plt.yscale('log') plt.xscale('log') plt.xlabel(data_name) plt.ylabel('Execution Time (seconds)') plt.title(title) plt.legend() plt.show() | 4 | 2 |
76,673,985 | 2023-7-12 | https://stackoverflow.com/questions/76673985/how-to-remove-a-tuple-in-an-integer-tuple-if-its-last-element-is-0-using-pyt | I have the following code to create a tuple contains multiple tuples with integer pairs: iterable = ( tuple(zip([0, 1, 2], _)) for _ in product(range(9), repeat=3) ) next(iterable) # First element is not needed print(list(iterable)) # This code produces: [((0, 0), (1, 0), (2, 1)), ... , ((0, 8), (1, 8), (2, 8))] But I need that if last element of a tuple is "0" (e.g. (0, 0) or (2, 0)), I have to remove that tuple. So new list should be like this: [((2, 1),), ... , ((1, 2), (2, 7)), ((1, 2), (2, 8)), ... , ((0, 8), (1, 8), (2, 8))] I actually achieved this goal by the following code but it is not the correct way I think, I don't know: x = () for i in iterable: y = () for j in i: if j[-1] != 0: y += (j,) x += (y,) print(list(x)) How can I do this with itertools module and in one line, if possible? If needed, I can change the code at the top of this question, to create the desired list in one line. Thank you. | Use filter() to remove the elements ending in 0 from the result of zip(). iterable = ( tuple(filter(lambda x: x[-1] != 0, zip([0, 1, 2], _))) for _ in product(range(9), repeat=3) ) | 4 | 5 |
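A self-contained version of the accepted one-liner, with the import included so it runs as-is (variable names follow the question):

from itertools import product

iterable = (
    tuple(filter(lambda pair: pair[-1] != 0, zip([0, 1, 2], combo)))
    for combo in product(range(9), repeat=3)
)
next(iterable)          # first element is not needed
result = list(iterable)
print(result[0])        # ((2, 1),)
print(result[-1])       # ((0, 8), (1, 8), (2, 8))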
76,673,056 | 2023-7-12 | https://stackoverflow.com/questions/76673056/differences-in-type-and-naming-conventions-for-uml-class-in-different-programmin | Are UML class representations different for each programming language, or do they adhere to a standard? I have conducted extensive research but remain confused. For instance: In Python: set_name(): None In Java: setName(): void In Kotlin: setName(): Unit | There is not one "standard" for UML naming conventions. I would advise you use names similar to what you would use in the programming language you are using. As long as your intended reader understands the material then it is a good convention. Most important is that whatever you choose remains consistent throughout your documentation. See: this question as well. | 4 | 6 |
76,670,856 | 2023-7-12 | https://stackoverflow.com/questions/76670856/langchain-conversationalretrieval-with-jsonloader | I modified the data loader of this source code https://github.com/techleadhd/chatgpt-retrieval for ConversationalRetrievalChain to accept data as JSON. I created a dummy JSON file and according to the LangChain documentation, it fits JSON structure as described in the document. { "reviews": [ {"text": "Great hotel, excellent service and comfortable rooms."}, {"text": "I had a terrible experience at this hotel. The room was dirty and the staff was rude."}, {"text": "Highly recommended! The hotel has a beautiful view and the staff is friendly."}, {"text": "Average hotel. The room was okay, but nothing special."}, {"text": "I absolutely loved my stay at this hotel. The amenities were top-notch."}, {"text": "Disappointing experience. The hotel was overpriced for the quality provided."}, {"text": "The hotel exceeded my expectations. The room was spacious and clean."}, {"text": "Avoid this hotel at all costs! The customer service was horrendous."}, {"text": "Fantastic hotel with a great location. I would definitely stay here again."}, {"text": "Not a bad hotel, but there are better options available in the area."} ] } The code is : import os import sys import openai from langchain.chains import ConversationalRetrievalChain, RetrievalQA from langchain.chat_models import ChatOpenAI from langchain.document_loaders import DirectoryLoader, TextLoader from langchain.embeddings import OpenAIEmbeddings from langchain.indexes import VectorstoreIndexCreator from langchain.indexes.vectorstore import VectorStoreIndexWrapper from langchain.llms import OpenAI from langchain.vectorstores import Chroma from langchain.document_loaders import JSONLoader os.environ["OPENAI_API_KEY"] = 'YOUR_API_KEY_HERE' # Enable to save to disk & reuse the model (for repeated queries on the same data) PERSIST = False query = None if len(sys.argv) > 1: query = sys.argv[1] if PERSIST and os.path.exists("persist"): print("Reusing index...\n") vectorstore = Chroma(persist_directory="persist", embedding_function=OpenAIEmbeddings()) index = VectorStoreIndexWrapper(vectorstore=vectorstore) else: loader = JSONLoader("data/review.json", jq_schema=".reviews[]", content_key='text') # Use this line if you only need data.json if PERSIST: index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader]) else: index = VectorstoreIndexCreator().from_loaders([loader]) chain = ConversationalRetrievalChain.from_llm( llm=ChatOpenAI(model="gpt-3.5-turbo"), retriever=index.vectorstore.as_retriever() ) chat_history = [] while True: if not query: query = input("Prompt: ") if query in ['quit', 'q', 'exit']: sys.exit() result = chain({"question": query, "chat_history": chat_history}) print(result['answer']) chat_history.append((query, result['answer'])) query = None Some examples of results are: Prompt: can you summarize the data? Sure! Based on the provided feedback, we have a mix of opinions about the hotels. One person found it to be an average hotel with nothing special, another person had a great experience with excellent service and comfortable rooms, another person was pleasantly surprised by a hotel that exceeded their expectations with spacious and clean rooms, and finally, someone had a disappointing experience with an overpriced hotel that didn't meet their expectations in terms of quality. Prompt: how many feedbacks present in the data ? There are four feedbacks present in the data. 
Prompt: how many of them are positive (sentiment)? There are four positive feedbacks present in the data. Prompt: how many of them are negative? There are three negative feedbacks present in the data. Prompt: how many of them are neutral? Two of the feedbacks are neutral. Prompt: what is the last review you can see? The most recent review I can see is: "The hotel exceeded my expectations. The room was spacious and clean." Prompt: what is the first review you can see? The first review I can see is "Highly recommended! The hotel has a beautiful view and the staff is friendly." Prompt: how many total texts are in the JSON file? I don't know the answer. I can chat with my data but except for the first answer, all other answers are wrong. Is there a problem with JSONloader or jq_scheme? How can I adapt the code so that I can generate the expected output? | In ConversationalRetrievalChain , search is setup to default 4, refer top_k_docs_for_context: int = 4 in ../langchain/chains/conversational_retrieval/base.py . That makes sense as you don't want to send all the vectors to LLM model(associated cost too). Based on the usecase, you can change the default to more manageable, using the following: chain = ConversationalRetrievalChain.from_llm( llm=ChatOpenAI(model="gpt-3.5-turbo"), retriever=index.vectorstore.as_retriever(search_kwargs={"k": 10}) ) with this change, you will get the result {'question': 'how many feedbacks present in the data ?', 'chat_history': [], 'answer': 'There are 10 pieces of feedback present in the data.'} | 4 | 6 |
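To sanity-check the retriever on its own before running the chain, something like the following can help; this is only a sketch that assumes the index object from the question is already built and a langchain version in which retrievers expose get_relevant_documents.

retriever = index.vectorstore.as_retriever(search_kwargs={"k": 10})
docs = retriever.get_relevant_documents("how many feedbacks are there?")
print(len(docs))   # up to 10 now, instead of the default 4

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=retriever,
)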
76,670,562 | 2023-7-12 | https://stackoverflow.com/questions/76670562/python-parse-then-put-in-a-dataframe | I have a file with a data like this: ------------------------------ ------------------------------ <TIME:2020-01-01 01:25:10> <TIME:2020-01-01 01:25:10> <TIME:2020-01-01 01:25:10> <TIME:2020-01-01 01:25:10> ------ ++++++ %%RequestHandler DATA1 = 123456 ERROR1 = 500 DATA2 = 56789 ERROR2 = 505 Count = 4 --- I would like to create a dataframe like DATA1 ERROR1 123456 500 56789 505 | Another regex approach with pivot: import re # or file.read() out = (pd.DataFrame(re.findall(r'^\s+(\w+)(\d+) = (\d+)', text, flags=re.M)) .pivot(index=1, columns=0, values=2) .rename_axis(index=None, columns=None) ) print(out) Output: DATA ERROR 1 123456 500 2 56789 505 Used input: text = '''------------------------------ ------------------------------ <TIME:2020-01-01 01:25:10> <TIME:2020-01-01 01:25:10> <TIME:2020-01-01 01:25:10> <TIME:2020-01-01 01:25:10> ------ ++++++ %%RequestHandler DATA1 = 123456 ERROR1 = 500 DATA2 = 56789 ERROR2 = 505 Count = 4''' regex demo | 2 | 2 |
76,669,711 | 2023-7-12 | https://stackoverflow.com/questions/76669711/how-may-i-instantiate-a-class-of-type-hint-in-python | I am new to python and am coming from C# and java. I want to instantiate a class of the type provided as type hint R as following from typing import (TypeVar, Generic) class BaseParams(object): def __init__(self) -> None: self.name = 'set-in-base-class' class ChildParams(BaseParams): def __init__(self) -> None: super().__init__() self.name = 'set-in-child-class' R = TypeVar('R', bound= BaseParams) class MyGeneric(Generic[R]): def __init__(self) -> None: super().__init__() def test(self): r = R() # how should I instantiate R here print(r.name) c = MyGeneric[ChildParams]() c.test() something like the following C# code class BaseParams { public BaseParams() { Name = "set-in-base-class"; } public string Name { get; set; } } class ChildParams : BaseParams { public ChildParams() { Name = "set-in-child-class"; } } class MyGenericClass<R> where R : BaseParams, new() { public void test() { var r = new R(); Console.WriteLine(r.Name); } } I've made quite a lot search on how to do that in python and all the sources refer to a situation where we provide the type in a method or something like that. I wonder if it is possible to do that at all. would you please someone help me to have a workaround on this? | It's still type hints and not type declaration, but I like inspecting weird objects and looking at their insides, so I gave it a go: I tried to inspect Generics in interactive session using dir and also looked for clues in the typing's source. My testing consisted of having one MyGeneric without R specified and one with, then inspecting them both. Both of them had __orig_bases__ which included Generic[R], but then I noticed that the parametrized MyClass has additional __orig_class__ attribute - bingo! >>> a = MyGeneric() >>> b = MyGeneric[int]() >>> a.__orig_bases__ (typing.Generic[~R],) >>> b.__orig_ b.__orig_bases__ b.__orig_class__( >>> b.__orig_bases__ (typing.Generic[~R],) >>> a.__orig_class__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'MyGeneric' object has no attribute '__orig_class__'. Did you mean: '__orig_bases__'? >>> b.__orig_class__ __main__.MyGeneric[int] By looking at Generic's source earlier, I already knew that .__args__ parameter gets whatever is in [], as tuple (because it could be multiple things). So: >>> b.__orig_class__.__args__ (<class 'int'>,) >>> b.__orig_class__.__args__[0] <class 'int'> Going back to your code I had in file, I put that attribute chain and run the file - and got set-in-child-class. from typing import (TypeVar, Generic) class BaseParams(object): def __init__(self) -> None: self.name = 'set-in-base-class' class ChildParams(BaseParams): def __init__(self) -> None: super().__init__() self.name = 'set-in-child-class' R = TypeVar('R', bound= BaseParams) class MyGeneric(Generic[R]): def __init__(self) -> None: super().__init__() def test(self): r = self.__orig_class__.__args__[0]() print(r.name) c = MyGeneric[ChildParams]() c.test() I'd advise to put that magical chain of attributes as some function with nice name. Like get_parametrized_base_arg or something. Or just put a good comment to explain the magic. | 3 | 2 |
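Following the advice at the end of the answer, a small helper can hide the attribute chain. This sketch relies on the same undocumented __orig_class__ attribute the answer inspects (the helper name get_type_arg is made up here); note that __orig_class__ is only set on instances of a parametrized generic, after __init__ has finished.

from typing import Generic, TypeVar

R = TypeVar('R')

def get_type_arg(obj, position=0):
    # works only on instances created via MyGeneric[SomeClass](), not bare MyGeneric()
    return obj.__orig_class__.__args__[position]

class MyGeneric(Generic[R]):
    def make(self):
        return get_type_arg(self)()   # instantiate the class supplied as R

class Thing:
    name = 'made-by-generic'

print(MyGeneric[Thing]().make().name)   # made-by-generic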
76,667,874 | 2023-7-12 | https://stackoverflow.com/questions/76667874/how-can-i-install-python-library-in-chatgpt-code-interpreter | ChatGPT is the newest platform for running Python in a Jupyter-like environment. However the installed libraries are limited. You cannot access the internet too. So, I cannot use pip to install. How can I install a new library? | You can upload a wheel file taken from PyPI, select cp38-manylinux_..._x86_64.whl and upload it. Tell ChatGPT to unzip and move it to /home/sandbox/.local/lib/python3.8/site-packages/ Then you can import and use it normally. Here's an example case where I install DuckDB: https://chat.openai.com/share/fa3df390-8a25-45d3-997a-c19d4f19df67 | 6 | 12 |
76,664,207 | 2023-7-11 | https://stackoverflow.com/questions/76664207/sending-a-reply-with-gmail-api-python | I created two Gmail accounts and I'm trying to create an e-mail thread between them with the Python Gmail API. I can send e-mails without any issue, but when it comes to replying to each other and creating a thread, it is simply not working : the new message is successfully displaying as the answer of the received email for the sender, but it is appearing as a new message - without a linked thread - for the receiver. This problem was described here in 2019 : https://stackoverflow.com/a/63186609/21966625 However, the Gmail APIs changed a lot since this article and I didn't find how to use these advices with today's API. I tried to carefully respect the instructions of the docs by defining the message's parameters References and In-Reply-To as the received message's id when replying. Indeed, I retrieve the email : received_email= service.users().messages().get(userId='me', id=label['id']).execute() I get a dict looking like: {'id': '189462395f418017', 'threadId': '189462395f418017', 'labelIds': ['UNREAD','INBOX'], 'snippet': 'xxx'....} Hence, when I'm building my e-mail, the following method should work : message_id=received_email['id'] message = EmailMessage() message.set_content('') message['To'] = '[email protected]' message['From'] = '[email protected]' message['References'] = message_id message['In-Reply-To'] = message_id message['Subject'] = 'Automated draft' In the same way, I defined the threadId as the id of the message I wanted to reply to. create_message = {'raw': encoded_message, 'threadId': message_id } send_message = (service.users().messages().send(userId="me", body=create_message).execute()) Thanks to this part of the code, the answers are correctly displayed (for the sender of the answer) as explained above, but it appears as a new message - unlinked to a thread - for the receiver. | Actually I found why my method did not work ; even if the dict mention a kind of message id : email = {'id': '189462395f418017', 'threadId': '189462395f418017', 'labelIds': ['UNREAD','INBOX'], 'snippet': 'xxx'....} I thought the messageIDcould be taken just by call email['id']. The real messageID is somewhere in the ['payload']['headers'] dictionnary ; one could find it by a loop like : for p in email['payload']['headers']: if p["name"] == "Message-Id": message_id = p['value'] This way we have the true messageID of the email, and the threads are successfully created. | 3 | 5 |
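Putting the pieces together, a rough sketch of the reply flow could look like the following; it assumes service is an already-authorised Gmail API client, received_id is the Gmail message id being replied to, and the addresses are placeholders. The key points are that References and In-Reply-To take the RFC 2822 Message-Id found in the payload headers, threadId takes the Gmail threadId, and the subject should normally match the original with a Re: prefix.

import base64
from email.message import EmailMessage

received_email = service.users().messages().get(
    userId='me', id=received_id, format='full').execute()

rfc_message_id = None
for header in received_email['payload']['headers']:
    if header['name'].lower() == 'message-id':   # header casing varies, so compare case-insensitively
        rfc_message_id = header['value']

reply = EmailMessage()
reply.set_content('Replying in the same thread.')
reply['To'] = 'them@example.com'
reply['From'] = 'me@example.com'
reply['References'] = rfc_message_id
reply['In-Reply-To'] = rfc_message_id
reply['Subject'] = 'Re: Automated draft'   # should match the original subject for reliable threading

encoded = base64.urlsafe_b64encode(reply.as_bytes()).decode()
body = {'raw': encoded, 'threadId': received_email['threadId']}
service.users().messages().send(userId='me', body=body).execute()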
76,645,870 | 2023-7-9 | https://stackoverflow.com/questions/76645870/importerror-cannot-import-name-gptsimplevectorindex-from-llama-index | I am getting an ImportError while using GPTSimpleVectorIndex from the llama-index library. I have installed the latest version of the llama-index library and am trying to run it on Python 3.9. from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext ImportError: cannot import name 'GPTSimpleVectorIndex' from 'llama_index' (E:\Experiments\OpenAI\data anaysis\llama-index-main\venv\lib\site-packages\llama_index\__init__.py The source code is given below, import os, streamlit as st from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext from langchain.llms.openai import OpenAI | Try using GPTVectorStoreIndex instead of GPTSimpleVectorIndex: from llama_index import GPTVectorStoreIndex, .. | 6 | 10 |
76,624,911 | 2023-7-6 | https://stackoverflow.com/questions/76624911/get-href-link-using-python-playwright | I am trying to extract the link inside a href but all I am finding it is the text inside the element The website code is the following: <div class="item-info-container "> <a href="/imovel/32600863/" role="heading" aria-level="2" class="item-link xh-highlight" title="Apartamento T3 na avenida da Liberdade, SΓ£o JosΓ© de SΓ£o LΓ‘zaro e SΓ£o JoΓ£o do Souto, Braga"> Apartamento T3 na avenida da Liberdade, SΓ£o JosΓ© de SΓ£o LΓ‘zaro e SΓ£o JoΓ£o do Souto, Braga </a> And the code I am using is: element_handle = page.locator('//div[@class="item-info-container "]//a').all_inner_texts() No matter if I specify //a[@href] or not, my output is always the title text: Apartamento T3 na avenida da Liberdade, SΓ£o JosΓ© de SΓ£o LΓ‘zaro e SΓ£o JoΓ£o do Souto, Braga When what I really want to achieve is: /imovel/32600863/ Any ideas of where my logic is failing me? | Using get_attribute: link = page.locator('.item-info-container ').get_by_role('link').get_attribute('href') More than one locator: link_locators = page.locator('.item-info-container ').get_by_role('link').all() for _ in link_locators: print(_.get_attribute('href')) | 8 | 15 |
76,643,220 | 2023-7-8 | https://stackoverflow.com/questions/76643220/from-within-a-python-function-how-can-i-tell-if-it-is-being-executed-in-the-gpu | I have a function that I sometimes call with Numba as a device function to execute on the GPU and sometimes call directly from within regular Python on the host: def process(): # perform computation process_cuda = cuda.jit(device=True)(process) Sometimes I call process() directly from Python, and sometimes I invoke the process_cuda wrapper as a kernel with Numba. My question: how can I tell, from within the process function, if it was called directly from Python or if it is executing as a Numba device function? | Numba offers a function called cuda.current_context(). If the code is executed on a GPU, the function returns the current CUDA context; otherwise it returns None when run on a CPU. So we can initialize a variable in process() to check whether the code is executed on a CPU or GPU: def process(): cuda_context = cuda.current_context() if cuda_context: return 'GPU' else: return 'CPU' If there is anything I did wrong, please share it with me. | 4 | 4 |
76,666,254 | 2023-7-11 | https://stackoverflow.com/questions/76666254/not-understanding-python-module-in-a-special-case-where-pythonpath-is-set-to-a-s | I encountered an issue I am confused about with Python modules. I have built a minimal working example: $ tree βββ app β βββ configuration.py β βββ error_code.py βββ test.py app/configuration.py from error_code import ErrorCode class Configuration: def status(self): return self._status def __init__(self): self._status = ErrorCode.Error1 app/error_code.py import enum class ErrorCode(enum.Enum): Success = 0 Error1 = 1 test.py #!/usr/bin/env python from app.configuration import Configuration from app.error_code import ErrorCode c = Configuration() print(c.status()) print(c.status() == ErrorCode.Error1) Command: PYTHONPATH=app ./test.py Output: ErrorCode.Error1 False Why does in such case the output show ErrorCode.Error1 while it says False for the equal check? I suppose it has to do with the module, because if in app/configuration.py I replace from error_code import ErrorCode with from app.error_code import ErrorCode it works as expected. But it seems weird to me. Where should I look to get the documentation for this specific behavior? | It's because the directory that test.py is in is also added to the import path. Which is why this import in test.py worked to begin with: from app.error_code import ErrorCode However, when you do: from error_code import ErrorCode in app/configuration.py it relied on PYTHONPATH=app/. But the original import was cached as "app.error_code" in sys.modules, so when you import error_code it isn't found in the cache, because it would need to exist as "error_code", and it isn't recognized as the same module, so it is reimported as a new module. IMO, you should use: from app.error_code import ErrorCode And as you noted, the import will be appropriately cached. Really, it's the parent directory of app that should be added to the PYTHONPATH to begin with. Better yet, you shouldn't be relying on the working directory or on PYTHONPATH, and just make your project pip installable. | 5 | 2 |
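A quick way to see the double import directly, assuming the same layout and PYTHONPATH=app as in the question (run from test.py):

import sys

from app.error_code import ErrorCode as ViaPackage
from error_code import ErrorCode as ViaPythonPath   # only resolvable because PYTHONPATH=app

print('app.error_code' in sys.modules)            # True
print('error_code' in sys.modules)                # True: the same file, imported a second time
print(ViaPackage is ViaPythonPath)                # False: two distinct classes
print(ViaPackage.Error1 == ViaPythonPath.Error1)  # False, which is the surprising result above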
76,633,836 | 2023-7-7 | https://stackoverflow.com/questions/76633836/what-does-langchain-charactertextsplitters-chunk-size-param-even-do | My default assumption was that the chunk_size parameter would set a ceiling on the size of the chunks/splits that come out of the split_text method, but that's clearly not right: from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter chunk_size = 6 chunk_overlap = 2 c_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap) text = 'abcdefghijklmnopqrstuvwxyz' c_splitter.split_text(text) prints: ['abcdefghijklmnopqrstuvwxyz'], i.e. one single chunk that is much larger than chunk_size=6. So I understand that it didn't split the text into chunks because it never encountered the separator. But so then the question is what is the chunk_size even doing? I checked the documentation page for langchain.text_splitter.CharacterTextSplitter here but did not see an answer to this question. And I asked the "mendable" chat-with-langchain-docs search functionality, but got the answer "The chunk_size parameter of the CharacterTextSplitter determines the maximum number of characters in each chunk of text."...which is not true, as the code sample above shows. | CharacterTextSplitter will only split on separator (which is '\n\n' by default). chunk_size is the maximum chunk size that will be split if splitting is possible. If a string starts with n characters, has a separator, and has m more characters before the next separator then the first chunk size will be n if chunk_size < n + m + len(separator). Your example string has no matching separators so there's nothing to split on. Basically, it attempts to make chunks that are <= chunk_size, but will still produce chunks > chunk_size if the minimum size chunks that can be created are > chunk_size. | 18 | 22 |
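For illustration, a small sketch where the separator does occur, so chunk_size actually takes effect, plus a case where a single piece between separators is longer than chunk_size and is therefore emitted as an oversized chunk anyway (the exact chunk boundaries shown are indicative):

from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(separator=' ', chunk_size=6, chunk_overlap=0)

# every piece between separators is short, so chunks stay within 6 characters
print(splitter.split_text('ab cd ef gh ij'))

# 'abcdefghij' alone exceeds 6 characters, so it is emitted as-is (with a warning logged)
print(splitter.split_text('ab abcdefghij cd'))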
76,632,765 | 2023-7-6 | https://stackoverflow.com/questions/76632765/how-to-optimise-an-inequality-join-in-pandas | I have a cross-join-like operation that I have implemented using a for loop. I need to make it fast and preferably elegant. It creates a block entry per day with a date range condition. This works fine for small datasets but completely stalls into a very slow runtime for larger datasets. I know that it can be vectorized. My implementation is very bad. I have looked at the other posts on how to vectorize loops in DataFrames. I read 10 minutes to pandas as per suggested by this post How to iterate over rows in a DataFrame in Pandas, tried using lambda functions. Messed with Cython. I just can't get it. I tried implementing [pandas.MultiIndex.to_frame] and I have a strong feeling this, or one of it's cousins, is a good way to go. I have also tried a bunch of other things and nothing. I want to learn to code elegantly. All suggestions, variations on the solution and, comments are welcome. from datetime import datetime import pandas as pd beginning = pd.to_datetime('14/09/2021', dayfirst=True) today = pd.to_datetime(datetime.today()) date_range = pd.date_range(start=beginning, end=today) # .tolist() frame = pd.DataFrame(columns=['Record_Date', 'Identifier', 'start_date', 'end_date', 'color']) block = pd.DataFrame( {'Identifier': ['4913151F', 'F4E9124A', '31715888', 'D0C57FCA', '57B4D7EB', 'E46F1E5D', '99E0A2F8', 'D77E342E', 'C596D233', 'D0EED63F', 'D0C57FCA'], 'start_date': ['03/11/2020', '05/07/2022', '22/12/2016', '17/03/2024', '14/10/2022', '08/08/2022', '04/11/2020', '13/03/2023', '05/11/2021', '12/27/2022', '13/06/2022'], 'end_date': ['11/07/2023', '11/04/2023', '14/12/2018', '20/01/2025', '15/06/2023', '09/01/2023', '16/07/2022', '19/05/2024', '24/09/2022', '17/11/2023', '13/06/2023'], 'color': ['red', 'green', 'magenta', 'yellow', 'light_blue', 'dark_blue', 'black', 'white', 'pink', 'orange', 'yellow']}) block.start_date = pd.to_datetime(block.start_date, dayfirst=True, format='mixed') block.end_date = pd.to_datetime(block.end_date, dayfirst=True, format='mixed') block_uniques = block.drop_duplicates(['Identifier', 'start_date']) for x in date_range: temp_df = block_uniques[(block_uniques.start_date <= x) & (block_uniques.end_date >= x)] temp_df.insert(0, 'Record_Date', x) frame = pd.concat([frame, temp_df]) frame = frame.sort_values(['Record_Date', 'Identifier']) frame = frame.reset_index().drop('index', axis=1) print(frame) Output and solution: Record_Date Identifier start_date end_date color 0 2021-09-14 4913151F 2020-11-03 2023-07-11 red 1 2021-09-14 99E0A2F8 2020-11-04 2022-07-16 black 2 2021-09-15 4913151F 2020-11-03 2023-07-11 red 3 2021-09-15 99E0A2F8 2020-11-04 2022-07-16 black 4 2021-09-16 4913151F 2020-11-03 2023-07-11 red ... ... ... ... ... ... 2641 2023-07-05 D0EED63F 2022-12-27 2023-11-17 orange 2642 2023-07-05 D77E342E 2023-03-13 2024-05-19 white 2643 2023-07-06 4913151F 2020-11-03 2023-07-11 red 2644 2023-07-06 D0EED63F 2022-12-27 2023-11-17 orange 2645 2023-07-06 D77E342E 2023-03-13 2024-05-19 white [2646 rows x 5 columns] | Looks like some form of inequality join; conditional_join offers an efficient way to handle this. Note that if your dates in block are not overlapping, then pd.IntervalIndex is suitable and performant. 
# pip install pyjanitor import janitor import pandas as pd # convert date_range to either a named series, or a dataframe date_range = pd.Series(date_range, name = 'date') (block .conditional_join( date_range, # column from the left, # column from the right, # operator ('start_date', 'date', '<='), ('end_date', 'date', '>='), # in some scenarios, # numba might offer a perf boost use_numba=False, ) ) Identifier start_date end_date color date 0 4913151F 2020-11-03 2023-07-11 red 2021-09-14 1 4913151F 2020-11-03 2023-07-11 red 2021-09-15 2 4913151F 2020-11-03 2023-07-11 red 2021-09-16 3 4913151F 2020-11-03 2023-07-11 red 2021-09-17 4 4913151F 2020-11-03 2023-07-11 red 2021-09-18 ... ... ... ... ... ... 2644 D0C57FCA 2022-06-13 2023-06-13 yellow 2023-06-09 2645 D0C57FCA 2022-06-13 2023-06-13 yellow 2023-06-10 2646 D0C57FCA 2022-06-13 2023-06-13 yellow 2023-06-11 2647 D0C57FCA 2022-06-13 2023-06-13 yellow 2023-06-12 2648 D0C57FCA 2022-06-13 2023-06-13 yellow 2023-06-13 [2649 rows x 5 columns] | 3 | 2 |
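The pd.IntervalIndex alternative mentioned above can look roughly like this when the (start_date, end_date) ranges do not overlap, so every date maps to at most one block row (toy data used here for brevity):

import pandas as pd

block = pd.DataFrame({
    'Identifier': ['A', 'B'],
    'start_date': pd.to_datetime(['2021-01-01', '2021-01-10']),
    'end_date': pd.to_datetime(['2021-01-05', '2021-01-12']),
})
dates = pd.date_range('2021-01-01', '2021-01-12')

intervals = pd.IntervalIndex.from_arrays(block.start_date, block.end_date, closed='both')
idx = intervals.get_indexer(dates)   # -1 where a date falls inside no interval

out = (block.iloc[idx[idx >= 0]]
            .assign(Record_Date=dates[idx >= 0].values)
            .reset_index(drop=True))
print(out)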
76,661,046 | 2023-7-11 | https://stackoverflow.com/questions/76661046/can-you-have-a-progress-bar-for-sorting-a-list | I have a list containing ~50k elements of a custom data type (the latter is probably not important for my question) I'm sorting the list using pythons builtin list.sort() method. myList: List[Foo] = ... myList.sort(key=Foo.x) Since the sorting takes a couple of minutes, I would like to have a progress bar for the sorting process. I haven't found any solutions online. Is this even possible? I'm aware sorting algorithms may be complex and it might not be possible to measure the sorting progress at all. However, it would be fine for my usecase to have a "rough" measurement, like 25%, 50%, 75%... | Given the interface provided by sort, you don't have too many options for hooking into the actual sorting algoithm. However, if 50K keys is slow, it is most likely that invoking the key function is slow, which is computed prior to the actual sorting. From the docs: The key corresponding to each item in the list is calculated once and then used for the entire sorting process. So if you keep count of how many times the key method is invoked, you could gain a rough estimate for the whole sorting process. To do so, you could create a wrapper for the key function to manage this book keeping: def progress_sort(data, *, key=lambda v: v, on_increment=None): total = len(data) if on_increment is None: start = time.time() def on_increment(c): print(f"{time.time() - start}: {c/total * 100}%") count = 0 def progress_key(val): nonlocal count if count % int(total / 10) == 0: on_increment(count) count += 1 return key(val) data.sort(key=progress_key) on_increment(total) Example with some dummy data and a slow key method def slow_key(val): time.sleep(1.0/500_000) return val data = [random.randint(-50_000, 50_000)/1.0 for i in range(50_000)] progress_sort(data, key=slow_key) 0.0: 0.0% 0.5136210918426514: 10.0% 1.0435900688171387: 20.0% 1.6074442863464355: 30.0% 2.156496524810791: 40.0% 2.9734878540039062: 50.0% 3.4794368743896484: 60.0% 4.016523599624634: 70.0% 4.558118104934692: 80.0% 5.047779083251953: 90.0% 5.545809030532837: 100.0% This method could then be combined with whatever type of library you wish to use for updating status. You may wish to further configure the data supplied to the provided hooks, however, the principle remains the same. Here's an example that uses tqdm: def slow_key(val): time.sleep(1.0/500_000) return val data = [random.randint(-50_000, 50_000)/1.0 for i in range(50_001)] with tqdm(total=len(data), desc="sorting") as pbar: progress_sort(data, key=slow_key, on_increment=lambda c: pbar.update(c - pbar.n)) pbar.set_description("Finished") sorting: 80%|ββββββββ | 40000/50001 [00:05<00:01, 5802.30it/s] Finished: 100%|ββββββββββ| 50001/50001 [00:07<00:00, 6489.14it/s] | 7 | 14 |
76,649,259 | 2023-7-9 | https://stackoverflow.com/questions/76649259/why-is-performing-matrix-multiplication-on-a-pre-transposed-matrix-faster-than-o | Consider the following code in Python, where multiplying a pre-transposed matrix yields faster execution time compared to multiplying a non-transposed matrix: import numpy as np import time # Generate random matrix matrix_size = 1000 matrix = np.random.rand(matrix_size, matrix_size) # Transpose the matrix transposed_matrix = np.transpose(matrix) # Multiply non-transposed matrix start = time.time() result1 = np.matmul(matrix, matrix) end = time.time() execution_time1 = end - start # Multiply pre-transposed matrix start = time.time() result2 = np.matmul(transposed_matrix, transposed_matrix) end = time.time() execution_time2 = end - start print("Execution time (non-transposed):", execution_time1) print("Execution time (pre-transposed):", execution_time2) Surprisingly, multiplying the pre-transposed matrix is faster. One might assume that the order of multiplication should not affect the performance significantly, but there seems to be a difference. Why does processing a pre-transposed matrix result in faster execution time compared to a non-transposed matrix? Is there any underlying reason or optimization that explains this behavior? UPDATE I've taken the comments about the cache into consideration and I'm generating new matrices on each loop: import numpy as np import time import matplotlib.pyplot as plt # Generate random matrices matrix_size = 3000 # Variables to store execution times execution_times1 = [] execution_times2 = [] # Perform matrix multiplication A @ B^T and measure execution time for 50 iterations num_iterations = 50 for _ in range(num_iterations): matrix_a = np.random.rand(matrix_size, matrix_size) start = time.time() result1 = np.matmul(matrix_a, matrix_a) end = time.time() execution_times1.append(end - start) # Perform matrix multiplication A @ B and measure execution time for 50 iterations for _ in range(num_iterations): matrix_b = np.random.rand(matrix_size, matrix_size) start = time.time() result2 = np.matmul(matrix_b, matrix_b.T) end = time.time() execution_times2.append(end - start) # Print average execution times avg_execution_time1 = np.mean(execution_times1) avg_execution_time2 = np.mean(execution_times2) #print("Average execution time (A @ B^T):", avg_execution_time1) #print("Average execution time (A @ B):", avg_execution_time2) # Plot the execution times plt.plot(range(num_iterations), execution_times1, label='A @ A') plt.plot(range(num_iterations), execution_times2, label='B @ B.T') plt.xlabel('Iteration') plt.ylabel('Execution Time') plt.title('Matrix Multiplication Execution Time Comparison') plt.legend() plt.show() # Display BLAS configuration np.show_config() Results: blas_mkl_info: libraries = ['mkl_rt'] library_dirs = ['C:/Users/User/anaconda3\\Library\\lib'] define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)] include_dirs = ['C:/Users/User/anaconda3\\Library\\include'] blas_opt_info: libraries = ['mkl_rt'] library_dirs = ['C:/Users/User/anaconda3\\Library\\lib'] define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)] include_dirs = ['C:/Users/User/anaconda3\\Library\\include'] lapack_mkl_info: libraries = ['mkl_rt'] library_dirs = ['C:/Users/User/anaconda3\\Library\\lib'] define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)] include_dirs = ['C:/Users/User/anaconda3\\Library\\include'] lapack_opt_info: libraries = ['mkl_rt'] library_dirs = ['C:/Users/User/anaconda3\\Library\\lib'] 
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)] include_dirs = ['C:/Users/User/anaconda3\\Library\\include'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 not found = AVX512F,AVX512CD,AVX512_SKX,AVX512_CLX,AVX512_CNL | It doesn't seem really obvious on my machine. On 1000 runs. I get these timings (x=non transposed, y=transposed). There are more red dots (under the y=x axis) than blue dots. 685/315 to be more accurate. So, p-value wise, no doubt, that cannot be just random effect. (1000 coins drawn, with 685 heads is a clear anomaly) But timing-wise, it is not obvious. The cluster is mainly centered on the y=x axis. Now I started this answer because I was pretty sure that this was a cache problem. When I was in engineer school (a very long time ago, when those considerations where even more important they are now, and taught by teachers who, themselves date back from a time when it was even more important), in HPC lessons, we were taught to be very careful when switching from Fortran to C, because of cache effect: when iterating an array, it is very important to interate it in the order it is in memory (which in numpy is still called either "C" order vs "fortran" order, proof that it is still an important consideration for people who care more that I do - I rarely need to care in my everyday job, hence the reason I invoke school memory and not job memory). Because when dealing with the number that is right next to the one you've just processed before in memory, that number is probably already loaded in cache memory. While if the next number you process is 1 row under (in C order, so further in memory), then, it is more likely that it is not in cache. With nowadays cache size, it takes big matrix so that it makes a difference tho. Since transpose doesn't move any data, and just adjust the strides, the effect of working on transposed matrix is that you change the order in memory of the processed data. So, if you consider the naive algorithm for i in range(N): for j in range(N): res[i,j]=0 for k in range(N): res[i,j] += A[i,k] * B[k,j] if A and B are in C order, then iteration of matrix A is done in memory order (we iterate along a row, columns by columns, so adjacent number in memory one after the other), while B is not. If that order is inversed, for example, because they have been transposed, then it is the reverse. It is B that is iterated in the order that won't pose a cache problem and A that is not. Well, no need to stay too long on this, since I tell all that to explain why I wanted to investigate possibility of a cache problem (my intend was to compare the same multiplication with a copy of a transposed matrix, so that it is the same matrix multiplication, with only order changing. And also to try to see if there is a threshold in matrix size under which the phenomenon is not visible, which would also validate cache problem, since, for this to matter, the whole matrix must not fit in cache) But, the first step while doing so, is also to start to avoid bias, because first computation use data not yet in cache, while second use data already in cache (especially in the case where the whole matrix fits in cache). So, here is the first thing I've tried: just inverted computation order. Compute fist on transposed_matrix, and then on matrix. This time, shifting is in favor of blue dots (and, of course, I've changed only the computation order, not the meaning of axis. 
So x is still matrix@matrix timing, and y still transposed_matrix The number of red dots this time is 318 vs 682. So, almost exactly the opposite as before. So, conclusion (valid at least for my machine): this is indeed a cache problem. But a cache problem caused only by the fact that there is a bias in favor of transposed_matrix: it is already in cache (since the data are the same as the data of matrix), when you use it to compute. Edit: about question update. As I said in comment (but since the question was updated, and this answer, already quite upvoted, I think it is important that it also appears in that answer, for future readers), the update is something different. Your first question was about A@A vs [email protected]. The second appeared to be faster. But it was only with 1 single operation. So the reason, as I've shown, was just due to the fact that when second operation is done, A is already in cache memory (which was not the case, when first operation was done). Because A.T is the same data as A (not a copy. But the same data, at same memory address). My previous answer shows that if you reverse, and compute [email protected] fist, then A@A, then it is, on the contrary [email protected] that is slower, and in the exact same proportion. Another way to show it is import numpy as np import timeit A=np.random.normal(0,1,(1000,1000)) B=A.copy() A@A print(timeit.timeit(lambda: A@A, number=20)) [email protected] print(timeit.timeit(lambda: [email protected], number=20)) B@B print(timeit.timeit(lambda: B@B, number=20)) (The fact to perform A@A before timeit, is just to ensure that first of the 20 computations is not slower because of cache consideration) On my computer, all those operation takes 1 second almost exactly (the number=20 was chosen so that it takes 1 second) This time, no cache effect, because we run things 21 times each, not counting time at the 1st run. And no influence of .T Now, for your question update, that is something else [email protected] print(timeit.timeit(lambda: [email protected], number=20)) A.T@A print(timeit.timeit(lambda: A.T@A, number=20)) [email protected] print(timeit.timeit(lambda: [email protected], number=20)) This times, the 1st two operations takes only 650 ms. And not because of cache: it is the same whatever the order of those operations. That is because numpy is able to detect that A.T and A are the same matrix, but with one transposition operation. (It is quite easy for it to detect so: it is the same data address, but strides and shape (well here shape is square anyway; but more importantly, strides is inverted) are inverted: A.strides β (8000,8), A.T.strides β (8, 8000). So, it is easy for numpy to realize that this is a [email protected] situation. And therefore to apply an algorithm that computes that faster. As said in comment (and said before I did by others in comments, but also by others, days ago, who misread your first question... but were right to do so, since they answered in advance to what is now the update): [email protected] is symmetrical. So, there are some easy correction here. Note that timeit.timeit(lambda: A@B, number=20) timeit.timeit(lambda: [email protected], number=20) are both 1 second (as A@A, and [email protected] are). So, it is easy to understand that A@B, A@A, [email protected] all just use one standard "matrix multiplication" algorithm. Which [email protected], A.T@A use a faster one. Since B is a copy of A, [email protected] has the same symmetrical result as [email protected]. 
But this times, because it is a copy, numpy cannot realize that it is a [email protected] situation, cannot realize that it is a symmetrical result (before the result is computed). So [email protected] has the same standard "1 second" timing as A@A. While [email protected] has not. Which confirms that it does rely on the same address, inverted strides criteria. As long as it is either not the same address, or not the same strides inverted, standard algorithm, 1 second. If it is both the same address, but strides inverted, then, special algorithm, 650 ms. | 10 | 12 |
76,657,664 | 2023-7-10 | https://stackoverflow.com/questions/76657664/pyspark-iterate-rows-and-drop-rows-with-specified-value | I have a dataframe like this Column A Column B Hello [{id: 1000, abbreviatedId: 1, name: βJohn", planet: βEarthβ, solarsystem: βMilky Wayβ, universe: βthis oneβ, continent: {id: 33, country: βChina", Capital: βBejingβ}, otherId: 400, language: βCantoneseβ, species: 23409, creature: βHumanβ}] Bye [{id: 2000, abbreviatedId: 2, name: βJames", planet: βEarthβ, solarsystem: βMilky Wayβ, universe: βthis oneβ, continent: {id: 33, country: βRussia", Capital: βMoscowβ}, otherId: 500, language: βRussianβ, species: 12308, creature: βHumanβ}] How do I iterate through the rows of the dataframe to drop all rows with country: "China" before writing to external location? I have tried if df.select(array_contains(col("columnb.continent.country"), "China")) != True: df.write.format("delta").mode("overwrite").save("file://path/") and for row in df.rdd.collect(): if df.select(array_contains(col("columnb.continent.country"), "China")) != True: df.drop(row) df.write.format("delta").mode("overwrite").save("file://path/") | One way is using exists array function. from pyspark.sql.functions import expr from pyspark.sql import Row df = spark.createDataFrame([ [ [ Row(**{"id": 1000, "abbreviatedId": 1, "name": "John", "planet": "Earth", "solarsystem": "Milky Way", "universe": "this one", "continent": Row(**{"id": 33, "country": "China", "Capital": "Bejing"}), "otherId": 400, "language": "Cantonese", "species": 23409, "creature": "Human"}), Row(**{"id": 1001, "abbreviatedId": 2, "name": "Alex", "planet": "Mars", "solarsystem": "Milky Way", "universe": "this one", "continent": Row(**{"id": 34, "country": "Japan", "Capital": "Tokyo"}), "otherId": 400, "language": "Japanese", "species": 23409, "creature": "Human"}) ] ]], ["b"]) df.filter(expr("not exists(b, x -> x.continent.country == 'China')")) The syntax Row(**dict) will create an instance of Row through argument unpacking. | 3 | 0 |
76,637,539 | 2023-7-7 | https://stackoverflow.com/questions/76637539/dataprocinstantiateinlineworkflowtemplateoperator-error-in-pysparkjob-template | Hello fellow Stackoverflowers, I have been trying to use the DataprocInstantiateInlineWorkflowTemplateOperator to run a pyspark job. Sadly after following all the documentation I am getting error in Composer ValueError: Protocol message OrderedJob has no "stepID" field. Here is the template that I am using. { "id": "my-workflow-template", "jobs": [ { "stepID": "123456dfgy", "pysparkJob": { "mainPythonFileUri": "gs://gcp-gmp/app.py" } } ], "name": "My Workflow Template", "placement": { "managedCluster": { "clusterName": "my-managed-cluster", "config": { "master_config": { "disk_config": { "boot_disk_size_gb": 1024, "boot_disk_type": "pd-standard" }, "machine_type_uri": "n1-standard-4", "num_instances": 1 }, "worker_config": { "disk_config": { "boot_disk_size_gb": 1024, "boot_disk_type": "pd-standard" }, "machine_type_uri": "n1-standard-4", "num_instances": 2 } } } } } Here is the entire python code. import json from datetime import datetime ,timedelta from airflow import DAG from airflow.utils.trigger_rule import TriggerRule from airflow.providers.google.cloud.operators.dataproc import DataprocInstantiateInlineWorkflowTemplateOperator from airflow.operators.dummy import DummyOperator DAG_ID= 'Dataproc_Instantiate_Inline_Workflow_TemplateOper_example' JSON_CONTENT = """{ "id": "my-workflow-template", "jobs": [ { "stepID": "123456dfgy", "pysparkJob": { "mainPythonFileUri": "gs://my-bucket/app.py" } } ], "name": "My Workflow Template", "placement": { "managedCluster": { "clusterName": "my-managed-cluster", "config": { "master_config": { "disk_config": { "boot_disk_size_gb": 1024, "boot_disk_type": "pd-standard" }, "machine_type_uri": "n1-standard-4", "num_instances": 1 }, "worker_config": { "disk_config": { "boot_disk_size_gb": 1024, "boot_disk_type": "pd-standard" }, "machine_type_uri": "n1-standard-4", "num_instances": 2 } } } } }""" template_dict = json.loads(JSON_CONTENT) default_args = { 'start_date': datetime(2023, 6, 29), 'retries': 1, 'retry_delay': timedelta(minutes=2), } dag = DAG( dag_id = DAG_ID, default_args=default_args, schedule_interval=None, ) start = DummyOperator( task_id = 'start', dag = dag ) create_dataproc_template = DataprocInstantiateInlineWorkflowTemplateOperator( template = template_dict, task_id = 'create_dataproc_template', project_id= 'my-project', region = 'us-central1', gcp_conn_id = 'google_cloud_default', dag = dag ) complete = DummyOperator( task_id = 'complete', trigger_rule = TriggerRule.NONE_FAILED, dag = dag ) start >> create_dataproc_template >> complete Strangely when I was not using the stepID field the error was ValueError: Protocol message OrderedJob has no "pysparkJob" field. Any help is appreciated. | After doing a lot of research and trial and error, I discovered that the keys we provided in the config JSON differ from those described in the Google documentation. While running this from Composer/ Airflow we have to make the keys from camel case to snake case. e.g. 
stepID --> step_id I am providing the correct JSON here : { "id":"my-workflow-template", "jobs":[ { "step_id":"123456dfgy", "pyspark_job":{ "main_python_file_uri":"gs://my-bucket/app.py" } } ], "placement":{ "managed_cluster":{ "cluster_name":"my-managed-cluster", "config":{ "master_config":{ "disk_config":{ "boot_disk_size_gb":1024, "boot_disk_type":"pd-standard" }, "machine_type_uri":"n1-standard-4", "num_instances":1 }, "worker_config":{ "disk_config":{ "boot_disk_size_gb":1024, "boot_disk_type":"pd-standard" }, "machine_type_uri":"n1-standard-4", "num_instances":2 } } } } } | 4 | 2 |
76,651,826 | 2023-7-10 | https://stackoverflow.com/questions/76651826/how-to-chain-multiple-promptnodes-together-in-a-haystack-generativeqapipeline | I'm trying to chain together a simple question answering prompt to an elaboration prompt using Haystack. I had the following code working just fine: import os from haystack.document_stores import InMemoryDocumentStore from haystack.nodes import BM25Retriever from haystack.nodes import PromptNode, PromptTemplate, AnswerParser from haystack.pipelines import Pipeline, TextIndexingPipeline class Bert: pipe = None def __init__(self, data_path): print("Initializing model...") doc_dir = data_path document_store = InMemoryDocumentStore(use_bm25=True) files_to_index = [os.path.join(doc_dir, f) for f in os.listdir(doc_dir)] indexing_pipeline = TextIndexingPipeline(document_store) indexing_pipeline.run_batch(file_paths=files_to_index) print("Done indexing") retriever = BM25Retriever(document_store=document_store, top_k=2) lfqa_prompt = PromptTemplate( prompt="""Synthesize a comprehensive answer from the following text for the given question. Provide a clear and concise response that summarizes the key points and information presented in the text. Your answer should be in your own words and be no longer than 50 words. \n\n Related text: {join(documents)} \n\n Question: {query} \n\n Answer:""", output_parser=AnswerParser(), ) prompt_node = PromptNode(model_name_or_path="google/flan-t5-large", default_prompt_template=lfqa_prompt) elaboration_prompt = PromptTemplate( prompt="""Elaborate on the answer to the following question given the related texts. Provide additional details to the answer in your own words. The final response should be between 100-200 words. \n\n Related text: {join(documents)} \n\n Question: {query} \n\n Answer: {prompt_node}""", output_parser=AnswerParser(), ) elaboration_node = PromptNode(model_name_or_path="google/flan-t5-large", default_prompt_template=elaboration_prompt) self.pipe = Pipeline() self.pipe.add_node(component=retriever, name="retriever", inputs=["Query"]) self.pipe.add_node(component=prompt_node, name="prompt_node", inputs=["retriever"]) #self.pipe.add_node(component=elaboration_node, name="elaboration_node", inputs=["Query", "retriever", "prompt_node"]) def generate(self, query): prediction = self.pipe.run(query=query) return prediction But when I tried to chain another PromptNode to the end of the lfqa_prompt, I ran into errors. 
I did some research online and saw that I may need to use Shapers and I edited my code as follows: import os from haystack.document_stores import InMemoryDocumentStore from haystack.nodes import AnswerParser, BM25Retriever, BaseComponent, PromptNode, PromptTemplate, Shaper from haystack.schema import Answer, Document, List from haystack.pipelines import Pipeline, TextIndexingPipeline class QAPromptOutputAdapter(BaseComponent): outgoing_edges = 1 def run(self, **kwargs): print(kwargs) return {"answers": [Answer(answer=result, type="generative") for result in results]}, "output_1" def run_batch(self): pass class Bert: pipe = None def __init__(self, data_path): print("Initializing model...") doc_dir = data_path document_store = InMemoryDocumentStore(use_bm25=True) files_to_index = [os.path.join(doc_dir, f) for f in os.listdir(doc_dir)] indexing_pipeline = TextIndexingPipeline(document_store) indexing_pipeline.run_batch(file_paths=files_to_index) print("Done indexing") retriever = BM25Retriever(document_store=document_store, top_k=2) lfqa_prompt = PromptTemplate( prompt="""Synthesize a comprehensive answer from the following text for the given question. Provide a clear and concise response that summarizes the key points and information presented in the text. Your answer should be in your own words and be no longer than 50 words. \n\n Related text: {join(documents)} \n\n Question: {query} \n\n Answer:""", #output_parser=AnswerParser(), ) prompt_node = PromptNode(model_name_or_path="google/flan-t5-large", default_prompt_template=lfqa_prompt) question_shaper = Shaper(func="value_to_list", inputs={"value": "query", "target_list": "documents"}, outputs=["questions"]) answer_shaper = Shaper(func="value_to_list", inputs={"value": "prompt_node.results", "target_list": "documents"}, outputs=["answers"]) elaboration_prompt = PromptTemplate( prompt="""Elaborate on the answer to the following question given the related texts. Provide additional details to the answer in your own words. The final response should be between 100-200 words. \n\n Related text: {join(documents)} \n\n Question: {questions} \n\n Answer: {outputs}""", output_parser=AnswerParser(), ) elaboration_node = PromptNode(model_name_or_path="google/flan-t5-large", default_prompt_template=elaboration_prompt) self.pipe = Pipeline() self.pipe.add_node(component=retriever, name="retriever", inputs=["Query"]) self.pipe.add_node(component=prompt_node, name="prompt_node", inputs=["retriever"]) self.pipe.add_node(component=question_shaper, name="question_shaper", inputs= ["prompt_node"]) self.pipe.add_node(component=answer_shaper, name="answer_shaper", inputs=["prompt_node"]) self.pipe.add_node(component=elaboration_node, name="elaboration_node", inputs=["question_shaper", "retriever", "answer_shaper"]) def generate(self, query): prediction = self.pipe.run(query=query) return prediction Now I just get: Exception: Exception while running node 'answer_shaper': name 'results' is not defined Is this the correct solution to chaining two prompt nodes together? Should I be using shapers or am I going about this completely wrong? I'm fairly new to Haystack and generative AI models in general, so help is greatly appreciated. | The answer is supposedly to set the "output_variable" parameter of the PromptNode like this: lfqa_node = PromptNode( model_name_or_path="google/flan-t5-large", default_prompt_template=lfqa_prompt, output_variable="my_answer" ) And then you can use the output like: elaboration_prompt = PromptTemplate( prompt=""" ... 
Previous answer: {my_answer} \n\n New answer: """ ) However, this solution did not seem to work for me, so I simply wrote two separate pipelines, and manually parsed the response from the first pipeline and inputted the answer variable into the second pipeline like this: lfqa = self.pipe.run(query=query) lfqa_answer = lfqa['results'][0] elaboration = self.elaboration_pipeline.run(query=lfqa_answer) | 2 | 1 |
76,666,045 | 2023-7-11 | https://stackoverflow.com/questions/76666045/unable-to-use-numpy-dot-with-numba | I am getting errors trying to run numpy.dot with numba. It seems to be supported (eg: numpy: Faster np.dot/ multiply(element-wise multiplication) when one array is the same) but eg this code gives me the following error (it runs fine if I remove the njit part) Code: import numpy as np import numba @numba.njit() def tst_dot(): a = np.array([[1, 0], [0, 1]]) b = np.array([[4, 1], [2, 2]]) return np.dot(a, b) print(tst_dot()) Error: No implementation of function Function(<function dot at 0x00000280CC542EF0>) found for signature: >>> dot(array(int64, 2d, C), array(int64, 2d, C)) There are 4 candidate implementations: - Of which 2 did not match due to: Overload in function 'dot_2': File: numba\np\linalg.py: Line 525. With argument(s): '(array(int64, 2d, C), array(int64, 2d, C))': Rejected as the implementation raised a specific error: TypingError: Failed in nopython mode pipeline (step: native lowering) Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<function dot at 0x00000280CC542EF0>) found for signature: >>> dot(array(int64, 2d, C), array(int64, 2d, C), array(int64, 2d, C)) There are 4 candidate implementations: - Of which 2 did not match due to: Overload in function 'dot_2': File: numba\np\linalg.py: Line 525. With argument(s): '(array(int64, 2d, C), array(int64, 2d, C), array(int64, 2d, C))': Rejected as the implementation raised a specific error: TypingError: too many positional arguments raised from C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\core\typing\templates.py:784 - Of which 2 did not match due to: Overload in function 'dot_3': File: numba\np\linalg.py: Line 784. With argument(s): '(array(int64, 2d, C), array(int64, 2d, C), array(int64, 2d, C))': Rejected as the implementation raised a specific error: LoweringError: Failed in nopython mode pipeline (step: native lowering) unsupported dtype for <BLAS function>() File "venv\lib\site-packages\numba\np\linalg.py", line 817: def codegen(context, builder, sig, args): <source elided> return lambda left, right, out: _impl(left, right, out) ^ During: lowering "$10call_function.4 = call $2load_deref.0(left, right, out, func=$2load_deref.0, args=[Var(left, linalg.py:817), Var(right, linalg.py:817), Var(out, linalg.py:817)], kws=(), vararg=None, varkwarg=None, target=None)" at C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\np\linalg.py (817) raised from C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\core\errors.py:837 During: resolving callee type: Function(<function dot at 0x00000280CC542EF0>) During: typing of call at C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\np\linalg.py (460) File "venv\lib\site-packages\numba\np\linalg.py", line 460: def dot_impl(a, b): <source elided> out = np.empty((m, n), a.dtype) return np.dot(a, b, out) ^ During: lowering "$8call_function.3 = call $2load_deref.0(left, right, func=$2load_deref.0, args=[Var(left, linalg.py:582), Var(right, linalg.py:582)], kws=(), vararg=None, varkwarg=None, target=None)" at C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\np\linalg.py (582) raised from C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\core\typeinfer.py:1086 - Of which 2 did not match due to: Overload in function 'dot_3': File: numba\np\linalg.py: Line 784. 
With argument(s): '(array(int64, 2d, C), array(int64, 2d, C))': Rejected as the implementation raised a specific error: TypingError: missing a required argument: 'out' raised from C:\Users\a_che\PycharmProjects\minCovTarget\venv\lib\site-packages\numba\core\typing\templates.py:784 During: resolving callee type: Function(<function dot at 0x00000280CC542EF0>) During: typing of call at C:\Users\a_che\PycharmProjects\minCovTarget\tst4.py (164) File "tst4.py", line 164: def tst_dot(a, b): <source elided> return np.dot(a, b) ^ I have tried adding out=None as a third argument (even though it is meant to be optional) but it didn't help. I was expecting the same result as if I was not using numba. | The docs say: Basic linear algebra is supported on 1-D and 2-D contiguous arrays of floating-point and complex numbers: numpy.dot() ... However, your two arrays contain integers. Note indeed, the error message: dot(array(int64, 2d, C), array(int64, 2d, C)) Hence, the trick is to change the dtype: import numpy as np import numba @numba.njit() def tst_dot(): a = np.array([[1, 0], [0, 1]], dtype=np.float32) b = np.array([[4, 1], [2, 2]], dtype=np.float32) return np.dot(a, b) print(tst_dot()) [[4. 1.] [2. 2.]] | 5 | 5 |
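If the integer arrays come from elsewhere and cannot be created as floats in the first place, a possible alternative is to cast inside the jitted function, since ndarray.astype is supported in nopython mode:

import numpy as np
import numba

@numba.njit
def tst_dot(a, b):
    # np.dot in nopython mode needs float or complex operands, so cast first
    return np.dot(a.astype(np.float64), b.astype(np.float64))

a = np.array([[1, 0], [0, 1]])
b = np.array([[4, 1], [2, 2]])
print(tst_dot(a, b))   # [[4. 1.] [2. 2.]]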
76,663,521 | 2023-7-11 | https://stackoverflow.com/questions/76663521/where-does-py-modules-go-in-pyproject-toml-setuptools | Where does the py_modules parameter (I don't know what it is called) go in a pyproject.toml file? [project] name = "elements" version = "1.0.0" description = "Magic" readme = "README.md" requires-python = ">=3.10" license = {file = "LICENSE"} authors = [ {name = "x"}, {name = "y"} ] maintainers = [ {name = "x"}, {name = "y"} ] [project.scripts] stuff = "stuff:__init__" [tool.setuptools] packages = ["baseclasses", "elements"] # this raises error. py_modules = ["__init__"] [build-system] requires = ["setuptools>=43.0.0", "wheel"] build-backend = "setuptools.build_meta" My project structure looks like this: . βββ .gitignore βββ LICENSE βββ README.md βββ __init__.py βββ baseclasses β βββ __init__.py β βββ bunch_of_py_files.py βββ elements β βββ __init__.py β βββ bunch_of_py_files.py βββ pyproject.toml I have tried to put it under the project section and the tool.setuptools section, but both raised the project must not contain {'py_modules'} properties error. I was looking through setuptools and pip docs, but the only thing I could find was for setup.py files, which I read was being deprecated and as such didn't want to use. | I was able to find the solution here [tool.setuptools] packages = ["baseclasses", "elements"] py-modules = ["__init__"] # dash, not underscore | 5 | 7 |
76,653,905 | 2023-7-10 | https://stackoverflow.com/questions/76653905/what-are-the-mechanics-of-python-decorator-for-property-setter-and-deleter | The subject of Python properties is covered extensively here, and the Python documentation provides a pure Python implementation here. However, I am still not fully clear on the mechanics of the decorator functionality itself. More specifically, for identically named getters and setters x, how does the setter function object x (before being passed to the @x.setter decorator) not end-up rewriting the property object bound to x (thus making the decorator call meaningless)? Consider the following example: class C(object): def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" print("getter of x called") return self._x @x.setter def x(self, value): print("setter of x called") self._x = value @x.deleter def x(self): print("deleter of x called") del self._x From what I understand about decorators (please correct me if I'm wrong), @property followed by def x(self): ... in the getter definition is the pure equivalent of def x(self): ... followed by x = property(x). Yet, when I try to replace all three decorators with the class constructor call syntax equivalent (while still keeping the function names identical), it stops working, like so: class C(object): def __init__(self): self._x = None def x(self): """I'm the 'x' property.""" print("getter of x called") return self._x x = property(x) def x(self, value): print("setter of x called") self._x = value x = x.setter(x) def x(self): print("deleter of x called") del self._x x = x.deleter(x) ... which results in AttributeError: 'function' object has no attribute 'setter' on line 14 (x = x.setter(x)). This seems expected, since def x(self, value)... for the setter should overwrite x = property(x) from above. What am I missing? | As pointed out by @kindall, the answer seems to lie in the decorator: and the fact that with it, Python does not seem to bind the raw function name to the namespace, but simply creates the raw function object, then calls the decorator function on it, and only binds the final result. This is touched upon in the answer here, answered much better here, both citing PEP318 which explains that: @dec2 @dec1 def func(arg1, arg2, ...): pass ... is equivalent to: def func(arg1, arg2, ...): pass func = dec2(dec1(func)) though without the intermediate creation of a variable named func. As suggested here, this seems to be also directly evidenced by using the dis module to "disassemble" the code and see what is actually executing. Here is the excerpt from the output of dis command (python -m dis <filename>) ran on the code from the first example of the original question above. 
(This looks like the part where Python reads and interprets the class body: Disassembly of <code object C at 0x00000212AAA1DB80, file "PracticeRun6/property1.py", line 1>: 1 0 RESUME 0 2 LOAD_NAME 0 (__name__) 4 STORE_NAME 1 (__module__) 6 LOAD_CONST 0 ('C') 8 STORE_NAME 2 (__qualname__) 2 10 LOAD_CONST 1 (<code object __init__ at 0x00000212AACA5BD0, file "PracticeRun6/property1.py", line 2>) 12 MAKE_FUNCTION 0 14 STORE_NAME 3 (__init__) 5 16 LOAD_NAME 4 (property) 6 18 LOAD_CONST 2 (<code object x at 0x00000212AAA234B0, file "PracticeRun6/property1.py", line 5>) 20 MAKE_FUNCTION 0 5 22 PRECALL 0 26 CALL 0 6 36 STORE_NAME 5 (x) 11 38 LOAD_NAME 5 (x) 40 LOAD_ATTR 6 (setter) 12 50 LOAD_CONST 3 (<code object x at 0x00000212AAA235A0, file "PracticeRun6/property1.py", line 11>) 52 MAKE_FUNCTION 0 11 54 PRECALL 0 58 CALL 0 12 68 STORE_NAME 5 (x) 16 70 LOAD_NAME 5 (x) 72 LOAD_ATTR 7 (deleter) 17 82 LOAD_CONST 4 (<code object x at 0x00000212AA952CD0, file "PracticeRun6/property1.py", line 16>) 84 MAKE_FUNCTION 0 16 86 PRECALL 0 90 CALL 0 17 100 STORE_NAME 5 (x) 102 LOAD_CONST 5 (None) 104 RETURN_VALUE We can see (from what I understand) that for each decorated function definition: the inner function is loaded as a code object: LOAD_CONST (<code object x at ...> made into a function object: MAKE_FUNCTION passed straight to the decorator call without being bound: PRECALL followed by CALL and finally bound/stored in final form: STORE_NAME. Finally, here is my ugly-looking but working (!) solution that tries to emulate this decorator behavior all while not using decorators and keeping the same raw function names (as initially sought in the original qeustion): from types import FunctionType class C(object): def __init__(self): self._x = None x = property( FunctionType( code=compile( r""" def _(self): print("getter of x called") return self._x """, '<string>', 'exec').co_consts[0], globals=globals(), ) ) x = x.setter( FunctionType( code=compile( r""" def _(self, value): print("setter of x called") self._x = value """, '<string>', 'exec').co_consts[0], globals=globals(), ) ) x = x.deleter( FunctionType( code=compile( r""" def _(self): print("deleter of x called") del self._x """, '<string>', 'exec').co_consts[0], globals=globals(), ) ) c = C() c.x = 120 print(c.x) del c.x Hopefully, someone with actual knowledge of CPython, or a good source, can write or point to an actual pure Python emulation of Decorator behavior, that most closely resembles what Python does under the hood. | 3 | 2 |
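For reference, the decorator chain can also be reproduced in plain Python without reusing the name x for the raw functions, which is a minimal sketch of what the @property/@x.setter/@x.deleter forms expand to; the _get_x/_set_x/_del_x names are illustrative only:

class C:
    def __init__(self):
        self._x = None
    def _get_x(self):
        print("getter of x called")
        return self._x
    def _set_x(self, value):
        print("setter of x called")
        self._x = value
    def _del_x(self):
        print("deleter of x called")
        del self._x
    # Equivalent of @property, @x.setter, @x.deleter, but with distinct
    # intermediate names so nothing overwrites the property before it is built
    x = property(_get_x)
    x = x.setter(_set_x)
    x = x.deleter(_del_x)

c = C()
c.x = 120
print(c.x)
del c.x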
76,624,569 | 2023-7-5 | https://stackoverflow.com/questions/76624569/assertion-error-custom-exception-not-raised | I am trying to write unittests for a function I wrote that includes a try/except block. I have raised a custom exception if my string contains less than 3 characters. I wanted to test that the error gets raised when inputting a string less than 3 characters long. When running the function, if I input a string less than 3 characters, e.g. "ha" - I get the correct error message: "There are not enough letters in your sentence" which leads me to believe that I have raised the custom exception correctly, however, googling has told me that this means I have not raised my custom exception in my function. I just cannot see or understand where I have gone wrong. Function file: from collections import Counter # set custom exception to raise when a word with less than 3 letters is given class NotEnoughLetters(Exception): pass # create a function that will return the 3 most common letters def three_most_common(string: str): string = string.replace(" ", "") try: if not all(char.isalpha() for char in string): raise ValueError("Your input must contain a string") # using all because in this instance I haven't accounted for strings and ints mixed if len(string) < 3: raise NotEnoughLetters("There are not enough letters in your sentence") most_common = Counter(string).most_common(3) letters = [key for key, value in most_common] except ValueError as err: return err except NotEnoughLetters as e: return e else: return f"Here are your most common letters: 1) {letters[0]} 2) {letters[1]} 3) {letters[2]}" finally: print("The program is running, please wait for your output") Test file: import unittest from unittest import TestCase from common_letters import three_most_common, NotEnoughLetters class TestCommonLetters(TestCase): # valid input def test_good_string(self): expected_input = "cheesy puff" expected_output = "Here are your most common letters: 1) e 2) f 3) c" result = three_most_common(expected_input) self.assertEqual(expected_output, result) # add assertion here # invalid input def test_bad_string(self): expected_input = "cheesy puff" false_output = "Here are your most common letters: 1) f 2) f 3) e" result = three_most_common(expected_input) self.assertNotEqual(false_output, result) # edge case 1, having 3 letters def test_having_three_letters(self): expected_input = "hay" expected_output = "Here are your most common letters: 1) h 2) a 3) y" result = three_most_common(expected_input) self.assertEqual(expected_output, result) # edge case 2, having 2 letters TODO this didn't work so get clarification tomorrow as to why not def test_having_two_letters(self): with self.assertRaises(NotEnoughLetters): three_most_common(string="ha") if __name__ == '__main__': unittest.main() This is giving me the following output: Traceback (most recent call last): File "C:\Homework-LivvyW\Homework-LivvyW\Homework_week7-LivvyW\test_common_letters.py", line 31, in test_having_two_letters with self.assertRaises(NotEnoughLetters): AssertionError: NotEnoughLetters not raised I have tried to look at similar stackoverflow question/answers but sadly still not comprehending why/where I have gone wrong. Thank you! | Not raise the Custom Exceptions An other possibility is to remove your custom Exceptions an return directly the error message. 
So your code becomes: from collections import Counter # create a function that will return the 3 most common letters def three_most_common(string: str): string = string.replace(" ", "") try: if not all(char.isalpha() for char in string): #raise ValueError("Your input must contain a string") # using all because in this instance I haven't accounted for strings and ints mixed return "Your input must contain a string" if len(string) < 3: #raise NotEnoughLetters("There are not enough letters in your sentence") return "There are not enough letters in your sentence" most_common = Counter(string).most_common(3) letters = [key for key, value in most_common] #except ValueError as err: # return err #except NotEnoughLetters as e: # return e #else: return f"Here are your most common letters: 1) {letters[0]} 2) {letters[1]} 3) {letters[2]}" finally: print("The program is running, please wait for your output") How to change the test code Accordingly the test code becomes the following: import unittest from unittest import TestCase #from common_letters import three_most_common, NotEnoughLetters from common_letters import three_most_common class TestCommonLetters(TestCase): # valid input def test_good_string(self): expected_input = "cheesy puff" expected_output = "Here are your most common letters: 1) e 2) f 3) c" result = three_most_common(expected_input) self.assertEqual(expected_output, result) # add assertion here # invalid input def test_bad_string(self): expected_input = "cheesy puff" false_output = "Here are your most common letters: 1) f 2) f 3) e" result = three_most_common(expected_input) self.assertNotEqual(false_output, result) # edge case 1, having 3 letters def test_having_three_letters(self): expected_input = "hay" expected_output = "Here are your most common letters: 1) h 2) a 3) y" result = three_most_common(expected_input) self.assertEqual(expected_output, result) # edge case 2, having 2 letters TODO this didn't work so get clarification tomorrow as to why not """def test_having_two_letters(self): with self.assertRaises(NotEnoughLetters): three_most_common(string="ha")""" # edge case 2, having 2 letters TODO this didn't work so get clarification tomorrow as to why not def test_having_two_letters(self): result = three_most_common(string="ha") self.assertEqual("There are not enough letters in your sentence", result) if __name__ == '__main__': unittest.main() In the test code, the test case test_having_two_letters() verifies the error message returned by the function three_most_common() (as in the other test cases) and not if it is raised an Exception. | 4 | 1 |
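Alternatively, if the custom exception should stay, the original assertRaises test passes as soon as the function stops swallowing its own exception. This is a minimal sketch assuming the try/except around NotEnoughLetters is simply removed so the exception propagates to the caller:

from collections import Counter

class NotEnoughLetters(Exception):
    pass

def three_most_common(string: str):
    string = string.replace(" ", "")
    if not all(char.isalpha() for char in string):
        raise ValueError("Your input must contain a string")
    if len(string) < 3:
        # no except clause catches this, so assertRaises in the test can see it
        raise NotEnoughLetters("There are not enough letters in your sentence")
    letters = [key for key, value in Counter(string).most_common(3)]
    return f"Here are your most common letters: 1) {letters[0]} 2) {letters[1]} 3) {letters[2]}"

With that change, the test case with self.assertRaises(NotEnoughLetters): three_most_common(string="ha") passes unmodified.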
76,644,359 | 2023-7-8 | https://stackoverflow.com/questions/76644359/how-to-type-hint-a-function-which-takes-a-callable-and-its-required-positional-a | Here is my function: def call_func(func, *args): return func(*args) I think I have two options here: Using TypeVarTuple -> in Callable[[*Ts], Any] form. Ts = TypeVarTuple("Ts") T = TypeVar("T") def call_func(func: Callable[[*Ts], T], *args: *Ts) -> T: return func(*args) Currently Mypy has problem with [*Ts] part. it says: Invalid type comment or annotation. (I also enabled --enable-incomplete-feature=TypeVarTuple.) Using ParamSpec -> in Callable[P, Any] form. P = ParamSpec("P") T = TypeVar("T") def call_func(func: Callable[P, T], *args: P.args) -> T: return func(*args) This time Mypy says: ParamSpec must have "*args" typed as "P.args" and "**kwargs" typed as "P.kwargs". It looks like it wants me to also specify kwargs. What is the correct way of doing it? Is there any technical difference between using TypeVarTuple and ParamSec in Callable? | The only way to fully type-check something that only accepts positional-only arguments is with PEP 646, which mypy (as of the time of this answer) does not fully implement. However, you can manipulate typing.ParamSpec and typing.Protocol to make positional-only and keyword-only callable types which throw errors at a call site if someone tries to pass keyword and positional arguments, respectively. The core idea is to form a union with Callable[P, R] with some Protocol CallbackProto such that: Callable[P, R] unioned with CallbackProto::__call__(self, *args: typing.Any) -> R rejects all attempts to pass any keyword arguments; Callable[P, R] unioned with CallbackProto::__call__(self, **kwargs: typing.Any) -> R rejects all attempts to pass any positional arguments. In your case, let's say that this was the desired output: P = ParamSpec("P") R = TypeVar("R") def call_func(func: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R: return func(*args) >>> def function_(a: int, b: str) -> None: ... ... ... >>> call_func(function_, 1, "") # OK >>> call_func(function_, 1, b="") # Unexpected keyword argument "b" This can be done by making call_func reject all attempts to pass keyword arguments: from __future__ import annotations import typing as t if t.TYPE_CHECKING: import collections.abc as cx P = t.ParamSpec("P") R = t.TypeVar("R") CallFuncT = t.TypeVar("CallFuncT", bound="CallFunc") class PositionalOnlyCallable(t.Protocol): def __call__(self, *args: t.Any) -> t.Any: ... class CallFunc(t.Protocol): def __call__(self, func: cx.Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R: ... def asPositionalOnlyCallable(f: CallFuncT, /) -> CallFuncT | PositionalOnlyCallable: """ No-op decorator which manipulates the typing signature. Unions a given callable's type with something which only accepts positional arguments. """ return f @asPositionalOnlyCallable def call_func(func: cx.Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R: return func(*args) >>> def function_(a: int, b: str) -> None: ... ... ... >>> call_func(function_, 1, "") # OK >>> call_func(function_, 1, b="") # mypy: Unexpected keyword argument "b" for "__call__" of "PositionalOnlyCallable" [call-arg] | 4 | 3 |
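If positional-only enforcement is not actually required, the simpler fix for the second mypy error in the question is just to annotate and forward **kwargs as well — a minimal sketch:

from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
T = TypeVar("T")

def call_func(func: Callable[P, T], *args: P.args, **kwargs: P.kwargs) -> T:
    # mypy requires both P.args and P.kwargs whenever a ParamSpec is used,
    # so keyword arguments are accepted and forwarded too
    return func(*args, **kwargs)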
76,655,600 | 2023-7-10 | https://stackoverflow.com/questions/76655600/how-to-use-proxy-in-selenium-4-1 | I am trying to build an automation project which will use a proxy to register on a site, but I am unable to get it to work. I have tried selenium-wire for proxy authentication, but it still doesn't work: from seleniumwire import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.firefox.options import Options import os import keyboard import pyautogui from time import sleep user = 'jesbeqki' pwd = '1wbt8dnqtu8w' end = '2.56.119.93:5074' seleniumwire_options ={ "proxies" : { "https": "http://"f"{user}:{pwd}@{end}/", "http": "http://"f"{user}:{pwd}@{end}/", 'verify_ssl': False } } firefox_options = Options() # firefox_options.add_argument("--headless") driver = webdriver.Firefox(options=firefox_options, seleniumwire_options=seleniumwire_options) driver.get('https://httpbin.org/ip') text = driver.find_element(By.TAG_NAME, 'body').text print(text) It still shows my original IP. | You can use SeleniumBase to proxy to a site. pip install seleniumbase, then run this with Python after filling in the proxy details: from seleniumbase import SB with SB(uc=True, proxy="USER:PASS@IP:PORT") as sb: sb.driver.get("https://whatismyip.com") sb.sleep(5) | 3 | 3 |
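As an aside on the original selenium-wire attempt: the documented options key is the singular 'proxy', not 'proxies', so a sketch along these lines may be worth trying before switching libraries. This is untested here, and the credentials/endpoint are placeholders standing in for the question's own values:

from seleniumwire import webdriver

user, pwd, end = "USER", "PASS", "HOST:PORT"  # placeholders for the question's proxy details

seleniumwire_options = {
    'proxy': {
        'http': f'http://{user}:{pwd}@{end}',
        'https': f'https://{user}:{pwd}@{end}',
        'no_proxy': 'localhost,127.0.0.1',
    }
}
driver = webdriver.Firefox(seleniumwire_options=seleniumwire_options)
driver.get('https://httpbin.org/ip')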
76,624,970 | 2023-7-6 | https://stackoverflow.com/questions/76624970/how-can-i-change-the-parameter-signature-in-the-help-docs-of-a-decorated-functio | I have a class that includes a method that takes two parameters, a and b, like so: class Foo: def method(self, a, b): """Does something""" x, y, z = a.x, a.y, b.z return do(x, y, z) When running help(Foo) in the Python REPL, you would see something like this: class Foo(builtins.object) | Methods defined here: | | method(self, a, b) | Does something I have updated the class with a decorator that manipulates the arguments passed into the method so instead of taking a and b and unpacking them, the decorator will do the unpacking and pass x, y, and z into the method: def unpack(fn): def _unpack(self, a, b): x, y, z = a.x, a.y, b.z return fn(self, x, y, z) return _unpack class Foo: @unpack def method(self, x, y, z): """Does something""" return do(x, y, z) This works fine except that the help string isn't very helpful anymore: class Foo(builtins.object) | Methods defined here: | | method = _unpack(self, a, b) The standard way to fix this is to use functools.wraps: from functools import wraps def unpack(fn): @wraps(fn) def _unpack(self, a, b): x, y, z = a.x, a.y, b.z return fn(self, x, y, z) return _unpack And that works great, too, except that it shows the method as taking x, y, and z as arguments, when really it still takes a and b (that is, 2 arguments instead of 3), so the help doc is a bit misleading: class Foo(builtins.object) | Methods defined here: | | method(self, x, y, z) | Does something Is there a way to modify the function wrapper so it correctly grabs the doc string and other attributes as in functools.wraps but also shows the arguments accepted by the wrapper? For example, I would like the help string to show: class Foo(builtins.object) | Methods defined here: | | method(self, a, b) | Does something even when the method is wrapped by unpack. (I have examined the source code of functools.wraps but I could not figure out if that can copy the function signature or not.) | TL;DR Delete the __wrapped__ attribute that is added automatically by functools.wraps before returning the wrapper: def unpack(fn): @wraps(fn) def wrapper(self, a, b): x, y, z = a.x, a.y, b.z return fn(self, x, y, z) del wrapper.__wrapped__ # <-- force inspection to show `wrapper` signature return wrapper Details The built-in help function is just a a wrapper around pydoc.help, which in turn is just an instance of pydoc.Helper. Skipping over a few steps, it turns out that eventually for rendering the documentation for any kind of function, the TextDoc.docroutine method is called. And that in turn utilizes inspect.signature to retrieve the signature of the function in question (see here). All it does then is take the string representation of that Signature object, adds a bit more stuff, and glues the docstring underneath it. The docstring is essentially just the __doc__ attribute of the function object in question, which functools.wraps (or more precisely functools.update_wrapper) copies over from the "actual" function. This is why using the @wraps decorator solves the docstring issue. The problem for you is that update_wrapper also always adds the __wrapped__ attribute to the wrapper function and assigns the original function to that. Turns out that the inspect library knows this and is on the lookout for that attribute. 
The idea presumably being that you usually want to inspect the "actual" underlying function, not the wrapper, since you took the time to use update_wrapper in the first place. The inspect.unwrap method is used to drill down to the non-wrapped function before analyzing that and retrieving its parameter specification. Therefore the straightforward solution is to simply get rid of the __wrapped__ attribute on your wrapper function so that inspect will be forced to analyze that. It should be obvious, but the __wrapped__ attribute is added for a reason: To still have access to the original function after decoration. So while this does solve your problem, it may introduce other problems. PS Taking this one step further, you could theoretically keep the __wrapped__ attribute, but assign it a stub with any signature you want. This will "fool" the inspect tools into returning the signature of that stub, which will in turn end up in the auto-generated documentation from pydoc/help. For example: from collections.abc import Callable from functools import wraps from typing import Protocol, TypeVar R = TypeVar("R") T = TypeVar("T") class HasXY(Protocol): x: int y: str class HasZ(Protocol): z: float def unpack(fn: Callable[[T, int, str, float], R]) -> Callable[[T, HasXY, HasZ], R]: @wraps(fn) def wrapper(self: T, a: HasXY, b: HasZ) -> R: x, y, z = a.x, a.y, b.z return fn(self, x, y, z) def _signature(self, a: HasXY, b: HasZ): # type: ignore[no-untyped-def] raise NotImplementedError __annotations__ = getattr(fn, "__annotations__", {}) for name in ("self", "return"): if name in __annotations__: _signature.__annotations__[name] = __annotations__[name] wrapper.__wrapped__ = _signature # type: ignore[attr-defined] return wrapper class Foo: @unpack def method(self, x: int, y: str, z: float) -> None: """Does something""" print(f"method({x=}, {y=}, {z=})") help(Foo) class Bar: x = 1 y = "a" class Baz: z = 3.14 foo = Foo() foo.method(Bar, Baz) Output: Help on class Foo in module __main__: class Foo(builtins.object) | Methods defined here: | | method(self, a: __main__.HasXY, b: __main__.HasZ) -> None | Does something | | ---------------------------------------------------------------------- | ... method(x=1, y='a', z=3.14) The code above will incidentally pass mypy --strict without errors. Again, this is somewhat hack-ish, in that that the method's __wrapped__ attribute is now just a reference to the _signature stub, so you will not be able to call the original method directly in any way. | 3 | 4 |
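A compact way to check the effect of the TL;DR without reading help() output is to ask inspect directly, since pydoc builds the signature line from inspect.signature — a minimal sketch with a stubbed-out method body:

from functools import wraps
import inspect

def unpack(fn):
    @wraps(fn)
    def wrapper(self, a, b):
        return fn(self, a.x, a.y, b.z)
    # force inspect (and therefore help/pydoc) to report the wrapper's parameters
    del wrapper.__wrapped__
    return wrapper

class Foo:
    @unpack
    def method(self, x, y, z):
        """Does something"""

print(inspect.signature(Foo.method))  # -> (self, a, b)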
76,652,410 | 2023-7-10 | https://stackoverflow.com/questions/76652410/pytest-xdist-teardown-after-all-workers-finish | When I'm running pytest normally it works perfectly fine (before first test it's upgrading migration and after last test it's downgrading migration). Without using xdist I can just set fixture scope='session' and its work that way. But when I run pytest-xdist (tests in parallel) it's upgrading migration before each test and downgrading migration after each test (I want to setUp before first test - first worker, and tearDown after last test - when last worker finishes). I think that setUp before first worker I can handle with File Lock (and also I don't need it that much because in migration upgrade I'm just creating some tables if not exist so if it was already upgraded it just won't do anything). I'm interested especially in tearDown after last worker finshes. import pytest import os import subprocess @pytest.fixture(autouse=True, scope='session') def handle_database(): """Create necesarry tables in db (using migration) and drop them after tests""" # Change directory to test/api_simulator os.chdir('test/api_simulator') # Perform migration upgrade subprocess.run(['flask', 'db', 'upgrade']) print('\n\nUpgrade migration\n\n') # Change directory back to the original directory os.chdir('../..') yield # Change directory to test/api_simulator os.chdir('test/api_simulator') # Perform migration downgrade subprocess.run(['flask', 'db', 'downgrade']) print('\n\nDowngrade migration\n\n') # Change directory back to the original directory os.chdir('../..') I was searching about this issue in web but I found nothing good about tearDown with xdist. | The fixture is running as part of a test, even if it's only once in session scope. This means it's running in the process xdist opened. To run this only once you move the code to pytest_sessionstart and pytest_sessionfinish in conftest.py. Those methods will be executed once in "regular" scope before/after xdist kicks in. Note that the pytest_session functions will run again as part of the tests cycle so you need to check it def __handle_database(config, command): if not hasattr(config, 'workerinput'): os.chdir('test/api_simulator') # Perform migration upgrade subprocess.run(['flask', 'db', command]) print('\n\nUpgrade migration\n\n') # Change directory back to the original directory os.chdir('../..') def pytest_sessionstart(session): __handle_database(session.config, 'upgrade') def pytest_sessionfinish(session, exitstatus): __handle_database(session.config, 'downgrade') | 3 | 5 |
76,649,888 | 2023-7-9 | https://stackoverflow.com/questions/76649888/how-do-i-copy-an-image-from-the-output-in-jupyter-notebook-7 | I've been working with Jupyter Notebooks for quite a while. When working with visualisations, I like to copy the output image from a cell by right clicking the image and selecting "Copy Image" from the context menu: I like working with the direct copy from the notebook, especially for answering questions on Stack Overflow, so I'd rather not store them to disk. That would be a real reason to revert to legacy notebooks for me. However with the Notebook 7 migration coming, I gave the beta a try by running pip install notebook --pre --upgrade and to my surprise I can't right click, copy the output image because the new Jupyter context menu pops up instead. This really breaks my workflow. How can I copy an image from the output of a cell in notebook 7+? | Hold down Shift while right-clicking to access the native browser menu, as spelled out here. For context, Version 7 of Jupyter Notebook is built off JupyterLab components (see here and here), and that means that some of the interface and GUI abilities you can learn by seeing how JupyterLab users handle them. Tested by launching a temporary session via MyBinder from here and works in Chrome on a Mac. | 11 | 15 |
76,649,501 | 2023-7-9 | https://stackoverflow.com/questions/76649501/gitpython-error-module-git-does-not-explicitly-export-attribute-repo-attr | I am using Python 3.10.4, GitPython version 3.1.31, mypy version 1.4.1: $ pip show GitPython Name: GitPython Version: 3.1.31 Location: /home/hakon/.pyenv/versions/3.10.4/lib/python3.10/site-packages Requires: gitdb $ python --version Python 3.10.4 $ mypy --version mypy 1.4.1 (compiled: yes) If I run mypy on this minimal example (git-python-types.py): import git repo = git.Repo('some_dir') I get the following error: $ mypy --strict git-python-types.py git-python-types.py:3: error: Module "git" does not explicitly export attribute "Repo" [attr-defined] Found 1 error in 1 file (checked 1 source file) Any ideas on why this error occurs and how to fix it? Some clues: I can see the following line in the GitPython source code: from git.repo import Repo # @NoMove @IgnorePep8 but I am not sure whether mypy is reading this line or not. | As the documentation suggests, this is the proper usage: https://gitpython.readthedocs.io/en/stable/tutorial.html#meet-the-repo-type from git import Repo ... so I would consider this a minor bug that should be fixed in GitPython. You can work around it by importing Repo from the submodule, git.repo.Repo('some_dir'), or by not using strict mode. The check that --strict adds here is --no-implicit-reexport, which GitPython currently breaks. | 3 | 1 |
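Either workaround can be spelled out in a couple of lines; this sketch keeps --strict happy by importing Repo from the submodule where it is actually defined:

from git.repo import Repo

repo = Repo("some_dir")
# equivalently: import git.repo; repo = git.repo.Repo("some_dir")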
76,645,926 | 2023-7-9 | https://stackoverflow.com/questions/76645926/peewee-improperlyconfigured-postgres-driver-not-installed | I have installed peewee and Postgres. I am using FastAPI as the backend. When I run the request to create tables, it throws the shown error: INFO: 127.0.0.1:33284 - "GET /setup_db HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): peewee.ImproperlyConfigured: Postgres driver not installed! This is my main.py: from fastapi import FastAPI from models.schema import db, User, Plan from datetime import datetime from peewee import * db = PostgresqlDatabase('database_name', host='localhost', port=5432, user='user', password='password') app = FastAPI() @app.get('/') async def Home(): return "Welcome Home" @app.get('/setup_db') async def SetupDB(): db.connect() db.create_tables([User],[Plan]) new_user = User.create( username = 'user1', email = '[email protected]', hashed_password = 'pwd####', create_on = datetime(2001, 3, 29, 5, 15, 30) ) print(new_user) return new_user I tried importing the models from different folders but it did not work. Please let me know what drivers/config I need to add to make this work. Thanks & Cheers! | Install the PostgreSQL driver: pip install psycopg2 But I suggest you use a more robust solution, like sqlalchemy, asyncpg, and alembic :) | 3 | 6 |
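With psycopg2 installed, the original endpoint should get past the ImproperlyConfigured error. Note also that create_tables takes a single list of models, so the call in the question likely needs adjusting — a minimal sketch, reusing the question's own connection parameters and models.schema layout:

from peewee import PostgresqlDatabase
from models.schema import User, Plan  # the question's own module layout

db = PostgresqlDatabase('database_name', host='localhost', port=5432,
                        user='user', password='password')
db.connect()
db.create_tables([User, Plan])  # one list of models, not two separate arguments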
76,630,762 | 2023-7-6 | https://stackoverflow.com/questions/76630762/403-response-while-web-scraping-tried-many-different-solutions-and-implementati | So I've been scraping Glassdoor for company reviews for a while now. In the past, I had scraped the site pretty easily using one line of code: page = http.get(url+ id + ".htm",timeout= 1000000,headers=HEADERS) In fact, I didn't even need the "header" line! This code had worked wonders until I took about a 6-month break from the project. When I returned, instead of picking up right where I left off as I expected, every time I tried to request the webpage, I was returned <Response [403]> with the HTML correlating to a "security page". As a result, I have not been able to get any usable data from the website. As this is quite the common occurrence, I scoured through MANY stack overflow questions and have implemented their suggestions. All of the following changes have been attempted and have not resulted in any success: Adding a 'user-agent' header containing only 'Mozilla/5.0' Adding a more complicated 'user-agent' header such as: 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Mobile Safari/537.36' Adding a completely filled out header, based on this stack overflow question: Web Scraping getting error (HTTP Error 403: Forbidden) using urllib Adding only the 'user-agent' and 'Accept-Language' parameters, setting them equal to 'en-US;q=0.7,en;q=0.3' Instead of using the httpx (http) library, replacing it with the requests library Instead of using requests.get(), using session and session.get() as session keeps track of cookies allow some websites' blockers to be bypassed And the last, desperate thing I tried was to use proxies based on this question: Web scraping results in 403 Forbidden Error and using the free random proxies from https://free-proxy-list.net/. I cycled through about 3 different addresses with varying 'specs'--none of them worked. At this point, I have pretty much no leads on what thing sets off the red flags (perhaps my IP was flagged?) I have attached my code below in case that is helpful. Again, this is quite the new thing, just a few months ago everything was working smoothly... 
url = "https://www.glassdoor.com/Reviews/-Reviews-E" HEADERS = { 'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', 'Accept-Encoding':'gzip, deflate, br', 'Accept-Language':'en-US,en;q=0.9', 'Cache-Control': 'max-age=0', 'Cookie': 'Here is where I copied the cookies from my browser, I looked through it and it contained some info that Might be able to personally identify me so I removed it from the post', 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', 'Sec-Ch-Ua-Mobile':'?0', 'Sec-Ch-Ua-Platform':"Windows", 'Sec-Fetch-Dest': 'document', 'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'same-origin', 'Sec-Fetch-User':'?1', 'Upgrade-Insecure-Requests': '1', 'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' } if __name__ == "__main__": id = input("What Glassdoor company would you like to scrape [enter an id]: ") #getting "403 Forbidden " session = requests.Session() session.headers = HEADERS session.proxies = {'http':'209.141.62.12'} #this code returns 403 forbidden #page = http.get(url+ id + ".htm",timeout= 1000000,headers=HEADERS) page = session.get(url + id + ".htm",timeout = 100000,headers=HEADERS,proxies={'http':'209.141.62.12'}) try: data = re.findall(r'apolloState":\s*({.+})};', page.text)[0] except IndexError: dir = create_log_directory("failed",id) logging.basicConfig(filename= dir+"info.log",encoding='utf-8',filemode='w',level=logging.DEBUG) logging.critical("failed") logging.debug(url + id + ".htm") logging.debug(page.text) sys.exit() For context, you can assume all the logging functions work fine and that the line of code in the try will throw an IndexError only if I am returned an error or incorrect page. Additionally, I removed the 'Cookie' from the Cookie section of the header as I want to avoid possibly doxxing myself. However, it is worth noting that the 'Cookie' (and all other headers) came directly from my chrome browser (I visited the site and used chrome debug tools to identify my request headers). For testing purposes, you could replace the cookie with your own cookie. I really hope someone has an idea for how to fix this, and if it works fine on anyone's local device, perhaps something about the request coming from me gives it away. I do hope my post is not too wordy and repetitive, but I really wanted to show the extent of what I tried and don't want this post to be closed (as either a duplicate or for 'lack of effort'). Finally, I want to mention that while the code above is not asynchronous, the solution MUST be applicable to use with some form of async. For example, previously I used the package aiohttp with asyncio to send multiple get requests simultaneously. While I am completely open to using new packages, I would desire that any such packages possess some async capabilities :) thanks again to anyone who took the time to read this! | I found a solution that can bypass Cloudflare's protections, it is a Python module cloudscraper (which is a fork of cloudflare-scrape). It works on a small scale, but it says in the README that if you get reCAPTCHA challenge, then it won't be able to scrape the page. 
It is pretty simple to use: import re import cloudscraper url = "https://www.glassdoor.com/Reviews/-Reviews-E" HEADERS = { 'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', 'Accept-Encoding':'gzip, deflate, br', 'Accept-Language':'en-US,en;q=0.9', 'Cache-Control': 'max-age=0', 'Cookie': 'Here is where I copied the cookies from my browser, I looked through it and it contained some info that Might be able to personally identify me so I removed it from the post', 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', 'Sec-Ch-Ua-Mobile':'?0', 'Sec-Ch-Ua-Platform':"Windows", 'Sec-Fetch-Dest': 'document', 'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'same-origin', 'Sec-Fetch-User':'?1', 'Upgrade-Insecure-Requests': '1', 'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36' } if __name__ == "__main__": id = input("What Glassdoor company would you like to scrape [enter an id]: ") #getting "403 Forbidden " scraper = cloudscraper.CloudScraper() #scraper.headers = HEADERS #scraper.proxies = {'http':'209.141.62.12'} page = scraper.get(url + id + '.htm', timeout = 100000) try: data = re.findall(r'apolloState":\s*({.+})};', page.text)[0] except IndexError as e: print(e) sys.exit() You can set headers and proxies attributes just like with requests.Session() object, but it works without them. | 3 | 0 |
76,630,196 | 2023-7-6 | https://stackoverflow.com/questions/76630196/how-to-fix-random-seed-in-pytorch-while-keeping-the-dropout-randomness | I am trying to approximate a Bayesian model by keeping the dropout probability during both training and inference (Monte Carlo dropout), in order to obtain the epistemic uncertainty of the model. Is there a way to fix all source of randomness for reproducibility (random seed), but to maintain the randomness of the dropout? # Set random seed for reproducibility seed = 123 torch.manual_seed(seed) random.seed(seed) np.random.seed(seed) # Training and Inference phase (with dropout) dropout_mask = torch.bernoulli(torch.full_like(input, 1 - self.dropout)) skip = self.skip0(input * dropout_mask / (1 - self.dropout)) for i in range(self.layers): residual = x filter = self.filter_convs[i](x) filter = torch.tanh(filter) gate = self.gate_convs[i](x) gate = torch.sigmoid(gate) x = filter * gate dropout_mask = torch.bernoulli(torch.full_like(x, 1 - self.dropout)) x = x * dropout_mask / (1 - self.dropout) s = x s = self.skip_convs[i](s) skip = s + skip if self.gcn_true: x = self.gconv1[i](x, adp) + self.gconv2[i](x, adp.transpose(1, 0)) else: x = self.residual_convs[i](x) x = x + residual[:, :, :, -x.size(3):] if idx is None: x = self.norm[i](x, self.idx) else: x = self.norm[i](x, idx) skip = self.skipE(x) + skip x = F.relu(skip) x = F.relu(self.end_conv_1(x)) x = self.end_conv_2(x) return x The above code produces same result everytime, which is not what I am trying to do. | I had to remove this line for things to work as expected. model.eval() This line seems to disable the dropout during inference. | 3 | 0 |
76,639,423 | 2023-7-7 | https://stackoverflow.com/questions/76639423/installing-libmagic-with-pip-fails | After installing in my Jupyter Notebook (as a container of JupyterLab as jovan user without access to root) the libmagic while having cmake 3.26.4 already installed in the conda env. I try to install install libmagic with pip: pip install python-libmagic but I keep getting error: Collecting python-libmagic Using cached python_libmagic-0.4.0-py3-none-any.whl Collecting cffi==1.7.0 (from python-libmagic) Using cached cffi-1.7.0.tar.gz (400 kB) Preparing metadata (setup.py) ... done Requirement already satisfied: pycparser in /opt/conda/envs/cho_env/lib/python3.10/site-packages (from cffi==1.7.0->python-libmagic) (2.21) Building wheels for collected packages: cffi Building wheel for cffi (setup.py) ... error error: subprocess-exited-with-error Γ python setup.py bdist_wheel did not run successfully. β exit code: 1 β°β> [254 lines of output] running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-cpython-310 creating build/lib.linux-x86_64-cpython-310/cffi copying cffi/ffiplatform.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/cffi_opcode.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/verifier.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/commontypes.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/vengine_gen.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/setuptools_ext.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/vengine_cpy.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/recompiler.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/cparser.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/lock.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/backend_ctypes.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/__init__.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/model.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/api.py -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/_cffi_include.h -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/parse_c_type.h -> build/lib.linux-x86_64-cpython-310/cffi copying cffi/_embedding.h -> build/lib.linux-x86_64-cpython-310/cffi running build_ext building '_cffi_backend' extension creating build/temp.linux-x86_64-cpython-310 creating build/temp.linux-x86_64-cpython-310/c gcc -pthread -B /opt/conda/envs/cho_env/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/cho_env/include -fPIC -O2 -isystem /opt/conda/envs/cho_env/include -fPIC -DUSE__THREAD -I/usr/include/ffi -I/usr/include/libffi -I/opt/conda/envs/cho_env/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-x86_64-cpython-310/c/_cffi_backend.o In file included from c/_cffi_backend.c:274: c/minibuffer.h: In function βmb_ass_sliceβ: c/minibuffer.h:66:5: warning: βPyObject_AsReadBufferβ is deprecated [-Wdeprecated-declarations] 66 | if (PyObject_AsReadBuffer(other, &buffer, &buffer_len) < 0) | ^~ In file included from /opt/conda/envs/cho_env/include/python3.10/genobject.h:12, from /opt/conda/envs/cho_env/include/python3.10/Python.h:110, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/abstract.h:343:17: note: declared here 343 | PyAPI_FUNC(int) PyObject_AsReadBuffer(PyObject *obj, | ^~~~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:277: c/file_emulator.h: In function βPyFile_AsFileβ: c/file_emulator.h:54:14: 
warning: assignment discards βconstβ qualifier from pointer target type [-Wdiscarded-qualifiers] 54 | mode = PyText_AsUTF8(ob_mode); | ^ In file included from c/_cffi_backend.c:281: c/wchar_helper.h: In function β_my_PyUnicode_AsSingleWideCharβ: c/wchar_helper.h:83:5: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations] 83 | Py_UNICODE *u = PyUnicode_AS_UNICODE(unicode); | ^~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here 580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( | ^~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h:84:5: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations] 84 | if (PyUnicode_GET_SIZE(unicode) == 1) { | ^~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here 446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h:84:5: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations] 84 | if (PyUnicode_GET_SIZE(unicode) == 1) { | ^~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here 580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( | ^~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h:84:5: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations] 84 | if (PyUnicode_GET_SIZE(unicode) == 1) { | ^~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here 446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h: In function β_my_PyUnicode_SizeAsWideCharβ: c/wchar_helper.h:99:5: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations] 99 | Py_ssize_t length = PyUnicode_GET_SIZE(unicode); | ^~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here 446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h:99:5: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations] 99 | Py_ssize_t length = PyUnicode_GET_SIZE(unicode); | ^~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: 
/opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here 580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( | ^~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h:99:5: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations] 99 | Py_ssize_t length = PyUnicode_GET_SIZE(unicode); | ^~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here 446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from c/_cffi_backend.c:281: c/wchar_helper.h: In function β_my_PyUnicode_AsWideCharβ: c/wchar_helper.h:118:5: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations] 118 | Py_UNICODE *u = PyUnicode_AS_UNICODE(unicode); | ^~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here 580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( | ^~~~~~~~~~~~~~~~~~~ c/_cffi_backend.c: In function βctypedescr_deallocβ: c/_cffi_backend.c:352:23: error: lvalue required as left operand of assignment 352 | Py_REFCNT(ct) = 43; | ^ c/_cffi_backend.c:355:23: error: lvalue required as left operand of assignment 355 | Py_REFCNT(ct) = 0; | ^ c/_cffi_backend.c: In function βcast_to_integer_or_charβ: c/_cffi_backend.c:3331:26: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations] 3331 | PyUnicode_GET_SIZE(ob), ct->ct_name); | ^~~~~~~~~~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here 446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~ c/_cffi_backend.c:3331:26: warning: βPyUnicode_AsUnicodeβ is deprecated [-Wdeprecated-declarations] 3331 | PyUnicode_GET_SIZE(ob), ct->ct_name); | ^~~~~~~~~~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:580:45: note: declared here 580 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode( | ^~~~~~~~~~~~~~~~~~~ c/_cffi_backend.c:3331:26: warning: β_PyUnicode_get_wstr_lengthβ is deprecated [-Wdeprecated-declarations] 3331 | PyUnicode_GET_SIZE(ob), ct->ct_name); | ^~~~~~~~~~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:446:26: note: declared here 446 | static inline Py_ssize_t _PyUnicode_get_wstr_length(PyObject *op) { | ^~~~~~~~~~~~~~~~~~~~~~~~~~ c/_cffi_backend.c: In function βb_complete_struct_or_unionβ: c/_cffi_backend.c:4251:17: warning: βPyUnicode_GetSizeβ is deprecated [-Wdeprecated-declarations] 4251 | do_align = PyText_GetSize(fname) > 0; | 
^~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:177:43: note: declared here 177 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_ssize_t) PyUnicode_GetSize( | ^~~~~~~~~~~~~~~~~ c/_cffi_backend.c:4283:13: warning: βPyUnicode_GetSizeβ is deprecated [-Wdeprecated-declarations] 4283 | if (PyText_GetSize(fname) == 0 && | ^~ In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:177:43: note: declared here 177 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_ssize_t) PyUnicode_GetSize( | ^~~~~~~~~~~~~~~~~ c/_cffi_backend.c:4353:17: warning: βPyUnicode_GetSizeβ is deprecated [-Wdeprecated-declarations] 4353 | if (PyText_GetSize(fname) > 0) { | ^~ In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:177:43: note: declared here 177 | Py_DEPRECATED(3.3) PyAPI_FUNC(Py_ssize_t) PyUnicode_GetSize( | ^~~~~~~~~~~~~~~~~ c/_cffi_backend.c: In function βprepare_callback_info_tupleβ: c/_cffi_backend.c:5214:5: warning: βPyEval_InitThreadsβ is deprecated [-Wdeprecated-declarations] 5214 | PyEval_InitThreads(); | ^~~~~~~~~~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/Python.h:130, from c/_cffi_backend.c:2: /opt/conda/envs/cho_env/include/python3.10/ceval.h:122:37: note: declared here 122 | Py_DEPRECATED(3.9) PyAPI_FUNC(void) PyEval_InitThreads(void); | ^~~~~~~~~~~~~~~~~~ c/_cffi_backend.c: In function βb_callbackβ: c/_cffi_backend.c:5255:5: warning: βffi_prep_closureβ is deprecated: use ffi_prep_closure_loc instead [-Wdeprecated-declarations] 5255 | if (ffi_prep_closure(closure, &cif_descr->cif, | ^~ In file included from c/_cffi_backend.c:15: /opt/conda/envs/cho_env/include/ffi.h:347:1: note: declared here 347 | ffi_prep_closure (ffi_closure*, | ^~~~~~~~~~~~~~~~ In file included from /opt/conda/envs/cho_env/include/python3.10/unicodeobject.h:1046, from /opt/conda/envs/cho_env/include/python3.10/Python.h:83, from c/_cffi_backend.c:2: c/ffi_obj.c: In function β_ffi_typeβ: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards βconstβ qualifier from pointer target type [-Wdiscarded-qualifiers] 744 | #define _PyUnicode_AsString PyUnicode_AsUTF8 | ^~~~~~~~~~~~~~~~ c/_cffi_backend.c:72:25: note: in expansion of macro β_PyUnicode_AsStringβ 72 | # define PyText_AS_UTF8 _PyUnicode_AsString | ^~~~~~~~~~~~~~~~~~~ c/ffi_obj.c:191:32: note: in expansion of macro βPyText_AS_UTF8β 191 | char *input_text = PyText_AS_UTF8(arg); | ^~~~~~~~~~~~~~ c/lib_obj.c: In function βlib_build_cpython_funcβ: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards βconstβ qualifier from pointer target type [-Wdiscarded-qualifiers] 744 | #define _PyUnicode_AsString PyUnicode_AsUTF8 | ^~~~~~~~~~~~~~~~ c/_cffi_backend.c:72:25: note: in expansion of macro β_PyUnicode_AsStringβ 72 | # define PyText_AS_UTF8 _PyUnicode_AsString | ^~~~~~~~~~~~~~~~~~~ c/lib_obj.c:129:21: note: in expansion of macro βPyText_AS_UTF8β 129 | char *libname = PyText_AS_UTF8(lib->l_libname); | ^~~~~~~~~~~~~~ c/lib_obj.c: In function βlib_build_and_cache_attrβ: /opt/conda/envs/cho_env/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards βconstβ qualifier from pointer target type [-Wdiscarded-qualifiers] 
744 | #define _PyUnicode_AsString PyUnicode_AsUTF8 | ^~~~~~~~~~~~~~~~ c/_cffi_backend.c:71:24: note: in expansion of macro β_PyUnicode_AsStringβ 71 | # define PyText_AsUTF8 _PyUnicode_AsString /* PyUnicode_AsUTF8 in Py3.3 */ | ^~~~~~~~~~~~~~~~~~~ c/lib_obj.c:208:15: note: in expansion of macro βPyText_AsUTF8β 208 | char *s = PyText_AsUTF8(name); | ^~~~~~~~~~~~~ In file included from c/cffi1_module.c:16, from c/_cffi_backend.c:6636: c/lib_obj.c: In function βlib_getattrβ: c/lib_obj.c:506:7: warning: assignment discards βconstβ qualifier from pointer target type [-Wdiscarded-qualifiers] 506 | p = PyText_AsUTF8(name); | ^ In file included from c/cffi1_module.c:19, from c/_cffi_backend.c:6636: c/call_python.c: In function β_get_interpstate_dictβ: c/call_python.c:20:30: error: dereferencing pointer to incomplete type βPyInterpreterStateβ {aka βstruct _isβ} 20 | builtins = tstate->interp->builtins; | ^~ c/call_python.c: In function β_ffi_def_extern_decoratorβ: c/call_python.c:73:11: warning: assignment discards βconstβ qualifier from pointer target type [-Wdiscarded-qualifiers] 73 | s = PyText_AsUTF8(name); | ^ error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for cffi Running setup.py clean for cffi Failed to build cffi ERROR: Could not build wheels for cffi, which is required to install pyproject.toml-based projects``` how can I fix this? | Correct me, if I'm wrong, but I guess you want to install the python bindings for libmagic. Find them here: https://pypi.org/project/python-magic/ and here https://github.com/ahupp/python-magic pip install python-magic There was an issue request 6 years ago by its maintainer to the person sitting on the abandoned "python-libmagic" name to change it, to no avail. The pip package "libmagic" from here https://pypi.org/project/libmagic/ is an even older bindings (basically a python interface to the library) and not the same as the library either. This package looks abandoned as well. And this abandoned pip package seems to interfere with the conda package that you need to install, so: pip uninstall libmagic And with conda install the actual library like you did: conda install -c conda-forge libmagic Just for reference in other cases, one can install system packages for libmagic, for instance: apt install libmagic1 libmagic-dev | 3 | 4 |
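Once the libmagic C library itself is available (via conda or apt as above), a quick smoke test of the python-magic bindings looks roughly like this; the file path is a placeholder:

import magic

# human-readable description, e.g. "PNG image data, ..."
print(magic.from_file("example.png"))
# MIME type, e.g. "image/png"
print(magic.from_file("example.png", mime=True))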
76,640,329 | 2023-7-7 | https://stackoverflow.com/questions/76640329/rds-mysql-to-kinesis-data-stream-pipeline-using-aws-lambda | I am trying to extract data from an RDS MySQL instance and load it into a Kinesis data stream using the put_record boto3 API. The connection using pymysql is working and I am able to print the table, but I cannot write the data into the Kinesis data stream. I get this error: "Object of type datetime is not JSON serializable". def lambda_handler(event, context): connection = pymysql.connect( host = endpoint, user = username, password = passwrd, database = database_name) cursor = connection.cursor() cursor.execute('SELECT * FROM table LIMIT 10') rows = cursor.fetchall() for row in rows: print("{0} {1} {2}".format(row[0], row[1], row[2])) kinesis = boto3.client('kinesis') response = kinesis.put_record( StreamName="test", Data=json.dumps(rows), PartitionKey="1" ) connection.commit() lambda_handler(None,None) I tried printing the table and it worked. The only issue is putting records into the Kinesis data stream. | This error is thrown when an object with a property of type datetime is passed to the json.dumps method. There are several ways to resolve it, but the quickest one is to convert all datetime properties to strings by using the default keyword argument of json.dumps: import json from datetime import datetime # The default keyword argument can # be set to a function that gets called # for objects that can't otherwise be serialized. response = kinesis.put_record( StreamName="test", Data=json.dumps(rows, default=str), PartitionKey="1" ) | 3 | 1 |
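If ISO-8601 timestamps are preferred over str()'s default datetime representation, a slightly more explicit default hook is a common alternative — a minimal sketch with sample data standing in for cursor.fetchall():

import json
from datetime import date, datetime

def json_default(obj):
    # called by json.dumps only for objects it cannot serialize itself
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

rows = [(1, "a", datetime(2001, 3, 29, 5, 15, 30))]  # stand-in for cursor.fetchall()
payload = json.dumps(rows, default=json_default)
print(payload)  # [[1, "a", "2001-03-29T05:15:30"]]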
76,640,378 | 2023-7-7 | https://stackoverflow.com/questions/76640378/python-zip-magic-for-classes-instead-of-tuples | Python zip function is its own inverse (in a way), thus we can do this: points = [(1,2), (3,4), (5,6), (7,8)] xs, ys = zip(*points) and now xs=[1,3,5,7] and ys=[2,4,6,8]. I wonder if something similar can be done with data class instances instead of tuples: from dataclasses import dataclass @dataclass class XY: "2d point" x: float | int y: float | int points = [XY(1,2), XY(3,4), XY(5,6), XY(7,8)] xs, ys = zip(*[(p.x,p.y) for p in points]) but without an explicit list comprehension. Of course, the result would not be a tuple (xs,ys) but a dict with keys x and y because, without an explicit list comprehension, we would be collecting all fields. | With astuple: from dataclasses import dataclass, astuple @dataclass class XY: "2d point" x: float | int y: float | int def __iter__(self): return iter(astuple(self)) points = [XY(1,2), XY(3,4), XY(5,6), XY(7,8)] xs, ys = zip(*points) Or instead map it: xs, ys = zip(*map(astuple, points)) | 26 | 19 |
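A third variant that avoids touching the class at all is operator.attrgetter, which pulls out the named fields per point — a minimal sketch:

from dataclasses import dataclass
from operator import attrgetter

@dataclass
class XY:
    x: float
    y: float

points = [XY(1, 2), XY(3, 4), XY(5, 6), XY(7, 8)]
# attrgetter("x", "y") returns the tuple (p.x, p.y) for each point
xs, ys = zip(*map(attrgetter("x", "y"), points))
print(xs, ys)  # (1, 3, 5, 7) (2, 4, 6, 8)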
76,638,630 | 2023-7-7 | https://stackoverflow.com/questions/76638630/how-to-select-last-column-of-polars-dataframe | I have the following polars dataframe and I wanted to select the last column dynamically. >>> import polars as pl >>> >>> df = pl.DataFrame({ ... "col1": [1, 2], ... "col2": ["2", "3"], ... "col3": [3, 4] ... }) >>> >>> df shape: (2, 3) ┌──────┬──────┬──────┐ │ col1 │ col2 │ col3 │ │ --- │ --- │ --- │ │ i64 │ str │ i64 │ ╞══════╪══════╪══════╡ │ 1 │ 2 │ 3 │ │ 2 │ 3 │ 4 │ └──────┴──────┴──────┘ >>> # How to select col3? which is the last column in df How can I do this in polars? I can do df.iloc[:,-1:] to select the last column if it's a pandas dataframe. Additional info: >>> import sys >>> sys.version_info sys.version_info(major=3, minor=11, micro=0, releaselevel='final', serial=0) >>> import polars >>> polars.__version__ '0.18.3' | To aid in operations like these, a polars.selectors module was introduced recently. You can simply use last from this module: import polars.selectors as cs df.select(cs.last()) shape: (2, 1) ┌──────┐ │ col3 │ │ --- │ │ i64 │ ╞══════╡ │ 3 │ │ 4 │ └──────┘ | 3 | 3
76,635,770 | 2023-7-7 | https://stackoverflow.com/questions/76635770/how-to-test-fastapi-application-without-sharing-the-same-application-between-tes | I'm trying to test my FastAPI code using pytest. some of the tests I'm doing require the application be in an initial state (some configuration to be reset, object data to be cleared and so on). both of the methods I tried put the app in the same state during the test session, I also tried to reload the module in which the app is in, but with no success. here is a minimal reproducible example of the code: file :main.py from fastapi import FastAPI app = FastAPI() my_state = False @app.get("/my_state/") def return_my_state(): return {"state": my_state } @app.post("/my_state/") def set_my_state(content:dict): global my_state my_state = content['set'] file: test_file1.py from fastapi.testclient import TestClient from main import app import pytest client = TestClient(app) @pytest.mark.asyncio async def test_check_status(): global client client.post("/my_state/", json={"set": True}) print(client.get("/my_state/").json()['state']) #prints True file: test_file2.py from fastapi.testclient import TestClient from main import app import pytest client = TestClient(app) @pytest.mark.asyncio async def test_check_status_other_file(): global client print(client.get("/my_state/").json()['state']) #prints True this prevents me from properly testing the application. is there another way that I can use other than running the application outside of test cases ?? | If you convert main.py to use an app factory pattern, and store your state in the app rather than in a global, like this: from fastapi import FastAPI def create_app(): app = FastAPI() app.state.my_state = False @app.get("/my_state/") def return_my_state(): return {"state": app.state.my_state} @app.post("/my_state/") def set_my_state(content: dict): app.state.my_state = content["set"] return app app = create_app() Then you can have pytest fixtures that create per-test instances of the app and test client. In conftest.py we have: import pytest from fastapi.testclient import TestClient from main import create_app @pytest.fixture def app(): return create_app() @pytest.fixture def client(app): return TestClient(app) And then test_file1.py becomes: from fastapi.testclient import TestClient from main import app import pytest @pytest.mark.asyncio async def test_check_status(client): client.post("/my_state/", json={"set": True}) assert client.get("/my_state/").json()['state'] And test_file2.py becomes: from fastapi.testclient import TestClient from main import app import pytest @pytest.mark.asyncio async def test_check_status_other_file(client): assert not client.get("/my_state/").json()["state"] Running pytest -v produces: ============================= test session starts ============================== platform linux -- Python 3.11.4, pytest-7.4.0, pluggy-1.2.0 -- /home/lars/tmp/python/.venv/bin/python cachedir: .pytest_cache rootdir: /home/lars/tmp/python/testapi plugins: xdist-3.3.1, mock-3.11.1, anyio-3.7.1, asyncio-0.21.0 asyncio: mode=Mode.STRICT collecting ... collected 2 items test_file1.py::test_check_status PASSED [ 50%] test_file2.py::test_check_status_other_file PASSED [100%] ============================== 2 passed in 0.03s =============================== This structure ensures that every test starts with a fresh copy of the application with no state retained from previous tests. All the code referenced in this answer is available in this github repository. | 3 | 6 |
76,633,711 | 2023-7-7 | https://stackoverflow.com/questions/76633711/langchain-text-splitter-behavior | I don't understand the following behavior of Langchain recursive text splitter. Here is my code and output. from langchain.text_splitter import RecursiveCharacterTextSplitter r_splitter = RecursiveCharacterTextSplitter( chunk_size=10, chunk_overlap=0, # separators=["\n"]#, "\n", " ", ""] ) test = """a\nbcefg\nhij\nk""" print(len(test)) tmp = r_splitter.split_text(test) print(tmp) Output 13 ['a\nbcefg', 'hij\nk'] As you can see, it outputs chunks of size 7 and 5 and only splits on one of the new line characters. I was expecting output to be ['a','bcefg','hij','k'] | According to the split_text function in RecursiveCharacterTextSplitter: def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = self._separators[-1] for _s in self._separators: if _s == "": separator = _s break if _s in text: separator = _s break # Now that we have the separator, split the text if separator: splits = text.split(separator) else: splits = list(text) # Now go merging things, recursively splitting longer texts. _good_splits = [] for s in splits: if self._length_function(s) < self._chunk_size: _good_splits.append(s) else: if _good_splits: merged_text = self._merge_splits(_good_splits, separator) final_chunks.extend(merged_text) _good_splits = [] other_info = self.split_text(s) final_chunks.extend(other_info) if _good_splits: merged_text = self._merge_splits(_good_splits, separator) # Here the remaining splits are merged while their cumulative length stays below the chunk size (10 in your example) final_chunks.extend(merged_text) return final_chunks this will merge the items as long as their cumulative length is less than the chunk size, which is 10 in your example - that is why 'a' and 'bcefg' come back together as the single chunk 'a\nbcefg'. | 5 | 2
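A standalone sketch (not LangChain's actual _merge_splits, just an illustration of the cumulative-length rule with chunk_size=10) showing why the splits come back merged:

def merge_splits(splits, separator, chunk_size=10):
    chunks, current, length = [], [], 0
    for s in splits:
        # adding s (plus a separator if the chunk already has content) must stay within chunk_size
        extra = len(s) + (len(separator) if current else 0)
        if current and length + extra > chunk_size:
            chunks.append(separator.join(current))
            current, length = [], 0
            extra = len(s)
        current.append(s)
        length += extra
    if current:
        chunks.append(separator.join(current))
    return chunks

print(merge_splits("a\nbcefg\nhij\nk".split("\n"), "\n"))
# ['a\nbcefg', 'hij\nk'] - the same grouping the question observed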
76,630,471 | 2023-7-6 | https://stackoverflow.com/questions/76630471/coverage-py-returns-error-no-source-for-code-when-running-tensorflow-keras | I have built unittests for my code and everything works fine when running them from vscode. Even running coverage run runs successfully. But when I try to run coverage report I get the following output: No source for code: 'C:\Users\XXX\AppData\Local\Temp\1\__autograph_generated_file4ilniiln.py' I found out that this happens exactly when I add a test case which contains tensorflow.keras.Model.fit function. If I remove tensorflow.keras.Model.fit then this message does not appear for coverage report command. How can I solve this issue? | Tensorflow rewrites your code and runs it from a new location. I need some assistance to get it working properly in coverage.py. See this issue for details: https://github.com/nedbat/coveragepy/issues/856 You can try using the -i flag to coverage report to ignore files it can't find. I don't know if you will still get complete data or not. | 3 | 3 |
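For reference, a hedged sketch of the programmatic equivalent of the suggested -i flag (the same option can also be set as ignore_errors = True under [report] in .coveragerc); the code being measured is elided here:

import coverage

cov = coverage.Coverage()
cov.start()
# ... run the tests / Keras model.fit code being measured ...
cov.stop()
cov.save()
# ignore_errors=True mirrors `coverage report -i`: source files coverage.py
# cannot find (such as tensorflow's autograph-generated temp modules) are skipped
cov.report(ignore_errors=True)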
76,624,477 | 2023-7-5 | https://stackoverflow.com/questions/76624477/whats-the-raku-equivalent-of-the-super-keyword-as-used-in-javascript-and-python | Whenever you extend a class in JavaScript or Python, the derived class must use the super keyword in order to set attributes and/or invoke methods and constructor in the base class. For example: class Rectangle { constructor(length, width) { this.name = "Rectangle"; this.length = length; this.width = width; } shoutArea() { console.log( `I AM A ${this.name.toUpperCase()} AND MY AREA IS ${this.length * this.width}` ); } rectHello() { return "Rectanglish: hello"; } } class Square extends Rectangle { constructor(length) { super(length, length); this.name = "Square" } squaHello() { const h = super.rectHello(); return "Squarish:" + h.split(':')[1]; } } const rect = new Rectangle(6, 4); rect.shoutArea(); //=> I AM A RECTANGLE AND MY AREA IS 24 const squa = new Square(5); squa.shoutArea(); //=> I AM A SQUARE AND MY AREA IS 25 console.log(squa.squaHello()); //=> Squarish: hello | What's the Raku equivalent of the super keyword as used in JavaScript and Python? One of Raku's re-dispatching functions.ΒΉ Basics of redispatch First some code that does not include a redispatch function: class Rectangle { has ($.length, $.width) } Rectangle.new: length => 6, width => 4; The Rectangle declaration does not even include construction code, just a declaration of two attributes and that's it. So what is the Rectangle.new call doing? It's inheriting the default new method provided by Raku's Mu class which initializes any class attributes whose names match any named arguments. If you want a custom constructor that accepts positional arguments, then you typically write a new method which lists which arguments you want in its signature, and then have that method call suitably invoke the default new, which requires named arguments, by calling an appropriate redispatch function with the arguments converted to named arguments: class Rectangle { has ($.length, $.width); method new ($length, $width) { callwith length => $length, width => $width } } Rectangle.new: 6, 4; callwith is a redispatch function which does a: call of the next matching candidate based on the original call.Β² with a fresh set of arguments. In this simple case the original call was Rectangle.new: 6, 4, and the next candidate is the new method inherited from Mu. A Rectangle class based on yours Rather than mimic your code I'll write an idiomatic Raku translation of it and comment on it. class Rectangle { has ($!length, $!width) is required is built; method new ($length, $width) { callwith :$length, :$width } method shoutArea { put uc "I am a {self.^name} and my area is {$!length * $!width}" } method rectHello { 'Rectanglish: hello' } } constant rect = Rectangle.new: 6, 4; rect.shoutArea; #=> I AM A RECTANGLE AND MY AREA IS 24 Commentary: It's a good habit to default to writing code that limits problems that can arise as code evolves. For this reason I've used $!length for the length attribute rather than $.length.Β³ I've added an is required annotation to the attributes. This means a failure to initialize attributes by the end of an instance's construction will mean an exception gets thrown. I've added an is built annotation to the attributes. This means that even an attribute without a public accessor -- as is the case for $!length and $!width due to my use of ! instead of . in the "twigil" -- can/will still be automatically initialized if there is a matching named argument in the construction call. 
:$length is short for length => $length. self.^name avoids unnecessary overhead. It's not important and quite possibly distracting to read about so feel free to ignore my footnote explaining it.β΄ A Square class based on yours I'll make the new for Square redispatch: class Square is Rectangle { method new ($side-length) { callwith $side-length, $side-length } method squaHello { "Squarish: {self.rectHello.split(':')[1].trim}" } } constant squa = Square.new: 5; squa.shoutArea; #=> I AM A SQUARE AND MY AREA IS 25 put squa.squaHello; #=> Squarish: hello Commentary: I picked the name $side-length for the Square's .new parameter, but the name doesn't matter because it's a positional parameter/argument. The redispatch is to the next candidate, just as it was before, abstractly speaking. Concretely speaking the next candidate this time is the method I had just defined in Rectangle (which in turn redispatches to the new of Mu). self.rectHello suffices because the method being called has a different name than the originally called method (squaHello). If you renamed the two methods in Rectangle and Square to have the same name Hello then a redispatch would again be appropriate, though this time I'd have written just callsame rather than callwith ... because callsame just redispatches to the next candidate using the same arguments that were provided in the original call, which would save bothering to write out the arguments again. Footnotes ΒΉ Redispatching is a generalization of features like super. Redispatch functions are used for a range of purposes, including ones that have nothing to do with object orientation. Β² In Raku a function or method call may result in the compiler generating a list of possibly matching candidates taking into account factors such as invocants for method calls, and multiple dispatch and function wrappers for both functions and methods. Having constructed a candidate list it then dispatches to the leading candidate (or the next one in the case of redispatch to the next candidate). Β³ If you really want a getter/setter to be automatically generated for a given attribute, then declare it with a ., eg $.length instead of $!length, and Raku will generate both a $!length attribute and a .length getter. (And a setter too if you add an is rw to the $.length declaration.) I did this in the first code example to keep things a bit simpler. β΄ The ^ in a method call like foo.^bar means a bar method call is redirected "upwards" (hence the ^) to the Higher Order Workings object that knows how a foo functions as a particular kind of type. In this case a Rectangle is a class and the HOW object is an instance of Perl6::Metamodel::ClassHOW, which knows how classes work, including that each class has a distinct name and has a .name method that retrieves that name. And the name of the Rectangle class is of course 'Rectangle', so self.^name saves having to create something else with the class's name. | 11 | 11 |
76,631,028 | 2023-7-6 | https://stackoverflow.com/questions/76631028/how-can-i-create-a-fractionenum-in-python-without-a-metaclass-conflict | I am trying to create a FractionEnum similar to StrEnum or IntEnum. My first attempt resulted in a metaclass conflict: class FractionEnum(fractions.Fraction, Enum): VALUE_1 = 1, 1 VALUE_2 = 8, 9 TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases I followed the suggestion from this answer Multiple inheritance metaclass conflict involving Enum and created a new metaclass: class FractionEnumMeta(type(Enum), type(fractions.Fraction)): pass class FractionEnum(fractions.Fraction, Enum, metaclass=FractionEnumMeta): VALUE_1 = 1, 1 VALUE_2 = 8, 9 This solved the above error but now I get: File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/enum.py", line 289, in __new__ enum_member = __new__(enum_class, *args) File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/fractions.py", line 93, in __new__ self = super(Fraction, cls).__new__(cls) TypeError: Enum.__new__() missing 1 required positional argument: 'value' The issue seems to be that the __new__ call inside Fraction is trying to create an enum, from the call inside the EnumMeta metaclass: else: enum_member = __new__(enum_class, *args) I'm misunderstanding how the metaclasses can work together to create an object that is both a fraction and an Enum - it seems to work out of the box with int or str or classes that don't define a metaclass. Update: I was able to use the code below to have the enumeration replace the Fraction's new method, but I am getting an error if I try deepcopy a class that has the enum as a member: /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/enum.py:497: in _create_ _, first_enum = cls._get_mixins_(cls, bases) in the Enum code: # ensure final parent class is an Enum derivative, find any concrete # data type, and check that Enum has no members first_enum = bases[-1] if not issubclass(first_enum, Enum): raise TypeError("new enumerations should be created as " "`EnumName([mixin_type, ...] 
[data_type,] enum_type)`") member_type = _find_data_type(bases) or object if first_enum._member_names_: > raise TypeError("Cannot extend enumerations") E TypeError: Cannot extend enumerations Sample to reproduce: class TestFractionEnum(FractionEnum): VALUE_1 = 1, 1 VALUE_2 = 8, 9 class C: def __init__(self): self.fraction_enum = TestFractionEnum.VALUE_1 c = C() print(c) print(c.fraction_enum) d = copy.copy(c) print(d) e = copy.deepcopy(c) print(e) Update 2: Overriding deepcopy on the enum seems to work: def __deepcopy__(self, memo): if type(self) == Fraction: return self for item in self.__class__: if self == item: return item assert f'Invalid enum: {self}' | As @jsbueno said, you have to write your own __new__: from enum import EnumType, Enum from fractions import Fraction import math class FractionEnumMeta(type(Fraction), EnumType): pass class FractionEnum(Fraction, Enum, metaclass=FractionEnumMeta): def __new__(cls, numerator=0, denominator=None): # # this is the only line different from Fraction.__new__ self = object.__new__(cls) # # if denominator is None: if type(numerator) is int: self._numerator = numerator self._denominator = 1 return self elif isinstance(numerator, numbers.Rational): self._numerator = numerator.numerator self._denominator = numerator.denominator return self elif isinstance(numerator, (float, Decimal)): # Exact conversion self._numerator, self._denominator = numerator.as_integer_ratio() return self elif isinstance(numerator, str): # Handle construction from strings. m = _RATIONAL_FORMAT.match(numerator) if m is None: raise ValueError('Invalid literal for Fraction: %r' % numerator) numerator = int(m.group('num') or '0') denom = m.group('denom') if denom: denominator = int(denom) else: denominator = 1 decimal = m.group('decimal') if decimal: decimal = decimal.replace('_', '') scale = 10**len(decimal) numerator = numerator * scale + int(decimal) denominator *= scale exp = m.group('exp') if exp: exp = int(exp) if exp >= 0: numerator *= 10**exp else: denominator *= 10**-exp if m.group('sign') == '-': numerator = -numerator else: raise TypeError("argument should be a string " "or a Rational instance") elif type(numerator) is int is type(denominator): pass # *very* normal case elif (isinstance(numerator, numbers.Rational) and isinstance(denominator, numbers.Rational)): numerator, denominator = ( numerator.numerator * denominator.denominator, denominator.numerator * numerator.denominator ) else: raise TypeError("both arguments should be " "Rational instances") if denominator == 0: raise ZeroDivisionError('Fraction(%s, 0)' % numerator) g = math.gcd(numerator, denominator) if denominator < 0: g = -g numerator //= g denominator //= g self._numerator = numerator self._denominator = denominator return self class TestFractionEnum(FractionEnum): VALUE_1 = 1, 1 VALUE_2 = 8, 9 In this case, we just reuse Fraction.__new__, but remove the super() call and create the new fraction instance directly. Disclosure: I am the author of the Python stdlib Enum, the enum34 backport, and the Advanced Enumeration (aenum) library. | 4 | 3 |
76,619,522 | 2023-7-5 | https://stackoverflow.com/questions/76619522/jupyter-notebook-slides-in-vscode | In Jupyter Notebook we could create slides, like explained here. Here you can see a reproducible example to create Jupyter notebook slides via Anaconda-navigator: When running this command in the terminal: jupyter nbconvert slides_test.ipynb --to slides --post serve It outputs this in your browser: And for the code cell output: This is very nice and I would like to use this but in VScode. In the Jupyter notebook of Anaconda navigator we can create slides by clicking on View -> Cell Toolbar -> Slideshow in the header bar. Unfortunately, this is not possible in VScode. So I was wondering if anyone knows how to create Jupyter Notebook slides in VScode? | Firstly, ensure that you have installed the Jupyter extension. It will install several extensions required for Jupyter, including slides, for you. Then, you can add a slide type to the cell you're on by opening the Command Palette (Cmd+Shift+P) and selecting Switch Slide Type, as described in the extension's documentation on GitHub. You can modify the slide type of a notebook cell by selecting the slide type shown on the cell. After assigning slide types to your cells, create an HTML slideshow presentation by opening the integrated terminal and running the command jupyter nbconvert '<notebook-file-name>.ipynb' --to slides --post serve. | 4 | 8
76,624,617 | 2023-7-5 | https://stackoverflow.com/questions/76624617/quickly-mapping-values-from-one-pandas-dataframe-to-another-with-datetime-condit | I have two dataframes df1= ID Value BeginDate EndDate 1 0.5 1/1/12 1/1/13 1 0.6 1/1/13 1/1/14 2 0.4 1/1/12 1/1/13 3 0.7 1/1/12 1/1/13 df2= ID Date 1 6/6/12 1 7/5/12 1 10/5/13 2 8/9/12 3 6/6/12 I want to map the Value column from df1 onto df2 based on the ID column and whether the Date column in df2 fits between BeginDate and EndDate in df1. The problem is that df1 has 10,000 rows and df2 has 1,000,000 rows. I know that this is possible. I am mostly concerned with how to do this quickly. I've had to do something similar before where df1 only has 10 rows, and I used df1.iterrows like so df2['Value'] = 1 for index, row in df1.iterrows(): df1['Value'] = np.where( (row['BeginDate'] <= df2['Date']) & (df2['Date'] <= row['EndDate']), row['Value'], df2['Value']) However, obviously with df1 now having 10,000 rows, iterrows is no longer preferable. I've considered defining a function and using df2.apply, but this would involve filtering df1 on every iteration. I've had efficiency problems with this approach in the past. It's possible to break up df1 into smaller dataframes based on another column (not displayed in example). So instead of one big dataframe, I'll have 10 smaller ones. I'm thinking of putting them into a dictionary. The keys of this dictionary can be mapped from df2, so this would help, but still leaves me with my initial question. I've seen approaches using a mask and df.loc, but haven't figured out a way with the date conditions. Doing this using pandas isn't essential. | If the intervals are non overlapping, the most efficient pandas approach would be to merge_asof on the BeginDate, then to check that the EndDate is valid: df1[['BeginDate', 'EndDate']] = df1[['BeginDate', 'EndDate']].apply(pd.to_datetime, format='%d/%m/%y') df2['Date'] = pd.to_datetime(df2['Date'], format='%d/%m/%y') out = (pd.merge_asof( df2.reset_index() .sort_values(by='Date'), df1.sort_values(by='BeginDate'), left_on='Date', right_on='BeginDate', by='ID') .assign(Value=lambda d: d['Value'].where(d['Date'].le(d['EndDate']))) .set_index('index').sort_index() ) Output (with all columns, you can then drop the BeginDate/EndDate): ID Date Value BeginDate EndDate index 0 1 2012-06-06 0.5 2012-01-01 2013-01-01 1 1 2012-05-07 0.5 2012-01-01 2013-01-01 2 1 2013-05-10 0.6 2013-01-01 2014-01-01 3 2 2012-09-08 0.4 2012-01-01 2013-01-01 4 3 2012-06-06 0.7 2012-01-01 2013-01-01 Alternatively, a more generic method that also handles overlapping intervals would be to use janitor; # pip install janitor import janitor df1[['BeginDate', 'EndDate']] = df1[['BeginDate', 'EndDate']].apply(pd.to_datetime, format='%d/%m/%y') df2['Date'] = pd.to_datetime(df2['Date'], format='%d/%m/%y') df2.conditional_join(df1, ('ID', 'ID', '=='), ('Date', 'BeginDate', '>='), ('Date', 'EndDate', '<='), ) | 3 | 2 |
76,624,161 | 2023-7-5 | https://stackoverflow.com/questions/76624161/pandas-updating-a-column-with-loc-does-not-work-as-expected | As far as I understood it is preferable to update the column values using .loc method rather than slicing [], because of the potential SettingWithCopy issue. When I use slicing I frequently get these warnings, and I understand that there is a concern when using multindexing. However, when I am using .loc method, my column is not updated when the column's values are used in the right-hand side: a=pd.DataFrame({'A':['2023-01-01','2023-01-02']},[0,1]) a Out[128]: A 0 2023-01-01 1 2023-01-02 a.dtypes Out[130]: A object dtype: object When I am running either of the following commands to convert object into the datetime, the dtype of the column "A" remains object. Thus there is no effect: a.loc[:,'A']=pd.to_datetime(a.loc[:,'A']) a.loc[:,'A']=pd.to_datetime(a['A']) When I am using a['A']=pd.to_datetime(a['A']) My dtype is converted into datetime. I am curious why .loc method does not work in this case? Thanks | If you use a.loc[:,'A'] the dtype of the Series will not change because you modify only values. You slice your Series (even if you select all rows with :) so the container will remain the same. However, the type of each individual value will change from string to Timestamp. This is the same for any operations where the left part is sliced (not only for DatetimeIndex): b = pd.DataFrame({'B': ['3', '5']}) b.loc[:, 'B'] = pd.to_numeric(b.loc[:, 'B']) # OR b.loc[:, 'B'] = b.loc[:, 'B'].astype(int) # OR b.loc[:, 'B'] = b['B'].astype(int) Output: >>> b B 0 3 1 5 >>> b.dtypes B object dtype: object >>> b.loc[0, 'B'] 3 >>> type(b.loc[0, 'B']) int This becomes more obvious if you use a slice like [0:2] or a specific index like 0 with loc. In this case, you understand why the dtype of the container won't change if you change some values and not all values. | 3 | 2 |
76,623,651 | 2023-7-5 | https://stackoverflow.com/questions/76623651/storage-of-floating-point-numbers-in-memory-in-python | I know that Python maintains an internal storage of small-ish integers rather than creating them at runtime: id(5) 4304101544 When repeating this code after some time in the same kernel, the id is stable over time: id(5) 4304101544 I thought that this wouldn't work for floating point numbers because it can't possibly maintain a pre-calculated list of all floating point numbers. However this code returns the same id twice. id(4.33+1), id(5.33) (5674699600, 5674699600) After some time repeating the same code returns some different location in memory: id(4.33 + 1), id(5.33) (4962564592, 4962564592) What's going on here? | It's not just that the object is garbage collected and the new object is stored in the same location as the previous one after garbage collection. Something different is at work here. We can use the dis module to look at the bytecode generated: import dis def f(): one, two = 4.3333333, 3.3333333 + 1. a, b = id(one), id(two) return one, two, a, b dis.dis(f) one, two, a, b = f() shows us the bytecode generated: 1 0 RESUME 0 2 2 LOAD_CONST 1 ((4.3333333, 4.3333333)) 4 UNPACK_SEQUENCE 2 8 STORE_FAST 0 (one) 10 STORE_FAST 1 (two) 3 12 LOAD_GLOBAL 1 (NULL + id) 24 LOAD_FAST 0 (one) 26 PRECALL 1 30 CALL 1 40 LOAD_GLOBAL 1 (NULL + id) 52 LOAD_FAST 1 (two) 54 PRECALL 1 58 CALL 1 68 STORE_FAST 3 (b) 70 STORE_FAST 2 (a) 4 72 LOAD_FAST 0 (one) 74 LOAD_FAST 1 (two) 76 LOAD_FAST 2 (a) 78 LOAD_FAST 3 (b) 80 BUILD_TUPLE 4 82 RETURN_VALUE (4.3333333, 4.3333333, 12424698960, 12424698960) The id of one and two are also stable over time: >>> id(one), id(two) (12424698960, 12424698960) They are indeed the same object, because the compiler folds the constant addition before the bytecode is generated - note the single LOAD_CONST of the tuple (4.3333333, 4.3333333). | 4 | 1
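A supplementary one-liner (not part of the original answer) that shows the folding directly:

import dis

# The compiler folds 4.33 + 1 into the single constant 5.33, so the compiled
# expression only contains a LOAD_CONST - no addition happens at run time.
dis.dis(compile("4.33 + 1", "<example>", "eval"))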
76,623,398 | 2023-7-5 | https://stackoverflow.com/questions/76623398/how-to-get-entries-of-array-with-separated-by-string-in-one-line | I have this data [[ 2.696000e+00 0.000000e+00 0.000000e+00 0.000000e+00] [ 2.577000e+00 0.000000e+00 0.000000e+00 0.000000e+00] [ 0.000000e+00 -2.560096e+03 0.000000e+00 0.000000e+00] ... and want to print it as such, for each row 2.696000e+00 & 0.000000e+00 & 0.000000e+00 & 0.000000e+00 // 2.577000e+00 & 0.000000e+00 & 0.000000e+00 & 0.000000e+00 // ... What I have done is for k in range(1): for i in range(len(data[k])): print(data[k][i],'&') if i == len(data[k])-1: print(data[k][i],'//') Which outputs 2.696 & 0.0 & 0.0 & 0.0 & 0.0 // How do I get rid of the newline between the entries of a row? | You can use str.join to join the numbers by & and then print // as the last string: data = [ [2.696000e00, 0.000000e00, 0.000000e00, 0.000000e00], [2.577000e00, 0.000000e00, 0.000000e00, 0.000000e00], [0.000000e00, -2.560096e03, 0.000000e00, 0.000000e00], ] for row in data: print(' & '.join(map(str, row)), '//') Prints: 2.696 & 0.0 & 0.0 & 0.0 // 2.577 & 0.0 & 0.0 & 0.0 // 0.0 & -2560.096 & 0.0 & 0.0 // | 2 | 5
76,621,598 | 2023-7-5 | https://stackoverflow.com/questions/76621598/how-can-i-reshape-my-dataframe-into-a-3-dimensional-numpy-array | My dataframe contains a multivariate time series per user id. The first column id is the user id (there are N users), the second dt is the date (each user has T days worth of data, i.,e T rows for each user) and the other columns are metrics (basically, each column is a time series per id.) Here's a code to recreate a similar dataframe import pandas as pd from datetime import datetime import numpy as np N=5 T=100 dfs=[] datelist = pd.date_range(datetime.today(), periods=T).tolist() for id in range(N): test = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) test['dt'] = datelist test['id']=id dfs.append(test) dfs = pd.concat(dfs) The output would look something like this, where 'A','B' and so on are metrics (like total purchases): A B C D dt id 0 58 3 52 5 2023-07-05 14:34:12.852460 0 1 6 34 28 88 2023-07-06 14:34:12.852460 0 2 27 98 74 81 2023-07-07 14:34:12.852460 0 3 96 13 7 52 2023-07-08 14:34:12.852460 0 4 80 69 22 12 2023-07-09 14:34:12.852460 0 I want to transform this data into a numpy matrix X, of shape N x T x F, where N is the number of users, T is the number of days in the time series (T is constant for all ids) and F is the number of metrics (in the example above, F=4.) This means that X[0] returns a TxF array, that should look exactly like the output of dfs.query('id==0')[['A','B','C','D']].values So far, I've tried using pivot and reshape but the elements in the final matrix are not arranged as I would like. Here's what I've tried: # Pivot the dataframe df_pivot = dfs.sort_values(['id','dt']).pivot(index='id', columns='dt') # Get the values from the pivot table X = df_pivot.values.reshape(dfs['id'].nunique(), -1, len([x for x in dfs.columns if x not in ['dt','id']])) If I do X[0], the result I get it: [58, 6, 27, 96], [80, 65, 41, 39], [30, 26, 38, 13], [50, 60, 60, 73], ... From which you can see that the result is not what I would want. This is what I need: [58, 3, 52, 5], [ 6, 34, 28, 88], [27, 98, 74, 81], [96, 13, 7, 52], ... Any help appreciated! | Group the dataframe by id then for each unique id export the metrics as numpy array inside a list comprehension df = df.set_index(['id', 'dt']).sort_index() np.array([g.values for _, g in df.groupby(level=0)]) array([[[58, 3, 52, 5], [ 6, 34, 28, 88], [27, 98, 74, 81], [96, 13, 7, 52], [80, 69, 22, 12]]]) | 3 | 0 |
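An alternative sketch, assuming (as stated in the question) that every id has exactly T rows and continuing from the question's dfs: sort once and let reshape do the work, avoiding the per-group loop:

metric_cols = ['A', 'B', 'C', 'D']
N = dfs['id'].nunique()

# rows are grouped by id and ordered by date, then reshaped to (N, T, F)
X = (dfs.sort_values(['id', 'dt'])[metric_cols]
        .to_numpy()
        .reshape(N, -1, len(metric_cols)))

# X[0] matches dfs.query('id==0')[metric_cols].values, as required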
76,621,424 | 2023-7-5 | https://stackoverflow.com/questions/76621424/cause-of-pytest-valueerror-fixture-is-being-applied-more-than-once-to-the-same | I have a unit test I'm running with a particular fixture. The fixture is pretty simple and looks like so: @pytest.fixture def patch_s3(): # Create empty bucket bucket = s3_resource().create_bucket(Bucket=BUCKET) bucket.objects.all().delete() yield bucket.objects.all().delete() and the test isn't all that complicated either. Here is the signature for the test: def test_task_asg(patch_s3, tmp_path, monkeypatch) However, when I run pytest, I get the following error: tests/test_task_asg.py:313: in <module> @pytest.fixture home/ubuntu/miniconda/envs/pipeline/lib/python3.7/site-packages/_pytest/fixtures.py:1150: in fixture fixture_function home/ubuntu/miniconda/envs/pipeline/lib/python3.7/site-packages/_pytest/fixtures.py:1011: in __call__ "fixture is being applied more than once to the same function" E ValueError: fixture is being applied more than once to the same function This is weird to me, as it seems to imply I'm somehow including patch_s3 more than once in test_task_asg. What could be causing this? | If you scroll up from your definition of patch_s3, you'll probably find another duplicate line like @pytest.fixture, causing the issue. That line can only be used once per fixture method definition. The easiest way to reproduce this issue is with: import pytest @pytest.fixture(scope="session") @pytest.fixture def my_fixture(): yield def test_abc(my_fixture): pass Running that with pytest yields ValueError: fixture is being applied more than once to the same function. But if you remove one of the two @pytest.fixture lines, then the issue goes away. Here are some examples where other people had the same issue and fixed it: https://github.com/pytest-dev/pytest/issues/3518 https://github.com/simonw/datasette/commit/969771770fcf795daace72e2310804e699067cfe https://github.com/RhodiumGroup/rhg_compute_tools/pull/67 | 3 | 1 |
76,621,589 | 2023-7-5 | https://stackoverflow.com/questions/76621589/how-to-run-async-methods-in-langchain | I have a basic chain that classifies some text based on the Common European Framework of Reference for Languages. I'm timing the difference between normal chain.apply and chain.aapply but can't get it to work. What am I doing wrong? import os from time import time import openai from dotenv import load_dotenv, find_dotenv from langchain.chains import LLMChain from langchain.chat_models import ChatOpenAI from langchain.prompts import ChatPromptTemplate _ = load_dotenv(find_dotenv()) openai.api_key = os.getenv('OPENAI_API_KEY') llm = ChatOpenAI(temperature=0) prompt = ChatPromptTemplate.from_template( 'Classify the text based on the Common European Framework of Reference ' 'for Languages (CEFR). Give a single value: {text}', ) chain = LLMChain(llm=llm, prompt=prompt) texts = [ {'text': 'Hallo, ich bin 25 Jahre alt.'}, {'text': 'Wie geht es dir?'}, {'text': 'In meiner Freizeit, spiele ich gerne Fussball.'} ] start = time() res_a = chain.apply(texts) print(res_a) print(f"apply time taken: {time() - start:.2f} seconds") print() start = time() res_aa = chain.aapply(texts) print(res_aa) print(f"aapply time taken: {time() - start:.2f} seconds") Output [{'text': 'Based on the given text "Hallo, ich bin 25 Jahre alt," it can be classified as CEFR level A1.'}, {'text': 'A2'}, {'text': 'A2'}] apply time taken: 2.24 seconds <coroutine object LLMChain.aapply at 0x0000025EA95BE3B0> aapply time taken: 0.00 seconds C:\Users\User\AppData\Local\Temp\ipykernel_13620\1566967258.py:34: RuntimeWarning: coroutine 'LLMChain.aapply' was never awaited res_aa = chain.aapply(texts) RuntimeWarning: Enable tracemalloc to get the object allocation traceback | Changing res_aa = chain.aapply(texts) to res_aa = await chain.aapply(texts) did the job! Now it works (damn these methods are much faster than doing it sequentially) [{'text': 'Based on the given text "Hallo, ich bin 25 Jahre alt," it can be classified as CEFR level A1.'}, {'text': 'A2'}, {'text': 'A2'}] apply time taken: 2.87 seconds [{'text': 'Based on the given text "Hallo, ich bin 25 Jahre alt," it can be classified as CEFR level A1.'}, {'text': 'A2'}, {'text': 'A2'}] aapply time taken: 1.34 seconds | 3 | 2 |
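One caveat to add to this self-answer (an assumption about running outside a notebook, where there is no running event loop): await only works inside an async context, so in a plain script you would wrap the call, reusing the chain and texts defined in the question:

import asyncio

async def classify_all():
    # aapply dispatches the requests concurrently, which is where the speed-up comes from
    return await chain.aapply(texts)

res_aa = asyncio.run(classify_all())
print(res_aa)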
76,619,466 | 2023-7-5 | https://stackoverflow.com/questions/76619466/slash-commands-visible-by-role-user-or-permissions | If I have commands for special roles (e.g. /ban, /kick, /mute), every user can see them. It would be possible to display these commands only for those users who can execute them? You could check for permissions, roles, or id, but then people would still see these commands in the list | You can prevent users without the required permissions from seeing a command, by using the default_member_permissions parameter found here, when creating slash commands. @client.slash_command(name="ping", description="Ping the bot", default_member_permissions=(nextcord.Permissions(administrator=True))) async def ping(interaction: nextcord.Interaction): await interaction.send("Pong!", ephemeral=True) You can allow and deny certain users and roles from seeing specific commands in the server settings. To do so, go to Server Settings -> Integrations -> Bot/Application and click the command you want to configure roles and members for. | 3 | 4 |
76,617,034 | 2023-7-5 | https://stackoverflow.com/questions/76617034/custom-date-format-in-fastapi-model | I have a date type of the format 15-Jul-1996. And I am trying to define a Pydantic model to use for input data validation in a FastAPI application as such: from pydantic import BaseModel from datetime import date class Profile(BaseModel): name: str DOB: date # type of 15-Jul-1996 gender: str Is there a way to constrain the DOB to that particular format? I can't seem to find a way to do that in the Pydantic docs. The attempt above is not quite what I want as it constrains to 05-07-2023. | Try this. You can use @validator from pydantic import BaseModel, validator from datetime import datetime class Profile(BaseModel): name: str DOB: str # change this to str gender: str @validator('DOB') def parse_dob(cls, v): return datetime.strptime(v, '%d-%b-%Y').date() # convert string to date | 2 | 0 |
76,614,379 | 2023-7-4 | https://stackoverflow.com/questions/76614379/roo-validator-error-when-importing-langchain-text-splitter-python | I have been working through a LangChain tutorial and have hit this problem when trying to import langchain into python in vscode on macos 13.4.1. from dotenv import load_dotenv import os import streamlit as st from PyPDF2 import PdfReader from langchain.text_splitter import CharacterTextSplitter def main(): load_dotenv() # print(os.getenv("OPENAI_API_KEY")) st.set_page_config(page_title="Select the Data PDF") st.header("Load your PDF below: β‘οΈ") pdf = st.file_uploader("Upload your PDF", type="pdf") if pdf is not None: pdf_reader = PdfReader(pdf) # the PdfReader reads in chunks of one page each text = "" for page in pdf_reader.pages: text += page.extract_text() st.write(text) # split text into chunks (inside outer if not None) # the CharacterTextSplitter class has properties that need to be set... # so pass in property parameters to the class inititializer text_splitter = CharacterTextSplitter( separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len #python's basic length function ) chunks = text_splitter.split_text(text) st.write(chunks) if __name__ == '__main__' : main() I see a suggested fix, but I do not understand how to implement it. I get the following error on the Stringlit web page and the VSCode terminal: PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`. For further information visit https://errors.pydantic.dev/2.0/u/root-validator-pre-skip Traceback: File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__) File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/app.py", line 6, in <module> from langchain.embeddings import ImportError File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/__init__.py", line 8, in <module> from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/agents/__init__.py", line 2, in <module> from langchain.agents.agent import Agent File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 11, in <module> from langchain.chains.llm import LLMChain File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/chains/__init__.py", line 2, in <module> from langchain.chains.conversation.base import ConversationChain File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/chains/conversation/base.py", line 7, in <module> from langchain.chains.conversation.memory import ConversationBufferMemory File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/chains/conversation/memory.py", line 7, in <module> from langchain.chains.conversation.prompt import SUMMARY_PROMPT File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/chains/conversation/prompt.py", line 2, in <module> from langchain.prompts.prompt import PromptTemplate File "/Users/jamesallison/Desktop/Python 
Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/prompts/__init__.py", line 2, in <module> from langchain.prompts.base import BasePromptTemplate File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/prompts/base.py", line 35, in <module> class BasePromptTemplate(BaseModel, ABC): File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/langchain/prompts/base.py", line 41, in BasePromptTemplate @root_validator() File "/Users/jamesallison/Desktop/Python Apps Temp/LangchainPDF/.venv/lib/python3.8/site-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator raise PydanticUserError( | I was using LangChain 0.0.20 and I got the same issue. Upgrading to Python 3.9 and LangChain 0.0.224 fixed this issue for me. | 4 | 0 |
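A small diagnostic, offered as an assumption-labelled addition to the answer above: the root_validator error is what pydantic 2.x raises when it is imported by an early LangChain release built against pydantic 1.x, so checking both versions before upgrading confirms the mismatch:

import langchain
import pydantic

# If pydantic reports 2.x alongside an early 0.0.x langchain, either upgrade
# langchain (pip install -U langchain) or pin pydantic below 2 (pip install "pydantic<2")
print("langchain", langchain.__version__)
print("pydantic", pydantic.VERSION)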
76,576,620 | 2023-6-28 | https://stackoverflow.com/questions/76576620/optional-caching-in-python-wrapping-a-cachetools-decorator-with-arguments | I'm using the cachetools library and I would like to wrap the decorator from this library and add a class self argument to enable/disable the caching at the class level, e.g. MyClass(enable_cache=True) An example usage would be something like: class MyClass(object): def __init__(self, enable_cache=True): self.enable_cache = enable_cache self.cache = cachetools.LRUCache(maxsize=10) @cachetools.cachedmethod(operator.attrgetter('cache')) def calc(self, n): return 1*n I'm not sure how to keep the cache as a shared object on self and allow for the enable_cache flag within my own wrapper decorator using this library. | When you use cachetools the answer is actually quite simple - you set the cache to None. import cachetools import operator class MyClass(object): def __init__(self, enable_cache=True): self.cache = cachetools.LRUCache(maxsize=10) if enable_cache else None @cachetools.cachedmethod(operator.attrgetter('cache')) def calc(self, n): print("Calculating", n) return 1*n m1 = MyClass(True) m1.calc(2) # print m1.calc(2) # should not print m1.calc(3) # print m1.calc(3) # should not print print("now without") m2 = MyClass(False) m2.calc(2) # print m2.calc(2) # print m2.calc(3) # print m2.calc(3) # print Output: Calculating 2 Calculating 3 now without Calculating 2 Calculating 2 Calculating 3 Calculating 3 More flexibly, you can do it by wrapping the cache or by making a whole new decorator: import cachetools import operator def flexible_cache(cache): def cache_wrapper(self): if self.enable_cache: return cache(self) return None return cache_wrapper def optional_cache(cache, *args, **kwargs): return cachetools.cachedmethod(flexible_cache(cache), *args, **kwargs) class MyClass(object): def __init__(self, enable_cache=True): self.enable_cache = enable_cache self.cache = cachetools.LRUCache(maxsize=10) # Now the None part is handled by the decorators @cachetools.cachedmethod(flexible_cache(operator.attrgetter('cache'))) def calc2(self, n): print("Calculating2", 2*n) return 2*n @optional_cache(operator.attrgetter('cache')) def calc3(self, n): print("Calculating3", 2*n) return 2*n | 5 | 5
76,596,952 | 2023-7-1 | https://stackoverflow.com/questions/76596952/how-do-i-select-the-top-k-rows-of-a-python-polars-dataframe-for-each-group | The polars dataframe has a top_k method that can be used to select rows which contain the k largest values when sorting on a column. For example, the following code selects the two rows with the largest and second largest entry in the val column: df = pl.DataFrame({'grp':['a','a','a','b','b','b'], 'val':[1,2,3,10,20,30], 'etc':[0,1,2,3,4,5]}) grp val etc str i64 i64 "a" 1 0 "a" 2 1 "a" 3 2 "b" 10 3 "b" 20 4 "b" 30 5 df.top_k(2, by='val') grp val etc str i64 i64 "b" 30 5 "b" 20 4 My question is: how do I get the rows with top k values for each group? Specifically, I want the entire row and not just the value in the val column. I want to do something like this, but this doesn't work in polars because polars GroupBy doesn't have a top_k method: df.groupby('grp').top_k(2, by='val') # doesnt work in polars grp val etc str i64 i64 "b" 30 5 "b" 20 4 "a" 3 2 "a" 2 1 I was able to come up with two ways: one using map_groups and another using sorting. Both of these are not desirable for performance reasons. map_groups is generally not recommended because it's almost always significantly slower. The sorting option is also not desirable as getting the top k elements uses a faster algorithm than sorting (for small k and large n, it's basically O(n) vs O(n log n)). So even though the following below work, I'm looking for other approaches. Is there any way to directly use a top_k method with polars groupby? That would be my ideal solution. # works, but at expense of using map_groups method df.group_by('grp').map_groups(lambda df: df.top_k(2, by='val')) grp val etc str i64 i64 "b" 30 5 "b" 20 4 "a" 3 2 "a" 2 1 # works, but at expense of sorting entire groups df.group_by('grp').agg(pl.all().sort_by('val', descending=True).head(2)).explode('val','etc') grp val etc str i64 i64 "a" 3 2 "a" 2 1 "b" 30 5 "b" 20 4 df.group_by('grp').top_k(2, by='val'), which doesn't work in polars df.group_by('grp').map_groups(lambda df: df.top_k(2, by='val')), which works at the cost of using map_groups df.group_by('grp').agg(pl.all().sort_by('val', descending=True).head(2)).explode('val','etc'), which works at the cost of sorting | The latest release (version 0.20.24) of polars introduced pl.Expr.top_k_by (and also pl.Expr.bottom_k_by) with optimised runtime complexity O(n + k log n - k / 2) for precisely the use-case mentioned in the question. It can be used jointly with pl.Expr.over and mapping_strategy="explode" to obtain the desired result. df.select( pl.all().top_k_by("val", k=2).over("grp", mapping_strategy="explode") ) shape: (4, 3) βββββββ¬ββββββ¬ββββββ β grp β val β etc β β --- β --- β --- β β str β i64 β i64 β βββββββͺββββββͺββββββ‘ β a β 3 β 2 β β a β 2 β 1 β β b β 30 β 5 β β b β 20 β 4 β βββββββ΄ββββββ΄ββββββ Note. The call to pl.Expr.over with mapping_strategy="explode" is equivalent to the following aggregation. df.group_by("grp").agg(pl.all().top_k_by("val", k=2)).explode(pl.exclude("grp")) | 7 | 4 |
76,586,062 | 2023-6-30 | https://stackoverflow.com/questions/76586062/wordcloud-attributeerror-transposedfont-object-has-no-attribute-getbbox | I tried to run the following code to generate a word cloud. text = 'word1 word2 word2 word3 word3 word3' from wordcloud import WordCloud wordcloud = WordCloud(width=480, height=480).generate(text) But I ran into this error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-17-659d2fbc8555> in <module> 8 9 # Generate the word cloud ---> 10 wordcloud.generate(text) 11 12 # Display the word cloud ~\anaconda3\lib\site-packages\wordcloud\wordcloud.py in generate(self, text) 637 self 638 """ --> 639 return self.generate_from_text(text) 640 641 def _check_generated(self): ~\anaconda3\lib\site-packages\wordcloud\wordcloud.py in generate_from_text(self, text) 619 """ 620 words = self.process_text(text) --> 621 self.generate_from_frequencies(words) 622 return self 623 ~\anaconda3\lib\site-packages\wordcloud\wordcloud.py in generate_from_frequencies(self, frequencies, max_font_size) 451 font_size = self.height 452 else: --> 453 self.generate_from_frequencies(dict(frequencies[:2]), 454 max_font_size=self.height) 455 # find font sizes ~\anaconda3\lib\site-packages\wordcloud\wordcloud.py in generate_from_frequencies(self, frequencies, max_font_size) 506 font, orientation=orientation) 507 # get size of resulting text --> 508 box_size = draw.textbbox((0, 0), word, font=transposed_font, anchor="lt") 509 # find possible places using integral image: 510 result = occupancy.sample_position(box_size[3] + self.margin, ~\anaconda3\lib\site-packages\PIL\ImageDraw.py in textbbox(self, xy, text, font, anchor, spacing, align, direction, features, language, stroke_width, embedded_color) 565 font = self.getfont() 566 mode = "RGBA" if embedded_color else self.fontmode --> 567 bbox = font.getbbox( 568 text, mode, direction, features, language, stroke_width, anchor 569 ) AttributeError: 'TransposedFont' object has no attribute 'getbbox' This is the full error trace I got when I run the code in my jupyter notebook. What is this error and how to resolve this? I can't understand what is that TransposedFont when I haven't used any argument of that name in the WordCloud function. | The solution was to upgrade pip and pillow, as based on an issue posted at the word_cloud GitHub repo. See the comments below the post above for how it solved the error featured here. | 3 | 6 |
76,616,042 | 2023-7-4 | https://stackoverflow.com/questions/76616042/attributeerror-module-pil-image-has-no-attribute-antialias | I am trying to have images in my Tkinter GUI, hence I am using PIL. Image.ANTIALIAS is not working. However, Image.BILINEAR works. Here's some sample code: import tkinter as tk from PIL import Image, ImageTk window = tk.Tk() image = Image.open(r"VC.png") image = image.resize((20, 20), Image.ANTIALIAS) tk_image = ImageTk.PhotoImage(image) image_label = tk.Label(window, image=tk_image) image_label.pack() window.mainloop() Here's the error: Traceback (most recent call last): File "<module1>", line 19, in <module> AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS' I tried reinstalling pip and Pillow. It didn't work. I asked ChatGPT about this, and it advised me to upgrade to Pillow's latest version. I am on the latest version (10.0.0). | The problem is with Pillow 10.0, which removed the ANTIALIAS constant. Trying to uninstall Pillow first might give some errors. Just put this in cmd: pip install Pillow==9.5.0 | 115 | 23
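A forward-looking alternative to pinning Pillow 9.5 (added note, not part of the accepted answer): since Pillow 9.1 the resampling filters live in the Image.Resampling enum, and ANTIALIAS was an alias of LANCZOS, so the resize line from the question can be written as:

from PIL import Image

image = Image.open(r"VC.png")
# Image.Resampling.LANCZOS is the supported spelling on Pillow 10+,
# equivalent to the removed Image.ANTIALIAS alias
image = image.resize((20, 20), Image.Resampling.LANCZOS)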
76,604,686 | 2023-7-3 | https://stackoverflow.com/questions/76604686/pytorch-distributed-run-with-slurm-results-in-adress-family-not-found | When trying to run an example python file via torch.distributed.run on 2 Nodes with 2 GPUs each on a cluster by using a SLURM script I encounter the following error: [W socket.cpp:426] [c10d] The server socket cannot be initialized on [::]:16773 (errno: 97 - Address family not supported by protocol). [W socket.cpp:601] [c10d] The client socket cannot be initialized to connect to [clara06.url.de]:16773 (errno: 97 - Address family not supported by protocol). This is the SLURM script: #!/bin/bash #SBATCH --job-name=distribution-test # name #SBATCH --nodes=2 # nodes #SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node! #SBATCH --cpus-per-task=4 # number of cores per tasks #SBATCH --partition=clara #SBATCH --gres=gpu:v100:2 # number of gpus #SBATCH --time 0:15:00 # maximum execution time (HH:MM:SS) #SBATCH --output=%x-%j.out # output file name module load Python pip install --user -r requirements.txt MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) MASTER_PORT=$(expr 10000 + $(echo -n $SLURM_JOBID | tail -c 4)) GPUS_PER_NODE=2 LOGLEVEL=INFO python -m torch.distributed.run --rdzv_id=$SLURM_JOBID --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR\:$MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py and the python code that should be running: import fcntl import os import socket import torch import torch.distributed as dist def printflock(*msgs): """solves multi-process interleaved print problem""" with open(__file__, "r") as fh: fcntl.flock(fh, fcntl.LOCK_EX) try: print(*msgs) finally: fcntl.flock(fh, fcntl.LOCK_UN) local_rank = int(os.environ["LOCAL_RANK"]) torch.cuda.set_device(local_rank) device = torch.device("cuda", local_rank) hostname = socket.gethostname() gpu = f"[{hostname}-{local_rank}]" try: # test distributed dist.init_process_group("nccl") dist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM) dist.barrier() # test cuda is available and can allocate memory torch.cuda.is_available() torch.ones(1).cuda(local_rank) # global rank rank = dist.get_rank() world_size = dist.get_world_size() printflock(f"{gpu} is OK (global rank: {rank}/{world_size})") dist.barrier() if rank == 0: printflock(f"pt={torch.__version__}, cuda={torch.version.cuda}, nccl={torch.cuda.nccl.version()}") except Exception: printflock(f"{gpu} is broken") raise I have tried different python runs like this: LOGLEVEL=INFO python -m torch.distributed.run --master_addr $MASTER_ADDR --master_port $MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py LOGLEVEL=INFO torchrun --rdzv_id=$SLURM_JOBID --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR\:$MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py LOGLEVEL=INFO python -m torch.distributed.launch --rdzv_id=$SLURM_JOBID --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR\:$MASTER_PORT --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES torch-distributed-gpu-test.py All resulting in the same error. I have tried specifing the IP Adress explicitly instead of the MASTER_ADDR IP_ADDRESS=$(srun hostname --ip-address | head -n 1) I have looked at ports that are open: everything above 1023 is open And inspected the /etc/resolv.conf: the hostnames are clearly mapped And pinged the nodes, which also succeeded. 
I have specified the IP version by appending .ipv4 to the MASTER_ADDR, with no success. | "Address family not supported" errors like these are related to IPv4 versus IPv6. As my service did not provide an IPv6 connection between nodes, these errors occurred. But they can be understood as warnings: the connection via IPv4 was still established. I did not find any way to disable the IPv6 attempts, but since the messages are essentially informational, I ignored them. | 4 | 1
76,571,059 | 2023-6-28 | https://stackoverflow.com/questions/76571059/how-to-freeze-the-backbone-feature-extraction-layers-in-yolo-v8 | I want to train YOLO v8 with transfer learning on my custom dataset. I have different classes than the base training on the COCO dataset. However, I don't want to relearn the feature extraction. Hence I thought of following the Ultralytics YOLOv8 Docs - Train. Yet, when I train on my small dataset, I want to freeze the backbone. How can I do that? I looked at the documentation and couldn't find how to do so. | You can do the following: def freeze_layer(trainer): model = trainer.model num_freeze = 10 print(f"Freezing {num_freeze} layers") freeze = [f'model.{x}.' for x in range(num_freeze)] # layers to freeze for k, v in model.named_parameters(): v.requires_grad = True # train all layers if any(x in k for x in freeze): print(f'freezing {k}') v.requires_grad = False print(f"{num_freeze} layers are frozen.") Then add this function as a custom callback to the model: model = YOLO("yolov8x.pt") model.add_callback("on_train_start", freeze_layer) model.train(data="./dataset.yaml") The original answer is provided in one of the issues in the ultralytics repo: Freezing layers yolov8 #793 | 3 | 4
76,576,942 | 2023-6-28 | https://stackoverflow.com/questions/76576942/performance-degradation-with-increasing-threads-in-python-multiprocessing | I have a machine with 24 cores and 2 threads per core. I'm trying to optimize the following code for parallel execution. However, I noticed that the code's performance starts to degrade after a certain number of threads.

import argparse
import glob
import h5py
import numpy as np
import pandas as pd
import xarray as xr
from tqdm import tqdm
import time
import datetime
from multiprocessing import Pool, cpu_count, Lock
import multiprocessing
import cProfile, pstats, io

def process_parcel_file(f, bands, mask):
    start_time = time.time()
    test = xr.open_dataset(f)
    print(f"Elapsed in process_parcel_file for reading dataset: {time.time() - start_time}")
    start_time = time.time()
    subset = test[bands + ['SCL']].copy()
    subset = subset.where(subset != 0, np.nan)
    if mask:
        subset = subset.where((subset.SCL >= 3) & (subset.SCL < 7))
    subset = subset[bands]
    # Adding a new dimension week_year and performing grouping
    subset['week_year'] = subset.time.dt.strftime('%Y-%U')
    subset = subset.groupby('week_year').mean().sortby('week_year')
    subset['id'] = test['id'].copy()
    # Store the dates and counting pixels for each parcel
    dates = subset.week_year.values
    n_pixels = test[['id', 'SCL']].groupby('id').count()['SCL'][:, 0].values.reshape(-1, 1)
    # Converting to dataframe
    grouped_sum = subset.groupby('id').sum()
    ids = grouped_sum.id.values
    grouped_sum = grouped_sum.to_array().values
    grouped_sum = np.swapaxes(grouped_sum, 0, 1)
    grouped_sum = grouped_sum.reshape((grouped_sum.shape[0], -1))
    colnames = ["{}_{}".format(b, str(x).split('T')[0]) for b in bands for x in dates] + ['count']
    values = np.hstack((grouped_sum, n_pixels))
    df = pd.DataFrame(values, columns=colnames)
    df.insert(0, 'id', ids)
    print(f"Elapsed in process_parcel_file til end: {time.time() - start_time}")
    return df

def fs_creation(input_dir, out_file, labels_to_keep=None, th=0.1, n=64, days=5, total_days=180, mask=False, mode='s2', method='patch',
                bands=['B02', 'B03', 'B04', 'B05', 'B06', 'B07', 'B08', 'B8A', 'B11', 'B12']):
    files = glob.glob(input_dir)
    times_pool = []  # For storing execution times
    times_seq = []
    cpu_counts = list(range(2, multiprocessing.cpu_count() + 1, 4))  # The different CPU counts to use
    for count in cpu_counts:
        print(f"Executing with {count} threads")
        if method == 'parcel':
            start_pool = time.time()
            with Pool(count) as pool:
                arguments = [(f, bands, mask) for f in files]
                dfs = list(tqdm(pool.starmap(process_parcel_file, arguments), total=len(arguments)))
            end_pool = time.time()
            start_seq = time.time()
            dfs = pd.concat(dfs)
            dfs = dfs.groupby('id').sum()
            counts = dfs['count'].copy()
            dfs = dfs.div(dfs['count'], axis=0)
            dfs['count'] = counts
            dfs.drop(index=-1).to_csv(out_file)
            end_seq = time.time()
            times_pool.append(end_pool - start_pool)
            times_seq.append(end_seq - start_seq)
    pd.DataFrame({'CPU_count': cpu_counts, 'Time pool': times_pool, 'Time seq': times_seq}).to_csv('cpu_times.csv', index=False)
    return 0

When executing the code, it scales well up to around 7-8 threads, but after that the performance starts to deteriorate. I have profiled the code, and it seems that each thread takes more time to execute the same code.

For example, with 2 threads:

Elapsed in process_parcel_file for reading dataset: 0.012271404266357422
Elapsed in process_parcel_file til end: 1.6681673526763916
Elapsed in process_parcel_file for reading dataset: 0.014229536056518555
Elapsed in process_parcel_file til end: 1.5836331844329834

However, with 22 threads:

Elapsed in process_parcel_file for reading dataset: 0.17968058586120605
Elapsed in process_parcel_file til end: 12.049026727676392
Elapsed in process_parcel_file for reading dataset: 0.052398681640625
Elapsed in process_parcel_file til end: 6.014119625091553

I'm struggling to understand why the performance degrades with more threads. I've already verified that the system has the required number of cores and threads. I would appreciate any guidance or suggestions to help me identify the cause of this issue and optimize the code for better performance. It's really hard for me to provide a minimal working example, so take that into account. Thank you in advance.

Edit: The files are around 80 MB each. I have 451 files. I added the following code to profile the function:

...
start_time = time.time()
mem_usage_start = memory_usage(-1, interval=0.1, timeout=None)[0]
cpu_usage_start = psutil.cpu_percent(interval=None)
test = xr.open_dataset(f)
times['read_dataset'] = time.time() - start_time
memory['read_dataset'] = memory_usage(-1, interval=0.1, timeout=None)[0] - mem_usage_start
cpu_usage['read_dataset'] = psutil.cpu_percent(interval=None) - cpu_usage_start
...

And more code for each line in a similar fashion. I used the libraries memory_profiler and psutil, and I have the information for each thread. CSVs with the results are available here: https://wetransfer.com/downloads/44df14ea831da7693300a29d8e0d4e7a20230703173536/da04a0

The results identify each line in the function with the number of CPUs selected, so each one is a thread.

Edit2: Here I have a report of a subset of the data, where you can clearly see what each thread is doing and how some threads are getting less work than others: https://wetransfer.com/downloads/259b4e42aae6dd9cda5a22d576aba29520230717135248/ae3f88 | TL;DR Replace xarray.open_dataset with xarray.load_dataset to immediately load the data into memory; the former is quite lazy, while the latter is a bit more eager (a.k.a. greedy, strict).

Explanation: At the risk of repeating previous comments and answers, this appears to be an I/O throughput limitation, where too many threads/processes attempt to read from the same device simultaneously, ironically slowing things down for all sibling threads/processes. As for what's causing this, I believe it to be the call to xarray's open_dataset on line 17, test = xr.open_dataset(f), which by some specific interpretation of the docs may appear to be loading the data lazily:

Data is always loaded lazily from netCDF files. You can manipulate, slice and subset Dataset and DataArray objects, and no array values are loaded into memory until you try to perform some sort of actual computation.

If this is true, it can explain the symptoms you're exhibiting: the 451 files initially get represented by "empty" objects (which takes an almost insignificant amount of time), and as these objects' data is read (or manipulated) almost simultaneously, the storage device gets stormed by tens of thousands (or millions, depending on block size) of read requests. There is a tip four paragraphs further down:

Xarray's lazy loading of remote or on-disk datasets is often but not always desirable. Before performing computationally intense operations, it is often a good idea to load a Dataset (or DataArray) entirely into memory by invoking the Dataset.load() method.

Alternatively, try using load_dataset. | 5 | 0
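To see the difference the answer is pointing at in isolation, the two xarray calls can be timed side by side before changing process_parcel_file itself. This is a minimal sketch: the file path is a placeholder rather than one of the question's 451 files, and the absolute numbers will depend entirely on the data size and storage device.

import time
import xarray as xr

path = "example_parcel.nc"  # placeholder path (assumption, not from the question)

t0 = time.time()
lazy = xr.open_dataset(path)   # lazy: reads metadata only, defers array I/O to later computations
t1 = time.time()
lazy.close()

t2 = time.time()
eager = xr.load_dataset(path)  # eager: reads everything into memory now and closes the file
t3 = time.time()

print(f"open_dataset returned in {t1 - t0:.3f}s (I/O deferred to the groupby/mean step)")
print(f"load_dataset returned in {t3 - t2:.3f}s (I/O paid up front, once per worker)")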
76,579,007 | 2023-6-29 | https://stackoverflow.com/questions/76579007/is-there-an-interactive-table-widget-for-shiny-for-python-that-calls-back-select | I'm using reactable, Datatable, or rhandsontable in R Shiny to display an interactive table and use it as a user input for selecting rows. Given that there are a few packages for doing this in R, I thought there would be even more libraries in Python for selecting rows from an interactive table - however, I haven't found one yet. Please let me know if one exists. | I also love the way Datatable works in Shiny for R, and have been searching for an equivalent in Shiny for Python. The latest release of Shiny for Python (0.4) has a new "data grid / data table" feature that looks promising. Here are the links to the release announcement and the API documentation: https://shiny.posit.co/blog/posts/shiny-python-0.4.0/ https://shiny.posit.co/py/api/render.DataTable.html They show an example of getting the selected rows in a callback (as you asked). Hope this helps. | 3 | 1
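To make the pointer above concrete, here is a small sketch of the kind of app the linked 0.4 documentation describes: a data frame output used as an input for row selection. The argument and input names (row_selection_mode, grid_selected_rows) follow the 0.4-era API referenced in the answer and are assumptions to verify against the docs, since later releases may have renamed them.

import pandas as pd
from shiny import App, render, ui

df = pd.DataFrame({"city": ["Zurich", "Geneva", "Basel"], "pop_k": [403, 203, 178]})

app_ui = ui.page_fluid(
    ui.output_data_frame("grid"),  # interactive table used as an input
    ui.output_text("picked"),      # echoes the current selection
)

def server(input, output, session):
    @output
    @render.data_frame
    def grid():
        # row_selection_mode follows the 0.4-era docs linked above (assumption).
        return render.DataTable(df, row_selection_mode="multiple")

    @output
    @render.text
    def picked():
        rows = input.grid_selected_rows()  # tuple of selected row indices (assumption)
        return f"Selected rows: {rows}" if rows else "Nothing selected yet"

app = App(app_ui, server)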