question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
71,365,594 | 2022-3-5 | https://stackoverflow.com/questions/71365594/how-to-make-a-proxy-object-with-typing-as-underlying-object-in-python | I have a proxy class with an underlying object. I wish to pass the proxy object to a function that expects an underlying object type. How do I make the proxy typing match the underlying object? class Proxy: def __init__(self, obj): self.obj = obj def __getattribute__(self, name): return getattr(self.obj, name) def __setattr__(self, name, value): setattr(self.obj, name, value) def foo(bar: MyClass): ... foo(Proxy(MyClass())) # Warning: expected 'MyClass', got 'Proxy' instead | Credit to @SUTerliakov for actually providing the essence of this answer in the comment on the question. The code should look something along the lines of the following: from typing import TYPE_CHECKING from my_module import MyClass if TYPE_CHECKING: base = MyClass else: base = object class Proxy(base): def __init__(self, obj): self.obj = obj def __getattribute__(self, name): return getattr(self.obj, name) def __setattr__(self, name, value): setattr(self.obj, name, value) def foo(bar: MyClass): ... foo(Proxy(MyClass())) # Works! Note that your class may not always be available - but that's no real problem. The trick works perfectly fine with TypeVars. # proxy.py from typing import TYPE_CHECKING, TypeVar Proxied = TypeVar('Proxied') if TYPE_CHECKING: base = Proxied else: base = object class Proxy(base): def __init__(self, obj: Proxied): self.obj = obj def __getattribute__(self, name): return getattr(self.obj, name) def __setattr__(self, name, value): setattr(self.obj, name, value) And in the file where you use it: from proxy import Proxy def foo(bar: MyClass): ... foo(Proxy(MyClass())) # Works! | 6 | 6 |
71,319,523 | 2022-3-2 | https://stackoverflow.com/questions/71319523/django-rest-framework-drf-yasg-swagger-multiple-file-upload-error-for-listfield | I am trying to make an upload file input from swagger (with drf-yasg), but when I use the MultiPartParser class it gives me the below error: drf_yasg.errors.SwaggerGenerationError: FileField is supported only in a formData Parameter or response Schema My view: class AddExperience(generics.CreateAPIView): parser_classes = [MultiPartParser] permission_classes = [IsAuthenticated] serializer_class = DoctorExperienceSerializer My serializer: class DoctorExperienceSerializer(serializers.Serializer): diploma = serializers.ListField( child=serializers.FileField(allow_empty_file=False) ) education = serializers.CharField(max_length=1000) work_experience = serializers.CharField(max_length=1000) I also tried FormParser but it still gives me the same error. I also tried the FileUploadParser, but it works like JsonParser: | The OpenAPISchema (OAS) 2 doesn't support multiple file upload (see issue #254); but OAS 3 supports it (you can use this YML spec on a live swagger editor (see this result)). Coming to the real issue, there is a section in the drf-yasg docs: If you are looking to add Swagger/OpenAPI support to a new project you might want to take a look at drf-spectacular, which is an actively maintained new library that shares most of the goals of this project, while working with OpenAPI 3.0 schemas. OpenAPI 3.0 provides a lot more flexibility than 2.0 in the types of API that can be described. drf-yasg is unlikely to soon, if ever, get support for OpenAPI 3.0. That means the package drf-yasg doesn't have support for OAS3 and thus won't support the "multiple file upload" feature. You can consider migrating from drf-yasg to drf-spectacular. But also note that drf-spectacular handles file uploads in a different way. | 7 | 5 |
71,316,246 | 2022-3-2 | https://stackoverflow.com/questions/71316246/fill-missing-dates-in-a-pandas-dataframe | I've a lot of DataFrames with 2 columns, like this: Fecha unidades 0 2020-01-01 2.0 84048 2020-09-01 4.0 149445 2020-10-01 11.0 532541 2020-11-01 4.0 660659 2020-12-01 2.0 1515682 2021-03-01 9.0 1563644 2021-04-01 2.0 1759823 2021-05-01 1.0 2226586 2021-07-01 1.0 As can be seen, there are some months missing. Missing data depends on the DataFrame: I can have 2 months missing, 10, 100% complete, only one... I need to complete column "Fecha" with the missing months (from 2020-01-01 to 2021-12-01) and, when a date is added into "Fecha", add a "0" value to the "unidades" column. Each element in the Fecha column is a pandas._libs.tslibs.timestamps.Timestamp. How could I fill the missing dates for each DataFrame? | You could create a date range and use the "Fecha" column to set_index + reindex to add missing months. Then fillna + reset_index fetches the desired outcome: df['Fecha'] = pd.to_datetime(df['Fecha']) df = (df.set_index('Fecha') .reindex(pd.date_range('2020-01-01', '2021-12-01', freq='MS')) .rename_axis(['Fecha']) .fillna(0) .reset_index()) Output: Fecha unidades 0 2020-01-01 2.0 1 2020-02-01 0.0 2 2020-03-01 0.0 3 2020-04-01 0.0 4 2020-05-01 0.0 5 2020-06-01 0.0 6 2020-07-01 0.0 7 2020-08-01 0.0 8 2020-09-01 4.0 9 2020-10-01 11.0 10 2020-11-01 4.0 11 2020-12-01 2.0 12 2021-01-01 0.0 13 2021-02-01 0.0 14 2021-03-01 9.0 15 2021-04-01 2.0 16 2021-05-01 1.0 17 2021-06-01 0.0 18 2021-07-01 1.0 19 2021-08-01 0.0 20 2021-09-01 0.0 21 2021-10-01 0.0 22 2021-11-01 0.0 23 2021-12-01 0.0 | 5 | 9 |
71,371,909 | 2022-3-6 | https://stackoverflow.com/questions/71371909/how-to-calculate-when-ones-10000-day-after-his-or-her-birthday-will-be | I am wondering how to solve this problem with basic Python (no libraries to be used): How can I calculate when one's 10,000th day after their birthday will be (/would be)? For instance, given Monday 19/05/2008, the desired day is Friday 05/10/2035 (according to https://www.durrans.com/projects/calc/10000/index.html?dob=19%2F5%2F2008&e=mc2) So far I have done the following script: years = range(2000, 2050) lst_days = [] count = 0 tot_days = 0 for year in years: if((year % 400 == 0) or (year % 100 != 0) and (year % 4 == 0)): lst_days.append(366) else: lst_days.append(365) while tot_days <= 10000: tot_days = tot_days + lst_days[count] count = count+1 print(count) Which estimates the person's age after 10,000 days from their birthday (for people born after 2000). But how can I proceed? | Using base Python packages only On the basis that "no special packages" means you can only use base Python packages, you can use datetime.timedelta for this type of problem: import datetime start_date = datetime.datetime(year=2008, month=5, day=19) end_date = start_date + datetime.timedelta(days=10000) print(end_date.date()) Without any base packages (and progressing to the problem) Side-stepping even base Python packages, and taking the problem forwards, something along the lines of the following should help (I hope!). Start by defining a function that determines if a year is a leap year or not: def is_it_a_leap_year(year) -> bool: """ Determine if a year is a leap year Args: year: int Extended Summary: According to: https://airandspace.si.edu/stories/editorial/science-leap-year The rule is that if the year is divisible by 100 and not divisible by 400, leap year is skipped. The year 2000 was a leap year, for example, but the years 1700, 1800, and 1900 were not. The next time a leap year will be skipped is the year 2100. """ if year % 4 != 0: return False if year % 100 == 0 and year % 400 != 0: return False return True Then define a function that determines the age of a person (utilizing the above to recognise leap years): def age_after_n_days(start_year: int, start_month: int, start_day: int, n_days: int) -> tuple: """ Calculate an approximate age of a person after a given number of days, attempting to take into account leap years appropriately. Return the number of days left until their next birthday Args: start_year (int): year of the start date start_month (int): month of the start date start_day (int): day of the start date n_days (int): number of days to elapse """ # Check if the start date happens on a leap year and occurs before the # 29 February (additional leap year day) start_pre_leap = (is_it_a_leap_year(start_year) and start_month < 3) # Account for the edge case where you start exactly on the 29 February if start_month == 2 and start_day == 29: start_pre_leap = False # Keep a running counter of age age = 0 # Store the "current year" whilst iterating through the days current_year = start_year # Count the number of days left days_left = n_days # While there is at least one year left to elapse... while days_left > 364: # Is it a leap year? if is_it_a_leap_year(current_year): # If not the first year if age > 0: days_left -= 366 # If the first year is a leap year but starting after the 29 Feb... elif age == 0 and not start_pre_leap: days_left -= 365 else: days_left -= 366 # If not a leap year... else: days_left -= 365 # If the number of days left hasn't dropped below zero if days_left >= 0: # Increment age age += 1 # Increment year current_year += 1 return age, days_left Using your example, you can test the function with: age, remaining_days = age_after_n_days(start_year=2000, start_month=5, start_day=19, n_days=10000) Now you have the number of complete years that will elapse and the number of remaining days. You can then use the remaining_days to work out the exact date. | 19 | 20 |
71,396,605 | 2022-3-8 | https://stackoverflow.com/questions/71396605/how-can-i-specify-several-examples-for-the-fastapi-docs-when-response-model-is-a | I am writing a FastAPI app in python and I would like to use the openapi docs which are automatically generated. In particular, I would like to specify examples for the response value. I know how to do it when the response_model is a class that inherits from pydantic's BaseModel, but I am having trouble when it is a list of such classes. Here's a minimal example: from fastapi import FastAPI from typing import List from pydantic import BaseModel, Field class Person(BaseModel): name: str = Field( ..., title="Name", description="The name of the person", example="Alice" ) age: int = Field( ..., title="Age", description="The age of the person", example=83 ) class Config: schema_extra = { 'examples': [ { "name": "Alice", "age": 83 }, { "name": "Bob", "age": 77 } ] } app = FastAPI() @app.get('/person', response_model=Person) def person(): return { "name": "Alice", "age": 83 } @app.get('/people', response_model=List[Person]) def people(): return [ { "name": "Alice", "age": 83 }, { "name": "Bob", "age": 77 } ] In the automatically generated openapi docs, the example value for a successful response for /person is { "name": "Alice", "age": 83 } which is what I want. However, for /people it is [ { "name": "Alice", "age": 83 } ] but I would prefer for it to be [ { "name": "Alice", "age": 83 }, { "name": "Bob", "age": 77 } ] Is there any way to achieve that? Thank you in advance! | You can specify your example in the responses parameter: @app.get('/people', response_model=List[Person], responses={ 200: { "description": "People successfully found", "content": { "application/json": { "example": [ { "name": "Alice", "age": 83 }, { "name": "Bob", "age": 77 } ] } } }, 404: {"description": "People not found"} }) def people(): return [ { "name": "Alice", "age": 83 }, { "name": "Bob", "age": 77 } ] With this code, the example is specified for status code 200, but you can also configure examples for errors (or you can remove the 404 entry if you do not want it to appear in your openapi). For more information you can check the FastAPI documentation: https://fastapi.tiangolo.com/advanced/additional-responses/ | 8 | 13 |
71,352,354 | 2022-3-4 | https://stackoverflow.com/questions/71352354/sklearn-kmeans-is-not-working-as-i-only-get-nonetype-object-has-no-attribute | I don't know what is wrong but suddenly KMeans from sklearn is not working anymore and I don't know what I am doing wrong. Has anyone encountered this problem yet or knows how I can fix it? from sklearn.cluster import KMeans kmeanModel = KMeans(n_clusters=k, random_state=0) kmeanModel.fit(allLocations) allLocations looks like this: array([[12.40236 , 51.38086 ], [12.40999 , 51.38494 ], [12.40599 , 51.37284 ], [12.28692 , 51.32039 ], [12.41349 , 51.34443 ], ...]) and allLocations.dtype gives dtype('float64'). The scikit-learn version is 1.0.2 and the NumPy version is 1.22.2 and I am using Jupyter Notebook. The Error says: 'NoneType' object has no attribute 'split' The whole Error looks like this: AttributeError Traceback (most recent call last) <ipython-input-30-db8e8220c8b9> in <module> 12 for k in K: 13 kmeanModel = KMeans(n_clusters=k, random_state=0) ---> 14 kmeanModel.fit(allLocations) 15 distortions.append(kmeanModel.inertia_) 16 #Plotting the distortions ~\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in fit(self, X, y, sample_weight) 1169 if self._algorithm == "full": 1170 kmeans_single = _kmeans_single_lloyd -> 1171 self._check_mkl_vcomp(X, X.shape[0]) 1172 else: 1173 kmeans_single = _kmeans_single_elkan ~\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py in _check_mkl_vcomp(self, X, n_samples) 1026 active_threads = int(np.ceil(n_samples / CHUNK_SIZE)) 1027 if active_threads < self._n_threads: -> 1028 modules = threadpool_info() 1029 has_vcomp = "vcomp" in [module["prefix"] for module in modules] 1030 has_mkl = ("mkl", "intel") in [ ~\anaconda3\lib\site-packages\sklearn\utils\fixes.py in threadpool_info() 323 return controller.info() 324 else: --> 325 return threadpoolctl.threadpool_info() 326 327 ~\anaconda3\lib\site-packages\threadpoolctl.py in threadpool_info() 122 In addition, each module may contain internal_api specific entries. 123 """ --> 124 return _ThreadpoolInfo(user_api=_ALL_USER_APIS).todicts() 125 126 ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, user_api, prefixes, modules) 338 339 self.modules = [] --> 340 self._load_modules() 341 self._warn_if_incompatible_openmp() 342 else: ~\anaconda3\lib\site-packages\threadpoolctl.py in _load_modules(self) 371 self._find_modules_with_dyld() 372 elif sys.platform == "win32": --> 373 self._find_modules_with_enum_process_module_ex() 374 else: 375 self._find_modules_with_dl_iterate_phdr() ~\anaconda3\lib\site-packages\threadpoolctl.py in _find_modules_with_enum_process_module_ex(self) 483 484 # Store the module if it is supported and selected --> 485 self._make_module_from_path(filepath) 486 finally: 487 kernel_32.CloseHandle(h_process) ~\anaconda3\lib\site-packages\threadpoolctl.py in _make_module_from_path(self, filepath) 513 if prefix in self.prefixes or user_api in self.user_api: 514 module_class = globals()[module_class] --> 515 module = module_class(filepath, prefix, user_api, internal_api) 516 self.modules.append(module) 517 ~\anaconda3\lib\site-packages\threadpoolctl.py in __init__(self, filepath, prefix, user_api, internal_api) 604 self.internal_api = internal_api 605 self._dynlib = ctypes.CDLL(filepath, mode=_RTLD_NOLOAD) --> 606 self.version = self.get_version() 607 self.num_threads = self.get_num_threads() 608 self._get_extra_info() ~\anaconda3\lib\site-packages\threadpoolctl.py in get_version(self) 644 lambda: None) 645 get_config.restype = ctypes.c_char_p --> 646 config = get_config().split() 647 if config[0] == b"OpenBLAS": 648 return config[1].decode("utf-8") AttributeError: 'NoneType' object has no attribute 'split' | Downgrading numpy to 1.21.4 made it work again | 42 | 15 |
71,362,488 | 2022-3-5 | https://stackoverflow.com/questions/71362488/apply-transformation-on-a-paramspec-variable | Is there any way for me to apply a transformation on a ParamSpec? I can illustrate the problem with an example: from typing import Callable def as_upper(x: str): return x.upper() def eventually(f: Callable[P, None], *args: P.args, **kwargs: P.kwargs): def inner(): def transform(a): return a() if isinstance(a, Callable) else a targs = tuple(transform(a) for a in args) tkwargs = {k: transform(v) for k,v in kwargs.items()} return f(*targs, **tkwargs) return inner eventually(as_upper, lambda: "hello") # type checker complains here The type checker (pyright in my case) will complain about this. The function eventually received a callable () -> str and not a str which was expected. My question is: is there some way for me to specify that it should expect () -> str and not the str itself? And in general, if a function expects a type T, can I transform it to (say) () -> T? I'm basically asking if it's possible to transform the ParamSpec in some way so that a related function does not expect the same parameters, but "almost" the same parameters. I don't really expect this to be possible, but maybe someone with more experience with type checking knows a potential solution to this problem. :) | This decorator cannot be properly typed with currently available tools (Python 3.10). Two main problems here: ParamSpec and Concatenate for now only allow us to modify a fixed number of parameters. We cannot concatenate keyword-only arguments (which makes transforming **kwargs: P.kwargs impossible). However, under these constraints, we can achieve a less elegant solution if the positional parameters are known, taking advantage of currying: from typing import Callable, Concatenate, ParamSpec, TypeVar T = TypeVar("T") RetT = TypeVar("RetT") P = ParamSpec("P") def something(a: str, b: int, c: bool): return f"{a} {b} {c}" def eventually( f: Callable[Concatenate[T, P], RetT], a: Callable[[], T] ) -> Callable[P, RetT]: def inner(*args: P.args, **kwargs: P.kwargs) -> RetT: transformed = a() if callable(a) else a return f(transformed, *args, **kwargs) return inner something_eventually = eventually( eventually(eventually(something, lambda: "hello"), lambda: 2), lambda: False ) something_eventually() # hello 2 False Notice that it is not yet possible to concatenate keyword parameters (See also). eventually can also be applied in a more functional way (but it cannot be properly typed): from functools import reduce # Though it unfortunately doesn't type check something_eventually = reduce( eventually, [lambda: "hello", lambda: 2, lambda: False], something, ) something_eventually() # hello 2 False reduce expects the value to keep the same type, while we are changing the type of the function in each iteration. This makes it impossible to type when we apply eventually an arbitrary number of times in this manner. reduce is just an example. In a more general sense, I doubt we can properly type something that involves repeatedly applying currying functions like eventually as proposed above, at least with the current Concatenate support. | 4 | 5 |
71,356,388 | 2022-3-4 | https://stackoverflow.com/questions/71356388/how-to-connect-to-user-data-stream-binance | I need to listen to User Data Stream, whenever there's an Order Event - order execution, cancelation, and so on - I'd like to be able to listen to those events and create notifications. So I got my "listenKey" and I'm not sure if it was done the right way but I executed this code and it gave me something like listenKey. Code to get listenKey: def get_listen_key_by_REST(binance_api_key): url = 'https://api.binance.com/api/v1/userDataStream' response = requests.post(url, headers={'X-MBX-APIKEY': binance_api_key}) json = response.json() return json['listenKey'] print(get_listen_key_by_REST(API_KEY)) And the code to listen to User Data Stream - which doesn't work, I get no json response. socket = f"wss://fstream-auth.binance.com/ws/btcusdt@markPrice?listenKey=<listenKeyhere>" def on_message(ws, message): json_message = json.loads(message) print(json_message) def on_close(ws): print(f"Connection Closed") # restart() def on_error(ws, error): print(f"Error") print(error) ws = websocket.WebSocketApp(socket, on_message=on_message, on_close=on_close, on_error=on_error) I have read the docs to no avail. I'd appreciate it if someone could point me in the right direction. | You can create a basic async user socket connection from the docs here along with other useful info for the Binance API. Here is a simple example: import asyncio from binance import AsyncClient, BinanceSocketManager async def main(): client = await AsyncClient.create(api_key, api_secret, tld='us') bm = BinanceSocketManager(client) # start any sockets here, i.e a trade socket ts = bm.user_socket() # then start receiving messages async with ts as tscm: while True: res = await tscm.recv() print(res) await client.close_connection() if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main()) | 7 | 1 |
71,337,173 | 2022-3-3 | https://stackoverflow.com/questions/71337173/django-4-connection-to-postgresql-using-passfile-fe-sendauth-no-password-supp | Hello SO & Django community, My problem is related to Django 4, as the feature to use a passfile to connect to Postgres has appeared in this version. Though I have gone through the similar error-message-related questions about previous versions, I had no success in solving my problem. What I am trying to do I want to connect a Postgres database DB_MyProject to a Django project MyProject. In Django 4, you may use a passfile instead of providing all user/password information in settings.py. The documentation about this new feature is here. The concept of a password file in Postgres is explained here, you may also read about connection service here. Having followed these docs to my best understanding, I have done the following: Created the following DATABASES entry in settings.py of the Django project, as advised here: 'default': { 'ENGINE': 'django.db.backends.postgresql', 'OPTIONS': { 'service': 'db_service', 'passfile': '.pgpass', }, } } In the pg configuration directory (pg_config --sysconfdir), created file pg_service.conf with the following information, as in pg docs and django docs: [db_service] host=localhost port=5432 dbname=DB_MyProject user=my_postgres_user Created a .pgpass file, as in pg docs: localhost:5432:DB_MyProject:my_postgres_user:my_passwd Now, this .pgpass file exists in several locations, as a result of the quest to make this work: in the pg configuration directory (pg_config --sysconfdir), in my regular user home directory ~/, and in the django MyProject root directory. All of these files are exact copies with the same permission level: -rw------- 1 my_regular_user my_group 52 Mar 2 18:47 .pgpass I also created a DB in PGAdmin with the specified name, and made sure the user has permission with the specified password. Now I assumed it should be working OK. But instead, when I try to makemigrations or migrate or runserver, I get this: django.db.utils.OperationalError: connection to server at "localhost" (::1), port 5432 failed: fe_sendauth: no password supplied What have I tried already verified compatibility: Django v4.0.3, PostgreSQL v14.0, psycopg2 v2.9.3 created an environment variable PGPASSFILE with a path to ~./.pgpass changed ownership of ~/.pgpass from my_regular_user to my_postgres_user for testing, was given the same result installed direnv, created .envrc file in the project root directory, containing: export PGPASSFILE=~/.pgpass Thank you in advance for your help. I would also be happy to be pointed out any misconceptions in my thinking, as I am a newbie developer. | Created the following DATABASES entry in settings.py of the Django project: 'default': { 'ENGINE': 'django.db.backends.postgresql', 'OPTIONS': { 'service': 'db_service', 'passfile': '.pgpass', }, } } In the user home directory, created file ~/.pg_service.conf with the following information: [db_service] host=localhost port=5432 dbname=DB_MyProject user=my_postgres_user Created a .pgpass file in the django MyProject root directory and changed the permissions of the file with chmod 0600 MyProject/.pgpass: localhost:5432:DB_MyProject:my_postgres_user:my_passwd | 5 | 9 |
71,376,207 | 2022-3-7 | https://stackoverflow.com/questions/71376207/latex-math-text-in-pil-imagedraw-text | I'm trying to annotate a few figures I created in python. So, I'm generating an image containing the specified text (using PIL ImageDraw) and concatenating it with the image. Now, I want to include a math notation into the text. Is there a way to write text in latex math when creating the image of text? This answer suggests a work-around by using unicode text, but I would prefer writing the text in latex directly. MWE: from PIL import ImageFont, Image, ImageDraw from matplotlib import pyplot def get_annotation(image_shape: tuple, title: str, font_size: int = 50): frame_height, frame_width = image_shape[:2] times_font = ImageFont.truetype('times-new-roman.ttf', font_size) text_image = Image.new('RGB', (frame_width, frame_height), (255, 255, 255)) drawer = ImageDraw.Draw(text_image) w, h = drawer.textsize(title, font=times_font) drawer.text(((frame_width - w) / 2, (frame_height - h) / 2), text=title, fill=(0, 0, 0), font=times_font, align='center') annotation = numpy.array(text_image) return annotation if __name__ == '__main__': anno = get_annotation((100, 300), 'frame $f_n$') pyplot.imshow(anno) pyplot.show() I tried passing frame $f_n$ to title parameter, but dollars got printed in the text image. PS: times-new-roman.ttf can be obtained here | I found an alternative with sympy here import sympy sympy.preview(r'frame $f_n$', dvioptions=["-T", "tight", "-z", "0", "--truecolor", "-D 600"], viewer='file', filename='test.png', euler=False) | 5 | 0 |
71,399,847 | 2022-3-8 | https://stackoverflow.com/questions/71399847/runtimeerror-0d-or-1d-target-tensor-expected-multi-target-not-supported-i-was | *My Training Model* def train(model,criterion,optimizer,iters): epoch = iters train_loss = [] validaion_loss = [] train_acc = [] validation_acc = [] states = ['Train','Valid'] for epoch in range(epochs): print("epoch : {}/{}".format(epoch+1,epochs)) for phase in states: if phase == 'Train': model.train() *training the data if phase is train* dataload = train_data_loader else: model.eval() dataload = valid_data_loader run_loss,run_acc = 0,0 *creating variables to calculate loss and acc* for data in dataload: inputs,labels = data inputs = inputs.to(device) labels = labels.to(device) labels = labels.byte() optimizer.zero_grad() #Using the optimizer with torch.set_grad_enabled(phase == 'Train'): outputs = model(inputs) loss = criterion(outputs,labels.unsqueeze(1).float()) predict = outputs>=0.5 if phase == 'Train': loss.backward() #backward propagation optimizer.step() acc = torch.sum(predict == labels.unsqueeze(1)) run_loss+=loss.item() run_acc+=acc.item()/len(labels) if phase == 'Train': #calculating train loss and accuracy epoch_loss = run_loss/len(train_data_loader) train_loss.append(epoch_loss) epoch_acc = run_acc/len(train_data_loader) train_acc.append(epoch_acc) else: #calculating validation loss and accuracy epoch_loss = run_loss/len(valid_data_loader) validaion_loss.append(epoch_loss) epoch_acc = run_acc/len(valid_data_loader) validation_acc.append(epoch_acc) print("{}, loss :{},accuracy:{}".format(phase,epoch_loss,epoch_acc)) history = {'Train_loss':train_loss,'Train_accuracy':train_acc, 'Validation_loss':validaion_loss,'Validation_Accuracy':validation_acc} return model,history I was experiencing the error 0D or 1D target tensor expected, multi-target not supported. Could you please help in rectifying the code described above? I referred to the previous related articles but was unable to get the desired result. What code snippets do I have to change so that my model will run successfully? Any suggestions are most welcome. Thanks in advance. | Your problem is that labels do not have the correct shape to calculate the loss. When you add .unsqueeze(1) to labels you give your labels this shape [32,1] which is not consistent with the shape required to calculate the loss. To fix the problem, you only need to remove .unsqueeze(1) for labels. If you read the documentation of CrossEntropyLoss, the arguments: Input should be in (N,C) shape, which is outputs in your case and [32,3]. Target should be in N shape, which is labels in your case and should be [32]. Therefore, the loss function expects labels to be a 1D target, not multi-target. | 8 | 14 |
71,370,656 | 2022-3-6 | https://stackoverflow.com/questions/71370656/special-number-count | It is a number whose gcd of (sum of quartic power of its digits, the product of its digits) is more than 1. eg. 123 is a special number because hcf of (1+16+81, 6) is more than 1. I have to find the count of all these numbers that are below input n. eg. for n=120 there are 57 special numbers between (1 and 120) I have written code but it's very slow; can you please tell me how to do it in some good and fast way? Is there any way to do it using some maths? import math,numpy t = int(input()) ans = [] for i in range(0,t): ans.append(0) n = int(input()) for j in range(1, n+1): res = math.gcd(sum([pow(int(k),4) for k in str(j)]),numpy.prod([int(k) for k in str(j)])) if res>1: ans[i] = ans[i] + 1 for i in range(0,t): print(ans[i]) | Here's an O(log n) algorithm for actually counting special numbers less than or equal to n. It builds digit strings one at a time, keeping track of whether 2, 3, 5 and 7 divide that digit string's product, and the remainder modulo 2, 3, 5, and 7 of the sum of fourth powers of those digits. The logic for testing whether a number is special based on divisibility by those prime factors and remainder of powers under those factors is the same as in David's answer, and is explained better there. Since there are only 2^4 possibilities for which primes divide the product, and 2*3*5*7 possibilities for the remainder, there are a constant number of combinations of both that are possible, for a runtime of O(2^4 * 210 * log n) = O(log n). def count_special_less_equal(digits: List[int]) -> int: """Return the count of numbers less than or equal to the represented number, with the property that gcd(product(digits), sum(fourth powers of digits)) > 1""" # Count all digit strings with zeroes total_non_special = len(digits) primes = (2, 3, 5, 7) prime_product = functools.reduce(operator.mul, primes, 1) digit_to_remainders = [pow(x, 4, prime_product) for x in range(10)] # Map each digit 1-9 to prime factors # 2: 2**0, 3: 2**1, 5: 2**2, 7: 2**3 factor_masks = [0, 0, 1, 2, 1, 4, 3, 8, 1, 2] def is_fac_mask_mod_special(factor_mask: int, remainder: int) -> bool: """Return true if any of the prime factors represented in factor_mask have corresponding remainder 0 (i.e., divide the sum of fourth powers)""" return any((factor_mask & (1 << i) != 0 and remainder % primes[i] == 0) for i in range(4)) prefix_less_than = [Counter() for _ in range(16)] # Empty string prefix_equal = (0, 0) for digit_pos, digit in enumerate(digits): new_prefix_less_than = [Counter() for _ in range(16)] # Old "lesser than" prefixes stay lesser for fac_mask, fac_mask_counts in enumerate(prefix_less_than): for new_digit in range(1, 10): new_mask = fac_mask | factor_masks[new_digit] remainder_change = digit_to_remainders[new_digit] for old_remainder, old_count in fac_mask_counts.items(): new_remainder = (remainder_change + old_remainder) % prime_product new_prefix_less_than[new_mask][new_remainder] += old_count if digit == 0: prefix_equal = None if prefix_equal is not None: equal_fac_mask, equal_remainder = prefix_equal for new_digit in range(1, digit): new_mask = equal_fac_mask | factor_masks[new_digit] remainder_change = digit_to_remainders[new_digit] new_remainder = (remainder_change + equal_remainder) % prime_product new_prefix_less_than[new_mask][new_remainder] += 1 new_mask = equal_fac_mask | factor_masks[digit] remainder_change = digit_to_remainders[digit] new_remainder = (remainder_change + equal_remainder) % prime_product prefix_equal = (new_mask, new_remainder) prefix_less_than = new_prefix_less_than if digit_pos == len(digits) - 1: break # Empty string prefix_less_than[0][0] += 1 for fac_mask, fac_mask_counts in enumerate(prefix_less_than): for remainder, rem_count in fac_mask_counts.items(): if not is_fac_mask_mod_special(factor_mask=fac_mask, remainder=remainder): total_non_special += rem_count if prefix_equal is not None: if not is_fac_mask_mod_special(*prefix_equal): total_non_special += 1 return 1 + int(''.join(map(str, digits))) - total_non_special Example usage: print(f"{count_special_less_equal(digits_of(120))}") prints 57 and for exponent in range(1, 19): print(f"Count up to 10^{exponent}: {count_special_less_equal(digits_of(10**exponent))}") gives: Count up to 10^1: 8 Count up to 10^2: 42 Count up to 10^3: 592 Count up to 10^4: 7400 Count up to 10^5: 79118 Count up to 10^6: 854190 Count up to 10^7: 8595966 Count up to 10^8: 86010590 Count up to 10^9: 866103492 Count up to 10^10: 8811619132 Count up to 10^11: 92967009216 Count up to 10^12: 929455398976 Count up to 10^13: 9268803096820 Count up to 10^14: 92838342330554 Count up to 10^15: 933105194955392 Count up to 10^16: 9557298732021784 Count up to 10^17: 96089228976983058 Count up to 10^18: 960712913414545906 Done in 0.3783 seconds This finds the frequencies for all powers of 10 up to 10^18 in about a third of a second. It's possible to optimize this further in the constant factors, using numpy arrays or other tricks (like precomputing the counts for all numbers with a fixed number of digits). | 9 | 4 |
---
Question 71402387 · 2022-03-08 · https://stackoverflow.com/questions/71402387/the-rationale-of-functools-partial-behavior (question score 6, accepted answer score 5)

I'm wondering what the story -- whether sound design or inherited legacy -- is behind these `functools.partial` and `inspect.signature` facts (talking python 3.8 here).

Set up:

```python
from functools import partial
from inspect import signature

def bar(a, b):
    return a / b
```

All starts well with the following, which seems compliant with curry-standards. We're fixing `a` to 3 positionally, `a` disappears from the signature and its value is indeed bound to 3:

```python
f = partial(bar, 3)
assert str(signature(f)) == '(b)'
assert f(6) == 0.5 == f(b=6)
```

If we try to specify an alternate value for `a`, `f` won't tell us that we got an unexpected keyword, but rather that it got multiple values for argument `a`:

```python
f(a=2, b=6)  # TypeError: bar() got multiple values for argument 'a'
f(c=2, b=6)  # TypeError: bar() got an unexpected keyword argument 'c'
```

But now if we fix `b=3` through a keyword, `b` is not removed from the signature, its kind changes to keyword-only, and we can still use it (overwrite the default, as a normal default, which we couldn't do with `a` in the previous case):

```python
f = partial(bar, b=3)
assert str(signature(f)) == '(a, *, b=3)'
assert f(6) == 2.0 == f(6, b=3)
assert f(6, b=1) == 6.0
```

Why such asymmetry? It gets even stranger, we can do this:

```python
f = partial(bar, a=3)
assert str(signature(f)) == '(*, a=3, b)'  # whaaa?! non-default argument follows default argument?
```

Fine: for keyword-only arguments, there can be no confusing of what parameter a default is assigned to, but I still wonder what design-thinking or constraints are behind these choices.

Accepted answer:

Using `partial` with a Positional Argument

```python
f = partial(bar, 3)
```

By design, upon calling a function, positional arguments are assigned first. Then logically, 3 should be assigned to `a` with `partial`. It makes sense to remove it from the signature as there is no way to assign anything to it again!

- when you have `f(a=2, b=6)`, you are actually doing `bar(3, a=2, b=6)`
- when you have `f(2, 2)`, you are actually doing `bar(3, 2, 2)`

We never get rid of 3. For the new partial function:

- We can't give `a` a different value with another positional argument
- We can't use the keyword `a` to assign a different value to it as it is already "filled"

> If there is a parameter with the same name as the keyword, then the argument value is assigned to that parameter slot. However, if the parameter slot is already filled, then that is an error.

I recommend reading the function calling behavior section of PEP 3102 to get a better grasp of this matter.

Using `partial` with a Keyword Argument

```python
f = partial(bar, b=3)
```

This is a different use case. We are applying a keyword argument to `bar`. You are functionally turning

```python
def bar(a, b): ...
```

into

```python
def f(a, *, b=3): ...
```

where `b` becomes a keyword-only argument, instead of

```python
def f(a, b=3): ...
```

`inspect.signature` correctly reflects a design decision of `partial`. The keyword arguments passed to `partial` are designed to append additional positional arguments (source).

Note that this behavior does not necessarily override the keyword arguments supplied with `f = partial(bar, b=3)`, i.e., `b=3` will be applied regardless of whether you supply the second positional argument or not (and there will be a `TypeError` if you do so). This is different from a positional argument with a default value.

```python
>>> f(1, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: f() takes 1 positional argument but 2 were given
```

where `f(1, 2)` is equivalent to `bar(1, 2, b=3)`. The only way to override it is with a keyword argument:

```python
>>> f(2, b=2)
```

An argument that can only be assigned with a keyword, not positionally? This is a keyword-only argument. Thus `(a, *, b=3)` instead of `(a, b=3)`.

The Rationale of Non-default Argument follows Default Argument

```python
f = partial(bar, a=3)
assert str(signature(f)) == '(*, a=3, b)'  # whaaa?! non-default argument follows default argument?
```

You can't do `def bar(a=3, b)`. `a` and `b` are so-called positional-or-keyword arguments. You can do `def bar(*, a=3, b)`. `a` and `b` are keyword-only arguments.

Even though semantically `a` has a default value and thus is optional, we can't leave it unassigned, because `b`, which is a positional-or-keyword argument, needs to be assigned a value if we want to use `b` positionally. If we do not supply a value for `a`, we have to use `b` as a keyword argument. Checkmate! There is no way for `b` to be a positional-or-keyword argument as we intended.

The PEP for positional-only arguments also kind of shows the rationale behind it. This also has something to do with the aforementioned "function calling behavior".

`partial` != Currying & Implementation Details

`partial` by its implementation wraps the original function while storing the fixed arguments you passed to it. IT IS NOT IMPLEMENTED WITH CURRYING. It is rather partial application, instead of currying in the sense of functional programming. `partial` is essentially applying the fixed arguments first, then the arguments you called with the wrapper:

```python
def __call__(self, /, *args, **keywords):
    keywords = {**self.keywords, **keywords}
    return self.func(*self.args, *args, **keywords)
```

This explains `f(a=2, b=6)  # TypeError: bar() got multiple values for argument 'a'`.

See also: Why is partial called partial instead of curry

Under the Hood of `inspect`

The outputs of `inspect` are another story. `inspect` itself is a tool that produces user-friendly outputs. For `partial()` in particular (and `partialmethod()`, similarly), it follows the wrapped function while taking the fixed parameters into account:

```python
if isinstance(obj, functools.partial):
    wrapped_sig = _get_signature_of(obj.func)
    return _signature_get_partial(wrapped_sig, obj)
```

Do note that it is not `inspect.signature`'s goal to show you the actual signature of the wrapped function in the AST.

```python
def _signature_get_partial(wrapped_sig, partial, extra_args=()):
    """Private helper to calculate how 'wrapped_sig' signature will
    look like after applying a 'functools.partial' object (or alike) on it.
    """
    ...
```

So we have a nice and ideal signature for `f = partial(bar, 3)`, but get `f(a=2, b=6)  # TypeError: bar() got multiple values for argument 'a'` in reality.

Follow-up

If you want currying so badly, how do you implement it in Python, in the way which gives you the expected `TypeError`?
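As for that follow-up: one way to get the "expected" `TypeError` is to stop storing the fixed arguments blindly and instead bind them against the real signature on every call. This is a minimal sketch of my own (an illustration, not how `functools.partial` is actually implemented), built only on stdlib `inspect`:

```python
from inspect import signature

def strict_partial(func, *fixed_args, **fixed_kwargs):
    """Like functools.partial, but argument conflicts fail the same way a
    direct call to `func` would (e.g. 'multiple values for argument')."""
    signature(func).bind_partial(*fixed_args, **fixed_kwargs)  # validate the fixed args eagerly

    def wrapper(*args, **kwargs):
        # Re-bind everything against the original signature; a duplicate
        # value for an already-fixed parameter raises TypeError right here.
        bound = signature(func).bind(*fixed_args, *args, **{**fixed_kwargs, **kwargs})
        return func(*bound.args, **bound.kwargs)

    return wrapper

def bar(a, b):
    return a / b

f = strict_partial(bar, 3)
print(f(6))      # same result as partial: bar(3, 6) -> 0.5
try:
    f(a=2, b=6)  # TypeError: multiple values for argument 'a'
except TypeError as exc:
    print(type(exc).__name__)
```

`Signature.bind` performs the same argument-to-parameter resolution as a real call, so the reported signature and the runtime behavior can no longer disagree, at the cost of re-inspecting the signature on every invocation.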
---
Question 71391946 · 2022-03-08 · https://stackoverflow.com/questions/71391946/does-raku-have-pythons-union-type (question score 10, accepted answer score 7)

Python has a Union type, which is convenient when a method can accept multiple types:

```python
from typing import Union

def test(x: Union[str, int, float]):
    print(x)

if __name__ == '__main__':
    test(1)
    test('str')
    test(3.1415926)
```

Raku probably doesn't have a Union type like Python's, but a `where` clause can achieve a similar effect:

```raku
sub test(\x where * ~~ Int | Str | Rat) {
    say(x)
}

sub MAIN() {
    test(1);
    test('str');
    test(3.1415926);
}
```

I wonder if Raku has a possibility to provide a Union type as Python does:

```raku
#        vvvvvvvvvvvvvvvvvvvv - the Union type doesn't exist in Raku now.
sub test(Union[Int, Str, Rat] \x) {
    say(x)
}
```

Accepted answer:

My answer (which is very similar to your first solution ;) would be:

```raku
subset Union where Int | Rat | Str;

sub test(Union \x) {
    say(x)
}

sub MAIN() {
    test(1);
    test('str');
    test(pi);
}
```

```
Constraint type check failed in binding to parameter 'x'; expected Union but got Num (3.141592653589793e0)
```

(or you can put a `where` clause in the call signature, as you have it)

In contrast to Python:

- this is native in raku and does not rely on a package like "typing" to be imported
- Python Union / SumTypes are used for static hinting, which is good for eg. IDEs, but these types are unenforced in Python (per @freshpaste comment and this SO); in raku they are checked and will fail at runtime

So - the raku syntax is there to do what you ask ... sure, it's a different language so it does it in a different way. Personally I think that a typed language should fail if type checks are breached. It seems to me that type hinting that is not always enforced is a false comfort blanket.

On a wider point, raku also offers built-in Allomorph types for IntStr, RatStr, NumStr and ComplexStr - so you can work in a mixed mode using both string and math functions.
---
Question 71370107 · 2022-03-06 · https://stackoverflow.com/questions/71370107/how-to-change-input-keyboard-layout-programmatically-in-pyqt5 (question score 6, accepted answer score 6)

Is it possible to change the input keyboard layout programmatically in PyQt5? My first and second text boxes accept Tamil letters. In Tamil, many keyboard layouts are available. By default in Windows 10, the Tamil Phonetic, Tamil99 and Tamil Traditional keyboards are available. Now I want to select keyboard layouts programmatically.

For example: in my first textbox, I need to assign a "Tamil99" keyboard layout, and in the second textbox, I need to assign a "Tamil Phonetic" keyboard layout. How do I assign them programmatically?

```python
import sys
from PyQt5.QtGui import *
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *

class Diff_Language(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("InPut Different languges in Different Textbox")
        self.lbl1 = QLabel("Input Language - Tamil99 Keyboard")
        self.lbl2 = QLabel("Input Language - Tamil phonetic keyboard")
        self.tbox1 = QLineEdit()
        self.tbox1.setFont(QFont('senthamil', 10, QFont.Bold))
        self.tbox2 = QLineEdit()
        self.tbox2.setFont(QFont('senthamil', 30, QFont.Bold))
        self.vbox = QVBoxLayout()
        self.vbox.addWidget(self.lbl1)
        self.vbox.addWidget(self.tbox1)
        self.vbox.addWidget(self.lbl2)
        self.vbox.addWidget(self.tbox2)
        self.setLayout(self.vbox)

def main():
    app = QApplication(sys.argv)
    mainscreen = Diff_Language()
    app.setStyle("Fusion")
    mainscreen.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
```

Accepted answer:

So Qt doesn't offer this, but you can ask your OS to do it for you. Assuming you're just looking at Windows, you can change the current keyboard layout in Python using pywin32, which lets you easily access the Windows API from your script. Once it is installed into your Python environment you can `import win32api` and then use a call like `win32api.LoadKeyboardLayout('00000809', 1)` to set the layout (the values I've put here set it to UK English).

The first parameter is a string representing the keyboard layout to use, and the second is a flag. See documentation.

I found this list of KLIDs (Keyboard Layout IDs), which shows two for Tamil keyboards. `"00000449"` is Tamil, `"00020449"` is Tamil 99. The 449 at the end means Tamil, and the two digits before set which subtype of Tamil keyboard to use (e.g. 20 for Tamil 99) - I can't find one for Tamil Phonetic, but maybe you'll be able to find it.

You can set up your program to call these functions whenever you want it to switch keyboard (for example, when your user activates a specific text input box).

Also, if you want to check the current keyboard layout you can use `win32api.GetKeyboardLayout(0)` (doc). Maybe you can use this to figure out what the IDs are for each of the Tamil keyboards you want to use. Mind that it returns an int for the locale id rather than a string.

Other useful keyboard-related functions are `win32api.GetKeyboardLayoutList()` (to find all the locales installed on the current machine), `win32api.GetKeyboardLayoutName()` and `win32api.GetKeyboardState()` - documentation for all these can be found here.
---
Question 71367526 · 2022-03-06 · https://stackoverflow.com/questions/71367526/nextcord-slash-command-nextcord-errors-httpexception-400-bad-request-error-c (question score 4, accepted answer score 4)

I was migrating my bot from discord.py to nextcord and I changed my help command to a slash command, but it kept showing me this error:

```
nextcord.errors.HTTPException: 400 Bad Request (error code: 50035): Invalid Form Body
```

From what I found on the web, this error is said to be caused by exceeding 2000 characters.

Full error:

```
Ignoring exception in on_connect
Traceback (most recent call last):
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 415, in _run_event
    await coro(*args, **kwargs)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 1894, in on_connect
    await self.rollout_application_commands()
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 1931, in rollout_application_commands
    await self.register_new_application_commands(data=global_payload)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/client.py", line 1855, in register_new_application_commands
    await self._connection.register_new_application_commands(data=data, guild_id=None)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/state.py", line 736, in register_new_application_commands
    await self.register_application_command(app_cmd, guild_id)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/state.py", line 754, in register_application_command
    raw_response = await self.http.upsert_global_command(self.application_id, payload)
  File "/opt/virtualenvs/python3/lib/python3.8/site-packages/nextcord/http.py", line 337, in request
    raise HTTPException(response, data)
nextcord.errors.HTTPException: 400 Bad Request (error code: 50035): Invalid Form Body
```

Code:

```python
@client.slash_command(name="help", description="Help Command!")
async def help(ctx: nextcord.Interaction, *, command: str = nextcord.SlashOption(name="Command", description="The command you want to get help on.")):
    embedh = nextcord.Embed(title="Help", description="Help Menu is here!", color=nextcord.Color.green())
    embedh.set_footer(text=f'Requested by {ctx.author}', icon_url=ctx.author.avatar.url)
    embedh.add_field(name="General", value="`dm` `say` `poll`")
    embedh.add_field(name="Fun", value="`avatar` `giveaway` `8ball`", inline=False)
    embedh.add_field(name="Events", value="`guessthenumber`", inline=False)
    embedh.add_field(name="Image", value="`wanted`")
    embedh.add_field(name="Moderation", value="`ban` `unban` `kick` `mute` `warn` `purge` `wakeup` `makerole` `slowmode` `role` `lock` `unlock` `nickname`", inline=False)
    embedh.add_field(name="Utility", value="`ping` `help` `prefix` `setprefix` `serverinfo` `feedback` `credits` `support` `website` `guild`")
    await ctx.response.send(embed=embedh)
```

Information:

- JSON Error Code (I got it from here): 50035
- IDE: Replit
- Module: nextcord

How do I resolve this error?

Accepted answer:

Explanation

From the Discord dev docs:

> CHAT_INPUT command names and command option names must match the following regex `^[\w-]{1,32}$`

The regex essentially translates to: if there is a lowercase variant of any letters used, you must use those.

In this case, your option name, 'Command', has an uppercase 'C', which is disallowed.

Note: The length of the name must also be lower than or equal to 32.

Reference

Application command naming
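A small helper of my own (not part of nextcord) can catch these naming violations before the HTTP request ever reaches Discord; it applies the regex from the docs plus the lowercase rule quoted above:

```python
import re

# Discord's documented rule for CHAT_INPUT command/option names.
NAME_RULE = re.compile(r"^[\w-]{1,32}$", re.UNICODE)

def is_valid_command_name(name: str) -> bool:
    """True if `name` satisfies Discord's slash-command naming rule:
    1-32 characters matching the documented regex, with no avoidable uppercase."""
    return bool(NAME_RULE.fullmatch(name)) and name == name.lower()

print(is_valid_command_name("command"))   # True  (the fix for this question)
print(is_valid_command_name("Command"))   # False (uppercase 'C' is disallowed)
print(is_valid_command_name("x" * 33))    # False (longer than 32 characters)
```

Note that the regex alone would accept "Command", since `\w` matches uppercase letters too; the extra `name == name.lower()` check encodes Discord's additional lowercase requirement.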
---
Question 71344780 · 2022-03-03 · https://stackoverflow.com/questions/71344780/importerror-dll-load-failed-while-importing-gdal-the-specified-module-could-n (question score 5, accepted answer score 5)

I have a python script that previously worked but that now throws the error:

```
ImportError: DLL load failed while importing _gdal: The specified module could not be found.
```

I am trying to upload a shapefile using fiona and originally the message read:

```
ImportError: DLL load failed while importing _fiona: The specified module could not be found.
```

I am using Anaconda Navigator as my IDE on Windows 11.

I am aware that this is a question that has been asked before and I have read the answers to those questions. The solutions, however, have not worked, either due to my circumstance or my misinterpretation and action in following through with them. So my question is either how do I fix this, or, if it is not that simple, how to better understand the problem.

I have looked inside the DLLs folder within the environment folder that I am using and there is nothing in there with the name fiona, gdal or geopandas. My attempts so far:

1. Uninstall and re-install fiona, gdal and geopandas (as I believe they are dependent).
2. Update all libraries and anaconda to the latest versions.
3. Download Visual C++ Redistributable for Visual Studio 2015. Ran into an issue during download as it was already installed on my computer, likely because it is a Windows computer. Is it possible that this would help if I moved it to a different path/folder?
4. Uninstall and re-install Anaconda Navigator on the computer. Re-create the virtual environment and import the necessary libraries. Result: error in line `import geopandas as gpd`: `ImportError: DLL load failed while importing _datadir: The specified module could not be found.`

If there is a fix that I have not mentioned, or if you suspect that I attempted one of the above fixes incorrectly because of my limited understanding of how python libraries are stored, please make a suggestion! Thank you

Accepted answer:

I was struggling badly with the same problem for the last couple of days. Using conda, I've tried everything I found on the internet, such as:

```
conda update gdal
conda update -n base -c defaults conda
```

- Creating new environments (over and over again).
- Despite it not being recommended, I even tried it with pip install... but no results.

In the end, what worked for me was to create a new environment with Python version 3.6:

```
conda create -n env python=3.6 gdal spyder
```

Let me know if it worked.
---
Question 71395504 · 2022-03-08 · https://stackoverflow.com/questions/71395504/input-0-of-layer-model-is-incompatible-with-the-layer-expected-shape-none-5 (question score 5, accepted answer score 2)

I am training a Unet segmentation model for a binary class. The dataset is loaded in a tensorflow data pipeline. The images are in (512, 512, 3) shape, masks are in (512, 512, 1) shape. The model expects the input in (512, 512, 3) shape. But I am getting the following error:

```
Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 512, 512, 3), found shape=(512, 512, 3)
```

Here are the images in the metadata dataframe.

Randomly sampling the indices to select the training and validation set:

```python
num_samples = train_metadata.shape[0]

train_indices = np.random.choice(range(num_samples), int(num_samples * 0.8), replace=False)
valid_indices = list(set(range(num_samples)) - set(train_indices))

train_samples = train_metadata.iloc[train_indices, ]
valid_samples = train_metadata.iloc[valid_indices, ]
```

Dimensions:

```python
IMG_WIDTH = 512
IMG_HEIGHT = 512
IMG_CHANNELS = 3
```

Parsing function for training images:

```python
def parse_function_train_images(image_path):
    image_path = image_path
    mask_path = tf.strings.regex_replace(image_path, "sat", "mask")
    mask_path = tf.strings.regex_replace(mask_path, "jpg", "png")

    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.convert_image_dtype(image, tf.uint8)
    image = tf.image.resize(image, (IMG_WIDTH, IMG_HEIGHT))
    #image = tf.expand_dims(image, axis=0)

    mask = tf.io.read_file(mask_path)
    mask = tf.image.decode_png(mask, channels=1)
    mask = tf.image.convert_image_dtype(mask, tf.uint8)
    mask = tf.image.resize(mask, (IMG_WIDTH, IMG_HEIGHT))
    #mask = tf.where(mask == 255, np.dtype("uint8").type(0), mask)
    return image, mask
```

Parsing function for test images:

```python
def parse_function_test_images(image_path):
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.convert_image_dtype(image, tf.uint8)
    image = tf.image.resize(image, (IMG_WIDTH, IMG_HEIGHT))
    #image = tf.expand_dims(image, axis=0)
    return image
```

Loading the dataset:

```python
ds = tf.data.Dataset.from_tensor_slices(train_samples["sat_with_path"].values)
train_dataset = ds.map(parse_function_train_images)

validation_ds = tf.data.Dataset.from_tensor_slices(valid_samples["sat_with_path"].values)
validation_dataset = validation_ds.map(parse_function_train_images)

test_ds = tf.data.Dataset.from_tensor_slices(test_metadata["sat_with_path"].values)
test_dataset = test_ds.map(parse_function_test_images)
```

Normalizing the images:

```python
def normalize(image, mask):
    image = tf.cast(image, tf.float32) / 255.0
    mask = tf.cast(mask, tf.float32) / 255.0
    return image, mask

def test_normalize(image):
    image = tf.cast(image, tf.float32) / 255.0
    return image

TRAIN_LENGTH = len(train_dataset)
BATCH_SIZE = 64
BUFFER_SIZE = 1000
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
```

Mapping the dataset:

```python
train_images = train_dataset.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)
validation_images = validation_dataset.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)
test_images = test_dataset.map(test_normalize, num_parallel_calls=tf.data.AUTOTUNE)
```

Augmentation layer:

```python
class Augment(tf.keras.layers.Layer):
    def __init__(self, seed=42):
        super().__init__()
        self.augment_inputs = tf.keras.layers.RandomFlip(mode="horizontal", seed=seed)
        self.augment_labels = tf.keras.layers.RandomFlip(mode="horizontal", seed=seed)

    def call(self, inputs, labels):
        inputs = self.augment_inputs(inputs)
        inputs = tf.expand_dims(inputs, axis=0)
        labels = self.augment_labels(labels)
        return inputs, labels

train_batches = (
    train_images
    .cache()
    .shuffle(BUFFER_SIZE)
    .batch(BATCH_SIZE)
    .repeat()
    .map(Augment())
    .prefetch(buffer_size=tf.data.AUTOTUNE)
)

validation_batches = (
    validation_images
    .cache()
    .shuffle(BUFFER_SIZE)
    .batch(BATCH_SIZE)
    .repeat()
    .map(Augment())
    .prefetch(buffer_size=tf.data.AUTOTUNE)
)

test_batches = test_images.batch(BATCH_SIZE)
```

Unet model:

```python
inputs = tf.keras.layers.Input((IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))

c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(inputs)
c1 = tf.keras.layers.Dropout(0.1)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)

c2 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p1)
c2 = tf.keras.layers.Dropout(0.1)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)

c3 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c3)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)

c4 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p3)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c4)
p4 = tf.keras.layers.MaxPooling2D((2, 2))(c4)

c5 = tf.keras.layers.Conv2D(256, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p4)
c5 = tf.keras.layers.Dropout(0.3)(c5)
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c5)

u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding="same")(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u6)
c6 = tf.keras.layers.Dropout(0.2)(c6)
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c6)

u7 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding="same")(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u7)
c7 = tf.keras.layers.Dropout(0.2)(c7)
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c7)

u8 = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding="same")(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u8)
c8 = tf.keras.layers.Dropout(0.1)(c8)
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c8)

u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding="same")(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)
c9 = tf.keras.layers.Conv2D(16, (3, 3), strides=(2, 2), padding="same")(u9)
c9 = tf.keras.layers.Dropout(0.1)(c9)
c9 = tf.keras.layers.Conv2D(16, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c9)

outputs = tf.keras.layers.Conv2D(1, (1, 1), activation="sigmoid")(c9)

model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

checkpointer = tf.keras.callbacks.ModelCheckpoint('model_for_nuclie.h5', verbose=1, save_best_only=True)

callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=2, monitor="val_loss"),
    tf.keras.callbacks.TensorBoard(log_dir="logs"),
    checkpointer
]
```

Fit the model to data:

```python
results = model.fit(train_images, validation_data=validation_images, \
                    batch_size=16, epochs=25, callbacks=callbacks
)
```

Error: (shown as a screenshot in the original post; it is the shape-mismatch message quoted at the top)

Accepted answer:

Use `train_batches` in `model.fit` and not `train_images`. Also, you do not need to use `repeat()`, which causes an infinite dataset if you do not specify how many times you want to repeat your dataset.

Regarding your labels error, try rewriting your model like this:

```python
import tensorflow as tf

inputs = tf.keras.layers.Input((512, 512, 3))

c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(inputs)
c1 = tf.keras.layers.Dropout(0.1)(c1)
c1 = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', kernel_initializer="he_normal", padding="same")(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)

c2 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p1)
c2 = tf.keras.layers.Dropout(0.1)(c2)
c2 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)

c3 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p2)
c3 = tf.keras.layers.Dropout(0.2)(c3)
c3 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c3)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)

c4 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p3)
c4 = tf.keras.layers.Dropout(0.2)(c4)
c4 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c4)
p4 = tf.keras.layers.MaxPooling2D((2, 2))(c4)

c5 = tf.keras.layers.Conv2D(256, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(p4)
c5 = tf.keras.layers.Dropout(0.3)(c5)
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c5)

u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding="same")(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u6)
c6 = tf.keras.layers.Dropout(0.2)(c6)
c6 = tf.keras.layers.Conv2D(128, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c6)

u7 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding="same")(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u7)
c7 = tf.keras.layers.Dropout(0.2)(c7)
c7 = tf.keras.layers.Conv2D(64, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c7)

u8 = tf.keras.layers.Conv2DTranspose(32, (2, 2), strides=(2, 2), padding="same")(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(u8)
c8 = tf.keras.layers.Dropout(0.1)(c8)
c8 = tf.keras.layers.Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_normal", padding="same")(c8)

u9 = tf.keras.layers.Conv2DTranspose(16, (2, 2), strides=(2, 2), padding="same")(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis=3)

outputs = tf.keras.layers.Conv2D(1, (1, 1), activation="sigmoid")(u9)

model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
---
Question 71380024 · 2022-03-07 · https://stackoverflow.com/questions/71380024/coverage-py-vs-pytest-cov (question score 32, accepted answer score 31)

The documentation of coverage.py says that:

> Many people choose to use the pytest-cov plugin, but for most purposes, it is unnecessary.

So I would like to know: what is the difference between these two? And which one is the most efficient? Thank you in advance.

Accepted answer:

pytest-cov uses coverage.py, so there's no difference in efficiency, or basic behavior. pytest-cov auto-configures multiprocessing settings, and ferries data around if you use pytest-xdist.
---
Question 71386332 · 2022-03-07 · https://stackoverflow.com/questions/71386332/how-do-i-specify-extra-bracket-dependencies-in-a-pyproject-toml (question score 33, accepted answer score 44)

I'm working on a project that specifies its dependencies using Poetry and a pyproject.toml file to manage dependencies. The documentation for one of the libraries I need suggests pip-installing with an "extra" option to one of the dependencies, like this:

```
pip install google-cloud-bigquery[opentelemetry]
```

How should I reflect this requirement in the pyproject.toml file? Currently, there are a few lines like this:

```toml
[tool.poetry.dependencies]
python = "3.7.10"
apache-beam = "2.31.0"
dynaconf = "3.1.4"
google-cloud-bigquery = "2.20.0"
```

Changing the last line to `google-cloud-bigquery[opentelemetry] = ">=2.20.0"` yields:

```
Invalid TOML file /home/jupyter/vertex-monitoring/pyproject.toml: Unexpected character: 'o' at line 17 col 22
```

Other variants that don't seem to be parsed properly: `google-cloud-bigquery["opentelemetry"] = "2.20.0"`

There are other StackOverflow questions which look related, as well as several different PEP docs, but my searches are complicated because I'm not sure whether these are "options" or "extras" or something else.

Accepted answer:

You can add it by `poetry add "google-cloud-bigquery[opentelemetry]"`. This will result in:

```toml
[tool.poetry.dependencies]
...
google-cloud-bigquery = {extras = ["opentelemetry"], version = "^2.34.2"}
```
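For projects that use standard PEP 621 metadata rather than Poetry's own dependency table, the same extra is written inline with PEP 508 requirement syntax. A sketch for comparison (the project name here is hypothetical; the package and version are the ones from the question):

```toml
[project]
name = "vertex-monitoring"   # hypothetical project name
dependencies = [
    "google-cloud-bigquery[opentelemetry]>=2.20.0",
]
```

The bracketed extra is only valid inside a requirement string, which is why putting it in a bare TOML key on the left-hand side fails to parse.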
---
Question 71373337 · 2022-03-06 · https://stackoverflow.com/questions/71373337/invalidentrypoint-for-aws-lambda-with-python-docker-container (question score 29, accepted answer score 47)

I've built an image for Lambda using `public.ecr.aws/lambda/python:3.8` but always get this error. I tried changing up the function and file names, but am not getting any more details for debugging. Also, I've run the function locally and the entrypoint/cmd works.

```
START RequestId: cb4ba88c-c347-4e7d-b1ca-031a2e02fde4 Version: $LATEST
IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error Entrypoint: [/lambda-entrypoint.sh] Cmd: [index.lambda_handler] WorkingDir: [/var/task]
IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error Entrypoint: [/lambda-entrypoint.sh] Cmd: [index.lambda_handler] WorkingDir: [/var/task]
END RequestId: cb4ba88c-c347-4e7d-b1ca-031a2e02fde4
REPORT RequestId: cb4ba88c-c347-4e7d-b1ca-031a2e02fde4  Duration: 12.86 ms  Billed Duration: 13 ms  Memory Size: 128 MB  Max Memory Used: 3 MB
RequestId: cb4ba88c-c347-4e7d-b1ca-031a2e02fde4 Error: fork/exec /lambda-entrypoint.sh: exec format error
Runtime.InvalidEntrypoint
```

Accepted answer:

Turned out to be an architecture compatibility issue - I needed to make sure the arch matched between the Lambda function and the Docker image. Locally I was building on an M1 with arm64, but the function is configured by default to use amd64.

I changed my build command to:

```
docker buildx build --platform linux/amd64 -t <image_name>:<image_tag>
```

Although I could have also updated the arch type for the Lambda function to use arm64.
71,372,066 | 2022-3-6 | https://stackoverflow.com/questions/71372066/docker-fails-to-install-cffi-with-python3-9-alpine-in-dockerfile | Im trying to run the below Dockerfile using docker-compose. I searched around but I couldnt find a solution on how to install cffi with python:3.9-alpine. I also read this post which states that pip 21.2.4 or greater can be a possible solution but it didn't work out form me https://www.pythonfixing.com/2021/09/fixed-why-i-getting-this-error-while.html Docker file FROM python:3.9-alpine ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 COPY ./requirements.txt . RUN apk add --update --no-cache postgresql-client RUN apk add --update --no-cache --virtual .tmp-build-deps \ gcc libc-dev linux-headers postgresql-dev RUN pip3 install --upgrade pip && pip3 install -r /requirements.txt RUN apk del .tmp-build-deps RUN mkdir /app WORKDIR /app COPY . /app RUN adduser -D user USER user This is the requirements.txt file. asgiref==3.5.0 backports.zoneinfo==0.2.1 certifi==2021.10.8 cffi==1.15.0 cfgv==3.3.1 ... Error message: process-exited-with-error #9 47.99 #9 47.99 Γ Running setup.py install for cffi did not run successfully. #9 47.99 β exit code: 1 #9 47.99 β°β> [58 lines of output] #9 47.99 Package libffi was not found in the pkg-config search path. #9 47.99 Perhaps you should add the directory containing `libffi.pc' #9 47.99 to the PKG_CONFIG_PATH environment variable #9 47.99 Package 'libffi', required by 'virtual:world', not found #9 47.99 Package libffi was not found in the pkg-config search path. #9 47.99 Perhaps you should add the directory containing `libffi.pc' #9 47.99 to the PKG_CONFIG_PATH environment variable #9 47.99 Package 'libffi', required by 'virtual:world', not found #9 47.99 Package libffi was not found in the pkg-config search path. 
#9 47.99 Perhaps you should add the directory containing `libffi.pc' #9 47.99 to the PKG_CONFIG_PATH environment variable #9 47.99 Package 'libffi', required by 'virtual:world', not found #9 47.99 Package libffi was not found in the pkg-config search path. #9 47.99 Perhaps you should add the directory containing `libffi.pc' #9 47.99 to the PKG_CONFIG_PATH environment variable #9 47.99 Package 'libffi', required by 'virtual:world', not found #9 47.99 Package libffi was not found in the pkg-config search path. #9 47.99 Perhaps you should add the directory containing `libffi.pc' #9 47.99 to the PKG_CONFIG_PATH environment variable #9 47.99 Package 'libffi', required by 'virtual:world', not found #9 47.99 running install #9 47.99 running build #9 47.99 running build_py #9 47.99 creating build #9 47.99 creating build/lib.linux-aarch64-3.9 #9 47.99 creating build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/__init__.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/cffi_opcode.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/commontypes.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/vengine_gen.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/vengine_cpy.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/backend_ctypes.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/api.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/ffiplatform.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/verifier.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/error.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/setuptools_ext.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/lock.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/recompiler.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/pkgconfig.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/cparser.py -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/model.py -> 
build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/_cffi_include.h -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/parse_c_type.h -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/_embedding.h -> build/lib.linux-aarch64-3.9/cffi #9 47.99 copying cffi/_cffi_errors.h -> build/lib.linux-aarch64-3.9/cffi #9 47.99 warning: build_py: byte-compiling is disabled, skipping. #9 47.99 #9 47.99 running build_ext #9 47.99 building '_cffi_backend' extension #9 47.99 creating build/temp.linux-aarch64-3.9 #9 47.99 creating build/temp.linux-aarch64-3.9/c #9 47.99 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.9 -c c/_cffi_backend.c -o build/temp.linux-aarch64-3.9/c/_cffi_backend.o #9 47.99 c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory #9 47.99 15 | #include <ffi.h> #9 47.99 | ^~~~~~~ #9 47.99 compilation terminated. #9 47.99 error: command '/usr/bin/gcc' failed with exit code 1 #9 47.99 [end of output] #9 47.99 #9 47.99 note: This error originates from a subprocess, and is likely not a problem with pip. #9 47.99 error: legacy-install-failure #9 47.99 #9 47.99 × Encountered error while trying to install package. #9 47.99 ╰─> cffi #9 47.99 #9 47.99 note: This is an issue with the package mentioned above, not pip. #9 47.99 hint: See above for output from the failure. | @Klaus D.'s comment helped a lot. I updated Dockerfile: RUN apk add --update --no-cache --virtual .tmp-build-deps \ gcc libc-dev linux-headers postgresql-dev \ && apk add libffi-dev | 14 | 23 |
71,366,868 | 2022-3-6 | https://stackoverflow.com/questions/71366868/django-how-to-pass-variable-to-include-tag-from-url-tag | So right now I hardcode the url, which is a bit annoying if you move endpoints. This is my current setup for my navbar items. # in base.html {% include 'components/navbar/nav-item.html' with title='Event Manager' url='/eventmanager/' %} # in components/navbar/nav-item.html <li> <a href="{{ url }}">{{ title }}</a> </li> See how I use the url right now? What I now want is this: {% include 'components/navbar/link.html' with title='Event Manager' url={% url 'event_manager:index' %} %} But apparently, this is invalid syntax. How do I do it? If it is not possible, how do I create an app that somehow creates a view where I can pass all the URLs with a context variable? In theory, this sounds easy but I'd have to somehow insert that view in every other view. | You can store the url tag's output in a variable first, instead of nesting it inside the include tag; use it like this: {% url 'event_manager:index' as myurl %} {% include 'components/navbar/link.html' with url=myurl %} Now you can pass it to the include tag. | 4 | 9 |
71,366,566 | 2022-3-5 | https://stackoverflow.com/questions/71366566/how-to-play-audio-in-jupyter-notebook-with-vscode | Using a jupyter notebook in VSCode, I'm trying to run the following code from this documentation: import numpy as np from IPython.display import Audio framerate = 44100 t = np.linspace(0,5,framerate*5) data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) Audio(data, rate=framerate) However, I only get this If I press play button, then nothing happens... | As of today, it seems VSCode Jupyter extension does not support audio. You can track the issue here on their Github. One solution can be merging this pull request and rebuilding VSCode, which is not suggested. The preferred alternate solution is using jupyter lab instead of VSCode for such use cases. | 4 | 5 |
71,365,904 | 2022-3-5 | https://stackoverflow.com/questions/71365904/how-to-print-all-the-routes-used-in-django | I want to display all the Routes in an app built with Django, something like what Laravel does with the command: php artisan route:list Is there a way to get all the Routes? | django-extensions has the command show_urls, so after installation you can do: python manage.py show_urls | 4 | 8 |
71,357,427 | 2022-3-4 | https://stackoverflow.com/questions/71357427/how-to-pass-a-rust-function-as-a-callback-to-python-using-pyo3 | I am using Pyo3 to call Rust functions from Python and vice versa. I am trying to achieve the following: Python calls rust_function_1 Rust function rust_function_1 calls Python function python_function passing Rust function rust_function_2 as a callback argument Python function python_function calls the callback, which in this case is Rust function rust_function_2 I cannot figure out how to pass rust_function_2 as a callback argument to python_function. I have the following Python code: import rust_module def python_function(callback): print("This is python_function") callback() if __name__ == '__main__': rust_module.rust_function_1() And I have the following non-compiling Rust code: use pyo3::prelude::*; #[pyfunction] fn rust_function_1() -> PyResult<()> { println!("This is rust_function_1"); Python::with_gil(|py| { let python_module = PyModule::import(py, "python_module")?; python_module .getattr("python_function")? 
.call1((rust_function_2.into_py(py),))?; // Compile error Ok(()) }) } #[pyfunction] fn rust_function_2() -> PyResult<()> { println!("This is rust_function_2"); Ok(()) } #[pymodule] #[pyo3(name = "rust_module")] fn quantum_network_stack(_python: Python, module: &PyModule) -> PyResult<()> { module.add_function(wrap_pyfunction!(rust_function_1, module)?)?; module.add_function(wrap_pyfunction!(rust_function_2, module)?)?; Ok(()) } The error message is: error[E0599]: the method `into_py` exists for fn item `fn() -> Result<(), PyErr> {rust_function_2}`, but its trait bounds were not satisfied --> src/lib.rs:10:37 | 10 | .call1((rust_function_2.into_py(py),))?; | ^^^^^^^ method cannot be called on `fn() -> Result<(), PyErr> {rust_function_2}` due to unsatisfied trait bounds | = note: `rust_function_2` is a function, perhaps you wish to call it = note: the following trait bounds were not satisfied: `fn() -> Result<(), PyErr> {rust_function_2}: AsPyPointer` which is required by `&fn() -> Result<(), PyErr> {rust_function_2}: pyo3::IntoPy<Py<PyAny>>` | The comment from PitaJ led me to the solution. Rust code that works: use pyo3::prelude::*; #[pyclass] struct Callback { #[allow(dead_code)] // callback_function is called from Python callback_function: fn() -> PyResult<()>, } #[pymethods] impl Callback { fn __call__(&self) -> PyResult<()> { (self.callback_function)() } } #[pyfunction] fn rust_function_1() -> PyResult<()> { println!("This is rust_function_1"); Python::with_gil(|py| { let python_module = PyModule::import(py, "python_module")?; let callback = Box::new(Callback { callback_function: rust_function_2, }); python_module .getattr("python_function")? 
.call1((callback.into_py(py),))?; Ok(()) }) } #[pyfunction] fn rust_function_2() -> PyResult<()> { println!("This is rust_function_2"); Ok(()) } #[pymodule] #[pyo3(name = "rust_module")] fn quantum_network_stack(_python: Python, module: &PyModule) -> PyResult<()> { module.add_function(wrap_pyfunction!(rust_function_1, module)?)?; module.add_function(wrap_pyfunction!(rust_function_2, module)?)?; module.add_class::<Callback>()?; Ok(()) } Python code that works (same as in the question): import rust_module def python_function(callback): print("This is python_function") callback() if __name__ == '__main__': rust_module.rust_function_1() The following solution improves on the above solution in a number of ways: The callback provided by Rust is stored and called later, instead of being called immediately (this is more realistic for real-life use cases) Each time Python calls Rust, it passes in a PythonApi object, which removes the need for the Rust functions to do a Python import every time they are called. The callback provided by Rust can be a closure that captures variables (move semantics only) in addition to a plain function. The more general Rust code is as follows: use pyo3::prelude::*; #[pyclass] struct Callback { #[allow(dead_code)] // callback_function is called from Python callback_function: Box<dyn Fn(&PyAny) -> PyResult<()> + Send>, } #[pymethods] impl Callback { fn __call__(&self, python_api: &PyAny) -> PyResult<()> { (self.callback_function)(python_api) } } #[pyfunction] fn rust_register_callback(python_api: &PyAny) -> PyResult<()> { println!("This is rust_register_callback"); let message: String = "a captured variable".to_string(); Python::with_gil(|py| { let callback = Box::new(Callback { callback_function: Box::new(move |python_api| { rust_callback(python_api, message.clone()) }), }); python_api .getattr("set_callback")? 
.call1((callback.into_py(py),))?; Ok(()) }) } #[pyfunction] fn rust_callback(python_api: &PyAny, message: String) -> PyResult<()> { println!("This is rust_callback"); println!("Message = {}", message); python_api.getattr("some_operation")?.call0()?; Ok(()) } #[pymodule] #[pyo3(name = "rust_module")] fn quantum_network_stack(_python: Python, module: &PyModule) -> PyResult<()> { module.add_function(wrap_pyfunction!(rust_register_callback, module)?)?; module.add_function(wrap_pyfunction!(rust_callback, module)?)?; module.add_class::<Callback>()?; Ok(()) } The more general Python code is as follows: import rust_module class PythonApi: def __init__(self): self.callback = None def set_callback(self, callback): print("This is PythonApi::set_callback") self.callback = callback def call_callback(self): print("This is PythonApi::call_callback") assert self.callback is not None self.callback(self) def some_operation(self): print("This is PythonApi::some_operation") def python_function(python_api, callback): print("This is python_function") python_api.callback = callback def main(): print("This is main") python_api = PythonApi() print("Calling rust_register_callback") rust_module.rust_register_callback(python_api) print("Returned from rust_register_callback; back in main") print("Calling callback") python_api.call_callback() if __name__ == '__main__': main() The output from the latter version of code is as follows: This is main Calling rust_register_callback This is rust_register_callback This is PythonApi::set_callback Returned from rust_register_callback; back in main Calling callback This is PythonApi::call_callback This is rust_callback Message = a captured variable This is PythonApi::some_operation | 4 | 6 |
71,362,928 | 2022-3-5 | https://stackoverflow.com/questions/71362928/average-values-over-all-offset-diagonals | I'm trying to compute average values of shifted diagonals of a square array. Given input matrix like (in reality much larger than 3x3): [[a, b, c], [d, e, f], [g, h, i]] correct answer would be [g, (d+h)/2, (a+e+i)/3, (b+f)/2, c] A code to compute such average could be: import numpy as np def offset_diag_mean(mat): n = len(mat) return np.array([np.mean(np.diag(mat,k)) for k in range(-n+1,n)]) I assume one can do without a list comprehension (speed is quite important for me). Any clever solutions I'm missing? | On efficient solution is to accumulate lines of the input 2D array directly in the output array at a specific position and then perform the division. The idea is to zero-initialize an output array, then add [a, b, c] to output[2:5], then add [d, e, f] to output[1:4] and then add [g, h, i] to output[0:3]. Finally, we can divide result by [1, 2, 3, 2, 1]. Here is an implementation: # Use the decorator @nb.njit here to use Numba def compute(mat): n = mat.shape[0] output = np.zeros(n*2-1, dtype=np.float64) for i in range(n-1, -1, -1): output[i:i+n] += mat[n-1-i] output[0:n] /= np.arange(1, n+1, 1, dtype=np.float64) output[n:] /= np.arange(n-1, 0, -1, dtype=np.float64) return output Here is the resulting performance on my machine on a 1000x1000 array: Initial code: 15.31 ms Kevin's code: 3.26 ms ( x4.7) This code: 1.42 ms (x10.8) This code + Numba: 0.66 ms (x23.2) Thus, this implementation is 10.8 times faster than the initial implementation and 2.3 times faster than the fastest alternative version. Note that using Numba results in an even faster code with almost not change to the code: this Numba version is 23.2 times faster than the initial implementation. | 4 | 3 |
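The accumulation trick in the fastest answer above can be checked without NumPy at all. Below is a pure-Python sketch of the same idea: each element (i, j) lies on diagonal k = j − i, which maps to output slot (n − 1 − i) + j, and each slot is finally divided by the length of that diagonal. The function name is made up for the demo:

```python
def offset_diag_mean_py(mat):
    # Accumulate every element into the output slot of its diagonal,
    # then divide each slot by the number of elements on that diagonal.
    n = len(mat)
    out = [0.0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            out[(n - 1 - i) + j] += mat[i][j]
    counts = list(range(1, n + 1)) + list(range(n - 1, 0, -1))
    return [s / c for s, c in zip(out, counts)]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(offset_diag_mean_py(m))  # [7.0, 6.0, 5.0, 4.0, 3.0]
```

The output order matches the k = -n+1 … n-1 order produced by the np.diag version in the question.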
71,359,897 | 2022-3-5 | https://stackoverflow.com/questions/71359897/why-does-python-point-to-my-systems-default-python-interpreter-instead-of-my | python points to my system's default python interpreter, instead of my pyenv python interpreter. I created the python virtual environment and activated it as follows: pyenv virtualenv 3.8.12 test3 pyenv activate test3 Then, running python gives me a python 3.7 interpreter (which is my system's default python interpreter), instead of 3.8.12. Why? Full command outputs: root@server:/home/code-base/f# pyenv virtualenv 3.8.12 test3 Looking in links: /tmp/tmp1yp95sav Requirement already satisfied: setuptools in /root/.pyenv/versions/3.8.12/envs/test3/lib/python3.8/site-packages (56.0.0) Requirement already satisfied: pip in /root/.pyenv/versions/3.8.12/envs/test3/lib/python3.8/site-packages (21.1.1) root@server:/home/code-base/f# pyenv activate test3 pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior. (test3) root@server:/home/code-base/f# python Python 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] :: Anaconda, Inc. on linux Additionally: pyenv which python returns /root/.pyenv/versions/test3/bin/python command -v python returns /opt/conda/bin/python $PATH inside my virtualenv: /root/.pyenv/plugins/pyenv-virtualenv/shims:/root/.pyenv/bin:/opt/conda/bin:/app/python/bin:/opt/conda/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin ls -la /root/.pyenv/plugins/pyenv-virtualenv/shims contains two folders: activate and deactivate. | Given the new information you gave us, it is most likely that you are missing an eval "$(pyenv init --path)" in your ~/.profile (or in your Dockerfile as you are using K8s), as /root/.pyenv/shim is not part of $PATH. 
Old answer: Two possible solutions here: Either you did not select your 3.8.12 binary as a system default via: $ pyenv global 3.8.12 $ python -V Python 3.8.12 $ pyenv versions system 2.7.15 * 3.8.12 (set by /home/realpython/.pyenv/version) or /opt/conda/bin/ has a higher priority in your $PATH than your pyenv installation. | 4 | 2 |
71,357,872 | 2022-3-4 | https://stackoverflow.com/questions/71357872/boto3-how-to-assume-iam-role-to-access-other-account | Looking for some guidance with regards to uploading files into AWS S3 bucket via a python script and an IAM role. I am able to upload files using BOTO3 and an aws_access_key_id & aws_secret_access_key for other scripts. However, I have now been given an IAM role to login to a certain account. I have no issue using AWS CLI to authenticate and query the S3 data so I do believe that my .aws/credential and .aws/config files are correct. However I am not sure how to use the ARN value within my python code. This is what I have put together so far, but get a variety of errors which all lead to denied access: session = boto3.Session(profile_name='randomName') session.client('sts').get_caller_identity() assumed_role_session = boto3.Session(profile_name='randomNameAccount') print(assumed_role_session.client('sts').get_caller_identity()) credentials = session.get_credentials() aws_access_key_id = credentials.access_key aws_secret_access_key = credentials.secret_key s3 = boto3.client('s3', aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key) bucket_name = 'bucketName' This is a sample of what my credential and config files looks like as a referal. .aws/config file: [profile randomNameAccount] role_arn = arn:aws:iam::12345678910:role/roleName source_profile = randomName aws/credentials file: [randomName] aws_access_key_id = 12345678910 aws_secret_access_key = 1234567-abcdefghijk My question is help around the python code to be able to authenticate against AWS and navigate around a S3 bucket using an IAM role and then upload files when I call an upload function. Thank you in advance. 
| You should create an entry for the IAM Role in ~/.aws/credentials that refers to a set of IAM User credentials that have permission to assume the role: [my-user] aws_access_key_id = AKIAxxx aws_secret_access_key = xxx [my-role] source_profile = my-user role_arn = arn:aws:iam::123456789012:role/the-role Add an entry to ~/.aws/config to provide a default region: [profile my-role] region = ap-southeast-2 Then you can assume the IAM Role with this code: import boto3 # Create a session by assuming the role in the named profile session = boto3.Session(profile_name='my-role') # Use the session to access resources via the role s3_client = session.client('s3') response = s3_client.list_objects(Bucket=...) | 4 | 8 |
71,356,827 | 2022-3-4 | https://stackoverflow.com/questions/71356827/retrieve-latest-file-with-pathlib | I primarily use pathlib over os for paths, however there is one thing I have failed to get working on pathlib. If for example I require the latest created .csv within a directory I would use glob & os import glob import os target_dir = glob.glob("/home/foo/bar/baz/*.csv") latest_csv = max(target_dir, key=os.path.getctime) Is there an alternative using pathlib only? (I am aware you can yield matching files with Path.glob(*.csv)) | You can achieve it in just pathlib by doing the following: A Path object has a method called stat() where you can retrieve things like creation and modify date. from pathlib import Path files = Path("./").glob("*.py") latest_file = max([f for f in files], key=lambda item: item.stat().st_ctime) print(latest_file) | 5 | 8 |
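A self-contained variant of the accepted answer above, with two hedged tweaks: it keys on st_mtime (modification time) rather than st_ctime and sets the timestamps explicitly so the demo is deterministic, and it passes default=None so an empty glob returns None instead of raising ValueError. The filenames are invented for the demo:

```python
import os
import tempfile
from pathlib import Path

# Throwaway directory with three files and distinct, explicit mtimes.
tmp = Path(tempfile.mkdtemp())
for i, name in enumerate(["a.csv", "b.csv", "c.csv"]):
    path = tmp / name
    path.write_text("data")
    os.utime(path, (1_000_000 + i, 1_000_000 + i))  # (atime, mtime)

# default=None avoids a ValueError when no file matches the pattern.
latest = max(tmp.glob("*.csv"), key=lambda p: p.stat().st_mtime, default=None)
print(latest.name)  # c.csv
```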
71,344,648 | 2022-3-3 | https://stackoverflow.com/questions/71344648/how-to-define-str-for-dataclass-that-omits-default-values | Given a dataclass instance, I would like print() or str() to only list the non-default field values. This is useful when the dataclass has many fields and only a few are changed. @dataclasses.dataclass class X: a: int = 1 b: bool = False c: float = 2.0 x = X(b=True) print(x) # Desired output: X(b=True) | The solution is to add a custom __str__() function: @dataclasses.dataclass class X: a: int = 1 b: bool = False c: float = 2.0 def __str__(self): """Returns a string containing only the non-default field values.""" s = ', '.join(f'{field.name}={getattr(self, field.name)!r}' for field in dataclasses.fields(self) if getattr(self, field.name) != field.default) return f'{type(self).__name__}({s})' x = X(b=True) print(x) # X(b=True) print(str(x)) # X(b=True) print(repr(x)) # X(a=1, b=True, c=2.0) print(f'{x}, {x!s}, {x!r}') # X(b=True), X(b=True), X(a=1, b=True, c=2.0) This can also be achieved using a decorator: def terse_str(cls): # Decorator for class. def __str__(self): """Returns a string containing only the non-default field values.""" s = ', '.join(f'{field.name}={getattr(self, field.name)}' for field in dataclasses.fields(self) if getattr(self, field.name) != field.default) return f'{type(self).__name__}({s})' setattr(cls, '__str__', __str__) return cls @dataclasses.dataclass @terse_str class X: a: int = 1 b: bool = False c: float = 2.0 | 11 | 12 |
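One caveat with the field.default comparison in the answer above: fields declared with default_factory have field.default set to dataclasses.MISSING, so they would always be printed even when untouched. A sketch that also handles factory defaults (class Y and the helper name are just illustrations, not part of the original answer):

```python
import dataclasses

def terse_str(self):
    """List only fields whose value differs from their (factory-)default."""
    parts = []
    for f in dataclasses.fields(self):
        value = getattr(self, f.name)
        if f.default is not dataclasses.MISSING:
            default = f.default
        elif f.default_factory is not dataclasses.MISSING:
            default = f.default_factory()
        else:  # no default at all: always show the field
            parts.append(f"{f.name}={value!r}")
            continue
        if value != default:
            parts.append(f"{f.name}={value!r}")
    return f"{type(self).__name__}({', '.join(parts)})"

@dataclasses.dataclass
class Y:
    a: int = 1
    tags: list = dataclasses.field(default_factory=list)
    __str__ = terse_str

print(Y(tags=["x"]))  # Y(tags=['x'])
print(Y())            # Y()
```

Note that calling default_factory() per comparison allocates a fresh default each time, which is fine for cheap factories like list.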
71,353,113 | 2022-3-4 | https://stackoverflow.com/questions/71353113/polars-how-to-reorder-columns-in-a-specific-order | I cannot find how to reorder columns in a polars dataframe in the polars DataFrame docs. | Turns out it is the same as pandas: df = df[['PRODUCT', 'PROGRAM', 'MFG_AREA', 'VERSION', 'RELEASE_DATE', 'FLOW_SUMMARY', 'TESTSUITE', 'MODULE', 'BASECLASS', 'SUBCLASS', 'Empty', 'Color', 'BINNING', 'BYPASS', 'Status', 'Legend']] | 27 | -2 |
71,343,002 | 2022-3-3 | https://stackoverflow.com/questions/71343002/downloading-files-from-public-google-drive-in-python-scoping-issues | Using my answer to my question on how to download files from a public Google drive I managed in the past to download images using their IDs from a python script and Google API v3 from a public drive using the following bock of code: from google_auth_oauthlib.flow import Flow, InstalledAppFlow from googleapiclient.discovery import build from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload from google.auth.transport.requests import Request import io import re SCOPES = ['https://www.googleapis.com/auth/drive'] CLIENT_SECRET_FILE = "myjson.json" authorized_port = 6006 # authorize URI redirect on the console flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRET_FILE, SCOPES) cred = flow.run_local_server(port=authorized_port) drive_service = build("drive", "v3", credentials=cred) regex = "(?<=https://drive.google.com/file/d/)[a-zA-Z0-9]+" for i, l in enumerate(links_to_download): url = l file_id = re.search(regex, url)[0] request = drive_service.files().get_media(fileId=file_id) fh = io.FileIO(f"file_{i}", mode='wb') downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print("Download %d%%." 
% int(status.progress() * 100)) In the mean time I discovered pydrive and pydrive2, two wrappers around Google API v2 that allows to do very useful things such as listing files from folders and basically allows to do the same thing with a lighter syntax: from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive import io import re CLIENT_SECRET_FILE = "client_secrets.json" gauth = GoogleAuth() gauth.LocalWebserverAuth() drive = GoogleDrive(gauth) regex = "(?<=https://drive.google.com/file/d/)[a-zA-Z0-9]+" for i, l in enumerate(links_to_download): url = l file_id = re.search(regex, url)[0] file_handle = drive.CreateFile({'id': file_id}) file_handle.GetContentFile(f"file_{i}") However now whether I use pydrive or the raw API I cannot seem to be able to download the same files and instead I am met with: googleapiclient.errors.HttpError: <HttpError 404 when requesting https://www.googleapis.com/drive/v3/files/fileID?alt=media returned "File not found: fileID.". Details: "[{'domain': 'global', 'reason': 'notFound', 'message': 'File not found: fileID.', 'locationType': 'parameter', 'location': 'fileId'}]"> I tried everything and registered 3 different apps using Google console it seems it might be (or not) a question of scoping (see for instance this answer, with apps having access to only files in my Google drive or created by this app). However I did not have this issue before (last year). When going to the Google console explicitly giving https://www.googleapis.com/auth/drive as a scope to the API mandates filling a ton of fields with application's website/conditions of use/confidentiality rules/authorized domains and youtube videos explaining the app. However I will be the sole user of this script. So I could only give explicitly the following scopes: /auth/drive.appdata /auth/drive.file /auth/drive.install Is it because of scoping ? Is there a solution that doesn't require creating a homepage and a youtube video ? 
EDIT 1: Here is an example of links_to_download: links_to_download = ["https://drive.google.com/file/d/fileID/view?usp=drivesdk&resourcekey=0-resourceKeyValue"] EDIT 2: It is super unstable: sometimes it works without a sweat, sometimes it doesn't. When I relaunch the script multiple times I get different results. Retry policies are working to a certain extent but sometimes it fails multiple times for hours. | Well, thanks to the security update released by Google a few months ago, link sharing is now stricter and you need the resource key as well, in addition to the fileId, to access the file. As per the documentation, for newer links you need to provide the resource key as well if you want to access the file, in the header X-Goog-Drive-Resource-Keys as fileId1/resourceKey1. If you apply this change in your code, it will work as normal. Example edit below: regex = "(?<=https://drive.google.com/file/d/)[a-zA-Z0-9]+" regex_rkey = "(?<=resourcekey=)[a-zA-Z0-9-]+" for i, l in enumerate(links_to_download): url = l file_id = re.search(regex, url)[0] resource_key = re.search(regex_rkey, url)[0] request = drive_service.files().get_media(fileId=file_id) request.headers["X-Goog-Drive-Resource-Keys"] = f"{file_id}/{resource_key}" fh = io.FileIO(f"file_{i}", mode='wb') downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print("Download %d%%." % int(status.progress() * 100)) Well, the regex for the resource key was something I quickly made, so I cannot be sure it supports every case. But this provides you the solution. Now, you may have to handle both old and new links based on this and apply the changes accordingly.
71,351,209 | 2022-3-4 | https://stackoverflow.com/questions/71351209/why-does-map-hide-a-stopiteration | I found a case when map() usage isn't equivalent to a list comprehension. It happens when next used as the first argument. For example: l1 = [1, 2] l2 = ['hello', 'world'] iterators = [iter(l1), iter(l2)] # list comprehension values1 = [next(it) for it in iterators] # values1 = [1, "hello"] values2 = [next(it) for it in iterators] # values2 = [2, "world"] values3 = [next(it) for it in iterators] # raise StopIteration l1 = [1, 2] l2 = ['hello', 'world'] iterators = [iter(l1), iter(l2)] # map values1 = list(map(next, iterators)) # values1 = [1, "hello"] values2 = list(map(next, iterators)) # values2 = [2, "world"] values3 = list(map(next, iterators)) # values3 = [] # doesn't raise StopIteration Any other exceptions occur as they should. Example: def divide_by_zero(value: int): return value // 0 l = [1, 2, 3] values = list(map(divide_by_zero, l)) # raises ZeroDivisionError as expected values = [divide_by_zero(value) for value in l] # raises ZeroDivisionError as expected, too It seems very strange. It works the same with Python 3.9 and Python 3.11. It seems like map() works like this: def map(func, iterator): try: while True: item = next(iterator) yield func(item) except StopIteration: pass but I expected it to work like this: def map(func, iterator): while True: try: item = next(iterator) except StopIteration: break yield func(item) Is it a bug? | Try calling next on map: >>> >>> m = map(next, iterators) >>> next(m) 1 >>> next(m) 'hello' >>> next(m) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration It's list that sees StopIteration and uses it to stop building the list from what map yields. The list comprehension, on the other hand, is building the list by iterating over iterators, not a particular iterator in that list. 
That is, next(it) is used to produce a value for the list, not to determine if we've reached the end of iterators. | 8 | 7 |
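The behaviour this answer describes is easy to reproduce in isolation: list() simply treats the StopIteration that escapes next inside map as the end of the mapped stream, which is why the third pass yields an empty list instead of an error.

```python
iterators = [iter([1, 2]), iter(["hello", "world"])]

# First two passes: both inner iterators still have a value.
print(list(map(next, iterators)))  # [1, 'hello']
print(list(map(next, iterators)))  # [2, 'world']

# Third pass: next() raises StopIteration inside map; map's __next__
# lets it propagate, and list() reads that as "no more items".
print(list(map(next, iterators)))  # []
```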
71,349,515 | 2022-3-4 | https://stackoverflow.com/questions/71349515/how-to-find-all-possible-uniform-substrings-of-a-string | I have a string like aaabbbbcca And I'd like to parse all possible uniform substrings from that. So my expected substrings for this string are ['a', 'aa', 'aaa', 'b', 'bb', 'bbb', 'bbbb', 'c', 'cc', 'a'] I tried the following import re print(re.findall(r"([a-z])(?=\1*)", "aaabbbbcca")) # Output: ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c', 'a'] Is it possible trough regular expressions? If yes, then how? | You can achieve what you need without a regex here: result = [] text = "aaabbbbcca" prev = '' for c in text: if c == prev: result.append(result[-1] + c) else: result.append(c) prev = c print(result) # => ['a', 'aa', 'aaa', 'b', 'bb', 'bbb', 'bbbb', 'c', 'cc', 'a'] See the Python demo. In short, you can iterate over the string and append new item to a result list when the new char is not equal to the previous char, otherwise, append a new item with the value equal to the previous item + the same char concatenated to the value. With regex, the best you can do is import re text = "aaabbbbcca" print( [x.group(1) for x in re.finditer(r'(?=((.)\2*))', text)] ) # => ['aaa', 'aa', 'a', 'bbbb', 'bbb', 'bb', 'b', 'cc', 'c', 'a'] See this Python demo. Here, (?=((.)\2*)) matches any location inside the string that is immediately preceded with any one char (other than line break chars if you do not use re.DOTALL option) that is followed with zero or more occurrences of the same char (capturing the char(s) into Group 1). | 12 | 7 |
71,348,706 | 2022-3-4 | https://stackoverflow.com/questions/71348706/pycharm-code-completion-does-not-work-for-simplenamespace | Why SimpleNamespace code completion does not work in pycharm editor? from types import SimpleNamespace sn= SimpleNamespace(param_a = '1') sn. # pressing '.' dot I'm NOT offered param_a This does work in pycharm python console, suggesting SimpleNamespace instance must be somehow 'computed' at runtime first. However if purpose of SimpleNamespace is to provide a namespace, to group a few parameters and access them via dot notation, then, while technically I can still type sn.param_a manually, without code completion is the whole thing stripped most of its usefulness and I'd likely switch to plain class where code completion does work in editor (class instantiated or not) Tried on different machines\pycharm versions so does not look like some quirk of my environment. | Why SimpleNamespace code completion does not work in pycharm editor? Because PyCharm doesn't have enough smarts to handle SimpleNamespaces specially (i.e. to know that the kwargs are just assigned into instance attributes). This does work in pycharm python console, suggesting SimpleNamespace instance must be somehow 'computed' at runtime first Yes, the console does something like vars() or dir() on the live object to introspect it. Static code analysis is an entirely different thing to inspecting objects at runtime. The instance isn't "computed" in any special way at runtime. I'd likely switch to plain class where code completion does work in editor (class instantiated or not) You might want to use dataclasses (or attrs) for brevity; both are well supported by PyCharm. | 5 | 5 |
71,333,997 | 2022-3-3 | https://stackoverflow.com/questions/71333997/set-name-execution-in-descriptor | I have came across a code that it is including descriptors. As I understand, __set_name__ is a method that is called when the class is created. Then, if the class is called twice I'd get two calls. In the following snippet I would expect to get the call in __set_name__ twice, but I am getting just one call. Why this behavior? class SharedAttribute: def __init__(self, initial_value=None): self.value = initial_value self._name = None def __get__(self, instance, owner): if instance is None: return self if self.value is None: raise AttributeError(f'{self._name} was never set') return self.value def __set__(self, instance, new_value): self.value = new_value def __set_name__(self, owner, name): print(f'{self} was named {name} by {owner}') self._name = name class GitFetcher: current_tag = SharedAttribute() current_branch = SharedAttribute() def __init__(self, tag, branch=None): self.current_tag = tag self.current_branch = branch @property def current_tag(self): if self._current_tag is None: raise AttributeError("tag was never set") return self._current_tag @current_tag.setter def current_tag(self, new_tag): self.__class__._current_tag = new_tag def pull(self): print(f"pulling from {self.current_tag}") return self.current_tag f1 = GitFetcher(0.1) f2 = GitFetcher(0.2) f1.current_tag = 0.3 f2.pull() f1.pull() During the previous execution, __set_name__ is called with current_branch, but not called with current_tag. Why this distinction? The only call is this one: <__main__.SharedAttribute object at 0x047BACB0> was named current_branch by <class '__main__.GitFetcher'> | TL;DR By the time __set_name__ methods are called, current_tag refers to an instance of property, not an instance of SharedAttribute. __set_name__ is called after the class has been defined (so that the class can be passed as the owner argument), not immediately after the assignment is made. 
However, you changed the value of current_tag to be a property, so the name is no longer bound to a SharedAttribute instance once the class definition has completed. From the documentation (emphasis mine): Automatically called at the time the owning class owner is created. The SharedAttribute instance is created while the body of the class statement is being executed. The class itself is not created until after the body is executed; the result of executing the body is a namespace which is passed as an argument to the metaclass that creates the class. During that process, the class attributes are scanned for values with a __set_name__ method, and only then is the method called. Here's a simpler example: class GitFetcher: current_branch = SharedAttribute() current_tag = SharedAttribute() current_tag = 3 By the time GitFetcher is defined, current_tag is no longer bound to a descriptor, so no attempt to call current_tag.__set_name__ is made. It's not clear if you want to somehow compose a property with a SharedAttribute, or if this is just an inadvertent re-use of the name current_tag. | 4 | 4
71,340,374 | 2022-3-3 | https://stackoverflow.com/questions/71340374/how-can-i-set-the-frequency-of-a-pandas-index | So this is my code basically: df = pd.read_csv('XBT_60.csv', index_col = 'date', parse_dates = True) df.index.freq = 'H' I load a csv, set the index to the date column and want to set the frequency to 'H'. But this raises this error: ValueError: Inferred frequency None from passed values does not conform to passed frequency H The format of the dates column is: 2017-01-01 00:00:00 I already tried loading the csv without setting the index column and used pd.to_datetime on the dates column before I set it as index, but still i am unable to set the frequency. How can I solve this? BTW: my aim is to use the seasonal_decompose() method from statsmodels, so I need the frequency there. | You can't set frequency if you have missing index values: >>> df val 2019-09-15 0 2019-09-16 1 2019-09-18 3 >>> df.index.freq = 'D' ... ValueError: Inferred frequency None from passed values does not conform to passed frequency D To find missing index, use: >>> df = df.resample('D').first() val 2019-09-15 0.0 2019-09-16 1.0 2019-09-17 NaN 2019-09-18 3.0 >>> df.index.freq <Day> To debug, find missing indexes: >>> pd.date_range(df.index.min(), df.index.max(), freq='D').difference(df.index) DatetimeIndex(['2019-09-17'], dtype='datetime64[ns]', freq=None) | 6 | 8 |
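A runnable sketch of the fix from the accepted answer, with an invented three-row hourly series containing one missing timestamp: setting `df.index.freq = 'H'` directly would raise the `ValueError`, while resampling fills the gap with NaN and leaves the index with a well-defined frequency.

```python
import pandas as pd

# Hypothetical hourly data with 02:00 missing
idx = pd.to_datetime(['2017-01-01 00:00', '2017-01-01 01:00', '2017-01-01 03:00'])
df = pd.DataFrame({'price': [1.0, 2.0, 4.0]}, index=idx)

# df.index.freq = 'H' here would fail: the inferred frequency is None
# because of the gap. Resampling inserts the missing hour as NaN:
df = df.resample('H').first()
print(df)
print(df.index.freq)  # now a fixed Hour offset
```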
71,332,870 | 2022-3-3 | https://stackoverflow.com/questions/71332870/pylint-not-an-iterable-error-when-subclassing-listint | Code example: from typing import List class MyList(List[int]): def total(self) -> int: return sum(i for i in self) a = MyList([1,2,3]) print(f'{a.total()=:}') When I run it, it works a.total()=6 But when I use pylint, I get the following error ... toy.py:5:30: E1133: Non-iterable value self is used in an iterating context (not-an-iterable) ... There are other pylint errors, but they're understandable. For the not-an-iterable problem, I don't quite understand it, am I subclassing List[int], correctly? I'm using Python-3.8, pylint==2.6.0 | I upgraded pylint to a more recent version. pylint --version pylint 2.9.6 astroid 2.6.6 Python 3.8.2 (default, Mar 15 2021, 10:18:42) and the error is gone. Due to system constraints, I can't upgrade to the very latest version. | 5 | 1 |
71,336,795 | 2022-3-3 | https://stackoverflow.com/questions/71336795/is-there-a-quicker-method-for-iterating-over-rows-in-python-to-calculate-a-featu | I have a Pandas Dataframe df that details Names of players that play a game. The Dataframe has 2 columns of 'Date' they played a game and their name, sorted by Date. Date Name 1993-03-28 Tom 1993-03-28 Joe 1993-03-29 Tom 1993-03-30 Joe What I am trying to accomplish is to time-efficiently calculate the previous number of games each player has played before they play the upcoming game that day. For the example Dataframe above, calculating the players previous number of games would start at 0 and look like follows. Date Name Previous Games 1993-03-28 Tom 0 1993-03-28 Joe 0 1993-03-29 Tom 1 1993-03-30 Joe 1 I have tried the following codes and although they have delivered the correct result, they took many days for my computer to run. Attempt 1: for i in range(0, len(df) ): df['Previous Games'][i] = len( df[ (df['Name'] == df['Name'][i]) & (df['Date'] < df['Date'][i]) ] ) Attempt 2: df['Previous Games'] = [ len( df[ (df['Name'] == df['Name'][i]) & (df['Date'] < df['Date'][i]) ] ) for i in range(0, len(df) ) ] Although Attempt 2 was slightly quicker, it was still not time-efficient so I need help in finding a faster method. | Any time you write "for" and "pandas" anywhere close together you are probably doing something wrong. It seems to me you want the cumulative count: df["prev_games"] = df.sort_values('Date').groupby('Name').cumcount() | 4 | 4 |
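The answer's one-liner can be checked against the question's own sample data: `cumcount` numbers each player's appearances 0, 1, 2, ... in date order, which is exactly the count of games played before the current one.

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['1993-03-28', '1993-03-28', '1993-03-29', '1993-03-30'],
    'Name': ['Tom', 'Joe', 'Tom', 'Joe'],
})

# For each player, count how many earlier rows (games) they appear in
df['Previous Games'] = df.sort_values('Date').groupby('Name').cumcount()
print(df)  # Previous Games column: 0, 0, 1, 1
```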
71,333,786 | 2022-3-3 | https://stackoverflow.com/questions/71333786/what-is-the-mechanism-between-max-requests-and-max-requests-jitter-in-gunicorn | According to the official guide https://docs.gunicorn.org/en/latest/settings.html#settings a worker will restart when it has handled max_requests requests. But when max_requests_jitter is set, a worker will restart when it has handled randint(0, max_requests_jitter) request, to stagger worker restarts to avoid all workers restarting at the same time. Does that mean the max_requests_jitter setting will override max_requests and make it invalid? | From the docs - The jitter causes the restart per worker to be randomized by randint(0, max_requests_jitter). This is intended to stagger worker restarts to avoid all workers restarting at the same time. What I understand is that the jitter is a random addition to each worker and the term max_requests_jitter should be (though not necessarily) smaller than max_requests. In other words, worker_1 will restart after max_requests + j1 requests, worker_2 will restart after max_requests + j2 requests etc. where the values of j1, j2, j3... are determined by the max_requests_jitter argument. | 5 | 7
71,331,483 | 2022-3-3 | https://stackoverflow.com/questions/71331483/alive-progress-bar-not-working-on-pycharm | I am trying to use the alive_progress alive_bar on PyCharm but it only appears in the console once the whole process has finished. Instead, I want it to display and progress as the for loop operates. Toy example: from alive_progress import alive_bar import time bar_l = 100 with alive_bar(bar_l) as bar: for i in range(bar_l): time.sleep(0.001) bar() Has anyone else encountered this problem? | There is a option to force enable it and see alive-progress in PyCharm. It's "force_tty=True" with alive_bar(1000, force_tty=True) as bar: for i in range(1000): time.sleep(.01) bar() | 5 | 10 |
71,331,496 | 2022-3-3 | https://stackoverflow.com/questions/71331496/object-assign-equivalent-in-python | Is there an equivalent for Javascript's Object.assign(targetDict, srcDict) in python, which takes all the items from one dictionary into another, replacing as we go? (Cleaner than a for-in loop, anyhow) ------ Context --------- I use Object.assign in javascript to expand out settings-dictionary parameters in large functions, e.g.: function someFunctionWithLotsOfArguments(namedArg1, namedArg2, namedArg3, namedArg4){} // turns into function someFunctionWithLotsOfArguments(argDictionary){ namedArg1 = argDictionary["namedArg1"]; // etc } The first form allows for default arguments while the second one does not, so I then use: function someFunctionWithLotsOfArguments(namedArg1="default1", namedArg2="default2", namedArg3="default3", namedArg4="default4"){} // turns into function someFunctionWithLotsOfArguments(argDictionary){ defaultArgs = { namedArg1: "default1", namedArg2: "default2", namedArg3: "default3", namedArg4: "default4" } Object.assign(defaultArgs, argDictionary); argDictionary = defaultArgs; namedArg1 = argDictionary["namedArg1"]; // etc } I know, I know, this is a bit of an antipattern, and I should try and group my named arguments into structs to reduce my function parameter count, and if I have a large configurable function with many binary switches I should strip out the common functionality into smaller functions, etc. But if I did need a function like this, is there one available? | Python has a dict1.update(dict2) method which will do exactly that. :) | 5 | 6 |
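A short sketch of the `dict.update` idiom applied to the question's defaults pattern (the function and key names here mirror the question's JavaScript example and are otherwise arbitrary): `target.update(src)` copies entries from `src` into `target` in place, overwriting on key collision, just like `Object.assign(target, src)`.

```python
def some_function(arg_dictionary=None):
    # Default values, mirroring the JavaScript defaultArgs object
    default_args = {
        'named_arg1': 'default1',
        'named_arg2': 'default2',
    }
    # dict.update is the Object.assign(target, src) equivalent:
    # entries from arg_dictionary overwrite the defaults in place
    default_args.update(arg_dictionary or {})
    return default_args

print(some_function({'named_arg1': 'custom'}))
# {'named_arg1': 'custom', 'named_arg2': 'default2'}
```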
71,328,089 | 2022-3-2 | https://stackoverflow.com/questions/71328089/pandas-extract-all-regex-matches-from-column-join-with-delimiter | I need to extract all matches from a string in a column and populate a second column. The matches will be delimited by a comma. df2 = pd.DataFrame([[1000, 'Jerry', 'string of text BR1001_BR1003_BR9009 more string','BR1003',''], [1001, '', 'BR1010_BR1011 random text', 'BR1010',''], ['', '', 'test to discardBR3009', 'BR2002',''], [1003, 'Perry','BR4009 pure gibberish','BR1001',''], [1004, 'Perry2','','BR1001','']], columns=['ID', 'Name', 'REGEX string', 'Member of','Status']) Pattern representing the codes to be extracted. BR_pat = re.compile(r'(BR[0-9]{4})', re.IGNORECASE) Hoped for output in column BR1001, BR1003, BR9009 BR1010,BR1011 BR3009 BR4009 My attempt: df2['REGEX string'].str.extractall(BR_pat).unstack().fillna('').apply(lambda x: ", ".join(x)) Output: match 0 0 BR1001, BR1010, BR3009, BR4009 1 BR1003, BR1011, , 2 BR9009, , , There are extra commas and rows missing. What did I do wrong? | You need to use >>> df2['REGEX string'].str.findall(r'BR\d{4}').str.join(", ") 0 BR1001, BR1003, BR9009 1 BR1010, BR1011 2 BR3009 3 BR4009 4 Name: REGEX string, dtype: object With Series.str.findall, you extract all occurrences of the pattern inside a string value, it returns a "Series/Index of lists of strings". To concatenate them into a single string, the Series.str.join() is used. | 6 | 6 |
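The `findall` + `join` approach can be sketched on a small invented Series: `str.findall` yields a list of matches per row (an empty list when nothing matches), and `str.join` concatenates each list with the delimiter, so rows without matches become empty strings rather than spurious commas.

```python
import pandas as pd

s = pd.Series([
    'string of text BR1001_BR1003_BR9009 more',
    'BR1010_BR1011 random',
    'no codes here',
])

# One list of matches per row, then joined with ", " per row
out = s.str.findall(r'BR\d{4}').str.join(', ')
print(out.tolist())
# ['BR1001, BR1003, BR9009', 'BR1010, BR1011', '']
```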
71,324,949 | 2022-3-2 | https://stackoverflow.com/questions/71324949/import-selenium-could-not-be-resolved-pylance-reportmissingimports | I am editing a file in VS code. VS code gives the following error: Import "selenium" could not be resolved Pylance (reportMissingImports). This is the code from metachar: # Coded and based by METACHAR/Edited and modified for Microsoft by Major import sys import datetime import selenium import requests import time as t from sys import stdout from selenium import webdriver from optparse import OptionParser from selenium.webdriver.common.keys import Keys from selenium.common.exceptions import NoSuchElementException # Graphics class color: PURPLE = '\033[95m' CYAN = '\033[96m' DARKCYAN = '\033[36m' BLUE = '\033[94m' GREEN = '\033[92m' YELLOW = '\033[93m' RED = '\033[91m' BOLD = '\033[1m' UNDERLINE = '\033[4m' END = '\033[0m' CWHITE = '\33[37m' # Config# parser = OptionParser() now = datetime.datetime.now() # Args parser.add_option("--passsel", dest="passsel",help="Choose the password selector") parser.add_option("--loginsel", dest="loginsel",help= "Choose the login button selector") parser.add_option("--passlist", dest="passlist",help="Enter the password list directory") parser.add_option("--website", dest="website",help="choose a website") (options, args) = parser.parse_args() CHROME_DVR_DIR = '/home/major/Hatch/chromedriver' # Setting up Brute-Force function def wizard(): print (banner) website = raw_input(color.GREEN + color.BOLD + '\n[~] ' + color.CWHITE + 'Enter a website: ') sys.stdout.write(color.GREEN + '[!] 
'+color.CWHITE + 'Checking if site exists '), sys.stdout.flush() t.sleep(1) try: request = requests.get(website) if request.status_code == 200: print (color.GREEN + '[OK]'+color.CWHITE) sys.stdout.flush() except selenium.common.exceptions.NoSuchElementException: pass except KeyboardInterrupt: print (color.RED + '[!]'+color.CWHITE+ 'User used Ctrl-c to exit') exit() except: t.sleep(1) print (color.RED + '[X]'+color.CWHITE) t.sleep(1) print (color.RED + '[!]'+color.CWHITE+ ' Website could not be located make sure to use http / https') exit() password_selector = '#i0118' login_btn_selector = '#idSIButton9' pass_list = raw_input(color.GREEN + '[~] ' + color.CWHITE + 'Enter a directory to a password list: ') brutes(password_selector,login_btn_selector,pass_list, website) # Execute Brute-Force function def brutes(password_selector,login_btn_selector,pass_list, website): f = open(pass_list, 'r') driver = webdriver.Chrome(CHROME_DVR_DIR) optionss = webdriver.ChromeOptions() optionss.add_argument("--disable-popup-blocking") optionss.add_argument("--disable-extensions") count = 1 browser = webdriver.Chrome(CHROME_DVR_DIR) while True: try: for line in f: browser.get(website) t.sleep(1) Sel_pas = browser.find_element_by_css_selector(password_selector) enter = browser.find_element_by_css_selector(login_btn_selector) Sel_pas.send_keys(line) t.sleep(2) print ('------------------------') print (color.GREEN + 'Tried password: '+color.RED + line + color.GREEN) print ('------------------------') temp = line except KeyboardInterrupt: exit() except selenium.common.exceptions.NoSuchElementException: print ('AN ELEMENT HAS BEEN REMOVED FROM THE PAGE SOURCE THIS COULD MEAN 2 THINGS THE PASSWORD WAS FOUND OR YOU HAVE BEEN LOCKED OUT OF ATTEMPTS! 
') print ('LAST PASS ATTEMPT BELLOW') print (color.GREEN + 'Password has been found: {0}'.format(temp)) print (color.YELLOW + 'Have fun :)') exit() banner = color.BOLD + color.RED +''' _ _ _ _ | | | | | | | | | |__| | __ _| |_ ___| |__ | __ |/ _` | __/ __| '_ \\ | | | | (_| | || (__| | | | |_| |_|\__,_|\__\___|_| |_| {0}[{1}-{2}]--> {3}V.1.0 {4}[{5}-{6}]--> {7}coded by Metachar {8}[{9}-{10}]-->{11} brute-force tool '''.format(color.RED, color.CWHITE,color.RED,color.GREEN,color.RED, color.CWHITE,color.RED,color.GREEN,color.RED, color.CWHITE,color.RED,color.GREEN) driver = webdriver.Chrome(CHROME_DVR_DIR) optionss = webdriver.ChromeOptions() optionss.add_argument("--disable-popup-blocking") optionss.add_argument("--disable-extensions") count = 1 if options.passsel == None: if options.loginsel == None: if options.passlist == None: if options.website == None: wizard() password_selector = options.passsel login_btn_selector = options.loginsel website = options.website pass_list = options.passlist print (banner) brutes(password_selector,login_btn_selector,pass_list, website) I have downloaded the windows chromedriver. I don't know where I must place it on my computer. Does anyone have an idea where I must place it and how I can solve this error. When I try it in Linux, I get not an error. I placed the chromedriver in the same dir as the python file. When I do the exact same thing in windows it does not work. Can anyone help me out? | PyLance looks for the "selenium" python package and cannot find it in the configured python installation. Since you're using VSCode, make sure you've configured the python extension properly. When you open a .py file in VSCode, you should see a python setting in the status bar down below on the left. Select the installation on which you've installed selenium and PyLance will find your import. | 14 | 2 |
71,324,369 | 2022-3-2 | https://stackoverflow.com/questions/71324369/does-time-complexity-change-when-two-nested-loops-are-re-written-into-a-single-l | Is the time complexity of nested for, while, and if statements the same? Suppose a is given as an array of length n. for _ in range(len(a)): for _ in range(len(a)): do_something The for statement above will be O(nΒ²). i = 0 while i < len(a) * len(a): do_something i += 1 At first glance, the above loop can be thought of as O(n), but in the end I think that it is also O(nΒ²). Am I right? | Am I right? Yes! The double loop: for _ in range(len(a)): for _ in range(len(a)): do_something has a time complexity of O(n) * O(n) = O(nΒ²) because each loop runs until n. The single loop: i = 0 while i < len(a) * len(a): do_something i += 1 has a time complexity of O(n * n) = O(nΒ²), because the loop runs until i = n * n = nΒ². | 23 | 50 |
71,322,568 | 2022-3-2 | https://stackoverflow.com/questions/71322568/create-dictionary-from-several-columns-based-on-position-of-values | I have a dataframe like this import pandas as pd df = pd.DataFrame( { 'C1': list('aabbab'), 'C2': list('abbbaa'), 'value': range(11, 17) } ) C1 C2 value 0 a a 11 1 a b 12 2 b b 13 3 b b 14 4 a a 15 5 b a 16 and I would like to generate a dictionary like this: {'C1': {'a': {1: 11, 2: 12, 3: 15}, 'b': {1: 13, 2: 14, 3: 16}}, 'C2': {'a': {1: 11, 2: 15, 3: 16}, 'b': {1: 12, 2: 13, 3: 14}}} Logic is as follows: In df I go to the column C1 and the first a I find in the column corresponds to value 11, the second one to value 12 and the third one to 15. The position of the a and the corresponding value should be stored in the dictionary for the keys C1 and a. I could do something like this df_ss = df.loc[df['C1'] == 'a', 'value'] d = {ind: val for ind, val in enumerate(df_ss.values, 1)} which yields for d: {1: 11, 2: 12, 3: 15} which is indeed the desired output. I could then put this into a loop and generate all required dictionaries. Does anyone sees something more efficient than this? | You could use a groupby and a nested dict comprehension: import pandas as pd df = pd.DataFrame( { 'C1': list('aabbab'), 'C2': list('abbbaa'), 'value': range(11, 17) } ) d = { c: {k: dict(enumerate(g["value"], 1)) for k, g in df.groupby(c)} for c in ["C1", "C2"] } Which outputs: {'C1': {'a': {1: 11, 2: 12, 3: 15}, 'b': {1: 13, 2: 14, 3: 16}}, 'C2': {'a': {1: 11, 2: 15, 3: 16}, 'b': {1: 12, 2: 13, 3: 14}}} | 4 | 5 |
71,319,929 | 2022-3-2 | https://stackoverflow.com/questions/71319929/how-to-find-and-replace-text-in-a-single-cell-when-using-jupyter-extension-insid | As the title says, how to find and replace text inside a single jupyter cell when using the jupyter extension in Visual Studio Code? I am familiar with Ctrl+H but that will replace all the occurrences in the entire jupyter notebook file. This is a really important feature for me, as I am using it a lot in jupyter on the browser. | You can select the first occurrence and then use Ctrl+D. It will select the next occurrence in the cell. Repeat that until you go back to the first occurrence and then type the new value. It will replace all the values you circled through. In case you have changed that keyboard shortcut or if it is different you can find the correct one: | 12 | 6
71,320,044 | 2022-3-2 | https://stackoverflow.com/questions/71320044/why-does-the-sortwith-key-function-not-work-as-intended | # A function that returns the frequency of each value: def myFunc(e): return cars.count(e) cars = ['Ford', 'Ford', 'Ford', 'Mitsubishi','Mitsubishi', 'BMW', 'VW'] cars.sort(key=myFunc) print(cars) Output: ['Ford', 'Ford', 'Ford', 'Mitsubishi', 'Mitsubishi', 'BMW', 'VW'] What I expect: ['BMW', 'VM', 'Mitsubishi', 'Mitsubishi', 'Ford', 'Ford', 'Ford'] Counts: Ford - 3 Mitsubishi - 2 BMW - 1 VM - 1 It should sort in ascending order of count in the list. | The problem is that you are using cars inside the key function, but .sort is in-place. This causes cars to be unreliable in intermediate calls to the key function. We can see the problem if we print cars inside the key function: def myFunc(e): print(cars) return cars.count(e) cars = ['Ford', 'Ford', 'Ford', 'Mitsubishi', 'Mitsubishi', 'BMW', 'VW'] cars.sort(key=myFunc) This outputs [] [] [] [] [] [] [] so cars.count will return 0 regardless what element is passed and the list will retain its original order. Use sorted(...) which is not in-place: def myFunc(e): return cars.count(e) cars = ['Ford', 'Ford', 'Ford', 'Mitsubishi', 'Mitsubishi', 'BMW', 'VW'] cars = sorted(cars, key=myFunc) print(cars) This outputs ['BMW', 'VW', 'Mitsubishi', 'Mitsubishi', 'Ford', 'Ford', 'Ford'] As a side-note, in this case you can use cars.count directly, without defining the wrapper function: cars = sorted(cars, key=cars.count) | 4 | 4 |
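A variant sketch of the fix that also sidesteps the in-place pitfall entirely: computing the frequencies once with `collections.Counter` before sorting means the key function never reads the list being mutated, and avoids the repeated O(n) `count` calls.

```python
from collections import Counter

cars = ['Ford', 'Ford', 'Ford', 'Mitsubishi', 'Mitsubishi', 'BMW', 'VW']

# Frequencies are computed once, up front, independent of the sort
counts = Counter(cars)
cars_sorted = sorted(cars, key=counts.__getitem__)
print(cars_sorted)
# ['BMW', 'VW', 'Mitsubishi', 'Mitsubishi', 'Ford', 'Ford', 'Ford']
```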
71,316,065 | 2022-3-2 | https://stackoverflow.com/questions/71316065/aws-cdk-secrets-manger-getting-the-full-arn-python | I am trying to create a canary resource that uses a script that needs a secret. I'm trying to add a policy statement to the canary role (which I'm creating as part of the cdk). To do this I need to get the secret's full ARN. I can get the partial arn with secret_from_name = secretsmanager.Secret.from_secret_name_v2 then use it like resources = [secret_from_name.secret_arn] but that doesn't give me the full arn and the permissions don't work. .....because no identity-based policy allows the secretsmanager:GetSecretValue action Thought I would get around this by doing resources = [secret_from_name.secret_full_arn] But because this is derived by name, it doesn't get the full arn and you get 'undefined' I also tried getting it from attribute using the partial arn, no joy there either. So is there any way around this? What I don't want to do is pass around full ARNs, or is there another way I can grant access to this resource? | Secret ARNs have a dash and 6 random characters at the end. Define the IAM policy statement's resource with a -?????? wildcard suffix to grant your role access to all versions of the secret name. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": [ "arn:aws:secretsmanager:<region>:<account-id-number>:secret:<secret-name>-??????" ] } ] } In a CDK context you can simply use string concatenation to assemble the policy statement's resource ARN from the secret's name. Or use a CDK ARN utility (Arn.format or Stack.format_arn). | 6 | 11
71,298,179 | 2022-2-28 | https://stackoverflow.com/questions/71298179/fastapi-how-to-get-app-instance-inside-a-router | I want to get the app instance in my router file, what should I do ? My main.py is as follows: # ... app = FastAPI() app.machine_learning_model = joblib.load(some_path) app.include_router(some_router) # ... Now I want to use app.machine_learning_model in some_router's file , what should I do ? | Since FastAPI is actually Starlette underneath, you could store the model on the application instance using the generic app.state attribute, as described in Starlette's documentation (see State class implementation too). Example: app.state.ml_model = joblib.load(some_path) As for accessing the app instance (and subsequently, the model) from outside the main file, you can use the Request object. As per Starlette's documentation, where a request is available (i.e., endpoints and middleware), the app is available on request.app. Example: from fastapi import Request @router.get('/') def some_router_function(request: Request): model = request.app.state.ml_model Alternatively, one could now use request.state to store global variables/objects, by initializing them within the lifespan handler, as explained in this answer. | 18 | 35 |
71,256,853 | 2022-2-24 | https://stackoverflow.com/questions/71256853/how-do-i-install-python2-on-centos-9-stream | I am trying to figure out how to install python2 on centos9 stream. I am getting the errors below. Any suggestions? sudo dnf install python2 Last metadata expiration check: 0:04:48 ago on Thu 24 Feb 2022 01:43:10 PM EST. No match for argument: python2 Error: Unable to find a match: python2 | how to install python2 on centos9 stream You cannot do that from any repo. No available repo / no packages. Reason: python2.7 had End Of Life January 1, 2020. https://endoflife.date/python | 7 | 2 |
71,278,961 | 2022-2-26 | https://stackoverflow.com/questions/71278961/how-can-i-decompile-pyc-files-from-python-3-10 | I did try uncompyle6, decompyle3, and others, but none of them worked with Python 3.10. Is it even possible to do this right now? | Use pycdc. Github: https://github.com/zrax/pycdc git clone https://github.com/zrax/pycdc cd pycdc cmake . make make check pycdc C:\Users\Bobby\example.pyc | 8 | 21 |
71,250,418 | 2022-2-24 | https://stackoverflow.com/questions/71250418/call-the-generated-init-from-custom-constructor-in-dataclass-for-defaults | Is it possible to benefit from dataclasses.field, especially for default values, but using a custom constuctor? I know the @dataclass annotation sets default values in the generated __init__, and won't do it anymore if I replace it. So, is it possible to replace the generated __init__, and to still call it inside? @dataclass class A: l: list[int] = field(default_factory=list) i: int = field(default=0) def __init__(self, a: Optional[int]): # completely different args than instance attributes self.call_dataclass_generated_init() # call generated init to set defaults if a is not None: # custom settings of attributes self.i = 2*a A workaround would be to define __new__ instead of overriding __init__, but I prefer to avoid that. This question is quite close, but the answers only address the specific use-case that is given as a code example. Also, I don't want to use __post_init__ because I need to use __setattr__ which is an issue for static type checking, and it doesn't help tuning the arguments that __init__ will take anyway. I don't want to use a class method either, I really want callers to use the custom constructor. This one is also close, but it's only about explaining why the new constructor replaces the generated one, not about how to still call the latter (there's also a reply suggesting to use Pydantic, but I don't want to have to subclass BaseModel, because it will mess my inheritance). So, in short, I want to benefit from dataclass's feature to have default values for attributes, without cumbersome workarounds. 
Note that raw default values are not an option for me because it sets class attributes: class B: a: int = 0 # this will create B.a class attribute, and vars(B()) will be empty l: list[int] = [] # worse, a mutable object will be shared between instances | As I perceive it, the cleaner approach there is to have an alternative classmethod to use as your constructor: this way, the dataclass would work exactly as intended and you could just do: from dataclasses import dataclass, field from typing import Optional @dataclass class A: l: list[int] = field(default_factory=list) i: int = field(default=0) @classmethod def new(cls, a: Optional[int]=0): # completely different args than instance attributes # creates a new instance with default values: instance = cls() # if one wants to have more control over the instance creation, it is possible to call __new__ and __init__ manually: # instance = cls.__new__(cls) # instance.__init__() if a is not None: # custom settings of attributes instance.i = 2*a return instance But if you don't want an explicit constructor method, and really need to call just A(), it can be done by creating a decorator, that will be applied after @dataclass - it can then move __init__ to another name. The only thing being that your custom __init__ has to be called another name, otherwise @dataclass won't create the method. def custom_init(cls): cls._dataclass_generated_init = cls.__init__ cls.__init__ = cls.__custom_init__ return cls @custom_init @dataclass class A: l: list[int] = field(default_factory=list) i: int = field(default=0) def __custom_init__(self, a: Optional[int]): # completely different args than instance attributes self._dataclass_generated_init() # call generated init to set defaults if a is not None: # custom settings of attributes self.i = 2*a ... print("custom init called") | 7 | 4
71,239,268 | 2022-2-23 | https://stackoverflow.com/questions/71239268/passing-commands-to-the-wsl-shell-from-a-windows-python-script | I'm on Windows using PowerShell and WSL 'Ubuntu 20.04 LTS'. I have no native Linux Distro, and I cant use virtualisation because of nested device reasons. My purpose is to use a Windows Python script in PowerShell to call WSL to decrypt some avd-snapshots into raw-images. I already tried os.popen, subprocess.Popen/run/call, win32com.client, multiprocessing, etc. I can boot the WSL shell, but no further commands are getting passed to it. Does somebody know how to get the shell into focus and prepared for more instructions? Code Example: from multiprocessing import Process import win32com.client import time, os, subprocess def wsl_shell(): shell = win32com.client.Dispatch("wscript.shell") shell.SendKeys("Start-Process -FilePath C:\\Programme\\WindowsApps\\CanonicalGroupLimited.Ubuntu20.04onWindows_2004.2021.825.0_x64__79rhkp1fndgsc\\ubuntu2004.exe {ENTER}") time.sleep(5) os.popen("ls -l") if __name__ == '__main__': ps = Process(target = wsl_shell) ps.start() | There are a few ways of running WSL scripts/commands from Windows Python, but a SendKeys-based approach is usually the last resort, IMHO, since it's: Often non-deterministic Lacks any control logic Also, avoid the ubuntu2004.exe (or, for other users who find this, the deprecated bash.exe command). The much more capable wsl.exe command is what you are looking for. It has a lot of options for running commands that the <distroname>.exe versions lack. With that in mind, here are a few simplified examples: Using os.system import os os.system('wsl ~ -e sh -c "ls -l > filelist.txt"') After running this code in Windows Python, go into your Ubuntu WSL instance and you should find filelist.txt in your home directory. 
This works because: os.system can be used to launch the wsl command The ~ tells WSL to start in the user's home directory (more deterministic, while being able to avoid specifying each path in this case) wsl -e sh runs the POSIX shell in WSL (you could also use bash for this) Passing -c "<command(s)>" to the shell runs those commands in the WSL shell Given that, you can pretty much run any Linux command(s) from Windows Python. For multiple commands: Either separate them with a semicolon. E.g.: os.system('wsl ~ -e sh -c "ls -l > filelist.txt; gzip filelist.txt') Or better, just put them all in a script in WSL (with a shebang line), set it executable, and run the script via: wsl -e /path/to/script.sh That could even be a Linux Python script (assuming the correct shebang line in the script): wsl -e /path/to/script.py So if needed, you can even call Linux Python from Windows Python this way. Using subprocess.run The os.system syntax is great for "fire and forget" scripts where you don't need to process the results in Python, but often you'll want to capture the output of the WSL/Linux commands for processing in Python. For that, use subprocess.run: import subprocess cp = subprocess.run(["wsl", "~", "-e", "ls", "-l"], capture_output=True) print(cp.stdout) As before, the -e argument can be any type of Linux script you want. Note that subprocess.run also gives you the exit status of the command. | 6 | 9 |
71,236,391 | 2022-2-23 | https://stackoverflow.com/questions/71236391/pytorch-lightning-print-accuracy-and-loss-at-the-end-of-each-epoch | In tensorflow keras, when I'm training a model, at each epoch it print the accuracy and the loss, I want to do the same thing using pythorch lightning. I already create my module but I don't know how to do it. import torch import torch.nn as nn from residual_block import ResidualBlock import pytorch_lightning as pl from torchmetrics import Accuracy class ResNet(pl.LightningModule): def __init__(self, block, layers, image_channels, num_classes, learning_rate): super(ResNet, self).__init__() self.in_channels = 64 self.conv1 = nn.Conv2d( image_channels, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU() self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer( block, layers[0], intermediate_channels=64, stride=1) self.layer2 = self._make_layer( block, layers[1], intermediate_channels=128, stride=2) self.layer3 = self._make_layer( block, layers[2], intermediate_channels=256, stride=2) self.layer4 = self._make_layer( block, layers[3], intermediate_channels=512, stride=2) self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) self.fc = nn.Linear(512 * 4, num_classes) self.learning_rate = learning_rate self.train_accuracy = Accuracy() self.val_accuracy = Accuracy() self.test_accuracy = Accuracy() def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) x = self.layer4(x) x = self.avgpool(x) x = x.reshape(x.shape[0], -1) x = self.fc(x) return x def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate) return optimizer def training_step(self, train_batch, batch_idx): images, labels = train_batch outputs = self(images) criterion = nn.CrossEntropyLoss() loss = criterion(outputs, labels) self.train_accuracy(outputs, labels) 
self.log('train_loss', loss) self.log('train_accuracy', self.train_accuracy) return loss def validation_step(self, val_batch, batch_idx): images, labels = val_batch outputs = self(images) criterion = nn.CrossEntropyLoss() loss = criterion(outputs, labels) self.val_accuracy(outputs, labels) self.log('val_loss', loss) self.log('val_accuracy', self.val_accuracy) def test_step(self, test_batch, batch_idx): images, labels = test_batch outputs = self(images) criterion = nn.CrossEntropyLoss() loss = criterion(outputs, labels) self.test_accuracy(outputs, labels) self.log('test_loss', loss) self.log('test_accuracy', self.test_accuracy) def _make_layer(self, block, num_residual_blocks, intermediate_channels, stride): identity_downsample = None layers = [] if stride != 1 or self.in_channels != intermediate_channels * 4: identity_downsample = nn.Sequential(nn.Conv2d(self.in_channels, intermediate_channels * 4, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(intermediate_channels * 4),) layers.append( block(self.in_channels, intermediate_channels, identity_downsample, stride)) self.in_channels = intermediate_channels * 4 for i in range(num_residual_blocks - 1): layers.append(block(self.in_channels, intermediate_channels)) return nn.Sequential(*layers) @classmethod def ResNet50(cls, img_channels, num_classes, learning_rate): return ResNet(ResidualBlock, [3, 4, 6, 3], img_channels, num_classes, learning_rate) @classmethod def ResNet101(cls, img_channels, num_classes, learning_rate): return ResNet(ResidualBlock, [3, 4, 23, 3], img_channels, num_classes, learning_rate) @classmethod def ResNet152(cls, img_channels, num_classes, learning_rate): return ResNet(ResidualBlock, [3, 8, 36, 3], img_channels, num_classes, learning_rate) I just want to print the training and validation accuracy and loss at the end of each epoch. | self.log("train_loss", loss, prog_bar=True, on_step=False, on_epoch=True) The above code logs train_loss to the progress bar. 
https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#automatic-logging Or you can use this if on one device: def training_step(self, batch, batch_idx): ... loss = nn.functional.mse_loss(x_hat, x) return loss def training_epoch_end(self, outputs) -> None: loss = sum(output['loss'] for output in outputs) / len(outputs) print(loss) Multiple GPUs: def training_epoch_end(self, outputs) -> None: gathered = self.all_gather(outputs) if self.global_rank == 0: # print(gathered) loss = sum(output['loss'].mean() for output in gathered) / len(outputs) print(loss.item()) Updated December 2023: I don't think it works anymore since Lightning 2.0 and I have to use TorchMetric to customize my metrics. | 7 | 12 |
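The epoch-end averaging in training_epoch_end above is plain arithmetic over the per-step outputs; a quick sanity check of that reduction with made-up loss numbers (no torch required — real outputs would hold tensors):

```python
# Each training_step returns a dict containing a "loss" entry;
# the epoch-end hook averages them across all steps of the epoch.
outputs = [{"loss": 0.9}, {"loss": 0.7}, {"loss": 0.5}]
epoch_loss = sum(output["loss"] for output in outputs) / len(outputs)
print(round(epoch_loss, 3))  # 0.7
```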
71,268,169 | 2022-2-25 | https://stackoverflow.com/questions/71268169/optional-query-parameters-in-fastapi | I don't understand optional query parameters in FastAPI. How is it different from default query parameters with a default value of None? What is the difference between arg1 and arg2 in the example below where arg2 is made an optional query parameter as described in the above link? @app.get("/info/") async def info(arg1: int = None, arg2: int | None = None): return {"arg1": arg1, "arg2": arg2} | This is covered in the FastAPI reference manual, albeit just as a small note: async def read_items(q: Optional[str] = None): FastAPI will know that the value of q is not required because of the default value = None. The Optional in Optional[str] is not used by FastAPI, but will allow your editor to give you better support and detect errors. (Optional[str] is the same as str | None pre 3.10 for other readers) Since your editor might not be aware of the context in which the parameter is populated and used by FastAPI, it might have trouble understanding the actual signature of the function when the parameter is not marked as Optional. You may or may not care about this distinction. | 24 | 31 |
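The equivalence the answer describes can be checked without running FastAPI at all: both parameters below default to None, so both may be omitted by the caller, and Optional[int] is literally shorthand for Union[int, None] (the function name here is just for illustration):

```python
import inspect
from typing import Optional, Union

def info(arg1: int = None, arg2: Optional[int] = None):
    return {"arg1": arg1, "arg2": arg2}

# Both parameters carry a default of None, so either may be omitted.
sig = inspect.signature(info)
print(all(p.default is None for p in sig.parameters.values()))  # True
print(Optional[int] == Union[int, None])  # True: Optional[X] is Union[X, None]
print(info(arg2=5))  # {'arg1': None, 'arg2': 5}
```

The only difference, as the answer notes, is what your editor/type checker can infer — at runtime the two spellings behave identically.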
71,263,405 | 2022-2-25 | https://stackoverflow.com/questions/71263405/run-bash-command-via-subprocess-in-python-without-bandit-warning-b404-and-b603 | Since the pre-commit hook does not allow even warnings and commits issued by bandit, I need to find a way to execute bash commands from python scripts without bandit complaining. Using the subprocess python package, bandit has always complained so far, no matter what I did. I used ".run()", ".check_call()", ".Popen()", .. all without shell=True and yet there's no avail. If there is a secure alternative to subprocess, I'd also be interested, but I'm sure it must work somehow with subprocess as well. Example which is not accepted by bandit: import shlex import subprocess ... bash_command = ( f'aws s3 cp {source_dir} s3://{target_bucket_name} --recursive' f' --profile {profile_name}') subprocess.check_call(shlex.split(bash_command), text=True) | In order for the code to be secure, you need to know that source_dir target_bucket_name profile_name aren't malicious: e.g. can an untrusted user pass .ssh as the value to be copied? Once you know the subprocess line is secure, you can add # nosec comment to tell bandit not to give a warning about the line: subprocess.check_call(shlex.split(bash_command), text=True) # nosec (The command aws s3 ... running in subprocess.check_call isn't running in a bash shell, which might confuse people reading the question. Python will directly start the aws process, passing arguments.) | 6 | 3 |
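The # nosec comment is only defensible once the interpolated values are vetted; a sketch of one way to do that before building the argument list (the allow-list regex here is a hypothetical policy — adapt it to your actual naming rules):

```python
import re
import shlex

# Hypothetical policy: alphanumeric start, no leading dot (rejects e.g. ".ssh").
SAFE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._/-]*$")

def build_s3_copy_argv(source_dir: str, bucket: str, profile: str) -> list:
    for value in (source_dir, bucket, profile):
        if not SAFE.match(value):
            raise ValueError(f"refusing unsafe argument: {value!r}")
    cmd = f"aws s3 cp {source_dir} s3://{bucket} --recursive --profile {profile}"
    return shlex.split(cmd)

print(build_s3_copy_argv("data/out", "my-bucket", "prod")[:4])
# ['aws', 's3', 'cp', 'data/out']
```

Since the result is an argv list (no shell involved), validation only needs to guard the semantics of the aws command itself, not shell metacharacters.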
71,312,665 | 2022-3-1 | https://stackoverflow.com/questions/71312665/you-may-have-failed-to-include-the-related-model-in-your-api-or-incorrectly-con | I'm trying to setup the lookup field between two entities, but I can't fix this error. I've already tried these solutions but none of them worked for me(What am I doing wrong?): Django Rest Framework, improperly configured lookup field Django Rest Framework - Could not resolve URL for hyperlinked relationship using view name "user-detail" DRF Could not resolve URL for hyperlinked relationship using view name on PrimaryKeyRelatedField here's my code Models: class Category(models.Model): title = models.CharField(max_length=50, unique=True) slug = models.SlugField(max_length=80, default='') def __str__(self): return self.title class Option(models.Model): title = models.CharField(max_length=80) slug = models.SlugField(max_length=80, unique=True) description = models.CharField(max_length=250) price = models.DecimalField(max_digits=7, decimal_places=2) category = models.ForeignKey(Category, related_name='options', on_delete=models.CASCADE) photo = models.ImageField(upload_to='options', null=True) class Meta: ordering = ['title'] def __str__(self): return self.title Serializers: class CategorySerializer(serializers.HyperlinkedModelSerializer): options = serializers.HyperlinkedRelatedField(many=True, view_name='option-detail', read_only=True) class Meta: model = Category fields = ('url', 'slug', 'title', 'options') lookup_field = 'slug' extra_kwargs = { 'url': {'lookup_field': 'slug'} } class OptionSerializer(serializers.HyperlinkedModelSerializer): category = serializers.ReadOnlyField(source='category.title') class Meta: model = Option fields = ('url', 'slug', 'title', 'description', 'price', 'category') lookup_field = 'slug' extra_kwargs = { 'url': {'lookup_field': 'slug'}, 'category': {'lookup_field': 'slug'} } Views: class CategoryViewSet(viewsets.ReadOnlyModelViewSet): """ Returns the Category list or the requested one 
""" queryset = Category.objects.all() serializer_class = CategorySerializer lookup_field = 'slug' class OptionViewSet(viewsets.ReadOnlyModelViewSet): """ Returns the Option list or the requested one """ queryset = Option.objects.all() serializer_class = OptionSerializer lookup_field = 'slug' urls: router = DefaultRouter() router.register(r'options', views.OptionViewSet) router.register(r'categories', views.CategoryViewSet) urlpatterns = [ path('', include(router.urls)), ] This works for the Option model. When I hit the '[localhost]/options/' url, it correctly lists the options and when hitting '[localhost]/options/some-option-slug' it returns the correct option. But none of that works for the Category model. Calls to '[localhost]/categories/' retuns "Could not resolve URL for hyperlinked relationship using view name "option-detail". You may have failed to include the related model in your API, or incorrectly configured the lookup_field attribute on this field.". While calls to '[localhost]/categories/category-slug/' returns 404 Not Found. 
My django version is 4.0.1 and my Django Rest Framework version is 3.13.1 EDIT As suggested by @code-apprendice, here's the output of print(router): [<URLPattern '^options/$' [name='option-list']>, <URLPattern '^options\.(?P<format>[a-z0-9]+)/?$' [name='option-list']>, <URLPattern '^options/(?P<slug>[^/.]+)/$' [name='option-detail']>, <URLPattern '^options/(?P<slug>[^/.]+)\.(?P<format>[a-z0-9]+)/?$' [name='option-detail']>, <URLPattern '^categories/$' [name='category-list']>, <URLPattern '^categories\.(?P<format>[a-z0-9]+)/?$' [name='category-list']>, <URLPattern '^categories/(?P<slug>[^/.]+)/$' [name='category-detail']>, <URLPattern '^categories/(?P<slug>[^/.]+)\.(?P<format>[a-z0-9]+)/?$' [name='category-detail']>, <URLPattern '^$' [name='api-root']>, <URLPattern '^\.(?P<format>[a-z0-9]+)/?$' [name='api-root']>] The DRF correctly generated the views option-list, option-detail, category-list and category-detail | Defining the lookup_field attribute for the options in the CategorySerializer solved the problem. Here's the CategorySerializer class: class CategorySerializer(serializers.HyperlinkedModelSerializer): options = serializers.HyperlinkedRelatedField( view_name='option-detail', lookup_field = 'slug', many=True, read_only=True) class Meta: model = Category fields = ('url', 'slug', 'title', 'options') lookup_field = 'slug' extra_kwargs = { 'url': {'lookup_field': 'slug'} } The problem was that the CategorySerializer set an explicit options HyperlinkedRelatedField and its lookup_field needs to be configured too. Feel free to edit this answer to add a deeper explanation. | 5 | 7 |
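The router output above shows the detail routes capturing the slug with [^/.]+ (anything except slashes and dots); that character class can be checked directly with re, independent of Django (the pattern below is reconstructed for illustration):

```python
import re

# Same shape as the generated category-detail route.
detail = re.compile(r"^categories/(?P<slug>[^/.]+)/$")

print(detail.match("categories/hot-drinks/").group("slug"))  # hot-drinks
print(detail.match("categories/hot.drinks/"))  # None: '.' is reserved for format suffixes
```

This is why slugs containing dots 404 against the detail route even when the serializer configuration is correct.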
71,306,070 | 2022-3-1 | https://stackoverflow.com/questions/71306070/do-you-need-to-put-eos-and-bos-tokens-in-autoencoder-transformers | I'm starting to wrap my head around the transformer architecture, but there are some things that I am not yet able to grasp. In decoder-free transformers, such as BERT, the tokenizer includes always the tokens CLS and SEP before and after a sentence. I understand that CLS acts both as BOS and as a single hidden output that gives the classification information, but I am a bit lost about why does it need SEP for the masked language modeling part. I'll explain a bit more about the utility I expect to get. In my case, I want to train a transformer to act as an autoencoder, so target = input. There would be no decoder, since my idea is to reduce the dimensionality of the original vocabulary into less embedding dimensions, and then study (not sure how yet, but will get there) the reduced space in order to extract useful information. Therefore, an example would be: string_input = "The cat is black" tokens_input = [1,2,3,4] string_target = "The cat is black" tokens_output = [1,2,3,4] Now when tokenizing, assuming that we tokenize in the basis of word by word, what would be the advantage of adding BOS and EOS? I think these are only useful when you are using the self-attention decoder, right? so, since in that case, for the decoder the outputs would have to enter right-shifted, the vectors would be: input_string = "The cat is black EOS" input_tokens = [1,2,3,4,5] shifted_output_string = "BOS The cat is black" shifted_output_tokens = [6,1,2,3,4] output_string = "The cat is black EOS" output_token = [1,2,3,4,5] However, BERT does not have a self-attention decoder, but a simple feedforward layer. That is why I'm not sure of understanding the purpose of these special tokens. In summary, the questions would be: Do you always need BOS and EOS tokens, even if you don't have a transformer decoder? 
Why does BERT, which does not have a transformer decoder, require the SEP token for the masked language model part? | First, a little about BERT - BERT word embeddings allow for multiple vector representations for the same word, based on the context in which the word was used. In this sense, BERT embeddings are context-dependent. BERT explicitly takes the index position of each word in the sentence while calculating its embedding. The input to BERT is a sentence rather than a single word. This is because BERT needs the context of the whole sentence to determine the vectors of the words in the sentence. If you only input a single word vector to BERT it would completely defeat the purpose of BERT's bidirectional, contextual nature. The output is then a fixed-length vector representation of the whole input sentence. BERT provides support for out-of-vocabulary words because the model learns words at a "subword" level (also called "word-pieces"). The SEP token is used to help BERT differentiate between two different word sequences. This is necessary in next-sentence-prediction (NSP). CLS is also necessary in NSP to let BERT know when the first sequence begins. Ideally you would use a format like this: CLS [sequence 1] SEP [sequence 2] SEP Note that we are not using any BOS or EOS tokens. The standard BERT tokenizer does not include these. We can see this if we run the following code: from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') print(tokenizer.eos_token) print(tokenizer.bos_token) print(tokenizer.sep_token) print(tokenizer.cls_token) Output: None None [SEP] [CLS] For masked-language-modeling (MLM), we are only concerned with the MASK token, since the model's objective is merely to guess the masked token. BERT was trained on both NSP and MLM and it is the combination of those two training methods that makes BERT so effective. So to answer your questions - you do not "always need" EOS and/or BOS.
In fact, you don't "need" them at all. However, if you are fine-tuning BERT for a specific downstream task where you intend to use BOS and EOS tokens (the manner of which is up to you), then yes, I suppose you would include them as special tokens. But understand that BERT was not trained with those in mind and you may see unpredictable/unstable results. | 5 | 5 |
71,248,521 | 2022-2-24 | https://stackoverflow.com/questions/71248521/why-numexpr-defaulting-to-8-threads-warning-message-shown-in-python | I am trying to use the lux library in python to get visualization recommendations. It shows warnings like NumExpr defaulting to 8 threads.. import pandas as pd import numpy as np import opendatasets as od pip install lux-api import lux import matplotlib And then: link = "https://www.kaggle.com/noordeen/insurance-premium-prediction" od.download(link) df = pd.read_csv("./insurance-premium-prediction/insurance.csv") But everything is working fine. Is there any problem or should I ignore it? The warning shows like this: | This is not really something to worry about in most cases. The warning comes from this function, here the most important part: ... env_configured = False n_cores = detect_number_of_cores() if 'NUMEXPR_MAX_THREADS' in os.environ: # The user has configured NumExpr in the expected way, so suppress logs. env_configured = True n_cores = MAX_THREADS ... if 'NUMEXPR_NUM_THREADS' in os.environ: requested_threads = int(os.environ['NUMEXPR_NUM_THREADS']) elif 'OMP_NUM_THREADS' in os.environ: requested_threads = int(os.environ['OMP_NUM_THREADS']) else: requested_threads = n_cores if not env_configured: log.info('NumExpr defaulting to %d threads.'%n_cores) So if neither NUMEXPR_MAX_THREADS nor NUMEXPR_NUM_THREADS nor OMP_NUM_THREADS is set, NumExpr uses as many threads as there are cores (even though the documentation says "at most 8", this is not what I see in the code). You might want a different number of threads, e.g. more while really huge matrices are calculated and could profit from it, or fewer when there is no improvement. Set the environment variables either in the shell or prior to importing numexpr, e.g. import os os.environ['NUMEXPR_MAX_THREADS'] = '4' os.environ['NUMEXPR_NUM_THREADS'] = '2' import numexpr as ne | 12 | 11 |
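The precedence in the quoted snippet can be mirrored in a few lines of plain Python (illustration only — the real numexpr function also handles NUMEXPR_MAX_THREADS and logging):

```python
def requested_threads(env: dict, n_cores: int = 8) -> int:
    # NUMEXPR_NUM_THREADS wins, then OMP_NUM_THREADS, then the detected core count.
    if "NUMEXPR_NUM_THREADS" in env:
        return int(env["NUMEXPR_NUM_THREADS"])
    if "OMP_NUM_THREADS" in env:
        return int(env["OMP_NUM_THREADS"])
    return n_cores

print(requested_threads({"NUMEXPR_NUM_THREADS": "2", "OMP_NUM_THREADS": "4"}))  # 2
print(requested_threads({"OMP_NUM_THREADS": "4"}))                              # 4
print(requested_threads({}))                                                    # 8
```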
71,258,548 | 2022-2-24 | https://stackoverflow.com/questions/71258548/how-to-convert-dataframe-append-to-pandas-concat | In pandas 1.4.0: append() was deprecated, and the docs say to use concat() instead. FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. Codeblock in question: def generate_features(data, num_samples, mask): """ The main function for generating features to train or evaluate on. Returns a pd.DataFrame() """ logger.debug("Generating features, number of samples", num_samples) features = pd.DataFrame() for count in range(num_samples): row, col = get_pixel_within_mask(data, mask) input_vars = get_pixel_data(data, row, col) features = features.append(input_vars) print_progress(count, num_samples) return features These are the two options I've tried, but did not work: features = pd.concat([features],[input_vars]) and pd.concat([features],[input_vars]) This is the line that is deprecated and throwing the error: features = features.append(input_vars) | You can store the DataFrames generated in the loop in a list and concatenate them with features once you finish the loop. In other words, replace the loop: for count in range(num_samples): # .... code to produce `input_vars` features = features.append(input_vars) # remove this `DataFrame.append` with the one below: tmp = [] # initialize list for count in range(num_samples): # .... code to produce `input_vars` tmp.append(input_vars) # append to the list, (not DF) features = pd.concat(tmp) # concatenate after loop You can certainly concatenate in the loop but it's more efficient to do it only once. | 21 | 18 |
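The accumulate-then-concatenate pattern from the answer in a runnable form (assuming pandas is installed; the column name is made up):

```python
import pandas as pd

frames = []                                         # accumulate per-iteration frames
for count in range(3):
    frames.append(pd.DataFrame({"pixel": [count]}))  # stand-in for input_vars
features = pd.concat(frames, ignore_index=True)      # one concat after the loop
print(features["pixel"].tolist())  # [0, 1, 2]
```

Appending to a plain list is O(1) per iteration, whereas DataFrame.append copied the whole frame each time — which is part of why it was deprecated.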
71,311,507 | 2022-3-1 | https://stackoverflow.com/questions/71311507/modulenotfounderror-no-module-named-app-fastapi-docker | FROM python:3.8 WORKDIR /app COPY requirements.txt / RUN pip install --requirement /requirements.txt COPY ./app /app EXPOSE 8000 CMD ["uvicorn", "app.main:app", "--host=0.0.0.0" , "--reload" , "--port", "8000"] When I used docker-compose up -d I got: ModuleNotFoundError: No module named 'app' The folders in the FastAPI project:
fastapi
  app
    main.py
    language_detector.py
  Dockerfile
  docker-compose | CMD ["uvicorn", "main:app", "--host=0.0.0.0" , "--reload" , "--port", "8000"] Your work directory is /app and the main.py file is already there. So you don't need to reference the app.main module. Just call the main module directly in CMD. | 5 | 13 |
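Why app.main fails while main works can be reproduced outside Docker: with the container's WORKDIR on the import path, main.py is a top-level module and there is no app package. A sketch (the temporary directory and module contents are just for the demonstration):

```python
import importlib
import os
import sys
import tempfile

workdir = tempfile.mkdtemp()                 # stands in for the container's /app
with open(os.path.join(workdir, "main.py"), "w") as f:
    f.write("app = 'my-fastapi-app'\n")

sys.path.insert(0, workdir)
print(importlib.import_module("main").app)   # works: main.py sits on sys.path

try:
    importlib.import_module("app.main")      # the failing spelling from the Dockerfile
except ModuleNotFoundError as exc:
    print(type(exc).__name__)                # ModuleNotFoundError
```

uvicorn resolves "app.main:app" with exactly this import machinery, which is why the module path in CMD has to match the layout inside the container, not on the host.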
71,302,366 | 2022-3-1 | https://stackoverflow.com/questions/71302366/how-can-i-programmatically-trigger-an-event-with-pysimplegui | For example, the "Show" event in the example below is tied to clicking the "Show" button. Is there a way to programmatically fire off the "Show" event without actually clicking the button? The goal is to automate clicking a series of buttons and filling text boxes by just clicking one other button instead, like a browser autofill. import PySimpleGUI as sg sg.theme("BluePurple") layout = [ [sg.Text("Your typed chars appear here:"), sg.Text(size=(15, 1), key="-OUTPUT-")], [sg.Input(key="-IN-")], [sg.Button("Show"), sg.Button("Exit")], ] window = sg.Window("Pattern 2B", layout) while True: # Event Loop event, values = window.read() print(event, values) if event == sg.WIN_CLOSED or event == "Show": # Update the "output" text element to be the value of "input" element window["-OUTPUT-"].update(values["-IN-"]) window.close() | From martineau's comment above: You can generate a click of the button as if the user clicked on it by calling its click() method. See the docs. Additionally, you can fire a specific event with: window.write_event_value(key, value) | 4 | 6 |
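Conceptually, write_event_value just pushes a (key, value) pair onto the window's event queue, which window.read() then returns like any button click. A toy model of that mechanism without any GUI (the function name mirrors the PySimpleGUI API, but this is not the real implementation):

```python
import queue

events = queue.Queue()

def write_event_value(key, value):
    events.put((key, value))      # what the real method does, in spirit

# Fire the same event the "Show" button would generate, with prefilled values:
write_event_value("Show", {"-IN-": "autofilled text"})
event, values = events.get()
print(event, values["-IN-"])  # Show autofilled text
```

This is why programmatic events and real clicks are indistinguishable inside the event loop — both arrive through the same queue.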
71,285,719 | 2022-2-26 | https://stackoverflow.com/questions/71285719/python-loop-to-run-after-n-minutes-from-start-time | I am trying to create a while loop which will iterate between 2 time objects, while datetime.datetime.now().time() <= datetime.datetime.now() +relativedelta(hour=1): but on every n minutes or second interval. So if the starting time was 1:00 AM, the next iteration should begin at 1:05 AM with n being 5 mins. So the iteration should begin 5 mins after the start time, and not, for example, from the end of an iteration, which is the case when using sleep. Could you please advise how this could be accomplished? A possible solution to this was from here: write python script that is executed every 5 minutes import schedule import time def func(): print("this is python") schedule.every(5).minutes.do(func) while True: schedule.run_pending() time.sleep(1) With this, the start time has to be 1 am. Secondly, what if the program needs to run at say 5 min + 1? In that case a 6 min interval won't work. | I believe this could be considered an object-oriented "canonical" solution which creates a Thread subclass instance that will call a specified function repeatedly every datetime.timedelta units until canceled. When it starts and how long it is left running are not details the class concerns itself with; those are left to the code making use of the class to determine. Since most of the action occurs in a separate thread, the main thread could be doing other things concurrently, if desired.
import datetime from threading import Thread, Event import time from typing import Callable class TimedCalls(Thread): """Call function again every `interval` time duration after it's first run.""" def __init__(self, func: Callable, interval: datetime.timedelta) -> None: super().__init__() self.func = func self.interval = interval self.stopped = Event() def cancel(self): self.stopped.set() def run(self): next_call = time.time() while not self.stopped.is_set(): self.func() # Target activity. next_call = next_call + self.interval # Block until beginning of next interval (unless canceled). self.stopped.wait(next_call - time.time()) def my_function(): print(f"this is python: {time.strftime('%H:%M:%S', time.localtime())}") # Start test a few secs from now. start_time = datetime.datetime.now() + datetime.timedelta(seconds=5) run_time = datetime.timedelta(minutes=2) # How long to iterate function. end_time = start_time + run_time assert start_time > datetime.datetime.now(), 'Start time must be in future' timed_calls = TimedCalls(my_function, 10) # Thread to call function every 10 secs. print(f'waiting until {start_time.strftime("%H:%M:%S")} to begin...') wait_time = start_time - datetime.datetime.now() time.sleep(wait_time.total_seconds()) print('starting') timed_calls.start() # Start thread. while datetime.datetime.now() < end_time: time.sleep(1) # Twiddle thumbs while waiting. print('done') timed_calls.cancel() Sample run: waiting until 11:58:30 to begin... starting this is python: 11:58:30 this is python: 11:58:40 this is python: 11:58:50 this is python: 11:59:00 this is python: 11:59:10 this is python: 11:59:20 this is python: 11:59:30 this is python: 11:59:40 this is python: 11:59:50 this is python: 12:00:00 this is python: 12:00:10 this is python: 12:00:20 done | 5 | 2 |
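The important design choice in run() above is next_call = next_call + self.interval: each wait is computed from the planned schedule rather than from "now", so the time spent inside the callback does not push later calls back. The effect can be seen without any threads (intervals shrunk to fractions of a second to keep it quick):

```python
import time

interval = 0.05
next_call = time.time()
ticks = []
for _ in range(5):
    ticks.append(time.time())
    time.sleep(0.01)                      # simulated work inside the callback
    next_call += interval                 # schedule from the plan, not from "now"
    time.sleep(max(0.0, next_call - time.time()))

elapsed = ticks[-1] - ticks[0]
print(elapsed >= 0.15)  # True: four planned ~0.05 s intervals elapsed
```

Had the loop slept a fixed interval after the work instead, each iteration would drift later by the work time — exactly the behavior the question wants to avoid.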
71,279,968 | 2022-2-26 | https://stackoverflow.com/questions/71279968/getting-a-prediction-from-an-onnx-model-in-python | I can't find anyone who explains to a layman how to load an onnx model into a python script, then use that model to make a prediction when fed an image. All I could find were these lines of code: sess = rt.InferenceSession("onnx_model.onnx") input_name = sess.get_inputs()[0].name label_name = sess.get_outputs()[0].name pred = sess.run([label_name], {input_name: X.astype(np.float32)})[0] But I don't know what any of that means. And everywhere I look, everybody already seems to know what they mean, so nobody's explaining it. That would be one thing if I could just run this code, but I can't. It gives me this error: onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid rank for input: Input3 Got: 2 Expected: 4 Please fix either the inputs or the model. So I need to actually know what those things mean so I can figure out how to fix the error. Will someone knowledgeable please explain? | Let's first start by going over the code you provided, to make everything clear. sess = ort.InferenceSession("onnx_model.onnx") This line loads the model into a session object. This means that the layers, functions and weights used in the model are made ready to perform inferences. input_name = sess.get_inputs()[0].name label_name = sess.get_outputs()[0].name The two methods get_inputs and get_outputs each retrieve some meta information about the model, that being what inputs the model expects, and what outputs it can provide. Off of this meta information in these lines, only the first input & output is actually used, and off of these, only the name is being gotten, and saved into variables. For the last line, let's tackle that part by part. 
pred = sess.run(...)[0] This performs an inference on the model; we'll go over the inputs to this method next, but for now, the output is a list of different outputs. These outputs are each numpy arrays. In this case only the first output in this list is being used, and saved to the pred variable ([label_name], {input_name: X.astype(np.float32)}) These are the inputs to sess.run. The first is a list of names of outputs that you want to be computed by the session. The second argument is a dict, where each input's name maps to numpy arrays. These arrays are expected to be of the same dimension as the ones supplied during creation of the model. Similarly the types of these arrays should also match the types used during creation of the model. The error you encountered seems to indicate that the supplied array doesn't have the expected dimensions. The intended number of dimensions seems to be 4. To gain clarity about what the exact shape and data type of the input array should be, there are visualization tools, like Netron | 6 | 7 |
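The "Invalid rank ... Got: 2 Expected: 4" part of the error means exactly this: the model's input wants a 4-D NCHW batch, but a bare 2-D image was passed. Adding the batch and channel axes with numpy fixes the rank (the 28x28 size is an assumption for illustration — check your model's real input shape, e.g. in Netron):

```python
import numpy as np

img = np.random.rand(28, 28).astype(np.float32)  # rank 2: (height, width)
print(img.ndim)                                  # 2

X = img[np.newaxis, np.newaxis, :, :]            # rank 4: (batch, channel, height, width)
print(X.shape)                                   # (1, 1, 28, 28)
# X can now be fed as {input_name: X} in sess.run.
```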
71,288,513 | 2022-2-27 | https://stackoverflow.com/questions/71288513/how-can-i-determine-validation-loss-for-faster-rcnn-pytorch | I followed this tutorial for object detection: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html and their GitHub repository that contains the following train_one_epoch and evaluate functions: https://github.com/pytorch/vision/blob/main/references/detection/engine.py However, I want to calculate losses during validation. I implemented this for the evaluation loss, where essentially to obtain losses, model.train() needs to be on: @torch.no_grad() def evaluate_loss(model, data_loader, device): val_loss = 0 model.train() for images, targets in data_loader: images = list(image.to(device) for image in images) targets = [{k: v.to(device) for k, v in t.items()} for t in targets] loss_dict = model(images, targets) losses = sum(loss for loss in loss_dict.values()) # reduce losses over all GPUs for logging purposes loss_dict_reduced = utils.reduce_dict(loss_dict) losses_reduced = sum(loss for loss in loss_dict_reduced.values()) val_loss += losses_reduced validation_loss = val_loss / len(data_loader) return validation_loss I then place it after the learning rate scheduler step in my for loop: for epoch in range(args.num_epochs): # train for one epoch, printing every 10 iterations train_one_epoch(model, optimizer, train_data_loader, device, epoch, print_freq=10) # update the learning rate lr_scheduler.step() validation_loss = evaluate_loss(model, valid_data_loader, device=device) # evaluate on the test dataset evaluate(model, valid_data_loader, device=device) Does this look correct or can it interfere with training or produce inaccurate validation losses? If it is OK, is there a simple way of applying early stopping based on this validation loss?
I'm considering just adding something like this after the evaluate model function shown above: torch.save({ 'epoch': epoch, 'model_state_dict': net.state_dict(), 'optimizer_state_dict': optimizer.state_dict(), 'validation loss': valid_loss, }, PATH) where I also aim to save the model at every epoch for checkpointing purposes. However I need to determine the validation "loss" for saving the "best" model. | So it turns out no stages of the pytorch fasterrcnn return losses when model.eval() is set. However, you can just manually use the forward code to generate the losses in evaluation mode: from typing import Tuple, List, Dict, Optional import torch from torch import Tensor from collections import OrderedDict from torchvision.models.detection.roi_heads import fastrcnn_loss from torchvision.models.detection.rpn import concat_box_prediction_layers def eval_forward(model, images, targets): # type: (List[Tensor], Optional[List[Dict[str, Tensor]]]) -> Tuple[Dict[str, Tensor], List[Dict[str, Tensor]]] """ Args: images (list[Tensor]): images to be processed targets (list[Dict[str, Tensor]]): ground-truth boxes present in the image (optional) Returns: result (list[BoxList] or dict[Tensor]): the output from the model. It returns list[BoxList] contains additional fields like `scores`, `labels` and `mask` (for Mask R-CNN models). 
""" model.eval() original_image_sizes: List[Tuple[int, int]] = [] for img in images: val = img.shape[-2:] assert len(val) == 2 original_image_sizes.append((val[0], val[1])) images, targets = model.transform(images, targets) # Check for degenerate boxes # TODO: Move this to a function if targets is not None: for target_idx, target in enumerate(targets): boxes = target["boxes"] degenerate_boxes = boxes[:, 2:] <= boxes[:, :2] if degenerate_boxes.any(): # print the first degenerate box bb_idx = torch.where(degenerate_boxes.any(dim=1))[0][0] degen_bb: List[float] = boxes[bb_idx].tolist() raise ValueError( "All bounding boxes should have positive height and width." f" Found invalid box {degen_bb} for target at index {target_idx}." ) features = model.backbone(images.tensors) if isinstance(features, torch.Tensor): features = OrderedDict([("0", features)]) model.rpn.training=True #model.roi_heads.training=True #####proposals, proposal_losses = model.rpn(images, features, targets) features_rpn = list(features.values()) objectness, pred_bbox_deltas = model.rpn.head(features_rpn) anchors = model.rpn.anchor_generator(images, features_rpn) num_images = len(anchors) num_anchors_per_level_shape_tensors = [o[0].shape for o in objectness] num_anchors_per_level = [s[0] * s[1] * s[2] for s in num_anchors_per_level_shape_tensors] objectness, pred_bbox_deltas = concat_box_prediction_layers(objectness, pred_bbox_deltas) # apply pred_bbox_deltas to anchors to obtain the decoded proposals # note that we detach the deltas because Faster R-CNN do not backprop through # the proposals proposals = model.rpn.box_coder.decode(pred_bbox_deltas.detach(), anchors) proposals = proposals.view(num_images, -1, 4) proposals, scores = model.rpn.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level) proposal_losses = {} assert targets is not None labels, matched_gt_boxes = model.rpn.assign_targets_to_anchors(anchors, targets) regression_targets = 
model.rpn.box_coder.encode(matched_gt_boxes, anchors) loss_objectness, loss_rpn_box_reg = model.rpn.compute_loss( objectness, pred_bbox_deltas, labels, regression_targets ) proposal_losses = { "loss_objectness": loss_objectness, "loss_rpn_box_reg": loss_rpn_box_reg, } #####detections, detector_losses = model.roi_heads(features, proposals, images.image_sizes, targets) image_shapes = images.image_sizes proposals, matched_idxs, labels, regression_targets = model.roi_heads.select_training_samples(proposals, targets) box_features = model.roi_heads.box_roi_pool(features, proposals, image_shapes) box_features = model.roi_heads.box_head(box_features) class_logits, box_regression = model.roi_heads.box_predictor(box_features) result: List[Dict[str, torch.Tensor]] = [] detector_losses = {} loss_classifier, loss_box_reg = fastrcnn_loss(class_logits, box_regression, labels, regression_targets) detector_losses = {"loss_classifier": loss_classifier, "loss_box_reg": loss_box_reg} boxes, scores, labels = model.roi_heads.postprocess_detections(class_logits, box_regression, proposals, image_shapes) num_images = len(boxes) for i in range(num_images): result.append( { "boxes": boxes[i], "labels": labels[i], "scores": scores[i], } ) detections = result detections = model.transform.postprocess(detections, images.image_sizes, original_image_sizes) # type: ignore[operator] model.rpn.training=False model.roi_heads.training=False losses = {} losses.update(detector_losses) losses.update(proposal_losses) return losses, detections Testing this code gives me: import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor # load a model pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # replace the classifier with a new one, that has # num_classes which is user-defined num_classes = 2 # 1 class (person) + background # get number of input features for the classifier in_features = 
model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) losses, detections = eval_forward(model,torch.randn([1,3,300,300]),[{'boxes':torch.tensor([[100,100,200,200]]),'labels':torch.tensor([0])}]) {'loss_classifier': tensor(0.6594, grad_fn=<NllLossBackward0>), 'loss_box_reg': tensor(0., grad_fn=<DivBackward0>), 'loss_objectness': tensor(0.5108, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>), 'loss_rpn_box_reg': tensor(0.0160, grad_fn=<DivBackward0>)} | 4 | 8 |
71,263,622 | 2022-2-25 | https://stackoverflow.com/questions/71263622/sslcertverificationerror-when-downloading-pytorch-datasets-via-torchvision | I am having trouble downloading the CIFAR-10 dataset from pytorch. Mostly it seems like some SSL error which I don't really know how to interpret. I have also tried changing the root to various other folders but none of them works. I was wondering whether it is a permission type setting on my end but I am inexperienced. Would appreciate some help to fix this! The code executed is here: trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=1) The error is reproduced here: --------------------------------------------------------------------------- SSLCertVerificationError Traceback (most recent call last) File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:1354, in AbstractHTTPHandler.do_open(self, http_class, req, **http_conn_args) 1353 try: -> 1354 h.request(req.get_method(), req.selector, req.data, headers, 1355 encode_chunked=req.has_header('Transfer-encoding')) 1356 except OSError as err: # timeout error File C:\ProgramData\Miniconda3\envs\pDL\lib\http\client.py:1256, in HTTPConnection.request(self, method, url, body, headers, encode_chunked) 1255 """Send a complete request to the server.""" -> 1256 self._send_request(method, url, body, headers, encode_chunked) File C:\ProgramData\Miniconda3\envs\pDL\lib\http\client.py:1302, in HTTPConnection._send_request(self, method, url, body, headers, encode_chunked) 1301 body = _encode(body, 'body') -> 1302 self.endheaders(body, encode_chunked=encode_chunked) File C:\ProgramData\Miniconda3\envs\pDL\lib\http\client.py:1251, in HTTPConnection.endheaders(self, message_body, encode_chunked) 1250 raise CannotSendHeader() -> 1251 self._send_output(message_body, encode_chunked=encode_chunked) File 
C:\ProgramData\Miniconda3\envs\pDL\lib\http\client.py:1011, in HTTPConnection._send_output(self, message_body, encode_chunked) 1010 del self._buffer[:] -> 1011 self.send(msg) 1013 if message_body is not None: 1014 1015 # create a consistent interface to message_body File C:\ProgramData\Miniconda3\envs\pDL\lib\http\client.py:951, in HTTPConnection.send(self, data) 950 if self.auto_open: --> 951 self.connect() 952 else: File C:\ProgramData\Miniconda3\envs\pDL\lib\http\client.py:1425, in HTTPSConnection.connect(self) 1423 server_hostname = self.host -> 1425 self.sock = self._context.wrap_socket(self.sock, 1426 server_hostname=server_hostname) File C:\ProgramData\Miniconda3\envs\pDL\lib\ssl.py:500, in SSLContext.wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session) 494 def wrap_socket(self, sock, server_side=False, 495 do_handshake_on_connect=True, 496 suppress_ragged_eofs=True, 497 server_hostname=None, session=None): 498 # SSLSocket class handles server_hostname encoding before it calls 499 # ctx._wrap_socket() --> 500 return self.sslsocket_class._create( 501 sock=sock, 502 server_side=server_side, 503 do_handshake_on_connect=do_handshake_on_connect, 504 suppress_ragged_eofs=suppress_ragged_eofs, 505 server_hostname=server_hostname, 506 context=self, 507 session=session 508 ) File C:\ProgramData\Miniconda3\envs\pDL\lib\ssl.py:1040, in SSLSocket._create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session) 1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") -> 1040 self.do_handshake() 1041 except (OSError, ValueError): File C:\ProgramData\Miniconda3\envs\pDL\lib\ssl.py:1309, in SSLSocket.do_handshake(self, block) 1308 self.settimeout(None) -> 1309 self._sslobj.do_handshake() 1310 finally: SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131) During 
handling of the above exception, another exception occurred: URLError Traceback (most recent call last) Input In [8], in <module> ----> 1 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) 2 trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=1) File C:\ProgramData\Miniconda3\envs\pDL\lib\site-packages\torchvision\datasets\cifar.py:66, in CIFAR10.__init__(self, root, train, transform, target_transform, download) 63 self.train = train # training set or test set 65 if download: ---> 66 self.download() 68 if not self._check_integrity(): 69 raise RuntimeError('Dataset not found or corrupted.' + 70 ' You can use download=True to download it') File C:\ProgramData\Miniconda3\envs\pDL\lib\site-packages\torchvision\datasets\cifar.py:144, in CIFAR10.download(self) 142 print('Files already downloaded and verified') 143 return --> 144 download_and_extract_archive(self.url, self.root, filename=self.filename, md5=self.tgz_md5) File C:\ProgramData\Miniconda3\envs\pDL\lib\site-packages\torchvision\datasets\utils.py:427, in download_and_extract_archive(url, download_root, extract_root, filename, md5, remove_finished) 424 if not filename: 425 filename = os.path.basename(url) --> 427 download_url(url, download_root, filename, md5) 429 archive = os.path.join(download_root, filename) 430 print("Extracting {} to {}".format(archive, extract_root)) File C:\ProgramData\Miniconda3\envs\pDL\lib\site-packages\torchvision\datasets\utils.py:130, in download_url(url, root, filename, md5, max_redirect_hops) 127 _download_file_from_remote_location(fpath, url) 128 else: 129 # expand redirect chain if needed --> 130 url = _get_redirect_url(url, max_hops=max_redirect_hops) 132 # check if file is located on Google Drive 133 file_id = _get_google_drive_file_id(url) File C:\ProgramData\Miniconda3\envs\pDL\lib\site-packages\torchvision\datasets\utils.py:78, in _get_redirect_url(url, max_hops) 75 headers = 
{"Method": "HEAD", "User-Agent": USER_AGENT} 77 for _ in range(max_hops + 1): ---> 78 with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response: 79 if response.url == url or response.url is None: 80 return url File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:222, in urlopen(url, data, timeout, cafile, capath, cadefault, context) 220 else: 221 opener = _opener --> 222 return opener.open(url, data, timeout) File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:525, in OpenerDirector.open(self, fullurl, data, timeout) 522 req = meth(req) 524 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method()) --> 525 response = self._open(req, data) 527 # post-process response 528 meth_name = protocol+"_response" File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:542, in OpenerDirector._open(self, req, data) 539 return result 541 protocol = req.type --> 542 result = self._call_chain(self.handle_open, protocol, protocol + 543 '_open', req) 544 if result: 545 return result File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:502, in OpenerDirector._call_chain(self, chain, kind, meth_name, *args) 500 for handler in handlers: 501 func = getattr(handler, meth_name) --> 502 result = func(*args) 503 if result is not None: 504 return result File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:1397, in HTTPSHandler.https_open(self, req) 1396 def https_open(self, req): -> 1397 return self.do_open(http.client.HTTPSConnection, req, 1398 context=self._context, check_hostname=self._check_hostname) File C:\ProgramData\Miniconda3\envs\pDL\lib\urllib\request.py:1357, in AbstractHTTPHandler.do_open(self, http_class, req, **http_conn_args) 1354 h.request(req.get_method(), req.selector, req.data, headers, 1355 encode_chunked=req.has_header('Transfer-encoding')) 1356 except OSError as err: # timeout error -> 1357 raise URLError(err) 1358 r = h.getresponse() 1359 except: URLError: <urlopen error 
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)> | Turn off the ssl verification. import ssl ssl._create_default_https_context = ssl._create_unverified_context | 6 | 13 |
71,262,481 | 2022-2-25 | https://stackoverflow.com/questions/71262481/how-to-avoid-roundoff-errors-in-numpy-random-choice | Say x_1, x_2, ..., x_n are n objects and one wants to pick one of them so that the probability of choosing x_i is proportional to some number u_i. Numpy provides a function for that: x, u = np.array([x_1, x_2, ..., x_n]), np.array([u_1, ..., u_n]) np.random.choice(x, p = u/np.sum(u)) However, I have observed that this code sometimes throws a ValueError saying "probabilities do not sum to 1.". This is probably due to the round-off errors of finite precision arithmetic. What should one do to make this function work properly? | After reading the answer https://stackoverflow.com/a/60386427/6087087 to the question pointed by @Pychopath, I have found the following solution, inspired by the documentation of numpy.random.multinomial https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.multinomial.html Say p is the array of probabilities which may not be exactly 1 due to roundoff errors, even if we normalized it with p = p/np.sum(p). This is not rare, see the comment by @pd shah at the answer https://stackoverflow.com/a/46539921/6087087. Just do p[-1] = 1 - np.sum(p[0:-1]) np.random.choice(x, p = p) And the problem is solved! The roundoff errors due to subtraction will be much smaller than roundoff errors due to normalization. Moreover, one need not worry about the changes in p, they are of the order of roundoff errors. | 4 | 6 |
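The last-element adjustment from the accepted answer can be demonstrated without NumPy. A minimal pure-Python sketch (the weights here are made up for illustration):

```python
import math
import random

def safe_probabilities(weights):
    """Normalize weights, then absorb any leftover float roundoff
    into the last entry so the probabilities sum to 1."""
    total = sum(weights)
    p = [w / total for w in weights]
    # The error introduced by this subtraction is far smaller than
    # the roundoff left over by the normalization above.
    p[-1] = 1.0 - sum(p[:-1])
    return p

p = safe_probabilities([0.1] * 10)
print(abs(math.fsum(p) - 1.0) < 1e-12)  # True

# The stdlib's random.choices tolerates unnormalized weights, so it is
# another escape hatch when exact normalization is a problem.
print(random.choices(["a", "b", "c"], weights=[1, 3, 6], k=1)[0] in "abc")  # True
```

The same idea carries over directly to the NumPy version in the answer: fix `p[-1]` after normalizing, then pass `p` to `np.random.choice`.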
71,260,969 | 2022-2-25 | https://stackoverflow.com/questions/71260969/changing-overlap-order-of-a-line-chart-in-altair | I generate a line chart in Altair. I'd like to control which lines are "on top" of the stack of lines. In my example here, I wish for the red line to be on top (newest date) and then descend down to the yellow (oldest date) to be on the bottom. I tried to control this with the sort parameter of of alt.Color but regardless of sort='ascending' or sort='descending' the order of the line overlap will not change. How can I control this? Was hoping I can do this without sorting my source dataframe itself. data = [{'review_date': dt.date(year=2022, month=2, day=24), 'a':19, 'b':17, 'c':12, 'd':8}, {'review_date': dt.date(year=2022, month=2, day=23), 'a':20, 'b':16, 'c':14, 'd':8}, {'review_date': dt.date(year=2022, month=2, day=22), 'a':22, 'b':16, 'c':14, 'd':10}, {'review_date': dt.date(year=2022, month=2, day=21), 'a':14, 'b':13, 'c':12, 'd':5},] df = pd.DataFrame(data).melt(id_vars=['review_date'], value_name='price', var_name='contract') df.review_date = pd.to_datetime(df.review_date) domain = df.review_date.unique() range_ = ['red', 'blue', 'gray', 'yellow'] alt.Chart(df, title='foo').mark_line().encode( x=alt.X('contract:N'), y=alt.Y('price:Q',scale=alt.Scale(zero=False)), color=alt.Color('review_date:O', sort="ascending", scale=alt.Scale(domain=domain, range=range_) ) ).interactive() | By default, graphical marks are plotted in the order they occur in the dataframe (as you noted), which means that the elements last in the dataframe will be plotted last and end up on top in the chart (called the highest "layer" or the highest "z-order"): import pandas as pd import altair as alt df = pd.DataFrame({ 'a': [1, 2, 1, 2], 'b': [1.1, 2.1, 1.0, 2.2], 'c': ['point1', 'point1', 'point2', 'point2'] }) alt.Chart(df).mark_circle(size=1000).encode( x='a', y='b', color='c' ) When you set the sort parameter of the color encoding, you are not changing the z-order 
for the dots, you are only changing the order in which they are assigned a color. In the plot below, "point2" is still on top, but it is now blue instead of orange: alt.Chart(df).mark_circle(size=1000).encode( x='a', y='b', color=alt.Color('c', sort='descending') ) If we wanted to change the z-ordering so that "point1" is on top, we would have to specify this with the order encoding: alt.Chart(df).mark_circle(size=1000).encode( x='a', y='b', color='c', order=alt.Order('c', sort='descending') ) However, as you can read in the Vega-Lite documentation the order encoding has a special behavior for stacked and path marks, including line mark, where it controls the order in which the points are connected in a line rather than their z-ordering/layering. Therefore, I believe the only way you can achieve the desired behavior is by sorting that column. You can do this during chart construction: alt.Chart(df).mark_line(size=10).encode( x='a', y='b', color='c' ) alt.Chart(df.sort_values('c', ascending=False)).mark_line(size=10).encode( x='a', y='b', color='c' ) | 5 | 4 |
71,299,591 | 2022-2-28 | https://stackoverflow.com/questions/71299591/why-is-typing-mapping-not-a-protocol | As described here, some built-in generic types are Protocols. This means that as long as they implement certain methods, type-checkers will mark them as being compatible with the type: If a class defines a suitable __iter__ method, mypy understands that it implements the iterable protocol and is compatible with Iterable[T]. So why is Mapping not a protocol? It clearly feels like it should be one, as evidenced by this well up-voted SO answer: typing.Mapping is an object which defines the __getitem__,__len__,__iter__ magic methods If it were one, I could pass things which behave like mappings into function which require a mapping, but doing that is not allowed: from typing import Mapping class IntMapping: def __init__(self): self._int_map = {} def __repr__(self): return repr(self._int_map) def __setitem__(self, key: int, value: int): self._int_map[key] = value def __getitem__(self, key: int): return self._int_map[key] def __len__(self): return len(self._int_map) def __iter__(self): return iter(self._int_map) def __contains__(self, item: int): return item in self._int_map def keys(self): return self._int_map.keys() def items(self): return self._int_map.items() def values(self): return self._int_map.values() def get(self, key: int, default: int) -> int: return self._int_map.get(key, default) def __eq__(self, other): return self == other def __ne__(self, other): return self != other x: Mapping = IntMapping() # Type checkers don't like this Shout-out to this answer for pointing me to the link about protocols) | It appears to be deliberate, and basically boils down to 'we think that type is too complex to be a protocol.' See https://www.python.org/dev/peps/pep-0544/#changes-in-the-typing-module. Note that you can get this effect by having your own class extend abc.Mapping | 5 | 5 |
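As the answer suggests, extending `abc`'s `Mapping` gives the desired typing behavior. A sketch of the question's `IntMapping` rewritten that way: implement only the three abstract methods, and the ABC supplies the rest:

```python
from collections.abc import Mapping

class IntMapping(Mapping):
    """Implement __getitem__, __len__ and __iter__; the Mapping ABC
    then provides keys, items, values, get, __contains__ and __eq__."""
    def __init__(self, data=None):
        self._int_map = dict(data or {})

    def __getitem__(self, key):
        return self._int_map[key]

    def __len__(self):
        return len(self._int_map)

    def __iter__(self):
        return iter(self._int_map)

m = IntMapping({1: 10, 2: 20})
print(isinstance(m, Mapping))  # True, so type checkers accept it as a Mapping
print(m.get(3, -1))            # -1, inherited for free
print(2 in m)                  # True, also inherited
```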
71,255,965 | 2022-2-24 | https://stackoverflow.com/questions/71255965/403-error-returned-from-python-get-requests-but-auth-works-in-postman | I'm trying to return a GET request from an API using HTTPBasicAuth. I've tested the following in Postman, and received the correct response URL:"https://someapi.data.io" username:"username" password:"password" And this returns me the data I expect, and all is well. When I've tried this in python however, I get kicked back a 403 error, alongside a ""error_type":"ACCESS DENIED","message":"Please confirm api-key, api-secret, and permission is correct." Below is my code: import requests from requests.auth import HTTPBasicAuth URL = 'https://someapi.data.io' authBasic=HTTPBasicAuth(username='username', password='password') r = requests.get(URL, auth = authBasic) print(r) I honestly can't tell why this isn't working since the same username and password passes in Postman using HTTPBasicAuth | You have not sent all the required request headers; Postman adds these automatically for you. To reproduce the request with Python's requests, specify the required headers explicitly. headers = { 'Host': 'sub.example.com', 'User-Agent': 'Chrome v22.2 Linux Ubuntu', 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, br', 'Connection': 'keep-alive', 'X-Requested-With': 'XMLHttpRequest' } url = 'https://sub.example.com' response = requests.get(url, headers=headers) | 5 | 5
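The same header-carrying request can be staged with only the standard library. Building (but deliberately not sending) a `urllib` `Request` shows which headers will actually go out; the URL and header values below are placeholders, not the real API:

```python
import urllib.request

# Build, but do not send, a request carrying the headers that Postman
# would otherwise add automatically.
req = urllib.request.Request(
    "https://sub.example.com",
    headers={
        "User-Agent": "Chrome v22.2 Linux Ubuntu",
        "Accept": "*/*",
    },
)
# urllib normalizes header names with str.capitalize().
print(req.get_header("User-agent"))  # Chrome v22.2 Linux Ubuntu
print(req.has_header("Accept"))      # True
```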
71,300,294 | 2022-2-28 | https://stackoverflow.com/questions/71300294/how-to-terminate-pythons-processpoolexecutor-when-parent-process-dies | Is there a way to make the processes in concurrent.futures.ProcessPoolExecutor terminate if the parent process terminates for any reason? Some details: I'm using ProcessPoolExecutor in a job that processes a lot of data. Sometimes I need to terminate the parent process with a kill command, but when I do that the processes from ProcessPoolExecutor keep running and I have to manually kill them too. My primary work loop looks like this: with concurrent.futures.ProcessPoolExecutor(n_workers) as executor: result_list = [executor.submit(_do_work, data) for data in data_list] for id, future in enumerate( concurrent.futures.as_completed(result_list)): print(f'{id}: {future.result()}') Is there anything I can add here or do differently to make the child processes in executor terminate if the parent dies? | You can start a thread in each process to terminate when parent process dies: def start_thread_to_terminate_when_parent_process_dies(ppid): pid = os.getpid() def f(): while True: try: os.kill(ppid, 0) except OSError: os.kill(pid, signal.SIGTERM) time.sleep(1) thread = threading.Thread(target=f, daemon=True) thread.start() Usage: pass initializer and initargs to ProcessPoolExecutor with concurrent.futures.ProcessPoolExecutor( n_workers, initializer=start_thread_to_terminate_when_parent_process_dies, # + initargs=(os.getpid(),), # + ) as executor: This works even if the parent process is SIGKILL/kill -9'ed. | 9 | 15 |
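The heart of the watchdog in the answer is `os.kill(pid, 0)`, which sends no signal and only checks that the pid exists (POSIX semantics assumed). That liveness check can be exercised on its own:

```python
import os

def process_alive(pid):
    """Return True if a process with this pid exists.
    Signal 0 performs error checking only; nothing is delivered."""
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    return True

print(process_alive(os.getpid()))  # True: the current process is alive
```

In the answer's thread, a `False` result for the parent pid is what triggers the self-directed `SIGTERM`.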
71,292,505 | 2022-2-28 | https://stackoverflow.com/questions/71292505/tk-python-checkbutton-rtl | I have a checkbutton: from tkinter import * master = Tk() Checkbutton(master, text="Here...").grid(row=0, sticky=W) mainloop() Which looks like this: I tried to move the checkbutton to the other side (to support RTL languages), so it'll be like: Here...[] I know that I can draw a label next to the checkbutton, but this way clicking the text won't effect the checkbutton. How can I do it? | You can bind the left mouse button click event of the label, to a lambda construct that toggles the checkbutton -: label.bind("<Button-1>", lambda x : check_button.toggle()) The label can then be placed before the checkbutton using grid(as mentioned in the OP at the end) -: from tkinter import * master = Tk() l1 = Label(master, text = "Here...") cb = Checkbutton(master) l1.grid(row = 0, column = 0) cb.grid(row = 0, column = 1, sticky=W) l1.bind("<Button-1>", lambda x : cb.toggle()) mainloop() This will toggle, the checkbutton even if the label is clicked. OUTPUT -: NOTE: The checkbutton, has to now be fetched as an object(cb), to be used in the lambda construct for the label's bind function callback argument. Thus, it is gridded in the next line. It is generally a good practice to manage the geometry separately, which can prevent error such as this one. Also, as mentioned in the post linked by @Alexander B. 
in the comments, if this assembly is to be used multiple times, it can also be made into a class of its own that inherits from the tkinter.Frame class -: class LabeledCheckbutton(Frame): def __init__(self, root, text = ""): Frame.__init__(self, root) self.checkbutton = Checkbutton(self) self.label = Label(self, text = text) self.label.grid(row = 0, column = 0) self.checkbutton.grid(row = 0, column = 1) self.label.bind('<Button-1>', lambda x : self.checkbutton.toggle()) return pass Using this with grid as the geometry manager would make the full code look like this -: from tkinter import * class LabeledCheckbutton(Frame): def __init__(self, root, text = ""): Frame.__init__(self, root) self.checkbutton = Checkbutton(self) self.label = Label(self, text = text) self.label.grid(row = 0, column = 0) self.checkbutton.grid(row = 0, column = 1) self.label.bind('<Button-1>', lambda x : self.checkbutton.toggle()) return pass master = Tk() lcb = LabeledCheckbutton(master, text = "Here...") lcb.grid(row = 0, sticky = W) mainloop() The output of the above code remains consistent with that of the first approach. The only difference is that it is now more easily scalable, as an object can be created whenever needed and the same lines of code need not be repeated every time. | 6 | 5
71,306,092 | 2022-3-1 | https://stackoverflow.com/questions/71306092/how-to-login-manually-to-telegram-account-with-pyrogram-without-interactive-cons | I'm using Python's pyrogram lib to log in to multiple accounts. I need a function that just sends the verification code to an account and then reads it from other user input (not the default pyrogram login prompt). When I use send_code it sends the code and then waits for user input from the console, and that is what I don't want it to do. I simply need a function that takes a phone number as a parameter and sends a confirmation code to it, and then a function to log in with that confirmation code (obtained from user input somewhere else, e.g. from a telegram message to a linked bot or .... | I found a way to do it, but with Telethon: client = TelegramClient('sessionfile',api_id,api_hash) def getcode(): code = ... # get the code from somewhere ( bot, file etc.. ) return code client.start(phone=phone_number,password=password,code_callback=getcode) This will log in, get the confirmation code from the given callback, then use it to log in and store the session file | 5 | 3
71,313,812 | 2022-3-1 | https://stackoverflow.com/questions/71313812/pattern-matching-to-check-a-protocol-getting-typeerror-called-match-pattern-mu | I need to match cases where the input is iterable. Here's what I tried: from typing import Iterable def detector(x: Iterable | int | float | None) -> bool: match x: case Iterable(): print('Iterable') return True case _: print('Non iterable') return False That is producing this error: TypeError: called match pattern must be a type Is it possible to detect iterability with match/case? Note, these two questions address the same error message but neither question is about how to detect iterability: how to fix TypeError: called match pattern must be a type in Python 3.10 Pattern Matching in Python getting Error : called match pattern must be a type | The problem is that typing.Iterable is only for type hints and is not considered a "type" by structural pattern matching. Instead, you need to use an abstract base class for detecting iterability: collections.abc.Iterable. The solution is to distinguish the two cases, marking one as being a type hint and the other as a class pattern for structural pattern matching: import typing import collections.abc def detector(x: typing.Iterable | int | float | None) -> bool: match x: case collections.abc.Iterable(): print('Iterable') return True case _: print('Non iterable') return False Also note that at the time of this writing mypy does not support the match statement. | 5 | 8
71,312,712 | 2022-3-1 | https://stackoverflow.com/questions/71312712/why-the-latest-python-3-8-x-release-provides-no-windows-installer | I need to install Python 3.8 on a Windows computer and hope to use the latest minor version of 3.8.12. The official release web page provides tarball files of the source code but no Windows installer. Python 3.8.10 provides Windows installers, but it is not the latest version. I am wondering: Why v3.8.12 does not provide any Windows installer? If we want to use v3.8.12, what can we do? I appreciate your help and suggestions. | https://devguide.python.org/#status-of-python-branches provides a summary of the various release statuses. features new features, bugfixes, and security fixes are accepted. prerelease feature fixes, bugfixes, and security fixes are accepted for the upcoming feature release. bugfix bugfixes and security fixes are accepted, new binaries are still released. (Also called maintenance mode or stable release) security only security fixes are accepted and no more binaries are released, but new source-only versions can be released end-of-life release cycle is frozen; no further changes can be pushed to it. Python 3.8 is currently in security mode, so only source releases (and no binary installers) are provided from Python itself. As the page you linked to says, According to the release calendar specified in PEP 569, Python 3.8 is now in the "security fixes only" stage of its life cycle: 3.8 branch only accepts security fixes and releases of those are made irregularly in source-only form until October 2024. Python 3.8 isn't receiving regular bug fixes anymore, and binary installers are no longer provided for it. Python 3.8.10 was the last full bugfix release of Python 3.8 with binary installers. Python 3.8.10 was the last release at the bug fix stage, and so the last for which Python officially provided binary installers. 
If you want a binary installer for 3.8.12, you'll have to either build one yourself from the source or find someone else who has done it or will do it for you. | 7 | 5
71,290,699 | 2022-2-28 | https://stackoverflow.com/questions/71290699/is-it-possible-to-connect-to-auradb-with-neomodel | Is it possible to connect to AuraDB with neomodel? The AuraDB connection URI looks like neo4j+s://xxxx.databases.neo4j.io and does not contain user/password information. However, neomodel's connection config uses bolt and does contain user/password information. config.DATABASE_URL = 'bolt://neo4j:password@localhost:7687' | Connecting to neo4j Aura uses the neo4j+s protocol, so you need to use the URI provided by Aura. Reference: https://neo4j.com/developer/python/#driver-configuration In the example below, you can set the database URL by setting the userid and password along with the URI. It works for me so it should also work for you. from neomodel import config user = 'neo4j' psw = 'awesome_password' uri = 'awesome.databases.neo4j.io' config.DATABASE_URL = 'neo4j+s://{}:{}@{}'.format(user, psw, uri) print(config.DATABASE_URL) Result: neo4j+s://neo4j:awesome_password@awesome.databases.neo4j.io | 7 | 6
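One refinement worth noting on top of the answer: if the password contains URL-special characters, percent-encode the credentials before interpolating them into the URI (the values below are made up):

```python
from urllib.parse import quote

user = "neo4j"
psw = "p@ss:word#1"  # hypothetical password with URL-special characters
uri = "awesome.databases.neo4j.io"

# Percent-encode so '@', ':' and '#' cannot corrupt the URI structure.
database_url = "neo4j+s://{}:{}@{}".format(quote(user, safe=""), quote(psw, safe=""), uri)
print(database_url)
# neo4j+s://neo4j:p%40ss%3Aword%231@awesome.databases.neo4j.io
```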
71,309,179 | 2022-3-1 | https://stackoverflow.com/questions/71309179/how-to-check-if-named-capture-group-exists | I'm wondering what is the proper way to test if a named capture group exists. Specifically, I have a function that takes a compiled regex as an argument. The regex may or may not have a specific named group, and the named group may or may not be present in a string being passed in: some_regex = re.compile("^foo(?P<idx>[0-9]*)?$") other_regex = re.compile("^bar$") def some_func(regex, string): m = regex.match(regex, string) if m.group("idx"): # get *** IndexError: no such group here... print(f"index found and is {m.group('idx')}") print(f"no index found") some_func(other_regex, "bar") I'd like to test if the group exists without using try -- as this would short circuit the rest of the function, which I would still need to run if the named group was not found. | You can check the groupdict of the match object: import re some_regex = re.compile("^foo(?P<idx>[0-9]*)?$") match = some_regex.match('foo11') print(True) if match and 'idx' in match.groupdict() else print(False) # True match = some_regex.match('bar11') print(True) if match and 'idx' in match.groupdict() else print(False) # False | 6 | 1 |
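One subtlety worth adding to the groupdict approach: when the pattern defines the group but the group does not participate in the match, groupdict() still contains the key, just with value None. Using .get covers all three cases. Note the pattern below is a slight variant of the question's ([0-9]+ instead of [0-9]*) so that an index-less "foo" leaves the group unmatched:

```python
import re

some_regex = re.compile(r"^foo(?P<idx>[0-9]+)?$")
other_regex = re.compile(r"^bar$")

def index_of(regex, string):
    """Return the 'idx' group if the pattern defines it AND it matched."""
    m = regex.match(string)
    if m and m.groupdict().get("idx") is not None:
        return m.group("idx")
    return None

print(index_of(some_regex, "foo11"))   # 11
print(index_of(some_regex, "foo"))     # None: group defined, but unmatched
print(index_of(other_regex, "bar"))    # None: group not in the pattern at all
```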
71,301,504 | 2022-2-28 | https://stackoverflow.com/questions/71301504/accumulate-the-grouped-sum-of-values-across-trillions-of-values | I have a data reduction issue that is proving to be very difficult to solve. Essentially, I have a program that calculates incremental values (floating point) for pairs of keys from a set of about 60 million keys total. The program will generate values for about 53 trillion pairs 'relatively' quickly (simply iterating through the values would take about three days 🤣). Not every pair of keys will occur, and many pairs will come up many times. There is no reasonable way to have the pairs come up in a particular order. What I need is a way to find the sum of the values generated for each pair of keys. For data that would fit in memory, this is a very simple problem. In python it would look something like: from collections import Counter res = Counter() for key1,key2,val in data_generator(): res[(key1,key2)] += val The problem, of course, is that a mapping like that won't fit in memory. So I'm looking for a way to do this efficiently with a mix of on-disk and in-memory processing. So far I've tried: A postgresql table with upserts (ON CONFLICT UPDATE). This turned out to be far, far too slow. A hybrid of in-memory dictionaries in python that write to a RocksDB or LMDB key value store when they get too big. Though these DBs are much faster than postgresql for this kind of task, the time to complete is still on the order of months. At this point, I'm hoping someone has a better approach that I could try. Is there a way to break this problem up into smaller parts? Is there a standard MapReduce approach to this kind of problem? Any tips or pointers would be greatly appreciated. Thanks! Edit: The computer I'm using has 64GB of RAM, 96 cores (most of my work is very parallelizable), and terabytes of HDD (and some SSD) storage.
It's hard to estimate the total number of key pairs that will be in the reduced result, but it will certainly be at least in the hundreds of billions. | As Frank Yellin observes, there's a one-round MapReduce algorithm. The mapper produces key-value pairs with key key1,key2 and value val. The MapReduce framework groups these pairs by key (the shuffle). The reducer sums the values. In order to control the memory usage, MapReduce writes the intermediate data to disk. Traditionally there are n files, and all of the pairs with key key1,key2 go to file hash((key1,key2)) mod n. There is a tension here: n should be large enough that each file can be handled by an in-memory map, but if n is too large, then the file system falls over. Back of the envelope math suggests that n might be between 1e4 and 1e5 for you. Hopefully the OS will use RAM to buffer the file writes for you, but make sure that you're maxing out your disk throughput or else you may have to implement buffering yourself. (There also might be a suitable framework, but you don't have to write much code for a single machine.) I agree with user3386109 that you're going to need a Really Big Disk. If you can regenerate the input multiple times, you can trade time for space by making k passes that each save only a 1/k fraction of the files. I'm concerned that the running time of this MapReduce will be too large relative to the mean time between failures. MapReduce is traditionally distributed for fault tolerance as much as parallelism. If there's anything you can tell us about how the input arises, and what you're planning to do with the output, we might be able to give you better advice. | 5 | 3 |
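The partition-by-hash step described in the answer can be sketched in miniature, with stdlib Counters standing in for the on-disk shard files (shard count and sample records are made up):

```python
from collections import Counter

def shard_reduce(records, n_shards=4):
    """Route each (key1, key2, val) record to shard hash((key1, key2)) % n_shards,
    sum within each shard, then merge - mirroring the map/shuffle/reduce phases,
    with each Counter standing in for one on-disk shard file."""
    shards = [Counter() for _ in range(n_shards)]
    for key1, key2, val in records:
        shards[hash((key1, key2)) % n_shards][(key1, key2)] += val
    total = Counter()
    for shard in shards:
        total.update(shard)  # Counter.update adds values per key
    return total

data = [(1, 2, 0.5), (3, 4, 1.0), (1, 2, 0.25)]
result = shard_reduce(data)
print(result[(1, 2)])  # 0.75
print(result[(3, 4)])  # 1.0
```

In the real setting each shard would be a file processed independently; the point of the hash partition is that every occurrence of a given key pair lands in the same shard, so each shard can be reduced with a Counter that fits in memory.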
71,290,916 | 2022-2-28 | https://stackoverflow.com/questions/71290916/vs-code-pylance-works-slow-with-much-delay | When I try to use the autocomplete using Pylance it is stuck there for some time and After Some time like 3 ~ 5 seconds the pop up with auto-complete shows up Python Language Server is already set to Pylance What I've tried so far. Reinstall Python Extension. Reinstall VS Code Restarted Python Language Server Reset VS Code Reinstall Pylance. But None of the above seems to work | It works well on my computer, how do you open this python file? Try moving your code to its own folder and opening that up instead of opening up some big folder that contains a lot of files. This does show a performance hole where large workspaces take a while to load. You can refer to this page for more details. | 18 | 3 |
71,305,358 | 2022-3-1 | https://stackoverflow.com/questions/71305358/python-default-value-of-function-as-a-function-argument | Suppose I have the function:

def myF(a, b):
    return a*b - 2*b

and let's say that I want the default value of b to be a-1. If I write:

def myF(a, b=a-1):
    return a*b - 2*b

I get the error message NameError: name 'a' is not defined. I can use the code below:

def myF(a, b):
    return a*b - 2*b

def myDefaultF(a):
    return myF(a, a-1)

to give myF a default value, but I don't like it. How can I avoid myDefaultF and have myF with the default value a-1 for b, without errors? | You can use None as a sentinel default (default values are evaluated at definition time, before a exists) and compute b inside the function:

def myF(a, b=None):
    if b is None:
        b = a - 1
    return a * b - 2 * b
| 5 | 9
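A quick check of the sentinel-default pattern from the answer above, using the question's myF (the numbers are just for illustration):

```python
def myF(a, b=None):
    if b is None:
        b = a - 1          # default computed from a at call time
    return a * b - 2 * b

# with b omitted, b defaults to a - 1
assert myF(5) == myF(5, 4)
print(myF(5))  # 5*4 - 2*4 = 12
```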
71,298,402 | 2022-2-28 | https://stackoverflow.com/questions/71298402/is-there-a-better-way-to-search-a-sorted-list-if-the-other-list-is-sorted-too | In the numpy library, one can pass a list into the numpy.searchsorted function, whereby it searches through a different list one element at a time and returns an array of the same size containing the indices needed to preserve order. However, it seems to waste performance if both lists are sorted. For example:

m = [1, 3, 5, 7, 9]
n = [2, 4, 6, 8, 10]
numpy.searchsorted(m, n)

would return [1, 2, 3, 4, 5], which is the correct answer, but it looks like this has complexity O(n ln(m)), whereas if one were to simply loop through m with some kind of pointer into n, the complexity seems more like O(n+m). Is there some kind of function in NumPy which does this? | AFAIK, it is not possible to do this in linear time with Numpy alone without making additional assumptions on the inputs (e.g. that the integers are small and bounded). An alternative solution is to use Numba to do the merge manually:

import numpy as np
import numba as nb

# Note: Numba requires a function signature with well defined array types
@nb.njit('int64[:](int64[::1], int64[::1])')
def search_both_sorted(a, b):
    i, j = 0, 0
    result = np.empty(b.size, np.int64)
    while i < a.size and j < b.size:
        if a[i] < b[j]:
            i += 1
        else:
            result[j] = i
            j += 1
    for k in range(j, b.size):
        result[k] = i
    return result

a, b = np.cumsum(np.random.randint(0, 100, (2, 1000000)).astype(np.int64), axis=1)
result = search_both_sorted(a, b)

A faster implementation consists in using a branchless approach so as to remove the overhead of branch mis-prediction (especially on random/unpredictable inputs) when a and b are about the same size. Additionally, the O(n log m) algorithm can be faster when b is small, so using np.searchsorted in that case is very efficient, as pointed out by @MichaelSzczesny.
Note that the Numba implementation of np.searchsorted can be a bit slower than the Numpy one, so it is better to pick the Numpy implementation in that case. Here is the optimized version:

@nb.njit('int64[:](int64[::1], int64[::1])')
def search_both_sorted_opt_numba(a, b):
    sa, sb = a.size, b.size
    # Choose the best algorithm
    if sb < sa * 0.15:
        # Use a version with branches because `a[i] < b[j]`
        # should be true most of the time.
        i, j = 0, 0
        result = np.empty(b.size, np.int64)
        while i < a.size and j < b.size:
            if a[i] < b[j]:
                i += 1
            else:
                result[j] = i
                j += 1
        for k in range(j, b.size):
            result[k] = i
    else:
        # Use a branchless approach to avoid mis-predictions
        i, j = 0, 0
        result = np.empty(b.size, np.int64)
        while i < a.size and j < b.size:
            tmp = a[i] < b[j]
            result[j] = i
            i += tmp
            j += ~tmp
        for k in range(j, b.size):
            result[k] = i
    return result

def search_both_sorted_opt(a, b):
    sa, sb = a.size, b.size
    # Choose the best algorithm
    if 2 * sb * np.log2(sa) < sa + sb:
        return np.searchsorted(a, b)
    else:
        return search_both_sorted_opt_numba(a, b)

searchsorted:                   19.1 ms
snp_search:                     11.8 ms
search_both_sorted:              6.5 ms
search_both_sorted_branchless:   4.3 ms

The optimized branchless Numba implementation is about 4.4 times faster than searchsorted, which is pretty good considering that the code of searchsorted is already highly optimized. It can be even faster when a and b are huge because of cache locality. | 4 | 2
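For reference, the same two-pointer merge can be written in pure Python. It is far slower than the Numba versions above, but it shows the O(n+m) logic without a JIT; the function name is mine, and the left-insertion convention matches np.searchsorted's default:

```python
from bisect import bisect_left

def search_both_sorted_py(a, b):
    """Pure-Python O(len(a) + len(b)) equivalent of np.searchsorted(a, b)
    when both inputs are already sorted (left-insertion convention)."""
    result = []
    i = 0
    for x in b:
        # advance the shared pointer; it never moves backwards,
        # so the total work across all of b is O(len(a) + len(b))
        while i < len(a) and a[i] < x:
            i += 1
        result.append(i)
    return result

# cross-check against the O(m log n) bisect approach on the question's data
a = [1, 3, 5, 7, 9]
b = [2, 4, 6, 8, 10]
assert search_both_sorted_py(a, b) == [bisect_left(a, x) for x in b]
print(search_both_sorted_py(a, b))  # [1, 2, 3, 4, 5]
```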
71,297,994 | 2022-2-28 | https://stackoverflow.com/questions/71297994/django-query-annotate-values-get-list-from-reverse-foreign-key | I have a simple model like:

from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

    def __str__(self):
        return self.name

class Blog(models.Model):
    title = models.CharField(max_length=100)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

I want to query all authors together with the titles of all the blogs they have written, like Author One : [Blog One, Blog Two, Blog Three]. I want this from a query, not a loop. My approach is to use a subquery like:

blogs = Blog.objects.filter(author=OuterRef('pk')).values("title")
authors = Author.objects.annotate(blogs=Subquery(blogs), output_field=CharField())

But I am getting an error like sub-select returns 2 columns - expected 1. How can I get all the authors with all the blogs they have written, without a loop? | If you are using Postgres as your database, then you can use the ArrayAgg function:

from django.contrib.postgres.aggregates import ArrayAgg

authors = Author.objects.annotate(blogs=ArrayAgg('blog_set__title'))
| 4 | 4
71,297,697 | 2022-2-28 | https://stackoverflow.com/questions/71297697/modulenotfounderror-when-running-a-simple-pytest | Python version 3.6 I have the following folder structure . βββ main.py βββ tests/ | βββ test_Car.py βββ automobiles/ βββ Car.py my_program.py from automobiles.Car import Car p = Car("Grey Sedan") print(p.descriptive_name()) Car.py class Car(): description = "Default" def __init__(self, message): self.description = message def descriptive_name(self): return self.description test_Car.py from automobiles.Car import Car def test_descriptive_name(): input_string = "Blue Hatchback" p = Car(input_string) assert(p.descriptive_name() == input_string) When running pytest in the commandline from the project root folder, I get the following error- Traceback: ..\..\..\AppData\Local\Programs\Python\Python36\lib\importlib\__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests\test_Car.py:2: in <module> from automobiles.Car import Car E ModuleNotFoundError: No module named 'automobiles' I've been battling this for awhile now and I think I'm missing something obvious. I don't think it is anything to do with a missing __init__.py - I've tried placing an empty __init.py__ alongside the car.py file, with no difference in the error. What do I have to change to get the test_Car.py to run successfully ? | You have 2 options: Run python -m pytest instead of pytest, which will also add the current directory to sys.path (see official docs for details). Add a __init__.py file under tests/, then you can simply run pytest. This basically enables pytest to discover the tests if they live outside of the application code. You can find some more details about this in the Tests outside application code section in the official docs. Hope that helped! | 5 | 6 |
71,287,607 | 2022-2-27 | https://stackoverflow.com/questions/71287607/how-to-make-a-normal-distribution-graph-from-data-frame-in-python | my question is how to make a normal distribution graph from data frame in Python. I can find many information to make such a graph from random numbers, but I don't know how to make it from data frame. First, I generated random numbers and made a data frame. import numpy as np import pandas from pandas import DataFrame cv1 = np.random.normal(50, 3, 1000) source = {"Genotype": ["CV1"]*1000, "AGW": cv1} Cultivar_1=DataFrame(source) Then, I tried to make a normal distribution graph. sns.kdeplot(data = Cultivar_1['AGW']) plt.xlim([30,70]) plt.xlabel("Grain weight (mg)", size=12) plt.ylabel("Frequency", size=12) plt.grid(True, alpha=0.3, linestyle="--") plt.show() However, this is a density graph, not a normal distribution graph which is calculated using mean and standard deviation. Could you let me know which codes I need to use to make a normal distribution graph? Thanks!! | I found one solution to make a normal distribution graph from data frame. #Library import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy.stats as stats #Generating data frame x = np.random.normal(50, 3, 1000) source = {"Genotype": ["CV1"]*1000, "AGW": x} df = pd.DataFrame(source) # Calculating mean and Stdev of AGW df_mean = np.mean(df["AGW"]) df_std = np.std(df["AGW"]) # Calculating probability density function (PDF) pdf = stats.norm.pdf(df["AGW"].sort_values(), df_mean, df_std) # Drawing a graph plt.plot(df["AGW"].sort_values(), pdf) plt.xlim([30,70]) plt.xlabel("Grain weight (mg)", size=12) plt.ylabel("Frequency", size=12) plt.grid(True, alpha=0.3, linestyle="--") plt.show() | 4 | 6 |
71,297,077 | 2022-2-28 | https://stackoverflow.com/questions/71297077/python-regex-replace-every-2nd-occurrence-in-a-string | I have a string with data that looks like this:

str1 = "[2.4],[5],[2.54],[4],[3.36],[4.46],[3.36],[4],[3.63],[4.86],[4],[4.63]"

I want to replace every second occurrence of "],[" with "," so it will look like this:

str2 = "[2.4,5],[2.54,4],[3.36,4.46],[3.36,4],[3.63,4.86],[4,4.63]"

Here is what I have so far:

str1 = "[2.4],[5],[2.54],[4],[3.36],[4.46],[3.36],[4],[3.63],[4.86],[4],[4.63]"
s2 = re.sub(r"],\[", ',', str1)
print(s2)

I was trying to mess around with this:

(.*?],\[){2}

But it does not seem to yield the desired results. I tried using loops, but I only managed to replace the second occurrence and nothing after, using this sample code I found here:

import re

def replacenth(string, sub, wanted, n):
    where = [m.start() for m in re.finditer(sub, string)][n-1]
    before = string[:where]
    after = string[where:]
    after = after.replace(sub, wanted, 1)
    newString = before + after
    print(newString)

For these variables:

string = 'ababababababababab'
sub = 'ab'
wanted = 'CD'
n = 5

Thank you. | You can use:

import re
from itertools import count

str1 = "[2.4],[5],[2.54],[4],[3.36],[4.46],[3.36],[4],[3.63],[4.86],[4],[4.63]"
c = count(0)
print(re.sub(r"],\[", lambda x: "," if next(c) % 2 == 0 else x.group(), str1))
# => [2.4,5],[2.54,4],[3.36,4.46],[3.36,4],[3.63,4.86],[4,4.63]

See the Python demo. The regex is the same, ],\[; it matches a literal ],[ text. The c = count(0) initializes the counter, whose value is incremented upon each match inside a lambda expression used as the replacement argument. When the counter is even, the match is replaced with a comma; otherwise, it is kept as is. | 6 | 6
71,297,090 | 2022-2-28 | https://stackoverflow.com/questions/71297090/can-i-unpack-destructure-a-typing-namedtuple | This is a simple question, so I'm surprised that I can't find it asked on SO (apologies if I've missed it), and it always pops into my mind as I contemplate a refactor to replace a tuple by a NamedTuple. Can I unpack a typing.NamedTuple as arguments or as a destructuring assignment, like I can with a tuple? | Yes, you certainly can.

from typing import NamedTuple

class Test(NamedTuple):
    a: int
    b: int

t = Test(1, 2)

# destructuring assignment
a, b = t
# a = 1
# b = 2

def f(a, b):
    return f"{a}{b}"

# unpack
f(*t)  # '12'

Unpacking order is the order of the fields in the definition. | 5 | 5
71,272,721 | 2022-2-25 | https://stackoverflow.com/questions/71272721/why-does-creating-a-variable-name-for-an-exception-raised-in-a-python-function-a | I've defined two simple Python functions that take a single argument, raise an exception, and handle the raised exception. One function uses a variable to refer to the exception before raising/handling, the other does not: def refcount_unchanged(x): try: raise Exception() except: pass def refcount_increases(x): e = Exception() try: raise e except: pass One of the resulting functions increases pythons refcount for its input argument, the other does not: import sys a = [] print(sys.getrefcount(a)) for i in range(3): refcount_unchanged(a) print(sys.getrefcount(a)) # prints: 2, 2, 2, 2 b = [] print(sys.getrefcount(b)) for i in range(3): refcount_increases(b) print(sys.getrefcount(b)) # prints: 2, 3, 4, 5 Can anyone explain why this happens? | It is a side effect of the "exception -> traceback -> stack frame -> exception" reference cycle from the __traceback__ attribute on exception instances introduced in PEP-344 (Python 2.5), and resolved in cases like refcount_unchanged in PEP-3110 (Python 3.0). In refcount_increases, the reference cycle can be observed by printing this: except: print(e.__traceback__.tb_frame.f_locals) # {'x': [], 'e': Exception()} which shows that x is also referenced in the frame's locals. The reference cycle is resolved when the garbage collector runs, or if gc.collect() is called. 
In refcount_unchanged, as per PEP-3110's Semantic Changes, Python 3 generates additional bytecode to delete the target, thus eliminating the reference cycle: def refcount_unchanged(x): try: raise Exception() except: pass gets translated to something like: def refcount_unchanged(x): try: raise Exception() except Exception as e: try: pass finally: e = None del e Resolving the reference cycle in refcount_increases While not necessary (since the garbage collector will do its job), you can do something similar in refcount_increases by manually deleting the variable reference: def refcount_increases(x): e = Exception() try: raise e except: pass finally: # + del e # + Alternatively, you can overwrite the variable reference and let the implicit deletion work: def refcount_increases(x): e = Exception() try: raise e # except: # - except Exception as e: # + pass A little more about the reference cycle The exception e and other local variables are actually referenced directly by e.__traceback__.tb_frame, presumably in C code. This can be observed by printing this: print(sys.getrefcount(b)) print(gc.get_referrers(b)[0]) # <frame at ...> Accessing e.__traceback__.tb_frame.f_locals creates a dictionary cached on the frame (another reference cycle) and thwarts the proactive resolutions above. print(sys.getrefcount(b)) print(gc.get_referrers(b)[0]) # {'x': [], 'e': Exception()} However, this reference cycle will also be handled by the garbage collector. | 4 | 5 |
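The collector's role described in the answer above can be made directly observable. This small script assumes CPython (reference counting plus a cycle collector); the intermediate counts can vary by version, so only the final restoration is asserted:

```python
import gc
import sys

def refcount_increases(x):
    e = Exception()
    try:
        raise e
    except:
        pass

b = []
base = sys.getrefcount(b)
for _ in range(3):
    refcount_increases(b)
leaked = sys.getrefcount(b)    # extra refs held via the frame cycles, if any
gc.collect()                   # breaks exception -> traceback -> frame cycles
restored = sys.getrefcount(b)
print(base, leaked, restored)  # e.g. 2 5 2
assert restored == base
```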
71,291,252 | 2022-2-28 | https://stackoverflow.com/questions/71291252/how-to-pass-multiple-arguments-in-multiprocessing-executor-map-function | I have been watching several videos on the multiprocessing map function. I know that I can send one list as an argument to the function I want to target with multiprocessing, and that will call the same function n times (dependent upon the size of that passed list). What I am struggling with is: what if I want to pass multiple arguments to that function? I basically have a list whose size is n (it can vary, but in the current case it's 209). My function requires 3 arguments:

the index of the list (0, 1, 2 etc.)
another list containing data
a fixed integer value

I could have used the 2nd and 3rd arguments as global variables, but that doesn't work for me because I have to call the map function in a while loop, and in every iteration the values of these two will change. My function returns two values which I need to access in the function from where it was called. This is what I have tried, but it didn't work for me:

def main_fun():
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = executor.map(MyFun, (row, pop[0].data, fitness) for row in range(0, len(pop[0].data)))
        for result in results:
            print(result)

I also tried to use the zip function, but again with no success. | If the second and third arguments to your worker function (the function passed as the first argument to map) are fixed, then you can use functools.partial to have them specified without resorting to global variables. If your worker function is, for example, foo, then:

from concurrent.futures import ProcessPoolExecutor
from functools import partial

def foo(idx: int, lst: list, int_value: int):
    ...
def main():
    with ProcessPoolExecutor() as executor:
        worker = partial(foo, lst=pop[0].data, int_value=fitness)
        executor.map(worker, range(0, len(pop[0].data)))

if __name__ == '__main__':
    main()

So now we only have to pass the function worker to map; it will be called with two fixed arguments and a single iterable argument. If you are executing the map call in a loop, you will, of course, create a new worker function by passing the new arguments to functools.partial. For example:

from concurrent.futures import ProcessPoolExecutor
from functools import partial

def foo(idx: int, lst: list, int_value: int):
    print(idx, lst[idx] * int_value, flush=True)

def main():
    l = [3, 5, 7]
    fitness = 9
    with ProcessPoolExecutor() as executor:
        worker = partial(foo, lst=l, int_value=fitness)
        executor.map(worker, range(0, len(l)))

if __name__ == '__main__':
    main()

Prints:

0 27
1 45
2 63
| 5 | 2
71,295,840 | 2022-2-28 | https://stackoverflow.com/questions/71295840/python-pip-error-legacy-install-failure | I want to install the gensim Python package via pip install gensim, but this error occurs and I have no idea what to do to solve it:

running build_ext
building 'gensim.models.word2vec_inner' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> gensim

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
| If you fail to install a package this way, you can download a prebuilt wheel from another repository instead. Which file you need depends on the version of Python and the system. For example, for Windows 11 (x64) and Python 3.10 you should take this file: gensim-4.1.2-cp310-cp310-win_amd64.whl | 31 | 11
71,294,619 | 2022-2-28 | https://stackoverflow.com/questions/71294619/json-to-markdown-table-formatting | I'm trying to build out a function to convert JSON data into a list, to then be used as the base for building out markdown tables. I have a first prototype:

#!/usr/bin/env python3
import json

data = {
    "statistics": {
        "map": [
            {
                "map_name": "Location1",
                "nan": "loc1",
                "dont": "ignore this",
                "packets": "878607764338"
            },
            {
                "map_name": "Location2",
                "nan": "loc2",
                "dont": "ignore this",
                "packets": "67989088698"
            },
        ],
        "map-reset-time": "Thu Jan 6 05:59:47 2022\n"
    }
}

headers = ['Name', 'NaN', 'Packages']

def jsonToList(data):
    """Adds the desired json fields."""
    # Will be re-written to accept different data fields.
    json_obj = data
    ips = []
    for piece in json_obj['statistics']['map']:
        this_ip = [piece['map_name'], piece['nan'], piece['packets']]
        ips.append(this_ip)
    return ips

def markdownTable(data, headers):
    # Find maximal length of all elements in list
    n = max(len(x) for l in data for x in l)
    # Print the rows
    headerLength = len(headers)
    # expected "| Name| NaN| Packages|"
    for i in range(len(headers)):
        # Takes the max number of characters and subtracts the length of the header word
        hn = n - len(headers[i])
        # Prints | [space based on row above][header word]
        print("|" + " " * hn + f"{headers[i]}", end='')
        # If the last run is met, add the ending pipe
        if i == headerLength-1:
            print("|")  # End pipe for headers
    # expected |--------|--------|--------|
    print("|", end='')  # Start pipe for sep row
    for i in range(len(headers)):
        print("-" * n + "|", end='')
    # seems to be adding an extra line; however, if it's not there,
    # the first data row (Location1) ends up on the same line
    print("\n", end='')
    dataLength = len(data)
    for row in data:
        for x in row:
            hn = n - len(x)
            print("|" + " " * hn + x, end='')
        print("|")

if __name__ == "__main__":
    da = jsonToList(data)
    markdownTable(da, headers)

This code outputs, as expected, a table that can be used as markdown.
|        Name|         NaN|    Packages|
|------------|------------|------------|
|   Location1|        loc1|878607764338|
|   Location2|        loc2| 67989088698|

I was wondering if anyone has any good ideas regarding the placement of the words (centered). Currently I'm utilizing n = max(len(x) for l in data for x in l), then subtracting the length of the current string and adding the padding to the output. This works well for alignment to one side, but if I would like to have the words centered there's an issue. Additionally, general feedback on ways to optimize the code, or ways to go directly from JSON, is much appreciated; this is my first attempt at such a function. | If you are at liberty to use pandas, this is quite straightforward. The markdown feature is readily available. See the example below.

import pandas

df = pandas.DataFrame.from_dict(data['statistics']['map']).rename(columns={'map_name':'Name', 'nan':'NaN', 'packets':'Packages'})
df.drop(['dont'], axis=1, inplace=True)
print(df.to_markdown(index=False, tablefmt='fancy_grid'))

This will provide an output like:

╒═══════════╤═══════╤══════════════╕
│ Name      │ NaN   │ Packages     │
╞═══════════╪═══════╪══════════════╡
│ Location1 │ loc1  │ 878607764338 │
├───────────┼───────┼──────────────┤
│ Location2 │ loc2  │ 67989088698  │
╘═══════════╧═══════╧══════════════╛

You can use the tablefmt argument to apply different styles like psql, pipe etc. | 7 | 9
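On the centering part of the question, which the pandas answer sidesteps: Python's str.center does the padding, so the manual approach can center cells without pandas. This is a minimal sketch using the question's data (the function name is mine); the :---: separator also tells markdown renderers to center the column:

```python
headers = ['Name', 'NaN', 'Packages']
rows = [['Location1', 'loc1', '878607764338'],
        ['Location2', 'loc2', '67989088698']]

def centered_markdown_table(headers, rows):
    # width of each column = widest cell in that column
    widths = [max(len(str(c)) for c in col) for col in zip(headers, *rows)]

    def line(cells):
        return '|' + '|'.join(str(c).center(w) for c, w in zip(cells, widths)) + '|'

    # :---: marks a centered column in markdown
    sep = '|' + '|'.join(':' + '-' * max(w - 2, 1) + ':' for w in widths) + '|'
    return '\n'.join([line(headers), sep] + [line(r) for r in rows])

print(centered_markdown_table(headers, rows))
```

Swapping str.center for str.rjust or str.ljust gives the other alignments with the same column-width logic.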
71,294,521 | 2022-2-28 | https://stackoverflow.com/questions/71294521/tkinter-window-appears-black-upon-running-in-pycharm | The Tkinter background appears black upon running the script, no matter how I set the background colour. I'm using PyCharm CE 2021.3.2 on macOS 12.2.1. Python interpreter = Python 3.8 with 5 packages (as follows): Pillow 9.0.1, future 0.18.2, pip 22.0.3, setuptools 57.0.0, wheel 0.36.2. The window looks like this: Black, blank Tkinter window. I've tried:

import tkinter as tk

window = tk.Tk()
window.title("Test")
window.geometry("600x400")
window.mainloop()

Tried changing it with window.configure(bg="white") as well as window['bg'] = "white" and window['background'] = "white", to no avail. | Thanks to @typedecker. The issue was with Python 3.8 and the Monterey update. Fix: first install Python 3.10, then follow this tutorial: Creating Python 3.10 Virtual Env. Then simply select the newly created virtual env in PyCharm and run. | 6 | 4
71,281,717 | 2022-2-27 | https://stackoverflow.com/questions/71281717/connecting-elasticsearch-to-django-using-django-elasticsearch-dsl-results-in-c | I am trying to call a local ES instance running on Docker. I used the following instructions to set up my ES instance: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-dev-mode I am able to play around with my instance on Kibana at http://0.0.0.0:5601/app/dev_tools#/console. Everything works up to here. Now I am trying to define some sample documents using Django models and index them through the library; I am following the instructions here: https://django-elasticsearch-dsl.readthedocs.io/en/latest/quickstart.html#install-and-configure First, I pip installed it and added django_elasticsearch_dsl to INSTALLED_APPS. Next, I added in settings.py:

ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    },
}

Then I create a sample model and document that look like this:

# models.py
from django.db import models

class Car(models.Model):
    name = models.CharField(max_length=30)
    color = models.CharField(max_length=30)
    description = models.TextField()
    type = models.IntegerField(choices=[
        (1, "Sedan"),
        (2, "Truck"),
        (4, "SUV"),
    ])

# documents.py
from django_elasticsearch_dsl import Document
from django_elasticsearch_dsl.registries import registry
from .models import Car

@registry.register_document
class CarDocument(Document):
    class Index:
        # Name of the Elasticsearch index
        name = 'cars'
        # See Elasticsearch Indices API reference for available settings
        settings = {'number_of_shards': 1, 'number_of_replicas': 0}

    class Django:
        model = Car  # The model associated with this Document

        # The fields of the model you want to be indexed in Elasticsearch
        fields = [
            'name',
            'color',
            'description',
            'type',
        ]

Finally, running python3 manage.py search_index --rebuild results in the following connection error:

raise
ConnectionError("N/A", str(e), e)
elasticsearch.exceptions.ConnectionError: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))

I suspect that there might be an issue with my ELASTICSEARCH_DSL setup, as I have not specified any config for https, but the documentation does not make this clear. How do I resolve this issue? Django version: Django==4.0.1, django-elasticsearch-dsl==7.2.2. Python version: Python 3.9.10. Thanks! | I figured out that it was an issue with my certificate. I needed to add some additional config params to the ELASTICSEARCH_DSL variable. Adding this solves the issue:

from elasticsearch import RequestsHttpConnection

# Elasticsearch configuration in settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200',
        'use_ssl': True,
        'http_auth': ('user', 'password'),
        'ca_certs': '/path/to/cert.crt',
        'connection_class': RequestsHttpConnection
    }
}

See this section of the elastic docs on verifying a certificate. If you haven't set up a certificate to authenticate the connection and just need to get something up and running quickly, you can pass 'verify_certs': False and set 'ca_certs' to None. | 5 | 8
71,289,347 | 2022-2-27 | https://stackoverflow.com/questions/71289347/pytesseract-improving-ocr-accuracy-for-blurred-numbers-on-an-image | Example of numbers. I am using the standard pytesseract image-to-text. I have tried the digits-only option; 90% of the time it is perfect, but above is an example where it goes horribly wrong! This example produced no characters at all. As you can see there are no letters, so the language option is of no use. I did try adding some text to the grabbed image, but it still goes wrong. I increased the contrast using cv2; the text has been blurred upstream of my capture. Any ideas on increasing accuracy?

After many tests using the suggestions below, I found the sharpness filter gave unreliable results. Another tool you can use is:

contrast = cv2.convertScaleAbs(img2, alpha=2.5, beta=-200)

I used this because my black-and-white text ended up as light gray text on a gray background; with convertScaleAbs I was able to increase the contrast to get an almost black-and-white image.

Basic steps for OCR:

Convert to monochrome
Crop image to your target text
Filter image to get black and white
Perform OCR

| Here's a simple approach using OpenCV and Pytesseract OCR. To perform OCR on an image, it's important to preprocess the image. The idea is to obtain a processed image where the text to extract is in black with the background in white. To do this, we can convert to grayscale, then apply a sharpening kernel using cv2.filter2D() to enhance the blurred sections. A general sharpening kernel looks like this:

[[-1,-1,-1],
 [-1, 9,-1],
 [-1,-1,-1]]

Other kernel variations can be found here. Depending on the image, you can adjust the strength of the filter. From here we apply Otsu's threshold to obtain a binary image, then perform text extraction using the --psm 6 configuration option to assume a single uniform block of text. Take a look here for more OCR configuration options.
Here's a visualization of the image processing pipeline:

Input image

Convert to grayscale -> apply sharpening filter

Otsu's threshold

Result from Pytesseract OCR:

124,685

Code:

import cv2
import numpy as np
import pytesseract

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# Load image, grayscale, apply sharpening filter, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
sharpen_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
sharpen = cv2.filter2D(gray, -1, sharpen_kernel)
thresh = cv2.threshold(sharpen, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# OCR
data = pytesseract.image_to_string(thresh, lang='eng', config='--psm 6')
print(data)

cv2.imshow('sharpen', sharpen)
cv2.imshow('thresh', thresh)
cv2.waitKey()
| 4 | 6
71,287,550 | 2022-2-27 | https://stackoverflow.com/questions/71287550/repeatedly-removing-the-maximum-average-subarray | I have an array of positive integers. For example:

[1, 7, 8, 4, 2, 1, 4]

A "reduction operation" finds the array prefix with the highest average, and deletes it. Here, an array prefix means a contiguous subarray whose left end is the start of the array, such as [1] or [1, 7] or [1, 7, 8] above. Ties are broken by taking the longer prefix.

Original array:  [ 1,   7,   8,   4,   2,   1,   4]
Prefix averages: [1.0, 4.0, 5.3, 5.0, 4.4, 3.8, 3.9]
-> Delete [1, 7, 8], with maximum average 5.3
-> New array -> [4, 2, 1, 4]

I will repeat the reduction operation until the array is empty:

[1, 7, 8, 4, 2, 1, 4]
 ^     ^
[4, 2, 1, 4]
 ^
[2, 1, 4]
 ^     ^
[]

Now, actually performing these array modifications isn't necessary; I'm only looking for the list of lengths of prefixes that would be deleted by this process, for example, [3, 1, 3] above. What is an efficient algorithm for computing these prefix lengths?

The naive approach is to recompute all sums and averages from scratch in every iteration for an O(n^2) algorithm-- I've attached Python code for this below. I'm looking for any improvement on this approach-- most preferably, any solution below O(n^2), but an algorithm with the same complexity but better constant factors would also be helpful.

Here are a few of the things I've tried (without success):

Dynamically maintaining prefix sums, for example with a Binary Indexed Tree. While I can easily update prefix sums or find a maximum prefix sum in O(log n) time, I haven't found any data structure which can update the average, as the denominator in the average is changing.

Reusing the previous 'rankings' of prefix averages-- these rankings can change, e.g. in some array, the prefix ending at index 5 may have a larger average than the prefix ending at index 6, but after removing the first 3 elements, now the prefix ending at index 2 may have a smaller average than the one ending at 3.

Looking for patterns in where prefixes end; for example, the rightmost element of any max average prefix is always a local maximum in the array, but it's not clear how much this helps.

This is a working Python implementation of the naive, quadratic method:

import math
from fractions import Fraction
from typing import List, Tuple

def find_array_reductions(nums: List[int]) -> List[int]:
    """Return list of lengths of max average prefix reductions."""

    def max_prefix_avg(arr: List[int]) -> Tuple[float, int]:
        """Return value and length of max average prefix in arr."""
        if len(arr) == 0:
            return (-math.inf, 0)

        best_length = 1
        best_average = Fraction(0, 1)
        running_sum = 0

        for i, x in enumerate(arr, 1):
            running_sum += x
            new_average = Fraction(running_sum, i)
            if new_average >= best_average:
                best_average = new_average
                best_length = i

        return (float(best_average), best_length)

    removed_lengths = []
    total_removed = 0
    while total_removed < len(nums):
        _, new_removal = max_prefix_avg(nums[total_removed:])
        removed_lengths.append(new_removal)
        total_removed += new_removal

    return removed_lengths

Edit: The originally published code had a rare error with large inputs from using Python's math.isclose() with default parameters for floating point comparison, rather than proper fraction comparison. This has been fixed in the current code. An example of the error can be found at this Try it online link, along with a foreword explaining exactly what causes this bug, if you're curious. | This problem has a fun O(n) solution.

If you draw a graph of cumulative sum vs index, then:

The average value in the subarray between any two indexes is the slope of the line between those points on the graph.

The first highest-average-prefix will end at the point that makes the highest angle from 0.
The next highest-average-prefix must then have a smaller average, and it will end at the point that makes the highest angle from the first ending.

Continuing to the end of the array, we find that... These segments of highest average are exactly the segments in the upper convex hull of the cumulative sum graph.

Find these segments using the monotone chain algorithm. Since the points are already sorted, it takes O(n) time.

# Lengths of the segments in the upper convex hull
# of the cumulative sum graph
def upperSumHullLengths(arr):
    if len(arr) < 2:
        if len(arr) < 1:
            return []
        else:
            return [1]

    hull = [(0, 0), (1, arr[0])]
    for x in range(2, len(arr)+1):
        # this has x coordinate x-1
        prevPoint = hull[len(hull) - 1]
        # next point in cumulative sum
        point = (x, prevPoint[1] + arr[x-1])

        # remove points not on the convex hull
        while len(hull) >= 2:
            p0 = hull[len(hull)-2]
            dx0 = prevPoint[0] - p0[0]
            dy0 = prevPoint[1] - p0[1]
            dx1 = x - prevPoint[0]
            dy1 = point[1] - prevPoint[1]
            if dy1*dx0 < dy0*dx1:
                break
            hull.pop()
            prevPoint = p0

        hull.append(point)

    return [hull[i+1][0] - hull[i][0] for i in range(0, len(hull)-1)]

print(upperSumHullLengths([1, 7, 8, 4, 2, 1, 4]))

prints:

[3, 1, 3] | 29 | 34
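Since both the naive quadratic reference from the question and the hull method are short, a randomized cross-check is easy. The condensed re-statement of both below (function names are mine) supports the equivalence claim, including the longer-prefix tie-break:

```python
import random
from fractions import Fraction

def naive_reductions(nums):
    """Question's O(n^2) reference: repeatedly strip the longest
    maximum-average prefix, using exact Fraction arithmetic."""
    out, start = [], 0
    while start < len(nums):
        best_len, best_avg, s = 1, Fraction(-10**18), 0
        for i, x in enumerate(nums[start:], 1):
            s += x
            avg = Fraction(s, i)
            if avg >= best_avg:          # >= makes ties favor the longer prefix
                best_avg, best_len = avg, i
        out.append(best_len)
        start += best_len
    return out

def hull_reductions(nums):
    """Upper convex hull of the cumulative-sum graph (monotone chain);
    segment lengths along the hull are the deleted prefix lengths."""
    hull = [(0, 0)]
    s = 0
    for x, v in enumerate(nums, 1):
        s += v
        while len(hull) >= 2:
            (x0, y0), (x1, y1) = hull[-2], hull[-1]
            # keep hull[-1] only if slopes are strictly decreasing
            if (y1 - y0) * (x - x1) > (s - y1) * (x1 - x0):
                break
            hull.pop()
        hull.append((x, s))
    return [hull[i + 1][0] - hull[i][0] for i in range(len(hull) - 1)]

random.seed(1)
for _ in range(200):
    arr = [random.randrange(1, 10) for _ in range(random.randrange(1, 12))]
    assert naive_reductions(arr) == hull_reductions(arr), arr
print(hull_reductions([1, 7, 8, 4, 2, 1, 4]))  # [3, 1, 3]
```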
71,287,630 | 2022-2-27 | https://stackoverflow.com/questions/71287630/how-do-i-get-tqdm-working-on-pandas-apply | Tqdm documentation shows an example of tqdm working on pandas apply using progress_apply. I adapted the following code from here https://tqdm.github.io/docs/tqdm/ on a process that regularly takes several minutes to perform (func1 is a regex function).

    from tqdm import tqdm
    tqdm.pandas()
    df.progress_apply(lambda x: func1(x.textbody), axis=1)

The resulting progress bar doesn't show any progress. It just jumps from 0 at the start of the loop to 100 when it is finished. I am currently running tqdm version 4.61.2 | Utilizing tqdm with pandas

Generally speaking, people tend to use lambdas when performing operations on a column or row. This can be done in a number of ways. Please note that if you are working in jupyter notebook you should use tqdm_notebook instead of tqdm. Also I'm not sure what your code looks like but if you're simply following the example given in the tqdm docs, and you're only performing 100 iterations, computers are fast and will blow through that before your progress bar has time to update. Perhaps it would be more instructive to use a larger dataset like I provided below.

Example 1:

    from tqdm import tqdm  # version 4.62.2
    import pandas as pd  # version 1.4.1
    import numpy as np

    tqdm.pandas(desc='My bar!')  # lots of cool parameters you can pass here.

    # the below line generates a very large dataset for us to work with.
    df = pd.DataFrame(np.random.randn(100000000, 4), columns=['a', 'b', 'c', 'd'])

    # the below line will square the contents of each element in a column-wise
    # fashion
    df.progress_apply(lambda x: x**2)

Output: (screenshot in the original answer)

Example 2:

    # you could apply a function within the lambda expression for more complex
    # operations. And keeping with the above example...
    tqdm.pandas(desc='My bar!')  # lots of cool parameters you can pass here.

    # the below line generates a very large dataset for us to work with.
    df = pd.DataFrame(np.random.randn(100000000, 4), columns=['a', 'b', 'c', 'd'])

    def function(x):
        return x**2

    df.progress_apply(lambda x: function(x))

| 8 | 20
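A smaller runnable variant, closer to the question's row-wise regex use case (func1 and the textbody column are stand-ins from the question; the data is invented). With only a handful of rows the bar will still jump straight to 100%, which is exactly the dataset-size point made above.

```python
import re

import pandas as pd
from tqdm import tqdm

tqdm.pandas(desc="regex pass")

df = pd.DataFrame({"textbody": ["error: disk full", "all good", "error: timeout"]})

def func1(text):
    # stand-in for the question's regex function
    return bool(re.search(r"\berror\b", text))

# axis=1 applies row-wise, matching the question's x.textbody access
flags = df.progress_apply(lambda x: func1(x.textbody), axis=1)
print(flags.tolist())  # → [True, False, True]
```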
71,232,879 | 2022-2-23 | https://stackoverflow.com/questions/71232879/how-to-speed-up-async-requests-in-python | I want to download/scrape 50 million log records from a site. Instead of downloading 50 million in one go, I was trying to download it in parts like 10 million at a time using the following code but it's only handling 20,000 at a time (more than that throws an error) so it becomes time-consuming to download that much data. Currently, it takes 3-4 mins to download 20,000 records with the speed of

    100%|██████████| 20000/20000 [03:48<00:00, 87.41it/s]

so how to speed it up?

    import asyncio
    import aiohttp
    import time
    import tqdm
    import nest_asyncio

    nest_asyncio.apply()


    async def make_numbers(numbers, _numbers):
        for i in range(numbers, _numbers):
            yield i


    n = 0
    q = 10000000

    async def fetch():
        # example
        url = "https://httpbin.org/anything/log?id="

        async with aiohttp.ClientSession() as session:
            post_tasks = []
            # prepare the coroutines that post
            async for x in make_numbers(n, q):
                post_tasks.append(do_get(session, url, x))
            # now execute them all at once
            responses = [await f for f in tqdm.tqdm(asyncio.as_completed(post_tasks), total=len(post_tasks))]


    async def do_get(session, url, x):
        headers = {
            'Content-Type': "application/x-www-form-urlencoded",
            'Access-Control-Allow-Origin': "*",
            'Accept-Encoding': "gzip, deflate",
            'Accept-Language': "en-US"
        }

        async with session.get(url + str(x), headers=headers) as response:
            data = await response.text()
            print(data)

    s = time.perf_counter()
    try:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(fetch())
    except:
        print("error")

    elapsed = time.perf_counter() - s
    # print(f"{__file__} executed in {elapsed:0.2f} seconds.")

    Traceback (most recent call last):
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 986, in _wrap_create_connection
        return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 1056, in create_connection
        raise exceptions[0]
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 1041, in create_connection
        sock = await self._connect_sock(
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 955, in _connect_sock
        await self.sock_connect(sock, address)
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 702, in sock_connect
        return await self._proactor.connect(sock, address)
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 328, in __wakeup
        future.result()
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\windows_events.py", line 812, in _poll
        value = callback(transferred, key, ov)
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\windows_events.py", line 599, in finish_connect
        ov.getresult()
    OSError: [WinError 121] The semaphore timeout period has expired

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 136, in <module>
        loop.run_until_complete(fetch())
      File "C:\Users\SGM\AppData\Roaming\Python\Python39\site-packages\nest_asyncio.py", line 81, in run_until_complete
        return f.result()
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\futures.py", line 201, in result
        raise self._exception
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 256, in __step
        result = coro.send(None)
      File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 88, in fetch
        response = await f
      File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 37, in _wait_for_one
        return f.result()
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\futures.py", line 201, in result
        raise self._exception
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py", line 258, in __step
        result = coro.throw(exc)
      File "C:\Users\SGM\Desktop\xnet\x3stackoverflow.py", line 125, in do_get
        async with session.get(url + str(x), headers=headers) as response:
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\client.py", line 1138, in __aenter__
        self._resp = await self._coro
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\client.py", line 535, in _request
        conn = await self._connector.connect(
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 542, in connect
        proto = await self._create_connection(req, traces, timeout)
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 907, in _create_connection
        _, proto = await self._create_direct_connection(req, traces, timeout)
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection
        raise last_exc
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection
        transp, proto = await self._wrap_create_connection(
      File "C:\Users\SGM\AppData\Local\Programs\Python\Python39\lib\site-packages\aiohttp\connector.py", line 992, in _wrap_create_connection
        raise client_error(req.connection_key, exc) from exc
    aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host example.com:80 ssl:default [The semaphore timeout period has expired]

| Bottleneck: number of simultaneous connections

First, the bottleneck is the total number of simultaneous connections in the TCP connector. The default for aiohttp.TCPConnector is limit=100.
On most systems (tested on macOS), you should be able to double that by passing a connector with limit=200:

    # async with aiohttp.ClientSession() as session:
    async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(limit=200)) as session:

The time taken should decrease significantly. (On macOS: q = 20_000 decreased 43% from 58 seconds to 33 seconds, and q = 10_000 decreased 42% from 31 to 18 seconds.)

The limit you can configure depends on the number of file descriptors that your machine can open. (On macOS: You can run ulimit -n to check, and ulimit -n 1024 to increase to 1024 for the current terminal session, and then change to limit=1000. Compared to limit=100, q = 20_000 decreased 76% to 14 seconds, and q = 10_000 decreased 71% to 9 seconds.)

Supporting 50 million requests: async generators

Next, the reason why 50 million requests appears to hang is simply because of its sheer number. Just creating 10 million coroutines in post_tasks takes 68-98 seconds (varies greatly on my machine), and then the event loop is further burdened with that many tasks, 99.99% of which are blocked by the TCP connection pool.

We can defer the creation of coroutines using an async generator:

    async def make_async_gen(f, n, q):
        async for x in make_numbers(n, q):
            yield f(x)

We need a counterpart to asyncio.as_completed() to handle async_gen and concurrency:

    from asyncio import ensure_future, events
    from asyncio.queues import Queue

    def as_completed_for_async_gen(fs_async_gen, concurrency):
        done = Queue()
        loop = events.get_event_loop()
        # todo = {ensure_future(f, loop=loop) for f in set(fs)}  # -
        todo = set()                                             # +

        def _on_completion(f):
            todo.remove(f)
            done.put_nowait(f)
            loop.create_task(_add_next())  # +

        async def _wait_for_one():
            f = await done.get()
            return f.result()

        async def _add_next():  # +
            try:
                f = await fs_async_gen.__anext__()
            except StopAsyncIteration:
                return
            f = ensure_future(f, loop=loop)
            f.add_done_callback(_on_completion)
            todo.add(f)

        # for f in todo:                           # -
        #     f.add_done_callback(_on_completion)  # -
        # for _ in range(len(todo)):               # -
        #     yield _wait_for_one()                # -
        for _ in range(concurrency):              # +
            loop.run_until_complete(_add_next())  # +
        while todo:                               # +
            yield _wait_for_one()                 # +

Then, we update fetch():

    from functools import partial

    CONCURRENCY = 200  # +

    n = 0
    q = 50_000_000

    async def fetch():
        # example
        url = "https://httpbin.org/anything/log?id="

        async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(limit=CONCURRENCY)) as session:
            # post_tasks = []                                 # -
            # # prepare the coroutines that post              # -
            # async for x in make_numbers(n, q):              # -
            #     post_tasks.append(do_get(session, url, x))  # -
            # Prepare the coroutines generator                # +
            async_gen = make_async_gen(partial(do_get, session, url), n, q)  # +

            # now execute them all at once  # -
            # responses = [await f for f in tqdm.asyncio.tqdm.as_completed(post_tasks, total=len(post_tasks))]  # -
            # Now execute them with a specified concurrency  # +
            responses = [await f for f in tqdm.tqdm(as_completed_for_async_gen(async_gen, CONCURRENCY), total=q)]  # +

Other limitations

With the above, the program can start processing 50 million requests but:

1. it will still take 8 hours or so with CONCURRENCY = 1000, based on the estimate from tqdm.
2. your program may run out of memory for responses and crash.

For point 2, you should probably do:

    # responses = [await f for f in tqdm.tqdm(as_completed_for_async_gen(async_gen, CONCURRENCY), total=q)]
    for f in tqdm.tqdm(as_completed_for_async_gen(async_gen, CONCURRENCY), total=q):
        response = await f

        # Do something with response, such as writing to a local file
        # ...

An error in the code

do_get() should return data:

    async def do_get(session, url, x):
        headers = {
            'Content-Type': "application/x-www-form-urlencoded",
            'Access-Control-Allow-Origin': "*",
            'Accept-Encoding': "gzip, deflate",
            'Accept-Language': "en-US"
        }

        async with session.get(url + str(x), headers=headers) as response:
            data = await response.text()
            # print(data)  # -
            return data  # +

| 9 | 20
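A simpler way to cap in-flight work than a hand-rolled as_completed counterpart is asyncio.Semaphore. The sketch below is a local simulation, not the answer's code: asyncio.sleep stands in for the real session.get, and the counter names are mine. It launches many tasks at once but lets only CONCURRENCY of them run inside the guarded section at a time.

```python
import asyncio

CONCURRENCY = 5

async def bounded_get(sem, counter, x):
    async with sem:  # at most CONCURRENCY tasks pass this point concurrently
        counter["now"] += 1
        counter["peak"] = max(counter["peak"], counter["now"])
        await asyncio.sleep(0)  # stand-in for: async with session.get(...)
        counter["now"] -= 1
    return x

async def main(n):
    sem = asyncio.Semaphore(CONCURRENCY)
    counter = {"now": 0, "peak": 0}
    results = await asyncio.gather(*(bounded_get(sem, counter, i) for i in range(n)))
    return results, counter["peak"]

results, peak = asyncio.run(main(50))
print(len(results), peak)  # all 50 results; peak never exceeds CONCURRENCY
```

Note that a semaphore only bounds concurrency: unlike the async-generator approach above, gather still materializes every coroutine up front, so for the full 50-million-request problem the deferred creation remains the key idea.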
71,277,957 | 2022-2-26 | https://stackoverflow.com/questions/71277957/how-to-zip-a-file-in-python | I have been trying to make a python script to zip a file with the zipfile module. Although the text file is made into a zip file, it doesn't seem to be compressing it; testtext.txt is 1024KB whilst testtext.zip (the code's creation) is also equal to 1024KB. However, if I compress testtext.txt manually in File Explorer, the resulting zip file is compressed (to 2KB, specifically). How, if possible, can I combat this logical error? Below is the script that I have used to (unsuccessfully) zip a text file.

    from zipfile import ZipFile

    textFile = ZipFile("compressedtextstuff.zip", "w")
    textFile.write("testtext.txt")
    textFile.close()

| Well that's odd. Python's zipfile defaults to the stored compression method, which does not compress! (Why would they do that?) You need to specify a compression method. Use ZIP_DEFLATED, which is the most widely supported.

    import zipfile

    zip = zipfile.ZipFile("stuff.zip", "w", zipfile.ZIP_DEFLATED)
    zip.write("test.txt")
    zip.close()

| 7 | 12
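The stored-vs-deflated difference is easy to verify end to end. This self-contained check (file names are made up) writes a highly repetitive text file, zips it once with the default ZIP_STORED and once with ZIP_DEFLATED, and compares sizes, reproducing the question's 1024KB-vs-2KB observation in miniature:

```python
import os
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
txt_path = os.path.join(tmp, "testtext.txt")
with open(txt_path, "w") as f:
    f.write("some very repetitive log line\n" * 30_000)

stored_path = os.path.join(tmp, "stored.zip")
deflated_path = os.path.join(tmp, "deflated.zip")

# Default compression is ZIP_STORED: archived, but not actually compressed.
with zipfile.ZipFile(stored_path, "w") as zf:
    zf.write(txt_path, arcname="testtext.txt")

# ZIP_DEFLATED actually compresses the entry.
with zipfile.ZipFile(deflated_path, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(txt_path, arcname="testtext.txt")

original = os.path.getsize(txt_path)
stored = os.path.getsize(stored_path)
deflated = os.path.getsize(deflated_path)
print(original, stored, deflated)  # stored ≈ original size, deflated far smaller
```

Using ZipFile as a context manager (as above) also avoids the explicit close() from the question's script.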
71,277,420 | 2022-2-26 | https://stackoverflow.com/questions/71277420/type-annotation-hint-for-index-in-pandas-dataframe-iterrows | I am trying to add type annotations/hints in a Python script for running mypy checks. I have a pandas.DataFrame object, which I iterate like this:

    someTable: pandas.DataFrame = pandas.DataFrame()
    # ...
    # adding some data to someTable
    # ...
    for index, row in someTable.iterrows():
        #reveal_type(index)
        print(type(index))
        print(index + 1)

If I run this script, here's what I get:

    $ python ./some.py
    <class 'int'>
    2
    <class 'int'>
    3

And if I check it with mypy, then it reports errors:

    $ mypy ./some.py
    some.py:32: note: Revealed type is "Union[typing.Hashable, None]"
    some.py:34: error: Unsupported operand types for + ("Hashable" and "int")
    some.py:34: error: Unsupported operand types for + ("None" and "int")
    some.py:34: note: Left operand is of type "Optional[Hashable]"
    Found 2 errors in 1 file (checked 1 source file)

As I understand, mypy sees the index as Union[typing.Hashable, None], which is not int, and so index + 1 looks like an error to it. How and where should I then annotate/hint it to satisfy mypy? I tried this:

    index: int
    for index, row in someTable.iterrows():
        # ...

but that results in:

    $ mypy ./some.py
    some.py:32: error: Incompatible types in assignment (expression has type "Optional[Hashable]", variable has type "int")
    Found 1 error in 1 file (checked 1 source file)

| You could hint index as Optional[int], but then x + 1 won't type check. I'm not sure where Union[typing.Hashable, None] comes from; iterrows itself returns an Iterable[tuple[Hashable, Series]]. But it seems like you can safely assert that if index is assigned a value, then it will not be None.

    index: Optional[int]
    for index, row in someTable.iterrows():
        index = typing.cast(int, index)
        print(index + 1)

(Is the Union supposed to reflect the possibility of the iterable raising StopIteration? That doesn't seem right, as a function that raises an exception doesn't return None; it doesn't return at all.)

| 6 | 6