Dataset columns (type, min .. max of value or string length):
question_id        int64             59.5M .. 79.4M
creation_date      string (length)   8 .. 10
link               string (length)   60 .. 163
question           string (length)   53 .. 28.9k
accepted_answer    string (length)   26 .. 29.3k
question_vote      int64             1 .. 410
answer_vote        int64             -9 .. 482
73,939,866
2022-10-3
https://stackoverflow.com/questions/73939866/initialize-dataclass-instance-with-functions
I'm trying to create a dataclass to store all relevant data in a single object. How can I initialize a dataclass instance where the values are evaluated from functions within the dataclass, which take parameters? This is where I am so far: @dataclass class Person: def Name(self): return f'My name is {self.name[0]} {self.name[1]}.' def Age(self): return f'I am {self.age} years old.' name: field(default_factory=Name(self), init=True) age: field(default_factory=Age(self), init=True) person = Person(('John', 'Smith'), '100') print(person) Current output: Person(name=('John', 'Smith'), age='100') This is the output I'm trying to achieve: Person(name='My name is John Smith', age='I am 100 years old') I was trying to use How to reference `self` in dataclass' fields? for reference on this topic.
First - and this is rather subtle - I note that it does not work to have dataclasses.field() as a type annotation. That is, name: field(...) is invalid. I can assume you mean to do name: str = field(...). Here str is the type annotation for name. But even with that approach, you would run into a TypeError based on how you are passing the default_factory argument - you would need a no-argument callable, though I notice that doesn't seem to help in this use case. My impression is, it is not possible to achieve what you are trying to do with dataclasses.field(...) alone, as I believe the docs indicate default_factory needs to be a zero argument callable. For instance, default_factory=list works as list() provides a no-arg constructor. However, note that the following is not possible: field(default_factory = lambda world: f'hello {world}!') dataclasses will not pass a value for world to the default_factory function, so you will run into an error with such an approach. The good news is there are a few different alternatives or options to consider in your case, which I proceed to outline below. Init-only Variables To work around this, one option could be to use a combination of InitVar with field(init=False): from dataclasses import field, dataclass, InitVar @dataclass class Person: in_name: InitVar[tuple[str, str]] in_age: InitVar[str] name: str = field(init=False) age: str = field(init=False) def __post_init__(self, in_name: tuple[str, str], in_age: str): self.name = f'My name is {in_name[0]} {in_name[1]}.' self.age = f'I am {in_age} years old.' person = Person(('John', 'Smith'), '100') print(person) Prints: Person(name='My name is John Smith.', age='I am 100 years old.') Properties Another usage could be with field-properties in dataclasses. In this case, the values are passed in to the constructor method as indicated (i.e. a tuple and str), and the @setter method for each field-property generates a formatted string, which it stores in a private attribute, for example as self._name. Note that there is undefined behavior when no default values for field properties are passed in the constructor, due to how dataclasses handles (or rather silently ignores) properties currently. To work around that, you can use a metaclass such as one I have outlined in this gist. from dataclasses import field, dataclass @dataclass class Person: name: tuple[str, str] age: str # added to silence any IDE warnings _age: str = field(init=False, repr=False) _name: str = field(init=False, repr=False) @property def name(self): return self._name @name.setter def name(self, name: tuple[str, str]): self._name = f'My name is {name[0]} {name[1]}.' @property def age(self): return self._age @age.setter def age(self, age: str): self._age = f'I am {age} years old.' person = Person(('John', 'Smith'), '100') print(person) person.name = ('Betty', 'Johnson') person.age = 150 print(person) # note that a strange error is returned when no default value is passed for # properties; you can use my gist to work around that. # person = Person() Prints: Person(name='My name is John Smith.', age='I am 100 years old.') Person(name='My name is Betty Johnson.', age='I am 150 years old.') Descriptors One last option I would be remiss to not mention, and one I would likely recommend as being a little bit easier to set up than properties, would be the use of descriptors in Python. 
From what I understand, descriptors are essentially an easier approach as compared to declaring a ton of properties, especially if the purpose or usage of said properties is going to be quite similar. Here is an example of a custom descriptor class, named FormatValue: from typing import Callable, Any class FormatValue: __slots__ = ('fmt', 'private_name', ) def __init__(self, fmt: Callable[[Any], str]): self.fmt = fmt def __set_name__(self, owner, name): self.private_name = '_' + name def __get__(self, obj, objtype=None): value = getattr(obj, self.private_name) return value def __set__(self, obj, value): setattr(obj, self.private_name, self.fmt(value)) It can be used as follows, and works the same as the above example with properties: from dataclasses import dataclass @dataclass class Person: name: 'tuple[str, str] | str' = FormatValue(lambda name: f'My name is {name[0]} {name[1]}.') age: 'str | int' = FormatValue(lambda age: f'I am {age} years old.') person = Person(('John', 'Smith'), '100') print(person) person.name = ('Betty', 'Johnson') person.age = 150 print(person) Prints: Person(name='My name is John Smith.', age='I am 100 years old.') Person(name='My name is Betty Johnson.', age='I am 150 years old.')
5
10
73,948,788
2022-10-4
https://stackoverflow.com/questions/73948788/why-does-the-pydantic-dataclass-cast-a-list-to-a-dict-how-to-prevent-this-behav
I am a bit confused by the behavior of the pydantic dataclass. Why does the dict type accept a list of a dict as valid dict and why is it converted it to a dict of the keys? Am i doing something wrong? Is this some kind of intended behavior and if so is there a way to prevent that behavior? Code Example: from pydantic.dataclasses import dataclass @dataclass class X: y: dict print(X([{'a':'b', 'c':'d'}])) Output: X(y={'a': 'c'})
Hah, this is kind of amusing actually. Bear with me... Why does the dict type accept a list of a dict as valid dict and why is it converted it to a dict of the keys? This can be explained when we take a closer look at the default dict_validator used by Pydantic models. The very first thing it does (to any non-dict) is attempt to coerce the value to a dict. Try this with your specific example: y = [{'a': 'b', 'c': 'd'}] assert dict(y) == {'a': 'c'} Why is that? Well, to initialize a dict, you can pass different kinds of arguments. One option is to pass some Iterable and Each item in the iterable must itself be an iterable with exactly two objects. The first object of each item becomes a key in the new dictionary, and the second object the corresponding value. In your example, you just so happened to have an iterable (specifically a list) and the only item in that iterable is itself an iterable (specifically a dict). And how are dictionaries iterated over by default? Via their keys! Since that dictionary {'a': 'b', 'c': 'd'} has exactly two key-value-pairs, that means when iterated over it produces those two keys, i.e. "a" and "c": d = {'a': 'b', 'c': 'd'} assert tuple(iter(d)) == ('a', 'c') It is this mechanism that allows a dict to be constructed for example from a list of 2-tuples like so: data = [('a', 1), ('b', 2)] assert dict(data) == {'a': 1, 'b': 2} In your case this leads to the result you showed, which at first glance seems strange and unexpected, but actually makes sense, when you think about the logic of dictionary initialization. What is funny is that this only works when the dict in the list has exactly two key-value pairs! Anything more or less will lead to an error. (Try it yourself.) So in short: This behavior is neither special to Pydantic nor dataclasses, but is the result of regular dict initialization. Am i doing something wrong? I would say, yes. The value you are trying to assign to X.y is a list, but you declared it to be a dict. So that is obviously wrong. I get that sometimes data comes from external sources, so it may not be up to you. Is this some kind of intended behavior [...]? That is a good question in the sense that I am curious to know if the Pydantic team is aware of this edge case and the strange result it causes. I would say it is at least understandable that the dictionary validator was implemented the way it was. is there a way to prevent that behavior? Yes. Aside from the obvious solution of just not passing a list there. You could add your own custom validator, configure it with pre=True and have it for example only allow actual straight up dict instances to proceed to further validation. Then you could catch this error immediately. Hope this helps. Thanks for shining a light on this because this would have thrown me off at first, too. I think I'll start digging through the Pydantic issue tracker and PRs and see if this may/should/will be addressed somehow. 
PS Here is very simple implementation of the aforementioned "strict" validator that prevents dict-coercion and instead raises an error for non-dict immediately: from typing import Any from pydantic.class_validators import validator from pydantic.dataclasses import dataclass from pydantic.fields import ModelField, SHAPE_DICT, SHAPE_SINGLETON @dataclass class X: y: dict @validator("*", pre=True) def strict_dict(cls, v: Any, field: ModelField) -> Any: declared_dict_type = ( field.type_ is dict and field.shape is SHAPE_SINGLETON or field.shape is SHAPE_DICT ) if declared_dict_type and not isinstance(v, dict): raise TypeError(f"value must be a `dict`, got {type(v)}") return v if __name__ == '__main__': print(X([{'a': 'b', 'c': 'd'}])) Output: Traceback (most recent call last): File "....py", line 24, in <module> print(X([{'a': 'b', 'c': 'd'}])) File "pydantic/dataclasses.py", line 313, in pydantic.dataclasses._add_pydantic_validation_attributes.new_init File "pydantic/dataclasses.py", line 416, in pydantic.dataclasses._dataclass_validate_values # worries about external callers. pydantic.error_wrappers.ValidationError: 1 validation error for X y value must be a `dict`, got <class 'list'> (type=type_error)
3
4
73,883,756
2022-9-28
https://stackoverflow.com/questions/73883756/pandas-read-csv-delimiter-used-in-a-text
I have a CSV file with ";" as separator made like this: col1;col2;col3;col4 4;hello;world;1;1 4;hi;1;1 4;hi;1;1 4;hi;1;1 Obviously, by using ";" as sep it gives me the error about tokenizing data (since, based on the header, fewer columns are expected). How can I obtain a dataframe like this: col1 col2 col3 col4 4 hello;world 1 1 4 hi 1 1 4 hi 1 1 4 hi 1 1 It could be read even with other packages and other data types (even though I prefer pandas because of the following operations in the code).
You could split off the outer cols until you are left with the remaining col2. This could be done in Pandas as follows: import pandas as pd df_raw = pd.read_csv("input.csv", delimiter=None, header=None, skiprows=1) df_raw[['col1', 'rest']] = df_raw[0].str.split(";", n=1, expand=True) df_raw[['col2', 'col3', 'col4']] = df_raw.rest.str.rsplit(";", n=2, expand=True) df = df_raw[['col1', 'col2', 'col3', 'col4']] print(df) Giving df as: col1 col2 col3 col4 0 4 hello;world 1 1 1 4 hi 1 1 2 4 hi 1 1 3 4 hi 1 1 First read in the CSV file without using any delimiters to get a single column. Use a .str.split() with n=1 to split out just col1 using the ; delimiter from the left. Take the remaining rest and apply .str.rsplit() with n=2 to do a reverse split using the ; delimiter to get the remaining columns. This allows col2 to have any characters. This assumes that only col2 can have additional ; separators and the last two columns are fixed.
3
2
73,953,999
2022-10-4
https://stackoverflow.com/questions/73953999/filter-and-apply-condition-between-multiple-rows
I have the following dataframe: client_id location_id region_name location_name 1 123 Florida location_ABC 6 123 Florida(P) location_ABC 6 845 Miami(P) location_THE 1 386 Boston location_WOP 6 386 Boston(P) location_WOP What I'm trying to do is: If some location_id has more than one client_id, I'll pick the client_id == 1. If some location_id has only one client_id, I'll pick whatever row it is. If we were implementing only one logic, it should be as simple as df[df['client_id'] == 1]. But I cannot figure out how to perform this type of filtering that requires verifying more rows at the same time (figure out how to check if some location_id has more than one client_id, for example). So, in this scenario, the resulting data frame would be: client_id location_id region_name location_name 1 123 Florida location_ABC 6 845 Miami(P) location_THE 1 386 Boston location_WOP Any ideas?
You can use idxmax with a custom groupby on the boolean Series equal to your preferred id, then slice: out = df.loc[df['client_id'].eq(1).groupby(df['location_id'], sort=False).idxmax()] output: client_id location_id region_name location_name 0 1 123 Florida location_ABC 2 6 845 Miami(P) location_THE 3 1 386 Boston location_WOP
3
2
73,951,076
2022-10-4
https://stackoverflow.com/questions/73951076/overlay-of-two-imshow-plots-on-top-of-each-other-with-a-slider-to-change-the-op
The code below works to overlay two imshow plots, and to create a slider which changes the value of the global variable OPACITY. Unfortunately, img1.set_data(y); fig.canvas.draw_idle() doesn't redraw the new opacity. How to make an overlay of two imshow plots with a slider to change the opacity of the 2nd layer? import numpy as np, matplotlib.pyplot as plt, matplotlib.widgets as mpwidgets OPACITY = 0.5 x = np.random.random((100, 50)) y = np.linspace(0, 0.1, 100*50).reshape((100, 50)) # PLOT fig, (ax0, ax1) = plt.subplots(2, 1, gridspec_kw={'height_ratios': [5, 1]}) img0 = ax0.imshow(x, cmap="jet") img1 = ax0.imshow(y, cmap="jet", alpha=OPACITY) def update(value): global OPACITY OPACITY = value print(OPACITY) img1.set_data(y) fig.canvas.draw_idle() slider0 = mpwidgets.Slider(ax=ax1, label='opacity', valmin=0, valmax=1, valinit=OPACITY) slider0.on_changed(update) plt.show()
You can re-set the opacity of the overlaying img1 within the update function by img1.set_alpha(): Code: import numpy as np, matplotlib.pyplot as plt, matplotlib.widgets as mpwidgets OPACITY = 0.5 x = np.random.random((100, 50)) y = np.linspace(0, 0.1, 100*50).reshape((100, 50)) # PLOT fig, (ax0, ax1) = plt.subplots(2, 1, gridspec_kw={'height_ratios': [5, 1]}) img0 = ax0.imshow(x, cmap="jet") img1 = ax0.imshow(y, cmap="jet", alpha=OPACITY) def update(value): img1.set_alpha(value) fig.canvas.draw_idle() slider0 = mpwidgets.Slider(ax=ax1, label='opacity', valmin=0, valmax=1, valinit=OPACITY) slider0.on_changed(update) plt.show()
4
2
73,946,012
2022-10-4
https://stackoverflow.com/questions/73946012/using-logging-basicconfig-with-multiple-handlers
I'm trying to understand the behaviour of the logging module when using basicConfig with multiple handlers. My aim is to have warning level on syslog messages, and debug level to a log file. This is my setup: import logging import logging.handlers import os from datetime import datetime log_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') f_handler = logging.FileHandler(f'{os.path.expanduser("~")}/logs/test_{datetime.now().strftime("%Y%m%d%H%M%S")}.log') s_handler = logging.handlers.SysLogHandler(address='/dev/log') f_handler.setLevel(logging.DEBUG) s_handler.setLevel(logging.WARN) f_handler.setFormatter(log_format) s_handler.setFormatter(log_format) logging.basicConfig(handlers=[f_handler, s_handler]) print (logging.getLogger().getEffectiveLevel()) logging.warning('This is a warning') logging.error('This is an error') logging.info('This is an info') logging.debug('This is debug') This does not work - my log file only contains the warning and error messages, not the debug and info messages as intended. If I change this line to: logging.basicConfig(handlers=[f_handler, s_handler], level=logging.DEBUG) Then this does seem to work, i.e. the file contains all messages and syslog contains only the error and warning message. I don't understand why the original code did not work, given that I set the level on the file handler specifically.
It's because you need to set a level on the logger, as that's checked first, and if there is further processing to be done, then the event is passed to handlers, where it's checked against their levels. Level on logger - overall verbosity of application Level on handler - verbosity for the destination represented by that handler See the diagram below for more information.
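A minimal sketch of the fix the answer implies, reusing the question's handler setup (assuming a Linux host with /dev/log): open the root logger up to DEBUG so each handler's own level does the filtering.

import logging
import logging.handlers

f_handler = logging.FileHandler('test.log')
f_handler.setLevel(logging.DEBUG)            # file gets everything from DEBUG up

s_handler = logging.handlers.SysLogHandler(address='/dev/log')
s_handler.setLevel(logging.WARNING)          # syslog only gets WARNING and above

# The root logger's level is the first gate, so it must be at least as
# verbose as the most verbose handler.
logging.basicConfig(handlers=[f_handler, s_handler], level=logging.DEBUG)

logging.debug('reaches the file only')
logging.warning('reaches both the file and syslog')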
4
5
73,949,658
2022-10-4
https://stackoverflow.com/questions/73949658/how-can-i-sort-multiple-levels-of-a-pandas-multiindex-with-custom-key-functions
Say I have a dataframe with a MultiIndex like this: import pandas as pd import numpy as np my_index = pd.MultiIndex.from_product( [(3,1,2), ("small", "tall", "medium"), ("B", "A", "C")], names=["number", "size", "letter"] ) df_0 = pd.DataFrame(np.random.rand(27, 2), columns=["x", "y"], index=my_index) x y number size letter 3 small B 0.950073 0.599918 A 0.014450 0.472736 C 0.208064 0.778538 tall B 0.979631 0.367234 A 0.832459 0.449875 C 0.761929 0.053144 medium B 0.460764 0.800131 A 0.355746 0.573813 C 0.078924 0.058865 1 small B 0.405209 0.354636 A 0.536242 0.012904 C 0.458910 0.723627 tall B 0.859898 0.442954 A 0.109729 0.885598 C 0.378363 0.220695 medium B 0.652191 0.685181 A 0.503525 0.400973 C 0.454671 0.188798 2 small B 0.407654 0.168782 A 0.393451 0.083023 C 0.073432 0.165209 tall B 0.678226 0.108497 A 0.718348 0.077935 C 0.595500 0.146271 medium B 0.719985 0.422167 A 0.950950 0.532390 C 0.687721 0.920229 Now I want to sort the index by the different levels, first number, then size, and finally letter. If I do this... df_1 = df_0.sort_index(level=["number", "size", "letter"], inplace=False) ... the size of course gets sorted in alphabetical order. x y number size letter 1 medium A 0.503525 0.400973 B 0.652191 0.685181 C 0.454671 0.188798 small A 0.536242 0.012904 B 0.405209 0.354636 C 0.458910 0.723627 tall A 0.109729 0.885598 B 0.859898 0.442954 C 0.378363 0.220695 2 medium A 0.950950 0.532390 B 0.719985 0.422167 C 0.687721 0.920229 small A 0.393451 0.083023 B 0.407654 0.168782 C 0.073432 0.165209 tall A 0.718348 0.077935 B 0.678226 0.108497 C 0.595500 0.146271 3 medium A 0.355746 0.573813 B 0.460764 0.800131 C 0.078924 0.058865 small A 0.014450 0.472736 B 0.950073 0.599918 C 0.208064 0.778538 tall A 0.832459 0.449875 B 0.979631 0.367234 C 0.761929 0.053144 But I want it to be sorted by a custom key. I know I can sort the size level with a custom sort function like this: custom_key = np.vectorize(lambda x: {"small": 0, "medium": 1, "tall": 2}[x]) df_2 = df_0.sort_index(level=1, key=custom_key, inplace=False) x y number size letter 1 small A 0.536242 0.012904 B 0.405209 0.354636 C 0.458910 0.723627 2 small A 0.393451 0.083023 B 0.407654 0.168782 C 0.073432 0.165209 3 small A 0.014450 0.472736 B 0.950073 0.599918 C 0.208064 0.778538 1 medium A 0.503525 0.400973 B 0.652191 0.685181 C 0.454671 0.188798 2 medium A 0.950950 0.532390 B 0.719985 0.422167 C 0.687721 0.920229 3 medium A 0.355746 0.573813 B 0.460764 0.800131 C 0.078924 0.058865 1 tall A 0.109729 0.885598 B 0.859898 0.442954 C 0.378363 0.220695 2 tall A 0.718348 0.077935 B 0.678226 0.108497 C 0.595500 0.146271 3 tall A 0.832459 0.449875 B 0.979631 0.367234 C 0.761929 0.053144 But how can I sort by all levels like for df_1 and use the custom key on the second level? Expected output: x y number size letter 1 small A 0.536242 0.012904 B 0.405209 0.354636 C 0.458910 0.723627 medium A 0.503525 0.400973 B 0.652191 0.685181 C 0.454671 0.188798 tall A 0.109729 0.885598 B 0.859898 0.442954 C 0.378363 0.220695 2 small A 0.393451 0.083023 B 0.407654 0.168782 C 0.073432 0.165209 medium A 0.950950 0.532390 B 0.719985 0.422167 C 0.687721 0.920229 tall A 0.718348 0.077935 B 0.678226 0.108497 C 0.595500 0.146271 3 small A 0.014450 0.472736 B 0.950073 0.599918 C 0.208064 0.778538 medium A 0.355746 0.573813 B 0.460764 0.800131 C 0.078924 0.058865 tall A 0.832459 0.449875 B 0.979631 0.367234 C 0.761929 0.053144 And how should I define the custom key function, so that I also can access the level in sort_index by name like this? 
df_3 = df_0.sort_index(level="size", key=custom_key, inplace=False) Here, it gives a KeyError: 'Level size not found'
The ideal would be to use ordered Categorical data. Else, you can use a custom mapper based on the level name: # define here custom sorters # all other levels will be sorted by default order order = {'size': ['small', 'medium', 'tall']} def sorter(s): if s.name in order: return s.map({k:v for v,k in enumerate(order[s.name])}) return s out = df_0.sort_index(level=["number", "size", "letter"], key=sorter) Output: x y number size letter 1 small A 0.530753 0.687982 B 0.722848 0.974920 C 0.174058 0.695016 medium A 0.397016 0.550404 B 0.426989 0.843007 C 0.929218 0.497728 tall A 0.159078 0.005675 B 0.917871 0.384265 C 0.685435 0.585242 2 small A 0.423254 0.838356 B 0.342158 0.209632 ...
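The answer points to ordered Categorical data as the ideal; a minimal sketch of that route, assuming the df_0 and the "size" level name from the question:

import pandas as pd

# Rebuild the "size" level as an ordered Categorical so a plain sort_index()
# already sorts small < medium < tall.
size_dtype = pd.CategoricalDtype(["small", "medium", "tall"], ordered=True)

df_cat = df_0.copy()
df_cat.index = df_cat.index.set_levels(
    df_cat.index.levels[1].astype(size_dtype), level="size"
)

out = df_cat.sort_index(level=["number", "size", "letter"])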
4
2
73,948,326
2022-10-4
https://stackoverflow.com/questions/73948326/how-to-replace-multiple-substring-at-once-and-not-sequentially
I want to replace multiple substring at once, for instance, in the following statement I want to replace dog with cat and cat with dog: I have a dog but not a cat. However, when I use sequential replace string.replace('dog', 'cat') and then string.replace('cat', 'dog'), I get the following. I have a dog but not a dog. I have a long list of replacements to be done at once so a nested replace with temp will not help.
One way using re.sub: import re string = "I have a dog but not a cat." d = {"dog": "cat", "cat": "dog"} new_string = re.sub("|".join(d), lambda x: d[x.group(0)], string) Output: 'I have a cat but not a dog.'
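A small, hedged refinement of the same idea: escaping the keys and matching longer keys first avoids surprises when replacements contain regex metacharacters or overlap.

import re

string = "I have a dog but not a cat."
d = {"dog": "cat", "cat": "dog"}

# Escape keys and try longer keys first so a short key never shadows a longer one.
pattern = "|".join(sorted(map(re.escape, d), key=len, reverse=True))
new_string = re.sub(pattern, lambda m: d[m.group(0)], string)
print(new_string)  # I have a cat but not a dog.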
3
10
73,945,687
2022-10-4
https://stackoverflow.com/questions/73945687/delete-duplicates-words-from-column
I have a dataframe like this: df3 = pd.DataFrame({'ID': ['Stay home, T5006, T5006, Stay home', 'Go for walk, T5007, T5007, Go for walk'], 'Name': ['Stay home, Go for walk, Stay home', 'Go outside, Go outside, Go outside'] }) ID Name 0 Stay home, T5006, T5006, Stay home Stay home, Go for walk, Stay home 1 Go for walk, T5007, T5007, Go for walk Go outside, Go outside, Go outside I want to delete the duplicates from the ID column. Expected outcome: ID Name 0 Stay home,T5006 Stay home, Go for walk, Stay home 1 Go for walk,T5007 Go outside, Go outside, Go outside Any ideas?
Use the dict.fromkeys trick to remove duplicates of the split values, then join by , in a lambda function: df3['ID'] = df3['ID'].apply(lambda x: ', '.join(dict.fromkeys(x.split(', ')))) Or use a list comprehension: df3['ID'] = [', '.join(dict.fromkeys(x.split(', '))) for x in df3['ID']] print (df3) ID Name 0 Stay home, T5006 Stay home, Go for walk, Stay home 1 Go for walk, T5007 Go outside, Go outside, Go outside Or, if order is not important, use sets: df3['ID'] = df3['ID'].apply(lambda x: ', '.join(set(x.split(', ')))) df3['ID'] = [', '.join(set(x.split(', '))) for x in df3['ID']] print (df3) ID Name 0 Stay home, T5006 Stay home, Go for walk, Stay home 1 T5007, Go for walk Go outside, Go outside, Go outside
3
2
73,929,564
2022-10-2
https://stackoverflow.com/questions/73929564/entrypoints-object-has-no-attribute-get-digital-ocean
I have made a deployment to DigitalOcean. On staging (a Heroku server) the app is working well, but on DigitalOcean it's failing with the error below. What could be the issue? AttributeError at /admin/ 'EntryPoints' object has no attribute 'get' Request Method: GET Request URL: https://xxxx/admin/ Django Version: 3.1 Exception Type: AttributeError Exception Value: 'EntryPoints' object has no attribute 'get' Exception Location: /usr/local/lib/python3.7/site-packages/markdown/util.py, line 85, in <module> Python Executable: /usr/local/bin/python Python Version: 3.7.5 Python Path: ['/opt/app', '/usr/local/bin', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7', '/usr/local/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/site-packages', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf', '/usr/local/lib/python3.7/site-packages/odf'] Server time: Sun, 02 Oct 2022 21:41:00 +0000
This is because importlib-metadata released v5.0.0 yesterday, which removes a deprecated entry-points API. You can set importlib-metadata<5.0 in your setup.py so it does not install the latest version. Or, if you use requirements.txt, you can likewise pin importlib-metadata below version 5.0, e.g. importlib-metadata==4.13.0. For more info: https://importlib-metadata.readthedocs.io/en/latest/history.html
76
116
73,940,751
2022-10-3
https://stackoverflow.com/questions/73940751/why-cant-i-call-a-function-from-another-function-using-exec
say, I have very simple piece of code py = """ a = 1 print (f'before all {a=}') def bar(n): print(f'bye {n=}') def foo(n): print(f'hello {n=}') bar(n) bar ('just works') foo ('sara') """ loc = {} glo = {} bytecode = compile(py, "script", "exec") exec(bytecode, glo, loc) as you can see I defined two functions: bar & foo and called both of them with results: before all a=1 bye n='just works' hello n='sara' Traceback (most recent call last): File "/home/bla-bla/pythonProject/main_deco2.py", line 43, in <module> exec(bytecode, glo, loc) File "script", line 13, in <module> File "script", line 10, in foo NameError: name 'bar' is not defined and this leaves me puzzled, as I don't understand why function foo doesn't see bar when just a second ago I was able to call bar without a problem ?
From the docs: If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition. That's not what you want, so don't provide a locals. glo = {} exec(bytecode, glo) Output, for reference: before all a=1 bye n='just works' hello n='sara' bye n='sara'
4
1
73,939,648
2022-10-3
https://stackoverflow.com/questions/73939648/what-is-the-correct-way-to-run-a-pydrake-simulation-online
In Drake/pydrake, all of the examples I have seen create an instance of Simulator and then advance the simulation by a set duration, e.g. sim.AdvanceTo(10) will step the simulation until it has simulated 10 seconds of activity. How can I instead run a simulation indefinitely, where I don't have a specific duration to advance to, just have the simulation running in its own node and step at a fixed interval forever? My intention is to just have the simulator running, and then send requests to update commands that will be sent to the robot. Is there a straightforward way to achieve this that won't break the simulation? I figured there would be a public Step function on Simulator, but have only encountered AdvanceTo. My first thought was to try to take the context from a call to AdvanceTo over a short interval, e.g. AdvanceTo(0.01) and then overwrite the context for the next call so that it's updating from the new context instead of the original, but I'm not sure it will work. Maybe there's a more official way to achieve this scheme?
You can do simulator.AdvanceTo(std::numeric_limits<double>::infinity()); to keep the simulation running indefinitely. https://github.com/RobotLocomotion/drake/tree/master/examples/kuka_iiwa_arm may be helpful as an example.
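The snippet in the answer is C++; in pydrake the equivalent call would presumably be the sketch below, assuming simulator is an already-configured pydrake Simulator.

import math

# Keep stepping forever; commands can still be injected through the systems
# wired into the diagram (e.g. input ports or an LCM/ROS bridge).
simulator.AdvanceTo(math.inf)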
3
2
73,938,345
2022-10-3
https://stackoverflow.com/questions/73938345/django-rest-framework-integer-field-will-not-accept-empty-response
The models.py IntegerField is set-up: class DMCAModel(models.Model): tick = models.CharField(max_length=20) percent_calc = models.FloatField() ratio = models.IntegerField(blank=True, null=True, default=None) #The problem is here created_at = models.DateTimeField(auto_now_add=True) owner = models.ForeignKey(User, related_name='dmca', on_delete=models.CASCADE, null=True) serilaizers.py class DMCASerializer(serializers.ModelSerializer): class Meta: model = DMCAModel fields = '__all__' # fields = ['id', 'tick', 'percent_calc', 'ratio'] # I have tried removing fields, if I remove 'ratio' then even if I send an input it is ignored. api.py class DmcaViewSet(viewsets.ModelViewSet): # queryset = DMCAModel.objects.all() permission_classes = [ permissions.IsAuthenticated ] serializer_class = DMCASerializer def get_queryset(self): return self.request.user.dmca.all() def perform_create(self, serializer): # print(self.request.user) serializer.save(owner=self.request.user) When I to make the following POST request from postman: Authorization: Token xxxxxx Body JSON: { "tick":"XYZ", "percent_calc":"3", "ratio":"" } I get the error: { "ratio": [ "A valid integer is required." ] } I think I have set the properties of the Models field correctly if I understand them correctly: null=True database will set empty to NULL, blank=True form input will be accepted, default=None incase nothing is entered. I have tried to use the properties one by one and see what happens. But can't work out the error.
Inside your serializer, you can use a CharField for the ratio field if you want to send an empty string and then convert to Int in the validation method. class DMCASerializer(serializers.ModelSerializer): ratio = serializers.CharField(required=False, allow_null=True, allow_blank=True) class Meta: model = DMCAModel fields = '__all__' def validate_ratio(self, value): if not value: return None try: return int(value) except ValueError: raise serializers.ValidationError('Valid integer is required') Also, you need to change serializer_class inside DmcaViewSet from: serializer_class = DcaSerializer to serializer_class = DMCASerializer (Screenshots omitted: one showing the response when sending an empty string, one when sending an integer.)
4
1
73,936,433
2022-10-3
https://stackoverflow.com/questions/73936433/python-id-of-function-of-instance-is-inconsistent
Please consider the following class A(object): def foo(self): pass a = A() # accessing a's foo seems consistent init1 = a.foo init2 = a.foo assert init1 == init2 assert id(a.foo) == id(a.foo) # Or is it? foos= [a.foo for i in range(10)] ids = [id(foo) for foo in foos] for i, id_ in enumerate(ids): for j, id__ in enumerate(ids): if i != j: assert id_ != id__ It appears id(a.foo) can equal to id(a.foo) and can not equal to id(a.foo), but I can't understand when it is the same and when it isn't. Please explain what's going on here.
The transformation of a function to an instance method happens every time the instance method attribute is accessed, so the id should be different every time, at least in the sense that a new object has been created. (see Python Data Model for details about method transformation and descriptors) The problem is doing this: assert id(a.foo) == id(a.foo) There are times the Python garbage collector can work so fast that even within a single expression, two different objects can have the same id, because the object has already been reclaimed once id() is done with it. If you do this: assert id(init1) == id(init2) you'll see that they in fact have different ids. Update: to address the question about why init1 == init2 is True: init1 and init2 are method wrapper objects that refer to the same function in the same class, so the method wrapper's __eq__() considers them equal.
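A short illustration of the point above: when both bound-method objects are kept alive at the same time, their ids differ, even though they compare equal.

class A:
    def foo(self):
        pass

a = A()

m1 = a.foo   # each attribute access builds a fresh bound-method object
m2 = a.foo

print(m1 is m2)           # False: two distinct wrapper objects
print(id(m1) == id(m2))   # False while both are still referenced
print(m1 == m2)           # True: same underlying function, same instance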
3
6
73,929,539
2022-10-2
https://stackoverflow.com/questions/73929539/plot-circles-and-scale-them-up-so-the-text-inside-doesnt-go-out-of-the-circle-b
I have some data where i have languages and relative unit size. I want to produce a bubble plot and then export it to PGF. I got most of my code from this answer Making a non-overlapping bubble chart in Matplotlib (circle packing) but I am having the problem that my text exits the circle boundary: How can I either, increase the scale of everything (much easier I assume), or make sure that the bubble size is always greater than the text inside (and the bubbles are still proportional to each other according to the data series). I assume this is much more difficult to do but I don't really need that. Relevant code: #!/usr/bin/env python3 import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as mcolors # create 10 circles with different radii r = np.random.randint(5,15, size=10) mapping = [("English", 25), ("French", 13), ("Spanish", 32), ("Thai", 10), ("Vientamese", 13), ("Chinese", 20), ("Jamaican", 8), ("Scottish", 3), ("Irish", 12), ("American", 5), ("Romanian", 3), ("Dutch", 2)] class C(): def __init__(self,r): self.colors = list(mcolors.XKCD_COLORS) self.N = len(r) self.labels = [item[0] for item in r] self.x = np.ones((self.N,3)) self.x[:,2] = [item[1] for item in r] maxstep = 2*self.x[:,2].max() length = np.ceil(np.sqrt(self.N)) grid = np.arange(0,length*maxstep,maxstep) gx,gy = np.meshgrid(grid,grid) self.x[:,0] = gx.flatten()[:self.N] self.x[:,1] = gy.flatten()[:self.N] self.x[:,:2] = self.x[:,:2] - np.mean(self.x[:,:2], axis=0) self.step = self.x[:,2].min() self.p = lambda x,y: np.sum((x**2+y**2)**2) self.E = self.energy() self.iter = 1. def minimize(self): while self.iter < 1000*self.N: for i in range(self.N): rand = np.random.randn(2)*self.step/self.iter self.x[i,:2] += rand e = self.energy() if (e < self.E and self.isvalid(i)): self.E = e self.iter = 1. else: self.x[i,:2] -= rand self.iter += 1. def energy(self): return self.p(self.x[:,0], self.x[:,1]) def distance(self,x1,x2): return np.sqrt((x1[0]-x2[0])**2+(x1[1]-x2[1])**2)-x1[2]-x2[2] def isvalid(self, i): for j in range(self.N): if i!=j: if self.distance(self.x[i,:], self.x[j,:]) < 0: return False return True def scale(self, size): """Scales up the plot""" self.x = self.x*size def plot(self, ax): for i in range(self.N): circ = plt.Circle(self.x[i,:2],self.x[i,2], color=mcolors.XKCD_COLORS[self.colors[i]]) ax.add_patch(circ) ax.text(self.x[i][0],self.x[i][1], self.labels[i], horizontalalignment='center', size='medium', color='black', weight='semibold') c = C(mapping) fig, ax = plt.subplots(subplot_kw=dict(aspect="equal")) ax.axis("off") c.minimize() c.plot(ax) ax.relim() ax.autoscale_view() plt.show()
I think both approaches that you outline are largely equivalent. In both cases, you have to determine the sizes of your text boxes in relation to the sizes of the circles. Getting precise bounding boxes for matplotlib text objects is tricky business, as rendering text objects is done by the backend, not matplotlib itself. So you have to render the text object, get its bounding box, compute the ratio between current and desired bounds, remove the text object, and finally re-render the text rescaled by the previously computed ratio. And since the bounding box computation and hence the rescaling is wildly inaccurate for very small and very large text objects, you actually have to repeat the process several times (below I am doing it twice, which is the minimum). W.r.t. the placement of the circles, I have also taken the liberty of substituting your random walk in an energy landscape with a proper minimization. It's faster, and I think the results are much better. #!/usr/bin/env python3 import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as mcolors from scipy.optimize import minimize, NonlinearConstraint from scipy.spatial.distance import pdist, squareform def _get_fontsize(size, label, ax, *args, **kwargs): """Given a circle, precompute the fontsize for a text object such that it fits the circle snuggly. Parameters ---------- size : float The radius of the circle. label : str The string. ax : matplotlib.axis object The matplotlib axis. *args, **kwargs Passed to ax.text(). Returns ------- fontsize : float The estimated fontsize. """ default_fontsize = kwargs.setdefault('size', plt.rcParams['font.size']) width, height = _get_text_object_dimensions(ax, label, *args, **kwargs) initial_estimate = size / (np.sqrt(width**2 + height**2) / 2) * default_fontsize kwargs['size'] = initial_estimate # Repeat process as bbox estimates are bad for very small and very large bboxes. width, height = _get_text_object_dimensions(ax, label, *args, **kwargs) return size / (np.sqrt(width**2 + height**2) / 2) * initial_estimate def _get_text_object_dimensions(ax, string, *args, **kwargs): """Precompute the dimensions of a text object on a given axis in data coordinates. Parameters ---------- ax : matplotlib.axis object The matplotlib axis. string : str The string. *args, **kwargs Passed to ax.text(). Returns ------- width, height : float The dimensions of the text box in data units. """ text_object = ax.text(0., 0., string, *args, **kwargs) renderer = _find_renderer(text_object.get_figure()) bbox_in_display_coordinates = text_object.get_window_extent(renderer) bbox_in_data_coordinates = bbox_in_display_coordinates.transformed(ax.transData.inverted()) w, h = bbox_in_data_coordinates.width, bbox_in_data_coordinates.height text_object.remove() return w, h def _find_renderer(fig): """ Return the renderer for a given matplotlib figure. Notes ----- Adapted from https://stackoverflow.com/a/22689498/2912349 """ if hasattr(fig.canvas, "get_renderer"): # Some backends, such as TkAgg, have the get_renderer method, which # makes this easy. renderer = fig.canvas.get_renderer() else: # Other backends do not have the get_renderer method, so we have a work # around to find the renderer. Print the figure to a temporary file # object, and then grab the renderer that was used. # (I stole this trick from the matplotlib backend_bases.py # print_figure() method.) 
import io fig.canvas.print_pdf(io.BytesIO()) renderer = fig._cachedRenderer return(renderer) class BubbleChart: def __init__(self, sizes, colors, labels, ax=None, **font_kwargs): # TODO: input sanitation self.sizes = np.array(sizes) self.labels = labels self.colors = colors self.ax = ax if ax else plt.gca() self.positions = self._initialize_positions(self.sizes) self.positions = self._optimize_positions(self.positions, self.sizes) self._plot_bubbles(self.positions, self.sizes, self.colors, self.ax) # NB: axis limits have to be finalized before computing fontsizes self._rescale_axis(self.ax) self._plot_labels(self.positions, self.sizes, self.labels, self.ax, **font_kwargs) def _initialize_positions(self, sizes): # TODO: try different strategies; set initial positions to lie # - on a circle # - on concentric shells, larger bubbles on the outside return np.random.rand(len(sizes), 2) * np.min(sizes) def _optimize_positions(self, positions, sizes): # Adapted from: https://stackoverflow.com/a/73353731/2912349 def cost_function(new_positions, old_positions): return np.sum((new_positions.reshape((-1, 2)) - old_positions)**2) def constraint_function(x): x = np.reshape(x, (-1, 2)) return pdist(x) lower_bounds = sizes[np.newaxis, :] + sizes[:, np.newaxis] lower_bounds -= np.diag(np.diag(lower_bounds)) # squareform requires zeros on diagonal lower_bounds = squareform(lower_bounds) nonlinear_constraint = NonlinearConstraint(constraint_function, lower_bounds, np.inf, jac='2-point') result = minimize(lambda x: cost_function(x, positions), positions.flatten(), method='SLSQP', jac="2-point", constraints=[nonlinear_constraint]) return result.x.reshape((-1, 2)) def _plot_bubbles(self, positions, sizes, colors, ax): for (x, y), radius, color in zip(positions, sizes, colors): ax.add_patch(plt.Circle((x, y), radius, color=color)) def _rescale_axis(self, ax): ax.relim() ax.autoscale_view() ax.get_figure().canvas.draw() def _plot_labels(self, positions, sizes, labels, ax, **font_kwargs): font_kwargs.setdefault('horizontalalignment', 'center') font_kwargs.setdefault('verticalalignment', 'center') for (x, y), label, size in zip(positions, labels, sizes): fontsize = _get_fontsize(size, label, ax, **font_kwargs) ax.text(x, y, label, size=fontsize, **font_kwargs) if __name__ == '__main__': mapping = [("English", 25), ("French", 13), ("Spanish", 32), ("Thai", 10), ("Vietnamese", 13), ("Chinese", 20), ("Jamaican", 8), ("Scottish", 3), ("Irish", 12), ("American", 5), ("Romanian", 3), ("Dutch", 2)] labels = [item[0] for item in mapping] sizes = [item[1] for item in mapping] colors = list(mcolors.XKCD_COLORS) fig, ax = plt.subplots(figsize=(10, 10), subplot_kw=dict(aspect="equal")) bc = BubbleChart(sizes, colors, labels, ax=ax) ax.axis("off") plt.show()
3
3
73,895,789
2022-9-29
https://stackoverflow.com/questions/73895789/plotly-volume-not-rendering-random-distribution-of-points
I have 3D vertices from a third-party data source. The plotly Volume object expects all the coordinates as 1D lists. The examples on their website use the mgrid function to populate the 3D space into the flatten function to get the 1D lists of each axis. https://plotly.com/python/3d-volume-plots/ I don't understand why my approach produces an empty plot. coords is my list of vertices in the shape of (N, 3). See the following code snippet that draws random coordinates, sorts them, but results in an empty render. X = np.random.uniform(0, 1, 30000) Y = np.random.uniform(0, 1, 30000) Z = np.random.uniform(0, 1, 30000) coords = np.dstack((X.flatten(), Y.flatten(), Z.flatten()))[0] sort_idx = np.lexsort((coords[:, 0], coords[:, 1], coords[:, 2])) coords = coords[sort_idx] X=coords[:, 0] Y=coords[:, 1] Z=coords[:, 2] V = np.sin(X) * np.sin(Y) + Z fig = go.Figure(data=go.Volume( x=X, y=Y, z=Z, value=V, isomin=np.min(Z), isomax=np.max(Z), opacity=0.1, # needs to be small to see through all surfaces surface_count=20, # needs to be a large number for good volume rendering colorscale='Spectral', reversescale=True )) fig.show() Update: It seems like plotly expects the coordinates to be sorted. X, Y, Z = np.mgrid[-50:50:40j, -50:50:40j, -8:8:10j] coords = np.dstack((X.flatten(), Y.flatten(), Z.flatten()))[0] np.random.shuffle(coords) Shuffling the list like this and plugging coords into the code above produces an empty Volumn render. I now tried to sort my data points, but I still get an empty render. How can I share my dataset? npfile, but where should I host it? sort_idx = np.lexsort((coords[:, 0], coords[:, 1], coords[:, 2])) coords = coords[sort_idx] Update 2: Using a uniform random distribution to generate the coordinates results in a vertex list that seems to be not processable by plotly even after sorting. X = np.random.uniform(0, 1, 30000) Y = np.random.uniform(0, 1, 30000) Z = np.random.uniform(0, 1, 30000) coords = np.dstack((X.flatten(), Y.flatten(), Z.flatten()))[0]
I have not read the documentation closely enough. It says: "Draws volume trace between iso-min and iso-max values with coordinates given by four 1-dimensional arrays containing the value, x, y and z of every vertex of a uniform or non-uniform 3-D grid." The input has to be in the form of an orthogonal grid; thus a cloud of random points is not processable by Volume. Non-uniformity means in this case that along one axis (e.g., z) the "layers" don't have to be evenly spaced. Fortunately my data is almost in a grid shape, which is fixable by interpolation.
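A hedged sketch of the interpolation fix mentioned at the end: resample the scattered points onto a regular grid with scipy's griddata before handing them to go.Volume (the grid resolution and interpolation method here are arbitrary choices, not from the original post).

import numpy as np
import plotly.graph_objects as go
from scipy.interpolate import griddata

# Scattered sample points and values, as in the question.
pts = np.random.uniform(0, 1, (30000, 3))
vals = np.sin(pts[:, 0]) * np.sin(pts[:, 1]) + pts[:, 2]

# Resample onto an orthogonal grid, which is what Volume expects.
xi, yi, zi = np.mgrid[0:1:30j, 0:1:30j, 0:1:30j]
vi = griddata(pts, vals, (xi, yi, zi), method='linear')

fig = go.Figure(go.Volume(
    x=xi.flatten(), y=yi.flatten(), z=zi.flatten(), value=vi.flatten(),
    isomin=np.nanmin(vi), isomax=np.nanmax(vi),
    opacity=0.1, surface_count=20,
))
fig.show()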
3
2
73,935,184
2022-10-3
https://stackoverflow.com/questions/73935184/generate-all-possible-sequences-of-n-elements-with-sequential-rules
I have a function get_appendable_values(sequence) that takes a sequence (even empty) and returns a list of all the values appendable (as a last element) to that sequence. I need to generate all the possible sequences of 4 elements, with respect to the rules defined in this function and starting with an empty sequence. Example : Let's say the implementation of get_appendable_values is : def get_appendable_values(sequence): '''Dummy rules''' if len(sequence) == 2: return [4, 12] if sequence[-1] == 4: return [7] return [0, 9] Expected output : [[0, 0, 4, 7], [0, 0, 12, 0], [0, 0, 12, 9], [0, 9, 4, 7], [0, 9, 12, 0], [0, 9, 12, 9], [9, 0, 4, 7], [9, 0, 12, 0], [9, 0, 12, 9], [9, 9, 4, 7], [9, 9, 12, 0], [9, 9, 12, 9]] I have the feeling that recursion is the key, but I could not figure it out.
Yes, recursion is the key. To generate a sequence of size 4, you first generate all sequences of size 3, and add all possible endings to them. Likewise, to generate a sequence of size 3, you need all sequences of size 2... and so forth down to size 0. def get_appendable_values(sequence): '''Dummy rules''' if len(sequence) == 2: return [4, 12] #need a len check here to avoid IndexError when `sequence` is empty if len(sequence) > 0 and sequence[-1] == 4: return [7] return [0, 9] def generate_sequences(size): if size == 0: yield [] else: for left_part in generate_sequences(size-1): for right_part in get_appendable_values(left_part): yield left_part + [right_part] for seq in generate_sequences(4): print(seq) Result: [0, 0, 4, 7] [0, 0, 12, 0] [0, 0, 12, 9] [0, 9, 4, 7] [0, 9, 12, 0] [0, 9, 12, 9] [9, 0, 4, 7] [9, 0, 12, 0] [9, 0, 12, 9] [9, 9, 4, 7] [9, 9, 12, 0] [9, 9, 12, 9]
3
6
73,934,683
2022-10-3
https://stackoverflow.com/questions/73934683/remove-column-name-suffix-from-dataframe-python
I have the following pandas DataFrame in python: attr_x header example_x other 3232 322 abv ideo 342 123213 ffee iie 232 873213 ffue iie 333 4534 ffpo iieu I want to remove the suffixes '_x' from all columns containing it. The original DataFrame is much longer. Example result: attr header example other 3232 322 abv ideo 342 123213 ffee iie 232 873213 ffue iie 333 4534 ffpo iieu
Use str.removesuffix: df.columns = df.columns.str.removesuffix("_x") Or replace: df.columns = df.columns.str.replace(r'_x$', '')
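If you prefer not to reassign df.columns wholesale, a rename with a callable does the same thing; a tiny sketch with hypothetical column names mirroring the question (str.removesuffix needs Python 3.9+):

import pandas as pd

df = pd.DataFrame(columns=["attr_x", "header", "example_x", "other"])

# removesuffix only strips "_x" when it is actually a suffix.
df = df.rename(columns=lambda c: c.removesuffix("_x"))
print(list(df.columns))  # ['attr', 'header', 'example', 'other']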
3
3
73,932,808
2022-10-3
https://stackoverflow.com/questions/73932808/load-gitlab-ci-yml-with-pyyaml-fails-with-could-not-determine-constructor
Loading .gitlab-ci.yml fails when I try to load it with yaml.load. yaml.constructor.ConstructorError: could not determine a constructor for the tag '!reference' in "python.yml", line 9, column 7 That's the yaml I try to load. .python: before_script: - pip install -r requirements.txt script: - echo "Hello Python!" test: before_script: - !reference [.python, before_script] script: - pytest Right now I'm not interested in those references. However, I'm modifying some parts of the yaml and then write it back to the filesystem. So, I don't want to strip these references. Are there any GitLab libraries to load gitlab-ci.yml? I could not find any. Is there a way to treat !reference [.python, before_script] as String and keep it as is?
Upgrade to ruamel.yaml>=0.15.x yaml_str = """ .python: before_script: - pip install -r requirements.txt script: - echo "Hello Python!" test: before_script: - !reference [.python, before_script] script: - pytest """ from ruamel.yaml import YAML yaml = YAML() yaml.preserve_quotes = True data = yaml.load(yaml_str)
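Since the question mentions writing the modified YAML back to disk, a hedged sketch of the full round trip; ruamel.yaml's round-trip mode should preserve the !reference tag as loaded.

from ruamel.yaml import YAML

yaml = YAML()                 # round-trip mode by default
yaml.preserve_quotes = True

with open(".gitlab-ci.yml") as f:
    data = yaml.load(f)

# ... modify `data` here, e.g. data["test"]["script"].append("pytest -v") ...

with open(".gitlab-ci.yml", "w") as f:
    yaml.dump(data, f)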
3
1
73,931,204
2022-10-3
https://stackoverflow.com/questions/73931204/new-column-based-on-interaction-of-two-related-past-rows-in-pandas-dataframe
I have a dataframe of race results of MMA fighters sorted in descending order of date that looks like Race_ID Fighter_ID Date Result 1 1 2022-05-17 1 1 2 2022-05-17 0 2 1 2022-04-17 0 2 3 2022-04-17 1 3 2 2022-03-11 1 3 1 2022-03-11 0 4 2 2022-02-11 1 4 4 2022-02-11 0 5 3 2022-02-08 1 5 1 2022-02-08 0 6 2 2022-01-11 1 6 4 2022-01-11 0 7 1 2022-01-01 1 7 2 2022-01-01 0 where 1 means win and 0 means lose in that race. I want to add a new column called history that equals 1 if the fighter has actually fought with the opponent in the current race and won the recent past race, and 0 otherwise. Hence the desired outcome looks like Race_ID Fighter_ID Date Result history 1 1 2022-05-17 1 0 #Since fighter1 fought with fighter2 in race 3 and lost 1 2 2022-05-17 0 1 #Since fighter2 fought with fighter1 in race 3 and won 2 1 2022-04-17 0 0 #Since fighter1 fought with fighter3 in race 5 and lost 2 3 2022-04-17 1 1 #Since fighter3 fought with fighter1 in race 5 and won 3 2 2022-03-11 1 0 #Since fighter2 fought with fighter1 in race 7 and lost 3 1 2022-03-11 0 1 #Since fighter1 fought with fighter2 in race 7 and won 4 2 2022-02-11 1 1 4 4 2022-02-11 0 0 5 3 2022-02-08 1 0 #Since there they have not fought before 5 1 2022-02-08 0 0 #Since there they have not fought before 6 2 2022-01-11 1 0 #Since there they have not fought before 6 4 2022-01-11 0 0 #Since there they have not fought before 7 1 2022-01-01 1 0 #Since there they have not fought before 7 2 2022-01-01 0 0 #Since there they have not fought before
A simpler solution is to create a helper Series with frozenset and use it for DataFrameGroupBy.shift: g = df['Race_ID'].map(df.groupby('Race_ID')['Fighter_ID'].agg(frozenset)) df['History'] = df.groupby(['Fighter_ID', g])['Result'].shift(-1, fill_value=0) print (df) Race_ID Fighter_ID Date Result History 0 1 1 2022-05-17 1 0 1 1 2 2022-05-17 0 1 2 2 1 2022-04-17 0 0 3 2 3 2022-04-17 1 1 4 3 2 2022-03-11 1 0 5 3 1 2022-03-11 0 1 6 4 2 2022-02-11 1 1 7 4 4 2022-02-11 0 0 8 5 3 2022-02-08 1 0 9 5 1 2022-02-08 0 0 10 6 2 2022-01-11 1 0 11 6 4 2022-01-11 0 0 12 7 1 2022-01-01 1 0 13 7 2 2022-01-01 0 0 Original solution: the idea is to work with pairs from columns Fighter_ID and Result, so first sort by 3 columns for the same ordering of Fighter_ID, create the History column from tuples that are shifted per group by column Fighter_ID, and then convert the tuples back to rows and append the History column to the original DataFrame (because of the different ordering of Fighter_ID in the original DataFrame): df1 = (df.sort_values(['Race_ID','Date','Fighter_ID']) .groupby('Race_ID')[['Fighter_ID','Result']] .agg(tuple) .assign(History = lambda x: x.groupby('Fighter_ID')['Result'].shift(-1)) .dropna(subset=['History']) .reset_index() .drop('Result', axis=1) .apply(pd.Series.explode)) df = df.merge(df1, on=['Race_ID','Fighter_ID'], how='left').fillna({'History':0}) print (df) Race_ID Fighter_ID Date Result History 0 1 1 2022-05-17 1 0 1 1 2 2022-05-17 0 1 2 2 1 2022-04-17 0 0 3 2 3 2022-04-17 1 1 4 3 2 2022-03-11 1 0 5 3 1 2022-03-11 0 1 6 4 2 2022-02-11 1 1 7 4 4 2022-02-11 0 0 8 5 3 2022-02-08 1 0 9 5 1 2022-02-08 0 0 10 6 2 2022-01-11 1 0 11 6 4 2022-01-11 0 0 12 7 1 2022-01-01 1 0 13 7 2 2022-01-01 0 0
3
4
73,924,768
2022-10-2
https://stackoverflow.com/questions/73924768/attributeerror-module-flax-has-no-attribute-nn
I'm trying to run RegNeRF, which requires flax. On installing the latest version of flax==0.6.0, I got an error stating flax has no attribute optim. This answer suggested to downgrade flax to 0.5.1. On doing that, now I'm getting the error AttributeError: module 'flax' has no attribute 'nn' I could not find any solutions on the web for this error. Any help is appreciated. I'm using ubuntu 20.04
The flax.optim module has been moved to optax as of flax version 0.6.0; see Upgrading my Codebase to Optax for information on how to migrate your code. If you're using external code that imports flax.optim and can't update these references, you'll have to install flax version 0.5.3 or older. Regarding flax.nn: this module was replaced by flax.linen in flax version 0.4.0. See Upgrading my Codebase to Linen for information on this migration. If you're using external code that imports flax.nn and can't update these references, you'll have to install flax version 0.3.6 or older.
6
4
73,923,051
2022-10-2
https://stackoverflow.com/questions/73923051/cannot-convert-base64-string-into-image
Can someone help me turn this base64 data into image? I don't know if it's because the data was not decoded properly or anything else. Here is how I decoded the data: import base64 c_data = { the data in the link (string type) } c_decoded = base64.b64decode(c_data) But it gave the error Incorrect Padding so I followed some tutorials and tried different ways to decode the data. c_decoded = base64.b64decode(c_data + '=' * (-len(c_data) % 4)) c_decoded = base64.b64decode(c_data + '=' * ((4 - len(c_data) % 4) % 4) Both ways decoded the data without giving the error Incorrect Padding but now I can't turn the decoded data into image. I have tried creating an empty png then write the decoded data into it: from PIL import Image with open('c.png', 'wb') as f: f.write(c_decoded) image = Image.open('c.png') image.show() It didn't work and gave the error: cannot identify image file 'c.png' I have tried using BytesIO: from PIL import Image import io from io import BytesIO image = Image.open(io.BytesIO(c_decoded)) image.show() Now it gave the error: cannot identify image file <_io.BytesIO object at 0x0000024082B20270> Please help me.
Not sure if you definitely need a Python solution, or you just want help decoding your image like the first line of your question says, and you thought Python might be needed. If the latter, you can just use ImageMagick in the Terminal: cat YOURFILE.TXT | magick inline:- result.png Or equivalently and avoiding "Useless Use of cat": magick inline:- result.png < YOURFILE.TXT If the former, you can use something like this (untested): from urllib import request with request.urlopen('data:image/png;base64,iVBORw0...') as response: im = response.read() Now im contains a PNG-encoded [^1] image, so you can either save to disk as such: with open('result.png','wb') as f: f.write(im) Or, you can wrap it in a BytesIO and open into a PIL Image: from io import BytesIO from PIL import Image pilImage = Image.open(BytesIO(im)) [^1]: Note that I have blindly assumed it is a PNG, it may be JPEG, so you should ideally look at the start of the DataURI to determine a suitable extension for saving your file.
4
3
73,922,260
2022-10-1
https://stackoverflow.com/questions/73922260/how-can-i-trigger-same-cloud-run-job-service-using-different-arguments
I'm trying to make a scrapy scraper work using Cloud Run. The main idea is that every 20 minutes a Cloud Scheduler cron should trigger the web scraper and get data from different sites. All sites have the same structure, so I would like to use the same code and parallelize the execution of the scraping job, doing something like scrapy crawl scraper -a site=www.site1.com and scrapy crawl scraper -a site=www.site2.com. I have already deployed a version of the scraper, but it can only do scrapy crawl scraper. How can I make the site argument change at execution time? Also, should I be using a Cloud Run job or service?
According to that page of the documentation, there is a trick. Define a number of tasks; let's say you set the number of tasks equal to the number of sites to scrape (use the --task parameter for that). In your container (or in Cloud Storage, but if you do that, you have to download the file before starting the process), add a file with one website to scrape per line. At runtime, use the CLOUD_RUN_TASK_INDEX environment variable. That variable indicates the number of the task in the execution. For each different number, pick a line in your file of websites (the line number equals the env var value). Like that, you can leverage Cloud Run jobs and parallelism. The main tradeoff here is the static form of the list of websites to scrape.
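A hedged sketch of what each task's entrypoint could look like; sites.txt is a hypothetical file baked into the container image, one URL per line.

import os
import subprocess

# Cloud Run jobs inject the task number into each task's environment.
task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", "0"))

with open("sites.txt") as f:                 # hypothetical bundled list of sites
    sites = [line.strip() for line in f if line.strip()]

site = sites[task_index]
subprocess.run(["scrapy", "crawl", "scraper", "-a", f"site={site}"], check=True)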
4
1
73,918,802
2022-10-1
https://stackoverflow.com/questions/73918802/valueerror-x-and-y-must-have-same-first-dimension-but-have-shapes-100-and
I'm trying to plot a simple function using python, numpy and matplotlib but when I execute the script it returns a ValueError described in the title. This is my code: """Geometrical interpretation: In Python, plot the function y = f(x) = x**3 − (1/x) and plot its tangent line at x = 1 and at x = 2.""" import numpy as np import matplotlib.pyplot as plt def plot(func): plt.figure(figsize=(12, 8)) x = np.linspace(-100, 100, 100) plt.plot(x, func, '-', color='pink') plt.show() plt.close() plot(lambda x: x ** 3 - (1 / x)) Please send this beginer some help :)
This was a good effort actually. You only needed to add the y variable using y=func(x). And then plt.plot(x,y)... so this works: import numpy as np import matplotlib.pyplot as plt def plot(func): plt.figure(figsize=(12, 8)) x = np.linspace(-100, 100, 100) y = func(x) plt.plot(x, y, '-', color='pink') plt.show() plt.close() plot(lambda x: x ** 3 - (1 / x)) result:
7
5
73,898,903
2022-9-29
https://stackoverflow.com/questions/73898903/how-do-i-get-pyenv-to-display-the-executable-path-for-an-installed-version
Install a python version using: $ pyenv install 3.8.9 Installed Python-3.8.9 to /Users/robino/.pyenv/versions/3.8.9 List the python versions now available: $ pyenv versions * system 3.8.2 3.8.9 A week goes by and I forget where it is installed. Now suppose I want to get the executable path for the 3.8.9 version. The following do not work: $ pyenv which 3.8.9 pyenv: 3.8.9: command not found $ pyenv which python 3.8.9 (gives path to system python) $ pyenv which python-3.8.9 pyenv: python-3.8.9: command not found $ pyenv which Python-3.8.9 pyenv: Python-3.8.9: command not found A workaround I found was to set the python version, check, then set it back to system: $ pyenv local 3.8.9 $ pyenv which python /Users/robino/.pyenv/versions/3.8.9/bin/python $ pyenv local --unset However, this is a suboptimal solution as it requires that no local version was previously set. What is the correct command to print out the python executable path for a version that is not currently in use, using pyenv?
By default, pyenv executable can be found at $(pyenv root)/versions/{VERSION}/bin/python. I am not aware of a command displaying all/any executables other than pyenv which python. If you'd like to get the path via commands though, another option would be to make a temporary subdirectory and set the local pyenv interpreter there: $ mkdir tmp; cd tmp $ pyenv local 3.8.9 $ pyenv which python /Users/robino/.pyenv/versions/3.8.9/bin/python $ cd ..; rm -r tmp Since deeper directories take priority with local pyenv versions, a parent directory wouldn't interfere in this case. Yet another option would be to temporarily set the global pyenv version, as this does not have the requirement of no local pyenv version being set. I wouldn't like this though as I'd probably forget to set it back to its original value ;)
16
21
73,914,281
2022-9-30
https://stackoverflow.com/questions/73914281/julia-transpose-grouped-data-passing-tuple-of-column-selectors
ds = Dataset([[1, 1, 1, 2, 2, 2], ["foo", "bar", "monty", "foo", "bar", "monty"], ["a", "b", "c", "d", "e", "f"], [1, 2, 3, 4, 5, 6]], [:g, :key, :foo, :bar]) In InmemoryDatasets, the transpose function can Pass Tuple of column selectors. transpose(groupby(ds, :g), (:foo, :bar), id = :key) Result: g foo bar monty foo_1 bar_1 monty_1 identity identity identity identity identity identity identity Int64? String? String? String? Int64? Int64? Int64? 1 1 a b c 1 2 3 2 2 d e f 4 5 6 Question: How can I do this in DataFrames.jl? How can I do this in R and Python?
In R, pivot_wider can be used for reshaping. library(tidyr) pivot_wider(ds, names_from = key, values_from = c(foo, bar)) -output # A tibble: 2 × 7 g foo_foo foo_bar foo_monty bar_foo bar_bar bar_monty <dbl> <chr> <chr> <chr> <int> <int> <int> 1 1 a b c 1 2 3 2 2 d e f 4 5 6 If we want to get the same column names, we could rename the columns library(dplyr) library(stringr) ds %>% rename("grp"= 'foo', '1' = 'bar') %>% pivot_wider(names_from = key, values_from = c("grp", `1`), names_glue = "{key}_{.value}") %>% rename_with(~ str_remove(.x, "_grp"), ends_with('_grp')) -output # A tibble: 2 × 7 g foo bar monty foo_1 bar_1 monty_1 <dbl> <chr> <chr> <chr> <int> <int> <int> 1 1 a b c 1 2 3 2 2 d e f 4 5 6 data ds <- structure(list(g = c(1, 1, 1, 2, 2, 2), key = c("foo", "bar", "monty", "foo", "bar", "monty"), foo = c("a", "b", "c", "d", "e", "f"), bar = 1:6), class = "data.frame", row.names = c(NA, -6L))
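Since the question also asks about Python, a rough pandas equivalent might look like this (only a sketch; the exact column order and _1 suffixes of the Julia output are not reproduced):

import pandas as pd

ds = pd.DataFrame({
    "g": [1, 1, 1, 2, 2, 2],
    "key": ["foo", "bar", "monty", "foo", "bar", "monty"],
    "foo": ["a", "b", "c", "d", "e", "f"],
    "bar": [1, 2, 3, 4, 5, 6],
})

wide = ds.pivot(index="g", columns="key", values=["foo", "bar"])
# Flatten the MultiIndex columns, e.g. ("bar", "foo") -> "foo_bar"
wide.columns = [f"{key}_{value}" for value, key in wide.columns]
print(wide.reset_index())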
3
5
73,914,589
2022-9-30
https://stackoverflow.com/questions/73914589/filter-and-move-text-in-another-column-in-substring
I have the following dataset: df = pd.DataFrame([ {'Phone': 'Fax(925) 482-1195', 'Fax': None}, {'Phone': 'Fax(406) 226-0317', 'Fax': None}, {'Phone': 'Fax+1 650-383-6305', 'Fax': None}, {'Phone': 'Phone(334) 585-1171', 'Fax': 'Fax(334) 585-1182'}, {'Phone': None, 'Fax': None}, {'Phone': 'Phone(334) 585-1171', 'Fax': 'Fax(334) 585-1176'}] ) Which should look like: What I'm trying to do is: for every row where I see "Fax" in the Phone column, I want to truncate it and transfer this record to the column "Fax". At first, I was trying to select only the matching rows with this filter: df[df['Phone'].str.contains("Fax") == True, "Fax"] = df[df['Phone'].str.contains("Fax") == True] But it does not work, and fails with the error: "TypeError: unhashable type: 'Series'". Any ideas?
You have a bunch of rows, that is, a list of dicts. Simplest approach would be to massage each row prior to adding it to the dataframe. rows = [ ... ] def get_contacts(rows): for row in rows: phone, fax = row['Phone'], row['Fax'] if 'Fax' in phone: phone, fax = None, phone yield phone, fax df = pd.DataFrame(get_contacts(rows)) You can force str instead of None with a filter like this: ... yield clean(phone), clean(fax) ... def clean(s, default=''): if s is None: return default return s If you really prefer to stick to using Pandas, you might want to identify a mask of rows where df.Phone contains 'Fax', then copy that subset into df['Fax'], then blank out selected df['Phone'] entries. You can verify / debug each step by itself -- get (1) right before moving on to attempt (2). If you choose to go this route, please post your final solution.
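A hedged version of that Pandas mask approach (assuming the df from the question; stripping the "Fax"/"Phone" prefixes is left out of this sketch):

import pandas as pd

mask = df["Phone"].str.contains("Fax", na=False)
df.loc[mask, "Fax"] = df.loc[mask, "Phone"]   # copy the value over to Fax
df.loc[mask, "Phone"] = None                  # blank out the original Phone entry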
3
2
73,907,111
2022-9-30
https://stackoverflow.com/questions/73907111/exclude-multiple-marked-tests-with-pytest-from-command-line
I have the following in my pyproject.toml [tool.pytest.ini_options] markers = [ "plot: marks SLOW plot tests (deselect with '-m \"not plot\"')", "open_tutorial: marks the open_tutorial (which opens VSCode all the times)" ] and I have a bunch of test methods marked accordingly. If I run coverage run --branch -m pytest -m "not open_tutorial" or coverage run --branch -m pytest -m "not plot" I got the desired results, namely the marked test are skipped, but I cannot figure out how to make pytest to skip both. I tried the following coverage run --branch -m pytest -m "not open_tutorial" -m "not plot" coverage run --branch -m pytest -m "not open_tutorial" "not plot" coverage run --branch -m pytest -m ["not open_tutorial","not plot"] but none of them worked.
According to pytest help: -m MARKEXPR only run tests matching given mark expression. For example: -m 'mark1 and not mark2'. If you want to use more than one marker you should use the and operator. pytest -m "not open_tutorial and not plot" will run all test without marks: open_tutorial and plot but: pytest -m "not open_tutorial and not plot and othermark" will run tests with othermark if they don't have plot or open_tutorial marks.
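As a small illustration of how the marks from the question combine with such an expression (hypothetical test names):

import pytest

@pytest.mark.plot
def test_heavy_plot():
    ...

@pytest.mark.open_tutorial
def test_open_tutorial():
    ...

def test_fast():
    ...

# pytest -m "not open_tutorial and not plot"  -> only test_fast is collected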
11
19
73,910,742
2022-9-30
https://stackoverflow.com/questions/73910742/why-is-my-python-decorator-classs-get-method-not-called-in-all-cases
So I'm trying to implement something akin to C# events in Python as a decorator for methods: from __future__ import annotations from typing import * import functools def event(function: Callable) -> EventDispatcher: return EventDispatcher(function) class EventDispatcher: def __init__(self, function: Callable) -> None: functools.update_wrapper(self, function) self._instance: Any | None = None self._function: Callable = function self._callbacks: Set[Callable] = set() def __get__(self, instance: Any, _: Any) -> EventDispatcher: self._instance = instance return self def __iadd__(self, callback: Callable) -> EventDispatcher: self._callbacks.add(callback) return self def __isub__(self, callback: Callable) -> EventDispatcher: self._callbacks.remove(callback) return self def __call__(self, *args: Any, **kwargs: Any) -> Any: for callback in self._callbacks: callback(*args, **kwargs) return self._function(self._instance, *args, **kwargs) But when I decorate a class method with @event and later on call the decorated method, the method will be invoked on the incorrect instance in some cases. Good Case: class A: @event def my_event(self) -> None: print(self) class B: def __init__(self, a: A) -> None: a.my_event += self.my_callback def my_callback(self) -> None: pass a0 = A() a1 = A() # b = B(a0) a1.my_event() a0.my_event() The above code will result in the output: <__main__.A object at 0x00000170AA15FCA0> <__main__.A object at 0x00000170AA15FC70> Evidently the function my_event() is called twice, each time with a different instance as expected. Bad Case: Taking the code from the good case and commenting in the line # b = B(a0) results in the output: <__main__.A object at 0x000002067650FCA0> <__main__.A object at 0x000002067650FCA0> Now the method my_event() is called twice, too. But on the same instance. Question: I think the issue boils down to EventDispatcher.__get__() not being called in the bad case. So my question is, why is EventDispatcher.__get__() not called and how do I fix my implementation?
The problem lies in the initializer of B: a.my_event += self.my_callback Note that the += operator is not just a simple call to __iadd__, but actually equivalent to: a.my_event = a.my_event.__iadd__(self.my_callback) This is also the reason why your __iadd__ method needs to return self. Because the class EventDispatcher has only __get__ but no __set__, the result will be written to the instance's attribute during assignment, so the above statement is equivalent to: a.__dict__['my_event'] = A.__dict__['my_event'].__get__(a, A).__iadd__(self.my_callback) Simple detection: print(a0.__dict__) b = B(a0) print(a0.__dict__) Output: {} {'my_event': <__main__.EventDispatcher object at 0x00000195218B3FD0>} When a0 calls my_event on the last line, it only takes the instance of EventDispatcher from a0.__dict__ (instance attribute access takes precedence over non data descriptors, refer to invoking descriptor), and does not trigger the __get__ method. Therefore, A.__dict__['my_event']._instance will not be updated. The simplest repair way is to add an empty __set__ method to the definition of EventDispatcher: class EventDispatcher: ... def __set__(self, instance, value): pass ... Output: <__main__.A object at 0x000002A36B9B3D30> <__main__.A object at 0x000002A36B9B3D00>
3
4
73,910,005
2022-9-30
https://stackoverflow.com/questions/73910005/how-to-sum-an-ndarray-over-ranges-bounded-by-other-indexes
For an array of multiple dimensions, I would like to sum along some dimensions, with the sum range defined by other dimension indexes. Here is an example: >>> import numpy as np >>> x = np.arange(2*3*4).reshape((2,3,4)) >>> x array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> wanted = [[sum(x[i,j,i:j]) for j in range(x.shape[1])] for i in range(x.shape[0])] >>> wanted [[0, 4, 17], [0, 0, 21]] Is there a more efficient way to do it without for loops or list comprehension? My array is quite large.
You can use boolean masks: # get lower triangles m1 = np.arange(x.shape[1])[:,None]>np.arange(x.shape[2]) # get columns index >= depth index m2 = np.arange(x.shape[2])>=np.arange(x.shape[0])[:,None,None] # combine both mask to form 3D mask mask = m1 & m2 out = np.where(mask, x, 0).sum(axis=2) output: array([[ 0, 4, 17], [ 0, 0, 21]]) Masks: # m1 array([[False, False, False, False], [ True, False, False, False], [ True, True, False, False]]) # m2 array([[[ True, True, True, True]], [[False, True, True, True]]]) # mask array([[[False, False, False, False], [ True, False, False, False], [ True, True, False, False]], [[False, False, False, False], [False, False, False, False], [False, True, False, False]]])
6
3
73,907,772
2022-9-30
https://stackoverflow.com/questions/73907772/connect-with-opcua-server-with-basic256sha256-in-python
The following code is used to connect with the OPC UA server: from opcua import Client client = Client(url='opc.tcp://192.168.0.5:4840') client.set_user('user1') client.set_password('password') client.connect() Error message: Received an error: MessageAbort(error:StatusCode(BadSecurityPolicyRejected), reason:None) Protocol Error I also tried to append this code: client.set_security_string("Basic256Sha256,Sign,cert.pem,key.pem") But I do not know where I can create the cert.pem and key.pem. Does anyone know how to connect with the server in Python?
You can use this script and adapt it to your needs: https://github.com/FreeOpcUa/python-opcua/blob/master/examples/generate_certificate.sh Also python-opcua is deprecated. So if you start a new project, I would recommend you to use asyncua. Asyncua also has a sync wrapper if you don't want to deal with asyncio.
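A minimal asyncua sketch along those lines (the security string mirrors the one from the question; cert.pem and key.pem are placeholders you still need to generate, e.g. with the linked script):

import asyncio
from asyncua import Client

async def main():
    client = Client(url="opc.tcp://192.168.0.5:4840")
    client.set_user("user1")
    client.set_password("password")
    # Must match the security policy the server expects.
    await client.set_security_string("Basic256Sha256,Sign,cert.pem,key.pem")
    async with client:
        root = client.nodes.root
        print(await root.read_browse_name())

asyncio.run(main())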
4
3
73,899,633
2022-9-29
https://stackoverflow.com/questions/73899633/difference-with-nearest-conditional-rows-per-group-using-pandas
I have a dataframe (sample) like this: import pandas as pd data = [['A', '2022-09-01', False, 2], ['A', '2022-09-02', False, 3], ['A', '2022-09-03', True, 1], ['A', '2022-09-05', False, 4], ['A', '2022-09-08', True, 4], ['A', '2022-09-09', False, 2], ['B', '2022-09-03', False, 4], ['B', '2022-09-05', True, 5], ['B', '2022-09-06', False, 7], ['B', '2022-09-09', True, 4], ['B', '2022-09-10', False, 2], ['B', '2022-09-11', False, 3]] df = pd.DataFrame(data = data, columns = ['group', 'date', 'indicator', 'val']) group date indicator val 0 A 2022-09-01 False 2 1 A 2022-09-02 False 3 2 A 2022-09-03 True 1 3 A 2022-09-05 False 4 4 A 2022-09-08 True 4 5 A 2022-09-09 False 2 6 B 2022-09-03 False 4 7 B 2022-09-05 True 5 8 B 2022-09-06 False 7 9 B 2022-09-09 True 4 10 B 2022-09-10 False 2 11 B 2022-09-11 False 3 I would like to create a column called Diff, which shows the difference of rows with its nearest (depends on date) conditional rows (indicator == True) where the conditional rows have a value of 0 per group. Here is the desired output: data = [['A', '2022-09-01', False, 2, 1], ['A', '2022-09-02', False, 3, 2], ['A', '2022-09-03', True, 1, 0], ['A', '2022-09-05', False, 4, 3], ['A', '2022-09-08', True, 4, 0], ['A', '2022-09-09', False, 2, -2], ['B', '2022-09-03', False, 4, -1], ['B', '2022-09-05', True, 5, 0], ['B', '2022-09-06', False, 7, 2], ['B', '2022-09-09', True, 4, 0], ['B', '2022-09-10', False, 2, -2], ['B', '2022-09-11', False, 3, -1]] df_desired = pd.DataFrame(data = data, columns = ['group', 'date', 'indicator', 'val', 'Diff']) group date indicator val Diff 0 A 2022-09-01 False 2 1 1 A 2022-09-02 False 3 2 2 A 2022-09-03 True 1 0 3 A 2022-09-05 False 4 3 4 A 2022-09-08 True 4 0 5 A 2022-09-09 False 2 -2 6 B 2022-09-03 False 4 -1 7 B 2022-09-05 True 5 0 8 B 2022-09-06 False 7 2 9 B 2022-09-09 True 4 0 10 B 2022-09-10 False 2 -2 11 B 2022-09-11 False 3 -1 As you can see it returns the difference respectively with the nearest indicator == True rows per group where the conditioned rows have a Diff of 0. So I was wondering if anyone knows have to get the desired result using pandas? Extra info column Diff: Let's take group A as an example. The column Diff is calculated by the difference with respect to the nearest row with indicator True. So for example: Row 1 is 2 - 1 = 1 with respect to row 3 (nearest True row based on date). Row 2 is 3 - 1 = 2 with respect to row 3. Row 4 is 4 - 1 = 3 with respect to row 3. Row 6 is 2 - 4 = -2 with respect to row 5 (nearest True row based on date). The rows with True have a value of 0 in Diff because everything is calculated with respect to these rows.
IIUC use merge_asof with filtered rows by indicator and subtract column val: df['date'] = pd.to_datetime(df['date'] ) df = df.sort_values('date') df['Diff'] = df['val'].sub(pd.merge_asof(df, df[df['indicator']], on='date', by='group', direction='nearest')['val_y']) df = df.sort_index()
3
4
73,900,229
2022-9-29
https://stackoverflow.com/questions/73900229/in-a-new-project-graphene-or-strawberry-why
Which lib is better to integrate with a new Django project? I read the docs and still don't know how performant or how easy to integrate each one can be in a prod environment. I used Graphene before to integrate with some Pipefy code that I did at work, but I'm pretty new to GraphQL and don't know at this point which way I should go. Strawberry docs - https://strawberry.rocks/docs Graphene - https://docs.graphene-python.org/en/latest/
I'm the maintainer of Strawberry, so there might be some bias in my answer 😊 Both Strawberry and Graphene are based on GraphQL-core which is the library that provides the GraphQL execution, so in terms of performance they are comparable. For Strawberry we have a performance dashboard here: https://speed.strawberry.rocks/ and you can see we've been working to make it as fast as we can, but GraphQL-core will always be the deciding factor for the speed[1] For Django, I personally don't tend to use the integrations from models as I think it is a bad practise, but both Graphene and Strawberry have integration in that sense. Graphene integration's probably more mature, but Strawberry's is getting better every day (the maintainer works on both strawberry-django and strawberry-django-plus, and he's doing amazing work). Graphene has also probably more extensions at the moment and maybe more guides online, though most might not be up-to-date any more. Strawberry is well maintained and makes releases quite often, and we are trying to not have big breaking changes, even if we are at version 0.x. Graphene has been unmaintained for a bit but luckily now there are more maintainers. I'd definitely encourage you to do a small prototype with both libraries and see which one resonates the most with you as they have different DX, with Strawberry leveraging Python Type Hints and Graphene having a syntax very similar to Django models. [1] I do have some ideas on how we can make the library faster, but I don't know when I'll be able to implement them :)
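To illustrate the developer-experience difference mentioned above, here are two rough, untested sketches of the same trivial schema (plain GraphQL, no Django integration):

import strawberry

@strawberry.type
class Query:
    @strawberry.field
    def hello(self) -> str:
        return "world"

schema = strawberry.Schema(query=Query)

# Roughly the same schema with Graphene:
import graphene

class GrapheneQuery(graphene.ObjectType):
    hello = graphene.String()

    def resolve_hello(root, info):
        return "world"

graphene_schema = graphene.Schema(query=GrapheneQuery)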
7
17
73,905,610
2022-9-30
https://stackoverflow.com/questions/73905610/how-to-generate-color-based-on-the-magnitude-of-the-values-in-python
I am trying to make a Bar plot in python so that the color of each bar is set based on the magnitude of that value. For example the list below is our numbers: List = [2,10,18,50, 100, ... 500] In this situation 2 should have the lightest color and 500 should have the darkest color values. The picture below which is from excel shows what i mean: I appreciate answers which helps me. Thank you. Update: import matplotlib.pyplot as plt fig, ax = plt.subplots() names= ['A', 'B', 'C', 'D'] counts = [30, 40, 55, 100] bar_colors = ['blue'] ax.bar(names, counts, color=bar_colors) ax.set_ylabel('Frequency') ax.set_title('Names') plt.show()
Then you can use a Colormap (extra docs) from matplotlib In this example the bar_colors list is being creating with a scale between 0 and 100 but you can choose the scale as you want. import matplotlib.pyplot as plt import matplotlib.colors fig, ax = plt.subplots() names= ['A', 'B', 'C', 'D', 'E'] counts = [10, 30, 40, 55, 100] cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","orange", "green"]) bar_colors = [cmap(c/100) for c in counts] ax.bar(names, counts, color=bar_colors) ax.set_ylabel('Frequency') ax.set_title('Names') plt.show() Please, notice you can instantiate the cmap with a list of colors as you desire. Also, be sure that when invoking cmap() you just can use a float between [0,1].
3
4
73,902,239
2022-9-29
https://stackoverflow.com/questions/73902239/indexing-python-dict-with-string-keys-using-enums
Is there a way to index a dict using an enum? e.g. I have the following Enum and dict: class STATUS(Enum): ACTIVE = "active" d = { "active": 1 } I'd like to add the appropriate logic to the class STATUS in order to get the following result: d[STATUS.ACTIVE] # returns 1 I understand that the type of STATUS.ACTIVE is not a string.. But is there a way around other than declaring the dict using STATUS.ACTIVE instead of "active"? Also: looking for a solution other than adding a class property or .value
@ElmovanKielmo solution will work, however you can also achieve your desired result by making the STATUS Enum derive from str as well from enum import Enum class STATUS(str, Enum): ACTIVE = "active" d = {"active": 1} print(d[STATUS.ACTIVE]) # prints 1 Please keep in mind, that when inheriting from str, or any other type, the resulting enum members are also that type.
5
7
73,898,848
2022-9-29
https://stackoverflow.com/questions/73898848/how-to-add-an-image-as-the-background-to-a-matplotlib-figure-not-to-plots-but
I am trying to add an image to the "whitespace" behind the various subplots of a matplotlib figure. Most discussions similar to this topic are to add images to the plots themselves, however I have not yet come across a means to change the background of the overall "canvas". The most similar function I have found is set_facecolor(), however this only allows a single color to be set as the background. fig, ax = plt.subplots(2,2) fig.patch.set_facecolor('xkcd:mint green') plt.show() However, I am seeking a solution to import an image behind the plots, similar to this (manually made): I have googled, searched SO, and looked through the matplotlib docs but I only get results for either plt.imshow(image) or set_facecolor() or similar.
You can use a dummy subplot, with the same size as the figure, and plot the background onto that subplot. import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np image = plt.imread('test.jpg') # make ticks white, for readability on colored background mpl.rcParams.update({'xtick.color': "white", 'ytick.color': "white", 'axes.labelcolor': "white"}) # create a figure with 4 subplots, with the same aspect ratio as the image width = 8 fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(width, width * image.shape[0] / image.shape[1])) for ax in np.ravel(axs): ax.patch.set_alpha(0.7) # make subplots semi-transparent background_ax = plt.axes([0, 0, 1, 1]) # create a dummy subplot for the background background_ax.set_zorder(-1) # set the background subplot behind the others background_ax.imshow(image, aspect='auto') # show the backgroud image # plot something onto the subplots t = np.linspace(0, 8 * np.pi, 2000) for i in range(2): for j in range(2): axs[i, j].plot(np.sin(t * (i + 2)), np.sin(t * (j + 4))) # plt.tight_layout() gives a warning, as the background ax won't be taken into account, # but normally the other subplots will be rearranged to nicely fill the figure plt.tight_layout() plt.show()
4
7
73,900,510
2022-9-29
https://stackoverflow.com/questions/73900510/multiple-python-versions-in-the-same-ubuntu-machine
I am on an Ubuntu machine, where Python 3.10 is automatically installed. To do a given task in a shared codebase I need to use Python 3.9 for some issues with new versions. I would like to have both of the Python installed on my machine and be able to use both and switching if I need to. So, I am trying to install old Python 3.9 with the command sudo apt-get install python3.9 and it succeeded in installation, but I can't find it anywhere even with which python3.9 and similar. Even the python interpreter option in VSCode does not show it. I think I am missing something. Can please someone help me? Thank you
Python should be installed under the /usr/bin/ folder. If it's not there, you might have not actually installed the package. Check out this guide for installing specific versions (Scroll down to the "Use Deadsnakes PPA to Install Python 3 on Ubuntu" section.) This will allow you to install specific version of python like python3.9
8
3
73,899,974
2022-9-29
https://stackoverflow.com/questions/73899974/pandas-create-multiple-rows-of-dummy-data-from-one-row
I'm building a machine learning model and I need to populate a test dataframe with synthetic data. I have time series data that currently looks like this: Date DayOfWeek Unit 2022-10-01 7 A 2022-10-02 1 A 2022-10-03 2 A What I need is to duplicate all the date rows, but I need a row for each 'Unit' (A,B,C,D) like this: Date DayOfWeek Unit 2022-10-01 7 A 2022-10-01 7 B 2022-10-01 7 C 2022-10-01 7 D 2022-10-02 1 A 2022-10-02 1 B 2022-10-02 1 C 2022-10-02 1 D 2022-10-03 2 A 2022-10-03 2 B 2022-10-03 2 C 2022-10-03 2 D I found a previous answer that showed me how to repeat: df.reindex(df.index.repeat(4)).reset_index(drop=True) What's the best way to take that, but instead of repeating everything, only repeating 'Date' and "DayOfWeek' but populating A through D on 'Unit'?
Suggest using itertools.product for the purpose: from itertools import product df = pd.DataFrame( data=product( pd.Series(pd.date_range('2022-10-01', '2022-10-03', freq='D')), "ABCD" ), columns=("Date", "Unit"), ) df["DayOfWeek"] = df["Date"].dt.dayofweek.add(1) # To Have Day of Week Starting with 1 df = df[["Date", "DayOfWeek", "Unit"]] print(df) Output: Date DayOfWeek Unit 0 2022-10-01 6 A 1 2022-10-01 6 B 2 2022-10-01 6 C 3 2022-10-01 6 D 4 2022-10-02 7 A 5 2022-10-02 7 B 6 2022-10-02 7 C 7 2022-10-02 7 D 8 2022-10-03 1 A 9 2022-10-03 1 B 10 2022-10-03 1 C 11 2022-10-03 1 D
3
2
73,899,080
2022-9-29
https://stackoverflow.com/questions/73899080/remove-duplicate-words-in-the-same-cell-within-a-column-in-python
I need somebody's help. I have a column with words and I want to remove the duplicated words inside each cell. What I want to get is something like this: words expected car apple car good car apple good good bad well good good bad well car apple bus food car apple bus food I've tried this but it is not working: from collections import OrderedDict df['expected'] = (df['words'].str.split().apply(lambda x: OrderedDict.fromkeys(x).keys()).str.join(' ')) I'll be very grateful if somebody can help me.
If you don't need to retain the original order of the words, you can create an intermediate set which will remove duplicates. df["expected"] = df["words"].str.split().apply(set).str.join(" ")
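If the original word order does matter, a dict-based variant (what the question was reaching for with OrderedDict; plain dicts preserve insertion order on Python 3.7+) should also work:

df["expected"] = (
    df["words"]
    .str.split()
    .apply(lambda words: " ".join(dict.fromkeys(words)))
)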
3
2
73,896,757
2022-9-29
https://stackoverflow.com/questions/73896757/pandas-regex-look-ahead-and-behind-from-a-1st-occurrence-of-character
I have python strings like below "1234_4534_41247612_2462184_2131_GHI.xlsx" "1234_4534__sfhaksj_DHJKhd_hJD_41247612_2462184_2131_PQRST.GHI.xlsx" "12JSAF34_45aAF34__sfhaksj_DHJKhd_hJD_41247612_2f462184_2131_JKLMN.OPQ.xlsx" "1234_4534__sfhaksj_DHJKhd_hJD_41FA247612_2462184_2131_WXY.TUV.xlsx" I would like to do the below: a) extract the characters that appear before and after the 1st dot b) the keywords that I want are always found after the last _ symbol For example: if you look at the 2nd input string, I would like to get only PQRST.GHI as output. It is after the last _ and before the 1st ., and we also get the keyword after the 1st . So, I tried the below: for s in strings: after_part = (s.split('.')[1]) before_part = (s.split('.')[0]) before_part = before_part.split('_')[-1] expected_keyword = before_part + "." + after_part print(expected_keyword) Though this works, it is definitely not a nice and elegant way compared to a regex. Is there any better way to write this? I expect my output to be as below. As you can see, we get the keywords before and after the 1st dot character: GHI PQRST.GHI JKLMN.OPQ WXY.TUV
Try (regex101): import re strings = [ "1234_4534_41247612_2462184_2131_ABCDEF.GHI.xlsx", "1234_4534__sfhaksj_DHJKhd_hJD_41247612_2462184_2131_PQRST.GHI.xlsx", "12JSAF34_45aAF34__sfhaksj_DHJKhd_hJD_41247612_2f462184_2131_JKLMN.OPQ.xlsx", "1234_4534__sfhaksj_DHJKhd_hJD_41FA247612_2462184_2131_WXY.TUV.xlsx", ] pat = re.compile(r"[^.]+_([^.]+\.[^.]+)") for s in strings: print(pat.search(s).group(1)) Prints: ABCDEF.GHI PQRST.GHI JKLMN.OPQ WXY.TUV
3
2
73,887,700
2022-9-28
https://stackoverflow.com/questions/73887700/is-there-a-way-to-unstack-a-dataframe-and-return-as-a-list-value
I have a dataframe looks like this: import pandas as pd df = pd.DataFrame({'type_a': [1,0,0,0,0,1,0,0,0,1], 'type_b': [0,1,0,0,0,0,0,0,1,1], 'type_c': [0,0,1,1,1,1,0,0,0,0], 'type_d': [1,0,0,0,0,1,1,0,1,0], }) I wanna create a new column based on those 4 columns, it will return the column names whenever the value in those 4 columns equals to 1, if there are multiple columns equal to 1 at the same time then it will return the list of those columns names, otherwise it will be nan. The output dataframe will look like this: df = pd.DataFrame({'type_a': [1,0,0,0,0,1,0,0,0,1], 'type_b': [0,1,0,0,0,0,0,0,1,1], 'type_c': [0,0,1,1,1,1,0,0,0,0], 'type_d': [1,0,0,0,0,1,1,0,1,0], 'type':[['type_a','type_d'], 'type_b', 'type_c', 'type_c','type_c', ['type_a','type_c','type_d'], 'type_d', 'nan', ['type_b','type_d'],['type_a','type_b']] }) Any help will be really appreciated. Thanks!
This is also another way: import pandas as pd df['type'] = (pd.melt(df.reset_index(), id_vars='index') .query('value == 1') .groupby('index')['variable'] .apply(list)) type_a type_b type_c type_d type 0 1 0 0 1 [type_a, type_d] 1 0 1 0 0 [type_b] 2 0 0 1 0 [type_c] 3 0 0 1 0 [type_c] 4 0 0 1 0 [type_c] 5 1 0 1 1 [type_a, type_c, type_d] 6 0 0 0 1 [type_d] 7 0 0 0 0 NaN 8 0 1 0 1 [type_b, type_d] 9 1 1 0 0 [type_a, type_b]
3
4
73,889,060
2022-9-29
https://stackoverflow.com/questions/73889060/python-pandas-timedelta-for-one-month
Is there a way to do a Timedelta for one month? Applying pd.Timedelta('1M') yields Timedelta('0 days 00:01:00'). Is there a code word for month?
Timedelta is for absolute offsets. A month "offset", by definition, might have different lengths depending on the value it's applied to. For those cases, you should use DateOffset: pandas.DateOffset(months=1) Functional example: import pandas as pd pd.Timestamp('2022-09-29 00:27:00') + pd.DateOffset(months=1) >>> Timestamp('2022-10-29 00:27:00')
6
6
73,888,639
2022-9-28
https://stackoverflow.com/questions/73888639/why-is-this-unpacking-expression-not-allowed-in-python3-10
I used to unpack a long iterable expression like this: In python 3.8.7: >>> _, a, (*_), c = [1,2,3,4,5,6] >>> a 2 >>> c 6 In python 3.10.7: >>> _, a, (*_), c = [1,2,3,4,5,6] File "<stdin>", line 1 _, a, (*_), c = [1,2,3,4,5,6] ^^ SyntaxError: cannot use starred expression here I'm not sure which version of python between 3.8.7 and 3.10.7 introduced this backwards breaking behavior. What's the justification for this?
There's an official discussion here. The most relevant quote I can find is: Also the current behavior allows (*x), y = 1 assignment. If (*x) is to be totally disallowed, (*x), y = 1 should also be rejected. I agree. The final "I agree" is from Guido van Rossum. The rationale for rejecting (*x) was: Honestly this seems like a bug in 3.8 to me (if it indeed behaves like this): >>> (*x), y (1, 2, 3) Every time I mistakenly tried (*x) I really meant (*x,), so it's surprising that (*x), y would be interpreted as (*x, y) rather than flagging (*x) as an error. Please don't "fix" this even if it is a regression. Also by Guido van Rossum. So it seems like (*x) was rejected because it looks too similar to unpacking into a singlet tuple.
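For completeness, the unpacking from the question still works on 3.10 once the parentheses around the starred name are dropped:

_, a, *_, c = [1, 2, 3, 4, 5, 6]
print(a, c)  # 2 6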
6
6
73,883,435
2022-9-28
https://stackoverflow.com/questions/73883435/is-python-3-path-write-text-from-pathlib-atomic
I was wondering if the Path.write_text(data) function from pathlib was atomic or not. If not, are there scenarios where we could end up with a file created in the filesystem but not containing the intended content? To be more specific, as the comment from @ShadowRanger suggested what I care about is to know if the file contains either the original data or the new data, but never something in between. Which is actually less as full atomicity.
On the specific case of the file containing the original data or the new data, and nothing in between: No, it does not do any tricks with opening a temp file in the same directory, populating it, and finishing with an atomic rename to replace the original file. The current implementation is guaranteed to be at least two unique operations: Opening the file in write mode (which implicitly truncates it), and Writing out the provided data (which may take multiple system calls depending on the size of the data, OS API limitations, and interference by signals that might interrupt the write part-way and require the remainder to be written in a separate system call) If nothing else, your code could die after step 1 and before step 2 (a badly timed Ctrl-C or power loss), and the original data would be gone, and no new data would be written. Old answer in terms of general atomicity: The question is kinda nonsensical on its face. It doesn't really matter if it's atomic; even if it was atomic, a nanosecond after the write occurs, some other process could open the file, truncate it, rewrite it, move it, etc. Heck, in between write_text opening the file and when it writes the data, some other process could swoop in and move/rename the newly opened file or delete it; the open handle write_text holds would still work when it writes a nanosecond later, but the data would never be seen in a file at the provided path (and might disappear the instant write_text closes it, if some other process swooped in and deleted it). Beyond that, it can't be atomic even while writing, in any portable sense. Two processes could have the same file open at once, and their writes can interleave (there are locks around the standard handles within a process to prevent this, but no such locks exist to coordinate with an arbitrary other process). Concurrent file I/O is hard; avoid it if at all possible.
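If the "either the old data or the new data" guarantee is what matters, a common pattern is to write to a temporary file in the same directory and atomically swap it in with os.replace. A rough sketch (this is not what Path.write_text does, and error handling is kept minimal):

import os
import tempfile
from pathlib import Path

def atomic_write_text(path: Path, data: str) -> None:
    # Write to a temp file next to the target, then atomically replace the target.
    fd, tmp_name = tempfile.mkstemp(dir=path.parent)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_name, path)  # atomic when source and target share a filesystem
    except BaseException:
        os.unlink(tmp_name)
        raise

atomic_write_text(Path("example.txt"), "hello")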
3
2
73,879,229
2022-9-28
https://stackoverflow.com/questions/73879229/pydantic-perform-full-validation-and-dont-stop-at-the-first-error
Is there a way in Pydantic to perform the full validation of my classes and return all the possible errors? It seems that the standard behaviour stops the validation at the first encountered error. As an example: from pydantic import BaseModel class Salary(BaseModel): gross: int net: int tax: int class Employee(BaseModel): name: str age: int salary: Salary salary = Salary(gross = "hello", net = 1000, tax = 10) employee = Employee(name = "Mattia", age = "hello", salary=salary) This code runs and returns the validation error: pydantic.error_wrappers.ValidationError: 1 validation error for Salary gross value is not a valid integer (type=type_error.integer) However, it is not catching the second validation error on the age field. In a real bugfix scenario, I would need to fix the first validation error, re-run everything, and only at that point would I discover the second error on age. Is there a way to perform the full validation in pydantic, i.e. validate everything and return ALL the validation errors instead of stopping at the first one?
What you are describing is not Pydantic-specific behavior. This is how exceptions in Python work. As soon as one is raised (and is not caught somewhere up the stack), execution stops. Validation is triggered, when you attempt to initialize a Salary instance. Failed validation triggers the ValidationError. The Python interpreter doesn't even begin executing the line, where you want to initialize an Employee. Pydantic is actually way nicer in this regard than it could be. If you pass more than one invalid value in the same initialization, the ValidationError will contain info about about all of them. Like this examle: ... salary = Salary(gross="hello", net="foo", tax=10) The error message will look like this: ValidationError: 2 validation errors for Salary gross value is not a valid integer (type=type_error.integer) net value is not a valid integer (type=type_error.integer) What you'll have to do, if you want to postpone raising errors, is wrap the initialization in a try-block and upon catching an error, you could for example add it to a list to be processed later. In your example, this will not work because you want to use salary later on. In that case you could just initialize the Employee like this: ... employee = Employee( name="Mattia", age="hello", salary={"gross": "hello", "net": 100, "tax": 10} ) Which would also give you: ValidationError: 2 validation errors for Employee age value is not a valid integer (type=type_error.integer) salary -> gross value is not a valid integer (type=type_error.integer) Hope this helps.
5
3
73,882,042
2022-9-28
https://stackoverflow.com/questions/73882042/how-to-count-the-adjacent-values-with-values-of-1-in-a-geotiff-array
Let's say that we have a GeoTIFF of 0s and 1s. import rasterio src = rasterio.open('myData.tif') data = src.read(1) data array([[0, 1, 1, 0], [1, 0, 0, 1], [0, 0, 1, 0], [1, 0, 1, 1]]) I would like to have, for each pixel with value 1, the sum of all adjacent pixels forming a cluster of ones, so that I get something like the following: array([[0, 2, 2, 0], [1, 0, 0, 1], [0, 0, 3, 0], [1, 0, 3, 3]])
You can use scipy.ndimage.label: from scipy.ndimage import label out = np.zeros_like(data) labels, N = label(data) for i in range(N): mask = labels==i+1 out[mask] = mask.sum() output: array([[0, 2, 2, 0], [1, 0, 0, 1], [0, 0, 3, 0], [1, 0, 3, 3]])
3
2
73,876,937
2022-9-28
https://stackoverflow.com/questions/73876937/what-is-the-difference-between-keyword-pass-and-in-python
Is there any significant difference between the two Python keywords (...) and (pass) like in the examples def tempFunction(): pass and def tempFunction(): ... I should be aware of?
The ... is the shorthand for the Ellipsis global object in python. Similar to None and NotImplemented it can be used as a marker value to indicate the absence of something. For example: print(...) # Prints "Ellipsis" In this case, it has no effect. You could put any constant there and it would do the same. This is valid: def function(): 1 Or def function(): 'this function does nothing' Note both do nothing and return None. Since there is no return keyword the value won't be returned. pass explicitly does nothing, so it will have the same effect in this case too.
7
5
73,876,790
2022-9-28
https://stackoverflow.com/questions/73876790/poetry-configuration-is-invalid-additional-properties-are-not-allowed-group
Recently, I faced this issue with Poetry. All my commands using poetry were failing with the following error. RuntimeError The Poetry configuration is invalid: - Additional properties are not allowed ('group' was unexpected)
I figured out the following issue. The code owners had updated the poetry core requirement to requires = ["poetry-core>=1.2.0"] My current poetry version was 1.1.12 I did the following to fix my issue. # remove the current poetry installation rm -rf /Users/myusername/.poetry # upgrade poetry version pip install poetry -U This should solve the problem. I verified the same by running my other poetry commands. It should be noted that your current poetry configurations will be lost while doing this, and would need to be recreated and reinstalled. # reinstall poetry for my project poetry install
58
68
73,807,634
2022-9-21
https://stackoverflow.com/questions/73807634/how-can-i-select-a-button-contained-within-an-iframe-in-playwright-python-by-i
I am attempting to select a button within an iframe utilizing Python & Playwright... in Selenium I know you can do this by using indexes. Is this possible in playwright? I've been digging through the documentation and can't seem to figure it out. The button contained within the iframe that I am trying to select is: "button:has-text(\"Add New User\")" The html code for the iframe I am using looks similar to this: <iframe src="https://www.urlthatcannotbereturnedinpagehtml.com/veryparticularparameters" width="100%" style="height: 590px;"></iframe> Does anyone have any thoughts? I've attempted to find the URL by parsing the code for the webpage, but this portion can't be selected like that. I may just be at a loss with the documentation in Playwright, I've spent so much time in selenium that this seems like an entirely new language.
From what I understand, you have a page that has content within an iframe, and you want to use Playwright to access elements within that frame. The official docs for handling frames: https://playwright.dev/python/docs/frames You could then try something like this: // Locate element inside frame const iframeButton = await page.frameLocator('iFrame').locator("button:has-text(\"Add New User\")"); await iframeButton.click(); Notice that the example has the iFrame tag as locator, but you could and should use something more accurate like an id, name or url.
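For the Python flavour of Playwright that the question uses, the sync-API equivalent should look roughly like this (page is assumed to be an existing Page object; the generic iframe selector is kept from the answer above):

# Locate the button inside the frame and click it.
button = page.frame_locator("iframe").locator('button:has-text("Add New User")')
button.click()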
7
12
73,830,524
2022-9-23
https://stackoverflow.com/questions/73830524/attributeerror-module-lib-has-no-attribute-x509-v-flag-cb-issuer-check
Recently I had to reinstall Python due to a corrupt executable. This made one of our Python scripts bomb with the following error: AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK' The line of code that caused it to bomb was: from apiclient.discovery import build I tried pip uninstalling and pip upgrading the google-api-python-client, but I can’t seem to find any information on this particular error. For what it is worth, I am trying to pull Google Analytics information down via an API call. Here is an output of the command prompt error: File "C:\Analytics\Puritan_GoogleAnalytics\Google_Conversions\mcfTest.py", line 1, in <module> from apiclient.discovery import build File "C:\ProgramData\Anaconda3\lib\site-packages\apiclient\__init__.py", line 3, in <module> from googleapiclient import channel, discovery, errors, http, mimeparse, model File "C:\ProgramData\Anaconda3\lib\site-packages\googleapiclient\discovery.py", line 57, in <module> from googleapiclient import _auth, mimeparse File "C:\ProgramData\Anaconda3\lib\site-packages\googleapiclient\_auth.py", line 34, in <module> import oauth2client.client File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\client.py", line 45, in <module> from oauth2client import crypt File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\crypt.py", line 45, in <module> from oauth2client import _openssl_crypt File "C:\ProgramData\Anaconda3\lib\site-packages\oauth2client\_openssl_crypt.py", line 16, in <module> from OpenSSL import crypto File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\__init__.py", line 8, in <module> from OpenSSL import crypto, SSL File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\crypto.py", line 1517, in <module> class X509StoreFlags(object): File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\crypto.py", line 1537, in X509StoreFlags CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
Upgrade the latest version of PyOpenSSL. python3 -m pip install pip --upgrade pip install pyopenssl --upgrade
188
306
73,822,353
2022-9-23
https://stackoverflow.com/questions/73822353/how-can-i-get-word-level-timestamps-in-openais-whisper-asr
I use OpenAI's Whisper python lib for speech recognition. How can I get word-level timestamps? To transcribe with OpenAI's Whisper (tested on Ubuntu 20.04 x64 LTS with an Nvidia GeForce RTX 3090): conda create -y --name whisperpy39 python==3.9 conda activate whisperpy39 pip install git+https://github.com/openai/whisper.git sudo apt update && sudo apt install ffmpeg whisper recording.wav whisper recording.wav --model large If using an Nvidia GeForce RTX 3090, add the following after conda activate whisperpy39: pip install -f https://download.pytorch.org/whl/torch_stable.html conda install pytorch==1.10.1 torchvision torchaudio cudatoolkit=11.0 -c pytorch
In openai-whisper version 20231117, you can get word level timestamps by setting word_timestamps=True when calling transcribe(): pip install openai-whisper import whisper model = whisper.load_model("large") transcript = model.transcribe( word_timestamps=True, audio="toto.mp3" ) for segment in transcript['segments']: print(''.join(f"{word['word']}[{word['start']}/{word['end']}]" for word in segment['words'])) prints: Toto,[2.98/3.4] I[3.4/3.82] have[3.82/3.96] a[3.96/4.02] feeling[4.02/4.22] we're[4.22/4.44] not[4.44/4.56] in[4.56/4.72] Kansas[4.72/5.14] anymore.[5.14/5.48]
21
13
73,803,605
2022-9-21
https://stackoverflow.com/questions/73803605/psycopg2-programmingerror-the-connection-cannot-be-re-entered-recursively
I am calling an endpoint in Flask using the fetch API from React. I keep getting psycopg2.ProgrammingError: the connection cannot be re-entered recursively. The endpoint call is inside a loop. @app.get("/api/plot-project/<int:plot_id>/<int:project_id>") def check_and_deactivate(plot_id, project_id): with connection: with connection.cursor() as cursor: cursor.execute(PLOT_PROJECT_CHECK, (plot_id, project_id)) data = cursor.fetchall() if len(data) == 0: return "No data found", 404 removed = data.pop(0) if len(data) > 1: for row in data: print(row[0]) cursor.execute(PLOT_PROJECT_DEACTIVATE, ('deleted', row[0], plot_id, project_id)) return { "Remain": removed }, 200 The React function: const handleGet = () => { data.forEach (async (item) => { await getData(item.plotID, item.projectID); }) } The fetch handler: const getData = async (plotID, projectID) => { fetch(`http://127.0.0.1:5000/api/plot-project/${plotID}/${projectID}`, { method : 'GET', mode: 'no-cors', headers : { 'Content-Type': 'application/json', 'Authorization': `Bearer ${token}` }}) .then(data => data.json()) .then((response) => { console.log('mapping', plotID, "to", projectID) console.log('request succeeded with JSON response', response) }).catch(function (error) { console.log('mapping', plotID, "to", projectID) console.log('no mapping yet') }); }
This error happens when you try to enter the context manager of the same connection while it has already been entered. To fix that issue you have a few options: Use the same connection without a context manager and commit or roll back the changes yourself. The code will look like: try: result = None with connection.cursor() as cursor: cursor.execute(PLOT_PROJECT_CHECK, (plot_id, project_id)) data = cursor.fetchall() if len(data) == 0: result = "No data found", 404 removed = data.pop(0) if len(data) > 1: for row in data: print(row[0]) cursor.execute(PLOT_PROJECT_DEACTIVATE, ('deleted', row[0], plot_id, project_id)) result = { "Remain": removed }, 200 connection.commit() return result except Exception as e: logging.error(f"An error occurred: {str(e)}") connection.rollback() return "Database error", 500 Use the same connection without a context manager and turn on autocommit. The code will look like: connection.autocommit = True (you can place that code where you are making the connection) with connection.cursor() as cursor: cursor.execute(PLOT_PROJECT_CHECK, (plot_id, project_id)) data = cursor.fetchall() if len(data) == 0: return "No data found", 404 removed = data.pop(0) if len(data) > 1: for row in data: print(row[0]) cursor.execute(PLOT_PROJECT_DEACTIVATE, ('deleted', row[0], plot_id, project_id)) return { "Remain": removed }, 200 Use a connection pool and enter the context manager of a separate connection for every request.
5
2
73,810,377
2022-9-22
https://stackoverflow.com/questions/73810377/how-to-save-an-uploaded-image-to-fastapi-using-python-imaging-library-pil
I am using image compression to reduce the image size. When submitting the post request, I am not getting any error, but can't figure out why the images do not get saved. Here is my code: @app.post("/post_ads") async def create_upload_files(title: str = Form(),body: str = Form(), db: Session = Depends(get_db), files: list[UploadFile] = File(description="Multiple files as UploadFile")): for file in files: im = Image.open(file.file) im = im.convert("RGB") im_io = BytesIO() im = im.save(im_io, 'JPEG', quality=50)
PIL.Image.open() takes as fp argumnet the following: fp – A filename (string), pathlib.Path object or a file object. The file object must implement file.read(), file.seek(), and file.tell() methods, and be opened in binary mode. Using a BytesIO stream, you would need to have something like the below (as shown in client side of this answer): Image.open(io.BytesIO(file.file.read())) However, you don't really have to use an in-memory bytes buffer, as you can get the actual file object using the .file attribute of UploadFile. As per the documentation: file: A SpooledTemporaryFile (a file-like object). This is the actual Python file that you can pass directly to other functions or libraries that expect a "file-like" object. Example - Saving image to disk: # ... from fastapi import HTTPException from PIL import Image @app.post("/upload") def upload(file: UploadFile = File()): try: im = Image.open(file.file) if im.mode in ("RGBA", "P"): im = im.convert("RGB") im.save('out.jpg', 'JPEG', quality=50) except Exception: raise HTTPException(status_code=500, detail='Something went wrong') finally: file.file.close() im.close() Example - Saving image to an in-memory bytes buffer (see this answer): # ... from fastapi import HTTPException from PIL import Image @app.post("/upload") def upload(file: UploadFile = File()): try: im = Image.open(file.file) if im.mode in ("RGBA", "P"): im = im.convert("RGB") buf = io.BytesIO() im.save(buf, 'JPEG', quality=50) # to get the entire bytes of the buffer use: contents = buf.getvalue() # or, to read from `buf` (which is a file-like object), call this first: buf.seek(0) # to rewind the cursor to the start of the buffer except Exception: raise HTTPException(status_code=500, detail='Something went wrong') finally: file.file.close() buf.close() im.close() For more details and code examples on how to upload files/images using FastAPI, please have a look at this answer and this answer. Also, please have a look at this answer for more information on defining your endpoint with def or async def.
3
8
73,856,901
2022-9-26
https://stackoverflow.com/questions/73856901/how-can-i-use-paramspec-with-method-decorators
I was following the example from PEP 0612 (last one in the Motivation section) to create a decorator that can add default parameters to a function. The problem is, the example provided only works for functions but not methods, because Concate doesn't allow inserting self anywhere in the definition. Consider this example, as an adaptation of the one in the PEP: def with_request(f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]: def inner(*args: P.args, **kwargs: P.kwargs) -> R: return f(*args, request=Request(), **kwargs) return inner class Thing: @with_request def takes_int_str(self, request: Request, x: int, y: str) -> int: print(request) return x + 7 thing = Thing() thing.takes_int_str(1, "A") # Invalid self argument "Thing" to attribute function "takes_int_str" with type "Callable[[str, int, str], int]" thing.takes_int_str("B", 2) # Argument 2 to "takes_int_str" of "Thing" has incompatible type "int"; expected "str" Both attempts raise a mypy error because Request doesn't match self as the first argument of the method, like Concatenate said. The problem is that Concatenate doesn't allow you to append Request to the end, so something like Concatenate[P, Request] won't work either. This would be the ideal way to do this in my view, but it doesn't work because "The last parameter to Concatenate needs to be a ParamSpec". def with_request(f: Callable[Concatenate[P, Request], R]) -> Callable[P, R]: ... class Thing: @with_request def takes_int_str(self, x: int, y: str, request: Request) -> int: ... Any ideas?
There is surprisingly little about this online. I was able to find someone else's discussion of this over at python/typing's Github, which I distilled using your example. The crux of this solution is Callback Protocols, which are functionally equivalent to Callable, but additionally enable us to modify the return type of __get__ (essentially removing the self parameter) as is done for standard methods. from __future__ import annotations from typing import Any, Callable, Concatenate, Generic, ParamSpec, Protocol, TypeVar from requests import Request P = ParamSpec("P") R = TypeVar("R", covariant=True) class Method(Protocol, Generic[P, R]): def __get__(self, instance: Any, owner: type | None = None) -> Callable[P, R]: ... def __call__(self_, self: Any, *args: P.args, **kwargs: P.kwargs) -> R: ... def request_wrapper(f: Callable[Concatenate[Any, Request, P], R]) -> Method[P, R]: def inner(self, *args: P.args, **kwargs: P.kwargs) -> R: return f(self, Request(), *args, **kwargs) return inner class Thing: @request_wrapper def takes_int_str(self, request: Request, x: int, y: str) -> int: print(request) return x + 7 thing = Thing() thing.takes_int_str(1, "a") Since @Creris asked about the mypy error raised from the definition of inner, which is an apparent bug in mypy w/ ParamSpec and Callback Protocols as of mypy==0.991, here is an alternate implementation with no errors: from __future__ import annotations from typing import Any, Callable, Concatenate, ParamSpec, TypeVar from requests import Request P = ParamSpec("P") R = TypeVar("R", covariant=True) def request_wrapper(f: Callable[Concatenate[Any, Request, P], R]) -> Callable[Concatenate[Any, P], R]: def inner(self: Any, *args: P.args, **kwargs: P.kwargs) -> R: return f(self, Request(), *args, **kwargs) return inner class Thing: @request_wrapper def takes_int_str(self, request: Request, x: int, y: str) -> int: print(request) return x + 7 thing = Thing() thing.takes_int_str(1, "a")
8
14
73,829,894
2022-9-23
https://stackoverflow.com/questions/73829894/plotly-figure-how-to-get-the-number-of-rows-and-cols
I create a Plotly Figure instance this way: fig = go.Figure() fig = make_subplots(rows=3, cols=1, shared_xaxes=True, row_width=[0.3, 0.3, 0.4]) Lets assume that now I do not know how many rows and cols the Figure instance has. How can I obtain these values? For example, I expect something like this: rows = fig.get_rows_num() cols = fig.get_cols_num() I appreciate any help.
I had the same use case come up! I was happy to find you can do this: rows, cols = fig._get_subplot_rows_columns()
5
11
73,852,273
2022-9-26
https://stackoverflow.com/questions/73852273/openpyxl-not-found-in-exe-file-made-with-pyinstaller
I wrote a Python code using a virtual evn with pip, and I built it with pyinstaller to use it as executable, and it works. Now I'm moving to conda environment to use also geopandas, fiona and gdal. I can run it without any errors, but if I build the code into the .exe, this error raised: Traceback (most recent call last): File "main.py", line 5, in <module> File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module File "openpyxl\__init__.py", line 6, in <module> File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module File "openpyxl\workbook\__init__.py", line 4, in <module> File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module File "openpyxl\workbook\workbook.py", line 9, in <module> File "PyInstaller\loader\pyimod03_importers.py", line 495, in exec_module File "openpyxl\worksheet\_write_only.py", line 13, in <module> File "openpyxl\worksheet\_writer.py", line 23, in init openpyxl.worksheet._writer ModuleNotFoundError: No module named 'openpyxl.cell._writer' [12248] Failed to execute script 'main' due to unhandled exception! I tried also to reinstall openpyxl through conda, but nothing changed. The command line to build is: pyinstaller --onefile main_new.spec main.py and the spec file is: # -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis(['main.py'], pathex=[], binaries=[], datas=[('./inputs/*.csv', 'inputs')], hiddenimports=[ 'openpyxl', 'xlrd', 'xlswriter' ], hookspath=[], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='DESAT', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=True, disable_windowed_traceback=False, target_arch=None, codesign_identity=None, entitlements_file=None ) How can I solve this issue? Thank you!
The error refers to 'openpyxl.cell._writer', which is inside openpyxl; in fact, PyInstaller was actually able to find openpyxl. I checked and found that in the pip environment I was using openpyxl 3.0.9, while in the conda one I was using 3.0.10. After downgrading to 3.0.9 it just works, with no --hidden-import or anything else needed.
4
2
73,830,225
2022-9-23
https://stackoverflow.com/questions/73830225/init-got-an-unexpected-keyword-argument-cachedir-when-importing-top2vec
I keep getting this error when importing top2vec. TypeError Traceback (most recent call last) Cell In [1], line 1 ----> 1 from top2vec import Top2Vec File ~\AppData\Roaming\Python\Python39\site-packages\top2vec\__init__.py:1 ----> 1 from top2vec.Top2Vec import Top2Vec 3 __version__ = '1.0.27' File ~\AppData\Roaming\Python\Python39\site-packages\top2vec\Top2Vec.py:12 10 from gensim.models.phrases import Phrases 11 import umap ---> 12 import hdbscan 13 from wordcloud import WordCloud 14 import matplotlib.pyplot as plt File ~\AppData\Roaming\Python\Python39\site-packages\hdbscan\__init__.py:1 ----> 1 from .hdbscan_ import HDBSCAN, hdbscan 2 from .robust_single_linkage_ import RobustSingleLinkage, robust_single_linkage 3 from .validity import validity_index File ~\AppData\Roaming\Python\Python39\site-packages\hdbscan\hdbscan_.py:509 494 row_indices = np.where(np.isfinite(matrix).sum(axis=1) == matrix.shape[1])[0] 495 return row_indices 498 def hdbscan( 499 X, 500 min_cluster_size=5, 501 min_samples=None, 502 alpha=1.0, 503 cluster_selection_epsilon=0.0, 504 max_cluster_size=0, 505 metric="minkowski", 506 p=2, 507 leaf_size=40, 508 algorithm="best", --> 509 memory=Memory(cachedir=None, verbose=0), 510 approx_min_span_tree=True, 511 gen_min_span_tree=False, 512 core_dist_n_jobs=4, 513 cluster_selection_method="eom", 514 allow_single_cluster=False, 515 match_reference_implementation=False, 516 **kwargs 517 ): 518 """Perform HDBSCAN clustering from a vector array or distance matrix. 519 520 Parameters (...) 672 Density-based Cluster Selection. arxiv preprint 1911.02282. 673 """ 674 if min_samples is None: TypeError: __init__() got an unexpected keyword argument 'cachedir' Python version: 3.9.7 (64-bit) Have installed MSBuild No errors when pip installing this package Does anyone know a solution to this problem or experienced a similar problem?
UPDATE 12 November 2022: There is new release (ver. 0.8.29) of hdbscan from 31 Oct. 2022 that fix the issue. See my original answer for more details. Original Answer: It looks like you are using latest (as of 23 Sept 2022) versions of hdbscan and joblib packages available on PyPI. cachedir was removed from joblib.Memory in commit on 2 Feb 2022 as depreciated. The latest version on PyPi is ver. 1.2.0 released on Sep 16, 2022, i.e. it incorporate this change The relevant part of hdbscan source code on GitHub was updated on 16 Sept 2022. Unfortunately the latest (as of 23 Sept 2022) hdbscan release on PyPi is ver. 0.8.28 released on Feb 8, 2022 and still not updated. It still use memory=Memory(cachedir=None, verbose=0) One possible solution is to force using joblib version before cachedir was removed - ver. 1.1.0 as of Oct 7, 2021. However note my edits below. UPDATE 29 Sept 2022: There are open issues on hdbscan repo (#563) and (#565). Note there is vulnerability CVE-2022-21797 when using joblib < 1.2.0 UPDATE 12 November 2022: There is new release (ver. 0.8.29) of hdbscan from 31 Oct. 2022.
12
21
73,823,743
2022-9-23
https://stackoverflow.com/questions/73823743/attributeerror-module-rest-framework-serializers-has-no-attribute-nullboolea
After upgrading djangorestframework from djangorestframework==3.13.1 to djangorestframework==3.14.0 the code from rest_framework.serializers import NullBooleanField Throws AttributeError: module 'rest_framework.serializers' has no attribute 'NullBooleanField' Reading the release notes I don't see a deprecation. Where did it go?
For what it's worth, there's a deprecation warning in the previous version, which also suggests a fix: The NullBooleanField is deprecated and will be removed starting with 3.14. Instead use the BooleanField field and set allow_null=True which does the same thing.
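A minimal sketch of the replacement (the serializer and field names here are only illustrative):

from rest_framework import serializers

class ExampleSerializer(serializers.Serializer):
    # BooleanField(allow_null=True) is the documented replacement for NullBooleanField
    flag = serializers.BooleanField(allow_null=True, required=False)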
7
4
73,860,427
2022-9-26
https://stackoverflow.com/questions/73860427/how-to-use-the-python-packaging-library-with-a-custom-regex
I'm trying to build and tag artifacts, the environment name gets appended at the end of the release, e.g.: 1.0.0-stg or 1.0.0-sndbx, none of them are PEP-440 compliance, raising the following error message: raise InvalidVersion(f"Invalid version: '{version}'") packaging.version.InvalidVersion: Invalid version: '1.0.0-stg' Using the packaging library I know I can access the regex by doing: from packaging import version version.VERSION_PATTERN However, my question is how can I customize the regex rule also to support other environments?
The original value of the version.VERSION_PATTERN has used regex groups to separate the types of the version. And if you see the pattern, it defines three types of releases: pre-release, post-release, and dev release. v? (?: (?:(?P<epoch>[0-9]+)!)? # epoch (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment (?P<pre> # pre-release [-_\.]? (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview)) [-_\.]? (?P<pre_n>[0-9]+)? )? (?P<post> # post release (?:-(?P<post_n1>[0-9]+)) | (?: [-_\.]? (?P<post_l>post|rev|r) [-_\.]? (?P<post_n2>[0-9]+)? ) )? (?P<dev> # dev release [-_\.]? (?P<dev_l>dev) [-_\.]? (?P<dev_n>[0-9]+)? )? ) (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version So you should add the stg and sndbx environment names next to the particular group name. In case those values do not present in the list, the version will always be counted as a local version and not a release version. If those are pre-releases, you should add them to the pre_l group. (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview|stg|sndbx)) In case those are post-release environments, the post_l should be changed, becoming: (?P<post_l>post|rev|r|stg|sndbx) And if those are dev release environments, the dev_l group should be updated in the same way. (?P<dev_l>dev|stg|sndbx) Once you have decided on the type of release and set values to the particular groups, you can override the version.VERSION_PATTERN and see the result. from packaging import version version.VERSION_PATTERN = r""" v? (?: (?:(?P<epoch>[0-9]+)!)? # epoch (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment (?P<pre> # pre-release [-_\.]? (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview)) [-_\.]? (?P<pre_n>[0-9]+)? )? (?P<post> # post release (?:-(?P<post_n1>[0-9]+)) | (?: [-_\.]? (?P<post_l>post|rev|r) [-_\.]? (?P<post_n2>[0-9]+)? ) )? (?P<dev> # dev release [-_\.]? (?P<dev_l>dev|stg|sndbx) [-_\.]? (?P<dev_n>[0-9]+)? )? ) (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version """ NOTE: It is not required to add both identifiers of environments into one group. So one of them can be added to another group.
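To check that a tag such as 1.0.0-stg is now accepted, you can compile the customized pattern roughly the way packaging does internally (anchored, verbose, case-insensitive). This is only a quick sanity-check sketch:

import re
from packaging import version  # with version.VERSION_PATTERN overridden as above

custom = re.compile(r"^\s*" + version.VERSION_PATTERN + r"\s*$",
                    re.VERBOSE | re.IGNORECASE)

print(bool(custom.match("1.0.0-stg")))    # True once stg is in one of the groups
print(bool(custom.match("1.0.0-sndbx")))  # True
print(bool(custom.match("1.0.0-prod")))   # False: prod was not added to the pattern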
4
2
73,820,642
2022-9-22
https://stackoverflow.com/questions/73820642/always-defer-a-field-in-django
How do I make a field on a Django model deferred for all queries of that model without needing to put a defer on every query? Research This was requested as a feature in 2014 and rejected in 2022. Baring such a feature native to Django, the obvious idea is to make a custom manager like this: class DeferedFieldManager(models.Manager): def __init__(self, defered_fields=[]): super().__init__() self.defered_fields = defered_fields def get_queryset(self, *args, **kwargs): return super().get_queryset(*args, **kwargs ).defer(*self.defered_fields) class B(models.Model): pass class A(models.Model): big_field = models.TextField(null=True) b = models.ForeignKey(B, related_name="a_s") objects = DeferedFieldManager(["big_field"]) class C(models.Model): a = models.ForeignKey(A) class D(models.Model): a = models.OneToOneField(A) class E(models.Model): a_s = models.ManyToManyField(A) However, while this works for A.objects.first() (direct lookups), it doesn't work for B.objects.first().a_s.all() (one-to-manys), C.objects.first().a (many-to-ones), D.objects.first().a (one-to-ones), or E.objects.first().a_s.all() (many-to-manys). The thing I find particularly confusing here is that this is the default manager for my object, which means it should also be the default for the reverse lookups (the one-to-manys and many-to-manys), yet this isn't working. Per the Django docs: By default the RelatedManager used for reverse relations is a subclass of the default manager for that model. An easy way to test this is to drop the field that should be deferred from the database, and the code will only error with an OperationalError: no such column if the field is not properly deferred. To test, do the following steps: Data setup: b = B.objects.create() a = A.objects.create(b=b) c = C.objects.create(a=a) d = D.objects.create(a=a) e = E.objects.create() e.a_s.add(a) Comment out big_field manage.py makemigrations manage.py migrate Comment in big_field Run tests: from django.db import OperationalError def test(test_name, f, attr=None): try: if attr: x = getattr(f(), attr) else: x = f() assert isinstance(x, A) print(f"{test_name}:\tpass") except OperationalError: print(f"{test_name}:\tFAIL!!!") test("Direct Lookup", A.objects.first) test("One-to-Many", B.objects.first().a_s.first) test("Many-to-One", C.objects.first, "a") test("One-to-One", D.objects.first, "a") test("Many-to-Many", E.objects.first().a_s.first) If the tests above all pass, the field has been properly deferred. I'm currently getting: Direct Lookup: pass One-to-Many: FAIL!!! Many-to-One: FAIL!!! One-to-One: FAIL!!! Many-to-Many: FAIL!!! Partial Answer @aaron's answer solves half of the failing cases. If I change A to have: class Meta: base_manager_name = 'objects' I now get the following from tests: Direct Lookup: pass One-to-Many: FAIL!!! Many-to-One: pass One-to-One: pass Many-to-Many: FAIL!!! This still does not work for the revere lookups.
Set Meta.base_manager_name to 'objects'. class A(models.Model): big_field = models.TextField(null=True) b = models.ForeignKey(B, related_name="a_s") objects = DeferedFieldManager(["big_field"]) class Meta: base_manager_name = 'objects' From https://docs.djangoproject.com/en/4.1/topics/db/managers/#using-managers-for-related-object-access: Using managers for related object access By default, Django uses an instance of the Model._base_manager manager class when accessing related objects (i.e. choice.question), not the _default_manager on the related object. This is because Django needs to be able to retrieve the related object, even if it would otherwise be filtered out (and hence be inaccessible) by the default manager. If the normal base manager class (django.db.models.Manager) isn’t appropriate for your circumstances, you can tell Django which class to use by setting Meta.base_manager_name. Reverse Many-to-One and Many-to-Many managers The "One-To-Many" case in the question is a Reverse Many-To-One. Django subclasses the manager class to override the behaviour, and then instantiates it — without the defered_fields argument passed to __init__ since django.db.models.Manager and its subclasses are not expected to have parameters. Thus, you need something like: def make_defered_field_manager(defered_fields): class DeferedFieldManager(models.Manager): def get_queryset(self, *args, **kwargs): return super().get_queryset(*args, **kwargs).defer(*defered_fields) return DeferedFieldManager() Usage: # objects = DeferedFieldManager(["big_field"]) objects = make_defered_field_manager(["big_field"])
9
4
73,864,714
2022-9-27
https://stackoverflow.com/questions/73864714/python-can-bring-window-to-front-but-cannot-set-focus-win32gui-setforegroundwi
My program pops up a window every time the user presses F2 (in any application). I'm using pynput to capture the F2 button (works ok) I'm using tkinter to create the popup window (works ok) I'm using win32gui.SetForegroundWindow(windowHandel) to bring the tkinter window to the front and set the focus. And there is the problem. If the python windows is selected when I press F2, everything works ok, and the tkinter window both moves to front and gets focus. BUT - if any other window is selected when I press F2, the tkinter window does moves to the front, but it is not selected (i.e. focused). Here is the relevant section from the code (find full code below): while not windowFound and counter < MAX_TRIES_TO_FIND_THE_FLIPPER_WINDOW: try: windowHandel = win32gui.FindWindow(None, windowName) win32gui.SetForegroundWindow(windowHandel) except: windowFound = False else: print("Success, Window found on the " + str(counter + 1) + " tries") windowFound = True After looking for an answer for a while, I found someone saying that this can be solved by using win32process. So I tried adding: windowHandelID, _ = win32process.GetWindowThreadProcessId(windowHandel) win32process.AttachThreadInput(win32api.GetCurrentThreadId(), windowHandelID, True) win32gui.SetFocus(windowHandel) Yet, it resulted in the same behavior. Here below is the full (simplified, without exit conditions) code. Try pressing F2 while pythong is focused. And then try pressing F2 while any other window (e.g. notepad) is focused. You'll see that in one case you can just start writing and the tkinter window will receive the input while in the other case, you'll still have to click the window. I'd appreciate any help or suggestions. import pyautogui # For keyboard shortcuts and moving the cursor and selecting the window import time # For the delay function from pynput import keyboard # For catching keyboard strokes import tkinter # GUI import threading # For Threading import win32gui # For Setting Focus on the Flipper Window import win32process import win32api # Resetting Variables / Settings start_flipping_text_sequence = False ContinueThreads = True SearchForFlipperWindow = False window_name = "tk" MAX_TRIES_TO_FIND_THE_FLIPPER_WINDOW = 10 # This function runs in a separate thread def selectFlipperWindow(windowName): # Since the thread runs constantly, it will only start looking for the flipper window when this variable is True global SearchForFlipperWindow # How many loops should the program go through before it gives up on finding the window global MAX_TRIES_TO_FIND_THE_FLIPPER_WINDOW # While program was not ended while True: # This is False, unless F2 is pressed if SearchForFlipperWindow: # Did the program find the flipper window windowFound = False counter = 0 while not windowFound and counter < MAX_TRIES_TO_FIND_THE_FLIPPER_WINDOW: try: windowHandel = win32gui.FindWindow(None, windowName) win32gui.SetForegroundWindow(windowHandel) except: windowFound = False else: print("Success, Window found on the " + str(counter + 1) + " tries") windowHandelID, _ = win32process.GetWindowThreadProcessId(windowHandel) win32process.AttachThreadInput(win32api.GetCurrentThreadId(), windowHandelID, True) win32gui.SetFocus(windowHandel) windowFound = True counter += 1 time.sleep(0.1) SearchForFlipperWindow = False time.sleep(0.1) # Execute functions based on the clicked key def on_press(key): global start_flipping_text_sequence # If the user pressed the F2 key if key == keyboard.Key.f2: start_flipping_text_sequence = True def okButton(): root.destroy() def 
enter(event): okButton() # Assigning event to function listener = keyboard.Listener(on_press=on_press) # initiating listener listener.start() # Start a thread for searching for the flipper window selectWindowThread = threading.Thread(target=selectFlipperWindow, args=(window_name,)) selectWindowThread.start() while 1 == 1: time.sleep(.05) if start_flipping_text_sequence: SearchForFlipperWindow = True root = tkinter.Tk() tk_window_input = tkinter.Entry(root, width=100) tk_window_input.pack(padx=20) tk_window_input.focus() # Binds the OK button to the okButton function above tk_window_ok = tkinter.Button(root, width=20, text="OK", command=okButton) tk_window_ok.pack(pady=20) # Binds the "Enter" keyboard key to the "enter" event above tk_window_input.bind('<Return>', enter) # the main looper of the tkinter window # runs until root.destroy() to executed above root.mainloop() start_flipping_text_sequence = False ```
What you see is an intentional restriction in Windows. The restriction is described by Raymond Chen in the article Foreground activation permission is like love: You can’t steal it, it has to be given to you. The Remarks section of the SetForegroundWindow documentation gives more technical details about the restriction. There are ways to be exempt from the restriction. One good way is described by Raymond Chen in the article Pressing a registered hotkey gives you the foreground activation love. The following code shows one more, somewhat strange, way to bypass the restriction: kbd.press(keyboard.Key.alt) try: win32gui.SetForegroundWindow(windowHandel) finally: kbd.release(keyboard.Key.alt) where kbd was created like this: from pynput.keyboard import Controller kbd = Controller() Here is an explanation of why this workaround works: link. A good way to get rid of this workaround may be to require the user to press Alt-F2 in order to switch to your application. Good luck with coding ;-)
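Putting the two snippets together, a self-contained sketch of the workaround could look like this (window_name is whatever title the tkinter window ends up with, and hwnd plays the role of windowHandel from the question):

from pynput.keyboard import Controller, Key
import win32gui

kbd = Controller()

def focus_window(window_name: str) -> None:
    hwnd = win32gui.FindWindow(None, window_name)
    kbd.press(Key.alt)  # holding Alt makes Windows grant foreground activation
    try:
        win32gui.SetForegroundWindow(hwnd)
    finally:
        kbd.release(Key.alt)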
3
5
73,838,387
2022-9-24
https://stackoverflow.com/questions/73838387/pandas-add-dataframes-columns-names-to-rows-after-join-procedure
I have the following dataframe: df1 = pd.DataFrame({'ID' : ['T1002.', 'T5006.', 'T5007.'], 'Parent': ['Stay home.', "Stay home.","Stay home."], 'Child' : ['2Severe weather.', "5847.", "Severe weather."]}) ID Parent Child 0 T1002. Stay home. 2Severe weather. 1 T5006. Stay home. 5847. 2 T5007. Stay home. Severe weather. I want to add the two columns into one and also add the columns' names into the rows. I want also the columns' names to be in bold. Expected outcome: (I cannot make bold the columns names ID, etc) Joined_columns() 0 ID: T1002. Parent: Stay home. Child: 2Severe weather. 1 ID: T5006. Parent: Stay home. Child: 5847. 2 ID: T5007. Parent: Stay home. Child: Severe weather. The join is accomplished with the following code: df1_final=df1.stack().groupby(level=0).apply(' '.join).to_frame(0) But I am not sure how to go to the end. Any ideas?
I want to add the two columns into one and also add the columns' name into the rows. I want also the columns names to be in bold. i would like to have the bold text on Excel. applying formats to substrings within cells in an excel worksheet requires writing a rich string It is necessary to construct a list of rich strings and then write them out to the worksheet iteratively. The difficulty with this approach is that you have to keep track of the index offset with respect to the rest of the data frame. import pandas as pd import xlsxwriter # create a workbook / worksheet and format book = xlsxwriter.Workbook('text.xlsx') sheet = book.add_worksheet('joined-columns') bold = book.add_format({'bold': True}) # generate the formatted rich string df1 = pd.DataFrame({'ID': ['T1002.', 'T5006.', 'T5007.'], 'Parent': ['Stay home.', "Stay home.","Stay home."], 'Child': ['2Severe weather.', "5847.", "Severe weather."]}) # the list of string fragments is generated using # a double list comprehension. # this flattens the list of lists, where # each inner list is [bold, 'key: ', 'value', ' ']. # the trailing space is removed through `[:-1]` data = df1.apply( lambda row: [a for k, v in row.to_dict().items() for a in [bold, f'{k}: ', v, ' '] ][:-1], axis=1 ).to_list() # write the title & rich strings # the list of string fragments has to be unpacked with `*` # because `write_rich_string` accepts variable # number or arguments after the cell is specified. sheet.write(0, 0, 'Joined_columns()') for idx, cell in enumerate(data): sheet.write_rich_string(idx + 1, 0, *cell) book.close()
3
4
73,840,143
2022-9-24
https://stackoverflow.com/questions/73840143/in-pytorch-how-do-i-update-a-neural-network-via-the-average-gradient-from-a-lis
I have a toy reinforcement learning project based on the REINFORCE algorithm (here's PyTorch's implementation) that I would like to add batch updates to. In RL, the "target" can only be created after a "prediction" has been made, so standard batching techniques do not apply. As such, I accrue losses for each episode and append them to a list l_losses where each item is a zero-dimensional tensor. I hold off on calling .backward() or optimizer.step() until a certain number of episodes have passed in order to create a sort of pseudo batch. Given this list of losses, how do I have PyTorch update the network based on their average gradient? Or would updating based on the average gradient be the same as updating on the average loss (I seem to have read otherwise elsewhere)? My current method is to create a new tensor t_loss from torch.stack(l_losses), and then run t_loss = t_loss.mean(), t_loss.backward(), optimizer.step(), and zero the gradient, but I'm unsure if this is equivalent to my intents? It's also unclear to me if I should have been running .backward() on each individual loss instead of concatenating them in a list (but holding on the .step() part until the end?
Gradient is a linear operation, so the gradient of the average is the same as the average of the gradient. Take some example data import torch a = torch.randn(1, 4, requires_grad=True); b = torch.randn(5, 4); You could store all the losses and compute the mean as you are doing, a.grad = None x = (a * b).mean(axis=1) x.mean().backward() # gradient of the mean print(a.grad) Or, at every iteration, compute the back propagation to get the contribution of that loss to the gradient. a.grad = None for bi in b: (a * bi / len(b)).mean().backward() print(a.grad) Performance I don't know the internal details of the pytorch backward implementation, but I can tell that (1) the graph is destroyed by default after the backward pass, unless you pass retain_graph=True or create_graph=True to backward(); (2) the gradient is not kept except for leaf tensors, unless you specify retain_grad; (3) if you evaluate a model twice using different inputs, you can perform the backward pass on the individual variables, which means that they have separate graphs. This can be verified with the following code. a.grad = None # compute all the variables in advance r = [ (a * bi / len(b)).mean() for bi in b ] for ri in r: # This depends on the graph of r[i], but the graph of r[i-1] # was already destroyed; it means that the r[i] graph is independent # of the r[i-1] graph, hence they require separate memory. ri.backward() # this will remove the graph of ri print(a.grad) So if you update the gradient after each episode it will accumulate the gradient in the leaf nodes; that's all the information you need for the next optimization step, so you can discard that loss, freeing up resources for further computations. I would expect a memory usage reduction, potentially even faster execution if the memory allocation can efficiently reuse the just-deallocated pages for the next allocation.
4
3
73,854,849
2022-9-26
https://stackoverflow.com/questions/73854849/instantiate-threading-within-a-class
I have a class within a class and want to activate threading capabilities in the second class. Essentially, the script below is a reproducible template of my proper project. When I use @threading I get that showit is not iterable, so the tp.map thinks I do not have a list. However, when I run: if __name__ == '__main__': tp = ThreadPoolExecutor(5) print(tp.map(testit(id_str).test_first, id_int)) for values in tp.map(testit(id_str).test_first, id_int): values I get no issues, besides that I want the expected output to print out each number in the list. However, I wanted to achieve this within the class. Something like the following: from concurrent.futures import ThreadPoolExecutor from typing import List id_str = ['1', '2', '3', '4', '5'] id_int = [1, 2, 3, 4, 5] def threaded(fn, pools=10): tp = ThreadPoolExecutor(pools) def wrapper(*args): return tp.map(fn, *args) # returns Future object return wrapper class testit: def __init__(self, some_list: List[str]) -> None: self._ids = some_list print(self._ids) def test_first(self, some_id: List[int]) -> None: print(some_id) class showit(testit): def __init__(self, *args): super(showit, self).__init__(*args) @threaded def again(self): global id_int for values in self.test_first(id_int): print(values) a = showit(id_str) print(a.again()) Error: File "test_1.py", line 32, in <module> print(a.again()) File "test_1.py", line 10, in wrapper return tp.map(fn, *args) # returns Future object File "/Users/usr/opt/anaconda3/lib/python3.8/concurrent/futures/_base.py", line 600, in map fs = [self.submit(fn, *args) for args in zip(*iterables)] TypeError: 'showit' object is not iterable Expected output: 1 2 3 4 5
Considerations: Writing a decorator that takes an optional argument needs to be done a bit differently than the way you have done it. I am assuming you would like the threaded decorator to be able to support both functions and methods if that is not to difficult to accomplish. The decorated function/method can take as many positional and keyword arguments as you want (but at least one positional argument). When you call the wrapped function the last positional argument must be an iterable that will be used with the pool's map method. All positional and keyword arguments will then be passed to the function/method when it is invoked as a worker function by map except that the last positional argument will now be an element of the iterable. from concurrent.futures import ThreadPoolExecutor from functools import wraps, partial def threaded(poolsize=10): def outer_wrapper(fn): wraps(fn) def wrapper(*args, **kwargs): """ When the wrapper function is called, the last positional argument iterable will be an iterable to be used with the pool's map method. Then when map is invoking the original unwrapped function/method as a worker function, the last positional argument will be an element of that iterable. """ # We are being called with an iterable as the first argument" # Construct a worker function if necessary: with ThreadPoolExecutor(poolsize) as executor: worker = partial(fn, *args[:-1], **kwargs) return list(executor.map(worker, args[-1])) return wrapper return outer_wrapper iterable = [1, 2, 3] @threaded(len(iterable)) def worker1(x, y, z=0): """ y, the last positional argument, will be an iterable when the wrapper function is called or an element of the iterable when the actual, unwrapped function, worker1, is called. """ return x + (y ** 2) - z @threaded(3) def worker2(x): return x.upper() class MyClass: @threaded() def foo(self, x): return -x # last positional argument is the iterable: print(worker1(100, iterable, z=10)) print(worker2(['abcdef', 'ghijk', 'lmn'])) o = MyClass() print(o.foo([3, 2, 1])) Prints: [91, 94, 99] ['ABCDEF', 'GHIJK', 'LMN'] [-3, -2, -1]
3
2
73,825,424
2022-9-23
https://stackoverflow.com/questions/73825424/type-annotation-for-an-iterable-class
I've got a class that extends ElementTree.Element: import xml.etree.ElementTree as ET from typing import cast class MyElement(ET.Element): def my_method(self): print('OK') xml = '''<test> <sub/> <sub/> </test>''' root: MyElement = cast( MyElement, ET.fromstring(xml, parser=ET.XMLParser(target=ET.TreeBuilder(element_factory=MyElement)))) root.my_method() # this is fine for ch in root: ch.my_method() # PyCharm error message ??? This does work, however the last line is highlighted by PyCharm because it considers ch to be Element, not MyElement. How should I annotate MyElement to make it clear that when I iterate it, I get MyElement instances and not ET.Elements?
Annotate the __iter__ of MyElement as a method to return Iterator[MyElement]. There is almost no additional runtime overhead. Pycharm and mypy will both pass: from collections.abc import Callable, Iterator class MyElement(ET.Element): def my_method(self): print('OK') __iter__: Callable[..., Iterator['MyElement']]
4
5
73,819,773
2022-9-22
https://stackoverflow.com/questions/73819773/pandas-comparing-2-dataframes-without-iterating
Considering I have 2 dataframes as shown below (DF1 and DF2), I need to compare DF2 with DF1 such that I can identify all the Matching, Different, Missing values for all the columns in DF2 that match columns in DF1 (Col1, Col2 & Col3 in this case) for rows with same EID value (A, B, C & D). I do not wish to iterate on each row of a dataframe as it can be time-consuming. Note: There can around 70 - 100 columns. This is just a sample dataframe I am using. DF1 EID Col1 Col2 Col3 Col4 0 A a1 b1 c1 d1 1 B a2 b2 c2 d2 2 C None b3 c3 d3 3 D a4 b4 c4 d4 4 G a5 b5 c5 d5 DF2 EID Col1 Col2 Col3 0 A a1 b1 c1 1 B a2 b2 c9 2 C a3 b3 c3 3 D a4 b4 None Expected output dataframe EID Col1 Col2 Col3 New_Col 0 A a1 b1 c1 Match 1 B a2 b2 c2 Different 2 C None b3 c3 Missing in DF1 3 D a4 b4 c4 Missing in DF2
Firstly, you will need to filter df1 based on df2. new_df = df1.loc[df1['EID'].isin(df2['EID']), df2.columns] EID Col1 Col2 Col3 0 A a1 b1 c1 1 B a2 b2 c2 2 C None b3 c3 3 D a4 b4 c4 Next, since you have a big dataframe to compare, you can change both the new_df and df2 to numpy arrays. array1 = new_df.to_numpy() array2 = df2.to_numpy() Now you can compare it row-wise using np.where new_df['New Col'] = np.where((array1 == array2).all(axis=1),'Match', 'Different') EID Col1 Col2 Col3 New Col 0 A a1 b1 c1 Match 1 B a2 b2 c2 Different 2 C None b3 c3 Different 3 D a4 b4 c4 Different Finally, to convert the row with None value, you can use df.loc and df.isnull new_df.loc[new_df.isnull().any(axis=1), ['New Col']] = 'Missing in DF1' new_df.loc[df2.isnull().any(axis=1), ['New Col']] = 'Missing in DF2' EID Col1 Col2 Col3 New Col 0 A a1 b1 c1 Match 1 B a2 b2 c2 Different 2 C None b3 c3 Missing in DF1 3 D a4 b4 c4 Missing in DF2
3
5
73,816,296
2022-9-22
https://stackoverflow.com/questions/73816296/password-field-is-visible-and-not-encrypted-in-django-admin-site
So to use email as username I override the build-in User model like this (inspired by Django source code) models.py class User(AbstractUser): username = None email = models.EmailField(unique=True) objects = UserManager() USERNAME_FIELD = "email" REQUIRED_FIELDS = [] def __str__(self): return self.email admin.py @admin.register(User) class UserAdmin(admin.ModelAdmin): fieldsets = ( (None, {"fields": ("email", "password")}), (("Personal info"), {"fields": ("first_name", "last_name")}), ( ("Permissions"), { "fields": ( "is_active", "is_staff", "is_superuser", "groups", "user_permissions", ), }, ), (("Important dates"), {"fields": ("last_login", "date_joined")}), ) add_fieldsets = ( ( None, { "classes": ("wide",), "fields": ("email", "password1", "password2"), }, ), ) list_display = ("email", "is_active", "is_staff", "is_superuser") list_filter = ("is_active", "is_staff", "is_superuser") search_fields = ("email",) ordering = ("email",) filter_horizontal = ("groups", "user_permissions",) But this is how it looks like when I go to Admin site to change a user: Password is visible and not hashed and no link to change password form. Comparing to what it looks like on a default Django project: Password is not visible and there's a link to change password form So clearly I'm missing something but I can't figure out what it is.
You are not able to see the password in hashed state because the password field is a CharField which renders it as normal text field. In Django's admin side there's a field called ReadOnlyPasswordHashField in django.contrib.auth.forms which renders the password field to be in hashed state with password change link. Django's UserAdmin uses different form classes for user creation and updation. form = UserChangeForm add_form = UserCreationForm change_password_form = AdminPasswordChangeForm To edit user details UserAdmin uses form = UserChangeForm(source code) where the password field is set as ReadOnlyPasswordHashField(source code). class UserChangeForm(forms.ModelForm): password = ReadOnlyPasswordHashField( label=_("Password"), help_text=_( "Raw passwords are not stored, so there is no way to see this " "user’s password, but you can change the password using " '<a href="{}">this form</a>.' ), ) So, Just by inheriting from UserAdmin from django.contrib.auth.admin would make the password to be in hashed state with all the other essentials as seen in default admin site for users. OR you could simply import UserChangeForm from django.contrib.auth.forms and set form = UserChangeForm in custom UserAdmin from django.contrib.auth.forms import UserChangeForm,AdminPasswordChangeForm # code @admin.register(User) class UserAdmin(admin.ModelAdmin): # code form = UserChangeForm change_password_form = AdminPasswordChangeForm # code
3
3
73,871,485
2022-9-27
https://stackoverflow.com/questions/73871485/how-to-remove-pytest-no-header-no-summary-q-parameters-in-pycharm
I see the parameters for running a default-settings pytest configuration are as follows: Launching pytest with arguments payments/tests/test_edd_countries.py --no-header --no-summary -q in payments/tests I would like to remove all of those parameters specifically: --no-header --no-summary -q How can that be achieved given the Runtime configuration in Edit Configurations does not even show them?
The parameters --no-header --no-summary -q are added as an IDE setting (they aren't set in Run Configurations). They can be configured by going to File > Settings > Advanced Settings > Python and checking the option Pytest: do not add "--no-header --no-summary -q" as shown in the screenshot:
11
15
73,871,993
2022-9-27
https://stackoverflow.com/questions/73871993/does-python-cache-items-in-the-same-line-and-use-again-later
With this line: print(sum(tuple), len(tuple), sum(tuple)/len(tuple)) Will python cache sum(tuple) in the 0 index and use it in the average calculation (2 index)? Or will with calculate sum(tuple) again?
Python won't perform this optimization for you. You can see this by defining your own function instead of sum and observing the side effect: import functools def my_sum(x): print('my sum') return functools.reduce(lambda a, b: a + b, x) tup = (1, 2, 3) print(my_sum(tup), len(tup), my_sum(tup)/len(tup)) If you run this snippet, you'll see the phrase "my sum" printed twice, proving the call to my_sum isn't optimized out. Having said that, you could implement this optimization yourself by using functools' cache: import functools @functools.cache def my_sum(x): print('my sum') return functools.reduce(lambda a, b: a + b, x) tup = (1, 2, 3) print(my_sum(tup), len(tup), my_sum(tup)/len(tup)) (or by using the := operator as Andrej Kesely suggests in the comments)
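For completeness, the walrus-operator variant mentioned in the comments looks like this (a sketch): the sum is computed once, bound to a name inside the call, and reused for the average.

tup = (1, 2, 3)
print((s := sum(tup)), len(tup), s / len(tup))  # 6 3 2.0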
3
3
73,868,794
2022-9-27
https://stackoverflow.com/questions/73868794/pandas-normalize-values-by-group
I find it hard to explain with words what I want to achieve, so please don't judge me for showing a simple example instead. I have a table that looks like this: main_col some_metadata value this True 10 this False 3 that True 50 that False 10 other True 20 other False 5 I want to normalize this data separately for each case of main_col. For example, if we're to choose min-max normalization and scale it to range [0; 100], I want the output to look like this: main_col some_metadata value (normalized) this True 100 this False 30 that True 100 that False 20 other True 100 other False 25 Where for each case of main_col, the highest value is scaled to 100 and another value is scaled in respective proportion.
You can use groupby.transform('max') to get the max per group, then normalize in place: df['value'] /= df.groupby('main_col')['value'].transform('max').div(100) or: df['value'] *= df.groupby('main_col')['value'].transform('max').rdiv(100) output: main_col some_metadata value 0 this True 100.0 1 this False 30.0 2 that True 100.0 3 that False 20.0 4 other True 100.0 5 other False 25.0
5
5
73,858,980
2022-9-26
https://stackoverflow.com/questions/73858980/postgres-suddenly-raise-error-usr-lib-libpq-5-dylib-no-such-file
when I run Django project or any code related to Postgres : Referenced from: '/Users/mahmoudnasser/.local/share/virtualenvs/wyspp_backend-PwdII1PB/lib/python3.8/site-packages/psycopg2/_psycopg.cpython-38-darwin.so' Reason: tried: '/opt/homebrew/opt/postgresql/lib/libpq.5.dylib' (no such file), '/usr/local/lib/libpq.5.dylib' (no such file), '/usr/lib/libpq.5.dylib' (no such file) I tried many solutions online but none of them worked. Note: I use MacOS
To solve this problem just run the following command: sudo mkdir -p /usr/local/lib && sudo ln -s /opt/homebrew/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/lib/libpq.5.dylib
14
19
73,851,205
2022-9-26
https://stackoverflow.com/questions/73851205/django-q-objects-vs-python-code-better-performance
What would provide better performance using filtering conditions with Q in django ORM or simply fetching unfiltered objects and comparing in python. employee_qs = employee.objects.filter(state=States.ACTIVE, topic_assn__topic_id=instance.ss_topic_id).select_related('c_data').filter( Q(c_data__is_null=True) | Q(c_budget__gt=F('c_data__budget_spent') + offset_amt)) V/s employee_qs = employee.objects.filter(state=States.ACTIVE, topic_assn__topic_id=instance.ss_topic_id).select_related('c_data') for employee in employee_qs: if not employee.c_data or float(employee.budget)-employee.c_data.budget_spent > offset_amt: #do something... Which of these two choices would be better performance wise?
TLDR: Q objects will be faster. Why? Filtering done with Q objects is done on the SQL server (PostgreSQL, MariaDB, etc.). So two aspects should be considered: with Q objects, unfiltered data is not transferred from your database to your Django server (less data over the network makes things faster); Q objects are converted to a SQL WHERE clause, and depending on how your table is configured (for instance, the presence of indexes on the columns you are filtering on), the SQL server will use those indexes to filter the data, which is much faster than filtering on the Django application side. Also, SQL servers are written in languages that are much faster than Python.
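If you want to see this for yourself, you can print the SQL Django generates for the Q-object version; the WHERE clause it shows is what runs on the database server. The models and names here (employee, States, c_data, c_budget, offset_amt, instance) are the ones from the question, and the standard __isnull lookup is used, so treat this as a sketch:

from django.db.models import F, Q

qs = employee.objects.filter(
    state=States.ACTIVE,
    topic_assn__topic_id=instance.ss_topic_id,
).select_related('c_data').filter(
    Q(c_data__isnull=True) | Q(c_budget__gt=F('c_data__budget_spent') + offset_amt)
)
print(qs.query)  # the OR condition appears in the WHERE clause, so the database does the filtering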
4
11
73,848,347
2022-9-25
https://stackoverflow.com/questions/73848347/difference-in-time-between-successive-dataframe-rows
Similar to this question, I would like to compute the time difference between rows of a dataframe. Unlike that question however, the difference should be by groupby id. So foe example, this dataframe: df = pd.DataFrame( {'id': [6,6,6,6,6,10,10,10,10,10], 'timestamp': ['2016-04-01 00:04:00','2016-04-01 00:04:20','2016-04-01 00:04:30', '2016-04-01 00:04:35','2016-04-01 00:04:54','2016-04-30 13:04:59', '2016-04-30 13:05:00','2016-04-30 13:05:12','2016-04-30 13:05:20', '2016-04-30 13:05:51']} ) df.head() id timestamp 0 6 2016-04-01 00:04:00 1 6 2016-04-01 00:04:20 2 6 2016-04-01 00:04:30 3 6 2016-04-01 00:04:35 4 6 2016-04-01 00:04:54 5 10 2016-04-30 13:04:59 6 10 2016-04-30 13:05:00 7 10 2016-04-30 13:05:12 8 10 2016-04-30 13:05:20 9 10 2016-04-30 13:05:51 Then I want to create a column ΔT for the differences, like so: df['timestamp'] = pd.to_datetime(df['timestamp'], format='%Y-%m-%d %H:%M:%S') df['ΔT'] = df.groupby('id').index.to_series().diff().astype('timedelta64[s]') AttributeError: 'DataFrameGroupBy' object has no attribute 'index' Intended output: id timestamp ΔT 0 6 2016-04-01 00:04:00 0 1 6 2016-04-01 00:04:20 20 2 6 2016-04-01 00:04:30 10 3 6 2016-04-01 00:04:35 5 4 6 2016-04-01 00:04:54 19 5 10 2016-04-30 13:04:59 0 6 10 2016-04-30 13:05:00 1 7 10 2016-04-30 13:05:12 12 8 10 2016-04-30 13:05:20 8 9 10 2016-04-30 13:05:51 31
df.groupby('id')['timestamp'].diff().dt.total_seconds().fillna(0)
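Applied to the frame from the question (a complete sketch, reusing the to_datetime conversion shown there), this fills the ΔT column directly:

import pandas as pd

df = pd.DataFrame({
    'id': [6, 6, 6, 6, 6, 10, 10, 10, 10, 10],
    'timestamp': ['2016-04-01 00:04:00', '2016-04-01 00:04:20', '2016-04-01 00:04:30',
                  '2016-04-01 00:04:35', '2016-04-01 00:04:54', '2016-04-30 13:04:59',
                  '2016-04-30 13:05:00', '2016-04-30 13:05:12', '2016-04-30 13:05:20',
                  '2016-04-30 13:05:51']})
df['timestamp'] = pd.to_datetime(df['timestamp'], format='%Y-%m-%d %H:%M:%S')
df['ΔT'] = df.groupby('id')['timestamp'].diff().dt.total_seconds().fillna(0)
print(df)  # ΔT: 0, 20, 10, 5, 19, 0, 1, 12, 8, 31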
3
3
73,843,523
2022-9-25
https://stackoverflow.com/questions/73843523/implementing-from-scratch-cv2-warpperspective
I was making some experimentations with the OpenCV function cv2.warpPerspective when I decided to code it from scratch to better understand its pipeline. Though I followed (hopefully) every theoretical step, it seems I am still missing something and I am struggling a lot to understand what. Could you please help me? SRC image (left) and True DST Image (right) Output of the cv2.warpPerspective overlapped on the True DST # Invert the homography SRC->DST to DST->SRC hinv = np.linalg.inv(h) src = gray1 dst = np.zeros(gray2.shape) h, w = src.shape # Remap back and check the domain for ox in range(h): for oy in range(w): # Backproject from DST to SRC xw, yw, w = hinv.dot(np.array([ox, oy, 1]).T) # cv2.INTER_NEAREST x, y = int(xw/w), int(yw/w) # Check if it falls in the src domain c1 = x >= 0 and y < h c2 = y >= 0 and y < w if c1 and c2: dst[x, y] = src[ox, oy] cv2.imshow(dst + gray2//2) Output of my code PS: The output images are the overlapping of Estimated DST and the True DST to better highlight differences.
Your issue amounts to a typo. You mixed up the naming of your coordinates. The homography assumes (x,y,1) order, which would correspond to (j,i,1). Just use (x, y, 1) in the calculation, and (xw, yw, w) in the result of that (then x,y = xw/w, yw/w). The w factor mirrors the math when formulated properly. Avoid indexing into .shape. The indices don't "speak". Just do (height, width) = src.shape[:2] and use those. I'd recommend fixing the naming scheme, or defining it up top in a comment. I'd recommend sticking with x,y instead of i,j,u,v, and then extending those with prefixes/suffixes for the space they're in ("src/dst/in/out"). Perhaps something like ox,oy for iterating, just xw,yw,w for the homography result, which turns into x,y via division, and ix,iy (integerized) for sampling in the input? Then you can use dst[oy, ox] = src[iy, ix]
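To make the naming fix concrete, here is a rough sketch of the corrected backward-mapping loop (nearest-neighbour sampling, with (x, y) in column/row order as described above). It illustrates the convention rather than being a drop-in replacement for your script:

import numpy as np

def warp_inverse(src, h, out_shape):
    # Backward-map every destination pixel through the inverted homography.
    hinv = np.linalg.inv(h)
    out_h, out_w = out_shape[:2]
    src_h, src_w = src.shape[:2]
    dst = np.zeros((out_h, out_w), dtype=src.dtype)
    for oy in range(out_h):                                # oy: output row (y)
        for ox in range(out_w):                            # ox: output column (x)
            xw, yw, w = hinv.dot(np.array([ox, oy, 1.0]))  # homography expects (x, y, 1)
            ix, iy = int(xw / w), int(yw / w)              # cv2.INTER_NEAREST-style truncation
            if 0 <= ix < src_w and 0 <= iy < src_h:
                dst[oy, ox] = src[iy, ix]                  # NumPy indexing is [row, col]
    return dst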
6
4
73,843,943
2022-9-25
https://stackoverflow.com/questions/73843943/regex-for-alternating-numbers
I am trying to write a regex pattern for phone numbers consisting of 9 fixed digits. I want to identify numbers that have two numbers alternating for four times such as 5XYXYXYXY I used the below sample number = 561616161 I tried the below pattern but it is not accurate ^5(\d)(?=\d\1).+ can someone point out what i am doing wrong?
I would use: ^(?=\d{9}$)\d*(\d)(\d)(?:\1\2){3}\d*$ Demo Here is an explanation of the pattern: ^ from the start of the number (?=\d{9}$) assert exactly 9 digits \d* match optional leading digits (\d) capture a digit in \1 (\d) capture another digit in \2 (?:\1\2){3} match the XY combination 3 more times \d* more optional digits $ end of the number
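A quick way to try the pattern in Python (just a sketch using the standard re module):

import re

pattern = re.compile(r'^(?=\d{9}$)\d*(\d)(\d)(?:\1\2){3}\d*$')

print(bool(pattern.match('561616161')))   # True: 61 alternates four times after the leading 5
print(bool(pattern.match('561616199')))   # False: the alternation breaks at the end
print(bool(pattern.match('5616161610')))  # False: 10 digits, rejected by the lookahead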
3
5
73,843,521
2022-9-25
https://stackoverflow.com/questions/73843521/awaiting-multiple-async-functions-in-sequence
I have been learning and exploring Python asyncio for a while. Before starting this journey I have read loads of articles to understand the subtle differences between multithreading, multiprocessing, and asyncio. But, as far as I know, I missed something on about a fundamental issue. I'll try to explain what I mean by pseudocodes below. import asyncio import time async def io_bound(): print("Running io_bound...") await asyncio.sleep(3) async def main(): start = time.perf_counter() result_1 = await io_bound() result_2 = await io_bound() end = time.perf_counter() print(f"Finished in {round(end - start, 0)} second(s).") asyncio.run(main()) For sure, it will take around 6 seconds because we called the io_bound coroutine directly twice and didn't put them to the event loop. This also means that they were not run concurrently. If I would like to run them concurrently I will have to use asyncio.gather(*tasks) feature. I run them concurrently it would only take 3 seconds for sure. Let's imagine this io_bound coroutine is a coroutine that queries a database to get back some data. This application could be built with FastAPI roughly as follows. from fastapi import FastAPI app = FastAPI() @app.get("/async-example") async def async_example(): result_1 = await get_user() result_2 = await get_countries() if result_1: return {"result": result_2} return {"result": None} Let's say the get_user and get_countries methods take 3 seconds each and have asynchronous queries implemented correctly. My questions are: Do I need to use asyncio.gather(*tasks) for these two database queries? If necessary, why? If not, why? What is the difference between io_bound, which I call twice, and get_user and get_countries, which I call back to back, in the above example? In the io_bound example, if I did the same thing in FastAPI, wouldn't it take only 6 seconds to give a response back? If so, why not 3 seconds? In the context of FastAPI, when would be the right time to use asyncio.gather(*tasks) in an endpoint?
Do I need to use asyncio.gather(*tasks) for these two database queries? If necessary, why? If not, why? Do you need to? Nope, what you have done works. The request will take 6 seconds but will not be blocking so if you had another request coming in, FastAPI can process the two requests at the same time. I.e. two requests coming in at the same time will take 6 seconds still, rather than 12 seconds. If the two functions get_user() and get_countries() are independant of eachother, then you can get the run the functions concurrently using either asyncio.gather or any of the many other ways of doing it in asyncio, which will mean the request will now take just 3 seconds. For example: async def main(): start = time.perf_counter() result_1_task = asyncio.create_task(io_bound()) result_2_task = asyncio.create_task(io_bound()) result_1 = await result_1_task result_2 = await result_2_task end = time.perf_counter() print(f"Finished in {round(end - start, 0)} second(s).") or async def main_2(): start = time.perf_counter() results = await asyncio.gather(io_bound(), io_bound()) end = time.perf_counter() print(f"Finished in {round(end - start, 0)} second(s).") What is the difference between io_bound, which I call twice, and get_user and get_countries, which I call back to back, in the above example? assuming get_user and get_countries just call io_bound, nothing. In the io_bound example, if I did the same thing in FastAPI, wouldn't it take only 6 seconds to give a response back? If so, why not 3 seconds? It will take 6 seconds. FastAPI doesn't do magic to change the way your functions work, it just allows you to create a server that can easily run asynchronous functions. In the context of FastAPI, when would be the right time to use asyncio.gather(*tasks) in an endpoint? When you want run two or more asyncronous functions concurrently. This is the same, regardless of if you are using FastAPI or any other asynchronous code in python.
4
3
73,842,710
2022-9-25
https://stackoverflow.com/questions/73842710/count-the-group-occurrences
I have dataframe id webpage 1 google 2 bing 3 google 4 google 5 yahoo 6 yahoo 7 google 8 google Would like to count the groups like id webpage count 1 google 1 2 bing 2 3 google 3 4 google 3 5 yahoo 4 6 yahoo 4 7 google 5 8 google 5 I have tried using the cumcount or ngroup when using groupby it is grouping all occurrence.
I believe you need to cumsum() over the state transitions. Every time webpage differs from the previous row you increase your count. df["count"] = (df.webpage != df.webpage.shift()).cumsum()
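A complete example built from the data in the question (sketch):

import pandas as pd

df = pd.DataFrame({'id': range(1, 9),
                   'webpage': ['google', 'bing', 'google', 'google',
                               'yahoo', 'yahoo', 'google', 'google']})
df['count'] = (df['webpage'] != df['webpage'].shift()).cumsum()
print(df)  # count: 1, 2, 3, 3, 4, 4, 5, 5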
3
4
73,842,214
2022-9-25
https://stackoverflow.com/questions/73842214/how-to-convert-comma-separated-string-to-list-that-contains-comma-in-items-in-py
I have a string for items separated by comma. Each item is surrounded by quotes ("), but the items can also contain commas (,). So using split(',') creates problems. How can I split this text properly in Python? An example of such string "coffee", "water, hot" What I want to achieve ["coffee", "water, hot"]
import ast s = '"coffee", "water, hot"' result = ast.literal_eval(f'[{s}]') print(result)
4
2
73,837,417
2022-9-24
https://stackoverflow.com/questions/73837417/freeze-panes-first-two-rows-and-column-with-openpyxl
Trying to freeze the first two rows and first column with openpyxl, however, whenever doing such Excel says that the file is corrupted and there is no freeze. Current code: workbook = openpyxl.load_workbook(path) worksheet = workbook[first_sheet] freeze_panes = Pane(xSplit=2000, ySplit=3000, state="frozen", activePane="bottomRight") worksheet.sheet_view.pane = freeze_panes Took a look at the documentation, however, there is little explanation on parametere setting. Desired output: Came across this answer, however, it fits a specific use case, hence, wanted to make a general question for future reference: How to split Excel screen with Openpyxl?
To freeze the first two rows and first column, use the sample code below... ws.freeze_panes works. Note that, like you would do in excel, select the cell above and left of which you want to freeze. So, in your case, the cell should be B3. Hope this is what you are looking for. import openpyxl wb=openpyxl.load_workbook('Sample.xlsx') ws=wb['Sheet1'] mycell = ws['B3'] ws.freeze_panes = mycell wb.save('Sample.xlsx')
3
4
73,836,651
2022-9-24
https://stackoverflow.com/questions/73836651/python-drop-columns-in-string-range
I want to drop all columns whose name starts by 'var' and whose content is 'None'. Sample of my dataframe: id var1 var2 newvar1 var3 var4 newvar2 1 x y dt None f None Dataframe that I want: id var1 var2 newvar1 var4 newvar2 1 x y dt f None I want to do this for several files and I do not know how many 'var' I have in all of them. My dataframe has only one row. Here is the code that I tried: for i in range(1,300): df.drop(df.loc[df['var'+str(i)] == 'None' ].index, inplace=True) Error obtained: KeyError: 'var208' I also tried: df.drop(df.loc[df['var'+str(i) for i in range(1,300)] == 'None'].index, inplace=True) SyntaxError: invalid syntax Could anyone help me improve my code?
Your error occurs because you have no column with that name. You can use df.columns to get a list of available columns, check if the name .startswith("var") and use df[col].isnull().all() to check if all values are None. import pandas as pd df = pd.DataFrame(columns=["id", "var1", "var2", "newvar1", "var3", "var4", "newvar2"], data=[[1, "x", "y", "dt", None, "f", None]]) df.drop([col for col in df.columns if col.startswith("var") and df[col].isnull().all()], axis=1, inplace=True)
3
3
73,805,879
2022-9-21
https://stackoverflow.com/questions/73805879/poetry-installation-failing-on-mac-os-says-should-use-symlinks
I am trying to install poetry using the following command curl -sSL https://install.python-poetry.org | python3 - but it is failing with the following exception: Exception: This build of python cannot create venvs without using symlinks Below is the text detailing the error Retrieving Poetry metadata # Welcome to Poetry! This will download and install the latest version of Poetry, a dependency and package manager for Python. It will add the `poetry` command to Poetry's bin directory, located at: /Users/DaftaryG/.local/bin You can uninstall at any time by executing this script with the --uninstall option, and these changes will be reverted. Installing Poetry (1.2.1): Creating environment Traceback (most recent call last): File "<stdin>", line 940, in <module> File "<stdin>", line 919, in main File "<stdin>", line 550, in run File "<stdin>", line 571, in install File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 117, in __enter__ return next(self.gen) File "<stdin>", line 643, in make_env File "<stdin>", line 629, in make_env File "<stdin>", line 309, in make File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/venv/__init__.py", line 66, in __init__ self.symlinks = should_use_symlinks(symlinks) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/venv/__init__.py", line 31, in should_use_symlinks raise Exception("This build of python cannot create venvs without using symlinks") Exception: This build of python cannot create venvs without using symlinks I already have symlinks installed so not having that does not seem to be the problem. Would anyone know the cause of this error?
Not the best solution, but you can install it using Homebrew, if you have it. That's what I did. brew install poetry
35
42
73,822,427
2022-9-23
https://stackoverflow.com/questions/73822427/while-debugging-python-in-pdb-how-to-print-output-to-file
Sometimes as I'm using pdb, I would like to save the output to a file, pdbSaves.txt. For example I would want to do something like pp locals() >> pdbSaves.txt, which actually gives *** NameError: name 'pdbSaves' is not defined. What is the correct way to do this?
In Python 3 the ">>" symbol is no longer usable with the regular "print" for redirecting output. I had not used it from inside pdb before, but the support for it was certainly removed at the same time. What you have to do is use the regular way of writing to a file with the print function - or, if you want pretty printing (which is what pp does), with the pprint.pprint function. (Pdb) from pprint import pprint as ppr (Pdb) file = open("x.txt", "wt") (Pdb) ppr("mystuff", stream=file) Or, for regular printing, the parameter name for the output file is file rather than stream (the advantage is that no import statement is needed): (Pdb) print("mystuff", file=file) Also, both these methods and the Python 2 >> way require the target to be an open file object, not a string with the filename.
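Tying this back to the original goal of saving pp locals() to pdbSaves.txt, one possible way to type it at the debugger prompt (a sketch; prefix the line with ! if it ever clashes with a debugger command) is:

(Pdb) import pprint
(Pdb) with open("pdbSaves.txt", "a") as f: pprint.pprint(locals(), stream=f)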
3
5
73,830,546
2022-9-23
https://stackoverflow.com/questions/73830546/have-to-make-a-customized-dataframe-from-a-dict-with-multiple-values
Please find below my input/output : INPUT : dico = {'abc': 'val1=343, val2=935', 'def': 'val1=95, val2=935', 'ghi': 'val1=123, val2=508'} OUTPUT (desired) : I tried with pd.DataFrame.from_dict(dico, index=dico.keys) but unfortunately I got an error. TypeError: DataFrame.from_dict() got an unexpected keyword argument 'index' Do you have any suggestions please ?
Let's use a regex pattern to find the matching pairs corresponding to each value in the input dictionary then convert the pairs to dict and create a new dataframe import re pd.DataFrame([dict(re.findall(r'(\S+)=(\d+)', v)) for k, v in dico.items()], dico) Alternative pandas only approach with extractall (might be slower): pd.Series(dico).str.extractall(r'(\S+)=(\d+)').droplevel(1).pivot(columns=0, values=1) Result val1 val2 abc 343 935 def 95 935 ghi 123 508
4
5
73,828,396
2022-9-23
https://stackoverflow.com/questions/73828396/list-find-the-first-index-and-count-the-occurrence-of-a-specific-list-in-list-o
we have a variable named location. location=[["world", 'Live'], ["alpha",'Live'], ['hello', 'Scheduled'],['alpha', 'Live'], ['just', 'Live'], ['alpha','Scheduled'], ['alpha', 'Live']] i want to find the first index and count occurrence of list["alpha",'Live'] in location. i tried the following: index= [location.index(i) for i in location if i ==["alpha", 'Live'] ] count = [location.count(i) for i in location if i ==["alpha",'Live'] ] print('index',index) print('count', count) this returns: index [1, 1, 1] count [3, 3, 3] but is there a way to find both first index, count simultaneously using list comprehension. expected output: index, count = 1, 3
does this solve you problem? location=[["world", 'Live'], ["alpha",'Live'], ['hello', 'Scheduled'],['alpha', 'Live'], ['just', 'Live'], ['alpha','Scheduled'], ['alpha', 'Live']] index= location.index(["alpha",'Live']) count = location.count(["alpha",'Live']) print('index',index) print('count', count) if ['alpha','live'] is not found, find the first ['alpha',??] and print its index and count. location = [["world", 'Live'], ["alpha", 'Live'], ['hello', 'Scheduled'], [ 'alpha', 'Live'], ['just', 'Live'], ['alpha', 'Scheduled'], ['alpha', 'Live']] key = ["alpha", 'Lsive'] count = location.count(key) if count: index = location.index(key) print('count', count) print('index', index) else: for i in location: if i[0] == key[0]: key = i count = location.count(key) index = location.index(key) print('count', count) print('index', index) else: print('not found') cleaner code by @yadavender yadav location = [["alpha", 'Scheduled'], ["alpha", 'Live'], ['hello', 'Scheduled'], [ 'alpha', 'Live'], ['just', 'Live'], ['alpha', 'Scheduled'], ['alpha', 'Live']] key = ["alpha", 'Scheduled'] count = location.count(key) if count: index = location.index(key) else: index=[location.index(i) for i in location if i[0]=="alpha"][0] print('count', count) print('index', index)
3
5
73,817,788
2022-9-22
https://stackoverflow.com/questions/73817788/neural-network-keeps-misclassifying-input-image-despite-performing-well-on-the-o
Link to the dataset in question Before I begin, few things that might be relevant: The input file format is JPEG. I convert them to numpy arrays using matplotlib's imread The RGB images are then reshaped and converted to grayscale images using tensorflow's image.resize method and image.rgb_to_grayscale method respectively. This is my model: model = Sequential( [ tf.keras.Input(shape=(784,),), Dense(200, activation= "relu"), Dense(150, activation= "relu"), Dense(100, activation= "relu"), Dense(50, activation= "relu"), Dense(26, activation= "linear") ] ) The neural network scores a 98.9% accuracy on the dataset. However, when I try to use an image of my own, it always classifies the input as 'A'. I even went to the extent of inverting the colors of the image (black to white and vice versa; the original grayscale image had the alphabet in black and the rest in white). img = plt.imread("20220922_194823.jpg") img = tf.image.rgb_to_grayscale(img) plt.imshow(img, cmap="gray") Which displays this image. img.shape returns TensorShape([675, 637, 1]) img = 1 - img img = tf.image.resize(img, [28,28]).numpy() plt.imshow(img, cmap="gray") This is the result of img = 1-img I suspect that the neural network keeps classifying the input image as 'A' because of some pixels that aren't completely black/white. But why does it do that? How do I avoid this problem in the future? Here's the notebook.
I have downloaded and tested your model. The accuracy was as stated by you, when run against the Kaggle dataset. You were also on the right track with inverting the values of the input for your own image, the one that wasn't working. But you should have taken a look at the training inputs: the values are in the range of 0-255, while you're inverting the values with 1-x, assuming floating points from 0-1. I have drawn a simple "X" and "P" in Paint, saved it as a PNG (should work the same way with JPEG), and the neural network identifies them just fine. For that, I rescale it with OpenCV, grayscale it, then invert it (the white pixels had values of 255, while the training inputs use 0 for the blank pixels). Here is a rough code of what I have done: import numpy as np import keras import cv2 def load_image(path): image = cv2.imread(path) image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) image = 255 - cv2.resize(image, (28,28)) image = image.reshape((1,784)) return image def load_dataset(path): dataset = np.loadtxt(path, delimiter=',') X = dataset[:,0:784] Y = dataset[:,0] return X, Y def benchmark(model, X, Y): test_count = 100 tests = np.random.randint(0, X.shape[0], test_count) correct = 0 p = model.predict(X[tests]) for i, ti in enumerate(tests): if Y[ti] == np.argmax(p[i]): correct += 1 print(f'Accuracy: {correct / test_count * 100}') def recognize(model, image): alph = "abcdefghijklmnopqrstuvwxyz" p = model.predict(image)[0] letter = alph[np.argmax(p)] print(f'Image prediction: {letter}') top3 = dict(sorted( zip(alph, 100 * np.exp(p) / sum(np.exp(p))), key=lambda x: x[1], reverse=True)[:3]) print(f'Top 3: {top3}') img_x = load_image('x.png') img_p = load_image('p.png') X, Y = load_dataset('chardata.csv') model = keras.models.load_model('CharRecognition.h5') benchmark(model, X, Y) recognize(model, img_x) recognize(model, img_p) The predictions are "x" and "p", respectively. I haven't tried other letters, yet, but the issues identified above seem to be part of the problem with high certainty. Here are the images I have used (as I said, both are hand-drawn, nothing generated): I have also run it with the image as a JPEG. All you need to do is change the file path for imread. OpenCV detects the format. If you don't want to or can't use OpenCV and still have trouble, I can expand on the answer. It might go beyond the scope of your actual question, though. Relevant documentation: OpenCV Documentation. Pillow and scikit-image would work very similarly. I noticed that the outputs produce values with high variation - many values are being printed with long scientific notation. It makes it hard to assess the output of the neural network. Therefor, when you're not using a softmax layer, you can also calculate the probabilities separately, as I did in recognize (see Wikipedia: Softmax for the formula and more explanation). I'm mentioning it here because it can be a help troubleshooting such issues in the future and make it easier on other people trying to help you out. For the images above, it produces something like this, which shows that there is a high certainty about the category: Image prediction: x Top 3: {'x': 100.0, 'a': 0.0, 'b': 0.0} Image prediction: p Top 3: {'p': 100.0, 'd': 2.6237523e-09, 'q': 7.537843e-12} Why was the prediction always "a" in your case? 
Assuming you didn't make any other mistakes, I'd have to guess, but I think it's probably because the letter occupies a large amount of the area in the image, so an inverted image that had most areas filled in would resemble it most closely. Or, the inverted image of an "a" looked, to the neural network, most like the images of "a" it saw during training. It's a guess. But if you give a neural network something it never really saw during training, I think the best anyone can do is guess at the outcome. I would have expected it to be more randomly spread among the categories, probably, so there might be some other issue in your code, possibly with evaluating the prediction. Just out of curiosity, I have used two more images, which don't look like letters at all: The first image the neural network insists is an "e":
Top 3: {'e': 99.99985, 's': 0.00014580016, 'c': 1.3610912e-06}
The second image it believes to be, with high certainty, an "a":
Top 3: {'a': 100.0, 'h': 1.28807605e-08, 'r': 1.0681121e-10}
It might be that random images of that sort simply "look like an a" to your neural network. Also, it has been known that neural networks can, at times, be easily fooled and home in on features that seem very counterintuitive: Jiawei Su, Danilo Vasconcellos Vargas, and Sakurai Kouichi, “One Pixel Attack for Fooling Deep Neural Networks,” IEEE Transactions on Evolutionary Computation 23, no. 5 (October 2019): 828–41, https://doi.org/10.1109/TEVC.2019.2890858. I think there is also a lesson to be learned about training neural networks in general: I had the expectation that, in the case of a classification problem like the one you are solving, which seems to have become almost like a canonical introductory problem in many machine learning courses, an input that does not clearly belong to any of the trained classes, even in a well-trained network, would manifest itself as predictions that are spread out over several classes, signifying the ambiguity of the input. But, as we can see here, an "unknown" input does not need to produce such results at all, apparently. Even such a case can produce results that seem to show a high certainty that the input belongs in a certain class, such as the apparent degree of "certainty" with which the neural network claims that the nonsensical scribble is an "e". Therefore, another conclusion can perhaps be drawn: if one wants to appropriately deal with inputs that do not belong to any of the trained categories, one must train the neural network for that purpose explicitly. By that I mean that one must add an additional class of non-alphabetic images and train it with nonsensical, miscellaneous images (such as the flower above), or probably even classes very close to letters, such as numbers and non-Latin writing symbols. It might be precisely the closeness of that "miscellaneous category" that could help the neural network get a clearer idea of what constitutes a letter. However, as we can see here, it seems insufficient to train a neural network on a set of target classes and then to simply expect it to also be able to give a useful prediction in the case of inputs outside of those classes. Some people might feel that I am way overthinking and complicating the topic at this point, but I think it's important enough of an observation about neural networks that, at least for myself, it is well worth keeping in mind.
Preprocessing Images
From the exchange in the comments, it turns out that there is another aspect to this problem.
The images I had drawn happened to work very well. However, when I increase the contrast, they are no longer being recognized. I will first go into how I have done so. Since it is a common function in machine learning, I had the somewhat unconventional idea to apply a scaled sigmoid function, so as to keep the values in the range of 0-255, retain some of the relative shades, but turn up the contrast. More on that here: Wikipedia: Sigmoid. I'm saying "unconventional" because I don't think it's something you usually use for images, but since this function is so ubiquitous in machine learning, specifically the activation functions, I thought it might be fun to repurpose it, even though the performance is probably terrible compared to algorithms that are more common for image processing. (Aside: I had done almost the exact same for audio processing once, which, when applied to the volume, ended up functioning like a compressor. And that's sort of what we're doing: we're "compressing" the grayscale ranges here, without completely eliminating the transitions. This, I believe, ended up really pinpointing the issue with this neural network, because it's a modification that seems more specific, but proceeds to throw off the neural network almost right away. Adjust the parameters in this "generalized sigmoid" function a bit, if you like, to make it smoother (that means: less steep, to retain more of the transitions; play around with the Desmos graph and look at the PyPlot previews, too) and get a better feel for precisely at what point the neural network sort of gives up and says "I don't recognize this anymore." People more graphically inclined might also be reminded of the smoothstep function often used to adjust harshness of edges in shaders; see GLSL's smoothstep.)
Desmos Graph
Formula (s = 25, b = 50 appear to give good results): f(x) = exp(s*(x/255 - s/b)) / (1 + exp(s*(x/255 - s/b)))
Then, I preprocess the images with code like this:
import numpy as np
import matplotlib.pyplot as plt

def preprocess(before):
    s, b = 25, 50
    # scaled sigmoid: pushes mid-grey values towards black or white
    f = lambda x: np.exp(s*(x/255 - s/b)) / (1 + np.exp(s*(x/255 - s/b)))
    after = f(before)
    # preview the image before and after, side by side
    fig, ax = plt.subplots(1,2)
    ax[0].imshow(before, cmap='gray')
    ax[1].imshow(after, cmap='gray')
    plt.show()
    return after
Call the above in load_image, before reshaping it. It will show you the result, side-by-side, before feeding the image to the neural network. In general, not just in machine learning but also statistics, it appears to be good practice to get an idea of the data, to preview and sanity check it, before further working with it. This might have also given you a hint early on about what was wrong with your input images. Here is an example, using the images from above, of what these look like before and after preprocessing: Considering it was such an ad-hoc idea and somewhat unconventional, it seems to work quite well. However, here are the new predictions for these images, after processing:
Image prediction: l
Top 3: {'l': 11.176592, 'y': 9.341431, 'x': 7.692416}
Image prediction: q
Top 3: {'q': 11.703363, 'p': 9.119178, 'l': 7.6522427}
It doesn't recognize those images at all anymore, which confirms some of the issues you might have been having. Your neural network has "learned" the grey, fuzzy transitions around the letters to be part of the features it considers. I had used this site to draw the images: JSPaint. Maybe it was, in part, luck or intuition that I used the paintbrush and not the pen tool, as I would have probably encountered the same issues you are having, since the pen tool produces no transitions from black to white.
That seemed natural to me, because it seemed to best fit the "feel" of your training inputs, even if it seemed like a trivial, negligible detail at first. Luck, experience - I don't know. But what you therefore want to do is use a tool that leaves "fuzzy borders", or write yet another preprocessing step that does the reverse of what I have just demonstrated, in order to show the negative case, and adds blur to the borders.
Data Augmentation
I thought I would have been long since done with this question, but it really goes to show how involved dealing with neural networks can quickly get, it seems. The core of the problem of this question really appears to end up touching on what seems to be some of the fundamentals of machine learning. I will state plainly what I think this example ended up demonstrating, quite illustratively, maybe more for myself than for most other readers: Your neural network only learns what you teach it. The explanation might simply be (and there are probably important exceptions to this) that you didn't teach your neural network to recognize letters with sharp borders, so it didn't learn how to recognize them. I'm not a great machine learning expert, so probably none of this is news to anyone more experienced. But this reminded me of a technique in machine learning that I think could be applied in this scenario quite well, which is "data augmentation": Data augmentation in data analysis are techniques used to increase the amount of data by adding slightly modified copies of already existing data or newly created synthetic data from existing data. It acts as a regularizer and helps reduce overfitting when training a machine learning model. Wikipedia: Data Augmentation Perez, Luis, and Jason Wang. “The Effectiveness of Data Augmentation in Image Classification Using Deep Learning.” arXiv, December 13, 2017. https://doi.org/10.48550/arXiv.1712.04621. The good news might be that I have given you everything you need to train your neural network further, without needing any additional data on top of the hundreds of megabytes of training data you are already loading from that CSV file. Use the contrast-enhancing preprocessing function above to create a variation of each of the training images, during learning, so that it learns to also handle such variations (a small sketch of this is at the end of this answer). Would another model architecture end up being less picky about such details? Would different activation functions have handled these cases more flexibly, perhaps? I don't know, but those seem like very interesting questions for machine learning in general.
Debugging Neural Networks
This answer has taken on dimensions I really did not intend, so I'm starting to feel the urge to apologize for adding on to it yet again, but this immediately leads one to wonder about a broader issue, one which has probably plagued the machine learning community (or at least someone with as humble experience in it as myself): How do you debug a neural network? So far, this was a bunch of trial and error, some luck, a little bit of intuition, but it feels like shooting in the dark sometimes when a neural network is not working. This might be far from perfect, but one approach that seems to have been spreading online is to visualize which neurons activate for a given input, in order to get an idea of what areas in an image, or input more generally, influence the final prediction of a neural network most. For that, Keras already provides some functionality, by giving you access to the outputs of each model layer.
As a reminder, the architecture of the model in question looks like this:
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 dense (Dense)               (None, 200)               157000
 dense_1 (Dense)             (None, 150)               30150
 dense_2 (Dense)             (None, 100)               15100
 dense_3 (Dense)             (None, 50)                5050
 dense_4 (Dense)             (None, 26)                1326
=================================================================
Total params: 208,626
Trainable params: 208,626
Non-trainable params: 0
_________________________________________________________________
You can get access to the activations of each layer by creating a new model that combines the outputs of each layer. Those we can then plot. Now, it would be a lot easier if those were CNNs, and those might be more appropriate for an image, but that's fine. The author of the question wasn't comfortable with those, yet, so let's go with what we have. With CNN layers we would naturally have a 2-dimensional shape to plot, but a dense layer of neurons is one dimensional. What I like to do in scenarios like that, even though it's less than perfect, is to pad them up to the next larger square.
def trace(model, image):
    # build a model that returns the activations of every layer for the given input
    outputs = [layer.output for layer in model.layers]
    trace_model = keras.models.Model(inputs=model.input, outputs=outputs)
    p = trace_model.predict(image)

    fig, ax = plt.subplots(1, len(p))
    for i, layer in enumerate(p):
        # pad each 1-dimensional activation vector up to the next perfect square and plot it
        neurons = layer[0].shape[0]
        square = int(np.ceil(np.sqrt(neurons)))
        padding = square**2 - neurons
        activations = np.append(layer[0], [np.min(layer[0])]*padding).reshape((square,square))
        ax[i].imshow(activations)
    plt.show()
As I said, this would be nicer with CNN layers, which is why most sources on the Internet related to this topic will use those, so I thought suggesting something for dense layers might be useful. Here are the results, for the same images of the letters "x" and "p" from above: We can see an image being plotted, per figure, one for each layer of the neural network. The colormap is "viridis", as far as I know the current default colormap for pyplot, where blue marks the lowest values and yellow the highest. You can see the padding at the end of the image for each layer, except where it happens to be a perfect square already (such as in the case of 100). There might be a better way to clearly delineate those. In the case of "p", the second image, one can make out the final classification, from the output of the final layer, as the brightest, most yellow dot is on the third line, fourth column ("p" is the 16th letter of the alphabet, 16 = 2x6+4, as the next higher square for 26 letters was 36, so it ends up in a 6x6 square). It's still somewhat difficult to get a clear answer for what's wrong or what's going on here, but it might be a useful start. Other instances, using CNNs, show a lot more clearly what kinds of shapes trigger the neural network, but a variation of this technique could perhaps be adapted to dense layers as well. To make a careful attempt at interpreting these images: they do seem to confirm that the neural network is very specific about the features it learns about an image, as the singular bright, yellow spot in the first layer of both of these images might suggest. What one would more likely expect, ideally, is probably that the neural network considers more features, with similar weights, across the image, thus paying more attention to the overall shape of the letter.
However, I am less sure about this and it's probably non-trivial to properly interpret these "activation plots".
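To make the data augmentation suggestion from earlier concrete, here is a minimal sketch, not part of the original training code. It assumes X and Y are the arrays returned by load_dataset, reuses the scaled sigmoid from preprocess (without the plotting), and, as an assumption on my part, rescales the result back to 0-255 so the augmented copies stay in the same value range as the originals.
import numpy as np

def contrast(x, s=25, b=50):
    # same scaled sigmoid as in preprocess(), rescaled back to 0-255
    return 255 * np.exp(s*(x/255 - s/b)) / (1 + np.exp(s*(x/255 - s/b)))

def augment(X, Y):
    # stack a contrast-enhanced copy of every training image onto the original set
    X_aug = np.concatenate([X, contrast(X)])
    Y_aug = np.concatenate([Y, Y])  # labels are unchanged for the augmented copies
    return X_aug, Y_aug

# X_train, Y_train = augment(X, Y)
# then train exactly as in the original training code, e.g. model.fit(X_train, Y_train, ...)
Whether this particular augmentation actually improves robustness here is something you would have to verify by retraining and re-running the benchmark; it's meant as a starting point, not a finished recipe.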
3
7
73,828,684
2022-9-23
https://stackoverflow.com/questions/73828684/i-want-to-check-if-any-key-is-pressed-is-this-possible
How do I check if ANY key is pressed? This is how I know how to detect one key:
import keyboard  # using module keyboard
while True:  # making a loop
    if keyboard.is_pressed('a'):  # if key 'a' is pressed
        print('You Pressed A Key!')
        break  # finishing the loop
How do I check if any key (not just letters) is pressed? For example, if someone presses the spacebar it works, the same for numbers and function keys, etc.
import keyboard

while True:
    # Wait for the next event.
    event = keyboard.read_event()
    if event.event_type == keyboard.KEY_DOWN:
        print(event.name)  # to check key name
Press any key and get the key name.
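If you only need to react to a single key press rather than keep looping, the same module also has read_key(), which (as far as I know) blocks until a key event happens and returns the key's name - a small sketch:
import keyboard

key = keyboard.read_key()  # blocks until a key is pressed, then returns its name
print(f'You pressed {key}!')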
3
4
73,827,249
2022-9-23
https://stackoverflow.com/questions/73827249/merge-lists-in-a-dataframe-column-if-they-share-a-common-value
What I need: I have a dataframe where the elements of a column are lists. There are no duplications of elements in a list. For example, a dataframe like the following:
import pandas as pd
>>d = {'col1': [[1, 2, 4, 8], [15, 16, 17], [18, 3], [2, 19], [10, 4]]}
>>df = pd.DataFrame(data=d)
    col1
0   [1, 2, 4, 8]
1   [15, 16, 17]
2   [18, 3]
3   [2, 19]
4   [10, 4]
I would like to obtain a dataframe where, if at least one number contained in a list at row i is also contained in a list at row j, then the two lists are merged (without duplication). But the values could also be shared by more than two lists; in that case, I want all lists that share at least one value to be merged.
    col1
0   [1, 2, 4, 8, 19, 10]
1   [15, 16, 17]
2   [18, 3]
Neither the order of the rows of the output dataframe nor the order of the values inside a list is important. What I tried: I have found this answer, which shows how to tell if at least one item in a list is contained in another list, e.g.
>>not set([1, 2, 4, 8]).isdisjoint([2, 19])
True
Returns True, since 2 is contained in both lists. I have also found this useful answer that shows how to compare each row of a dataframe with each other. The answer applies a custom function to each row of the dataframe using a lambda.
df.apply(lambda row: func(row['col1']), axis=1)
However I'm not sure how to put these two things together, or how to create the func method. Also I don't know if this approach is even feasible since the resulting rows will probably be fewer than the ones of the original dataframe. Thanks!
This is not straightforward. Merging lists has many pitfalls. One solid approach is to use a specialized library, for example networkx, to use a graph approach. You can generate successive edges and find the connected components. Here is your graph: You can thus: generate successive edges with add_edges_from; find the connected_components; craft a dictionary and map the first item of each list; then groupby and merge the lists (you could use the connected components directly, but I'm giving a pandas solution in case you have more columns to handle).
import networkx as nx

G = nx.Graph()
for l in df['col1']:
    G.add_edges_from(zip(l, l[1:]))

groups = {k:v for v,l in enumerate(nx.connected_components(G)) for k in l}
# {1: 0, 2: 0, 4: 0, 8: 0, 10: 0, 19: 0, 16: 1, 17: 1, 15: 1, 18: 2, 3: 2}

out = (df.groupby(df['col1'].str[0].map(groups), as_index=False)
         .agg(lambda x: sorted(set().union(*x)))
      )
output:
                   col1
0  [1, 2, 4, 8, 10, 19]
1          [15, 16, 17]
2               [3, 18]
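If you would rather avoid the extra dependency, the same grouping can also be done with a small union-find (disjoint set) structure. This is a plain-Python sketch of the same idea, not a drop-in replacement for the networkx version above:
import pandas as pd

d = {'col1': [[1, 2, 4, 8], [15, 16, 17], [18, 3], [2, 19], [10, 4]]}
df = pd.DataFrame(data=d)

parent = {}

def find(x):
    # walk up to the representative of x's group, halving the path as we go
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    # merge the groups containing a and b
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[rb] = ra

# link every value in a list to that list's first value
for lst in df['col1']:
    for value in lst[1:]:
        union(lst[0], value)

# collect the merged groups, keyed by their representative
groups = {}
for lst in df['col1']:
    groups.setdefault(find(lst[0]), set()).update(lst)

out = pd.DataFrame({'col1': [sorted(g) for g in groups.values()]})
print(out)
# [1, 2, 4, 8, 10, 19], [15, 16, 17], [3, 18]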
4
3
73,826,941
2022-9-23
https://stackoverflow.com/questions/73826941/filtering-pandas-dataframe-on-time-not-date
I have the dataframe below and want to filter by time. The time column comes up as an object when I use dtypes. To get the time to use as the filter criteria I use split:
start_time = "25 September 2022, 13:00:00"
split_time = start_time.split(", ")[1]
I have tried converting split_time and the df column to datetime but get an error on the df column conversion: TypeError: <class 'datetime.time'> is not convertible to datetime I have also tried a simple string search but this doesn't return any results. I have been able to filter by date using:
split_date = start_time.split(", ")[0]
event_date = datetime.strptime(split_date, "%d %B %Y")
events_df['start_date'] = pd.to_datetime(events_df['start_date'])
filtered_df = events_df.loc[(events_df['start_date'] == event_date)]
But I can't seem to do the equivalent for time. Can anyone see the problem? Thanks
    fixture_id  name                            start_date           time
145 9394134     Plymouth Argyle v Ipswich Town  2022-09-25 00:00:00  12:30:00
146 9694948     Grays Athletic v Merstham FC    2022-09-25 00:00:00  13:00:00
147 9694959     FC Romania v Faversham Town     2022-09-25 00:00:00  15:00:00
Compare times generated by Series.dt.time with Timestamp.time:
start_time = "25 September 2022, 13:00:00"
dt = pd.to_datetime(start_time, format="%d %B %Y, %H:%M:%S")

events_df['start_date'] = pd.to_datetime(events_df['start_date'])

#if necessary
events_df['time'] = pd.to_datetime(events_df['time']).dt.time

filtered_df = events_df.loc[(events_df['time'] == dt.time())]
print (filtered_df)
   fixture_id                          name start_date      time
1  146 9694948  Grays Athletic v Merstham FC 2022-09-25  13:00:00
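If you also want to keep the date part of the original string in the filter, one possible extension (reusing the same dt and events_df as above; the start_date values in the question are all at midnight, which is what normalize() compares against) is to check both parts at once:
# compare the date part and the time part separately, using the parsed timestamp
mask = (events_df['start_date'] == dt.normalize()) & (events_df['time'] == dt.time())
filtered_df = events_df.loc[mask]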
3
5
73,826,252
2022-9-23
https://stackoverflow.com/questions/73826252/how-to-plot-a-time-series-with-this-dataframe
I want to plot this dataframe as a time series, with a line for every country that increases or decreases each year according to 'count'. How can I do this?
        country  count
Year
2005  Australia      2
2005    Austria      1
2005    Belgium      0
2005     Canada      4
2005      China      0
2006  Australia      3
2006    Austria      0
2006    Belgium      1
2006     Canada      5
2006      China      2
2007  Australia      5
2007    Austria      1
2007    Belgium      2
2007     Canada      6
2007      China      3
I'd like a thing like this:
You can use seaborn.lineplot: import seaborn as sns df.Year = pd.to_datetime(df.Year) sns.set(rc={'figure.figsize':(12, 8)}) # changed the figure size to avoid overlapping sns.lineplot(data=df, x=df['Year'].dt.strftime('%Y'), # show only years with strftime y=df['count'], hue='country') which gives
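If you prefer to stay within pandas/matplotlib and skip seaborn, a rough equivalent (assuming the original df with the Year, country and count columns as shown in the question) is to pivot the data so each country becomes its own column and plot that:
import matplotlib.pyplot as plt

# one line per country: each country becomes a column, indexed by year
df.pivot(index='Year', columns='country', values='count').plot(figsize=(12, 8))
plt.show()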
4
3
73,797,873
2022-9-21
https://stackoverflow.com/questions/73797873/importlib-metadata-packagenotfounderror-no-package-metadata-was-found-for-pypro
I wrote code using the pyproj library and converted this code to an exe file for use on another computer. I added pyproj to the requirements.txt file. And I installed the library with the requirements.txt file on the other computer I will use. When I run the exe file, I get the following error:
importlib.metadata.PackageNotFoundError: No package metadata was found for pyproj
I'd be glad if you could help.
Here is my solution for those who have problems like the above while running the exe file:
pyinstaller --onefile --copy-metadata pyproj "example.py"
For now, this seems to have fixed the problem.
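If you build from a .spec file instead of passing everything on the command line, the equivalent (as far as I know) is to add the package metadata to the Analysis datas via PyInstaller's copy_metadata helper - a sketch, with the Analysis call shown only as a comment because the rest of the spec file depends on your project:
from PyInstaller.utils.hooks import copy_metadata

# list of (source, destination) pairs for pyproj's dist-info,
# so importlib.metadata can find it inside the bundled exe
datas = copy_metadata('pyproj')

# then pass it to the Analysis block of the spec file,
# e.g. Analysis(['example.py'], datas=datas, ...)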
8
6
73,821,248
2022-9-22
https://stackoverflow.com/questions/73821248/sqlalchemy-get-bind-parameters-from-sql-dynamically
I'm trying to use pd and sqlalchemy to run all the sql files in a directory. Currently I can load the text of a sql file into a sqlalchemy.sql.text object, and send that directly to pd.read_sql. What should I use to locate the bind parameters within my sql script so that I can prompt the user for them?
import sys, os, pandas as pd, re, sqlalchemy as sa

os.chdir(sys.argv[1])
os.mkdir("out")

uname = input("Username >>> ")
passw = input("Password >>> ")
engine = sa.create_engine(f"oracle+cx_oracle://{uname}:{passw}@PROD/?encoding=UTF-8&nencoding=UTF-8")

for filename in os.listdir('.'):
    if not re.match(r".*\.sql", filename):
        continue #Ignore non-sql files.
    print("Executing",filename)
    with open(filename,"r") as my_file:
        sql = sa.text(my_file.read())
        ### Something goes here ###
        df = pd.read_sql(sql, engine)
        df.to_excel(f"./out/{filename}",filename,index=False)
My current best guess is to read through the file line-by-line with a regex that finds things that look like bind params, but I feel like there should be a better way.
You can obtain the names of the bind parameters from the compiled query: >>> q = text('select id, name from users where name = :p1 and age > :p2') >>> q.compile().params {'p1': None, 'p2': None} In the resulting dictionary, the keys correspond to the bind parameter names. The values are None until values are bound.
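Putting that together with the loop in the question (assuming the sql text clause and engine defined there), the missing piece could look roughly like this; the prompt text is just illustrative, and the values are passed as strings, so you may need to convert types for your database:
# ask the user for a value for every bind parameter found in the statement
values = {name: input(f"Value for :{name} >>> ") for name in sql.compile().params}

# bind the collected values and run the query
df = pd.read_sql(sql.bindparams(**values), engine)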
3
4
73,823,564
2022-9-23
https://stackoverflow.com/questions/73823564/how-to-create-this-dataframe-using-pandas
Please see this - I want to create a table similar to this using pandas in Python. Can anyone guide me? I am unable to understand how to add the "Total Marks" text corresponding to two columns ("Student id" & "Course id"). Thanks
As far as I'm aware, pandas doesn't really support merged cells/cells spanning multiple rows in the same way as Excel/HTML do. A pandas-esque way to include the total row for a single column would be something like the following: import pandas as pd columns = ['Student ID', 'Course ID', 'Marks'] data = [(103, 201, 67), (103, 203, 67), (103, 204, 89)] df = pd.DataFrame(data, columns=columns) df.at['Total marks', 'Marks'] = df['Marks'].sum() print(df) (per this answer) This gives you: Student ID Course ID Marks 0 103.0 201.0 67.0 1 103.0 203.0 67.0 2 103.0 204.0 89.0 Total marks NaN NaN 223.0 For HTML output like the image you provided, it looks like you would have to edit the HTML after exporting from pandas (this thread talks about columns but I believe the same applies for merging cells in rows).
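If you want the label to appear in the "Student ID" column instead of the index (a bit closer to the screenshot), one rough alternative, reusing the same df as above, is to append a labelled total row:
import pandas as pd

# build a one-row frame for the total and append it to the original data
total = pd.DataFrame(
    [{'Student ID': 'Total marks', 'Course ID': '', 'Marks': df['Marks'].sum()}]
)
out = pd.concat([df, total], ignore_index=True)
print(out)
Note that mixing numbers and the 'Total marks' string turns the Student ID column into an object column, which is usually fine for display purposes.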
4
5
73,821,319
2022-9-22
https://stackoverflow.com/questions/73821319/number-of-occurrences-of-digit-in-numbers-from-0-to-n
Given a number n, count the number of occurrences of the digits 0, 2 and 4 in the numbers from 0 to n, including n. Example1: n = 10 output: 4 Example2: n = 22 output: 11 My Code:
n = 22
def count_digit(n):
    count = 0
    for i in range(n+1):
        if '2' in str(i):
            count += 1
        if '0' in str(i):
            count += 1
        if '4' in str(i):
            count += 1
    return count
count_digit(n)
Code Output: 10 Desired Output: 11 Constraints: 1 <= N <= 10^5 Note: The solution should not cause outOfMemoryException or Time Limit Exceeded for large numbers.
Another brute force, seems faster:
def count_digit(n):
    s = str(list(range(n+1)))
    return sum(map(s.count, '024'))
Benchmark with n = 10**5:
result  time    solution
115474  244 ms  original
138895   51 ms  Kelly
138895  225 ms  islam_abdelmoumen
138895  356 ms  CodingDaveS
Code (Try it online!):
from timeit import default_timer as time

def original(n):
    count = 0
    for i in range(n+1):
        if '2' in str(i):
            count += 1
        if '0' in str(i):
            count += 1
        if '4' in str(i):
            count += 1
    return count

def Kelly(n):
    s = str(list(range(n+1)))
    return sum(map(s.count, '024'))

def islam_abdelmoumen(n):
    count = 0
    for i in map(str,range(n+1)):
        count+=i.count('0')
        count+=i.count('2')
        count+=i.count('4')
    return count

def CodingDaveS(n):
    count = 0
    for i in range(n + 1):
        if '2' in str(i):
            count += str(i).count('2')
        if '0' in str(i):
            count += str(i).count('0')
        if '4' in str(i):
            count += str(i).count('4')
    return count

funcs = original, Kelly, islam_abdelmoumen, CodingDaveS

print('result  time  solution')
print()
for _ in range(3):
    for f in funcs:
        t = time()
        print(f(10**5), ' %3d ms ' % ((time()-t)*1e3), f.__name__)
    print()
3
0
73,821,359
2022-9-22
https://stackoverflow.com/questions/73821359/how-to-edit-lines-of-a-text-file-based-on-regex-conditions
import re

re_for_identificate_1 = r""

with open("data_path/filename_1.txt","r+") as file:
    for line in file:
        #replace with a substring adding a space in the middle
        line = re.sub(re_for_identificate_1, " milesimo", line)
        #replace in txt with the fixed line
Example filename_1.txt :
unmilesimo primero 1001°
dosmilesimos quinto 2005°
tresmilesimos 3000°
nuevemilesimos doceavo 9012°
The correct output file that I need to obtain is this: Rewritten input filename_1.txt
un milesimo primero 1001°
dos milesimos quinto 2005°
tres milesimos 3000°
nueve milesimos doceavo 9012°
What is the regex that I need and what is the best way to replace the fixed lines in their original positions in the input file?
You can use file.seek(0) to go to the beginning of the file, then write the data and truncate the file. Like this:
import re

re_for_identificate_1 = "(?<!^)milesimo"

tmp = ""
with open("data.txt", "r+") as file:
    for line in file:
        line = re.sub(re_for_identificate_1, " milesimo", line)
        tmp += line
    file.seek(0)
    file.write(tmp)
    file.truncate()
The regex you want to use is "(?<!^)milesimo" to replace every instance of "milesimo" with " milesimo" but not at the beginning of a line.
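An alternative that avoids managing seek/truncate yourself is the standard library's fileinput module with inplace=True, which redirects everything you print inside the loop back into the file - a sketch along the same lines (using the same regex and the same "data.txt" file name as above):
import fileinput
import re

# with inplace=True, the printed lines replace the file's contents
with fileinput.input("data.txt", inplace=True) as file:
    for line in file:
        print(re.sub(r"(?<!^)milesimo", " milesimo", line), end="")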
4
4
73,820,799
2022-9-22
https://stackoverflow.com/questions/73820799/checking-match-on-two-ordered-lists
I will give the specific case and the generic case so we can help more people: I have a list of ordered lists and another list with the same length as each ordered list. Each list in the list of lists is students' answers from a large-scale evaluation, and the second is the correct answers from the test. I need to check the % of right answers, i.e. how many matches there are between the items of each of the lists, in order. The output should be a list where 1 means there is a match, and 0 means there is no match. Example:
list1 = [['A', 'B', 'C', 'A'], ['A', 'C', 'C', 'B']]
list2 = ['A', 'B', 'C', 'A']
result = [[1, 1, 1, 1], [1, 0, 1, 0]]
Thank you!
The others have done a fine job detailing how this can be done with list comprehensions. The following code is a beginner-friendly way of getting the same answer. final = [] # Begin looping through each list within list1 for test in list1: # Create a list to store each students scores within student = [] for student_ans, correct_ans in zip(test, list2): # if student_ans == correct_ans at the same index... if student_ans == correct_ans: student.append(1) else: student.append(0) # Append the student list to the final list final.append(student)
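For reference, the list-comprehension version mentioned above, plus the percentage of right answers the question asks for, could look roughly like this (using the same list1 and list2):
# 1 where the student's answer matches the answer key, 0 otherwise
result = [[int(ans == key) for ans, key in zip(test, list2)] for test in list1]
print(result)  # [[1, 1, 1, 1], [1, 0, 1, 0]]

# percentage of right answers for each student
percentages = [100 * sum(row) / len(row) for row in result]
print(percentages)  # [100.0, 50.0]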
3
2
73,814,238
2022-9-22
https://stackoverflow.com/questions/73814238/recreate-pandas-dataframe-from-question-in-stackoverflow
This is a question from someone who tries to answer questions about pandas dataframes. Consider a question with a given dataset which is just the visualization (not the actual code), for example:
   numbers letters       dates         all
0        1       a  20-10-2020         NaN
1        2       b  21-10-2020           b
2        3       c  20-11-2020           4
3        4       d  20-10-2021  20-10-2020
4        5       e  10-10-2020        3.14
Is it possible to quickly import this in Python as a dataframe or as a dictionary? So far I copied the given text and transformed it to a dataframe by making strings (adding '') and so on. I think there are two 'solutions' for this: Make a function that, given the text as input, somehow transforms it into a dataframe. Use some function in the text editor (I use Spyder) which can do this trick for us.
read_clipboard
You can use pd.read_clipboard(), optionally with a separator (e.g. pd.read_clipboard('\s\s+') if you have datetime strings or spaces in column names and columns are separated by at least two spaces): select the text in the question and copy it to the clipboard (Ctrl+C / Command+C), then move to a Python shell or notebook and run pd.read_clipboard(). Note that this doesn't work well on all platforms.
read_csv + io.StringIO
For more complex formats, combine read_csv with io.StringIO:
data = '''
     numbers letters       dates         all
0          1       a  20-10-2020         NaN
1          2       b  21-10-2020           b
2          3       c  20-11-2020           4
3          4       d  20-10-2021  20-10-2020
4          5       e  10-10-2020        3.14
'''

import io
import pandas as pd

df = pd.read_csv(io.StringIO(data), sep=r'\s+')
df
4
7
73,813,602
2022-9-22
https://stackoverflow.com/questions/73813602/futurewarning-in-a-future-version-of-pandas-all-arguments-of-dataframe-any-and
Here is the Python statement creating the problem:
nan_rows = df[df.isnull().any(1)]
And it gives the following warning:
FutureWarning: In a future version of pandas all arguments of DataFrame.any and Series.any will be keyword-only.
It is a warning from the production environment. I am sorry that I am not able to share the complete code as it is fairly complex to fit into this question. How can I understand what this warning is saying, and how should I update this line of code?
Starting from pandas 1.5, you get a FutureWarning. You must specify axis=1:
nan_rows = df[df.isnull().any(axis=1)]
In a future version, only keyword arguments will be accepted; omitting the keyword will raise an error.
3
5
73,800,003
2022-9-21
https://stackoverflow.com/questions/73800003/how-to-specify-variable-type-of-a-pandas-series-string-or-typevar
I want to use type hinting for something like:
def fo() -> pd.Series[np.float64]:
    return pd.Series(np.float64(0))
This won't work. From this answer: How to specify the type of pandas series elements in type hints? I understand I can use either:
def fo() -> "pd.Series[np.float64]":
    return pd.Series(np.float64(0))
Or:
from typing import (
    TypeVar
)

SeriesFloat64 = TypeVar('pd.Series[np.float64]')

def fo() -> SeriesFloat64:
    return pd.Series(np.float64(0))
Why should I prefer one over the other?
Both "solutions" you referenced are wrong I'll start with the second one:
from typing import TypeVar
import numpy as np, pandas as pd

SeriesFloat64 = TypeVar('pd.Series[np.float64]')

def fo() -> SeriesFloat64:
    return pd.Series(np.float64(0))
Is this type variable technically a valid annotation? Yes. Does it specify the generic pd.Series? No. First of all, as @jonrsharpe pointed out, this breaks the convention of initializing your type variables with a name argument that corresponds to the actual name of the variable. More importantly, neither bound nor constraints have been specified, meaning you might as well have just written this:
from typing import TypeVar
import numpy as np, pandas as pd

T = TypeVar("T")  # which is the same as `TypeVar("T", bound=typing.Any)`

def fo() -> T:
    return pd.Series(np.float64(0))
This would at least fix the name issue, but it would not specify anything about the return type of fo(). In fact, mypy will correctly point out the following:
error: Incompatible return value type (got "Series[Any]", expected "T")  [return-value]
Which already gives a hint about what we can and cannot do regarding specification of pd.Series and this leads us to the second "solution":
import numpy as np, pandas as pd

def fo() -> "pd.Series[np.float64]":
    return pd.Series(np.float64(0))
This is equivalent, by the way: (no quotes needed)
from __future__ import annotations
import numpy as np, pandas as pd

def fo() -> pd.Series[np.float64]:
    return pd.Series(np.float64(0))
This is wrong because the type parameter for the generic Series does not accept np.float64. Again, mypy points this out:
error: Value of type variable "S1" of "Series" cannot be "floating"  [type-var]
If we check out the pandas-stubs source for core.series.Series (as of today), we see that Series inherits from typing.Generic[S1]. And when we go to the definition of _typing.S1, we can see the constraints on that type variable. And a numpy float is not among them, but we do find the built-in float. What does that mean? Well, we know that np.float64 does inherit from the regular float, but it also inherits from np.floating, and that is the issue. As mentioned in this section of PEP 484, as opposed to an upper bound, type constraints cause the inferred type to be exactly one of the constraint types[.] This means that you are not allowed to use np.float64 in place of the aforementioned S1 to specify a Series type.
A better way
The "most correct" way to type hint that function, in my opinion, is to do this:
from __future__ import annotations
import numpy as np, pandas as pd

def fo() -> pd.Series[float]:
    return pd.Series(np.float64(0))
This makes correct use of the Series generic, providing a type argument that is both in line with the defined type constraints and indicates an element type for the series returned by the function that is as close as you can get to the actual type, since np.float64 does inherit from float. It also passes the strict mypy check.
Adding useful information
One caveat of this is that you lose the information that the series actually contains 64-bit floats. If you want that because you want the signature/documentation for the function to reflect that nuance, you can simply set up a custom type alias:
from __future__ import annotations
import numpy as np, pandas as pd

float64 = float

def fo() -> pd.Series[float64]:
    return pd.Series(np.float64(0))
Now calling help(fo) gives this:
...
fo() -> 'pd.Series[float64]'
But it is important to note that this is just for your benefit and does absolutely nothing for the static type checker.
Limitations of pd.Series types
Another thing worth mentioning is that as of today, there are no useful annotations on many of the methods of a pd.Series that return a single element, such as the __getitem__ method for access via the square brackets []. Say I do this:
...
series = fo()
x = series[0]
print(x, type(x))
y = int(x)
The output is 0.0 <class 'numpy.float64'>, but type checkers have no clue that x is a np.float64 or any float at all for that matter. (In fact, my PyCharm complains at y = int(x) because it thinks x is a timestamp for whatever reason.) This is just to illustrate that as of now, you may not get any useful auto-suggestions when dealing with pd.Series, even if you annotate your types more or less correctly. Hope this helps.
4
1
73,809,795
2022-9-22
https://stackoverflow.com/questions/73809795/how-can-i-convert-a-string-true-to-boolean-python
So I have this data, a list of 'True' and 'False' strings, for example
tf = ['True', 'False', 'False']
How can I convert tf to booleans? Currently, when I print(tf[0]), it prints True.
Use the ast module: import ast tf = ['True', 'False', 'False'] print(type(ast.literal_eval(tf[0]))) print(ast.literal_eval(tf[0])) Result: <class 'bool'> True Ast Documentation Literal_eval
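If the strings are guaranteed to be exactly 'True' or 'False', a simple comparison also works and avoids parsing altogether - for example:
tf = ['True', 'False', 'False']

bools = [s == 'True' for s in tf]  # compare each string against 'True'
print(bools)     # [True, False, False]
print(bools[0])  # True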
3
4
73,797,666
2022-9-21
https://stackoverflow.com/questions/73797666/is-there-a-better-way-to-capture-all-the-regex-patterns-in-matching-with-nested
I am trying out a simple text-matching activity where I scraped titles of blog posts and try to match them with my pre-defined categories once I find specific keywords. So for example, the title of the blog post is "Capture Perfect Night Shots with the Oppo Reno8 Series" Once I ensure that "Oppo" is included in my categories, "Oppo" should match with my "phone" category like so:
categories = {"phone" : ['apple', 'oppo', 'xiaomi', 'samsung', 'huawei', 'nokia'],
              "postpaid" : ['signature', 'postpaid'],
              "prepaid" : ['power all', 'giga'],
              "sku" : ['data', 'smart bro'],
              "ewallet" : ['gigapay'],
              "event" : ['gigafest'],
              "software" : ['ios', 'android', 'macos', 'windows'],
              "subculture" : ['anime', 'korean', 'kpop', 'gaming', 'pop', 'culture', 'lgbtq', 'binge', 'netflix', 'games', 'ml', 'apple music'],
              "health" : ['workout', 'workouts', 'exercise', 'exercises'],
              "crypto" : ['axie', 'bitcoin', 'coin', 'crypto', 'cryptocurrency', 'nft'],
              "virtual" : ['metaverse', 'virtual']}
Then my dataframe would look like this Fortunately I found a reference to how to use regex in mapping to nested dictionaries, but it can't seem to work past the first couple of words. The reference is here So once I use the code
def put_category(cats, text):
    regex = re.compile("(%s)" % "|".join(map(re.escape, categories.keys())))
    if regex.search(text):
        ret = regex.search(text)
        return ret[0]
    else:
        return 'general'
It usually reverts to putting "general" as the category, even when doing it in lowercase as seen here I'd prefer to use the current method of inputting values inside the dictionary for this matching activity instead of running pure regex patterns and then putting it through fuzzy matching for the result.
You can create a reverse mapping that maps keywords to categories instead, so that you can efficiently return the corresponding category when a match is found: mapping = {keyword: category for category, keywords in categories.items() for keyword in keywords} def put_category(mapping, text): match = re.search(rf'\b(?:{"|".join(map(re.escape, mapping))})\b', text, re.I) if match: return mapping[match[0].lower()] return 'general' print(put_category(mapping, "Capture Perfect Night Shots with the Oppo Reno8 Series")) This outputs: phone Demo: https://replit.com/@blhsing/BlandAdoredParser
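To apply this to the dataframe of scraped titles, something like the following should work; the column names 'title' and 'category' are just placeholders for whatever your dataframe actually uses:
# categorize every scraped title; rows with no keyword match fall back to 'general'
df['category'] = df['title'].apply(lambda title: put_category(mapping, title))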
3
3