question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
73,667,806 | 2022-9-9 | https://stackoverflow.com/questions/73667806/how-to-convert-list-items-to-dictionary-keys-with-0-as-default-values | I'm trying to turn this list: ['USA', 'Canada', 'Japan'] into this dictionary: {'USA': 0, 'Canada': 0, 'Japan': 0} Can it be achieved with a simple loop? How would you go about it? Thanks! | Use dict.fromkeys: lst = ["USA", "Canada", "Japan"] out = dict.fromkeys(lst, 0) print(out) Prints: {'USA': 0, 'Canada': 0, 'Japan': 0} | 5 | 6 |
73,667,014 | 2022-9-9 | https://stackoverflow.com/questions/73667014/python-get-first-x-elements-of-a-list-and-remove-them | Note: I know there is probably an answer for this on StackOverflow already, I just can't find it. I need to do this: >>> lst = [1, 2, 3, 4, 5, 6] >>> first_two = lst.magic_pop(2) >>> first_two [1, 2] >>> lst [3, 4, 5, 6] Now magic_pop doesn't exist, I used it just to show an example of what I need. Is there a method like magic_pop that would help me to do everything in a pythonic way? | Do it in two steps. Use a slice to get the first two elements, then remove that slice from the list. first_list = lst[:2] del lst[:2] If you want a one-liner, you can wrap it in a function. def splice(lst, start = 0, end = None): if end is None: end = len(lst) partial = lst[start:end] del lst[start:end] return partial first_list = splice(lst, end = 2) | 6 | 7 |
73,665,184 | 2022-9-9 | https://stackoverflow.com/questions/73665184/how-do-i-close-figure-in-matplotlib | import matplotlib.pyplot as plt import pandas as pd l1 = [1,2,3,4] l2 = [2,4,6,8] fig = plt.figure() def func(): plt.pause(1) plt.plot(l1,l2) plt.draw() plt.pause(1) input("press any key to continue...") plt.close(fig) plt.pause(1) while True: func() plt.pause(1) This is the modified one: import matplotlib.pyplot as plt import pandas as pd l1 = [1,2,3,4] l2 = [2,4,6,8] fig = plt.figure() a = 1 def func(num): input(f"the {num}th window is not opened yet") plt.pause(1) plt.plot(l1,l2) plt.draw() print(f"the {num}th window is opened") plt.pause(1) input("press any key to continue...") plt.close(fig) plt.pause(1) print(f"the {num}th window is closed") while True: func(a) plt.pause(1) a+=1 If I don't use while True loop, it stops running as I press any key and that was my intention. However, if I run this code with while True loop, the figure window doesn't close even though I press any key or x button in the upper left. I think it is due to while True. I have no idea how to solve this problem, keeping while True. Please help me! modified: I could see an opened window when "The 2th window is not opened yet" input message came out. Probably, the window would be the one from the first time of the loop, since the second window wasn't opened at that time. Why was the first window still there? I used plt.close() to close the window. | The issue is not while True: as much as how you create your figures. Let's step through your process conceptually: fig = plt.figure() creates a figure and stores the handle in fig. Then you call func, which draws on the figure and eventually calls plt.pause(1). You loop around and call func again. This time, plt.plot(l1, l2) creates a new figure, since there are no open figures. func calls plt.close(fig). But the handle that's stored in fig is not the new figure that you opened, so of course your figure won't close. To close the correct figure, open the figure inside func. Using the object oriented API gives you more control regarding what you open, close, plot, etc: import matplotlib.pyplot as plt import pandas as pd l1 = [1, 2, 3, 4] l2 = [2, 4, 6, 8] def func(): fig, ax = plt.subplots() ax.plot(l1, l2) plt.show(block=False) plt.pause(1) input("press any key to continue...") plt.close(fig) plt.pause(1) while True: func() Alternatively, you can just replace plt.close(fig) with plt.close(), which is equivalent to plt.close(plt.gcf()), so you don't have to know the handle to your new figure. | 9 | 12 |
73,664,830 | 2022-9-9 | https://stackoverflow.com/questions/73664830/pydantic-object-has-no-attribute-fields-set-error | I'm working with FastAPI to create a really simple dummy API. For it I was playing around with enums to define the require body for a post request and simulating a DB call from the API method to a dummy method. To have the proper body request on my endpoint, Im using Pydantic's BaseModel on the class definition but for some reason I get this error File "pydantic/main.py", line 406, in pydantic.main.BaseModel.__setattr__ AttributeError: 'MagicItem' object has no attribute '__fields_set__' I'm not sure what's the problem, here is my code that generate all this: I'm kinda lost right now cuz I don't see the error in such a simple code. | You basically completely discarded the Pydantic BaseModel.__init__ method on your MagicItem. Generally speaking, if you absolutely have to override the Base Model's init-method (and in your case, you don't), you should at least call it inside your own like this: super().__init__(...) Pydantic does a whole lot of magic in the init-method. One of those things being the setting of the __fields_set__ attribute. That is why you are getting that error. I suggest just completely removing your custom __init__ method. One of the main benefits of using Pydantic models is that you don't have to worry about writing boilerplate like this. Check out their documentation, it is really good in my opinion. PS: If you insist because you want to be able to initialize your MagicItem with positional arguments, you could just do this: class MagicItem(BaseModel): name: str damage: Damage def __init__(self, name: str, damage: Damage) -> None: super().__init__(name=name, damage=damage) | 14 | 28 |
73,661,849 | 2022-9-9 | https://stackoverflow.com/questions/73661849/which-specific-characters-does-the-strip-function-remove | Here is what you can find in the str.strip documentation: The chars argument is a string specifying the set of characters to be removed. If omitted or None, the chars argument defaults to removing whitespace. Now my question is: which specific characters are considered whitespace? These function calls share the same result: >>> ' '.strip() '' >>> '\n'.strip() '' >>> '\r'.strip() '' >>> '\v'.strip() '' >>> '\x1e'.strip() '' In this related question, a user mentioned that the str.strip function works with a superset of ASCII whitespace characters (in other words, a superset of string.whitespace). More specifically, it works with all unicode whitespace characters. Moreover, I believe (but I'm just guessing, I have no proofs) that c.isspace() returns True for each character c that would also be removed by str.strip. Is that correct? If so, I guess one could just run c.isspace() for each unicode character c, and come up with a list of whitespace characters that are removed by default by str.strip. >>> ' '.isspace() True >>> '\n'.isspace() True >>> '\r'.isspace() True >>> '\v'.isspace() True >>> '\x1e'.isspace() True Is my assumption correct? And if so, how can I find some proofs? Is there an easier way to know which specific characters are automatically removed by str.strip? | The most trivial way to know which characters are removed by str.strip() is to loop over each possible characters and check if a string containing such character gets altered by str.strip(): c = 0 while True: try: s = chr(c) except ValueError: break if (s != s.strip()): print(f"{hex(c)} is stripped", flush=True) c+=1 As suggested in the comments, you may also print a table to check if str.strip(), str.split() and str.isspace() share the same behaviour about white spaces: c = 0 print("char\tstrip\tsplit\tisspace") while True: try: s = chr(c) except ValueError: break stripped = s != s.strip() splitted = not s.split() spaced = s.isspace() if (stripped or splitted or spaced): print(f"{hex(c)}\t{stripped}\t{splitted}\t{spaced}", flush=True) c+=1 If I run the code above I get: char strip split isspace 0x9 True True True 0xa True True True 0xb True True True 0xc True True True 0xd True True True 0x1c True True True 0x1d True True True 0x1e True True True 0x1f True True True 0x20 True True True 0x85 True True True 0xa0 True True True 0x1680 True True True 0x2000 True True True 0x2001 True True True 0x2002 True True True 0x2003 True True True 0x2004 True True True 0x2005 True True True 0x2006 True True True 0x2007 True True True 0x2008 True True True 0x2009 True True True 0x200a True True True 0x2028 True True True 0x2029 True True True 0x202f True True True 0x205f True True True 0x3000 True True True So, at least in python 3.10.4, your assumption seems to be correct. | 9 | 5 |
73,660,050 | 2022-9-9 | https://stackoverflow.com/questions/73660050/how-to-achieve-resumption-semantics-for-python-exceptions | I have a validator class with a method that performs multiple checks and may raise different exceptions: class Validator: def validate(something) -> None: if a: raise ErrorA() if b: raise ErrorB() if c: raise ErrorC() There's a place in the outside (caller) code where I want to customize its behaviour and prevent ErrorB from being raised, without preventing ErrorC. Something like resumption semantics would be useful here. However, I haven't found a good way to achieve this. To clarify: I have control over the Validator source code, but prefer to preserve its existing interface as much as possible. Some possible solutions that I've considered: The obvious try: validator.validate(something) except ErrorB: ... is no good because it also suppresses ErrorC in cases where both ErrorB and ErrorC should be raised. Copy-paste the method and remove the check: # In the caller module class CustomValidator(Validator): def validate(something) -> None: if a: raise ErrorA() if c: raise ErrorC() Duplicating the logic for a and c is a bad idea and will lead to bugs if Validator changes. Split the method into separate checks: class Validator: def validate(something) -> None: self.validate_a(something) self.validate_b(something) self.validate_c(something) def validate_a(something) -> None: if a: raise ErrorA() def validate_b(something) -> None: if b: raise ErrorB() def validate_c(something) -> None: if c: raise ErrorC() # In the caller module class CustomValidator(Validator): def validate(something) -> None: super().validate_a(something) super().validate_c(something) This is just a slightly better copy-paste. If some validate_d() is added later, we have a bug in CustomValidator. Add some suppression logic by hand: class Validator: def validate(something, *, suppress: list[Type[Exception]] = []) -> None: if a: self._raise(ErrorA(), suppress) if b: self._raise(ErrorB(), suppress) if c: self._raise(ErrorC(), suppress) def _raise(self, e: Exception, suppress: list[Type[Exception]]) -> None: with contextlib.suppress(*suppress): raise e This is what I'm leaning towards at the moment. There's a new optional parameter and the raise syntax becomes kinda ugly, but this is an acceptable cost. Add flags that disable some checks: class Validator: def validate(something, *, check_a: bool = True, check_b: bool = True, check_c: bool = True) -> None: if check_a and a: raise ErrorA() if check_b and b: raise ErrorB() if check_c and c: raise ErrorC() This is good, because it allows granular control over different checks even if they raise the same exception. However, it feels verbose and will require additional maintenance as Validator changes. I actually have more than three checks there. Yield exceptions by value: class Validator: def validate(something) -> Iterator[Exception]: if a: yield ErrorA() if b: yield ErrorB() if c: yield ErrorC() This is bad, because it's a breaking change for existing callers and it makes propagating the exception (the typical use) way more verbose: # Instead of # validator.validate(something) e = next(validator.validate(something), None) if e is not None: raise e Even if we keep everything backwards-compatible class Validator: def validate(something) -> None: e = next(self.iter_errors(something), None) if e is not None: raise e def iter_errors(something) -> Iterator[Exception]: if a: yield ErrorA() if b: yield ErrorB() if c: yield ErrorC() The new suppressing caller still needs to write all this code: exceptions = validator.iter_errors(something) e = next(exceptions, None) if isinstance(e, ErrorB): # Skip ErrorB, don't raise it. e = next(exceptions, None) if e is not None: raise e Compared to the previous two options: validator.validate(something, suppress=[ErrorB]) validator.validate(something, check_b=False) | With bare exceptions you are looking at the wrong tool for the job. In Python, to raise an exception means that execution hits an exceptional case in which resuming is not possible. Terminating the broken execution is an express purpose of exceptions. Execution Model: 4.3. Exceptions Python uses the "termination" model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top). To get resumption semantics for exception handling, you can look at the generic tools for either resumption or for handling. Resumption: Coroutines Python's resumption model is coroutines: yield coroutine-generators or async coroutines both allow pausing and explicitly resuming execution. def validate(something) -> Iterator[Exception]: if a: yield ErrorA() if b: yield ErrorB() if c: yield ErrorC() It is important to distinguish between send-style "proper" coroutines and iterator-style "generator" coroutines. As long as no value must be sent into the coroutine, it is functionally equivalent to an iterator. Python has good inbuilt support for working with iterators: for e in validator.iter_errors(something): if isinstance(e, ErrorB): continue # continue even if ErrorB happens raise e Similarly, one could filter the iterator or use comprehensions. Iterators easily compose and gracefully terminate, making them suitable for iterating exception cases. Effect Handling Exception handling is just the common use case for the more generic effect handling. While Python has no builtin effect handling support, simple handlers that address only the origin or sink of an effect can be modelled just as functions: def default_handler(failure: BaseException): raise failure def validate(something, failure_handler = default_handler) -> None: if a: failure_handler(ErrorA()) if b: failure_handler(ErrorB()) if c: failure_handler(ErrorC()) This allows the caller to change the effect handling by supplying a different handler. def ignore_b_handler(failure: BaseException): if not isinstance(failure, ErrorB): raise failure validate(..., ignore_b_handler) This might seem familiar to dependency inversion and is in fact related to it. There are various stages of buying into effect handling, and it is possible to reproduce much if not all features via classes. Aside from technical functionality, one can implement ambient effect handlers (similar to how try "connects" to raise automatically) via thread local or context-local variables. | 7 | 3 |
73,661,082 | 2022-9-9 | https://stackoverflow.com/questions/73661082/how-to-generate-a-certain-number-of-random-whole-numbers-that-add-up-to-a-certai | I want to generate 10 whole numbers that add up to 40 and are in the range of 2-6. For example: 2 + 6 + 2 + 5 + 6 + 2 + 2 + 6 + 3 + 6 = 40 Ten random numbers between 2 and 6 that add up to 40. | Given the relatively small search space, you could use itertools.combinations_with_replacement() to generate all possible sequences of 10 numbers between 2 and 6, save the ones that sum to 40 - then pick and shuffle one at random when requested: from itertools import combinations_with_replacement as combine from random import choice, shuffle sequences = [list(combo) for combo in combine(range(2, 6+1), 10) if sum(combo) == 40] def get_random_sequence_of_sum_40(): seq = choice(sequences) shuffle(seq) return seq # ... later when you need random sequences of sum=40 for i in range(10): rand_sequence = get_random_sequence_of_sum_40() print(f"The sum of {rand_sequence} is {sum(rand_sequence)}") Sample output: The sum of [6, 3, 4, 4, 3, 3, 4, 6, 5, 2] is 40 The sum of [3, 3, 5, 3, 5, 5, 3, 3, 5, 5] is 40 The sum of [3, 3, 6, 3, 4, 6, 3, 4, 4, 4] is 40 The sum of [6, 6, 5, 3, 4, 3, 3, 2, 4, 4] is 40 The sum of [5, 2, 2, 4, 4, 4, 5, 4, 4, 6] is 40 The sum of [4, 4, 4, 3, 4, 4, 3, 6, 4, 4] is 40 The sum of [4, 4, 5, 4, 2, 4, 4, 5, 5, 3] is 40 The sum of [4, 2, 6, 2, 5, 6, 2, 5, 4, 4] is 40 The sum of [3, 6, 3, 4, 3, 3, 4, 4, 6, 4] is 40 The sum of [2, 2, 6, 2, 3, 5, 6, 4, 4, 6] is 40 | 6 | 7 |
73,659,394 | 2022-9-9 | https://stackoverflow.com/questions/73659394/python-merge-two-dictionary-without-losing-the-order | I have a dictionary containing UUID generated for documents like below. { "UUID": [ "b8f2904b-dafd-4be3-9615-96bac8e16c7f", "1240ad39-4815-480f-8cb2-43f802ba8d4e" ] } And another dictionary as a nested one { "R_Id": 304, "ContextKey": "Mr.Dave", "ConsolidationInformation": { "Input": [ { "DocumentCode": "BS", "ObjectType": "Document", "InputContextKey": "Mr.Dave_HDFC.pdf2022-08-010T09:40:06.429358" }, { "DocumentCode": "F16", "ObjectType": "Document", "InputContextKey": "Mr.Dave_F16.pdf2022-08-010T09:40:06.429358" } ] } } I want to add the UUID by index to the ['ConsolidationInformation']['Input'] and inside individual Input as DocumentUUID, how can I map it using a for a loop. I tried searching on the internet but could not find a solution that could satisfy this nested condition. Expected output { "R_Id": 304, "ContextKey": "Mr.Dave", "ConsolidationInformation": { "Input": [ { "DocumentCode": "BS", "ObjectType": "Document", "InputContextKey": "Mr.Dave_HDFC.pdf2022-08-010T09:40:06.429358", "DocumentUUID": "b8f2904b-dafd-4be3-9615-96bac8e16c7f" }, { "DocumentCode": "F16", "ObjectType": "Document", "InputContextKey": "Mr.Dave_F16.pdf2022-08-010T09:40:06.429358", "DocumentUUID": "1240ad39-4815-480f-8cb2-43f802ba8d4e" } ] } } I tried something like the below, but it resulted in Traceback (most recent call last): File "<string>", line 26, in <module> KeyError: 0 Code uuid = { "UUID": [ "b8f2904b-dafd-4be3-9615-96bac8e16c7f", "1240ad39-4815-480f-8cb2-43f802ba8d4e" ] } document = { "R_Id": 304, "ContextKey": "Mr.Dave", "ConsolidationInformation": { "Input": [ { "DocumentCode": "BS", "ObjectType": "Document", "InputContextKey": "Mr.Dave_HDFC.pdf2022-08-010T09:40:06.429358" }, { "DocumentCode": "F16", "ObjectType": "Document", "InputContextKey": "Mr.Dave_F16.pdf2022-08-010T09:40:06.429358" } ] } } for i, document in enumerate(document): uuid = uuid[i] print(f"${uuid} for 1 {document}") | Issues: You have done well with your attempt. The only issue is that the values must be accessed and added at the correct nesting level. Solution: You can correct your attempt as follows: for i, doc in enumerate(document['ConsolidationInformation']['Input']): doc['DocumentUUID'] = uuid['UUID'][i] Alternatively: You can use the zip function. You can learn more about this function here. Here is an example of how you may apply the function to your code: for u, doc in zip(uuid['UUID'], document['ConsolidationInformation']['Input']): doc['DocumentUUID'] = u Output: The output is as follows: { "R_Id":304, "ContextKey":"Mr.Dave", "ConsolidationInformation":{ "Input":[ { "DocumentCode":"BS", "ObjectType":"Document", "InputContextKey":"Mr.Dave_HDFC.pdf2022-08-010T09:40:06.429358", "DocumentUUID":"b8f2904b-dafd-4be3-9615-96bac8e16c7f" }, { "DocumentCode":"F16", "ObjectType":"Document", "InputContextKey":"Mr.Dave_F16.pdf2022-08-010T09:40:06.429358", "DocumentUUID":"1240ad39-4815-480f-8cb2-43f802ba8d4e" } ] } } | 3 | 6 |
73,639,623 | 2022-9-7 | https://stackoverflow.com/questions/73639623/how-can-i-pass-the-gitlab-job-token-into-a-docker-build-without-causing-a-cache | We are using the PyPI repos built into our gitlab deployment to share our internal packages with multiple internal projects. When we build our docker images we need to install those packages as part of image creation. However the gitlab CI token that we use to get access to the gitlab PyPI repository is a one-off token, and so is different every time we run the build. Our Dockerfile starts something like this: FROM python:3.9 WORKDIR /project COPY poetry.lock pyproject.toml RUN pip install poetry ARG CI_JOB_TOKEN RUN poetry config http-basic.gitlab-pypi-repo gitlab-ci-token ${CI_JOB_TOKEN} RUN poetry install --no-interaction Now because we're using poetry and the versions are locked in poetry.lock, when we get to the poetry steps we shouldn't need to reinstall poetry unless the poetry.lock file has changed, but because the CI_JOB_TOKEN is always different we always miss the cache and have to rebuild poetry and everything downstream (which is actually where most of the work is) as well. So is there a way that we can pass CI_JOB_TOKEN into the docker build but in a way that is ignored for the purposes of the cache? Or maybe there's another way to achieve this? | Use build secrets instead (requires build kit) You can mount the secret at build time using the --mount argument to the RUN instruction. Suppose you have the following in a dockerfile: # ... RUN --mount=type=secret,id=mysecret echo "$(cat /run/secrets/mysecret)" > .foo RUN echo "another layer" > .bar Then you can pass the secret into the build using the --secret flag. On the first run, you'll see the RUN instruction executed and if you were to inspect the .foo file, it would contain the secret (because we echoed it to the file in the RUN command -- in practice, this might be your poetry configuration, for example). $ echo -n supersecret > ../secret.txt $ docker build --secret id=mysecret,src=../secret.txt -t test . # ... => [3/4] RUN --mount=type=secret,id=mysecret echo "$(cat /run/secrets/mysecret)" > .foo 0.2s => [4/4] RUN echo "another layer" > .bar 0.4s # ... Even if your secret changes, on subsequent runs, you'll see the relevant layers still remain cached: $ echo -n newvalue > ../secret.txt $ docker build --secret id=mysecret,src=../secret.txt -t test . # ... => CACHED [3/4] RUN --mount=type=secret,id=mysecret echo "$(cat /run/secrets/mysecret)" > .foo 0.0s => CACHED [4/4] RUN echo "another layer" > .bar 0.0s # ... Of course, because the RUN instruction was cached, you would see the old secret value in .foo in the resulting build. As a separate note, you should be aware that your poetry config command is writing to disk. This means that your secret will be contained in the resulting image layers, which may not be ideal from a security standpoint. | 6 | 3 |
73,652,967 | 2022-9-8 | https://stackoverflow.com/questions/73652967/how-to-define-a-uniqueconstraint-on-two-or-more-columns-with-sqlmodel | With SQLAlchemy it is possible to create a constraint on two or more columns using the UniqueConstraint declaration. What would be the most idiomatic way of doing that using SQLModel? For example: from sqlmodel import SQLModel, Field class SubAccount(SQLModel): id: int = Field(primary_key=True) name: str description: str user_id: int = Field(foreign_key="user.id") How can I enforce that name should be unique for a given user_id ? | As of now, exactly the same way. The SQLModel metaclass inherits a great deal from the SQLAlchemy DeclarativeMeta class. In many respects, you can treat an SQLModel subclass the way you would treat your SQLAlchemy ORM base class. Example: from sqlmodel import Field, SQLModel, UniqueConstraint, create_engine class User(SQLModel, table=True): id: int = Field(primary_key=True) ... class SubAccount(SQLModel, table=True): __table_args__ = ( UniqueConstraint("name", "user_id", name="your_unique_constraint_name"), ) id: int = Field(primary_key=True) name: str description: str user_id: int = Field(foreign_key="user.id") if __name__ == '__main__': engine = create_engine("sqlite:///:memory:", echo=True) SQLModel.metadata.create_all(engine) SQLModel even re-imports compatible classes/functions from SQLAlchemy explicitly to indicate that they can be used the way you expect it from SQLAlchemy. This is not to say that this works with everything. Far from it. The project is still in its earliest stages. But in this case it does. Here is the corresponding SQL output for creating the table: CREATE TABLE subaccount ( id INTEGER NOT NULL, name VARCHAR NOT NULL, description VARCHAR NOT NULL, user_id INTEGER NOT NULL, PRIMARY KEY (id), CONSTRAINT your_unique_constraint_name UNIQUE (name, user_id), FOREIGN KEY(user_id) REFERENCES user (id) ) | 10 | 20 |
73,652,973 | 2022-9-8 | https://stackoverflow.com/questions/73652973/add-values-to-column-pandas-dataframe-by-index-number | Imagine I have a list that represents the indexes of a dataframe, like this one: indexlist = [0,1,4] And the following dataframe: Name Country 0 John BR 1 Peter BR 2 Paul BR 3 James CZ 4 Jonatan CZ 5 Maria DK I need to create a column on this dataframe named "Is it valid?" that would add "Yes" if the index row is in the list. Otherwise would add "No", resulting in this dataframe: Name Country Is it valid? 0 John BR Yes 1 Peter BR Yes 2 Paul BR No 3 James CZ No 4 Jonatan CZ Yes 5 Maria DK No Is there any way I could do it? Thanks! | You can use isin for index, that you'd normally use with Series, it essentially creates an array of truth value, which you can pass to np.where with true and false values, assign the result as a column. df['Is it valid?'] = np.where(df.index.isin(indexlist), 'Yes', 'No') OUTPUT: Name Country Is it valid? 0 John BR Yes 1 Peter BR Yes 2 Paul BR No 3 James CZ No 4 Jonatan CZ Yes 5 Maria DK No | 3 | 5 |
73,645,294 | 2022-9-8 | https://stackoverflow.com/questions/73645294/return-deeply-nested-json-objects-with-response-model-fastapi-and-pydantic | This is my schema file from pydantic import BaseModel from typing import Optional class HolidaySchema(BaseModel): year: int month: int country: str language: str class HolidayDateSchema(BaseModel): name: str date: str holidays: HolidaySchema | None = None class Config: orm_mode = True and this is the router that I have @router.get("/holidays/",response_model = List[HolidayDateSchema]) The response I want to get is [ { "date": "2021-08-14", "name": "Independence Day", "holidays": { "year": 2022, "month":5, "country":"pk", "language":"en"}, "id": 13 }, ] Right now it doesn't support the pydantic schema with response model, I don't know why and it gives error pydantic.error_wrappers.ValidationError: 2 validation errors for HolidayDateSchema and value is not a valid dict It would be great if anyone can specify the best to get deeply nested JSON objects with response_model. | HolidaySchema isn't configured with orm_mode = True. You need this for all the models that you want to automagically convert from SQLAlchemy model objects. class HolidaySchema(BaseModel): year: int month: int country: str language: str class Config: orm_mode = True You can configure that setting on a common BaseModel and inherit from that instead if you want the setting for all your models. | 5 | 3 |
73,647,685 | 2022-9-8 | https://stackoverflow.com/questions/73647685/why-does-a-temporary-variable-in-python-change-how-this-pass-by-sharing-variable | first-time questioner here so do highlight my mistakes. I was grinding some Leetcode and came across a behavior (not related to the problem) in Python I couldn't quite figure out nor google-out. It's especially difficult because I'm not sure if my lack of understanding is in: recursion the += operator in Python or variable assignment in general or Python's pass-by-sharing behavior or just something else entirely Here's the simplified code: class Holder: def __init__(self, val=0): self.val = val class Solution: def runThis(self): holder = Holder() self.diveDeeper(holder, 5) return def diveDeeper(self, holder, n): if n==0: return 1 # 1) Doesn't result in mutation holder.val += self.diveDeeper(holder, n-1) # 2) Also doesn't result in mutation # holder.val = holder.val + self.diveDeeper(holder, n-1) # 3) !! Results in mutations # returnVal = self.diveDeeper(holder, n-1) # holder.val += returnVal print(holder.val) return 1 a = Solution() a.runThis() So yeah my main source of confusion is how (1) and (3) look semantically identical to me but results in two completely different outcomes: ================ RESTART: Case 1 =============== 1 1 1 1 1 >>> ================ RESTART: Case 3 =============== 1 2 3 4 5 >>> From (2), it doesn't seem related to the += operator and for brevity, I haven't included the tens of variations I've tried but none of them have given me any leads so far. Would really appreciate any pointers in the right direction (especially in case I get blindsided in job interviews lmao) PS: In case this is relevant, I'm using Python 3.8.2 | In Python, if you have expression1() + expression2(), expression1() is evaluated first. So 1 and 2 are really equivalent to: left = holder.val right = self.diveDeeper(holder, n - 1) holder.val = left + right Now, holder.val is only ever modified after the recursive call, but you use the value from before the recursive call, which means that no matter the iteration, left == 0. Your solution 3 is equivalent to: right = self.diveDeeper(holder, n - 1) left = holder.val holder.val = left + right So the recursive call is made before left = holder.val is evaluated, which means left is now the result of the sum of the previous iteration. This is why you have to be careful with mutable state, you got to understand the order of operations perfectly. | 8 | 3 |
73,642,805 | 2022-9-8 | https://stackoverflow.com/questions/73642805/what-is-more-correct-to-use-to-centralize-this-case | I want to centralize frm_login using pack or grid without using another auxiliary widget. Is it possible? I put an anchor="center" in frm_login.pack(side="left") but it didn't work. import tkinter as tk from tkinter import ttk principal = tk.Tk() principal.title("Login") principal.resizable(False, False) largura = 300 altura = 200 posx = int(principal.winfo_screenwidth() / 2 - largura / 2) posy = int(principal.winfo_screenheight() / 2 - altura / 1.2) principal.geometry("{0}x{1}+{2}+{3}".format(largura, altura, posx, posy)) frm_login = ttk.Frame(principal) frm_login.pack(side="left") lb_usuario = ttk.Label(frm_login, text="UsuΓ‘rio") lb_usuario.grid(row=0, column=0, padx=5, pady=5, sticky="e") ed_usuario = ttk.Entry(frm_login, width=24) ed_usuario.grid(row=0, column=1, sticky="w") lb_senha = ttk.Label(frm_login, text="Senha") lb_senha.grid(row=1, column=0, padx=5, pady=5, sticky="e") ed_senha = ttk.Entry(frm_login, width=24) ed_senha.grid(row=1, column=1, sticky="w") frm_botoes = ttk.Frame(frm_login) frm_botoes.grid(row=2, column=1, pady=5, sticky="w") bt_entrar = ttk.Button(frm_botoes, text="Entrar") bt_entrar.grid(row=1, column=1) bt_sair = ttk.Button(frm_botoes, text="Sair") bt_sair.grid(row=1, column=2) principal.mainloop() | Use the tkinter grid method and pass your parameter sticky="nsew". Use rowconfigure() and columnconfigure(). Set the parameter weight=1 in rowconfigure and columnconfigure. So your widget is going to fill in the whole space they can using sticky. When you configure row and column, only the rows and columns you specified are going to be in the frame with equal weight if all are configured the same. This code should work: Note: You can use padx and pady if the frame is expanding too much. Use them when you are calling grid() but entirely depends on your purpose. import tkinter as tk from tkinter import ttk principal = tk.Tk() principal.title("Login") principal.resizable(False, False) largura = 300 altura = 200 posx = int(principal.winfo_screenwidth() / 2 - largura / 2) posy = int(principal.winfo_screenheight() / 2 - altura / 1.2) principal.geometry("{0}x{1}+{2}+{3}".format(largura, altura, posx, posy)) principal.rowconfigure(0, weight=1) principal.columnconfigure(0, weight=1) frm_login = ttk.Frame(principal) frm_login.grid(row=0, column=0, sticky="nsew") frm_login.rowconfigure((0, 1, 2), weight=1) frm_login.columnconfigure((0, 1, 2), weight=1) lb_usuario = ttk.Label(frm_login, text="UsuΓ‘rio") lb_usuario.grid(row=0, column=0, padx=5, pady=5, sticky="e") ed_usuario = ttk.Entry(frm_login, width=24) ed_usuario.grid(row=0, column=1, sticky="w") lb_senha = ttk.Label(frm_login, text="Senha") lb_senha.grid(row=1, column=0, padx=5, pady=5, sticky="e") ed_senha = ttk.Entry(frm_login, width=24) ed_senha.grid(row=1, column=1, sticky="w") frm_botoes = ttk.Frame(frm_login) frm_botoes.grid(row=2, column=1, pady=5, sticky="w") bt_entrar = ttk.Button(frm_botoes, text="Entrar") bt_entrar.grid(row=1, column=1) bt_sair = ttk.Button(frm_botoes, text="Sair") bt_sair.grid(row=1, column=2) principal.mainloop() Hopefully this answers your question. | 4 | 1 |
73,633,371 | 2022-9-7 | https://stackoverflow.com/questions/73633371/how-to-configure-fastapi-to-publish-logs-to-cloudwatch | I have a FastAPI service that works as expected in every regard except the logging, only when it runs as a AWS Lambda function. When running it locally the logs are displayed on the console as expected: INFO: 127.0.0.1:62160 - "POST /api/v1/feature-requests/febbbc21-9650-44e6-8df5-80c8bb33b6ea/upvote HTTP/1.1" 200 OK INFO: 127.0.0.1:62158 - "OPTIONS /api/v1/feature-requests HTTP/1.1" 200 OK INFO: 127.0.0.1:62160 - "GET /api/v1/feature-requests HTTP/1.1" 200 OK INFO: 127.0.0.1:62158 - "OPTIONS /api/v1/feature-requests-meta HTTP/1.1" 200 OK INFO: 127.0.0.1:62160 - "GET /api/v1/feature-requests-meta HTTP/1.1" 200 OK INFO: 127.0.0.1:62160 - "GET /api/v1/feature-requests/febbbc21-9650-44e6-8df5-80c8bb33b6ea HTTP/1.1" 200 OK INFO: 127.0.0.1:62160 - "GET /api/v1/feature-requests-meta/febbbc21-9650-44e6-8df5-80c8bb33b6ea HTTP/1.1" 200 OK However, when deployed as a Lambda function the logs are not there: 2022-09-07T10:44:57.426+02:00 START RequestId: fd44ae47-5bfb-42e3-aeb4-d9f29857bb39 Version: $LATEST 2022-09-07T10:44:57.604+02:00 END RequestId: fd44ae47-5bfb-42e3-aeb4-d9f29857bb39 2022-09-07T10:44:57.604+02:00 REPORT RequestId: fd44ae47-5bfb-42e3-aeb4-d9f29857bb39 Duration: 177.85 ms Billed Duration: 178 ms Memory Size: 2048 MB Max Memory Used: 152 MB Init Duration: 1733.88 ms 2022-09-07T10:45:00.299+02:00 START RequestId: 08a7a6da-c2c6-446c-baa3-1d08c9816f5b Version: $LATEST 2022-09-07T10:45:00.318+02:00 END RequestId: 08a7a6da-c2c6-446c-baa3-1d08c9816f5b Even for the logs that are produced by our code (as opposed to the framework) are not visible when running as a Lambda function. Configuration: In app.py LOG = logging.getLogger() log_format = "%(asctime)s %(levelname)s %(message)s" log_date_fmt = "%Y-%m-%d %H:%M:%S" logging.basicConfig( format=log_format, level=logging.INFO, datefmt=log_date_fmt, ) In every other Python file: LOG = logging.getLogger(__name__) logging.conf [loggers] keys=root,api,config [handlers] keys=console_handler [formatters] keys=normal_formatter [logger_root] level=INFO handlers=console_handler [logger_api] level=INFO handlers=console_handler qualname=api propagate=0 [logger_config] level=INFO handlers=console_handler qualname=config propagate=0 [handler_console_handler] class=StreamHandler level=INFO formatter=normal_formatter args=(sys.stdout,) [formatter_normal_formatter] format=%(asctime)s %(levelname)s %(name)s %(message)s datefmt=%Y-%m-%d %H:%M:%S I am not sure what else needs to happen to get the logs in CloudWatch. | As it turns out the following setup is needed: On the top of the logging.conf above uvicorn has to be imported that creates an extra property on logging and than fileConfig has to be used like this: import uvicorn logging.config.fileConfig("logging.conf", disable_existing_loggers=False) LOG = logging.getLogger(__name__) | 4 | 0 |
73,629,154 | 2022-9-7 | https://stackoverflow.com/questions/73629154/command-line-stable-diffusion-runs-out-of-gpu-memory-but-gui-version-doesnt | I installed the GUI version of Stable Diffusion here. With it I was able to make 512 by 512 pixel images using my GeForce RTX 3070 GPU with 8 GB of memory: However when I try to do the same thing with the command line interface, I run out of memory: Input: >> C:\SD\stable-diffusion-main>python scripts/txt2img.py --prompt "a close-up portrait of a cat by pablo picasso, vivid, abstract art, colorful, vibrant" --plms --n_iter 3 --n_samples 1 --H 512 --W 512 Error: RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF If I reduce the size of the image to 256 X 256, it gives a result, but obviously much lower quality. So part 1 of my question is why do I run out of memory at 6.13 GiB when I have 8 GiB on the card, and part 2 is what does the GUI do differently to allow 512 by 512 output? Is there a setting I can change to reduce the load on the GPU? Thanks a lot, Alex | This might not be the only answer, but I solved it by using the optimized version here. If you already have the standard version installed, just copy the "OptimizedSD" folder into your existing folders, and then run the optimized txt2img script instead of the original: >> python optimizedSD/optimized_txt2img.py --prompt "a close-up portrait of a cat by pablo picasso, vivid, abstract art, colorful, vibrant" --H 512 --W 512 --seed 27 --n_iter 2 --n_samples 10 --ddim_steps 50 It's quite slow on my computer, but produces 512 X 512 images! Thanks, Alex | 9 | 7 |
73,641,835 | 2022-9-7 | https://stackoverflow.com/questions/73641835/unnesting-event-parameters-in-json-format-within-a-pandas-dataframe | I have a dataset that looks like the one below. It is relational, but has a dimension called event_params which is a JSON object of data related to the event_name in the respective row. import pandas as pd a_df = pd.DataFrame(data={ 'date_time': ['2021-01-03 15:12:42', '2021-01-03 15:12:46', '2021-01-03 15:13:01' , '2021-01-03 15:13:12', '2021-01-03 15:13:13', '2021-01-03 15:13:15' , '2021-01-04 03:29:01', '2021-01-04 18:15:14', '2021-01-04 18:16:01'], 'user_id': ['dhj13h', 'dhj13h', 'dhj13h', 'dhj13h', 'dhj13h', 'dhj13h', '38nr10', '38nr10', '38nr10'], 'account_id': ['181d9k', '181d9k', '181d9k', '181d9k', '181d9k', '181d9k', '56sf15', '56sf15', '56sf15'], 'event_name': ['button_click', 'screen_view', 'close_view', 'button_click', 'exit_app', 'uninstall_app' , 'install_app', 'exit_app', 'uninstall_app'], 'event_params': ['{\'button_id\': \'shop_screen\', \'button_container\': \'main_screen\', \'button_label_text\': \'Enter Shop\'}', '{\'screen_id\': \'shop_main_page\', \'screen_controller\': \'main_view_controller\', \'screen_title\': \'Main Menu\'}', '{\'screen_id\': \'shop_main_page\'}', '{\'button_id\': \'back_to_main_menu\', \'button_container\': \'shop_screen\', \'button_label_text\': \'Exit Shop\'}', '{}', '{}', '{\'utm_campaign\': \'null\', \'utm_source\': \'null\'}', '{}', '{}'] }) I am looking for approaches on how to handle this sort of data. My initial approach is with pandas, but I'm open to other methods. My ideal end state would be to examine each relationship with respect to each user. In the current form, I have to compare the dicts/JSON blobs sitting in event_params to determine the context behind an event. I've tried using explode() to expand out the event_params column. My thinking is the best sort of approach would be to turn event_params into a relational format, where each parameter is an extra row of the dataframe with respect to its preceding values (in other words, while maintaining the date_time, user_id and event_name that it was related to initially). My explode approach didn't work well: a_df['event_params'] = a_df['event_params'].apply(eval) exploded_df = a_df.explode('event_params') The output of that was: date_time, user_id, account_id, event_name, event_params 2021-01-03 15:12:42,dhj13h,181d9k,button_click,button_id 2021-01-03 15:12:42,dhj13h,181d9k,button_click,button_container It has kind of worked, but it stripped the value fields. Ideally I'd like to maintain those value fields as well. | I hope I've understood your question right. You can transform the event_params column from dict to list of dicts, explode it and transform it into new key/value columns: from ast import literal_eval a_df = a_df.assign( event_params=a_df["event_params"].apply( lambda x: [{"key": k, "value": v} for k, v in literal_eval(x).items()] ) ).explode("event_params") a_df = pd.concat( [a_df, a_df.pop("event_params").apply(pd.Series)], axis=1, ).drop(columns=0) print(a_df) Prints: date_time user_id account_id event_name key value 0 2021-01-03 15:12:42 dhj13h 181d9k button_click button_id shop_screen 0 2021-01-03 15:12:42 dhj13h 181d9k button_click button_container main_screen 0 2021-01-03 15:12:42 dhj13h 181d9k button_click button_label_text Enter Shop 1 2021-01-03 15:12:46 dhj13h 181d9k screen_view screen_id shop_main_page 1 2021-01-03 15:12:46 dhj13h 181d9k screen_view screen_controller main_view_controller 1 2021-01-03 15:12:46 dhj13h 181d9k screen_view screen_title Main Menu 2 2021-01-03 15:13:01 dhj13h 181d9k close_view screen_id shop_main_page 3 2021-01-03 15:13:12 dhj13h 181d9k button_click button_id back_to_main_menu 3 2021-01-03 15:13:12 dhj13h 181d9k button_click button_container shop_screen 3 2021-01-03 15:13:12 dhj13h 181d9k button_click button_label_text Exit Shop 4 2021-01-03 15:13:13 dhj13h 181d9k exit_app NaN NaN 5 2021-01-03 15:13:15 dhj13h 181d9k uninstall_app NaN NaN 6 2021-01-04 03:29:01 38nr10 56sf15 install_app utm_campaign null 6 2021-01-04 03:29:01 38nr10 56sf15 install_app utm_source null 7 2021-01-04 18:15:14 38nr10 56sf15 exit_app NaN NaN 8 2021-01-04 18:16:01 38nr10 56sf15 uninstall_app NaN NaN | 6 | 7 |
73,635,937 | 2022-9-7 | https://stackoverflow.com/questions/73635937/extract-key-value-pairs-as-a-tuple-from-nested-json-with-python | I want to extract all key-value pairs from JSON file, I loaded it as a Python dictionary. I created this function below that stores all values. However, I am struggling to put them inside a list to store them like that. Any support is very appreciated. json_example = {'name': 'TheDude', 'age': '19', 'hobbies': { 'love': 'eating', 'hate': 'reading', 'like': [ {'outdoor': { 'teamsport': 'soccer', } } ] } } # My code - Extract values def extract_values(dct, lst=[]): if not isinstance(dct, (list, dict)): lst.append(dct) elif isinstance(dct, list): for i in dct: extract_values(i, lst) elif isinstance(dct, dict): for v in dct.values(): extract_values(v, lst) return lst # Extract keys def matt_keys(dct): if not isinstance(dct, (list, dict)): return [''] if isinstance(dct, list): return [dk for i in dct for dk in matt_keys(i)] return [k+('_'+dk if dk else '') for k, v in dct.items() for dk in matt_keys(v)] Current output: ['TheDude', '19', 'eating'...] Desired output: [('name': 'TheDude'), ('age', '19'), ..., ('hobbies_love', 'eating'), ... , ('hobbies_like_outdoor_teamsport', 'soccer')] Also if there is a more efficient or cleaner way to extract this, then it would be great. | Issues: Your recursive function currently does not pass the key as part of the function call. Also, you will need to deal with nesting when trying to create the key. Hints: We can assemble a list of all the keys that lead to a particular value (e.g. ['hobbies', 'love']) and then join the keys into a single string (e.g. hobbies_love). Solution: Here is your code with the changes implemented: def extract_values(dct, lst=[], keys=[]): if not isinstance(dct, (list, dict)): lst.append(('_'.join(keys), dct)) elif isinstance(dct, list): for i in dct: extract_values(i, lst, keys) elif isinstance(dct, dict): for k, v in dct.items(): keys.append(k) extract_values(v, lst, keys) keys.remove(k) return lst x = extract_values(json_example) print(x) Output: The above code will produce the following desired output: [('name', 'TheDude'), ('age', '19'), ('hobbies_love', 'eating'), ('hobbies_hate', 'reading'), ('hobbies_like_outdoor_teamsport', 'soccer')] | 5 | 4 |
73,635,605 | 2022-9-7 | https://stackoverflow.com/questions/73635605/combine-multiple-columns-into-one-category-column-using-the-column-names-as-valu | I have this data ID A B C 0 0 True False False 1 1 False True False 2 2 False False True And want to transform it into ID group 0 0 A 1 1 B 2 2 C I want to use the column names as value labels for the category column. There is a maximum of only one True value per row. This is the MWE #!/usr/bin/env python3 import pandas as pd df = pd.DataFrame({ 'ID': range(3), 'A': [True, False, False], 'B': [False, True, False], 'C': [False, False, True] }) result = pd.DataFrame({ 'ID': range(3), 'group': ['A', 'B', 'C'] }) result.group = result.group.astype('category') print(df) print(result) I could do df.apply(lambda row: ...magic.., axis=1). But isn't there a more elegant way with pandas' own tools? | You can use melt then a lookup based on the column where the values are true to get the results you are expecting df = df.melt(id_vars = 'ID', var_name = 'group') df.loc[df['value'] == True][['ID', 'group']] | 26 | 7 |
73,634,640 | 2022-9-7 | https://stackoverflow.com/questions/73634640/issue-using-if-statement-in-python | Input: import random I = 0 z = [] while I < 6: y = random.choices(range(1,50)) if y in z: break z += y I += 1 print(z) Output: [8, 26, 8, 44, 31, 22] I'm trying to make a Lotto numbers generator but I can't make the code generate 6 numbers that do not repeat. As you can see in the output, 8 is repeating. I'm not clear why the if statement does not check whether the random y variable is already in the z list. | Use random.sample: >>> from random import sample >>> sample(range(1, 50), k=6) [40, 36, 43, 15, 37, 25] It already picks k unique items from the range. | 3 | 8 |
73,632,886 | 2022-9-7 | https://stackoverflow.com/questions/73632886/combining-split-with-findall | I'm splitting a string with some separator, but want the separator matches as well: import re s = "oren;moish30.4.200/-/v6.99.5/barbi" print(re.split("\d+\.\d+\.\d+", s)) print(re.findall("\d+\.\d+\.\d+", s)) I can't find an easy way to combine the 2 lists I get: ['oren;moish', '/-/v', '/barbi'] ['30.4.200', '6.99.5'] Into the desired output: ['oren;moish', '30.4.200', '/-/v', '6.99.5', '/barbi'] | From the re.split docs: If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list. So just wrap your regex in a capturing group: print(re.split(r"(\d+\.\d+\.\d+)", s)) | 5 | 4 |
73,564,771 | 2022-9-1 | https://stackoverflow.com/questions/73564771/fastapi-is-very-slow-in-returning-a-large-amount-of-json-data | I have a FastAPI GET endpoint that is returning a large amount of JSON data (~160,000 rows and 45 columns). Unsurprisingly, it is extremely slow to return the data using json.dumps(). I am first reading the data from a file using json.loads() and filtering it per the inputted parameters. Is there a faster way to return the data to the user than using return data? It takes nearly a minute in the current state. My code currently looks like this: # helper function to parse parquet file (where data is stored) def parse_parquet(file_path): df = pd.read_parquet(file_path) result = df.to_json(orient = 'records') parsed = json.loads(result) return parsed @app.get('/endpoint') # has several more parameters async def some_function(year = int | None = None, id = str | None = None): if year is None: data = parse_parquet(f'path/{year}_data.parquet') # no year if year is not None: data = parse_parquet(f'path/all_data.parquet') if id is not None: data = [d for d in data if d['id'] == id] return data | One of the reasons for the response being that slow is that in your parse_parquet() method, you initially convert the file into JSON (using df.to_json()), then into a dictionary (using json.loads()) and finally into JSON again, as FastAPI, behind the scenes, automatically converts the returned value into JSON-compatible data using the jsonable_encoder, and then uses the Python standard json.dumps() to serialize the object, a process that is quite slow (see this answer for more details). As suggested by @MatsLindh in the comments section, you could use alternative JSON encoders, such as orjson or ujson (see this answer as well), which would indeed speed up the process, compared to letting FastAPI use the jsonable_encoder and then the standard json.dumps() for converting the data into JSON. However, using pandas to_json() and returning a custom Response directly, as described in Option 1 (Update 2) of this answer, seems to be the best-performing solution. You can use the code given below, which uses a custom APIRoute class, to compare the response time for all available solutions. Use your own parquet file or the below code to create a sample parquet file consisting of 160K rows and 45 columns. create_parquet.py import pandas as pd import numpy as np columns = ['C' + str(i) for i in range(1, 46)] df = pd.DataFrame(data=np.random.randint(99999, 99999999, size=(160000,45)),columns=columns) df.to_parquet('data.parquet') Run the FastAPI app below and access each endpoint separately to inspect the time taken to complete the process of loading and converting the data into JSON. app.py from fastapi import FastAPI, APIRouter, Response, Request from fastapi.routing import APIRoute from typing import Callable import pandas as pd import json import time import ujson import orjson class TimedRoute(APIRoute): def get_route_handler(self) -> Callable: original_route_handler = super().get_route_handler() async def custom_route_handler(request: Request) -> Response: before = time.time() response: Response = await original_route_handler(request) duration = time.time() - before response.headers["Response-Time"] = str(duration) print(f"route duration: {duration}") return response return custom_route_handler app = FastAPI() router = APIRouter(route_class=TimedRoute) @router.get("/defaultFastAPIencoder") def get_data_default(): df = pd.read_parquet('data.parquet') return df.to_dict(orient="records") @router.get("/orjson") def get_data_orjson(): df = pd.read_parquet('data.parquet') return Response(orjson.dumps(df.to_dict(orient='records')), media_type="application/json") @router.get("/ujson") def get_data_ujson(): df = pd.read_parquet('data.parquet') return Response(ujson.dumps(df.to_dict(orient='records')), media_type="application/json") # Preferred way @router.get("/pandasJSON") def get_data_pandasJSON(): df = pd.read_parquet('data.parquet') return Response(df.to_json(orient="records"), media_type="application/json") app.include_router(router) Even though the response time is quite fast using /pandasJSON above (and this should be the preferred way), you may encounter some delay in displaying the data in the browser. That, however, has nothing to do with the server side, but with the client side, as the browser is trying to display a large amount of data. If you don't want to display the data, but instead let the user download the data to their device (which would be much faster), you can set the Content-Disposition header in the Response using the attachment parameter and passing a filename as well, indicating to the browser that the file should be downloaded. For more details, have a look at this answer and this answer. @router.get("/download") def get_data(): df = pd.read_parquet('data.parquet') headers = {'Content-Disposition': 'attachment; filename="data.json"'} return Response(df.to_json(orient="records"), headers=headers, media_type='application/json') I should also mention that there is a library, called Dask, which can handle large datasets, as described here, in case you had to process a large amount of records that is taking too long to complete. Similar to Pandas, you can use the .read_parquet() method to read the file. As Dask doesn't seem to provide an equivalent .to_json() method, you could convert the Dask DataFrame to a Pandas DataFrame using df.compute(), and then use Pandas df.to_json() to convert the DataFrame into a JSON string, and return it as demonstrated above. I would also suggest you take a look at this answer, which provides details and solutions on streaming/returning a DataFrame, in case you are dealing with a large amount of data where converting it into JSON (using .to_json()) or CSV (using .to_csv()) may cause memory issues on the server side, if you opt to store the output string (either JSON or CSV) into RAM (which is the default behaviour, if you don't pass a path parameter to the aforementioned functions), since a large amount of memory would already be allocated for the original DataFrame as well. | 5 | 13 |
73,567,187 | 2022-9-1 | https://stackoverflow.com/questions/73567187/how-do-i-change-the-size-of-the-flet-window-on-windows-or-specify-that-it-is-not | In Flet on Windows, I'm running the calc demo and trying to modify properties of the application window in Python. How do I change the size of the Flet window in code and specify that it should not be user resizable? (Ideally this post should be tagged with 'Flet' but the tag doesn't exist yet as the project's in it's infancy and I don't have the 1500 points required to created it.) | You can use this as an example import flet as ft def main(page: ft.Page): page.window.width = 200 # window's width is 200 px page.window.height = 200 # window's height is 200 px page.window.resizable = False # window is not resizable page.update() ft.app(target=main) | 4 | 13 |
73,600,082 | 2022-9-4 | https://stackoverflow.com/questions/73600082/how-to-reference-a-requirements-txt-in-the-pyproject-toml-of-a-setuptools-projec | I'm trying to migrate a setuptools-based project from the legacy setup.py towards modern pyproject.toml configuration. At the same time I want to keep well established workflows based on pip-compile, i.e., a requirements.in that gets compiled to a requirements.txt (for end-user / non-library projects of course). This has important benefits as a result of the full transparency: 100% reproducible installs due to pinning the full transitive closure of dependencies. better understanding of dependency conflicts in the transitive closure of dependencies. For this reason I don't want to maintain the dependencies directly inside the pyproject.toml via a dependencies = [] list, but rather externally in the pip-compiled managed requirements.txt. This makes me wonder: Is there a way to reference a requirements.txt file in the pyproject.toml configuration, without having to fallback to a setup.py script? | In setuptools 62.6 the file directive was made available for dependencies and optional-dependencies. Use dynamic metadata: [project] dynamic = ["dependencies"] [tool.setuptools.dynamic] dependencies = {file = ["requirements.txt"]} Note that the referenced file will use a requirements.txt-like syntax; each line must conform to PEP 508, so flags like -r, -c, and -e are not supported inside this requirements.txt. Also note that this capability is still technically in beta. Additionally: If you are using an old version of setuptools, you might need to ensure that all files referenced by the file directive are included in the sdist (you can do that via MANIFEST.in or using plugins such as setuptools-scm, please have a look on [sic] Controlling files in the distribution for more information). Changed in version 66.1.0: Newer versions of setuptools will automatically add these files to the sdist. If you want to use optional-dependencies, say, with a requirements-dev.txt, you will need to put an extra group, as follows (credit to Billbottom): [project] dynamic = ["dependencies", "optional-dependencies"] [tool.setuptools.dynamic] dependencies = {file = ["requirements.txt"]} optional-dependencies = {dev = { file = ["requirements-dev.txt"] }} However: Currently, when specifying optional-dependencies dynamically, all of the groups must be specified dynamically; one can not specify some of them statically and some of them dynamically. | 87 | 126 |
73,596,058 | 2022-9-3 | https://stackoverflow.com/questions/73596058/creating-an-sqlalchemy-engine-based-on-psycopg3 | I need to upgrade the following code to some equivalent code based on psycopg version 3: import psycopg2 from sqlalchemy import create_engine engine = create_engine('postgresql+psycopg2://', creator=connector) This psycopg2 URL worked like a charm, but: import psycopg # v3.1 from sqlalchemy import create_engine engine = create_engine('postgresql+psycopg://', creator=connector) (I also tried the 'psycopg3' word without success) returns: Traceback (most recent call last): File "/tmp/ipykernel_1032556/253047102.py", line 1, in <cell line: 1> engine = create_engine('postgresql+psycopg://', creator=connector) File "<string>", line 2, in create_engine File "/usr/local/lib/python3.10/dist-packages/sqlalchemy/util/deprecations.py", line 309, in warned return fn(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/sqlalchemy/engine/create.py", line 534, in create_engine entrypoint = u._get_entrypoint() File "/usr/local/lib/python3.10/dist-packages/sqlalchemy/engine/url.py", line 661, in _get_entrypoint cls = registry.load(name) File "/usr/local/lib/python3.10/dist-packages/sqlalchemy/util/langhelpers.py", line 343, in load raise exc.NoSuchModuleError( NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.psycopg So, how to properly create an SQLAlchemy engine based on psycopg (v3.x)? My sqlalchemy version is: '1.4.35' (tried version 1.4.40 but face an AttributeError: module 'sqlalchemy' has no attribute 'dialects' error). psycopg3 doc: https://www.psycopg.org/psycopg3/docs/api/ sqlalchemy doc: https://docs.sqlalchemy.org/en/14/core/engines.html | Just an update to the current answer: Sqlalchemy 2.0 has been released and it does support psycopg3. You need to upgrade to 2.0 to use it. Note the connection string will have to be changed from postgresql to postgresql+psycopg or SqlAlchemy will (at time of writing) try using psycopg2. See docs here for more info: https://docs.sqlalchemy.org/en/20/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg | 23 | 38 |
73,585,779 | 2022-9-2 | https://stackoverflow.com/questions/73585779/how-to-return-a-pdf-file-from-in-memory-buffer-using-fastapi | I want to get a PDF file from s3 and then return it to the frontend from FastAPI backend. This is my code: @router.post("/pdf_document") def get_pdf(document : PDFRequest) : s3 = boto3.client('s3') file=document.name f=io.BytesIO() s3.download_fileobj('adm2yearsdatapdf', file,f) return StreamingResponse(f, media_type="application/pdf") This API returns 200 status code, but it does not return the PDF file as a response. | As the entire file data are already loaded into memory, there is no actual reason for using StreamingResponse. You should instead use Response, by passing the file bytes (use BytesIO.getvalue() to get the bytes containing the entire contents of the buffer), defining the media_type, as well as setting the Content-Disposition header, so that the PDF file can be either viewed in the browser or downloaded to the user's device. For more details and examples, please have a look at this answer, as well as this and this. Related answer can also be found here. Additionally, as the buffer is discarded when the close()method is called, you could also use FastAPI/Starlette's BackgroundTasks to close the buffer after returning the response, in order to release the memory. Alternatively, you could get the bytes using pdf_bytes = buffer.getvalue(), then close the buffer using buffer.close() and finally, return Response(pdf_bytes, headers=.... Example from fastapi import Response, BackgroundTasks @app.get("/pdf") def get_pdf(background_tasks: BackgroundTasks): buffer = io.BytesIO() # BytesIO stream containing the pdf data # ... background_tasks.add_task(buffer.close) headers = {'Content-Disposition': 'inline; filename="out.pdf"'} return Response(buffer.getvalue(), headers=headers, media_type='application/pdf') To have the PDF file downloaded rather than viewed in the borwser, use: headers = {'Content-Disposition': 'attachment; filename="out.pdf"'} | 8 | 14 |
73,616,000 | 2022-9-6 | https://stackoverflow.com/questions/73616000/hide-pandas-warning-sqlalchemy | I want to hide this warning UserWarning: pandas only support SQLAlchemy connectable(engine/connection) ordatabase string URI or sqlite3 DBAPI2 connectionother DBAPI2 objects are not tested, please consider using SQLAlchemy and I've tried import warnings warnings.simplefilter(action='ignore', category=UserWarning) import pandas but the warning still shows. My python script read data from databases. I'm using pandas.read_sql for SQL queries and psycopg2 for db connections. Also I'd like to know which line triggers the warning. | I tried this and it doesn't work. import warnings warnings.filterwarnings('ignore') Therefore, I used SQLAlchemy (as the warning message wants me to do so) to wrap the psycopg2 connection. I followed the instruction here: SQLAlchemy for psycopg2 documentation A simple example: import psycopg2 import sqlalchemy import pandas as pd conn = sqlalchemy.create_engine(f"postgresql+psycopg2://{user}:{pw}@{host}:{port}/{db}") query = "select count(*) from my_table" pd.read_sql(query, conn) The warning doesn't get triggered anymore. | 5 | 9 |
73,599,734 | 2022-9-4 | https://stackoverflow.com/questions/73599734/python-dataclass-one-attribute-referencing-other | @dataclass class Stock: symbol: str price: float = get_price(symbol) Can a dataclass attribute access to the other one? In the above example, one can create a Stock by providing a symbol and the price. If price is not provided, it defaults to a price which we get from some function get_price. Is there a way to reference symbol? This example generates error NameError: name 'symbol' is not defined. | You can use __post_init__ here. Because it's going to be called after __init__, you have your attributes already populated so do whatever you want to do there: from typing import Optional from dataclasses import dataclass def get_price(name): # logic to get price by looking at `name`. return 1000.0 @dataclass class Stock: symbol: str price: Optional[float] = None def __post_init__(self): if self.price is None: self.price = get_price(self.symbol) obj1 = Stock("boo", 2000.0) obj2 = Stock("boo") print(obj1.price) # 2000.0 print(obj2.price) # 1000.0 So if user didn't pass price while instantiating, price is None. So you can check it in __post_init__ and ask it from get_price. There is also another shape of the above answer which basically adds nothing more to the existing one. I just added for the records since someone might attempt to do this as well and wonder how is it different with the previous one: @dataclass class Stock: symbol: str price: InitVar[Optional[float]] = None def __post_init__(self, price): self.price = get_price(self.symbol) if price is None else price You mark the price as InitVar and you can get it with a parameter named price in the __post_init__ method. | 10 | 13 |
73,550,398 | 2022-8-31 | https://stackoverflow.com/questions/73550398/how-to-download-a-large-file-using-fastapi | I am trying to download a large file (.tar.gz) from FastAPI backend. On server side, I simply validate the filepath, and I then use Starlette.FileResponse to return the whole fileβjust like what I've seen in many related questions on StackOverflow. Server side: return FileResponse(path=file_name, media_type='application/octet-stream', filename=file_name) After that, I get the following error: File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 149, in serialize_response return jsonable_encoder(response_content) File "/usr/local/lib/python3.10/dist-packages/fastapi/encoders.py", line 130, in jsonable_encoder return ENCODERS_BY_TYPE[type(obj)](obj) File "pydantic/json.py", line 52, in pydantic.json.lambda UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte I also tried using StreamingResponse, but got the same error. Any other ways to do it? The StreamingResponse in my code: @x.post("/download") async def download(file_name=Body(), token: str | None = Header(default=None)): file_name = file_name["file_name"] # should be something like xx.tar def iterfile(): with open(file_name,"rb") as f: yield from f return StreamingResponse(iterfile(),media_type='application/octet-stream') Ok, here is an update to this problem. I found the error did not occur on this api, but the api doing forward request of this. @("/") def f(): req = requests.post(url ="/download") return req.content And here if I returned a StreamingResponse with .tar file, it led to (maybe) encoding problems. When using requests, remember to set the same media-type. Here is media_type='application/octet-stream'. And it works! | If you find yield from f being rather slow when using StreamingResponse with file-like objects, for instance: from fastapi import FastAPI from fastapi.responses import StreamingResponse some_file_path = 'large-video-file.mp4' app = FastAPI() @app.get('/') def main(): def iterfile(): with open(some_file_path, mode='rb') as f: yield from f return StreamingResponse(iterfile(), media_type='video/mp4') you could instead create a generator where you read the file in chunks using a specified chunk size; hence, speeding up the process. Examples can be found below. Note that StreamingResponse can take either an async generator or a normal generator/iterator to stream the response body. In case you used the standard open() method that doesn't support async/await, you would have to declare the generator function with normal def. Regardless, FastAPI/Starlette will still work asynchronously, as it will check whether the generator you passed is asynchronous (as shown in the source code), and if is not, it will then run the generator in a separate thread, using iterate_in_threadpool, that is then awaited. You can set the Content-Disposition header in the response (as described in this answer, as well as here and here) to indicate whether the content is expected to be displayed inline in the browser (if you are streaming, for example, a .mp4 video, .mp3 audio file, etc), or as an attachment that is downloaded and saved locally (using the specified filename). As for the media_type (also known as MIME type), there are two primary MIME types (see Common MIME types): text/plain is the default value for textual files. A textual file should be human-readable and must not contain binary data. application/octet-stream is the default value for all other cases. 
An unknown file type should use this type. For a file with .tar extension, as shown in your question, you can also use a different subtype from octet-stream, that is, x-tar. Otherwise, if the file is of unknown type, stick to application/octet-stream. See the linked documentation above for a list of common MIME types. Option 1 - Using normal generator from fastapi import FastAPI from fastapi.responses import StreamingResponse CHUNK_SIZE = 1024 * 1024 # = 1MB - adjust the chunk size as desired some_file_path = 'large_file.tar' app = FastAPI() @app.get('/') def main(): def iterfile(): with open(some_file_path, 'rb') as f: while chunk := f.read(CHUNK_SIZE): yield chunk headers = {'Content-Disposition': 'attachment; filename="large_file.tar"'} return StreamingResponse(iterfile(), headers=headers, media_type='application/x-tar') Option 2 - Using async generator with aiofiles from fastapi import FastAPI from fastapi.responses import StreamingResponse import aiofiles CHUNK_SIZE = 1024 * 1024 # = 1MB - adjust the chunk size as desired some_file_path = 'large_file.tar' app = FastAPI() @app.get('/') async def main(): async def iterfile(): async with aiofiles.open(some_file_path, 'rb') as f: while chunk := await f.read(CHUNK_SIZE): yield chunk headers = {'Content-Disposition': 'attachment; filename="large_file.tar"'} return StreamingResponse(iterfile(), headers=headers, media_type='application/x-tar') | 5 | 17 |
73,589,431 | 2022-9-3 | https://stackoverflow.com/questions/73589431/value-of-field-type-must-be-one-of-4-in-a-modal-using-discord-py-2-0 | I am trying to show the user a Modal after they display a button, which contains a dropdown select menu from which they can choose multiple options. This code has functioned in the past, but is not causing an exception. Specifically: [2022-09-02 22:30:47] [ERROR ] discord.ui.view: Ignoring exception in view <TestButtonView timeout=180.0 children=1> for item <Button style=<ButtonStyle.primary: 1> url=None disabled=False label='Test' emoji=None row=None> Traceback (most recent call last): File "C:\Users\adria\PycharmProjects\sblBot\venv\lib\site-packages\discord\ui\view.py", line 425, in _scheduled_task await item.callback(interaction) File "C:\Users\adria\PycharmProjects\sblBot\main.py", line 1131, in test_button_callback await interaction.response.send_modal(TestModal()) File "C:\Users\adria\PycharmProjects\sblBot\venv\lib\site-packages\discord\interactions.py", line 852, in send_modal await adapter.create_interaction_response( File "C:\Users\adria\PycharmProjects\sblBot\venv\lib\site-packages\discord\webhook\async_.py", line 220, in request raise HTTPException(response, data) discord.errors.HTTPException: 400 Bad Request (error code: 50035): Invalid Form Body In data.components.0.components.0: Value of field "type" must be one of (4,). I have reduced my code to the minimal reproducible example of my issue. Here is the code for the Modal: class TestModal(discord.ui.Modal, title='Test'): def __init__(self, **kw): super().__init__(**kw) select = discord.ui.Select( placeholder='Select a tier.', options=[discord.SelectOption(label='test')] ) async def on_submit(self, interaction: discord.Interaction): await interaction.response.defer() And here is the code for the view with the button (the f): class TestButtonView(discord.ui.View): def __init__(self, **kw): super().__init__(**kw) self.add_buttons() def add_buttons(self): test_button = discord.ui.Button(label='Test', style=discord.ButtonStyle.blurple) async def test_button_callback(interaction: discord.Interaction): await interaction.response.send_modal(TestModal()) test_button.callback = test_button_callback self.add_item(test_button) And finally, the command to send the button view: @client.command(hidden=True) async def test(ctx): await ctx.send(view=TestButtonView()) | ui.Modal does only support items with type 4, which is a ui.TextInput. Means ui.Select is not a supported item (yet). See Component Types table: Discord API Documentation The error is quite inaccurate, but it's intended that the error handling is not done better because of future purposes. See here: Add better errors for incorrect items added to modal | 6 | 6 |
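Since the answer above boils down to "a Modal only accepts TextInput (component type 4)", here is a minimal sketch of a Modal that passes that validation, assuming discord.py 2.x; the label and placeholder strings are illustrative only. If an actual dropdown is required, the Select has to be placed in a regular discord.ui.View sent as a message rather than inside a Modal.

```python
import discord

class TestModal(discord.ui.Modal, title='Test'):
    # TextInput is component type 4 - the only item type a Modal accepts.
    tier = discord.ui.TextInput(label='Tier', placeholder='Enter a tier.')

    async def on_submit(self, interaction: discord.Interaction):
        await interaction.response.send_message(f'You entered: {self.tier.value}')
```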
73,596,677 | 2022-9-4 | https://stackoverflow.com/questions/73596677/uwsgi-locking-up-after-a-few-requests-with-nginx-traefik-flask-app-running-over | Problem I have an app that uses nginx to serve my Python Flask app in production that only after a few requests starts locking up and timing out (will serve the first request or two quickly then start timing out and locking up afterwards). The Nginx app is served via Docker, the uwsgi Python app is served on barebones macOS (this Python app interfaces with the Docker instance running on the OS itself), the routing occurs via Traefik. Findings This problem only occurs in production and the only difference there is I'm using Traefik's LetsEncrypt SSL certs to use HTTPS to protect the API. I've narrowed the problem down to the following two docker-compose config lines (when present the problem persists, when removed the problem is corrected but SSL no longer is enabled): - "traefik.http.routers.harveyapi.tls=true" - "traefik.http.routers.harveyapi.tls.certresolver=letsencrypt" Once locked up, I must restart the uwsgi processes to fix the problem just to have it lock right back up. Restarting nginx (Docker container) doesn't fix the problem which leads me to believe that uwsgi doesn't like the SSL config I'm using? Once I disable SSL support, I can send 2000 requests to the API and have it only take a second or two. Once enabled again, uwsgi can't even respond to 2 requests. Desired Outcome I'd like to be able to support SSL certs to enforce HTTPS connections to this API. I can currently run HTTP with this setup fine (thousands of concurrent connections) but that breaks when trying to use HTTPS. Configs I host dozens of other PHP sites with near identical setups. The only difference between those projects and this one is that they run PHP in Docker and this runs Python Uwsgi on barebones macOS. Here is the complete dump of configs for this project: traefik.toml # Traefik v2 Configuration # Documentation: https://doc.traefik.io/traefik/migration/v1-to-v2/ [entryPoints] # http should be redirected to https [entryPoints.web] address = ":80" [entryPoints.web.http.redirections.entryPoint] to = "websecure" scheme = "https" [entryPoints.websecure] address = ":443" [entryPoints.websecure.http.tls] certResolver = "letsencrypt" # Enable ACME (Let's Encrypt): automatic SSL [certificatesResolvers.letsencrypt.acme] email = "[email protected]" storage = "/etc/traefik/acme/acme.json" [certificatesResolvers.letsencrypt.acme.httpChallenge] entryPoint = "web" [log] level = "DEBUG" # Enable Docker Provider [providers.docker] endpoint = "unix:///var/run/docker.sock" exposedByDefault = false # Must pass `traefik.enable=true` label to use Traefik network = "traefik" # Enable Ping (used for healthcheck) [ping] docker-compose.yml version: "3.8" services: harvey-nginx: build: . 
restart: always networks: - traefik labels: - traefik.enable=true labels: - "traefik.http.routers.harveyapi.rule=Host(`project.com`, `www.project.com`)" - "traefik.http.routers.harveyapi.tls=true" - "traefik.http.routers.harveyapi.tls.certresolver=letsencrypt" networks: traefik: name: traefik uwsgi.ini [uwsgi] ; uwsgi setup master = true memory-report = true auto-procname = true strict = true vacuum = true die-on-term = true need-app = true ; concurrency enable-threads = true cheaper-initial = 5 ; workers to spawn on startup cheaper = 2 ; minimum number of workers to go down to workers = 10 ; highest number of workers to run ; workers harakiri = 60 ; Restart workers if they have hung on a single request max-requests = 500 ; Restart workers after this many requests max-worker-lifetime = 3600 ; Restart workers after this many seconds reload-on-rss = 1024 ; Restart workers after this much resident memory reload-mercy = 3 ; How long to wait before forcefully killing workers worker-reload-mercy = 3 ; How long to wait before forcefully killing workers ; app setup protocol = http socket = 127.0.0.1:5000 module = wsgi:APP ; daemonization ; TODO: Name processes `harvey` here daemonize = /tmp/harvey_daemon.log nginx.conf server { listen 80; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; location / { include uwsgi_params; # TODO: Please note this only works for macOS: https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host # and will require adjusting for your OS. proxy_pass http://host.docker.internal:5000; } } Dockerfile FROM nginx:1.23-alpine RUN rm /etc/nginx/conf.d/default.conf COPY nginx.conf /etc/nginx/conf.d Additional Context I've added additional findings on the GitHub issue where I've documented my journey for this problem: https://github.com/Justintime50/harvey/issues/67 | This is no longer a problem and the solution is real frustrating - it was Docker's fault. For ~6 months there was a bug in Docker that was dropping connections (ultimately leading to the timeouts mentioned above) which was finally fixed in Docker Desktop 4.14. The moment I upgraded Docker (it had just come out at the time and I thought I would try the hail Mary upgrade having already turned every dial and adjusted every config param without any luck), it finally stopped timing out and dropping connections. I was suddenly able to send through tens of thousands of concurrent requests without issue. TLDR: uWSGI, Nginx, nor my config were at fault here. Docker had a bug that has been patched. If others on macOS are facing this problem, try upgrading to at least Docker Dekstop 4.14. | 4 | 6 |
73,604,954 | 2022-9-5 | https://stackoverflow.com/questions/73604954/error-when-using-python-kaleido-from-r-to-convert-plotly-graph-to-static-image | I am trying to use the R reticulate package to convert a plotly graph to a static image. I am using save_image/kaleido. Link to documentation for save_image / kaleido Initial setup: install.packages("reticulate") reticulate::install_miniconda() reticulate::conda_install('r-reticulate-test', 'python-kaleido') reticulate::conda_install('r-reticulate-test', 'plotly', channel = 'plotly') reticulate::use_miniconda('r-reticulate-test') Here is my (buggy) attempt: > library(plotly) > p <- plot_ly(x = 1:10) > save_image(p,"test.png") No trace type specified: Based on info supplied, a 'histogram' trace seems appropriate. Read more about this trace type -> https://plotly.com/r/reference/#histogram Error in py_run_string_impl(code, local, convert) : NameError: name 'sys' is not defined > My query is : How do I fix the error that the name 'sys' is not defined? Funnily, if I do : > reticulate::repl_python() Python 3.10.6 (/root/.local/share/r-miniconda/envs/r-reticulate-test/bin/python) Reticulate 1.26.9000 REPL -- A Python interpreter in R. Enter 'exit' or 'quit' to exit the REPL and return to R. >>> import sys >>> exit > save_image(p,"test.png") No trace type specified: Based on info supplied, a 'histogram' trace seems appropriate. Read more about this trace type -> https://plotly.com/r/reference/#histogram > then it works and produces the picture that I am seeking. Can someone tell me why I need to invoke repl_python, then import sys and exit it ? How can I fix this ? I need this since I need to create an automated script to create graphs. | As @Salim B pointed out there is a workaround documented to call import sys in Python before executing save_img(): p <- plot_ly(x = 1:10) reticulate::py_run_string("import sys") save_image(p, "./pic.png") | 6 | 13 |
73,581,384 | 2022-9-2 | https://stackoverflow.com/questions/73581384/plotting-a-fancy-diagonal-correlation-matrix-with-coefficients-in-upper-triangle | I have the following synthetic dataframe, including numerical and categorical columns as well as the label column. I want to plot a diagonal correlation matrix and display correlation coefficients in the upper part as the following: expected output: Despite the point that categorical columns within synthetic dataset/dataframedf needs to be converted into numerical, So far I have used this seaborn example using 'titanic' dataset which is synthetic and fits my task, but I added label column as follows: import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set_theme(style="white") # Generate a large random dataset with synthetic nature (categorical + numerical) data = sns.load_dataset("titanic") df = pd.DataFrame(data=data) # Generate label column randomly '0' or '1' df['label'] = np.random.randint(0,2, size=len(df)) # Compute the correlation matrix corr = df.corr() # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr, dtype=bool)) # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(230, 20, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmin=-1.0, vmax=1.0, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) I checked a related post but couldn't figure it out to do this task. The best I could find so far is this workaround which can be installed using this package that gives me the following output: #!pip install heatmapz # Import the two methods from heatmap library from heatmap import heatmap, corrplot import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt sns.set_theme(style="white") # Generate a large random dataset data = sns.load_dataset("titanic") df = pd.DataFrame(data=data) # Generate label column randomly '0' or '1' df['label'] = np.random.randint(0,2, size=len(df)) # Compute the correlation matrix corr = df.corr() # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr, dtype=bool)) mask[np.diag_indices_from(mask)] = False np.fill_diagonal(mask, True) # Set up the matplotlib figure plt.figure(figsize=(8, 8)) # Draw the heatmap using "Heatmapz" package corrplot(corr[mask], size_scale=300) Sadly, corr[mask] doesn't mask the upper triangle in this package. I also noticed that in R, reaching this fancy plot is much easier, so I'm open if there is a more straightforward way to convert Python Pandas dataFrame to R dataframe since it seems there is a package, so-called rpy2 that we could use Python & R together even in Google Colab notebook: Ref.1 from rpy2.robjects import pandas2ri pandas2ri.activate() So if it is the case, I find this post1 & post2 using R for regarding Visualization of a correlation matrix. So, in short, my 1st priority is using Python and its packages Matplotlib, seaborn, Plotly Express, and then R and its packages to reach the expected output. Note I provided you with executable code in google Colab notebook with R using dataset so that you can form/test your final answer if your solution is by rpy2 otherwise I'd be interested in a Pythonic solution. | I'd be interested in a Pythonic solution. 
Use a seaborn scatter plot with matplotlib text/line annotations: Plot the lower triangle via sns.scatterplot with square markers Annotate the upper triangle via plt.text Draw the heatmap grid via plt.vlines and plt.hlines Full code using the titanic sample: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_theme(style="white") # generate sample correlation matrix df = sns.load_dataset("titanic") df["label"] = np.random.randint(0, 2, size=len(df)) corr = df.corr() # mask and melt correlation matrix mask = np.tril(np.ones_like(corr, dtype=bool)) | corr.abs().le(0.1) melt = corr.mask(mask).melt(ignore_index=False).reset_index() melt["size"] = melt["value"].abs() fig, ax = plt.subplots(figsize=(8, 6)) # normalize colorbar cmap = plt.cm.RdBu norm = plt.Normalize(-1, 1) sm = plt.cm.ScalarMappable(norm=norm, cmap=cmap) cbar = plt.colorbar(sm, ax=ax) cbar.ax.tick_params(labelsize="x-small") # plot lower triangle (scatter plot with normalized hue and square markers) sns.scatterplot(ax=ax, data=melt, x="index", y="variable", size="size", hue="value", hue_norm=norm, palette=cmap, style=0, markers=["s"], legend=False) # format grid xmin, xmax = (-0.5, corr.shape[0] - 0.5) ymin, ymax = (-0.5, corr.shape[1] - 0.5) ax.vlines(np.arange(xmin, xmax + 1), ymin, ymax, lw=1, color="silver") ax.hlines(np.arange(ymin, ymax + 1), xmin, xmax, lw=1, color="silver") ax.set(aspect=1, xlim=(xmin, xmax), ylim=(ymax, ymin), xlabel="", ylabel="") ax.tick_params(labelbottom=False, labeltop=True) plt.xticks(rotation=90) # annotate upper triangle for y in range(corr.shape[0]): for x in range(corr.shape[1]): value = corr.mask(mask).to_numpy()[y, x] if pd.notna(value): plt.text(x, y, f"{value:.2f}", size="x-small", # color=sm.to_rgba(value), weight="bold", ha="center", va="center") Note that since most of these titanic correlations are low, I disabled the text coloring for readability. If you want color-coded text, uncomment the color=sm.to_rgba(value) line at the end: | 10 | 1 |
73,593,712 | 2022-9-3 | https://stackoverflow.com/questions/73593712/calculating-similarities-of-text-embeddings-using-clip | I am trying to use CLIP to calculate the similarities between strings. (I know that CLIP is usually used with text and images but it should work with only strings as well.) I provide a list of simple text prompts and calculate the similarity between their embeddings. The similarities are off but I can't figure what I'm doing wrong. import torch import clip from torch.nn import CosineSimilarity cos = CosineSimilarity(dim=1, eps=1e-6) def gen_features(model, text): tokens = clip.tokenize([text]).to(device) text_features = model.encode_text(tokens) return text_features def dist(v1, v2): #return torch.dist(normalize(v1), normalize(v2)) # euclidean distance #return cos(normalize(v1), normalize(v2)).item() # cosine similarity similarity = (normalize(v1) @ normalize(v2).T) return similarity.item() device = "cuda" if torch.cuda.is_available() else "cpu" model_name = "ViT-B/32" model, _ = clip.load(model_name, device=device) sentences = ["A cat", "A dog", "A labrador", "A poodle", "A wolf", "A lion", "A house"] with torch.no_grad(): embeddings = [(sentence, gen_features(model, sentence)) for sentence in sentences] for label1, embedding1 in embeddings: for label2, embedding2 in embeddings: print(f"{label1} -> {label2}: {dist(embedding1, embedding2)}") Output A cat -> A cat: 0.9999998211860657 A cat -> A dog: 0.9361147880554199 A cat -> A labrador: 0.8170720934867859 A cat -> A poodle: 0.8438302278518677 A cat -> A wolf: 0.9086413979530334 A cat -> A lion: 0.8914517164230347 A cat -> A house: 0.8724125027656555 A dog -> A cat: 0.9361147880554199 A dog -> A dog: 1.0000004768371582 A dog -> A labrador: 0.8481228351593018 A dog -> A poodle: 0.9010260105133057 A dog -> A wolf: 0.9260395169258118 A dog -> A lion: 0.886112630367279 A dog -> A house: 0.8852840662002563 A labrador -> A cat: 0.8170720934867859 A labrador -> A dog: 0.8481228351593018 A labrador -> A labrador: 1.000000238418579 A labrador -> A poodle: 0.7722526788711548 A labrador -> A wolf: 0.8111101984977722 A labrador -> A lion: 0.783727765083313 A labrador -> A house: 0.7569846510887146 A poodle -> A cat: 0.8438302278518677 A poodle -> A dog: 0.9010260105133057 A poodle -> A labrador: 0.7722526788711548 A poodle -> A poodle: 0.999999463558197 A poodle -> A wolf: 0.8539597988128662 A poodle -> A lion: 0.8460092544555664 A poodle -> A house: 0.8119628429412842 A wolf -> A cat: 0.9086413979530334 A wolf -> A dog: 0.9260395169258118 A wolf -> A labrador: 0.8111101984977722 A wolf -> A poodle: 0.8539597988128662 A wolf -> A wolf: 1.000000238418579 A wolf -> A lion: 0.9043934941291809 A wolf -> A house: 0.860664427280426 A lion -> A cat: 0.8914517164230347 A lion -> A dog: 0.886112630367279 A lion -> A labrador: 0.783727765083313 A lion -> A poodle: 0.8460092544555664 A lion -> A wolf: 0.9043934941291809 A lion -> A lion: 1.0000004768371582 A lion -> A house: 0.8402873873710632 A house -> A cat: 0.8724125027656555 A house -> A dog: 0.8852840662002563 A house -> A labrador: 0.7569846510887146 A house -> A poodle: 0.8119628429412842 A house -> A wolf: 0.860664427280426 A house -> A lion: 0.8402873873710632 A house -> A house: 0.9999997615814209 The results show that a dog is closer to a house than it is for a labrador 0.885 vs 0.848 which doesn't make sense. I've tried cosine similarity and euclidean distance to check whether the distance measure was wrong, but the results are similar. Where am I going wrong? 
| If you use the text embeddings from the output of CLIPTextModel ([number of prompts, 77, 512]), flatten them ([number of prompts, 39424]) and the apply cosine similarity, you'll get improved results. This code lets you test both solutions ([1,512] and [77,512]). I'm running it on Google Colab. !pip install -U torch transformers import torch from torch.nn import CosineSimilarity from transformers import CLIPTokenizer, CLIPModel, CLIPTextModel cossim = CosineSimilarity(dim=0, eps=1e-6) def dist(v1, v2): return cossim(v1, v2) torch_device = "cuda" if torch.cuda.is_available() else "cpu" models = [ 'openai/clip-vit-base-patch16', 'openai/clip-vit-base-patch32', 'openai/clip-vit-large-patch14', ] model_id = models[1] tokenizer = CLIPTokenizer.from_pretrained(model_id) text_encoder = CLIPTextModel.from_pretrained(model_id).to(torch_device) model = CLIPModel.from_pretrained(model_id).to(torch_device) prompts = [ "A cat", "A dog", "A labrador", "A poodle", "A wolf", "A lion", "A house", ] text_inputs = tokenizer( prompts, padding="max_length", return_tensors="pt", ).to(torch_device) text_features = model.get_text_features(**text_inputs) text_embeddings = torch.flatten(text_encoder(text_inputs.input_ids.to(torch_device))['last_hidden_state'],1,-1) print("\n\nusing text_features") for i1, label1 in enumerate(prompts): for i2, label2 in enumerate(prompts): if (i2>=i1): print(f"{label1} <-> {label2} = {dist(text_features[i1], text_features[i2]):.4f}") print("\n\nusing text_embeddings") for i1, label1 in enumerate(prompts): for i2, label2 in enumerate(prompts): if (i2>=i1): print(f"{label1} <-> {label2} = {dist(text_embeddings[i1], text_embeddings[i2]):.4f}") You'll get the same values for the [1,512] embedding A cat <-> A cat = 1.0000 A cat <-> A dog = 0.9361 A cat <-> A labrador = 0.8171 A cat <-> A poodle = 0.8438 A cat <-> A wolf = 0.9086 A cat <-> A lion = 0.8915 A cat <-> A house = 0.8724 A dog <-> A dog = 1.0000 **A dog <-> A labrador = 0.8481** A dog <-> A poodle = 0.9010 A dog <-> A wolf = 0.9260 A dog <-> A lion = 0.8861 **A dog <-> A house = 0.8853** A labrador <-> A labrador = 1.0000 A labrador <-> A poodle = 0.7723 A labrador <-> A wolf = 0.8111 A labrador <-> A lion = 0.7837 A labrador <-> A house = 0.7570 A poodle <-> A poodle = 1.0000 A poodle <-> A wolf = 0.8540 A poodle <-> A lion = 0.8460 A poodle <-> A house = 0.8120 A wolf <-> A wolf = 1.0000 A wolf <-> A lion = 0.9044 A wolf <-> A house = 0.8607 A lion <-> A lion = 1.0000 A lion <-> A house = 0.8403 A house <-> A house = 1.0000 But the results have improved with the [1,77,512] embedding, and now the dog is closer to the labrador than to the house. Still, you'll get funny results such as the cat being more similar to a house than to a poodle. 
A cat <-> A cat = 1.0000 A cat <-> A dog = 0.8880 A cat <-> A labrador = 0.8057 A cat <-> A poodle = 0.7579 A cat <-> A wolf = 0.8558 A cat <-> A lion = 0.8358 A cat <-> A house = 0.8024 A dog <-> A dog = 1.0000 **A dog <-> A labrador = 0.8794** A dog <-> A poodle = 0.8583 A dog <-> A wolf = 0.8888 A dog <-> A lion = 0.8265 **A dog <-> A house = 0.8294** A labrador <-> A labrador = 1.0000 A labrador <-> A poodle = 0.8006 A labrador <-> A wolf = 0.8182 A labrador <-> A lion = 0.7958 A labrador <-> A house = 0.7608 A poodle <-> A poodle = 1.0000 A poodle <-> A wolf = 0.7928 A poodle <-> A lion = 0.7735 A poodle <-> A house = 0.7623 A wolf <-> A wolf = 1.0000 A wolf <-> A lion = 0.8496 A wolf <-> A house = 0.8063 A lion <-> A lion = 1.0000 A lion <-> A house = 0.7671 A house <-> A house = 1.0000 | 10 | 9 |
73,584,269 | 2022-9-2 | https://stackoverflow.com/questions/73584269/vertex-ai-automatic-retraining | I'm trying to create a Vertex AI endpoint with Monitoring enabled that can trigger a Vertex AI pipeline execution when one of the deployed models drops in performance. However, Vertex AI does not provide any built-in feature to do this. Is there a method to capture the alert thrown by Vertex AI Monitoring and trigger the Pipeline? | The Vertex AI Model Monitoring jobs are logged as part of Cloud Logging [1]. You can react to those log entries using log-based alerts [2]. For that, you need to configure a notification channel to Pub/Sub [3]. Based on those Pub/Sub messages you can trigger a Cloud Function [4]. The Cloud Function can then initiate a Vertex AI Pipeline run to re-train the model [5]. [1] https://cloud.google.com/vertex-ai/docs/model-monitoring/using-model-monitoring#cloud-logging-info [2] https://cloud.google.com/logging/docs/alerting/log-based-alerts [3] https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.notificationChannels#NotificationChannel [4] https://cloud.google.com/functions/docs/calling/pubsub [5] https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.pipelineJobs/create | 4 | 3 |
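To make the chain above concrete, here is a minimal sketch of steps 4-5: a Pub/Sub-triggered Cloud Function (1st-gen background signature) that launches a compiled Vertex AI pipeline. The project ID, region, bucket paths, and display name are placeholder assumptions, not values from the question.

```python
from google.cloud import aiplatform

PROJECT = "my-project"                                   # assumption: your project ID
REGION = "us-central1"                                   # assumption: your region
PIPELINE_SPEC = "gs://my-bucket/retrain_pipeline.json"   # assumption: compiled pipeline spec

def trigger_retraining(event, context):
    # Entry point wired to the Pub/Sub topic used by the monitoring alert's
    # notification channel; the message payload itself is not needed here.
    aiplatform.init(project=PROJECT, location=REGION)
    job = aiplatform.PipelineJob(
        display_name="retrain-on-drift",
        template_path=PIPELINE_SPEC,
        pipeline_root="gs://my-bucket/pipeline-root",
    )
    job.submit()  # fire-and-forget so the function returns quickly
```

Deploying this function with a Pub/Sub trigger on the alert's topic closes the loop described in the answer.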
73,622,815 | 2022-9-6 | https://stackoverflow.com/questions/73622815/gcp-python-delete-project-labels | I am trying to delete labels from projects, by using the UpdateProjectRequest. It says that it needs an 'update_mask', but I do not know how to created this mask. It is a google.protobuf.field_mask_pb2.FieldMask https://cloud.google.com/python/docs/reference/cloudresourcemanager/latest/google.cloud.resourcemanager_v3.types.UpdateProjectRequest What i have tried is: from google.cloud import resourcemanager_v3 from google.protobuf import field_mask_pb2 def sample_update_project(project_id): client = resourcemanager_v3.ProjectsClient() update_mask = field_mask_pb2.FieldMask(paths=["labels.newkey:None"]) resourcemanager_v3.Project=f"projects/{project_id}" operation = client.update_project(update_mask=update_mask) print("Waiting for operation to complete...") response = operation.result() print(response) Thanks | As you already mentioned, when you want to delete all labels, it requires an update mask for the labels field. Reviewing this documentation, please note that if a request is provided, this should not be set. Also, you will find a code example that might help you. Additionally, you should import filed_mask_pb2 if you will update data with a FieldMask use as follows: from google.protobuf import field_mask_pb2 Having the main question in mind, one of two methods can be used to remove a label: Use the read-modify-write pattern to completely remove the label, which removes both the key and the value. To do this, perform these steps: Use the resource's get() function to read the current labels. Use a text editor or programmatically change the returned labels to add or remove any necessary keys and their values. Call the patch() function on the resource to write the changed labels. To keep the key and remove the value, set the value to null, but reviewing your code, you are currently using None. Also, consider the following documentation:Creating and managing labels and Method: projects.patch UPDATE Working again with your main question, here is a code that it was already tested to delete labels as you requested: import googleapiclient.discovery def sample_update_project_old(project_id): manager = googleapiclient.discovery.build('cloudresourcemanager', 'v1') request = manager.projects().get(projectId=project_id) project = request.execute() del project['labels']['key'] # replace 'key' with your actual key value request = manager.projects().update(projectId=project_id, body=project) project = request.execute() sample_update_project_old("your-project-id") | 5 | 1 |
73,612,107 | 2022-9-5 | https://stackoverflow.com/questions/73612107/selenium-chromedrivermanager-webdriver-install-not-updating-through-proxy | I need to access a website using selenium and Chrome using python. Below is shortened version of my code. from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.chrome.service import Service # PROXY='https://myproxy.com:3128' PROXY = 'https://username:[email protected]:3128' proxyuser='username' #this is the proxy user name used in above string proxypwd='password' #this is the proxy password used in above string chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--proxy-server=%s' % PROXY) chrome_options.add_argument("ignore-certificate-errors"); chrome = webdriver.Chrome(options=chrome_options,service=Service(ChromeDriverManager().install())) chrome.get("https://www.google.com") while True: print('ok') I am behind a corporate proxy server which requires authentication. I am not sure how to pass login credentials and proxy settings for Install of chromedriver When above code is run without proxy, it works as expected. But it gives error as follows when run using proxy connection: [WDM] - ====== WebDriver manager ====== [WDM] - Current google-chrome version is 105.0.5195 [WDM] - Get LATEST chromedriver version for 105.0.5195 google-chrome Traceback (most recent call last): File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 1040, in _validate_conn conn.connect() File "D:\Python\Python39\lib\site-packages\urllib3\connection.py", line 414, in connect self.sock = ssl_wrap_socket( File "D:\Python\Python39\lib\site-packages\urllib3\util\ssl_.py", line 449, in ssl_wrap_socket ssl_sock = _ssl_wrap_socket_impl( File "D:\Python\Python39\lib\site-packages\urllib3\util\ssl_.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "D:\Python\Python39\lib\ssl.py", line 500, in wrap_socket return self.sslsocket_class._create( File "D:\Python\Python39\lib\ssl.py", line 1040, in _create self.do_handshake() File "D:\Python\Python39\lib\ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Python\Python39\lib\site-packages\requests\adapters.py", line 489, in send resp = conn.urlopen( File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 785, in urlopen retries = retries.increment( File "D:\Python\Python39\lib\site-packages\urllib3\util\retry.py", line 550, in increment raise six.reraise(type(error), error, _stacktrace) File "D:\Python\Python39\lib\site-packages\urllib3\packages\six.py", line 769, in reraise raise value.with_traceback(tb) File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "D:\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 1040, in _validate_conn conn.connect() File 
"D:\Python\Python39\lib\site-packages\urllib3\connection.py", line 414, in connect self.sock = ssl_wrap_socket( File "D:\Python\Python39\lib\site-packages\urllib3\util\ssl_.py", line 449, in ssl_wrap_socket ssl_sock = _ssl_wrap_socket_impl( File "D:\Python\Python39\lib\site-packages\urllib3\util\ssl_.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "D:\Python\Python39\lib\ssl.py", line 500, in wrap_socket return self.sslsocket_class._create( File "D:\Python\Python39\lib\ssl.py", line 1040, in _create self.do_handshake() File "D:\Python\Python39\lib\ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\eclipse-workspace\Essential Training\test.py", line 17, in <module> chrome = webdriver.Chrome(options=chrome_options,service=Service(ChromeDriverManager().install())) File "D:\Python\Python39\lib\site-packages\webdriver_manager\chrome.py", line 37, in install driver_path = self._get_driver_path(self.driver) File "D:\Python\Python39\lib\site-packages\webdriver_manager\core\manager.py", line 29, in _get_driver_path binary_path = self.driver_cache.find_driver(driver) File "D:\Python\Python39\lib\site-packages\webdriver_manager\core\driver_cache.py", line 95, in find_driver driver_version = driver.get_version() File "D:\Python\Python39\lib\site-packages\webdriver_manager\core\driver.py", line 42, in get_version self.get_latest_release_version() File "D:\Python\Python39\lib\site-packages\webdriver_manager\drivers\chrome.py", line 44, in get_latest_release_version resp = self._http_client.get(url=latest_release_url) File "D:\Python\Python39\lib\site-packages\webdriver_manager\core\http.py", line 31, in get resp = requests.get(url=url, verify=self._ssl_verify, **kwargs) File "D:\Python\Python39\lib\site-packages\requests\api.py", line 73, in get return request("get", url, params=params, **kwargs) File "D:\Python\Python39\lib\site-packages\requests\api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "D:\Python\Python39\lib\site-packages\requests\sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "D:\Python\Python39\lib\site-packages\requests\sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "D:\Python\Python39\lib\site-packages\requests\adapters.py", line 547, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)) Bypassing proxy is not possible because of corporate policies request("get", url, params=params, **kwargs) this line in errors i believe is the issue. it seems the driver is not able to determine the new version available on net as the request function here should have proxy settings - i am not able to see the **kwargs values, ideally - i believe, **kwargs should have proxy argument. - i am not sure but. | I have modified the code as follows. 
Essentially, I have now used a custom download manager and this is working: from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager from webdriver_manager.core.download_manager import WDMDownloadManager from webdriver_manager.core.http import HttpClient from selenium.webdriver.chrome.service import Service from requests import Response import urllib3 import requests import os os.environ['WDM_SSL_VERIFY'] = '0' capabilities = webdriver.DesiredCapabilities.CHROME urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) PROXY = "https://username:[email protected]:3128" proxyuser='username' proxypwd='password' opt = webdriver.ChromeOptions() opt.add_argument('--proxy-server=%s' % PROXY) opt.add_argument("ignore-certificate-errors") class CustomHttpClient(HttpClient): def get(self, url, params=None) -> Response: proxies={'http': 'http://username:[email protected]:3128', 'https': 'http://username:[email protected]:3128', } return requests.get(url, params,proxies=proxies,verify=False) http_client = CustomHttpClient() download_manager = WDMDownloadManager(http_client) chrome = webdriver.Chrome(service=Service(ChromeDriverManager(download_manager=download_manager).install()),options=opt) chrome.get("https://www.google.com") while True: print('ok') | 5 | 5 |
73,611,981 | 2022-9-5 | https://stackoverflow.com/questions/73611981/django-how-to-annotate-group-by-display-it-in-serializer | I have following Model class ModelAnswer(BaseModel): questions = models.ForeignKey( to=Question, on_delete=models.CASCADE ) answer = models.TextField() user = models.ForeignKey(User, on_delete=models.CASCADE) Basically usecase is answer can be added multiple time i.e at once 3 answer can be added and all answers needed to be added for particular question I just made another model to keep track of these things in next model for my easiness. class AnswerLog(BaseModel): answer = models.ForeignKey(to=ModelAnswer, on_delete=models.CASCADE, null=True, blank=True) user = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True) order = models.PositiveIntegerField(null=True, blank=True) I am getting my response in this format [ { "answer":{ "id":42, "user":1, "questions":"what did you do today", "subcategory":"Circumstance", "is_intentional":"False", "answer":"I played well", "created_at":"2022-09-05T21:00:57.604051" }, "order":1, "category":"sports" }, { "answer":{ "id":43, "user":1, "questions":"what was your achievment?", "subcategory":"Result", "is_intentional":"False", "answer":"a broked my leg", "created_at":"2022-09-05T21:00:57.626193" }, "order":1, "category":"sports" } ] I just want my above response in a bit easier format in this way just combine by order and category because both will be ( category can still be same for another answer so order will only be different for next answer i.e 2) [{ "answer":[{ "id":42, "user":1, "questions":"what did you do today", "subcategory":"Circumstance", "is_intentional":"False", "answer":"I played well", "created_at":"2022-09-05T21:00:57.604051" },{ "id":43, "user":1, "questions":"what was your achievment?", "subcategory":"Result", "is_intentional":"False", "answer":"a broked my leg", "created_at":"2022-09-05T21:00:57.626193" }], "order":1, "category":"sports", } ] My serializer is as follows class AnswerLogSerializer(serializers.ModelSerializer): answer = ListAnswerSerializer() category = serializers.CharField(source='answer.questions.category.name') class Meta: model = AnswerLog fields = ['answer','order', 'category'] My view is as class ListAnswerLogView(generics.ListAPIView): serializer_class = serializers.AnswerLogSerializer def get_queryset(self): return AnswerLog.objects.all() | View from collections import defaultdict class ListAnswerLogView(ListAPIView): serializer_class = AnswerLogSerializer def get_queryset(self): grouped_answers = defaultdict(lambda: dict(answer=set())) for answer_log in AnswerLog.objects.all(): grouped_by_key = ( answer_log.order, answer_log.answer.questions.category ) grouped_answers[grouped_by_key]['answer'].add(answer_log.answer) for key in grouped_answers: grouped_answers[key].update(dict( order=key[0], question_category=key[1] )) return grouped_answers.values() Serializer class AnswerLogSerializer(serializers.ModelSerializer): answer = ListAnswerSerializer(many=True) category = serializers.CharField(source='question_category.name') class Meta: model = AnswerLog fields = ['answer', 'order', 'category'] P.S. i tried a lot to solve the problem just using django queryset features, but each of them leads to a problem. So I went this way to use dictionary to solve the problem. | 5 | 4 |
73,623,986 | 2022-9-6 | https://stackoverflow.com/questions/73623986/sqlalchemy-how-to-create-a-composite-index-between-a-polymorphic-class-and-its | I am trying to get a composite index working between a polymorphic subclass and it's parent. Alembic autogenerate does not seem to detect Indexes outside of __table_args__. I can't use __table_args__ because, being in the subclass, it does not count my class as having a __table__. How do I create a composite Index between these? class Main(Base, SomeMixin): __tablename__ = "main" __table_args__ = ( # Some constraints and Indexes specific to main ) id = Column(String, primary_key=True, default=func.generate_object_id()) mtype = Column(String, nullable=False) __mapper_args__ = {"polymorphic_on": mtype} class SubClass(Main): __mapper_args__ = {"polymorphic_identity": "subclass"} bid = Column(String, ForeignKey("other.id", ondelete="CASCADE")) # My index specific to Subclass Index( "ix_main_bid_mtype", "bid", "mtype", ) The goal is to have something like this pop with alembic autogenerate: def upgrade(): # ### commands auto generated by Alembic - please adjust! ### op.create_index( "ix_main_bid_mtype", "main", ["bid", "mtype"], unique=False, ) # ### end Alembic commands ### def downgrade(): # ### commands auto generated by Alembic - please adjust! ### op.drop_index(op.f("ix_main_bid_mtype"), table_name="main") # ### end Alembic commands ### Thank you for your time and potential future help. EDIT: Note: The other fields are detected by autogenerate, only the index done this way does not seem to work. | Create the index externally after both classes: class Main(Base, SomeMixin): __tablename__ = "main" __table_args__ = ( # Some constraints and Indexes specific to main ) id = Column(String, primary_key=True, default=func.generate_object_id()) mtype = Column(String, nullable=False) __mapper_args__ = {"polymorphic_on": mtype} class SubClass(Main): __mapper_args__ = {"polymorphic_identity": "subclass"} bid = Column(String, ForeignKey("other.id", ondelete="CASCADE")) Index("ix_main_bid_mtype", SubClass.bid, SubClass.mtype) | 7 | 4 |
73,623,225 | 2022-9-6 | https://stackoverflow.com/questions/73623225/using-python-difflib-to-compare-more-than-two-files | I would like to get an overview over e.g. the ldd dependency list of multiple (3+) computers by comparing them with each other and highlighting the differences. For example, if I have a dict that looks as following: my_ldd_outputs = { 01:"<ldd_output>", 02:"<ldd_output>", ... 09:"<ldd_output>", 10:"<ldd_output>" } I would like the output to look something like <identical line 1> <identical line 2> <identical line 3> <differing line 4> (computer 01 02) <differing line 4> (computer 04 05 06 07) <differing line 4> (computer 08 09 10) <identical line 5> <identical line 6> ... My first approach involved python difflib, where my idea was to first get to a datastructure where all the ldd_output lists (just the result split with \n) from the abovementioned my_ldd_outputs dictionary are the same length, and any missing line that exists in another ldd_output string is added with a string. So if two files looked like this: ldd_1 = """ <identical line 1> <identical line 2> <differing line 3> <identical line 4> <extra line 5> <identical line 6> """ ldd_2 = """ <identical line 1> <identical line 2> <differing line 3> <identical line 4> <identical line 6> """ My goal was to store those files as ldd_1 = """ <identical line 1> <identical line 2> <differing line 3> <identical line 4> <extra line 5> <identical line 6> """ ldd_2 = """ <identical line 1> <identical line 2> <differing line 3> <identical line 4> <None> <identical line 6> """ And ultimately just iterate over every line of the converted files (which now all have the same length) and compare each line in terms of their differences and ignore any <None> entries so the diff can be printed consecutively. I created a function that uses python difflib to fill the missing lines from other files with a <None> string. However, I am not sure how to expand this function to incorporate an arbitrary amount of diffs def generate_diff(file_1, file_2): #differing hashvalues from ldd can be ignored, we only care about version and path def remove_hashvalues(input): return re.sub("([a-zA-Z0-9_.-]{32}\/|\([a-zA-Z0-9_.-]*\))", "<>", input) diff = [line.strip() for line in difflib.ndiff(remove_hashvalues(base).splitlines(keepends=True),remove_hashvalues(file_2).splitlines(keepends=True))] list_1 = [] list_2 = [] i = 0 while i<len(diff): if diff[i].strip(): if diff[i][0:2]=="- ": lost = [] gained = [] while diff[i][0:2]=="- " or diff[i][0:2]=="? ": if diff[i][0:2]=="- ": lost.append(diff[i][1:].strip()) i+=1 while diff[i][0:2]=="+ " or diff[i][0:2]=="? ": if diff[i][0:2]=="+ ": gained.append(diff[i][1:].strip()) i+=1 while len(lost) != len(gained): lost.append("<None>") if len(lost)<len(gained) else gained.insert(0,"<None>") list_1+=lost; list_2+=gained elif diff[i][0:2]=="+ ": list_1.append("<None>"); list_2.append(diff[i][1:].strip()) if not diff[i][0:2]=="? ": list_1.append(diff[i].strip()); list_2.append(diff[i].strip()) i+=1 return list_1, list_2 I also found this tool that allows the comparison of multiple files, but unfortunately its not designed to compare code. 
EDIT: I adjusted the solution suggestion of @AyoubKaanich to create a more simplified version that does what I want: from collections import defaultdict import re def transform(input): input = re.sub("([a-zA-Z0-9_.-]{32}\/|\([a-zA-Z0-9_.-]*\))", "<>", input) # differing hashvalues can be ignored, we only care about version and path return sorted(input.splitlines()) def generate_diff(outputs: dict): mapping = defaultdict(set) for target, output in outputs.items(): for line in transform(output): mapping[line.strip()].add(target) result = [] current_line = None color_index = 0 for line in sorted(mapping.keys()): if len(outputs) == len(mapping[line]): if current_line: current_line = None result.append((line)) else: if current_line != line.split(" ")[0]: current_line = line.split(" ")[0] color_index+=1 result.append((f"\033[3{color_index%6+1}m{line}\033[0m",mapping[line])) return result The only downside is that this does not apply to diffs where the string varies in an arbitrary section as opposed to just the beginning, which is what difflib is good at detecting. However, for the case of ldd, since the dependency is always listed at first, sorting alphabetically and taking the first section of the string works. | Pure Python solution, no libraries or extra dependencies. Note: this solutions works due some assumptions: Order of lines do not matter A line either exists, or is missing (no logic to check similarity between lines) from collections import defaultdict import re def transform(input): # differing hashvalues from ldd can be ignored, we only care about version and path input = re.sub("([a-zA-Z0-9_.-]{32}\/|\([a-zA-Z0-9_.-]*\))", "<>", input) return sorted(input.splitlines()) def generate_diff(outputs: dict, common_threshold = 0): """ common_threshold: how many outputs need to contain line to consider it common and mark outputs that do not have it as missing """ assert(common_threshold <= len(outputs)) mapping = defaultdict(set) for target, output in outputs.items(): for line in transform(output): mapping[line].add(target) for line in sorted(mapping.keys()): found = mapping[line] if len(outputs) == len(found): print(' ' + line) elif len(found) >= common_threshold: missed_str = ",".join(map(str, set(outputs.keys()) - found)) print(f'- {line} ({missed_str})') else: added_str = ",".join(map(str, found)) print(f'+ {line} ({added_str})') Sample execution my_ldd_outputs = { 'A': """ linux-vdso.so.1 (0x00007ffde4f09000) libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fe0594f3000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe0592cb000) /lib64/ld-linux-x86-64.so.2 (0x00007fe059690000) """, 'B': """ linux-vdso.so.1 (0x00007fff697b6000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1c54045000) /lib64/ld-linux-x86-64.so.2 (0x00007f1c54299000) """, 'C': """ linux-vdso.so.1 (0x00007fffd61f9000) libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f08a51a3000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f08a4f7b000) /lib64/ld-linux-x86-64.so.2 (0x00007f08a5612000) """, 'D': """ linux-vdso.so.1 (0x00007ffcf9ddd000) libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007fa2e381b000) libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fa2e37ef000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa2e35c7000) libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007fa2e3530000) /lib64/ld-linux-x86-64.so.2 (0x00007fa2e3cd7000) """, 'E': """ linux-vdso.so.1 (0x00007ffc2deab000) libcrypto.so.3 => 
/lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f31fed91000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f31fed75000) libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f31fed49000) libgssapi_krb5.so.2 => /lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007f31fecf5000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f31feacd000) libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007f31fea34000) /lib64/ld-linux-x86-64.so.2 (0x00007f31ff2af000) libkrb5.so.3 => /lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007f31fe969000) libk5crypto.so.3 => /lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007f31fe93a000) libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007f31fe934000) libkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007f31fe926000) libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007f31fe91f000) libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f31fe909000) """ } generate_diff(my_ldd_outputs, 2) Outputs /lib64/ld-linux-x86-64.so.2 <> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 <> + libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 <> (E) - libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 <> (B,A) + libgssapi_krb5.so.2 => /lib/x86_64-linux-gnu/libgssapi_krb5.so.2 <> (E) + libk5crypto.so.3 => /lib/x86_64-linux-gnu/libk5crypto.so.3 <> (E) + libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 <> (E) + libkrb5.so.3 => /lib/x86_64-linux-gnu/libkrb5.so.3 <> (E) + libkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 <> (E) - libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 <> (C,B,A) + libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 <> (E) - libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 <> (C,B,A) + libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 <> (A) + libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 <> (E) linux-vdso.so.1 <> | 4 | 1 |
73,597,456 | 2022-9-4 | https://stackoverflow.com/questions/73597456/what-is-the-python-poetry-config-file-after-1-2-0-release | I have been using python-poetry for over a year now. After the poetry 1.2.0 release, I get this info warning: Configuration file exists at ~/Library/Application Support/pypoetry, reusing this directory. Consider moving configuration to ~/Library/Preferences/pypoetry, as support for the legacy directory will be removed in an upcoming release. But in the docs, it is still indicated for macOS: ~/Library/Application Support/pypoetry https://python-poetry.org/docs/configuration/ My question is: if ~/Library/Preferences/pypoetry is the new location, what should I do to move the configuration there? Is just copy-pasting enough? | Looks like it is as simple as copy/pasting to the new directory. I got the same warning after upgrading to Poetry 1.2. So I created a pypoetry folder in the new Preferences directory, copy/pasted the config.toml to it, and, just to be safe, renamed the original folder to ~/Library/Application Support/pypoetry_bak. After doing this and running poetry -V, the warning is gone. | 14 | 16 |
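A minimal Python sketch of the copy-and-back-up steps described in the answer above (macOS paths taken from the warning; treat it as an illustration, not an official migration script):
from pathlib import Path
import shutil

old_dir = Path.home() / "Library" / "Application Support" / "pypoetry"
new_dir = Path.home() / "Library" / "Preferences" / "pypoetry"

new_dir.mkdir(parents=True, exist_ok=True)
for item in old_dir.iterdir():
    if item.is_file():  # config.toml (and auth.toml, if present)
        shutil.copy2(item, new_dir / item.name)

old_dir.rename(old_dir.with_name("pypoetry_bak"))  # keep a backup, as the answer does
Afterwards, running poetry -V should no longer print the legacy-directory warning.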
73,624,959 | 2022-9-6 | https://stackoverflow.com/questions/73624959/pubkeyacceptedkeytypes-ssh-rsa-with-paramiko | Is there a way to get the same behavior with Paramiko as when using the ssh option -o PubkeyAcceptedKeyTypes=+ssh-rsa? | Paramiko uses ssh-rsa by default, so there is no need to enable it. But if you have problems with public keys, it might be because recent versions of Paramiko first try rsa-sha2-*, and some legacy servers choke on that. So you likely want to disable the rsa-sha2-* algorithms instead. For that, see: Paramiko authentication fails with "Agreed upon 'rsa-sha2-512' pubkey algorithm" (and "unsupported public key algorithm: rsa-sha2-512" in sshd log) | 4 | 1 |
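A hedged sketch of the workaround discussed in the linked question: on Paramiko 2.9+ you can disable the newer rsa-sha2-* public-key algorithms so the client falls back to plain ssh-rsa for legacy servers. The host, username and key path below are placeholders:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    "legacy-host.example.com",
    username="user",
    key_filename="/path/to/id_rsa",
    disabled_algorithms={"pubkeys": ["rsa-sha2-256", "rsa-sha2-512"]},
)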
73,616,963 | 2022-9-6 | https://stackoverflow.com/questions/73616963/runtimeerror-a-view-of-a-leaf-variable-that-requires-grad-is-being-used-in-an | I am working on some paper replication, but I am having trouble with it. According to the log, it says that RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. However, when I check the line the error is referring to, it is just a simple property setter inside the class: @pdfvec.setter def pdfvec(self, value): self.param.pdfvec[self.key] = value # where the error message is referring to Aren't in-place operations something like += or *= etc.? I don't see why this error message appeared on this line. I am really confused about this message, and I will be glad if anyone knows any possible reason this can happen. For additional information, this is the part where the setter function was called: def _update_params(params, pdfvecs): idx = 0 for param in params: totdim = param.stats.numel() shape = param.stats.shape param.pdfvec = pdfvecs[idx: idx + totdim].reshape(shape) # where the setter function was called idx += totdim I know this can still lack information for solving the problem, but if you know any possibility why the error message appeared I would be really glad to hear it. | An in-place operation means the assignment you've done is modifying the underlying storage of your Tensor, whose requires_grad is set to True, according to your error message. That said, your param.pdfvec[self.key] is not a leaf Tensor, because it will be updated during back-propagation. You tried to assign a value to it, which would interfere with autograd, so this action is prohibited by default. You can do this by directly modifying its underlying storage (e.g., with .data). | 8 | 12 |
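To make the .data suggestion above concrete, here is a small self-contained sketch; param, key and value are stand-ins for the objects from the question, not the poster's actual classes:
import torch

param = torch.nn.Parameter(torch.zeros(4, 3))  # a leaf tensor with requires_grad=True
key, value = 0, torch.ones(3)

with torch.no_grad():        # keep the in-place slice assignment out of the autograd graph
    param[key] = value

param.data[key] = value      # older equivalent: write through the underlying storage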
73,592,665 | 2022-9-3 | https://stackoverflow.com/questions/73592665/python-requests-get-post-with-data-over-ssl-truncates-the-response | For example take this app.py and run it with python3 -m flask run --cert=adhoc. from flask import Flask app = Flask(__name__) @app.route('/', methods=["GET", "POST"]) def hello_world(): return {"access_token": "aa" * 50000} How could sending data truncate the response? >>> import requests >>> len(requests.post('https://127.0.0.1:5000/', verify=False).content) 100020 >>> len(requests.post('https://127.0.0.1:5000/', data={'a':'b'}, verify=False).content) 81920 PS 1: It works as expected without SSL; PS 2: GET produces the same behaviour; PS 3: Curl produces the correct result: $ curl -s -k -X POST https://127.0.0.1:5000/ -H 'Content-Type: application/json' -d '{"a":"b"}' | wc -m 100020 PS 4: I reported this as a bug on requests' github. | This is a weird one, and I confess I am not entirely sure if my answer is really correct in every detail. But it seems you might need to report this to Flask rather than requests... @app.route('/', methods=["GET", "POST"]) def hello_world(): print(f"hello_world: content_type: {request.content_type}, data: {request.data}") return {"access_token": "aa" * 50000} This code responds as expected to both >>> len(requests.post('https://127.0.0.1:5000/', data={"a":"b"}, verify=False).content) and >>> len(requests.post('https://127.0.0.1:5000/', verify=False).content) Now comment out the print(), and the data=... request gets an exception: raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer')) I suspect that Flask will read the data from the request only on demand. But with SSL, reading data will affect encryption (e.g. via cipher block chaining). So sending a response before fully reading the request leads to encoding problems. So read all the data you get from the request before you send an answer. | 4 | 3 |
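A hedged sketch of the fix the answer recommends (consume the request body before responding); request.get_data() forces Flask to read the whole body first:
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=["GET", "POST"])
def hello_world():
    request.get_data()  # read the full request body before building the response
    return {"access_token": "aa" * 50000}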
73,626,888 | 2022-9-6 | https://stackoverflow.com/questions/73626888/function-not-being-called-in-main-function | I'm fairly new to python and I'm trying to build a game with Pygame. I kept having issues with collisions not being recognized. Here's the code I tried import pygame pygame.init() WIDTH, HEIGHT = (900, 500) WIN = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption('Bong Pong') FPS = 60 WHITE = (255, 255, 255) BLACK = (0, 0, 0) RED = (255, 0, 0) BORDER = pygame.Rect(WIDTH// 2 -5, 0, 10, HEIGHT) VEL = 3 PLAYER_HEIGHT = 50 PLAYER_WIDTH = 15 BALL_HEIGHT = 15 BALL_WIDTH = 15 def draw_window(player_1, player_2, game_ball): WIN.fill(WHITE) pygame.draw.rect(WIN, BLACK, BORDER) pygame.draw.rect(WIN, BLACK, player_1) pygame.draw.rect(WIN, BLACK, player_2) pygame.draw.rect(WIN, RED, game_ball) pygame.display.update() def player_1_movement(keys_pressed, player_1): if keys_pressed[pygame.K_w] and player_1.y - VEL > 0: player_1.y -= VEL if keys_pressed [pygame.K_s] and player_1.y + PLAYER_HEIGHT + VEL < 500: player_1.y += VEL def player_2_movement(keys_pressed, player_2): if keys_pressed[pygame.K_UP] and player_2.y - VEL > 0: player_2.y -= VEL if keys_pressed [pygame.K_DOWN] and player_2.y + PLAYER_HEIGHT + VEL < 500: player_2.y += VEL def player_collision(player_1, player_2, game_ball, ball_vel_x): if game_ball.colliderect(player_1) or game_ball.colliderect(player_2): ball_vel_x *= -1 def main(): clock = pygame.time.Clock() run = True player_1 = pygame.Rect(50, HEIGHT//2 - PLAYER_HEIGHT// 2, PLAYER_WIDTH, PLAYER_HEIGHT) player_2 = pygame.Rect(850, HEIGHT//2 - PLAYER_HEIGHT// 2, PLAYER_WIDTH, PLAYER_HEIGHT) game_ball = pygame.Rect(50 + PLAYER_WIDTH, HEIGHT//2 - BALL_HEIGHT// 2, BALL_WIDTH, BALL_HEIGHT) ball_vel_y = 2 ball_vel_x = 2 while run: clock.tick(FPS) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False pygame.QUIT() keys_pressed = pygame.key.get_pressed() player_1_movement(keys_pressed, player_1) player_2_movement(keys_pressed, player_2) draw_window(player_1, player_2, game_ball) if game_ball.y - BALL_HEIGHT - VEL <= 0 or game_ball.y + BALL_HEIGHT + VEL >= 500: ball_vel_y *= -1 game_ball.y -= ball_vel_y player_collision(player_1, player_2, game_ball, ball_vel_x) game_ball.x += ball_vel_x main() if __name__ == '__main__': main() I've also tried putting the game_ball.x += ball_vel_x in the player_collsion function but it doesn't reverse it properly. 
I've already solved the issue by putting the entire player_collision function code inside the main function like so def main(): clock = pygame.time.Clock() run = True player_1 = pygame.Rect(50, HEIGHT//2 - PLAYER_HEIGHT// 2, PLAYER_WIDTH, PLAYER_HEIGHT) player_2 = pygame.Rect(850, HEIGHT//2 - PLAYER_HEIGHT// 2, PLAYER_WIDTH, PLAYER_HEIGHT) game_ball = pygame.Rect(50 + PLAYER_WIDTH, HEIGHT//2 - BALL_HEIGHT// 2, BALL_WIDTH, BALL_HEIGHT) ball_vel_y = 2 ball_vel_x = 2 while run: clock.tick(FPS) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False pygame.QUIT() keys_pressed = pygame.key.get_pressed() player_1_movement(keys_pressed, player_1) player_2_movement(keys_pressed, player_2) draw_window(player_1, player_2, game_ball) if game_ball.y - BALL_HEIGHT - VEL <= 0 or game_ball.y + BALL_HEIGHT + VEL >= 500: ball_vel_y *= -1 game_ball.y -= ball_vel_y if game_ball.colliderect(player_1) or game_ball.colliderect(player_2): ball_vel_x *= -1 game_ball.x += ball_vel_x main() and this works exactly how I want it I just want to clear up my understanding of why this function wouldn't work properly if just called in the main function instead of putting it directly in. | Python has no concept of in-out parameters. The argument is passed by value. If you change ball_vel_x in the player_collision function, only the parameter changes, but the argument remains unchanged. You must return the new value of ball_vel_x from the function: def player_collision(player_1, player_2, game_ball, ball_vel_x): if game_ball.colliderect(player_1) or game_ball.colliderect(player_2): ball_vel_x *= -1 return ball_vel_x def main(): # [...] while run: # [...] ball_vel_x = player_collision(player_1, player_2, game_ball, ball_vel_x) # [...] Another possibility is to store the velocity in an object (e.g. gyame.math.Vector2). A variable stores a reference to an object, so you can change the object's attributes in the function if the variable is an argument of the function call: def player_collision(player_1, player_2, game_ball, ball_vel): if game_ball.colliderect(player_1) or game_ball.colliderect(player_2): ball_vel.x *= -1 def main(): # [...] ball_vel = pygame.math.Vector2(2, 2) while run: # [...] if game_ball.y - BALL_HEIGHT - VEL <= 0 or game_ball.y + BALL_HEIGHT + VEL >= 500: ball_vel.y *= -1 game_ball.y -= ball_vel.y player_collision(player_1, player_2, game_ball, ball_vel) game_ball.x += ball_vel.x | 3 | 6 |
73,621,269 | 2022-9-6 | https://stackoverflow.com/questions/73621269/jax-jitting-functions-parameters-vs-global-variables | I've have the following doubt about Jax. I'll use an example from the official optax docs to illustrate it: def fit(params: optax.Params, optimizer: optax.GradientTransformation) -> optax.Params: opt_state = optimizer.init(params) @jax.jit def step(params, opt_state, batch, labels): loss_value, grads = jax.value_and_grad(loss)(params, batch, labels) updates, opt_state = optimizer.update(grads, opt_state, params) params = optax.apply_updates(params, updates) return params, opt_state, loss_value for i, (batch, labels) in enumerate(zip(TRAINING_DATA, LABELS)): params, opt_state, loss_value = step(params, opt_state, batch, labels) if i % 100 == 0: print(f'step {i}, loss: {loss_value}') return params # Finally, we can fit our parametrized function using the Adam optimizer # provided by optax. optimizer = optax.adam(learning_rate=1e-2) params = fit(initial_params, optimizer) In this example, the function step uses the variable optimizer despite it not being passed within the function arguments (since the function is being jitted and optax.GradientTransformation is not a supported type). However, the same function uses other variables that are instead passed as parameters (i.e., params, opt_state, batch, labels). I understand that jax functions needs to be pure in order to be jitted, but what about input (read-only) variables. Is there any difference if I access a variable by passing it through the function arguments or if I access it directly since it's in the step function scope? What if this variable is not constant but modified between separate step calls? Are they treated like static arguments if accessed directly? Or are they simply jitted away and so modifications of such parameters will not be considered? To be more specific, let's look at the following example: def fit(params: optax.Params, optimizer: optax.GradientTransformation) -> optax.Params: opt_state = optimizer.init(params) extra_learning_rate = 0.1 @jax.jit def step(params, opt_state, batch, labels): loss_value, grads = jax.value_and_grad(loss)(params, batch, labels) updates, opt_state = optimizer.update(grads, opt_state, params) updates *= extra_learning_rate # not really valid code, but you get the idea params = optax.apply_updates(params, updates) return params, opt_state, loss_value for i, (batch, labels) in enumerate(zip(TRAINING_DATA, LABELS)): extra_learning_rate = 0.1 params, opt_state, loss_value = step(params, opt_state, batch, labels) extra_learning_rate = 0.01 # does this affect the next `step` call? params, opt_state, loss_value = step(params, opt_state, batch, labels) return params vs def fit(params: optax.Params, optimizer: optax.GradientTransformation) -> optax.Params: opt_state = optimizer.init(params) extra_learning_rate = 0.1 @jax.jit def step(params, opt_state, batch, labels, extra_lr): loss_value, grads = jax.value_and_grad(loss)(params, batch, labels) updates, opt_state = optimizer.update(grads, opt_state, params) updates *= extra_lr # not really valid code, but you get the idea params = optax.apply_updates(params, updates) return params, opt_state, loss_value for i, (batch, labels) in enumerate(zip(TRAINING_DATA, LABELS)): extra_learning_rate = 0.1 params, opt_state, loss_value = step(params, opt_state, batch, labels, extra_learning_rate) extra_learning_rate = 0.01 # does this now affect the next `step` call? 
params, opt_state, loss_value = step(params, opt_state, batch, labels, extra_learning_rate) return params From my limited experiments, they perform differently as the second step call doesn't uses the new learning rates in the global case and also no 're-jitting' happens, however I'd like to know if there's any standard practice/rules I need to be aware of. I'm writing a library where performance is fundamental and I don't want to miss some jit optimizations because I'm doing things wrong. | During JIT tracing, JAX treats global values as implicit arguments to the function being traced. You can see this reflected in the jaxpr representing the function. Here are two simple functions that return equivalent results, one with implicit arguments and one with explicit: import jax import jax.numpy as jnp def f_explicit(a, b): return a + b def f_implicit(b): return a_global + b a_global = jnp.arange(5.0) b = jnp.ones(5) print(jax.make_jaxpr(f_explicit)(a_global, b)) # { lambda ; a:f32[5] b:f32[5]. let c:f32[5] = add a b in (c,) } print(jax.make_jaxpr(f_implicit)(b)) # { lambda a:f32[5]; b:f32[5]. let c:f32[5] = add a b in (c,) } Notice the only difference in the two jaxprs is that in f_implicit, the a variable comes before the semicolon: this is the way that jaxpr representations indicate the argument is passed via closure rather than via an explicit argument. But the computation generated by these two functions will be identical. That said, one difference to be aware of is that when an argument passed by closure is a hashable constant, it will be treated as static within the traced function (similar when explicit arguments are marked static via static_argnums or static_argnames within jax.jit): a_global = 1.0 print(jax.make_jaxpr(f_implicit)(b)) # { lambda ; a:f32[5]. let b:f32[5] = add 1.0 a in (b,) } Notice in the jaxpr representation the constant value is inserted directly as an argument to the add operation. The explicit way to to get the same result for a JIT-compiled function would look something like this: from functools import partial @partial(jax.jit, static_argnames=['a']) def f_explicit(a, b): return a + b | 5 | 7 |
73,616,848 | 2022-9-6 | https://stackoverflow.com/questions/73616848/getting-valueerror-input-contains-nan-when-using-package-pmdarima | I get the error ValueError: Input contains NaN, when I try to predict the next value of series by using ARIMA model from pmdarima. But the data I use didn't contains null values. codes: from pmdarima.arima import ARIMA tmp_series = pd.Series([0.8867208063423082, 0.4969678051201152, -0.35079875681211814, 0.07156197743204402, 0.6888394890593726, 0.6136916470350972, 0.9020102952782968, 0.38539523911177426, -0.02211092685162178, 0.7051282791422511, -0.21841121961990842, 0.003262841037836234, 0.3970253153400027, 0.8187445259415379, -0.525847439014037, 0.3039480910711944, 0.0279240073596233, 0.8238419467739897, 0.8234157376839023, 0.5897892005398399, 0.8333118174945449]) model_211 = ARIMA(order=(2, 1, 1), out_of_sample_size=0, mle_regression=True, suppress_warnings=True) model_211.fit(tmp_series[:-1]) print(model_211.predict()) error message: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [7], in <cell line: 7>() 5 display(model_211.params()) 6 display(model_211.aic()) ----> 7 display(model_211.predict()) File /usr/local/lib/python3.8/dist-packages/pmdarima/arima/arima.py:793, in ARIMA.predict(self, n_periods, X, return_conf_int, alpha, **kwargs) 790 arima = self.arima_res_ 791 end = arima.nobs + n_periods - 1 --> 793 f, conf_int = _seasonal_prediction_with_confidence( 794 arima_res=arima, 795 start=arima.nobs, 796 end=end, 797 X=X, 798 alpha=alpha) 800 if return_conf_int: 801 # The confidence intervals may be a Pandas frame if it comes from 802 # SARIMAX & we want Numpy. We will to duck type it so we don't add 803 # new explicit requirements for the package 804 return f, check_array(conf_int, force_all_finite=False) File /usr/local/lib/python3.8/dist-packages/pmdarima/arima/arima.py:205, in _seasonal_prediction_with_confidence(arima_res, start, end, X, alpha, **kwargs) 202 conf_int[:, 1] = f + q * np.sqrt(var) 204 y_pred = check_endog(f, dtype=None, copy=False, preserve_series=True) --> 205 conf_int = check_array(conf_int, copy=False, dtype=None) 207 return y_pred, conf_int File /usr/local/lib/python3.8/dist-packages/sklearn/utils/validation.py:899, in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator, input_name) 893 raise ValueError( 894 "Found array with dim %d. %s expected <= 2." 895 % (array.ndim, estimator_name) 896 ) 898 if force_all_finite: --> 899 _assert_all_finite( 900 array, 901 input_name=input_name, 902 estimator_name=estimator_name, 903 allow_nan=force_all_finite == "allow-nan", 904 ) 906 if ensure_min_samples > 0: 907 n_samples = _num_samples(array) File /usr/local/lib/python3.8/dist-packages/sklearn/utils/validation.py:146, in _assert_all_finite(X, allow_nan, msg_dtype, estimator_name, input_name) 124 if ( 125 not allow_nan 126 and estimator_name (...) 130 # Improve the error message on how to handle missing values in 131 # scikit-learn. 132 msg_err += ( 133 f"\n{estimator_name} does not accept missing values" 134 " encoded as NaN natively. For supervised learning, you might want" (...) 144 "#estimators-that-handle-nan-values" 145 ) --> 146 raise ValueError(msg_err) 148 # for object dtype data, we only check for NaNs (GH-13254) 149 elif X.dtype == np.dtype("object") and not allow_nan: ValueError: Input contains NaN. 
So, I have two questions: Are there any parameters I should set in order to avoid this error? I found a similar problem: Failing to predict next value using ARIMA: Input contains NaN, infinity or a value too large for dtype('float64'). A comment on that post says it's caused by an unsolved issue. I'm not sure if this error is also caused by the same issue. If so, is there any suggestion for another package with an ARIMA model? Environment Information: I run this code in a docker container OS info: Distributor ID: Ubuntu Description: Ubuntu 20.04.4 LTS Release: 20.04 Codename: focal python env info: Python 3.8.10 pip package info (I only list related packages; I put the complete pip package list in here): Package Version ---------------------------- -------------------- numpy 1.22.4 pandas 1.4.3 pmdarima 2.0.1 scikit-learn 1.1.1 scipy 1.8.1 statsmodels 0.13.2 | Downgrading the following packages will resolve this error: numpy==1.19.3 pandas==1.3.3 pmdarima==1.8.3 | 5 | 0 |
73,618,820 | 2022-9-6 | https://stackoverflow.com/questions/73618820/nested-loop-optimisation-in-python-for-a-list-of-50k-items | I have a csv file with roughly 50K rows of search engine queries. Some of the search queries are the same, just in a different word order, for example "query A this is " and "this is query A". I've tested using fuzzywuzzy's token_sort_ratio function to find matching word order queries, which works well, however I'm struggling with the runtime of the nested loop, and looking for optimisation tips. Currently the nested for loops take around 60 hours to run on my machine. Does anyone know how I might speed this up? Code below: from fuzzywuzzy import fuzz from fuzzywuzzy import process import pandas as pd from tqdm import tqdm filePath = '/content/queries.csv' df = pd.read_csv(filePath) table1 = df['keyword'].to_list() table2 = df['keyword'].to_list() data = [] for kw_t1 in tqdm(table1): for kw_t2 in table2: score = fuzz.token_sort_ratio(kw_t1,kw_t2) if score == 100 and kw_t1 != kw_t2: data +=[[kw_t1, kw_t2, score]] data_df = pd.DataFrame(data, columns=['query', 'queryComparison', 'score']) Any advice would be appreciated. Thanks! | Since what you are looking for are strings consisting of identical words (just not necessarily in the same order), there is no need to use fuzzy matching at all. You can instead use collections.Counter to create a frequency dict for each string, with which you can map the strings under a dict of lists keyed by their frequency dicts. You can then output sub-lists in the dicts whose lengths are greater than 1. Since dicts are not hashable, you can make them keys of a dict by converting them to frozensets of tuples of key-value pairs first. This improves the time complexity from O(n ^ 2) of your code to O(n) while also avoiding overhead of performing fuzzy matching. from collections import Counter matches = {} for query in df['keyword']: matches.setdefault(frozenset(Counter(query.split()).items()), []).append(query) data = [match for match in matches.values() if len(match) > 1] Demo: https://replit.com/@blhsing/WiseAfraidBrackets | 5 | 1 |
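A small usage sketch of the grouping approach above, with hypothetical data standing in for the 'keyword' column:
from collections import Counter
import pandas as pd

df = pd.DataFrame({"keyword": ["query A this is", "this is query A", "another query"]})
matches = {}
for query in df["keyword"]:
    matches.setdefault(frozenset(Counter(query.split()).items()), []).append(query)
data = [match for match in matches.values() if len(match) > 1]
print(data)  # [['query A this is', 'this is query A']]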
73,614,379 | 2022-9-5 | https://stackoverflow.com/questions/73614379/how-do-i-use-piecewise-af%ef%ac%81ne-transformation-to-straighten-curved-text-line-cont | Consider the following image: and the following bounding contour( which is a smooth version of the output of a text-detection neural network of the above image ), so this contour is a given. I need to warp both images so that I end up with a straight enough textline, so that it can be fed to a text recognition neural network: using Piecewise Affine Transformation, or some other method. with an implementation if possible or key points of implementation in python. I know how to find the medial axis, order its points, simplify it (e.g using Douglas-Peucker algorithm), and find the corresponding points on a straight line. EDIT: the question can be rephrased -naively- as the following : have you tried the "puppet warp" feature in Adobe Photoshop? you specify "joint" points on an image , and you move these points to the desired place to perform the image warping, we can calculate the source points using a simplified medial axis (e.g 20 points instead of 200 points), and calculate the corresponding target points on a straight line, how to perform Piecewise Affine Transformation using these two sets of points( source and target)? EDIT: modified the images, my bad Papers Here's a paper that does the needed result: A Novel Technique for Unwarping Curved Handwritten Texts Using Mathematical Morphology and Piecewise Affine Transformation another paper: A novel method for straightening curved text-lines in stylistic documents Similar questions: Straighten B-Spline Challenge : Curved text extraction using python How to convert curves in images to lines in Python? Deforming an image so that curved lines become straight lines Straightening a curved contour | If the goal is to just unshift each column, then: import numpy as np from PIL import Image source_img = Image.open("73614379-input-v2.png") contour_img = Image.open("73614379-map-v3.png").convert("L") assert source_img.size == contour_img.size contour_arr = np.array(contour_img) != 0 # convert to boolean array col_offsets = np.argmax( contour_arr, axis=0 ) # find the first non-zero row for each column assert len(col_offsets) == source_img.size[0] # sanity check min_nonzero_col_offset = np.min( col_offsets[col_offsets > 0] ) # find the minimum non-zero row target_img = Image.new("RGB", source_img.size, (255, 255, 255)) for x, col_offset in enumerate(col_offsets): offset = col_offset - min_nonzero_col_offset if col_offset > 0 else 0 target_img.paste( source_img.crop((x, offset, x + 1, source_img.size[1])), (x, 0) ) target_img.save("unshifted3.png") with the new input and the new contour from OP outputs this image: | 5 | 3 |
73,604,732 | 2022-9-5 | https://stackoverflow.com/questions/73604732/selenium-webdriverexception-unknown-error-session-deleted-because-of-page-cras | Good morning, This is a duplicate of a similar post on StackOverflow WHICH DIDN'T have an answer that solved the problem for me. For the last couple of days, my Python-Selenium script which uses Chrome Driver 104 has been having issue while scrolling down on infinite scroll, dynamically-loaded pages. This script is used to scroll Facebook and perform certain RPA actions like sending messages, etc. (I have only attached the snippet related to the error). In summary, the user enters a number of posts to reach, and the script will reach this specific number of posts, for example, the first 1000 post, and perform certain actions (Doesn't violate Facebbook TOS) This script is NOT running in a docker instance or any kind of container, using my full PC resources. Also, this script has been test on: 1- Windows 11 PC with 16GB ram and i7 Processor 2- MacBook - 16 GB 3- Windows Server 2019 - 32 GB of Ram, i7 Process 4- Linux Ubuntu 22.0 Server - 16 GB of Ram (increased Dev/shm to 30 GB on this server) 5- Google Colab Kernel (increased dev/shm) All of the above had exactly the same error trace with the same error, session deletion because of page crash. When the script reaches around 800-900 posts (this is a random number though, it once reached 1,2k posts for me then failed the next time at 400?) the page will become really slow and then crash. Now Something to notice here, I CAN scroll FAR more than 1500 posts normally on my PC (like manually), and it definitely DOES NOT crash. So, I am pretty sure this is a bug in my script, not because of memory issues (Maybe a memory leak in the script, but not a hardware issue I mean). When the script breaks, the ram is actually not near 80% of the total RAM. If I ran the script in non-headless mode, I would receive an error message on Chrome that says: "Oh Snap, Chrome out of memory" To save your time, I read the following posts on Stackover flow and they didn't help: 1- unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed with ChromeDriver Selenium 2- selenium.WebDriverException: unknown error: session deleted because of page crash from tab crashed 3- Python Selenium session deleted because of page crash from unknown error: cannot determine loading status from tab crashed 4- Getting "org.openqa.selenium.WebDriverException: unknown error: session deleted because of page crash" error when executing automation scripts (Which uses Java, but still read it though) 5- Selenium error with simple driver.get() method : session deleted because of page crash from unknown error: cannot determine loading status What I did to try and solve the issue (and It didn't work): 1- Resized Window, according to this post. 2- Used Chrome Options --no-sandbox and --disable-dev-shm-usage 3- Tried using --js-flags (--max_old_space_size=8096) 4- Disabled all notifications, geolocation messages, images 5- Made sure my dev/shm on mac and linux is large enough as well as the temp folder in Windows 6- Added a LOT of time.sleep() between the scrolls. 7- Tried using a different scrolling method (To go to the bottom of page with javascript, 'driver.execute_script()' 8- Using Firefox GeckoDriver as well as Edge and Opera. 9- Using different ways to check the number of posts on the page (Bs4, LXML) which doesn't seem to be the issue as the issue happens in the scroll part. 
The snippet that causes the issue: (The chrome options aren't listed in the code, but I load them from a separate file, I will write them down after the code though) # Start Selenium Imports from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.chrome.service import Service # Selenium Imports Finished from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.action_chains import ActionChains def login(email, password): driver.get('https://www.facebook.com/') #Email driver.find_element(By.NAME,'email').send_keys(email) #Password driver.find_element(By.NAME,'pass').send_keys(password, Keys.RETURN) time.sleep(2) def reachPosts(noOfPosts = 50) -> None: posts = driver.find_element(By.XPATH,"//div[@role='feed']").find_elements(By.CSS_SELECTOR, ".g4tp4svg.mfclru0v.om3e55n1.p8bdhjjv") postsNo = len(posts) posts = None while postsNo < noOfPosts+1: scroll_down() posts = driver.find_element(By.XPATH,"//div[@role='feed']").find_elements(By.CSS_SELECTOR, ".g4tp4svg.mfclru0v.om3e55n1.p8bdhjjv") time.sleep(1) print(len(posts)) postsNo = len(posts) if postsNo >= 1000: time.sleep(10) posts = None posts = None #----------------Scroll Function!-----------------------------# def scroll_down(): """A method for scrolling the page.""" # Scroll down to the bottom. #driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") for i in range(3): actions.send_keys(Keys.SPACE).perform() #-----------------End-----------------------------------------# def openGroup(facebookUrl, inputDate): print("Opening Facebook Link") driver.get(f'{facebookUrl}?sorting_setting=CHRONOLOGICAL') time.sleep(2) reachPosts(creds["Number of posts"]) posts = driver.find_element(By.XPATH,"//div[@role='feed']").find_elements(By.CSS_SELECTOR, ".g4tp4svg.mfclru0v.om3e55n1.p8bdhjjv") noOfPosts = creds["Number of posts"] def main(): global creds creds = openCredentials() login(creds["email"], creds["password"]) for group in creds['Facebook Groups']: openGroup(group, creds["Date"]) time.sleep(3) Chrome Options used: "--disable-extensions", "--disable-application-cache", "--headless" "window-size=600,450", "--disable-blink-features=AutomationControlled", "--enable-javascript", "disable-infobars", "--js-flags='--max_old_space_size=8196'", "--max_old_space_size=4096", "max_old_space_size=9000", "--disable-dev-shm-usage", "--incognito", "--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36" The error Traceback (most recent call last): File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\src\facebookScraperIndiv.py", line 313, in <module> main() File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\src\facebookScraperIndiv.py", line 302, in main openGroup(group, creds["Date"]) File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\src\facebookScraperIndiv.py", line 254, in openGroup reachPosts(creds["Number of posts"]) File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\src\facebookScraperIndiv.py", line 84, in reachPosts scroll_down() File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\src\facebookScraperIndiv.py", line 104, in scroll_down actions.send_keys(Keys.SPACE).perform() File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\lib\site-packages\selenium\webdriver\common\action_chains.py", line 78, in perform self.w3c_actions.perform() File 
"D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\lib\site-packages\selenium\webdriver\common\actions\action_builder.py", line 88, in perform self.driver.execute(Command.W3C_ACTIONS, enc) File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 434, in execute self.error_handler.check_response(response) File "D:\Work & Projects\Work\Upwork\Facebook Groups Scraper\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 243, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: session deleted because of page crash from unknown error: cannot determine loading status from tab crashed (Session info: chrome=105.0.5195.102) Stacktrace: Backtrace: Ordinal0 [0x0024DF13+2219795] Ordinal0 [0x001E2841+1779777] Ordinal0 [0x000F4100+803072] Ordinal0 [0x000E6F18+749336] Ordinal0 [0x000E5F94+745364] Ordinal0 [0x000E6528+746792] Ordinal0 [0x000EF42F+783407] Ordinal0 [0x000FA938+829752] Ordinal0 [0x0014F3CF+1176527] Ordinal0 [0x0013E616+1107478] Ordinal0 [0x00117F89+950153] Ordinal0 [0x00118F56+954198] GetHandleVerifier [0x00542CB2+3040210] GetHandleVerifier [0x00532BB4+2974420] GetHandleVerifier [0x002E6A0A+565546] GetHandleVerifier [0x002E5680+560544] Ordinal0 [0x001E9A5C+1808988] Ordinal0 [0x001EE3A8+1827752] Ordinal0 [0x001EE495+1827989] Ordinal0 [0x001F80A4+1867940] BaseThreadInitThunk [0x76236739+25] RtlGetFullPathName_UEx [0x774D90AF+1215] RtlGetFullPathName_UEx [0x774D907D+1165] (No symbol) [0x00000000] | For everyone facing the same issue in the future. The problem isn't with the chrome driver, it is with the DOM getting too large that causes the V8 JS memory to break at a point and call the OOM. To fix this for me, I thought of using the Facebook mobile version and it actually worked. Facebook mobile version of the website is much lighter and has much less complicated DOM which allows me to reach 5k+ posts actually. I hope this helps everyone wondering the same. If you have a similar issue, try to find ways to simplify the DOM or have another simplified DOM Views, I found some extensions help with that as well. | 4 | 1 |
73,617,676 | 2022-9-6 | https://stackoverflow.com/questions/73617676/mypy-slow-when-using-vscodes-python-extension | When enabling mypy in vscode ("python.linting.mypyEnabled": true,), then any manual mypy commands become very slow and CPU intensive (10s before vs 3min after). It seems like the two mypy processes should be independent, or even aid each other through a cache, but they seem to be in each other's way. I noticed that from a clean venv it doesn't happen for a while. Only after the vscode has run mypy do the manual mypy commands become slow, even when vscode is not running anymore. The only related question I could find was this. | I found that making vscode use a different cache directory resolves the problem. Consider adding e.g. the following to your settings.json: "python.linting.mypyArgs": [ "--cache-dir=.mypy_cache/.vscode" ], Bonus: by remaining within the default directory (mypy_cache), the second cache directory is ignored by git. | 5 | 6 |
73,617,305 | 2022-9-6 | https://stackoverflow.com/questions/73617305/django-rest-framework-attributeerror-got-attributeerror-when-attempting-to-ge | Got AttributeError when attempting to get a value for field Firstname in serializer NameSerializer. The serializer field might be named incorrectly and not match any attribute or key on the QuerySet instance. The original exception text was: 'QuerySet' object has no attribute Firstname. Error: serializers.py from rest_framework import serializers from .models import Name, ForeName class NameSerializer(serializers.ModelSerializer): class Meta: model = Name fields = '__all__' class ForeNameSerializer(serializers.ModelSerializer): forenames = NameSerializer(many=True, read_only=True) class Meta: model = ForeName fields= '__all__' models.py from django.db import models import uuid # create your models here class ForeName(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) Forename = models.CharField(max_length=30) def __str__(self): return self.Forename class Name(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) Firstname = models.ForeignKey(ForeName, on_delete=models.PROTECT, related_name="forenames") views.py from rest_framework.decorators import api_view from rest_framework.response import Response from .serializers import NameSerializer from .models import Name # Create your views here. @api_view(['GET']) def names_list(request): names = Name.objects.all() myname = NameSerializer(names) return Response({"restult": { "Forename" : myname.data, } | You need to add many=True in your serializer when initializing with multiple instances. myname = NameSerializer(names,many=True) | 3 | 9 |
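For reference, the view from the question with the fix applied (a sketch; the incidental 'restult' typo and unbalanced braces are also tidied up):
from rest_framework.decorators import api_view
from rest_framework.response import Response
from .serializers import NameSerializer
from .models import Name

@api_view(['GET'])
def names_list(request):
    names = Name.objects.all()
    myname = NameSerializer(names, many=True)  # many=True because names is a queryset
    return Response({"result": {"Forename": myname.data}})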
73,569,894 | 2022-9-1 | https://stackoverflow.com/questions/73569894/permutation-based-alternative-to-scipy-stats-ttest-1samp | I would like to use a permutation-based alternative to scipy.stats.ttest_1samp to test if the mean of my observations is significantly greater than zero. I stumbled upon scipy.stats.permutation_test but I am not sure if this can also be used in my case? I also stumbled upon mne.stats.permutation_t_test which seems to do what I want, but I would like to stick to scipy if I can. Example: import numpy as np from scipy import stats # create data np.random.seed(42) rvs = np.random.normal(loc=5,scale=5,size=100) # compute one-sample t-test t,p = stats.ttest_1samp(rvs,popmean=0,alternative='greater') | This test can be performed with permutation_test. With permutation_type='samples', it "permutes" the signs of the observations. Assuming data has been generated as above, the test can be performed as from scipy import stats def t_statistic(x, axis=-1): return stats.ttest_1samp(x, popmean=0, axis=axis).statistic res = stats.permutation_test((rvs,), t_statistic, permutation_type='samples') print(res.pvalue) If you only care about the p-value, you can get the same result with np.mean as the statistic instead of t_statistic. Admittedly, this behavior for permutation_type='samples' with only one sample is a bit buried in the documentation. Accordingly, if data contains only one sample, then the null distribution is formed by independently changing the sign of each observation. But a test producing the same p-value could also be performed as a two-sample test in which the second sample is the negative of the data. To avoid special cases, that's actually what permutation_test does under the hood. In this case, the example code above is a lot faster than permutation_test right now. I'll try to improve that for SciPy 1.10, though. | 4 | 5 |
73,599,970 | 2022-9-4 | https://stackoverflow.com/questions/73599970/how-to-solve-wkhtmltopdf-reported-an-error-exit-with-code-1-due-to-network-err | I'm using Django. This is code is in views.py. def download_as_pdf_view(request, doc_type, pk): import pdfkit file_name = 'invoice.pdf' pdf_path = os.path.join(settings.BASE_DIR, 'static', 'pdf', file_name) template = get_template("paypal/card_invoice_detail.html") _html = template.render({}) pdfkit.from_string(_html, pdf_path) return FileResponse(open(pdf_path, 'rb'), filename=file_name, content_type='application/pdf') Traceback is below. [2022-09-05 00:56:35,785] ERROR [django.request.log_response:224] Internal Server Error: /paypal/download_pdf/card_invoice/MTE0Nm1vamlva29zaGkz/ Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner response = get_response(request) File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/opt/project/app/paypal/views.py", line 473, in download_as_pdf_view pdfkit.from_string(str(_html), pdf_path) File "/usr/local/lib/python3.8/site-packages/pdfkit/api.py", line 75, in from_string return r.to_pdf(output_path) File "/usr/local/lib/python3.8/site-packages/pdfkit/pdfkit.py", line 201, in to_pdf self.handle_error(exit_code, stderr) File "/usr/local/lib/python3.8/site-packages/pdfkit/pdfkit.py", line 155, in handle_error raise IOError('wkhtmltopdf reported an error:\n' + stderr) OSError: wkhtmltopdf reported an error: Exit with code 1 due to network error: ProtocolUnknownError [2022-09-05 00:56:35,797] ERROR [django.server.log_message:161] "GET /paypal/download_pdf/card_invoice/MTE0Nm1vamlva29zaGkz/ HTTP/1.1" 500 107486 This is work file. pdfkit.from_url('https://google.com', 'google.pdf') However pdfkit.from_string and pdfkit.from_file return "ProtocolUnknownError" Please help me! Update I tyied this code. _html = '''<html><body><h1>Hello world</h1></body></html>''' pdfkit.from_string(_html), pdf_path) It worked fine. I saved above html as sample.html. Then run this code I added this parameter options={"enable-local-file-access": ""} _html = render_to_string('path/to/sample.html') pdfkit.from_string(str(_html), pdf_path, options={"enable-local-file-access": ""}) It worked fine! And the "ProtocolUnknownError" error is gone thanks to options={"enable-local-file-access": ""}. So, I changed the HTML file path to the one I really want to use. _html = render_to_string('path/to/invoice.html') pdfkit.from_string(_html, pdf_path, options={"enable-local-file-access": ""}) return FileResponse(open(pdf_path, 'rb'), filename=file_name, content_type='application/pdf') It does not finish convert pdf. When I run the code line by line. stdout, stderr = result.communicate(input=input) does not return. It was processing long time. | I solved this problem. Theare are 3 step to pass this problems. You need to set options {"enable-local-file-access": ""}. pdfkit.from_string(_html, pdf_path, options={"enable-local-file-access": ""}) pdfkit.from_string() can't load css from URL. It's something like this. <link rel="stylesheet" href="https://path/to/style.css"> css path should be absolute path or write style in same file. If css file load another file. ex: font file. It will be ContentNotFoundError. My solution I used simple css file like this. 
body { font-size: 18px; padding: 55px; } h1 { font-size: 38px; } h2 { font-size: 28px; } h3 { font-size: 24px; } h4 { font-size: 20px; } table, th, td { margin: auto; text-align: center; border: 1px solid; } table { width: 80%; } .text-right { text-align: right; } .text-left { text-align: left; } .text-center { text-align: center; } This code insert last css file as style in same html. import os import pdfkit from django.http import FileResponse from django.template.loader import render_to_string from paypal.models import Invoice from website import settings def download_as_pdf_view(request, pk): # create PDF from HTML template file with context. invoice = Invoice.objects.get(pk=pk) context = { # please set your contexts as dict. } _html = render_to_string('paypal/card_invoice_detail.html', context) # remove header _html = _html[_html.find('<body>'):] # create new header new_header = '''<!DOCTYPE html> <html lang="ja"> <head> <meta charset="utf-8"/> </head> <style> ''' # add style from css file. please change to your css file path. css_path = os.path.join(settings.BASE_DIR, 'paypal', 'static', 'paypal', 'css', 'invoice.css') with open(css_path, 'r') as f: new_header += f.read() new_header += '\n</style>' print(new_header) # add head to html _html = new_header + _html[_html.find('<body>'):] with open('paypal/sample.html', 'w') as f: f.write(_html) # for debug # convert html to pdf file_name = 'invoice.pdf' pdf_path = os.path.join(settings.BASE_DIR, 'static', 'pdf', file_name) pdfkit.from_string(_html, pdf_path, options={"enable-local-file-access": ""}) return FileResponse(open(pdf_path, 'rb'), filename=file_name, content_type='application/pdf') | 27 | 44 |
73,610,869 | 2022-9-5 | https://stackoverflow.com/questions/73610869/the-expanded-size-of-the-tensor-1011-must-match-the-existing-size-512-at-non | I have a trained a LayoutLMv2 model from huggingface and when I try to inference it on a single image, it gives the runtime error. The code for this is below: query = '/Users/vaihabsaxena/Desktop/Newfolder/labeled/Others/Two.pdf26.png' image = Image.open(query).convert("RGB") encoded_inputs = processor(image, return_tensors="pt").to(device) outputs = model(**encoded_inputs) preds = torch.softmax(outputs.logits, dim=1).tolist()[0] pred_labels = {label:pred for label, pred in zip(label2idx.keys(), preds)} pred_labels The error comes when when I do model(**encoded_inputs). The processor is called directory from Huggingface and is initialized as follows along with other APIs: feature_extractor = LayoutLMv2FeatureExtractor() tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") processor = LayoutLMv2Processor(feature_extractor, tokenizer) The model is defined and trained as follows: model = LayoutLMv2ForSequenceClassification.from_pretrained( "microsoft/layoutlmv2-base-uncased", num_labels=len(label2idx) ) model.to(device); optimizer = AdamW(model.parameters(), lr=5e-5) num_epochs = 3 for epoch in range(num_epochs): print("Epoch:", epoch) training_loss = 0.0 training_correct = 0 #put the model in training mode model.train() for batch in tqdm(train_dataloader): outputs = model(**batch) loss = outputs.loss training_loss += loss.item() predictions = outputs.logits.argmax(-1) training_correct += (predictions == batch['labels']).float().sum() loss.backward() optimizer.step() optimizer.zero_grad() print("Training Loss:", training_loss / batch["input_ids"].shape[0]) training_accuracy = 100 * training_correct / len(train_data) print("Training accuracy:", training_accuracy.item()) validation_loss = 0.0 validation_correct = 0 for batch in tqdm(valid_dataloader): outputs = model(**batch) loss = outputs.loss validation_loss += loss.item() predictions = outputs.logits.argmax(-1) validation_correct += (predictions == batch['labels']).float().sum() print("Validation Loss:", validation_loss / batch["input_ids"].shape[0]) validation_accuracy = 100 * validation_correct / len(valid_data) print("Validation accuracy:", validation_accuracy.item()) The complete error trace: RuntimeError Traceback (most recent call last) /Users/vaihabsaxena/Desktop/Newfolder/pytorch.ipynb Cell 37 in <cell line: 4>() 2 image = Image.open(query).convert("RGB") 3 encoded_inputs = processor(image, return_tensors="pt").to(device) ----> 4 outputs = model(**encoded_inputs) 5 preds = torch.softmax(outputs.logits, dim=1).tolist()[0] 6 pred_labels = {label:pred for label, pred in zip(label2idx.keys(), preds)} File ~/opt/anaconda3/envs/env_pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs) 1126 # If we don't have any hooks, we want to skip the rest of the logic in 1127 # this function, and just call forward. 
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] File ~/opt/anaconda3/envs/env_pytorch/lib/python3.9/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py:1071, in LayoutLMv2ForSequenceClassification.forward(self, input_ids, bbox, image, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1061 visual_position_ids = torch.arange(0, visual_shape[1], dtype=torch.long, device=device).repeat( 1062 input_shape[0], 1 1063 ) 1065 initial_image_embeddings = self.layoutlmv2._calc_img_embeddings( 1066 image=image, 1067 bbox=visual_bbox, ... 896 input_shape[0], 1 897 ) 898 final_position_ids = torch.cat([position_ids, visual_position_ids], dim=1) RuntimeError: The expanded size of the tensor (1011) must match the existing size (512) at non-singleton dimension 1. Target sizes: [1, 1011]. Tensor sizes: [1, 512] I have tried to set up the tokenizer to cut off the max length but it finds encoded_inputs as Nonetype however the image is still there. What is going wrong here? | The error message tells you that the extracted text via ocr is longer (1011 tokens) than the underlying text model is able to handle (512 tokens). Depending on your task, you maybe can truncate your text with the tokenizer parameter truncation (the processor will pass this parameter to the tokenizer): import torch from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor, LayoutLMv2ForSequenceClassification from PIL import Image, ImageDraw, ImageFont query = "/content/Screenshot_20220905_202551.png" image = Image.open(query).convert("RGB") feature_extractor = LayoutLMv2FeatureExtractor() tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") processor = LayoutLMv2Processor(feature_extractor, tokenizer) model = LayoutLMv2ForSequenceClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", num_labels=2) encoded_inputs = processor(image, return_tensors="pt") # Model will raise an error because the tensor is longer as the trained position embeddings print(encoded_inputs["input_ids"].shape) encoded_inputs = processor(image, return_tensors="pt", truncation=True) print(encoded_inputs["input_ids"].shape) outputs = model(**encoded_inputs) preds = torch.softmax(outputs.logits, dim=1).tolist()[0] Output: torch.Size([1, 644]) torch.Size([1, 512]) For this code, I used the following screenshot: | 4 | 8 |
73,589,662 | 2022-9-3 | https://stackoverflow.com/questions/73589662/in-apache-beam-dataflows-writetobigquery-transform-how-do-you-enable-the-deadl | In this document, Apache Beam suggests the deadletter pattern when writing to BigQuery. This pattern allows you to fetch rows that failed to be written from the transform output with the 'FailedRows' tag. However, when I try to use it: WriteToBigQuery( table=self.bigquery_table_name, schema={"fields": self.bigquery_table_schema}, method=WriteToBigQuery.Method.FILE_LOADS, temp_file_format=FileFormat.AVRO, ) A schema mismatch in one of my elements causes the following exception: Error message from worker: Traceback (most recent call last): File "/my_code/apache_beam/io/gcp/bigquery_tools.py", line 1630, in write self._avro_writer.write(row) File "fastavro/_write.pyx", line 647, in fastavro._write.Writer.write File "fastavro/_write.pyx", line 376, in fastavro._write.write_data File "fastavro/_write.pyx", line 320, in fastavro._write.write_record File "fastavro/_write.pyx", line 374, in fastavro._write.write_data File "fastavro/_write.pyx", line 283, in fastavro._write.write_union ValueError: [] (type <class 'list'>) do not match ['null', 'double'] on field safety_proxy During handling of the above exception, another exception occurred: Traceback (most recent call last): File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 841, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam/runners/common.py", line 1334, in apache_beam.runners.common._OutputProcessor.process_outputs File "/my_code/apache_beam/io/gcp/bigquery_file_loads.py", line 258, in process writer.write(row) File "/my_code/apache_beam/io/gcp/bigquery_tools.py", line 1635, in write ex, self._avro_writer.schema, row)).with_traceback(tb) File "/my_code/apache_beam/io/gcp/bigquery_tools.py", line 1630, in write self._avro_writer.write(row) File "fastavro/_write.pyx", line 647, in fastavro._write.Writer.write File "fastavro/_write.pyx", line 376, in fastavro._write.write_data File "fastavro/_write.pyx", line 320, in fastavro._write.write_record File "fastavro/_write.pyx", line 374, in fastavro._write.write_data File "fastavro/_write.pyx", line 283, in fastavro._write.write_union ValueError: Error writing row to Avro: [] (type <class 'list'>) do not match ['null', 'double'] on field safety_proxy Schema: ... From what I gather, the schema mismatch causes fastavro._write.Writer.write to fail and throw an exception. Instead, I would like WriteToBigQuery to apply the deadletter behavior and return my malformed rows as FailedRows tagged output. Is there a way to achieve this? Thanks EDIT: Adding more detailed example of what I'm trying to do: from apache_beam import Create from apache_beam.io.gcp.bigquery import BigQueryWriteFn, WriteToBigQuery from apache_beam.io.textio import WriteToText ... 
valid_rows = [{"some_field_name": i} for i in range(1000000)] invalid_rows = [{"wrong_field_name": i}] pcoll = Create(valid_rows + invalid_rows) # This fails because of the 1 invalid row write_result = ( pcoll | WriteToBigQuery( table=self.bigquery_table_name, schema={ "fields": [ {'name': 'some_field_name', 'type': 'INTEGER', 'mode': 'NULLABLE'}, ] }, method=WriteToBigQuery.Method.FILE_LOADS, temp_file_format=FileFormat.AVRO, ) ) # What I want is for WriteToBigQuery to partially succeed and output the failed rows. # This is because I have pipelines that run for multiple hours and fail because of # a small amount of malformed rows ( write_result[BigQueryWriteFn.FAILED_ROWS] | WriteToText('gs://my_failed_rows/') ) | You can use a dead letter queue in the pipeline instead of let BigQuery catch errors for you. Beam proposes a native way for error handling and dead letter queue with TupleTags but the code is little verbose. I created an open source library called Asgarde for Python sdk and Java sdk to apply error handling for less code, more concise and expressive code : https://github.com/tosun-si/pasgarde (also the Java version : https://github.com/tosun-si/asgarde) You can install it with pip : asgarde==0.16.0 pip install asgarde==0.16.0 from apache_beam import Create from apache_beam.io.gcp.bigquery import BigQueryWriteFn, WriteToBigQuery from apache_beam.io.textio import WriteToText from asgarde.collection_composer import CollectionComposer def validate_row(self, row) -> Dict : field = row['your_field'] if field is None or field == '': # You can raise your own custom exception raise ValueError('Bad field') ... valid_rows = [{"some_field_name": i} for i in range(1000000)] invalid_rows = [{"wrong_field_name": i}] pcoll = Create(valid_rows + invalid_rows) # Dead letter queue proposed by Asgarde, it's return output and Failure PCollection. output_pcoll, failure_pcoll = (CollectionComposer.of(pcoll) .map(self.validate_row)) # Good sink ( output_pcoll | WriteToBigQuery( table=self.bigquery_table_name, schema={ "fields": [ {'name': 'some_field_name', 'type': 'INTEGER', 'mode': 'NULLABLE'}, ] }, method=WriteToBigQuery.Method.FILE_LOADS ) ) # Bad sink : PCollection[Failure] / Failure contains inputElement and # stackTrace. ( failure_pcoll | beam.Map(lambda failure : self.your_failure_transformation(failure)) | WriteToBigQuery( table=self.bigquery_table_name, schema=your_schema_for_failure_table, method=WriteToBigQuery.Method.FILE_LOADS ) ) The structure of Failure object proposed by Asgarde lib : @dataclass class Failure: pipeline_step: str input_element: str exception: Exception In the validate_row function, you will apply your validation logic and detect bad fields. You will raise an exception in this case, and Asgarde will catch the error for you. The result of CollectionComposer flow is : PCollection of output, in this case, I think is a PCollection[Dict] PCollection[Failure] At the end you can process to multi sink : Write good outputs to Bigquery Write failures to Bigquery You can also apply the same logic with native Beam error handling and TupleTags, I proposed an exemple in a project from my Github repository : https://github.com/tosun-si/teams-league-python-dlq-native-beam-summit/blob/main/team_league/domain_ptransform/team_stats_transform.py | 3 | 4 |
73,609,583 | 2022-9-5 | https://stackoverflow.com/questions/73609583/efficient-ways-to-aggregate-and-replicate-values-in-a-numpy-matrix | In my work I often need to aggregate and expand matrices of various quantities, and I am looking for the most efficient ways to do these actions. E.g. I'll have an NxN matrix that I want to aggregate from NxN into PxP where P < N. This is done using a correspondence between the larger dimensions and the smaller dimensions. Usually, P will be around 100 or so. For example, I'll have a hypothetical 4x4 matrix like this (though in practice, my matrices will be much larger, around 1000x1000) m=np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]) >>> m array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]) and a correspondence like this (schematically): 0 -> 0 1 -> 1 2 -> 0 3 -> 1 that I usually store in a dictionary. This means that indices 0 and 2 (for rows and columns) both get allocated to new index 0 and indices 1 and 3 (for rows and columns) both get allocated to new index 1. The matrix could be anything at all, but the correspondence is always many-to-one when I want to compress. If the input matrix is A and the output matrix is B, then cell B[0, 0] would be the sum of A[0, 0] + A[0, 2] + A[2, 0] + A[2, 2] because new index 0 is made up of original indices 0 and 2. The aggregation process here would lead to: array([[ 1+3+9+11, 2+4+10+12 ], [ 5+7+13+15, 6+8+14+16 ]]) = array([[ 24, 28 ], [ 40, 44 ]]) I can do this by making an empty matrix of the right size and looping over all 4x4=16 cells of the initial matrix and accumulating in nested loops, but this seems to be inefficient and the vectorised nature of numpy is always emphasised by people. I have also done it by using np.ix_ to make sets of indices and use m[row_indices, col_indices].sum(), but I am wondering what the most efficient numpy-like way to do it is. Conversely, what is the sensible and efficient way to expand a matrix using the correspondence the other way? For example with the same correspondence but in reverse I would go from: array([[ 1, 2 ], [ 3, 4 ]]) to array([[ 1, 2, 1, 2 ], [ 3, 4, 3, 4 ], [ 1, 2, 1, 2 ], [ 3, 4, 3, 4 ]]) where the values simply get replicated into the new cells. In my attempts so far for the aggregation, I have used approaches with pandas methods with groupby on index and columns and then extracting the final matrix with, e.g. df.values. However, I don't know the equivalent way to expand a matrix, without using a lot of things like unstack and join and so on. And I see people often say that using pandas is not time-efficient. Edit 1: I was asked in a comment about exactly how the aggregation should be done. This is how it would be done if I were using nested loops and a dictionary lookup between the original dimensions and the new dimensions: >>> m=np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) >>> mnew=np.zeros((2,2)) >>> big2small={0:0, 1:1, 2:0, 3:1} >>> for i in range(4): ... inew = big2small[i] ... for j in range(4): ... jnew = big2small[j] ... mnew[inew, jnew] += m[i, j] ... >>> mnew array([[24., 28.], [40., 44.]]) Edit 2: Another comment asked for the aggregation example towards the start to be made more explicit, so I have done so. | Assuming you don't your indices don't have a regular structure I would do it try sparse matrices. 
import scipy.sparse as ss import numpy as np # your current array of indices g=np.array([[0,0],[1,1],[2,0],[3,1]]) # a sparse matrix of (data=ones, (row_ind=g[:,0], col_ind=g[:,1])) # it is one for every pair (g[i,0], g[i,1]), zero elsewhere u=ss.csr_matrix((np.ones(len(g)), (g[:,0], g[:,1]))) Aggregate m=np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) u.T @ m @ u Expand m2 = np.array([[1,2],[3,4]]) u @ m2 @ u.T | 4 | 3 |
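An alternative sketch using plain NumPy instead of scipy, built on unbuffered accumulation with np.add.at; it starts from the dictionary correspondence shown in the question and turns it into an index array first:

import numpy as np

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
big2small = {0: 0, 1: 1, 2: 0, 3: 1}
idx = np.array([big2small[i] for i in range(len(big2small))])   # [0, 1, 0, 1]
P = idx.max() + 1

# Aggregate: accumulate every cell of m into its (new_row, new_col) bucket
mnew = np.zeros((P, P), dtype=m.dtype)
np.add.at(mnew, (idx[:, None], idx[None, :]), m)    # [[24, 28], [40, 44]]

# Expand: replicate a small matrix back out with the same index array
m2 = np.array([[1, 2], [3, 4]])
mbig = m2[np.ix_(idx, idx)]                         # [[1, 2, 1, 2], [3, 4, 3, 4], ...]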
73,585,377 | 2022-9-2 | https://stackoverflow.com/questions/73585377/warning-401-error-credentials-not-correct-for-azure-artifact-during-pip-instal | When I attempt to install a package from our Azure DevOps Artifacts feed, I get an error. Command: pip install org-test-framework --index-url https://<company_url>/tfs/<orgname>/_packaging/<feedname>/pypi/simple/ The keyring prompts for a username and password for the site; once they are entered, I get the error below. Error: WARNING: 401 Error, Credentials not correct for https://<company_url>/tfs/<org name>/_packaging/<feedname>/pypi/simple/org-test-framework/ ERROR: Could not find a version that satisfies the requirement org-test-framework (from versions: none) ERROR: No matching distribution found for org-test-framework Note: the same username and password work when I try them in the browser, and I am able to download the package directly from the above URL, so the issue is not my credentials. Everyone else on the team has tried as well and hits the same issue! | You need to use a PAT (Personal Access Token) as the password when authenticating. Create a Personal Access Token with the Packaging > Read scope to authenticate against Azure DevOps. Refer to this official link for details: https://learn.microsoft.com/en-us/azure/devops/artifacts/quickstarts/python-packages?view=azure-devops#manually-configure-authentication | 6 | 7 |
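For reference, a minimal sketch of how the PAT can be supplied non-interactively once it has been created; the feed URL and token below are placeholders, not values from the question, and the exact feed path should match your organisation's setup:

# Option 1: put the PAT in the index URL (it goes in the password slot; the username can be anything)
pip install org-test-framework --index-url https://anything:<YOUR_PAT>@<company_url>/tfs/<orgname>/_packaging/<feedname>/pypi/simple/

# Option 2: keep the credential out of the command line, e.g. for CI
export PIP_INDEX_URL=https://anything:<YOUR_PAT>@<company_url>/tfs/<orgname>/_packaging/<feedname>/pypi/simple/
pip install org-test-framework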
73,599,859 | 2022-9-4 | https://stackoverflow.com/questions/73599859/setting-environment-variable-when-running-python-unit-tests-inside-vscode | I'd like to execute code inside my unit tests conditioned on whether they're running from within VSCode or the command line. Is there a way to do so? The reasoning is to add additional visual feedback through cv2.imwrite statements, but to omit these when running a full regression from the command line or when running my CI. I known that I can set a debugging profile inside launch.json and define environment variables there, but this applies only when debugging a unit test: { "name": "Debug Tests", "type": "python", "request": "test", "console": "integratedTerminal", "python": "${command:python.interpreterPath}", "justMyCode": false, "env": { "MY_ENVIRONMENT_SWITCH_FOR_WRITING_JPEGS": "1" } }, Is there a way to achieve something similar when not running through the debugger? | Try defining environment variables by using .env files .env: MY_ENVIRONMENT_SWITCH_FOR_WRITING_JPEGS = 1 test1.py: import os from pathlib import Path from dotenv import find_dotenv, load_dotenv env_path = Path(".") / ".env" load_dotenv(dotenv_path=env_path, verbose=True) print(os.getenv("MY_ENVIRONMENT_SWITCH_FOR_WRITING_JPEGS")) | 3 | 5 |
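A usage sketch of how such a variable can then drive the original goal (writing debug images only when running from VSCode); the test class, the placeholder value and the commented-out cv2 call are illustrative, not from the question:

import os
import unittest

WRITE_JPEGS = os.getenv("MY_ENVIRONMENT_SWITCH_FOR_WRITING_JPEGS") == "1"

class MyImageTest(unittest.TestCase):
    def test_something(self):
        result = b"fake image bytes"  # placeholder for the image produced by the code under test
        if WRITE_JPEGS:
            # only reached when the switch is set, e.g. via the .env file or launch.json
            # cv2.imwrite("debug_output.jpg", result)
            pass
        self.assertIsNotNone(result)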
73,605,095 | 2022-9-5 | https://stackoverflow.com/questions/73605095/why-the-grad-is-unavailable-for-the-tensor-in-gpu | a = torch.nn.Parameter(torch.ones(5, 5)) a = a.cuda() print(a.requires_grad) b = a b = b - 2 print('a ', a) print('b ', b) loss = (b - 1).pow(2).sum() loss.backward() print(a.grad) print(b.grad) After executing codes, the a.grad is None although a.requires_grad is True. But if the code a = a.cuda() is removed, a.grad is available after the loss backward. | The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more information. a = torch.nn.Parameter(torch.ones(5, 5)) a = a.cuda() print(a.requires_grad) b = a b = b - 2 print('a ', a) print('b ', b) loss = (b - 1).pow(2).sum() a.retain_grad() # added this line loss.backward() print(a.grad) That happens because of your line a = a.cuda() that override the original value of a. You could use a = torch.nn.Parameter(torch.ones(5, 5)) a.cuda() Or a = torch.nn.Parameter(torch.ones(5, 5, device='cuda')) a = torch.nn.Parameter(torch.ones(5, 5).cuda()) Or explicitly requesting to retain the gradients of a a.retain_grad() # added this line Erasing the gradients of intermediate variables can save significant amount of memory. So it is good that you retain gradients only where you need. | 5 | 2 |
73,599,783 | 2022-9-4 | https://stackoverflow.com/questions/73599783/vscode-pylance-auto-import-incorrect-root-path | Let's suppose that I have a project with this file structure: project_dir βββ src βββ package1 β βββ module1.py βββ package2 βββ module2.py When I want to use some class from module1 in some other module I type something like class SomeNewClass(ClassFromModule1): # here I press ctrl-space to make auto-import ... When Pylance makes auto-import it always takes it starting from the root directory of the project, so I got this auto-import: from src.package1.module1 import ClassFromModule1 The problem is that my working directory is src and I want to get from package1.module1 import ClassFromModule1 Is there any way to fix root path of the auto-import Pylance feature? | By default, if you import a module in src, it should look like the following (same effect as you want). So please check if the following settings in vscode are modified: Open Settings and search for Analysis: Auto Search Paths. This item is enabled (checked) by default. If it was modified to be off, turn it back on. Here are some examples: Folder structure: Project_dir ββ folderone β ββ package3 β ββ moudle3.py # --class C ββ src β ββ package1 β β ββ moudle1.py # --class A β ββ package2 β ββ moudle2.py # --class B ββ Test.py If you import the modules in the src folder (the above settings are enabled by default): # import code will not include `src` (like the picture shown above) from package1.moudle1 import A If the above features are turned off, the import code is as follows: If importing modules under other folders: The import code will start from the first level folder under the current workspace. Note: vscode uses the currently open folder as the workspace. So which level of folder you open will also affect the code in the import part. The above example is to open the Project_dir folder as a workspace in vscode. If I put Project_dir in a new folder ImpDemo and open the ImpDemo folder in vscode. Then the code of the import part is like this: | 6 | 4 |
73,603,289 | 2022-9-4 | https://stackoverflow.com/questions/73603289/why-doesnt-parameter-type-dictstr-unionstr-int-accept-value-of-type-di | I have a type for a dictionary of variables passed to a template: VariablesDict = Dict[str, Union[int, float, str, None]] Basically, any dictionary where the keys are strings and the values are strings, numbers or None. I use this type in several template related functions. Take this example function: def render_template(name: str, variables: VariablesDict): ... Calling this function with a dictionary literal works fine: render_template("foo", {"key": "value"}) However, if I assign the dictionary to a variable first, like this: variables = {"key": "value"} render_template("foo", variables) Mypy gives an error: Argument 2 to "render_template" has incompatible type "Dict[str, str]"; expected "Dict[str, Union[int, float, str, None]]" It seems to me that any value of type Dict[str, str] should be safe to pass to a function that expects a parameter of type Dict[str, Union[int, float, str, None]]. Why doesn't that work by default? Is there anything I can do to make this work? | The reason it doesn't work is that Dict is mutable, and a function which accepts a Dict[str, int|float|str|None] could therefore reasonably insert any of those types into its argument. If the argument was actually a Dict[str, str], it now contains values that violate its type. (For more on this, google "covariance/contravariance/invariance" and "Liskov Substitution Principle" -- as a general rule, mutable containers are invariant over their generic type[s].) As long as render_template doesn't need to modify the dict you pass to it, an easy fix is to have it take a Mapping (which is an abstract supertype of dict that doesn't imply mutability, and is therefore covariant) instead of a Dict: def render_template(name: str, variables: Mapping[str, Union[int, float, str, None]]): ... | 10 | 11 |
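A small example of why this invariance matters: if the type checker accepted the call, perfectly type-correct code inside the function could corrupt the caller's Dict[str, str].

from typing import Dict, Union

def render_template(name: str, variables: Dict[str, Union[int, float, str, None]]) -> None:
    # perfectly legal for the declared parameter type...
    variables["missing"] = None

variables: Dict[str, str] = {"key": "value"}
# If mypy allowed this call, `variables` would end up holding a None value,
# breaking any code that relies on it being a Dict[str, str]:
# render_template("foo", variables)   # rejected precisely to prevent that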
73,599,594 | 2022-9-4 | https://stackoverflow.com/questions/73599594/asyncio-works-in-python-3-10-but-not-in-python-3-8 | Consider the following code: import asyncio sem: asyncio.Semaphore = asyncio.Semaphore(2) async def async_run() -> None: async def async_task() -> None: async with sem: await asyncio.sleep(1) print('spam') await asyncio.gather(*[async_task() for _ in range(3)]) asyncio.run(async_run()) Run with Python 3.10.6 (Fedora 35), it works just like in the schoolbook. However, when I run it with Python 3.8.10 (Ubuntu 20.04), I get the following error: Traceback (most recent call last): File "main.py", line 21, in <module> asyncio.run(async_run()) File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete return future.result() File "main.py", line 18, in async_run print(future_entry_index, await future_entry) File "/usr/lib/python3.8/asyncio/tasks.py", line 619, in _wait_for_one return f.result() # May raise f.exception(). File "main.py", line 11, in async_task async with sem: File "/usr/lib/python3.8/asyncio/locks.py", line 97, in __aenter__ await self.acquire() File "/usr/lib/python3.8/asyncio/locks.py", line 496, in acquire await fut RuntimeError: Task <Task pending name='Task-4' coro=<async_run.<locals>.async_task() running at main.py:11> cb=[as_completed.<locals>._on_completion() at /usr/lib/python3.8/asyncio/tasks.py:606]> got Future <Future pending> attached to a different loop It's async with sem line and the Semaphore object that cause the error. Without it, everything works without errors, but not the way I want it to. I can't provide the loop parameter anywhere, for even where it's allowed, it has been deprecated since Python 3.8 and removed in Python 3.10. How to make the code work with Python 3.8? Update. A glimpse at the asyncio code showed that the Python versions differ a lot. However, the Semaphores can't be just broken in 3.8, right? | As discussed in this answer, pre-python 3.10 Semaphore sets its loop on __init__ based on the current running loop, while asyncio.run starts a new loop. And so, when you try and async.run your coros, you are using a different loop than your Semaphore is defined on, for which the correct error message really is got Future <Future pending> attached to a different loop. Fortunately, making the code work on both python versions is not too hard: Solution 1 Don't make a new loop, use the existing loop to run your function: import asyncio sem: asyncio.Semaphore = asyncio.Semaphore(value=2) async def async_task() -> None: async with sem: await asyncio.sleep(1) print(f"spam {sem._value}") async def async_run() -> None: await asyncio.gather(*[async_task() for _ in range(3)]) loop = asyncio.get_event_loop() loop.run_until_complete(async_run()) loop.close() Solution 2 Initialize the semaphore object within the loop created by asyncio.run: import asyncio async def async_task2(sem) -> None: async with sem: await asyncio.sleep(1) print(f"spam {sem._value}") async def async_run2() -> None: sem = asyncio.Semaphore(2) await asyncio.gather(*[async_task2(sem) for _ in range(3)]) asyncio.run(async_run2()) Both snippets work on python3.8 and python3.10. Presumably it was because of weirdness like this that they removed the loop parameter from most of asyncio in python 3.10. Compare the __init__ for semaphore from 3.8 compared to 3.10: Python 3.8 class Semaphore(_ContextManagerMixin): """A Semaphore implementation. 
A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other thread calls release(). Semaphores also support the context management protocol. The optional argument gives the initial value for the internal counter; it defaults to 1. If the value given is less than 0, ValueError is raised. """ def __init__(self, value=1, *, loop=None): if value < 0: raise ValueError("Semaphore initial value must be >= 0") self._value = value self._waiters = collections.deque() if loop is None: self._loop = events.get_event_loop() else: self._loop = loop warnings.warn("The loop argument is deprecated since Python 3.8, " "and scheduled for removal in Python 3.10.", DeprecationWarning, stacklevel=2) Python 3.10: class Semaphore(_ContextManagerMixin, mixins._LoopBoundMixin): """A Semaphore implementation. A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other thread calls release(). Semaphores also support the context management protocol. The optional argument gives the initial value for the internal counter; it defaults to 1. If the value given is less than 0, ValueError is raised. """ def __init__(self, value=1, *, loop=mixins._marker): super().__init__(loop=loop) if value < 0: raise ValueError("Semaphore initial value must be >= 0") self._value = value self._waiters = collections.deque() self._wakeup_scheduled = False | 5 | 5 |
73,598,187 | 2022-9-4 | https://stackoverflow.com/questions/73598187/how-fix-python-import-error-in-vs-code-editor-when-using-a-dev-container | I've opened a project with the following structure in VS Code (1.71.0 on macOS, Intel) and activated a Dev Container (I've tried the default Python 3.9 and 3.10 containers from Microsoft, with and without using python3 -m venv ...): project/ .devcontainer/ devcontainer.json Dockerfile foo/ foo/ tests/ test_bar.py <-- IDE reports import error in this file resources/ __init__.py bar.py setup.py In VS Code's terminal window, I can successfully run test_bar.py from directory project/foo with: python3 -m unittest discover foo/tests -p 'test_*.py' So the project is valid and runs OK from the command line. But when I open file project/foo/foo/tests/test_bar.py in VS Code, I see the error Unable to import 'foo' pylint(import-error) underlined in red for the following line: from foo import bar I see similar supposed errors for external packages I've installed with pip3 install. I've tried to inform VS Code by adding various relative and absolute paths (e.g. /workspaces/project/foo) to various places in project/.devcontainer.json, such as at: customizations.vscode.settings python.analysis.extraPaths python.autoComplete.extraPaths python.testing.unittestargs But I've had no luck so far (after many IDE restarts and container image rebuilds). So I'm left wondering; how should one fix such IDE-flagged import errors in VS Code when using a Dev Container? Additional Info As a file was requested in the comments, here are the key test project files I used (I've not fixed any paths; my last test project was named vscode-python-dev-container, not project, which I used as shorthand above). devcontainer.json (with the containerEnv section added for the suggested PYTHONPATH change): // For format details, see https://aka.ms/devcontainer.json. For config options, see the README at: // https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/python-3 { "name": "Python 3", "build": { "dockerfile": "Dockerfile", "context": "..", "args": { // Update 'VARIANT' to pick a Python version: 3, 3.10, 3.9, 3.8, 3.7, 3.6 // Append -bullseye or -buster to pin to an OS version. // Use -bullseye variants on local on arm64/Apple Silicon. "VARIANT": "3.10-bullseye", // Options "NODE_VERSION": "lts/*" } }, "containerEnv": { "PYTHONPATH": "/workspaces/vscode-python-dev-container/foo" }, // Configure tool-specific properties. "customizations": { // Configure properties specific to VS Code. "vscode": { // Set *default* container specific settings.json values on container create. "settings": { "python.defaultInterpreterPath": "/usr/local/bin/python", "python.linting.enabled": true, "python.linting.pylintEnabled": true, "python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8", "python.formatting.blackPath": "/usr/local/py-utils/bin/black", "python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf", "python.linting.banditPath": "/usr/local/py-utils/bin/bandit", "python.linting.flake8Path": "/usr/local/py-utils/bin/flake8", "python.linting.mypyPath": "/usr/local/py-utils/bin/mypy", "python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle", "python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle", "python.linting.pylintPath": "/usr/local/py-utils/bin/pylint" }, // Add the IDs of extensions you want installed when the container is created. 
"extensions": [ "ms-python.python", "ms-python.vscode-pylance" ] } }, // Use 'forwardPorts' to make a list of ports inside the container available locally. // "forwardPorts": [], // Use 'postCreateCommand' to run commands after the container is created. // "postCreateCommand": "pip3 install --user -r requirements.txt", // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root. "remoteUser": "vscode" } Dockerfile: # See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/python-3/.devcontainer/base.Dockerfile # [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3.9, 3.8, 3.7, 3.6, 3-bullseye, 3.10-bullseye, 3.9-bullseye, 3.8-bullseye, 3.7-bullseye, 3.6-bullseye, 3-buster, 3.10-buster, 3.9-buster, 3.8-buster, 3.7-buster, 3.6-buster ARG VARIANT="3.10-bullseye" FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT} # [Choice] Node.js version: none, lts/*, 16, 14, 12, 10 ARG NODE_VERSION="none" RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi # [Optional] If your pip requirements rarely change, uncomment this section to add them to the image. # COPY requirements.txt /tmp/pip-tmp/ # RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \ # && rm -rf /tmp/pip-tmp # [Optional] Uncomment this section to install additional OS packages. # RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ # && apt-get -y install --no-install-recommends <your-package-list-here> # [Optional] Uncomment this line to install global node packages. # RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1 bar.py: """An example module.""" JSON_STR = '{"a": 1}' test_bar.py: """An example test.""" import json import unittest import requests # test external lib access from foo import bar class TestFooBar(unittest.TestCase): """An example test class.""" a = requests.__name__ # to eliminate warning above that package isn't used def test_foo_bar_json(self): """An example test.""" data = json.loads(bar.JSON_STR) self.assertEqual(data['a'], 1) | I often have this same issue with Python+Devcontainer. What works for me is: setting this env variable: export PYTHONPATH=/workspace, you can also try with /workspaces or /yourProjectName depending on your setup. restart the Python Language Server with Ctrl+Shift+P > Python: Restart Language Server. Note that you do not need to rebuild the container. When you find the correct value for PYTHONPATH, you can add it to your Devcontainer configuration. | 5 | 6 |
73,598,938 | 2022-9-4 | https://stackoverflow.com/questions/73598938/why-is-dataclass-field-shared-across-instances | First time using dataclass, also not really good at Python. The following behaviour conflicts with my understanding so far: from dataclasses import dataclass @dataclass class X: x: int = 1 y: int = 2 @dataclass class Y: c1: X = X(3, 4) c2: X = X(5, 6) n1 = Y() n2 = Y() print(id(n1.c1)) print(id(n2.c1)) n1.c1.x = 99999 print(n2) This prints 140459664164272 140459664164272 Y(c1=X(x=99999, y=4), c2=X(x=5, y=6)) Why does c1 behave like a class variable? What can I do to keep n2.c1 != n1.c1, do I need to write an init function? I can get sensible results with this addition to Y: def __init__(self): self.c1 = X(3, 4) self.c2 = X(5, 6) prints: 140173334359840 140173335445072 Y(c1=X(x=3, y=4), c2=X(x=5, y=6)) | Why does c1 behave like a class variable? Because you specified default value for them and they're now a class attribute. In the Mutable Default Values section, it's mentioned: Python stores default member variable values in class attributes. But look at this: @dataclass class X: x: int = 1 y: int = 2 @dataclass class Y: c1: X c2: X = X(5, 6) print("c1" in Y.__dict__) # False print("c2" in Y.__dict__) # True c1 doesn't have default value so it's not in class's namespace. Indeed by doing so(defining default value), Python stores that c1 and c2 inside both instance's namespace (n1.__dict__) and class's namespace (Y.__dict__). Those are the same objects, only the reference is passed: @dataclass class X: x: int = 1 y: int = 2 @dataclass class Y: c1: X = X(3, 4) c2: X = X(5, 6) n1 = Y() n2 = Y() print("c1" in Y.__dict__) # True print("c1" in n1.__dict__) # True print(id(n1.c1)) # 140037361903232 print(id(n2.c1)) # 140037361903232 print(id(Y.c1)) # 140037361903232 So now, If you want them to be different you have several options: Pass arguments while instantiating (Not a good one): @dataclass class X: x: int = 1 y: int = 2 @dataclass class Y: c1: X = X(3, 4) c2: X = X(5, 6) n1 = Y(X(3, 4), X(5, 6)) n2 = Y(X(3, 4), X(5, 6)) print("c1" in Y.__dict__) # True print("c1" in n1.__dict__) # True print(id(n1.c1)) # 140058585069264 print(id(n2.c1)) # 140058584543104 print(id(Y.c1)) # 140058585065088 Use field and pass default_factory: from dataclasses import dataclass, field @dataclass class X: x: int = 1 y: int = 2 @dataclass class Y: c1: X = field(default_factory=lambda: X(3, 4)) c2: X = field(default_factory=lambda: X(5, 6)) n1 = Y() n2 = Y() print("c1" in Y.__dict__) # False print("c1" in n1.__dict__) # True print(id(n1.c1)) # 140284815353136 print(id(n2.c1)) # 140284815353712 In the second option, because I didn't specify default parameter(you can't mix both), nothing is going to be stored in the class's namespace. field(default=SOMETHING) is another way of saying = SOMETHING. | 5 | 5 |
73,598,825 | 2022-9-4 | https://stackoverflow.com/questions/73598825/how-to-get-file-from-url-in-python | I want to download text files using Python; how can I do so? I used urlopen(url).read() (from urllib.request), but it gives me the bytes representation of the file. | When downloading text files with Python I like to use the wget module import wget remote_url = 'https://www.google.com/test.txt' local_file = 'local_copy.txt' wget.download(remote_url, local_file) If that doesn't work, try using urllib from urllib import request remote_url = 'https://www.google.com/test.txt' file = 'copy.txt' request.urlretrieve(remote_url, file) When you read the file directly from the internet with requests, the response content is bytes, which is why you see the text in byte format. Try writing the content to a file and then viewing it by opening it on your desktop (note that the URL needs a scheme such as https://, otherwise requests raises a MissingSchema error): import requests remote_url = 'https://test.com/test.txt' local_file = 'local_file.txt' data = requests.get(remote_url) with open(local_file, 'wb') as file: file.write(data.content) | 3 | 2 |
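For larger files, a hedged variation on the requests approach that streams the body to disk in chunks instead of holding it all in memory; the URL here is a placeholder:

import requests

remote_url = "https://example.com/big_file.txt"  # placeholder URL
local_file = "big_file.txt"

with requests.get(remote_url, stream=True) as resp:
    resp.raise_for_status()
    with open(local_file, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=8192):
            fh.write(chunk)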
73,598,430 | 2022-9-4 | https://stackoverflow.com/questions/73598430/how-to-make-a-customized-grouped-dataframe-with-multiple-aggregations | I have a standard dataframe like the one below : Id Type Speed Efficiency Durability 0 Id001 A OK OK nonOK 1 Id002 A nonOK OK nonOK 2 Id003 B nonOK nonOK nonOK 3 Id004 B nonOK nonOK OK 4 Id005 A nonOK nonOK OK 5 Id006 A OK OK OK 6 Id007 A OK nonOK OK 7 Id008 B nonOK nonOK OK 8 Id009 C OK OK OK 9 Id010 B OK OK nonOK 10 Id011 C OK nonOK OK 11 Id012 C OK nonOK OK 12 Id013 C nonOK OK OK 13 Id014 C nonOK nonOK OK 14 Id015 C nonOK nonOK OK And I'm trying to get this kind of output : Type Test Speed Efficiency Durability 0 A OK 3 3 3 1 A nonOK 2 2 2 2 B OK 1 1 2 3 B nonOK 3 3 2 4 C OK 3 2 6 5 C nonOK 3 4 0 I tried with df.groupby('Type').agg('count') but it doesn't give the expected output. Is it possible to make this kind of transformation with pandas, please ? | You could try as follows: out = df.groupby('Type').agg({col:'value_counts' for col in df.columns[2:]})\ .fillna(0).astype(int).sort_index().reset_index().rename( columns={'level_1':'Test'}) print(out) Type Test Speed Efficiency Durability 0 A OK 3 3 3 1 A nonOK 2 2 2 2 B OK 1 1 2 3 B nonOK 3 3 2 4 C OK 3 2 6 5 C nonOK 3 4 0 | 5 | 3 |
73,563,677 | 2022-9-1 | https://stackoverflow.com/questions/73563677/setting-torch-nn-linear-diagonal-elements-zero | I am trying to build a model with a layer of torch.nn.linear with same input size and output size, so the layer would be square matrix. In this model, I want the diagonal elements of this matrix fixed to zero. Which means, during training, I don't want the diagonal elements to be changed from zero. I could only think of adding some kind of step that change diag elements to zero for each training epoch, but I am not sure if this is valid or efficient way. Is there a definite way of making this kind of layer which can ensure that the diagonal elements don't change? Sorry if my question is weird. | You can always implement your own layers. Note that all custom layers should be implemented as classes derived from nn.Module. For example: class LinearWithZeroDiagonal(nn.Module): def __init__(self, num_features, bias): super(LinearWithZeroDiagonal, self).__init__() self.base_linear_layer = nn.Linear(num_features, num_features, bias) def forward(self, x): # first, make sure the diagonal is zero with torch.no_grad(): self.base_linear_layer.weight.fill_diagonal_(0.) return self.base_linear_layer(x) | 4 | 3 |
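A quick usage sketch of the layer above (assuming the class definition is in scope); note that the zeroing happens at forward time, so the diagonal is guaranteed to be zero whenever the layer is actually applied:

import torch

layer = LinearWithZeroDiagonal(num_features=4, bias=True)
x = torch.randn(8, 4)
out = layer(x)                                        # the forward pass zeroes the diagonal first
print(torch.diag(layer.base_linear_layer.weight))     # tensor([0., 0., 0., 0.], ...)
out.sum().backward()                                  # gradients flow through the off-diagonal weights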
73,596,506 | 2022-9-4 | https://stackoverflow.com/questions/73596506/typex-is-list-vs-typex-list | In Python, suppose one wants to test whether the variable x is a reference to a list object. Is there a difference between if type(x) is list: and if type(x) == list:? This is how I understand it. (Please correct me if I am wrong) type(x) is list tests whether the expressions type(x) and list evaluate to the same object and type(x) == list tests whether the two objects have equivalent (in some sense) values. type(x) == list should evaluate to True as long as x is a list. But can type(x) evaluate to a different object from what list refers to? What exactly does the expression list evaluate to? (I am new to Python, coming from C++, and still can't quite wrap my head around the notion that types are also objects.) Does list point to somewhere in memory? What data live there? | The "one obvious way" to do it, that will preserve the spirit of "duck typing" is isinstance(x, list). Rather, in most cases, one's code won't be specific to a list, but could work with any sequence (or maybe it needs a mutable sequence). So the recomendation is actually: from collections.abc import MutableSequence ... if isinstance(x, MutableSequence): ... Now, going into your specific questions: What exactly does the expression list evaluate to? Does list point to somewhere in memory? What data live there? list in Python points to a class. A class that can be inherited, extended, etc...and thanks to a design choice of Python, the syntax for creating an instance of a class is indistinguishable from calling a function. So, when teaching Python to novices, one could just casually mention that list is a "function" (I prefer not, since it is straightout false - the generic term for both functions and classes in regards to that they can be "called" and will return a result is callable) Being a class, list does live in a specific place in memory - the "where" does not make any difference when coding in Python - but yes, there is one single place in memory where a class, which in Python is also an object, an instance of type, exists as a data structure with pointers to the various methods that one can use in a Python list. As for: type(x) is list tests whether the expressions type(x) and list evaluate to the same object and type(x) == list tests whether the two objects have equivalent (in some sense) values. That is correct: is is a special operator that unlike others cannot be overriden for any class and checks for object itentity - in the cPython implementation, it checks if both operands are at the same memory address (but keep in mind that that address, though visible through the built-in function id, behaves as if it is opaque from Python code). As for the "sense" in which objects are "equal" in Python: one can always override the behavior of the == operator for a given object, by creating the special named method __eq__ in its class. (The same is true for each other operator - the language data model lists all available "magic" methods). For lists, the implemented default comparison automatically compares each element recursively (calling the .__eq__ method for each item pair in both lists, if they have the same size to start with, of course) type(x) == list should evaluate to True as long as x is a list. But can type(x) evaluate to a different object from what list refers to? Not if "x" is a list proper: type(x) will always evaluate to list. 
But == would fail if x were an instance of a subclass of list, or another Sequence implementation: that is why it is always better to compare classes using the builtins isinstance and issubclass. | 3 | 4 |
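A short illustration of the difference, using a list subclass:

class MyList(list):
    pass

x = MyList([1, 2, 3])

print(type(x) is list)        # False - the exact class is MyList, not list
print(type(x) == list)        # False for the same reason
print(isinstance(x, list))    # True  - a MyList is still a list

from collections.abc import MutableSequence
print(isinstance(x, MutableSequence))  # True - accepts any mutable sequence, not just list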
73,594,044 | 2022-9-3 | https://stackoverflow.com/questions/73594044/structural-pattern-matching-python-match-at-any-position-in-sequence | I have a list of objects, and want to check if part of the list matches a specific pattern. Consider the following lists: l1 = ["foo", "bar"] l2 = [{1, 2},"foo", "bar"] l3 = ["foo", "bar", 5] l4 = [{1,2},"foo", "bar", 5, 6] How would I match the sequence ["foo", "bar"] in all the different cases? My naive idea is: match l4: case [*_, "foo", "bar", *_]: print("matched!") Unfortunately this is a SyntaxError: multiple starred names in sequence pattern. The issue is, that I don't know how many elements are leading and trailing the pattern. Edit: I think I need to clarify: "foo", "bar" is just a stand-in for a much more complex pattern. (I am working with an AST object) | def struct_match(lst_target, lst_pattern): for i in range(len(lst_target)-(len(lst_pattern)-1)): if lst_target[i:i+len(lst_pattern)] == lst_pattern: print('matched!') break l1 = ["foo", "bar"] l2 = [{1, 2},"foo", "bar"] l3 = ["foo", "bar", 5] l4 = [{1,2},"foo", "bar", 5, 6] l5 = [{1,2},"foo", "baz", "bar", 5, 6] patt = ["foo", "bar"] struct_match(l1, patt) struct_match(l2, patt) struct_match(l3, patt) struct_match(l4, patt) struct_match(l5, patt) # outputs matched! matched! matched! matched! PS: I just found a beautiful recursive solution here (recursion is always beautiful... if your list is not too long) | 5 | 1 |
73,593,736 | 2022-9-3 | https://stackoverflow.com/questions/73593736/why-can-we-inherit-typing-namedtuple | After Python 3.6, we have typing.NamedTuple, which is a typed version of collections.namedtuple(), we can inherit it like a class: class Employee(NamedTuple): name: str id: int Compared with collections.namedtuple, this syntax is more beautiful, but I still can't understand its implementation, whether we look at typing.py file, or do some simple tests, we will find that it is a function rather than a class: # Python 3.10.6 typing.py def NamedTuple(typename, fields=None, /, **kwargs): """...""" if fields is None: fields = kwargs.items() elif kwargs: raise TypeError("Either list of fields or keywords" " can be provided to NamedTuple, not both") try: module = sys._getframe(1).f_globals.get('__name__', '__main__') except (AttributeError, ValueError): module = None return _make_nmtuple(typename, fields, module=module) >>> type(NamedTuple) <class 'function'> I understand that it uses some metaclass magic, but I don't understand what happens when using class MyClass(NamedTuple). For this reason, I have tried to customize a function to inherit: >>> def func_for_inherit(*args, **kwargs): ... print(args, kwargs) ... >>> class Foo(func_for_inherit): ... foo: str ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: function() argument 'code' must be code, not str Well, this got a result that I can't understand. When inheriting a user-defined function, it seems that its class was called. What happened behind this? | typing.NamedTuple uses a really esoteric feature, __mro_entries__: If a base that appears in class definition is not an instance of type, then an __mro_entries__ method is searched on it. If found, it is called with the original bases tuple. This method must return a tuple of classes that will be used instead of this base. The tuple may be empty, in such case the original base is ignored. Immediately after the NamedTuple function definition, the following code appears: _NamedTuple = type.__new__(NamedTupleMeta, 'NamedTuple', (), {}) def _namedtuple_mro_entries(bases): if len(bases) > 1: raise TypeError("Multiple inheritance with NamedTuple is not supported") assert bases[0] is NamedTuple return (_NamedTuple,) NamedTuple.__mro_entries__ = _namedtuple_mro_entries This sets NamedTuple.__mro_entries__ to a function that tells the class creation system to use an actual class, _NamedTuple, as the base class. (_NamedTuple then uses metaclass features to customize the class creation process further, and the end result is a class that directly inherits from tuple.) | 5 | 8 |
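To see the mechanism in isolation, here is a minimal, self-contained sketch of __mro_entries__ that mimics what typing does with the NamedTuple function; all names are made up for the demo:

class RealBase:
    def greet(self):
        return "hello from RealBase"

def FakeBase():
    # not a class at all - just a function, like typing.NamedTuple
    raise TypeError("FakeBase is only meant to appear as a base in a class statement")

# When FakeBase shows up in a bases tuple, substitute RealBase for it
FakeBase.__mro_entries__ = lambda bases: (RealBase,)

class Child(FakeBase):        # looks like inheriting from a function...
    pass

print(Child.__mro__)          # (<class '__main__.Child'>, <class '__main__.RealBase'>, <class 'object'>)
print(Child().greet())        # hello from RealBase
print(Child.__orig_bases__)   # (<function FakeBase at 0x...>,) - the original base is preserved here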
73,590,510 | 2022-9-3 | https://stackoverflow.com/questions/73590510/modulenotfounderror-no-module-named-requests-error-for-docker-run | I have created a Docker image to run a Python script. Now when I run the command below, it throws an error. Command in PowerShell: docker build -t pull_request_summary . Error: Traceback (most recent call last): File "/usr/app/src/./listPullRequest.py", line 1, in <module> import requests ModuleNotFoundError: No module named 'requests' Just to confirm, the "requests" module is already installed on my machine, and the script runs fine if I run it directly from PowerShell. It only throws this error while running the Docker image. My question is: where does it expect to find the "requests" module for Python? | Your container must also have requests installed in it. In your Dockerfile, put this line: RUN pip install requests | 3 | 5 |
73,572,941 | 2022-9-1 | https://stackoverflow.com/questions/73572941/writing-style-to-prevent-string-concatenation-in-a-list-of-strings | Suppose I have a list/tuple of strings, COLOURS = [ "White", "Black", "Red" "Green", "Blue" ] for c in COLOURS: # rest of the code Sometimes I forget placing a comma after each entry in the list ("Red" in the above snippet). This results in one "RedGreen" instead of two separate "Red" and "Green" list items. Since this is valid Python, no IDE/text editor shows a warning/error. The incorrect value comes to the limelight only during testing. What writing style or code structure should I use to prevent this? | You're incorrect that "no IDE/text editor shows a warning/error". Pylint can identify this problem using rule implicit-str-concat (W1404) with flag check-str-concat-over-line-jumps. (And for that matter, there are lots of things that are valid Python that a linter will warn you about, like bare except: for example.) Personally, I'm using VSCode, so I enabled Pylint via the Python extension (python.linting.pylintEnabled) and set up a pylintrc like this: [tool.pylint] check-str-concat-over-line-jumps = yes Now VSCode gives this warning for your list: Implicit string concatenation found in list pylint(implicit-str-concat) [Ln 4, Col 1] Lastly, there are probably other linters that can find the same problem, but Pylint is the first one I found. | 7 | 5 |
73,587,060 | 2022-9-2 | https://stackoverflow.com/questions/73587060/how-to-load-a-pandas-column-from-a-csv-file-with-lists-as-lists-and-not-as-strin | When I write a column of lists with pandas and then read it back, the values come back as strings rather than lists. Is there any way to write a column of lists so that it is a list again when read? Here is what I mean: d=[['d',['A','B','C']],['p',['F','G']]] df=pd.DataFrame(d) df.to_csv('file.csv') When I run the following code, pd.read_csv('file.csv')['1'].values[0] the output is: "['A', 'B', 'C']" but I want this: ['A', 'B', 'C'] | One solution is to run it through ast.literal_eval: make a dictionary with the column name as key and the converter function as value, and pass that into read_csv via the converters keyword. Note that if your column has mixed data (strings and other things as well) you might want to write a custom function which filters and converts the different types. from ast import literal_eval df1 = pd.read_csv('file.csv', converters={'1': literal_eval}) After this, type(df1["1"][0]) is <class 'list'>, i.e. the column holds real Python lists again. | 3 | 6 |
73,563,804 | 2022-9-1 | https://stackoverflow.com/questions/73563804/what-is-the-recommended-way-to-instantiate-and-pass-around-a-redis-client-with-f | I'm using FastAPI with Redis. My app looks something like this from fastapi import FastAPI import redis # Instantiate redis client r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True) # Instantiate fastapi app app = FastAPI() @app.get("/foo/") async def foo(): x = r.get("foo") return {"message": x} @app.get("/bar/") async def bar(): x = r.get("bar") return {"message": x} Is it bad practice to create r as a module-scoped variable like this? If so what are the drawbacks? In Tiangolo's tutorial on setting up a SQL database connection he uses a dependency, which I guess in my case would look something like this from fastapi import Depends, FastAPI import redis # Instantiate fastapi app app = FastAPI() # Dependency def get_redis(): return redis.Redis(host='localhost', port=6379, db=0, decode_responses=True) @app.get("/foo/") async def foo(r = Depends(get_redis)): x = r.get("foo") return {"message": x} @app.get("/bar/") async def bar(r = Depends(get_redis)): x = r.get("bar") return {"message": x} I'm a bit confused as to which of these methods (or something else) would be preferred and why. | Depends will evaluate every time your function got a request, so your second example will create a new connection for each request. As @JarroVGIT said, we can use connection pooling to maintain the connection from FastAPI to Redis and reduce open-closing connection costs. Usually, I create a different file to define the connection. Let's say we have config/db.py: import redis def create_redis(): return redis.ConnectionPool( host='localhost', port=6379, db=0, decode_responses=True ) pool = create_redis() Then in the main.py from fastapi import Depends, FastAPI import redis from config.db import pool app = FastAPI() def get_redis(): # Here, we re-use our connection pool # not creating a new one return redis.Redis(connection_pool=pool) @app.get("/items/{item_id}") def read_item(item_id: int, cache = Depends(get_redis)): status = cache.get(item_id) return {"item_name": status} @app.put("/items/{item_id}") def update_item(item_id: int, cache = Depends(get_redis)): cache.set(item_id, "available") return {"status": "available", "item_id": item_id} Usually, I also split the dependencies file like the doc so we can call it from our routing module, but for simplicity, I will leave it like this. You can check this repo to experiment by yourself. It has more comprehensive code and I have already created several scenarios that might help you understand the difference. And it will also cover how your first example may block other endpoints. | 17 | 20 |
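If the endpoints are async def and the Redis calls should not block the event loop, the same pooling pattern works with redis-py's asyncio support; this sketch assumes redis-py >= 4.2, where redis.asyncio is available:

from fastapi import Depends, FastAPI
import redis.asyncio as aioredis

app = FastAPI()

# shared pool created once at import time, reused by every request
pool = aioredis.ConnectionPool(host="localhost", port=6379, db=0, decode_responses=True)

def get_redis():
    # each request gets a lightweight client on top of the shared pool
    return aioredis.Redis(connection_pool=pool)

@app.get("/items/{item_id}")
async def read_item(item_id: int, cache: aioredis.Redis = Depends(get_redis)):
    status = await cache.get(item_id)
    return {"item_name": status}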
73,569,804 | 2022-9-1 | https://stackoverflow.com/questions/73569804/dataset-batch-doesnt-work-as-expected-with-a-zipped-dataset | I have a dataset like this: a = tf.data.Dataset.range(1, 16) b = tf.data.Dataset.range(16, 32) zipped = tf.data.Dataset.zip((a, b)) list(zipped.as_numpy_iterator()) # output: [(0, 16), (1, 17), (2, 18), (3, 19), (4, 20), (5, 21), (6, 22), (7, 23), (8, 24), (9, 25), (10, 26), (11, 27), (12, 28), (13, 29), (14, 30), (15, 31)] When I apply batch(4) to it, the expected result is an array of batches, where each batch contains four tuples: [[(0, 16), (1, 17), (2, 18), (3, 19)], [(4, 20), (5, 21), (6, 22), (7, 23)], [(9, 24), (10, 25), (10, 26), (11, 27)], [(12, 28), (13, 29), (14, 30), (15, 31)]] But this is what I receive instead: batched = zipped.batch(4) list(batched.as_numpy_iterator()) # Output: [(array([0, 1, 2, 3]), array([16, 17, 18, 19])), (array([4, 5, 6, 7]), array([20, 21, 22, 23])), (array([ 8, 9, 10, 11]), array([24, 25, 26, 27])), (array([12, 13, 14, 15]), array([28, 29, 30, 31]))] I'm following this tutorial, he does the same steps but gets the correct output somehow. Update: according to the documentation this is the intended behavior: The components of the resulting element will have an additional outer dimension, which will be batch_size But it doesn't make any sense. To my understanding, dataset is a list of pieces of data. It doesn't matter the shape of those pieces of data, when we are batching it we are combining the elements [whatever their shape is] into batches, therefore it should always insert the new dimention to the second position ((length, a, b, c) -> (length', batch_size, a, b, c)). So my questions are: I wonder what is the purpose of batch() being implemented this way? And what is the alternative that does what I described? | One thing you can try doing is something like this: import tensorflow as tf a = tf.data.Dataset.range(16) b = tf.data.Dataset.range(16, 32) zipped = tf.data.Dataset.zip((a, b)).batch(4).map(lambda x, y: tf.transpose([x, y])) list(zipped.as_numpy_iterator()) [array([[ 0, 16], [ 1, 17], [ 2, 18], [ 3, 19]]), array([[ 4, 20], [ 5, 21], [ 6, 22], [ 7, 23]]), array([[ 8, 24], [ 9, 25], [10, 26], [11, 27]]), array([[12, 28], [13, 29], [14, 30], [15, 31]])] but they are still not tuples. Or: zipped = tf.data.Dataset.zip((a, b)).batch(4).map(lambda x, y: tf.unstack(tf.transpose([x, y]), num = 4)) [(array([ 0, 16]), array([ 1, 17]), array([ 2, 18]), array([ 3, 19])), (array([ 4, 20]), array([ 5, 21]), array([ 6, 22]), array([ 7, 23])), (array([ 8, 24]), array([ 9, 25]), array([10, 26]), array([11, 27])), (array([12, 28]), array([13, 29]), array([14, 30]), array([15, 31]))] | 6 | 1 |
73,582,293 | 2022-9-2 | https://stackoverflow.com/questions/73582293/airflow-external-api-call-gives-negsignal-sigsegv-error | I am calling weather API using Python script but the airflow task fails with error Negsignal.SIGSEGV. The Python script to call the weather API work fine when ran outside Airflow. DAG from airflow import DAG from airflow.operators.bash_operator import BashOperator from airflow.operators.python_operator import PythonOperator from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator from datetime import datetime, timedelta from scripts.weather_analysis.data_collection import query_weather_data import pendulum local_tz = pendulum.timezone("Asia/Calcutta") default_args = { 'owner': 'airflow', 'depends_on_past': False, #'start_date': airflow.utils.dates.days_ago(2), --> doesn't work 'start_date': datetime(2022, 8, 29, tzinfo=local_tz), } dag = DAG('weather_dag_2', default_args=default_args, schedule_interval ='0 * * * *', ) # DAG to fetch weather data from api t1 = PythonOperator( task_id = 'callApi', python_callable = query_weather_data, dag=dag ) Python script - query_weather_data.py import requests import json from scripts.weather_analysis.config import API_KEY from datetime import datetime def query_weather_data(): parameters = {'q':'Brooklyn, USA', 'appId': API_KEY} result = requests.get("http://api.openweathermap.org/data/2.5/weather?",parameters) if result.status_code == 200: json_data = result.json() print(json_data) else: print("Unable to fetch api data") Error Log: [2022-09-02, 17:00:04 IST] {local_task_job.py:156} INFO - Task exited with return code Negsignal.SIGSEGV [2022-09-02, 17:00:04 IST] {taskinstance.py:1407} INFO - Marking task as FAILED. dag_id=weather_dag_2, task_id=callApi, execution_date=20220902T103000, start_date=20220902T113004, end_date=20220902T113004 Environment details: MacOS Monterey Airflow=2.3.4 Airflow deployment mode=Local Python=3.10 I already tried the solution listed here Airflow DAG fails when PythonOperator tries to call API and download data but it doesn't solve my issue. Please help. | I am afraid this is a problem with your machine. SIGSEGV is an indication of serious problem with the environment you run Airflow on, not Airflow itself. Neither Airflow nor the code of yours (Which might be the culprit) does not seem to use any low-level C-Code (Airflow for sure and your code is likely not to use it) and this is the only way how the "code" of application might generate it. If you do not use any other custom code, then your environment and deployment is definitely the problem. There is not much Airflow can do about it, it seems that the python environment that you use by Airflow is broken - this might be because you have wrong architecture (ARM vs Intel and no emulation for example) - or because you have some librares that your Python loads and crash, but it has nothing to do with Airfow. You have not written on how you are deploying the Airflow - except that it is local, but my advice would be to recreate the environment from scratch and make sure you create a completely separate virtualenv for Airflow and you install airflow following the standard installation instructions (including constraints) https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html. However if your python installation is broken, you might need to nuke it and install from scratch. 
If you are using Docker images, then you have to make sure you use the right images for your architecture (Intel/ARM), depending on whether you are on an Intel or M1 based processor. Airflow Docker images as of 2.3.0 are published for both Intel and ARM, but you might have an old Intel-emulated Docker Desktop installation that does not cope well with a different architecture, so you might need to nuke that installation and reinstall it from scratch if this is the case. Generally speaking: track down any custom code of yours and remove/disable it to see if it makes a difference, and then progressively nuke everything you use: the virtualenv, the Python installation, the Docker environment. You can also go the other way round: get a basic "quick start" of Airflow working, and progressively add your customisation or change the deployment to get closer to your current setup (for example, change the Python version) - one step at a time. The moment it breaks, you will know the reason. If even the basic quick start does not work for you after following it rigorously and handling all the caveats described in the docs, this might indicate you need to nuke and reinstall your OS, or in extreme cases fix the hardware (SIGSEGV often happens when memory or disk gets corrupted). | 5 | 2 |
73,578,690 | 2022-9-2 | https://stackoverflow.com/questions/73578690/how-to-load-custom-yolo-v-7-trained-model | How do I load a custom yolo v-7 model. This is how I know to load a yolo v-5 model : model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp15/weights/last.pt', force_reload=True) I saw videos online and they suggested to use this : !python detect.py --weights runs/train/yolov7x-custom/weights/best.pt --conf 0.5 --img-size 640 --source final_test_v1.mp4 But I want it to be loaded like a normal model and give me the bounding box co-ordinates of where ever it found the objects. This is how I did it in yolo v-5: from models.experimental import attempt_load yolov5_weight_file = r'weights/rider_helmet_number_medium.pt' # ... may need full path model = attempt_load(yolov5_weight_file, map_location=device) def object_detection(frame): img = torch.from_numpy(frame) img = img.permute(2, 0, 1).float().to(device) #convert to required shape based on index img /= 255.0 if img.ndimension() == 3: img = img.unsqueeze(0) pred = model(img, augment=False)[0] pred = non_max_suppression(pred, conf_set, 0.20) # prediction, conf, iou # print(pred) detection_result = [] for i, det in enumerate(pred): if len(det): for d in det: # d = (x1, y1, x2, y2, conf, cls) x1 = int(d[0].item()) y1 = int(d[1].item()) x2 = int(d[2].item()) y2 = int(d[3].item()) conf = round(d[4].item(), 2) c = int(d[5].item()) detected_name = names[c] # print(f'Detected: {detected_name} conf: {conf} bbox: x1:{x1} y1:{y1} x2:{x2} y2:{y2}') detection_result.append([x1, y1, x2, y2, conf, c]) frame = cv2.rectangle(frame, (x1, y1), (x2, y2), (255,0,0), 1) # box if c!=1: # if it is not head bbox, then write use putText frame = cv2.putText(frame, f'{names[c]} {str(conf)}', (x1, y1), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,255), 1, cv2.LINE_AA) return (frame, detection_result) | You cannot use attempt_load from the Yolov5 repo as this method is pointing to the ultralytics release files. You need to use attempt_load from Yolov7 repo as this one is pointing to the right files. # yolov7 def attempt_download(file, repo='WongKinYiu/yolov7'): # Attempt file download if does not exist file = Path(str(file).strip().replace("'", '').lower()) ... # yolov5 def attempt_download(file, repo='ultralytics/yolov5', release='v6.2'): # Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v6.2', etc. from utils.general import LOGGER def github_assets(repository, version='latest'): ... Then you can download it like this: # load yolov7 method from models.experimental import attempt_load model = attempt_load('yolov7.pt', map_location='cuda:0') # load FP32 model | 4 | 2 |
73,561,079 | 2022-8-31 | https://stackoverflow.com/questions/73561079/yet-another-combinations-with-conditions-question | I want to efficiently generate pairs of elements from two lists equal to their Cartesian product with some elements omitted. The elements in each list are unique. The code below does exactly what's needed but I'm looking to optimize it perhaps by replacing the loop. See the comments in the code for details. Any advice would be appreciated. from itertools import product from pprint import pprint as pp def pairs(list1, list2): """ Return all combinations (x,y) from list1 and list2 except: 1. Omit combinations (x,y) where x==y """ tuples = filter(lambda t: t[0] != t[1], product(list1,list2)) """ 2. Include only one of the combinations (x,y) and (y,x) """ result = [] for t in tuples: if not (t[1], t[0]) in result: result.append(t) return result list1 = ['A', 'B', 'C'] list2 = ['A', 'D', 'E'] pp(pairs(list1, list1)) # Test a list with itself pp(pairs(list1, list2)) # Test two lists with some common elements Output [('A', 'B'), ('A', 'C'), ('B', 'C')] [('A', 'D'), ('A', 'E'), ('B', 'A'), ('B', 'D'), ('B', 'E'), ('C', 'A'), ('C', 'D'), ('C', 'E')] | About 5-6 times faster than the fastest in your answer's benchmark. I build sets of values that appear in both lists or just one, and combine them appropriately without further filtering. from itertools import product, combinations def pairs(list1, list2): a = {*list1} b = {*list2} ab = a & b return [ *product(a, b-a), *product(a-b, ab), *combinations(ab, 2) ] You could also make it an iterator (because unlike previous solutions, I don't need to store the already produced pairs to filter further ones): from itertools import product, combinations, chain def pairs(list1, list2): a = {*list1} b = {*list2} ab = a & b return chain( product(a, b-a), product(a-b, ab), combinations(ab, 2) ) | 5 | 2 |
73,562,722 | 2022-8-31 | https://stackoverflow.com/questions/73562722/extending-generic-class-getitem-in-python-to-accept-more-params | How can one extend __class_getitem__ for a Python Generic class? I want to add arguments to __class_getitem__ while having some be propagated upwards to Generic.__class_getitem__. Please see the below code snippet for an example use case (that doesn't run): from typing import ClassVar, Generic, TypeVar T = TypeVar("T") class Foo(Generic[T]): cls_attr: ClassVar[int] def __class_getitem__(cls, cls_attr: int, item): cls.cls_attr = cls_attr return super().__class_getitem__(item) def __init__(self, arg: T): pass foo = Foo[1, bool](arg=True) Gives me this TypeError: Traceback (most recent call last): File "/path/to/file.py", line 17, in <module> foo = Foo[1, bool](arg=True) TypeError: Foo.__class_getitem__() missing 1 required positional argument: 'item' | As @juanpa.arrivillaga suggests, this is the way to go: from typing import ClassVar, Generic, TypeVar T = TypeVar("T") class Foo(Generic[T]): cls_attr: ClassVar[int] def __class_getitem__(cls, item: tuple[int, T]): cls.cls_attr = item[0] return super().__class_getitem__(item[1]) def __init__(self, arg: T): self.arg = arg foo = Foo[1, bool](arg=True) assert foo.cls_attr == 1 assert foo.arg Unfortunately, it looks like Python type inspection tooling is not advanced enough to understand this pattern. For example, mypy==0.971 (Sept 2022) doesn't support __class_getitem__ yet per https://github.com/python/mypy/issues/11501. | 4 | 3 |
73,570,416 | 2022-9-1 | https://stackoverflow.com/questions/73570416/python-multiprocessing-cant-pickle-local-object | i have read a little about multiprocessing and pickling problems, I have also read that there are some solutions but I don't really know how can they help to mine situation. I am building Test Runner where I use Multiprocessing to call modified Test Class methods. Modified by metaclass so I can have setUp and tearDown methods before and after each run test. Here is my Parent Metaclass: class MetaTestCase(type): def __new__(cls, name: str, bases: Tuple, attrs: dict): def replaced_func(fn): def new_test(*args, **kwargs): args[0].before() result = fn(*args, **kwargs) args[0].after() return result return new_test # If method is found and its name starts with test, replace it for i in attrs: if callable(attrs[i]) and attrs[i].__name__.startswith('test'): attrs[i] = replaced_func(attrs[i]) return (super(MetaTestCase, cls).__new__(cls, name, bases, attrs)) I am using this Sub Class to inherit MetaClass: class TestCase(metaclass=MetaTestCase): def before(self) -> None: """Overridable, execute before test part.""" pass def after(self) -> None: """Overridable, execute after test part.""" pass And then I use this in my TestSuite Class: class TestApi(TestCase): def before(self): print('before') def after(self): print('after') def test_api_one(self): print('test') Sadly when I try to execute that test with multiprocessing.Process it fails on AttributeError: Can't pickle local object 'MetaTestCase.__new__.<locals>.replaced_func.<locals>.new_test' Here is how I create and execute Process: module = importlib.import_module('tests.api.test_api') # Finding and importing module object = getattr(module, 'TestApi') # Getting Class from module process = Process(target=getattr(object, 'test_api_one')) # Calling class method process.start() process.join() I tried to use pathos.helpers.mp.Process, it passes pickling phase I guess but has some problems with Tuple that I don't understand: Process Process-1: Traceback (most recent call last): result = fn(*args, **kwargs) IndexError: tuple index out of range Is there any simple solution for that so I can pickle that object and run test sucessfully along with my modified test class? | As for your original question of why you are getting the pickling error, this answer summarizes the problem and offers solutions (similar to those already provided here). Now as to why you are receiving the IndexError, this is because you are not passing an instance of the class to the function (the self argument). A quick fix would be to do this (also, please don't use object as a variable name): module = importlib.import_module('tests.api.test_api') # Finding and importing module obj = getattr(module, 'TestApi') test_api = obj() # Instantiate! # Pass the instance explicitly! Alternatively, you can also do target=test_api.test_api_one process = Process(target=getattr(obj, 'test_api_one'), args=(test_api, )) process.start() process.join() Ofcourse, you can also opt to make the methods of the class as class methods, and pass the target function as obj.method_name. Also, as a quick sidenote, the usage of a metaclass for the use case shown in the example seems like an overkill. Are you sure you can't do what you want with class decorators instead (which might also be compatible with the standard library's multiprocessing)? | 5 | 3 |
73,572,119 | 2022-9-1 | https://stackoverflow.com/questions/73572119/no-idea-how-to-exit-source-control-in-vscode | I was trying to connect my Python files. Didn't go as planned and for some reason started source control without a clue how to get back. Does anyone have a clue how I exit and just go back to normal? | You may have inadvertently clicked the Initialize Repository button. You can remove the source control by opening the directory in your File Manager and deleting the .git folder. That will remove the source control from your project. If you want to go further, you can right click on the source control icon: and then click Hide 'Source Control'. | 8 | 4 |
73,568,255 | 2022-9-1 | https://stackoverflow.com/questions/73568255/what-is-the-correct-way-to-obtain-explanations-for-predictions-using-shap | I'm new to using shap, so I'm still trying to get my head around it. Basically, I have a simple sklearn.ensemble.RandomForestClassifier fit using model.fit(X_train,y_train), and so on. After training, I'd like to obtain the Shap values to explain predictions on unseen data. Based on the docs and other tutorials, this seems to be the way to go: explainer = shap.Explainer(model.predict, X_train) shap_values = explainer.shap_values(X_test) However, this takes a long time to run (about 18 hours for my data). If I replace the model.predict with just model in the first line, i.e: explainer = shap.Explainer(model, X_train) shap_values = explainer.shap_values(X_test) It significantly reduces the runtime (down to about 40 minutes). So that leaves me to wonder what I'm actually getting in the second case? To reiterate, I just want to be able to explain new predictions, and it seems strange to me that it would be this expensive - so I'm sure I'm doing something wrong. | I think your question already contains a hint: explainer = shap.Explainer(model.predict, X_train) shap_values = explainer.shap_values(X_test) is expensive and most probably is a kind of an exact algo to calculate Shapely values out of a function. explainer = shap.Explainer(model, X_train) shap_values = explainer.shap_values(X_test) averages readily available predictions from trained model. To prove the first claim (second is the fact of the matter) let's study source code for Explainer class. Class definition: class Explainer(Serializable): """ Uses Shapley values to explain any machine learning model or python function. This is the primary explainer interface for the SHAP library. It takes any combination of a model and masker and returns a callable subclass object that implements the particular estimation algorithm that was chosen. """ def __init__(self, model, masker=None, link=links.identity, algorithm="auto", output_names=None, feature_names=None, linearize_link=True, seed=None, **kwargs): """ Build a new explainer for the passed model. Parameters ---------- model : object or function User supplied function or model object that takes a dataset of samples and computes the output of the model for those samples. So, now you know one can provide either a model or a function as the first argument. In case Pandas is supplied as masker: if safe_isinstance(masker, "pandas.core.frame.DataFrame") or \ ((safe_isinstance(masker, "numpy.ndarray") or sp.sparse.issparse(masker)) and len(masker.shape) == 2): if algorithm == "partition": self.masker = maskers.Partition(masker) else: self.masker = maskers.Independent(masker) Finally, if callable is supplied: elif callable(self.model): if issubclass(type(self.masker), maskers.Independent): if self.masker.shape[1] <= 10: algorithm = "exact" else: algorithm = "permutation" Hopefully, you see now why the first one is an exact one (and thus takes long to calculate). Now to your question(s): What is the correct way to obtain explanations for predictions using Shap? and So that leaves me to wonder what I'm actually getting in the second case? If you have a model (tree, linear, whatever) which is supported by SHAP use: explainer = shap.Explainer(model, X_train) shap_values = explainer.shap_values(X_test) These are SHAP values extracted from a model and this is why SHAP came into existence. If it's not supported, use 1st one. Both should give similar results. 
| 8 | 12 |
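For the concrete case in the question (a fitted sklearn RandomForestClassifier), a hedged sketch of the fast path looks like the following; the synthetic dataset and variable names are placeholders, not taken from the original post.

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real X_train / X_test
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Passing the fitted model (not model.predict) lets shap read the trees directly
# via its exact tree algorithm instead of re-evaluating the model many times
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)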
73,566,774 | 2022-9-1 | https://stackoverflow.com/questions/73566774/group-by-and-combine-intersecting-overlapping-geometries-in-geopandas | I have a geopandas dataframe that has several rows with overlapping polygon geometries along with an index (unique and sequential). I want to merge the overlapping polygon geometries into a multi-polygon and keep the corresponding minimum index of the individual overlapping polygons. For example: The geodataframe is as follows: original geodataframe Lets say the polygon geometries with index 10233, 10235, 10238 overlap. I want a single row for this with these geometries merged in a multi-polygon (instead of 3 separate geometries) and the corresponding index should be the minimum index of the 3 rows that is 10233. I would like to do this for the entire geodataframe I tried using the dissolve function from geopandas: gdf = gdf.dissolve(by = 'index').reset_index() This does not do anything since 'index' is unique. I also tried: gdf = gdf.dissolve().reset_index() However, this combines all geometries into single row of multi-polygons | I think this is what you had in mind: import geopandas as gpd # load your geodataframe .. # self join on geodataframe to get all polygon intersections intersects = gdf.sjoin(gdf, how="left", predicate="intersects") # dissolve intersections on right index indices using the minimum value intersects_diss = intersects.dissolve("id_right",aggfunc="min") # dissolve again on left index using minimum intersects_diss = intersects_diss.reset_index().dissolve("id_left",aggfunc="min") | 3 | 7 |
73,567,100 | 2022-9-1 | https://stackoverflow.com/questions/73567100/pip-install-with-extra-index-url-to-requirements-txt | I have the following install command for a package: pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116 and would need to add this to my requirements.txt file. I have done it by adding this to the end of the file: -i https://download.pytorch.org/whl/cu116 torch torchvision torchaudio after all the other requirements. I do not have the means to test it right now, and would need to know if I have done it correctly as I never did it before. Is this the proper way of adding it to the requirements.txt file? If yes, when adding other packages after these, for example numpy, will they be affected by the URL? Do I have to sort of "clear" the URL, or is it simply added to the list of URLs it searches in? Any help appreciated | Found the solution: the command you used and the requirements.txt file were NOT in fact the same. It works with a requirements.txt like this --extra-index-url https://download.pytorch.org/whl/cu116 torch torchvision torchaudio Turns out -i is not the same as --extra-index-url. Docs: https://pip.pypa.io/en/stable/reference/requirements-file-format/ | 8 | 24 |
73,568,036 | 2022-9-1 | https://stackoverflow.com/questions/73568036/flake8-line-break-before-binary-operator-how-to-fix-it | I keep getting: W503 line break before binary operator Please help me fix my code, as I can't figure out what is wrong here: def check_actionable(self, user_name, op, changes_table): return any(user_name in row.inner_text() and row.query_selector( self.OPS[op]) is not None and row.query_selector(self.DISABLED) is not None for row in changes_table) | the other (imo better) alternative to ignore (which resets the default ignore list) is to use extend-ignore by default both W503 and W504 are ignored (as they conflict and have flip-flopped historically). there are other rules which are ignored by default as well that you may want to preserve extend-ignore = ABC123 ignore on the other hand resets the ignore list removing the defaults disclaimer: I'm the current flake8 maintainer | 6 | 18 |
73,568,980 | 2022-9-1 | https://stackoverflow.com/questions/73568980/how-to-get-pairwise-iterator-with-last-element-as-being-the-first | I am using the following function pairwise to get the iteration of ordered pairs. For example, if the iterable is a list [1, 2, 3, 4, 5, 6] then I want to get (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1). If I use the following function from itertools import tee, zip_longest def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip_longest(a, b) then it returns (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, None). I am using dataloader in my code as iterable, so I want to pass only iterable as input to the function pairwise and I don't want to pass extra inputs. How do I get the first element as the last element in the last item as mentioned above? | zip_longest has fillvalue parameter return zip_longest(a, b, fillvalue=iterable[0]) or as suggested in the comments use the returned value of the next(b, None) in fillvalue def pairwise(iterable): a, b = tee(iterable) return zip_longest(a, b, fillvalue=next(b, None)) Output print(list(pairwise(lst))) # [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)] You can also do it without converting the list to iterators def pairwise(iterable): return zip_longest(iterable, iterable[1:], fillvalue=iterable[0]) | 5 | 7 |
73,567,411 | 2022-9-1 | https://stackoverflow.com/questions/73567411/ansible-molecule-python-not-found | I have some ansible roles and I would like to use molecule testing with them. When I execute command molecule init scenario -r get_files_uid -d docker I get the following file structure get_files_uid βββ molecule β βββ default β βββ converge.yml β βββ molecule.yml β βββ verify.yml βββ tasks β βββ main.yml βββ vars βββ main.yml After that, I execute molecule test and I receive the following error: PLAY [Converge] **************************************************************** TASK [Gathering Facts] ********************************************************* fatal: [instance]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"failed": true, "module_stderr": "/bin/sh: python: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"} PLAY RECAP ********************************************************************* instance : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0 My ansible.cfg looks like this: [defaults] roles_path = roles ansible_python_interpreter = /usr/bin/python3 And I use MacOS with Ansible ansible [core 2.13.3] config file = None configured module search path = ['/Users/scherevko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /opt/homebrew/Cellar/ansible/6.3.0/libexec/lib/python3.10/site-packages/ansible ansible collection location = /Users/scherevko/.ansible/collections:/usr/share/ansible/collections executable location = /opt/homebrew/bin/ansible python version = 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)] jinja version = 3.1.2 libyaml = True molecule version: molecule 4.0.1 using python 3.10 ansible:2.13.3 delegated:4.0.1 from molecule docker:2.0.0 from molecule_docker requiring collections: community.docker>=3.0.0-a2 podman:2.0.2 from molecule_podman requiring collections: containers.podman>=1.7.0 ansible.posix>=1.3.0 When I run molecule --debug test I see ANSIBLE_PYTHON_INTERPRETER: python not found How to fix that? | The default scaffold for role molecule role initialization uses quay.io/centos/centos:stream8 as the test instance image (see molecule/default/molecule.yml) This image does not have any /usr/bin/python3 file available: $ docker run -it --rm quay.io/centos/centos:stream8 ls -l /usr/bin/python3 ls: cannot access '/usr/bin/python3': No such file or directory If you let ansible discover the available python by itself, you'll see that the interpreter actually found is /usr/libexec/platform-python like in the following demo (no ansible.cfg in use): $ docker run -d --rm --name instance quay.io/centos/centos:stream8 tail -f /dev/null 2136ad2e8b91f73d21550b2403a6b37f152a96c2373fcb5eb0491a323b0ed093 $ ansible instance -i instance, -e ansible_connection=docker -m setup | grep discovered "discovered_interpreter_python": "/usr/libexec/platform-python", $ docker stop instance instance Since your ansible.cfg only contains a default value for role path besides that wrong python interpreter path, I suggest you simply remove that file which will fix your problem. At the very least, remove the line defining ansible_python_interpreter to use default settings. 
Note that you should also make sure that ANSIBLE_PYTHON_INTERPRETER is not set as an environment variable in your current shell (and remove that definition from whatever shell init file sets it, if that is the case). Hardcoding the path of the Python interpreter should in any case be a last resort, reserved for a few edge cases. | 4 | 0 |
73,567,668 | 2022-9-1 | https://stackoverflow.com/questions/73567668/convert-numbers-in-millions-and-thousands-to-string-format | I have a column in my pandas dataframe: df = pd.DataFrame([[3000000, 2500000, 1800000, 800000, 500000]], columns=['Market value']) I want to convert the numbers in this column to a format with millions and hundred thousands, for example: 3000000 -> €3M 2500000 -> €2.5M 1800000 -> €1.8M 800000 -> €800K 500000 -> €500K This is my attempt so far: df['Market Value'] = np.select(condlist = [(df['Market value']/1000) >= 1000], choicelist = ['€'+(df['Market value'].astype(float)/1000000).astype(int).astype(str) + 'M'], default = '€'+(df['Market value'].astype(float)/1000).astype(int).astype(str) + 'K') This produces the output: 3000000 -> €3M 2500000 -> €2M * this needs to be €2.5M 1800000 -> €1M * this needs to be €1.8M 800000 -> €800K 500000 -> €500K | You can apply this function to the column: def format(num): if num > 1000000: if not num % 1000000: return f'€{num // 1000000}M' return f'€{round(num / 1000000, 1)}M' return f'€{num // 1000}K' Testing: nums_list = [3000000, 2500000, 1800000, 800000, 500000] for num in nums_list: print(format(num)) Output: €3M €2.5M €1.8M €800K €500K | 4 | 5 |
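A short usage sketch for the helper above (the frame is rebuilt here as a single column of values; note that the helper shadows Python's built-in format, so consider renaming it in real code):

import pandas as pd

df = pd.DataFrame({'Market value': [3000000, 2500000, 1800000, 800000, 500000]})
# `format` below refers to the helper defined in the answer, not the built-in
df['Formatted'] = df['Market value'].apply(format)
print(df['Formatted'].tolist())
# ['€3M', '€2.5M', '€1.8M', '€800K', '€500K']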
73,566,474 | 2022-9-1 | https://stackoverflow.com/questions/73566474/unable-to-locate-package-python-openssl | I'm trying to install Pyenv, and I'm running on Ubuntu 22.04 LTS. but whenever I run this command sudo apt install -y make build-essential libssl-dev zlib1g-dev \ libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev \ libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python-openssl \ git I get this error Unable to locate package python-openssl I've tried searching for solutions online, but I think they have encountered it on older versions of Ubuntu and not on the latest version. | Make sure your list of packages is updated (sudo apt update). Python openssl bindings are available in 22.04 in python3-openssl (link), so you can install it by running sudo apt install python3-openssl | 31 | 58 |
73,560,307 | 2022-8-31 | https://stackoverflow.com/questions/73560307/exclude-some-attributes-from-fields-method-of-dataclass | I have a Python dataclass that looks something like: @dataclass class MyDataClass: field0: int = 0 field1: int = 0 # --- Some other attribute that shouldn't be considered as _fields_ of the class attr0: int = 0 attr1: int = 0 I'd like to write the class in such a way that, when calling dataclasses.fields(my_data:=MyDataClass()), only field0 and field1 are reported. As a workaround, I've split the class into two inheriting classes, as: @dataclass class MyData: field0: int = 0 field1: int = 0 class MyDataClass(MyData): # --- Some other attribute that shouldn't be considered as _fields_ of the class attr0: int = 0 attr1: int = 0 It works, but I don't know if it's the right way (some drawback I'm not considering?) or if there is a more straightforward way to do it. | Not the most elegant solution, but you may declare the non-field attributes as InitVar[int] and set them in a __post_init__() method. from dataclasses import dataclass, InitVar @dataclass class MyDataClass: field0: int = 0 field1: int = 0 # --- Some other attribute that shouldn't be considered as _fields_ of the class attr0: InitVar[int] = 0 attr1: InitVar[int] = 0 def __post_init__(self, attr0, attr1): self.attr0 = attr0 self.attr1 = attr1 | 5 | 4 |
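A quick verification sketch for the accepted pattern: InitVar pseudo-fields are not reported by dataclasses.fields(), while the attributes are still set in __post_init__().

import dataclasses

obj = MyDataClass(field0=1, field1=2, attr0=7, attr1=9)
print([f.name for f in dataclasses.fields(obj)])   # ['field0', 'field1']
print(obj.attr0, obj.attr1)                        # 7 9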
73,559,902 | 2022-8-31 | https://stackoverflow.com/questions/73559902/flask-app-hello-run-error-no-such-option-app | TL;DR running flask --app from the command line results in Error: No such option: --app. The current version of Anaconda (at the time of this question) includes Flask version 1.x. Here is the code from the Flask hello world tutorial at: palletsprojects.com quickstart from flask import Flask app = Flask(__name__) @app.route("/") def hello_world(): return "<p>Hello, World!</p>" and running it with: flask --app hello run results in the error, Error: No such option: --app How can I run this with the "--app" option? | Display your flask version with flask --version. Using the defaults from the anaconda package manager, conda update flask won't update flask to 2.x. --app works in >=2.2.x. In my case, at the time of this post, conda search flask did not show anything past version 2.1.3. To get this version or the latest available, navigate to https://anaconda.org/search?q=flask and select the appropriate result for your OS. Use one of the examples provided and add your desired version to the end, e.g. conda install -c conda-forge flask=2.2.2 Run flask --app and you should get: Error: Option '--app' requires an argument. Note: If you're seeing an Exception: HTTPConnectionPool(..., make sure you are running these commands from an anaconda terminal. | 10 | 6 |
73,556,924 | 2022-8-31 | https://stackoverflow.com/questions/73556924/how-to-reorder-measures-in-customizing-hover-label-appearance-plotly | I'm making an interactive map, there is no problem with the map itself, there is a problem with the way the elements are located in the additional hover marks. Is there any way to change this order? Is it possible not to display latitude and longitude indicators there? Sample code: @st.cache(hash_funcs={dict: lambda _: None}) def my_stat_map(df_region_map): fig_map = px.scatter_mapbox(df_region_map, hover_name='Region name', hover_data=['Confirmed', 'Deaths', 'Recovered', 'Daily confirmed', 'Daily deaths', 'Daily recovered'], lat='Latitude dd', lon='Longitude dd', size='Confirmed', color='Confirmed', color_continuous_scale='Sunsetdark', zoom=3, size_max=45, opacity=0.8, height=600) fig_map.update_layout(mapbox_style="carto-positron", showlegend=True) fig_map.update_layout(margin={"r": 0, "t": 0, "l": 0, "b": 0}) dict_map = {'map_key': fig_map} return dict_map What do I get: How to change the order in this output? I would like to remove the latitude and longitude, or at least move them to the end of the output. | This is covered here: https://plotly.com/python/hover-text-and-formatting/#disabling-or-customizing-hover-of-columns-in-plotly-express fig_map = px.scatter_mapbox( df_region_map, hover_name="Region name", hover_data={ "Confirmed":True, "Deaths":True, "Recovered":True, "Daily confirmed":True, "Daily deaths":True, "Daily recovered":True, "Latitude dd":False, "Longitude dd":False }, lat="Latitude dd", lon="Longitude dd", size="Confirmed", color="Confirmed", color_continuous_scale="Sunsetdark", zoom=3, size_max=45, opacity=0.8, height=600, ) fig_map.update_layout(mapbox_style="carto-positron", showlegend=True) fig_map.update_layout(margin={"r": 0, "t": 0, "l": 0, "b": 0}) fig_map create data frame import plotly.express as px import pandas as pd import requests df_ = pd.read_csv( "https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/latest/owid-covid-latest.csv" ) # alpha3 df_geo = pd.json_normalize( requests.get( "https://raw.githubusercontent.com/eesur/country-codes-lat-long/master/country-codes-lat-long-alpha3.json" ).json()["ref_country_codes"] ).rename(columns={"latitude": "Latitude dd", "longitude": "Longitude dd"}) df = df_.loc[ :, [ "iso_code", "location", "total_cases", "total_deaths", "total_tests", "new_cases", "new_deaths", "new_tests", ], ].rename( columns={ "location": "Region name", "total_cases": "Confirmed", "total_deaths": "Deaths", "total_tests": "Recovered", "new_cases": "Daily confirmed", "new_deaths": "Daily deaths", "new_tests": "Daily recovered", } ) df_region_map = pd.merge(df, df_geo, left_on="iso_code", right_on="alpha3").dropna(subset="Confirmed") | 6 | 2 |
73,558,171 | 2022-8-31 | https://stackoverflow.com/questions/73558171/inheritance-with-dataclasses | I am trying to understand what are the good practices when using inheritance with dataclasses. Let's say I want an "abstract" parent class containing a set of variables and methods, and then a series of child classes that inherit these methods and the variables, where in each of them the variables have a different default value. from dataclasses import dataclass @dataclass class ParentClass: a_variable: str def a_function(self) -> None: print("I am a class") # ONE @dataclass class DataclassChild1(ParentClass): a_variable: str = "DataclassChild" # TWO @dataclass class DataclassChild2(ParentClass): def __init__(self) -> None: super().__init__(a_variable="Child") # THREE class ClassChild(ParentClass): def __init__(self) -> None: super().__init__(a_variable="Child") What would be the correct way to implement this (one/two/three), if any? Or is it an overkill and it would be best to just use different instances of ParentClass, passing different values to the constructor? I think I should use the @dataclass decorator also for the child classes, but if I check the type of the child class, that seems to be a dataclass even if I don't use it. Plus, I feel like overwriting __init__ defeats the purpose of using a dataclass in the first place, but on the other hand the standard dataclass syntax seems useless because it would mean having to rewrite all the variables in the child classes (a_variable: str = "DataclassChild"). | I would argue that #1 is the most correct method. For the example you showed, it appears to be irrelevant which method you use, but if you add a second variable, the differences become apparent. This is implicitly confirmed by the Inheritance section in the documentation. @dataclass class ParentClass: a: str b: str = "parent-b" # This works smoothly @dataclass class ChildClass1(ParentClass): a: str = "child-a" # This works, but is a maintenance nightmare @dataclass class ChildClass2(ParentClass): def __init__(self, a="child-a", b="parent-b"): super().__init__(a, b) # This works, but it changes the signature and only works if a is first @dataclass class ChildClass3(ParentClass): def __init__(self, a="child-a", **kwargs): super().__init__(a, **kwargs) Right now, the dataclass decorator is adding default methods, including __init__ to your class. That means that if you wanted to use option #2 or #3, you would have to know and copy the function signature for all the parameters. At the same time, option #1 allows you to change the default for just a. The other way to do what you're doing is to create a __post_init__ method for your child classes, which can then override the parent default value: @dataclass class ParentClass: a: str = '' # Or pick some other universally acceptable marker @dataclass class ChildClass(ParentClass): def __post_init__(self): if self.a == '': self.a = "child-a" This is also needlessly complex for most scenarios, but may be useful for a more complex situation. Normally __post_init__ is meant to be used to initialize derived fields, as in the example in the linked documentation. | 5 | 3 |
73,558,036 | 2022-8-31 | https://stackoverflow.com/questions/73558036/add-label-multi-index-on-top-of-columns | Context: I'd like to add a new multi-index/row on top of the columns. For example if I have this dataframe: tt = pd.DataFrame({'A':[1,2,3],'B':[4,5,6],'C':[7,8,9]}) Desired Output: How could I make it so that I can add "Table X" on top of the columns A,B, and C? Table X A B C 0 1 4 7 1 2 5 8 2 3 6 9 Possible solutions(?): I was thinking about transposing the dataframe, adding the multi-index, and transpose it back again, but not sure how to do that without having to write the dataframe columns manually (I've checked other SO posts about this as well) Thank you! | If you want a data frame like you wrote, you need a Multiindex data frame, try this: import pandas as pd # you need a nested dict first dict_nested = {'Table X': {'A':[1,2,3],'B':[4,5,6],'C':[7,8,9]}} # then you have to reform it reformed_dict = {} for outer_key, inner_dict in dict_nested.items(): for inner_key, values in inner_dict.items(): reformed_dict[(outer_key, inner_key)] = values # last but not least convert it to a multiindex dataframe multiindex_df = pd.DataFrame(reformed_dict) print(multiIndex_df) # >> Table X # >> A B C # >> 0 1 4 7 # >> 1 2 5 8 # >> 2 3 6 9 | 6 | 2 |
73,557,943 | 2022-8-31 | https://stackoverflow.com/questions/73557943/mypy-fails-with-overloads-and-literals | I'm trying to understand typing.overload and have applied it in a simple case where I want a function that takes input x: Literal["foo", "bar"] and returns the list [x]. I'd like mypy to type the resulting list as list[Literal["foo"]] or list[Literal["bar"]] depending on the value of x. I know I could achieve this with a TypeVar, but I'd still like to understand why the code below fails with the following error: test.py:14: error: Overloaded function implementation cannot produce return type of signature 1 test.py:14: error: Overloaded function implementation cannot produce return type of signature 2 from typing import Literal, overload @overload def f(x: Literal["foo"]) -> list[Literal["foo"]]: ... @overload def f(x: Literal["bar"]) -> list[Literal["bar"]]: ... def f(x: Literal["foo", "bar"]) -> list[Literal["foo", "bar"]]: return [x] | Lists in Python are invariant. That means that, even if B is a subtype of A, there is no relation between the types list[A] and list[B]. If list[B] were allowed to be a subtype of list[A], then someone could come along and do this. my_b_list: list[B] = [] my_a_list: list[A] = my_b_list my_a_list.append(A()) print(my_b_list) # Oh no, a list[B] contains an A value! If you plan to modify the returned list, then what you're doing isn't safe. End of story. If you plan to treat the list as immutable, then consider what operations you actually need, and you may be able to find a covariant supertype of list in typing. For example, Sequence is a popular choice. It supports iteration, random access, and length access, while explicitly not allowing mutation. from typing import Literal, overload, Sequence @overload def f(x: Literal["foo"]) -> Sequence[Literal["foo"]]: ... @overload def f(x: Literal["bar"]) -> Sequence[Literal["bar"]]: ... def f(x: Literal["foo", "bar"]) -> Sequence[Literal["foo", "bar"]]: return [x] (Note: typing.Sequence is deprecated in Python 3.9; if you only plan to support 3.9+, you might use collections.abc.Sequence instead) | 7 | 4 |
73,557,596 | 2022-8-31 | https://stackoverflow.com/questions/73557596/count-occurrences-of-stings-in-a-row-pandas | I'm trying to count the number of instances of a certain sting in a row in a pandas dataframe. In the example here I utilized a lambda function and pandas .count() to try and count the number of times 'True' exists in each row. Though instead of a count of 'True' it is just returning a boolean whether or not it exists in the row... #create dataframe d = {'Period': [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4], 'Result': ['True','None','False','True','False','True','False','True','True','False','False','True','False','True','False','False'], 'Result1': ['True','None','False','True','False','True','False','True','True','False','False','True','False','True','False','False'], 'Result2': ['True','None','False','True','False','True','False','True','True','False','False','True','False','True','False','False']} df = pd.DataFrame(data=d) #count instances of Trus or False in each row df['Count'] = df.apply(lambda row: row.astype(str).str.count('True').any(), axis=1) print(df) The desired outcome is: Period Result Result1 Result2 Count 1 True True True 3 2 None None None 0 3 False False False 0 4 True True True 3 1 False False False 0 2 True True True 3 3 False False False 0 ... ... ... ... ...... | You can use np.where: df['count'] = np.where(df == 'True', 1, 0).sum(axis=1) Regarding why your apply returns a boolean: both any and all returns boolean, not numbers Edit: You can include df.isin for multiple conditions: df['count'] = np.where(df.isin(['True', 'False']), 1, 0).sum(axis=1) | 3 | 4 |
73,553,299 | 2022-8-31 | https://stackoverflow.com/questions/73553299/how-do-you-perform-conditional-operations-on-different-elements-in-a-pandas-data | Let's say I have a Pandas Dataframe of the price and stock history of a product at 10 different points in time: df = pd.DataFrame(index=[np.arange(10)]) df['price'] = 10,10,11,15,20,10,10,11,15,20 df['stock'] = 30,20,13,8,4,30,20,13,8,4 df price stock 0 10 30 1 10 20 2 11 13 3 15 8 4 20 4 5 10 30 6 10 20 7 11 13 8 15 8 9 20 4 How do I perform operations between specific rows that meet certain criteria? In my example row 0 and row 5 meet the criteria "stock over 25" and row 4 and row 9 meet the criteria "stock under 5". I would like to calculate: df['price'][4] - df['price'][0] and df['price'][9] - df['price'][5] but not df['price'][9] - df['price'][0] or df['price'][4] - df['price'][5]. In other words, I would like to calculate the price change between the most recent event where stock was under 5 vs the most recent event where stock was over 25; over the whole series. Of course, I would like to do this over larger datasets where picking them manually is not good. | First, set up data frame and add some calculations: import pandas as pd import numpy as np df = pd.DataFrame(index=[np.arange(10)]) df['price'] = 10,10,11,15,20,10,10,11,15,20 df['stock'] = 30,20,13,8,4,30,20,13,8,4 df['stock_under_5'] = df['stock'] < 5 df['stock_over_25'] = df['stock'] > 25 df['cum_stock_under_5'] = df['stock_under_5'].cumsum() df['change_stock_under_5'] = df['cum_stock_under_5'].diff() df['change_stock_under_5'].iloc[0] = df['stock_under_5'].iloc[0]*1 df['next_row_change_stock_under_5'] = df['change_stock_under_5'].shift(-1) df['cum_stock_over_25'] = df['stock_over_25'].cumsum() df['change_stock_over_25'] = df['cum_stock_over_25'].diff() df['change_stock_over_25'].iloc[0] = df['stock_over_25'].iloc[0]*1 df['next_row_change_stock_over_25'] = df['change_stock_over_25'].shift(-1) df['row'] = np.arange(df.shape[0]) df['next_row'] = df['row'].shift(-1) df['next_row_price'] = df['price'].shift(-1) Next we find all windows where either the stock went over 25 or below 5 by grouping over the cumulative marker of those events. changes = ( df.groupby(['cum_stock_under_5', 'cum_stock_over_25']) .agg({'row':'first', 'next_row':'last', 'change_stock_under_5':'max', 'change_stock_over_25':'max', 'next_row_change_stock_under_5':'max', 'next_row_change_stock_over_25':'max', 'price':'first', 'next_row_price':'last'}) .assign(price_change = lambda x: x['next_row_price'] - x['price']) .reset_index(drop=True) ) For each window we find what happened at the beginning of the window: if change_stock_under_5 = 1 it means the window started with the stock going under 5, if change_stock_over_25 = 1 it started with the stock going over 25. Same for the end of the window using the columns next_row_change_stock_under_5 and next_row_change_stock_over_25 Now, we can readily extract the stock price change in rows where the stock went from being over 25 to being under 5: from_over_to_below = changes[(changes['change_stock_over_25']==1) & (changes['next_row_change_stock_under_5']==1)] and the other way around: from_below_to_over = changes[(changes['change_stock_under_5']==1) & (changes['next_row_change_stock_over_25']==1)] You can for example calculate the average price change when the stock went from over 25 to below 5: from_over_to_below.price_change.mean() | 4 | 1 |
73,551,471 | 2022-8-31 | https://stackoverflow.com/questions/73551471/deploying-django-channels-with-nginx | I am trying to deploy the Django application in AWS ubuntu os with Django channels, Using Nginx. I had configured the Django server in Nginx. But I don't know how to configure channels or Redis-server in Nginx. My nginx config is as follows: server { listen 80; server_name 52.77.215.218; location / { include proxy_params; proxy_pass http://localhost:8000/ } } My settings.py: CHANNEL_LAYERS = { "default": { "BACKEND": "channels_redis.core.RedisChannelLayer", "CONFIG": { "hosts": [("127.0.0.1", 6379)], }, }, } My requirements.txt: aioredis==1.3.1 asgiref==3.5.2 async-timeout==4.0.2 attrs==22.1.0 autobahn==22.7.1 Automat==20.2.0 certifi==2022.6.15 cffi==1.15.1 channels==3.0.5 channels-redis==2.4.2 charset-normalizer==2.1.1 constantly==15.1.0 coreapi==2.3.3 coreschema==0.0.4 cryptography==37.0.4 daphne==3.0.2 defusedxml==0.7.1 Django==4.1 django-cors-headers==3.13.0 django-templated-mail==1.1.1 djangorestframework==3.13.1 djangorestframework-simplejwt==4.8.0 djoser==2.1.0 gunicorn==20.1.0 hiredis==2.0.0 hyperlink==21.0.0 idna==3.3 incremental==21.3.0 itypes==1.2.0 Jinja2==3.1.2 MarkupSafe==2.1.1 msgpack==0.6.2 mysql==0.0.3 mysqlclient==2.1.1 oauthlib==3.2.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.21 PyJWT==2.4.0 pyOpenSSL==22.0.0 python3-openid==3.2.0 pytz==2022.2.1 requests==2.28.1 requests-oauthlib==1.3.1 service-identity==21.1.0 six==1.16.0 social-auth-app-django==4.0.0 social-auth-core==4.3.0 sqlparse==0.4.2 Twisted==22.4.0 twisted-iocpsupport==1.0.2 txaio==22.2.1 typing_extensions==4.3.0 tzdata==2022.2 uritemplate==4.1.1 urllib3==1.26.12 zope.interface==5.4.0 When I run server with python3 manage.py runserver 0.0.0.0:8000 Server running good and also connecting with redis-server but when I run server with gunicorn app.wsgi -b 0.0.0.0:800 then failed to connect with webbsocket. I also tried Hostinger VPS as well, but Same Issue. | You will need to download Daphne. Daphne is a high-performance websocket server for Django channels. https://docs.djangoproject.com/en/4.1/howto/deployment/asgi/daphne/ https://github.com/django/daphne https://channels.readthedocs.io/en/stable/deploying.html How you can run daphne: daphne -b 0.0.0.0 -p 8070 django_project.asgi:application Here is my Nginx conf for channels: upstream django { server 127.0.0.1:8080; } upstream websockets{ server 127.0.0.1:8070; } server { ... ... location /ws { proxy_pass http://websockets; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; ... } location / { proxy_pass http://django; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; proxy_set_header X-Real-IP $remote_addr; ... } } | 4 | 4 |
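For completeness, a hedged sketch of the asgi.py module that daphne serves in the setup above ("app" and "yourapp" are assumed names based on the gunicorn app.wsgi command in the question; the routing module is hypothetical). The point is that gunicorn serving app.wsgi only speaks HTTP, which is why the websocket handshake has to go through daphne and the /ws location instead.

import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack               # noqa: E402
from channels.routing import ProtocolTypeRouter, URLRouter  # noqa: E402
import yourapp.routing                                      # noqa: E402  (hypothetical app with websocket_urlpatterns)

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(
        URLRouter(yourapp.routing.websocket_urlpatterns)
    ),
})

# Served with: daphne -b 127.0.0.1 -p 8070 app.asgi:application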
73,516,000 | 2022-8-28 | https://stackoverflow.com/questions/73516000/is-there-a-way-to-cumulatively-and-distinctively-expand-list-in-polars | For distance, I want to accomplish conversion like below. df = pl.DataFrame({ "col": [["a"], ["a", "b"], ["c"]] }) ββββββββββββββ β col β β --- β β list[str] β ββββββββββββββ‘ β ["a"] β β ["a", "b"] β β ["c"] β ββββββββββββββ β β β ββββββββββββββ¬ββββββββββββββββββ β col β col_cum β β --- β --- β β list[str] β list[str] β ββββββββββββββͺββββββββββββββββββ‘ β ["a"] β ["a"] β β ["a", "b"] β ["a", "b"] β β ["c"] β ["a", "b", "c"] β ββββββββββββββ΄ββββββββββββββββββ I've tried polars.Expr.cumulative_eval(), but could not get it to work. I can access the first element and last element in every iteration. But I want here is the result of the previous iteration i think. Could I get some help? | We can use the cumulative_eval expression. But first, let's expand your data so that we can include some other things that may be of interest. We'll include a group variable to show how the algorithm can be used with grouping variables. We'll also include an empty list to show how the algorithm will handle these. import polars as pl df = pl.DataFrame( { "group": [1, 1, 1, 2, 2, 2, 2], "var": [["a"], ["a", "b"], ["c"], ["p"], ["q", "p"], [], ["s"]], } ) df shape: (7, 2) βββββββββ¬βββββββββββββ β group β var β β --- β --- β β i64 β list[str] β βββββββββͺβββββββββββββ‘ β 1 β ["a"] β β 1 β ["a", "b"] β β 1 β ["c"] β β 2 β ["p"] β β 2 β ["q", "p"] β β 2 β [] β β 2 β ["s"] β βββββββββ΄βββββββββββββ The Algorithm Here's the heart of the algorithm: ( df .with_columns( pl.col('var') .cumulative_eval( pl.element() .explode() .unique() .sort() .implode() ) .list.drop_nulls() .over('group') .alias('cumulative') ) ) shape: (7, 3) βββββββββ¬βββββββββββββ¬ββββββββββββββββββ β group β var β cumulative β β --- β --- β --- β β i64 β list[str] β list[str] β βββββββββͺβββββββββββββͺββββββββββββββββββ‘ β 1 β ["a"] β ["a"] β β 1 β ["a", "b"] β ["a", "b"] β β 1 β ["c"] β ["a", "b", "c"] β β 2 β ["p"] β ["p"] β β 2 β ["q", "p"] β ["p", "q"] β β 2 β [] β ["p", "q"] β β 2 β ["s"] β ["p", "q", "s"] β βββββββββ΄βββββββββββββ΄ββββββββββββββββββ How it works cumulative_eval allows us to treat a subset of rows for a column as if it was a Series itself (with the exception that we access the elements of the underlying Series using polars.element.) So, let's simulate what the cumulative_eval expression is doing by working work with the Series itself directly. We'll simulate what the algorithm does when cumulative_eval reaches the last row where group == 1 (the third row). The first major step of the algorithm is to explode the lists. explode will put each element of each list on its own row: ( df .select( pl.col('var') .filter(pl.col('group') == 1) .explode() ) ) shape: (4, 1) βββββββ β var β β --- β β str β βββββββ‘ β a β β a β β b β β c β βββββββ In the next step, we will use unique and sort to eliminate duplicates and keep the order consistent. ( df .select( pl.col('var') .filter(pl.col('group') == 1) .explode() .unique() .sort() ) ) shape: (3, 1) βββββββ β var β β --- β β str β βββββββ‘ β a β β b β β c β βββββββ At this point, we need only to roll up all the values into a list. ( df .select( pl.col('var') .filter(pl.col('group') == 1) .explode() .unique() .sort() .implode() ) ) shape: (1, 1) βββββββββββββββββββ β var β β --- β β list[str] β βββββββββββββββββββ‘ β ["a", "b", "c"] β βββββββββββββββββββ And that is the value that cumulative_eval returns for the third row. 
Performance The documentation for cumulative_eval comes with a strong warning about performance. Warning: This can be really slow as it can have O(n^2) complexity. Donβt use this for operations that visit all elements. Let's simulate some data. The code below generates about 9.5 million records, 10,000 groups, so that there are about 950 observations per group. import numpy as np from string import ascii_lowercase rng = np.random.default_rng(1) nbr_rows = 10_000_000 df = ( pl.DataFrame({ 'group': rng.integers(1, 10_000, size=nbr_rows), 'sub_list': rng.integers(1, 10_000, size=nbr_rows), 'var': rng.choice(list(ascii_lowercase), nbr_rows) }) .group_by('group', 'sub_list') .agg( pl.col('var') ) .drop('sub_list') .sort('group') ) df shape: (9515737, 2) βββββββββ¬βββββββββββββ β group β var β β --- β --- β β i64 β list[str] β βββββββββͺβββββββββββββ‘ β 1 β ["q", "r"] β β 1 β ["z"] β β 1 β ["b"] β β 1 β ["j"] β β ... β ... β β 9999 β ["z"] β β 9999 β ["e"] β β 9999 β ["s"] β β 9999 β ["s"] β βββββββββ΄βββββββββββββ One my 32-core system, here's the wall-clock time: import time start = time.perf_counter() ( df .with_columns( pl.col('var') .cumulative_eval( pl.element() .explode() .unique() .sort() .implode() ) .list.drop_nulls() .over('group') .alias('cumulative') ) ) print(time.perf_counter() - start) shape: (9515737, 3) βββββββββ¬βββββββββββββ¬ββββββββββββββββββββββ β group β var β cumulative β β --- β --- β --- β β i64 β list[str] β list[str] β βββββββββͺβββββββββββββͺββββββββββββββββββββββ‘ β 1 β ["q", "r"] β ["q", "r"] β β 1 β ["z"] β ["q", "r", "z"] β β 1 β ["b"] β ["b", "q", ... "z"] β β 1 β ["j"] β ["b", "j", ... "z"] β β ... β ... β ... β β 9999 β ["z"] β ["a", "b", ... "z"] β β 9999 β ["e"] β ["a", "b", ... "z"] β β 9999 β ["s"] β ["a", "b", ... "z"] β β 9999 β ["s"] β ["a", "b", ... "z"] β βββββββββ΄βββββββββββββ΄ββββββββββββββββββββββ >>> print(time.perf_counter() - start) 118.46121257600134 Roughly 2 minutes on my system for 9.5 million records. Depending on your system, you may get better or worse performance. But the point is that is didn't take hours to complete. If you need better performance, we can come up with a better-performing algorithm (or perhaps put in a feature request for a cumlist feature in Polars, which might have a complexity better than the O(n^2) complexity of cumulative_eval.) | 5 | 3 |