question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
---|---|---|---|---|---|---|
75,357,468 | 2023-2-6 | https://stackoverflow.com/questions/75357468/matplotlib-supylabel-on-second-axis-of-multiplot | I'm not finding it possible to add a second supylabel for a right-hand y-axis of a multiplot. Can anyone please confirm 1) whether or not it can be done and/or 2)provide guidance on how? I am trying to achieve this: Because there are a variable number of subplots (sometimes an odd number, sometimes even) across the broader project, using subplot-level labelling to label the "middle" subplot would be problematic. I'm presently accomplishing with figure level text. Which looks fine within python, but the right label gets cut-off by savefig. I can only get it to work if I dummy-in null ax-level y-labels " \n". nrows = len(dftmp.GroupingCol.unique()) ncols = 1 fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(14,10), constrained_layout=True, sharex=True) for e, ep in enumerate(dftmp.GroupingCol.unique(), start=1): # define a figure axis and plot data ax = plt.subplot(nrows, ncols, e) dftmp["ValueCol"].loc[dftmp["GroupingCol"]==ep].plot(ax=ax, kind="bar", color=barcolor_lst) #, use_index=False) # horizontal reference line (zero change) zero_line = plt.axhline(0, color='k', linewidth=0.8) # y-axis extent limits ax.set_ylim([50*(-1.1), 50*1.1]) # create right-hand y-axis ax2 = ax.twinx() # y-axis extent limits ax2.set_ylim([200*(-1), 200]) # null y-label placeholder to accommodate fig-level pseudo-supylabel ax2.set_ylabel(" \n") # requires space and newline to work # create supylabel for left-axis supy_left = fig.supylabel("Left-hand y-axis super label", fontweight="bold") #, pad = 7)#, fontdict=fontdict) #fontweight='bold') # use fig-level text as pseudo-supylable for right-axis fig.text(x=0.97, y=0.5, s="Right-hand y-axis super label\n\n", size=13, fontweight='bold', rotation=270, ha='center', va='center') # create super-label for x-axis supx = fig.supxlabel("Bottom super label", fontweight="bold") In the absence of the fig.text line I tried naming a second supylabel as a different object and the code runs, but doesn't produce the label. supy_right = fig.supylabel("Cumulative net change (m^3)", fontweight="bold", position=(0.9,0.5)) | I have found the suplabels to be a little unreliable in many cases, so I resort to low-level tricks in cases like these: import matplotlib.pyplot as plt fig = plt.figure(figsize=(4, 4)) # dummy axes 1 ax = fig.add_subplot(1, 1, 1) ax.set_xticks([]) ax.set_yticks([]) [ax.spines[side].set_visible(False) for side in ('left', 'top', 'right', 'bottom')] ax.patch.set_visible(False) ax.set_xlabel('x label', labelpad=30) ax.set_ylabel('y label left', labelpad=30) # dummy axes 2 for right ylabel ax = fig.add_subplot(1, 1, 1) ax.set_xticks([]) ax.set_yticks([]) [ax.spines[side].set_visible(False) for side in ('left', 'top', 'right', 'bottom')] ax.patch.set_visible(False) ax.yaxis.set_label_position('right') ax.set_ylabel('y label right', labelpad=30) # actual data axes num_rows = 4 for i in range(num_rows): ax = fig.add_subplot(num_rows, 1, i + 1) ... fig.tight_layout() You need to adjust the labelpad values according to your liking. The rest can be taken care of by fig.tight_layout() (you might need to specify the rect though). EDIT: having re-read your question, have you tried increasing the pad_inches value when calling savefig()? | 5 | 2 |
75,354,617 | 2023-2-5 | https://stackoverflow.com/questions/75354617/pip-install-dotenv-error-1-windows-10-pro | When I do pip install dotenv it says this - `Collecting dotenv Using cached dotenv-0.0.5.tar.gz (2.4 kB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error Γ python setup.py egg_info did not run successfully. β exit code: 1 β°β> [72 lines of output] C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer. warnings.warn( error: subprocess-exited-with-error python setup.py egg_info did not run successfully. exit code: 1 [17 lines of output] Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 14, in <module> File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\__init__.py", line 2, in <module> from setuptools.extension import Extension, Library File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\extension.py", line 5, in <module> from setuptools.dist import _get_unpatched File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\dist.py", line 7, in <module> from setuptools.command.install import install File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\command\__init__.py", line 8, in <module> from setuptools.command import install_scripts File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\setuptools\command\install_scripts.py", line 3, in <module> from pkg_resources import Distribution, PathMetadata, ensure_directory File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-wheel-xv3lcsr9\distribute_009ecda977a04fb699d5559aac28b737\pkg_resources.py", line 1518, in <module> register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. Traceback (most recent call last): File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\installer.py", line 82, in fetch_build_egg subprocess.check_call(cmd) File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 413, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\ANJUTI~1\\AppData\\Local\\Temp\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "C:\Users\Anju Tiwari\AppData\Local\Temp\pip-install-j7w9rs9u\dotenv_0f4daa500bef4242bb24b3d9366608eb\setup.py", line 13, in <module> setup(name='dotenv', File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\__init__.py", line 86, in setup _install_setup_requires(attrs) File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\__init__.py", line 80, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\dist.py", line 875, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\pkg_resources\__init__.py", line 789, in resolve dist = best[req.key] = env.best_match( ^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\pkg_resources\__init__.py", line 1075, in best_match return self.obtain(req, installer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\pkg_resources\__init__.py", line 1087, in obtain return installer(requirement) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\dist.py", line 945, in fetch_build_egg return fetch_build_egg(self, req) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Anju Tiwari\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\installer.py", line 84, in fetch_build_egg raise DistutilsError(str(e)) from e distutils.errors.DistutilsError: Command '['C:\\Users\\Anju Tiwari\\AppData\\Local\\Programs\\Python\\Python311\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\ANJUTI~1\\AppData\\Local\\Temp\\tmpcq62ekpo', '--quiet', 'distribute']' returned non-zero exit status 1. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ Encountered error while generating package metadata. β°β> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details.` I tried doing pip install dotenv but then that error come shown above. I also tried doing pip install -U dotenv but it didn't work and the same error came. Can someone please help me fix this? | pip install python-dotenv worked for me. | 7 | 25 |
75,351,597 | 2023-2-5 | https://stackoverflow.com/questions/75351597/how-can-i-chat-with-chatgpt-using-python | I asked ChatGPT to show me how I could use OpenAi's API to interact with it in my terminal window and it generated code which I modified a little bit in order to do what I wanted. Here is the Python code: import requests with open('../api-key.txt','r') as key: data = key.read().strip() api_key = data model="text-danvinci-003" def chat_with_chatgpt(prompt): res = requests.post(f"https://api.openai.com/v1/engines/{model}/jobs", headers = { "Content-Type":"application/json", "Authorization":f"Bearer {api_key}" }, json={ "prompt":prompt, "max_tokens":100 }).json() print(res) return res.choices[0].text while True: prompt = input('Me: ') response = chat_with_chatgpt(prompt) print(f'ChatGPT: {response}') But when I ran this code I got some messages that said: Me: hello {'error': {'message': 'That model does not exist', 'type': 'invalid_request_error', 'param': None, 'code': None}} Traceback (most recent call last): File "/data/data/com.termux/files/home/python/main.py", line 23, in <module> response = chat_with_chatgpt(prompt) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/data/com.termux/files/home/python/main.py", line 19, in chat_with_chatgpt return res.choices[0].text ^^^^^^^^^^^ AttributeError: 'dict' object has no attribute 'choices' The response I got is an error dict. For some reason, I am not able to install OpenAi via pip install openai on my system, so this is the only option I have. | I believe the engines API endpoints were deprecated in favour of models. For more info read here: https://help.openai.com/en/articles/6283125-what-happened-to-engines You will want to look at the completions endpoint instead https://platform.openai.com/docs/api-reference/completions Here's an example of the URL structure and headers required, using cURL. curl https://api.openai.com/v1/completions \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer YOUR_API_KEY' \ -d '{ "model": "text-davinci-003", "prompt": "Say this is a test", "max_tokens": 7, "temperature": 0 }' The general structure of the code should be fine, you'll just want to swap out the endpoint in use. def chat_with_chatgpt(prompt): res = requests.post(f"https://api.openai.com/v1/completions", headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" }, json={ "model": model "prompt": prompt, "max_tokens": 100 }).json() return res.choices[0].text | 4 | 4 |
75,350,395 | 2023-2-5 | https://stackoverflow.com/questions/75350395/how-should-we-manage-datetime-fields-in-sqlmodel-in-python | Let's say I want to create an API with a Hero SQLModel, below are minimum viable codes illustrating this: from typing import Optional from sqlmodel import Field, Relationship, SQLModel from datetime import datetime from sqlalchemy import Column, TIMESTAMP, text class HeroBase(SQLModel): # essential fields name: str = Field(index=True) secret_name: str age: Optional[int] = Field(default=None, index=True) created_datetime: datetime = Field(sa_column=Column(TIMESTAMP(timezone=True), nullable=False, server_default=text("now()"))) updated_datetime: datetime = Field(sa_column=Column(TIMESTAMP(timezone=True), nullable=False, server_onupdate=text("now()"))) team_id: Optional[int] = Field(default=None, foreign_key="team.id") class Hero(HeroBase, table=True): # essential fields + uniq identifier + relationships id: Optional[int] = Field(default=None, primary_key=True) team: Optional["Team"] = Relationship(back_populates="heroes") class HeroRead(HeroBase): # uniq identifier id: int class HeroCreate(HeroBase): # same and Base pass class HeroUpdate(SQLModel): # all essential fields without datetimes name: Optional[str] = None secret_name: Optional[str] = None age: Optional[int] = None team_id: Optional[int] = None class HeroReadWithTeam(HeroRead): team: Optional["TeamRead"] = None My question is, how should the SQLModel for HeroUpdate be like? Does it include the create_datetime and update_datetime fields? How do I delegate the responsibility of creating these fields to the database instead of using the app to do so? | Does [the HeroUpdate model] include the create_datetime and update_datetime fields? Well, you tell me! Should the API endpoint for updating an entry in the hero table be able to change the value in the create_datetime and update_datetime columns? I would say, obviously not. Fields like that serve as metadata about entries in the DB and are typically only ever written to by the DB. It is strange enough that you include them in the model for creating new entries in the table. Why would you let the API set the value of when an entry in the DB was created/updated? One could even argue that those fields should not be visible to "the outside" at all. But I suppose you could include them in HeroRead for example, if you wanted to present that metadata to the consumers of the API. How do I delegate the responsibility of creating [the create_datetime and update_datetime] fields to the database instead of using the app to do so? You already have delegated it. You (correctly) defined a server_default and server_onupdate values for the Column instances that represent those fields. That means the DBMS will set their values accordingly, unless they are passed explicitly in a SQL statement. 
What I would suggest is the following re-arrangement of your models: from datetime import datetime from typing import Optional from sqlmodel import Column, Field, SQLModel, TIMESTAMP, text class HeroBase(SQLModel): name: str = Field(index=True) secret_name: str age: Optional[int] = Field(default=None, index=True) class Hero(HeroBase, table=True): id: Optional[int] = Field(default=None, primary_key=True) created_datetime: Optional[datetime] = Field(sa_column=Column( TIMESTAMP(timezone=True), nullable=False, server_default=text("CURRENT_TIMESTAMP"), )) updated_datetime: Optional[datetime] = Field(sa_column=Column( TIMESTAMP(timezone=True), nullable=False, server_default=text("CURRENT_TIMESTAMP"), server_onupdate=text("CURRENT_TIMESTAMP"), )) class HeroRead(HeroBase): id: int class HeroCreate(HeroBase): pass class HeroUpdate(SQLModel): name: Optional[str] = None secret_name: Optional[str] = None age: Optional[int] = None (I use CURRENT_TIMESTAMP to test with SQLite.) Demo: from sqlmodel import Session, create_engine, select # Initialize database & session: engine = create_engine("sqlite:///", echo=True) SQLModel.metadata.create_all(engine) session = Session(engine) # Create: hero_create = HeroCreate(name="foo", secret_name="bar") session.add(Hero.from_orm(hero_create)) session.commit() # Query (SELECT): statement = select(Hero).filter(Hero.name == "foo") hero = session.execute(statement).scalar() # Read (Response): hero_read = HeroRead.from_orm(hero) print(hero_read.json(indent=4)) # Update (comprehensive as in the docs, although we change only one field): hero_update = HeroUpdate(secret_name="baz") hero_update_data = hero_update.dict(exclude_unset=True) for key, value in hero_update_data.items(): setattr(hero, key, value) session.add(hero) session.commit() # Read again: hero_read = HeroRead.from_orm(hero) print(hero_read.json(indent=4)) Here is what the CREATE statement looks like: CREATE TABLE hero ( created_datetime TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL, updated_datetime TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL, name VARCHAR NOT NULL, secret_name VARCHAR NOT NULL, age INTEGER, id INTEGER NOT NULL, PRIMARY KEY (id) ) Here is the output of the the two HeroRead instances: { "name": "foo", "secret_name": "bar", "age": null, "id": 1 } { "name": "foo", "secret_name": "baz", "age": null, "id": 1 } I did not include the timestamp columns in the read model, but SQLite does not honor ON UPDATE anyway. | 5 | 5 |
75,333,446 | 2023-2-3 | https://stackoverflow.com/questions/75333446/fastapi-create-a-generic-response-model-that-would-suit-requirements | I've been working with FastAPI for some time, it's a great framework. However real life scenarios can be surprising, sometimes a non-standard approach is necessary. There's a one case I'd like to ask your help with. There's a strange external requirement that a model response should be formatted as stated in example: Desired behavior: GET /object/1 {status: βsuccessβ, data: {object: {id:β1β, category: βtestβ β¦}}} GET /objects {status: βsuccessβ, data: {objects: [...]}}} Current behavior: GET /object/1 would respond: {id: 1,field1:"content",... } GET /objects/ would send a List of Object e.g.,: { [ {id: 1,field1:"content",... }, {id: 1,field1:"content",... }, ... ] } You can substitute 'object' by any class, it's just for description purposes. How to write a generic response model that will suit those reqs? I know I can produce response model that would contain status:str and (depending on class) data structure e.g ticket:Ticket or tickets:List[Ticket]. The point is there's a number of classes so I hope there's a more pythonic way to do it. Thanks for help. | Generic model with static field name A generic model is a model where one field (or multiple) are annotated with a type variable. Thus the type of that field is unspecified by default and must be specified explicitly during subclassing and/or initialization. But that field is still just an attribute and an attribute must have a name. A fixed name. To go from your example, say that is your model: { "status": "...", "data": { "object": {...} # type variable } } Then we could define that model as generic in terms of the type of its object attribute. This can be done using Pydantic's GenericModel like this: from typing import Generic, TypeVar from pydantic import BaseModel from pydantic.generics import GenericModel M = TypeVar("M", bound=BaseModel) class GenericSingleObject(GenericModel, Generic[M]): object: M class GenericMultipleObjects(GenericModel, Generic[M]): objects: list[M] class BaseGenericResponse(GenericModel): status: str class GenericSingleResponse(BaseGenericResponse, Generic[M]): data: GenericSingleObject[M] class GenericMultipleResponse(BaseGenericResponse, Generic[M]): data: GenericMultipleObjects[M] class Foo(BaseModel): a: str b: int class Bar(BaseModel): x: float As you can see, GenericSingleObject reflects the generic type we want for data, whereas GenericSingleResponse is generic in terms of the type parameter M of GenericSingleObject, which is the type of its data attribute. If we now want to use one of our generic response models, we would need to specify it with a type argument (a concrete model) first, e.g. GenericSingleResponse[Foo]. FastAPI deals with this just fine and can generate the correct OpenAPI documentation. 
The JSON schema for GenericSingleResponse[Foo] looks like this: { "title": "GenericSingleResponse[Foo]", "type": "object", "properties": { "status": { "title": "Status", "type": "string" }, "data": { "$ref": "#/definitions/GenericSingleObject_Foo_" } }, "required": [ "status", "data" ], "definitions": { "Foo": { "title": "Foo", "type": "object", "properties": { "a": { "title": "A", "type": "string" }, "b": { "title": "B", "type": "integer" } }, "required": [ "a", "b" ] }, "GenericSingleObject_Foo_": { "title": "GenericSingleObject[Foo]", "type": "object", "properties": { "object": { "$ref": "#/definitions/Foo" } }, "required": [ "object" ] } } } To demonstrate it with FastAPI: from fastapi import FastAPI app = FastAPI() @app.get("/foo/", response_model=GenericSingleResponse[Foo]) async def get_one_foo() -> dict[str, object]: return {"status": "foo", "data": {"object": {"a": "spam", "b": 123}}} Sending a request to that route returns the following: { "status": "foo", "data": { "object": { "a": "spam", "b": 123 } } } Dynamically created model If you actually want the attribute name to also be different every time, that is obviously no longer possible with static type annotations. In that case we would have to resort to actually creating the model type dynamically via pydantic.create_model. In that case there is really no point in genericity anymore because type safety is out of the window anyway, at least for the data model. We still have the option to define a GenericResponse model, which we can specify via our dynamically generated models, but this will make every static type checker mad, since we'll be using variables for types. Still, it might make for otherwise concise code. We just need to define an algorithm for deriving the model parameters: from typing import Any, Generic, Optional, TypeVar from pydantic import BaseModel, create_model from pydantic.generics import GenericModel M = TypeVar("M", bound=BaseModel) def create_data_model( model: type[BaseModel], plural: bool = False, custom_plural_name: Optional[str] = None, **kwargs: Any, ) -> type[BaseModel]: data_field_name = model.__name__.lower() if plural: model_name = f"Multiple{model.__name__}" if custom_plural_name: data_field_name = custom_plural_name else: data_field_name += "s" kwargs[data_field_name] = (list[model], ...) # type: ignore[valid-type] else: model_name = f"Single{model.__name__}" kwargs[data_field_name] = (model, ...) return create_model(model_name, **kwargs) class GenericResponse(GenericModel, Generic[M]): status: str data: M Using the same Foo and Bar examples as before: class Foo(BaseModel): a: str b: int class Bar(BaseModel): x: float SingleFoo = create_data_model(Foo) MultipleBar = create_data_model(Bar, plural=True) This also works as expected with FastAPI including the automatically generated schemas/documentations: from fastapi import FastAPI app = FastAPI() @app.get("/foo/", response_model=GenericResponse[SingleFoo]) # type: ignore[valid-type] async def get_one_foo() -> dict[str, object]: return {"status": "foo", "data": {"foo": {"a": "spam", "b": 123}}} @app.get("/bars/", response_model=GenericResponse[MultipleBar]) # type: ignore[valid-type] async def get_multiple_bars() -> dict[str, object]: return {"status": "bars", "data": {"bars": [{"x": 3.14}, {"x": 0}]}} Output is essentially the same as with the first approach. You'll have to see, which one works better for you. I find the second option very strange because of the dynamic key/field name. But maybe that is what you need for some reason. | 5 | 14 |
75,331,219 | 2023-2-3 | https://stackoverflow.com/questions/75331219/recursion-question-in-python-no-conditionals-or-loops | I am trying to figure out how to print the word "hello" 121 times in python. I need to use a function without conditionals or loops, no new lines, and no multiplying the string by an integer. I thinking something like: print_hello(): print('hello') print_hello() print_hello() but I can't seem to find a way to limit the recursive output without conditionals. Any help would be greatly appreciated. Update Here are all of the constraints to the challenge, it's possible recursion isn't the right approach. max 20 lines of code no inputs to a function. no imports or modules no if statements no loops (for or while) no new line characters (\n) can't do something like: print("hello\n"*121) no semi colons or tuples can't use exec functions | (Note: I've edited the message. So see better solution at the end) If we don't try to find a silver bullet, using a trick that would have been forgotten in the restrictions (global variables to bypass parameters interdiction, and/try to bypass if interdiction, multiplying list to bypass multiplication of string interdiction, ...), as I understand the question, it is about finding a correct splitting in functions/subfunctions to have 20 lines of code (no semicolon) For example, if we had to print 32 hello, we could, as geeks used to reason in power of 2, def h2(): print("hello") print("hello") def h4(): h2() h2() def h8(): h4() h4() def h16(): h8() h8() h16() h16() Which are 14 lines. But the def parts complicates things, and complicates what is optimal. For example, here, since we don't use h2 elsewhere that in h4, it would be shorter to directly print hello 4 times in h4. Likewise, for h16 def h4(): print("hello") print("hello") print("hello") print("hello") def h16(): h4() h4() h4() h4() h16() h16() which are only 12 lines. Now, here, number is 121. That is a specific number. Which is even the reason why I believe that this is the kind of expected solution: it is not easy to decide what intermediate function we need to create. That would be an interesting problem per se: create a code, that optimize the number of lines needed, using this kind of subfunctions encapsulation. But one combination that fit in 20 lines is def h3(): print("hello") print("hello") print("hello") def h7(): h3() h3() print("hello") def h15(): h7() h7() print("hello") def h60(): h15() h15() h15() h15() h60() h60() print("hello") I know it is way less smart (and, even, "smart ass") than all the other solutions that were proposed. But I am really convinced that this is the kind of expected solution. It may seem too simple and naive. But it is not that an easy problem to decide which h?? to write (well, it is not that hard if the constraint is just "fit in 20 lines". But I would have a harder time if the constraint was "use the smallest number of lines possible") Edit I couldn't resist, so I wrote a code that optimize this def size(n, l): ml=max(k for k in l if k<=n) nm=n//ml r=n-nm*ml if r==0: return nm else: return nm+size(r, l) def sizefn(l): return sum(size(k, [x for x in l if x<k])+1 for k in l if k>1) def totsize(n,l): return size(n, l) + sizefn(l) rec=1000 def compute(l, k): global rec if k>120: return False sfn =sizefn(l) if sfn+2>=rec: return False f = size(121, l) if sfn+f<rec: rec=sfn+f print(f'{sfn+f} ({sfn}+{f}) :', l) compute(l+[k], k+1) compute(l, k+1) What it does is just try all possible combinations of intermediate functions. 
So, that is, theoretically 2¹²⁰ combinations (all intermediate function h? may exist or not), and count how many line of code that would be, and keep the best. Except that I do it with Branch&Bound, allowing to avoid examination of whole subsets of the set of all combination. Result is 121 (0+121) : [1] 64 (3+61) : [1, 2] 47 (6+41) : [1, 2, 3] 40 (9+31) : [1, 2, 3, 4] 37 (12+25) : [1, 2, 3, 4, 5] 36 (15+21) : [1, 2, 3, 4, 5, 6] 35 (31+4) : [1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 39] 34 (28+6) : [1, 2, 3, 4, 5, 6, 7, 8, 9, 23] 33 (28+5) : [1, 2, 3, 4, 5, 6, 7, 8, 10, 30] 32 (28+4) : [1, 2, 3, 4, 5, 6, 7, 8, 13, 39] 31 (25+6) : [1, 2, 3, 4, 5, 6, 7, 8, 23] 30 (25+5) : [1, 2, 3, 4, 5, 6, 7, 10, 30] 29 (25+4) : [1, 2, 3, 4, 5, 6, 7, 13, 39] 28 (22+6) : [1, 2, 3, 4, 5, 6, 8, 24] 27 (22+5) : [1, 2, 3, 4, 5, 6, 10, 30] 26 (20+6) : [1, 2, 3, 4, 5, 6, 23] 25 (19+6) : [1, 2, 3, 4, 5, 8, 24] 24 (19+5) : [1, 2, 3, 4, 5, 10, 30] 23 (17+6) : [1, 2, 3, 4, 6, 24] 22 (16+6) : [1, 2, 3, 4, 8, 24] 21 (16+5) : [1, 2, 3, 5, 10, 30] 20 (14+6) : [1, 2, 3, 6, 24] 19 (13+6) : [1, 2, 4, 8, 24] 18 (12+6) : [1, 2, 6, 24] And it ends there. Meaning that there is no better solution that the one with h2, h6, and h24 (h1 is just print(hello)) For a total of 18 lines Which gives def h2(): print("hello") print("hello") def h6(): h2() h2() h2() def h24(): h6() h6() h6() h6() h24() h24() h24() h24() h24() print("hello") | 3 | 4 |
75,318,736 | 2023-2-2 | https://stackoverflow.com/questions/75318736/how-do-you-analyze-the-runtime-complexity-of-a-recursive-function-with-both-expo | I'm not sure how the two recursive calls and the floor division of the following function interact regarding time complexity and big o notation. def recurse(n: int, k: int) -> int: if n <= 0 or k <= 0: return 1 return recurse(n//2, k) + recurse(n, k//2) I see how O(2^(nk)) could serve as an upper bound because the k portion of (n//2,k) and the n portion of (n,k//2) dominate the growth rates. However I could also see something along the lines of (nlog(n)*mlog(m)) working as well and I'm not sure what to settle on. Edit: | Here's an exact solution. The first thing to note is that, ignoring cases where n < 0 or k < 0, we only care about how many times we need to divide n or k by two (using floor division) before they reach zero. For example, if n = 7, then we have 7, 3, 1, 0, which is three divisions by two. The number of times a non-negative integer v can be divided by two (using floor division) before reaching zero is ceil(log2(v+1)). Let a = ceil(log2(n+1)) and b = ceil(log2(k+1)). Then each recursive call reduces either a or b by one, and continues until either a or b reaches zero. Now consider the call tree. This is a binary tree, but not a balanced tree (since some paths are longer than others). A call is a leaf call if either n or k is zero. Further, each leaf call corresponds to a single unique path through the call tree. Each recursive call reduces either a or b by one, so each leaf call corresponds to a unique monotonic path from (a, b) to either (0, x) or (x, 0), since the recursive calls end when either a or b becomes zero. We can model this by extending each path to (0, 0). Then we just need to count the number of paths from (a, b) to (0, 0). This is just (a+b)!/(a!*b!). This is a well-known result that isn't difficult to derive. It can be expressed as a multicombination (i.e., a combination in which multiple instances of a given value are allowed). So the number of leaf calls is (a+b)!/(a!*b!). The number of non-leaf calls is one less than this. So we have: leaf calls: (a+b)!/(a!*b!) non-leaf calls: (a+b)!/(a!*b!) - 1 total calls: 2*(a+b)!/(a!*b!) - 1 The exact time complexity of recurse is O((a+b)!/(a!*b!)), where a = ceil(log2(n+1)) and b = ceil(log2(k+1)). The return value of recurse is the number of leaf calls. The function complexity below behaves identically: from math import factorial def complexity(n: int, k: int) -> int: if n <= 0 or k <= 0: return 1 a = ceil_log2(n+1) b = ceil_log2(k+1) return factorial(a + b) // (factorial(a) * factorial(b)) where ceil_log2 is defined as: def ceil_log2(x: int) -> int: if x <= 0: raise ValueError("log2 of zero or negative integer") return (x - 1).bit_length() | 4 | 3 |
75,320,937 | 2023-2-2 | https://stackoverflow.com/questions/75320937/why-doesnt-float-throw-an-exception-when-the-argument-is-outside-the-range-of | I'm using Python 3.10 and I have: a = int(2 ** 1023 * (1 + (1 - 2 ** -52))) Now, the value of a is the biggest integer value in double precision floating point format. So, I'm expecting float(a + 1) to give an OverflowError error, as noted in here: If the argument is outside the range of a Python float, an OverflowError will be raised. But, to my surprise, it doesn't throw the error, instead, it happily returns: 1.7976931348623157e+308 which seems like sys.float_info.max. I also do float(a + 2), float(a + 3), float(a + 4), etc but it still returns 1.7976931348623157e+308. Only until I do float(a + a) then it throws the expected exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> OverflowError: int too large to convert to float It seems like the smallest number that fails is a + 2 ** 970, as noted in this comment. So, what could be the reason for this? | As was commented, 2**970 is the smallest addition that rounds up. This makes sense as follows: 1023 =exp - 52 =n, where the "smallest" field is 2**-n ----- 971 L G RS so, the largest is: (2**1023)*1.111...111(1)00 123...012 3 45 // bit #s ^ 555 5 55 2**1022 ^ ^ ^ ^ 2**-971 ^ ^ ^ 2**970 ...so adding 2**970 will round up, per the LGRS=1100 values. These are the Least significant, Guard, Round, and Sticky bits from the IEEE 754 spec. This can be demo'd in python as follows: >>> import sys >>> print("%100.10f" % (sys.float_info.max)) 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.0000000000 >>> # note that even though this is even in base 10, the significand is odd in binary since the Least significant bit is 1 ... >>> print("%f" % (sys.float_info.max+2**970)) inf >>> print("%100.10f" % (sys.float_info.max+2**969)) 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.0000000000 >>> | 4 | 3 |
75,312,706 | 2023-2-1 | https://stackoverflow.com/questions/75312706/find-all-combinations-of-positive-integers-in-increasing-order-that-adds-up-to-a | How to write a function that takes n (where n > 0) and returns the list of all combinations of positive integers that sum to n? This is a common question on the web. And there are different answers provided such as 1, 2 and 3. However, in the answers provided, they use two functions to solve the problem. I want to do it with only one single function. Therefore, I coded as follows: def all_combinations_sum_to_n(n): from itertools import combinations_with_replacement combinations_list = [] if n < 1: return combinations_list l = [i for i in range(1, n + 1)] for i in range(1, n + 1): combinations_list = combinations_list + (list(combinations_with_replacement(l, i))) result = [list(i) for i in combinations_list if sum(i) == n] result.sort() return result If I pass 20 to my function which is all_combinations_sum_to_n(20), the OS of my machine kills the process as it is very costly. I think the space complexity of my function is O(n*n!). How do I modify my code so that I don't have to create any other function and yet my single function has an improved time or space complexity? I don't think it is possible by using itertools.combinations_with_replacement. UPDATE All answers provided by Barmar, ShadowRanger and pts are great. As I was looking for an efficient answer in terms of both memory and runtime, I used https://perfpy.com and selected python 3.8 to compare the answers. I used six different values of n and in all cases, ShadowRanger's solution had the highest score. Therefore, I chose ShadowRanger's answer as the best one. The scores were as follows: | You've got two main problems, one causing your current problem (out of memory) and one that will continue the problem even if you solve that one: You're accumulating all combinations before filtering, so your memory requirements are immense. You don't even need a single list if your function can be a generator (that is iterated to produce a value at a time) rather than returning a fully realized list, and even if you must return a list, you needn't generate such huge intermediate lists. You might think you need at least one list for sorting purposes, but combinations_with_replacement is already guaranteed to produce a predictable order based on the input ordering, and since range is ordered, the values produced will be ordered as well. Even if you solve the memory problem, the computational cost of just generating that many combinations is prohibitive, due to poor scaling; for the memory, but not CPU, optimized version of the code below, it handles an input of 11 in 0.2 seconds, 12 in ~2.6 seconds, and 13 in ~11 seconds; at that scaling rate, 20 is going to approach heat death of the universe timeframes. Barmar's answer is one solution to both problems, but it's still doing work eagerly and storing the complete results when the complete work might not even be needed, and it involves sorting and deduplication, which aren't strictly necessary if you're sufficiently careful about how you generate the results. This answer will fix both problems, first the memory issue, then the speed issue, without ever needing memory requirements above linear in n. Solving the memory issue alone actually makes for simpler code, that still uses your same basic strategy, but without consuming all your RAM needlessly. 
The trick is to write a generator function, that avoids storing more than one results at a time (the caller can listify if they know the output is small enough and they actually need it all at once, but typically, just looping over the generator is better): from collections import deque # Cheap way to just print the last few elements from itertools import combinations_with_replacement # Imports should be at top of file, # not repeated per call def all_combinations_sum_to_n(n): for i in range(1, n + 1): # For each possible length of combinations... # For each combination of that length... for comb in combinations_with_replacement(range(1, n + 1), i): if sum(comb) == n: # If the sum matches... yield list(comb) # yield the combination # 13 is the largest input that will complete in some vaguely reasonable time, ~10 seconds on TIO print(*deque(all_combinations_sum_to_n(13), maxlen=10), sep='\n') Try it online! Again, to be clear, this will not complete in any reasonable amount of time for an input of 20; there's just too much redundant work being done, and the growth pattern for combinations scales with the factorial of the input; you must be more clever algorithmically. But for less intense problems, this pattern is simpler, faster, and dramatically more memory-efficient than a solution that builds up enormous lists and concatenates them. To solve in a reasonable period of time, using the same generator-based approach (but without itertools, which isn't practical here because you can't tell it to skip over combinations when you know they're useless), here's an adaptation of Barmar's answer that requires no sorting, produces no duplicates, and as a result, can produce the solution set in less than a 20th of a second, even for n = 20: def all_combinations_sum_to_n(n, *, max_seen=1): for i in range(max_seen, n // 2 + 1): for subtup in all_combinations_sum_to_n(n - i, max_seen=i): yield (i,) + subtup yield (n,) for x in all_combinations_sum_to_n(20): print(x) Try it online! That not only produces the individual tuples with internally sorted order (1 is always before 2), but produces the sequence of tuples in sorted order (so looping over sorted(all_combinations_sum_to_n(20)) is equivalent to looping over all_combinations_sum_to_n(20) directly, the latter just avoids the temporary list and a no-op sorting pass). | 4 | 3 |
75,323,859 | 2023-2-2 | https://stackoverflow.com/questions/75323859/choose-a-random-element-in-each-row-of-a-2d-array-but-only-consider-the-elements | I have a 2D array data and a boolean array mask of shapes (M,N). I need to randomly pick an element in each row of data. However, the element I picked should be true in the given mask. Is there a way to do this without looping over every row? In every row, there are at least 2 elements for which mask is true. Minimum Working Example: data = numpy.arange(8).reshape((2,4)) mask = numpy.array([[True, True, True, True], [True, True, False, False]]) selected_data = numpy.random.choice(data, mask, num_elements=1, axis=1) The 3rd line above doesn't work. I want something like that. I've listed below some valid solutions. selected_data = [0,4] selected_data = [1,5] selected_data = [2,5] selected_data = [3,4] | It is easier to work with the indices of the mask. We can get the indices of the True values from the mask and stack them together to create 2D coordinates array. All of the values inside the indices2d are possible to sample. Then we can shuffle the array and get the first index of the unique row values. Since the array is shuffled, it is random choice. Then we can match the selected 2D indices to the original data. See below; import numpy data = numpy.arange(8).reshape((2,4)) mask = numpy.array([[True, True, True, True], [True, True, False, False]]) for _ in range(20): indices2d = numpy.dstack(numpy.where(mask)).squeeze().astype(numpy.int32) numpy.random.shuffle(indices2d) randomElements = indices2d[numpy.unique(indices2d[:, 0], return_index=True)[1]] print(data[randomElements[:,0],randomElements[:,1]]) Output [0 5] [1 4] [0 5] [1 4] [1 5] [0 5] [1 4] [1 5] [3 4] [2 5] [2 4] [3 5] [2 4] [0 4] [0 4] [0 4] [0 5] [3 5] [3 5] [1 4] 12.7 ms ± 80.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) | 3 | 2 |
75,322,357 | 2023-2-2 | https://stackoverflow.com/questions/75322357/how-to-run-unittest-tests-from-multiple-directories | I have 2 directories containing tests: project/ | |-- test/ | | | |-- __init__.py | |-- test_1.py | |-- my_submodule/ | |-- test/ | |-- __init__.py |-- test_2.py How can I run all tests? python -m unittest discover . only runs test_1.py and obviously python -m unittest discover my_submodule only runs test_2.py | unittest currently sees project/my_submodule as an arbitrary directory to ignore, not a package to import. Just add project/my_submodule/__init__.py to change that. | 3 | 6 |
75,323,747 | 2023-2-2 | https://stackoverflow.com/questions/75323747/polars-looping-through-the-rows-in-a-dataset | I am trying to loop through a Polars recordset using the following code: import polars as pl df = pl.DataFrame({ "start_date": ["2020-01-02", "2020-01-03", "2020-01-04"], "Name": ["John", "Joe", "James"] }) for row in df.rows(): print(row) ('2020-01-02', 'John') ('2020-01-03', 'Joe') ('2020-01-04', 'James') Is there a way to specifically reference 'Name' using the named column as opposed to the index? In Pandas this would look something like: import pandas as pd df = pd.DataFrame({ "start_date": ["2020-01-02", "2020-01-03", "2020-01-04"], "Name": ["John", "Joe", "James"] }) for index, row in df.iterrows(): df['Name'][index] 'John' 'Joe' 'James' | You can specify that you want the rows to be named for row in mydf.rows(named=True): print(row) It will give you a dict: {'start_date': '2020-01-02', 'Name': 'John'} {'start_date': '2020-01-03', 'Name': 'Joe'} {'start_date': '2020-01-04', 'Name': 'James'} You can then call row['Name'] Note that: previous versions returned namedtuple instead of dict. it's less memory intensive to use iter_rows overall it's not recommended to iterate through the data this way Row iteration is not optimal as the underlying data is stored in columnar form; where possible, prefer export via one of the dedicated export/output methods. | 19 | 28 |
75,315,117 | 2023-2-1 | https://stackoverflow.com/questions/75315117/attributeerror-connection-object-has-no-attribute-connect-when-use-df-to-sq | I am trying to store data retrieved from a website into MySQL database via a pandas data frame. However, when I make the function call df.to_sql(), the compiler give me an error message saying: AttributeError: 'Connection' object has no attribute 'connect'. I tested it couple times and I am sure that there is neither connection issue nor table existence issue involved. Is there anything wrong with the code itself? The code I am using is the following: from sqlalchemy import create_engine, text import pandas as pd import mysql.connector config = configparser.ConfigParser() config.read('db_init.INI') password = config.get("section_a", "Password") host = config.get("section_a", "Port") database = config.get("section_a", "Database") engine = create_engine('mysql+mysqlconnector://root:{0}@{1}/{2}'. format(password, host, database), pool_recycle=1, pool_timeout=57600, future=True) conn = engine.connect() df.to_sql("tableName", conn, if_exists='append', index = False) The full stack trace looks like this: Traceback (most recent call last): File "/Users/chent/Desktop/PFSDataParser/src/FetchPFS.py", line 304, in <module> main() File "/Users/chent/Desktop/PFSDataParser/src/FetchPFS.py", line 287, in main insert_to_db(experimentDataSet, expName) File "/Users/chent/Desktop/PFSDataParser/src/FetchPFS.py", line 89, in insert_to_db df.to_sql(tableName, conn, if_exists='append', index = False) File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/core/generic.py", line 2951, in to_sql return sql.to_sql( File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/io/sql.py", line 698, in to_sql return pandas_sql.to_sql( File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/io/sql.py", line 1754, in to_sql self.check_case_sensitive(name=name, schema=schema) File "/Users/chent/opt/anaconda3/lib/python3.9/site-packages/pandas/io/sql.py", line 1647, in check_case_sensitive with self.connectable.connect() as conn: AttributeError: 'Connection' object has no attribute 'connect' The version of pandas I am using is 1.4.4, sqlalchemy is 2.0 I tried to make a several execution of sql query, for example, CREATE TABLE xxx IF NOT EXISTS or SELECT * FROM, all of which have given me the result I wish to see. | I just run into this problem too. Pandas 1.x doesn't support SqlAlchemy 2 yet. As the relevant Github issue shows the next release of Pandas will require sqlalchemy<2.0. For now you have to downgrade to SqlAlchemy 1.4.x with eg : pip install --upgrade SQLAlchemy==1.4.46 The problem is caused by an incompatibility between the Pandas version and SqlAlchemy 2.0. SqlAlchemy 2.0 was released on January 28, 2023 while even the latest Pandas version at the time, 1.5.3 was released on January 19. Pandas does support sqlalchemy.engine.Connection. From the docs : cons : qlalchemy.engine.(Engine or Connection) or sqlite3.Connection Using SQLAlchemy makes it possible to use any DB supported by that library. Legacy support is provided for sqlite3.Connection objects. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable See here. I downgraded to SqlAlchemy 1.4.46 and to_sql stopped complaining. If you use pip you can downgrade using : pip install --upgrade SQLAlchemy==1.4.46 or pip install SQLAlchemy pip install SQLAlchemy==1.4.46 | 15 | 19 |
75,318,798 | 2023-2-2 | https://stackoverflow.com/questions/75318798/in-a-2d-numpy-array-find-max-streak-of-consecutive-1s | I have a 2d numpy array like so. I want to find the maximum consecutive streak of 1's for every row. a = np.array([[1, 1, 1, 1, 1], [1, 0, 1, 0, 1], [1, 1, 0, 1, 0], [0, 0, 0, 0, 0], [1, 1, 1, 0, 1], [1, 0, 0, 0, 0], [0, 1, 1, 0, 0], [1, 0, 1, 1, 0], ] ) Desired Output: [5, 1, 2, 0, 3, 1, 2, 2] I have found the solution to above for a 1D array: a = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0]) d = np.diff(np.concatenate(([0], a, [0]))) np.max(np.flatnonzero(d == -1) - np.flatnonzero(d == 1)) > 4 On similar lines, I wrote the following but it doesn't work. d = np.diff(np.column_stack(([0] * a.shape[0], a, [0] * a.shape[0]))) np.max(np.flatnonzero(d == -1) - np.flatnonzero(d == 1)) | The 2D equivalent of you current code would be using pad, diff, where and maximum.reduceat: # pad with a column of 0s on left/right # and get the diff on axis=1 d = np.diff(np.pad(a, ((0,0), (1,1)), constant_values=0), axis=1) # get row/col indices of -1 row, col = np.where(d==-1) # get groups of rows val, idx = np.unique(row, return_index=True) # subtract col indices of -1/1 to get lengths # use np.maximum.reduceat to get max length per group of rows out = np.zeros(a.shape[0], dtype=int) out[val] = np.maximum.reduceat(col-np.where(d==1)[1], idx) Output: array([5, 1, 2, 0, 3, 1, 2, 2]) Intermediates: np.pad(a, ((0,0), (1,1)), constant_values=0) array([[0, 1, 1, 1, 1, 1, 0], [0, 1, 0, 1, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0], [0, 1, 0, 1, 1, 0, 0]]) np.diff(np.pad(a, ((0,0), (1,1)), constant_values=0), axis=1) array([[ 1, 0, 0, 0, 0, -1], [ 1, -1, 1, -1, 1, -1], [ 1, 0, -1, 1, -1, 0], [ 0, 0, 0, 0, 0, 0], [ 1, 0, 0, -1, 1, -1], [ 1, -1, 0, 0, 0, 0], [ 0, 1, 0, -1, 0, 0], [ 1, -1, 1, 0, -1, 0]]) np.where(d==-1) (array([0, 1, 1, 1, 2, 2, 4, 4, 5, 6, 7, 7]), array([5, 1, 3, 5, 2, 4, 3, 5, 1, 3, 1, 4])) col-np.where(d==1)[1] array([5, 1, 1, 1, 2, 1, 3, 1, 1, 2, 1, 2]) np.unique(row, return_index=True) (array([0, 1, 2, 4, 5, 6, 7]), array([ 0, 1, 4, 6, 8, 9, 10])) out = np.zeros(a.shape[0], dtype=int) array([0, 0, 0, 0, 0, 0, 0, 0]) out[val] = np.maximum.reduceat(col-np.where(d==1)[1], idx) array([5, 1, 2, 0, 3, 1, 2, 2]) | 5 | 6 |
75,316,741 | 2023-2-1 | https://stackoverflow.com/questions/75316741/attributeerror-engine-object-has-no-attribute-execute-when-trying-to-run-sq | I have the following line of code that keeps giving me an error that Engine object has no object execute. I think I have everything right but no idea what keeps happening. It seemed others had this issue and restarting their notebooks worked. I'm using Pycharm and have restarted that without any resolution. Any help is greatly appreciated! import pandas as pd from sqlalchemy import create_engine, text import sqlalchemy import pymysql masterlist = pd.read_excel('Masterlist.xlsx') user = 'root' pw = 'test!*' db = 'hcftest' engine = create_engine("mysql+pymysql://{user}:{pw}@localhost:3306/{db}" .format(user=user, pw=pw, db=db)) results = engine.execute(text("SELECT * FROM companyname;")) for row in results: print(row) | There was a change from 1.4 to 2.0. The above code will run fine with sqlalchemy version 1.4 I believe. setting SQLALCHEMY_WARN_20=1 python and running the above code reveals this warning: <stdin>:1: RemovedIn20Warning: The Engine.execute() method is considered legacy as of the 1.x series of SQLAlchemy and will be removed in 2.0. All statement execution in SQLAlchemy 2.0 is performed by the Connection.execute() method of Connection, or in the ORM by the Session.execute() method of Session. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9) So the correct way to do the code now is: with engine.connect() as conn: result = conn.execute(stmt) source here describing the behavior in 1.4 and here describing the behavior in 2.0 | 25 | 52 |
75,316,207 | 2023-2-1 | https://stackoverflow.com/questions/75316207/python-equivalent-to-as-type-assertion-in-typescript | In TypeScript you can override type inferences with the as keyword const canvas = document.querySelector('canvas') as HTMLCanvasElement; Are there similar techniques in Python3.x typing without involving runtime casting? I want to do something like the following: class SpecificDict(TypedDict): foo: str bar: str res = request(url).json() as SpecificDict | If I understand you correctly, you're looking for typing.cast: from typing import cast res = cast(dict, request(url)) This will assert to a typechecker that res is assigned to a value that is a dictionary, but it won't have any effects at runtime. | 10 | 7 |
75,315,550 | 2023-2-1 | https://stackoverflow.com/questions/75315550/i-dont-understand-why-is-this-for-loop-so-fast | Today I was solving Project Euler's problem #43 Problem and I ran into a somewhat interesting problem. I don't understand why is my code so fast? from itertools import permutations def main(): numbers1_9 = [0,1,2,3,4,5,6,7,8,9] list_of_all_permutations = list(permutations(numbers1_9, 10)) length_of_my_list = len(list_of_all_permutations) number_of_times_it_ran=0 result = [] for n in list_of_all_permutations: number_of_times_it_ran+=1 if n[0] == 0: continue elif n[3] % 2 == 0 and (n[2]+n[3]+n[4]) % 3 == 0 and n[5] % 5 ==0 and int(str(n[4])+str(n[5])+str(n[6])) % 7 == 0 and (n[5]+n[7]-n[6]) % 11 == 0 and int(str(n[6])+str(n[7])+str(n[8])) % 13 == 0 and int(str(n[7])+str(n[8])+str(n[9])) % 17 == 0: temp_list = [] for digits_of_n in n: temp_list.append(str(digits_of_n)) result.append(int("".join(temp_list))) print(f"Added {temp_list}, Remaining: {length_of_my_list-number_of_times_it_ran}") print(f"The code ran {number_of_times_it_ran} times and the result is {sum(result)}") if __name__ == "__main__": main() I mean it went through the for loop 3,628,800 times, checked all those parameters, and only took a second. Added ['1', '4', '0', '6', '3', '5', '7', '2', '8', '9'], Remaining: 3142649 Added ['1', '4', '3', '0', '9', '5', '2', '8', '6', '7'], Remaining: 3134251 Added ['1', '4', '6', '0', '3', '5', '7', '2', '8', '9'], Remaining: 3124649 Added ['4', '1', '0', '6', '3', '5', '7', '2', '8', '9'], Remaining: 2134649 Added ['4', '1', '3', '0', '9', '5', '2', '8', '6', '7'], Remaining: 2126251 Added ['4', '1', '6', '0', '3', '5', '7', '2', '8', '9'], Remaining: 2116649 The code ran 3628800 times, and the result is 16695334890 This is the output. The code finished in 1.3403266999521293 seconds. | It runs so quickly because of the short-circuiting feature of logical operators. The first three conditions in the if statement are easy to calculate, and they filter out the vast majority (around 97%) of all the permutations, so you hardly ever need to execute the more expensive operations like int(str(n[4])+str(n[5])+str(n[6])). So when you have a bunch of conditions that you're connecting with and, and the order that they're tested doesn't matter for the logic, you should put the ones that are easiest to test or are most likely to fail first. | 3 | 4 |
75,315,203 | 2023-2-1 | https://stackoverflow.com/questions/75315203/is-the-last-parameter-of-glvertexattribpointer-a-0-or-none | I am trying to setup a simple 3D Engine in pyOpenGL. My current goal was to achieve a 2D rectangle being displayed to the screen, which isn't working at all. (nothing is being rendered to the screen, no exception is being thrown by the program.) The render method I use is following: @staticmethod def render(model: RawModel): glBindVertexArray(model.get_vao_id()) glEnableVertexAttribArray(0) glDrawArrays(GL_TRIANGLES, 1, model.get_vertex_count()) glDisableVertexAttribArray(0) glBindVertexArray(0) I suppose something goes wrong with the glDrawArrays() method, because of how I bound my Buffer data: @classmethod def bind_indices_buffer(cls, attribute_number: int, data: list): data = numpy.array(data, dtype='float32') vbo_id = glGenBuffers(1) cls.__vbos.append(vbo_id) glBindBuffer(GL_ARRAY_BUFFER, vbo_id) glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW) glVertexAttribPointer(attribute_number, 3, GL_FLOAT, False, 0, 0) glBindBuffer(GL_ARRAY_BUFFER, 0) | The problem is here: glVertexAttribPointer(attribute_number, 3, GL_FLOAT, False, 0, 0) glVertexAttribPointer(attribute_number, 3, GL_FLOAT, False, 0, None) The type of the lase argument of glVertexAttribIPointer is const GLvoid *. So the argument must be None or ctypes.c_void_p(0), but not 0. | 3 | 3 |
75,310,650 | 2023-2-1 | https://stackoverflow.com/questions/75310650/how-to-get-font-path-from-font-name-python | My aim is to get the font path from their common font name and then use them with PIL.ImageFont. I got the names of all installed fonts by using tkinter.font.families(), but I want to get the full path of each font so that I can use them with PIL.ImageFont. Is there any other way to use the common font name with ImageFont.truetype() method? | I'm not exactly sure what you really want - but here is a way to get a list of the full path to all the fonts on your system and their names and weights: #!/usr/bin/env python3 import matplotlib.font_manager from PIL import ImageFont # Iterate over all font files known to matplotlib for filename in matplotlib.font_manager.findSystemFonts(): # Avoid these two trouble makers - don't know why they are problematic if "Emoji" not in filename and "18030" not in filename: # Look up what PIL knows about the font font = ImageFont.FreeTypeFont(filename) name, weight = font.getname() print(f'File: {filename}, fontname: {name}, weight: {weight}') Sample Output File: /System/Library/Fonts/Supplemental/NotoSansLepcha-Regular.ttf, fontname: Noto Sans Lepcha, weight: Regular File: /System/Library/Fonts/ZapfDingbats.ttf, fontname: Zapf Dingbats, weight: Regular File: /System/Library/Fonts/Supplemental/Zapfino.ttf, fontname: Zapfino, weight: Regular File: /System/Library/Fonts/Supplemental/NotoSansMultani-Regular.ttf, fontname: Noto Sans Multani, weight: Regular File: /System/Library/Fonts/Supplemental/NotoSansKhojki-Regular.ttf, fontname: Noto Sans Khojki, weight: Regular File: /System/Library/Fonts/Supplemental/Mishafi Gold.ttf, fontname: Mishafi Gold, weight: Regular File: /System/Library/Fonts/Supplemental/NotoSansMendeKikakui-Regular.ttf, fontname: Noto Sans Mende Kikakui, weight: Regular File: /System/Library/Fonts/MuktaMahee.ttc, fontname: Mukta Mahee, weight: Regular File: /Users/mark/Library/Fonts/JetBrainsMonoNL-Italic.ttf, fontname: JetBrains Mono NL, weight: Italic ... ... | 4 | 1 |
75,313,574 | 2023-2-1 | https://stackoverflow.com/questions/75313574/automate-the-update-of-packages-in-pyproject-toml-from-virtualenv-or-pip-tools | I am trying to update my Python CI environment and am working on package management right now. I have several reasons that I do not want to use Poetry; however, one nice feature of poetry is the fact that it automatically updates the pyproject.toml file. I know that pip-tools can create a requirements.txt file from the pyproject.toml file; however, is there any feature within virtualenv or pip-tools that will enable an automatic update of the pyproject.toml file when you install a package with pip to your virtual environment? | A standard tool-agnostic add command does not exist. It is being discussed here: https://discuss.python.org/t/poetry-add-but-for-pep-621/22957 I do not know if there is such a feature in pip-tools. I am pretty sure it does not exist in virtualenv, that would be quite out of scope. Your can always adopt a "dev workflow tool" (PDM, Hatch, Poetry*) if you want it. *: Poetry uses a non-standard notation for the project metadata, but you seem to be currently using the standard notation (aka PEP621) so this could be a step backward if you were to migrate to Poetry. | 6 | 3 |
75,307,905 | 2023-2-1 | https://stackoverflow.com/questions/75307905/python-typing-for-a-metaclass-singleton | I have a Python (3.8) metaclass for a singleton as seen here I've tried to add typings like so: from typing import Dict, Any, TypeVar, Type _T = TypeVar("_T", bound="Singleton") class Singleton(type): _instances: Dict[Any, _T] = {} def __call__(cls: Type[_T], *args: Any, **kwargs: Any) -> _T: if cls not in cls._instances: cls._instances[cls] = super().__call__(*args, **kwargs) return cls._instances[cls] In the line: _instances: Dict[Any, _T] = {} MyPy warns: Mypy: Type variable "utils.singleton._T" is unbound I've tried different iterations of this to no avail; it's very hard for me to figure out how to type this dict. Further, the line: def __call__(cls: Type[_T], *args: Any, **kwargs: Any) -> _T: Produces: Mypy: The erased type of self "Type[golf_ml.utils.singleton.Singleton]" is not a supertype of its class "golf_ml.utils.singleton.Singleton" How could I correctly type this? | This should work: from __future__ import annotations import typing as t _T = t.TypeVar("_T") class Singleton(type, t.Generic[_T]): _instances: dict[Singleton[_T], _T] = {} def __call__(cls, *args: t.Any, **kwargs: t.Any) -> _T: if cls not in cls._instances: cls._instances[cls] = super().__call__(*args, **kwargs) return cls._instances[cls] Rough explanations: _T = TypeVar("_T", bound="Singleton") is not correct - Singleton is type(type(obj)) where obj: _T = Singleton.__call__(...). In proper usage, the argument to bound= can only be type(obj) or some union typing construct, not type(type(obj). Type variable "_T" is unbound indicates that you need to make Singleton generic with respect to _T to bind _T. The erased type of self ... error message is telling you that you've "erased" the type checker's inferred type* of cls. Technically speaking, __call__ is the same on a metaclass as any other instance method - the first argument is simply the type of the owning class. In the current static typing system, however, a metaclass's instance method's first argument is not in concordance with type[...]. *The inferred type is explicitly Self in the following: import typing as t Self = t.TypeVar("Self", bound="A") class A: def instancemethod(arg: Self) -> None: pass @classmethod def classmethod_(arg: type[Self]) -> None: pass Runtime is important too, so the final sanity check is to make sure you've actually implemented a singleton using this metaclass: class Logger(metaclass=Singleton): pass >>> print(Logger() is Logger()) True | 5 | 7 |
75,308,944 | 2023-2-1 | https://stackoverflow.com/questions/75308944/polars-case-statement | I am trying to pick up the package polars from Python. I come from an R background so appreciate this might be an incredibly easy question. I want to implement a case statement where if any of the conditions below are true, it will flag it to 1 otherwise it will be 0. My new column will be called 'my_new_column_flag' I am however getting the error message Traceback (most recent call last): File "", line 2, in File "C:\Users\foo\Miniconda3\envs\env\lib\site-packages\polars\internals\lazy_functions.py", line 204, in col return pli.wrap_expr(pycol(name)) TypeError: argument 'name': 'int' object cannot be converted to 'PyString' import polars as pl import numpy as np np.random.seed(12) df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names": ["foo", "ham", "spam", "egg", None], "random": np.random.rand(5), "groups": ["A", "A", "B", "C", "B"], } ) print(df) df.with_columns( pl.when(pl.col('nrs') == 1).then(pl.col(1)) .when(pl.col('names') == 'ham').then(pl.col(1)) .when(pl.col('random') == 0.014575).then(pl.col(1)) .otherwise(pl.col(0)) .alias('my_new_column_flag') ) Can anyone help? | pl.col selects a column with the given name (as string). What you want is a column with literal value set to one: pl.lit(1) df.with_columns( pl.when(pl.col('nrs') == 1).then(pl.lit(1)) .when(pl.col('names') == 'ham').then(pl.lit(1)) .when(pl.col('random') == 0.014575).then(pl.lit(1)) .otherwise(pl.lit(0)) .alias('my_new_column_flag') ) PS: it may look more natural to use predicate for your flat (and cast it to int if you want it to be 0/1 instead of true/false): df.with_columns( ((pl.col("nrs") == 1) | (pl.col("names") == "ham") | (pl.col("random") == 0.014575)) .alias("my_new_column_flag") .cast(int) ) | 8 | 10 |
75,308,719 | 2023-2-1 | https://stackoverflow.com/questions/75308719/convert-a-series-of-number-become-one-single-line-of-numbers | If I have a series of numbers in a DataFrame with one column, e.g.: import pandas as pd data = [4, 5, 6, 7, 8, 9, 10, 11] pd.DataFrame(data) Which looks like this (left column = index, right column = data): 0 4 1 5 2 6 3 7 4 8 5 9 6 10 7 11 How do I make it into one sequence number, so (4 5 6 7 8 9 10 11) in python or pandas ? because i want to put that into xml file so it looks like this <Or> <numbers> <example>4 5 6 7 8 9 10 11</example> </numbers> </Or> | You can use a f-string with conversion of the integers to string and str.join: text = f''' <Or> <numbers> <example>{" ".join(s.astype(str))}</example> </numbers> </Or>''' Output: <Or> <numbers> <example>4 5 6 7 8 9 10 11</example> </numbers> </Or> | 3 | 2 |
75,308,496 | 2023-2-1 | https://stackoverflow.com/questions/75308496/how-do-i-run-uvicorn-in-a-docker-container-that-exposes-the-port | I am developing a fastapi inside a docker container in windows/ubuntu (code below). When I test the app outside the container by running python -m uvicorn app:app --reload in the terminal and then navigating to 127.0.0.1:8000/home everything works fine: { Data: "Test" } However, when I docker-compose up I can neither run python -m uvicorn app:app --reload in the container (due to the port already being used), nor see anything returned in the browser. I have tried 127.0.0.1:8000/home, host.docker.internal:8000/home and localhost:8000/home and I always receive: { detail: "Not Found" } What step am I missing? Dockerfile: FROM python:3.8-slim EXPOSE 8000 ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 COPY requirements.txt . RUN python -m pip install -r requirements.txt WORKDIR /app COPY . /app RUN adduser -u nnnn --disabled-password --gecos "" appuser && chown -R appuser /app USER appuser CMD ["gunicorn", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker", "app:app"] Docker-compose: version: '3.9' services: fastapitest: image: fastapitest build: context: . dockerfile: ./Dockerfile ports: - 8000:8000 extra_hosts: - "host.docker.internal:host-gateway" app.py: from fastapi import FastAPI app = FastAPI() @app.get("/home") #must be one line above the function fro the route def home(): return {"Data": "Test"} if __name__ == '__main__': import uvicorn uvicorn.run(app, host="127.0.0.1", port=8000) | The issue here is that when you specify host="127.0.0.1" to uvicorn, that means you can only access that port from that same machine. Now, when you run outside docker, you are on the same machine, so everything works. But since a docker container is (at least to some degree) a different computer, you need to tell it to allow connections from outside the container as well. To do this, switch to host="0.0.0.0", and then you should be able to access your dockerized API on http://localhost:8000. | 3 | 16 |
75,307,473 | 2023-2-1 | https://stackoverflow.com/questions/75307473/how-to-read-a-sqlite-database-file-using-polars-package-in-python | I want to read a SQLite database file (database.sqlite) using polars package. I tried following unsuccessfully: import sqlite3 import polars as pl conn = sqlite3.connect('database.sqlite') df = pl.read_sql("SELECT * from table_name", conn) print(df) Getting following error: AttributeError: 'sqlite3.Connection' object has no attribute 'split' Any suggestions? | From the docs, you can see pl.read_sql accepts connection string as a param, and you are sending the object sqlite3.Connection, and that's why you get that message. You should first generate the connection string, which is url for your db db_path = 'database.sqlite' connection_string = 'sqlite://' + db_path And after that, you can type the updated next line, which gave you problems: df = pl.read_sql("SELECT * from table_name", connection_string) | 5 | 6 |
75,304,491 | 2023-2-1 | https://stackoverflow.com/questions/75304491/difference-between-bare-except-and-specifying-a-specific-exception | I could write a simple except clause without writing an Exception in front of it. I mean, the last line could be like this : except: print('Hit an exception other than KeyError or NameError!') What is the point of writing Exception in front of an except clause? try: discounted_price(instrument, discount) except KeyError: print("There is a keyerror in your code") except NameError: print('There is a TypeError in your code') except Exception: print('an exception occured') I tried writing an except clause without an Exception and it worked the same. I know the point of catching specific errors. If I want to ask more clearly, what is the difference between these two clauses : except Exception: print('an exception occured') except : print('an exception occured') | A bare expect try: ... except: pass or catching any exception whatsoever try: ... except Exception: pass are bad practice, because you can be hiding bug or be interfering with the normal procedure of the program. You should only catch exception that you know how to handle, everything else you should let it propagate. For some example: Hide bug: it can hide some typo in your code as Caridorc example show making you think that you had a problem different than the real problem Interfering with the normal procedure: you can make it an unexpectedly unkillable program or get in the way of the normal procedure of the program by discarding an exception that another part of the code was expecting. like for example while True: try: print("running") except: print("I'm immortal muahahaha") this piece of code now you can't stop with you usual control-z (control-z throw and KeyboardInterrupt exception into your program) so you now need to close the whole interpreter/kill it with the task admin just to stop it, and if this was unintended you just introduced a new bug and depending on what you're doing it can be catastrophic. To illustrate how catastrophic it can be, consider the following hypothetical case: imagine you make so benign function for a medical device and you put something like this try: ... except: print("some error happens") now it just so happens that while you piece of code was running a HeartAttack exception was raised and your catch it all and ignore piece of code will do, well, just that, and here is the twist this device was a pacemaker... well, congratulation you just killed the poor guy. And that is why you should only catch the exception you know how to deal with, everything else you let it pass and hope that somebody along the line know how to deal with it, like in the example above, you and your piece of code don't know how to deal with a HeartAttack, but the pacemaker do and the pacemaker was the one that call your piece of code let it deal with it... 
for a no so extreme example consider this simple code def get_number_from_user(): while True: try: return int(input("write a number: ")) except: print("not a valid number try again") if your user was done with your program and this happens to be the thing running he/she might want to kill it with a control-z as you usually do with any program, but it will find that it doesn't work, the correct way here is to catch the error we know how to deal with in this case, namely ValueError, everything else isn't this function business def get_number_from_user(): while True: try: return int(input("write a number: ")) except ValueError: print("not a valid number try again") You also ask about the difference between try: ... except: pass and this try: ... except Exception: pass the difference is that a bare except can catch any and all kind of exception, that in python is anything that is or inherit from BaseException that sit at the top of the exception hierarchy, while except Exception will catch only Exception itself or anything that inherit from it (the same apply for any particular exception you put there), this small distinction allow to make some exceptions more special than other, like the aforementioned KeyboardInterrupt that inherit from BaseException instead of Exception, and that is used to signal that the user wants to terminate this program, so you should do so and this distinction is made basically so new programmers don't shoot themselves in the foot when using except Exception | 3 | 1 |
75,268,393 | 2023-1-28 | https://stackoverflow.com/questions/75268393/yolov8-how-does-it-handle-different-image-sizes | Yolov8 and I suspect Yolov5 handle non-square images well. I cannot see any evidence of cropping the input image, i.e. detections seem to go to the enge of the longest side. Does it resize to a square 640x604 which would change the aspect ratio of objects making them more difficult to detect? When training on a custom dataset starting from a pre-trained model, what does the imgsz (image size) parameter actually do? | Modern Yolo versions, from v3 onwards, can handle arbitrary sized images as long as both sides are a multiple of 32. This is because the maximum stride of the backbone is 32 and it is a fully convolutional network. But there are clearly two different cases for how input images to the model are preprocessed: Training An example. Let's say you start a training by: from ultralytics.yolo.engine.model import YOLO model = YOLO("yolov8n.pt") results = model.train(data="coco128.yaml", imgsz=512) By printing what is fed to the model (im) in trainer.py you will obtain the following output: Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size 0%| | 0/8 [00:00<?, ?it/s] torch.Size([16, 3, 512, 512]) 1/100 1.67G 1.165 1.447 1.198 226 512: 12%|ββ | 1/8 [00:01<00:08, 1.15s/it] torch.Size([16, 3, 512, 512]) 1/100 1.68G 1.144 1.511 1.22 165 512: 25%|βββ | 2/8 [00:02<00:06, 1.10s/it] torch.Size([16, 3, 512, 512]) So, during training, images have to be reshaped to the same size in order to be able to create mini-batches as you cannot concatenate tensors of different shapes. To preserve the aspect ratio of the images, in order to avoid distortion, they are usually "letterbox'ed". imgsz selects the size of the images to train on. Prediction Now, let's have a look at prediction. Let's say you select the images under assets as source and imgsz 512 by from ultralytics.yolo.engine.model import YOLO model = YOLO("yolov8n.pt") results = model.predict(stream=True, imgsz=512) # source already setup By printing the original image shape (im0) and the one fed to the model (im) in predictor.py you will obtain the following output: (yolov8) β ultralytics git:(main) β python new.py Ultralytics YOLOv8.0.23 π Python-3.8.15 torch-1.11.0+cu102 CUDA:0 (Quadro P2000, 4032MiB) YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs im0s (1080, 810, 3) im torch.Size([1, 3, 512, 384]) image 1/2 /home/mikel.brostrom/ultralytics/ultralytics/assets/bus.jpg: 512x384 4 persons, 1 bus, 7.4ms im0s (720, 1280, 3) im torch.Size([1, 3, 288, 512]) image 2/2 /home/mikel.brostrom/ultralytics/ultralytics/assets/zidane.jpg: 288x512 3 persons, 2 ties, 5.8ms Speed: 0.4ms pre-process, 6.6ms inference, 1.5ms postprocess per image at shape (1, 3, 512, 512) You can see that the longest image side is reshaped to 512. The short side is reshaped to the closest multiple of 32 while maintaining the aspect ratio. As you are not feeding multiple images at the same time you don't need to reshape images into the same shape and stack them, making it possible to avoid padding. | 12 | 17 |
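A tiny helper that reproduces the shape arithmetic described above (longest side scaled to imgsz, short side rounded to a multiple of the 32-pixel stride while preserving aspect ratio). This is illustrative arithmetic, not Ultralytics' own letterbox code:

```python
import math

def inference_shape(h, w, imgsz=512, stride=32):
    """Scale the longest side to imgsz, round the short side up to a multiple of stride."""
    scale = imgsz / max(h, w)
    new_h, new_w = h * scale, w * scale
    return math.ceil(new_h / stride) * stride, math.ceil(new_w / stride) * stride

print(inference_shape(1080, 810))   # (512, 384) -- the bus.jpg example above
print(inference_shape(720, 1280))   # (288, 512) -- the zidane.jpg example above
```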
75,229,981 | 2023-1-25 | https://stackoverflow.com/questions/75229981/how-to-use-polars-cut-method-returning-result-to-original-df | Update: pl.cut was removed from Polars. Expression equivalents were added instead: .cut() .qcut() How can I use it in select context, such as df.with_columns? To be more specific, if I have a polars dataframe with a lot of columns and one of them is called x, how can I do pl.cut on x and append the grouping result into the original dataframe? Below is what I tried but it does not work: df = pl.DataFrame({"a": [1, 2, 3, 4, 5], "b": [2, 3, 4, 5, 6], "x": [1, 3, 5, 7, 9]}) df.with_columns(pl.cut(pl.col("x"), bins=[2, 4, 6])) Thanks so much for your help. | From the docs, as of 2023-01-25, cut takes a Series and returns a DataFrame. Unlike many/most methods and functions, it doesn't take an expression so you can't use it in a select or with_column(s). To get your desired result you'd have to join it to your original df. Additionally, it appears that cut doesn't necessarily maintain the same dtypes as the parent series. (This is most certainly a bug) As such you have to cast it back to, in this case, int. You'd have: df=df.join( pl.cut(df.get_column('x'),bins=[2,4,6]).with_columns(pl.col('x').cast(pl.Int64())), on='x' ) shape: (5, 5) βββββββ¬ββββββ¬ββββββ¬ββββββββββββββ¬ββββββββββββββ β a β b β x β break_point β category β β --- β --- β --- β --- β --- β β i64 β i64 β i64 β f64 β cat β βββββββͺββββββͺββββββͺββββββββββββββͺββββββββββββββ‘ β 1 β 2 β 1 β 2.0 β (-inf, 2.0] β β 2 β 3 β 3 β 4.0 β (2.0, 4.0] β β 3 β 4 β 5 β 6.0 β (4.0, 6.0] β β 4 β 5 β 7 β inf β (6.0, inf] β β 5 β 6 β 9 β inf β (6.0, inf] β βββββββ΄ββββββ΄ββββββ΄ββββββββββββββ΄ββββββββββββββ | 5 | 4 |
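Following the update note at the top of the question, here is a hedged sketch of the expression form that replaced the removed `pl.cut`; the `breaks` argument name assumes a recent Polars release:

```python
import polars as pl

df = pl.DataFrame({"a": [1, 2, 3, 4, 5], "b": [2, 3, 4, 5, 6], "x": [1, 3, 5, 7, 9]})

# Expression-level cut stays inside the DataFrame, so no join back is needed.
out = df.with_columns(pl.col("x").cut(breaks=[2, 4, 6]).alias("x_bin"))
print(out)
```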
75,303,038 | 2023-1-31 | https://stackoverflow.com/questions/75303038/how-to-write-poisson-cdf-as-python-polars-expression | I have a collection of polars expressions being used to generate features for an ML model. I'd like to add a poission cdf feature to this collection whilst maintaining lazy execution (with benefits of speed, caching etc...). I so far have not found an easy way of achieving this. I've been able to get the result I'd like outside of the desired lazy expression framework with: import polars as pl from scipy.stats import poisson df = pl.DataFrame({"count": [9,2,3,4,5], "expected_count": [7.7, 0.2, 0.7, 1.1, 7.5]}) result = poisson.cdf(df["count"].to_numpy(), df["expected_count"].to_numpy()) df = df.with_columns(pl.Series(result).alias("poisson_cdf")) However, in reality I'd like this to look like: df = pl.DataFrame({"count": [9,2,3,4,5], "expected_count": [7.7, 0.2, 0.7, 1.1, 7.5]}) df = df.select( [ ... # bunch of other expressions here poisson_cdf() ] ) where poisson_cdf is some polars expression like: def poisson_cdf(): # this is just for illustration, clearly wont work return scipy.stats.poisson.cdf(pl.col("count"), pl.col("expected_count")).alias("poisson_cdf") I also tried using a struct made up of "count" and "expected_count" and apply like advised in the docs when applying custom functions. However, my dataset is several millions of rows in reality - leading to absurd execution time. Any advice or guidance here would be appreciated. Ideally there exists an expression like this somewhere out there? Thanks in advance! | It sounds like you want to use .map_batches() df.with_columns( pl.struct("count", "expected_count") .map_batches(lambda x: poisson.cdf(x.struct.field("count"), x.struct.field("expected_count")) ) .alias("poisson_cdf") ) shape: (5, 3) βββββββββ¬βββββββββββββββββ¬ββββββββββββββ β count | expected_count | poisson_cdf β β --- | --- | --- β β i64 | f64 | f64 β βββββββββͺβββββββββββββββββͺββββββββββββββ‘ β 9 | 7.7 | 0.75308 β β 2 | 0.2 | 0.998852 β β 3 | 0.7 | 0.994247 β β 4 | 1.1 | 0.994565 β β 5 | 7.5 | 0.241436 β βββββββββ΄βββββββββββββββββ΄ββββββββββββββ | 3 | 1 |
75,272,909 | 2023-1-29 | https://stackoverflow.com/questions/75272909/does-polars-module-not-have-a-method-for-appending-dataframes-to-output-files | When writing a DataFrame to a csv file, I would like to append to the file, instead of overwriting it. While pandas DataFrame has the .to_csv() method with the mode parameter available, thus allowing to append the DataFrame to a file, None of the Polars DataFrame write methods seem to have that parameter. | To append to a CSV file for example - you can pass a file object e.g. import polars as pl df1 = pl.DataFrame({"a": [1, 2], "b": [3 ,4]}) df2 = pl.DataFrame({"a": [5, 6], "b": [7 ,8]}) with open("out.csv", mode="a") as f: df1.write_csv(f) df2.write_csv(f, include_header=False) >>> from pathlib import Path >>> print(Path("out.csv").read_text(), end="") a,b 1,3 2,4 5,7 6,8 | 6 | 8 |
75,268,283 | 2023-1-28 | https://stackoverflow.com/questions/75268283/how-to-extract-date-from-datetime-column-in-polars | I am trying to move from pandas to polars but I am running into the following issue. df = pl.DataFrame( { "integer": [1, 2, 3], "date": [ "2010-01-31T23:00:00+00:00", "2010-02-01T00:00:00+00:00", "2010-02-01T01:00:00+00:00" ] } ) df = df.with_columns( pl.col("date").str.to_datetime().dt.convert_time_zone("Europe/Amsterdam") ) Yields the following dataframe: shape: (3, 2) βββββββββββ¬βββββββββββββββββββββββββββββββββ β integer β date β β --- β --- β β i64 β datetime[ΞΌs, Europe/Amsterdam] β βββββββββββͺβββββββββββββββββββββββββββββββββ‘ β 1 β 2010-02-01 00:00:00 CET β β 2 β 2010-02-01 01:00:00 CET β β 3 β 2010-02-01 02:00:00 CET β βββββββββββ΄βββββββββββββββββββββββββββββββββ As you can see, I transformed the datetime string from UTC to CET succesfully. However, if I try to cast to pl.Date (as suggested here), it seems to extract the date from the UTC string even though it has been transformed, e.g.: df = df.with_columns( pl.col("date").cast(pl.Date).alias("valueDay") ) shape: (3, 3) βββββββββββ¬βββββββββββββββββββββββββββββββββ¬βββββββββββββ β integer β date β valueDay β β --- β --- β --- β β i64 β datetime[ΞΌs, Europe/Amsterdam] β date β βββββββββββͺβββββββββββββββββββββββββββββββββͺβββββββββββββ‘ β 1 β 2010-02-01 00:00:00 CET β 2010-01-31 β # <- NOT OK β 2 β 2010-02-01 01:00:00 CET β 2010-02-01 β β 3 β 2010-02-01 02:00:00 CET β 2010-02-01 β βββββββββββ΄βββββββββββββββββββββββββββββββββ΄βββββββββββββ The valueDay should be 2010-02-01 for all 3 values. Can anyone help me fix this? A pandas dt.date like way to approach this would be nice. By the way, what is the best way to optimize this code? Do I constantly have to assign everything to df or is there a way to chain all of this? | Update: .dt.date() has since been added to Polars. import polars as pl df = pl.DataFrame( { "integer": [1, 2, 3], "date": [ "2010-01-31T23:00:00+00:00", "2010-02-01T00:00:00+00:00", "2010-02-01T01:00:00+00:00", ], } ) df = df.with_columns( pl.col("date").str.to_datetime().dt.convert_time_zone("Europe/Amsterdam") ) df = df.with_columns( pl.col("date").dt.date().alias("valueDay"), pl.col("date").dt.day().alias("day"), pl.col("date").dt.month().alias("month"), pl.col("date").dt.year().alias("year"), ) shape: (3, 6) βββββββββββ¬βββββββββββββββββββββββββββββββββ¬βββββββββββββ¬ββββββ¬ββββββββ¬βββββββ β integer β date β valueDay β day β month β year β β --- β --- β --- β --- β --- β --- β β i64 β datetime[ΞΌs, Europe/Amsterdam] β date β i8 β i8 β i32 β βββββββββββͺβββββββββββββββββββββββββββββββββͺβββββββββββββͺββββββͺββββββββͺβββββββ‘ β 1 β 2010-02-01 00:00:00 CET β 2010-02-01 β 1 β 2 β 2010 β β 2 β 2010-02-01 01:00:00 CET β 2010-02-01 β 1 β 2 β 2010 β β 3 β 2010-02-01 02:00:00 CET β 2010-02-01 β 1 β 2 β 2010 β βββββββββββ΄βββββββββββββββββββββββββββββββββ΄βββββββββββββ΄ββββββ΄ββββββββ΄βββββββ | 6 | 5 |
75,285,711 | 2023-1-30 | https://stackoverflow.com/questions/75285711/difference-between-collections-abc-sequence-and-typing-sequence | I was reading an article and about collection.abc and typing class in the python standard library and discover both classes have the same features. I tried both options using the code below and got the same results from collections.abc import Sequence def average(sequence: Sequence): return sum(sequence) / len(sequence) print(average([1, 2, 3, 4, 5])) # result is 3.0 from typing import Sequence def average(sequence: Sequence): return sum(sequence) / len(sequence) print(average([1, 2, 3, 4, 5])) # result is 3.0 Under what condition will collection.abc become a better option to typing. Are there benefits of using one over the other? | Good on you for using type annotations! As the documentations says, if you are on Python 3.9+, you should most likely never use typing.Sequence due to its deprecation. Since the introduction of generic alias types in 3.9 the collections.abc classes all support subscripting and should be recognized correctly by static type checkers of all flavors. So the benefit of using collections.abc.T over typing.T is mainly that the latter is deprecated and should not be used. As mentioned by jsbueno in his answer, annotations will have no runtime implications either way, unless of course they are explicitly picked up by a piece of code (see my other answer here). They are just an essential part of good coding style. But your function would still work, i.e. your script would execute without error, even if you annotated your function with something absurd like def average(sequence: 4%3): .... Proper annotations are still extremely valuable. Thus, I would recommend you get used to some of the best practices as soon as possible. (A more-or-less strict static type checker like mypy is very helpful for that.) For one thing, when you are using generic types like Sequence, you should always provide the appropriate type arguments. Those may be type variables, if your function is also generic or they may be concrete types, but you should always include them. In your case, assuming you expect the contents of your sequence to be something that can be added with the same type and divided by an integer, you might want to e.g. annotate it as Sequence[float]. (In the Python type system, float is considered a supertype of int, even though there is no nominal inheritance.) Another recommendation is to try and be as broad as possible in the parameter types. (This echoes the Python paradigm of dynamic typing.) The idea is that you just specify that the object you expect must be able to "quack", but you don't say it must be a duck. In your example, since you are reliant on the argument being compatible with sum as well as with len, you should consider what types those functions expect. The len function is simple, since it basically just calls the __len__ method of the object you pass to it. The sum function is more nuanced, but in your case the relevant part is that it expects an iterable of elements that can be added (e.g. float). If you take a look at the collections ABCs, you'll notice that Sequence actually offers much more than you need, being that it is a reversible collection. A Collection is the broadest built-in type that fulfills your requirements because it has __iter__ (from Iterable) and __len__ (from Sized). 
So you could do this instead: from collections.abc import Collection def average(numbers: Collection[float]) -> float: return sum(numbers) / len(numbers) (By the way, the parameter name should not reflect its type.) Lastly, if you wanted to go all out and be as broad as possible, you could define your own protocol that is even broader than Collection (by getting rid of the Container inheritance): from collections.abc import Iterable, Sized from typing import Protocol, TypeVar T = TypeVar("T", covariant=True) class SizedIterable(Sized, Iterable[T], Protocol[T]): ... # Literal ellipsis, not a placeholder def average(numbers: SizedIterable[float]) -> float: return sum(numbers) / len(numbers) This has the advantage of supporting very broad structural subtyping, but is most likely overkill. (For the basics of Python typing, PEP 483 and PEP 484 are a must-read.) | 13 | 25 |
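A quick runtime check of how broad the Collection-based signature is: lists, tuples and sets all qualify, because only `__iter__` and `__len__` matter here, while a bare generator does not, since it has no length.

```python
from collections.abc import Collection

def average(numbers: Collection[float]) -> float:
    return sum(numbers) / len(numbers)

print(average([1, 2, 3]))        # list:  2.0
print(average((1.5, 2.5)))       # tuple: 2.0
print(average({1, 2, 3, 4}))     # set works too -- sized and iterable is all we need
print(isinstance((x for x in range(3)), Collection))  # False: generators have no __len__
```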
75,284,417 | 2023-1-30 | https://stackoverflow.com/questions/75284417/attributeerror-module-networkx-has-no-attribute-from-numpy-matrix | A is a co-occurrence dataframe. Why does this raise AttributeError: module 'networkx' has no attribute 'from_numpy_matrix'? import numpy as np import networkx as nx import matplotlib A = np.matrix(coocc) G = nx.from_numpy_matrix(A) | from_numpy_matrix was removed from NetworkX; use nx.from_numpy_array(A) instead. | 15 | 13
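A runnable sketch of the updated call with a plain ndarray (np.matrix itself is deprecated); the small co-occurrence matrix is made up for illustration:

```python
import numpy as np
import networkx as nx

# co-occurrence matrix as a plain ndarray (np.matrix is deprecated)
coocc = np.array([[0, 2, 1],
                  [2, 0, 3],
                  [1, 3, 0]])

G = nx.from_numpy_array(coocc)   # the replacement for the removed from_numpy_matrix
print(G.edges(data=True))        # edge weights are taken from the matrix entries
```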
75,281,114 | 2023-1-30 | https://stackoverflow.com/questions/75281114/list-only-main-packages-with-pip-list | With pip list, we can see the installed packages in our environment. There is no problem with that. We can also write them to a req.txt file with pip freeze and quickly load them in other environments with this req.txt file. My question here is, for example, when we install pandas, libraries such as numpy are installed with it, and we can see those other libraries with pip list as well. Can I use an option with pip list to see only the main libraries I have installed? For example, is there an option to show only pandas (not libraries like numpy) when I do a pip list? | You can use the --not-required flag. This will list packages that are not dependencies of other installed packages (see the pip documentation at pip.pypa.io). python -m pip list --not-required or, if pip is in $PATH, pip list --not-required | 3 | 5
75,267,582 | 2023-1-28 | https://stackoverflow.com/questions/75267582/python-environment-setup-seems-complicated-and-unsolvable | I've been a developer for about three years, primarily working in TypeScript and Node.js. I'm trying to broaden my skillset by learning Python (and eventually expanding my learning of computer vision, machine learning (ML), etc.), but I feel incredibly frustrated by trying to get Python to work consistently on my machine. Surely I'm doing something wrong, but I just can't really understand what it is. I've run into these problems mostly when using ML packages (TensorFlow, Whisper, OpenCV (although I was eventually able to resolve this), so I don't know if it's related to M1 support of one of the common dependencies, etc. My current understanding of Python is that: M1 support of Python is version-dependent at best. venv is the only environment manager I should need to use I should use pyenv to install Python versions so as to not conflict with OS-installed python (macOS dependencies) I'll use the latest project I'm working on as an example. My machine and environment Mac Pro M1, macOS v12.6 (Monterey) pyenv 2.3.9 Python 3.7.13 Fish shell version 3.5.1 My general workflow to get a project started: Create a virtual environment using venv python3 -m venv <some_environment_name> Open created directory using Visual Studio Code, and activate the virtual environment Here is where I encounter my first issue, which seems to be persistent. source /Users/paal/src/whisper_transcription Output: /bin/activate.fish functions: Function '_old_fish_prompt' does not exist ~/src/whisper_transcription/bin/activate.fish (line 18): functions -c _old_fish_prompt fish_prompt ^ in function 'deactivate' with arguments 'nondestructive' called on line 30 of file ~/src/whisper_transcription/bin/activate.fish from sourcing file ~/src/whisper_transcription/bin/activate.fish (Type 'help functions' for related documentation) functions: Function 'fish_prompt' does not exist ~/src/whisper_transcription/bin/activate.fish (line 47): functions -c fish_prompt _old_fish_prompt ^ from sourcing file ~/src/whisper_transcription/bin/activate.fish (Type 'help functions' for related documentation) fish: Unknown command: _old_fish_prompt ~/src/whisper_transcription/bin/activate.fish (line 71): _old_fish_prompt ^ in function 'fish_prompt' in command substitution (whisper_transcription) So, to resolve this, I add the following if statement to the fish.config file. if type -q $program _old_fish_prompt end Looking at GitHub issues, this seems to be a persistent issue for the Fish shell and this seems to at least temporarily resolve it. Or, I just switch to Z shell (executable zsh). OK, so with that resolved I move on. The environment is activated, I'm using Z shell now, and I can successfully run a Python script that prints "hello world" to the console. Then comes the nightmare of installing any packages. It seems like any project I start has some weird edge case of compatibility issues. Between M1 processors, Python versions, builds not working correctly, etc. For example, import whisper ... 
# The rest of the file This with any other code or even by itself throws the following error: Traceback (most recent call last): File "main.py", line 1, in <module> import whisper File "/Users/paal/src/whisper_transcription/lib/python3.7/site-packages/whisper/__init__.py", line 12, in <module> from .decoding import DecodingOptions, DecodingResult, decode, detect_language File "/Users/paal/src/whisper_transcription/lib/python3.7/site-packages/whisper/decoding.py", line 514 if prefix := self.options.prefix: ^ SyntaxError: invalid syntax This appears to be some problem with the Python version. From what I understand, the := operator isn't valid syntax until Python 3.8. However, dependencies of Whisper (PyTorch) only seems to be supported up to version 3.7.9. So, you can see, it seems like I just end up in these bizarre circular problems where some dependency of some package I want to use isn't supported by either the platform or the current Python version, and they seem basically unsolvable (at least with my current knowledge of Python). Why is this seemingly so complicated? I'm clearly doing something wrong here, and obviously I'm out of my comfort and knowledge zone, but these issues feel very daunting and opaque, and difficult to actually troubleshoot this in any consistent or clear way. Is there a resource that makes this stuff more clear? Is Python development on M1 chips just this broken at the moment? How can I get past these seemingly basic issues to start actually learning? I'm not necessarily looking for a solution to this specific problem here, but if thereβs general advice about environment management and how to make things work somewhat reliably, I'm fine troubleshooting. I just feel like every time I start trying to learn, I end up in these rabbit holes that take hours and hours to fix and sometimes don't even really resolve things. | Instead of creating virtual environments using venv, you can use a more sophisticated tool like Poetry which enables you to manage you environments in a much better manner. I have been working on Python projects and one thing that sucks is compatibility issue with other host machines. This is where Poetry comes into the picture. There are a handful of videos available on Poetry that you can refer to. One great feature I have to mention is it tells you which package version is compatible with your Python version. This saves a lot of time of searching on the Internet for the particular package version that works with our Python version. Every time I work on a new project, I generally create a new virtual environment inside that project only using Poetry. So when I want to give this project to someone else or run on other host machine, I give the project with the virtual environment present in the project directory itself. This way they can just run the project from the virtual environment we gave them along with the project instead of having to install all the dependencies on their system and all that. For even advanced users, there is the option of Docker where you containerize your application and stuff, and I myself am learning that right now. But looking at your scenario, I would suggest using the Poetry package for managing virtual environments and their dependencies. Here is a great tutorial on the Poetry package | 5 | 4 |
75,290,492 | 2023-1-30 | https://stackoverflow.com/questions/75290492/how-to-get-model-specification-paramters-for-models-estimated-with-nixtlas-stat | I am using the statsforecast package to fit an AUTOarima model with an external regressor, which works fine. I need to get the model parameters and modify the parameter for the external regressor and rerun the model for scenario analysis. I also need a model summary to provide with my research. How can I get the model specification/parameters and/or a summary from the fitted model in the statsforecast package? A similar questions has been asked on Github (https://github.com/Nixtla/statsforecast/issues/72) but remains unanswerd as of now. I looked through the documentation (https://nixtla.github.io/statsforecast/models.html) but I couldn't locate any method similar to model.get_params() or model.summary() from the sklearn package or any method that would allow me to print the model parameters or a model summary. | Nixtla statsforecast model stores those information under the hood. There is no method like get_params() to access those, but you can do that pretty easily when you have the model trained. Please see the example below: import pandas as pd from statsforecast import StatsForecast from statsforecast.models import AutoARIMA, AutoCES, AutoETS, AutoTheta train_dt = pd.read_csv('//data_you_will_forecast_in_stastforecast_format.csv') models = [ AutoARIMA(season_length=period), AutoTheta(season_length=period), AutoETS(season_length=period), AutoCES(season_length=period), ] sf = StatsForecast(df=train_df, # used data frame models=models, # a list of models. Select the models you want from models and import them. freq='MS', # a string indicating the frequency of the data. n_jobs=-1, fallback_model=SeasonalNaive(season_length=period) # a model to be used if a model fails. sf.fit(train_df) When sf models are fit, the data are accessible as follows: sf.fitted_ # access an array of fitted models. sf.fitted_[0][n].model_ # access dictionary of all model's parameters The dictionary will give all information on the fitted model, including in-sample data, aic (or other metric by which the best model was selected), best model parameters, and many others. In case of AutoARIMA it is under the key 'arma' | 4 | 5 |
75,277,492 | 2023-1-29 | https://stackoverflow.com/questions/75277492/yolov8-get-predicted-class-name | I just want to get class data in my python script like: person, car, truck, dog but my output more than this. Also I can not use results as a string. Python script: from ultralytics import YOLO model = YOLO("yolov8n.pt") results = model.predict(source="0") Output: 0: 480x640 1 person, 1 car, 7.1ms 0: 480x640 1 person, 1 car, 7.2ms 0: 480x640 1 person, 1 car, 7.1ms 0: 480x640 1 person, 1 car, 7.1ms 0: 480x640 1 person, 1 car, 7.1ms 0: 480x640 1 person, 7.9ms 0: 480x640 1 person, 7.1ms 0: 480x640 1 person, 1 car, 7.1ms 0: 480x640 1 person, 1 car, 7.1ms | You can pass each class to the model's name dict like this: from ultralytics.yolo.engine.model import YOLO model = YOLO("yolov8n.pt") results = model.predict(stream=True, imgsz=512) # source already setup names = model.names for r in results: for c in r.boxes.cls: print(names[int(c)]) output: YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs bus person person person person image 1/2 /home/xyz/ultralytics/ultralytics/assets/bus.jpg: 512x384 4 persons, 1 bus, 35.7ms person person person tie tie image 2/2 /home/xyz/ultralytics/ultralytics/assets/zidane.jpg: 288x512 3 persons, 2 ties, 199.0ms Speed: 3.9ms pre-process, 117.4ms inference, 27.9ms postprocess per image at shape (1, 3, 512, 512) | 8 | 20 |
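A small extension of the loop in the answer that aggregates the names per frame with collections.Counter, producing the "1 person, 1 car"-style summary shown in the question; the model and webcam source are set up as in the question:

```python
from collections import Counter
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
names = model.names

for result in model.predict(source="0", stream=True):
    counts = Counter(names[int(c)] for c in result.boxes.cls)
    summary = ", ".join(f"{n} {label}" for label, n in counts.items())
    print(summary or "nothing detected")   # e.g. "1 person, 1 car"
```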
75,286,814 | 2023-1-30 | https://stackoverflow.com/questions/75286814/flask-send-pyaudio-to-browser | I'm sending my servers microphone's audio to the browser (mostly like this post but with some modified options). All works fine, until you head over to a mobile or safari, where it doesn't work at all. I've tried using something like howler to take care of the frontend but with not success (still works in chrome and on the computer but not on the phones Safari/Chrome/etc). <audio> ... </audio> works fine in chrome but only on the computer. function play_audio() { var sound = new Howl({ src: ['audio_feed'], format: ['wav'], html5: true, autoplay: true }); sound.play(); } How does one send a wav-generated audio feed which is 'live' that works in any browser? EDIT 230203: I have narrowed the error down to headers (at least what I think is causing the errors). What headers should one use to make the sound available in all browsers? Take this simple app.py for example: from flask import Flask, Response, render_template import pyaudio import time app = Flask(__name__) @app.route('/') def index(): return render_template('index.html', headers={'Content-Type': 'text/html'}) def generate_wav_header(sampleRate, bitsPerSample, channels): datasize = 2000*10**6 o = bytes("RIFF",'ascii') o += (datasize + 36).to_bytes(4,'little') o += bytes("WAVE",'ascii') o += bytes("fmt ",'ascii') o += (16).to_bytes(4,'little') o += (1).to_bytes(2,'little') o += (channels).to_bytes(2,'little') o += (sampleRate).to_bytes(4,'little') o += (sampleRate * channels * bitsPerSample // 8).to_bytes(4,'little') o += (channels * bitsPerSample // 8).to_bytes(2,'little') o += (bitsPerSample).to_bytes(2,'little') o += bytes("data",'ascii') o += (datasize).to_bytes(4,'little') return o def get_sound(InputAudio): FORMAT = pyaudio.paInt16 CHANNELS = 2 CHUNK = 1024 SAMPLE_RATE = 44100 BITS_PER_SAMPLE = 16 wav_header = generate_wav_header(SAMPLE_RATE, BITS_PER_SAMPLE, CHANNELS) stream = InputAudio.open( format=FORMAT, channels=CHANNELS, rate=SAMPLE_RATE, input=True, input_device_index=1, frames_per_buffer=CHUNK ) first_run = True while True: if first_run: data = wav_header + stream.read(CHUNK) first_run = False else: data = stream.read(CHUNK) yield(data) @app.route('/audio_feed') def audio_feed(): return Response( get_sound(pyaudio.PyAudio()), content_type = 'audio/wav', ) if __name__ == '__main__': app.run(debug=True) With a index.html looking like this: <html> <head> <title>Test audio</title> </head> <body> <button onclick="play_audio()"> Play audio </button> <div id="audio-feed"></div> </body> <script> function play_audio() { var audio_div = document.getElementById('audio-feed'); const audio_url = "{{ url_for('audio_feed') }}" audio_div.innerHTML = "<audio controls><source src="+audio_url+" type='audio/x-wav;codec=pcm'></audio>"; } </script> </html> Fire upp the flask development server python app.py and test with chrome, if you have a microphone you will hear the input sound (headphones preferably, otherwise you'll get a sound loop). Firefox works fine too. But If you try the same app with any browser on an iPhone you'll get no sound, and the same goes for safari on MacOS. There's no errors and you can see that the byte stream of the audio is getting downloaded in safari, but still no sound. What is causing this? I think I should use some kind of headers in the audio_feed response but with hours of debugging I cannot seem to find anything for this. EDIT 230309: @Markus is pointing out to follow RFC7233 HTTP Range Request. 
And that's probably it. While firefox, chrome and probably more browsers on desktop send byte=0- as header request, safari and browsers used on iOS send byte=0-1 as header request. | EDITED 2023-03-12 It turns out that it is sufficient to convert the audio live stream to mp3. For this you can use ffmpeg. The executable has to be available in the execution path of the server process. Here is a working draft tested with windows laptop as server and Safari on iPad as client: from subprocess import Popen, PIPE from threading import Thread from flask import Flask, Response, render_template import pyaudio FORMAT = pyaudio.paFloat32 CHANNELS = 1 CHUNK_SIZE = 4096 SAMPLE_RATE = 44100 BITS_PER_SAMPLE = 16 app = Flask(__name__) @app.route('/') def index(): return render_template('index.html', headers={'Content-Type': 'text/html'}) def read_audio(inp, audio): while True: inp.write(audio.read(num_frames=CHUNK_SIZE)) def response(): a = pyaudio.PyAudio().open( format=FORMAT, channels=CHANNELS, rate=SAMPLE_RATE, input=True, input_device_index=1, frames_per_buffer=CHUNK_SIZE ) c = f'ffmpeg -f f32le -acodec pcm_f32le -ar {SAMPLE_RATE} -ac {CHANNELS} -i pipe: -f mp3 pipe:' p = Popen(c.split(), stdin=PIPE, stdout=PIPE) Thread(target=read_audio, args=(p.stdin, a), daemon=True).start() while True: yield p.stdout.readline() @app.route('/audio_feed', methods=['GET']) def audio_feed(): return Response( response(), headers={ # NOTE: Ensure stream is not cached. 'Cache-Control': 'no-cache, no-store, must-revalidate', 'Pragma': 'no-cache', 'Expires': '0', }, mimetype='audio/mpeg') if __name__ == "__main__": app.run(host='0.0.0.0') In index.html change the type to audio/mp3: <!DOCTYPE html> <html> <head> <title>Test audio</title> </head> <body> <button onclick="play_audio()"> Play audio </button> <div id="audio-feed"></div> </body> <script> function play_audio() { var audio_div = document.getElementById('audio-feed'); const audio_url = "{{ url_for('audio_feed') }}" audio_div.innerHTML = "<audio preload='all' controls><source src=" + audio_url + " type='audio/mp3'></audio>"; } </script> </html> Disclaimer: This is just a basic demo. It opens an audio-ffmpeg subprocess for each call to the audio_feed handler. It doesn't cache data for multiple requests, it doesn't remove unused threads and it doesn't delete data that isn't consumed. Credits: how to convert wav to mp3 in live using python? | 9 | 2 |
75,299,987 | 2023-1-31 | https://stackoverflow.com/questions/75299987/boto3-get-query-runtime-statistics-sometimes-not-returning-rows-data | I have a lambda that attempts to find out whether a previously executed athena query has returned any rows or not. To do so I am using the boto3 function get_query_runtime_statistics and then extracting the "Rows" data: response = athena_client.get_query_runtime_statistics(QueryExecutionId=query_id) row_count = response["QueryRuntimeStatistics"]["Rows"]["OutputRows"] However, in a previous execution the response object has not contained the "Rows" data, resulting in a KeyError being thrown. I know I can get around the KeyError by using .get("Rows", {}).get("OutputRows") etc. I reran the exact same query in the athena console (it returns 0 rows) and then used the query ID to get the runtime statistics of this duplicate query execution. This time it had the "Rows" data in the response. Therefore the behaviour doesn't appear to be consistent for a given query string; however, if I get the statistics for the original query execution the response consistently does not contain the "Rows" data. What I want to know is whether every time "Rows" data is not present can I assume that the output row count was zero? I couldn't find anything in the AWS docs explaining why "Rows" may not always be present in the API response. Thanks :) PS. If you don't want to follow the link to the documentation, here is the response schema according to boto3: { 'QueryRuntimeStatistics': { 'Timeline': { 'QueryQueueTimeInMillis': 123, 'QueryPlanningTimeInMillis': 123, 'EngineExecutionTimeInMillis': 123, 'ServiceProcessingTimeInMillis': 123, 'TotalExecutionTimeInMillis': 123 }, 'Rows': { 'InputRows': 123, 'InputBytes': 123, 'OutputBytes': 123, 'OutputRows': 123 }, 'OutputStage': { 'StageId': 123, 'State': 'string', 'OutputBytes': 123, 'OutputRows': 123, 'InputBytes': 123, 'InputRows': 123, 'ExecutionTime': 123, 'QueryStagePlan': { 'Name': 'string', 'Identifier': 'string', 'Children': [ {'... recursive ...'}, ], 'RemoteSources': [ 'string', ] }, 'SubStages': [ {'... recursive ...'}, ] } } } | I raised a support ticket and got the follwoing responses: The query finished successfully but it failed as an async process of getting runtime stats. This is an internal issue and internal team is aware about it and is working on it to fix the same. I asked for clarification whether this issue only happens on queries that produce zero results, this was the response: The issue could happen regardless of the query. Also as informed by internal team, it may take approximately 15-30 days to know the root cause and fix the issue. [sent on 2023-02-10] I hope this is helpful to anyone else who comes across this :) | 4 | 3 |
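Given the support response, treating a missing "Rows" block as "unknown" rather than zero seems to be the safest reading. A small defensive helper using the .get() chain mentioned in the question (the function name is illustrative):

```python
import boto3

athena = boto3.client("athena")

def output_rows(query_id):
    """Return OutputRows if Athena reported row statistics, else None (unknown)."""
    stats = athena.get_query_runtime_statistics(QueryExecutionId=query_id)
    rows = stats.get("QueryRuntimeStatistics", {}).get("Rows", {})
    # A missing "Rows" block means the stats were not produced -- do NOT assume zero rows.
    return rows.get("OutputRows")
```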
75,299,506 | 2023-1-31 | https://stackoverflow.com/questions/75299506/cannot-import-name-save-virtual-workbook-from-openpyxl-writer-excel | Is there an update to the library? Before it worked perfectly, and today I updated and it no longer loads I searched but I can't find any other option | Looks like the new recommendation from the developers is to use a temp-file: https://openpyxl.readthedocs.io/en/3.1/tutorial.html?highlight=save#saving-as-a-stream update: I ended up having to use this with modifications from tempfile import NamedTemporaryFile from openpyxl import Workbook wb = Workbook() with NamedTemporaryFile() as tmp: tmp.close() # with statement opened tmp, close it so wb.save can open it wb.save(tmp.name) with open(tmp.name, 'rb') as f: f.seek(0) # probably not needed anymore new_file_object = f.read() because the with statement opens the file and then wb.save (which expects a string filename) attempts to open it, resulting in an Exception | 10 | 10 |
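If the virtual-workbook bytes are only needed in memory (for example to return from a web endpoint), an io.BytesIO buffer may avoid the temporary file entirely; this assumes a recent openpyxl where Workbook.save accepts a file-like object:

```python
from io import BytesIO
from openpyxl import Workbook

wb = Workbook()
wb.active["A1"] = "hello"

buffer = BytesIO()
wb.save(buffer)                        # openpyxl writes the xlsx zip archive into the stream
virtual_workbook = buffer.getvalue()   # bytes, like save_virtual_workbook used to return
print(len(virtual_workbook), "bytes")
```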
75,287,534 | 2023-1-30 | https://stackoverflow.com/questions/75287534/indexerror-descartes-polygonpatch-wtih-shapely | I used to use shapely to make a cirle and plot it on a previously populated plot. This used to work perfectly fine. Recently, I am getting an index error. I broke my code to even the simplest of operations and it cant even do the simplest of circles. import descartes import shapely.geometry as sg import matplotlib.pyplot as plt circle = sg.Point((0,0)).buffer(1) # Plot the cricle fig = plt.figure() ax = fig.add_subplot(111) patch = descartes.PolygonPatch(circle) ax.add_patch(patch) plt.show() Below is the error I am getting now. I feel it might be a new version mismatch of something that could have happened. I tried uninstalling and re-installing the last known stable version and that didnt help either --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Cell In[20], line 6 4 fig = plt.figure() 5 ax = fig.add_subplot(111) ----> 6 patch = descartes.PolygonPatch(circle) 7 ax.add_patch(patch) 8 plt.show() File ~/env/lib/python3.8/site-packages/descartes/patch.py:87, in PolygonPatch(polygon, **kwargs) 73 def PolygonPatch(polygon, **kwargs): 74 """Constructs a matplotlib patch from a geometric object 75 76 The `polygon` may be a Shapely or GeoJSON-like object with or without holes. (...) 85 86 """ ---> 87 return PathPatch(PolygonPath(polygon), **kwargs) File ~/env/lib/python3.8/site-packages/descartes/patch.py:62, in PolygonPath(polygon) 58 else: 59 raise ValueError( 60 "A polygon or multi-polygon representation is required") ---> 62 vertices = concatenate([ 63 concatenate([asarray(t.exterior)[:, :2]] + 64 [asarray(r)[:, :2] for r in t.interiors]) 65 for t in polygon]) 66 codes = concatenate([ 67 concatenate([coding(t.exterior)] + 68 [coding(r) for r in t.interiors]) for t in polygon]) 70 return Path(vertices, codes) File ~/env/lib/python3.8/site-packages/descartes/patch.py:63, in <listcomp>(.0) 58 else: 59 raise ValueError( 60 "A polygon or multi-polygon representation is required") 62 vertices = concatenate([ ---> 63 concatenate([asarray(t.exterior)[:, :2]] + 64 [asarray(r)[:, :2] for r in t.interiors]) 65 for t in polygon]) 66 codes = concatenate([ 67 concatenate([coding(t.exterior)] + 68 [coding(r) for r in t.interiors]) for t in polygon]) 70 return Path(vertices, codes) IndexError: too many indices for array: array is 0-dimensional, but 2 were indexed | So from what I could tell, this issue comes from a broken implementation of shapely within descartes. My speculation is that shapely changed how it handles Polygon exteriors and descartes simply hasn't been updated. I don't know if it is the best idea, but I edited my installation of descartes directly to fix this issue: Navigate to your descartes installation and open patch.py. At line 62 you should see this piece of code: vertices = concatenate([ concatenate([asarray(t.exterior)[:, :2]] + [asarray(r)[:, :2] for r in t.interiors]) for t in polygon]) Simply change t.exterior to t.exterior.coords. This hopefully should fix your issue. vertices = concatenate([ concatenate([asarray(t.exterior.coords)[:, :2]] + [asarray(r)[:, :2] for r in t.interiors]) for t in polygon]) I'm trying to find a way to provide the descartes devs with this feedback. | 5 | 20 |
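An alternative that avoids patching the installed descartes package: build the matplotlib patch straight from the Shapely exterior ring. A sketch for simple polygons without holes (holes would need a Path with multiple rings):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon as MplPolygon
import shapely.geometry as sg

circle = sg.Point((0, 0)).buffer(1)

fig, ax = plt.subplots()
# For a polygon without holes, the exterior ring is all we need.
patch = MplPolygon(list(circle.exterior.coords), closed=True, alpha=0.5)
ax.add_patch(patch)
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.set_aspect("equal")
plt.show()
```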
75,233,794 | 2023-1-25 | https://stackoverflow.com/questions/75233794/how-is-the-multiprocessing-queue-instance-serialized-when-passed-as-an-argument | A related question came up at Why I can't use multiprocessing.Queue with ProcessPoolExecutor?. I provided a partial answer along with a workaround but admitted that the question raises another question, namely why a multiprocessing.Queue instance can be passed as the argument to a multiprocessing.Process worker function. For example, the following code fails under platforms that use either the spawn or fork method of creating new processes: from multiprocessing import Pool, Queue def worker(q): print(q.get()) with Pool(1) as pool: q = Queue() q.put(7) pool.apply(worker, args=(q,)) The above raises: RuntimeError: Queue objects should only be shared between processes through inheritance Yet the following program runs without a problem: from multiprocessing import Process, Queue def worker(q): print(q.get()) q = Queue() q.put(7) p = Process(target=worker, args=(q,)) p.start() p.join() It appears that arguments to a multiprocessing pool worker function ultimately get put on the pool's input queue, which is implemented as a multiprocessing.SimpleQueue, and you cannot put a multiprocessing.Queue instance to a multiprocessing.SimpleQueue instance, which uses a ForkingPickler for serialization. So how is the multiprocessing.Queue serialized when passed as an argument to a multiprocessing.Process that allows it to be used in this way? | I wanted to expand on the accepted answer so I added my own which also details a way to make queues, locks, etc. picklable and able to be sent through a pool. Why this happens Basically, it's not that Queues cannot be serialized, it's just that multiprocessing is only equipped to serialize these when it knows sufficient information about the target process it will be sent to (whether that be the current process or some else) which is why it works when you are spawning a process yourself (using Process class) but not when you are simply putting it in a queue (like when using a Pool). Look over the source code for multiprocessing.queues.Queue (or other connection objects like Condition). You'll find that in their __getstate__ method (the method called when a Queue instance is being pickled), there is a call to function multiprocessing.context.assert_spawning. This "assertion" will only pass if the current thread is spawning a process. If that is not the case, multiprocessing raises the error you see and quits. Now the reason why multiprocessing does not even bother to pickle the queue in case the assertion fails is that it does not have access to the Popen object created when a thread creates a subprocess (for windows, you can find this at multiprocessing.popen_spawn_win32.Popen). This object stores data about the target process including its pid and process handle. Multiprocessing requires this information because a Queue contains mutexes, and to successfully pickle and later rebuild these again, multiprocessing must call DuplicateHandle through winapi with the information from the Popen object. Without this object being present, multiprocessing does not know what to do and raises an error. So this is where our problem lies, but it is something fixable if we can teach multiprocessing a different approach to steal the duplicate handles from inside the target process itself without ever requiring it's information in advance. Making Picklable Queues Pay attention to the class multiprocessing.synchronize.SemLock. 
It's the base class for all multiprocessing locks, so its objects are subsequently present in queues, pipes, etc. The way it's currently pickled is like how I described above, it requires the target process's handle to create a duplicate handle. However, we can instead define a __reduce__ method for SemLock where we will create a duplicate handle using the current process's handle, and then from the target process, duplicate the previously created handle which will now be valid in the target process's context. It's quite a mouthful, but a similar approach is actually used to pickle PipeConnection objects as well, but instead of a __reduce__ method, it uses the dispatch table to do so. After this is done, we can the subclass Queue and remove the call to assert_spawning since it will no longer be required. This way, we will now successfully be able to pickle locks, queues, pipes, etc. Here's the code with examples: import os, pickle from multiprocessing import Pool, Lock, synchronize, get_context import multiprocessing.queues import _winapi def work(q): print("Worker: Main says", q.get()) q.put('haha') class DupSemLockHandle(object): """ Picklable wrapper for a handle. Attempts to mirror how PipeConnection objects are pickled using appropriate api """ def __init__(self, handle, pid=None): if pid is None: # We just duplicate the handle in the current process and # let the receiving process steal the handle. pid = os.getpid() proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False, pid) try: self._handle = _winapi.DuplicateHandle( _winapi.GetCurrentProcess(), handle, proc, 0, False, _winapi.DUPLICATE_SAME_ACCESS) finally: _winapi.CloseHandle(proc) self._pid = pid def detach(self): """ Get the handle, typically from another process """ # retrieve handle from process which currently owns it if self._pid == os.getpid(): # The handle has already been duplicated for this process. return self._handle # We must steal the handle from the process whose pid is self._pid. proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False, self._pid) try: return _winapi.DuplicateHandle( proc, self._handle, _winapi.GetCurrentProcess(), 0, False, _winapi.DUPLICATE_CLOSE_SOURCE | _winapi.DUPLICATE_SAME_ACCESS) finally: _winapi.CloseHandle(proc) def reduce_lock_connection(self): sl = self._semlock dh = DupSemLockHandle(sl.handle) return rebuild_lock_connection, (dh, type(self), (sl.kind, sl.maxvalue, sl.name)) def rebuild_lock_connection(dh, t, state): handle = dh.detach() # Duplicated handle valid in current process's context # Create a new instance without calling __init__ because we'll supply the state ourselves lck = t.__new__(t) lck.__setstate__((handle,)+state) return lck # Add our own reduce function to pickle SemLock and it's child classes synchronize.SemLock.__reduce__ = reduce_lock_connection class PicklableQueue(multiprocessing.queues.Queue): """ A picklable Queue that skips the call to context.assert_spawning because it's no longer needed """ def __init__(self, *args, **kwargs): ctx = get_context() super().__init__(*args, **kwargs, ctx=ctx) def __getstate__(self): return (self._ignore_epipe, self._maxsize, self._reader, self._writer, self._rlock, self._wlock, self._sem, self._opid) def is_locked(l): """ Returns whether the given lock is acquired or not. 
""" locked = l.acquire(block=False) if locked is False: return True else: l.release() return False if __name__ == '__main__': # Example that shows that you can now pickle/unpickle locks and they'll still point towards the same object l1 = Lock() p = pickle.dumps(l1) l2 = pickle.loads(p) print('before acquiring, l1 locked:', is_locked(l1), 'l2 locked', is_locked(l2)) l2.acquire() print('after acquiring l1 locked:', is_locked(l1), 'l2 locked', is_locked(l2)) # Example that shows how you can pass a queue to Pool and it will work with Pool() as pool: q = PicklableQueue() q.put('laugh') pool.map(work, (q,)) print("Main: Worker says", q.get()) Output before acquiring, l1 locked: False l2 locked False after acquiring l1 locked: True l2 locked True Worker: Main says laugh Main: Worker says haha Disclaimer: The above code will only work on Windows. If you are on UNIX then you may try using @Booboo's modified code below (reported working but has not been adequately tested, full code link here): import os, pickle from multiprocessing import Pool, Lock, synchronize, get_context, Process import multiprocessing.queues import sys _is_windows= sys.platform == 'win32' if _is_windows: import _winapi . . . class DupSemLockHandle(object): """ Picklable wrapper for a handle. Attempts to mirror how PipeConnection objects are pickled using appropriate api """ def __init__(self, handle, pid=None): if pid is None: # We just duplicate the handle in the current process and # let the receiving process steal the handle. pid = os.getpid() if _is_windows: proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False, pid) try: self._handle = _winapi.DuplicateHandle( _winapi.GetCurrentProcess(), handle, proc, 0, False, _winapi.DUPLICATE_SAME_ACCESS) finally: _winapi.CloseHandle(proc) else: self._handle = handle self._pid = pid def detach(self): """ Get the handle, typically from another process """ # retrieve handle from process which currently owns it if self._pid == os.getpid(): # The handle has already been duplicated for this process. return self._handle if not _is_windows: return self._handle # We must steal the handle from the process whose pid is self._pid. proc = _winapi.OpenProcess(_winapi.PROCESS_DUP_HANDLE, False, self._pid) try: return _winapi.DuplicateHandle( proc, self._handle, _winapi.GetCurrentProcess(), 0, False, _winapi.DUPLICATE_CLOSE_SOURCE | _winapi.DUPLICATE_SAME_ACCESS) finally: _winapi.CloseHandle(proc) | 9 | 7 |
75,289,130 | 2023-1-30 | https://stackoverflow.com/questions/75289130/flatten-nested-pydantic-model | from typing import Union from pydantic import BaseModel, Field class Category(BaseModel): name: str = Field(alias="name") class OrderItems(BaseModel): name: str = Field(alias="name") category: Category = Field(alias="category") unit: Union[str, None] = Field(alias="unit") quantity: int = Field(alias="quantity") When instantiated like this: OrderItems(**{'name': 'Test','category':{'name': 'Test Cat'}, 'unit': 'kg', 'quantity': 10}) It returns data like this: OrderItems(name='Test', category=Category(name='Test Cat'), unit='kg', quantity=10) But I want the output like this: OrderItems(name='Test', category='Test Cat', unit='kg', quantity=10) How can I achieve this? | You should try as much as possible to define your schema the way you actually want the data to look in the end, not the way you might receive it from somewhere else. UPDATE: Generalized solution (one nested field or more) To generalize this problem, let's assume you have the following models: from pydantic import BaseModel class Foo(BaseModel): x: bool y: str z: int class _BarBase(BaseModel): a: str b: float class Config: orm_mode = True class BarNested(_BarBase): foo: Foo class BarFlat(_BarBase): foo_x: bool foo_y: str Problem: You want to be able to initialize BarFlat with a foo argument just like BarNested, but the data to end up in the flat schema, wherein the fields foo_x and foo_y correspond to x and y on the Foo model (and you are not interested in z). Solution: Define a custom root_validator with pre=True that checks if a foo key/attribute is present in the data. If it is, it validates the corresponding object against the Foo model, grabs its x and y values and then uses them to extend the given data with foo_x and foo_y keys: from pydantic import BaseModel, root_validator from pydantic.utils import GetterDict ... class BarFlat(_BarBase): foo_x: bool foo_y: str @root_validator(pre=True) def flatten_foo(cls, values: GetterDict) -> GetterDict | dict[str, object]: foo = values.get("foo") if foo is None: return values # Assume `foo` must ba valid `Foo` data: foo = Foo.validate(foo) return { "foo_x": foo.x, "foo_y": foo.y, } | dict(values) Note that we need to be a bit more careful inside a root validator with pre=True because the values are always passed in the form of a GetterDict, which is an immutable mapping-like object. So we cannot simply assign new values foo_x/foo_y to it like we would to a dictionary. But nothing is stopping us from returning the cleaned up data in the form of a regular old dict. To demonstrate, we can throw some test data at it: test_dict = {"a": "spam", "b": 3.14, "foo": {"x": True, "y": ".", "z": 0}} test_orm = BarNested(a="eggs", b=-1, foo=Foo(x=False, y="..", z=1)) test_flat = '{"a": "beans", "b": 0, "foo_x": true, "foo_y": ""}' bar1 = BarFlat.parse_obj(test_dict) bar2 = BarFlat.from_orm(test_orm) bar3 = BarFlat.parse_raw(test_flat) print(bar1.json(indent=4)) print(bar2.json(indent=4)) print(bar3.json(indent=4)) The output: { "a": "spam", "b": 3.14, "foo_x": true, "foo_y": "." } { "a": "eggs", "b": -1.0, "foo_x": false, "foo_y": ".." } { "a": "beans", "b": 0.0, "foo_x": true, "foo_y": "" } The first example simulates a common situation, where the data is passed to us in the form of a nested dictionary. The second example is the typical database ORM object situation, where BarNested represents the schema we find in a database. 
The third is just to show that we can still correctly initialize BarFlat without a foo argument. One caveat to note is that the validator does not get rid of the foo key, if it finds it in the values. If your model is configured with Extra.forbid that will lead to an error. In that case, you'll just need to have an extra line, where you coerce the original GetterDict to a dict first, then pop the "foo" key instead of getting it. Original post (flatten single field) If you need the nested Category model for database insertion, but you want a "flat" order model with category being just a string in the response, you should split that up into two separate models. Then in the response model you can define a custom validator with pre=True to handle the case when you attempt to initialize it providing an instance of Category or a dict for category. Here is what I suggest: from pydantic import BaseModel, validator class Category(BaseModel): name: str class OrderItemBase(BaseModel): name: str unit: str | None quantity: int class OrderItemCreate(OrderItemBase): category: Category class OrderItemResponse(OrderItemBase): category: str @validator("category", pre=True) def handle_category_model(cls, v: object) -> object: if isinstance(v, Category): return v.name if isinstance(v, dict) and "name" in v: return v["name"] return v Here is a demo: if __name__ == "__main__": insert_data = '{"name": "foo", "category": {"name": "bar"}, "quantity": 1}' insert_obj = OrderItemCreate.parse_raw(insert_data) print(insert_obj.json(indent=2)) ... # insert into DB response_obj = OrderItemResponse.parse_obj(insert_obj.dict()) print(response_obj.json(indent=2)) Here is the output: { "name": "foo", "unit": null, "quantity": 1, "category": { "name": "bar" } } { "name": "foo", "unit": null, "quantity": 1, "category": "bar" } One of the benefits of this approach is that the JSON Schema stays consistent with what you have on the model. If you use this in FastAPI that means the swagger documentation will actually reflect what the consumer of that endpoint receives. You could of course override and customize schema creation, but... why? Just define the model correctly in the first place and avoid headache in the future. | 8 | 9 |
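For completeness, root_validator is the Pydantic v1 spelling; under Pydantic v2 the same flattening can be written with model_validator(mode="before"). An untested sketch along the same lines as the example above:

from pydantic import BaseModel, model_validator

class Foo(BaseModel):
    x: bool
    y: str
    z: int

class BarFlat(BaseModel):
    a: str
    b: float
    foo_x: bool
    foo_y: str

    @model_validator(mode="before")
    @classmethod
    def flatten_foo(cls, data):
        if isinstance(data, dict) and "foo" in data:
            data = dict(data)                       # don't mutate the caller's dict
            foo = Foo.model_validate(data.pop("foo"))
            data.update(foo_x=foo.x, foo_y=foo.y)
        return data

print(BarFlat.model_validate({"a": "spam", "b": 3.14, "foo": {"x": True, "y": ".", "z": 0}}))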
75,284,890 | 2023-1-30 | https://stackoverflow.com/questions/75284890/how-to-cut-image-according-to-two-points-in-opencv | I have this input image (feel free to download it and try your solution, please): I need to find points A and B that are closest to the left down and right upper corner. And than I would like to cut of the image. See desired output: So far I have this function, but it does not find points A, B correctly: def CheckForLess(list1, val): return(all(x < val for x in list1)) def find_corner_pixels(img): # Get image dimensions height, width = img.shape[:2] # Find the first non-black pixel closest to the left-down and right-up corners nonempty = [] for i in range(height): for j in range(width): # Check if the current pixel is non-black if not CheckForLess(img[i, j], 10): nonempty.append([i, 1080 - j]) return min(nonempty) , max(nonempty) Can you help me please? The solution provided by Achille works on one picture, but if I change input image to this: It gives wrong output: | I noticed that your image has an alpha mask that already segment the foreground. This imply using the flag cv.IMREAD_UNCHANGED when reading the image with openCV (cv.imread(filename, cv.IMREAD_UNCHANGED)). If this is the case you can have a try to the following: import sys from typing import Tuple import cv2 as cv import numpy as np class DetectROI: def __init__(self, alpha_threshold: int = 125, display: bool = False, gaussian_sigma: float = 1., gaussian_window: Tuple[int, int] = (3, 3), relative_corner: float = 0.25, relative_line_length: float = 0.25, relative_max_line_gap: float = 0.02, working_size: Tuple[int, int] = (256, 256)): self.alpha_threshold = alpha_threshold self.display = display self.working_size = working_size self.gaussian_sigma = gaussian_sigma self.gaussian_window = gaussian_window self.relative_line_length = relative_line_length self.relative_max_line_gap = relative_max_line_gap self.relative_corner = relative_corner self._origin: Tuple[int, int] = (0, 0) self._src_shape: Tuple[int, int] = (0, 0) def __call__(self, src): # get cropped contour cnt_img = self.get_cropped_contour(src) left_lines, right_lines = self.detect_lines(cnt_img) x, y, w, h = self.get_bounding_rectangle(left_lines + right_lines) # top_left = (x, y) top_right = (x + w, y) # bottom_right = (x + w, y + h) bottom_left = (x, y + h) if self.display: src = cv.rectangle(src, bottom_left, top_right, (0, 0, 255, 255), 3) cv.namedWindow("Source", cv.WINDOW_KEEPRATIO) cv.imshow("Source", src) cv.waitKey() return bottom_left, top_right def get_cropped_contour(self, src): self._src_shape = tuple(src.shape[:2]) msk = np.uint8((src[:, :, 3] > self.alpha_threshold) * 255) msk = cv.resize(msk, self.working_size) cnt, _ = cv.findContours(msk, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE) cnt_img = cv.drawContours(np.zeros_like(msk), cnt, 0, (255,)) cnt = cnt[0] x, y, w, h = cv.boundingRect(np.array(cnt)) top_left = (x, y) # top_right = (x + w, y) bottom_right = (x + w, y + h) # bottom_left = (x, y + h) self._origin = top_left cnt_img = cnt_img[self._origin[1]:bottom_right[1], self._origin[0]:bottom_right[0]] if self.display: cv.namedWindow("Contours", cv.WINDOW_KEEPRATIO) cv.imshow("Contours", cnt_img) return cnt_img def detect_lines(self, img): img = cv.GaussianBlur(img, self.gaussian_window, self.gaussian_sigma) lines = cv.HoughLinesP(img, 1, np.pi / 180, 50, 50, int(self.relative_line_length*img.shape[0]), int(self.relative_max_line_gap*img.shape[0])) if self.display: lines_img = np.repeat(img[:, :, None], 3, axis=2) if lines is not 
None: for i in range(0, len(lines)): l = lines[i][0] cv.line(lines_img, (l[0], l[1]), (l[2], l[3]), (255, 0, 0), 2, cv.LINE_AA) # keep lines close to bottom left and bottom right images corner = self.relative_corner left_lines = [] right_lines = [] if lines is not None: # left side for i in range(0, len(lines)): l = lines[i][0] if (l[1] > (1 - corner) * img.shape[1] and l[0] < corner * img.shape[0]) \ or (l[3] > (1 - corner) * img.shape[1] and l[2] < corner * img.shape[0]): left_lines.append(l) elif (l[1] > (1 - corner) * img.shape[1] and l[0] > (1 - corner) * img.shape[0]) \ or (l[3] > (1 - corner) * img.shape[1] and l[2] > (1 - corner) * img.shape[0]): right_lines.append(l) if self.display: if lines is not None: for l in left_lines + right_lines: cv.line(lines_img, (l[0], l[1]), (l[2], l[3]), (0, 0, 255), 2, cv.LINE_AA) cv.namedWindow("Contours", cv.WINDOW_KEEPRATIO) cv.imshow("Contours", lines_img) return left_lines, right_lines def get_bounding_rectangle(self, lines): cnt = sum(([(l[0], l[1]), (l[2], l[3])] for l in lines), []) x, y, w, h = cv.boundingRect(np.array(cnt)) x += self._origin[0] y += self._origin[1] y = np.int32(np.round(y * self._src_shape[0] / self.working_size[0])) h = np.int32(np.round(h * self._src_shape[0] / self.working_size[0])) x = np.int32(np.round(x * self._src_shape[1] / self.working_size[1])) w = np.int32(np.round(w * self._src_shape[1] / self.working_size[1])) return x, y, w, h def main(argv): default_file = r'book.png' filename = argv[0] if len(argv) > 0 else default_file src = cv.imread(filename, cv.IMREAD_UNCHANGED) detector = DetectROI(display=True) return detector(src) if __name__ == "__main__": print("bottom_left: {}, top_right: {}".format(*main(sys.argv[1:]))) The underlying idea is the following: threshold the alpha mask to get the foreground compute the contour of the alpha mask detect the lines (assuming the right and left border to be rather strait) keep the lines that start from the bottom left and the bottom right of the image (drawn in red) Here is the obtained result I hope this is robust enough | 3 | 2 |
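To actually cut the image once the two corners come back, slice the array. This short usage sketch continues from the DetectROI class in the answer above (the variable names are mine), remembering that bottom_left is (left x, bottom y) and top_right is (right x, top y):

import cv2 as cv

src = cv.imread("book.png", cv.IMREAD_UNCHANGED)
bottom_left, top_right = DetectROI()(src)
x1, y2 = bottom_left      # left x, bottom y
x2, y1 = top_right        # right x, top y
cropped = src[y1:y2, x1:x2]
cv.imwrite("cropped.png", cropped)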
75,283,937 | 2023-1-30 | https://stackoverflow.com/questions/75283937/capture-real-time-stdout-and-stderr-when-run-a-function-in-a-process-python | I have a python function and want to run it as a separate process with multiprocessing package. def run(ctx: Context): print("hello world!") return ctx afterward running it as a separate process with the following script: import multiprocessing p = multiprocessing.Process(target=run, args=(ctx, )) p.start() p.join() Now, I need to capture live stdout and stderr of the above process. Is there any way like as: import subprocess proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE) while True: line = proc.stdout.readline() if not line: break But I need to pass the function not running a command with Popen. Do you know how can I read stdout when I run my function in a separate process? | My approach would be to create a custom context manager that can temporarily replace sys.stdout and sys.stderr with io.String() instances to capture the output and return this. For this you need to make the target of your Process a new function that can setup the context manager and return the results, for which a multiprocessing.Queue is used (this, by the way, would be needed anyway if you expect run to return its result back to the main process): from multiprocessing import Process, Queue from io import StringIO import sys class CaptureOutput: def __enter__(self): self._stdout_output = '' self._stderr_output = '' self._stdout = sys.stdout sys.stdout = StringIO() self._stderr = sys.stderr sys.stderr = StringIO() return self def __exit__(self, *args): self._stdout_output = sys.stdout.getvalue() sys.stdout = self._stdout self._stderr_output = sys.stderr.getvalue() sys.stderr = self._stderr def get_stdout(self): return self._stdout_output def get_stderr(self): return self._stderr_output def run(ctx): print("hello world!") print("It works!", file=sys.stderr) raise Exception('Oh oh!') # Comment out to have a successful completion return ctx def worker(ctx, queue): import traceback with CaptureOutput() as capturer: try: result = run(ctx) except Exception as e: result = e print(traceback.format_exc(), file=sys.stderr) queue.put((result, capturer.get_stdout(), capturer.get_stderr())) if __name__ == '__main__': queue = Queue() ctx = None # for demo purposes p = Process(target=worker, args=(ctx, queue)) p.start() # Must do this call before call to join: result, stdout_output, stderr_output = queue.get() p.join() print('stdout:', stdout_output) print('stderr:', stderr_output) Prints: stdout: hello world! stderr: It works! Traceback (most recent call last): File "C:\Booboo\test\test.py", line 44, in worker result = run(ctx) File "C:\Booboo\test\test.py", line 36, in run raise Exception('Oh oh!') # Comment out to have a successful completion Exception: Oh oh! | 4 | 2 |
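The hand-rolled CaptureOutput class can also be replaced with the standard library's contextlib.redirect_stdout/redirect_stderr. As in the answer above, the capture is delivered at the end of the worker rather than truly live, but with less code; a sketch:

import io
import traceback
from contextlib import redirect_stdout, redirect_stderr
from multiprocessing import Process, Queue

def run(ctx):
    print("hello world!")
    return ctx

def worker(ctx, queue):
    out, err = io.StringIO(), io.StringIO()
    with redirect_stdout(out), redirect_stderr(err):
        try:
            result = run(ctx)
        except Exception as exc:
            result = exc
            traceback.print_exc()            # goes into the redirected stderr buffer
    queue.put((result, out.getvalue(), err.getvalue()))

if __name__ == "__main__":
    queue = Queue()
    p = Process(target=worker, args=(None, queue))
    p.start()
    result, stdout_text, stderr_text = queue.get()   # read before join
    p.join()
    print("stdout:", stdout_text)
    print("stderr:", stderr_text)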
75,245,758 | 2023-1-26 | https://stackoverflow.com/questions/75245758/how-to-use-poetry-in-google-colab | Context I am currently working on a team project, where we need to train neural networks. Some members are working on their local computer, and some on Colab (for GPU usage). We need to have the same dependencies. I am already familiar in using poetry on a local computer, but not on Colab, and I was wondering how to use it in Colab. So I did some tests, and I encountered some issues. Maybe I can find some answers here. Thank you in advance! π Issues 1st issue: poetry add <package> does not update pyproject.toml I want to add a new package, suppose torch. According to poetry's documentation, to install a new package, we need to run poetry add <package>. Since I run this command for the first time, a virtual environment is created, as well as the poetry.lock. But no packages were installed. Moreover, the poetry.lock file is updated, but not the pyproject.toml. This happens only on Colab. I tried on my local computer, and the command indeed automatically update the pyproject.toml file as well. 2nd issue: poetry run pip install <package> does not update pyproject.toml Instead, we can install the package with poetry run pip install <package>. I saw this command on the following GitHub gist. The packages are now installed in the virtual environment, but the pyproject.toml was not updated. Here is a link to the Colab notebook I used for those tests. Thank you again! | Hello for everyone reading this post. I settled on a solution. The main issue was that pyproject.toml was not updated automatically, so I just decided to modify it by hand. Here is the steps for using poetry in Colab, whether you create your own poetry project, or cloning a repo on Github. https://github.com/elise-chin-adway/poetry-and-colab/blob/main/Using_python_poetry_in_Google_Colab.ipynb I don't know if it is the best solution or not, but I hope it will help other people :) | 6 | 6 |
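For reference, the Colab side of such a workflow usually boils down to a handful of notebook-cell commands. This is a rough, untested sketch of one Colab cell: the repository URL is a placeholder, and "virtualenvs.create false" is one way to make poetry install straight into Colab's own interpreter instead of a virtual environment:

!pip install -q poetry
!git clone https://github.com/<your-user>/<your-repo>.git
%cd <your-repo>
!poetry config virtualenvs.create false --local
!poetry install --no-root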
75,291,812 | 2023-1-31 | https://stackoverflow.com/questions/75291812/how-do-i-normalize-a-path-format-to-unix-style-while-on-windows | I am storing paths in a json file using a python script. I want the paths to be stored in the same format (Unix-style) no matter which OS the script is run on. So basically I want to run the os.path.normpath function, but it only converts paths to Unix-style instead of changing its function depending on the host OS. What is the best way to do this? | You can convert Windows-style path to UNIX-style after calling os.path.normpath. The conversion can be conveniently done with pathlib.PureWindowsPath.as_posix: import os.path from pathlib import PureWindowsPath path = r'\foo\\bar' path = os.path.normpath(path) if os.path.sep == '\\': path = PureWindowsPath(path).as_posix() print(path) This outputs, in Windows: /foo/bar | 4 | 7 |
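Wrapped up as a small helper, so the conversion only kicks in where backslashes really are the separator (just a convenience around the answer above):

import os.path
from pathlib import PureWindowsPath

def to_posix(path: str) -> str:
    """Normalize `path` and return it with forward slashes on any OS."""
    path = os.path.normpath(path)
    if os.path.sep == "\\":                      # only on Windows
        path = PureWindowsPath(path).as_posix()
    return path

print(to_posix("foo/bar/../baz"))                # 'foo/baz' on every platform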
75,304,110 | 2023-1-31 | https://stackoverflow.com/questions/75304110/keras-model-predicts-different-results-using-the-same-input | I built a Keras sequential model on the simple dataset. I am able to train the model, however every time I try to get a prediction on the same input I get different values. Anyone knows why? I read through different Stackoverflow here (Why the exactly identical keras model predict different results for the same input data in the same env, Keras saved model predicting different values on different session, different prediction after load a model in keras), but couldn't find the answer. I tried to set the Tensorflow seed and still getting different results. Here is my code from pandas import concat from pandas import DataFrame # create sequence length = 10 sequence = [i/float(length) for i in range(length)] # create X/y pairs df = DataFrame(sequence) df = concat([df, df.shift(1)], axis=1) df.dropna(inplace=True) print(df) # convert to LSTM friendly format values = df.values X, y = values[:, 0], values[:, 1] X = X.reshape(len(X), 1, 1) print(X.shape, y.shape) output is: 0 0 1 0.1 0.0 2 0.2 0.1 3 0.3 0.2 4 0.4 0.3 5 0.5 0.4 6 0.6 0.5 7 0.7 0.6 8 0.8 0.7 9 0.9 0.8 (9, 1, 1) (9,) Then start building the model #configure network from tensorflow import keras from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM tf.random.set_seed(1337) n_batch = len(X) n_neurons = 10 #design network model = Sequential() model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(X,y,epochs=2,batch_size=n_batch,verbose=1,shuffle=False) Now every time I run the following code to get the prediction I get different results as you can see here model.predict(X) ********output************** array([[0.03817442], [0.07164046], [0.10493257], [0.13797525], [0.17069395], [0.20301574], [0.23486984], [0.26618803], [0.29690543]], dtype=float32) model.predict(X) ********output************** array([[0.04415776], [0.08242793], [0.12048437], [0.15823033], [0.19556962], [0.2324073 ], [0.26865062], [0.3042098 ], [0.33899906]], dtype=float32) | The problem is setting stateful=True in your LSTM layer, as this keeps the state between predict calls, so each prediction depends on previous predictions. So as a solution, set stateful=False. | 3 | 3 |
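If the stateful LSTM is actually intended (for example, stateful online prediction on ordered data), the other option is to clear the carried-over state before each independent prediction. Continuing from the model and X defined in the question:

model.reset_states()
first = model.predict(X)
model.reset_states()
second = model.predict(X)
# `first` and `second` now match, because each predict starts from a zeroed state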
75,302,200 | 2023-1-31 | https://stackoverflow.com/questions/75302200/python-types-literal-of-logging-level-as-type | the following code: import logging print(type(1)) print(type(logging.WARNING)) prints: <class 'int'> <class 'int'> yet, according to mypy, the first line of this code snippet is legal, but the second is not (Variable "logging.WARNING" is not valid as a type): OneOrTwo = Literal[1,2] # ok WarningOrError = Literal[logging.WARNING, logging.ERROR] # not ok I do not understand why the definition of OneOrTwo is ok but WarningOrError is not. I would like also know what could be done to use a legal equivalent of WarningOrError, i.e. something I could use like this: def a(arg: WarningOrError)->None: pass note: mypy redirects to https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases , but this did not clarify things for me. | Literal only accepts literal arguments. PEP586 has very rigid definitions of what constitutes a "literal" in this context. You can read about it here. The problem with your definition of WarningOrError is that the actual definition of logging.ERROR and WARNING makes them mutable (you can look at the source code of logging here), and hence illegal literals. Constant expressions like 1 or 2, on the other hand, can be known statically (they never change!) and therefore are acceptable literals. Among the accepted legal literals, Enum objects can help you achieve what you want: import logging from typing import Literal from enum import Enum class Log(Enum): ERROR = logging.ERROR WARNING = logging.WARNING LogType = Literal[Log.ERROR, Log.WARNING] def a(arr: LogType): pass | 4 | 5 |
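A small refinement on the accepted answer: making the enum an IntEnum keeps the members usable anywhere logging expects the plain integer level, while still being legal inside Literal. A sketch:

import logging
from enum import IntEnum
from typing import Literal

class Level(IntEnum):
    WARNING = logging.WARNING
    ERROR = logging.ERROR

WarningOrError = Literal[Level.WARNING, Level.ERROR]

def a(arg: WarningOrError) -> None:
    logging.log(arg, "something happened")   # IntEnum members are valid ints

a(Level.ERROR)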
75,300,394 | 2023-1-31 | https://stackoverflow.com/questions/75300394/pandas-replace-certain-values-within-groups-using-group-maximus | Here's my table: category number probability 1102 24 0.3 1102 18 0.6 1102 16 0.1 2884 24 0.16 2884 15 0.8 2884 10 0.04 so I want to replace the number column that has probability lower than 15% with the number that has the highest probability within groups: category number probability 1102 24 0.3 1102 18 0.6 1102 18 0.1 2884 24 0.16 2884 15 0.8 2884 15 0.04 | Find the number corresponding to max prob in a group then use loc to update values n = df.sort_values('probability').groupby('category')['number'].transform('last') df.loc[df['probability'] <= 0.15, 'number'] = n category number probability 0 1102 24 0.30 1 1102 18 0.60 2 1102 18 0.10 3 2884 24 0.16 4 2884 15 0.80 5 2884 15 0.04 | 3 | 3 |
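An equivalent without sorting, using idxmax to pick the most probable row per group (same threshold as in the answer above):

import pandas as pd

df = pd.DataFrame({
    "category": [1102, 1102, 1102, 2884, 2884, 2884],
    "number": [24, 18, 16, 24, 15, 10],
    "probability": [0.3, 0.6, 0.1, 0.16, 0.8, 0.04],
})

# number of the most probable row in each category, indexed by category
best = (df.loc[df.groupby("category")["probability"].idxmax()]
          .set_index("category")["number"])

df.loc[df["probability"] <= 0.15, "number"] = df["category"].map(best)
print(df)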
75,296,495 | 2023-1-31 | https://stackoverflow.com/questions/75296495/sum-the-values-of-a-column-with-python | I'm new to python, I would like to read a column of values from a csv file and add them together, but only those to the left of the ";" My csv File: Name Code Angel 19;90 Eduardo 20;21 Miguel 30;45 I would like to be able to sum only the numbers to the left of the "Code" column, so that my output is "19+20+30 = 69". I tried deleting the ";" and converting the string to int, but the sum just joins the numbers together and I have this output: Your final sum is : 1990 +2021 +3045 = 7056 | If you need to sum the values before ;, use Series.str.extract with casting to integers and then sum: out = df['Code'].str.extract('(.*);', expand=False).astype('int').sum() Or use Series.str.split and select the first values of the lists by str[0]: out = df['Code'].str.split(';').str[0].astype('int').sum() If you need to sum all values, create a DataFrame with expand=True and sum first per rows and then the Series: out = df['Code'].str.split(';', expand=True).astype('int').sum().sum() If you need the sum without ;, use Series.str.replace: out = df['Code'].str.replace(';','', regex=True).astype('int').sum() | 5 | 5 |
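End to end, reading the file and printing the sum might look like the sketch below. StringIO stands in for the real CSV path here, and the file is assumed to be comma-separated with the Code column holding "left;right" values:

import pandas as pd
from io import StringIO

csv_text = """Name,Code
Angel,19;90
Eduardo,20;21
Miguel,30;45
"""

df = pd.read_csv(StringIO(csv_text))                 # or pd.read_csv("file.csv")
left_sum = df["Code"].str.split(";").str[0].astype(int).sum()
print("Your final sum is :", left_sum)               # 69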
75,295,132 | 2023-1-31 | https://stackoverflow.com/questions/75295132/how-to-place-specific-constraints-on-the-parameters-of-a-pydantic-model | How can I place specific constraints on the parameters of a Pydantic model? In particular, I would like: start_date must be at least "2019-01-01" end_date must be greater than start_date code must be one and only one of the values in the set cluster must be one and only one of the values in the set The code I'm using is as follows: from fastapi import FastAPI from pydantic import BaseModel from typing import Set import uvicorn app = FastAPI() class Query(BaseModel): start_date: str end_date: str code: Set[str] = { "A1", "A2", "A3", "A4", "X1", "X2", "X3", "X4", "X5", "Y1", "Y2", "Y3" } cluster: Set[str] = {"C1", "C2", "C3"} @app.post("/") async def read_table(query: Query): return {"msg": query} if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000) | Pydantic has a set of constrained types that allows you to define specific constraints on values. start_date must be at least "2019-01-01" >>> class Foo(BaseModel): ... d: condate(ge=datetime.date.fromisoformat('2019-01-01')) >>> Foo(d=datetime.date.fromisoformat('2018-01-12')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pydantic\main.py", line 342, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for Foo d ensure this value is greater than or equal to 2019-01-01 (type=value_error.number.not_ge; limit_value=2019-01-01) >>> Foo(d=datetime.date.fromisoformat('2020-01-12')) Foo(d=datetime.date(2020, 1, 12)) end_date must be greater than start_date For more complicated rules, you can use a root validator: from pydantic import BaseModel, root_validator from datetime import date class StartEnd(BaseModel): start: date end: date @root_validator def validate_dates(cls, values): if values['start'] > values['end']: raise ValueError('start is after end') return values StartEnd(start=date.fromisoformat('2023-01-01'), end=date.fromisoformat('2022-01-01')) Gives: pydantic.error_wrappers.ValidationError: 1 validation error for StartEnd __root__ start is after end (type=value_error) For code and cluster, you can use an Enum instead from pydantic import BaseModel from enum import Enum # StrEnum in 3.11+ class ClusterEnum(str, Enum): C1 = "C1" C2 = "C2" C3 = "C3" class ClusterVal(BaseModel): cluster: ClusterEnum print(ClusterVal(cluster='C3').cluster.value) # outputs C3 | 4 | 3 |
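Under Pydantic v2 the same four constraints can be expressed with Field bounds, Literal and a model_validator. This is an untested sketch of an equivalent model (it assumes your Pydantic v2 version supports ge on date fields):

from datetime import date
from typing import Literal
from pydantic import BaseModel, Field, model_validator

class Query(BaseModel):
    start_date: date = Field(ge=date(2019, 1, 1))
    end_date: date
    code: Literal["A1", "A2", "A3", "A4", "X1", "X2", "X3", "X4", "X5", "Y1", "Y2", "Y3"]
    cluster: Literal["C1", "C2", "C3"]

    @model_validator(mode="after")
    def end_after_start(self):
        if self.end_date <= self.start_date:
            raise ValueError("end_date must be greater than start_date")
        return self

print(Query(start_date="2020-05-01", end_date="2021-05-01", code="X3", cluster="C2"))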
75,276,563 | 2023-1-29 | https://stackoverflow.com/questions/75276563/tkinter-scroll-with-touchpad-mouse-gestures-two-fingers-scrolling-in-tkint | I wanted to implement two finger scrolling in tkinter. Here is the result of my own attempt: import tkinter as tk class Scene: def __init__(self, canvas): self.canvas = canvas self.elements = [ { "type": "rect", "x": canvas.winfo_width() / 2, "y": canvas.winfo_height() / 2, "width": 200, "height": 200, "color": (55 / 255, 55 / 255, 10 / 255), }, { "type": "rect", "x": 100, "y": 300, "width": 200, "height": 200, "color": (155 / 255, 200 / 255, 10 / 255), }, ] def update_scene(self, offset): for element in self.elements: element["x"] -= offset[0] element["y"] -= offset[1] self.render_scene() def render_scene(self): self.canvas.delete("all") for element in self.elements: if element["type"] == "rect": self.canvas.create_rectangle( element["x"], element["y"], element["x"] + element["width"], element["y"] + element["height"], fill=f"#{int(element['color'][0] * 255):02x}{int(element['color'][1] * 255):02x}{int(element['color'][2] * 255):02x}", ) else: print(f"Error: type {element['type']} is not supported.") root = tk.Tk() root.geometry("{}x{}".format(800, 600)) canvas = tk.Canvas(root) canvas.pack(fill="both", expand=True) canvas_scroll = [0, 0] scene = Scene(canvas) scene.render_scene() def on_mouse_scroll(event): canvas_scroll[0] = event.delta canvas_scroll[1] = event.delta scene.update_scene(canvas_scroll) canvas.bind("<MouseWheel>", on_mouse_scroll) root.mainloop() The above only works in one diagonal/direction, instead of any direction (up, down, left, right, and all four diagonals) The above was inspired by a Javascript snippet I found here: https://jsfiddle.net/qmyho24r/ I know using Shift-MouseWheel works, but then I have to also press the shift key, instead of just using the trackpad and two fingers (like in the Javascript example). How can I use two fingers scrolling in Tkinter? | tkinter does only support horizontal scrolling on windows from patchlevel 8.6.10 <= I've created a small example that works for me with tkinter 8.6.12 on Win11. When using two fingers with a little gap between them I can successfully scroll in both direction and move the View in a circle. I retrieve two different events depending on which direction I swap. This is also mentioned in the documentation. 
import tkinter as tk import sys #https://stackoverflow.com/a/13874620/13629335 OS = sys.platform def horizontal_scroll(event): if OS in ('win32','darwin'): canvas.xview_scroll(int(event.delta/120), 'units') elif OS == 'linux': if event.num == 5: canvas.xview_scroll(-1, 'units') elif event.num == 4: canvas.xview_scroll(1, 'units') def vertical_scroll(event): if OS in ('win32','darwin'): canvas.yview_scroll(int(event.delta/120), 'units') elif OS == 'linux': if event.num == 5: canvas.yview_scroll(-1, 'units') elif event.num == 4: canvas.yview_scroll(1, 'units') root = tk.Tk('test') if int(root.tk.call("info", "patchlevel").split('.')[-1]) >= 10: #https://docs.activestate.com/activetcl/8.6/get/relnotes/ canvas = tk.Canvas(root,highlightthickness=0,bd=0) #something to show v = viewsize = 150 cw = canvas.winfo_reqwidth()+v ch = canvas.winfo_reqheight()+v/2 s = square = 50 canvas.create_rectangle(0,0, s,s, tags=('NW',)) canvas.create_rectangle(cw-s,0, cw,s, tags=('NE',)) canvas.create_rectangle(cw-s,ch-s, cw,ch, tags=('SE',)) canvas.create_rectangle(0,ch-s, s,ch, tags=('SW',)) canvas.pack(fill='both', expand=True, padx=10) #update scrollregion canvas.configure(scrollregion=canvas.bbox('all')) #bindings #https://stackoverflow.com/a/17457843/13629335 if OS in ('win32','darwin'): #https://apple.stackexchange.com/q/392936 root.bind('<MouseWheel>', vertical_scroll) root.bind('<Shift-MouseWheel>', horizontal_scroll) if OS == 'linux': #https://stackoverflow.com/a/17452217/13629335 root.bind('<Button-4>', vertical_scroll) root.bind('<Button-5>', vertical_scroll) root.bind('<Shift-Button-4>', horizontal_scroll) root.bind('<Shift-Button-5>', horizontal_scroll) root.mainloop() else: print('at least for windows it is supported at patchlevel 8.6.10') root.destroy() | 4 | 2 |
75,237,114 | 2023-1-25 | https://stackoverflow.com/questions/75237114/max-retries-exceeded-with-url-failed-to-establish-a-new-connection-errno-111 | I keep getting this error: HTTPConnectionPool(host='127.0.0.1', port=8001): Max retries exceeded with url: /api/v1/auth/sign_in (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0f8cbdd430>: Failed to establish a new connection: [Errno 111] Connection refused')) I searched through the stackoverflow and couldn't find the solution that would help me. Here's my code example: host = 'http://127.0.0.1:8001' response = requests.request(method=request_data['method'], url=f'{host}/{settings.ACCOUNTS_API_PREFIX}{request_data["url"]}', json=json_data, params=params, headers=headers, ) Basically I'm trying to send a POST request to authenticate myself on the service, however I keep getting the above error. I have 2 containers - one is a web application (Django), another one is accounts that stores all details of the users to authenticate them. Both containers are up and running, I can open the website, I can open the API swagger for accounts, however I can't send the POST request and get any response. Containers settings as follows: container_1: build: context: ./container_1 dockerfile: Dockerfile env_file: - '.env' stdin_open: true tty: true ports: - '8000:8000' expose: - 8000 volumes: - ./data:/data working_dir: /data command: [ "./start.sh" ] networks: - web container_2: context: ./container_2 dockerfile: Dockerfile env_file: 'accounts/settings/.env' stdin_open: true tty: true environment: - 'DJANGO_SETTINGS_MODULE=project.settings' expose: - 8000 ports: - "8001:8000" volumes: - ./data:/app networks: - web Can someone assist me to figure it out? | Answer of @jordanm was right and it fixed my problem: Change host = 'http://127.0.0.1:8001' to host = 'http://container_2:8000' | 5 | 12 |
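To keep the same code working both inside Docker and when run directly on the host, the base URL is usually read from the environment instead of being hard-coded. A sketch of that pattern: ACCOUNTS_HOST is an invented variable name, the sign-in path comes from the error message above, and the JSON body is a placeholder:

import os
import requests

# inside compose: set ACCOUNTS_HOST=http://container_2:8000 under `environment:`
# on the host:    export ACCOUNTS_HOST=http://127.0.0.1:8001
host = os.environ.get("ACCOUNTS_HOST", "http://container_2:8000")

response = requests.post(
    f"{host}/api/v1/auth/sign_in",
    json={"email": "...", "password": "..."},   # placeholder payload
)
print(response.status_code)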
75,291,343 | 2023-1-30 | https://stackoverflow.com/questions/75291343/what-is-the-internal-load-factor-of-a-sets-in-python | I am trying to find out what the internal load factor is for the Python sets. For dictionary which uses a hash table with a load factor of 0.66 (2/3) is. The number of buckets start at 8 and when the 6th key is inserted the number of buckets increases to 16 The table below shows the shift in buckets. bucket shift 8 5 16 10 32 21 64 42 128 85 This can be seen with de following Python code where the size of a dictionary and sets is shows with the getsizeof method: import sys d = {} s = set() for x in range(25): d[x] = 1 s.add(x) print(len(d), sys.getsizeof(d), sys.getsizeof(s)) # of elements memory used for dict memory used for sets 1 232 216 2 232 216 3 232 216 4 232 216 5 232 728 6 360 728 7 360 728 8 360 728 9 360 728 10 360 728 11 640 728 12 640 728 13 640 728 14 640 728 15 640 728 16 640 728 17 640 728 18 640 728 19 640 2264 20 640 2264 21 640 2264 22 1176 2264 23 1176 2264 24 1176 2264 25 1176 2264 The above table shows that the shift in the buckets correct is for the dictionary, but not for the sets. The memory in sets is different. I am trying to find out what the load factor is for a set. Is that also 2/3? Or am I doing something wrong with the code? | Currently, it's about 3/5. See the source: if ((size_t)so->fill*5 < mask*3) return 0; return set_table_resize(so, so->used>50000 ? so->used*2 : so->used*4); fill is the number of occupied table cells (including "deleted entry" markers), and mask is 1 less than the total table capacity. | 3 | 5 |
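The 3/5 threshold can be checked empirically by watching when getsizeof jumps as elements are added. A small sketch (exact byte counts vary by Python build; the element counts at which the jumps happen are the interesting part):

import sys

s = set()
prev = sys.getsizeof(s)
for i in range(200):
    s.add(i)
    size = sys.getsizeof(s)
    if size != prev:
        # len(s) is the element count that triggered the resize
        print(f"resized when adding element #{len(s)}: {prev} -> {size} bytes")
        prev = size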
75,284,768 | 2023-1-30 | https://stackoverflow.com/questions/75284768/modulenotfounderror-after-installing-a-python-package | Problem summary I am very new to python package development. I developed a package and published it at TestPyPI. I install this package trough pip with no errors. However, python is giving me a "ModuleNotFoundError" when I try to import it, and I have no idea why. Can someone help me? Repro steps First, I install the package with: pip install -i https://test.pypi.org/simple/ spark-map==0.2.76 Then, I open a new terminal, start the python interpreter, and try to import this package, but python gives me a ModuleNotFoundError: >>> import spark_map Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'spark_map' What I discover When I cd to the root folder of the package, and open the python interpreter, and run import spark_map, it works fine with no errors; That pip did not installed the package succesfully; However I checked this. I got no error messages when I install the package, and when I run pip list after the pip install command, I see spark_map on the list of installed packages. > pip list ... many packages spark-map 0.2.76 ... more packages The folder where spark_map was installed can be out of the module search path of Python; I checked this as well. pip is installing the package on a folder called Python310\lib\site-packages, and this folder is included inside the sys.path variable: >>> import sys >>> for path in sys.path: ... print(path) C:\Users\pedro\AppData\Local\Programs\Python\Python310\python310.zip C:\Users\pedro\AppData\Local\Programs\Python\Python310\DLLs C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib C:\Users\pedro\AppData\Local\Programs\Python\Python310 C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages\win32 C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages\win32\lib C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages\Pythonwin Information about the system I am on Windows 10, Python 3.10.9, trying to install and import the spark_map package, version 0.2.76.(https://test.pypi.org/project/spark-map/). Information about the code The package source code is hosted at GitHub, and the folder structure of this package is essentially this: root β ββββspark_map β ββββ__init__.py β ββββfunctions.py β ββββmapping.py β ββββtests β ββββfunctions β ββββmapping β ββββ.gitignore ββββLICENSE ββββpyproject.toml ββββREADME.md ββββREADME.rst The pyproject.toml file of the package: [build-system] requires = ["setuptools>=61.0", "toml"] build-backend = "setuptools.build_meta" [project] name = "spark_map" version = "0.2.76" authors = [ { name="Pedro Faria", email="[email protected]" } ] description = "Pyspark implementation of `map()` function for spark DataFrames" readme = "README.md" requires-python = ">=3.7" license = { file = "LICENSE.txt" } classifiers = [ "Programming Language :: Python :: 3", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", ] dependencies = [ "pyspark", "setuptools", "toml" ] [project.urls] Homepage = "https://pedropark99.github.io/spark_map/" Repo = "https://github.com/pedropark99/spark_map" Issues = "https://github.com/pedropark99/spark_map/issues" [tool.pytest.ini_options] pythonpath = [ "." ] [tool.setuptools] py-modules = [] What I tried As @Dorian Turba suggested, I moved the source code into a src folder. 
Now, the structure of the package is this: root ββββsrc β ββββspark_map β ββββ__init__.py β ββββfunctions.py β ββββmapping.py β ββββtests ββββ.gitignore ββββLICENSE ββββpyproject.toml ββββREADME.md ββββREADME.rst After that, I executed python -m pip install -e . (the log of this command is on the image below). The package was compiled and installed succesfully. However, when I open a new terminal, in a different location, and try to run python -c "import spark_map", I still get the same error. I also tried to start a virtual environment (with python -m venv env), and install the package inside this virtual environment (with pip install -e .). Then, I executed python -c "import spark_map". But the problem still remains. I executed pip list too, to check if the package was installed. The full log of commands is on the image below: | The source of the problem The source of the problem is at the "build process" of the package. In other words, pip install was installing a "not valid package". Basically, I use setuptools to build the package. When I compiled (or "build" the package with python -m build, the source code of the package (that is, all contents of the src directory), was not included in the compiled TAR archive. Fix using setuptools The documentation for setuptools talks about this issue of finding the source code for your project. In essence, setuptools was not finding the source code of the package. So I needed to help him find these files, by adding these two options to my pyproject.toml file: [tool.setuptools] packages = ["spark_map"] package-dir = {"" = "src"} How can you identify this problem ? If you are having a similar problem at installing and importing your package, you might have this same problem, as I did. To check if that is your case, build your project with python -m build. Then, open the source distribution of your package (that is, the TAR archive), and check if the source code is there, inside this TAR file. If not, than, you have this exact problem. | 7 | 5 |
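A quick way to spot this class of problem is to list what actually ended up in the source distribution after python -m build; if no spark_map/*.py files show up, pip has been installing an empty package. A small sketch using only the standard library:

import tarfile
from pathlib import Path

# pick one of the built sdists in dist/ (adjust the glob if you have several)
sdist = sorted(Path("dist").glob("*.tar.gz"))[-1]
with tarfile.open(sdist) as tar:
    for name in tar.getnames():
        print(name)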
75,286,085 | 2023-1-30 | https://stackoverflow.com/questions/75286085/what-is-the-difference-between-assert-and-a-if-condition-embedded-with-a-raise | I am currently learning about the assert statement in python and I can't seem to understand its main usage and what separates it from simply raising an exception. If I wrote an if statement along with my condition and simply raised an exception if the condition is not met, how is that different from using the assert statement? def times_ten(number): return number * 100 result = times_ten(20) assert result == 200, 'Expected times_ten(20) to return 200, instead got ' + str(result) To me, both codes do the same thing: def times_ten(number): return number * 100 result = times_ten(20) if result != 200: raise Exception('Expected times_ten(20) to return 200, instead got ' + str(result)) | Not much. The documentation provides the equivalent if statements to an assert statement. assert expression is the same as if __debug__: if not expression: raise AssertionError() while assert expression1, expression2 is the same as if __debug__: if not expression1: raise AssertionError(expression2) As you can see, it's an AssertionError, not a generic Exception, that is raised when the condition is false, and there is a guard that can be set to False on startup (using the -O option) to prevent the assertion from being checked at all. | 3 | 4 |
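The -O behaviour is the practical difference worth remembering: saving the first snippet as a script and running it both ways shows it, while a plain if/raise would fail in both runs. A small demo (file name is my own):

# demo_assert.py
def times_ten(number):
    return number * 100        # deliberate bug

result = times_ten(20)
assert result == 200, f"Expected 200, instead got {result}"
print("no assertion checked")

# python demo_assert.py      -> raises AssertionError
# python -O demo_assert.py   -> prints "no assertion checked"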
75,282,511 | 2023-1-30 | https://stackoverflow.com/questions/75282511/df-to-table-throw-error-typeerror-init-got-multiple-values-for-argument | I have dataframe in pandas :- purchase_df. I want to convert it to sql table so I can perform sql query in pandas. I tried this method purchase_df.to_sql('purchase_df', con=engine, if_exists='replace', index=False) It throw an error TypeError: __init__() got multiple values for argument 'schema' I have dataframe name purchase_df and I need to perform sql query on it. I need to perform sql query on this dataframe like this ....engine.execute('''select * from purchase_df where condition'''). For this I need to convert dataframe into sql table as in our server pandas_sql is not installed only sql alchemy is installed. I ran this code in pycharm locally and it work perfectly fine but when i tried this in databrick notebook it is showing an error. Even though week ago it was running fine in databrick notebook too. Help me to fix this. note:- pandas version '1.3.4' Name: SQLAlchemy Version: 2.0.0 | It seems that the version 2.0.0 (realeased on January 26, 2023) of SQLAlchemy is not compatible with earlier versions of pandas. I suggest you to upgrade your pandas version to the latest (version 1.5.3) with : pip install --upgrade pandas Or: conda upgrade pandas | 23 | 29 |
75,274,640 | 2023-1-29 | https://stackoverflow.com/questions/75274640/how-to-get-key-and-value-instead-of-only-value-when-filtering-with-jmespath | Input data: s = {'111': {'name': 'john', 'exp': '1'}, '222': {'name': 'mia', 'exp': '1'}} Code: import jmespath jmespath.search("(*)[?name=='john']", s) Output: [{'name': 'john', 'exp': '1'}] Output I want: [{'111': {'name': 'john', 'exp': '1'}}] | Convert the dictionary to the list l1 = [{'key': k, 'value': v} for k, v in s.items()] gives [{'key': '111', 'value': {'name': 'john', 'exp': '1'}}, {'key': '222', 'value': {'name': 'mia', 'exp': '1'}}] Select the values where the attribute name is john l2 = jmespath.search('[?value.name == `john`]', l1) gives [{'key': '111', 'value': {'name': 'john', 'exp': '1'}}] Convert the list back to the dictionary s2 = dict([[i['key'], i['value']] for i in l2]) gives the expected result {'111': {'name': 'john', 'exp': '1'}} Example of complete code for testing #!/usr/bin/python3 import jmespath s = {'111': {'name': 'john', 'exp': '1'}, '222': {'name': 'mia', 'exp': '1'}} # '333': {'name': 'john', 'exp': '1'}} l1 = [{'key': k, 'value': v} for k, v in s.items()] print(l1) l2 = jmespath.search('[?value.name == `john`]', l1) print(l2) s2 = dict([[i['key'], i['value']] for i in l2]) print(s2) | 4 | 2 |
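If keeping the original keys is all that's needed, a plain dict comprehension does the same filtering without jmespath at all:

s = {'111': {'name': 'john', 'exp': '1'}, '222': {'name': 'mia', 'exp': '1'}}

result = {k: v for k, v in s.items() if v.get('name') == 'john'}
print(result)                                  # {'111': {'name': 'john', 'exp': '1'}}
# wrap it if the list-of-dicts shape is required:
print([{k: v} for k, v in result.items()])     # [{'111': {'name': 'john', 'exp': '1'}}]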
75,275,563 | 2023-1-29 | https://stackoverflow.com/questions/75275563/attributeerror-module-sqlalchemy-has-no-attribute-all | In my GitHub CI I get errors like the one below since today: File "/home/runner/.local/lib/python3.8/site-packages/fb4/login_bp.py", line 12, in <module> from fb4.sqldb import db File "/home/runner/.local/lib/python3.8/site-packages/fb4/sqldb.py", line 8, in <module> db = SQLAlchemy() File "/home/runner/.local/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 758, in __init__ _include_sqlalchemy(self, query_class) File "/home/runner/.local/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 112, in _include_sqlalchemy for key in module.__all__: AttributeError: module 'sqlalchemy' has no attribute '__all__' CRITICAL: Exiting due to uncaught exception <class 'ImportError'> without being aware of any significant commit that might cause this. My local tests and my Jenkins CI still work. I changed the matrix to stick to python 3.8 instead of also trying 3.9, 3.10 and 3.11 also taking into account that a similar problem in python 3.9 AttributeError: module 'posix' has no attribute '__all__' was due to missing 3.9 support. How can the above error could be debugged and mitigated? My assumption is that the problem is in the setup/environment or some strange behaviour change of GitHub actions, Python, pip or the test environment or whatever. I am a committer of the projects involved which are: https://github.com/WolfgangFahl/pyOnlineSpreadSheetEditing and potentially https://github.com/WolfgangFahl/pyFlaskBootstrap4 Update: After following the suggestions by @snakecharmerb the logs Now show a version conflict RROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts The conflict is caused by: The user requested Flask~=2.0.2 bootstrap-flask 1.8.0 depends on Flask flask-dropzone 1.6.0 depends on Flask flask-login 0.6.2 depends on Flask>=1.0.4 flask-httpauth 1.0.0 depends on Flask flask-sqlalchemy 3.0.2 depends on Flask>=2.2 Which is interesting since i am trying to avoid the ~ notation ... and indeed it was a typo ... let's see whether the fix to upgrade Flask-SQLAlchemy>=3.0.2 works now. I have accepted the answer after setting the version as suggested. There are followup problems but the question is answered. | It seems the .__all__ attribute has been removed in the recently released SQLAlchemy 2.0. You may need to pin the SQLAlchemy version in your config somehow. Or ensure that you are using Flask-SQLAlchemy 3.0.2 or later, as this issue suggests that version has the required fix. | 27 | 27 |
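Whichever pin you choose (SQLAlchemy<2.0 or Flask-SQLAlchemy>=3.0.2), it helps to have CI print what the resolver actually installed, since the failure only shows up at import time. A small check:

from importlib.metadata import version

for dist in ("SQLAlchemy", "Flask-SQLAlchemy"):
    print(dist, version(dist))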
75,275,130 | 2023-1-29 | https://stackoverflow.com/questions/75275130/z-label-does-not-show-up-in-3d-matplotlib-scatter-plot | The z-label does not show up in my figure. What is wrong? import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.set_xlabel("x") ax.set_ylabel("y") ax.set_zlabel("z") plt.show() Output Neither ax.set_zlabel("z") nor ax.set(zlabel="z") works. The x- and y-labels work fine. | That's a padding issue. labelpadfloat The distance between the axis label and the tick labels. Defaults to rcParams["axes.labelpad"] (default: 4.0) = 4. You can use matplotlib.axis.ZAxis.labelpad to adjust this value for the z-axis : import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111, projection="3d") ax.set_xlabel("x") ax.set_ylabel("y") ax.set_zlabel("StackOverflow", rotation=90) ax.zaxis.labelpad=-0.7 # <- change the value here plt.show(); Output : | 10 | 10 |
75,257,369 | 2023-1-27 | https://stackoverflow.com/questions/75257369/pypdf2-paper-size-manipulation | I am using PyPDF2 to take an input PDF of any paper size and convert it to a PDF of A4 size with the input PDF scaled and fit in the centre of the output pdf. Here's an example of an input (convert to pdf with imagemagick convert image.png input.pdf), which can be of any dimensions: And the expected output is: I'm not a developer and my knowledge of python is basic but I have been trying to figure this out from the documentation, but haven't had much success. My latest attempt is as follows: from pypdf import PdfReader, PdfWriter, Transformation, PageObject from pypdf import PaperSize pdf_reader = PdfReader("input.pdf") page = pdf_reader.pages[0] writer = PdfWriter() A4_w = PaperSize.A4.width A4_h = PaperSize.A4.height # resize page2 to fit *inside* A4 h = float(page.mediabox.height) w = float(page.mediabox.width) print(A4_h, h, A4_w, w) scale_factor = min(A4_h / h, A4_w / w) print(scale_factor) transform = Transformation().scale(scale_factor, scale_factor).translate(0, A4_h / 3) print(transform.ctm) # page.scale_by(scale_factor) page.add_transformation(transform) # merge the pages to fit inside A4 # prepare A4 blank page page_A4 = PageObject.create_blank_page(width=A4_w, height=A4_h) page_A4.merge_page(page) print(page_A4.mediabox) writer.add_page(page_A4) writer.write("output.pdf") Which gives this output: I could be completely off track with my approach and it may be the inefficient way of doing it. I was hoping I would have a simple function in the package where I can define the output paper size and the scaling factor, similar to this. | You almost got it! The transformations are applied only to the content, but not to the boxes (mediabox/trimbox/cropbox/artbox/bleedbox). You need to adjust the cropbox: from pypdf.generic import RectangleObject page.cropbox = RectangleObject((0, 0, A4_w, A4_h)) Full script from pypdf import PdfReader, PdfWriter, Transformation, PageObject, PaperSize from pypdf.generic import RectangleObject reader = PdfReader("input.pdf") page = reader.pages[0] writer = PdfWriter() A4_w = PaperSize.A4.width A4_h = PaperSize.A4.height # resize page to fit *inside* A4 h = float(page.mediabox.height) w = float(page.mediabox.width) scale_factor = min(A4_h/h, A4_w/w) transform = Transformation().scale(scale_factor,scale_factor).translate(0, A4_h/3) page.add_transformation(transform) page.cropbox = RectangleObject((0, 0, A4_w, A4_h)) # merge the pages to fit inside A4 # prepare A4 blank page page_A4 = PageObject.create_blank_page(width = A4_w, height = A4_h) page.mediabox = page_A4.mediabox page_A4.merge_page(page) writer.add_page(page_A4) writer.write('output.pdf') | 4 | 6 |
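The same idea extended to every page of the input and centred on both axes. This sketch assumes each page's content starts at the origin of its mediabox; otherwise the offsets need the mediabox's lower-left corner subtracted:

from pypdf import PdfReader, PdfWriter, Transformation, PageObject, PaperSize
from pypdf.generic import RectangleObject

reader = PdfReader("input.pdf")
writer = PdfWriter()
A4_w, A4_h = PaperSize.A4.width, PaperSize.A4.height

for page in reader.pages:
    w, h = float(page.mediabox.width), float(page.mediabox.height)
    scale = min(A4_w / w, A4_h / h)
    tx = (A4_w - w * scale) / 2          # centre horizontally
    ty = (A4_h - h * scale) / 2          # centre vertically
    page.add_transformation(Transformation().scale(scale, scale).translate(tx, ty))
    page.cropbox = RectangleObject((0, 0, A4_w, A4_h))
    a4_page = PageObject.create_blank_page(width=A4_w, height=A4_h)
    page.mediabox = a4_page.mediabox
    a4_page.merge_page(page)
    writer.add_page(a4_page)

writer.write("output.pdf")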
75,235,531 | 2023-1-25 | https://stackoverflow.com/questions/75235531/problem-when-installing-python-from-source-ssl-package-missing-even-though-open | The Problem Trying to install Python-3.11.1 from source on Zorin OS (Ubuntu16 based) I get the following errors when I try to pip install any package into a newly created venv: python3.11 -m venv venv source venv/bin/active pip install numpy WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/numpy/ Could not fetch URL https://pypi.org/simple/numpy/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/numpy/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy Obviously, the SSL package seems to be missing, however I made sure to have both openssl and libssl-dev installed before installing python. More specifically, I made sure to have all packages installed lined out here. The Exact Steps I Took To Install Make sure all packages that are required are installed (the once above) cd .../python-installs Download Python from python.org tar -xvzf Python-3.11.1.tgz cd Python-3.11.1 and then ./configure \ --prefix=/opt/python/3.11.1 \ --enable-shared \ --enable-optimizations \ --enable-ipv6 \ --with-openssl=/usr/lib/ssl \ --with-openssl-rpath=auto \ LDFLAGS=-Wl,-rpath=/opt/python/3.11.1/lib,--disable-new-dtags make <- Note that I get a lot off error messages from gcc here, very similar to this, however it seems its successful at the end make altinstall Parts of this installation process are from [1], [2] Running python3.11 seems to work fine, however I cannot pip install anything into a venv created by Python3.11.1. Other Possible Error Sources Before trying to reinstall Python3.11.1, I always made sure to delete all files in the following places that were associated with Python3.11.1: /usr/local/bin/... /usr/local/lib/... /usr/local/man/man1/... /usr/local/share/man/man1/... /usr/local/lib/pkgconfig/... /opt/python/... I also tried adding Python-3.11.1 to PATH by adding PATH=/opt/python/3.11.1/bin:$PATH to /etc/profile.d/python.sh, but it didn't seem to do much in my case. 
When configuring the python folder I am using --with-openssl=/usr/lib/ssl, though perhaps I need to use something else? I tried --with-openssl=/usr/bin/openssl but that doesn't work because openssl is a file and not a folder and it gives me an error message and doesn't even configure anything. Conclusion From my research I found that most times this error relates to the openssl library not being installed (given that python versions >= 3.10 will need it to be installed), and that installing it and reinstalling python seemed to fix the issue. However in my case it doesn't, and I don't know why that is. The most likely cause is that something is wrong with my openssl configuration, but I wouldn't know what. Any help would be greatly appreciated. | After some more research, I realized that I didn't have libbz2-dev installed, which is obviously the first thing one should check if they get the errors above but oh well. For anyone who still finds himself struggling, here are my complete steps I took: Make sure the following libraries are installed apt show libbz2-dev apt show openssl apt show libssl-dev # Other libraries that might also be needed apt show liblzma-dev cd .../python-installs Download the target Python version from python.org as Gzipped tar ball tar -xvzf Python-3.11.1.tgz sudo mkdir opt/python sudo mkdir opt/python/3.11.1 cd Python-3.11.1 and then ./configure --prefix=/opt/python/3.11.1 \ --enable-optimizations make <- Note that I still get a lot of error messages from gcc, also get a always_inline not in line error message sudo make altinstall Add PATH=/opt/python/3.11.1/bin:$PATH to the file /etc/profile.d/python.sh reboot Then to test if the error is gone one can for example test: python3.11 -m venv venv source venv/bin/active pip install pandas python3.11 import pandas exit() If there are no errors then everything worked out. Obviously the version needs to be changed to the actual target version of yours. Note If you your newly installed python version does not appear in terminal, it might be because the file /etc/profile.d/python.sh already existed with the Path to the python version (for example, if you had to install it multiple times). In that case, delete the file (or at least the PATH to the target version) and then recreate it. After rebooting it should appear in terminal. | 4 | 3 |
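Before running make altinstall, a quick hedged sanity check with the freshly built interpreter can confirm that the optional modules were actually compiled; the module list below is an assumption about which ones matter for this build:

```python
# run as: ./python check_build.py  (from the Python-3.11.1 build directory)
import importlib

for name in ("ssl", "bz2", "lzma", "sqlite3", "ctypes", "readline"):
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as exc:
        print(f"{name}: MISSING ({exc})")

import ssl
print("linked against:", ssl.OPENSSL_VERSION)
```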
75,269,700 | 2023-1-28 | https://stackoverflow.com/questions/75269700/pre-commit-fails-to-install-isort-5-11-4-with-error-runtimeerror-the-poetry-co | pre-commit suddenly started to fail installing the isort hook in our builds today with the following error [INFO] Installing environment for https://github.com/pycqa/isort. [INFO] Once installed this environment will be reused. [INFO] This may take a few minutes... An unexpected error has occurred: CalledProcessError: command: ('/builds/.../.cache/pre-commit/repo0_h0f938/py_env-python3.8/bin/python', '-mpip', 'install', '.') return code: 1 expected return code: 0 [...] stderr: ERROR: Command errored out with exit status 1: [...] File "/tmp/pip-build-env-_3j1398p/overlay/lib/python3.8/site-packages/poetry/core/masonry/api.py", line 40, in prepare_metadata_for_build_wheel poetry = Factory().create_poetry(Path(".").resolve(), with_groups=False) File "/tmp/pip-build-env-_3j1398p/overlay/lib/python3.8/site-packages/poetry/core/factory.py", line 57, in create_poetry raise RuntimeError("The Poetry configuration is invalid:\n" + message) RuntimeError: The Poetry configuration is invalid: - [extras.pipfile_deprecated_finder.2] 'pip-shims<=0.3.4' does not match '^[a-zA-Z-_.0-9]+$' It seems to be related with poetry configuration.. | Upgrading the hook to the freshly released isort 5.12.0 seems to be fixing the issue. Looking at the commit stack from isort repo, it sounds like recent version of Poetry had a breaking change incompatible with isort <= 5.11.4 (commit) | 80 | 85 |
75,268,412 | 2023-1-28 | https://stackoverflow.com/questions/75268412/python-type-hints-for-unpacking-object | I'm trying to implement type hinting for object unpacking. Here is what I have currently from typing import Tuple class A: def __init__(self, x: int, y: str): self.x = x self.y = y def astuple(self) -> Tuple[int, str]: return self.x, self.y # Need to annotate the return type of __iter__ def __iter__(self): return iter(self.astuple()) a = A(1, "a") # This cannot infer the type of x and y x, y = a reveal_type(x) reveal_type(y) # This infers the type of p and q as int and str respectively p, q = a.astuple() reveal_type(p) reveal_type(q) prints $ mypy unpack_object.py unpack_object.py:20: note: Revealed type is "Any" unpack_object.py:21: note: Revealed type is "Any" unpack_object.py:24: note: Revealed type is "builtins.int" unpack_object.py:25: note: Revealed type is "builtins.str" Success: no issues found in 1 source file However, I would like mypy to infer correct types for x, y (int, str). How can I achieve this? | There is no way to define your own heterogeneous iterable type in Python. Make A a subclass of NamedTuple instead. from typing import NamedTuple class A(NamedTuple): x: int y: str x, y = A(1, "a") reveal_type(x) # builtins.int reveal_type(y) # builtins.str | 4 | 3 |
75,249,150 | 2023-1-26 | https://stackoverflow.com/questions/75249150/how-to-use-class-based-views-in-fastapi | I am trying to use class based views in my FastApi project to reduce redundancy of code. Basically I need CRUD functionality for all of my models and therefor would have to write the same routes over and over again. I created a small example project to display my progress so far, but I ran into some issues. I know there is this Fastapi-utils but as far as I understand only reduces the number of Dependencies to call and is no longer maintained properly (last commit was March 2020). I have some arbitrary pydantic Schema/Model. The SQLAlchemy models and DB connection are irrelevant for now. from typing import Optional from pydantic import BaseModel class ObjBase(BaseModel): name: Optional[str] class ObjCreate(ObjBase): pass class ObjUpdate(ObjBase): pass class Obj(ObjBase): id: int A BaseService class is used to implement DB access. To simplify this there is no DB access right now and only get (by id) and list (all) is implemented. from typing import Any, Generic, List, Optional, Type, TypeVar from pydantic import BaseModel SchemaType = TypeVar("SchemaType", bound=BaseModel) CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel) UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel) class BaseService(Generic[SchemaType, CreateSchemaType, UpdateSchemaType]): def __init__(self, model: Type[SchemaType]): self.model = model def get(self, id: Any) -> Any: return {"id": id} def list(self, skip: int = 0, limit: int = 100) -> Any: return [ {"id": 1}, {"id": 2}, ] This BaseService can then be inherited by a ObjService class providing these base functions for the previously defined pydantic Obj Model. from schemas.obj import Obj, ObjCreate, ObjUpdate from .base import BaseService class ObjService(BaseService[Obj, ObjCreate, ObjUpdate]): def __init__(self): super(ObjService, self).__init__(Obj) In the init.py file in this directory a function is provided to get an ObjService instance. from fastapi import Depends from .obj import ObjService def get_obj_service() -> ObjService: return ObjService() So far everything is working. I can inject the Service Class into the relevant FastApi routes. But all routes need to be written for each model and CRUD function. Making it tedious when providing the same API endpoints for multiple models/schemas. Therefor my thought was to use something similar to the logic behind the BaseService by providing a BaseRouter which defines these routes and inherit from that class for each model. 
The BaseRouter class: from typing import Generic, Type, TypeVar from fastapi import APIRouter, Depends from pydantic import BaseModel from services.base import BaseService SchemaType = TypeVar("SchemaType", bound=BaseModel) CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel) UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel) class BaseRouter(Generic[SchemaType, CreateSchemaType, UpdateSchemaType]): def __init__(self, schema: Type[SchemaType], prefix: str, service: BaseService): self.schema = schema self.service = service self.router = APIRouter( prefix=prefix ) self.router.add_api_route("/", self.list, methods=['GET']) self.router.add_api_route("/{id}", self.get, methods=['GET']) def get(self, id): return self.service.get(id) def list(self): return self.service.list() The ObjRouter class: from schemas.obj import Obj, ObjCreate, ObjUpdate from .base import BaseRouter from services.base import BaseService class ObjRouter(BaseRouter[Obj, ObjCreate, ObjUpdate]): def __init__(self, prefix: str, service: BaseService): super(ObjRouter, self).__init__(Obj, prefix, service) The init.py file in that directory from fastapi import Depends from services import get_obj_service from services.obj import ObjService from .obj import ObjRouter def get_obj_router(service: ObjService = Depends(get_obj_service())) -> ObjRouter: return ObjRouter("/obj", service).router In my main.py file this router is added to the FastApi App. from fastapi import Depends, FastAPI from routes import get_obj_router app = FastAPI() app.include_router(get_obj_router()) When starting the app the routes Get "/obj" and Get "/obj/id" show up in my Swagger Docs for the project. But when testing one of the endpoints I am getting an AttributeError: 'Depends' object has no attribute 'list' As far as I understand Depends can only be used in FastApi functions or functions that are dependecies themselves. Therefor I tried altering the app.include_router line in my main.py by this app.include_router(Depends(get_obj_router())) But it again throws an AttributeError: 'Depends' object has no attribute 'routes'. Long story short question: What am I doing wrong? Is this even possible in FastApi or do I need to stick to defining the same CRUD Api Endpoints over and over again? The reason I want to use the Dependenvy Injection capabilities of FastApi is that later I will use the following function call in my Service classes to inject the DB session and automatically close it after the request: def get_db(): db = SessionLocal() try: yield db finally: db.close() As far as I understand this is only possible when the highest call in the dependency hierachy (Route depends on Service depends on get_db) is done by a FastApi Route. PS: This is my first question on StackOverflow, please be gentle. | since your question is understandably very long, I will post a full working example at the bottom of this answer. Dependencies in FastAPI are callables that can modify an endpoints parameters and pass values down to them. In the api model they work in the endpoint level. To pass-on any dependency results you need to explicitly pass them to the controller function. In the example below I have created a dummy Session class and a dummy session injection function (injecting_session). Then I have added this dependency to the BaseRouter functions get and list and passed the result on to the BaseObject class get and list functions. 
As promised; A fully working example: from typing import Optional, TypeVar, Type, Generic, Any, Union, Sequence from fastapi import Depends, APIRouter, FastAPI from pydantic import BaseModel class ObjBase(BaseModel): name: Optional[str] class ObjCreate(ObjBase): pass class ObjUpdate(ObjBase): pass class Obj(ObjBase): id: int SchemaType = TypeVar("SchemaType", bound=BaseModel) CreateSchemaType = TypeVar("CreateSchemaType", bound=BaseModel) UpdateSchemaType = TypeVar("UpdateSchemaType", bound=BaseModel) class Session: def __str__(self): return "I am a session!" async def injecting_session(): print("Creating Session") return Session() class BaseService(Generic[SchemaType, CreateSchemaType, UpdateSchemaType]): def __init__(self, model: Type[SchemaType]): self.model = model def get(self, id: Any, session: Session) -> Any: print(session) return {"id": id} def list(self, session: Session) -> Any: print(session) return [ {"id": 1}, {"id": 2}, ] class ObjService(BaseService[Obj, ObjCreate, ObjUpdate]): def __init__(self): super(ObjService, self).__init__(Obj) def get_obj_service() -> ObjService: return ObjService() SchemaType2 = TypeVar("SchemaType2", bound=BaseModel) CreateSchemaType2 = TypeVar("CreateSchemaType2", bound=BaseModel) UpdateSchemaType2 = TypeVar("UpdateSchemaType2", bound=BaseModel) class BaseRouter(Generic[SchemaType2, CreateSchemaType2, UpdateSchemaType2]): def __init__(self, schema: Type[SchemaType2], prefix: str, service: BaseService): self.schema = schema self.service = service self.router = APIRouter( prefix=prefix ) self.router.add_api_route("/", self.list, methods=['GET']) self.router.add_api_route("/{id}", self.get, methods=['GET']) def get(self, id, session=Depends(injecting_session)): return self.service.get(id, session) def list(self, session=Depends(injecting_session)): return self.service.list(session) class ObjRouter(BaseRouter[Obj, ObjCreate, ObjUpdate]): def __init__(self, path, service): super(ObjRouter, self).__init__(Obj, path, service) def get_obj_router(service=get_obj_service()) -> APIRouter: # returns API router now return ObjRouter("/obj", service).router app = FastAPI() app.include_router(get_obj_router()) By adding parameters to injecting_session() you can add parameters to all endpoints that use the dependency. | 5 | 6 |
75,264,394 | 2023-1-27 | https://stackoverflow.com/questions/75264394/class-function-vs-method | I was watching Learn Python - Full Course for Beginners [Tutorial] on YouTube here. At timestamp 4:11:54 the tutor explains what a class function is; however, from my background in object-oriented programming in other languages, I thought the correct term would be method. Now I am curious whether there is a difference between a class function and a method. | Method is the correct term for a function in a class. Methods and functions are pretty similar to each other. The only difference is that a method is called on an object and has the possibility to modify that object's data. Functions can modify and return data, but they don't have an impact on objects. Edit: Class function and method both mean the same thing, although "class function" is not the usual way to say it. | 3 | 2 |
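A short illustrative snippet (an addition, not part of the answer above) showing how Python itself distinguishes a plain function from a method:

```python
class A:
    def f(self):
        return "hello"

def g():
    return "hello"

a = A()
print(type(g))            # <class 'function'>
print(type(A.f))          # <class 'function'> - accessed on the class, still a function
print(type(a.f))          # <class 'method'>   - accessed on an instance, a bound method
print(a.f.__self__ is a)  # True - the bound method carries a reference to its instance
```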
75,263,744 | 2023-1-27 | https://stackoverflow.com/questions/75263744/adding-bar-labels-shrinks-dodged-bars-in-seaborn-objects | I am trying to add text labels to the top of a grouped/dodged bar plot using seaborn.objects. Here is a basic dodged bar plot: import seaborn.objects as so import pandas as pd dat = pd.DataFrame({'group':['a','a','b','b'], 'x':['1','2','1','2'], 'y':[3,4,1,2]}) (so.Plot(dat, x = 'x', y = 'y', color = 'group') .add(so.Bar(),so.Dodge())) I can add text labels to the top of a non-dodged bar plot using so.Text(), no problem. (so.Plot(dat.query('group == "a"'), x = 'x', y = 'y', text = 'group') .add(so.Bar()) .add(so.Text({'va':'bottom'}))) However, when I combine dodging with text, the bars shrink and move far apart. (so.Plot(dat.query('group == "a"'), x = 'x', y = 'y', text = 'group') .add(so.Bar()) .add(so.Text({'va':'bottom'}))) This looks worse the more categories there are - in my actual application the bars have thinned out to single lines. Setting the gap parameter of so.Dodge() or the width parameter of so.Bar() doesn't seem to be capable of solving the problem (although either will alleviate it slightly if I'm not too picky). I'm guessing that the bar plot is using the so.Dodge() settings appropriate for text in order to figure out its own dodging, but that doesn't seem to be working right. Note that reversing the order I .add() the geometries doesn't seem to do anything. How can I avoid this? | This isn't ideal but can be worked around by assigning text only in the layer with the Text (or un-assigning it from the Bar layer) and restricting the variables used to compute the text dodge: ( so.Plot(dat, x='x', y='y', color='group') .add(so.Bar(), so.Dodge()) .add(so.Text({'va':'bottom'}), so.Dodge(by=["color"]), text='group') ) | 4 | 4 |
75,262,933 | 2023-1-27 | https://stackoverflow.com/questions/75262933/python-pandas-wide-data-identify-earliest-and-maximum-columns-in-time-series | I am working with a data frame that is written in wide format. Each book has a number of sales, but some quarters have null values because the book was not released before that quarter. import pandas as pd data = {'Book Title': ['A Court of Thorns and Roses', 'Where the Crawdads Sing', 'Bad Blood', 'Atomic Habits'], 'Metric': ['Book Sales','Book Sales','Book Sales','Book Sales'], 'Q1 2022': [100000,0,0,0], 'Q2 2022': [50000,75000,0,35000], 'Q3 2022': [25000,150000,20000,45000], 'Q4 2022': [25000,20000,10000,65000]} df1 = pd.DataFrame(data) What I would like to do is create one field that identifies "ID of first available quarter" ("First Quarter ID"), and another that identifies "ID of quarter with maximum sales" ("Max Quarter ID"). Then I would like to show two fields with the sales in the first available quarter and the second available quarter. Tips to go about this? Thank you! | Edit, updated approach making better use of groupby after melting #melt table to be long-form long_df1 = df1.melt( id_vars = ['Book Title','Metric'], value_name = 'Sales', var_name = 'Quarter', ) #remove rows that have 0 sales (could be dropna if null values used instead) long_df1 = long_df1[long_df1['Sales'].gt(0)] #groupby book title and find the first/max quarter/sales gb = long_df1.groupby('Book Title') first_df = gb[['Quarter','Sales']].first() max_df = long_df1.loc[gb['Sales'].idxmax(),['Book Title','Quarter','Sales']].set_index('Book Title') #concatenate the first/max dfs out_df = pd.concat( (first_df.add_prefix('First '),max_df.add_prefix('Max ')), axis=1 ).reset_index() Output | 3 | 1 |
75,261,249 | 2023-1-27 | https://stackoverflow.com/questions/75261249/how-to-assign-feature-weights-in-xgbclassifier | I am trying to assign a higher weight to one feature above others. Here is my code. ## Assign weight to High Net Worth feature cols = list(train_X.columns.values) # 0 - 1163 --Other Columns # 1164 --High Net Worth #Create an array of feature weights other_col_wt = [1]*1164 high_net_worth_wt = [5] feature_wt = other_col_wt + high_net_worth_wt feature_weights = np.array(feature_wt) # Initialize the XGBClassifier xgboost = XGBClassifier(subsample = 0.8, # subsample = 0.8 ideal for big datasets silent=False, # whether print messages during construction colsample_bytree = 0.4, # subsample ratio of columns when constructing each tree gamma=10, # minimum loss reduction required to make a further partition on a leaf node of the tree, regularisation parameter objective='binary:logistic', eval_metric = ["auc"], feature_weights = feature_weights ) # Hypertuning parameters lr = [0.1,1] # learning_rate = shrinkage for updating the rules ne = [100] # n_estimators = number of boosting rounds md = [3,4,5] # max_depth = maximum tree depth for base learners # Grid Search clf = GridSearchCV(xgboost,{ 'learning_rate':lr, 'n_estimators':ne, 'max_depth':md },cv = 5,return_train_score = False) # Fitting the model with the custom weights clf.fit(train_X,train_y, feature_weights = feature_weights) clf.cv_results_ I went through the documentation here and this Akshay Sehgal's stackoverflow response here for a similar question. But when I use the above code, I get below error? Could anyone please help me where I am doing it wrong? Thanks. | I think you need to remove feature_weights from the init of XGBClassifier. At least, this works when I try your example. | 4 | 3 |
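A minimal hedged sketch of what the answer suggests: keep feature_weights out of the XGBClassifier constructor and pass it only to fit(), which GridSearchCV forwards as a fit parameter. It assumes xgboost >= 1.3 (where fit() accepts feature_weights) and uses dummy data in place of the question's train_X/train_y:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
train_X = rng.random((200, 1165))             # placeholder for the real features
train_y = rng.integers(0, 2, 200)             # placeholder for the real labels
feature_weights = np.array([1] * 1164 + [5])  # heavier weight on the last column

xgb = XGBClassifier(                          # note: no feature_weights here
    subsample=0.8,
    colsample_bytree=0.4,                     # column sampling must be < 1 for weights to matter
    objective="binary:logistic",
)

clf = GridSearchCV(xgb, {"learning_rate": [0.1, 1], "max_depth": [3, 4, 5]}, cv=5)
clf.fit(train_X, train_y, feature_weights=feature_weights)  # fit-time argument only
print(clf.best_params_)
```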
75,247,445 | 2023-1-26 | https://stackoverflow.com/questions/75247445/error-when-making-a-dash-datatable-filterable-by-columns-values | Only when I add the property filter_action="native" in a dash.DataTable in order to make it possible for the user to filter rows by column values I get an error that varies with the browser I run the webapp on: Edge: Cannot read properties of undefined (reading 'placeholder_text') (This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.) TypeError: Cannot read properties of undefined (reading 'placeholder_text') at s.value (http://localhost:8050/_dash-component-suites/dash/dash_table/async-table.js:2:236716)... Chrome: same as Edge Firefox: r is undefined (This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.) value@http://localhost:8050/_dash-component-suites/dash/dash_table/async-table.js:2:236702... Note that without setting that single property the app works perfectly. I terribly need the user to be able to filter rows by column values: what can I do to solve this issue? environment python 3.7, Flask 2.2.2, dash 2.8.0 | I found a possible solution: that is giving also the columns attribute during DataTable declaration (no, giving just data is not sufficient): columns=[{"name": i, 'id': i} for i in df.columns] The minimal code for making a DataTable with working filtering of rows based on column values is dash_table.DataTable( id='my_table_id', columns=[ {"name": i, 'id': i} for i in df.columns ], data=df.to_dict('records'), filter_action="native", ) | 4 | 7 |
75,256,024 | 2023-1-27 | https://stackoverflow.com/questions/75256024/calculating-timedeltas-across-daylight-saving | I'm facing a Python time zone problem and am unsure what the right approach is to deal with it. I have to calculate timedeltas from given start and end DateTime objects. It can happen that daylight saving time will change during the runtime of my events, so I have to take that into account. So far I've learned that for this to work I need to save my start and end times as timezone-aware DateTime objects rather than regular UTC DateTimes. I've been looking into DateTime.tzinfo, pytz, and dateutil, but from what I understand these are all mostly focused on localised display of UTC DateTime objects or calculating the offsets between different timezones. Other helpers I found expect the timezone as a UTC offset, so they would already require me to know if a date is affected by daylight saving or not. So, I guess my question is: Is there a way to save a DateTime as "Central Europe" and have it be aware of daylight saving time when doing calculations with it? Or, if not, what would be the established way to check if two DateTime objects are within daylight saving time, so I can manually adjust the result if necessary? I'd be grateful for any pointers. | You just need to produce an aware (localised) datetime instance; then any calculation you do with it will take DST into account. Here is an example with pytz: >>> import pytz >>> from datetime import * >>> berlin = pytz.timezone('Europe/Berlin') >>> d1 = berlin.localize(datetime(2023, 3, 25, 12)) datetime.datetime(2023, 3, 25, 12, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>) >>> d2 = berlin.localize(datetime(2023, 3, 26, 12)) datetime.datetime(2023, 3, 26, 12, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CEST+2:00:00 DST>) >>> d2 - d1 datetime.timedelta(seconds=82800) >>> (d2 - d1).total_seconds() / 60 / 60 23.0 | 4 | 3 |
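For completeness, a hedged standard-library variant (Python 3.9+ zoneinfo). One caveat worth knowing: subtracting two aware datetimes that share the same tzinfo object uses wall-clock arithmetic, so convert to UTC first to get the true elapsed time across the DST change:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")

start = datetime(2023, 3, 25, 12, tzinfo=berlin)  # CET, UTC+1
end = datetime(2023, 3, 26, 12, tzinfo=berlin)    # CEST, UTC+2 (clocks jumped forward)

print((end - start).total_seconds() / 3600)       # 24.0 - wall-clock difference, same tzinfo

elapsed = end.astimezone(timezone.utc) - start.astimezone(timezone.utc)
print(elapsed.total_seconds() / 3600)             # 23.0 - hours that actually elapsed
```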
75,252,652 | 2023-1-26 | https://stackoverflow.com/questions/75252652/python-sqlalchemy-postgresql-deprecated-api-features | I am using following code to create the function and trigger to update the created_at and updated_at fields. with upgrade of new module getting the deprecated API warning. How can I replace engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema))) line to remove the warning message? Code: mapper_registry.metadata.create_all(engine, checkfirst=True) create_refresh_updated_at_func = """ CREATE OR REPLACE FUNCTION {schema}.refresh_updated_at() RETURNS TRIGGER AS $$ BEGIN NEW.updated_at = NOW(); RETURN NEW; END; $$ LANGUAGE plpgsql; """ my_schema = "public" engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema))) Warrning: RemovedIn20Warning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9) engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema))) | SQLAlchemy no longer supports autocommit at the library level. You need to run the execute within a transaction. This should work: with engine.begin() as conn: conn.execute(text(create_refresh_updated_at_func.format(schema=my_schema))) migration-core-connection-transaction You could also use the driver-level isolation level like this but I think the connections from this pool will all be set to autocommit: engine2 = create_engine(f"postgresql+psycopg2://{username}:{password}@/{db}", isolation_level='AUTOCOMMIT') with engine2.connect() as conn: conn.execute(text(create_refresh_updated_at_func.format(schema=my_schema))) | 4 | 4 |
75,242,731 | 2023-1-26 | https://stackoverflow.com/questions/75242731/is-there-a-c-equivalent-for-pythons-self-documenting-expressions-in-f-strings | Since Python 3.8 it is possible to use self-documenting expressions in f-strings like this: >>> variable=5 >>> print(f'{variable=}') variable=5 is there an equivalent feature in C#? | Yes. int variable = 5; Console.WriteLine($"variable={variable}"); That outputs: variable=5 The key here is the $ that precedes the string literal. To do what you want with the name coming dynamically, I'd suggest a more explicit approach of using an extension method. Try this: public static class SelfDocExt { public static string SelfDoc<T>( this T value, [CallerArgumentExpression(nameof(value))] string name = "") => $"{name}={value}"; } Then you can write this: int variable = 5; Console.WriteLine($"{variable.SelfDoc()}"); It solves your problem without breaking string interpolation. | 4 | 5 |
75,248,944 | 2023-1-26 | https://stackoverflow.com/questions/75248944/polymorphism-in-callablle-under-python-type-checking-pylance | For my code I have an aggregate class that needs a validation method defined for each of the subclasses of base class BaseC, in this case InheritC inherits from BaseC. The validation method is then passed into the aggregate class through a register method. See the following simple example from typing import Callable class BaseC: def __init__(self) -> None: pass class InheritC(BaseC): def __init__(self) -> None: super().__init__() @classmethod def validate(cls, c:'InheritC') ->bool: return False class AggrC: def register_validate_fn(self, fn: Callable[[BaseC], bool])-> None: self.validate_fn = fn ac = AggrC() ic = InheritC() ac.validate_fn(ic.fn) I added type hints on the parameter for registering a function, which is a Callable object Callable[[BaseC], bool] since potentially there will be several other validation methods which is defined for each class inherited from BaseC. However, pylance doesn't seem to recognize this polymorphism in a Callable type hint and throws a warning (I set up my VScode to type check it) that said Argument of type "(c: InheritC) -> bool" cannot be assigned to parameter "fn" of type "(BaseC) -> bool" in function "register_fn" Type "(c: InheritC) -> bool" cannot be assigned to type "(BaseC) -> bool" Parameter 1: type "BaseC" cannot be assigned to type "InheritC" "BaseC" is incompatible with "InheritC" Pylance(reportGeneralTypeIssues) I don't see where I made an mistake in design, and I don't want to simply ignore the warning. Can any one explain why this is invaid? Or is it just simply a bug from pylance I'm using python version 3.8.13 for development. | I'll be using the below sample code, where I've fixed a couple of bugs: from typing import Callable class BaseC: def __init__(self) -> None: pass class InheritC(BaseC): def __init__(self) -> None: super().__init__() @classmethod def validate(cls, c:'InheritC') ->bool: return False class AggrC: def register_validate_fn(self, fn: Callable[[BaseC], bool])-> None: self.validate_fn = fn ac = AggrC() ic = InheritC() ac.register_validate_fn(ic.validate) This Python runs without errors, but still produces the same error you're seeing when run through a type checker (in my case, MyPY): $ mypy stackoverflow_test.py stackoverflow_test.py:21: error: Argument 1 to "register_validate_fn" of "AggrC" has incompatible type "Callable[[], bool]"; expected "Callable[[BaseC], bool]" [arg-type] Found 1 error in 1 file (checked 1 source file) This is a subtle issue, which is easy to overlook: it's an issue with contravariance of parameter types. The reason this is easy to overlook is because most object-oriented tutelage focuses on classes and objects, and doesn't really discuss functions as being types with inheritance. Indeed, most languages with object-oriented language features don't support declaring a function as inheriting from another function! Lets reduce this as much as possible: from typing import Callable class Parent: def foo(self): pass class Child(Parent): def bar(self): pass def takesParent(parameter: Parent): parameter.foo() def takesChild(parameter: Child): parameter.bar() def takesFunction(function: Callable): # What should the signature of `function` be to support both functions above? pass How should you define function: Callable to make it compatible with both functions? 
Lets take a look at what takesFunction could do, which would be valid for both functions: def takesFunction(function: Callable): child = Child() function(child) This function should work if you pass either function, because takesParent will call child.foo(), which is valid; and takesChild will call child.bar(), which is also valid. OK, how about this function? def takesFunction(function: Callable): parent = Parent() function(parent) In this case, function(parent) can only work with takesParent, because if you pass takesChild, takesChild will call parent.bar() - which doesn't exist! So, the signature that supports passing both functions looks like this: def takesFunction(function: Callable[[Child], None]): Which is counter-intuitive to many people. The parameter function must be type-hinted as taking the most specific parameter type. Passing a function with a less specific parameter - a superclass - is compatible, but passing one with a more specific parameter isn't. This can be a difficult topic to understand, so I'm sorry if I didn't make it very clear, but I hope this answer helped. | 4 | 4 |
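A hedged sketch of one way to type the original register_validate_fn so that per-subclass validators are accepted without tripping the contravariance check: making the aggregate generic in the schema type instead of hard-coding BaseC (one possible design, not the only fix):

```python
from typing import Callable, Generic, TypeVar

class BaseC:
    pass

class InheritC(BaseC):
    @classmethod
    def validate(cls, c: "InheritC") -> bool:
        return False

T = TypeVar("T", bound=BaseC)

class AggrC(Generic[T]):
    def register_validate_fn(self, fn: Callable[[T], bool]) -> None:
        self.validate_fn = fn

ac: AggrC[InheritC] = AggrC()
ac.register_validate_fn(InheritC.validate)  # parameter type now matches exactly
```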
75,240,766 | 2023-1-25 | https://stackoverflow.com/questions/75240766/problem-converting-an-image-for-a-3-color-e-ink-display | I am trying to process an image file into something that can be displayed on a Black/White/Red e-ink display, but I am running into a problem with the output resolution. Based on the example code for the display, it expects two arrays of bytes (one for Black/White, one for Red), each 15,000 bytes. The resolution of the e-ink display is 400x300. I'm using the following Python script to generate two BMP files: one for Black/White and one for Red. This is all working, but the file sizes are 360,000 bytes each, which won't fit in the ESP32 memory. The input image (a PNG file) is 195,316 bytes. The library I'm using has a function called EPD_4IN2B_V2_Display(BLACKWHITEBUFFER, REDBUFFER);, which wants the full image (one channel for BW, one for Red) to be in memory. But, with these image sizes, it won't fit on the ESP32. And, the example uses 15KB for each color channel (BW, R), so I feel like I'm missing something in the image processing necessary to make this work. Can anyone shed some light on what I'm missing? How would I update the Python image-processing script to account for this? I am using the Waveshare 4.2inch E-Ink display and the Waveshare ESP32 driver board. A lot of the Python code is based on this StackOverflow post but I can't seem to find the issue. import io import traceback from wand.image import Image as WandImage from PIL import Image # This function takes as input a filename for an image # It resizes the image into the dimensions supported by the ePaper Display # It then remaps the image into a tri-color scheme using a palette (affinity) # for remapping, and the Floyd Steinberg algorithm for dithering # It then splits the image into two component parts: # a white and black image (with the red pixels removed) # a white and red image (with the black pixels removed) # It then converts these into PIL Images and returns them # The PIL Images can be used by the ePaper library to display def getImagesToDisplay(filename): print(filename) red_image = None black_image = None try: with WandImage(filename=filename) as img: img.resize(400, 300) with WandImage() as palette: with WandImage(width = 1, height = 1, pseudo ="xc:red") as red: palette.sequence.append(red) with WandImage(width = 1, height = 1, pseudo ="xc:black") as black: palette.sequence.append(black) with WandImage(width = 1, height = 1, pseudo ="xc:white") as white: palette.sequence.append(white) palette.concat() img.remap(affinity=palette, method='floyd_steinberg') red = img.clone() black = img.clone() red.opaque_paint(target='black', fill='white') black.opaque_paint(target='red', fill='white') red_image = Image.open(io.BytesIO(red.make_blob("bmp"))) black_image = Image.open(io.BytesIO(black.make_blob("bmp"))) red_bytes = io.BytesIO(red.make_blob("bmp")) black_bytes = io.BytesIO(black.make_blob("bmp")) except Exception as ex: print ('traceback.format_exc():\n%s',traceback.format_exc()) return (red_image, black_image, red_bytes, black_bytes) if __name__ == "__main__": print("Running...") file_path = "testimage-tree.png" with open(file_path, "rb") as f: image_data = f.read() red_image, black_image, red_bytes, black_bytes = getImagesToDisplay(file_path) print("bw: ", red_bytes) print("red: ", black_bytes) black_image.save("output/bw.bmp") red_image.save("output/red.bmp") print("BW file size:", len(black_image.tobytes())) print("Red file size:", len(red_image.tobytes())) | As requested, and in the 
event that it may be useful to future readers, I'll expand a little on what I said in the comments (which was verified to indeed be the cause of the problem). An e-ink display usually needs a black-and-white image, that is, a 1-bit-per-pixel image - not a grayscale one (1 byte per pixel), let alone an RGB one (3 channels/bytes per pixel). I am not familiar with bi-color red/black displays, but it seems quite logical that they behave just like 2 binary displays (one black & white display, and one black-white & red display) sharing the same location. What your code seemingly does is remove all black pixels from an RGB image and use it as a red image, and remove all red pixels from the same RGB image and use it as a black image. But since those images are obtained with clone, they are still RGB images - RGB images that happen to contain only black and white pixels, or red and white pixels, but still RGB images. With PIL, it is the mode that controls how images are represented in memory and, therefore, how they are saved to file. Relevant modes are RGB, L (grayscale, aka 1 linear byte/channel per pixel), and 1 (binary, aka 1 bit per pixel). So what you need is to convert to mode 1, using the .convert('1') method on both your images. Note that 400×300×3 (uncompressed RGB data for your image) is 360000, which is what you got. 400×300 (L mode for the same image) is 120000, and 400×300/8 (1 mode, 1 bit/pixel) is 15000, which is precisely the expected size as you mentioned. So that is another confirmation that, indeed, a 1 bit/pixel image is expected. | 4 | 4 |
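A small hedged sketch of the mode conversion described above, checking that each plane comes out at the expected 15,000 bytes (mode '1' packs 8 pixels per byte; Image.Dither.NONE assumes a recent Pillow, older versions use Image.NONE):

```python
from PIL import Image

black_image = Image.open("output/bw.bmp")   # files written by the script above
red_image = Image.open("output/red.bmp")

# convert each RGB plane to 1 bit per pixel, without extra dithering
black_1bit = black_image.convert("1", dither=Image.Dither.NONE)
red_1bit = red_image.convert("1", dither=Image.Dither.NONE)

for name, img in (("black/white", black_1bit), ("red", red_1bit)):
    data = img.tobytes()                    # packed bits, 400*300/8 = 15000 bytes
    print(name, img.size, len(data))

black_1bit.save("output/bw_1bit.bmp")
red_1bit.save("output/red_1bit.bmp")
```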
75,244,419 | 2023-1-26 | https://stackoverflow.com/questions/75244419/aws-ecs-environment-variable-not-available-python | I am using AWS ECS with a Python framework, and in my task definition I have the option to add environment variables that will be available to the service (cluster). Here is where I added the env variables: When I then try to print all the env variables in my service, I do not get access to these variables, and I am not sure why. Here I printed all my environment variables using os.environ: for a in os.environ: print('Var: ', a, 'Value: ', os.getenv(a)) print("all done") Result: DB_PORT or APP_KEY is not available in my service or Python code. | I had a similar problem, and it seems to me that the full environment is passed only to PID 1 (the init process, which in a container should be the CMD/ENTRYPOINT command). Cron is not that process, so you cannot assume it sees the same environment. What I did may not be the best solution, it is rather a hack, but it works. The environment of a process is available in /proc/<pid>/environ, so in this case /proc/1/environ. I grab it from there and store it in a file for future use: for I in `cat /proc/1/environ | strings`; do echo "export $I"; done > /src/.profile and then I just source /src/.profile in my scripts (the cron job in your case). If you need AWS credentials, you may also need access to the ECS_CONTAINER_METADATA_URI_V4 environment variable, and that one will also be there. | 3 | 4 |
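If the consumer is Python rather than a shell script, the same trick works directly: /proc/1/environ is a NUL-separated list of KEY=VALUE pairs (a hedged sketch, assuming the process is allowed to read that file):

```python
import os

def load_pid1_environment(path="/proc/1/environ"):
    """Merge PID 1's environment variables into os.environ (existing keys win)."""
    with open(path, "rb") as f:
        raw = f.read()
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            os.environ.setdefault(key.decode(), value.decode())

load_pid1_environment()
print(os.environ.get("DB_PORT"), os.environ.get("APP_KEY"))
```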
75,232,011 | 2023-1-25 | https://stackoverflow.com/questions/75232011/why-does-exe-built-using-pyinstaller-isnt-working | here's the error: Traceback (most recent call last): File "main.py", line 5, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module File "eel\__init__.py", line 8, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module File "bottle.py", line 73, in <module> AttributeError: 'NoneType' object has no attribute 'write' here's the pyinstaller command: pyinstaller --noconfirm --onedir --windowed --icon icon.ico --name "Useful Tools for Windows" --uac-admin --add-data "web;web/" main.py here's the source code: # Imports system commands import os # Imports eel, a Electron like GUI for Python. import eel eel.init("web") # Fixes Chrome not installed. eel.browsers.set_path( "chrome", "C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe" ) @eel.expose def openHosts(): os.system( "attrib -r %WINDIR%\system32\drivers\etc\hosts && start notepad.exe %windir%\system32\drivers\etc\hosts" ) @eel.expose def openOfficeAddins(): os.system("explorer.exe %AppData%\Microsoft\AddIns") @eel.expose def openCurrentUserStartMenu(): os.system("explorer.exe %AppData%\Microsoft\Windows\Start Menu") @eel.expose def openAllUserStartMenu(): os.system("explorer.exe C:\ProgramData\Microsoft\Windows\Start Menu") @eel.expose def openSentTo(): os.system("explorer.exe %Appdata%\Microsoft\Windows\SendTo") @eel.expose def openCurrentUserStartup(): os.system("explorer.exe %AppData%\Microsoft\Windows\Start Menu\Programs\Startup") @eel.expose def openAllUsersStartup(): os.system( "explorer.exe C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp" ) @eel.expose def openWordStartup(): os.system("explorer.exe %AppData%\Microsoft\Word\STARTUP") @eel.expose def openPSReadLineHistory(): os.system( "notepad %AppData%\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt" ) @eel.expose def installYouTubeDownloader(): os.system("winget install MoheshwarAmarnathBiswas.YouTubeVideoDownloader") @eel.expose def installPowerToys(): os.system("winget install Microsoft.PowerToys") @eel.expose def installSysInternals(): os.system("winget install --id 9P7KNL5RWT25") @eel.expose def openAccountPictures(): os.system("explorer shell:AccountPictures") @eel.expose def openFolderOptions(): os.system("explorer shell:::{6DFD7C5C-2451-11d3-A299-00C04F8EF6AF}") @eel.expose def openInternetOptions(): os.system("explorer shell:::{A3DD4F92-658A-410F-84FD-6FBBBEF2FFFE}") @eel.expose def openSoundOptions(): os.system("explorer shell:::{F2DDFC82-8F12-4CDD-B7DC-D4FE1425AA4D}") @eel.expose def openPowerOptions(): os.system("explorer shell:::{025A5937-A6BE-4686-A844-36FE4BEC8B6D}") @eel.expose def openOptionalFeatures(): os.system("start %WINDIR%\System32\OptionalFeatures.exe") @eel.expose def openControlPanel(): os.system("start control.exe") @eel.expose def wsreset(): os.system("start wsreset.exe") @eel.expose def wureset(): os.system("net stop bits") os.system("net stop wuauserv") os.system("net stop CryptSvc") os.system("net stop msiserver") 
os.system("del %windir%\SoftwareDistribution") os.system("del %windir%\system32\catroot2") os.system("net start bits") os.system("net start wuauserv") os.system("net start CryptSvc") os.system("net start msiserver") @eel.expose def openGodMode(): os.system("explorer.exe shell:::{ed7ba470-8e54-465e-825c-99712043e01c}") eel.start("index.html") Full source code: https://github.com/fluentmoheshwar/useful-tools Dependencies used: altgraph==0.17.3 black==22.12.0 bottle==0.12.23 bottle-websocket==0.2.9 cffi==1.15.1 click==8.1.3 colorama==0.4.6 Eel==0.15.1 future==0.18.3 gevent==22.10.2 gevent-websocket==0.10.1 greenlet==2.0.1 mypy-extensions==0.4.3 pathspec==0.10.3 pefile==2022.5.30 platformdirs==2.6.2 pycparser==2.21 pyinstaller==5.7.0 pyinstaller-hooks-contrib==2022.15 pyparsing==3.0.9 pywin32-ctypes==0.2.0 tomli==2.0.1 whichcraft==0.6.1 zope.event==4.6 zope.interface==5.5.2 Using Microsoft Store version of Python: Yes. Using venv: yes. I tried installing wheel and reinstalling dependencies. I tried adding C:\Windows\System32\downlevel to path. I tried recreating venv. | One of the libraries you are using is attempting to write to sys.stdout and sys.stderr, which are set to None when you run pyinstaller with the -w --windowed or --noconsole options. You need to explicitly set sys.stderr and sys.stdout in your programs code as early as possible to a writeable object like an open file or an io buffer. for example: import sys, io buffer = io.StringIO() sys.stdout = sys.stderr = buffer | 4 | 6 |
75,240,319 | 2023-1-25 | https://stackoverflow.com/questions/75240319/conda-environment-using-incorrect-module-because-of-order-os-sys-path | I am using conda version 4.14.0. When I activate a conda environment I can see that the current numpy module is 1.22.3 conda list | grep -i numpy numpy 1.22.3 py39hc58783e_2 conda-forge When I run python in the conda environment and load numpy it shows version 1.19.1 Python 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:25:59) [GCC 10.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.__version__ '1.19.5' Looking at the sys.path locations I see that it is searching my local account site packages before the environment site-packages, how can this order be updated? >>> print(sys.path) ['', '/opt/tljh/user/lib/python39.zip', '/opt/main/user/lib/python3.9', '/opt/main/user/lib/python3.9/lib-dynload', '/home/user/.local/lib/python3.9/site-packages', '/opt/main/user/lib/python3.9/site-packages'] Thank you for your help. I cannot find where these variables are added through conda. | The user site packages (at ~/.local) remaining available is the default behaviour of conda envs, and even issues requesting adding an option to disable that have been repeatedly closed by conda maintainers (example). It is an unfortunate decision from Conda that the user site is included by default. You can exclude it from sys.path by setting an environment variable: export PYTHONNOUSERSITE=1 That sounds hacky but it is actually the recommended workaround from conda themselves (ref). Or, you can do what I do and just keep the user site totally empty, to avoid such issues without needing to mess around with env vars. Uninstall everything from ~/.local and use separate venvs for everything. ~/.local/bin/python3.9 -m pip uninstall -r <(~/.local/bin/python3.9 -m pip freeze) You can use a project like pipx as a replacement to install any utilities which should be globally available, this avoids polluting the user site (and conda envs) with their dependencies. Watch #7707 (make python user site-packages isolation in .local directory configurable) for possible future developments from conda here. | 4 | 5 |
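A quick hedged diagnostic that complements the answer above: it shows which file a module is actually imported from and whether the user site is active, which makes the ~/.local shadowing easy to spot:

```python
import site
import sys

import numpy

print(numpy.__version__, numpy.__file__)        # reveals a shadowing ~/.local copy
print("user site enabled:", site.ENABLE_USER_SITE)
print("user site path:", site.getusersitepackages())
print([p for p in sys.path if ".local" in p])   # empty once PYTHONNOUSERSITE=1 is set
```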
75,229,250 | 2023-1-25 | https://stackoverflow.com/questions/75229250/is-there-a-method-to-run-a-conda-environment-in-google-colab | I have a YML file for a Conda environment that runs with Python 3.8.15 (environment.yml). I am currently trying to load that file into my Google Colab, based on this answer: conda environment in google colab [google-colaboratory]. !wget -c https://repo.anaconda.com/archive/Anaconda3-2022.10-Windows-x86_64.exe !chmod +x Anaconda3-2022.10-Windows-x86_64.exe !bash ./Anaconda3-2022.10-Windows-x86_64.exe -b -f -p /usr/local And while the executable file for Anaconda was installed in my Google Drive folder, when I run the code, it turms out that Colab could not execute that file: ./Anaconda3-2022.10-Windows-x86_64.exe: ./Anaconda3-2022.10-Windows-x86_64.exe: cannot execute binary file Is there any other method that I could use to install Anaconda to work with it in Google Colab? And furthermore, how should I load my environment.yml file after getting Anaconda to run in Colab? | You cannot run a Jupyter notebook (Colab session) with a new Conda environment, but you can use Conda to augment the packages in the existing Python installation. Installation is streamlined with condacolab. See the condacolab documentation. An quick example would go something like: First Cell !pip install -q condacolab import condacolab condacolab.install() # expect a kernel restart Second Cell mamba install [pkg1 pkg2 ...] # or, if you have a YAML mamba env update -n base -f env.yaml In the YAML case, you must match the same version of Python with the Colab version. | 6 | 12 |
75,236,716 | 2023-1-25 | https://stackoverflow.com/questions/75236716/anyway-to-get-rid-of-math-floor-for-positive-odd-integers-with-sympy-simplify | I'm trying to simplify some expressions of positive odd integers with sympy. But sympy refuses to expand floor, making the simplification hard to proceed. To be specific, x is a positive odd integer (actually in my particular use case, the constraint is even stricter. But sympy can only do odd and positive, which is fine). x // 2 should be always equal to (x - 1) / 2. Example code here: from sympy import Symbol, simplify x = Symbol('x', odd=True, positive=True) expr = x // 2 - (x - 1) / 2 print(simplify(expr)) prints -x/2 + floor(x/2) + 1/2. Ideally it should print 0. What I've tried so far: Simplify (x - 1) // 2 - (x - 1) / 2. Turns out to be 0. Multiply the whole thing by 2: 2 * (x // 2 - (x - 1) / 2). Gives me: -x + 2*floor(x/2) + 1. Try to put more weights on the FLOOR op by customizing the measure. No luck. Use sympy.core.evaluate(False) context when creating the expression. Nuh. Tune other parameters like ratio, rational, and play with other function like expand, factor, collect. Doesn't work either. EDIT: Wolfram alpha can do this. I tried to look like the assumptions of x along with some expressions. It surprises me that (x - 1) / 2).is_integer returns None, which means unknown. I'm running out of clues. I'm even looking for alternativese of sympy. Any ideas guys? | I fail to see why sympy can't simplify that. But, on another hand, I've discovered the existence of odd parameter just now, with your question. What I would have done, without the knowledge of odd is k = Symbol('k', positive=True, integer=True) x = 2*k-1 expr = x // 2 - (x - 1) / 2 Then, expr is 0, without even the need to simplify. So, can't say why you way doesn't work (and why that odd parameter exists if it is not used correctly to guess that x-1 is even, and therefore (x-1)/2 integer). But, in the meantime, my way of defining an odd integer x works. | 3 | 4 |
75,231,984 | 2023-1-25 | https://stackoverflow.com/questions/75231984/read-sql-in-chunks-with-polars | I am trying to read a large database table with polars. Unfortunately, the data is too large to fit into memory and the code below eventually fails. Is there a way in polars how to define a chunksize, and also write these chunks to parquet, or use the lazy dataframe interface to keep the memory footprint low? import polars as pl df = pl.read_sql("SELECT * from TABLENAME", connection_string) df.write_parquet("output.parquet") | Yes and no. There's not a predefined method to do it but you can certainly do it yourself. You'd do something like: rows_at_a_time=1000 curindx=0 while True: df = pl.read_sql(f"SELECT * from TABLENAME limit {curindx},{rows_at_a_time}", connection_string) if df.shape[0]==0: break df.write_parquet(f"output{curindx}.parquet") curindx+=rows_at_a_time ldf=pl.concat([pl.scan_parquet(x) for x in os.listdir(".") if "output" in x and "parquet" in x]) This borrows limit syntax from this answer assuming you're using mysql or a db that has the same syntax which isn't trivial assumption. You may need to do something like this if not using mysql. Otherwise you just read your table in chunks, saving each chunk to a local file. When the chunk you get back from your query has 0 rows then it stops looping and loads all the files to a lazy df. You can almost certainly (and should) increase the rows_at_a_time to something greater than 1000 but that's dependent on your data and computer memory. | 5 | 2 |
75,232,363 | 2023-1-25 | https://stackoverflow.com/questions/75232363/pandas-dataframe-aggregating-function-to-count-also-nan-values | I have the following dataframe print(A) Index 1or0 0 1 0 1 2 0 2 3 0 3 4 1 4 5 1 5 6 1 6 7 1 7 8 0 8 9 1 9 10 1 And I have the following code (Pandas Dataframe count occurrences that only happen immediately), which counts the occurrences of values that happen immediately one after another. ser = A["1or0"].ne(A["1or0"].shift().bfill()).cumsum() B = ( A.groupby(ser, as_index=False) .agg({"Index": ["first", "last", "count"], "1or0": "unique"}) .set_axis(["StartNum", "EndNum", "Size", "Value"], axis=1) .assign(Value= lambda d: d["Value"].astype(str).str.strip("[]")) ) print(B) StartNum EndNum Size Value 0 1 3 3 0 1 4 7 4 1 2 8 8 1 0 3 9 10 2 1 The issue is that when NaN values occur, the code doesn't put them together in one interval; it always counts them as one-sized intervals instead of e.g. 3 print(A2) Index 1or0 0 1 0 1 2 0 2 3 0 3 4 1 4 5 1 5 6 1 6 7 1 7 8 0 8 9 1 9 10 1 10 11 NaN 11 12 NaN 12 13 NaN print(B2) StartNum EndNum Size Value 0 1 3 3 0 1 4 7 4 1 2 8 8 1 0 3 9 10 2 1 4 11 11 1 NaN 5 12 12 1 NaN 6 13 13 1 NaN But I want B2 to be the following print(B2Wanted) StartNum EndNum Size Value 0 1 3 3 0 1 4 7 4 1 2 8 8 1 0 3 9 10 2 1 4 11 13 3 NaN What do I need to change so that it also works with NaN? | First fillna with a value that is not possible in the data (here -1) before creating your grouper: group = A['1or0'].fillna(-1).diff().ne(0).cumsum() # or # s = A['1or0'].fillna(-1) # group = s.ne(s.shift()).cumsum() B = (A.groupby(group, as_index=False) .agg(**{'StartNum': ('Index', 'first'), 'EndNum': ('Index', 'last'), 'Size': ('1or0', 'size'), 'Value': ('1or0', 'first') }) ) Output: StartNum EndNum Size Value 0 1 3 3 0.0 1 4 7 4 1.0 2 8 8 1 0.0 3 9 10 2 1.0 4 11 13 3 NaN | 3 | 5 |
75,184,430 | 2023-1-20 | https://stackoverflow.com/questions/75184430/how-to-redirect-the-user-to-another-page-after-login-using-javascript-fetch-api | Using the following JavaScript code, I make a request to obtain the firebase token, and then a POST request to my FastAPI backend, using the JavaScript fetch() method, in order to login the user. Then, in the backend, as can be seen below, I check whether or not the token is valid, and if so, return a redirect (i.e., RedirectResponse) to another webpage. The problem is that the redirect in the browser does not work, and the previous page remains. function loginGoogle() { var provider = new firebase.auth.GoogleAuthProvider(); firebase.auth() //.currentUser.getToken(provider) .signInWithPopup(provider) .then((result) => { /** @type {firebase.auth.OAuthCredential} */ var credential = result.credential; // This gives you a Google Access Token. You can use it to access the Google API. var token = credential.idToken; // The signed-in user info. var user = result.user; // ... }) .catch((error) => { // Handle Errors here. var errorCode = error.code; var errorMessage = error.message; // The email of the user's account used. var email = error.email; // The firebase.auth.AuthCredential type that was used. var credential = error.credential; // ... }); firebase.auth().currentUser.getIdToken(true).then(function(idToken) { console.log(idToken) const token = idToken; const headers = new Headers({ 'x-auth-token': token }); const request = new Request('http://localhost:8000/login', { method: 'POST', headers: headers }); fetch(request) .then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error(error)); }) The endpoint in the backend that returns the login page that contains the HTML code with the button and the loginGoogle function: @router.get("/entrar") def login(request: Request): return templates.TemplateResponse("login.html", {"request": request}) I call this POST endpoint and then a redirect to /1 which is a GET route, and with status_code being 303, which is how @tiangolo specifies it in the doc to redirect from a POST to a GET route. @router.post("/login") async def login(x_auth_token: str = Header(None)): valid_token = auth.verify_id_token(x_auth_token) if valid_token: print("token validado") return RedirectResponse(url="/1", status_code=status.HTTP_303_SEE_OTHER) else: return {"msg": "Token no recibido"} This is the GET endpoint to which the user should be redirected, but it doesn't: @app.get("/1") def get_landing(request: Request): return templates.TemplateResponse("landing.html", {"request": request}) Swagger screenshot of testing the /login endpoint: | Option 1 - Returning RedirectResponse When using the fetch() function to make an HTTP request to a server that responds with a RedirectResponse, the redirect response will be automatically followed on client side (as explained here), as the redirect mode is set to follow, by default, in the fetch() function. This means that the user won't be redirected to the new URL, but rather fetch() will follow that redirection behind the scenes and return the response from the redirect URL. You might expected that setting redirect to manual instead would allow you to get the redirect URL (contained in the Location response header) and manually navigate to the new page, but this is not the case, as described here. 
However, you could still use the default redirect mode in the fetch() request, i.e., follow (no need to manually specify it, as it is already set by defaultβin the example below, it is manually defined for clarity purposes only), and then use the Response.redirected property, in order to check whether or not the response is the result of a request that you made which was redirected. If so, you can use Response.url, which will return the "final URL obtained after any redirects", and using JavaScript's window.location.href, you can redirect the user to the target URL (i.e., the redirect page). Instead of window.location.href, one could also use window.location.replace(). The difference between the two is that when using location.replace(), after navigating to the given URL, the current page will not be saved in the session historyβmeaning that the user won't be able to use the back button to navigate to it (which might actually be the way one likes their frontend to behave in such cases). Working Example app.py from fastapi import FastAPI, Request, status, Depends from fastapi.templating import Jinja2Templates from fastapi.responses import RedirectResponse from fastapi.security import OAuth2PasswordRequestForm app = FastAPI() templates = Jinja2Templates(directory='templates') @app.get('/') async def index(request: Request): return templates.TemplateResponse('index.html', {'request': request}) @app.post('/login') async def login(data: OAuth2PasswordRequestForm = Depends()): # perform some validation, using data.username and data.password credentials_valid = True if credentials_valid: return RedirectResponse(url='/welcome',status_code=status.HTTP_302_FOUND) else: return 'Validation failed' @app.get('/welcome') async def welcome(): return 'You have been successfully redirected' templates/index.html <!DOCTYPE html> <html> <head> <script> document.addEventListener("DOMContentLoaded", (event) => { document.getElementById("myForm").addEventListener("submit", function (e) { e.preventDefault(); // Cancel the default action var formElement = document.getElementById('myForm'); var data = new FormData(formElement); fetch('/login', { method: 'POST', redirect: 'follow', body: data, }) .then(res => { if (res.redirected) { window.location.href = res.url; // or, location.replace(res.url); return; } else return res.text(); }) .then(data => { document.getElementById("response").innerHTML = data; }) .catch(error => { console.error(error); }); }); }); </script> </head> <body> <form id="myForm"> <label for="username">Username:</label><br> <input type="text" id="username" name="username" value="[email protected]"><br> <label for="password">Password:</label><br> <input type="password" id="password" name="password" value="pa55w0rd"><br><br> <input type="submit" value="Submit" class="submit"> </form> <div id="response"></div> </body> </html> Option 2 - Returning JSON response containing the redirect URL Instead of returning a RedirectResponse from the server, you could have the server returning a normal JSON response with the URL included in the JSON object. On client side, you could check whether the JSON object returned from the serverβas a result of the fetch() requestβincludes the url key, and if so, retrieve its value and redirect the user to the target URL, using JavaScript's window.location.href or window.location.replace(). 
Alternatively, one could add the redirect URL to a custom response header on server side (see examples here and here on how to set a response header in FastAPI), and access it on client side, after posting the request using fetch(), as shown here (Note that if you were doing a cross-origin request, you would have to set the Access-Control-Expose-Headers response header on server side (see examples here and here, as well as FastAPI's CORSMiddleware documentation on how to use the expose_headers argument), indicating that your custom response header, which includes the redirect URL, should be made available to JS scripts running in the browser, since only the CORS-safelisted response headers are exposed by default). Working Example app.py from fastapi import FastAPI, Request, status, Depends from fastapi.templating import Jinja2Templates from fastapi.security import OAuth2PasswordRequestForm app = FastAPI() templates = Jinja2Templates(directory='templates') @app.get('/') async def index(request: Request): return templates.TemplateResponse('index.html', {'request': request}) @app.post('/login') async def login(data: OAuth2PasswordRequestForm = Depends()): # perform some validation, using data.username and data.password credentials_valid = True if credentials_valid: return {'url': '/welcome'} else: return 'Validation failed' @app.get('/welcome') async def welcome(): return 'You have been successfully redirected' templates/index.html <!DOCTYPE html> <html> <head> <script> document.addEventListener("DOMContentLoaded", (event) => { document.getElementById("myForm").addEventListener("submit", function (e) { e.preventDefault(); // Cancel the default action var formElement = document.getElementById('myForm'); var data = new FormData(formElement); fetch('/login', { method: 'POST', body: data, }) .then(res => res.json()) .then(data => { if (data.url) window.location.href = data.url; // or, location.replace(data.url); else document.getElementById("response").innerHTML = data; }) .catch(error => { console.error(error); }); }); }); </script> </head> <body> <form id="myForm"> <label for="username">Username:</label><br> <input type="text" id="username" name="username" value="[email protected]"><br> <label for="password">Password:</label><br> <input type="password" id="password" name="password" value="pa55w0rd"><br><br> <input type="submit" value="Submit" class="submit"> </form> <div id="response"></div> </body> </html> Option 3 - Using HTML <form> in the frontend If using a fetch() request is not a requirement for your project, you could instead use a normal HTML <form> and have the user click on the submit button to send the POST request to the server. In this way, using a RedirectResponse on server side (as demonstrated in Option 1) would result in having the user on client side automatically be redirected to the target URL, without any further action. Working examples can be found in this answer, as well as this answer and this answer. | 5 | 8 |
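The answer above mentions, as a variant of Option 2, putting the redirect URL in a custom response header instead of the JSON body, but does not show code for it. Below is a minimal server-side sketch of that variant; the header name x-redirect-to and the /welcome URL are illustrative choices rather than anything from the original answer, and the client-side fetch() handling stays as shown above except that it reads res.headers.get('x-redirect-to') and navigates there.

from fastapi import FastAPI, Response

app = FastAPI()

@app.post('/login')
async def login(response: Response):
    credentials_valid = True  # stand-in for real credential validation
    if credentials_valid:
        # put the target URL in a custom header; the JS client reads it
        # with res.headers.get('x-redirect-to') and sets window.location
        response.headers['x-redirect-to'] = '/welcome'
        return {'status': 'ok'}
    return {'status': 'invalid credentials'}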
75,211,934 | 2023-1-23 | https://stackoverflow.com/questions/75211934/how-can-i-use-when-then-and-otherwise-with-multiple-conditions-in-polars | I have a data set with three columns. Column A is to be checked for strings. If the string matches foo or spam, the values in the same row for the other two columns L and G should be changed to XX. For this I have tried the following. df = pl.DataFrame( { "A": ["foo", "ham", "spam", "egg",], "L": ["A54", "A12", "B84", "C12"], "G": ["X34", "C84", "G96", "L6",], } ) print(df) shape: (4, 3) ββββββββ¬ββββββ¬ββββββ β A β L β G β β --- β --- β --- β β str β str β str β ββββββββͺββββββͺββββββ‘ β foo β A54 β X34 β β ham β A12 β C84 β β spam β B84 β G96 β β egg β C12 β L6 β ββββββββ΄ββββββ΄ββββββ expected outcome shape: (4, 3) ββββββββ¬ββββββ¬ββββββ β A β L β G β β --- β --- β --- β β str β str β str β ββββββββͺββββββͺββββββ‘ β foo β XX β XX β β ham β A12 β C84 β β spam β XX β XX β β egg β C12 β L6 β ββββββββ΄ββββββ΄ββββββ I tried this df = df.with_columns( pl.when((pl.col("A") == "foo") | (pl.col("A") == "spam")) .then((pl.col("L")= "XX") & (pl.col( "G")= "XX")) .otherwise((pl.col("L"))&(pl.col( "G"))) ) However, this does not work. Can someone help me with this? | For setting multiple columns to the same value you could use: df.with_columns( pl.when(pl.col("A").is_in(["foo", "spam"])) .then(pl.lit("XX")) .otherwise(pl.col("L", "G")) .name.keep() ) shape: (4, 3) ββββββββ¬ββββββ¬ββββββ β A β L β G β β --- β --- β --- β β str β str β str β ββββββββͺββββββͺββββββ‘ β foo β XX β XX β β ham β A12 β C84 β β spam β XX β XX β β egg β C12 β L6 β ββββββββ΄ββββββ΄ββββββ .is_in() can be used instead of multiple == x | == y chains If you need different values, you can pack them into a struct and extract/unnest the fields. df.with_columns( pl.when(pl.col("A").is_in(["foo", "spam"])) .then(pl.struct(L = pl.lit("AAA"), G = pl.lit("BBB"))) .otherwise(pl.struct("L", "G")) .struct.field("L", "G") # or .struct.unnest() ) shape: (4, 3) ββββββββ¬ββββββ¬ββββββ β A β L β G β β --- β --- β --- β β str β str β str β ββββββββͺββββββͺββββββ‘ β foo β AAA β BBB β β ham β A12 β C84 β β spam β AAA β BBB β β egg β C12 β L6 β ββββββββ΄ββββββ΄ββββββ | 9 | 10 |
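Since the question title asks about multiple conditions in general, here is a short additional sketch, not part of the accepted answer, showing that when/then can be chained like if/elif/else when different conditions need different replacement values; the values XX and YY are arbitrary examples, and pl.lit() is used so the strings are not interpreted as column names in recent polars versions.

import polars as pl

df = pl.DataFrame({
    "A": ["foo", "ham", "spam", "egg"],
    "L": ["A54", "A12", "B84", "C12"],
})

# chained when/then/otherwise evaluates the branches top to bottom
df = df.with_columns(
    pl.when(pl.col("A") == "foo").then(pl.lit("XX"))
    .when(pl.col("A") == "spam").then(pl.lit("YY"))
    .otherwise(pl.col("L"))
    .alias("L")
)
print(df)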
75,188,523 | 2023-1-20 | https://stackoverflow.com/questions/75188523/installing-requirements-txt-in-a-venv-inside-vscode | Apart from typing out commands - is there a good way to install requirements.txt inside VSCode. I have a workspace with 2 folders containing different Python projects added - each has it's own virtual environment. I would like to run a task to execute and install the requirements to each of these. I have tried adding a task to tasks.json to try and do it for one with no success. { "version": "2.0.0", "tasks": [ { "label": "Service1: Install requirements", "type": "shell", "runOptions": {}, "command": "'${workspaceFolder}/sources/api/.venv/Scripts/activate'; pip install -r '${workspaceFolder}/sources/api/requirements.txt'", "problemMatcher": [] } ] } This runs - but you can see it refer to my global Python packages h:\program files\python311\lib\site-packages - not the virtual environment. I am running on Windows for this - but would like it to work eventually with Linux. | I've written a more detailed post before, but as Andez mentioned in comments, this is also a suitable post for the answer. This task can be ran in Windows, Linux and MacOS. { "version": "2.0.0", "tasks": [ { "label": "Build Python Env", "type": "shell", "group": { "kind": "build", "isDefault": true }, "options": { "cwd": "${workspaceFolder}" }, "linux": { "command": "python3 -m venv py_venv && source py_venv/bin/activate && python3 -m pip install --upgrade pip && python3 -m pip install -r requirements.txt && deactivate py_venv" }, "osx": { "command": "python3 -m venv py_venv && source py_venv/bin/activate && python3 -m pip install --upgrade pip && python3 -m pip install -r requirements.txt && deactivate py_venv" }, "windows": { "options": { "shell": { "executable": "C:\\Windows\\system32\\cmd.exe", "args": [ "/d", "/c" ] }, }, "command": "(if not exist py_venv py -m venv py_venv) && .\\py_venv\\Scripts\\activate.bat && py -m pip install --upgrade pip && py -m pip install -r requirements.txt && deactivate py_venv" }, "problemMatcher": [] } ] } | 5 | 5 |
75,198,237 | 2023-1-22 | https://stackoverflow.com/questions/75198237/stream-a-zst-compressed-file-line-by-line | I am trying to sift through a big database that is compressed in a .zst. I am aware that I can simply just decompress it and then work on the resulting file, but that uses up a lot of space on my ssd and takes 2+ hours so I would like to avoid that if possible. Often when I work with large files I would stream it line by line with code like with open(filename) as f: for line in f.readlines(): do_something(line) I know gzip has this with gzip.open(filename,'rt') as f: for line in f: do_something(line) but it doesn't seem to work with .zsf, so I am wondering if there're any libraries that can decompress and stream the decompressed data in a similar way. For example: with zstlib.open(filename) as f: for line in f.zstreadlines(): do_something(line) | Knowing which package to use and what the corresponding docs are can be a bit confusing, as there appears to be several Python bindings to the actual Zstandard library. Below, I am referring to the library by Gregory Szorc, that I installed from condas default channel with: conda install zstd # check: conda list zstd # # Name Version Build Channel # zstd 1.5.5 hc292b87_0 (even though the docs say to install with pip, which I don't unless there is no other way, as I like my conda environments to remain usable). I am only inferring that this version is the one from G. Szorc, based on the comments in the __init__.py file: # Copyright (c) 2017-present, Gregory Szorc # All rights reserved. # # This software may be modified and distributed under the terms # of the BSD license. See the LICENSE file for details. """Python interface to the Zstandard (zstd) compression library.""" from __future__ import absolute_import, unicode_literals # This module serves 2 roles: # # 1) Export the C or CFFI "backend" through a central module. # 2) Implement additional functionality built on top of C or CFFI backend. Thus, I think that the corresponding documentation is here. In any case, quick test after install: import zstandard as zstd with zstd.open('test.zstd', 'w') as f: for i in range(10_000): f.write(f'foo {i} bar\n') with zstd.open('test.zstd', 'r') as f: for i, line in enumerate(f): if i % 1000 == 0: print(f'line {i:4d}: {line}', end='') Produces: line 0: foo 0 bar line 1000: foo 1000 bar line 2000: foo 2000 bar line 3000: foo 3000 bar line 4000: foo 4000 bar line 5000: foo 5000 bar line 6000: foo 6000 bar line 7000: foo 7000 bar line 8000: foo 8000 bar line 9000: foo 9000 bar Notes: if the file was written in binary (not text), then use mode='rb', same as a regular file. The underlying file is always written in binary mode, but if we use text mode for open, then according to open's doc, "(...) an io.TextIOWrapper if opened for reading or writing in text mode". notice that I use the iterator of f, not readlines(). From the inline docstring, they make it sound like readlines() returns a list of lines from the file, i.e. the whole thing is slurped in memory. With the iterator, it is more likely that only portions of the file are in memory at any moment (in zstd's buffer). Reading this part of the docs however, I am less sure of the above. Stay tuned... (Edit: tested empirically, it holds, see below). 
Addendum About notes 2 and 3 above: I tested empirically, by changing the number of lines to 100 million and comparing the memory usage of the two versions (using htop): Streaming version with zstd.open('test.zstd', 'r') as f: for i, line in enumerate(f): if i % 10_000_000 == 0: print(f'line {i:8d}: {line}', end='') --no bump in memory usage. Readlines version with zstd.open('test.zstd', 'r') as f: for i, line in enumerate(f.readlines()): if i % 10_000_000 == 0: print(f'line {i:8d}: {line}', end='') --bump in memory usage by a few GBs. This may be specific to the version installed (1.5.5). | 7 | 5 |
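For very large archives, the same zstandard package also exposes a lower-level streaming decompressor that can be wrapped in a text stream; this is a minimal sketch of that alternative, not taken from the answer above, and it assumes the test.zstd file created in the answer's example.

import io
import zstandard as zstd

# decompress incrementally and iterate line by line without
# materialising the whole decompressed file in memory
with open("test.zstd", "rb") as raw:
    dctx = zstd.ZstdDecompressor()
    with dctx.stream_reader(raw) as reader:
        text = io.TextIOWrapper(reader, encoding="utf-8")
        for line in text:
            pass  # do_something(line)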
75,204,255 | 2023-1-22 | https://stackoverflow.com/questions/75204255/how-to-force-a-platform-wheel-using-build-and-pyproject-toml | I am trying to force a Python3 non-universal wheel I'm building to be a platform wheel, despite not having any native build steps that happen during the distribution-packaging process. The wheel will include an OS-specific shared library, but that library is built and copied into my package directory by a larger build system that my package knows nothing about. By the time my Python3 package is ready to be built into a wheel, my build system has already built the native shared library and copied it into the package directory. This SO post details a solution that works for the now-deprecated setup.py approach, but I'm unsure how to accomplish the same result using the new and now-standard build / pyproject.toml system: mypackage/ mypackage.py # Uses platform.system and importlib to load the local OS-specific library pyproject.toml mysharedlib.so # Or .dylib on macOS, or .dll on Windows Based on the host OS performing the build, I would like the resulting wheel to be manylinux, macos, or windows. I build with python3 -m build --wheel, and that always emits mypackage-0.1-py3-none-any.whl. What do I have to change to force the build to emit a platform wheel? | Update 2 Sept 2023: -C=--build-option=--plat {your-platform-tag} no longer works, so I added my preferred replacement to the end of the list. ========== OK, after some research and reading of code, I can present a bit of information and a few solutions that might meet other people's needs, summarized here: Firstly, pyproject.toml is not mutually exclusive from setup.py. setuptools will complain about deprecation if you create a distribution package via python3 setup.py ... and no pyproject.toml file is present. However, setup.py is still around and available, but it's a mistake to duplicate project configuration values (name, version, etc). So, put as much as your package will allow inside your pyproject.toml file, and use setup.py for things like overriding the Distribution class, or overriding the bdist_wheel module, etc. As far as creating platform wheels, there are a few approaches that work, with pros and cons: Override the bdist_wheel command class in setup.py as described here and set self.root_is_pure to False in the finalize_options override. This forces the python tag (e.g. cp39) to be set, along with the platform tag. Override the Distribution class in setup.py as described here and override has_ext_modules() to simply return True. This also forces the python and platform tags to be set. Add an unused minimal extension module to your packaging definition, as described here and here. This lengthens the build process and adds a useless "dummy" shared library to be dragged along wherever your wheel goes. This solution appears to not work anymore! Add the argument -C=--build-option=--plat {your-platform-tag} to the build invocation (for my case that's python -m build -w -n, for example). This leaves the Python tag untouched but you have to supply your own tag; there's no way to say "use whatever the native platform is". You can discover the exact platform tag with the command wheel.bdist_wheel.get_platform(pathlib.Path('.')) after importing the pathlib and wheel.bdist_wheel packages, but that can be cumbersome because wheel isn't a standard library package. 
Simply rename your wheel from mypkg-py3-none-any.whl to mypkg-py3-none-macosx_13_0_x86_64.whl; it appears that the platform tag is only encoded into the filename, and not into any of the package metadata that's generated during the distribution-package process. Use the wheel package utility to update the tags and turn the pure wheel into a platform wheel: python -m wheel tags --platform-tag macosx_13_0_x86_64 mypkg-py3-none-any.whl will emit a new platform wheel with the tags you want. In the end I chose the final option because it required the least amount of work: no setup.py files need to be introduced solely to accomplish this, and the build logs make it clear that a platform wheel (not a pure wheel) is being created. | 5 | 6 |
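Option 2 in the answer above (overriding the Distribution class) is only linked, not shown; a minimal sketch of how it is commonly written follows. It assumes a classic setup.py alongside pyproject.toml, which the answer itself notes is still allowed, with the project metadata continuing to live in pyproject.toml.

# setup.py -- forces the python and platform tags onto the wheel
from setuptools import setup
from setuptools.dist import Distribution


class BinaryDistribution(Distribution):
    """Tell setuptools the package ships compiled code, even though the
    shared library is produced by an external build system."""

    def has_ext_modules(self):
        return True


setup(distclass=BinaryDistribution)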
75,202,475 | 2023-1-22 | https://stackoverflow.com/questions/75202475/joblib-persistence-across-sessions-machines | Is joblib (https://joblib.readthedocs.io/en/latest/index.html) expected to be reliable across different machines, or ways of running functions, even different sessions on the same machine over time? For concreteness if you run this code in a Jupyter notebook, or as a python script, or piped to the stdin a python interpreter, you get different cache entries. The piped version seems to be a special case where you get a JobLibCollisionWarning that leads to it running every time and never reading from the cache. The other two though, end up having a different path saved in the joblib cache dir, and inside each one the same hash directory (fb65b1dace3932d1e66549411e3310b6) exists. from joblib import Memory memory = Memory('./cache', verbose=0) @memory.cache def job(x): print(f'Running with {x}') return x**2 print(job(2)) you get multiple cache entries. These entries also are in folders that contain path information (including what appears to be a tmp directory for the notebook entry, e.g. main--var-folders-3q-ht_2mtk52hl7ydxrcr87z2gr0000gn-T-ipykernel-3189892766), so it looks like if I transferred to another machine that the jobs would all be run again. I don't know how that path is reliable in the long run, it seems likely the tmpdir could change, or the ipykernel could have some other number associated with it. Is this expected? | The canonical way of using joblib's disk caching seems to require always having your function in a .py file, and not in a notebook cell (see e.g. this issue). I've found a workaround for using joblib in a jupyter notebook anyway, so that it hits the cache even if you re-run a cell or restart the notebook (which does not happen by default, as you've found). It is to manually set the __module__ of your function to some unique identifier (e.g, the notebook name). The following is a wrapper for Memory.cache that does that: def cache(mem, module, **mem_kwargs): def cache_(f): f.__module__ = module f.__qualname__ = f.__name__ return mem.cache(f, **mem_kwargs) return cache_ Usage, with your example function: from joblib import Memory mem = Memory('./cache') @cache(mem, "my_notebook_name") def job(x): print(f'Running with {x}') return x**2 This will save the function's outputs in ./cache/my_notebook_name/job/. (Had this idea while reading joblib's source, specifically get_func_name in func_inspect.py, in which the function's __module__ and __name__/__qualname__ are read). | 3 | 5 |
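The same effect as the cache() helper in the answer can be had without a wrapper by pinning the attributes directly before calling Memory.cache; a short sketch follows, where my_notebook_name is an arbitrary identifier exactly as in the answer.

from joblib import Memory

mem = Memory('./cache', verbose=0)


def job(x):
    print(f'Running with {x}')
    return x ** 2


# pin the names joblib uses to build the cache path, then cache as usual
job.__module__ = 'my_notebook_name'
job.__qualname__ = job.__name__
job = mem.cache(job)

print(job(2))  # later sessions should read from ./cache/my_notebook_name/job/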
75,227,015 | 2023-1-24 | https://stackoverflow.com/questions/75227015/error-mkdocstrings-generation-error-no-module-named | I was building a documentation site for my python project using mkdocstrings. For generating the code referece files I followed this instructions https://mkdocstrings.github.io/recipes/ I get these errors: INFO - Building documentation... INFO - Cleaning site directory INFO - The following pages exist in the docs directory, but are not included in the "nav" configuration: - reference\SUMMARY.md - reference_init_.md ... ... - reference\tests\manual_tests.md ERROR - mkdocstrings: No module named ' ' ERROR - Error reading page 'reference/init.md': ERROR - Could not collect ' ' This is my file structure: This is my docs folder: I have the same gen_ref_pages.py file shown in the page: from pathlib import Path import mkdocs_gen_files nav = mkdocs_gen_files.Nav() for path in sorted(Path("src").rglob("*.py")): module_path = path.relative_to("src").with_suffix("") doc_path = path.relative_to("src").with_suffix(".md") full_doc_path = Path("reference", doc_path) parts = tuple(module_path.parts) if parts[-1] == "__init__": parts = parts[:-1] elif parts[-1] == "__main__": continue nav[parts] = doc_path.as_posix() # with mkdocs_gen_files.open(full_doc_path, "w") as fd: ident = ".".join(parts) fd.write(f"::: {ident}") mkdocs_gen_files.set_edit_path(full_doc_path, path) with mkdocs_gen_files.open("reference/SUMMARY.md", "w") as nav_file: # nav_file.writelines(nav.build_literate_nav()) # ``` This is my mkdocs.yml: ``` site_name: CA Prediction Docs theme: name: "material" palette: primary: deep purple logo: assets/logo.png favicon: assets/favicon.png features: - navigation.instant - navigation.tabs - navigation.expand - navigation.top # - navigation.sections - search.highlight - navigation.footer icon: repo: fontawesome/brands/git-alt copyright: Copyright © 2022 - 2023 Ezequiel GonzΓ‘lez extra: social: - icon: fontawesome/brands/github link: https://github.com/ezegonmac - icon: fontawesome/brands/linkedin link: https://www.linkedin.com/in/ezequiel-gonzalez-macho-329583223/ repo_url: https://github.com/ezegonmac/TFG-CellularAutomata repo_name: ezegonmac/TFG-CellularAutomata plugins: - search - gen-files: scripts: - docs/gen_ref_pages.py - mkdocstrings nav: - Introduction: index.md - Getting Started: getting-started.md - API Reference: reference.md # - Reference: reference/ - Explanation: explanation.md | Following the mkdocstrings documentation I also received that same error. After some tinkering, I was able to successfully serve the docs by adding paths: [src] (see below). mkdocs.yml plugins: - search - gen-files: scripts: - docs/gen_ref_pages.py - mkdocstrings: default_handler: python handlers: python: paths: [src] | 7 | 3 |
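A quick way to see why paths: [src] fixes the 'No module named' errors is to check importability the same way the mkdocstrings handler does; a small diagnostic sketch, where my_package is a hypothetical stand-in for whatever top-level package lives under src/ in the project.

import importlib.util
import sys

# mkdocstrings can only document what it can import; `paths: [src]`
# effectively puts src/ on the import search path, mimicked here
sys.path.insert(0, "src")

spec = importlib.util.find_spec("my_package")  # hypothetical package name
print("importable" if spec else "still not importable -- check the package layout")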
75,157,428 | 2023-1-18 | https://stackoverflow.com/questions/75157428/redis-exceptions-dataerror-invalid-input-of-type-dict-convert-to-a-bytes-s | Goal: store a dict() or {} as the value for a key-value pair, to set() onto Redis. Code import redis r = redis.Redis() value = 180 my_dict = dict(bar=value) r.set('foo', my_dict) redis.exceptions.DataError: Invalid input of type: 'dict'. Convert to a bytes, string, int or float first. | You cannot pass a dictionary object as a value in the set() operation to Redis. However, we can use either pickle or json to get the Bytes of an object. Whichever you already have imported would be optimal, imho. Pickle Serialize to pickle (with pickle.dumps) pre-set() import pickle my_dict = {'a': 1, 'b': 2} dict_bytes = pickle.dumps(my_dict) r.set('my_key', dict_bytes) Deserialize the object (dict) (with pickle.loads) post-get(): dict_bytes = r.get('my_key') my_dict = pickle.loads(dict_bytes) JSON string Serialize to JSON string (with json.dumps) pre-set() import json my_dict = {'a': 1, 'b': 2} dict_str = json.dumps(my_dict) r.set('my_key', dict_str) Deserialize the object (dict) (with json.loads) post-get(): dict_str = r.get('my_key') my_dict = json.loads(dict_str) | 8 | 17 |
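When the dictionary is flat and its values are simple scalars, a Redis hash is another option besides pickling or JSON-encoding; a brief added sketch, assuming the same local Redis instance as in the answer, and noting that values come back as bytes and must be converted on read.

import redis

r = redis.Redis()
my_dict = {'bar': 180, 'baz': 42}

# store each key/value pair as a field of a Redis hash
r.hset('foo', mapping=my_dict)

# hgetall returns bytes for both fields and values
restored = {k.decode(): int(v) for k, v in r.hgetall('foo').items()}
print(restored)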
75,215,780 | 2023-1-23 | https://stackoverflow.com/questions/75215780/implement-qcut-functionality-using-polars | I have been using polars but it seems like it lacks qcut functionality as pandas do. I am not sure about the reason but is it possible to achieve the same effect as pandas qcut using current available polars functionalities? The following shows an example about what I can do with pandas qcut. import pandas as pd data = pd.Series([11, 1, 2, 2, 3, 4, 5, 1, 2, 3, 4, 5]) pd.qcut(data, [0, 0.2, 0.4, 0.6, 0.8, 1], labels=['q1', 'q2', 'q3', 'q4', 'q5']) The results are as follows: 0 q5 1 q1 2 q1 3 q1 4 q3 5 q4 6 q5 7 q1 8 q1 9 q3 10 q4 11 q5 dtype: category So, I am curious how can I get the same result by using polars? Thanks for your help. | Update: Series.qcut was added in polars version 0.16.15 data = pl.Series([11, 1, 2, 2, 3, 4, 5, 1, 2, 3, 4, 5]) data.qcut([0.2, 0.4, 0.6, 0.8], labels=['q1', 'q2', 'q3', 'q4', 'q5'], maintain_order=True) shape: (12, 3) ββββββββ¬ββββββββββββββ¬βββββββββββ β β break_point β category β β --- β --- β --- β β f64 β f64 β cat β ββββββββͺββββββββββββββͺβββββββββββ‘ β 11.0 β inf β q5 β β 1.0 β 2.0 β q1 β β 2.0 β 2.0 β q1 β β 2.0 β 2.0 β q1 β β β¦ β β¦ β β¦ β β 2.0 β 2.0 β q1 β β 3.0 β 3.6 β q3 β β 4.0 β 4.8 β q4 β β 5.0 β inf β q5 β ββββββββ΄ββββββββββββββ΄βββββββββββ Old answer: From what I can tell .qcut() uses the linear quantile of the bin values? If so, you could implement that part "manually": import polars as pl data = pl.Series([11, 1, 2, 2, 3, 4, 5, 1, 2, 3, 4, 5]) bins = [0.2, 0.4, 0.6, 0.8] labels = ["q1", "q2", "q3", "q4", "q5"] pl.cut(data, bins=[data.quantile(val, "linear") for val in bins], labels=labels) shape: (12, 3) ββββββββ¬ββββββββββββββ¬βββββββββββ β | break_point | category β β --- | --- | --- β β f64 | f64 | cat β ββββββββͺββββββββββββββͺβββββββββββ‘ β 1.0 | 2.0 | q1 β β 1.0 | 2.0 | q1 β β 2.0 | 2.0 | q1 β β 2.0 | 2.0 | q1 β β 2.0 | 2.0 | q1 β β 3.0 | 3.6 | q3 β β 3.0 | 3.6 | q3 β β 4.0 | 4.8 | q4 β β 4.0 | 4.8 | q4 β β 5.0 | inf | q5 β β 5.0 | inf | q5 β β 11.0 | inf | q5 β ββββββββ΄ββββββββββββββ΄βββββββββββ | 6 | 10 |
75,202,383 | 2023-1-22 | https://stackoverflow.com/questions/75202383/raise-packaging-version-invalidversion-linux-pip | after updating pip and setuptools==66.0, pip stopped working and responds to all attempts to call it like this: Traceback (most recent call last): File "/usr/bin/pip", line 6, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3249, in <module> def _initialize_master_working_set(): File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3223, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 3261, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 619, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 632, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 1044, in __init__ self.scan(search_path) File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 1077, in scan self.add(dist) File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 1096, in add dists.sort(key=operator.attrgetter('hashcmp'), reverse=True) File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2631, in hashcmp self.parsed_version, File "/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py", line 2685, in parsed_version raise packaging.version.InvalidVersion(f"{str(ex)} {info}") from None pkg_resources.extern.packaging.version.InvalidVersion: Invalid version: '0.23ubu ntu1' (package: distro-info) thanks | This turns out to be by design by the setuptools developers, please see some commentary on it at https://github.com/pypa/setuptools/issues/3772#issuecomment-1384342813. In short, it's by been removed to enforce strict semantic versioning of packages. If you can't upgrade the versioning to be semantic versioning compatible, please pin setuptools to version < 66. Here is Python's versioning specification: https://peps.python.org/pep-0440/ | 5 | 8 |
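If it is unclear which installed package carries the non-PEP 440 version string (here distro-info with '0.23ubuntu1'), a few lines of Python can list every offender before deciding whether to pin setuptools below 66 or repair the package; the snippet below is an added diagnostic, not part of the accepted answer.

# list installed distributions whose version strings setuptools >= 66 rejects
from importlib.metadata import distributions
from packaging.version import Version, InvalidVersion

for dist in distributions():
    try:
        Version(dist.version or "")
    except InvalidVersion:
        print(dist.metadata["Name"], dist.version)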
75,179,679 | 2023-1-20 | https://stackoverflow.com/questions/75179679/syntax-highlighting-for-python-docstrings-in-neovim-treesitter | I'm using Treesitter with Neovim v0.8.2 and Python. With a default configuration of those 3, python docstrings are highlighted as strings, and I'd like to highlight them as comments. I've tried creating a ~/.config/nvim/after/syntax/python.vim file with syn region Comment start=/"""/ end=/"""/ and I expected """<things here>""" to be highlighted as comments. I'm guessing this is because treesitter is disabling syntax highlighting, so on that note has anyone been able to add custom highlighting rules to Treesitter or after it? | checkout https://github.com/nvim-treesitter/nvim-treesitter/issues/4392 it's quite the read, but you can override how treesitter parses the objects in the document/buffer, can't take credit for the below solution, but should get you where you need to be. after/queries/python/highlights.scm extends ; Module docstring (module . (expression_statement (string) @comment)) ; Class docstring (class_definition body: (block . (expression_statement (string) @comment))) ; Function/method docstring (function_definition body: (block . (expression_statement (string) @comment))) ; Attribute docstring ((expression_statement (assignment)) . (expression_statement (string) @comment)) | 4 | 1 |
75,224,636 | 2023-1-24 | https://stackoverflow.com/questions/75224636/importerror-cannot-import-name-int-from-numpy | I am trying to import the sklearn library by writing code like from sklearn.preprocessing import MinMaxScaler, but it kept showing the same error. I tried uninstalling and reinstalling, but no change. The command prompt is also giving the same error. Recently I installed some Python libraries, but that never affected my environment. I also tried running the code in a Jupyter notebook. When I tried to import numpy like import numpy as np, it ran successfully. So the problem is only with sklearn. Also, I have worked with sklearn before but have never seen such an error. | Run pip3 install --upgrade scipy, or upgrade whatever tool tried to import np.int and failed. np.int is the same as Python's normal int, and scipy was simply outdated in my case. | 6 | 13 |
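If upgrading scipy (or whichever library still references np.int) is not possible right away, a stopgap shim can restore the removed alias before the old import runs; this is an added workaround sketch, not from the accepted answer, and upgrading remains the proper fix.

import numpy as np

# np.int was deprecated in NumPy 1.20 and removed in 1.24; it was only
# ever an alias for the built-in int, so restore it temporarily
if not hasattr(np, "int"):
    np.int = int

from sklearn.preprocessing import MinMaxScaler  # the failing import should now succeed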
75,209,217 | 2023-1-23 | https://stackoverflow.com/questions/75209217/printing-a-list-within-a-list-as-a-string-on-new-lines | I'm really struggling with this issue, and can't seem to find an answer anywhere. I've got a text file which has name of the station and location, the task is to print out the names of the stations all underneath each other in order and same for the locations. In my text file the names of the stations are always made up of two words and the location is 3 words. text_file = "London Euston 12 London 56, Aylesbury Vale 87 Parkway 99, James Cook 76 University 87, Virginia Water 42 Surrey 78" Desired outcome would be: Stations: London Euston Aylesbury Vale James Cook Virginia Water Locations: 12 London 56 87 Parkway 99 76 University 87 42 Surrey 78 my current code: replaced = text_file.replace(","," ") replaced_split = replaced.split() i = 0 b = 2 stations = [] locations = [] while b < len(replaced_split): locations.append(replaced_split[b:b+3]) b += 5 while i < len(replaced_split): stations.append(replaced_split[i:i+2]) i += 5 for x in range(len(stations)): print(stations[x]) for y in range(len(locations)): print(dates[y]) The outcome I'm receiving is printing lists out: ['London', 'Euston'] ['Aylesbury', 'Vale'] ['James', 'Cook'] ['Virginia', 'Water'] ['12', 'London', '56'] ['87', 'Parkway', '99'] ['76', 'University', '87'] ['42', 'Surrey', '78'] | This is a simple a straight forward approach using for-loop instead of while loop. What the code does: It splits your string into substrings, where every substring is separated by comma and space. After that it splits those substrings again by space then joins the first two elements of each substring to create station and appends it to the stations list, and finally joins the rest of the substring leaving the first two elements as the location and appends that to the locations list. Now you loop the lists to print their elements stations = [] locations = [] for char in text_file.split(", "): parts = char.split(" ") stations.append(" ".join(parts[:2])) locations.append(" ".join(parts[2:])) print("Stations:") for station in stations: print(station) print("\nLocations:") for location in locations: print(location) You can also choose to unpack stations and locations lists instead of using a traditional for-loop to print elements one at a time: print("Stations:") print(*stations, sep='\n') print("\nLocations:") print(*locations, sep='\n') Stations: London Euston Aylesbury Vale James Cook Virginia Water Locations: 12 London 56 87 Parkway 99 76 University 87 42 Surrey 78 | 3 | 2 |
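The same parsing can be written with extended iterable unpacking, which avoids index arithmetic entirely; a short alternative sketch (added here, not part of the accepted answer), still assuming every entry is two station words followed by a three-part location.

text_file = "London Euston 12 London 56, Aylesbury Vale 87 Parkway 99, James Cook 76 University 87, Virginia Water 42 Surrey 78"

stations, locations = [], []
for entry in text_file.split(", "):
    first, second, *rest = entry.split(" ")
    stations.append(f"{first} {second}")
    locations.append(" ".join(rest))

print("Stations:", *stations, sep="\n")
print("\nLocations:", *locations, sep="\n")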
75,180,598 | 2023-1-20 | https://stackoverflow.com/questions/75180598/typeerror-object-of-type-type-is-not-json-serializable | The code works fine in Postman and provides a valid response but fails to generate the OpenAPI/Swagger UI automatic docs. class Role(str, Enum): Internal = "internal" External = "external" class Info(BaseModel): id: int role: Role class AppInfo(Info): info: str @app.post("/api/v1/create", status_code=status.HTTP_200_OK) async def create(info: Info, apikey: Union[str, None] = Header(str)): if info: alias1 = AppInfo(info="Portal Gun", id=123, role=info.role) alias2 = AppInfo(info="Plumbus", id=123, , role=info.role) info_dict.append(alias1.dict()) info_dict.append(alias2.dict()) return {"data": info_dict} else: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail=f"Please provide the input" ) Error received: TypeError: Object of type 'type' is not JSON serializable | Issue The reason you get the following error in the console (Note that this error could also be raised by other causesβsee here): TypeError: Object of type 'type' is not JSON serializable as well as the error below in the browser, when trying to load the OpenAPI/Swagger UI autodocs at /docs: Fetch error Internal Server Error /openapi.json is due to the following line in your code: apikey: Union[str, None] = Header(str) ^^^ Solution When declaring a Header parameter (or any other type of parameter, i.e., Path, Query, Cookie, etc), the first value that is passed to the Header class constructor (i.e., __init__ method) is the default value, which can either be None or some default value based on the type you specified for the parameterβin your case that could be some string value, e.g., 'some-api-key', not the type str). Since you defined the parameter as Optional, you could simply pass None as the default value: apikey: Union[str, None] = Header(None) Please have a look at this answer and this answer for more details on Optional parameters in FastAPI. | 3 | 7 |
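For completeness, this is roughly what the corrected endpoint from the question looks like once a default value (rather than the str type) is passed to Header; it is a trimmed sketch rather than the poster's full code, and the 401 branch is an illustrative addition.

from typing import Union

from fastapi import FastAPI, Header, HTTPException, status

app = FastAPI()


@app.post("/api/v1/create", status_code=status.HTTP_200_OK)
async def create(apikey: Union[str, None] = Header(None)):  # default value, not the str type
    if apikey is None:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Please provide an API key header")
    return {"apikey": apikey}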
75,216,548 | 2023-1-24 | https://stackoverflow.com/questions/75216548/aws-sam-cli-throws-error-error-building-docker-image | I am trying to use the SAM CLI on my M1 Mac. I followed the steps outlined in these docs: sam init cd sam-app sam build sam deploy --guided I did not modify the code or the yaml files. I can start the local Lambda function as expected: β sam-app sam local start-api Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET] You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. If you used sam build before running local commands, you will need to re-run sam build for the changes to be picked up. You only need to restart SAM CLI if you update your AWS SAM template 2023-01-23 17:54:06 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit) But as soon as I hit the endpoint by doing: curl http://localhost:3000/hello The Lambda RIE starts throwing errors and returns a 502. Invoking app.lambda_handler (python3.9) Image was not found. Removing rapid images for repo public.ecr.aws/sam/emulation-python3.9 Building image................... Failed to build Docker Image NoneType: None Exception on /hello [GET] Traceback (most recent call last): File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 2073, in wsgi_app response = self.full_dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1518, in full_dispatch_request rv = self.handle_user_exception(e) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1516, in full_dispatch_request rv = self.dispatch_request() ^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1502, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/apigw/local_apigw_service.py", line 361, in _request_handler self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr) File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/commands/local/lib/local_lambda.py", line 137, in invoke self.local_runtime.invoke( File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/lib/telemetry/metric.py", line 315, in wrapped_func return_value = func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/lambdafn/runtime.py", line 177, in invoke container = self.create(function_config, debug_context, container_host, container_host_interface) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create container = LambdaContainer( ^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_container.py", line 93, in __init__ image = LambdaContainer._get_image( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_container.py", line 236, in _get_image return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 164, in build self._build_image( File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 279, in _build_image raise ImageBuildException("Error building docker image: {}".format(log["error"])) samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1 I found this Github issue where someone recommended to do the following: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes But it yielded no results. Does anyone know what I'm doing wrong, or how to resolve this issue so the docker container can build correctly? Thanks. | In your template.yaml, change the following lines from Architectures: - x86_64 to Architectures: - arm64 The reason why this works is that sam init defaults to the x86_64 architecture. Since you have an M1 Mac, performance will be better with a Docker image for the arm64 architecture. You can tell this is the case as well by looking at the error message which mentions aws-lambda-rie-x86_64. For the M1 Mac, you'll need aws-lambda-rie-arm64 which it will find once you change those lines above and then re-run your sam commands. Alternatively, if you want to start from scratch you can pass the -a parameter to specify the architecture like so: sam init -a arm64 and that will fix your problem as well. Note that this will have a side-effect of deploying your Lambda function onto arm64 in AWS utilizing Graviton processors. If you don't want that, you can look try other workarounds in https://github.com/aws/aws-sam-cli/issues/3891 | 11 | 11 |
75,204,360 | 2023-1-22 | https://stackoverflow.com/questions/75204360/how-to-call-an-async-function-during-debugging | I usually like to call some functions during debugging in the console just to see some quick results. However with async functions, this doesn't seem to be possible: import asyncio async def func1(): print('func1') def func2(): print('func2') async def main(): task = asyncio.create_task(func1()) await task # put a break point here asyncio.run(main()) Let's say we put a break point in the line of await task Now if I call func2() in the console it will print 'func2' perfectly fine. However, if I enter await task in console, I will get the below error: File ".../anaconda3/lib/python3.9/asyncio/base_events.py", line 585, in _check_running raise RuntimeError( RuntimeError: Cannot run the event loop while another loop is running python 3.9 pycharm 2022.3.1 Is there any way I can call the async functions in the console like the non-async functions? | You can suspend the current_task and then run the event loop until the task is done. import asyncio from asyncio import tasks def wait(task_or_coro): task = asyncio.ensure_future(task_or_coro) loop, current_task = task.get_loop(), tasks.current_task() tasks._leave_task(loop, current_task) while not task.done(): loop._run_once() tasks._enter_task(loop, current_task) return task.result() Call wait(task) instead of await task. | 3 | 5 |
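A compact end-to-end demo of the helper from the answer, showing the call you would otherwise type at the breakpoint; note that it leans on private asyncio internals (tasks._leave_task, loop._run_once), so it is a debugging convenience rather than production code.

import asyncio
from asyncio import tasks


def wait(task_or_coro):
    # helper from the answer above: drive the already-running loop
    # until the given task or coroutine completes
    task = asyncio.ensure_future(task_or_coro)
    loop, current_task = task.get_loop(), tasks.current_task()
    tasks._leave_task(loop, current_task)
    while not task.done():
        loop._run_once()
    tasks._enter_task(loop, current_task)
    return task.result()


async def func1():
    print('func1')
    return 42


async def main():
    task = asyncio.create_task(func1())
    # at the breakpoint you would type wait(task) in the console instead of await task
    print(wait(task))


asyncio.run(main())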
75,159,675 | 2023-1-18 | https://stackoverflow.com/questions/75159675/installing-open3d-ml-with-pytorch-on-macos | I created a virtualenv with python 3.10 and installed open3d and PyTorch according to the instructions on open3d-ml webpage: Open3d-ML but when I tested it with import open3d.ml.torch I get the error: Exception: Open3D was not built with PyTorch support! Steps to reproduce python3.10 -m venv .venv source .venv/bin/activate pip install --upgrade pip pip install open3d pip install torch torchvision torchaudio Error % python -c "import open3d.ml.torch as ml3d" Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/xx/.venv/lib/python3.10/site-packages/open3d/ml/torch/__init__.py", line 34, in <module> raise Exception('Open3D was not built with PyTorch support!') Exception: Open3D was not built with PyTorch support! Environment: % python3 --version Python 3.10.9 % pip freeze open3d==0.16.1 torch==1.13.1 torchaudio==0.13.1 torchvision==0.14.1 OS macOS 12.6 Kernel Version: Darwin 21.6.0 I also checked below similar issues but they don't have answers: https://github.com/isl-org/Open3D/discussions/5849 https://github.com/isl-org/Open3D-ML/issues/557 Open3D-ML and pytorch According to this issue 5849 the problem can't be related only to MacOs because, in a docker with Ubuntu20.04, there is a similar error. Does anyone know how we can tackle this? | Finally, I decided to build Open3D from the source for Mac M1. I followed almost the official open3d page and thanks to this medium in one of the replies. Build Open3d-ml with Pytorch on Mac M1 For the OS environment see the main question. conda create --name open3d-ml-build python==3.10 conda activate open3d-ml-build # install pytorch from pytorch.org conda install pytorch torchvision torchaudio -c pytorch # now clone open3d in a desired location git clone --branch v0.16.1 [email protected]:isl-org/Open3D.git ./foo/open3d-0.16-build cd open3d-0.16-build mkdir build && cd build git clone [email protected]:isl-org/Open3D-ML.git Now make sure you are in the activeted conda env. Build (takes very long and a lot of memory) Note on Mac M1 you don't have Cuda but Metal Performance Shaders (MPS) so I made CUDA Flags OFF in the cmake configuration. which python3 >> /Users/XX/miniconda3/envs/open3d-ml-build/bin/python3 # in the build direcotry cmake -DBUILD_CUDA_MODULE=OFF -DGLIBCXX_USE_CXX11_ABI=OFF \ -DBUILD_PYTORCH_OPS=ON -DBUILD_CUDA_MODULE=OFF \ -DBUNDLE_OPEN3D_ML=ON -DOPEN3D_ML_ROOT=Open3D-ML \ -DBUILD_JUPYTER_EXTENSION:BOOL=OFF \ -DPython3_ROOT=/Users/XX/miniconda3/envs/open3d-ml-build/bin/python3 .. make -j$(sysctl -n hw.physicalcpu) [verbose=1] If it fails, try it again or run it with verbose and look for fatal error. Install # Install pip package in the current python environment make install-pip-package # if error: Module Not found yapf pip install yapf # Create Python package in build/lib make python-package # Create pip wheel in build/lib # This creates a .whl file that you can install manually. make pip-package sanity check Again in the activated conda environment # if not installed pip install tensorboard python3 -c "import open3d; import open3d.ml.torch" pip freeze | grep torch torch==1.13.1 torchaudio==0.13.1 torchvision==0.14.1 If you don't get any errors you should be good to go. | 3 | 3 |