question_id: int64 (values 59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (values 1 to 410)
answer_vote: int64 (values -9 to 482)
68,705,698
2021-8-9
https://stackoverflow.com/questions/68705698/how-to-write-tests-for-pydantic-models-in-fastapi
I just started using FastAPI, but I do not know how to write a unit test (using pytest) for a Pydantic model. Here is a sample Pydantic model: class PhoneNumber(BaseModel): id: int country: str country_code: str number: str extension: str I want to test this model by creating a sample PhoneNumber instance and ensuring that the instance matches the declared field types. For example: PhoneNumber(1, "country", "code", "number", "extension") Then, I want to assert that PhoneNumber.country equals "country".
The test you want to achieve is straightforward to do with pytest: import pytest def test_phonenumber(): pn = PhoneNumber(id=1, country="country", country_code="code", number="number", extension="extension") assert pn.id == 1 assert pn.country == 'country' assert pn.country_code == 'code' assert pn.number == 'number' assert pn.extension == 'extension' But I agree with this comment: Generally speaking, you don't write tests like this. Pydantic has a good test suite (including a unit test like the one you're proposing). Your test should cover the code and logic you wrote, not the packages you imported. If you have a model like PhoneNumber without any special/complex validations, then writing tests that simply instantiate it and check attributes won't be that useful. Tests like those are like testing Pydantic itself. If, however, your model has some special/complex validator functions, for example, one that checks if country and country_code match: from pydantic import BaseModel, root_validator class PhoneNumber(BaseModel): ... @root_validator(pre=True) def check_country(cls, values): """Check that country_code is the 1st 2 letters of country""" country: str = values.get('country') country_code: str = values.get('country_code') if not country.lower().startswith(country_code.lower()): raise ValueError('country_code and country do not match') return values ...then a unit test for that specific behavior would be more useful: import pytest def test_phonenumber_country_code(): """Expect test to fail because country_code and country do not match""" with pytest.raises(ValueError): PhoneNumber(id=1, country='JAPAN', country_code='XY', number='123', extension='456') Also, as I mentioned in my comment, since you mentioned FastAPI, if you are using this model as part of a route definition (either as a request parameter or a response model), then a more useful test would be making sure that your route can use your model correctly.
@app.post("/phonenumber") async def add_phonenumber(phonenumber: PhoneNumber): """The model is used here as part of the Request Body""" # Do something with phonenumber return JSONResponse({'message': 'OK'}, status_code=200) from fastapi.testclient import TestClient client = TestClient(app) def test_add_phonenumber_ok(): """Valid PhoneNumber, should be 200/OK""" # This would be what the JSON body of the request would look like body = { "id": 1, "country": "Japan", "country_code": "JA", "number": "123", "extension": "81", } response = client.post("/phonenumber", json=body) assert response.status_code == 200 def test_add_phonenumber_error(): """Invalid PhoneNumber, should be a validation error""" # This would be what the JSON body of the request would look like body = { "id": 1, "country": "Japan", # `country_code` is missing "number": 99999, # `number` is int, not str "extension": "81", } response = client.post("/phonenumber", json=body) assert response.status_code == 422 assert response.json() == { 'detail': [{ 'loc': ['body', 'country_code'], 'msg': 'field required', 'type': 'value_error.missing' }] }
17
36
68,669,767
2021-8-5
https://stackoverflow.com/questions/68669767/pycharm-can%c2%b4t-find-reference-of-any-opencv-function-in-init-py
I'm using PyCharm 2021.2 Professional edition and I have installed opencv-python with: pip install opencv-python However, the IDE keeps giving me the following warning when I try to use the cv2 package: Cannot find reference 'resize' in '__init__.py' Here I gave the example of the resize function, but it's happening for every function in the cv2 package. Although the code runs with no errors, I can't use the auto-complete feature, which is a bit annoying. I found an answer here that might help. It suggests using: import cv2.cv2 as cv2 However this is not working for me. I'm getting the following error: ERROR: No matching distribution found for cv2 That's because there is no package named cv2 inside opencv. Does anyone know how to solve this problem? Is it a PyCharm issue? UPDATE Here is the output of the command pip show opencv-python: Name: opencv-python Version: 4.5.3.56 Summary: Wrapper package for OpenCV python bindings. Home-page: https://github.com/skvark/opencv-python Author: None Author-email: None License: MIT Location: z:\appdata\python\lib\site-packages Requires: numpy Required-by:
This solution worked for me. In Preferences, select Python Interpreter. Click the gear icon on the right of the box that displays your Python interpreter and select Show All. A list of all your configured interpreters is shown, with your current interpreter already highlighted. With your interpreter still highlighted, click the icon at the top that shows a folder and subfolder; the tooltip should say "Show Paths for the Selected Interpreter". Click the + button and add the following path: /lib/python3.9/site-packages/cv2 The .../python3.9... part will be different if you are using a different Python version. Click OK until you are back at the main IDE window. Tested on macOS 12.4, PyCharm 2022.1
9
21
68,649,314
2021-8-4
https://stackoverflow.com/questions/68649314/how-to-display-current-virtual-environtment-in-python-in-oh-my-posh
First, I'm using hotstick.minimal theme in oh my posh. And it looks like this. As you can see, a current venv doesn't look good. And I made some changes in JSON file. Then it looks like this. I don't want to display the name of venv on the left. How can I do that? This is my JSON file: { "$schema": "https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/main/themes/schema.json", "final_space": true, "osc99": true, "console_title": true, "console_title_style": "template", "console_title_template": "{{.Folder}}{{if .Root}} :: root{{end}} :: {{.Shell}}", "blocks": [ { "type": "prompt", "alignment": "left", "segments": [ { "type": "root", "style": "plain", "foreground": "yellow", "properties": { "root_icon": "" } }, { "type": "path", "style": "powerline", "foreground": "black", "background": "#68D6D6", "powerline_symbol": "", "leading_diamond": "", "trailing_diamond": "", "properties": { "prefix": " \uF07C ", "style": "folder" } }, { "type": "python", "style": "powerline", "powerline_symbol": "\uE0B0", "foreground": "#100e23", "background": "#906cff", "properties": { "prefix": " \uE235 " } }, { "type": "git", "style": "powerline", "powerline_symbol": "", "foreground": "black", "background": "green", "properties": { "display_stash_count": true, "display_upstream_icon": true, "display_status": true, "display_status_detail": true, "branch_icon": "  ", "branch_identical_icon": "≡", "branch_ahead_icon": "↑", "branch_behind_icon": "↓", "branch_gone": "≢", "local_working_icon": "", "local_staged_icon": "", "stash_count_icon": "", "commit_icon": "▷ ", "tag_icon": "▶ ", "rebase_icon": "Ɫ ", "cherry_pick_icon": "✓ ", "merge_icon": "◴ ", "no_commits_icon": "[no commits]", "status_separator_icon": " │", "status_colors_enabled": true, "color_background": true, "local_changes_color": "yellow" } } ] } ] } NOTE: Some symbols may not appear due to the font.
You have to include the following in your $PROFILE (profile.ps1): $env:VIRTUAL_ENV_DISABLE_PROMPT = 1 Two notes: deactivate the venv first, and close and re-open the terminal for it to take effect. See a fuller discussion here: https://github.com/JanDeDobbeleer/oh-my-posh/discussions/390
10
10
68,681,092
2021-8-6
https://stackoverflow.com/questions/68681092/typing-namedtuple-and-mutable-default-arguments
Given that I want to properly use type annotations for named tuples from the typing module: from typing import NamedTuple, List class Foo(NamedTuple): my_list: List[int] = [] foo1 = Foo() foo1.my_list.append(42) foo2 = Foo() print(foo2.my_list) # prints [42] What is the best or cleanest way to avoid the mutable-default-value misery in Python? I have a few ideas, but nothing really seems to be good. Using None as default: class Foo(NamedTuple): my_list: Optional[List[int]] = None foo1 = Foo() if foo1.my_list is None: foo1 = foo1._replace(my_list=[]) # super ugly foo1.my_list.append(42) Overwriting __new__ or __init__ won't work: AttributeError: Cannot overwrite NamedTuple attribute __init__ AttributeError: Cannot overwrite NamedTuple attribute __new__ Special @classmethod: class Foo(NamedTuple): my_list: List[int] = [] @classmethod def use_me_instead(cls, my_list=None): if not my_list: my_list = [] return cls(my_list) foo1 = Foo.use_me_instead() foo1.my_list.append(42) # works! Maybe using frozenset and avoiding mutable attributes altogether? But that won't work with dicts, as there are no frozendicts. Does anyone have a good answer?
EDIT: Blending my approach with Sebastian Wagner's idea of using a decorator, we can achieve something like this: from typing import NamedTuple, List, Callable, TypeVar, Type, Any, cast from functools import wraps T = TypeVar('T') def default_factory(**factory_kw: Callable[[], Any]) -> Callable[[Type[T]], Type[T]]: def wrapper(wcls: Type[T], /) -> Type[T]: @wraps(wcls.__new__) def __new__(cls: Type[T], *args: Any, **kwargs: Any) -> T: for key, factory in factory_kw.items(): kwargs.setdefault(key, factory()) new = super(cls, cls).__new__(cls, *args, **kwargs) # type: ignore[misc] # This call to cast() is necessary if you run MyPy with the --strict argument return cast(T, new) cls_name = wcls.__name__ wcls.__name__ = wcls.__qualname__ = f'_{cls_name}' return type(cls_name, (wcls, ), {'__new__': __new__, '__slots__': ()}) return wrapper @default_factory(my_list=list) class Foo(NamedTuple): # You do not *need* to have the default value in the class body, # but it makes MyPy a lot happier my_list: List[int] = [] foo1 = Foo() foo1.my_list.append(42) foo2 = Foo() print(f'foo1 list: {foo1.my_list}') # prints [42] print(f'foo2 list: {foo2.my_list}') # prints [] print(Foo) # prints <class '__main__.Foo'> print(Foo.__mro__) # prints (<class '__main__.Foo'>, <class '__main__._Foo'>, <class 'tuple'>, <class 'object'>) from inspect import signature print(signature(Foo.__new__)) # prints (_cls, my_list: List[int] = []) Run it through MyPy, and MyPy informs us that the revealed type of foo1 and foo2 is still "Tuple[builtins.list[builtins.int], fallback=__main__.Foo]" Original answer below. How about this? 
(Inspired by this answer here): from typing import NamedTuple, List, Optional, TypeVar, Type class _Foo(NamedTuple): my_list: List[int] T = TypeVar('T', bound="Foo") class Foo(_Foo): "A namedtuple defined as `_Foo(my_list)`, with a default value of `[]`" __slots__ = () def __new__(cls: Type[T], my_list: Optional[List[int]] = None) -> T: my_list = [] if my_list is None else my_list return super().__new__(cls, my_list) # type: ignore f, g = Foo(), Foo() print(isinstance(f, Foo)) # prints "True" print(isinstance(f, _Foo)) # prints "True" print(f.my_list is g.my_list) # prints "False" Run it through MyPy and the revealed type of f and g will be: "Tuple[builtins.list[builtins.int], fallback=__main__.Foo]". I'm not sure why I had to add the # type: ignore to get MyPy to stop complaining; if anybody can enlighten me on that, I'd be interested. Seems to work fine at runtime.
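As an aside, if a NamedTuple is not a hard requirement, the standard-library dataclasses module solves the mutable-default problem directly with field(default_factory=...). This is a minimal sketch of that alternative, not a drop-in replacement for the tuple behavior above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Foo:
    # Each instance gets its own fresh list; no shared mutable default.
    my_list: List[int] = field(default_factory=list)

foo1 = Foo()
foo1.my_list.append(42)
foo2 = Foo()
print(foo2.my_list)  # prints []
```

frozen=True prevents rebinding attributes, but (as with NamedTuple) the list itself stays mutable.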
9
2
68,704,002
2021-8-8
https://stackoverflow.com/questions/68704002/importerror-cannot-import-name-abcindexclass-from-pandas-core-dtypes-generic
I have this output: [Pandas-profiling] ImportError: cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic' when trying to import pandas-profiling in this fashion: from pandas_profiling import ProfileReport It seems to import pandas-profiling correctly but struggles when it comes to interfacing with pandas itself. Both libraries are currently up to date through conda. It doesn't seem to match any of the common problems associated with pandas-profiling as per their documentation, and I can't seem to locate a more general solution for importing the name ABCIndexClass. Thanks
Thanks to @aflyingtoaster's answer, the following workaround worked fine for me: Edit the file "~/[your_conda_env_path]/lib/site-packages/visions/dtypes/boolean.py", find the row "from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries", and just replace ABCIndexClass with ABCIndex. Save the boolean.py file and enjoy your report!
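If you'd rather script the edit than patch the file by hand, a small helper can do the same substitution. This is a hypothetical sketch (the function name and the path you pass it are up to you; point it at your env's visions/dtypes/boolean.py):

```python
from pathlib import Path

def patch_boolean_py(path):
    """Rewrite the given file so the removed pandas name ABCIndexClass
    becomes ABCIndex (the name used by newer pandas versions)."""
    p = Path(path)
    text = p.read_text()
    p.write_text(text.replace("ABCIndexClass", "ABCIndex"))
```

Note this edits a file inside site-packages, so it will be undone whenever visions is reinstalled or upgraded; pinning compatible versions is the longer-term fix.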
19
23
68,721,086
2021-8-10
https://stackoverflow.com/questions/68721086/plotly-how-to-define-marker-color-based-on-category-string-value-for-a-3d-scatt
I am using plotly.graph_object for 3D scatter plot. I'd like to define marker color based on category string value. The category values are A2, A3, A4. How to modify below code? Thanks Here is what I did: import plotly.graph_objects as go x=df_merged_pc['PC1'] y=df_merged_pc['PC2'] z=df_merged_pc['PC3'] color=df_merged_pc['AREA'] fig=go.Figure(data=[go.Scatter3d(x=x,y=y,z=z,mode='markers', marker=dict(size=12, color=df_merged_pc['AREA'], colorscale ='Viridis'))]) fig.show() The error I got is: ValueError: Invalid element(s) received for the 'color' property of scatter3d.marker Invalid elements include: ['A3', 'A3', 'A3', 'A3', 'A3', 'A3', 'A3', 'A2', 'A2', 'A2']
I might be wrong here, but it sounds to me like you're actually asking for a widely used built-in feature of plotly.express where you can assign a color to subgroups of labeled data. Take the dataset px.data.iris as an example with: fig = px.scatter_3d(df, x='sepal_length', y='sepal_width', z='petal_width', color='species') Here, the colors are assigned to different species of which you have three unique values ['setosa', 'versicolor', 'virginica']: sepal_length sepal_width petal_length petal_width species species_id 0 5.1 3.5 1.4 0.2 setosa 1 1 4.9 3.0 1.4 0.2 setosa 1 2 4.7 3.2 1.3 0.2 setosa 1 3 4.6 3.1 1.5 0.2 setosa 1 4 5.0 3.6 1.4 0.2 setosa 1 This example can be expanded upon by changing the color scheme like above, in which case your color scheme can be defined by a dictionary: colors = {"flower": 'green', 'not a flower': 'rgba(50,50,50,0.6)'} Or you can specify a discrete color sequence with: color_discrete_sequence = plotly.colors.sequential.Viridis You can also add a new column like random.choice(['flower', 'not a flower']) to change the category you would like your colors associated with. 
Plotly.graph_objects If you would like to use go.Scatter3d instead I would build one trace per unique subgroup, and set up the colors using itertools.cycle like this: colors = cycle(plotly.colors.sequential.Viridis) fig = go.Figure() for s in dfi.species.unique(): df = dfi[dfi.species == s] fig.add_trace(go.Scatter3d(x=df['sepal_length'], y = df['sepal_width'], z = df['petal_width'], mode = 'markers', name = s, marker_color = next(colors))) Complete code for plotly express import plotly.express as px import random df = px.data.iris() colors = {"flower": 'green', 'not a flower': 'rgba(50,50,50,0.6)'} df['plant'] = [random.choice(['flower', 'not a flower']) for obs in range(0, len(df))] fig = px.scatter_3d(df, x='sepal_length', y='sepal_width', z='petal_width', color = 'plant', color_discrete_map=colors ) fig.show() Complete code for plotly graph objects import plotly.graph_objects as go import plotly from itertools import cycle dfi = px.data.iris() colors = cycle(plotly.colors.sequential.Viridis) fig = go.Figure() for s in dfi.species.unique(): df = dfi[dfi.species == s] fig.add_trace(go.Scatter3d(x=df['sepal_length'], y = df['sepal_width'], z = df['petal_width'], mode = 'markers', name = s, marker_color = next(colors))) fig.show()
5
6
68,673,221
2021-8-5
https://stackoverflow.com/questions/68673221/warning-running-pip-as-the-root-user
I am making a simple image of my Python Django app in Docker, but at the end of building the container it throws the following warning (I am building it on Ubuntu 20.04): WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead Why does it throw this warning if I am installing Python requirements inside my image? I am building my image using: sudo docker build -t my_app:1 . Should I be worried about the warning that pip throws, given that I know it can break my system? Here is my Dockerfile: FROM python:3.8-slim-buster WORKDIR /app COPY requirements.txt requirements.txt RUN pip install -r requirements.txt COPY . . CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
The way your container is built doesn't add a user, so everything is done as root. You could create a user and install to that user's home directory by doing something like this: FROM python:3.8.3-alpine RUN pip install --upgrade pip RUN adduser -D myuser USER myuser WORKDIR /home/myuser COPY --chown=myuser:myuser requirements.txt requirements.txt RUN pip install --user -r requirements.txt ENV PATH="/home/myuser/.local/bin:${PATH}" COPY --chown=myuser:myuser . . CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
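Alternatively, if you deliberately want to keep installing as root inside a single-purpose container, newer pip versions expose a knob to silence exactly this warning. This is only suppression, not a fix for the underlying concern, and it assumes pip >= 22.1 (the base image's bundled pip may be older, hence the upgrade step):

```dockerfile
FROM python:3.8-slim-buster
WORKDIR /app
# PIP_ROOT_USER_ACTION needs pip >= 22.1, so upgrade pip first.
RUN pip install --upgrade pip
# Tell pip not to warn about running as root inside the container.
ENV PIP_ROOT_USER_ACTION=ignore
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```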
120
68
68,705,613
2021-8-9
https://stackoverflow.com/questions/68705613/can-i-use-abstract-methods-to-import-file-specific-formatting-of-python-pandas
I have a class FileSet with a method _process_series, which contains a bunch of if-elif blocks doing filetag-specific formatting of different pandas.Series: elif filetag == "EntityA": ps[filetag+"_Id"] = str(ps[filetag+"_Id"]).strip() ps[filetag+"_DateOfBirth"] = str(pd.to_datetime(ps[filetag+"_DateOfBirth"]).strftime('%Y-%m-%d')).strip() ps[filetag+"_FirstName"] = str(ps[filetag+"_FirstName"]).strip().capitalize() ps[filetag+"_LastName"] = str(ps[filetag+"_LastName"]).strip().capitalize() ps[filetag+"_Age"] = relativedelta(datetime.today(), datetime.strptime(ps[filetag+"_DateOfBirth"], "%Y-%m-%d")).years return ps I'd like to define an abstract format method in the class and keep these blocks of formatting in separate modules that are imported when _process_series is called for a given filetag. Forgive the pseudo-code, but something like: for tag in filetag: from my_formatters import tag+'_formatter' as fmt ps = self.format(pandas_series, fmt) return ps And the module would contain the formatting block: # my_formatters.EntityA_formatter ps[filetag+"_Id"] = str(ps[filetag+"_Id"]).strip() ps[filetag+"_DateOfBirth"] = str(pd.to_datetime(ps[filetag+"_DateOfBirth"]).strftime('%Y-%m-%d')).strip() ps[filetag+"_FirstName"] = str(ps[filetag+"_FirstName"]).strip().capitalize() ps[filetag+"_LastName"] = str(ps[filetag+"_LastName"]).strip().capitalize() ps[filetag+"_Age"] = relativedelta(datetime.today(), datetime.strptime(ps[filetag+"_DateOfBirth"], "%Y-%m-%d")).years return ps
Why not use globals() together with a wildcard import: from my_formatters import * for tag in filetag: fmt = globals()[tag + '_formatter'] ps = self.format(pandas_series, fmt) return ps I converted your pseudocode to real code. From the globals documentation: Return a dictionary representing the current global symbol table. This is always the dictionary of the current module (inside a function or method, this is the module where it is defined, not the module from which it is called).
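A more explicit alternative to the wildcard import is importlib, which avoids polluting the module namespace and fails loudly (AttributeError/ModuleNotFoundError) if a formatter is missing. The module and function names below are assumptions matching the question's naming scheme:

```python
import importlib

def get_formatter(tag, module_name="my_formatters"):
    """Look up `<tag>_formatter` in the given module at call time.

    e.g. get_formatter("EntityA") returns my_formatters.EntityA_formatter.
    """
    module = importlib.import_module(module_name)
    return getattr(module, f"{tag}_formatter")
```

Inside _process_series this would replace the globals() lookup: ps = self.format(pandas_series, get_formatter(tag)).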
6
1
68,733,440
2021-8-10
https://stackoverflow.com/questions/68733440/set-and-get-attributes-in-bunch-type-object
For simplicity in parsing/creating JSON, machine learning applications often use a Bunch object, e.g. https://github.com/dsc/bunch/blob/master/bunch/__init__.py When getting an attribute, there's a nested EAFP idiom that first tries object.__getattribute__ and then falls back to dictionary square-bracket access, i.e. class Bunch(dict): def __getattr__(self, k): try: return object.__getattribute__(self, k) except AttributeError: try: return self[k] except KeyError: raise AttributeError And when trying to set an attribute, def __setattr__(self, k, v): try: # Throws exception if not in prototype chain object.__getattribute__(self, k) except AttributeError: try: self[k] = v except: raise AttributeError(k) else: object.__setattr__(self, k, v) Seems like the sklearn implementation follows the same train of thought but has fewer checks https://github.com/scikit-learn/scikit-learn/blob/2beed5584/sklearn/utils/__init__.py#L61 class Bunch(dict): def __init__(self, **kwargs): super().__init__(kwargs) def __setattr__(self, key, value): self[key] = value def __dir__(self): return self.keys() def __getattr__(self, key): try: return self[key] except KeyError: raise AttributeError(key) def __setstate__(self, state): # Bunch pickles generated with scikit-learn 0.16.* have an non # empty __dict__. This causes a surprising behaviour when # loading these pickles scikit-learn 0.17: reading bunch.key # uses __dict__ but assigning to bunch.key use __setattr__ and # only changes bunch['key']. More details can be found at: # https://github.com/scikit-learn/scikit-learn/issues/6196. # Overriding __setstate__ to be a noop has the effect of # ignoring the pickled __dict__ pass The nested EAFP seems a little hard to maintain; my questions here are: Is there a simpler way to handle the get and set functions for Bunch data objects? Are there any other dict-like objects that allow mutability between attributes and keys? How should the Bunch object's .update() function work, shallow or deep copying?
Or just let the default dict.update() do what it does? Understanding dict.copy() - shallow or deep?
Lucky for you, all objects have an internal dict-like object that manages the attributes of the object (this is in the __dict__ attribute). To do what you're asking, you just need to make the class use itself as the __dict__ object: class Bunch(dict): def __init__(self, *args, **kwargs): self.__dict__ = self super().__init__(*args, **kwargs) Usage: >>> b = Bunch() >>> b.foo = 3 >>> b["foo"] 3 >>> b["foo"] = 5 >>> b.foo 5 >>> b["bar"] = 1 >>> b.bar 1
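Regarding the .update() part of the question: the inherited dict.update() is a shallow operation, just like dict.copy(); it rebinds keys to the same value objects rather than duplicating them. A quick check with the class above (re-declared here so the snippet stands alone):

```python
class Bunch(dict):
    # Same trick as above: the instance is its own attribute dict.
    def __init__(self, *args, **kwargs):
        self.__dict__ = self
        super().__init__(*args, **kwargs)

source = {"payload": [1, 2]}
b = Bunch()
b.update(source)          # plain dict.update(): shallow
b.payload.append(3)       # mutates the *shared* list
print(source["payload"])  # prints [1, 2, 3]
```

If you need independent copies of nested values, apply copy.deepcopy() to the incoming mapping before (or instead of) calling update().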
5
3
68,680,322
2021-8-6
https://stackoverflow.com/questions/68680322/pytube-urllib-error-httperror-http-error-410-gone
I've been getting this error on several programs for now. I've tried upgrading pytube, reinstalling it, tried some fixes, changed URLs and code, but nothing seems to work. from pytube import YouTube #ask for the link from user link = input("Enter the link of YouTube video you want to download: ") yt = YouTube(link) #Showing details print("Title: ",yt.title) print("Number of views: ",yt.views) print("Length of video: ",yt.length) print("Rating of video: ",yt.rating) #Getting the highest resolution possible ys = yt.streams.get_highest_resolution() #Starting download print("Downloading...") ys.download() print("Download completed!!") and this is the error code: File "C:\Users\Madjid\PycharmProjects\pythonProject\app2.py", line 6, in <module> yt = YouTube(link) File "C:\Users\Madjid\PycharmProjects\pythonProject\venv\lib\site-packages\pytube\__main__.py", line 91, in __init__ self.prefetch() File "C:\Users\Madjid\PycharmProjects\pythonProject\venv\lib\site-packages\pytube\__main__.py", line 181, in prefetch self.vid_info_raw = request.get(self.vid_info_url) File "C:\Users\Madjid\PycharmProjects\pythonProject\venv\lib\site-packages\pytube\request.py", line 36, in get return _execute_request(url).read().decode("utf-8") File "C:\Users\Madjid\PycharmProjects\pythonProject\venv\lib\site-packages\pytube\request.py", line 24, in _execute_request return urlopen(request) # nosec File "E:\Python\lib\urllib\request.py", line 214, in urlopen return opener.open(url, data, timeout) File "E:\Python\lib\urllib\request.py", line 523, in open response = meth(req, response) File "E:\Python\lib\urllib\request.py", line 632, in http_response response = self.parent.error( File "E:\Python\lib\urllib\request.py", line 555, in error result = self._call_chain(*args) File "E:\Python\lib\urllib\request.py", line 494, in _call_chain result = func(*args) File "E:\Python\lib\urllib\request.py", line 747, in http_error_302 return self.parent.open(new, timeout=req.timeout) File 
"E:\Python\lib\urllib\request.py", line 523, in open response = meth(req, response) File "E:\Python\lib\urllib\request.py", line 632, in http_response response = self.parent.error( File "E:\Python\lib\urllib\request.py", line 561, in error return self._call_chain(*args) File "E:\Python\lib\urllib\request.py", line 494, in _call_chain result = func(*args) File "E:\Python\lib\urllib\request.py", line 641, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 410: Gone
Try to upgrade, there is a fix in version 11.0.0: python -m pip install --upgrade pytube
21
31
68,720,486
2021-8-10
https://stackoverflow.com/questions/68720486/how-to-fix-function-symbol-pango-context-set-round-glyph-positions-error
I have deployed a Django project using Apache2. Everything is working fine except for WeasyPrint, which creates PDF files for forms. The PDF was working fine in testing and on localhost. Now every time I access the PDF it shows this error: FileNotFoundError at /business_plan/businessplan/admin/info/2/pdf/ [Errno 2] No such file or directory: '/home/ahesham/Portfolio/static\\css\\report.css' I have tried to change the \ and to add it twice, but it didn't work. Here is the views.py: def admin_order_pdf(request, info_id): info = get_object_or_404(Info, id=info_id) html = render_to_string('businessplan/pdf.html', {'info': info}) response = HttpResponse(content_type='application/pdf') response['Content-Disposition'] = 'filename="order_{}.pdf"'.format( Info.id) weasyprint.HTML(string=html,base_url=request.build_absolute_uri()).write_pdf(response, stylesheets=[weasyprint.CSS(settings.STATICFILES_DIRS[0] + '\css\\report.css')], presentational_hints=True) return response The error is coming from this \css\\report.css, even though the report.css file is in the static folder, all CSS and JS of the deployed site work perfectly fine, and running python manage.py collectstatic did not help. I am not sure exactly why the error is showing, or whether it is because of Ubuntu or the Django views. Update: I have tried changing the location to the following: stylesheets=[weasyprint.CSS(settings.STATICFILES_DIRS[0] + '/css/report.css')], presentational_hints=True) This is the error that appeared: function/symbol 'pango_context_set_round_glyph_positions' not found in library 'libpango-1.0.so.0': /usr/lib/x86_64-linux-gnu/libpango-1.0.so.0: undefined symbol: pango_context_set_round_glyph_positions So I searched for a solution and tried to install this package: sudo apt-get install libpango1.0-0 and also tried: install libpango1.0-dev but still nothing changed; it wouldn't work, and I keep getting the same error.
I have also tried replacing the static directory with STATIC_ROOT, as the project is deployed, but I got the same error function/symbol 'pango_context_set_round_glyph_positions' with the following: stylesheets=[weasyprint.CSS(settings.STATIC_ROOT + '/css/report.css')] This is the settings file: STATIC_URL = '/static/' #if DEBUG: # STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')] #else: #STATIC_ROOT = os.path.join(BASE_DIR, 'static') STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static' )] MEDIA_URL = '/media/' MEDIA_ROOT = os.path.join(BASE_DIR, 'media') I am using Ubuntu 18.04 LTS Disk - Apache2 Server My question: What am I doing wrong and how do I fix it? Please let me know if additional information is required to help fix this error. Any feedback will be highly appreciated, including other ways to display PDFs in the Django admin.
I had also faced the same error, 'pango_context_set_round_glyph_positions' not found in library. I think you must be using WeasyPrint version 53.0, which requires Pango version 1.44.0+. Downgrading WeasyPrint to 52.5 solved my issue, because that version does not require a recent version of Pango (pip install weasyprint==52.5). You can also check this: https://github.com/Kozea/WeasyPrint/issues/1384
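So the two fixes are: upgrade Pango to 1.44+, or pin WeasyPrint below 53. The compatibility rule behind this answer can be expressed as a small sketch; the helper itself is hypothetical, only the version thresholds come from the linked issue:

```python
def weasyprint_needs_downgrade(weasyprint_version: str, pango_version: str) -> bool:
    """WeasyPrint >= 53 calls pango_context_set_round_glyph_positions,
    which only exists in Pango >= 1.44; older Pango means pinning
    weasyprint==52.5 (or upgrading Pango)."""
    wp_major = int(weasyprint_version.split(".")[0])
    pango_major, pango_minor = (int(x) for x in pango_version.split(".")[:2])
    return wp_major >= 53 and (pango_major, pango_minor) < (1, 44)
```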
29
36
68,727,546
2021-8-10
https://stackoverflow.com/questions/68727546/solving-optimal-control-problem-with-constraint-x0-x2-0-with-gekko
I am starting to learn Gekko and I am testing optimal control problems. I am trying to solve the following optimal control problem with Gekko The solution of this problem is (x_1(t) = (t-2)^2 - 2) How to build the constraint x(0) + x(2) = 0? My code gives me a wrong solution. m = GEKKO(remote=False) # initialize gekko nt = 101 m.time = np.linspace(0,2,nt) #end_loc = nt-1 # Variables x1 = m.CV(fixed_initial=False) x2 = m.CV(fixed_initial=False) x3 = m.Var(value=0) #u = m.Var(value=0,lb=-2,ub=2) u = m.MV(fixed_initial=False,lb=-2,ub=2) u.STATUS = 1 p = np.zeros(nt) # mark final time point p[-1] = 1.0 final = m.Param(value=p) p1 = np.zeros(nt) p1[0] = 1.0 p1[-1] = 1.0 infin = m.Param(value=p1) # Equations m.Equation(x1.dt()==x2) m.Equation(x2.dt()==u) m.Equation(x3.dt()==x1) # Constraints m.Equation(infin*x1==0) m.Equation(final*x2==0) m.Obj(x3*final) # Objective function #m.fix(x2,pos=end_loc,val=0.0) m.options.IMODE = 6 # optimal control mode #m.solve(disp=True) # solve m.solve(disp=False) # solve plt.figure(1) # plot results plt.plot(m.time,x1.value,'k-',label=r'$x_1$') plt.plot(m.time,x2.value,'b-',label=r'$x_2$') plt.plot(m.time,x3.value,'g-',label=r'$x_3$') plt.plot(m.time,u.value,'r--',label=r'$u$') plt.legend(loc='best') plt.xlabel('Time') plt.ylabel('Value') plt.show() plt.figure(1) # plot results plt.plot(m.time,x1.value,'k-',label=r'$x_1$') plt.plot(m.time,(m.time-2)**2-2,'g--',label=r'$\hat x_1$') plt.legend(loc='best') plt.xlabel('Time') plt.ylabel('Value') plt.show()
Use m.integral or m.vsum() to create a time-weighted summation or vertical summation along the time direction. Here is a solution that replicates the exact solution. from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt m = GEKKO(remote=True) # initialize gekko nt = 501 m.time = np.linspace(0,2,nt) # insert a small time step at the beginning and end # for final and initial condition equation x(1e-8)+x(2-1e-8)=0 # this doesn't change the numerical solution significantly m.time = np.insert(m.time,1,1e-8) m.time = np.insert(m.time,len(m.time)-1,2.0-1e-8) nt += 2 # Variables x1 = m.Var(fixed_initial=False,lb=-100,ub=100) x2 = m.Var(fixed_initial=False) u = m.MV(fixed_initial=False,lb=-2,ub=2) u.STATUS = 1 p = np.zeros(nt) # mark final time point p[-2] = 1.0 final = m.Param(value=p) q = p.copy() q[1] = 1.0 final_initial = m.Param(value=q) xfi = m.Var() m.Equation(xfi==final_initial*x1) # Equations m.Equation(x1.dt()==x2) m.Equation(x2.dt()==u) m.Equation(final*m.vsum(xfi)==0) m.Equation(final*x2==0) m.Minimize(m.integral(x1)*final) # Objective function m.options.IMODE = 6 # optimal control mode m.options.NODES = 2 m.solve(disp=True) # solve plt.figure(1) # plot results plt.subplot(2,1,1) plt.plot(m.time,x1.value,'k-',label=r'$x_1$') plt.plot(m.time,x2.value,'b-',label=r'$x_2$') plt.plot(m.time,u.value,'--',color='orange',label=r'$u$') plt.legend(loc='best') plt.ylabel('Value') plt.subplot(2,1,2) plt.plot(m.time,x1.value,'k-',label=r'$x_1$') plt.plot(m.time,(m.time-2)**2-2,'r--',label=r'$\hat x_1$') plt.legend(loc='best') plt.xlabel('Time') plt.ylabel('Value') plt.show() One issue is that x1(1e-8)+x1(2-1e-8)=0 is used as a constraint instead of x1(0)+x1(2)=0. The numerical solutions should be nearly equivalent, and the 1e-8 can be further reduced.
5
1
68,671,852
2021-8-5
https://stackoverflow.com/questions/68671852/best-way-to-iterate-through-elements-of-pandas-series
All of the following seem to be working for iterating through the elements of a pandas Series. I'm sure there's more ways of doing it. What are the differences and which is the best way? import pandas arr = pandas.Series([1, 1, 1, 2, 2, 2, 3, 3]) # 1 for el in arr: print(el) # 2 for _, el in arr.iteritems(): print(el) # 3 for el in arr.array: print(el) # 4 for el in arr.values: print(el) # 5 for i in range(len(arr)): print(arr.iloc[i])
TL;DR

Iterating in pandas is an antipattern and can usually be avoided by vectorizing, applying, aggregating, transforming, or cythonizing. However if Series iteration is absolutely necessary, performance will depend on the dtype and index:

    Index     Fastest if numpy dtype         Fastest if pandas dtype   Idiomatic
    Unneeded  in s.to_numpy()                in s.array                in s
    Default   in enumerate(s.to_numpy())     in enumerate(s.array)     in s.items()
    Custom    in zip(s.index, s.to_numpy())  in s.items()              in s.items()

For numpy-based Series, use s.to_numpy()

If the Series is a python or numpy dtype, it's usually fastest to iterate the underlying numpy ndarray:

    for el in s.to_numpy():  # if dtype is datetime, int, float, str, string

[benchmark plots: datetime, int, float, float + nan, str, string]

To access the index, it's actually fastest to enumerate() or zip() the numpy ndarray:

    for i, el in enumerate(s.to_numpy()):     # if default range index
    for i, el in zip(s.index, s.to_numpy()):  # if custom index

Both are faster than the idiomatic s.items() / s.iteritems():

[benchmark plot: datetime + index]

To micro-optimize, switch to s.tolist() for shorter int/float/str Series:

    for el in s.to_numpy():  # if >100K elements
    for el in s.tolist():    # to micro-optimize if <100K elements

Warning: Do not use list(s) as it doesn't use compiled code which makes it slower.

For pandas-based Series, use s.array or s.items()

Pandas extension dtypes contain extra (meta)data, e.g.:

    pandas dtype  contents
    Categorical   2 arrays
    DatetimeTZ    array + timezone metadata
    Interval      2 arrays
    Period        array + frequency metadata
    ...           ...
Converting these extension arrays to numpy "may be expensive" since it could involve copying/coercing the data, so:

If the Series is a pandas extension dtype, it's generally fastest to iterate the underlying pandas array:

    for el in s.array:  # if dtype is pandas-only extension

For example, with ~100 unique Categorical values:

[benchmark plots: Categorical, DatetimeTZ, Period, Interval]

To access the index, the idiomatic s.items() is very fast for pandas dtypes:

    for i, el in s.items():  # if need index for pandas-only dtype

[benchmark plots: DatetimeTZ + index, Interval + index, Period + index]

To micro-optimize, switch to the slightly faster enumerate() for default-indexed Categorical arrays:

    for i, el in enumerate(s.array):  # to micro-optimize Categorical dtype if need default range index

[benchmark plot: Categorical + index]

Caveats

Avoid using s.values:
- Use s.to_numpy() to get the underlying numpy ndarray
- Use s.array to get the underlying pandas array

Avoid modifying the iterated Series: You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!

Avoid iterating manually whenever possible by instead:
- Vectorizing, (boolean) indexing, etc.
- Applying functions, e.g.:
      s.apply(some_function)
      s.agg(['min', 'max', 'mean'])
      s.transform([np.sqrt, np.exp])
  Note: These are not vectorizations despite the common misconception.
- Offloading to cython/numba

Specs: ThinkPad X1 Extreme Gen 3 (Core i7-10850H 2.70GHz, 32GB DDR4 2933MHz)
Versions: python==3.9.2, pandas==1.3.1, numpy==1.20.2
Testing data: Series generation code in snippet

'''
Note: This is python code in a js snippet, so "run code snippet" will not work.
The snippet is just to avoid cluttering the main post with supplemental code.
'''
import pandas as pd
import numpy as np

n = 1000  # series length (n was left undefined in the original snippet)

int_series = pd.Series(np.random.randint(1000000000, size=n))
float_series = pd.Series(np.random.randn(n))  # randn takes dimensions, not a size kwarg
floatnan_series = pd.Series(np.random.choice([np.nan, np.inf]*n + np.random.randn(n).tolist(), size=n))
str_series = pd.Series(np.random.randint(10000000000000000, size=n)).astype(str)
string_series = pd.Series(np.random.randint(10000000000000000, size=n)).astype('string')
datetime_series = pd.Series(np.random.choice(pd.date_range('2000-01-01', '2021-01-01'), size=n))
datetimetz_series = pd.Series(np.random.choice(pd.date_range('2000-01-01', '2021-01-01', tz='CET'), size=n))
categorical_series = pd.Series(np.random.randint(100, size=n)).astype('category')
interval_series = pd.Series(pd.arrays.IntervalArray.from_arrays(-np.random.random(size=n), np.random.random(size=n)))
period_series = pd.Series(pd.period_range(end='2021-01-01', periods=n, freq='s'))
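To double-check the table above, here is a minimal sketch (a toy three-element Series, timings omitted) showing that the idiomatic s.items() and zipping the index with s.to_numpy() yield identical (index, value) pairs:

```python
import pandas as pd

# small illustrative Series with a custom index
s = pd.Series([10, 20, 30], index=["a", "b", "c"])

# idiomatic: works for any dtype and any index
via_items = [(i, el) for i, el in s.items()]

# faster for numpy-backed dtypes: iterate the ndarray, zip the index back in
via_numpy = [(i, el) for i, el in zip(s.index, s.to_numpy())]

print(via_items == via_numpy)  # both approaches yield identical pairs
```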
14
40
68,734,504
2021-8-10
https://stackoverflow.com/questions/68734504/boxplot-by-two-groups-in-pandas
I have the following dataset: df_plots = pd.DataFrame({'Group':['A','A','A','A','A','A','B','B','B','B','B','B'], 'Type':['X','X','X','Y','Y','Y','X','X','X','Y','Y','Y'], 'Value':[1,1.2,1.4,1.3,1.8,1.5,15,19,18,17,12,13]}) df_plots Group Type Value 0 A X 1.0 1 A X 1.2 2 A X 1.4 3 A Y 1.3 4 A Y 1.8 5 A Y 1.5 6 B X 15.0 7 B X 19.0 8 B X 18.0 9 B Y 17.0 10 B Y 12.0 11 B Y 13.0 And I want to create boxplots per Group (there are two in the example) and in each plot to show by type. I have tried this: fig, axs = plt.subplots(1,2,figsize=(8,6), sharey=False) axs = axs.flatten() for i, g in enumerate(df_plots[['Group','Type','Value']].groupby(['Group','Type'])): g[1].boxplot(ax=axs[i]) Results in an IndexError, because the loop tries to create 4 plots. --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-12-8e1150950024> in <module> 3 4 for i, g in enumerate(df[['Group','Type','Value']].groupby(['Group','Type'])): ----> 5 g[1].boxplot(ax=axs[i]) IndexError: index 2 is out of bounds for axis 0 with size 2 Then I tried this: fig, axs = plt.subplots(1,2,figsize=(8,6), sharey=False) axs = axs.flatten() for i, g in enumerate(df_plots[['Group','Type','Value']].groupby(['Group','Type'])): g[1].boxplot(ax=axs[i], by=['Group','Type']) But no, I have the same problem. The expected result should have only two plots, and each plot have a box-and-whisker per Type. This is a sketch of this idea: Please, any help will be greatly appreciated, with this code I can control some aspects of the data that I can't with seaborn.
As @Prune mentioned, the immediate issue is that your groupby() returns four groups (AX, AY, BX, BY), so first fix the indexing and then clean up a couple more issues:

- Change axs[i] to axs[i//2] to put groups 0 and 1 on axs[0] and groups 2 and 3 on axs[1].
- Add positions=[i] to place the boxplots side by side rather than stacked.
- Set the title and xticklabels after plotting (I'm not aware of how to do this in the main loop).

for i, g in enumerate(df_plots.groupby(['Group', 'Type'])):
    g[1].boxplot(ax=axs[i//2], positions=[i])

for i, ax in enumerate(axs):
    ax.set_title('Group: ' + df_plots['Group'].unique()[i])
    ax.set_xticklabels(['Type: X', 'Type: Y'])

Note that mileage may vary depending on version:

                           matplotlib.__version__   pd.__version__
    confirmed working      3.4.2                    1.3.1
    confirmed not working  3.0.1                    1.2.4
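Putting the fix together as one self-contained sketch (the Agg backend is used here only so it runs headless; the version caveats above still apply, since positions is forwarded through pandas to matplotlib):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt
import pandas as pd

df_plots = pd.DataFrame({
    'Group': ['A'] * 6 + ['B'] * 6,
    'Type':  ['X', 'X', 'X', 'Y', 'Y', 'Y'] * 2,
    'Value': [1, 1.2, 1.4, 1.3, 1.8, 1.5, 15, 19, 18, 17, 12, 13],
})

fig, axs = plt.subplots(1, 2, figsize=(8, 6), sharey=False)

# four (Group, Type) groups land on two axes: i//2 maps 0,1 -> axs[0] and 2,3 -> axs[1]
for i, (_, g) in enumerate(df_plots.groupby(['Group', 'Type'])):
    g.boxplot(ax=axs[i // 2], positions=[i], column='Value')

for i, ax in enumerate(axs):
    ax.set_title('Group: ' + df_plots['Group'].unique()[i])
    ax.set_xticklabels(['Type: X', 'Type: Y'])
```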
5
3
68,719,486
2021-8-9
https://stackoverflow.com/questions/68719486/checksummismatcherror-conda-detected-a-mismatch-between-the-expected-content-an
I have installed many many packages including torch, gpytorch, ... in the past in Windows, Ubuntu and Mac following this scenario: conda create -n env_name conda activate env_name conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia However, this time on Ubuntu, I interfered the following error on downloading the package which apparently after downloading when checking the checksum, it sees a mismatch. I also tried removing those *.bz2 files just in case if there is a pre-downloaded file, it didn't work. ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content for url 'https://conda.anaconda.org/pytorch/linux-64/torchaudio-0.9.0-py39.tar.bz2'. download saved to: /home/amin/anaconda3/pkgs/torchaudio-0.9.0-py39.tar.bz2 expected md5: 7224453f68125005e034cb6646f2f0a3 actual md5: 6bbb8056603453427bbe4cca4b033361 ChecksumMismatchError: Conda detected a mismatch between the expected content and downloaded content for url 'https://conda.anaconda.org/pytorch/linux-64/torchvision-0.10.0-py39_cu111.tar.bz2'. download saved to: /home/amin/anaconda3/pkgs/torchvision-0.10.0-py39_cu111.tar.bz2 expected md5: 78b4c927e54b06d7a6d18eec8b3f2d18 actual md5: 69dd8411c573903db293535017742bd9 My system information: Linux SPOT-Server 5.8.0-63-generic #71~20.04.1-Ubuntu SMP Thu Jul 15 17:46:08 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux My conda --version is also 4.8.2. I also should add that I have same issue on Windows having conda --version equal to 4.10.1.
The PyTorch channel maintainers had an issue when uploading some new package builds, which has since been resolved (see GitHub Issue). The technical cause was uploading new builds with identical versions and build numbers as before, without replacing the previous build. This caused the expected MD5 checksum to correspond to the new upload, but the tarball that was ultimately downloaded still corresponded to the previous upload, leading to a checksum mismatch.
5
1
68,732,114
2021-8-10
https://stackoverflow.com/questions/68732114/how-can-i-select-rows-except-last-row-of-one-column
I'd like to select one column only but all the rows except last row. If I did it like below, the result is empty. a = data_vaf.loc[:-1, 'Area']
loc selects by label; iloc selects by integer location. A label-based slice like loc[:-1] can't implicitly count from the end, so we exclude the last row positionally with iloc and then select the column Area. As shown by the comment from @ThePyGuy:

data_vaf.iloc[:-1]['Area']

Here's the structure of iloc[row, column]

And iloc[row] does the same thing as iloc[row, :]

df.iloc[:-1] does the same thing as df[:-1]
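A minimal sketch (with a made-up stand-in for data_vaf, since the original frame isn't shown) comparing the equivalent spellings:

```python
import pandas as pd

# hypothetical stand-in for data_vaf
data_vaf = pd.DataFrame({'Area': [10, 20, 30, 40], 'Other': [1, 2, 3, 4]})

a = data_vaf.iloc[:-1]['Area']   # positional slice of rows, then the column
b = data_vaf['Area'].iloc[:-1]   # column first, then slice the Series
c = data_vaf['Area'][:-1]        # plain slicing on a Series is positional too

print(list(a))  # [10, 20, 30]
```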
4
6
68,731,560
2021-8-10
https://stackoverflow.com/questions/68731560/valueerror-axes-dont-match-array-cant-transpose-an-array
Traceback Error Traceback (most recent call last): File "C:\Users\trial2\trial.py", line 55, in <module> image_stack(image) File "C:\Users\trial2\trial.py", line 41, in image_stack transposed_axes = np.transpose(img, axes=concat) File "<__array_function__ internals>", line 5, in transpose File "C:\Users\trial2\venv\lib\site-packages\numpy\core\fromnumeric.py", line 660, in transpose return _wrapfunc(a, 'transpose', axes) File "C:\Users\trial2\venv\lib\site-packages\numpy\core\fromnumeric.py", line 57, in _wrapfunc return bound(*args, **kwds) ValueError: axes don't match array The Traceback Error is also there, if that helps to solve the issue I need to calculate the new transpose axes for the image that has been inputted into the function as parameter. The function gets the shape of the image, and then calculates the transpose axes for the image. The problem is occurring in transposing the image array to the new axes. I can't figure out the issue to why it isn't working. Do I need to turn the concat variable into a tuple for it to work properly or is something else causing the issue for it def image_stack(image): img_shape = cv2.imread(image).shape img = np.asarray(img_shape) print(type(img)) n = len(img) val_1 = list(range(1, n - 1, 2)) val_2 = list(range(0, n - 1, 2)) print(img) if n % 2 == 0: y_ax = val_1 x_ax = val_2 axes = (y_ax, x_ax, [n-1]) else: y_ax = val_2 x_ax = val_1 axes = (y_ax, x_ax, [n - 1]) print(type(axes)) concat = np.concatenate(axes) print(concat) if type(axes) == tuple: transposed_axes = np.transpose(img, axes=concat) print(transposed_axes) image = 'C:\\trial_images\\9.jpg' image_stack(image)
The type of the axes argument doesn't matter:

In [96]: arr = np.ones([720, 1280, 3])

In [97]: np.transpose(arr, [0, 1, 2]).shape
Out[97]: (720, 1280, 3)

In [98]: np.transpose(arr, (0, 1, 2)).shape
Out[98]: (720, 1280, 3)

In [99]: np.transpose(arr, np.array([0, 1, 2])).shape
Out[99]: (720, 1280, 3)

but if I provide more values than dimensions of arr I get your error:

In [103]: np.transpose(arr, [0, 1, 2, 3]).shape
Traceback (most recent call last):
  File "<ipython-input-103-fc01ec77b59e>", line 1, in <module>
    np.transpose(arr, [0, 1, 2, 3]).shape
  File "<__array_function__ internals>", line 5, in transpose
  File "/usr/local/lib/python3.8/dist-packages/numpy/core/fromnumeric.py", line 660, in transpose
    return _wrapfunc(a, 'transpose', axes)
  File "/usr/local/lib/python3.8/dist-packages/numpy/core/fromnumeric.py", line 57, in _wrapfunc
    return bound(*args, **kwds)
ValueError: axes don't match array

I expect the same error for [0, 1, 2] and a 2d arr. Hopefully this gives you ideas of how to debug problems yourself. Fire up an interactive session and try some alternatives.
8
6
68,726,290
2021-8-10
https://stackoverflow.com/questions/68726290/setting-learning-rate-for-stochastic-weight-averaging-in-pytorch
Following is a small working code for Stochastic Weight Averaging in Pytorch taken from here. loader, optimizer, model, loss_fn = ... swa_model = torch.optim.swa_utils.AveragedModel(model) scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300) swa_start = 160 swa_scheduler = SWALR(optimizer, swa_lr=0.05) for epoch in range(300): for input, target in loader: optimizer.zero_grad() loss_fn(model(input), target).backward() optimizer.step() if epoch > swa_start: swa_model.update_parameters(model) swa_scheduler.step() else: scheduler.step() # Update bn statistics for the swa_model at the end torch.optim.swa_utils.update_bn(loader, swa_model) # Use swa_model to make predictions on test data preds = swa_model(test_input) In this code after 160th epoch the swa_scheduler is used instead of the usual scheduler. What does swa_lr signify? The documentation says, Typically, in SWA the learning rate is set to a high constant value. SWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. So what happens to the learning rate of the optimizer after 160th epoch? Does swa_lr affect the optimizer learning rate? Suppose that at the beginning of the code the optimizer was ADAM initialized with a learning rate of 1e-4. Then does the above code imply that for the first 160 epochs the learning rate for training will be 1e-4 and then for the remaining number of epochs it will be swa_lr=0.05? If yes, is it a good idea to define swa_lr also to 1e-4?
does the above code imply that for the first 160 epochs the learning rate for training will be 1e-4

No, it won't be equal to 1e-4. During the first 160 epochs the learning rate is managed by the first scheduler, scheduler, which is initialized as a torch.optim.lr_scheduler.CosineAnnealingLR. The learning rate will follow this curve:

for the remaining number of epochs it will be swa_lr=0.05

This is partially true. During the second part - from epoch 160 - the optimizer's learning rate will be handled by the second scheduler, swa_scheduler, which is initialized as a torch.optim.swa_utils.SWALR. You can read on the documentation page:

SWALR is a learning rate scheduler that anneals the learning rate to a fixed value [swa_lr], and then keeps it constant.

By default (cf. source code), the number of epochs before annealing is equal to 10. Therefore the learning rate from epoch 170 to epoch 300 will be equal to swa_lr and will stay this way. The second part will be:

This complete profile, i.e. both parts:

If yes, is it a good idea to define swa_lr also to 1e-4

It is mentioned in the docs:

Typically, in SWA the learning rate is set to a high constant value.

Setting swa_lr to 1e-4 would result in the following learning-rate profile:
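The shape of this profile can be sketched without torch, using the CosineAnnealingLR formula eta_min + (eta_base - eta_min) * (1 + cos(pi * t / T_max)) / 2 for the first phase; note the annealing step here is a simple linear stand-in for SWALR's actual annealing, used only to illustrate the shape:

```python
import math

base_lr, swa_lr = 1e-4, 0.05
T_max, swa_start, anneal_epochs = 300, 160, 10

def lr_at(epoch):
    """Approximate learning rate at a given epoch for the schedule above."""
    if epoch <= swa_start:
        # CosineAnnealingLR with eta_min = 0
        return base_lr * (1 + math.cos(math.pi * epoch / T_max)) / 2
    # after swa_start: anneal to swa_lr over anneal_epochs, then hold constant
    t = min((epoch - swa_start) / anneal_epochs, 1.0)
    start = lr_at(swa_start)
    return start + (swa_lr - start) * t  # linear stand-in for SWALR annealing

print(lr_at(0), lr_at(170), lr_at(300))  # base lr, then swa_lr held constant
```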
5
8
68,722,516
2021-8-10
https://stackoverflow.com/questions/68722516/exclude-some-attributes-from-str-representation-of-a-dataclass
We have this class: from dataclasses import dataclass, field from datetime import datetime from typing import List, Dict @dataclass class BoardStaff: date: str = datetime.now() fullname: str address: str ## attributes to be excluded in __str__: degree: str rank: int = 10 badges: bool = False cases_dict: Dict[str, str] = field(default_factory=dict) cases_list: List[str] = field(default_factory=list) Emp = BoardStaff('Jack London', address='Unknown', degree='MA') As BoardStaff is a dataclass, one can easily do print(Emp) to receive: BoardStaff(fullname='Jack London', address='Unknown', degree='MA', rank=10, badges=False, cases={}, date=datetime.datetime(2021, 8, 10, 11, 36, 50, 693428)). However, I want some attributes (i.e. the last 5 ones) to be excluded from the representation, so I had to define __str__ method and manually exclude some attributes like so: def __str__(self): str_info = { k: v for k, v in self.__dict__.items() if k not in ['degree', 'rank', 'other'] and v } return str(str_info) But is there a better way to do the exclusion, like using some parameters when defining the attributes?
Obvious solution

Simply define your attributes as fields with the argument repr=False (note that date has been moved after the required fields, since fields without defaults can't follow fields with defaults):

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Dict

@dataclass
class BoardStaff:
    fullname: str
    address: str
    ## attributes to be excluded in __str__:
    degree: str = field(repr=False)
    date: str = datetime.now()
    rank: int = field(default=10, repr=False)
    badges: bool = field(default=False, repr=False)
    cases_dict: Dict[str, str] = field(default_factory=dict, repr=False)
    cases_list: List[str] = field(default_factory=list, repr=False)

Emp = BoardStaff('Jack London', address='Unknown', degree='MA')

This works nicely alongside marking attributes as "private" by giving them names starting with leading underscores, as others have suggested in the comments.

More advanced solutions

If you're looking for a more general solution that doesn't involve defining so many fields with repr=False, you could do something like this. It's pretty similar to the solution you thought up yourself, but it creates a __repr__ that's more similar to the usual dataclass __repr__:

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Dict

@dataclass
class BoardStaff:
    fullname: str
    address: str
    degree: str
    date: str = datetime.now()
    rank: int = 10
    badges: bool = False
    cases_dict: Dict[str, str] = field(default_factory=dict)
    cases_list: List[str] = field(default_factory=list)

    def __repr__(self):
        dict_repr = ', '.join(
            f'{k}={v!r}'
            for k, v in filter(
                lambda item: item[0] in {'fullname', 'address', 'date'},
                self.__dict__.items()
            )
        )
        return f'{self.__class__.__name__}({dict_repr})'

Emp = BoardStaff('Jack London', address='Unknown', degree='MA')
print(Emp)

(N.B. I had to reorder your fields slightly, as having default-argument parameters before parameters with no default will raise an error.)
If you don't want to hardcode your __repr__ fields into your __repr__ methods, you could mark your non-__repr__ fields as private attributes, as was suggested in the comments by @DarkKnight, and use this as a signal for your __repr__ method:

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Dict

@dataclass
class BoardStaff:
    fullname: str
    address: str
    _degree: str
    date: str = datetime.now()
    _rank: int = 10
    _badges: bool = False
    _cases_dict: Dict[str, str] = field(default_factory=dict)
    _cases_list: List[str] = field(default_factory=list)

    def __repr__(self):
        dict_repr = ', '.join(
            f'{k}={v!r}'
            for k, v in filter(
                lambda item: not item[0].startswith('_'),
                self.__dict__.items()
            )
        )
        return f'{self.__class__.__name__}({dict_repr})'

Emp = BoardStaff('Jack London', address='Unknown', _degree='MA')
print(Emp)

You could even potentially write your own decorator that would generate custom __repr__ methods for you on a class-by-class basis. E.g., this decorator will generate __repr__ methods that will only include the arguments you pass to the decorator (cls is the second positional parameter so that the partial receives the class correctly on the second call):

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Dict
from functools import partial

def dataclass_with_repr_fields(
    keys,
    cls=None,
    init=True,
    eq=True,
    order=False,
    unsafe_hash=False,
    frozen=False
):
    if cls is None:
        return partial(
            dataclass_with_repr_fields,
            keys,
            init=init,
            eq=eq,
            order=order,
            unsafe_hash=unsafe_hash,
            frozen=frozen
        )

    cls = dataclass(
        cls,
        init=init,
        repr=False,
        eq=eq,
        order=order,
        unsafe_hash=unsafe_hash,
        frozen=frozen
    )

    def __repr__(self):
        dict_repr = ', '.join(
            f'{k}={v!r}'
            for k, v in filter(
                lambda item: item[0] in keys,
                self.__dict__.items()
            )
        )
        return f'{self.__class__.__name__}({dict_repr})'

    cls.__repr__ = __repr__
    return cls

@dataclass_with_repr_fields({'fullname', 'address', 'date'})
class BoardStaff:
    fullname: str
    address: str
    degree: str
    date: str = datetime.now()
    rank: int = 10
    badges: bool = False
    cases_dict: Dict[str, str] = field(default_factory=dict)
    cases_list: List[str] = field(default_factory=list)

@dataclass_with_repr_fields({'name', 'surname'})
class Manager:
    name: str
    surname: str
    salary: int
    private_medical_details: str

Emp = BoardStaff('Jack London', address='Unknown', degree='MA')
print(Emp)

manager = Manager('John', 'Smith', 600000, 'badly asthmatic')
print(manager)
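The repr=False mechanism behind the first solution is easy to verify in isolation with a toy class (not the BoardStaff model above):

```python
from dataclasses import dataclass, field

@dataclass
class Point:
    x: int
    y: int
    # excluded from the generated __repr__
    meta: str = field(default="internal", repr=False)

p = Point(1, 2)
print(repr(p))  # Point(x=1, y=2) -- meta is omitted
```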
21
32
68,647,962
2021-8-4
https://stackoverflow.com/questions/68647962/identify-current-thread-in-concurrent-futures-threadpoolexecutor
the following code has 5 workers .... each opens its own worker_task() with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: future_to_url = {executor.submit(worker_task, command_, site_): site_ for site_ in URLS} for future in concurrent.futures.as_completed(future_to_url): url = future_to_url[future] try: data = future.result() BUT ..... inside each worker_task() ...... I cannot identify ... which of the 5 workers is currently being used (Worker_ID) If I want to print('worker 3 has finished') inside worker_task() ..... I cannot do this because executor.submit does not allow Any solutions?
You can get name of worker thread with the help of threading.current_thread() function. Please find some example below: from concurrent.futures import ThreadPoolExecutor, Future from threading import current_thread from time import sleep from random import randint # imagine these are urls URLS = [i for i in range(100)] def do_some_work(url, a, b): """Simulates some work""" sleep(2) rand_num = randint(a, b) if rand_num == 5: raise ValueError("No! 5 found!") r = f"{current_thread().getName()}||: {url}_{rand_num}\n" return r def show_fut_results(fut: Future): """Callback for future shows results or shows error""" if not fut.exception(): print(fut.result()) else: print(f"{current_thread().getName()}| Error: {fut.exception()}\n") if __name__ == '__main__': with ThreadPoolExecutor(max_workers=10) as pool: for i in URLS: _fut = pool.submit(do_some_work, i, 1, 10) _fut.add_done_callback(show_fut_results) If you want more control over threads, use threading module: from threading import Thread from queue import Queue from time import sleep from random import randint import logging # imagine these are urls URLS = [f"URL-{i}" for i in range(100)] # number of worker threads WORKER_NUM = 10 def do_some_work(url: str, a: int, b: int) -> str: """Simulates some work""" sleep(2) rand_num = randint(a, b) if rand_num == 5: raise ValueError(f"No! 
5 found in URL: {url}") r = f"{url} = {rand_num}" return r def thread_worker_func(q: Queue, a: int, b: int) -> None: """Target function for Worker threads""" logging.info("Started working") while True: try: url = q.get() # if poison pill - stop worker thread if url is None: break r = do_some_work(url, a, b) logging.info(f"Result: {r}") except ValueError as ex: logging.error(ex) except Exception as ex: logging.error(f"Unexpected error: {ex}") logging.info("Finished working") if __name__ == '__main__': logging.basicConfig( level=logging.INFO, format="%(levelname)s | %(threadName)s | %(asctime)s | %(message)s", ) in_q = Queue(50) workers = [ Thread(target=thread_worker_func, args=(in_q, 1, 10, ), name=f"MyWorkerThread-{i+1}") for i in range(WORKER_NUM) ] [w.start() for w in workers] # start distributing tasks for _url in URLS: in_q.put(_url) # send poison pills to worker-threads for w in workers: in_q.put(None) # wait worker thread to join Main Thread logging.info("Main Thread waiting for Worker Threads") [w.join() for w in workers] logging.info("Workers joined") sleep(10) logging.info("App finished")
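Since the question uses ThreadPoolExecutor directly, it's also worth noting the thread_name_prefix argument (Python 3.6+), which makes the names returned by threading.current_thread() self-describing. A minimal sketch (worker_task and the site names are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading

def worker_task(url):
    # the executor names its threads "<prefix>_<n>", so this identifies the worker
    return f"{threading.current_thread().name} handled {url}"

with ThreadPoolExecutor(max_workers=5, thread_name_prefix="worker") as executor:
    futures = [executor.submit(worker_task, u) for u in ["site1", "site2", "site3"]]
    results = [f.result() for f in as_completed(futures)]

print(results)
```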
6
8
68,721,853
2021-8-10
https://stackoverflow.com/questions/68721853/how-to-fix-google-sheets-api-has-not-been-used-in-project
I want to built a questionnaire line chatbot and transmit the answer to google sheet. Here is my code: ''' import os from flask import Flask, request, abort from linebot import ( LineBotApi, WebhookHandler ) from linebot.exceptions import ( InvalidSignatureError ) from linebot.models import ( MessageEvent, TextMessage, TextSendMessage, ) from oauth2client.service_account import ServiceAccountCredentials import gspread from datetime import datetime, date, timedelta, time import time gsp_scopes = ['https://spreadsheets.google.com/feeds'] SPREAD_SHEETS_KEY = os.environ.get('SPREAD_SHEETS_KEY') credential_file_path = 'credentials.json' def auth_gsp_client(file_path, scopes): credentials = ServiceAccountCredentials.from_json_keyfile_name(file_path, scopes) return gspread.authorize(credentials) def records(A, B, C, D, E): gsp_client = auth_gsp_client(credential_file_path, gsp_scopes) worksheet = gsp_client.open_by_key(SPREAD_SHEETS_KEY).sheet1 worksheet.insert_row([A, B, C, D, E], 2) return True app = Flask(__name__) LINE_CHANNEL_ACCESS_TOKEN = os.environ.get('LINE_CHANNEL_ACCESS_TOKEN') LINE_CHANNEL_SECRET = os.environ.get('LINE_CHANNEL_SECRET') line_bot_api = LineBotApi(LINE_CHANNEL_ACCESS_TOKEN) handler = WebhookHandler(LINE_CHANNEL_SECRET) @app.route("/", methods=['GET']) def hello(): return 'hello heroku' @app.route("/callback", methods=['POST']) def callback(): signature = request.headers['X-Line-Signature'] body = request.get_data(as_text=True) try: handler.handle(body, signature) except InvalidSignatureError: print("Invalid signature. 
Please check your channel access token/channel secret.") abort(400) return 'OK' user_command_dict = {} @handler.add(MessageEvent, message=TextMessage) def handle_message(event): user_message = event.message.text user_id = event.source.user_id user_command = user_command_dict.get(user_id) if user_message == '@問卷' and user_command == None: print(user_message) reply_message = [ TextSendMessage(text='這是問卷'), TextSendMessage(text='B'), TextSendMessage(text='開始') ] user_command_dict[user_id] = '@問卷1' if user_command == '@問卷1': answer = user_message if answer=='yes': time.sleep(3) reply_message=TextSendMessage(text='問題一') user_command_dict[user_id] = '@問卷2' if user_command == '@問卷2': global answer1 answer1 = user_message time.sleep(3) reply_message=TextSendMessage(text='問題二') user_command_dict[user_id] = '@問卷3' if user_command == '@問卷3': global answer2 answer2 = user_message time.sleep(3) reply_message=TextSendMessage(text='問題三') user_command_dict[user_id] = '@問卷4' if user_command == '@問卷4': global answer3 answer3 = user_message Date = date.today() today=Date.strftime("%Y/%b/%d") time.sleep(3) print(today, answer1, answer2, answer3) reply_message=TextSendMessage(text='問題結束') records(today, user_id, answer1, answer2, answer3) user_command_dict[user_id] = None #else: #print(user_message) #reply_message=TextSendMessage(text=event.message.text) line_bot_api.reply_message( event.reply_token, reply_message) if __name__ == "__main__": app.run() ''' I push it in heroku, but I get 2021-08-10T06:00:56.303548+00:00 app[web.1]: gspread.exceptions.APIError: {'code': 403, 'message': 'Google Sheets API has not been used in project 10137149515 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=10137149515 then retry. 
If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.', 'status': 'PERMISSION_DENIED', 'details': [{'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Google developers console API activation', 'url': 'https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=10137149515'}]}, {'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'SERVICE_DISABLED', 'domain': 'googleapis.com', 'metadata': {'consumer': 'projects/10137149515', 'service': 'sheets.googleapis.com'}}]} 2021-08-10T06:00:56.304235+00:00 app[web.1]: 10.1.7.41 - - [10/Aug/2021:14:00:56 +0800] "POST /callback HTTP/1.1" 500 290 "-" "LineBotWebhook/2.0" Please help and tell me what's wrong. THX
Google Sheets API has not been used in project 10137149515 before or it is disabled. Enable it by visiting

This is a settings issue in your Google Cloud console account. When you set up your project, you need to tell Google which APIs you intend to use, and you have forgotten to add that you will be using the Google Sheets API. If you follow the link in the error message, it takes you to your project.

enable google sheets api

Go to Library on the left. In the search bar, search for Google Sheets API and select it, then click Enable.

It should only take a few minutes; then run your code again.
10
12
68,718,381
2021-8-9
https://stackoverflow.com/questions/68718381/why-does-the-key-kwargs-appear-when-using-kwargs
Why does {'kwargs':{'1':'a', '2':'b'}} appear when I run test_func()? I would have expected just this to print: {'1':'a', '2':'b'}. Code: class MyClass: def __init__(self, **kwargs): self.kwargs = kwargs def test_func(self): print(self.kwargs) test_kwargs = {'1':'a', '2':'b'} my_class = MyClass(kwargs=test_kwargs) my_class.test_func() Ouput: {'kwargs': {'1': 'a', '2': 'b'}}
It's because you initialize the instance by passing one keyword argument named kwargs, with the whole dictionary as its value. If you want to see the dictionary itself as kwargs, you need to unpack it when calling:

my_class = MyClass(**test_kwargs)
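To see both behaviours side by side:

```python
class MyClass:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

test_kwargs = {'1': 'a', '2': 'b'}

wrapped = MyClass(kwargs=test_kwargs)  # ONE keyword argument named "kwargs"
unpacked = MyClass(**test_kwargs)      # dict unpacked into keyword arguments

print(wrapped.kwargs)   # {'kwargs': {'1': 'a', '2': 'b'}}
print(unpacked.kwargs)  # {'1': 'a', '2': 'b'}
```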
4
7
68,715,304
2021-8-9
https://stackoverflow.com/questions/68715304/dual-x-axis-with-same-data-different-scale
I'd like to plot some data in Python using two different x-axes. For ease of explanation, I will say that I want to plot light absorption data, which means I plot absorbance vs. wavelength (nm) or energy (eV). I want to have a plot where the bottom axis denotes the wavelength in nm, and the top axis denotes energy in eV. The two are not linearly dependent (as you can see in my MWE below). My full MWE: import numpy as np import matplotlib.pyplot as plt import scipy.constants as constants # Converting wavelength (nm) to energy (eV) def WLtoE(wl): # E = h*c/wl h = constants.h # Planck constant c = constants.c # Speed of light J_eV = constants.e # Joule-electronvolt relationship wl_nm = wl * 10**(-9) # convert wl from nm to m E_J = (h*c) / wl_nm # energy in units of J E_eV = E_J / J_eV # energy in units of eV return E_eV x = np.arange(200,2001,5) x_mod = WLtoE(x) y = 2*x + 3 fig, ax1 = plt.subplots() ax2 = ax1.twiny() ax1.plot(x, y, color='red') ax2.plot(x_mod, y, color = 'green') ax1.set_xlabel('Wavelength (nm)', fontsize = 'large', color='red') ax1.set_ylabel('Absorbance (a.u.)', fontsize = 'large') ax1.tick_params(axis='x', colors='red') ax2.set_xlabel('Energy (eV)', fontsize='large', color='green') ax2.tick_params(axis='x', colors='green') ax2.spines['top'].set_color('green') ax2.spines['bottom'].set_color('red') plt.tight_layout() plt.show() This yields: Now this is close to what I want, but I'd like to solve the following two issues: One of the axes needs to be reversed - high wavelength equals low energy but this is not the case in the figure. I tried using x_mod = WLtoE(x)[::-1] for example but this does not solve this issue. Since the axes are not linearly dependent, I'd like the top and bottom axis to "match". For example, right now 1000 nm lines up with 3 eV (more or less) but in reality 1000 nm corresponds to 1.24 eV. So one of the axes (preferably the bottom, wavelength axis) needs to be condensed/expanded to match the correct value of energy at the top. 
In other words, I'd like the red and green curve to coincide. I appreciate any and all tips & tricks to help me make a nice plot! Thanks in advance. ** EDIT ** DeX97's answer solved my problem perfectly although I made some minor changes, as you can see below. I just made some changes in the way I plotted things, defining the functions like DeX97 worked perfectly. Edited code for plotting fig, ax1 = plt.subplots() ax1.plot(WLtoE(x), y) ax1.set_xlabel('Energy (eV)', fontsize = 'large') ax1.set_ylabel('Absorbance (a.u.)', fontsize = 'large') # Create the second x-axis on which the wavelength in nm will be displayed ax2 = ax1.secondary_xaxis('top', functions=(EtoWL, WLtoE)) ax2.set_xlabel('Wavelength (nm)', fontsize='large') # Invert the wavelength axis ax2.invert_xaxis() # Get ticks from ax1 (energy) E_ticks = ax1.get_xticks() E_ticks = preventDivisionByZero(E_ticks) # Make own array of wavelength ticks, so they are round numbers # The values are not linearly spaced, but that is the idea. wl_ticks = np.asarray([200, 250, 300, 350, 400, 500, 600, 750, 1000, 2000]) # Set the ticks for ax2 (wl) ax2.set_xticks(wl_ticks) # Make the values on ax2 (wavelength) integer values ax2.xaxis.set_major_formatter(FormatStrFormatter('%i')) plt.tight_layout() plt.show()
In your code example, you plot the same data twice (albeit transformed using E=h*c/wl). I think it would be sufficient to only plot the data once, but create two x-axes: one displaying the wavelength in nm and one displaying the corresponding energy in eV. Consider the adjusted code below: import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import FormatStrFormatter import scipy.constants as constants from sys import float_info # Function to prevent zero values in an array def preventDivisionByZero(some_array): corrected_array = some_array.copy() for i, entry in enumerate(some_array): # If element is zero, set to some small value if abs(entry) < float_info.epsilon: corrected_array[i] = float_info.epsilon return corrected_array # Converting wavelength (nm) to energy (eV) def WLtoE(wl): # Prevent division by zero error wl = preventDivisionByZero(wl) # E = h*c/wl h = constants.h # Planck constant c = constants.c # Speed of light J_eV = constants.e # Joule-electronvolt relationship wl_nm = wl * 10**(-9) # convert wl from nm to m E_J = (h*c) / wl_nm # energy in units of J E_eV = E_J / J_eV # energy in units of eV return E_eV # Converting energy (eV) to wavelength (nm) def EtoWL(E): # Prevent division by zero error E = preventDivisionByZero(E) # Calculates the wavelength in nm return constants.h * constants.c / (constants.e * E) * 10**9 x = np.arange(200,2001,5) y = 2*x + 3 fig, ax1 = plt.subplots() ax1.plot(x, y, color='black') ax1.set_xlabel('Wavelength (nm)', fontsize = 'large') ax1.set_ylabel('Absorbance (a.u.)', fontsize = 'large') # Invert the wavelength axis ax1.invert_xaxis() # Create the second x-axis on which the energy in eV will be displayed ax2 = ax1.secondary_xaxis('top', functions=(WLtoE, EtoWL)) ax2.set_xlabel('Energy (eV)', fontsize='large') # Get ticks from ax1 (wavelengths) wl_ticks = ax1.get_xticks() wl_ticks = preventDivisionByZero(wl_ticks) # Based on the ticks from ax1 (wavelengths), calculate the corresponding # energies in 
eV E_ticks = WLtoE(wl_ticks) # Set the ticks for ax2 (Energy) ax2.set_xticks(E_ticks) # Allow for two decimal places on ax2 (Energy) ax2.xaxis.set_major_formatter(FormatStrFormatter('%.2f')) plt.tight_layout() plt.show() First of all, I define the preventDivisionByZero utility function. This function takes an array as input and checks for values that are (approximately) equal to zero. Subsequently, it will replace these values with a small number (sys.float_info.epsilon) that is not equal to zero. This function will be used in a few places to prevent division by zero. I will come back to why this is important later. After this function, your WLtoE function is defined. Note that I added the preventDivisionByZero function at the top of your function. In addition, I defined a EtoWL function, which does the opposite compared to your WLtoE function. Then, you generate your dummy data and plot it on ax1, which is the x-axis for the wavelength. After setting some labels, ax1 is inverted (as was requested in your original post). Now, we create the second axis for the energy using ax2 = ax1.secondary_xaxis('top', functions=(WLtoE, EtoWL)). The first argument indicates that the axis should be placed at the top of the figure. The second (keyword) argument is given a tuple containing two functions: the first function is the forward transform, while the second function is the backward transform. See Axes.secondary_axis for more information. Note that matplotlib will pass values to these two functions whenever necessary. As these values can be equal to zero, it is important to handle those cases. Hence, the preventDivisionByZero function! After creating the second axis, the label is set. Now we have two x-axes, but the ticks on both axis are at different locations. To 'solve' this, we store the tick locations of the wavelength x-axis in wl_ticks. 
After ensuring there are no zero elements using the preventDivisionByZero function, we calculate the corresponding energy values using the WLtoE function. These corresponding energy values are stored in E_ticks. Now we simply set the tick locations of the second x-axis equal to the values in E_ticks using ax2.set_xticks(E_ticks). To allow for two decimal places on the second x-axis (energy), we use ax2.xaxis.set_major_formatter(FormatStrFormatter('%.2f')). Of course, you can choose the desired number of decimal places yourself. The code given above produces the following graph:
5
2
68,716,239
2021-8-9
https://stackoverflow.com/questions/68716239/passing-query-prameters-in-an-api-request
Why do I need to use a dictionary to set the parameters, such as lat and lng in the following? parameters = { "lat": MY_LAT, "lng": MY_LONG, "formatted": 0 } response = requests.get(url="https://api.sunrise-sunset.org/json", params=parameters) Why can't I do the following instead, for instance to pass in the latitude? response = requests.get(url="https://api.sunrise-sunset.org/json", params=parameters, lat=MY_LAT)
You can't pass the arguments by name because the Python requests library doesn't know what parameters a given URL accepts. The API at sunrise-sunset.org takes lat and lng parameters, but most other APIs would have no use for them. By passing a dictionary of key=value pairs, you tell requests both the names and values of the parameters expected by the specific API you're calling.
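To make the dictionary's role concrete, here is a small sketch using only the standard library's urllib.parse.urlencode, which is roughly what requests does internally when it appends params to the URL (the coordinate values below are made-up placeholders):

```python
from urllib.parse import urlencode

# Hypothetical coordinates, purely for illustration
MY_LAT = 51.5074
MY_LONG = -0.1278

parameters = {"lat": MY_LAT, "lng": MY_LONG, "formatted": 0}

# requests builds the final request URL roughly like this:
query_string = urlencode(parameters)
url = "https://api.sunrise-sunset.org/json?" + query_string
print(url)
# https://api.sunrise-sunset.org/json?lat=51.5074&lng=-0.1278&formatted=0
```

Because the parameter names are just dictionary keys, requests stays completely generic: it never needs to know that this particular API happens to call its parameters lat and lng.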
5
0
68,712,420
2021-8-9
https://stackoverflow.com/questions/68712420/is-it-possible-to-run-isort-formatter-from-black-command-in-python
I like to get inspiration from well-designed Python projects. The last one that inspired me was the poetry repository. I copied a lot from that, but the subject of this post is black and isort. Both are well configured in pyproject.toml: [tool.isort] profile = "black" ... known_first_party = "poetry" [tool.black] line-length = 88 include = '\.pyi?$' exclude = ''' /( ... )/ ''' and formatting is configured in the Makefile as: format: clean @poetry run black poetry/ tests/ I thought that running make format would run black, which would internally run isort, but when I ran isort . afterwards, it still reformatted the import statements. It then seems black did not run isort. Question: does black run isort internally?
Question: does black run isort internally? No, it doesn't. isort has a profile = "black" option that makes it adhere to Black's standards though. The poetry repository itself has a pre-commit hook defined here in .pre-commit-config.yaml that makes sure isort is run (along with a couple of other tools).
6
10
68,712,892
2021-8-9
https://stackoverflow.com/questions/68712892/how-to-create-dev-requirements-txt-from-extras-require-section-of-setup-cfg
I use pip-tools to manage my dependencies and environments which perfectly generates a requirements.txt file for my package that consists of a setup.py that looks like this: #! /usr/bin/env python import os from setuptools import setup if "CI_COMMIT_TAG" in os.environ: VERSION = os.environ["CI_COMMIT_TAG"] else: VERSION = "0.0.0" setup(version=VERSION) and a setup.cfg like this: ... [options] python_requires = >=3.7 zip_safe = False packages = find: include_package_data = True install_requires = PyYAML Cython numpy==1.17.5 pandas==0.25.3 ... package_dir= foo=bar [options.extras_require] testing = tox>=3.1.2 pytest>=3.4.0 coverage = coverage pytest-cov>=2.5.1 other = anybadge ... Running $ pip-compile --index-url https://foo@[email protected]/api/v4/projects/236/packages/pypi/simple --no-header --allow-unsafe yields my package requirements: ... async-timeout==3.0.1 # via aiohttp attrs==21.2.0 # via aiohttp bcrypt==3.2.0 ... But this only includes all the packages from the install_requires section of my setup.cfg file and not the requirements from extras_require. It should work with a dev_requirements.in file as described here but I would rather only use one configuration file. How can I create a separate dev_requirements.txt from this extras_require section of my setup.cfg file using pip-compile without having to create a dev_requirements.in file? Thanks in advance!
After digging for a while, I found my answer in another issue: $ pip-compile --extra testing --extra other
4
10
68,708,788
2021-8-9
https://stackoverflow.com/questions/68708788/running-azure-functions-with-vs-code-instanlty-fails-with-econnrefused
Yesterday I could run and debug my Azure Function project with VS Code. Today, when I'm hitting F5 ("Start Debugging"), a pop-up instantly appears with connect ECONNREFUSED 127.0.0.1:9091 I believe it's a network issue, since VS Code doesn't even open a terminal displaying "Executing task [...]". But within VS Code I can browse the extension marketplace, run "pip install" within the terminal. Here is my configuration: launch.json { "version": "0.2.0", "configurations": [ { "name": "Python: Current file", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" }, { "name": "Azure Functions", "type": "python", "request": "attach", "port": 9091, "preLaunchTask": "func: host start" } ] } tasks.json { "version": "2.0.0", "tasks": [ { "type": "func", "command": "host start", "problemMatcher": "$func-python-watch", "isBackground": true, "dependsOn": "pipInstall", "options": { "cwd": "${config:azureFunctions.projectSubpath}" } }, { "label": "pipInstall", "type": "shell", "command": "${config:python.pythonPath} -m pip install -r requirements.txt", "problemMatcher": [], "options": { "cwd": "${config:azureFunctions.projectSubpath}" } } ] } settings.json { "azureFunctions.deploySubpath": "azure-functions\\my_project", "azureFunctions.projectSubpath": "azure-functions\\my_project", "azureFunctions.scmDoBuildDuringDeployment": true, "azureFunctions.pythonVenv": ".venv", "azureFunctions.projectLanguage": "Python", "azureFunctions.projectRuntime": "~3", "debug.internalConsoleOptions": "neverOpen", "python.pythonPath": "C:\\Users\\myself\\BigRepo\\azure-functions\\my_project\\.venv\\Scripts\\python.exe", "powershell.codeFormatting.addWhitespaceAroundPipe": true //"terminal.integrated.shell.windows": "&&" }
This is so weird. I edited my launch.json back and forth, sometimes it was a valid one, sometimes it was not, and now when I'm saving my original launch.json, the configuration works. This configuration works fine: { "version": "0.2.0", "configurations": [ { "name": "Python: Current file", "type": "python", "request": "launch", "program": "${file}", "console": "integratedTerminal" }, { "name": "Azure Functions", "type": "python", "request": "attach", "port": 9091, "preLaunchTask": "func: host start" } ] } I'm not sure I understand what is going on.
5
3
68,667,893
2021-8-5
https://stackoverflow.com/questions/68667893/implementing-python-decorators-in-a-toy-example
I have been trying to find a use case to learn decorators and I think I have found one which is relevant to me. I am using the following codes. In the file class1.py I have: import pandas as pd, os class myClass(): def __init__(self): fnDone = f'C:\user1\Desktop\loc1\fn.csv' if os.path.exists(fnDone): return self.Fn1() pd.DataFrame({'Done': 1}, index=[0]).to_csv(fnDone) def Fn1(self): print('something') if __name__ == '__main__': myClass() In the file class2.py I have: class myClassInAnotherFile(): def __init__(self): fnDone = f'C:\user1\Desktop\loc2\fn.csv' if os.path.exists(fnDone): return self.Fn1() self.Fn2() pd.DataFrame({'Done': 1}, index=[0]).to_csv(fnDone) def Fn1(self): print('something') def Fn2(self): print('something else') if __name__ == '__main__': myClassInAnotherFile('DoneFile12) Is there a way to define a generic decorator code in another file called utilities.py so that I can do something of the following sort: Desired in the file class1.py I have: import pandas as pd, os class myClass(): def __init__(self): fnDone = f'C:\user1\Desktop\loc1\fn.csv' self.Fn1() pd.DataFrame({'Done': 1}, index=[0]).to_csv(fnDone) def Fn1(self): print('something') if __name__ == '__main__': @myDecorator myClass() In the file class2.py I have: class myClassInAnotherFile(): def __init__(self): fnDone = f'C:\user1\Desktop\loc2\fn.csv' self.Fn1() self.Fn2() pd.DataFrame({'Done': 1}, index=[0]).to_csv(fnDone) def Fn1(self): print('something') def Fn2(self): print('something else') if __name__ == '__main__': @myDecorator myClassInAnotherFile() Essentially mimicking the original behavior using a decorator. Edit1: I am looking to extend the functionality of my class definitions. In the both original class definitions, I repeat the code which checks for fnDone file and if it is present, exits the class. Goal is to have a decorator which checks for the fnDone file and exits the class if it is present. 
Edit2: I can do this as a function too, but I am trying to learn how to extend the functionality of a class or method using decorators. Edit3: Does it make it easier if I have the following instead in class1.py: def myClass(): fnDone = f'C:\user1\Desktop\loc1\fn.csv' if os.path.exists(fnDone): return self.Fn1() pd.DataFrame({'Done': 1}, index=[0]).to_csv(fnDone) def Fn1(self): print('something') if __name__ == '__main__': myClass() and class2.py as follows: def myClassInAnotherFile(): fnDone = f'C:\user1\Desktop\loc2\fn.csv' if os.path.exists(fnDone): return self.Fn1() self.Fn2() pd.DataFrame({'Done': 1}, index=[0]).to_csv(fnDone) def Fn1(self): print('something') def Fn2(self): print('something else') if __name__ == '__main__': myClassInAnotherFile('DoneFile12')
Because fnDone is a local variable rather than a parameter, it makes using a decorator a bit awkward. If you modify the code slightly to pass in fnDone as a parameter, it makes using a decorator more of a viable option. For example, you could make a decorator that wraps the constructor of an object, and checks if the file passed in exist or not: import os.path from functools import wraps import pandas as pd def check_file_exists(f): @wraps(f) def _inner(self, fn_done): if os.path.exists(fn_done): return f(self, fn_done) return _inner class MyClass: @check_file_exists def __init__(self, fn_done) -> None: pd.DataFrame({'Done': 1}, index=[0]).to_csv(fn_done) if __name__ == "__main__": MyClass("fn.csv")
5
4
68,703,741
2021-8-8
https://stackoverflow.com/questions/68703741/using-new-in-inherited-dataclasses
Suppose I have the following code that is used to handle links between individuals and countries: from dataclasses import dataclass @dataclass class Country: iso2 : str iso3 : str name : str countries = [ Country('AW','ABW','Aruba'), Country('AF','AFG','Afghanistan'), Country('AO','AGO','Angola')] countries_by_iso2 = {c.iso2 : c for c in countries} countries_by_iso3 = {c.iso3 : c for c in countries} @dataclass class CountryLink: person_id : int country : Country country_links = [ CountryLink(123, countries_by_iso2['AW']), CountryLink(456, countries_by_iso3['AFG']), CountryLink(789, countries_by_iso2['AO'])] print(country_links[0].country.name) This is all working fine, but I decide that I want to make it a bit less clunky to be able to handle the different forms of input. I also want to use __new__ to make sure that we are getting a valid ISO code each time, and I want to object to fail to be created in that case. I therefore add a couple new classes that inherit from this: @dataclass class CountryLinkFromISO2(CountryLink): def __new__(cls, person_id : int, iso2 : str): if iso2 not in countries_by_iso2: return None new_obj = super().__new__(cls) new_obj.country = countries_by_iso2[iso2] return new_obj @dataclass class CountryLinkFromISO3(CountryLink): def __new__(cls, person_id : int, iso3 : str): if iso3 not in countries_by_iso3: return None new_obj = super().__new__(cls) new_obj.country = countries_by_iso3[iso3] return new_obj country_links = [ CountryLinkFromISO2(123, 'AW'), CountryLinkFromISO3(456, 'AFG'), CountryLinkFromISO2(789, 'AO')] This appears to work at first glance, but then I run into a problem: a = CountryLinkFromISO2(123, 'AW') print(type(a)) print(a.country) print(type(a.country)) returns: <class '__main__.CountryLinkFromISO2'> AW <class 'str'> The inherited object has the right type, but its attribute country is just a string instead of the Country type that I expect. 
I have put in print statements in the __new__ that check the type of new_obj.country, and it is correct before the return line. What I want to achieve is to have a be an object of the type CountryLinkFromISO2 that will inherit changes I make to CountryLink and for it to have an attribute country that is taken from the dictionary countries_by_iso2. How can I achieve this?
Just because the dataclass does it behind the scenes doesn't mean your classes don't have an __init__(). They do, and it looks like: def __init__(self, person_id: int, country: Country): self.person_id = person_id self.country = country When you create the class with: CountryLinkFromISO2(123, 'AW') that "AW" string gets passed to __init__() and sets the value to a string. Using __new__() in this way is fragile and returning None from a constructor is fairly un-pythonic (imo). Maybe you would be better off making an actual factory function that returns either None or the class you want. Then you don't need to mess with __new__() at all. @dataclass class CountryLinkFromISO2(CountryLink): @classmethod def from_country_code(cls, person_id : int, iso2 : str): if iso2 not in countries_by_iso2: return None return cls(person_id, countries_by_iso2[iso2]) a = CountryLinkFromISO2.from_country_code(123, 'AW') If for some reason it needs to work with __new__(), you could return None from new when there's no match, and set the country in __post_init__(): @dataclass class CountryLinkFromISO2(CountryLink): def __new__(cls, person_id : int, iso2 : str): if iso2 not in countries_by_iso2: return None return super().__new__(cls) def __post_init__(self): self.country = countries_by_iso2[self.country]
7
8
68,700,008
2021-8-8
https://stackoverflow.com/questions/68700008/difference-between-just-reshaping-and-reshaping-and-getting-transpose
I'm currently studying CS231 assignments and I've realized something confusing. When calculating gradients, when I first reshape x then get transpose I got the correct result. x_r=x.reshape(x.shape[0],-1) dw= x_r.T.dot(dout) However, when I reshape directly as the X.T shape it doesn't return the correct result. dw = x.reshape(-1,x.shape[0]).dot(dout) Can someone explain the following question? How does the order of getting elements with np.reshape() change? How reshaping (N,d1,d2..dn) shaped array into N,D array differs from getting a reshaped array of (D,N) with its transpose.
While both your approaches result in arrays of same shape, there will by a difference in the order of elements due to the way numpy reads / writes elements. By default, reshape uses a C-like index order, which means the elements are read / written with the last axis index changing fastest, back to the first axis index changing slowest (taken from the documentation). Here is an example of what that means in practice. Let's assume the following array x: x = np.asarray([[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10], [11, 12]]]) print(x.shape) # (2, 3, 2) print(x) # output [[[ 1 2] [ 3 4] [ 5 6]] [[ 7 8] [ 9 10] [11 12]]] Now let's reshape this array the following two ways: opt1 = x.reshape(x.shape[0], -1) opt2 = x.reshape(-1, x.shape[0]) print(opt1.shape) # outptu: (2, 6) print(opt2.shape) # output: (6, 2) print(opt1) # output: [[ 1 2 3 4 5 6] [ 7 8 9 10 11 12]] print(opt2) # output: [[ 1 2] [ 3 4] [ 5 6] [ 7 8] [ 9 10] [11 12]] reshape first inferred the shape of the new arrays and then returned a view where it read the elements in C-like index order. To exemplify this on opt1: since the original array x has 12 elements, it inferred that the new array opt1 must have a shape of (2, 6) (because 2*6=12). Now, reshape returns a view where: opt1[0][0] == x[0][0][0] opt1[0][1] == x[0][0][1] opt1[0][2] == x[0][1][0] opt1[0][3] == x[0][1][1] opt1[0][4] == x[0][2][0] opt1[0][5] == x[0][2][1] opt1[1][0] == x[1][0][0] ... opt1[1][5] == x[1][2][1] So as described above, the last axis index changes fastest and the first axis index slowest. In the same way, the output for opt2 will be computed. You can now verify that transposing the first option will result in the same shape but a different order of elements: opt1 = opt1.T print(opt1.shape) # output: (6, 2) print(opt1) # output: [[ 1 7] [ 2 8] [ 3 9] [ 4 10] [ 5 11] [ 6 12]] Obviously, the two approaches do not result in the same array due to element ordering, even though they will have the same shape.
6
8
68,701,240
2021-8-8
https://stackoverflow.com/questions/68701240/fastapi-post-request-with-bytes-object-got-422-error
I am writing a python post request with a bytes body: with open('srt_file.srt', 'rb') as f: data = f.read() res = requests.post(url='http://localhost:8000/api/parse/srt', data=data, headers={'Content-Type': 'application/octet-stream'}) And in the server part, I tried to parse the body: app = FastAPI() BaseConfig.arbitrary_types_allowed = True class Data(BaseModel): data: bytes @app.post("/api/parse/{format}", response_model=CaptionJson) async def parse_input(format: str, data: Data) -> CaptionJson: ... However, I got the 422 error: {"detail":[{"loc":["body"],"msg":"value is not a valid dict","type":"type_error.dict"}]} So where is wrong with my code, and how should I fix it? Thank you all in advance for helping out!!
FastAPI by default will expect you to pass json which will parse into a dict. It can't do that if the body isn't json, which is why you get the error you see. You can use the Request object instead to receive arbitrary bytes from the POST body. from fastapi import FastAPI, Request app = FastAPI() @app.post("/foo") async def parse_input(request: Request): data: bytes = await request.body() # Do something with data You might consider using Depends, which will allow you to clean up your route function. FastAPI will first call your dependency function (parse_body in this example) and will inject that as an argument into your route function. from fastapi import FastAPI, Request, Depends app = FastAPI() async def parse_body(request: Request): data: bytes = await request.body() return data @app.post("/foo") async def parse_input(data: bytes = Depends(parse_body)): # Do something with data pass
6
11
68,697,824
2021-8-8
https://stackoverflow.com/questions/68697824/numpy-error-when-importing-pandas-with-aws-lambda
I'm currently have an issue with importing the library pandas to my AWS Lambda Function. I have tried two scenarios. Installing pandas directly into one folder with my lambda_function and uploading the zipped file. Creating a layer with an uploaded zip file with the following structure: - python - lib - python3.8 - site-packages - all the pandas packages here My lambda_function is just: import json import pandas as pd def lambda_handler(event, context): return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } This is my error: START RequestId: 9e27641e-587b-4be2-b9be-c9be85007f9e Version: $LATEST [ERROR] Runtime.ImportModuleError: Unable to import module 'main': Unable to import required dependencies: numpy: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.8 from "/var/lang/bin/python3.8" * The NumPy version is: "1.21.1" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: No module named 'numpy.core._multiarray_umath' Is there any other approach? I don't want to use Docker for this task. Thanks!
I have solved the issue, thanks to this article: https://korniichuk.medium.com/lambda-with-pandas-fd81aa2ff25e In my case, I cannot normally install the libraries through pip, I'm on a windows machine. You must install the linux versions of pandas and numpy. Since I'm on python 3.8 I installed these versions: numpy-1.21.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl pandas-1.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl After downloading the packages, I replaced the pandas and numpy folders that were originally from the install via pip install pandas. I used my first scenario as showed in my question.
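As an alternative to downloading the wheel files by hand, pip download can fetch wheels for a foreign platform directly. This is a sketch of the idea (the version numbers match the ones above; adjust them and the destination directory to your setup):

```shell
# Fetch Linux (manylinux) wheels for the Lambda Python 3.8 runtime,
# even when running on Windows or macOS
pip download numpy==1.21.1 pandas==1.3.1 \
    --platform manylinux2014_x86_64 \
    --only-binary=:all: \
    --python-version 3.8 \
    -d lambda_wheels/
```

Unzipping the downloaded wheels into the deployment package (or layer) then gives you the same Linux-compatible pandas and numpy folders as the manual replacement described above.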
11
7
68,695,851
2021-8-7
https://stackoverflow.com/questions/68695851/mypy-cannot-find-implementation-or-library-stub-for-module
I have: foo/ ├── __init__.py ├── bar.py └── baz ├── __init__.py └── alice.py In bar.py, I import Alice, which is an empty class with nothing in it but the name attribute set to "Alice". from baz.alice import Alice a = Alice() print(a.name) This runs properly: $ python foo/bar.py Alice But mypy complains: $ mypy --version mypy 0.910 $ mypy --strict . foo/bar.py:1: error: Cannot find implementation or library stub for module named "baz.alice" foo/bar.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports Found 1 error in 1 file (checked 6 source files) Why is mypy complaining?
mypy has its own search path for imports and does not resolve imports exactly as Python does and it isn't able to find the baz.alice module. Check the documentation listed in the error message, specifically the section on How imports are found: The rules for searching for a module foo are as follows: The search looks in each of the directories in the search path (see above) until a match is found. If a package named foo is found (i.e. a directory foo containing an __init__.py or __init__.pyi file) that’s a match. If a stub file named foo.pyi is found, that’s a match. If a Python module named foo.py is found, that’s a match. The documentation also states that this in the section on Mapping file paths to modules: For each file to be checked, mypy will attempt to associate the file (e.g. project/foo/bar/baz.py) with a fully qualified module name (e.g. foo.bar.baz). There's a few ways to solve this particular issue: As paul41 mentioned in his comment, one option to solve this issue is by providing the fully qualified import (from foo.baz.alice import Alice), and then running from a top-level module (a .py file in the root level). You could add a # type: ignore to the import line. You can edit the MYPYPATH variable to point to the foo directory: (venv) (base) ➜ mypy foo/bar.py --strict foo/bar.py:3: error: Cannot find implementation or library stub for module named "baz.alice" foo/bar.py:3: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports Found 1 error in 1 file (checked 1 source file) (venv) (base) ➜ export MYPYPATH=foo/ (venv) (base) ➜ mypy foo/bar.py --strict Success: no issues found in 1 source file
50
44
68,688,309
2021-8-6
https://stackoverflow.com/questions/68688309/why-does-dask-seem-to-store-parquet-inefficiently
When I save the same table using Pandas and Dask into Parquet, Pandas creates a 4k file, wheres Dask creates a 39M file. Create the dataframe import pandas as pd import pyarrow as pa import pyarrow.parquet as pq import dask.dataframe as dd n = int(1e7) df = pd.DataFrame({'col': ['a'*64]*n}) Save it in different ways # Pandas: 4k df.to_parquet('example-pandas.parquet') # PyArrow: 4k pq.write_table(pa.Table.from_pandas(df), 'example-pyarrow.parquet') # Dask: 39M dd.from_pandas(df, npartitions=1).to_parquet('example-dask.parquet', compression='snappy') At first I thought that Dask doesn't utilize dictionary and run-length encoding, but that does not seem to be the case. I am not sure if I'm interpreting the metadata info correctly, but at the very least, it seems to be exactly the same: >>> pq.read_metadata('example-pandas.parquet').row_group(0).column(0) <pyarrow._parquet.ColumnChunkMetaData object at 0x7fbee7d1a770> file_offset: 548 file_path: physical_type: BYTE_ARRAY num_values: 10000000 path_in_schema: col is_stats_set: True statistics: <pyarrow._parquet.Statistics object at 0x7fbee7d2cc70> has_min_max: True min: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa max: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa null_count: 0 distinct_count: 0 num_values: 10000000 physical_type: BYTE_ARRAY logical_type: String converted_type (legacy): UTF8 compression: SNAPPY encodings: ('PLAIN_DICTIONARY', 'PLAIN', 'RLE') has_dictionary_page: True dictionary_page_offset: 4 data_page_offset: 29 total_compressed_size: 544 total_uncompressed_size: 596 >>> pq.read_metadata('example-dask.parquet/part.0.parquet').row_group(0).column(0) <pyarrow._parquet.ColumnChunkMetaData object at 0x7fbee7d3d180> file_offset: 548 file_path: physical_type: BYTE_ARRAY num_values: 10000000 path_in_schema: col is_stats_set: True statistics: <pyarrow._parquet.Statistics object at 0x7fbee7d3d1d0> has_min_max: True min: 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa max: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa null_count: 0 distinct_count: 0 num_values: 10000000 physical_type: BYTE_ARRAY logical_type: String converted_type (legacy): UTF8 compression: SNAPPY encodings: ('PLAIN_DICTIONARY', 'PLAIN', 'RLE') has_dictionary_page: True dictionary_page_offset: 4 data_page_offset: 29 total_compressed_size: 544 total_uncompressed_size: 596 Why is Dask-create Parquet so much larger? Alternatively, how can I inspect the possible problems further?
Dask appears to be saving an int64 index... >>> meta.row_group(0).column(1) <pyarrow._parquet.ColumnChunkMetaData object at 0x7fa41e1babd0> file_offset: 40308181 file_path: physical_type: INT64 num_values: 10000000 path_in_schema: __null_dask_index__ is_stats_set: True statistics: <pyarrow._parquet.Statistics object at 0x7fa41e1badb0> has_min_max: True min: 0 max: 9999999 null_count: 0 distinct_count: 0 num_values: 10000000 physical_type: INT64 logical_type: None converted_type (legacy): NONE compression: SNAPPY encodings: ('PLAIN_DICTIONARY', 'PLAIN', 'RLE', 'PLAIN') has_dictionary_page: True dictionary_page_offset: 736 data_page_offset: 525333 total_compressed_size: 40307445 total_uncompressed_size: 80284661 You can disable this with write_index: dd.from_pandas(df, npartitions=1).to_parquet('example-dask.parquet', compression='snappy', write_index=False) Pyarrow won't generate any indices. Pandas does generate an index but, at least when using the arrow engine, simple linear indices will be saved as metadata and not an actual column. >>> table = pq.read_table('example-pandas.parquet') >>> pandas_meta = json.loads(table.schema.metadata[b'pandas']) >>> pandas_meta['index_columns'][0] {'kind': 'range', 'name': None, 'start': 0, 'stop': 10000000, 'step': 1}
5
4
68,677,902
2021-8-6
https://stackoverflow.com/questions/68677902/is-there-complete-documentation-for-setup-cfg
The Python Packaging Tutorial recommends that "Static metadata (setup.cfg) should be preferred. Dynamic metadata (setup.py) should be used only as an escape hatch when absolutely necessary. setup.py used to be required, but can be omitted with newer versions of setuptools and pip." The guide to packaging and distributing projects explains that "setup.cfg is an ini file that contains option defaults for setup.py commands. For an example, see the setup.cfg in the PyPA sample project." That example is entirely useless, and there doesn't appear to be a lot of other helpful information. The examples in the tutorial suggest that some, or perhaps all, valid arguments to setuptools.setup() can be listed in setup.cfg, but there is no real explanation to this effect. In particular, it's not clear how to translate a list argument, like that for the very common and important install_requires parameter, into lines in setup.cfg. The proper way to do this, as I've determined by deduction and experimentation, appears to be as follows: [options] install_requires = dependency_1 dependency_2 Obviously it would be better for this to be properly documented somewhere so that new package creators do not have to undergo a similar process just to specify their projects' dependencies. Does such documentation exist?
Yes, in the setuptools documentation. Here it is: https://setuptools.readthedocs.io/en/latest/userguide/declarative_config.html
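As a fuller illustration of how list-valued `setup()` arguments translate into ini sections, here is a hypothetical setup.cfg sketch; the package name, version, and dependency names are placeholders, not from the linked documentation:

```ini
[metadata]
name = my_package
version = 0.1.0
description = Example package

[options]
packages = find:
python_requires = >=3.6
# list arguments become one item per indented line
install_requires =
    requests>=2.0
    numpy
```

The indented-lines convention shown under install_requires is the same one the question deduced by experimentation.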
34
19
68,676,637
2021-8-6
https://stackoverflow.com/questions/68676637/attributeerror-word2vec-object-has-no-attribute-most-similar-word2vec
I am using Word2Vec with a wiki-trained model that gives out the most similar words. I ran this before and it worked, but now it gives me this error even after rerunning the whole program. I tried to take off return_path=True but I'm still getting the same error print(api.load('glove-wiki-gigaword-50', return_path=True)) model.most_similar("glass") #ERROR: /Users/me/gensim-data/glove-wiki-gigaword-50/glove-wiki-gigaword-50.gz --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-153-3bf32168d154> in <module> 1 print(api.load('glove-wiki-gigaword-50', return_path=True)) ----> 2 model.most_similar("glass") AttributeError: 'Word2Vec' object has no attribute 'most_similar' #MODEL this is the model I used print( '%s (%d records): %s' % ( model_name, model_data.get('num_records', -1), model_data['description'][:40] + '...', ) ) Edit: here is my gensim download & output !python -m pip install -U gensim OUTPUT: Requirement already satisfied: gensim in ./opt/anaconda3/lib/python3.8/site-packages (4.0.1) Requirement already satisfied: numpy>=1.11.3 in ./opt/anaconda3/lib/python3.8/site-packages (from gensim) (1.20.1) Requirement already satisfied: smart-open>=1.8.1 in ./opt/anaconda3/lib/python3.8/site-packages (from gensim) (5.1.0) Requirement already satisfied: scipy>=0.18.1 in ./opt/anaconda3/lib/python3.8/site-packages (from gensim) (1.6.2)
You are probably looking for <MODEL>.wv.most_similar, so please try: model.wv.most_similar("glass")
5
18
68,683,160
2021-8-6
https://stackoverflow.com/questions/68683160/how-to-add-to-annotations-using-the-fmt-option-of-bar-label
I'm trying to use the new bar_label option in Matplotlib but am unable to find a way to append text e.g. '%' after the label values. Previously, using ax.text I could use f-strings, but I can't find a way to use f-strings with the bar-label approach. fig, ax = plt.subplots(1, 1, figsize=(12,8)) hbars = ax.barh(wash_needs.index, wash_needs.values, color='#2a87c8') ax.tick_params(axis='x', rotation=0) # previously I used this approach to add labels #for i, v in enumerate(wash_needs): # ax.text(v +3, i, str(f"{v/temp:.0%}"), color='black', ha='right', va='center') ax.bar_label(hbars, fmt='%.2f', padding=3) # this adds a label but I can't find a way to append a '%' after the number plt.show()
I found a way to append '%' to the label figures - add an additional '%%' ax.bar_label(hbars, fmt='%.2f%%', padding=3) Working Example import pandas as pd import seaborn as sns # for tips data tips = sns.load_dataset('tips').loc[:15, ['total_bill', 'tip']] tips.insert(2, 'tip_percent', tips.tip.div(tips.total_bill).mul(100).round(2)) total_bill tip tip_percent 0 16.99 1.01 5.94 1 10.34 1.66 16.05 2 21.01 3.50 16.66 3 23.68 3.31 13.98 4 24.59 3.61 14.68 # plot ax = tips.plot(kind='barh', y='tip_percent', legend=False, figsize=(12, 8)) labels = ax.set(xlabel='Tips: Percent of Bill (%)', ylabel='Record', title='Tips % Demo') annotations = ax.bar_label(ax.containers[0], fmt='%.2f%%', padding=3) ax.margins(x=0.1)
6
14
68,682,091
2021-8-6
https://stackoverflow.com/questions/68682091/docker-postgres-role-does-not-exist
I am using postgres with docker and having trouble with it. I successfully ran docker-compose up --build. When I run the below command it works fine: psql starts with the my_username user as expected, I can see my database, and the \l and \dt commands work OK. docker-compose exec db psql --username=my_username --dbname=my_database But when I run the below commands I get the error role “postgres” does not exist. Additionally, \l and \dt do not work; even the psql command does not work. docker exec -it my_db_container_1 bash su - postgres createuser --createdb --password new_user How can I get it right in the second case? What is going wrong? I am confused. docker-compose.yml version: "3.9" services: #### other services #### db: image: postgres:latest restart: always environment: POSTGRES_DB: my_database POSTGRES_USER: my_username POSTGRES_PASSWORD: my_password ports: - 5432 volumes: - postgres_data:/var/lib/postgresql/data/ volumes: postgres_data:
You have changed the default username/database/password that the postgres database is initialized with by providing the POSTGRES_USER, POSTGRES_DB, and POSTGRES_PASSWORD environment variables. When you run createuser without a -U option, it tries to connect as the current user (postgres in this case) which doesn't exist on the database because you initialized it with the user my_username. The reason docker-compose exec db psql --username=my_username --dbname=my_database works is because you are correctly supplying the username and database names that the database was initialized with. If you remove the POSTGRES_USER and POSTGRES_DB environment variables, it will initialize with the defaults postgres/postgres. Note that because you are mounting a volume into this container, which will now have an initialized database in it already, it will not be reinitialized even if you restart your compose. You need to docker volume rm that volume in order to have the database be reinitialized when you start the container.
6
0
68,682,209
2021-8-6
https://stackoverflow.com/questions/68682209/parse-json-without-quotes-in-python
I am trying to parse JSON input as a string in Python; I am not able to parse it as a list or dict since the JSON input is not in a proper format (due to limitations in the middleware, I can't do much here.) { "Records": "{Output=[{_fields=[{Entity=ABC , No=12345, LineNo= 1, EffDate=20200630}, {Entity=ABC , No=567, LineNo= 1, EffDate=20200630}]}" } I tried json.loads and ast.literal_eval (invalid syntax error). How can I load this?
If the producer of the data is consistent, you can start with something like the following, that aims to bridge the JSON gap. import re import json source = { "Records": "{Output=[{_fields=[{Entity=ABC , No=12345, LineNo= 1, EffDate=20200630}, {Entity=ABC , No=567, LineNo= 1, EffDate=20200630}]}" } s = source["Records"] # We'll start by removing any extraneous white spaces s2 = re.sub('\s', '', s) # Surrounding any word with " s3 = re.sub('(\w+)', '"\g<1>"', s2) # Replacing = with : s4 = re.sub('=', ':', s3) # Lastly, fixing missing closing ], } ## Note that }} is an escaped } for f-string. s5 = f"{s4}]}}" >>> json.loads(s5) {'Output': [{'_fields': [{'Entity': 'ABC', 'No': '12345', 'LineNo': '1', 'EffDate': '20200630'}, {'Entity': 'ABC', 'No': '567', 'LineNo': '1', 'EffDate': '20200630'}]}]} Follow up with some robust testing and have a nice polished ETL with your favorite tooling.
5
3
68,675,254
2021-8-6
https://stackoverflow.com/questions/68675254/how-can-i-scroll-down-using-selenium
The code is as below. driver = webdriver.Chrome(chromedriver_path) #webdriver path driver.get('https://webtoon.kakao.com/content/%EB%B0%94%EB%8B%88%EC%99%80-%EC%98%A4%EB%B9%A0%EB%93%A4/1781') #website access time.sleep(2) driver.execute_script("window.scrollTo(0, 900)") #scroll down time.sleep(1) However, the page does not scroll. How can I scroll? Page link to be scrolled
Tried with the below code, it did scroll. driver.get("https://webtoon.kakao.com/content/%EB%B0%94%EB%8B%88%EC%99%80-%EC%98%A4%EB%B9%A0%EB%93%A4/1781") time.sleep(2) options = driver.find_element_by_xpath("//div[@id='root']/main/div/div/div/div[1]/div[3]/div/div/div[1]/div/div[2]/div/div[1]/div/div/div/div") driver.execute_script("arguments[0].scrollIntoView(true);",options)
4
3
68,670,406
2021-8-5
https://stackoverflow.com/questions/68670406/why-do-seaborn-countplots-and-histplots-display-the-same-hexadecimal-color-diffe
I'm trying to keep a singular color palette in my thesis, and I noticed that the blue of my histplots and the blue of my countplots are slightly different shades, even though I set them to the exact same hexadecimal value. Is there a setting that I'm missing or do these different plots not just show the hexadecimal as given? I've tried playing around with the countplot saturation but it doesn't match the color. Ideally all of my histplots would have the same color as my countplots (and bar plots which use the countplot coloring too). Below a minimum code example: import seaborn as sns import matplotlib.pyplot as plt sns.set(rc={'figure.figsize':(20,10)}, font_scale=2) plt.rcParams['axes.grid'] = False titanic = sns.load_dataset('titanic') fig, ax = plt.subplots(1,2) sns.countplot(x="class", data=titanic, ax=ax[0], color='#5975a4') sns.histplot(x="who", data=titanic, ax=ax[1], color='#5975a4') It produces the following figure:
The countplot has a saturation parameter (more saturation is more "real" color, less saturation is closer to grey). Seaborn uses saturation in bar plots to make the default colors look "smoother". The default saturation is 0.75; it can be set to 1 to get the "true" color. The histplot has an alpha parameter, making the color semi-transparent. The color gets mixed with the background, so it looks different depending on the background color. In this case, the alpha seems to default to 0.75. As that also has an effect similar to saturation, the histplot doesn't use saturation. The transparency is especially useful when multiple histograms are drawn in the same subplot. To get both in "real" color, set both the saturation of the countplot and the alpha of the histplot to 1: import seaborn as sns import matplotlib.pyplot as plt sns.set(rc={'figure.figsize': (20, 10)}, font_scale=2) plt.rcParams['axes.grid'] = False titanic = sns.load_dataset('titanic') fig, ax = plt.subplots(1, 2) sns.countplot(x="class", data=titanic, ax=ax[0], color='#5975a4', saturation=1) sns.histplot(x="who", data=titanic, ax=ax[1], color='#5975a4', alpha=1) plt.show() PS: By default, a countplot uses only 80% of the width, while a histogram uses the full width. If desired, the histogram bars can be shrunk, e.g. sns.histplot(..., shrink=0.8), to get the same width as the countplot.
5
7
68,671,158
2021-8-5
https://stackoverflow.com/questions/68671158/how-to-check-python-scripts-for-f-strings-which-are-missing-the-f-literal-for
I often forget to prefix formatted strings with "f". A buggy example: text = "results is {result}" Where it should be text = f"results is {result}" I make this error A LOT; my IDE doesn't report it, and the program runs without exceptions. I thought maybe to scan my source code for quoted strings, check for {,} characters, and check whether the "f" prefix is missing; but it's better, I guess, to use a parser? Or maybe someone did it already?
The problem is that text = "results is {result}" is a valid template string, so you can later use it in your program like: >>> text.format(result=1) 'results is 1' >>> text.format(result=3) 'results is 3' What you can achieve is just checking if an f-string does indeed use variables inside, like pylint and flake8 already do. For what you seek, however, there is something going on with PyLint: this issue is 3 years old, but it is exactly what you need, and recently (three days ago) a user submitted a pull request that is currently WIP and should resolve your problem.
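As a starting point for the scan-the-source idea from the question, here is a rough sketch using the standard-library ast module. It flags plain string literals that contain {name}-style placeholders; note it will also flag intentional str.format templates, and the function name find_suspect_strings is made up for illustration:

```python
import ast
import re

SOURCE = '''
result = 42
text = "results is {result}"   # missing f prefix
ok = f"results is {result}"
'''

def find_suspect_strings(source):
    """Return (lineno, value) pairs for plain string literals that look
    like forgotten f-strings, i.e. contain {identifier} placeholders.
    Real f-strings parse as JoinedStr nodes, so their literal parts
    never contain braces and are not flagged."""
    suspects = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            if re.search(r"\{[A-Za-z_][A-Za-z0-9_]*\}", node.value):
                suspects.append((node.lineno, node.value))
    return suspects

print(find_suspect_strings(SOURCE))  # [(3, 'results is {result}')]
```

Because the parser sees f-strings as JoinedStr nodes rather than plain constants, only the forgotten-prefix string on line 3 is reported.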
5
4
68,671,051
2021-8-5
https://stackoverflow.com/questions/68671051/special-text-to-latin-characters-in-python
I have the following pandas data frame: the_df = pd.DataFrame({'id':[1,2],'name':['Joe','𝒮𝒶𝓇𝒶𝒽']}) the_df id name 0 1 Joe 1 2 𝒮𝒶𝓇𝒶𝒽 As you can see, we can read the second name as "Sarah", but it's written with special characters. I want to create a new column with these characters converted to latin characters. I have tried this approach: the_df['latin_name'] = the_df['name'].str.extract(r'(^[a-zA-Z\s]*)') the_df id name latin_name 0 1 Joe Joe 1 2 𝒮𝒶𝓇𝒶𝒽 But it doesn't recognize the letters. Please, any help on this will be greatly appreciated.
Try .str.normalize the_df['name'].str.normalize('NFKC').str.extract(r'(^[a-zA-Z\s]*)') Output: 0 0 Joe 1 Sarah
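The normalization step itself is plain Python: NFKC compatibility normalization maps the mathematical script letters back to their ASCII equivalents, which is why the pandas accessor above works:

```python
import unicodedata

fancy = "𝒮𝒶𝓇𝒶𝒽"  # mathematical script letters, as in the question
plain = unicodedata.normalize("NFKC", fancy)
print(plain)                   # Sarah
print(fancy == "Sarah", plain == "Sarah")  # False True
```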
5
5
68,669,841
2021-8-5
https://stackoverflow.com/questions/68669841/how-can-i-use-this-complex-number-in-numpy-matrix
Here's the Python code I'm working on: def inver_hopf(x,y,z): return (1/np.sqrt(x**2+y**2+(1+z)**2))*np.matrix([[1+z],[x+y.j]],dtype=complex) The problem happens at [x+y.j], where j means complex unit. It returns me the error message AttributeError: 'int' object has no attribute 'j'. If I remove the dot, then it returns NameError: name 'yj' is not defined. How can I correct that? Thanks!
j alone is just a variable name; you get the imaginary unit by typing the literal 1j
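A short sketch of the fix for the question's code: when the imaginary part is a variable, multiply it by the literal 1j instead of writing y.j:

```python
# 1j is the imaginary-unit literal; a bare j would be a NameError.
x, y = 3.0, 4.0
z = x + y * 1j          # building a complex number from variables
print(z)                # (3+4j)
print(abs(z))           # 5.0
print(z == complex(x, y))  # True, equivalent constructor form
```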
4
5
68,668,895
2021-8-5
https://stackoverflow.com/questions/68668895/when-i-run-the-code-it-says-typeerror-unlink-got-an-unexpected-keyword-argum
This is my code. It is everything I have in my programme: from pathlib import Path new_dir = Path.home() / "new_directory" file_path = new_dir / "program2.py" file_path.unlink(missing_ok=True) The file program2.py does not exist; that is why I wanted to set the missing_ok parameter to True so that it would not raise a FileNotFoundError. But every time I run the code it gives me the following message: file_path.unlink(missing_ok=True) TypeError: unlink() got an unexpected keyword argument 'missing_ok' Do I have an outdated version of Python or have I made a mistake in the code? Help would be much appreciated!
The missing_ok parameter was added to Path.unlink only in Python 3.8. You should upgrade Python to a newer version if you want to use this parameter. You can check your Python version with the command python -V
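If upgrading is not an option, the same behaviour can be emulated on Python < 3.8 by catching FileNotFoundError. A sketch using a temporary directory (the helper name unlink_if_exists is made up):

```python
import tempfile
from pathlib import Path

def unlink_if_exists(path: Path) -> None:
    """Equivalent of path.unlink(missing_ok=True) for Python < 3.8."""
    try:
        path.unlink()
    except FileNotFoundError:
        pass

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "program2.py"
    unlink_if_exists(p)           # file absent: no exception raised
    p.write_text("print('hi')")
    unlink_if_exists(p)           # file present: removed
    print(p.exists())             # False
```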
11
17
68,664,973
2021-8-5
https://stackoverflow.com/questions/68664973/create-sqlalchemy-session-on-event
If I want to use database while processing a request, I make a Dependency Injection like this: @app.post("/sample_test") async def sample_test(db: Session = Depends(get_db)): return db.query(models.User.height).all() But I cannot do it with events like this: @app.on_event("startup") async def sample_test(db: Session = Depends(get_db)): return db.query(models.User.height).all() because starlette events don't support Depends. This is my get_db() function: def get_db(): db = SessionLocal() try: yield db finally: db.close() just like in FastAPI manual (https://fastapi.tiangolo.com/tutorial/sql-databases/). How can I access get_db() inside my event function, so I can work with a Session? I've tried: @app.on_event("startup") async def sample_test(db: Session = Depends(get_db)): db = next(get_db()) return db.query(models.User.height).all() but it doesn't work. I use MSSQL, if it's important.
Instead of using a dependency you can import the SessionLocal you've created as shown in the FastAPI manual and use a contextmanager to open and close this session: @app.on_event("startup") async def sample_test(): with SessionLocal() as db: return db.query(models.User.height).all()
5
4
68,664,644
2021-8-5
https://stackoverflow.com/questions/68664644/how-can-i-convert-from-utc-time-to-local-time-in-python
So, I want to convert the UTC datetime 2021-08-05 10:03:24.585Z to Indian datetime. How do I convert it? What I tried is from datetime import datetime from pytz import timezone st = "2021-08-05 10:03:24.585Z" datetime_object = datetime.strptime(st, '%Y-%m-%d %H:%M:%S.%fZ') local_tz = timezone('Asia/Kolkata') start_date = local_tz.localize(datetime_object) print(start_date.replace(tzinfo=local_tz)) But still the output is not in the timezone I have mentioned. How can I convert the time and print it in that timezone? Output: 2021-08-05 10:03:24.585000+05:21
You can use something like this: from datetime import datetime from dateutil import tz from_zone = tz.gettz('UTC') to_zone = tz.gettz('Asia/Kolkata') utc = datetime.strptime('2011-01-21 02:37:21', '%Y-%m-%d %H:%M:%S') utc = utc.replace(tzinfo=from_zone) central = utc.astimezone(to_zone)
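On Python 3.9+ the standard-library zoneinfo module covers this without third-party packages. A sketch converting the exact timestamp from the question (assumes the system tz database is available); note the odd +05:21 in the question's output comes from attaching a pytz zone via replace(tzinfo=...), which picks a historical offset instead of IST:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

st = "2021-08-05 10:03:24.585Z"
# Parse as naive, mark as UTC, then convert to the target zone.
utc_dt = datetime.strptime(st, "%Y-%m-%d %H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
ist_dt = utc_dt.astimezone(ZoneInfo("Asia/Kolkata"))
print(ist_dt)  # 2021-08-05 15:33:24.585000+05:30
```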
4
7
68,663,853
2021-8-5
https://stackoverflow.com/questions/68663853/filter-list-of-object-with-condition-in-python
I have a list structure like this: listpost = [ { "post_id":"01", "text":"abc", "time": datetime.datetime(2021, 8, 5, 15, 53, 19), "type":"normal", }, { "post_id":"02", "text":"nothing", "time":datetime.datetime(2021, 8, 5, 15, 53, 19), "type":"normal", } ] I want to filter the list by text in [text] key if only the [text] has "abc" so the example will look like this listpost = [ { "post_id":"01", "text":"abc", "time": datetime.datetime(2021, 8, 5, 15, 53, 19), "type":"normal", } ] My code: from facebook_scraper import get_posts listposts = [] for post in get_posts("myhealthkkm", pages=1): listposts.append(post) print(listposts)
Since you specifically asked about filtering the list you have, you can use filter builtin with lambda to filter out the elements from the list. >>> list(filter(lambda x: x.get('text', '')=='abc', listpost)) [{'post_id': '01', 'text': 'abc', 'time': datetime.datetime(2021, 8, 5, 15, 53, 19), 'type': 'normal'}] But I'd recommend to filter it out upfront before actually appending it to the list, to avoid unnecessary computations due to the need to re-iterate the items i.e. appending only the items that match the criteria. Something like this: for post in get_posts("myhealthkkm", pages=1): if <post match the condition>: listposts.append(post) # append the post
10
12
68,660,700
2021-8-5
https://stackoverflow.com/questions/68660700/how-exactly-is-a-decimal-object-encoded-in-python
I'm currently writing code using decimal.Decimal in Python (v3.8.5). I was wondering if anyone knows how the Decimal object is actually encoded. I can't understand why the memory size is the same even if I change getcontext().prec, which is equivalent to changing the coefficients and exponent in decimal floating-points, as follows from decimal import * from sys import getsizeof ## coefficient bits = 3 getcontext().prec = 3 temp = Decimal('1')/Decimal('3') print(temp.as_tuple()) >>> DecimalTuple(sign=0, digits=(3, 3, 3), exponent=-3) print(getsizeof(temp)) >>> 104 ## coefficient bits = 30 getcontext().prec = 30 temp = Decimal('1')/Decimal('3') print(temp.as_tuple()) >>> DecimalTuple(sign=0, digits=(3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3), exponent=-30) print(getsizeof(temp)) >>> 104 In order to understand the above behavior, I read the source code of the Decimal class and the attached documents. https://github.com/python/cpython/blob/main/Lib/_pydecimal.py http://speleotrove.com/decimal/decarith.html http://speleotrove.com/decimal/decbits.pdf According to the documents, Python's Decimal object is implemented based on IEEE 754-2008, and the decimal digits of the coefficient continuation are converted into binary digits using DPD (densely packed decimal) encoding. Therefore, according to the DPD algorithm, we can calculate the number of bits when the decimal digits of the coefficient continuation are encoded into binary digits. And since the sign, exponent continuation, and combination field are simply expressed in binary, the number of bits when encoded can be easily calculated. So, we can calculate the number of bits when a Decimal object is encoded by the following formula. bits = (sign) + (exp) + (comb) + (compressed coeff) Here, sign and combination are fixed at 1 bit and 5 bits, respectively (according to the definition of IEEE 754-2008: https://en.wikipedia.org/wiki/Decimal_floating_point).
So, I wrote the above code to check the list of {sign, exponent, coefficient} using as_tuple() of the Decimal object, and calculate the actual number of bits in memory. However, as mentioned above, the memory size of the Decimal object did not change at all, even though the number of digits in the coefficient should have changed. (I understand that a Decimal object is not only a decimal encoding but also a list and other objects.) The following two questions arise. (1) Am I wrong in my understanding of the encoding algorithm of the Decimal object in python? (Does python3.8.5 use a more efficient encoding algorithm than IEEE 754-2008?) (2) Assuming that my understanding of the algorithm is correct, why does the memory size of the Decimal object remain the same even though the coefficient has been changed? (According to the definition of IEEE754-2008, when coefficient continuation is changed, exponent continuation is also changed, and total bits should be changed.) I myself am a student who usually studies in the field of mechanical engineering, and I am a complete beginner in informatics. If there is any part of my original understanding that is wrong or if there is any strange logical development, please let me know. I appreciate your help.
For sys.getsizeof: Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to. Since Decimal is a Python class with references to several other objects (EDIT: see below), you just get the total size of the references, which is constant; the referred values, whose sizes do vary, are not included. getcontext().prec = 3 temp = Decimal(1) / Decimal(3) print(sys.getsizeof(temp)) print(sys.getsizeof(temp._int)) getcontext().prec = 300 temp = Decimal(1) / Decimal(3) print(sys.getsizeof(temp)) # same print(sys.getsizeof(temp._int)) # not same (Note that the _int slot I used in the example is an internal implementation detail of CPython's Decimal, as hinted by the leading underscore; this code is not guaranteed to work in other Python implementations, or even in other versions.) EDIT: Oops, my first answer was on an old Python, where Decimal is implemented in Python. The version you asked about has it implemented in C. The C version actually stores everything inside the object itself, but your difference in precision was not sufficient to detect the difference (as memory is allocated in discrete chunks). Try it with getcontext().prec = 300 instead.
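A minimal check of the C-implementation point above: with a small precision the coefficient fits in the object's inline storage, while a large precision forces a bigger allocation that getsizeof does report. Exact byte counts are platform-dependent, so only the relative size is asserted here:

```python
import sys
from decimal import Decimal, getcontext

getcontext().prec = 3
small = Decimal(1) / Decimal(3)    # 3 significant digits

getcontext().prec = 300
large = Decimal(1) / Decimal(3)    # 300 significant digits

# The 300-digit value needs out-of-line coefficient storage,
# so its reported size is larger on CPython's C decimal.
print(sys.getsizeof(small), sys.getsizeof(large))
```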
9
6
68,660,642
2021-8-5
https://stackoverflow.com/questions/68660642/how-can-i-open-a-new-tab-with-selenium-python
I'm trying to make a program that opens multiple websites, but I can't get it to press control-t. I've tried multiple solutions, but I can't find one that works. When I do the keydown method, I get an error that says webdriver has no attribute key_down and when I try send_keys(Keys.CONTROL + 't') it doesn't raise any errors, or do anything. How can I open a new tab? Here's my try : from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.action_chains import ActionChains import time PATH = "C:\Program Files (x86)\chromedriver.exe" driver = webdriver.Chrome(PATH) driver.get("https://youtube.com") search = driver.find_element_by_id("search") #search.keydown(Keys.CONTROL) #Webelement.key_down(Keys.CONTROL).send_keys('t').key_up(Keys.CONTROL).perform() search.send_keys(Keys.CONTROL+'t') time.sleep(10)
You can do it as from selenium import webdriver driver = webdriver.Chrome() driver.get("https://www.youtube.com") search = driver.find_element_by_id("search") driver.execute_script("window.open('https://www.google.com')")
5
8
68,655,717
2021-8-4
https://stackoverflow.com/questions/68655717/is-it-possible-to-freeze-a-dataclass-object-in-post-init-or-later
I'm wondering whether it's possible to "freeze" a dataclass object in __post_init__() or even after an object has been created. So instead of: @dataclass(frozen=True) class ClassName: var1: type = value Having something like: @dataclass class ClassName: var1: type = None def __post_init__(self): self.var1 = value FREEZE() Or even something like: a = ClassName() FREEZE(a) Possible or not, and why?
No, it isn't. But "frozen" can be subverted trivially; just use: @dataclass(frozen=True) class ClassName: var1: type = value def __post_init__(self): object.__setattr__(self, 'var1', value)
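Putting the pieces together, here is a runnable sketch (class and field names are made up): __post_init__ can still write fields via object.__setattr__ even with frozen=True, while ordinary assignment afterwards raises FrozenInstanceError:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int = 0
    y: int = 0

    def __post_init__(self):
        # frozen=True blocks normal assignment, so bypass it deliberately
        object.__setattr__(self, "y", self.x * 2)

p = Point(x=3)
print(p)  # Point(x=3, y=6)

try:
    p.x = 99            # ordinary assignment is rejected
except FrozenInstanceError:
    print("instance is frozen")
```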
5
5
68,654,842
2021-8-4
https://stackoverflow.com/questions/68654842/pandas-to-sql-server-speed-python-bulk-insert
This is probably a highly discussed topic, but I have not found "the answer" yet. I am inserting big tables into Azure SQL Server monthly. I process the raw data in memory with python and Pandas. I really like the speed and versatility of Pandas. Sample DataFrame size = 5.2 million rows, 50 columns, 250 MB memory allocated Transferring the processed Pandas DataFrame to Azure SQL Server is always the bottleneck. For data transfer, I used to_sql (with sqlalchemy). I tried fast_executemany, various chunk sizes, etc. The fastest way I found so far is to export the DataFrame to a csv file, then BULK INSERT that into SQL server using either SSMS, bcp, Azure Blob etc. However, I am looking into bypassing the csv file creation, since my df has all the dtypes set and is already loaded in memory. What is your fastest means of transferring this df to SQL Server, utilizing solely python/Pandas? I am also interested in solutions like using binary file transfer etc. - as long as I eliminate flat file export/import. Thanks
I had a similar issue, and I resolved it using a BCP utility. The basic description of the bottleneck issue is that it seems to be using RBAR data entry, as in Row-By-Agonizing-Row inserts, i.e. one insert statement per record. Going the bulk insert route has saved me a lot of time. The real benefit seemed to come once I crossed the threshold of 1M+ records, which you seem to be well ahead of. Link to utility: https://github.com/yehoshuadimarsky/bcpandas
5
4
68,650,162
2021-8-4
https://stackoverflow.com/questions/68650162/fastapi-receive-list-of-objects-in-body-request
I need to create an endpoint that can receive the following JSON and recognize the objects contained in it: {​ "data": [ {​ "start": "A", "end": "B", "distance": 6 }​, {​ "start": "A", "end": "E", "distance": 4 }​ ] } I created a model to handle a single object: class GraphBase(BaseModel): start: str end: str distance: int And with it, I could save it in a database. But now I need to receive a list of objects and save them all. I tried to do something like this: class GraphList(BaseModel): data: Dict[str, List[GraphBase]] @app.post("/dummypath") async def get_body(data: schemas.GraphList): return data But I keep getting this error on FastApi: Error getting request body: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) and this message on the response: { "detail": "There was an error parsing the body" } I'm new to python and even newer to FastApi, how can I transform that JSON to a list of GraphBaseto save them in my db?
This is a working example. from typing import List from pydantic import BaseModel from fastapi import FastAPI app = FastAPI() class GraphBase(BaseModel): start: str end: str distance: int class GraphList(BaseModel): data: List[GraphBase] @app.post("/dummypath") async def get_body(data: GraphList): return data I could try this API on the autogenerated docs. Or, on the console (you may need to adjust the URL depending on your setting): curl -X 'POST' \ 'http://localhost:8000/dummypath' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d '{ "data": [ { "start": "string", "end": "string", "distance": 0 } ] }' The error looks like a data problem: I found that your JSON has extra (invisible) space characters at several places, which I removed. Try the following: { "data": [ { "start": "A", "end": "B", "distance": 6 }, { "start": "A", "end": "E", "distance": 4 } ] }
9
12
68,644,548
2021-8-4
https://stackoverflow.com/questions/68644548/simultaneous-assignment-indexing-different-list-elements-in-python
>>> arr = [4, 2, 1, 3] >>> arr[0], arr[arr[0]-1] = arr[arr[0]-1], arr[0] >>> arr Result I expect >>> [3, 2, 1, 4] Result I get >>> [3, 2, 4, 3] Basically I'm trying to swap the #4 and #3 (in my actual problem, the index won't be 0, but rather an iterator "i", so I can't just do arr[0], arr[3] = arr[3], arr[0]). I thought I understood simultaneous assignment fairly well. Apparently I was mistaken. I don't understand why arr[arr[0]-1] on the left side of the assignment is evaluating to arr[2] instead of arr[3]. If the assignments happen simultaneously (evaluated from the right), arr[0] (within the index of the 2nd element on the left) should still be "4", and arr[0] - 1 (the index of the 2nd element on the left) should thus be "3".
Because the target list does not get evaluated simultaneously. Here is the relevant section of the docs: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets. Two things to keep in mind, the right hand side evaluates the expression first. So on the RHS, we first create the tuple : (3, 4) Note, that is done left to right. Now, the assignment to each target in the target list on the left is done in order: arr[0] = 3 Then the next target, arr[0] is 3, and 3-1 is 2 arr[2] = 4 So a simple solution is to just to compute the indices first before the swap: >>> arr = [4, 2, 1, 3] >>> i, j = arr[0] - 1, 0 >>> arr[j], arr[i] = arr[i], arr[j] >>> arr [3, 2, 1, 4] Here is a demonstration using a verbose list that we can define easily: >>> class NoisyList(list): ... def __getitem__(self, item): ... value = super().__getitem__(item) ... print("GETTING", item, "value of", value) ... return value ... def __setitem__(self, item, value): ... print("SETTING", item, 'with', value) ... super().__setitem__(item, value) ... >>> arr = NoisyList([4, 2, 1, 3]) >>> arr[0], arr[arr[0]-1] = arr[arr[0]-1], arr[0] GETTING 0 value of 4 GETTING 3 value of 3 GETTING 0 value of 4 SETTING 0 with 3 GETTING 0 value of 3 SETTING 2 with 4
14
11
68,596,593
2021-7-30
https://stackoverflow.com/questions/68596593/how-to-automatically-break-long-string-constants-in-python-code-using-black-form
Python formatting guidelines, the famous PEP8 recommends no line longer than 79 chars. I can easily auto-format my code to a max line length with the Black Formatter, but it does not break long strings. The linter will still complain about a long URL in your code and Black won't help. Is it possible to automatically break long strings with Black formatter?
Edit 2024-05-27: updating for new black and VSCode configurations. First you must install the VSCode Black Extension. Yes, it is possible due to a new feature. First make sure that you have a very recent Black formatter installed. Now just run black with the option --preview. In VSCode you can configure it in your settings.json file: "black-formatter.args": [ "--line-length", "100", "--preview" ], After you edit settings.json, restart the Black server for the change to take effect: Cmd/Ctrl + Shift + P -> Black Formatter: Restart Server. BTW, if you want to increase the default line length, it is a good idea to also set the same value in your linter: "flake8.args": [ "--max-line-length=100", ], Some teams really prefer longer lines; don't let them use this as a reason for not automatically formatting. BTW, PEP8 supports a greater line length: Some teams strongly prefer a longer line length. For code maintained exclusively or primarily by a team that can reach agreement on this issue, it is okay to increase the line length limit up to 99 characters, provided that comments and docstrings are still wrapped at 72 characters.
10
11
68,606,661
2021-8-1
https://stackoverflow.com/questions/68606661/what-is-difference-between-nn-module-and-nn-sequential
I am just learning to use PyTorch as a beginner. If anyone is familiar with PyTorch, would you tell me the difference between nn.Module and nn.Sequential? My questions are What is the advantage to use nn.Module instead of nn.Sequential? Which is regularly utilised to build the model? How we should select nn.Module or nn.Sequential?
TLDR; answering your questions What is the advantage to use nn.Module instead of nn.Sequential? While nn.Module is the base class to implement PyTorch models, nn.Sequential is a quick way to define sequential neural network structures inside or outside an existing nn.Module. Which is regularly utilized to build the model? Both are widely used. How we should select nn.Module or nn.Sequential? All neural networks are implemented with nn.Module. If the layers are sequentially used (self.layer3(self.layer2(self.layer1(x)))), you can leverage nn.Sequential to not have to define the forward function of the model. I should start by mentioning that nn.Module is the base class for all neural network modules in PyTorch. As such nn.Sequential is actually a direct subclass of nn.Module, you can look for yourself on this line. When creating a new neural network, you would usually go about creating a new class and inheriting from nn.Module, and defining two methods: __init__ (the initializer, where you define your layers) and forward (the inference code of your module, where you use your layers). That's all you need, since PyTorch will handle backward pass with Autograd. Here is an example of a module: class NN(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(10, 4) self.fc2 = nn.Linear(4, 2) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) return x If the model you are defining is sequential, i.e. the layers are called sequentially on the input, one by one, then you can simply use nn.Sequential. As I explained earlier, nn.Sequential is a special kind of nn.Module made for this particular widespread type of neural network.
The equivalent here is: class NN(nn.Sequential): def __init__(self): super().__init__( nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2), nn.ReLU()) Or a simpler way of putting it is: NN = Sequential( nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2), nn.ReLU()) The objective of nn.Sequential is to quickly implement sequential modules such that you are not required to write the forward definition, it being implicitly known because the layers are sequentially called on the outputs. In a more complicated module though, you might need to use multiple sequential submodules. For instance, take a CNN classifier, you could define a nn.Sequential for the CNN part, then define another nn.Sequential for the fully connected classifier section of the model.
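A sketch of that last point, mixing both styles in one model (the layer sizes and the 1x28x28 input are made up for illustration):

```python
import torch
import torch.nn as nn

class CNNClassifier(nn.Module):
    """An nn.Module that composes two nn.Sequential submodules."""
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor as one sequential block
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
        )
        # Fully connected classifier as another sequential block
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 14 * 14, 10),
        )

    def forward(self, x):
        # The forward pass just chains the two sequential blocks
        return self.classifier(self.features(x))

model = CNNClassifier()
out = model(torch.randn(2, 1, 28, 28))
print(out.shape)  # torch.Size([2, 10])
```

Each submodule keeps its own implicit forward, while the outer nn.Module is free to combine them however it likes.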
32
62
68,593,165
2021-7-30
https://stackoverflow.com/questions/68593165/what-is-the-difference-between-cached-property-in-django-vs-pythons-functools
Django has a decorator called cached_property which can be imported from django.utils.functional. On the other hand, Python 3.8 added cached_property to the standard library which can be imported from functools. Are both equivalent, i.e., are they interchangeable? or what is the difference between both? Are there any best practices when to use one or the other?
After some research, both basically work the same way; the only differences you would see are in error handling and performance. There is a ticket #30949 on Django's issue tracker to use functools.cached_property instead of django.utils.functional.cached_property. You can see the source code [GitHub] for functools.cached_property and also for django's version [GitHub]. The basic difference is that the functools version does a little more error handling, and the main difference is that functools (pre Python 3.12) uses a locking mechanism for thread safety, which causes a performance dip compared to Django's version. From some benchmarking done in the ticket linked above, Django's version is much more efficient in terms of performance: % python benchmark.py ..................... Django Cache: Mean +- std dev: 12.8 ms +- 0.2 ms ..................... Python Cache: Mean +- std dev: 113 ms +- 2 ms There is also an issue 43468 on Python's bug tracker regarding this. Note that this locking behavior was removed in Python 3.12, and the performance should be mostly similar now. In summary: if you are using Python 3.12+, prefer the functools version; otherwise, use Django's version if thread safety is not an issue, or the functools version if it is.
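A minimal sketch of the shared behavior, using the stdlib functools version (the class and counter here are made up for illustration; django.utils.functional.cached_property behaves the same way):

```python
from functools import cached_property

class Report:
    def __init__(self):
        self.compute_calls = 0  # track how often the expensive work runs

    @cached_property
    def total(self):
        # Pretend this is an expensive computation; it runs only once
        self.compute_calls += 1
        return 42

r = Report()
assert r.total == 42
assert r.total == 42         # second access hits the cache
assert r.compute_calls == 1  # the method body ran exactly once

del r.total                  # deleting the attribute invalidates the cache
assert r.total == 42
assert r.compute_calls == 2  # recomputed after the delete
```

Both implementations cache the result in the instance's `__dict__`, which is why deleting the attribute resets the cache in either library.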
28
27
68,620,927
2021-8-2
https://stackoverflow.com/questions/68620927/installing-scipy-and-scikit-learn-on-apple-m1
The installation on the m1 chip for the following packages: Numpy 1.21.1, pandas 1.3.0, torch 1.9.0 and a few other ones works fine for me. They also seem to work properly while testing them. However when I try to install scipy or scikit-learn via pip this error appears: ERROR: Failed building wheel for numpy Failed to build numpy ERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly Why should Numpy be built again when I have the latest version from pip already installed? Every previous installation was done using python3.9 -m pip install ... on Mac OS 11.3.1 with the apple m1 chip. Maybe somebody knows how to deal with this error or if it's just a matter of time.
UPDATE: scikit-learn now works via pip ✅ Just first brew install openblas - it has instructions for different processors (wikipedia) brew install openblas export OPENBLAS=$(/opt/homebrew/bin/brew --prefix openblas) export CFLAGS="-falign-functions=8 ${CFLAGS}" # ^ no need to add to .zshrc, just doing this once. pip install scikit-learn Worked great on Apple Silicon M1 🎉 Extra details about how Pip works Pip downloaded the source from PyPI, then built the wheel targeting MacOS X 12.0, and arm64 (apple silicon): scikit_learn-1.0.1-cp38-cp38-macosx_12_0_arm64.whl. Building wheels for collected packages: scikit-learn Building wheel for scikit-learn (pyproject.toml) ... done Created wheel for scikit-learn: filename=scikit_learn-1.0.1-cp38-cp38-macosx_12_0_arm64.whl size=6364030 sha256=0b0cc9a21af775e0c8077ee71698ff62da05ab62efc914c5c15cd4bf97867b31 Successfully built scikit-learn Installing collected packages: scipy, scikit-learn Successfully installed scikit-learn-1.0.1 scipy-1.7.3 Note on PyPI: we usually download either a pre-built wheel (yay, this is excellent for reliable distribution and ensuring compatibility). Or, if no prebuilt wheel exists (sad) then we download a tar.gz and build it ourselves. This happens because the authors don't publish a prebuilt wheel to PyPI, but more and more people are adding this to their CI (github actions) workflow. Building the wheel ourselves takes more CPU time, and is generally less reliable but works in this case. Here we are downloading a pre-built wheel that has very few limitations: it works for any version of python 3, for any OS, for any architecture (like amd64 or arm64): click-8.0.3-py3-none-any.whl Collecting click>=7.0 Downloading click-8.0.3-py3-none-any.whl Here apparently we had no wheel available, so we have to build it ourselves with setuptools running setup.py. 
Collecting grpcio>=1.28.1 Downloading grpcio-1.42.0.tar.gz (21.3 MB) |████████████████████████████████| 21.3 MB 12.7 MB/s Preparing metadata (setup.py) ... done ## later in the process it installs using setuptools Running setup.py install for grpcio ... done Good luck and happy piping.
31
47
68,640,984
2021-8-3
https://stackoverflow.com/questions/68640984/how-to-stop-google-colab-runtime-without-it-automatically-restarting
I want to stop a Google Colab notebook programmatically when the thing I want to do has ended, I thought of putting a line at the end that would stop it from running. I have tried these, but none work. They all restart instead of shutting down or then just give an error. I got these from here: Is there a function in google.colab module to close the runtime https://newbedev.com/google-colab-how-to-restart-runtime-using-python-code-or-command-line-interface import os os.kill(os.getpid(), 9) import sys sys.exit() exit() quit() !kill -9 -1 There is a shortcut for ending all runtimes, (It's undefined by default by I added it as Ctrl + Shift + K), so maybe I could write a program that would virtually type those keys. But it opens a popup before shutting down and I may not be there to confirm it. How can I do this? Thanks!
Now (since September 12th, 2022) you can do this: from google.colab import runtime runtime.unassign() It was announced in this GitHub issue: https://github.com/googlecolab/colabtools/issues/2568. Thanks @HappyFace for commenting this link.
6
4
68,596,302
2021-7-30
https://stackoverflow.com/questions/68596302/f1-score-metric-per-class-in-tensorflow
I have implemented the following metric to look at Precision and Recall of the classes I deem relevant. metrics=[tf.keras.metrics.Recall(class_id=1, name='Bkwd_R'),tf.keras.metrics.Recall(class_id=2, name='Fwd_R'),tf.keras.metrics.Precision(class_id=1, name='Bkwd_P'),tf.keras.metrics.Precision(class_id=2, name='Fwd_P')] How can I implement the same in Tensorflow 2.5 for F1 score (i.e. specifically for class 1 and class 2, and not class 0), without a custom function? Update Using this metric setup: tfa.metrics.F1Score(num_classes = 3, average = None, name = f1_name) I get the following during training: 13367/13367 [==============================] 465s 34ms/step - loss: 0.1683 - f1_score: 0.5842 - val_loss: 0.0943 - val_f1_score: 0.3314 and when I do model.evaluate: 224/224 [==============================] - 11s 34ms/step - loss: 0.0665 - f1_score: 0.3325 and the scoring = Score: [0.06653735041618347, array([0.99740255, 0. , 0. ], dtype=float32)] The problem is that this is training based on the average, but I would like to train on the F1 score of a sensible averaging/each of the last two values/classes in the array (which are 0 in this case) Edit Will accept a non tensorflow specific function that gives the desired result (with full function and call during fit code) but was really hoping for something using the existing tensorflow code if it exists
As is mentioned in David Harris' comment, a neural network model is trained on loss functions, not on metric scores. Losses help drive the model towards a solution to provide accurate labels via backpropagation. Metrics help to provide a comparable evaluation of that model's performance that is a lot more human-legible. So, that being said, I feel like what you're saying in your question is that "there are three classes, and I want the model to care more about the last two of the three". If that's the case, one approach you can take is to weight your samples by label. Let's say that you have labels in an array y_train. # Which classes are you wanting to focus on classes_i_care_about = [1, 2] # Initialize all weights to 1.0 sample_weight = np.ones(shape=(len(y_train),)) # Give the classes you care about 50% more weight sample_weight[np.isin(y_train, classes_i_care_about)] = 1.5 ... model.fit( x=X_train, y=y_train, sample_weight=sample_weight, epochs=5 ) This is the best advice I can offer without knowing more. If you're looking for other info on how you can have your model do better on certain classes, other info could be useful, such as: What are the proportions of labels in your dataset? What is the last layer of your model architecture? Dense(3, activation="softmax")? What loss are you using? 
Here's a more complete, reproducible example that shows what I'm talking about with the sample weights: import numpy as np from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.preprocessing import OneHotEncoder from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam import tensorflow_addons as tfa iris_data = load_iris() # load the iris dataset x = iris_data.data y_ = iris_data.target.reshape(-1, 1) # Convert data to a single column # One Hot encode the class labels encoder = OneHotEncoder(sparse=False) y = encoder.fit_transform(y_) # Split the data for training and testing train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.20) # Build the model def get_model(): model = Sequential() model.add(Dense(10, input_shape=(4,), activation='relu', name='fc1')) model.add(Dense(10, activation='relu', name='fc2')) model.add(Dense(3, activation='softmax', name='output')) # Adam optimizer with learning rate of 0.001 optimizer = Adam(lr=0.001) model.compile( optimizer, loss='categorical_crossentropy', metrics=[ 'accuracy', tfa.metrics.F1Score( num_classes=3, average=None, ) ] ) return model model = get_model() model.fit( train_x, train_y, verbose=2, batch_size=5, epochs=25, ) results = model.evaluate(test_x, test_y) print('Final test set loss: {:4f}'.format(results[0])) print('Final test set accuracy: {:4f}'.format(results[1])) print('Final test F1 scores: {}'.format(results[2])) Final test set loss: 0.585964 Final test set accuracy: 0.633333 Final test F1 scores: [1. 
0.15384616 0.6206897 ] Now, we add weight to classes 1 and 2: sample_weight = np.ones(shape=(len(train_y),)) sample_weight[ (train_y[:, 1] == 1) | (train_y[:, 2] == 1) ] = 1.5 model = get_model() model.fit( train_x, train_y, sample_weight=sample_weight, verbose=2, batch_size=5, epochs=25, ) results = model.evaluate(test_x, test_y) print('Final test set loss: {:4f}'.format(results[0])) print('Final test set accuracy: {:4f}'.format(results[1])) print('Final test F1 scores: {}'.format(results[2])) Final test set loss: 0.437623 Final test set accuracy: 0.900000 Final test F1 scores: [1. 0.8571429 0.8571429] Here, the model has emphasized learning these, and their respective performance is improved.
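For reference, the per-class F1 values reported above can be checked by hand from raw labels, independent of any framework; a plain-Python sketch (the example labels are made up):

```python
def per_class_f1(y_true, y_pred, classes):
    """Compute F1 for each class from parallel lists of true/predicted labels."""
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores

y_true = [0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 2, 2, 2, 1]
print(per_class_f1(y_true, y_pred, classes=[0, 1, 2]))
```

This makes it easy to sanity-check what an averaged metric is hiding: here class 0 scores 1.0 while classes 1 and 2 score 0.5 and 2/3.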
5
5
68,611,397
2021-8-1
https://stackoverflow.com/questions/68611397/pos-weight-in-binary-cross-entropy-calculation
When we deal with imbalanced training data (there are more negative samples and less positive samples), usually pos_weight parameter will be used. The expectation of pos_weight is that the model will get higher loss when the positive sample gets the wrong label than the negative sample. When I use the binary_cross_entropy_with_logits function, I found: bce = torch.nn.functional.binary_cross_entropy_with_logits pos_weight = torch.FloatTensor([5]) preds_pos_wrong = torch.FloatTensor([0.5, 1.5]) label_pos = torch.FloatTensor([1, 0]) loss_pos_wrong = bce(preds_pos_wrong, label_pos, pos_weight=pos_weight) preds_neg_wrong = torch.FloatTensor([1.5, 0.5]) label_neg = torch.FloatTensor([0, 1]) loss_neg_wrong = bce(preds_neg_wrong, label_neg, pos_weight=pos_weight) However: >>> loss_pos_wrong tensor(2.0359) >>> loss_neg_wrong tensor(2.0359) The losses derived from wrong positive samples and negative samples are the same, so how does pos_weight work in the imbalanced data loss calculation?
TLDR; both losses are identical because you are computing the same quantity: both inputs are identical, the two batch elements and labels are just switched. Why are you getting the same loss? I think you got confused in the usage of F.binary_cross_entropy_with_logits (you can find a more detailed documentation page with nn.BCEWithLogitsLoss). In your case your input shape (aka the output of your model) is one-dimensional, which means you only have a single logit x, not two. In your example you have preds_pos_wrong = torch.FloatTensor([0.5, 1.5]) label_pos = torch.FloatTensor([1, 0]) This means your batch size is 2, and since by default the function is averaging the losses of the batch elements, you end up with the same result for BCE(preds_pos_wrong, label_pos) and BCE(preds_neg_wrong, label_neg). The two elements of your batch are just switched. You can verify this very easily by not averaging the loss over the batch elements with the reduction='none' option: >>> F.binary_cross_entropy_with_logits(preds_pos_wrong, label_pos, pos_weight=pos_weight, reduction='none') tensor([2.3704, 1.7014]) >>> F.binary_cross_entropy_with_logits(preds_neg_wrong, label_neg, pos_weight=pos_weight, reduction='none') tensor([1.7014, 2.3704]) Looking into F.binary_cross_entropy_with_logits: That being said, the formula for the binary cross-entropy is: bce = -[y*log(sigmoid(x)) + (1-y)*log(1 - sigmoid(x))] Where y (respectively sigmoid(x)) is for the positive class associated with that logit, and 1 - y (resp. 1 - sigmoid(x)) is the negative class. The documentation could be more precise on the weighting scheme for pos_weight (not to be confused with weight, which is the weighting of the different logits output). The idea with pos_weight, as you said, is to weight the positive term, not the whole term. bce = -[w_p*y*log(sigmoid(x)) + (1-y)*log(1 - sigmoid(x))] Where w_p is the weight for the positive term, to compensate for the positive to negative sample imbalance. 
In practice, this should be w_p = #negative/#positive. Therefore: >>> w_p = torch.FloatTensor([5]) >>> preds = torch.FloatTensor([0.5, 1.5]) >>> label = torch.FloatTensor([1, 0]) With the builtin loss function, >>> F.binary_cross_entropy_with_logits(preds, label, pos_weight=w_p, reduction='none') tensor([2.3704, 1.7014]) Compared with the manual computation: >>> z = torch.sigmoid(preds) >>> -(w_p*label*torch.log(z) + (1-label)*torch.log(1-z)) tensor([2.3704, 1.7014])
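Following that rule of thumb, w_p can be derived directly from the label counts; a minimal sketch with made-up binary labels (in practice you would wrap the result in torch.FloatTensor([pos_weight]) and pass it as pos_weight to the loss):

```python
# Derive pos_weight = #negative / #positive from a list of binary labels
labels = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0]  # 2 positives, 8 negatives

n_pos = sum(labels)
n_neg = len(labels) - n_pos
pos_weight = n_neg / n_pos
print(pos_weight)  # 4.0
```

With this weighting, each mislabeled positive contributes four times as much to the loss as a mislabeled negative, compensating for the 4:1 class imbalance.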
5
11
68,611,570
2021-8-1
https://stackoverflow.com/questions/68611570/vsc-how-to-auto-close-curly-brackets-in-f-strings
Hi I want to ask if it's possible to enable auto closing curly brackets in f-strings in Visual Studio Code. In Python you use often f-strings and therefore you need curly brackets. print(f"Hello {name}!") I already found something but I don't know if that feature is already implemented and if not if I can implement it with external plugins?: https://github.com/microsoft/vscode-python/issues/13673
Use the "Always" Auto Closing Brackets setting, but limit it to Python in your settings.json: "[python]": { "editor.autoClosingBrackets": "always" }
4
6
68,578,277
2021-7-29
https://stackoverflow.com/questions/68578277/adding-a-nullable-column-in-spark-dataframe
In Spark, literal columns, when added, are not nullable: from pyspark.sql import SparkSession, functions as F spark = SparkSession.builder.getOrCreate() df = spark.createDataFrame([(1,)], ['c1']) df = df.withColumn('c2', F.lit('a')) df.printSchema() # root # |-- c1: long (nullable = true) # |-- c2: string (nullable = false) How to create a nullable column?
The shortest method I've found - using when (the otherwise clause seems not needed): df = df.withColumn('c2', F.when(F.lit(True), F.lit('a'))) If in Scala: .withColumn("c2", when(lit(true), lit("a"))) Full test result: from pyspark.sql import SparkSession, functions as F spark = SparkSession.builder.getOrCreate() df = spark.createDataFrame([(1,)], ['c1']) df = df.withColumn('c2', F.when(F.lit(True), F.lit('a'))) df.show() # +---+---+ # | c1| c2| # +---+---+ # | 1| a| # +---+---+ df.printSchema() # root # |-- c1: long (nullable = true) # |-- c2: string (nullable = true)
7
11
68,571,543
2021-7-29
https://stackoverflow.com/questions/68571543/using-a-pip-requirements-file-in-a-conda-yml-file-throws-attributeerror-fileno
I have a requirements.txt like numpy and an environment.yml containing # run via: conda env create --file environment.yml --- name: test dependencies: - python>=3 - pip - pip: - -r file:requirements.txt when I then run conda env create --file environment.yml I get Pip subprocess output: Pip subprocess error: ERROR: Exception: <... error traceback in pip > AttributeError: 'FileNotFoundError' object has no attribute 'read' failed CondaEnvException: Pip failed It is also strange how pip is called, as reported just before the error occurs: ['$HOME/.conda/envs/test/bin/python', '-m', 'pip', 'install', '-U', '-r', '$HOME/test/condaenv.8d3003nm.requirements.txt'] (I replace my home path with $HOME) Note the weird expansion of the requirements.txt. Any ideas?
Changes to Pip Behavior in 21.2.1 A recent change in the Pip code has changed its behavior to be more strict with respect to file: URI syntax. As pointed out by a PyPA member and Pip developer, the syntax file:requirements.txt is not a valid URI according to the RFC8089 specification. Instead, one must either drop the file: scheme altogether: name: test dependencies: - python>=3 - pip - pip: - -r requirements.txt or provide a valid URI, which means using an absolute path (or a local file server): name: test dependencies: - python>=3 - pip - pip: - -r file:/full/path/to/requirements.txt # - -r file:///full/path/to/requirements.txt # alternate syntax
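If you do want a spec-compliant file: URI instead of hard-coding the absolute path, Python's own pathlib can produce one; a small sketch (the requirements.txt path is hypothetical):

```python
from pathlib import Path

# Resolve a relative path into an absolute RFC 8089 file URI
uri = Path("requirements.txt").resolve().as_uri()
print(uri)  # e.g. file:///home/user/project/requirements.txt

# as_uri() refuses relative paths -- the same rule pip 21.2+ now enforces
try:
    Path("requirements.txt").as_uri()
except ValueError as err:
    print(err)
```

That ValueError is essentially the same constraint pip is applying: `file:requirements.txt` names a relative path, which cannot be expressed as a valid file URI.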
21
32
68,561,453
2021-7-28
https://stackoverflow.com/questions/68561453/m1-mac-gdal-wrong-architecture-error-django
I'm trying to get a django project up and running, which depends on GDAL library. I'm working on a M1 based mac. Following the instructions on official Django docs, I've installed the necessary packages via brew $ brew install postgresql $ brew install postgis $ brew install gdal $ brew install libgeoip gdalinfo --version runs fine and shows the version as 3.3.1 gdal-config --libs returns this path: -L/opt/homebrew/Cellar/gdal/3.3.1_2/lib -lgdal a symlink is also placed on the homebrew's lib directory, which is in my path env variable. When I try to run django without specifying the path to gdal library, it complains that it cannot find the GDAL package (even though the library is reachable, as a symlink to it is available through path env variable). When I try to specify the path to the GDAL library using GDAL_LIBRARY_PATH, I get this error: OSError: dlopen(/opt/homebrew/Cellar/gdal/3.3.1_2/lib/libgdal.dylib, 6): no suitable image found. Did find: /opt/homebrew/Cellar/gdal/3.3.1_2/lib/libgdal.dylib: mach-o, but wrong architecture /opt/homebrew/Cellar/gdal/3.3.1_2/lib/libgdal.29.dylib: mach-o, but wrong architecture P.s. I've already seen this answer, but it didn't help. Isn't that strange when I try to run gdalinfo it runs fine but when django tries to run it throws me this error? What am I doing wrong?
GDAL and Python are likely compiled for different CPU architectures. On an M1 system the OS can run both native arm64 and emulated x86_64 binaries. To check: run file /opt/homebrew/Cellar/gdal/3.3.1_2/lib/libgdal.dylib and file $(which python3), which should show the supported CPU architectures for both. If the two don't match you'll have to reinstall one of them. Note that if you reinstall Python you also have to reinstall all Python packages with C extensions.
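You can also check which architecture the Python interpreter itself is running as from within Python, using only the stdlib:

```python
import platform
import struct

# 'arm64' -> native Apple Silicon build; 'x86_64' -> an Intel build or
# an interpreter running under Rosetta 2 emulation
print(platform.machine())

# Pointer width in bits, to confirm it's a 64-bit build either way
print(struct.calcsize("P") * 8)
```

If this prints `x86_64` while `file` reports the GDAL dylib as arm64-only (or vice versa), that mismatch is exactly the "wrong architecture" dlopen error.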
12
7
68,625,748
2021-8-2
https://stackoverflow.com/questions/68625748/attributeerror-cant-get-attribute-new-block-on-module-pandas-core-internal
I was using pyspark on AWS EMR (4 r5.xlarge as 4 workers, each has one executor and 4 cores), and I got AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'. Below is a snippet of the code that threw this error: search = SearchEngine(db_file_dir = "/tmp/db") conn = sqlite3.connect("/tmp/db/simple_db.sqlite") pdf_ = pd.read_sql_query('''select zipcode, lat, lng, bounds_west, bounds_east, bounds_north, bounds_south from simple_zipcode''',conn) brd_pdf = spark.sparkContext.broadcast(pdf_) conn.close() @udf('string') def get_zip_b(lat, lng): pdf = brd_pdf.value out = pdf[(np.array(pdf["bounds_north"]) >= lat) & (np.array(pdf["bounds_south"]) <= lat) & (np.array(pdf['bounds_west']) <= lng) & (np.array(pdf['bounds_east']) >= lng) ] if len(out): min_index = np.argmin( (np.array(out["lat"]) - lat)**2 + (np.array(out["lng"]) - lng)**2) zip_ = str(out["zipcode"].iloc[min_index]) else: zip_ = 'bad' return zip_ df = df.withColumn('zipcode', get_zip_b(col("latitude"),col("longitude"))) Below is the traceback, where line 102, in get_zip_b refers to pdf = brd_pdf.value: 21/08/02 06:18:19 WARN TaskSetManager: Lost task 12.0 in stage 7.0 (TID 1814, ip-10-22-17-94.pclc0.merkle.local, executor 6): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 605, in main process() File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 597, in process serializer.dump_stream(out_iter, outfile) File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 223, in dump_stream self.serializer.dump_stream(self._batched(iterator), stream) File 
"/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 141, in dump_stream for obj in iterator: File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 212, in _batched for item in iterator: File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in mapper result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs) File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in <genexpr> result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs) File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 90, in <lambda> return lambda *a: f(*a) File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/util.py", line 121, in wrapper return f(*args, **kwargs) File "/mnt/var/lib/hadoop/steps/s-1IBFS0SYWA19Z/Mobile_ID_process_center.py", line 102, in get_zip_b File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 146, in value self._value = self.load_from_path(self._path) File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 123, in load_from_path return self.load(f) File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 129, in load return pickle.load(file) AttributeError: Can't get attribute 'new_block' on 
<module 'pandas.core.internals.blocks' from '/mnt/miniconda/lib/python3.9/site-packages/pandas/core/internals/blocks.py'> Some observations and thought process: 1, After doing some search online, the AttributeError in pyspark seems to be caused by mismatched pandas versions between driver and workers? 2, But I ran the same code on two different datasets, one worked without any errors but the other didn't, which seems very strange and nondeterministic, and it seems like the errors may not be caused by mismatched pandas versions. Otherwise, neither of the two datasets would succeed. 3, I then ran the same code on the successful dataset again, but this time with different spark configurations: setting spark.driver.memory from 2048M to 4192m, and it threw AttributeError. 4, In conclusion, I think the AttributeError has something to do with the driver. But I can't tell how they are related from the error message, and how to fix it: AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'.
Solutions Keeping the pickle file unchanged, upgrade your pandas version to 1.3.x and then load the pickle file. Or Keeping your current pandas version unchanged, downgrade the pandas version to 1.2.x on the dumping side, and then dump a new pickle file with v1.2.x. Load it on your side with your pandas of version 1.2.x In short, your pandas version used to dump the pickle (dump_version, probably 1.3.x) isn't compatible with your pandas version used to load the pickle (load_version, probably 1.2.x). To solve it, try to upgrade the pandas version (load_version) to 1.3.x in the loading environment and then load the pickle. Or downgrade the pandas version (dump_version) to 1.2.x and then redump a new pickle. After this, you can load the new pickle with your pandas of version 1.2.x And this has nothing to do with PySpark In long This issue is related to the backward incompatibility between Pandas versions 1.2.x and 1.3.x. In version 1.2.5 and before, Pandas used the variable name new_blocks in module pandas.core.internals.blocks cf source code v1.2.5. On 2 July 2021, Pandas released version 1.3.0. In this update, Pandas changed the API: the variable name new_blocks in module pandas.core.internals.blocks has been changed to new_block cf source code v1.3.0. This change of API will result in incompatibility errors: If you have dumped a pickle with Pandas v1.3.x, and you try to load the pickle with Pandas v1.2.x, you will get the following error: AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '.../site-packages/pandas/core/internals/blocks.py'> Python throws this error complaining that it cannot find the attribute new_block on your current pandas.core.internals.blocks, because in order to pickle-load an object, it has to use the exact same class used for dumping the pickle. 
This is exactly your case: having dumped the pickle with Pandas v1.3.x and trying to load the pickle with Pandas v1.2.x To reproduce the error pip install --upgrade pandas==1.3.4 import pickle import numpy as np import pandas as pd df = pd.DataFrame(np.random.rand(3,6)) with open("dump_from_v1.3.4.pickle", "wb") as f: pickle.dump(df, f) quit() pip install --upgrade pandas==1.2.5 import pickle with open("dump_from_v1.3.4.pickle", "rb") as f: df = pickle.load(f) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-2-ff5c218eca92> in <module> 1 with open("dump_from_v1.3.4.pickle", "rb") as f: ----> 2 df = pickle.load(f) 3 AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/blocks.py'>
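The incompatibility only bites in one direction (newer pandas can still read older pickles), so a simple version guard can flag the dangerous case before a job runs; a stdlib-only sketch (the function name and the idea of checking driver vs. executor versions are mine, not part of the original answer):

```python
def risks_new_block_error(dump_version: str, load_version: str) -> bool:
    """True if loading would hit the new_block AttributeError:
    the pickle was dumped under pandas >= 1.3 but loaded under < 1.3."""
    def parts(v):
        major, minor = (int(x) for x in v.split(".")[:2])
        return (major, minor)
    return parts(dump_version) >= (1, 3) and parts(load_version) < (1, 3)

assert risks_new_block_error("1.3.4", "1.2.5")      # the case in the traceback
assert not risks_new_block_error("1.2.5", "1.3.4")  # newer pandas reads older pickles
assert not risks_new_block_error("1.3.0", "1.3.2")  # same API generation
```

In a PySpark job, comparing `pd.__version__` on the driver against the executors' environments before broadcasting a DataFrame catches this class of mismatch early.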
54
78
68,558,129
2021-7-28
https://stackoverflow.com/questions/68558129/opening-the-second-window-using-undetected-chromedriver-selenium-python
I'm trying to open two or more separate windows. I was able to open the first window by running from selenium import webdriver import undetected_chromedriver.v2 as uc options = webdriver.ChromeOptions() options.add_argument(r"--user-data-dir=C:\Users\username\AppData\Local\Google\Chrome\User Data") drivers = list() drivers.append(uc.Chrome(options=options)) Now I tried to open the second window by simply repeating the last line (drivers.append(uc.Chrome(options=options))), but it returned RuntimeError: you cannot reuse the ChromeOptions object So I tried options = webdriver.ChromeOptions() options.add_argument(r"--user-data-dir=C:\Users\username\AppData\Local\Google\Chrome\User Data") drivers.append(uc.Chrome(options=options)) This time it returned WebDriverException: unknown error: cannot connect to chrome at 127.0.0.1:54208 from chrome not reachable How can I fix this?
This worked for me; I couldn't use v2 but it works in v1. import threading import re import undetected_chromedriver as uc uc.install(executable_path=PATH,) drivers_dict={} def scraping_function(link): try: thread_name= threading.current_thread().name #sometimes we are going to have a different thread name in each iteration so a little regex might help thread_name = re.sub(r"ThreadPoolExecutor-(\d*)_(\d*)", r"ThreadPoolExecutor-0_\2", thread_name) print(f"re.sub -> {thread_name}") driver = drivers_dict[thread_name] except KeyError: drivers_dict[threading.current_thread().name] = uc.Chrome(options=options,executable_path=PATH) driver = drivers_dict[threading.current_thread().name] driver.get(link)
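A design note on the dict-keyed-by-thread-name trick above: threading.local gives each worker thread its own driver slot without any name normalization. A sketch with a generic factory standing in for `uc.Chrome(options=options)` (the factory here is a dummy so the pattern can be shown without a browser):

```python
import threading

_local = threading.local()

def get_driver(factory):
    """Return this thread's driver, creating it on first use."""
    if not hasattr(_local, "driver"):
        _local.driver = factory()  # e.g. lambda: uc.Chrome(options=options)
    return _local.driver

# Demo with a dummy factory: each thread gets exactly one object
created = []
def factory():
    obj = object()
    created.append(obj)
    return obj

def worker():
    a = get_driver(factory)
    b = get_driver(factory)
    assert a is b  # cached within this thread

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(len(created))  # 3: one driver per thread
```

This sidesteps the regex on ThreadPoolExecutor names entirely, since each thread's storage is isolated by construction.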
5
2
68,614,547
2021-8-1
https://stackoverflow.com/questions/68614547/tensorflow-libdevice-not-found-why-is-it-not-found-in-the-searched-path
Win 10 64-bit 21H1; TF2.5, CUDA 11 installed in environment (Python 3.9.5 Xeus) I am not the only one seeing this error; see also (unanswered) here and here. The issue is obscure and the proposed resolutions are unclear/don't seem to work (see e.g. here) Issue Using the TF Linear_Mixed_Effects_Models.ipynb example (download from TensorFlow github here) execution reaches the point of performing the "warm up stage" then throws the error: InternalError: libdevice not found at ./libdevice.10.bc [Op:__inference_one_e_step_2806] The console contains this output showing that it finds the GPU but XLA initialisation fails to find the - existing! - libdevice in the specified paths 2021-08-01 22:04:36.691300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9623 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1) 2021-08-01 22:04:37.080007: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. 2021-08-01 22:04:54.122528: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x1d724940130 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2021-08-01 22:04:54.127766: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (0): NVIDIA GeForce GTX 1080 Ti, Compute Capability 6.1 2021-08-01 22:04:54.215072: W tensorflow/compiler/tf2xla/kernels/random_ops.cc:241] Warning: Using tf.random.uniform with XLA compilation will ignore seeds; consider using tf.random.stateless_uniform instead if reproducible behavior is desired. 2021-08-01 22:04:55.506464: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. 
This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice. 2021-08-01 22:04:55.512876: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74] Searched for CUDA in the following directories: 2021-08-01 22:04:55.517387: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin 2021-08-01 22:04:55.520773: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 2021-08-01 22:04:55.524125: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] . 2021-08-01 22:04:55.526349: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:79] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work. Now the interesting thing is that the paths searched include "C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin" the content of that folder includes all the (successfully loaded at TF startup) DLLs, including cudart64_110.dll, cudnn64_8.dll... and of course libdevice.10.bc Question Since TF says it is searching this location for this file and the file exists there, what is wrong and how do I fix it? (NB C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 does not exist... CUDA is installed in the environment; this path must be a best guess for an OS installation) Info: I am setting the path by aPath = '--xla_gpu_cuda_data_dir=C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin' print(aPath) os.environ['XLA_FLAGS'] = aPath but I have also set an OS environment variable XLA_FLAGS to the same string value... I don't know which one is actually working yet, but the fact that the console output says it searched the intended path is good enough
The diagnostic information is unclear and thus unhelpful; there is however a resolution The issue was resolved by providing the file (as a copy) at this path C:\Users\Julian\anaconda3\envs\TF250_PY395_xeus\Library\bin\nvvm\libdevice\ Note that C:\Users\Julian\anaconda3\envs\TF250_PY395_xeus\Library\bin was the path given to XLA_FLAGS, but it seems it is not looking for the libdevice file there; it is looking for the \nvvm\libdevice\ path This means that I can't just set a different value in XLA_FLAGS to point to the actual location of the libdevice file because, to coin a phrase, it's not (just) the file it's looking for. The debug info earlier: 2021-08-05 08:38:52.889213: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice. 2021-08-05 08:38:52.896033: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74] Searched for CUDA in the following directories: 2021-08-05 08:38:52.899128: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin 2021-08-05 08:38:52.902510: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 2021-08-05 08:38:52.905815: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] . is incorrect insofar as there is no "CUDA" in the search path; and FWIW I think a different error should have been given for searching in C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 since there is no such folder (there's an old V10.0 folder there, but no OS install of CUDA 11) Until/unless path handling is improved by TensorFlow such file structure manipulation is needed in every new (Anaconda) python environment. Full thread in TensorFlow forum here
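A sketch of that file-copy workaround in stdlib Python (the environment path in the comment is the one from this answer — substitute your own, and remember it must be repeated for every new environment):

```python
import os
import shutil

def stage_libdevice(env_bin_dir):
    """Copy libdevice.10.bc into <env_bin_dir>/nvvm/libdevice/,
    the layout XLA actually searches under the configured directory."""
    src = os.path.join(env_bin_dir, "libdevice.10.bc")
    dest_dir = os.path.join(env_bin_dir, "nvvm", "libdevice")
    os.makedirs(dest_dir, exist_ok=True)
    return shutil.copy2(src, dest_dir)

# e.g. (this answer's environment; substitute your own path):
# stage_libdevice(r"C:\Users\Julian\anaconda3\envs\TF250_PY395_xeus\Library\bin")
```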
33
6
68,582,382
2021-7-29
https://stackoverflow.com/questions/68582382/how-to-pip-install-pickle-under-python-3-9-in-windows
I need the pickle package installed under my Python 3.9 under Windows 10. What I tried When trying with pip install pickle I was getting: ERROR: Could not find a version that satisfies the requirement pickle (from versions: none) ERROR: No matching distribution found for pickle Then I tried the solution suggested in this question using pip install pickle5 but got the following error: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ I then tried to install the Tool suggested by the error but got same error message after trying again pip install pickle5. Question Which is the correct way to install pickle package under Python 3.9 in Windows 10? UPDATE There is no need to install pickle module as it comes already installed along with Python 3.x. Just needed to do import pickle and voila!
Cedric's UPDATED answer is right. pickle is part of the Python 3.9 standard library, so you don't need pip install pickle. Just use it: import pickle
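A minimal round trip with the bundled module looks like this:

```python
import pickle

data = {"name": "spam", "scores": [1, 2, 3]}
blob = pickle.dumps(data)      # serialize the object to bytes
restored = pickle.loads(blob)  # rebuild an equal object from the bytes
print(restored == data)  # True
```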
15
28
68,605,481
2021-7-31
https://stackoverflow.com/questions/68605481/why-sqlalchemy-declarative-base-object-has-no-attribute-query
I created a declarative table. from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Column, String from sqlalchemy.dialects.postgresql import UUID import uuid Base = declarative_base() class User(Base): __tablename__ = 'users' id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4, unique=True) name = Column(String) I need to filter data. In Flask-SQLAlchemy, I do name = 'foo' User.query.filter_by(name=name).first() But if I use SQLAlchemy without Flask, then I get the error: type object 'User' has no attribute 'query' The only way that works for me is to filter the data through the session. engine = create_engine('DATABASE_URL') Session = sessionmaker(bind=engine) session = Session() name = 'foo' user = session.query(User).filter_by(name=name).first() session.close()
The Model.query... idiom is not a default part of the SQLAlchemy ORM; it's a customisation provided by Flask-SQLAlchemy. It is not available in base SQLAlchemy, and that is why you get the error message.
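For completeness — not part of the original answer — plain SQLAlchemy can provide the same idiom via scoped_session.query_property(), which is essentially what Flask-SQLAlchemy wires up for you. A sketch against an in-memory SQLite database (for illustration only):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')  # in-memory DB, just for the demo
Session = scoped_session(sessionmaker(bind=engine))

Base = declarative_base()
Base.query = Session.query_property()  # enables the Model.query idiom

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

Base.metadata.create_all(engine)
Session.add(User(name='foo'))
Session.commit()

print(User.query.filter_by(name='foo').first().name)  # foo
```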
8
10
68,634,761
2021-8-3
https://stackoverflow.com/questions/68634761/env-python3-9-no-such-file-or-directory
I have some python code formatters as a git pre-commit hook and I have changed my python version as brew list | grep python [email protected] [email protected] brew unlink [email protected] brew unlink [email protected] brew link [email protected] python -V Python 3.7.9 and now it seems something got broken: on git commit I get env: python3.9: No such file or directory. So what is env, and how can I edit it to use [email protected]?
In .git/hooks/pre-commit I have #!/usr/bin/env python3.9 and running pre-commit install fixed it to #!/usr/bin/env python3.7
7
5
68,624,314
2021-8-2
https://stackoverflow.com/questions/68624314/do-asynchronous-context-managers-need-to-protect-their-cleanup-code-from-cancell
The problem (I think) The contextlib.asynccontextmanager documentation gives this example: @asynccontextmanager async def get_connection(): conn = await acquire_db_connection() try: yield conn finally: await release_db_connection(conn) It looks to me like this can leak resources. If this code's task is cancelled while this code is on its await release_db_connection(conn) line, the release could be interrupted. The asyncio.CancelledError will propagate up from somewhere within the finally block, preventing subsequent cleanup code from running. So, in practical terms, if you're implementing a web server that handles requests with a timeout, a timeout firing at the exact wrong time could cause a database connection to leak. Full runnable example import asyncio from contextlib import asynccontextmanager async def acquire_db_connection(): await asyncio.sleep(1) print("Acquired database connection.") return "<fake connection object>" async def release_db_connection(conn): await asyncio.sleep(1) print("Released database connection.") @asynccontextmanager async def get_connection(): conn = await acquire_db_connection() try: yield conn finally: await release_db_connection(conn) async def do_stuff_with_connection(): async with get_connection() as conn: await asyncio.sleep(1) print("Did stuff with connection.") async def main(): task = asyncio.create_task(do_stuff_with_connection()) # Cancel the task just as the context manager running # inside of it is executing its cleanup code. await asyncio.sleep(2.5) task.cancel() try: await task except asyncio.CancelledError: pass print("Done.") asyncio.run(main()) Output on Python 3.7.9: Acquired database connection. Did stuff with connection. Done. Note that Released database connection is never printed. My questions This is a problem, right? Intuitively to me, I expect .cancel() to mean "cancel gracefully, cleaning up any resources used along the way." 
(Otherwise, why would they have implemented cancellation as exception propagation?) But I could be wrong. Maybe, for example, .cancel() is meant to be fast instead of graceful. Is there an authoritative source that clarifies what .cancel() is supposed to do here? If this is indeed a problem, how do I fix it?
Focusing on protecting the cleanup from cancellation is a red herring. There is a multitude of things that can go wrong and the context manager has no way to know which errors can occur, and which errors must be protected against. It is the responsibility of the resource handling utilities to properly handle errors. If release_db_connection must not be cancelled, it must protect itself against cancellation. If acquire/release must be run as a pair, it must be a single async with context manager. Further protection, e.g. against cancellation, may be involved internally as well. async def release_db_connection(conn): """ Cancellation safe variant of `release_db_connection` Internally protects against cancellation by delaying it until cleanup. """ # cleanup is run in separate task so that it # cannot be cancelled from the outside. shielded_release = asyncio.create_task(asyncio.sleep(1)) # Wait for cleanup completion – unlike `asyncio.shield`, # delay any cancellation until we are done. try: await shielded_release except asyncio.CancelledError: await shielded_release # propagate cancellation when we are done raise finally: print("Released database connection.") Note: Asynchronous cleanup is tricky. For example, a simple asyncio.shield is not sufficient if the event loop does not wait for shielded tasks. Avoid inventing your own protection and rely on the underlying frameworks to do the right thing. The cancellation of a task is a graceful shutdown that a) still allows async operations and b) may be delayed/suppressed. Coroutines being prepared to handle the CancelledError for cleanup is explicitly allowed. Task.cancel The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try … … except CancelledError … finally block. […] Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. 
A forceful shutdown is coroutine.close/GeneratorExit. This corresponds to an immediate, synchronous shutdown and forbids suspension via await, async for or async with. coroutine.close […] it raises GeneratorExit at the suspension point, causing the coroutine to immediately clean itself up.
16
2
68,620,436
2021-8-2
https://stackoverflow.com/questions/68620436/cannot-import-name-stop-words-from-sklearn-feature-extraction
I've been trying to follow an NLP notebook, and they use: from sklearn.feature_extraction import stop_words However, this is throwing the following error: ImportError: cannot import name 'stop_words' from 'sklearn.feature_extraction' My guess is that stop_words is not (or maybe no longer) part of the 'feature_extraction' part of sklearn, but I might be wrong. I have seen some articles that used sklearn.feature_extraction.stop_words, but at the same time I see places which have used 'text' in place of 'feature_extraction'. Am I misunderstanding something? sklearn is definitely up to date (version 0.24) and I import something from sklearn.manifold earlier in the notebook. Thanks for your help!
I have sklearn version 0.24.1, and I found that the module is now private - it's called _stop_words. So: from sklearn.feature_extraction import _stop_words After a little digging, I found that this change was made in version 0.22, in response to this issue. It looks like they want people to use the "canonical" import for the task at hand, as described in the API docs.
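If you only need the English stop-word list itself, there is also a public name for it in sklearn.feature_extraction.text, which avoids importing the private module (name checked against 0.24-era scikit-learn; treat it as an assumption for other versions):

```python
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

# a frozenset of common English stop words
print('the' in ENGLISH_STOP_WORDS)     # True
print('python' in ENGLISH_STOP_WORDS)  # False
```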
10
17
68,617,654
2021-8-2
https://stackoverflow.com/questions/68617654/error-problem-nothing-provides-usr-libexec-platform-python-needed-by-mongodb
I am using Fedora Linux, and when I want to update the MongoDB tools (mongodb-org-tools) or my packages via sudo dnf update I always get an error like this: Error: Problem: problem with installed package mongodb-org-database-tools-extra-4.4.4-1.el8.x86_64 - cannot install the best update candidate for package mongodb-org-database-tools-extra-4.4.4-1.el8.x86_64 - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.5-1.el8.x86_64 - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.6-1.el8.x86_64 - nothing provides /usr/libexec/platform-python needed by mongodb-org-database-tools-extra-4.4.7-1.el8.x86_64 (try to add '--skip-broken' to skip uninstallable packages) I had a similar error when updating MongoDB and I solved it by sudo dnf upgrade mongodb-org-mongos --best --allowerasing, but I still have the problem with the MongoDB tools
I also had problems installing Mongodb on Fedora 33. These problems occurred when I had the following code in /etc/yum.repos.d/mongodb-org.repo : [Mongodb] name=MongoDB Repository baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/4.4/x86_64/ gpgcheck=1 enabled=1 gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc But if I use this repository instead (i.e. replace the above code in /etc/yum.repos.d/mongodb-org.repo with the code below), it all works fine: [Mongodb] name=MongoDB Repository baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/4.4/x86_64 gpgcheck=1 enabled=1 gpgkey=https://www.mongodb.org/static/pgp/server-4.4.asc Next install mongodb: sudo dnf install mongodb-org Start the service: sudo service mongod start After starting the service as above, you can then use the usual systemctl commands to stop, start and show status of the service. The above command to start the service is needed only once. sudo systemctl stop mongod sudo systemctl start mongod sudo systemctl status mongod Further note on Fedora 34: The above does not work on Fedora 34, as mongodb-org-shell's dependency on an older version of openssl causes problems: - nothing provides libcrypto.so.10()(64bit) needed by mongodb-org-shell-4.4.0-1.amzn1.x86_64 - nothing provides libssl.so.10()(64bit) needed by mongodb-org-shell-4.4.0-1.amzn1.x86_64 I conclude from https://jira.mongodb.org/browse/SERVER-58870 that the Mongodb team do not plan to support their product on Fedora going forward, as Mongodb 5.0 is also not supported on Fedora 34, although a workaround is suggested. I'm therefore going to consider other NoSQL options.
10
7
68,616,000
2021-8-2
https://stackoverflow.com/questions/68616000/pip-install-uwsgi-gives-error-attributeerror-module-os-has-no-attribute-un
System : Windows 10 Python : 3.9.5 I was learning to Deploying a Flask app on Google Cloud. I was trying to install uwsgi (on my windows system) as shown in this youtube video. pip install uwsgi This error is coming (flask) D:\projects\websites\googleHostFlaskApp>pip install uwsgi Collecting uwsgi Using cached uWSGI-2.0.19.1.tar.gz (803 kB) ERROR: Command errored out with exit status 1: command: 'd:\projects\websites\googlehostflaskapp\env\scripts\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\HP\\AppData\\Local\\Temp\\pip-install-93528com\\uwsgi_9bf2ba795b9549d6a9a0d2dfa78cd774\\setup.py'"'"'; __file__='"'"'C:\\Users\\HP\\AppData\\Local\\Temp\\pip-install-93528com\\uwsgi_9bf2ba795b9549d6a9a0d2dfa78cd774\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\HP\AppData\Local\Temp\pip-pip-egg-info-tiy5mnl9' cwd: C:\Users\HP\AppData\Local\Temp\pip-install-93528com\uwsgi_9bf2ba795b9549d6a9a0d2dfa78cd774\ Complete output (7 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\HP\AppData\Local\Temp\pip-install-93528com\uwsgi_9bf2ba795b9549d6a9a0d2dfa78cd774\setup.py", line 3, in <module> import uwsgiconfig as uc File "C:\Users\HP\AppData\Local\Temp\pip-install-93528com\uwsgi_9bf2ba795b9549d6a9a0d2dfa78cd774\uwsgiconfig.py", line 8, in <module> uwsgi_os = os.uname()[0] pi ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/c7/75/45234f7b441c59b1eefd31ba3d1041a7e3c89602af24488e2a22e11e7259/uWSGI-2.0.19.1.tar.gz#sha256=faa85e053c0b1be4d5585b0858d3a511d2cd10201802e8676060fd0a109e5869 (from https://pypi.org/simple/uwsgi/). 
Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. Answers I tried: For Windows: I am not sure where we have to place the file given in the answer. For Ubuntu: I get that I have to add some development library to Python, but I am not able to find the correct one to install on my Windows system.
Installing uWSGI wasn't an easy task. @Seraph's answer did help, but it missed some points that I needed to finally install uWSGI. So here are the complete steps, so that you won't have to waste a day on this like me: Install Cygwin download (Cygwin is an open source collection of tools that allows Unix or Linux applications to be compiled and run on a Windows operating system from within a Linux-like interface). This youtube link will help you in the process of downloading Cygwin. Packages you require in Cygwin: gcc-core gcc-g++ libintl-devel python3-devel python38-devel gettext-devel (if it complains that it requires a C compiler, then gcc-core will help; if it errors with -lintl missing, then libintl-devel helps) 3. After you open the Cygwin app, you can follow the above youtube link to successfully run uwsgi on your Windows system.
12
8
68,636,431
2021-8-3
https://stackoverflow.com/questions/68636431/multiline-if-statement-with-a-single-conditional
Let's say I have two variables self.SuperLongSpecificCorperateVariableNameIcantChangeCommunication and self.SuperLongSpecificCorperateVariableNameIcantChangeControl and I need to compare them. The issue is that, when I put them both in an if statement, it blows past the style checker's line length. if (self.SuperLongSpecificCorperateVariableNameIcantChangeCommunication != self.SuperLongSpecificCorperateVariableNameIcantChangeControl): The way around this would be to split this into two lines. if (self.SuperLongSpecificCorperateVariableNameIcantChangeCommunication \ != self.SuperLongSpecificCorperateVariableNameIcantChangeControl): My coworkers are split on whether PEP 8 has you split between conditionals or whether you can split up a conditional itself. Ideally we would get approval to change the variable name, but in the meantime, what does PEP 8 say we should do in this case?
Firstly, PEP 8 says you can split long lines under Maximum Line Length: Long lines can be broken over multiple lines by wrapping expressions in parentheses. These should be used in preference to using a backslash for line continuation. In fact, the backslash in your example is not needed because of the parentheses. PEP 8 says you can split a conditional under multiline if-statements, although the main focus of that section is how to distinguish it from the following block. When the conditional part of an if-statement is long enough to require that it be written across multiple lines, it's worth noting that the combination of a two character keyword (i.e. if), plus a single space, plus an opening parenthesis creates a natural 4-space indent for the subsequent lines of the multiline conditional. This can produce a visual conflict with the indented suite of code nested inside the if-statement, which would also naturally be indented to 4 spaces. This PEP takes no explicit position on how (or whether) to further visually distinguish such conditional lines from the nested suite inside the if-statement. Acceptable options in this situation include, but are not limited to: # No extra indentation. if (this_is_one_thing and that_is_another_thing): do_something() # Add a comment, which will provide some distinction in editors # supporting syntax highlighting. if (this_is_one_thing and that_is_another_thing): # Since both conditions are true, we can frobnicate. do_something() # Add some extra indentation on the conditional continuation line. if (this_is_one_thing and that_is_another_thing): do_something() Personally, I would go for the last option for maximum readability. 
So that gives us: if (self.SuperLongSpecificCorperateVariableNameIcantChangeCommunication != self.SuperLongSpecificCorperateVariableNameIcantChangeControl): do_something() Other options You could use temporary "internal use" names to shorten the line: _Comm = self.SuperLongSpecificCorperateVariableNameIcantChangeCommunication _Control = self.SuperLongSpecificCorperateVariableNameIcantChangeControl if _Comm != _Control: do_something() This is assuming the context is not in a local scope. If it is actually in a local scope, they don't need to be "internal use". You could use a helper function to give them shorter names in a local scope. Since they're attributes, you can pass in their object: def _compare(instance): a = instance.SuperLongSpecificCorperateVariableNameIcantChangeCommunication b = instance.SuperLongSpecificCorperateVariableNameIcantChangeControl return a != b if _compare(self): do_something()
5
2
68,639,461
2021-8-3
https://stackoverflow.com/questions/68639461/print-formatted-numpy-array
I want to print a formatted numpy array along with a float with different significant figures. Consider the following code a = 3.14159 X = np.array([1.123, 4.456, 7.789]) print('a = %4.3f, X = %3.2f' % (a, X)) ------------------------ TypeError: only size-1 arrays can be converted to Python scalars I desire the following output: a = 3.141, X = [1.12, 4.45, 7.78] Suggest a modification in the code.
Convert array to string first with array2string: print('a = %4.3f, X = %s' % (a, np.array2string(X, precision=2))) # a = 3.142, X = [1.12 4.46 7.79]
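The same result can be had with an f-string, which keeps the per-value format specs readable (just a variant of the answer's approach, not a different mechanism):

```python
import numpy as np

a = 3.14159
X = np.array([1.123, 4.456, 7.789])
# format the scalar inline and let array2string handle the array
print(f'a = {a:4.3f}, X = {np.array2string(X, precision=2)}')
# a = 3.142, X = [1.12 4.46 7.79]
```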
5
3
68,637,971
2021-8-3
https://stackoverflow.com/questions/68637971/how-to-detect-zstd-compression
I am currently working on a Python application that works with Facebook APIs. As we all know, Facebook loves their own technology and is working with zstd for data compression. The problem: Facebook returns either an uncompressed response with normal JSON or, if the response is longer, a zstd compressed JSON. My current code is something like this: import zstd import json def handle_response(response): data = None try: data = json.loads(zstd.decompress(response.content)) except: data = json.loads(response.text) return data I am currently wondering if there is a cleaner way to do this and even detect zstd.
What you're doing is fine. You could, I suppose, check to see if the stream starts with the four bytes 28 b5 2f fd. If it doesn't, it's not a zstd stream. If it does, it may be a zstd stream. In the latter case, you would try to decompress and if it fails, you would fall back to just copying the input. That turns out to be exactly the same as what you're already doing, since the first thing that zstd.decompress is doing is to look for that signature.
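That four-byte signature check can be sketched as a cheap pre-filter. A match is only a strong hint that the payload may be a zstd stream, so the existing try/except fallback is still worth keeping:

```python
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"  # zstd frame magic number (little-endian 0xFD2FB528)

def looks_like_zstd(payload: bytes) -> bool:
    """Return True if the payload starts with the zstd frame signature."""
    return payload[:4] == ZSTD_MAGIC

print(looks_like_zstd(b"\x28\xb5\x2f\xfd...rest of frame..."))  # True
print(looks_like_zstd(b'{"plain": "json"}'))                    # False
```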
7
10
68,637,153
2021-8-3
https://stackoverflow.com/questions/68637153/python-error-in-vscode-sorry-something-went-wrong-activating-intellicode-suppo
My code is not working in VS Code; when I click to run the code I see this error: Sorry, something went wrong activating IntelliCode support for Python. Please check the "Python" and "VS IntelliCode" output windows for details. and when I tried to rerun the code I saw this message: Code is already running The code doesn't stop when I press Ctrl+C, so I have to close the editor and open it again. I don't understand why this happened; please help.
I would just like to add a few helpful links: Intellicode Issue 57 Intellicode Issue 266 Gitmemory issue 486082039 For a lot of people, it just began working randomly after a few tries. See this text (quoted from issue 57): There's a race condition in the activation of both the IntelliCode and Python language server extensions. Even if the Python extension is loaded, the language server that the extension spins up might not be fully initialized yet. So if the Python extension loads, then the IntelliCode extension, then the Python language server initializes, we will have this problem. For some people, reloading the VS IntelliCode pack after reinstalling the Python extension pack worked. Thank you.
13
8
68,603,585
2021-7-31
https://stackoverflow.com/questions/68603585/mypy-why-does-typevar-not-work-without-bound-specified
I'm trying to understand type annotations and I have the following code: from typing import TypeVar T = TypeVar('T') class MyClass(): x: int = 10 def foo(obj: T) -> None: print(obj.x) foo(MyClass()) When I run mypy, I get the following error: main.py:9: error: "T" has no attribute "x" Found 1 error in 1 file (checked 1 source file) But when I add bound='MyClass' to the TypeVar, it shows no errors. What is the reason for this behavior? I tried to read the documentation, but didn't find any answer on what exactly is happening when bound is set to a default value.
This isn't what a TypeVar is usually used for. The following function is a good example of the kind of function that a TypeVar is typically used for: def baz(obj): return obj This function will work with an argument of any type, so one solution for annotating this function could be to use typing.Any, like so: from typing import Any def baz(obj: Any) -> Any: return obj This isn't great, however. We generally should use Any as a last resort only, as it doesn't give the type-checker any information about the variables in our code. Lots of potential bugs will fly under the radar if we use Any too liberally, as the type-checker will essentially give up, and not check that portion of our code. In this situation, there's a lot more information that we can feed to the type-checker. We don't know what the type of the input argument will be, and we don't know what the return type will be, but we do know that the input type and the return type will be the same, whatever they are. We can show this kind of relationship between types — a type-dependent relationship — by using a TypeVar: from typing import TypeVar T = TypeVar('T') def baz(obj: T) -> T: return obj We can also use TypeVars in similar, but more complex, situations. Consider this function, which will accept a sequence of any type, and construct a dictionary using that sequence: def bar(some_sequence): return {some_sequence.index(elem): elem for elem in some_sequence} We can annotate this function like this: from typing import TypeVar, Sequence V = TypeVar('V') def bar(some_sequence: Sequence[V]) -> dict[int, V]: return {some_sequence.index(elem): elem for elem in some_sequence} Whatever the inferred type is of some_sequence's elements, we can guarantee the values of the dictionary that is returned will be of the same type. Bound TypeVars Bound TypeVars are useful for when we have a function where we have some kind of type dependency like the above, but we want to narrow the types involved a little more. 
For example, imagine the following code: class BreakfastFood: pass class Spam(BreakfastFood): pass class Bacon(BreakfastFood): pass def breakfast_selection(food): if not isinstance(food, BreakfastFood): raise TypeError("NO.") # do some more stuff here return food In this code, we've got a type-dependency like in the previous examples, but there's an extra complication: the function will throw a TypeError if the argument passed to it isn't an instance of — or an instance of a subclass of — the BreakfastFood class. In order for this function to pass a type-checker, we need to constrain the TypeVar we use to BreakfastFood and its subclasses. We can do this by using the bound keyword-argument: from typing import TypeVar class BreakfastFood: pass B = TypeVar('B', bound=BreakfastFood) class Spam(BreakfastFood): pass class Bacon(BreakfastFood): pass def breakfast_selection(food: B) -> B: if not isinstance(food, BreakfastFood): raise TypeError("NO.") # do some more stuff here return food What's going on in your code If you annotate the obj argument in your foo function with an unbound TypeVar, you're telling the type-checker that obj could be of any type. But the type-checker correctly raises an error here: you've told it that obj could be of any type, yet your function assumes that obj has an attribute x, and not all objects in python have x attributes. By binding the T TypeVar to instances of — and instances of subclasses of — MyClass, we're telling the type-checker that the obj argument should be an instance of MyClass, or an instance of a subclass of MyClass. All instances of MyClass and its subclasses have x attributes, so the type-checker is happy. Hooray! However, your current function shouldn't really be using TypeVars at all, in my opinion, as there's no kind of type-dependency involved in your function's annotation. 
If you know that the obj argument should be an instance of — or an instance of a subclass of — MyClass, and there is no type-dependency in your annotations, then you can simply annotate your function directly with MyClass: class MyClass: x: int = 10 def foo(obj: MyClass) -> None: print(obj.x) foo(MyClass()) If, on the other hand, obj doesn't need to be an instance of — or an instance of a subclass of — MyClass, and in fact any class with an x attribute will do, then you can use typing.Protocol to specify this: from typing import Protocol class SupportsXAttr(Protocol): x: int class MyClass: x: int = 10 def foo(obj: SupportsXAttr) -> None: print(obj.x) foo(MyClass()) Explaining typing.Protocol fully is beyond the scope of this already-long answer, but here's a great blog post on it.
4
13
68,631,257
2021-8-3
https://stackoverflow.com/questions/68631257/how-is-str-joiniterable-method-implemented-in-python-linear-time-string-conca
I am trying to implement my own str.join method in Python, e.g.: ''.join(['aa','bbb','cccc']) returns 'aabbbcccc'. I know that string concatenation using the join method would result in linear (in the number of characters of the result) complexity, and I want to know how to do it, as using the '+' operator in a for loop would result in quadratic complexity e.g.: res='' for word in ['aa','bbb','cccc']: res = res + word As strings are immutable, this copies a new string at each iteration resulting in quadratic running time. However, I want to know how to do it in linear time or find how ''.join works exactly. I could not find anywhere a linear time algorithm nor the implementation of str.join(iterable). Any help is much appreciated.
Joining str as actual str is a red herring and not what Python itself does: Python operates on mutable bytes, not the str, which also removes the need to know string internals. In specific, str.join converts its arguments to bytes, then pre-allocates and mutates its result. This directly corresponds to: a wrapper to encode/decode str arguments to/from bytes summing the len of elements and separators allocating a mutable bytesarray to construct the result copying each element/separator directly into the result # helper to convert to/from joinable bytes def str_join(sep: "str", elements: "list[str]") -> "str": joined_bytes = bytes_join( sep.encode(), [elem.encode() for elem in elements], ) return joined_bytes.decode() # actual joining at bytes level def bytes_join(sep: "bytes", elements: "list[bytes]") -> "bytes": # create a mutable buffer that is long enough to hold the result total_length = sum(len(elem) for elem in elements) total_length += (len(elements) - 1) * len(sep) result = bytearray(total_length) # copy all characters from the inputs to the result insert_idx = 0 for elem in elements: result[insert_idx:insert_idx+len(elem)] = elem insert_idx += len(elem) if insert_idx < total_length: result[insert_idx:insert_idx+len(sep)] = sep insert_idx += len(sep) return bytes(result) print(str_join(" ", ["Hello", "World!"])) Notably, while the element iteration and element copying basically are two nested loops, they iterate over separate things. The algorithm still touches each character/byte only thrice/once.
7
6
68,631,476
2021-8-3
https://stackoverflow.com/questions/68631476/how-to-find-the-index-of-the-max-value-in-a-list-for-python
I have a list, and I need to find the maximum element in the list and also record the index at which that max element is at. This is my code. list_c = [-14, 7, -9, 2] max_val = 0 idx_max = 0 for i in range(len(list_c)): if list_c[i] > max_val: max_val = list_c[i] idx_max = list_c.index(i) return list_c, max_val, idx_max Please help. I am just beginning to code and got stuck here.
You are trying to find the index of i; it should be list_c[i]. An easier and better way is simply idx_max = i (based on @Matthias' comment above). Use print to show the results, not return; return is used inside functions. Also, your code doesn't work if list_c has all negative values, because you initialize max_val to 0, so you always get 0.
list_c = [-14, 7, -9, 2]
max_val = 0
idx_max = 0
for i in range(len(list_c)):
    if list_c[i] > max_val:
        max_val = list_c[i]
        idx_max = i
print(list_c, max_val, idx_max)
[-14, 7, -9, 2] 7 1
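As a side note, the idiomatic way to get both the maximum and its index in one pass is max over enumerate; it also handles all-negative lists, which an initialization of max_val to 0 does not. A small sketch of my own:

```python
list_c = [-14, 7, -9, 2]

# enumerate yields (index, value) pairs; max compares by the value.
idx_max, max_val = max(enumerate(list_c), key=lambda pair: pair[1])
print(list_c, max_val, idx_max)  # [-14, 7, -9, 2] 7 1

# All-negative input works too, since nothing is initialized to 0.
idx_neg, max_neg = max(enumerate([-5, -2, -9]), key=lambda pair: pair[1])
print(idx_neg, max_neg)  # 1 -2
```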
14
0
68,628,542
2021-8-2
https://stackoverflow.com/questions/68628542/checking-if-file-exists-in-google-bucket-via-apache-airflow
I have a DAG that takes the results of a script in a Google cloud bucket, loads it into a table in Google BigQuery, then deletes the file in the bucket. I want the DAG to check every hour over the weekends. Right now, I'm using a GoogleCloudStoragetoBigQueryOperator. If the file is not there, the DAG fails. Is there a way I can set up the DAG to where it won't fail if the file isn't there? Maybe a try/catch?
You could use GCSObjectExistenceSensor from Google provider package in order to verify if the file is present before running downstream tasks. gcs_object_exists = GCSObjectExistenceSensor( bucket=BUCKET_1, object=PATH_TO_UPLOAD_FILE, mode='poke', task_id="gcs_object_exists_task", ) You can check the official example here. Keep in mind this sensor extends from BaseSensorOperator so you can define params such as poke_interval, timeout and mode to suit your needs. soft_fail (bool) – Set to true to mark the task as SKIPPED on failure poke_interval (float) – Time in seconds that the job should wait in between each tries timeout (float) – Time, in seconds before the task times out and fails. mode (str) – How the sensor operates. Options are: { poke | reschedule }, default is poke. When set to poke the sensor is taking up a worker slot for its whole execution time and sleeps between pokes. Use this mode if the expected runtime of the sensor is short or if a short poke interval is required. Note that the sensor will hold onto a worker slot and a pool slot for the duration of the sensor’s runtime in this mode. When set to reschedule the sensor task frees the worker slot when the criteria is not yet met and it’s rescheduled at a later time. Use this mode if the time before the criteria is met is expected to be quite long. The poke interval should be more than one minute to prevent too much load on the scheduler. exponential_backoff (bool) – allow progressive longer waits between pokes by using exponential backoff algorithm source
4
8
68,625,921
2021-8-2
https://stackoverflow.com/questions/68625921/how-to-adjust-matplotlib-colorbar-range-in-xarray-plot
I have a plot that looks like this I cannot understand how to manually change or set the range of data values for the colorbar. I would like to experiment with ranges based on the data values shown in the plots and change the colorbar to (-4,4). I see that plt.clim, vmin and vmax are functions to possibly use. Here is my code: import cdsapi import xarray as xr import matplotlib.pyplot as plt import numpy as np import cartopy.crs as ccrs # Also requires cfgrib library. c = cdsapi.Client() url = c.retrieve( 'reanalysis-era5-single-levels-monthly-means', { 'product_type': 'monthly_averaged_reanalysis', 'format': 'grib', 'variable': ['100m_u_component_of_wind','100m_v_component_of_wind'], 'year': ['2006','2007','2008','2009','2010','2011','2012','2013','2014','2015','2016','2017','2018','2019','2020','2021'], 'month': ['01','02','03','04','05','06','07','08','09','10','11','12'], 'time': '00:00', 'grid': [0.25, 0.25], 'area': [70.00, -180.00, -40.00, 180.00], }, "C:\\Users\\U321103\\.spyder-py3\\ERA5_MAPPING\\100m_wind_U_V.grib") path = "C:\\Users\\U321103\\.spyder-py3\\ERA5_MAPPING\\100m_wind_U_V.grib" ds = xr.load_dataset(path, engine='cfgrib') wind_abs = np.sqrt(ds.u100**2 + ds.v100**2) monthly_means = wind_abs.mean(dim='time') wind_abs_clim = wind_abs.sel(time=slice('2006-01','2020-12')).groupby('time.month').mean(dim='time') # select averaging period wind_abs_anom = ((wind_abs.groupby('time.month') / wind_abs_clim))-1 #deviation from climo fg = wind_abs_anom.sel(time=slice('2021-01',None)).groupby('time.month').mean(dim='time').plot(col='month', col_wrap=3,transform=ccrs.PlateCarree(), cbar_kwargs={'orientation':'horizontal','shrink':0.6, 'aspect':40,'label':'Percent Deviation'},robust=False,subplot_kws={'projection': ccrs.Mercator()}) fg.map(lambda: plt.gca().coastlines())
I was able to reproduce your figure and found that I could add vmin and vmax as shown below. For some reason that meant I also had to specify the colormap, otherwise I ended up with viridis. But the code below works for me (with a bit of refactoring as I got it working; the only material change here is in the plotting section at the bottom). First, loading the data:
import cdsapi

c = cdsapi.Client()

params = {
    'product_type': 'monthly_averaged_reanalysis',
    'format': 'grib',
    'variable': ['100m_u_component_of_wind', '100m_v_component_of_wind'],
    'year': [f'{n}' for n in range(2006, 2022)],
    'month': [f'{n:02d}' for n in range(1, 13)],
    'time': '00:00',
    'grid': [0.25, 0.25],
    'area': [70.00, -180.00, -40.00, 180.00],
}

path = '100m_wind_U_V.grib'
url = c.retrieve('reanalysis-era5-single-levels-monthly-means',
                 params,
                 path,
                 )
Then there's the data pipeline:
import xarray as xr
import numpy as np
# Also need cfgrib library.

ds = xr.load_dataset(path, engine='cfgrib')
wind_abs = np.sqrt(ds.u100**2 + ds.v100**2)
monthly_means = wind_abs.mean(dim='time')
wind_abs_clim = (wind_abs.sel(time=slice('2006-01','2020-12'))
                         .groupby('time.month')
                         .mean(dim='time'))
wind_abs_anom = ((wind_abs.groupby('time.month') / wind_abs_clim)) - 1
Finally the plotting:
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

cbar_kwargs = {'orientation':'horizontal',
               'shrink':0.6,
               'aspect':40,
               'label':'Percent Deviation'}
subplot_kws = {'projection': ccrs.Mercator()}

fg = (wind_abs_anom.sel(time=slice('2021-01', None))
                   .groupby('time.month')
                   .mean(dim='time')
                   .plot(col='month',
                         col_wrap=3,
                         transform=ccrs.PlateCarree(),
                         cmap='RdBu_r',
                         vmin=-3, vmax=3,  # <-- New bit.
                         cbar_kwargs=cbar_kwargs,
                         robust=False,
                         subplot_kws=subplot_kws
                         ))
fg.map(lambda: plt.gca().coastlines())
Sometimes I'll use a percentile to control the values for vmin and vmax automatically, like max_ = np.percentile(data, 99), then vmin=-max_, vmax=max_. This deals nicely with outliers that stretch the colormap, but it requires you to be able to calculate those values before making the plot. If you want to start having more control over the plot, it might be a good idea to stop using the xarray plotting interface and use matplotlib and cartopy directly. Here's what that might look like (replacing all of the plotting code above):
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

sel = wind_abs_anom.sel(time=slice('2021-01', None))

left, *_, right = wind_abs_anom.longitude
top, *_, bottom = wind_abs_anom.latitude  # Min and max latitude.
extent = [left, right, bottom, top]

fig, axs = plt.subplots(nrows=2, ncols=3,
                        figsize=(15, 6),
                        subplot_kw={'projection': ccrs.PlateCarree()},
                        )

for ax, (month, group) in zip(axs.flat, sel.groupby('time.month')):
    mean = group.mean(dim='time')
    im = ax.imshow(mean,
                   transform=ccrs.PlateCarree(),
                   extent=extent,
                   cmap='RdBu_r', vmin=-3, vmax=3)
    ax.set_title(f'month = {month}')
    ax.coastlines()

cbar_ax = fig.add_axes([0.2, 0.0, 0.6, 0.05])  # Left, bottom, width, height.
cbar = fig.colorbar(im, cax=cbar_ax, extend='both', orientation='horizontal')
cbar.set_label('Percent deviation')

plt.show()
For some reason, when I try to use ccrs.Mercator() for the map, the data gets distorted; maybe you can figure that bit out.
4
9
68,626,923
2021-8-2
https://stackoverflow.com/questions/68626923/numpy-matrix-creation-timing-oddity
My application requires a starting matrix where each column is staggered-by-1 from the previous. It will contain millions of complex numbers representing a signal, but a small example is: array([[ 0, 1, 2, 3], [ 1, 2, 3, 4], [ 2, 3, 4, 5], [ 3, 4, 5, 6], [ 4, 5, 6, 7], [ 5, 6, 7, 8], [ 6, 7, 8, 9], [ 7, 8, 9, 10]]) I tried two creation methods, one fast, one slow. I don't understand why the fast matrix creation method causes subsequent calculations to run slowly, while the slow matrix creation results in faster running calculations. The subroutine calcs() simply takes FFTs to offer minimal code to demonstrate the issue I see in my actual signal processing code. A sample run yields: python ex.py Slow Create, Fast Math 57.90 ms, create 36.79 ms, calcs() 94.69 ms, total Fast Create, Slow Math 15.13 ms, create 355.38 ms, calcs() 370.50 ms, total Code follows. Any insight would be appreciated! import numpy as np import time N = 65536 Np = 64 # Random signal for demo. x = np.random.randint(-50,50,N+Np) + 1j*np.random.randint(-50,50,N+Np) def calcs(sig): np.fft.fft(sig) print('Slow Create, Fast Math') t0 = time.time() X = np.zeros((N, Np), dtype=complex) for col in range(Np): X[:,col] = x[col:col+N] t1 = time.time() calcs(X) t2 = time.time() print(' %6.2f ms, create' % (1e3 * (t1 - t0))) print(' %6.2f ms, calcs()' % (1e3 * (t2 - t1))) print(' %6.2f ms, total' % (1e3 * (t2 - t0))) print('Fast Create, Slow Math') t0 = time.time() X = np.array([x[i:i+N] for i in range(Np)]).transpose() t1 = time.time() calcs(X) t2 = time.time() print(' %6.2f ms, create' % (1e3 * (t1 - t0))) print(' %6.2f ms, calcs()' % (1e3 * (t2 - t1))) print(' %6.2f ms, total' % (1e3 * (t2 - t0)))
user3483203's comment, above, provides the answer to the issue. If I avoid the transpose by creating the matrix with:
X = np.array([x[i:i+Np] for i in range(N)], dtype=complex)
then subsequent calcs() timing is as expected. Thank you, user3483203!
5
3
68,625,894
2021-8-2
https://stackoverflow.com/questions/68625894/pick-elements-from-list-of-sets-to-cover-all-sets-with-exactly-one-element
I am looking for an idiomatic and fast Python solution for the following problem. Input is a list of sets. For example, 3 sets of strings.
[
    {a, b, c, d, e},
    {a, c, e, f, g},
    {e, f, a, d, l}
]
I would like to find all choices of string combinations so that there is only one element in the combination per set. For example, these are "mappings" that show at what list positions these strings occur:
a -> 0, 1, 2
b -> 0
c -> 0, 1
d -> 0, 2
e -> 0, 1, 2
f -> 1, 2
g -> 1
l -> 2
So the correct solution is the following list of sets
a
b, g, l
b, f
e
c, l
d, g
Here are some examples of incorrect solutions:
a, b  # incorrect because more than one element (2) from set 0 are used
b, l  # incorrect because less than one element (0) from set 1 is used
I tried a naive recursive solution that adds one element at a time from the row and then checks whether the remaining rows can still meet the requirement. It was extremely slow (probably because I overused list concatenations, copies, etc). I am looking for a solution that will work relatively fast (less than 10 s) for 100 rows of sets where each of them has 50 elements on average. As noted in the comments, this problem may not be solvable in reasonable time with these constraints. In that case I am still interested in a Python solution that works with weaker constraints (e.g. alphabet size 30, average set size 10, number of sets 50).
I would use a SAT solver like z3 for this. import z3 sets = [ {"a", "b", "c", "d", "e"}, {"a", "c", "e", "f", "g"}, {"e", "f", "a", "d", "l"} ] alphabet = set.union(*sets) zvars = {w: z3.Bool(w) for w in alphabet} sol = z3.Solver() for s in sets: # Exactly one in each set. sol.add(z3.PbEq([(zvars[w], True) for w in s], 1)) # Iterate over all solutions. while sol.check() == z3.sat: model = sol.model() print({w for w, v in zvars.items() if bool(model[v])}) # Prevent same solution being returned. trues = [v for v in zvars.values() if bool(model[v])] falses = [z3.Not(v) for v in zvars.values() if not bool(model[v])] sol.add(z3.Not(z3.And(trues + falses))) Solution: {'e'} {'g', 'd'} {'g', 'b', 'l'} {'b', 'f'} {'c', 'l'} {'a'}
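If pulling in z3 is not an option, the same "exactly one chosen element per set" condition can be brute-forced over subsets of the alphabet with just the standard library. This is exponential in the alphabet size, so it only fits small instances; it is my own illustrative alternative, not part of the z3 answer:

```python
from itertools import combinations

sets = [
    {"a", "b", "c", "d", "e"},
    {"a", "c", "e", "f", "g"},
    {"e", "f", "a", "d", "l"},
]

def exact_covers(sets):
    # Enumerate every subset of the alphabet and keep those that
    # intersect each input set in exactly one element.
    alphabet = sorted(set.union(*sets))
    solutions = []
    for r in range(1, len(alphabet) + 1):
        for combo in combinations(alphabet, r):
            chosen = set(combo)
            if all(len(s & chosen) == 1 for s in sets):
                solutions.append(chosen)
    return solutions

for sol in exact_covers(sets):
    print(sol)  # six solutions, e.g. {'a'}, {'e'}, {'b', 'f'}, ...
```

For the three example sets this reproduces the six solutions that the z3 enumeration finds, just far more slowly on larger inputs.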
5
3
68,624,485
2021-8-2
https://stackoverflow.com/questions/68624485/hide-play-and-stop-buttons-in-plotly-express-animation
How can I remove the play and stop buttons and just keep the slider? import plotly.express as px import pandas as pd df = pd.DataFrame(dict(x=[0, 1, 0, 1], y=[0, 1, 1, 0], z=[0, 0, 1, 1])) px.line(df, "x", "y", animation_frame="z")
fig = px.line(df, "x", "y", animation_frame="z") fig["layout"].pop("updatemenus") fig.show()
4
7
68,616,781
2021-8-2
https://stackoverflow.com/questions/68616781/customizing-the-hue-colors-used-in-seaborn-barplot
I'm using seaborn to create the following chart: I'd like to customize the colors that are generated by hue , preferably to set the order of the colors as Blue, Green, Yellow, Red. I have tried passing a color or a list of colors to the color argument in sns.barplot however it yields either gradients of the color or an error. Please Advise. You can use the following code to reproduce the chart: import matplotlib.pyplot as plt import seaborn as sns import pandas as pd df = pd.DataFrame({'Early': {'A': 686, 'B': 533, 'C': 833, 'D': 625, 'E': 820}, 'Long Overdue': {'A': 203, 'B': 237, 'C': 436, 'D': 458, 'E': 408}, 'On Time': {'A': 881, 'B': 903, 'C': 100, 'D': 53, 'E': 50}, 'Overdue': {'A': 412, 'B': 509, 'C': 813, 'D': 1046, 'E': 904}}) df_long = df.unstack().to_frame(name='value') df_long = df_long.swaplevel() df_long.reset_index(inplace=True) df_long.columns = ['group', 'status', 'value'] df_long['status'] = pd.Categorical(df_long['status'], ["Early", "On Time", "Overdue", "Long Overdue"]) df_long = df_long.sort_values("status") fig, ax = plt.subplots(figsize=(18.5, 10.5)) g = sns.barplot(data=df_long, x='group', y='value', hue='status', ax=ax) for bar in g.patches: height = bar.get_height() ax.text(bar.get_x() + bar.get_width() / 2., 0.5 * height, int(height), ha='center', va='center', color='white') plt.xticks(fontsize=12) plt.legend(loc='upper left', prop={'size': 14}) ax.xaxis.label.set_visible(False) ax.axes.get_yaxis().set_visible(False) plt.show()
The hue variable of seaborn.barplot() is mapped via palette: palette: palette name, list, or dict Colors to use for the different levels of the hue variable. Should be something that can be interpreted by seaborn.color_palette(), or a dictionary mapping hue levels to matplotlib colors. So to customize your hue colors, either define a color list: palette = ['tab:blue', 'tab:green', 'tab:orange', 'tab:red'] or a hue-color dictionary: palette = { 'Early': 'tab:blue', 'On Time': 'tab:green', 'Overdue': 'tab:orange', 'Long Overdue': 'tab:red', } And pass that to palette: sns.barplot(data=df_long, x='group', y='value', hue='status', palette=palette, ax=ax)
10
15
68,603,658
2021-7-31
https://stackoverflow.com/questions/68603658/how-to-terminate-a-uvicorn-fastapi-application-cleanly-with-workers-2-when
I have an application written with Uvicorn + FastAPI. I am testing the response time using PyTest. Referring to How to start a Uvicorn + FastAPI in background when testing with PyTest, I wrote the test. However, I found the application process alive after completing the test when workers >= 2. I want to terminate the application process cleanly at the end of the test. Do you have any idea? The details are as follows.
Environment
Windows 10
Bash 4.4.23 (https://cmder.net/)
python 3.7.5
Libraries
fastapi == 0.68.0
uvicorn == 0.14.0
requests == 2.26.0
pytest == 6.2.4
Sample Codes
Application: main.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def hello_world():
    return "hello world"
Test: test_main.py
from multiprocessing import Process
import pytest
import requests
import time
import uvicorn

HOST = "127.0.0.1"
PORT = 8765
WORKERS = 1

def run_server(host: str, port: int, workers: int, wait: int = 15) -> Process:
    proc = Process(
        target=uvicorn.run,
        args=("main:app",),
        kwargs={
            "host": host,
            "port": port,
            "workers": workers,
        },
    )
    proc.start()
    time.sleep(wait)
    assert proc.is_alive()
    return proc

def shutdown_server(proc: Process):
    proc.terminate()
    for _ in range(5):
        if proc.is_alive():
            time.sleep(5)
        else:
            return
    else:
        raise Exception("Process still alive")

def check_response(host: str, port: int):
    assert requests.get(f"http://{host}:{port}").text == '"hello world"'

def check_response_time(host: str, port: int, tol: float = 1e-2):
    s = time.time()
    requests.get(f"http://{host}:{port}")
    e = time.time()
    assert e - s < tol

@pytest.fixture(scope="session")
def server():
    proc = run_server(HOST, PORT, WORKERS)
    try:
        yield
    finally:
        shutdown_server(proc)

def test_main(server):
    check_response(HOST, PORT)
    check_response_time(HOST, PORT)
    check_response(HOST, PORT)
    check_response_time(HOST, PORT)
Execution Result
$ curl http://localhost:8765
curl: (7) Failed to connect to localhost port 8765: Connection refused
$ pytest test_main.py
=============== test session starts ===============
platform win32 -- Python 3.7.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: .\
collected 1 item

test_main.py .                                [100%]

=============== 1 passed in 20.23s ===============
$ curl http://localhost:8765
curl: (7) Failed to connect to localhost port 8765: Connection refused
$ sed -i -e "s/WORKERS = 1/WORKERS = 3/g" test_main.py
$ curl http://localhost:8765
curl: (7) Failed to connect to localhost port 8765: Connection refused
$ pytest test_main.py
=============== test session starts ===============
platform win32 -- Python 3.7.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: .\
collected 1 item

test_main.py .                                [100%]

=============== 1 passed in 20.21s ===============
$ curl http://localhost:8765
"hello world"
$ # Why is localhost:8765 still alive?
I have found a solution myself, thanks to https://stackoverflow.com/a/27034438/16567832.
Solution
After installing psutil with pip install psutil, update test_main.py:
from multiprocessing import Process
import psutil
import pytest
import requests
import time
import uvicorn

HOST = "127.0.0.1"
PORT = 8765
WORKERS = 3

def run_server(host: str, port: int, workers: int, wait: int = 15) -> Process:
    proc = Process(
        target=uvicorn.run,
        args=("main:app",),
        kwargs={
            "host": host,
            "port": port,
            "workers": workers,
        },
    )
    proc.start()
    time.sleep(wait)
    assert proc.is_alive()
    return proc

def shutdown_server(proc: Process):
    ##### SOLUTION #####
    pid = proc.pid
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.kill()
    ##### SOLUTION END ####
    proc.terminate()
    for _ in range(5):
        if proc.is_alive():
            time.sleep(5)
        else:
            return
    else:
        raise Exception("Process still alive")

def check_response(host: str, port: int):
    assert requests.get(f"http://{host}:{port}").text == '"hello world"'

def check_response_time(host: str, port: int, tol: float = 1e-2):
    s = time.time()
    requests.get(f"http://{host}:{port}")
    e = time.time()
    assert e - s < tol

@pytest.fixture(scope="session")
def server():
    proc = run_server(HOST, PORT, WORKERS)
    try:
        yield
    finally:
        shutdown_server(proc)

def test_main(server):
    check_response(HOST, PORT)
    check_response_time(HOST, PORT)
    check_response(HOST, PORT)
    check_response_time(HOST, PORT)
Execution Result
$ curl http://localhost:8765
curl: (7) Failed to connect to localhost port 8765: Connection refused
$ pytest test_main.py
================== test session starts ==================
platform win32 -- Python 3.7.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: .\
collected 1 item

test_main.py .                                [100%]

================== 1 passed in 20.24s ==================
$ curl http://localhost:8765
curl: (7) Failed to connect to localhost port 8765: Connection refused
9
5
68,621,210
2021-8-2
https://stackoverflow.com/questions/68621210/runtimeerror-expected-a-cuda-device-type-for-generator-but-found-cpu
I am trying to train PeleeNet with PyTorch and got the following error at train.py line 80, using the pelee_voc train configuration:
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Turning the shuffle parameter off in the dataloader solved it. Got the answer from here.
13
6
68,614,447
2021-8-1
https://stackoverflow.com/questions/68614447/how-to-display-boxplot-in-front-of-violinplot-in-seaborn-seaborn-zorder
To customize the styles of the boxplot displayed inside a violinplot, one could try to plot a boxplot in front of a violinplot. However, this does not seem to work, as the boxplot is always displayed behind the violinplot when using seaborn. When using seaborn + matplotlib this works (but only for a single category):
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd  # needed for pd.DataFrame below
import numpy as np

df = pd.DataFrame(np.random.rand(10, 2)).melt(var_name='group')
fig, axes = plt.subplots()

# Seaborn violin plot
sns.violinplot(y=df[df['group']==0]['value'], color="#af52f4", inner=None, linewidth=0, saturation=0.5)

# Normal boxplot has full range, same in Seaborn boxplot
axes.boxplot(df[df['group']==0]['value'], whis='range', positions=np.array([0]),
             showcaps=False, widths=0.06, patch_artist=True,
             boxprops=dict(color="indigo", facecolor="indigo"),
             whiskerprops=dict(color="indigo", linewidth=2),
             medianprops=dict(color="w", linewidth=2))
axes.set_xlim(-1, 1)
plt.show()
However, when using only seaborn to be able to plot across multiple categories, the ordering is always wrong:
sns.violinplot(data=df, x='group', y='value', color="#af52f4", inner=None, linewidth=0, saturation=0.5)
sns.boxplot(data=df, x='group', y='value', saturation=0.5)
plt.show()
Even when trying to fix this with zorder, this does not work.
The zorder parameter of sns.boxplot only affects the lines of the boxplot, but not the rectangular box. One possibility is to access these boxes afterwards; they form the list of artists in ax.artists. Setting their zorder=2 will put them in front of the violins while still being behind the other boxplot lines. In the comments, @mwaskom, noted a better way. sns.boxplot delegates all parameters it doesn't recognize via **kwargs to ax.boxplot. One of these is boxprops with the properties of the box rectangle. So, boxprops={'zorder': 2} would change the zorder of only the box. Here is an example: import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = pd.DataFrame(np.random.rand(10, 2)).melt(var_name='group') ax = sns.violinplot(data=df, x='group', y='value', color="#af52f4", inner=None, linewidth=0, saturation=0.5) sns.boxplot(data=df, x='group', y='value', saturation=0.5, width=0.4, palette='rocket', boxprops={'zorder': 2}, ax=ax) plt.show() Here is another example, using the tips dataset: tips = sns.load_dataset('tips') ax = sns.violinplot(data=tips, x='day', y='total_bill', palette='turbo', inner=None, linewidth=0, saturation=0.4) sns.boxplot(x='day', y='total_bill', data=tips, palette='turbo', width=0.3, boxprops={'zorder': 2}, ax=ax)
4
8
68,614,561
2021-8-1
https://stackoverflow.com/questions/68614561/project-file-window-is-yellow-in-pycharm
I'm working with PyCharm 2019 and Django, in Windows 10, in a project that I haven't opened in a year. The Project files window is showing up as yellow, which seems new. What does this mean and how do I get the files to appear as white?
What the yellow background usually means is that the files are excluded from the project (it can also mean the files are "read-only"). This might happen for several reasons: the .idea folder might have broken, in which case you need to delete it and recreate the project. If your project is installed in a venv, sometimes the source files are marked read-only (which means the source files being edited are the versions installed in the venv). So here it gets complicated, because it can depend on the specifics of the project itself. My usual steps for this problem are:
Close and reopen the project.
See if marking one of the directories as sources root changes the file color in the project tree. (Files might have been marked as excluded from the project for whatever reason.)
Just to help diagnose the issue, open a search, go to custom scopes, and see what scope those directories are associated with.
Check if file permissions are read-only. This can happen if you logged into PyCharm (or the OS) with a user account that doesn't have editing permissions on those files.
Delete the .idea folder (so the IDE recreates it) and create a new project with those files. (Remember to make a backup copy.)
13
26
68,583,341
2021-7-29
https://stackoverflow.com/questions/68583341/selenium-proxy-with-authentication
I have to use Selenium and a proxy with authentication. I have a few constraints:
I can't use selenium-wire (only pure Selenium allowed)
I have to use headless mode (e.g. chrome_options.add_argument("--headless"))
I read this answer, Python proxy authentication through Selenium chromedriver, but it doesn't work for headless mode. Is it possible to use a proxy with authentication in my case? (The browser, Chrome or Firefox, is not important.) I need a Python function to create a Selenium webdriver object and authenticate to the proxy.
You can't do this directly, because handling the proxy authentication with Selenium needs a GUI in your case, so I would recommend using a virtual display like the Xvfb display server. You can use PyVirtualDisplay (a Python wrapper for Xvfb) to run headless.
For Linux, install the virtual display:
sudo apt-get install firefox xvfb
For Python:
pip install pyvirtualdisplay
Then:
from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(800, 600))
display.start()

# now Firefox will run in a virtual display.
# you will not see the browser.
browser = webdriver.Firefox()
browser.get('http://www.google.com')
print(browser.title)
browser.quit()

display.stop()
In this case you do not need to add the options.add_argument("--headless") argument, and you can follow the answers linked above as a solution, or do it your way; but I think this is the best solution for using pure Selenium with a proxy.
5
2
68,606,518
2021-7-31
https://stackoverflow.com/questions/68606518/converting-pandas-dataframe-to-pyspark-dataframe-drops-index
I've got a pandas dataframe called data_clean. It looks like this: I want to convert it to a Spark dataframe, so I use the createDataFrame() method:
sparkDF = spark.createDataFrame(data_clean)
However, that seems to drop the index column (the one that has the names ali, anthony, bill, etc) from the original dataframe. The output of sparkDF.printSchema() and sparkDF.show() is
root
 |-- transcript: string (nullable = true)

+--------------------+
|          transcript|
+--------------------+
|ladies and gentle...|
|thank you thank y...|
| all right thank ...|
|                    |
|this is dave he t...|
|                    |
|    ladies and gen...|
|    ladies and gen...|
|armed with boyish...|
|introfade the mus...|
|wow hey thank you...|
|hello hello how y...|
+--------------------+
The docs say createDataFrame() can take a pandas.DataFrame as an input. I'm using Spark version '3.0.1'. Other questions on SO related to this don't mention this problem of the index column disappearing: This one about converting Pandas to Pyspark doesn't mention this issue of the index column disappearing. Same with this one. And this one relates to data dropping during conversion, but is more about window functions. I'm probably missing something obvious, but how do I keep the index column when I convert from a pandas dataframe to a PySpark dataframe?
A Spark DataFrame has no concept of an index, so if you want to preserve it, you have to assign it to a column first using reset_index on the pandas dataframe. You can also use inplace to avoid additional memory overhead while resetting the index:
df.reset_index(drop=False, inplace=True)
sparkDF = sqlContext.createDataFrame(df)
4
4
68,606,631
2021-8-1
https://stackoverflow.com/questions/68606631/how-can-i-do-cross-validation-with-sample-weights
I'm trying to classify text data into multiple classes. I'd like to perform cross-validation to compare several models with sample weights. With each model, I can pass a parameter like this:
all_together = y_train.to_numpy()
unique_classes = np.unique(all_together)
c_w = class_weight.compute_class_weight('balanced', unique_classes, all_together)

clf = MultinomialNB().fit(X_train_tfidf, y_train, sample_weight=[c_w[i] for i in all_together])
It doesn't seem that cross_val_score() allows a sample_weight parameter. How can I do this with cross-validation?
models = [
    RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
    LinearSVC(),
    MultinomialNB(),
    LogisticRegression(random_state=0),
]

all_together = y_train.to_numpy()
unique_classes = np.unique(all_together)
c_w = class_weight.compute_class_weight('balanced', unique_classes, all_together)

CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
    model_name = model.__class__.__name__
    f1_micros = cross_val_score(model, X_tfidf, y_train, scoring='f1_micro', cv=CV)
    for fold_idx, f1_micro in enumerate(f1_micros):
        entries.append((model_name, fold_idx, f1_micro))
cv_df_women = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'f1_micro'])
cross_val_score has a parameter called fit_params which accepts a dictionary of parameters (keys) and values to pass to the fit() method of the estimator. In your case, you can do cross_val_score(model, X_tfidf, y_train, scoring='f1_micro', cv=CV, fit_params={'sample_weight': [c_w[i] for i in all_together]})
5
10
68,602,274
2021-7-31
https://stackoverflow.com/questions/68602274/readwritememory-reading-memory-as-an-int-instead-of-a-float
from ReadWriteMemory import ReadWriteMemory rwm = ReadWriteMemory() process = rwm.get_process_by_name("javaw.exe") process.open() module_base = 0x6FBB0000 static_address_offset = 0x007FD7C0 static_address = module_base + static_address_offset pitch_pointer = process.get_pointer(static_address, offsets=[0xB8, 0x1C8, 0x1C8, 0x1D0, 0x178, 0xAC, 0x8C]) camera_pitch = process.read(pitch_pointer) print(camera_pitch) I am trying to get the camera pitch using a pointer I got in cheat engine, and the script works fine but the camera pitch is a float value, while process.read(pitch_pointer) returns an int, and that for example sets camera_pitch to 1108138163 instead of 35.2. Can't find how to get a float instead anywhere.
You can use the struct module. Something like this: >>> import struct >>> i_value = 1108138163 >>> struct.unpack("@f", struct.pack("@I", i_value))[0] 35.21162033081055 That is, you convert your integer to a 4-byte array, and then you convert that to a float. struct.unpack always returns a tuple, in this case of a single value, so use [0] to get to it.
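The round trip works in the other direction as well, which is useful for checking a value you read against the raw integer the memory reader reports. A small sketch (the values here are just the ones from the question):

```python
import struct

def int_bits_to_float(i):
    # Reinterpret a 32-bit unsigned int as an IEEE-754 single-precision float.
    return struct.unpack("@f", struct.pack("@I", i))[0]

def float_to_int_bits(f):
    # Inverse: recover the raw 32-bit pattern behind a float.
    return struct.unpack("@I", struct.pack("@f", f))[0]

value = int_bits_to_float(1108138163)
print(value)                     # ~35.2116
print(float_to_int_bits(value))  # 1108138163 again, losslessly
```

Because both conversions move the same 32 bits around without rounding, the round trip is exact.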
5
4
68,591,271
2021-7-30
https://stackoverflow.com/questions/68591271/how-can-i-combine-hue-and-style-groups-in-a-seaborn-legend
I'm doing a Seaborn lineplot for longitudinal data which is grouped by "Subscale" using hue and by "Item" using style. Here is my code (I hope this is understandable also without data): ax = sns.lineplot(data = df, x = 'Week', y = 'Value', style = 'Item', hue = 'Subscale', palette = 'colorblind', markers = True) plt.legend(bbox_to_anchor = (1.03, 1.02), fontsize = 10) Whicht gets me this plot: What I want is to combine the sublegends so that it only shows the legend for "Item" but the items are colored according to "Subscale", kind of like this: I've failed to create this, so if any of you could help, this would be highly appreciated! Thanks :)
If I understand correctly, all items of a certain type have the same subscale. So, you already have (or can create) a dictionary that maps an item type to the corresponding subscale.

Seaborn creates the following labels for the legend:
'Subscale' for a subtitle
each of the subscales
'Item' for a second subtitle
each of the items
Each label corresponds to a "handle" identifying how the marker looks.

The following code:
finds the index of 'Item' to be able to split the arrays of labels and handles
extracts the colors of the 'Subscale's
applies these colors to the item markers
only uses the items for the legend

import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np

# first create some random data similar to the description
np.random.seed(123)
items = ['01', '04', '05', '06', '07', '10', '11', '13']
N = len(items)
subscale_dict = {'01': 'A', '04': 'C', '05': 'C', '06': 'C', '07': 'A', '10': 'B', '11': 'B', '13': 'A'}
df = pd.DataFrame({'Week': np.tile(np.arange(7), N),
                   'Value': np.random.rand(7 * N),
                   'Item': np.repeat(items, 7)})
df['Subscale'] = df['Item'].apply(lambda i: subscale_dict[i])
df['Subscale'] = pd.Categorical(df['Subscale'])  # creates a fixed order

# create the line plot as before
ax = sns.lineplot(data=df, x='Week', y='Value', style='Item', hue='Subscale', palette='colorblind', markers=True)

# create a dictionary mapping the subscales to their color
handles, labels = ax.get_legend_handles_labels()
index_item_title = labels.index('Item')
color_dict = {label: handle.get_color()
              for handle, label in zip(handles[1:index_item_title], labels[1:index_item_title])}

# loop through the items, assign each item's color via the subscale of that item
for handle, label in zip(handles[index_item_title + 1:], labels[index_item_title + 1:]):
    handle.set_color(color_dict[subscale_dict[label]])

# create a legend only using the items
ax.legend(handles[index_item_title + 1:], labels[index_item_title + 1:],
          title='Item', bbox_to_anchor=(1.03, 1.02), fontsize=10)

plt.tight_layout()
plt.show()
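The handle/label bookkeeping in the answer is independent of plotting. As a minimal sketch (using hypothetical string placeholders in place of real matplotlib artist handles), the split around the 'Item' subtitle works like this:

```python
# Legend labels as seaborn builds them: a 'Subscale' subtitle, the subscales,
# an 'Item' subtitle, then the items.
labels = ['Subscale', 'A', 'B', 'C', 'Item', '01', '04', '05']
handles = [f'handle_{i}' for i in range(len(labels))]  # placeholders for artists

index_item_title = labels.index('Item')

# subscale entries sit between the two subtitle labels
subscale_labels = labels[1:index_item_title]
# item entries follow the 'Item' subtitle
item_labels = labels[index_item_title + 1:]

print(subscale_labels)  # ['A', 'B', 'C']
print(item_labels)      # ['01', '04', '05']
```

With real artists, `handles` is split the same way, which is exactly how the answer recolors the item markers and builds the item-only legend.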
5
4
68,561,245
2021-7-28
https://stackoverflow.com/questions/68561245/extract-or-set-input-output-tf-tensor-names-information-from-python-api-instea
I trained a simple model with Keras/TF2.5 and saved it as a saved model:

tf.saved_model.save(my_model, '/path/to/model')

If I examine it via

saved_model_cli show --dir /path/to/model --tag_set serve --signature_def serving_default

I get these outputs/names:

inputs['conv2d_input'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 32, 32, 1)
    name: serving_default_conv2d_input:0
outputs['dense'] tensor_info:
    dtype: DT_FLOAT
    shape: (-1, 2)
    name: StatefulPartitionedCall:0

The names serving_default_conv2d_input and StatefulPartitionedCall can actually be used for inference. I want to extract them using the python API. If I query it by loading the model:

>>> m = tf.saved_model.load('/path/to/model')
>>> m.signatures['serving_default'].inputs[0].name
'conv2d_input:0'
>>> m.signatures['serving_default'].outputs[0].name
'Identity:0'

I get entirely different names.

Questions:
How can I extract the names serving_default_conv2d_input and StatefulPartitionedCall from the python API?
Alternatively, how can I define/fix the names when I call tf.saved_model.save?
What does :0 mean?
And a side question: How do you handle deploying a TF model to production via SavedModel?
The input/output tensor names displayed by saved_model_cli can be extracted as follows:

from tensorflow.python.tools import saved_model_utils

saved_model_dir = '/path/to/model'
tag_set = 'serve'
signature_def_key = 'serving_default'

# 1. Load MetaGraphDef with saved_model_utils
meta_graph_def = saved_model_utils.get_meta_graph_def(saved_model_dir, tag_set)

# 2. Get input signature names
input_signatures = list(meta_graph_def.signature_def[signature_def_key].inputs.values())
input_names = [signature.name for signature in input_signatures]
print(input_names)  # ['serving_default_conv2d_input:0']

# 3. Get output signature names
output_signatures = list(meta_graph_def.signature_def[signature_def_key].outputs.values())
output_names = [signature.name for signature in output_signatures]
print(output_names)  # ['StatefulPartitionedCall:0']

Regarding the meaning of :0: op_name:0 means "the tensor that is the 0th output of an operation called op_name". So you might use …:1 to get the output of an operation with multiple outputs, but many operations are single-output so you'll always use …:0 for them (source: @mrry's comment).
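To make the op_name:N convention concrete, here is a tiny hypothetical helper (not part of TensorFlow's API, just an illustration of the naming scheme) that splits a tensor name into its operation name and output index:

```python
def split_tensor_name(tensor_name):
    # A tensor name has the form '<op_name>:<output_index>'; the index
    # selects one of the operation's (possibly several) output tensors.
    op_name, _, index = tensor_name.rpartition(':')
    return op_name, int(index)

print(split_tensor_name('StatefulPartitionedCall:0'))
# ('StatefulPartitionedCall', 0)
print(split_tensor_name('serving_default_conv2d_input:0'))
# ('serving_default_conv2d_input', 0)
```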
5
3
68,591,676
2021-7-30
https://stackoverflow.com/questions/68591676/why-are-np-hypot-and-np-subtract-outer-very-fast-compared-to-vanilla-broadcast-a
I have two large sets of 2D points and need to calculate a distance matrix. I need it to be fast so I used NumPy broadcasting. Of two ways to calculate the distance matrix, I don't understand why one is better than the other. From here I have contradicting results. Cells [3, 4, 6] and [8, 9] both calculate the distance matrix, but 3+4 uses subtract.outer faster than 8 which uses broadcasting, and 6 uses hypot faster than 9, which is the simple way. I did not try Python loops, assuming they would never finish.

Is there a faster way to calculate the distance matrix?
Why are hypot and subtract.outer faster?

Code (I change the seed to prevent cache re-use):

### Cell 1
import numpy as np
np.random.seed(858442)

### Cell 2
%%time
obs = np.random.random((50000, 2))
interp = np.random.random((30000, 2))

CPU times: user 2.02 ms, sys: 1.4 ms, total: 3.42 ms
Wall time: 1.84 ms

### Cell 3
%%time
d0 = np.subtract.outer(obs[:,0], interp[:,0])

CPU times: user 2.46 s, sys: 1.97 s, total: 4.42 s
Wall time: 4.42 s

### Cell 4
%%time
d1 = np.subtract.outer(obs[:,1], interp[:,1])

CPU times: user 3.1 s, sys: 2.7 s, total: 5.8 s
Wall time: 8.34 s

### Cell 5
%%time
h = np.hypot(d0, d1)

CPU times: user 12.7 s, sys: 24.6 s, total: 37.3 s
Wall time: 1min 6s

### Cell 6
np.random.seed(773228)

### Cell 7
%%time
obs = np.random.random((50000, 2))
interp = np.random.random((30000, 2))

CPU times: user 1.84 ms, sys: 1.56 ms, total: 3.4 ms
Wall time: 2.03 ms

### Cell 8
%%time
d = obs[:, np.newaxis, :] - interp
d0, d1 = d[:, :, 0], d[:, :, 1]

CPU times: user 22.7 s, sys: 8.24 s, total: 30.9 s
Wall time: 33.2 s

### Cell 9
%%time
h = np.sqrt(d0**2 + d1**2)

CPU times: user 29.1 s, sys: 2min 12s, total: 2min 41s
Wall time: 6min 10s
First of all, d0 and d1 each take 50000 x 30000 x 8 = 12 GB, which is pretty big. Make sure you have more than 100 GB of memory, because this is what the whole script requires! This is a huge amount of memory. If you do not have enough memory, the operating system will use a storage device (eg. swap) to store excess data, which is much slower. Actually, there is no reason Cell 4 should be slower than Cell 3, and I guess that you already do not have enough memory to (fully) store d1 in RAM while d0 seems to fit (mostly) in memory. There is no difference on my machine when both fit in RAM (one can also reverse the order of the operations to check this). This also explains why further operations tend to get slower.

That being said, Cells 8+9 are also slower because they create temporary arrays and need more memory passes to compute the result than Cells 3+4+5. Indeed, the expression np.sqrt(d0**2 + d1**2) first computes d0**2 in memory, resulting in a new 12 GB temporary array, then computes d1**2, resulting in another 12 GB temporary array, then performs the sum of the two temporary arrays to produce another new 12 GB temporary array, and finally computes the square root, resulting in another 12 GB temporary array. This can require up to 48 GB of memory and requires 4 read-write memory-bound passes. This is not efficient and does not use the CPU/RAM efficiently (eg. CPU cache).

There is a much faster implementation consisting in doing the whole computation in 1 pass and in parallel using Numba's JIT. Here is an example:

import numba as nb

@nb.njit(parallel=True)
def distanceMatrix(a, b):
    res = np.empty((a.shape[0], b.shape[0]), dtype=a.dtype)
    for i in nb.prange(a.shape[0]):
        for j in range(b.shape[0]):
            res[i, j] = np.sqrt((a[i, 0] - b[j, 0])**2 + (a[i, 1] - b[j, 1])**2)
    return res

This implementation uses 3 times less memory (only 12 GB) and is much faster than the one using subtract.outer. Indeed, due to swapping, Cells 3+4+5 take a few minutes while this one takes 1.3 seconds!
The takeaway is that memory accesses are expensive, as are temporary arrays. One needs to avoid making multiple passes over memory while working on huge buffers, and to take advantage of CPU caches when the computation performed is not trivial (for example by using array chunks).
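As a minimal sketch of the chunking idea mentioned above (NumPy only; the function name and chunk size are illustrative, not from the answer), one can keep the broadcast temporaries small by processing a block of rows at a time:

```python
import numpy as np

def distance_matrix_chunked(a, b, chunk=1024):
    # Full (len(a), len(b)) Euclidean distance matrix, computed block by
    # block so the broadcast temporaries stay small and cache-friendly.
    out = np.empty((a.shape[0], b.shape[0]), dtype=np.float64)
    for start in range(0, a.shape[0], chunk):
        stop = min(start + chunk, a.shape[0])
        # (block, len(b), 2) temporary instead of one huge (len(a), len(b), 2)
        d = a[start:stop, None, :] - b[None, :, :]
        np.sqrt((d ** 2).sum(axis=-1), out=out[start:stop])
    return out

a = np.random.random((100, 2))
b = np.random.random((50, 2))
h = distance_matrix_chunked(a, b, chunk=32)
```

This is slower than the parallel Numba version but needs no extra dependency, and it avoids the 12 GB-per-temporary blowup of the naive broadcast.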
6
3
68,594,613
2021-7-30
https://stackoverflow.com/questions/68594613/django-core-exceptions-fielddoesnotexist-user-has-no-field-named-username
I'm trying to customize django's AbstractUser. When I try to reset username to None, I get the following exception: "django.core.exceptions.FieldDoesNotExist: User has no field named 'username'". Here is my code:

class UserManager(BaseUserManager):
    use_in_migrations = True

    def _create_user(self, email, password, **extra_fields):
        if not email:
            raise ValueError("L'adresse e-mail donnée doit etre definie")
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_user(self, email, password, **extra_fields):
        extra_fields.setdefault("is_staff", False)
        extra_fields.setdefault("is_superuser", False)
        return self._create_user(email, password, **extra_fields)

    def create_superuser(self, email, password, **extra_fields):
        extra_fields.setdefault("is_staff", True)
        extra_fields.setdefault("is_superuser", True)
        if extra_fields.get("is_staff") is not True:
            raise ValueError("Superuser must have is_staff=True")
        if extra_fields.get("is_superuser") is not True:
            raise ValueError("Superuser must have is_superuser=True")
        return self._create_user(email, password, **extra_fields)


class User(AbstractUser):
    username = None
    email = models.EmailField('email adress', unique=True)
    telephone = models.CharField(max_length=20)

    REQUIRED_FIELDS = []

What can I do to solve this problem?
You haven't set a value for USERNAME_FIELD in your code. This must be set to a field that uniquely identifies a user instance. AbstractUser sets this to 'username', and hence you are getting the error. You can set this to 'email' to solve your problem:

class User(AbstractUser):
    username = None
    email = models.EmailField('email adress', unique=True)
    telephone = models.CharField(max_length=20)

    # set below value
    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = []
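For the custom manager in the question to actually be used (e.g. by createsuperuser), the model also needs it assigned to objects, and the project settings must point at the custom model. A hedged sketch of that wiring — the app label 'accounts' is an assumption; adjust it to your project:

```python
# models.py — wire the custom manager into the model
class User(AbstractUser):
    username = None
    email = models.EmailField('email adress', unique=True)
    telephone = models.CharField(max_length=20)

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = []

    objects = UserManager()  # the custom manager from the question

# settings.py — tell Django to use this model project-wide
AUTH_USER_MODEL = 'accounts.User'
```

Note that AUTH_USER_MODEL should be set before creating the first migration, since changing it later on an existing database is painful.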
8
2